id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
53511866 | pes2o/s2orc | v3-fos-license | By Design or by Default: Capacity Development in Fragile States and the Limits of Program Planning
Introduction
State fragility is generally understood today as a question of capacity deficits. There will be no resilience, development or peace without governance capacity. In this sense, capacity is not an abstract value or feature: it is the concrete competence and will of the individuals inhabiting the offices of governance. The international community thus acknowledges capacity as a sine qua non of resilience, development and peace. This has been recognized in a number of recent reports and statements, including the United Nations Civilian Capacity (CIVCAP) initiative and the 2013 Security Council Resolution 2086 on multidimensional peace building.1 Despite the recent focus on capacity, no one has come up with a proven workable solution to the problem of capacity deficits in the world's most fragile states. The last decades of alchemistic toying with various concepts of and approaches to "state building" failed to deliver any golden formula. Capacity development remains the weak link, if not the key conundrum, in international state and peace building. Nevertheless, lessons have been learned, perhaps the most important of which is that the reform of government institutions and civil servants cannot be installed from above – it needs to grow from below. The calls for local ownership, contextualisation, and bottom-up and inside-out approaches all express this realism. Though these concepts have become increasingly popular, there is still a lack of understanding of how the ideas can be translated into actual programming. Apart from project evaluations, few case studies have been made in this field, and expertise is generally feeble. This article builds on a research project on the IGAD Initiative, a capacity development initiative in South Sudan. Our research comprises more than a hundred interviews with people working with the Initiative at the diplomatic, management and implementing levels. We have discussed various aspects of the IGAD Initiative elsewhere.2 This article points out a peculiar aspect of the IGAD Initiative which has general relevance to the broader capacity development agenda. It seems that the IGAD Initiative's success in facilitating locally owned and context-embedded capacity development has emerged more by default than by design. A vague project design seems to have provided the space needed for capacity development to genuinely take the context as the starting point. It should be noted that we are not concerned here with the overall output of the IGAD Initiative, which remains to be assessed. In this article, we are simply presenting an analytical narrative for the purpose of provoking debate and thinking about the design of capacity development programming.
The IGAD Initiative
The IGAD Initiative, also known as the Regional Capacity Enhancement Initiative (RCEI), is a regional capacity development cooperation for South Sudan. As part of the initiative, Ethiopia, Kenya and Uganda have seconded (by April 2013) 199 Civil Service Support Officers (CSSOs) to South Sudanese ministries at the state and national levels for two-year terms. In these ministries the CSSOs have been 'twinned' with South Sudanese civil servants. The Initiative seeks to address the grave capacity gaps in South Sudan's civil service while accommodating the calls for culturally and technically appropriate capacity, local ownership and regional cooperation. The project presents itself as an alternative to conventional short-term technical assistance, which has demonstrated limited success in fragile state environments. It also reflects strong Ethiopian, Kenyan and Ugandan interests in a resilient South Sudanese state, with which they all share borders, a region and markets.
As a development aid program, the IGAD Initiative can be described as triangularly organized south-south cooperation in capacity development. The CSSOs will remain on the payroll of their home countries for the entire two-year deployment period. Norway is providing an additional US$18 million to cover project costs, and UNDP is contributing project management, with the Government of South Sudan's Ministry of Labour as the key implementing partner.
The CSSOs have been deployed to nineteen South Sudanese ministries at the national and state levels, and the stated aim of the CSSOs is to 'coach and mentor' their South Sudanese 'twins' through on-the-job training, with the aim of developing their twins' capacity to perform the duties of civil servants. The IGAD Initiative lists this 'knowledge transfer' from CSSO to twin as its ultimate objective. During our field research in South Sudan in January 2013, we encountered CSSOs, among other places, in the air control tower at Juba Airport, next to the minister's office in the Ministry of Foreign Affairs, in the laboratory at the Ministry of Animal Resources and Fishery, in the National Legislative Assembly, and in the hospitals in Juba and the regional states.
Best Practice
The IGAD Initiative appears to accommodate most recommendations from the United Nations and the Organisation for Economic Co-operation and Development (OECD) frameworks for engagement in fragile states in terms of south-south cooperation, ownership, addressing local needs and priorities, developing local capacities, bottom-up approaches, long-term engagement, flexibility, context and nimbleness (da Costa et al. 2013a). Considering this performance, it would be reasonable to assume that the project has had a sophisticated design and tight management. However plausible, this appears not to have been the case. The purpose of the project, its desired outcomes and the CSSOs' Terms of Reference were very general and did not specify what exactly was to be done and how. Needs assessments and matching procedures were part of the IGAD Initiative's design, but they had not been translated into actual implementation plans and clear Terms of Reference. Most CSSOs therefore arrived in South Sudan with more or less vague mandates and unspecified terms of reference. Similarly, on the South Sudanese side, there was little awareness or understanding of what a CSSO was and how to engage with one. Consequently, there was also a lack of immediate work for the CSSOs to take on when they arrived in Juba and elsewhere.
This confused situation, however, allowed for considerable flexibility on the ground. It gave the CSSOs the time and freedom to familiarize themselves with their new work context, as well as an opportunity to identify existing capacities and to address the most acute needs in the particular environment in collaboration with their South Sudanese colleagues. It allowed them to work on these issues with their colleagues in a more culturally sensitive and, as far as the South Sudanese were concerned, more locally owned and bottom-up manner.
Altogether, the vague and unspecified project design allowed – or forced – the CSSOs to genuinely take the context as the starting point of their capacity development efforts. A great number of the CSSOs developed a variety of work tasks on their own. These ranged from building ministerial archives to working with twins to develop pension schemes in the Ministry of Labour. Others established twinning arrangements with doctors, nurses and surgeons in Malakal, Jambio and Bentiu, developed work plans for ministries, thereby improving the staff's drafting skills and ability to take minutes at meetings, or advised ambassadors and ministers on a variety of issues. Many CSSOs had given up the idea of working with individual twins in a classical coaching and mentoring scheme and (with the consent of their South Sudanese supervisors) had twinned with groups or with whole departments, where they provided expertise and advice for all kinds of enquiries.
Combined with the general absence of ministerial structure and work plans, the ad hoc and bottom-up-driven approach was in many ways the result of the CSSOs' weak Terms of Reference. Everything simply had to be invented from scratch. This was not what the CSSOs had expected. They had expected to work with relatively qualified twins in institutions with at least a minimum of structure in place. But instead of bowing out, most CSSOs began to identify and address needs in their immediate working contexts on their own initiative. They engaged in long-term, explorative needs assessments. One of the CSSOs described this as akin to an anthropological research project. They dined with their twins, joined them in church, and spent a lot of time 'hanging out' and observing what was actually going on in the department in which they were stationed.
The IGAD Initiative's occasional 'best practice' in terms of context-sensitive and locally owned capacity development thus appears not to be a result of a detailed project design. Instead, it developed by default out of freedom, flexibility and individual initiatives. It is our impression that voluntarism and freedom were critical factors in this process.
Voluntarism
Voluntarism is at the core of the idea of coaching and mentoring. One Ugandan diplomat emphasized to us that although Uganda's post-colonial aversion to intervention in other states was strong, the IGAD Initiative was acceptable to Uganda because it was based on coaching and mentoring. It was not a matter of intervention, but of facilitating self-help for South Sudan. In other words, Uganda viewed the IGAD Initiative not as an exercise in state hegemony but as a voluntary offer from one state to another to provide demand-driven assistance. Furthermore, they did not see it as interventionist aid delivery. In this regard it mattered a great deal that the idea of coaching and mentoring, the central aspect of the project, presupposes the voluntary, active participation of the coachee/mentee. To be sure, coaching and mentoring depend on some sort of kinship, a receptive heart of the receiver and a gracious heart of the giver. It is a learning relationship, which presupposes voluntarism.
We often encountered such sentiments during our interviews. The IGAD Initiative's aim of supporting the South Sudanese in their own decision-making was seen in a very positive light. This was evident through the CSSOs' work in the ministries, which was generally based on individual, consent-based initiatives. They had no means of forcing twins to work with them. While the ministries allocated twins to CSSOs, this did not work in cases where the twins were unwilling to cooperate. Also, there were no predefined instructions or guidelines for conducting coaching and mentoring. Yet some of the CSSOs had fairly clear conceptions of what coaching and mentoring was about. Those who came from long careers in human resource management were very articulate about the concepts. However, technical concepts mostly fell short in the South Sudanese environment, and the CSSOs needed to tailor their approach to the actual and immediate ministerial surroundings.
The CSSOs and their twins preferred the concept of 'twinning' to describe their interaction and partnerships – a slightly undefined concept, though it worked well for all parties. Twinning appears to express a more equal relationship and thus to facilitate the notion of brother- and sisterhood, which was a strong part of the IGAD Initiative's self-identity.
From the outside, the concept of 'twinning' seems fairly apolitical compared to the kind of tasks in which the 'twins' were engaged. Such tasks include policy development at all levels, drafting legislation, restructuring ministries, building archives, developing pension schemes and participating in the process of shaping the civilian airspace of South Sudan. Despite the involvement of CSSOs in such critical tasks, it was also clear that they acted in agreement or direct cooperation with the under-secretaries and director generals of their respective ministries. Thus, the IGAD Initiative did not appear to be attempting to steer South Sudanese opinions or decisions.
The IGAD Initiative was presented as a case of international cooperation on capacity development. The CSSOs and their twins constituted the practical interface between the IGAD states involved and South Sudan. The concept of twinning, or coaching and mentoring as it is written in the project documents, functioned as a form of interaction, a way of organizing international relations. Furthermore, since twinning, the ultimate objective of the initiative, presupposes voluntarism on both sides of the relationship, the fundamental concept and mechanics of the IGAD Initiative seem to require personal and on-going autonomous initiatives that cannot easily be written into formulas.
To the extent that the stated aim of the IGAD Initiative is to develop civil service capacity through twinning, we may view all funding and management functions as aimed ultimately at facilitating good relationships between CSSOs and their twins. The project is about facilitating a space in which twinning can thrive and where voluntarism can flourish. In that way, the IGAD Initiative appears to be a project that promotes, and depends upon, individual creative thinking and entrepreneurship unfolding in a space of freedom. Freedom and volunteering are the foundation of the Initiative's self-understanding.
Freedom
The variety of tasks performed by the CSSOs did not come about overnight. The CSSOs arrived in their designated ministries with unclear Terms of Reference and a general lack of awareness on the South Sudanese side about their role and mandate. CSSOs took on average between three and six months to familiarise themselves with the ministries. Some never succeeded. South Sudanese attitudes towards the newcomers were often antagonistic and in some instances almost violent. They suspected the CSSOs of taking their jobs or of being spies. Slowly, however, most CSSOs succeeded in winning the trust of their South Sudanese counterparts and managed to build working relationships with their new colleagues.
Furthermore, a number of CSSOs reported that their status as 'coaches and mentors' protected them, to some degree, from certain South Sudanese officeholders who regarded them merely as an auxiliary work force. This made it possible for the CSSOs to decline orders from their supervisors to do practical work, preserved their autonomy, and allowed them to take their time to develop role definitions and to balance expectations with their South Sudanese counterparts.
As mentioned above, the vague mandate and low level of preparation led the CSSOs to initiate a broad range of activities, including many things other than one-on-one coaching and mentoring. Some CSSOs felt they were doing something very different from what they had anticipated. A good example was a Kenyan CSSO in the Ministry of Transportation who was deployed to the air control unit at Juba International Airport. When he signed up, he believed he would be coaching and mentoring South Sudanese air traffic controllers to develop their skills. When he arrived, he found only two people qualified to man the air control tower. In addition to 'twinning' with existing air controllers and others he had himself recruited, the CSSO began to identify other needs and issues to be addressed and, together with UNMISS, he initiated a comprehensive training programme for South Sudanese air controllers. Together with his twins, and in agreement with the Director General of the Ministry of Transport, he also helped develop the general air control facilities of Juba International Airport and the civilian airspace control for South Sudan. Many of the needs addressed, capacities developed and projects launched were only identified by the CSSO once he was on the ground. The freedom and flexibility provided by vague Terms of Reference and an underspecified project design allowed the CSSO to work in this way.
Instead of providing coaching and mentoring for specific twins in peer-to-peer relations, the CSSOs took on all sorts of other activities in the ministries. They invented projects, structured work, wrote ministerial policies or rewrote the work of international consultants to adjust concepts and wording to South Sudanese political circumstances. They built archives and record management systems. They assisted with computer know-how. In one ministry the bulk of the computers were down when one of the CSSOs arrived, but the CSSO managed to get them up and running simply by installing anti-virus software. A minor thing with great impact. CSSOs also functioned as ad hoc supervisors to ambassadors, ministers and civil servants. They proved to be versatile resource personnel. They worked together with their twins on a variety of critical ministerial issues and were available for whomever needed professional expertise and advice.
Voluntarism again emerges as a key dynamic in this process of capacity development. The CSSOs were not obliged to instigate a restructuring of their work environments in the way they did – it was not in their job description. Some did it out of a sense of obligation, some out of interest, and others because they were bored. All, however, did it voluntarily and because they had the freedom to do so. In this regard, flexibility and vagueness in the IGAD Initiative's design and in the mandate of the CSSOs allowed the dynamics of freedom and voluntarism to flourish and grow. An unspecified design and a vague mandate allowed the CSSOs the freedom to use their own expertise to do what they felt was needed and appropriate in their particular situations.
Better management and awareness-raising could have prepared the ground better for the IGAD Initiative's deployment of CSSOs. The question is to what extent. Notwithstanding the initial difficulties most CSSOs encountered during their deployments, the vague mandate allowed CSSOs the freedom to identify capacities and capacity needs after arrival and thus to take the context as the starting point. It is uncertain to what extent awareness-raising and a better balancing of expectations could have prepared those involved for a better coaching and mentoring milieu. There would most likely still have been a lack of qualified twins, initial mistrust and hostility, a lack of office and job definitions and a lack of funds to implement activities.
By Design or By Default?
The 'explorative' practices of the IGAD Initiative stand in contrast to capacity enhancement initiatives where activities, needs, priorities and the capacities to be built have been specified in advance by "Northern" donors and programme designers, often with little in-depth knowledge about the situation in question. In this regard, it is worth pointing out that CSSOs built their particular identity with reference to their differences from international consultants, who 'come and go and never really leave anything', as one CSSO expressed it. The CSSOs employed a much greater sensitivity towards the South Sudanese context compared to short-term international consultants.
This practice note argues that the 'best practice' capacity development process of the IGAD Initiative was not a result of detailed project design and tightly managed implementation from the top down. Work tasks and the needs, priorities and capacities addressed in the IGAD Initiative were often not pre-specified or part of a detailed implementation plan. Instead they grew out of underspecified Terms of Reference and vague project objectives, allowing freedom and voluntarism to flourish. In this sense the best practice capacity development happened more by default than by design. At least, in our research we did not encounter evidence of any pre-planned default dynamics. Asked directly, key staff in UNDP's management unit agreed they had not considered these. In its meetings the IGAD Initiative's Board (South Sudan, Kenya, Ethiopia, Uganda, UNDP and the IGAD Ambassador) has primarily addressed strategy and implementation, which indicates a lack of focus on default dynamics.
In any case, the IGAD Initiative, with all its challenges and problems, emerges as a project that meets the international development agenda's calls for local ownership, nimbleness and contextualization. This raises important questions. Are vagueness and a lack of control from 'the top' necessary preconditions for locally grounded capacity development to take place? Should future capacity development programming be intentionally vaguely designed in order to give the front-line implementers the freedom and flexibility that might be necessary for success? Would the IGAD Initiative have unfolded differently if it had formally aimed at the kinds of tasks it ended up facilitating? How can vaguely designed capacity development projects be evaluated? What balance can be struck between design and default, or control and flexibility?
For the CSSOs deployed to the ministerial corridors, it would not have been possible to know in advance the many activities that they gradually embarked on. It is doubtful to what extent a preoperational needs assessment would have been able to point out the tasks that the CSSOs identified step by step through their daily interactions within the ministries. There is also the question of who should have conducted a needs assessment for up to two hundred individual deployments, how it could have been done and what kinds of resources it would have required. It took the CSSOs' specialized technical knowledge and a familiarity with the ministries to develop their work. In this connection some CSSOs suggested that the first several months of deployment in a project like this should be allocated to exploring the new environment. It is likely that not even a thorough pre-engagement needs assessment would have been enough to match CSSOs with the local environment, even though it would undoubtedly have made the initial deployment period smoother and more comfortable for them. On the outside the IGAD Initiative may present itself as an integrated initiative, but on the ground it unfolds as a series of fairly individual projects and experiences.
With regard to the question of evaluation, a number of supervisors and CSSOs viewed the default aspect of the way the IGAD Initiative unfolded as troublesome when compared to the Initiative's formal design. Like a number of other supervisors, a supervisor in the Ministry of Petroleum and Mining found it hard to evaluate the performance of CSSOs because it could not be measured against clear Terms of Reference. Hence, while we found exemplary processes of capacity development within the context of the IGAD Initiative, it remains to be seen how the impact of the project is to be systematically measured and whether this impact will prove sustainable in the long term.
Conclusion
Most capacity development or governance reform projects today promote and support liberal governance. The IGAD Initiative embodies it by working through freedom and voluntarism. These concepts constitute the key to understanding the project in the larger context of global governance. They provide the cornerstones of the project's self-identity and also offer insights for understanding the difficulties of the project and connecting it to valuable experiences in the global field of governance. The IGAD Initiative stands out as a development project and therefore grapples with issues that more 'traditional' capacity building projects do not face to the same extent: how to design, monitor and evaluate programming whose success depends on vagueness, freedom and flexibility. More systematic analysis and theorising on this new type of programming are needed.
From a policy perspective, the distinction between design and default points to a core dilemma of international interventions: the contrast between generic approaches and the unruly heterogeneity of social and human life. If the best options for capacity development programming are 'default' processes in the context of little pre-planning, the question is whether defaults can be designed and, if they can, what the policy implications are in the context of international cooperation. From a historical perspective, default- and demand-driven development fits much better with the evolution of the family of developed states.
Notes
1 For documents relating to the civilian capacity review, see www.civcapreview.org/ [Last accessed August 2013]; United Nations Security Council Resolution 2086, 21 January 2013, SC/10888. For an elaboration of the international policy agenda on capacity, resilience and development, see Haldrup and Rosén (2013). 2 For more on this research project and our research design, see da Costa et al. (2013b). | 2018-10-14T00:28:11.369Z | 2013-09-13T00:00:00.000 | {
"year": 2013,
"sha1": "a1a2958d47c722b78b80fae274ad7c6e434b9103",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-up-j-sijsd-files/journals/1/articles/118/submission/proof/118-1-613-1-10-20130913.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a1a2958d47c722b78b80fae274ad7c6e434b9103",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Political Science"
]
} |
259184412 | pes2o/s2orc | v3-fos-license | Efficacy and safety of botulinum toxin for treating motor dysfunction in patients with Parkinson’s disease: a systematic review and meta-analysis
Objective To evaluate the efficacy and safety of botulinum toxin (BTX) for motor dysfunction in Parkinson's disease (PD). Design Systematic review and meta-analysis. Data sources Searches of PubMed, EMBASE and the Cochrane Library, from database inception to 20 October 2022. Eligibility criteria Studies reported in English with adult PD patients treated with BTX. Data extraction and synthesis Primary outcomes were Unified Parkinson's Disease Rating Scale (UPDRS) Section III (or its items) and the Visual Analogue Scale (VAS). Secondary outcomes were UPDRS-II (or its items), the Freezing of Gait Questionnaire (FOG-Q), the Timed Up and Go test (TUG) and treatment-related adverse events (TRAEs). Mean differences (MDs) or standardised MDs (SMDs) before and after treatment with 95% CIs were used for continuous variables, and risk ratios (RRs) with 95% CIs were used for TRAEs. Results Six randomised controlled trials (RCTs) and six non-RCTs (case series) were included (n_total=224 participants, n_RCT=165). No significant difference was found in pooled results of UPDRS-III (available in four RCTs and two non-RCTs, SMD=−0.19, 95% CI −0.98 to 0.60), UPDRS-II (four RCTs and one non-RCT, SMD=−0.55, 95% CI −1.22 to 0.13), FOG-Q (one RCT and one non-RCT, SMD=0.53, 95% CI −1.93 to 2.98) or the risk of TRAEs (five RCTs, RR 0.87, 95% CI 0.37 to 2.01). Significant decreases were found in pooled VAS score (three RCTs and five non-RCTs, MD=−2.14, 95% CI −3.05 to −1.23) and TUG (MD=−2.06, 95% CI −2.91 to −1.20) after BTX treatment. Conclusions BTX may not be associated with motor symptom alleviation, although it benefits pain alleviation and functional mobility improvement.
1. English language needs to be edited by a native speaker. 2. The manuscript gets a bit confusing and does not read very well. The reader is mostly interested in which conditions induced by PD are treated by BTX (and this has to be spelled consistently in the manuscript, which is not the case) and in what the magnitude of the effect is.
Essentially, the part of the table that summarizes the motor outcomes would have been nice to discuss in more detail, as a short text outlining the motor conditions seen in PD which can be treated by BTX.
GENERAL COMMENTS
I congratulate the authors on this interesting systematic review and meta-analysis. I have two major concerns. Firstly, the use of random-effects or fixed-effects meta-analysis models based on the values of statistical heterogeneity. Based on current recommendations, one model should be used throughout. Based on the methodological heterogeneity in the included studies, the random-effects model is likely the most adequate. Secondly, combining data from randomised trials and observational studies will likely increase the risk of selection and confounding bias. In order to account for the different study designs I would recommend using adjustment methods, as in the following paper https://bmjopen.bmj.com/content/9/3/e025232.abstract.
Furthermore, subgroup analyses should likely be conducted for the effect of study design.
GENERAL COMMENTS
Your concern is very welcome and the implementation has also been successful. However, the work is not up to date. The literature research must cover at least up to and including 2021. Only recently have some important publications appeared which have not been taken into account, since the last few years were not included. I recommend resubmitting an updated manuscript.
REVIEWER
Yang, Ke, Beijing University of Technology
REVIEW RETURNED: 25-Jul-2022
GENERAL COMMENTS
In this paper, the authors conducted a meta-analysis to study the effectiveness and safety of botulinum toxin. The statistical models the authors used are appropriate and clearly described.
The conclusion derived is consistent with the results of the meta-analytical models. From the statistical aspect, there is one place that can be further improved.
For line 40-41, I suggest the authors put 'P<0.1' before 'over 50%'. The reason is that the criterion of 'P<0.1' is based on Cochran's Q test, and the criterion of 'over 50%' is based on the I^2 index.
VERSION 1 - AUTHOR RESPONSE
Reviewer #1
Q1. English language needs to be edited by a native speaker.
Authors' response: We sincerely appreciate your valuable suggestion. Based on your suggestion, we have critically edited the whole manuscript to improve English.
Q2. The manuscript gets a bit confusing and does not read very well. The reader is mostly interested in which conditions induced by PD are treated by BTX (and this has to be spelled consistently in the manuscript, which is not the case) and what the magnitude of effect is. Essentially, the part of the table that summarizes the motor outcomes would have been nice to discuss in more detail, as a short text outlining the motor conditions seen in PD which can be treated by BTX.
Authors' response: Thanks. We have critically edited the whole manuscript to improve English, organized the structure throughout the manuscript, and also refined the discussion.
Reviewer #2
Q1. I congratulate the authors on this interesting systematic review and meta-analysis. I have two major concerns. Firstly, the use of random-effects or fixed-effects meta-analysis models based on the values of statistical heterogeneity. Based on current recommendations, one model should be used throughout.
Based on the methodological heterogeneity in the included studies, the random-effects model is likely the most adequate.
Authors' response: We sincerely thank the Reviewer for this comment. We have corrected the description of choosing a statistical model based on your suggestion: "Nevertheless, due to the anticipated heterogeneity of the included studies, particularly differences in study design, we performed meta-analysis using a random-effects model."
Q2. Secondly, combining data from randomised trials and observational studies will likely increase the risk of selection and confounding bias. In order to account for the different study designs I would recommend using adjustment methods, as in the following paper https://bmjopen.bmj.com/content/9/3/e025232.abstract. Furthermore, subgroup analyses should likely be conducted for the effect of study design.
Authors' response: Thanks. Based on your suggestion, we have calculated all estimates using an adjustment method: "The methodological study has pointed out that combining data from randomized and non-randomized studies may increase the risk of selection and confounding bias. Therefore, based on a previous study, when non-randomized studies were assessed as being at critical risk of bias, we adjusted the within-study variance-covariance matrix using a precision weight correction factor of 0.1 to provide a more conservative pooled estimate." At the same time, we also predesigned subgroup analyses according to the study design: "We pre-designed subgroup analyses according to study design (randomized and non-randomized studies)".
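As an illustrative sketch of this kind of adjustment (not the exact implementation of the cited methodological paper), the correction can be read as multiplying the precision weight of each critical-risk non-randomised study by 0.1, i.e. inflating its variance tenfold, inside standard DerSimonian-Laird random-effects pooling; all function and variable names below are our assumptions.

```python
import numpy as np

def adjusted_random_effects(yi, vi, nonrandomized, w_factor=0.1):
    """DerSimonian-Laird pooling with down-weighted non-randomised studies.

    yi: study effect estimates; vi: within-study variances; nonrandomized:
    boolean mask for studies at critical risk of bias. Their variances are
    inflated by 1/w_factor, which scales their precision weights by w_factor.
    """
    yi = np.asarray(yi, float)
    vi = np.asarray(vi, float)
    vi = np.where(np.asarray(nonrandomized, bool), vi / w_factor, vi)
    w = 1.0 / vi
    fe_mean = (w * yi).sum() / w.sum()
    q = float((w * (yi - fe_mean) ** 2).sum())      # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)        # between-study variance
    w_re = 1.0 / (vi + tau2)                        # random-effects weights
    est = (w_re * yi).sum() / w_re.sum()
    se = (1.0 / w_re.sum()) ** 0.5
    return est, (est - 1.96 * se, est + 1.96 * se)  # pooled estimate, 95% CI
```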
Reviewer #3
Q1. Your concern is very welcome and the implementation has also been successful. However, the work is not up to date. The literature research must cover at least up to and including 2021. Only recently have some important publications appeared which have not been taken into account, since the last few years were not included. I recommend resubmitting an updated manuscript.
Authors' response: We thank the Reviewer for the comment. We have conducted an updated literature search to retrieve those studies published before October 20, 2022.
Reviewer #4
In this paper, the authors conducted a meta-analysis to study the effectiveness and safety of botulinum toxin. The statistical models the authors used are appropriate and clearly described. The conclusion derived is consistent with the results of the meta-analytical models. From the statistical aspect, there is one place that can be further improved. For lines 40-41, I suggest the authors put 'P<0.1' before 'over 50%'. The reason is that the criterion of 'P<0.1' is based on Cochran's Q test, and the criterion of 'over 50%' is based on the I^2 index.
Authors' response: We sincerely thank the reviewer for this comment. We have corrected what the reviewer pointed out: "We evaluated statistical heterogeneity between studies using Cochran's Q and Higgins' I2, and statistical heterogeneity was considered significant if P<0.1 and I2>50%."
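For reference, the heterogeneity decision rule quoted above can be written out as a small sketch (names are ours; scipy's chi-square survival function supplies the Q-test p-value):

```python
import numpy as np
from scipy.stats import chi2

def heterogeneity(yi, vi):
    """Cochran's Q and Higgins' I^2 for effect estimates yi with variances vi."""
    yi = np.asarray(yi, float)
    w = 1.0 / np.asarray(vi, float)
    q = float((w * (yi - (w * yi).sum() / w.sum()) ** 2).sum())
    df = len(yi) - 1
    p = float(chi2.sf(q, df))                               # significant if P < 0.1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0   # substantial if > 50%
    return q, p, i2
```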
GENERAL COMMENTS
Overall, I think this is a well-conducted systematic review and meta-analysis. I would recommend the following: | 2023-06-18T06:17:07.362Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "97d69bb2f97e4ff86cbd95a84567de4200dcc66f",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f21ab08f2832b197b282f1c4c65654fb6ae32e2f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233852727 | pes2o/s2orc | v3-fos-license | Study on Natural Ventilation Potential of Ordinary Office Buildings in Guangzhou Based on Architectural Factors
As a traditional architectural design technology, natural ventilation has the functions of improving thermal comfort and indoor air quality and saving energy, which is in line with the architectural development concept of green and healthy buildings. Guangzhou is located in the hot-humid Lingnan area, which is rich in wind resources and attaches great importance to the natural ventilation of buildings. Natural ventilation potential (NVP) indexes are evaluation indexes that can effectively assist and optimize the natural ventilation design of buildings. With abundant relevant studies at home and abroad, various NVP evaluation indexes and calculation methods have been proposed, and strategies for the natural ventilation design of buildings in different regions have also been given. Based on a parametric building performance simulation platform, this paper introduces a new NVP evaluation method to carry out parametric simulation research on ordinary office buildings suitable for natural ventilation in Guangzhou. The results show that the main factor limiting the NVP of ordinary office buildings in Guangzhou is condensation in summer, and that low building density, a southwest building orientation, a high glazing ratio or cross ventilation will bring about high NVP. Meanwhile, there is an optimal value for the thermal resistance of the roof, and the greater the thermal storage coefficient of the main material, the greater the night natural ventilation potential.
Introduction
Natural ventilation has the functions of improving indoor air quality, improving comfort and reducing energy consumption. Even with the development and perfection of air conditioning systems today, natural ventilation still has irreplaceable advantages. Firstly, the use cost of natural ventilation is almost zero, because wind is one of the renewable energy resources that human beings can use endlessly. Secondly, natural ventilation can provide clean and fresh air, which is beneficial to people's physical health. Finally, compared with mechanical ventilation with monotonous wind speed changes, natural ventilation with more random wind speed changes can make people feel the stimulation of irregular airflow and get closer to nature psychologically, which is beneficial to people's mental health. In order to facilitate the study of building natural ventilation, many evaluation indexes of NVP have been proposed by scholars, which can be classified as direct natural ventilation potential (DNVP) and indirect natural ventilation potential (INVP) according to the ventilation function. DNVP refers to the potential of a building to meet acceptable indoor thermal comfort and air quality only by relying on natural ventilation during building use time. INVP, also known as night ventilation cooling potential, refers to the potential of natural ventilation cooling and cooling storage in buildings during non-use time.
There are three main factors affecting NVP, namely climate, architecture and technology. Early studies on NVP were mainly aimed at climatic factors; their researchers used formula derivation to deduce the outdoor climatic conditions that met the requirements of indoor thermal comfort and ventilation rate, or were conducive to night ventilation, so as to study the climate-adaptive NVP of different regions. For DNVP, Fracastoro G V [1], based on the theory of natural ventilation, expressed the effective pressure difference as a function of indoor-outdoor temperature difference and wind speed, and used the effective pressure difference method to evaluate climate-adaptive NVP. Yang Lina [2] and Zhang Guoqiang et al. [3] evaluated NVP in different cities of China by using the effective pressure difference method on the basis of this work. Axley J W and Emmerich S J [4] deduced the outdoor temperature and humidity range that can meet the requirements of indoor thermal comfort and ventilation rate at the same time based on the building thermal balance equation, and used the building thermal balance method to evaluate climate-adaptive NVP. Jing Feng et al. [5] evaluated the NVP of office buildings in major cities in different climate regions of China by using the building thermal balance method on the basis of this work. For INVP, Givoni B [6] proposed the temperature difference ratio coefficient (TDR) to measure the night NVP of non-air-conditioned buildings. Fu Xianzhi [7] used the average outdoor temperature difference to measure night NVP in the hot-summer and cold-winter zone of China. Artmann N et al. [8] proposed the climatic cooling potential (CCP) to measure the cooling effect of night natural ventilation.
With the improvement of computer performance and the popularization of building performance simulation software, more and more researchers began to use computer simulation methods to study NVP. Compared with the formula derivation method, the computer simulation method is more accurate and can better reflect the influence of architectural factors on NVP, but its shortcoming lies in the large amount of calculation, making it difficult to compare climate-adaptive NVP across regions. Yao R et al. [9], Qi Xiaoping [10], Zhou Junli [11], Qin Xinghong [12], Bu Gen [13], Yang Yulan et al. [14], Tong Z [15] and Cheng J et al. [16] all used computer simulation methods to study NVP based on climatic or architectural factors.
The existing domestic and foreign studies on NVP focus mainly on climatic factors, while few studies focus on the influence of architectural factors on NVP, and the consideration of architectural factors is mainly based on case studies. Guangzhou is located in the hot-humid Lingnan area with abundant wind resources, and local traditional buildings attach great importance to natural ventilation. At present, there are many studies on the natural ventilation of traditional buildings in Guangzhou, but relatively few on the natural ventilation of modern public buildings such as office buildings. Thus, this paper, based on Rhinoceros & Grasshopper, a parametric architectural design platform, calls EnergyPlus and its ventilation module COMIS and adopts two NVP evaluation indexes to carry out parametric simulation, in order to study the influence of architectural factors on NVP, provide a reference for the natural ventilation design of ordinary office buildings in Guangzhou, and provide a new idea for more accurate NVP calculation.
In more than 100 kinds of basic and in-depth tests specified by the IEA, EnergyPlus, the building energy consumption simulation software used in this study, meets the reliability requirements, and its maximum deviation from other software is not more than 5.2% [17][18][19]. As for the reliability of the multizone model that COMIS uses for natural ventilation simulation, a study [20] reviewed the relevant experimental verification in the past, and the results showed that, for natural ventilation driven by wind pressure and thermal pressure, the multizone model could reasonably predict the ventilation rate and the air flow between zones. At present, COMIS has been incorporated into EnergyPlus as the core of its ventilation module. Therefore, EnergyPlus has sufficient accuracy for natural ventilation calculation and can be used in this study.
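For orientation, the batch workflow behind such a parametric study can be sketched as a script that runs one EnergyPlus simulation per working condition. This is only an illustrative sketch, not the authors' actual Grasshopper workflow; the weather file name and paths are assumptions, while the `-w`, `-d` and `-r` flags are the standard EnergyPlus command-line options.

```python
import subprocess
from pathlib import Path

# Illustrative paths; the study itself drives EnergyPlus through
# Rhinoceros & Grasshopper rather than a shell script.
WEATHER = "Guangzhou_TMY.epw"   # typical meteorological year file (assumed name)
OUT_ROOT = Path("results")

def run_case(idf_file: str, tag: str) -> None:
    """Run one EnergyPlus case and write its hourly outputs to results/<tag>."""
    out_dir = OUT_ROOT / tag
    out_dir.mkdir(parents=True, exist_ok=True)
    # -w: weather file, -d: output directory, -r: run ReadVarsESO to export CSVs
    subprocess.run(
        ["energyplus", "-w", WEATHER, "-d", str(out_dir), "-r", idf_file],
        check=True,
    )
```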
Research object
In this paper, the model of a typical ordinary office building is established according to the Design Standard for Energy Efficiency of Public Buildings (GB 50189-2015 [21]). The building is 10 stories high (3.6 m per floor), 45 m wide and 17.4 m deep. The total building area is 7830 m2, the external area is 5275.8 m2, and the shape coefficient is 0.187. The architectural plan and model diagram are shown in Figure 1.
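These figures are mutually consistent: each floor plate is 45 m × 17.4 m = 783 m², so ten floors give 7830 m², and, taking the shape coefficient as external surface area divided by enclosed volume (as in the Chinese energy efficiency standards), it follows that

$$S = \frac{A_{ext}}{V} = \frac{5275.8\ \mathrm{m^2}}{7830\ \mathrm{m^2} \times 3.6\ \mathrm{m}} = \frac{5275.8}{28188} \approx 0.187$$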
Parameterization of architectural factors and other parameter settings
For a typical ordinary office building, the main variables of the climatic, architectural and technical factors that influence NVP can be seen in Table 1. Considering daylighting needs, the long side of the building should face south as far as possible, so the building orientation parameter (of the long side) is varied from -40° to 40° in 10° steps (due south is 0°, southwest is positive). The building density parameter is divided into low, medium and high density, corresponding to 0.1, 0.3 and 0.5 respectively. The thermal performance parameters of the main structure can be divided into two parts: walls (including external walls) and floors (including the roof). For the walls, the thermal performance is mainly determined by the main material when the basic structure is unchanged. Therefore, according to thermal conductivity from small to large, five main materials are selected: air brick, clay brick, silicate brick, lime sand brick and reinforced concrete. For the floors, because the main material is reinforced concrete, the main factor affecting NVP is the thermal performance of the roof. Therefore, with the basic structure of the roof unchanged, the thermal performance parameter is varied by reasonably changing the thickness of the roof insulation. In this paper, the thickness of the roof insulation (aerated concrete) is divided into 60 mm, 100 mm, 140 mm and 180 mm. Referring to the Design Standard for Energy Efficiency of Public Buildings and the National Technical Measures for Design of Civil Construction: Special Edition - Energy Conservation (2007 Edition), the main structure and material thermal parameter settings of the building model are shown in Table 2 and Table 3. Considering the daylighting of the building and the thickness of floor slabs, beams and columns, the glazing ratio of the exterior walls is varied from 0.2 to 0.8 in steps of 0.1. The ventilation mode is divided into single-sided ventilation and cross ventilation according to whether the corridor is equipped with high ventilation windows that are opened; in this simulation experiment, the glazing ratio of the corridor high ventilation window is 0.1. The meteorological parameters used for simulation are taken from the typical meteorological year data for Guangzhou downloaded from the official EnergyPlus website, and the technical parameter settings, including various schedules and internal heat gains, refer to the relevant suggestions and introductions in the Design Standard for Energy Efficiency of Public Buildings and the reference documents of EnergyPlus. The resulting parameter grid is enumerated in the sketch below.
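Taken together, these levels define the full factorial design simulated in this study. A minimal sketch of the enumeration (identifier names are ours; the levels are those listed above):

```python
from itertools import product

# Parameter levels as listed above; identifier names are illustrative.
orientations = list(range(-40, 50, 10))          # deg from due south, SW positive (9 levels)
densities = [0.1, 0.3, 0.5]                      # building density (3 levels)
roof_insulation_mm = [60, 100, 140, 180]         # aerated concrete thickness (4 levels)
wall_materials = ["air brick", "clay brick", "silicate brick",
                  "lime sand brick", "reinforced concrete"]  # (5 levels)
ventilation_modes = ["single-sided", "cross"]    # (2 levels)
glazing_ratios = [round(0.2 + 0.1 * i, 1) for i in range(7)]  # 0.2-0.8 (7 levels)

cases = list(product(orientations, densities, roof_insulation_mm,
                     wall_materials, ventilation_modes, glazing_ratios))
print(len(cases))  # 9*3*4*5*2*7 = 7560, matching the number of working conditions
```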
Calculation of NVP
Based on the previous NVP calculation methods, considering the use characteristics of office buildings, this paper proposes the following NVP calculation methods for office buildings.
DNVP refers to the potential of office buildings to meet acceptable indoor thermal comfort and air quality only by natural ventilation during working hours (8:00-18:00). Because people's health and comfort demands during working hours need to be considered, this evaluation index estimates natural ventilation performance from three aspects: ventilation rate, condensation and indoor thermal comfort. On the basis of the effective hours of natural ventilation proposed in reference [4], this paper puts forward the following improved calculation formula for the DNVP quantitative index.
$$\mathrm{DNVP} = \frac{h_{nv,b}}{h_{nv,c}} \tag{1}$$

Where $h_{nv,b}$ is the effective hours of direct natural ventilation in office buildings throughout the year, that is, the hours that can meet the requirements of the ventilation rate limit, non-condensation and thermal comfort at the same time when using natural ventilation during working hours. The thermal comfort criterion refers to a previous study on the thermal comfort calculation of the human body in the Guangzhou area [22]. The non-condensation criterion is that the temperature of the inner surface of the building is not lower than the dew point temperature. According to the specification [23], the hourly ventilation rate limit of office buildings is 1 ach. $h_{nv,c}$ is the effective hours of direct natural ventilation in Guangzhou over the whole year, standing for the maximum potential to meet the requirements of natural ventilation in working hours when only the climatic factors of Guangzhou are considered, that is, the hours that can meet the same three requirements under outdoor climatic conditions alone. The INVP quantitative index is calculated as:

$$\mathrm{INVP} = \frac{\sum_{d}\sum_{h} \rho c_{p} R_{d,h}\left(T_{in}-T_{out}\right)}{\sum_{d}\sum_{h} \rho c_{p} \dot{R}_{max,d,h}\left(T_{in}-T_{out}\right)} \tag{2}$$

Where $d$ and $h$ are the days and hours of indirect natural ventilation, and the calculation interval is summer (May to November) in Guangzhou, referring to the method of dividing the climatic seasons in the Division of Climatic Season (QX/T 152-2012 [23]). $T_{in}$, $T_{out}$ and $T_{set}$ represent the indoor air temperature, the outdoor air temperature and the indoor cooling set-point temperature respectively; the calculation of INVP is carried out only if $T_{in} > T_{out}$ and $T_{in} - T_{set} > 0$. $R_{d,h}$ and $\dot{R}_{max,d,h}$ are the hourly ventilation rate and the maximum hourly ventilation rate that can be achieved under ideal conditions. $\rho$ is the air density (1.29 kg/m³) and $c_{p}$ is the specific heat capacity of air (1.005 kJ/(kg·K)). The numerator of the formula refers to the total cooling capacity per unit area during indirect natural ventilation, and the denominator means the maximum total cooling capacity per unit area that can be achieved under ideal conditions, that is, when the building consists only of columns and floors and the indoor wind speed is equal to the outdoor wind speed, so $\dot{R}_{max,d,h}$ can be calculated by the following formula:

$$\dot{R}_{max,d,h} = \frac{3600\, v_{out,d,h}}{D} \tag{3}$$

Where $v_{out,d,h}$ is the hourly outdoor wind speed (m/s) and $D$ is the building depth (m). The above NVP indexes weaken the influence of climatic factors in order to isolate the influence of architectural factors on NVP, and they can also be used to study the NVP of a certain type of building in different regions. This study takes DNVP as the main evaluation index and INVP as the auxiliary evaluation index (when DNVP is the same, INVP is compared) to evaluate the NVP of ordinary office buildings in Guangzhou, and the parametric NVP calculation module is shown in Fig. 3.
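To make the criteria concrete, the following is a minimal post-processing sketch of equations (1)-(3), assuming hourly arrays exported from the EnergyPlus runs; the function and variable names are ours, and the Magnus constants in the dew-point helper are a standard approximation rather than the paper's stated method.

```python
import numpy as np

def dew_point(t_air, rh):
    """Magnus approximation of the dew point (deg C); rh in percent."""
    a, b = 17.27, 237.7
    g = np.log(rh / 100.0) + a * t_air / (b + t_air)
    return b * g / (a - g)

def effective_hours(comfort_ok, t_surface, t_dew, ach, work_mask, ach_limit=1.0):
    """h_nv: hours meeting comfort, non-condensation and the 1 ach limit at once.

    All inputs are hourly arrays of length 8760; work_mask marks 8:00-18:00.
    DNVP per eq. (1) is this count for the building divided by the same count
    computed for climate-only (ideal) conditions.
    """
    ok = comfort_ok & (t_surface >= t_dew) & (ach >= ach_limit) & work_mask
    return int(ok.sum())

def invp(t_in, t_out, ach, ach_max, t_set, summer_mask):
    """Eq. (2): realised over ideal night cooling; rho*c_p and area cancel."""
    gate = summer_mask & (t_in > t_out) & (t_in - t_set > 0)
    num = (ach[gate] * (t_in[gate] - t_out[gate])).sum()
    den = (ach_max[gate] * (t_in[gate] - t_out[gate])).sum()
    return float(num / den) if den > 0 else 0.0
```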
NVP optimization analysis
A total of 7560 working conditions were simulated in this study; the settings of the working conditions and the optimization results of NVP are shown in Table 4 and Table 5 respectively. It can be seen from Table 5 that the maximum DNVP is 0.8494 under the above working conditions. The main reason why $h_{nv,b}$ is less than $h_{nv,c}$ is that the indoor heat gain has to be considered and the condensation criterion is added.
Under the working condition of optimal DNVP, $h_{nv,b}$ is 1387 h, accounting for 38% of the annual working hours (3650 h), while the annual effective hours of natural ventilation in the office building are 2904 h, accounting for 33.2% of the hours of the year (8760 h). $h_{nv,c}$ is 1633 h, accounting for 44.7% of the annual working hours, while the annual effective hours of natural ventilation oriented by climatic factors in Guangzhou are 4349 h, accounting for 48.5% of the hours of the year. It can be seen from the above analysis that the proportion of $h_{nv,c}$ hours falling within working hours is less than within non-working hours when only climatic factors are considered, while the proportion of $h_{nv,b}$ hours falling within working hours is greater than within non-working hours once architectural factors are introduced. Since the above effective hours all take into account the thermal comfort of users, it can be seen that architectural factors can improve DNVP by increasing the proportion of thermally comfortable working hours in the whole day.
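These two counts reproduce the optimum in Table 5 directly through equation (1):

$$\mathrm{DNVP} = \frac{h_{nv,b}}{h_{nv,c}} = \frac{1387\ \mathrm{h}}{1633\ \mathrm{h}} \approx 0.8494$$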
Under the working condition of optimal DNVP, if the monthly cumulative proportions of thermal comfort hours, non-condensation hours, hours meeting the ventilation rate limit and effective hours of natural ventilation within working hours are investigated separately, the results are as shown in Fig. 4. It can be seen from Fig. 4 that the meteorological conditions in Guangzhou can meet the ventilation rate limit requirement for office buildings throughout the year. In January, February, October, November and December, the main factor limiting DNVP is the thermal comfort criterion; in the other months it is the non-condensation criterion. Over the year as a whole, the adverse phenomenon of condensation in summer in Guangzhou restricts DNVP, and DNVP can be further improved through building dehumidification measures.
Single factor analysis
In order to study the influence of different architectural factors on NVP, this paper will change the single independent variable respectively while keeping the others unchanged under the working condition of optimal DNVP and analysis the simulation data. The results are shown in Fig. 5~10, which demonstrate that the building orientation, building density, ventilation mode and glazing ratio have great influence on NVP, while the thickness of roof insulation and the main material of wall have little influence on NVP. It can be seen from Fig. 5 that there are larger NVP in the southwest working conditions, while that in the south is the smallest. After analyzing the monthly proportion of different evaluation indexes (the same below), it shows that compared with the south working condition, the thermal comfort hours in southwest working conditions decrease in summer (from May to November), and increase in winter, while the non-condensation hours change little. In general, ℎ , in the southwest working conditions increase in winter, and change little in summer because of low non condensation hours, which means that the increase of thermal comfort hours in winter leads to the improvement of DNVP . Similarly, compared with southeast working conditions, there are larger NVP in southwest working conditions for the same reason. The above results may be due to the fact that the dominant wind direction in Guangzhou is north in winter, followed by southeast, and the building towards the southwest can avoid excessive indoor wind speed in winter, which will cause thermal discomfort. It can be seen from Fig. 6 that the smaller the building density, the greater the NVP. After analysis, the smaller the building density is, the greater the thermal comfort hours in winter and the smaller in summer, and the greater the non-condensation hours in the whole year, especially in summer. In general, ℎ , increases most obviously in April, May, September and October. The above results may be due to the fact that the smaller the building density is, the less the surrounding buildings block the sunlight, which increases the temperature of the inner surface of the building and then increases the noncondensation hours of the whole year. However, due to excessive sunlight, the thermal comfort hours from June to August are greatly reduced. Therefore, ℎ , increases the most in April, may, September and October finally.
It can be seen from Fig. 7 that there is an optimum thickness of roof insulation, and a moderate roof thermal resistance is more conducive to the improvement of NVP. The analysis shows that the thermal comfort hours in winter decrease when the roof thermal resistance is low, while the thermal comfort hours in summer decrease when it is high. These effects are more obvious on the top floor, and the roof thermal resistance has little effect on the non-condensation hours. The above results may be due to the fact that a low roof thermal resistance causes serious heat loss from the top floor in winter, while a high one makes it difficult for the top floor to dissipate heat in summer; both lead to a reduction in thermal comfort hours. It can be seen from Fig. 8 that the influence of the main wall material on DNVP is limited; DNVP increases slightly as the thermal conductivity increases. The influence of the main wall material on INVP is approximately linear in the thermal storage coefficient: the greater the thermal storage coefficient, the greater the INVP. The analysis shows that the larger the thermal conductivity of the main material, the greater the thermal comfort hours in summer, while the annual non-condensation hours change little. In general, $h_{nv,b}$ increases slightly as the thermal conductivity increases, which may be because a low thermal resistance of the external wall is more conducive to indoor heat dissipation in summer, thereby increasing the indoor thermal comfort hours in summer. The relationship between the thermal storage coefficient of the main material and INVP is shown in Fig. 11, in which we can see that when the thermal storage coefficient is low, it is approximately proportional to INVP. This may be because the larger the thermal storage coefficient of the main material, the more cooling capacity can be stored in it during night natural ventilation. When the thermal storage coefficient is too large, the growth trend of INVP slows down, which may be because the main material does not reach its maximum cooling storage capacity at night due to the limitations of the meteorological conditions in Guangzhou.
It can be seen from Fig. 9 that the working condition with cross ventilation has greater NVP than that with single-sided ventilation. The analysis shows that, compared with single-sided ventilation, the thermal comfort hours under cross ventilation increase in winter and decrease in summer, and there is no significant difference in non-condensation hours between the two ventilation modes. In general, the working condition with cross ventilation has greater monthly effective natural ventilation hours in winter, which results in a larger DNVP. The above results may be due to the fact that under cross ventilation the indoor air flow is more uniform and the maximum wind speed is smaller in winter, and the air mixes better between the south room and the north room, producing a more uniform temperature and keeping the north room from becoming too cold in winter.

It can be seen from Fig. 10 that the larger the glazing ratio, the greater the NVP. For DNVP, the growth trend begins to slow down when the glazing ratio exceeds 0.5, while INVP is approximately proportional to the glazing ratio. The analysis shows that the larger the glazing ratio, the greater the thermal comfort hours in winter, the smaller in summer, and the greater the annual non-condensation hours, especially in summer. In general, the monthly effective natural ventilation hours increase in all months except June to September. This may be because the room is exposed to more direct sunlight in winter when the glazing ratio increases, which raises the indoor temperature and thereby increases the thermal comfort hours. In summer, especially from June to September, the indoor solar heat gain increases with the glazing ratio, raising the indoor temperature and diminishing the thermal comfort hours; but because the non-condensation hours also increase with the glazing ratio, the monthly effective natural ventilation hours remain roughly unchanged under these competing effects. As for the growth of DNVP slowing down after the glazing ratio exceeds 0.5, it may be because an excessive glazing ratio also increases the indoor-outdoor air exchange in winter, which offsets the increase in solar heat gain.
Conclusions
This paper proposed a new NVP evaluation method and, on that basis, carried out a parametric simulation optimization of ordinary office buildings in Guangzhou. After analyzing the comprehensive and single-factor effects of building orientation, building density, thickness of roof insulation, main wall material, ventilation mode and glazing ratio on NVP, it can be concluded that low building density, a southwest orientation, a high glazing ratio or cross ventilation will bring about high DNVP. Meanwhile, there is an optimal value for the roof thermal resistance, and the greater the thermal storage coefficient of the main wall material, the greater the INVP. The main factor limiting the DNVP of ordinary office buildings in Guangzhou is condensation in summer; DNVP can be improved if appropriate measures are taken to prevent it.
The NVP evaluation method proposed in this paper can provide a reference for architects' natural ventilation design, but it also has some shortcomings. Firstly, because many factors must be considered in architectural design, there will be many restrictions in practice. For example, in this paper the best building orientation is 40° SBW, but in an actual design many factors such as daylighting and the surrounding environment need to be considered, so a 40° SBW orientation is not necessarily the best choice; likewise, a building density of 0.1 is often impractical. Secondly, although the computer simulation method calculates NVP more accurately, it is also more time-consuming; in practical use it is often necessary to divide the parameters reasonably and optimize them within an acceptable range of the independent variables. Thirdly, since no anti-condensation measures were taken in this simulation experiment, the optimization suggestions above mainly aim to improve the thermal comfort hours in winter, and the results may differ for buildings where anti-condensation is a priority. Finally, due to limits on computer performance and simulation time, the experiment has some deficiencies in the selection, range and number of levels of the parameters, which will be improved in a following study.
A deep-learning-based dose verification tool utilizing fluence maps for a cobalt-60 compensator-based intensity-modulated radiation therapy system
Highlights

• A dose verification tool for a cobalt-60 compensator-based IMRT system was developed.
• A deep-learning network was introduced for patient-specific dose prediction using fluence maps.
• This deep-learning model could promote a safer treatment planning process for cancer patients.
Introduction
A novel compensator-based intensity-modulated radiation therapy (IMRT) system using a cobalt-60 machine has been developed to provide cost-effective and high-quality radiation treatments to low- and middle-income countries (LMICs) [1]. This innovative treatment device utilizes a cobalt-60 source and nine compensators. Each compensator is manufactured by 3-dimensional (3D) printing of a plastic mold, which is filled on demand with reusable 2-mm tungsten balls [2]. Currently, the prototype for this system is being manufactured by clinical and engineering collaborators in India. The technology was commissioned into a commercial treatment planning system (TPS) to integrate with the Radiation Planning Assistant (RPA), an automated solution for structure contouring and treatment planning in low-resource environments [3,4].
Quality assurance (QA) in radiation therapy ensures the safe implementation of the prescription in terms of the dose to the target volume, minimal dose to normal cells, minimal personnel exposure, and adequate patient monitoring [5], and has traditionally focused on verifying that the prescribed dose is delivered to the patient [6]. Commercial QA tools exist for cobalt-60 machines and linac machines [7]. However, for the newly developed cobalt-60 compensator-based IMRT system, these tools are likely not directly applicable because of the different geometry of the source and compensators. As such, there is no suitable tool to verify the dose calculation in the TPS.
In this study, we have developed a deep-learning-based dose verification method for accurate and efficient dose predictions for our novel compensator-based IMRT system. A neural network is used for 3D dose predictions for static treatment fields and for IMRT plans for head and neck cancer (HNC) patients. Our deep-learning engine predicts patient-specific dose distributions using CT scans and fluence maps. Similar studies have been reported [21,32]; however, our approach to converting 2D fluence maps to 3D is simpler and less computationally intensive. The approach should be translatable to anatomical sites beyond the one for which it was developed and trained.
Static field dose verification
Dose verification for static fields is an important first step in verifying patient dose as it relates to beam commissioning. Cobalt-60 beam data commissioned into the Eclipse TPS (Varian Medical Systems, Palo Alto, CA, USA) were collected. As shown in Fig. 1(a), the inputs comprised a homogeneous binary mask of a cube-shaped water-equivalent phantom, a beam binary mask with upper and lower borders fitted to the phantom's height, and the overlapping section of the phantom and beam binary masks, while the output was the 3D dose distribution within the phantom. All three input masks are required for the system to function properly; relying solely on the overlap mask is not sufficient. The water-equivalent phantom data and dose distribution data were exported from Eclipse, and the beam binary masks were calculated and generated according to the field size. No density information for the water-equivalent phantom was provided. There were four phantom sizes: 20 × 20 × 20 cm³, 30 × 30 × 30 cm³, 40 × 40 × 40 cm³, and 50 × 50 × 50 cm³. There were 15 beam fields, ranging from 2 to 30 cm wide at 2 cm intervals. Source-to-surface distances (SSDs) were affected by both the phantom size and the source-to-axis distance (SAD). An 80 cm SAD was used, and there were 10 SSDs at 1 cm intervals, ranging from SSD − 5 cm to SSD + 4 cm. A total of 600 data sets were generated, then randomly divided into 480-60-60 sets for training, validation, and testing. The input and output spatial dimensions were (256, 256, 256, 3) and (256, 256, 256, 1), respectively.
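As a sketch of how such a data set can be enumerated, the snippet below builds the 4 × 15 × 10 grid of static-field configurations, applies a random 480-60-60 split, and assembles a three-channel mask volume. The voxel geometry, the 60 cm nominal grid extent, and the random seed are illustrative assumptions, not the study's actual preprocessing code.

```python
import itertools
import random

import numpy as np

# Enumerate the 4 x 15 x 10 = 600 static-field configurations described above.
phantom_sizes_cm = [20, 30, 40, 50]
field_widths_cm = list(range(2, 31, 2))   # 2, 4, ..., 30 cm
ssd_offsets_cm = list(range(-5, 5))       # SSD-5 ... SSD+4, 1 cm steps

configs = list(itertools.product(phantom_sizes_cm, field_widths_cm, ssd_offsets_cm))
assert len(configs) == 600

# Random 480-60-60 split for training / validation / testing.
rng = random.Random(42)  # seed is an assumption; the paper does not state one
rng.shuffle(configs)
train, val, test = configs[:480], configs[480:540], configs[540:]

def make_input_volume(phantom_cm, field_cm, grid=256, extent_cm=60.0):
    """Stack the three binary masks into a (grid, grid, grid, 3) input:
    phantom mask, beam mask, and their overlap. Geometry is schematic —
    voxel spacing and beam divergence are simplified for illustration."""
    vol = np.zeros((grid, grid, grid, 3), dtype=np.float32)
    c = grid // 2
    # Phantom: centred cube occupying phantom_cm of the nominal extent.
    half = int(grid * phantom_cm / extent_cm / 2)
    vol[c-half:c+half, c-half:c+half, c-half:c+half, 0] = 1.0
    # Beam: a column of width field_cm spanning the full height.
    bw = int(grid * field_cm / extent_cm / 2)
    vol[:, c-bw:c+bw, c-bw:c+bw, 1] = 1.0
    # Overlap channel.
    vol[..., 2] = vol[..., 0] * vol[..., 1]
    return vol
```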
Patient-specific dose verification
Physician-approved volumetric modulated arc therapy (VMAT) plans for 45 head and neck cancer cases [4] were collected, de-identified according to a protocol approved by the institutional review board of the University of Texas MD Anderson Cancer Center, and then re-planned to create cobalt-60 compensator-based nine-field IMRT plans using the same initial contours and dose prescriptions as the original VMAT plans. The beam angles employed in these IMRT plans ranged from 0 to 320 degrees at 40-degree intervals. Patient-specific dose prediction was then accomplished using two approaches. 1) Each of the 9 fields within a plan was treated as a separate field dose calculation, yielding 405 sets, which were separated into 333-36-36 sets for training, validation, and testing. Patient CT scans, binary beam masks overlapped with the patient CT scans, and fluence maps overlapped with the patient CT scans were employed as inputs, with the patient-specific 3D dose distribution for one field as the output (Fig. 1(b)). The patient CT scans and the binary beam masks were normalized from 0 to 1, and the fluence map was projected into 3D. The resulting predicted dose distributions for the 9 fields of a single plan were combined to create the total dose for the plan. 2) In addition to the 45 existing plans, we expanded the data set with 92 further IMRT plans created from physician-approved RPA plans. In this case, the 9 beam field masks, as well as the 9 fluence maps, were combined into one per plan, resulting in a total of 137 sets that were then divided into 111-13-13 training, validation, and testing sets. The inputs and outputs were otherwise identical to those of the previous approach (Fig. 1(c)).
The Eclipse TPS calculates the dose in each voxel of a patient based on the energy-dependent fluence [33]. In contrast to a linac, a cobalt-60 beam has a discrete energy spectrum, so the fluence is meaningful as a direct input to the dose calculation. The fluence intensity exported from Eclipse ranged from 0 to 1 and determined the delivered doses based on the prescribed target doses [34]. Since the fluence map format is 2D, it needed to be converted to 3D. A broad-beam ray-tracing algorithm and a 3D digital differential analyzer algorithm have been employed to project fluence maps onto the dose domain in previous studies [21,32]; however, we adopted a simpler method. The 2D fluence map was extended in line with the beam divergence, using the SAD as the reference point. No attenuation was incorporated, in order to reduce computational complexity. The spatial dimensions of the input and output were (256, 256, 256, 3) and (256, 256, 256, 1), respectively.
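A minimal sketch of this divergent projection is given below, assuming nearest-neighbour sampling and illustrative voxel spacings; the paper does not report these implementation details, so all numeric defaults here are assumptions.

```python
import numpy as np

def project_fluence_to_3d(fluence_2d, sad_cm=80.0, depth_spacing_cm=0.2,
                          n_depths=256):
    """Expand a 2D fluence map into a 3D volume along the beam axis.

    Each depth plane is the fluence map magnified about the central axis by
    the divergence factor (sad + d) / sad; no attenuation is applied,
    mirroring the simplification described in the text.
    """
    ny, nx = fluence_2d.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    yy, xx = np.mgrid[0:ny, 0:nx]
    volume = np.zeros((n_depths, ny, nx), dtype=np.float32)
    for k in range(n_depths):
        d = (k - n_depths // 2) * depth_spacing_cm  # depth relative to isocentre
        mag = (sad_cm + d) / sad_cm
        # Sample the fluence map at the back-projected (de-magnified) position.
        src_y = cy + (yy - cy) / mag
        src_x = cx + (xx - cx) / mag
        iy = np.clip(np.round(src_y).astype(int), 0, ny - 1)
        ix = np.clip(np.round(src_x).astype(int), 0, nx - 1)
        inside = (src_y >= 0) & (src_y <= ny - 1) & (src_x >= 0) & (src_x <= nx - 1)
        volume[k] = np.where(inside, fluence_2d[iy, ix], 0.0)
    return volume
```

Compared with ray tracing, this per-plane magnification touches each output voxel exactly once, which is what makes the conversion cheap.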
Deep-learning framework for dose prediction
We investigated multiple deep-learning models for 3D dose prediction in patients with HNC and found that the 3D dense dilated U-Net (3D DDU-Net) performed the best [24]. The 3D DDU-Net (Fig. 1(d)) is a more advanced version of the fully dense U-Net, which has been shown to outperform the conventional U-Net [35]. Unlike other U-Net architectures, this model employs two encoding paths and two decoding paths, as well as continuous densely-connected dilated convolutions at the bottom stage, where each convolution is linked to all subsequent convolutions. Batch normalization was chosen to prevent overfitting during training [36], and ReLU is faster to compute than the sigmoid function, making a considerable difference in neural network training time [37]. The mean squared error (MSE) loss was minimized using Adam optimization with an initial learning rate of 1.0E-04 and a batch size of 1; MSE was also used to quantify the difference between the ground-truth dose and the predicted dose for each sample. The maximum number of epochs was set at 10,000, with early stopping activated to terminate training if the model performance on the validation sets did not improve after a large number of epochs.
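These optimization settings map directly onto standard tooling. The sketch below shows one possible Keras configuration; the early-stopping patience value and the use of unbatched tf.data pipelines are assumptions, and the 3D DDU-Net architecture itself is taken as given from [24] rather than reproduced here.

```python
import tensorflow as tf

def compile_and_train(model, train_ds, val_ds):
    """Training configuration matching the text: Adam (lr = 1e-4), MSE loss,
    batch size 1, up to 10,000 epochs with early stopping on validation loss.
    `train_ds` / `val_ds` are assumed to be unbatched tf.data.Dataset objects
    yielding (input_volume, dose_volume) pairs."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="mse",
    )
    stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss",
        patience=200,        # "large number of epochs" — exact value assumed
        restore_best_weights=True,
    )
    model.fit(
        train_ds.batch(1),
        validation_data=val_ds.batch(1),
        epochs=10_000,
        callbacks=[stop],
    )
    return model
```

The batch size of 1 is forced mainly by memory: a single (256, 256, 256, 3) float input already occupies roughly 200 MB.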
Quantitative analysis
Percent depth doses (PDDs) and in-plane dose profiles were extracted from the dose distributions of the static fields predicted by the deep-learning model. The average percent deviations over multiple phantom sizes, beam field sizes, and SSDs were compared to ground-truth data from the Eclipse TPS for verification. Furthermore, the ground-truth and predicted dose distributions for each static field were used to evaluate gamma passing rates. For patient-specific dose verification, predicted doses were compared to clinical doses based on the compensator-based IMRT system commissioned in Eclipse [38]. Gamma analysis was performed on the transversal plane at the isocenter to compare the predicted doses to the clinical doses for each plan. The deep-learning-based dose verification system was tested by comparing dose-volume histograms (DVHs) of clinical and predicted dose distributions, followed by statistical assessments for PTVs and OARs. Since the sample size (N = 13) is smaller than 15, the median and range were used for the statistical analysis. For PTVs, D98%, D95%, D5%, Dmax, Dmin, and Dmean were evaluated. Dmax was evaluated for the spinal cord, optic nerve, lens, eyes, cochleae, chiasm, brain stem, and brain, whereas Dmean was evaluated for the parotid glands.
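For readers unfamiliar with gamma analysis, the following is a didactic brute-force implementation of a global 2D gamma passing rate in plain NumPy. It is not the clinical tool used in the study; the 10% low-dose cutoff is a common convention assumed here, and production tools use far more efficient search schemes.

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dose_frac=0.03, dta_mm=3.0,
                    cutoff=0.10):
    """Global gamma on a 2D dose plane: a point passes if
    min over neighbours of sqrt((dr/DTA)^2 + (dd/(dose_frac*Dmax))^2) <= 1.
    Points below `cutoff` of the reference maximum are excluded."""
    dmax = ref.max()
    search = int(np.ceil(dta_mm / spacing_mm)) + 1
    ny, nx = ref.shape
    passed, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff * dmax:
                continue  # below the low-dose threshold
            total += 1
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    r2 = ((di * spacing_mm) ** 2 +
                          (dj * spacing_mm) ** 2) / dta_mm ** 2
                    d2 = ((eval_[ii, jj] - ref[i, j]) /
                          (dose_frac * dmax)) ** 2
                    best = min(best, r2 + d2)
            if best <= 1.0:
                passed += 1
    return 100.0 * passed / max(total, 1)
```

With `dose_frac=0.03` and `dta_mm=3.0` this corresponds to the 3 mm/3% criterion quoted in the Results; 2 mm/3% is obtained by setting `dta_mm=2.0`.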
Results
As shown in Fig. 2(a) and (b), the predicted doses of the static fields from the deep-learning-based dose prediction tool were in excellent agreement with the ground truths, with average gamma passing rates of 99.9% and 100.0% for the 2 mm/3% and 3 mm/3% criteria. In a representative case, Fig. 2(c) shows good agreement between the PDDs and profiles from the ground truth and the prediction. All PDDs and profiles had average percent deviations of 0.4 ± 0.4% and 0.3 ± 0.5%, respectively, across different phantom sizes, SSDs, and field sizes. Fig. 3 shows the DVHs for each field from a representative HNC patient, along with the doses predicted by the deep-learning model. The deep-learning model accurately predicted field-based doses: the average gamma passing rate for the 2 mm/2% criterion was nearly 100.0%, and the mean absolute errors ranged from 0.2 to 0.3 Gy. Though this agreement was excellent, small differences were noted. When the predicted doses from all nine fields were summed to provide a predicted plan dose, these small per-field errors accumulated, and the agreement was reduced. Relatively large structures, such as PTVs, were less affected, whereas smaller structures and structures near or outside the high-dose regions, such as the eyes and lenses, were significantly affected.
Plan-based dose prediction demonstrated better agreement. Fig. 4 illustrates the dose distributions and DVHs of the clinical plan and the deep-learning prediction for a representative HNC patient from the test sets. Both the clinical plan and the predicted doses provided appropriate coverage of the PTVs and sparing of the OARs. The dose coverage for the PTVs was very similar, with D98%, D95%, D5%, Dmin, and Dmean values within 1%, except for Dmax (Table 1). Table 2 shows the dose differences between clinical and predicted doses for the field-based and plan-based dose prediction methods using OAR metrics. The dose distribution from plan-based dose prediction was much improved over that from field-based dose prediction. Across all OAR metrics, the dose differences between the predicted and clinical doses were less than 1.0 Gy, except for the spinal cord, brain stem, and brain.
Moreover, once the model was trained, it took less than two seconds for this deep-learning-based model to predict the 3D dose using a 32 GB GPU node.
Discussion
This study assessed a deep-learning-based secondary dose verification system for dose calculation accuracy in both homogeneous and inhomogeneous materials. Gamma analysis and evaluation of the dose distributions were performed for static fields associated with the commissioned cobalt-60 beams. As part of the patient-specific dose verification, we evaluated the gamma indices and analyzed the DVHs for PTVs and OARs using compensator-based IMRT plans for HNC patients. The dose was predicted accurately for each case, and the calculation time of this system was less than two seconds. This secondary dose verification system could potentially be used with a compensator-based IMRT system.
Fig. 3. Field-based DVH analysis for a representative head and neck cancer patient.

Previous studies have used contoured structures combined with patient CT images to predict 3D dose distributions [23,24,26,28,39]. In this work, we used inputs such as inhomogeneous patient computed tomography (CT) scans, binary beam masks, and fluence maps truncated to the patient CT in 3D. The CT scans carry the inhomogeneity information that affects the dose distribution, and the beam masks provide the boundary lines for the dose distributions. The fluence data have a very close relationship to the dose calculation. A few studies have been conducted using fluence maps with patient CT scans [21,32], and they used a ray-tracing algorithm to transfer 2D fluence maps to 3D volume data [40,41]. In this study, we used a simpler method than those studies to convert the 2D fluence maps into 3D data and confirmed that the dose was accurately predicted using this method.
Another important factor in predicting accurate dose distributions is selecting an appropriate deep-learning model. The majority of studies have used ResNet [32,42], U-Net [18,26,28,39,43], or models derived from U-Net, such as the hierarchically densely connected U-Net, to predict the 3D dose [21,44]. Nguyen et al. [23] conducted performance tests of 3D dose prediction on HNC cases using the standard U-Net, a dense convolutional network, and the hierarchically densely connected U-Net (HD U-Net), and showed that the HD U-Net outperformed the other models in terms of dose coverage, dose conformity, and homogeneity. In addition, the HD U-Net could predict patient dose more accurately and quickly, with fewer parameters, than the other models. Furthermore, Gronberg et al. [24] evaluated various models, including DeepLabv3+, U-Net, and V-Net, which have traditionally been used for image segmentation, along with the HD U-Net and the 3D DDU-Net, which had previously been used for dose prediction; in that study, the 3D DDU-Net achieved the best performance for patient dose prediction. Although Gronberg et al. [24] used the 3D DDU-Net to predict dose based on contoured structures, we used the same deep-learning network to build a fluence-based dose prediction system and achieved good agreement.
It may be possible to extend the prediction model to sites other than head and neck, such as prostate, lung, rectum, and breast cancers, because this deep-learning model predicted the dose for HNC patients with high accuracy even though their treatment plans are relatively complicated. This model may also be more flexible than dose prediction models based on contoured structures, which must be adjusted when the cancer site changes; because it simply predicts from the fluence maps, it is not affected by changes between cancer sites.
Our results indicate that the deep-learning-based model predicts dose distributions with high accuracy and efficiency. A small error was observed when the field-based dose prediction method was used to predict the 3D dose for each field; combining the doses of all 9 fields into one plan then resulted in error propagation. The plan-based dose prediction method resolves this issue because it has no combination step in which error can accumulate. Another model limitation is associated with the boundary effect. As in other studies [32,45], the predicted dose near the beam field boundary appears inconsistent. However, the dose in the beam boundary area is small relative to the dose delivered to the treatment area and lies outside the fluence map boundary, so we determined that boundary effects do not significantly affect the dose prediction. In addition, this research showed that the dose differences were slightly higher for the spinal cord, brain stem, and brain than for the other OARs, although these differences were not statistically significant. This could be attributed to the fact that the volumes of the spinal cord, brain stem, and brain were relatively larger than those of the other OARs.
The deep-learning-based model proposed in this study can be used as a QA tool for secondary dose verification, offering significant benefits to cobalt-60 compensator-based IMRT systems in LMICs owing to its relatively high efficiency, dose calculation speed, and reliability in predicting doses. Although this study focused only on the cobalt-60 compensator-based IMRT system, the approach should be applicable to other commercially used linac systems that use the IMRT technique, and this will be the subject of further investigation.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Table 1. Dose difference (median and range) and percent error between the clinical plan and prediction for the test sets, for planning target volumes, using the plan-based dose prediction method (N = 13).

Table 2. Dose difference (median and range) comparison between clinical and predicted doses for the field-based dose prediction (N = 4) and the plan-based dose prediction (N = 13) for OAR metrics.
Effects of Short-Term Warming On Low and High Latitude Forest Ant Communities
INTRODUCTION
The ecological impacts of projected climatic change are likely to have a strong geographic signal. For species that have geographic ranges constrained by temperature, warming may facilitate population increases and range expansions at high latitudes while simultaneously decreasing population sizes and contracting ranges at low latitudes (Parmesan and Yohe 2003).
Other studies suggest that the consequences of warming will be more severe at lower latitudes, where organisms may be more sensitive to fluctuating temperatures (Deutsch et al. 2008, Tewksbury et al. 2008, Dillon et al. 2010). Differential responses of populations to warming at high versus low latitudes also can be accompanied by community-level changes such as increases in species diversity at high latitudes and decreases in species diversity at low latitudes (Menendez et al. 2006, Wilson et al. 2007).
Other factors may obscure, or even ameliorate, the geographic signal of climate change on ecological communities. For example, temperature increases are expected to be more pronounced at high latitudes (Solomon et al. 2007). Local adaptation to historical climates, and corresponding maladaptation to new climates, also may be more pronounced at high latitudes (Pelini et al. 2009). Although individual organisms at low latitudes may be more sensitive to climatic change than those at high latitudes, ecological communities at low latitudes could be more resilient to environmental change because they are generally more diverse (Wittebolle et al. 2009). Yet, because most experimental studies of the effects of warming have been conducted at single sites (but see Doak and Morris 2010), it is unclear whether warming will have differential effects on the structure and function of similar communities and ecosystems across latitude and diversity gradients. Here, we report the results of a temperature manipulation experiment on ant community composition and foraging activity in deciduous forests that was conducted simultaneously at two sites, separated by 8 degrees of latitude (~1000 km), in the eastern United States.
We focused on ants because they are numerically dominant in many terrestrial ecosystems, and their foraging activities, including seed dispersal, nectivory, granivory, predation, and scavenging, cut across many trophic levels and can affect ecosystem processes such as nutrient cycling (cf. Hölldobler and Wilson 1990, Folgarait 1998). We experimentally manipulated a key component of climatic change, atmospheric warming, because temperature is correlated with patterns of ant diversity and abundance (Kaspari et al. 2003, Sanders et al. 2007, Dunn et al. 2009), seasonal patterns of activity (Dunn et al. 2007), foraging behavior (Ruano et al. 2000), and the outcomes of interactions between species (Cerdá et al. 1997, Holway et al. 2002). We hypothesized that changes in air temperature would have different effects on ant abundance, species richness, species evenness, and foraging activities at the two sites. We expected that ant abundance, diversity and foraging activities would increase at the northern site, where cooler temperatures may be limiting, while they might decrease at the southern site, where many ant species are already exposed to temperatures near their thermal limits.
The warming experiment was conducted simultaneously at two sites, Harvard Forest ("northern site") and Duke Forest ("southern site"). Harvard Forest is in central Massachusetts in the northern hardwood hemlock-white pine transition zone (42° 31' 48" N, 72° 11' 24" W, 300 m elevation above sea level (a.s.l.)). The mean annual temperature at Harvard Forest is 7.1°C and the mean annual precipitation is 1066 mm. Our experimental site at Harvard Forest is in an ~70-yr-old oak-maple stand in the Prospect Hill Tract. Duke Forest is near Hillsborough, North Carolina (35° 52' 0" N, 79° 59' 45" W, 130 m a.s.l.), in the Piedmont region. The mean annual temperature at Duke Forest is 15.5°C and the mean annual precipitation is 1140 mm. Our experimental site at Duke Forest is in an ~80-yr-old oak-hickory stand within the Eno River Unit.
Harvard Forest and Duke Forest share more than 30 ant species, but they differ substantially in ant diversity and abundance (Pelini et al. 2011). An additional 65 species have been recorded at Duke Forest but not at Harvard Forest, and an additional 12 species have been recorded at Harvard Forest but not at Duke Forest. Ants are active at Harvard Forest April through November, while those at Duke Forest are active year-round, but peak abundance occurs May-August at both sites. In our experimental chambers during the six months of this experiment, we captured 16,000 individuals from 28 species at Duke Forest and fewer than 1000 individuals from 9 species at Harvard Forest. Only one species, Aphaenogaster rudis, occurred in the experimental chambers at both sites.
We altered air temperatures in the forest understory near the forest floor by using passively heated and cooled minichambers (Lessard et al. 2010, Wittman et al. 2010). Each minichamber was a table-shaped frame of 1.3-cm-diameter PVC pipe that supported a 1 × 1 m open-top frame 57 cm above the ground. Previous work on ant communities has documented ant responses to both abiotic and biotic changes caused by these treatments in similarly sized plots (e.g., Kaspari et al. 2003, Sanders et al. 2007, McGlynn et al. 2009). A common ant at both sites, Aphaenogaster rudis, has foraging and nest emigration distances shorter than 1 m (Smallwood 1982). To reduce temperatures, we covered the top frame of 10 of the minichambers at each site with a 1 × 1 m piece of shade cloth mesh that reduced solar gain by 80% but allowed rain to penetrate to the soil surface. To raise temperatures in 10 of the minichambers at each site, we attached clear polyethylene sheeting to the top and along each side down to a height of 9 cm above the soil.
We punched 25 6-mm-diameter holes in a uniform pattern in the top polyethylene to allow for rain penetration.We also established 10 control minichambers, which were PVC frames only.
We secured the legs of the minichambers to the ground with iron rods.
Under the forest canopy at both sites, we arranged the 30 minichambers in a completely randomized design, with neighboring minichambers separated by at least five meters. We deployed the minichambers in April 2009, when many ant species actively move their nests (Smallwood 1982). We left the minichambers in place until the experiment was ended in September 2009.
We recorded air and soil temperatures in all of the northern minichambers with thermistors connected to a Campbell Scientific data logger (CR100, Logan, Utah). At the southern site, we measured air temperature in seven randomly chosen minichambers of each treatment (i.e., 21 of the 30 minichambers) using iButton® electronic temperature sensors (Dallas Semiconductors, Dallas, TX). We shielded all air temperature sensors from direct sun and rain and placed them 5 cm above the litter layer beneath the minichambers.
Though the minichamber treatments were implemented as a one-factor ANOVA design with three treatment levels (cooling, warming, control), there was substantial variation in temperature within treatment groups due to microhabitat and other variables not manipulated in this study.
Thus, we treated the temperature manipulation as a continuous variable and used regression to determine the effects of variation in temperature on ant assemblage composition and foraging activities (Inouye 2001, Cottingham et al. 2005). We note that both regression and ANOVA are linear models of identical mathematical form, and that, unlike ANOVA, regression analysis can identify potential nonlinearities in associations between temperature and ant response variables (Cottingham et al. 2005, Meyers et al. 2009).
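As a sketch of this regression approach, a Poisson model with mean temperature as a continuous predictor could be fit as follows; the column names and toy values are hypothetical placeholders for the minichamber data, not the study's dataset.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per minichamber; the values here are invented for illustration.
df = pd.DataFrame({
    "mean_temp": [16.9, 17.2, 17.5, 17.8, 22.5, 22.8, 23.1, 23.4],
    "abundance": [20, 25, 22, 28, 300, 520, 800, 1400],
})

# Poisson regression with temperature as a continuous predictor, mirroring
# the analysis strategy described in the text.
fit = smf.glm("abundance ~ mean_temp", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary())
```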
Though variation in soil temperature is also an important determinant of ant community structure and foraging activities, we used air temperature data in analyses of temperature effects on ant composition and activity because soil temperature was not measured at the southern site. Soil temperatures did track air temperatures similarly in the three minichamber treatments at Harvard Forest (i.e., the differences between average soil and air temperatures were the same in the three treatments; ANOVA: F2,37 = 1.6, P = 0.21; Figure 1). We are confident that the associations we report between air temperature and ant community structure and foraging activities reflect real responses to temperature change. Finally, we also calculated the average daily range of temperatures by subtracting the daily minimum from the daily maximum for each minichamber and used this variable to test whether diurnal variation in temperature affected the ant communities that we studied.
Ant Community Composition
In September 2009, we terminated the experiment and collected all of the leaf litter within each minichamber to sample ants. We extracted, identified and counted ants from all organic matter and loose surface soil in the 1 m² area using Winkler extractors (Fischer 1998). We used general linear models with Poisson error distributions to examine relationships of total ant abundance and species richness with average temperature and diurnal variation in temperature at both sites. We estimated species evenness for each minichamber using Hurlbert's PIE (probability of an interspecific encounter; Hurlbert 1971). This diversity index is equivalent to the slope of an individual-based rarefaction curve measured at its base (Olszewski 2004). We used general linear models to examine the relationship between PIE and temperature at the southern site, but because of strong departures from normality in the data from the northern site, we examined the latter data using locally weighted scatterplot smoothing.
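Hurlbert's PIE has a simple closed form, PIE = (N/(N-1))(1 - Σ p_i²), where the p_i are relative abundances and N is the total number of individuals. The short function below computes it from raw counts; the example sample is invented for illustration, not data from the study.

```python
import numpy as np

def hurlbert_pie(abundances):
    """Hurlbert's PIE: the probability that two individuals drawn at random
    (without replacement) from a sample belong to different species."""
    n = np.asarray(abundances, dtype=float)
    n = n[n > 0]
    N = n.sum()
    if N < 2:
        return np.nan  # undefined for fewer than two individuals
    p = n / N
    return (N / (N - 1)) * (1.0 - np.sum(p ** 2))

# Example: one dominant and three rare species -> low evenness.
print(hurlbert_pie([40, 3, 2, 1]))   # about 0.24, well below 1
print(hurlbert_pie([10, 10, 10, 10]))  # even sample -> close to 0.77
```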
We recorded the rate at which ants removed different kinds of baits to assess the effects of temperature on foraging activities. We used Demerara sugar grains (Signature Brands, Ocala, FL) to estimate nectivory rates, live adult termites (Reticulitermes flavipes) to estimate predation rates (Wilson 1971), dead adults of R. flavipes or Tenebrio molitor (mealworms) to estimate scavenging rates (Jeanne 1979), and milled oat grain (Avena sativa) to estimate granivory rates (Valone and Kaspari 2005). We also measured rates of removal of seeds of wild ginger (Asarum canadense), a native forest understory species that occurs at both sites and whose seeds bear elaiosomes that are commonly dispersed by ants in the eastern US (Hölldobler and Wilson 1990).
We conducted the bait removal experiments at both sites in August through early September 2009. On each census day, we used only one bait type. In each minichamber, we placed one 5.5-cm-diameter plastic petri dish with ten units of bait and recorded the number of bait units remaining at 30-minute intervals for two hours. We quantified bait removal as the area under the curve of the number of baits removed versus time. This measure of activity integrates time to discovery, number of foragers, and rate of removal. To adjust for outliers, we used robust regression to examine the relationship between foraging activities and temperature at both sites.
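This area-under-the-curve measure reduces to the trapezoid rule over the five censuses, as in the sketch below; the example census sequences are invented for illustration.

```python
import numpy as np

def bait_removal_auc(counts_remaining, interval_min=30, n_baits=10):
    """Area under the curve of baits removed vs. time (trapezoid rule).

    `counts_remaining` is the census sequence at 0, 30, ..., 120 minutes;
    removal at each census is n_baits minus the count remaining.
    Returns bait-minutes, so fast discovery and removal score higher.
    """
    removed = n_baits - np.asarray(counts_remaining, dtype=float)
    times = np.arange(len(removed)) * interval_min
    return np.trapz(removed, times)

print(bait_removal_auc([10, 2, 0, 0, 0]))    # rapid removal -> 990
print(bait_removal_auc([10, 10, 10, 8, 5]))  # slow removal  -> 135
```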
Temperature
Average temperatures during the experiment at the southern and northern sites were 22.8 ± 0.3°C and 17.5 ± 0.4°C across treatments, respectively. Warming and cooling minichamber treatments increased and decreased average temperatures by ~0.3°C relative to controls at both sites (Figure 2).
Ant Community Composition
We collected a total of 16,421 individuals and 28 ant species at the southern site and 780 individuals and 9 ant species at the northern site. Crematogaster lineolata was the most abundant ant species at the southern site, and Aphaenogaster rudis was the most abundant ant species at the northern site. Overall, ant abundance (i.e., the number of individuals across all ant species), species richness, and evenness (PIE) were significantly higher at the southern site than at the northern site (abundance: F1,43 = 62, P < 0.001; richness: F1,43 = 194, P < 0.001; evenness: F1,43 = 119, P < 0.001).
Total abundance of ants increased by 240% for every 1°C increase in temperature at the southern site (χ² = 2800; P < 0.001), but was not associated with average temperature at the northern site (Figure 3, upper panels). Species evenness decreased by 60% with a 1°C increase in average temperature at the southern site (χ² = 6.9; P = 0.009) and was highest at intermediate temperatures at the northern site (Figure 3, middle panels). Species richness was not associated with average temperature at either site (Figure 3, lower panels). Species evenness also was highest at intermediate levels of diurnal variation in temperature at the northern site, but no other metrics of ant community composition were associated with diurnal variation in temperature (Figure 4). The abundance of C. lineolata, the most abundant ant species at the southern site, increased by 190% with temperature (χ² = 5700; P < 0.001), while the abundance of Aphaenogaster rudis, the most common ant at the northern site, did not vary with temperature (χ² = 1.7; P = 0.19).
At the southern site, per degree of warming, seed dispersal, nectivory, and granivory decreased approximately 50% from the site averages for these activities (Figure 5, left panels). At the northern site, none of the foraging activities were altered substantially by temperature (Figure 5, right panels). Diurnal variation in temperature was negatively associated with nectivory and granivory at the southern site and weakly positively associated with scavenging at the northern site (Figure 6).
DISCUSSION
Climatic change is expected to have differential effects on ecological communities in different geographic areas, but forecasts of climatic change based on global or even large-scale regional climatic patterns are unlikely to provide accurate assessments of short-term, small-scale changes in temperature, which ultimately regulates local ant abundance, richness, and foraging activities (Wehner et al. 1992, Cerdá et al. 1997, Azcarate et al. 2007, Chong and Lee 2009). Furthermore, few studies have experimentally demonstrated the effects of warming on communities simultaneously at different locales (but see Doak and Morris 2010). Our experimental results suggest that even modestly warmer average daytime temperatures can have large impacts, some mediated disproportionately by abundant species, on ant communities at lower latitudes. At higher latitudes, however, the observed responses were much weaker, and in general they may be slower than those observed in other studies (cf. Parmesan and Yohe 2003).
At the more species-rich southern site, the abundance of Crematogaster lineolata, the most abundant species at that site, increased with temperature. Warming may have accelerated successful brood production and development, or C. lineolata may have moved from cooler patches to the small islands of heat formed by the treatment (Moise and Henry 2010). Both effects are likely to occur as increases in mean temperatures create new thermal landscapes in which some, but not all, patches are warmer than current conditions. At the same time that the abundance of C. lineolata increased, species evenness and overall ant foraging activities decreased with increasing temperature. We suggest that this result may be due to competitive displacement of other species in the chambers by C. lineolata. Altered dominance patterns driven by climatic change have been shown in other systems and may be a common feature of the earliest responses of communities to warming (e.g., Kardol et al. 2010).
In contrast to the strong responses we observed at the southern site, we observed relatively weak responses at the northern site, even though foraging of colonies at the northern site is likely to be limited by cold temperatures (cf. Hölldobler and Wilson 1990). Among community measures, only species evenness was associated with temperature, reaching its highest values at intermediate temperatures. These responses were opposite our initial predictions. It is possible that the overall low ant abundance at the northern site limits the ability to detect responses. Greater increases in temperature may be needed before the abundances of northern populations increase.
Alternatively, it may be the case that the structure and dynamics of more temperate ant communities are not limited exclusively by temperature. Several studies now exist in which northern populations of insects did not experience changes in population sizes with warming (e.g., Adler et al. 2007, Pelini et al. 2009).
The different responses of ant communities to temperature at our two study sites also could be associated with other factors that co-vary with latitude. Although the two study sites do share many ant species and occur in similar deciduous forests, they differ dramatically in ant abundance, diversity and foraging activity. Furthermore, historical differences in climate, particularly temperature, and differences in seasonality may have been strong selective agents that constrain responses to temperature. For example, cold temperate species may have higher thermal maxima relative to ambient temperatures (Deutsch et al. 2008), such that species at higher latitudes have to be warmed more to experience fitness consequences. By manipulating temperature only during spring and summer, we focused on the effects of warming on rates of foraging, development and potentially mortality during the active period of ants in the two regions and avoided potential confounding effects of warming on winter survival.
When ants are most active, they respond to warming by shifts in foraging (and food intake) and/or shifts in development in their present locations. At the hottest temperatures we observed at the southern site, they may also respond through reduced activity or even mortality. Ants also may track environmental conditions by moving their colonies; such a response to climatic change is also seen in other animals (Moise and Henry 2010). Outside of the minichambers at both sites, we have observed multiple within-season relocations of colony sites by Aphaenogaster rudis, and other studies provide similar evidence for the redistribution of ant colonies during a single season (Hölldobler and Wilson 1990, Foitzik et al. 2004, McGlynn et al. 2009, Lessard et al. 2010). Just as for birds and mammals, actual responses to climatic change inevitably reflect a mix of behavioral responses to warming, such as local shifts in habitat use, and demographic responses. A third possibility is that individual foragers may move into treatment areas to forage. Future studies should consider the effects of warming during cooler periods on ant community composition and activity.
As the climate changes, trophic cascades and ecosystem processes dependent on ants are likely to change in tandem (Folgarait 1998, Petchey et al. 1999, Lensing and Wise 2006, Suttle et al. 2007, Barton et al. 2009, Harmon et al. 2009, O'Connor et al. 2009, Gilman et al. 2010, Traill et al. 2010). We found this to be the case at our southern site, where we observed decreases in rates of granivory, seed dispersal, and nectivory. Such changes suggest that ant responses to climatic change may have cascading consequences for species dependent upon particular ants, such as ant-dispersed plants (Gove et al. 2007) or insects tended for honeydew in exchange for protection by ants (Stadler and Dixon 2008). More detailed, long-term studies of the responses of ants to climatic change, both observational and experimental, are needed to improve forecasts of these changes.
Figure 2. Average temperatures June-August in the southern (left) and northern (right) sites.
Herpes simplex virus infection: problems and prospects as perceived by a peripatetic pediatrician.
The multivaried aspects of the herpes simplex viruses (HSV) types 1 and 2 and the infections they produce are discussed. Points emphasized are: (1) the need for considering these (and other viruses) from an evolutionary perspective; (2) the necessity of disseminating current methods for virus identification; (3) the great progress in molecular-virological aspects and in the genetics of the virus which provide new tools for epidemiological and immunological studies and define more convincingly the possible causal role of HSV-2 in cervical carcinogenesis; (4) the problems with vaccines and the therapeutic advances and failures; (5) the great psychosocial aspect of some herpetic infections and the need to be sympathetic and supportive to afflicted patients and their families; (6) the overreaction regarding HSV that currently exists among physicians, nurses, the public, and the press resulting in increased misery for those afflicted or misdiagnosed, or in poor advice or management given by some physicians pressured in part by the fear of malpractice suits. The problems then are many but the prospects for their solution are in sight as more research at all levels is being conducted today in all corners of the world on the complex herpes simplex viruses.
I have been asked to write about the problems and prospects of herpes simplex virus (HSV) types 1 and 2 in a relatively small space. This restriction does not permit a detailed review of the multivaried aspects of these viruses, about which I and various co-workers have written and continue to be asked to write [e.g., 1-16]. I find myself in the much more enviable position of limiting my remarks to some perspectives on the present and future which I can give from having lived for almost 20 years with herpes viruses. Of the various possible ways of looking at these viruses, the most stimulating, and the one which permits the most encompassing panorama of all aspects of virus-host interactions from the molecular to the epidemiological, is the evolutionary perspective. It is then with this "evovirological" [5,6,15,16] view that I will approach this topic.

CLINICAL ASPECTS

First to consider must be the clinical problems, since without them concern with HSV would be like that with Reoviruses 1, 2, and 3: of interest currently only to a few basic investigators. Herpes simplex viruses 1 and 2 are much more important clinically today, since many of the sites that they can infect can also become diseased, often only in special hosts (Table 1).

[Table 1 notes: clinical form most usually seen: A, primary (in newborns = no transplacental antibodies); B, recrudescent; C, about equal frequency; s, frequently subclinical; x, information too incomplete regarding frequency of involvement, HSV type, or clinical form. Italicized are those sites in which the diagnosis has heretofore most commonly been made clinically (and often erroneously). From Nahmias [8].]

The most severe of these diseases are in large part a product of modern life and contemporary medicine. We have only to ask in whom we are seeing today the disseminated herpes, the herpetic pneumonias, or the chronic HSV infections. One growing group of individuals likely to be afflicted with these manifestations of the herpetic infection are those with cancer or other chronic diseases whom we are keeping, or trying to keep, alive with a variety of potent immunosuppressive drugs, e.g., bone marrow transplant recipients. Others are individuals experiencing acute insults, such as the severely burned cases who, in yesteryears, were doomed to die of their burns. Even the deeper ocular herpetic manifestations are believed by some old-time ophthalmologists to be related to the advent of the use of corticosteroids. As regards the newborn, it is not only our advances in medicine (fetal monitors have repeatedly been shown to introduce HSV into babies' scalps), but also changes in sexual mores, which have increased the frequency of genital herpes, the major source of neonatal herpes [2,7,10]. Parenthetically, many babies are undoubtedly being saved, unbeknownst to the physician, from acquiring the mother's genital herpes virus at the time of delivery by the ever-increasing use of cesarean sections performed for other reasons, rational or not. It is obvious that we cannot but continue to increase our efforts to keep our patients with aplastic anemia or with severe burns alive, and that we cannot legislate sex. So what can we do now, or what's ahead? I shall give my views regarding possible resolutions to these problems, and the impact of basic knowledge at the molecular, cellular, and immunological levels, after I discuss further clinicoepidemiological problems.
ONCOGENIC POTENTIAL OF HSV

Let us look for a moment at genital herpes and cervical neoplasia, a subject with which our group at Emory, and now many others, have struggled for nigh 15 years [1,3,4,9,14]. Why would HSV or any other DNA virus cause cancer, which is generally a point of no return for the virus, since current information indicates that the virus in the transformed cell is no longer infectious and therefore incapable of being transmitted? Transformation, as discussed elsewhere [15], must somehow be linked to important functions needed for the survival of the virus in its replication or perhaps in the establishment of latency. It could, however, be only a chance event when the virus (probably with a defective genome) enters a cell particularly susceptible to being transformed. I would similarly explain HSV encephalitis [5], a rare phenomenon (about one case per million), for which we have at present no evidence for any specifically neurovirulent HSV strain nor for any particular immunological susceptibility of the host [13]. In any case, cervical cancer, if it is indeed causally related to HSV, would have been a rare disease in Ancient Woman, as would all cancers detectable most usually in people 40 years of age or older.
Several observations have been made recently regarding the virus-cancer relationship. First has been the finding of viral-specified RNA transcripts and proteins in many cervical neoplasms [17-19]. Second has been the exciting molecular work [19,20] delineating portions of the HSV genome in which the transforming potential appears to reside in HSV-2-transformed hamster cells or in cervical neoplastic cells. This should soon open the possibility of defining the specific carcinogene(s) and the proteins coded in even more detail, allowing more sensitive technology for their detection in human neoplasms and for the demonstration of immunological reactions to these specific proteins in patients with HSV-associated neoplasms. Third has been the observation that inactivated HSV not only transforms in vitro, but also, when inoculated genitally in mice, will produce large numbers of cervical tumors [21; Wentz W, personal communication]. A fourth recent finding suggests that the highest risk of developing cervical neoplasia is in women with primary genital HSV-2 infection, i.e., those who have had no previous HSV-1 infection. If these observations, made independently by U.S. and British workers using different methodologies [22,23], are confirmed, then the potential of vaccination in HSV-seronegative adolescents becomes a much more practical possibility.
PROSPECTS OF IMMUNIZATION
It is this concern, that the transforming potential of HSV noted in vitro and in animals might also occur in humans, which has limited vaccine approaches to those using glycoprotein preparations that lack viral DNA, or to the construction of viral mutants lacking specific nefarious genes. The development of such vaccines is under way [24]. As the questions of efficacy and side effects are resolved, it might then be possible to demonstrate, with well-designed studies over a 5-10 year period, the influence of vaccines in protecting against cervical neoplasia: the final proof that the virus has some causal role in cervical carcinogenesis [4,14].
An important contributor to the possible development of vaccines for the two HSV types is the enormous progress made in recent years in our understanding of the structure and function of the viral genome [25]. That it took only a few years to obtain a map of the HSV genome allows one to believe that we will also have, in the not too distant future, the functions of the coded proteins ascertained. Those proteins involved in immunogenicity may even be synthesized, if not in the chemist's laboratory, then by the use of bacterial recombinant systems. No longer will we ever talk in such crude terms as "soluble" antigen used for diagnostic or immunological studies. The availability of better characterized and purified proteins, besides permitting improved and cheaper potential vaccines, would permit definition of the immunological reactivity in different hosts with primary or recurrent infections or with HSV-associated tumors. Furthermore, the current application of hybridomas to herpes simplex viruses should offer monoclonal antibodies to specific proteins of the viruses.
Coincident with the molecular advances has been the development of immunological assays [11,13]. We now have dozens of methods to detect antibody function at the level of the virus and of the infected or transformed cells: antibodies not only in different classes of immunoglobulins, but also those which act with complement or with K lymphocytes, monocytes, or polymorphonuclear leukocytes to lyse the HSV-infected cells. The possible role of NK lymphocytes in herpetic infections [26] has burgeoned just over the past two years, and assays for lymphocyte cytotoxicity and for various lymphokines are under active study. We are also beginning to appreciate the cyclic nucleotide-HSV-interferon interactions, as well as the effects of other hormones on herpetic infections. The task that remains is to differentiate those immune or nonimmune host factors of relevance in humans from those which are recognizable in the host but are really only secondary events playing no important role in host resistance mechanisms. Just beginning to be examined are the effects of the fine modulation of suppressor and helper systems in controlling the infection or preventing immunopathological disease.
LABORATORY DIAGNOSIS
We are already blessed today with a large number of serological techniques and virological methods for identifying the herpes viruses in the infected host [8,27]. Many of these tests require further work to establish their specificity and sensitivity. In particular, as HSV type-specific proteins are characterized, their application to serological studies attempting to define antibodies to HSV-1, HSV-2, or both viruses will be most helpful. The biggest problem today is actually making current methods for virus identification available to clinicians throughout the U.S. What would be most helpful would be the development of an assay that can be used routinely at the hospital laboratory level for detecting an HSV infection very rapidly. Such a test could be one detecting viral-specific enzymes or antigens in clinical specimens. We desperately need such tests, for example, for the rapid diagnosis of HSV encephalitis, at present only possible with a brain biopsy [28], and for detecting subclinical genital HSV infection in pregnant women at the time of delivery [7,10].
THERAPEUTIC ADVANCES AND FAILURES
The information obtained on viral replication in cells and on virus-specified enzymes has provided possible methods for treatment. It is now established that iododeoxyuridine, adenine arabinoside, and trifluorothymidine are effective in the treatment of ocular herpes. Also recently established is that systemically administered adenine arabinoside will significantly curtail the mortality and sequelae of HSV encephalitis and of neonatal herpes [28; Whitley R et al., unpublished observations]. For the latter two severe herpetic conditions, we badly need better methods for earlier diagnosis and possibly new drugs. On the horizon is a new antiviral, acycloguanosine (acyclovir), which is currently in controlled trials for HSV ocular infections, as well as for genital and non-genital herpes, and is in open trial as a systemically administered drug for the treatment of the more severe herpetic forms of infection. Also requiring better definition is the possible use of interferon in the control of herpetic infections.
We have unfortunately been repeatedly disappointed by the various regimens suggested for use in HSV infections: smallpox vaccination, BCG, transfer factor, levamisole, ether, dye-light treatment, etc. We await more definitive studies on the use of 2-deoxyglucose, of lysine, and of several other therapies claimed to be helpful but still without firm scientific evidence. Always to be kept in mind is the possible physical harm we can cause our patients (or even their contacts, in the case of smallpox vaccination) with unproven regimens, as well as the psychological damage to patients who expect to have their herpes cured with the new "miracle" drug they read about in their daily newspaper or weekly magazine.
The use of topical therapies to curtail the duration of herpetic lesions, if proven to be effective and non-toxic, would indeed be an important advance. However, it would be unlikely to provide the solution to the problem of frequent recurrences and the concern of spreading virus to close contacts. Here is where the key to the HSV problem mainly lies. From an evolutionary perspective, what better way for a virus species to survive than to persist in a latent form to be available for transmission to others at a later time in the host's life? This is the Achilles heel of the herpes virus, if we can only find it. We have learned much more in the past few years about this crucial problem than heretofore. There is now firm evidence that the virus can remain latent in human ganglia of both the sensory and autonomic nervous system [29,30]. Work is actively under way to define the status of the virus genome during latency and the mechanisms for establishment and reactivation of the virus. Such information would permit a different strategy for controlling recurrences by "keeping the virus in" [12]. After all, who cares if the virus is latent in our body unless it is reactivated to infect others and/or cause recurrent lesions?
MOLECULAR EPIDEMIOLOGY
Molecular virologists have aided us in this aspect, as well as at the epidemiological level. They have demonstrated that all HSV strains within one type are different unless epidemiologically related [31]. By the use of restriction enzyme analyses of the HSV genome, it has already been possible to relate the source and spread of neonatal and nosocomial infections and to demonstrate that some genital recurrences may be due to exogenous reinfection [32]. Epidemiology with these, as well as the improved serological tools noted earlier, will allow us to understand better the different modes of spread of herpes simplex viruses, their presence in different populations, and their association with human cancers. Such methodologies might also assist us in ascertaining how frequently a primary genital HSV-2 infection is subclinical and how frequently infection with HSV-1 occurs after a primary HSV-2 infection.
PROBLEMS ONE DOESN'T WRITE MUCH ABOUT
There are several other clinical problems which may be associated with HSV and for which evidence is still flimsy, at best. Is HSV possibly teratogenic in humans? Could it be the cause of some chronic neurological diseases or psychiatric disorders? Of some cardiac or autoimmune diseases? Of other tumors than possibly cancer of the cervix, such as of the lips, oral cavity, endometrium, prostate? The technology currently or soon to be available will permit us to establish more definitively the causal associations of the two HSV types with such entities.
There are several other aspects of HSV infection which are infrequently brought out. One is its psychological impact on many individuals. How can it help but affect the 14-year-old girl who develops a severe primary genital infection after her first sexual exposure? The knowledge that she can spread it to any future sexual contact and to her future babies, as well as its possible relationship to cervical cancer and the problem of recurrent lesions, can ruin her social and mental health, as well as that of many other afflicted individuals. Several marriages have been broken in large part due to the transmission of genital herpes by a mother to her baby, grossly damaged or dead as a result of the infection. The blame of who-gave-it-to-whom can be psychologically devastating. We must therefore be sympathetic to such patients and help them in whatever way possible.' Another poorly discussed aspect is related to malpractice suits: for instance, against a person who may have transmitted the virus to another; against a doctor who did not separate a newborn from a mother with herpetic stomatitis or who did not perform a cesarean section when the mother had genital herpes at the time of delivery; or for doing or not doing a brain biopsy in a patient with possible herpes encephalitis. The end result has been an overreaction to the problem. For instance, women are being told they cannot have babies if they have genital herpes; some are unnecessarily aborted if they have recurrent herpes during the first trimester; others are delivered by cesarean section without any evidence of the virus around the time of delivery. Many individuals also suffer unnecessarily from an erroneous diagnosis of herpes made without laboratory confirmation. Some national committees are recommending that all nursery personnel with a fever blister be removed from patient care. No similar recommendation is being made, however, to remove any asymptomatic oral shedders, who may be even more dangerous (if either is at all). At least persons with fever blisters know they should not fondle babies, and they can cover their lesions. Current evidence indicates that fetal monitors are definitely more dangerous than "neonatal monitors," and that the pregnant woman with genital virus at the time of delivery is the one who is most likely to transmit the virus to a baby. Yet, these same committees are not pushing in areas where the problem really lies. It appears that such august bodies prefer to avoid a still hypothetical risk without facing the problem of what it might mean to other babies who need care when hard-to-get nursery personnel are removed for their cold sores, and without apparent concern for the tremendous financial burden that would be incurred if all hospitals in the U.S. were to practice what the committees recommend.
There is thus a sense of hysteria today pervading the public, the press, and the medical and nursing professions regarding HSV infection. Yet, there are rational approaches to several of the problems, e.g., monitoring pregnant women, obtaining frequent Pap smears, diagnosing accurately the infection, treating some of the diseases produced. Many of the problems, however, still require a firmer scientific basis for effective recommendations. What I have tried to emphasize here is that many of the problems are man- (and woman-) made and not necessarily caused by the virus. There is little question that if the next decade is as replete with new information at all levels about herpes simplex viruses as the past decade has been, the prospect is excellent that we will have the means to manage better many of the problems caused by these viruses.
' There is actually an organization called HELP to assist such afflicted patients (address: P.O. Box 100, Palo Alto, California 94302).
"year": 1980,
"sha1": "6c446df59b7038fa02fcfa1eb05b9dfcee124aaf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "135cee750eb6205d65ec3cef1428e25344c602a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Determining the critical factors of air-conditioning innovation using an integrated model of fuzzy Kano-QFD during the COVID-19 pandemic: The perspective of air purification
At present, people are demanding better indoor air quality during the COVID-19 pandemic. In addition to maintaining the basic functions, new air-conditioning should also add air purification functions to improve indoor air quality and reduce the possibility of virus transmission. There is currently a lack of research on air-conditioning innovation. The aim of this study is to present a two-stage mathematical model for identifying critical manufacturing factors in the innovation process of air-conditioning. In this paper, the Kano model and quality function deployment (QFD) are used to analyze the critical factors affecting air-conditioning innovation. Some studies have proposed a Kano-QFD model to analyze product innovation, but they examine only one stage and thus lose the analysis of the subsequent stages of product innovation. Based on this, this paper studies a two-stage method for prioritizing the critical factors of air-conditioning innovation. Firstly, a questionnaire survey and fuzzy sets are used to collect demand information from multiple agents (customers and professional technicians). Secondly, the Kano model is used to classify demands and calculate multi-agent satisfaction. Then, QFD is used to transform multi-agent demands into engineering property indexes (first stage) and technical property indexes (second stage) and to calculate the weight of each index. Finally, the applicability and superiority of the method are illustrated by taking central air-conditioning as an example.
Introduction
COVID-19 can spread rapidly and massively through the air, so the air must be purified to reduce the viral load it carries. Air-conditioning with an air purification function has therefore become the innovation orientation for air-conditioning. New air-conditioning needs not only to add an air purification function but also to improve the existing basic functions. Air-conditioning innovation is a kind of incremental innovation of existing products. Product innovation is systematic engineering with complex manufacturing processes, high investment costs, high technical requirements, and high R&D risks [1]. The identification of critical factors for air-conditioning innovation lies at the beginning of the product life cycle. Scientifically identifying the market's demand for products can effectively extend the product life cycle [2]. Under given conditions of cost, manufacturing technology, and process equipment, choosing the critical factors that best cater to customer demand preferences as the priority development factors can help companies gain market competitiveness and the best corporate performance. Safizadeh pointed out that a company's choice among different product design methods gives it different competitive advantages; when a company's product innovation does not match customer needs, its performance suffers [3]. Efficient and accurate selection of the critical factors in the product design process can make a company highly competitive in a fierce market [4]. Wang and Zhou pointed out that only product innovation that fully meets the demands of customers can finally be accepted by the market, so research on demand is very important [5].
Scholars have researched product innovation from many perspectives. Eum et al. [6] connect production and innovation, showing that production advantages play an important role in technological and product innovation. Dangelico et al. [7] study green product innovation from the dynamic-capability perspective of sustainable development. Current research mostly focuses on other products; there is no research focusing on air-conditioning innovation. In addition, the innovation of product design typically considers only customer demands from the market, not technological innovation. Furthermore, researchers ignore the ambiguity and uncertainty of customers' demands arising from the limitations of customers' personal knowledge backgrounds. These problems are practical problems faced by enterprises in product innovation. To fill this gap, aiming at the identification of critical factors in the process of air-conditioning innovation, this paper proposes an identification method for the critical factors of air-conditioning innovation that considers both customer demands and technological innovation in a fuzzy environment.
There are great differences and uncertainties between customers' and experts' demands for product innovation. Fuzzy sets can effectively handle numerical and linguistic uncertainties and can transform uncertain information into quantifiable fuzzy numbers [8,9]. According to the driving factors of product innovation (demand-driven and technology-driven), product demands are collected from market customers and product designers through questionnaire surveys. Fuzzy sets are used to transform the demand information into fuzzy values to minimize the deviation of product demand information. The Kano model is widely used in research on demand classification and prioritization and can capture the nonlinear relationship between product performance and customer satisfaction [10]. QFD provides a robust framework to translate customer demands into engineering or technical characteristics [11,12] and can provide valuable information about which functions need to be improved and which should be replaced. Therefore, this paper combines the Kano and QFD methods to build a model that transforms the product requirements of multiple agents into manufacturing features, so as to identify the critical factors that have the greatest impact on the manufacturing process.
In this study, we develop a two-stage model (covering the product planning and process planning stages) of critical factors for the manufacturing process and give a fusion method of fuzzy numbers and Kano-QFD. This model is then applied to the innovation process of air-conditioning. The research questions of the paper are the following:
• How can the opinions of multiple agents (customers and professional technicians) about the product be better integrated into product innovation?
• How can fuzzy numbers be integrated into the Kano model in requirements research?
• How can QFD be used to give a two-stage (product planning and process planning) model of critical factors that brings the theoretical model closer to the actual manufacturing process?
The main contributions of this study include three points. Firstly, traditional product innovation considers only the customer's demand for product functions, but in practice product innovation also comes from technological innovation and upgrading, so we consider the influence of both demand-driven and technology-driven factors. Secondly, fuzzy sets are integrated into the Kano model to reduce the deviation between demanded functions and manufacturing characteristics. Thirdly, this paper uses the QFD model to identify the critical factors in two stages of the product manufacturing process; this two-stage analysis is closer to the real manufacturing situation. Specifically, the proposed method comprises the following phases (a minimal computational sketch of the full pipeline is given after this list):
• In the process of air-conditioning innovation, we consider two innovation driving factors, demand-driven and technology-driven, that is, market customers' demands for air-conditioning and designers' demands for air-conditioning.
• Due to the differences in knowledge background and demand expression among the multiple agents, we introduce fuzzy sets to collect their demands, so that the captured demand is closer to the actual market situation.
• The Kano model is used to classify the demands of the multiple agents and to calculate the weights of the different demands.
• The QFD model is used to decompose customer demands into engineering property indexes (product planning stage) and technical property indexes (process planning stage). After this two-stage decomposition, customer demands can be effectively transformed into production tasks for the design department and the production department.
• The priority ranking of air-conditioning innovation indexes under multi-agent demand preferences is calculated based on Kano-QFD.
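To make the flow of these phases concrete, the sketch below implements a toy version of the pipeline in Python. Every number in it (the linguistic scale, the demand weights, the relation matrices) is a hypothetical placeholder rather than the paper's data, and the aggregation, propagation, and defuzzification steps follow common fuzzy-QFD practice rather than the paper's exact equations.

```python
# Minimal end-to-end sketch of the two-stage fuzzy Kano-QFD pipeline.
# All values are hypothetical placeholders, not the paper's data.
import numpy as np

# Triangular fuzzy numbers (TFNs) stored as (low, mid, high) triples;
# an assumed linguistic-to-TFN scale for expert judgments.
FUZZY_SCALE = {
    "weak":   (0.0, 0.1, 0.3),
    "medium": (0.3, 0.5, 0.7),
    "strong": (0.7, 0.9, 1.0),
}

def average_tfn(tfns):
    """Component-wise fuzzy mean of several experts' judgments."""
    return tuple(float(np.mean([t[i] for t in tfns])) for i in range(3))

def propagate(weights, relation):
    """QFD propagation: column weight_j = sum_i w_i * R_ij, applied
    component-wise to a TFN relation matrix of shape (i, j, 3)."""
    return np.einsum("i,ijc->jc", np.asarray(weights), np.asarray(relation))

def defuzzify(tfn, alpha=0.5, lam=0.5):
    """Alpha-cut defuzzification: weighted average of the cut interval."""
    low, mid, high = tfn
    lo = low + alpha * (mid - low)
    hi = high - alpha * (high - mid)
    return lam * hi + (1 - lam) * lo

# Kano-step output: crisp weights for three demands (placeholders).
demand_weights = [0.4, 0.35, 0.25]

# Stage 1: demands (rows) -> engineering property indexes (columns);
# one cell shown as the fuzzy average of two hypothetical experts.
cell_00 = average_tfn([FUZZY_SCALE["strong"], FUZZY_SCALE["medium"]])
rel_stage1 = [[cell_00,               FUZZY_SCALE["weak"]],
              [FUZZY_SCALE["medium"], FUZZY_SCALE["strong"]],
              [FUZZY_SCALE["weak"],   FUZZY_SCALE["medium"]]]
ew = propagate(demand_weights, rel_stage1)        # fuzzy EW per index

# Stage 2: engineering indexes -> technical property indexes.
rel_stage2 = [[FUZZY_SCALE["strong"], FUZZY_SCALE["medium"]],
              [FUZZY_SCALE["weak"],   FUZZY_SCALE["strong"]]]
tw = propagate([defuzzify(t) for t in ew], rel_stage2)

ranking = sorted(((k, defuzzify(t)) for k, t in enumerate(tw)),
                 key=lambda pair: pair[1], reverse=True)
print("technical index ranking (index, importance):", ranking)
```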
The remainder of this paper is organized as follows. After reviewing some relevant literature in Section 2, we describe the research problem in Section 3 and give a method in Section 4. In Section 5, we provide a case study about air-conditioning innovation. Section 6 concludes this work.
Literature review
The selection of key quality characteristics (KQCs) that are significantly associated with product quality is essential for improving product quality [13]. Wiiam et al. [14] believe that, in addition to technical factors, market factors also play an important role in product innovation. Li et al. [15] propose a KQC selection method that aims to maximize feature (i.e., quality characteristic) importance while minimizing the percentage of selected features. After studying a sample of 2,126 manufacturing companies, Liao et al. [16] find that customer demands positively moderate the impact of innovation intensity and innovation ability. Choudhary and Singh [17] take the hotel industry as the research object and discuss the impact of customer demand and competitiveness on the propensity for innovation in the hospitality sector. Customers want flexibility so that they can choose specific products and services according to their needs [18]. Considering the uncertainty of manufacturing resources, Xu and Yu [19] propose a discrete manufacturing decision-making model under a fuzzy environment, which comprehensively considers customer demand preferences and supplier profit maximization. Dragan et al. [20] introduce fuzzy numbers into the Best-Worst Method (BWM) and TODIM (an interactive multi-criteria decision-making method) and present a multi-criteria prioritization methodology for the automobile industry. Torkayesh et al. [21] propose a new multi-criteria decision-making (MCDM) method, the stratified MCDM, which can effectively deal with environmental uncertainty and the fluctuation of index weights. Furthermore, they use a geographic information system (GIS), the best-worst method (BWM), and a compromise method (MARCOS) to rank landfill locations; this method can obtain a decision matrix with the ideal and anti-ideal under a grey interval set while considering sustainability factors [22]. Yazdani et al. [23] study the problem of supplier evaluation and propose an interval-valued fuzzy neutrosophic (IVFN) model; taking into account the uncertainty of expert evaluation information, they adopt linguistic measures and their corresponding neutrosophic values to obtain this information. Tirkolaee et al. [24] use the fuzzy analytic network process (FANP), the fuzzy decision-making trial and evaluation laboratory (DEMATEL), and the technique for order preference by similarity to an ideal solution (TOPSIS) to rank and select suppliers. Practitioners can better express their opinions (their direction and intensity) based on the fuzzy technique.
Inaccurate identification of market demands results in poor matching between product innovation and market demands. The Kano model is a method for studying the relationship between product quality and customer satisfaction [25]. The QFD model is a method for transforming customer demands into product design or innovation [26]. Jia et al. [27] use the Kano model for mobile phone software development, identifying customer demands for software and determining the priority of software development modules. Based on the differences among decision makers, Yang et al. [28] propose an improved Kano model of customer demand preferences to determine the priority of customer demands. Loucanova and Olsiakova [29] apply the Kano model to the innovation process of wood products; the results show that consumers have a positive understanding of product development. Zhang et al. [30] study customer satisfaction demand identification methods and propose a simple and practical fuzzy group decision-making method, taking the innovative design of kitchenware as an example to verify its applicability. Silva et al. [31] describe a method that integrates quality function deployment with the theory of inventive problem solving, which requires technical innovation specified from an analysis of customers' needs. Ocampo et al. [32] give a fuzzy QFD multiple-attribute decision-making (QFD-MADM) model, which helps to promote sustainability by incorporating requirements at an early stage of the design process; taking food processing as the research object, they study the innovation stage of food processing in the Philippines. Chen et al. [33] evaluate the relationships between customer requirements (CRs) and design requirements (DRs) and the correlations among DRs in QFD processes; their study adopts experimental design and fuzzy sets to collect the data and, on this basis, constructs a fuzzy mathematical model of each CR's satisfaction level, which can represent the interaction between the CR's satisfaction level and the fulfillment levels of the DRs. Cho et al. [34] give a new model, considering users' personal preferences for requirements, which combines the benefits of QFD with those of TOPSIS; the model can be used to analyze positive/negative ideal criteria and limit values between multiple market products and user requirements.
According to Table 1, we can draw the following observations:
1. The existing studies focus on the analysis of innovation factors and product innovation design. Researchers prefer to adopt model-based research methods, and there are abundant research results for deterministic environments.
2. The analysis of innovation factors is an important topic in product innovation research. However, nearly all related papers in the domain of product innovation consider the problem from a single perspective (the customer's viewpoint), while in the real situation product innovation is influenced by both customer demands and technological progress. In this study, the influencing factors of product innovation are analyzed from two aspects: market-driven and technology-driven.
3. Most studies focus on product innovation design, proving that product design is a key point in product innovation, so this paper focuses on the selection of critical factors of air-conditioning innovation. However, the majority of studies limit themselves to product planning while losing control over the subsequent phase of process planning. Thus, this paper gives a two-stage model based on Kano-QFD.
4. Most studies are carried out in a deterministic environment, but an uncertain environment is more in line with the real situation. This paper focuses on product innovation in an uncertain environment; fuzzy numbers are used to deal with the uncertainty of the real situation, which improves the applicability of the model.
5. Due to the limited availability and uncertainty of information in decision making, and the fuzziness of human emotion and recognition, it is often difficult to accurately evaluate and convey judgments about decision objects. An expert employs implicit knowledge, experience, and information more efficiently through linguistic evaluation. A fuzzy set is a versatile tool for both linguistic and numerical modeling, which can transform linguistic information into corresponding computable fuzzy numbers, whereas grey interval numbers, hierarchical theory, and neutrosophic sets cannot handle this problem as directly. Therefore, when dealing with linguistic problems, fuzzy sets are adopted.
Problem description
This paper considers air-conditioning innovation design for multi-agent demand preferences in a fuzzy environment. Air-conditioning innovation proceeds from two aspects: demand-driven and technology-driven. Customers may want products designed according to their own demands; however, product demands will differ due to customers' particular and distinct preferences [35,36]. At the same time, product designers will improve the product in conjunction with the development of technology. In addition, product innovation may also be affected by objective conditions such as technological and environmental constraints. Only by accurately identifying the market's demands for product innovation can one maximize the market's acceptance and adoption of products and ultimately achieve the desired innovation benefits [37]. Following the study of Ding et al., our paper takes all possible individual preferences among the indexes into account [38]. Through market surveys, we can obtain air-conditioning demand information from multiple agents. Fuzzy sets are used to transform the demand information into corresponding fuzzy values. According to the demand information and the Kano model, product demands are divided into five categories: must-be, one-dimensional, attractive, indifferent, and reverse. At the same time, the Kano model is used to calculate the satisfaction of demands. The two-stage QFD model is used to transform customer demands into engineering property indexes and technical property indexes, respectively, and to calculate the weight of each index considering multi-agent demand preferences. Thus, the index rankings are obtained and the key product innovation indexes with the greatest impact on demand are determined. The logic of the paper is shown in Fig 1. Consider the situation where there are three entities, i.e., a market customer group (MC), a process design group (DT), and a product manufacturing group (MT), that give their demands and satisfaction. Let $S^{Z_d} = \{S_1^{Z_d}, \ldots, S_n^{Z_d}\}$ represent the product customer satisfaction of the three entities. Therefore, the importance degree of the j-th engineering property index ($EW_j$) can be calculated by Eq (1), in which $CW_{ij}^{S_d}$ represents the correlation matrix between product satisfaction and engineering property indexes given by the multi-agent.
Based on this, we can further obtain the importance degree of the technical property indexes. Here, $TW_k$ represents the importance degree of the k-th technical property index, and $CT_{jk}^{Z_d}$ represents the correlation matrix between engineering property indexes and technical property indexes given by the multi-agent. An intuitive practice is to determine accurate numbers of different dimensions with respect to each indicator and then employ a typical aggregation function to get the importance degrees of the indicators [39]. However, it is often difficult for customers to express their specific demands for products with precise values; for example, customers may simply want food to be fresher. Liao [40,41] points out that the use of fuzzy sets to represent an expert's preferences when assessing a linguistic variable increases the flexibility of eliciting and representing linguistic information. Based on fuzzy sets and multi-agent demand preferences, this paper integrates the Kano model and the QFD model to rank the importance orders of the key product innovation factors for an air-conditioning.
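Eqs (1) and (2) themselves are not legible here; given the definitions of $CW_{ij}^{S_d}$ and $CT_{jk}^{Z_d}$ above, a plausible reconstruction, following standard QFD weight propagation and offered as an assumption rather than the paper's verbatim equations, is:

$$EW_j = \sum_{i=1}^{n} S_i \cdot CW_{ij}^{S_d}, \qquad j = 1, \ldots, m \quad (1)$$

$$TW_k = \sum_{j=1}^{m} EW_j \cdot CT_{jk}^{Z_d}, \qquad k = 1, \ldots, l \quad (2)$$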
Methodology
In this section, a priority methodology is proposed for dealing with air-conditioning innovation, the main feature of which is considering multi-agent demand preferences and vague expressions.
Our methodology is three-fold, as shown in Fig 2. Firstly, the multi-agent demand preference information for air-conditioning is collected, and the preference information is transformed into fuzzy values based on fuzzy sets. Secondly, we obtain the classification of product demands and the demand weights of multi-dimensional satisfaction according to multi-agent demand preferences. Thirdly, according to the demand analysis principle of the QFD model, the product demands are mapped to engineering design and process design. The Kano model is used to identify multi-agent demands in the innovative design of air-conditioning: on the one hand, it can divide multi-agent demands scientifically and reasonably; on the other hand, it can help the product design department to effectively understand and control multi-agent demands for products.
In the Kano questionnaire, each demand is designed along two dimensions: "with" and "without". Under each dimension, there are five types of answers: "favorite", "necessary", "indifferent", "reluctant", and "disgusting". According to the two-dimensional attributes, the multi-agent demands are classified, so as to realize demand classification for air-conditioning. The corresponding demand classification judgment matrix is shown in Table 2. Let $CR = \{CR_1, \ldots, CR_n\}$ represent the multi-agent demands, in which $CR_i$ represents the demand of the i-th agent.
Here the multi-agent includes the market customer group (MC), the process design group (DT), and the product manufacturing group (MT).
Satisfaction function.
If a demand is an indifferent demand, multi-agent satisfaction and dissatisfaction will not change regardless of whether the company adds or removes the corresponding feature. Therefore, this paper does not analyze satisfaction calculation for indifferent demands.
4.1.2.1 Attractive demand satisfaction function. Attractive demand refers to the unexpected demand of customers. If air-conditioning has this function or feature, customer satisfaction can increase rapidly; if it does not, customer satisfaction will not decrease. If the i-th demand is an attractive demand, its satisfaction can be calculated as follows, where $S = \{S_1, \ldots, S_n\}$ represents the set of multi-agent satisfaction values, $S_i$ represents the multi-agent satisfaction for the i-th demand ($CR_i$), and $a_i^a$ and $b_i^a$ are the adjustment coefficients of the satisfaction function. The satisfaction and dissatisfaction of $CR_i$, represented by $CS_i$ and $DS_i$ respectively [43], can be calculated by Eqs (5) and (6).
Here, $M_i$ represents the number of respondents who classify the i-th demand as must-be; likewise, $O_i$, $A_i$, $I_i$, and $R_i$ represent the numbers of respondents classifying it as one-dimensional, attractive, indifferent, and reverse, respectively.
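Given these counts, Eqs (5) and (6) presumably follow the standard customer satisfaction and dissatisfaction coefficients of Berger et al.; the forms below are offered as a reconstruction under that assumption:

$$CS_i = \frac{A_i + O_i}{A_i + O_i + M_i + I_i} \quad (5), \qquad DS_i = -\,\frac{O_i + M_i}{A_i + O_i + M_i + I_i} \quad (6)$$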
$x_i^{*}$ is the i-th demand expectation after normalization [44], where $x_i$ is the actual evaluation value of the multi-agent for $CR_i$, $x_i^{Ie}$ is the minimum expectation, and $x_i^{Ae}$ is the maximum expectation.
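Given these definitions, the normalization is presumably the standard min-max form, written here as an assumption:

$$x_i^{*} = \frac{x_i - x_i^{Ie}}{x_i^{Ae} - x_i^{Ie}}$$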
One-dimensional demand satisfaction function.
One-dimensional demand refers to the functions and features that customers want air-conditioning to possess. The higher the realization degree of a one-dimensional demand, the greater the customer satisfaction; there is a positive correlation between realization degree and customer satisfaction. If the i-th demand is a one-dimensional demand, its satisfaction can be calculated as follows, where $a_i^o$ and $b_i^o$ are the adjustment coefficients of the one-dimensional demand satisfaction function, which can be calculated by Eq (9).
Must-be demand satisfaction function.
Must-be demand refers to the functions and features that customers think air-conditioning should possess. If air-conditioning has this function or feature, customer satisfaction will not be significantly increased; however, if it does not, customer satisfaction will be significantly reduced. If the i-th demand is a must-be demand, its satisfaction can be calculated as follows, where $a_i^m$ and $b_i^m$ are the adjustment coefficients of the must-be demand satisfaction function, which can be calculated by Eq (11).
Reverse demand satisfaction function.
Reverse demand refers to the functions and features that customers do not want air-conditioning to have. If air-conditioning has this function or feature, customer satisfaction will decrease: the greater the degree of realization, the greater the dissatisfaction. There is a negative correlation between reverse demand and customer satisfaction. If the i-th demand is a reverse demand, its satisfaction can be calculated as follows, where $a_i^r$ and $b_i^r$ are the adjustment coefficients of the reverse demand satisfaction function, which can be calculated by Eq (13).
Here, the demand expectation $\tilde{x}_i$ can be calculated by Eq (14).
Modification of satisfaction function.
In order to make the calculated customer satisfaction closer to the actual situation, we need to modify the satisfaction function. Tan et al. [45] propose a method to modify the satisfaction function.
Here, $AI_i^{*}$ is the adjustment coefficient, which can be calculated by Eqs (16) and (17).
Here, $k$ is the Kano factor and $AI_i$ is the initial adjustment coefficient of the satisfaction function. Based on this, we can obtain four types of satisfaction functions, for must-be, one-dimensional, reverse, and attractive demands, as shown in Table 3.
Ranking model of critical factors
4.2.1 QFD and fuzzy sets
4.2.1.1 QFD
The most important function of the QFD method is to transform customer demands into product manufacturing performances and determine the critical factors in the product manufacturing process [46]. The QFD method is widely used in the product design stage to accurately understand customer demands for products. It uses a series of product planning matrices, the house of quality, to decompose customer demands in four stages: product planning, process planning, part configuration, and production planning. Because this paper identifies and analyzes only the critical factors in the manufacturing process of air-conditioning, only the product planning and process planning stages are studied, as shown in Fig 4.
Fuzzy sets.
Some product demands are difficult to quantify. In such cases, linguistic descriptions are more in line with customers' psychological expression of product functional requirements. For example, when customers evaluate food, terms such as "fresh" and "not fresh" better express customer satisfaction. Because of the uncertainty and fuzziness of linguistic description, we adopt fuzzy sets to deal with it. Fuzzy sets, proposed by Professor Zadeh [47], have proved to be an important tool for effectively dealing with problems of ambiguity and uncertainty. This paper uses triangular fuzzy numbers to describe multi-agent demands.
[Table 3. Satisfaction function: columns KC, $a_i$, $b_i$, $S_i$, $SW_i$.]
Assume that $B$ is a fuzzy subset of a universe $U$. For any $x \in U$, there is a corresponding membership value $u(x) \in [0,1]$; $u(x)$ is called the membership function of $x$, that is, the fuzzy number. If $B$ is a triangular fuzzy number, $B = (b_1, b_2, b_3)$, its membership function can be calculated by Eq (18). Due to the complex manufacturing process of air-conditioning, in addition to multi-agent demands, design feasibility and process feasibility should also be considered in the innovative design of products. Therefore, the product demands in this paper include not only the customer demands of the market but also design demands and manufacturing demands. To ease reading, this paper summarizes the three agents, the market customer group (MC), the process design group (DT), and the product manufacturing group (MT), as an expert team. Due to the differences in knowledge background and requirement understanding among the expert teams, the evaluation of air-conditioning design is fuzzy and ambiguous. This paper uses fuzzy sets to evaluate the two-stage QFD model. The corresponding fuzzy evaluation values are shown in Table 4.
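Eq (18), referenced above, is presumably the standard triangular membership function for $B = (b_1, b_2, b_3)$:

$$u(x) = \begin{cases} \dfrac{x - b_1}{b_2 - b_1}, & b_1 \le x \le b_2, \\[6pt] \dfrac{b_3 - x}{b_3 - b_2}, & b_2 < x \le b_3, \\[6pt] 0, & \text{otherwise.} \end{cases} \qquad (18)$$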
Engineering property indexes ranking.
Let $D_{jg}^{Z_d}$ represent the autocorrelation between engineering property indexes $EP_j$ and $EP_g$ given by expert $Z_d$, $j \neq g \in [1, m]$. Let $E_{ij}^{Z_d}$ represent the correlation between multi-agent demand $CR_i$ and engineering property index $EP_j$ given by expert $Z_d$, $i \in [1, n]$. Here, $h_1$ represents the number of MC experts, $h_2$ the number of DT experts, and $h_3$ the number of MT experts. The average expert evaluation information can be obtained by Eq (19). Furthermore, we can obtain the correlation matrix ($CM$) between multi-agent demands and engineering property indexes and the autocorrelation matrix ($AM$) of engineering property indexes, as shown below.
$CW$ is the improved correlation between multi-agent demands and engineering property indexes.
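Eqs (19) and (21) are not shown above. Under the definitions given, Eq (19) is presumably a component-wise fuzzy mean over the $h_1 + h_2 + h_3$ experts, and Eq (21) plausibly folds the autocorrelation matrix into the raw correlation matrix, in the spirit of Wasserman's normalization; both forms below are assumptions:

$$\bar{E}_{ij} = \frac{1}{h_1 + h_2 + h_3} \sum_{d=1}^{h_1 + h_2 + h_3} E_{ij}^{Z_d} \quad (19), \qquad CW_{ij} = \sum_{g=1}^{m} CM_{ig}\, AM_{gj} \quad (21)$$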
According to Eqs (15) and (21), we can obtain the satisfaction function and the improved correlation matrix. Eq (22) is then used to obtain the importance of the j-th engineering property index and normalize it.
Technical property indexes ranking. Let $I_{jk}^{Z_d}$ represent the correlation between engineering property index $EP_j$ and technical property index $TP_k$ given by expert $Z_d$, $j \in [1, m]$, $k \in [1, l]$. Let $H_{kp}^{Z_d}$ represent the autocorrelation between technical property indexes $TP_k$ and $TP_p$ given by expert $Z_d$, $p \in [1, l]$. According to the calculation rules of Eqs (19)-(21), we can obtain the improved correlation ($CT$) between engineering property index $EP_j$ and technical property index $TP_k$, as follows.
The importance of the engineering property indexes obtained by Eq (23) is multiplied by the relationship matrix obtained by Eq (24) to obtain the importance of the technical property indexes, as shown in Eq (25).
Ranking model.
According to the preceding derivations, we can obtain the importance set of engineering property indexes, $EW^{*}$, and the importance set of technical property indexes, $TW^{*}$. According to the triangular fuzzy number calculation rules, $EW^{*}$ and $TW^{*}$ are still triangular fuzzy numbers, $EW^{*} = (EW^{*1}, EW^{*2}, EW^{*3})$ and $TW^{*} = (TW^{*1}, TW^{*2}, TW^{*3})$. Since fuzzy numbers cannot be compared numerically, we need to convert them into exact numbers. The comparison and sorting of fuzzy numbers requires the introduction of an α cut-set (that is, a confidence level), which is an important method for turning fuzzy numbers into exact numbers [48]. The fuzzy number is converted into an exact number under the α cut-set, which is calculated by Eqs (27) and (28).
$EW_{\alpha}^{*U}$ and $EW_{\alpha}^{*L}$ respectively represent the upper and lower bounds of the fuzzy number $EW^{*}$ under the α cut-set.
In order to effectively describe the ambiguity and uncertainty of the air-conditioning design process, a weighted modified average-level α cut-set defuzzification method is adopted. This method can effectively solve the ranking difficulty caused by the aggregation of multi-index triangular fuzzy numbers. The calculation is given by Eq (29).
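Eqs (27)-(29) are referenced but not shown. For a triangular fuzzy number $EW^{*} = (EW^{*1}, EW^{*2}, EW^{*3})$, the α cut-set bounds are presumably the standard ones, and the weighted average defuzzification plausibly combines them with an optimism weight $\lambda \in [0,1]$; both forms are assumptions:

$$EW_{\alpha}^{*L} = EW^{*1} + \alpha\,(EW^{*2} - EW^{*1}), \qquad EW_{\alpha}^{*U} = EW^{*3} - \alpha\,(EW^{*3} - EW^{*2}) \quad (27, 28)$$

$$QE_j = \lambda\, EW_{\alpha}^{*U} + (1 - \lambda)\, EW_{\alpha}^{*L} \quad (29)$$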
Similarly, we can use Eq (30) to calculate the importance of each technical property index and sort them, so as to identify the critical factors in the process of air-conditioning innovation design.
$QT_k$ is the importance of the k-th technical property index.
Example simulation
In this paper, the KFR air-conditioning of Gree Electric Appliances, Inc. of Zhuhai (hereafter Gree) is taken as the research object. The product structure is shown in Fig 5. The KFR air-conditioning was put on the market in 2018; it is now planned to transform or replace part of its functions and utility in order to improve air purification capacity during the COVID-19 pandemic. The company organized five experts from the process design department to elicit demands and design concepts according to product orders. Four customer demands, eight engineering property indexes, and fifteen technical property indexes were obtained, as shown in Table 5. According to the characteristics of the product demands, a Kano questionnaire was designed and administered online. A total of 200 questionnaires were collected, covering market customer group demands (MC), process design group demands (DT), and product manufacturing group demands (MT), with weight ratios of 0.5, 0.3, and 0.2. After removing invalid data, the classification of product demands and the average values of the evaluation information were obtained, as shown in Table 6.
Eqs (4)-(21) are combined to obtain the importance weights of the multi-agent demands, as shown in Table 7. According to the importance symbols shown in Table 3, the correlation evaluation information for the two QFD stages given by the experts was collected, as shown in Tables 8 and 9. In the first stage, the expert groups include the market customer group (MC), the process design group (DT), and the product manufacturing group (MT). Considering the limitations of the market customer group's knowledge background, in the second stage the expert groups include only the process design group (DT) and the product manufacturing group (MT).
According to the fuzzy sets, the evaluation information given by the experts is transformed into corresponding fuzzy numbers. Eq (19) is used to obtain the fuzzy mean values of the correlation matrix and the autocorrelation matrix for the multi-agent. In the first stage, the fuzzy correlation matrix (CM) and autocorrelation matrix (AM) are shown in Table 10.
According to Table 10, the fuzzy evaluation information values (CM and AM) given by the DT expert group are obtained. Eq (21) is used to obtain the improved correlation $CM^{*}$ between the multi-agent demands CR and the engineering property indexes EP. Then, Eq (22) is used to obtain the absolute importance of the engineering property indexes, EW, and the normalized importance $EW^{*}$ is obtained by Eq (23). Because the calculation process is identical for each group and space is limited, the paper gives only the absolute and normalized importance for the market customer group (MC) and the product manufacturing group (MT), as shown in Table 11; the specific calculation steps are not repeated here. According to Eq (27) and Table 11, the fuzzy numbers are transformed into exact numbers; the results are shown in Table 12. Eq (29) is used to take the weighted average of the interval numbers; we can thus obtain the importance of the engineering property indexes and sort them, with the importance mean of each index calculated by the mean method, as shown in Table 13. Because the market customer group is limited by its production background and process knowledge, it cannot effectively evaluate the correlation between the technical property indexes and the engineering property indexes. Therefore, in the second stage we collect only the evaluation information of the process design group (DT) and the product manufacturing group (MT). According to the correlation matrix and the autocorrelation matrix, combined with Eqs (24) and (25), and with the importance mean of each index calculated by the mean method, we obtain the importance ranking of the technical property indexes for the multi-agent, as shown in Table 14.
To further facilitate the analysis, Tables 13 and 14 are visualized in Figs 6 and 7.
The quantitative analysis of the engineering property indexes shows that Gree should first improve or innovate the product operation efficiency ($EP_5$), which the multi-agent is most concerned about. The engineering property indexes ranked second and third are motor units ($EP_1$) and control settings ($EP_7$). Further quantitative analysis of the technical property indexes shows that frequency conversion design ($TP_{12}$), filter screen design ($TP_{13}$), operating range design ($TP_5$), air inlet/outlet design ($TP_1$), and low/high voltage protection design ($TP_9$) rank in the top five of the overall technical indexes. In order to improve air purification capacity during the COVID-19 pandemic, these five aspects are given priority in the transformation or replacement of the KFR air-conditioning.
Conclusion
This paper studies the KFR air-conditioning innovation of Gree from a multi-agent perspective in a fuzzy environment and proposes a product innovation index ranking method that considers multi-agent demand preferences. The resulting engineering property indexes and technical property indexes should be given priority in the next stage of product innovation.
Firstly, the Kano model is used to classify multi-agent demands. Secondly, the QFD model is used to decompose multi-agent demands into engineering property indexes (product planning stage) and technical property indexes (process planning stage). Then, based on fuzzy sets, we obtain the fuzzy evaluation information of the expert groups (market customer group, process design group, product manufacturing group), and the α cut-set is used to transform the fuzzy evaluation information into exact information. Finally, the model is used to determine which engineering and technical property indexes of the KFR air-conditioning should be focused on in the next improvement or innovation. The proposed model fully considers the demand preferences of multiple agents and the fuzzy environment of the actual innovation process, and it analyzes the two stages of product innovation.
The main contributions of this paper lie in two aspects. On the one hand, the paper considers the impact of both market demands and technological progress on product innovation and uses fuzzy sets to collect multi-agent evaluation information, reducing the loss of evaluation information. On the other hand, through two-stage continuous decomposition, product demands are gradually decomposed into product planning designs and manufacturing process designs (that is, from product demands to product designs to process designs), which refines the product design process and makes the design tasks of the design department and the process department clearer. The method is clear and easy to operate, which lays a foundation for research on the critical factors of innovation for the same type of air-conditioning.
Products that meet market demands are the fundamental goal of an enterprise's production. Grasping market demands and technological innovation trends and adjusting product functions and structures, thereby extending the product life cycle, is a very worthwhile issue for enterprises to study. This paper focuses on determining the critical factors of air-conditioning innovation during the COVID-19 pandemic. The proposed methodology has demonstrated high flexibility and shows how decision making based on uncertain information can be improved. The method can be widely used in the innovation process of industrial and manufactured products, but specific applications require specific analysis. In addition, because the calculations in this paper are complex, future work should explore how to use intelligent algorithms to solve the model.
"year": 2021,
"sha1": "12d5dfb07ba78515acddb9c660b5f5e3a078de73",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0255051&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "14567f48ae01e092f38102d8d0799f509187b7ed",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparison of Chest Computed Tomography Between the Two Waves of Coronavirus Disease 2019 in Belgium Using Artificial Intelligence
Background: In this study, we aimed to compare the two outbreaks of coronavirus disease 2019 (COVID-19) in Belgium in their tomographic and biological-clinical aspects using artificial intelligence (AI).
Methodology: We performed an observational retrospective study. Adult patients who were symptomatic in the first seven days of COVID-19 infection, diagnosed by chest computed tomography (CT) and/or reverse transcription-polymerase chain reaction, were included. The first wave of the pandemic lasted from March 25, 2020, to May 25, 2020, and the second wave from October 7, 2020, to December 7, 2020. For each wave, two subgroups were defined depending on whether respiratory failure occurred during the course of the disease. The quantitative estimation of COVID-19 lung lesions was performed by AI, radiologists, and radiology residents. The chest CT severity score was calculated by AI.
Results: In the 202 patients included in this study, we found statistically significant differences for obesity, hypertension, and asthma, predominantly in the second wave. Moreover, a mixed (central and peripheral) distribution of pulmonary lesions was noted in the second wave, but no differences were noted regarding mortality, respiratory failure, complications, and other radiological and biological elements. The chest CT severity score was among the risk factors for mortality and respiratory failure. There was mild agreement between AI and visual evaluation of pulmonary lesion extension (K = 0.4).
Conclusions: Between March and December 2020, in our cohort, we did not record significant changes between the two waves for the majority of the parameters analyzed. AI can reduce the experience and performance gap among radiologists and help establish a hospitalization criterion.
Introduction
The first cases of coronavirus disease 2019 (COVID-19) were described in the city of Wuhan in December 2019, when the global pandemic began. In Belgium, the first wave occurred between March and May 2020, and the second wave between October 2020 and January 2021 [1]. Although severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mutates slowly, 12,000 mutations have been described, of which the most frequent is D614G [2]. This mutation, studied in vitro, increases the transmissibility of the disease [3]. These adaptive mutations make it difficult to develop effective drugs and vaccines [4]. The delta variant, first found in India and then spread to England and the rest of the world, appears to carry twice the risk of hospitalization and has demonstrated moderate resistance to available vaccines [5,6]. Another recent variant is omicron. The second wave in Belgium was characterized by a higher peak in the number of admissions to intensive care units (ICUs), but mortality remained lower compared to the first wave [1]. The radiological semiology of COVID-19 pneumopathy has been well codified: we can distinguish typical, indeterminate, and atypical signs. Among the typical signs are ground-glass opacities with peripheral distribution, with or without consolidation or crazy paving. There is a correlation between radiological elements, clinical data, and the temporal evolution of the disease [7]. Additionally, a significant correlation has been noted between the extension of lung disease and patient mortality [8]. The percentage of affected parenchyma can be established by quantitative methods using software, by visual qualitative methods, or by semi-quantitative methods using scores [9,10]. The objective of the study is to compare the two outbreaks of COVID-19 in Belgium in tomographic and biological-clinical aspects using artificial intelligence (AI).
Materials And Methods
In this observational retrospective study, we compared the two waves of COVID-19 at the Centre Hospitalier Universitaire (CHU) Brugmann in Brussels. The periods were established based on the epidemiological curves published by Sciensano. Accordingly, the first wave lasted from March 25, 2020, to May 25, 2020, and the second wave lasted from October 7, 2020, to December 7, 2020. The cases were obtained using the Picture Archiving and Communication System. We included adult patients symptomatic for seven days or less with COVID-19 infection diagnosed using computed tomography and/or reverse transcription-polymerase chain reaction (RT-PCR). We excluded adult patients symptomatic for more than seven days, pregnant women, and minors. Subsequently, for each wave, two subgroups were defined depending on whether respiratory failure occurred during the disease. The expected sample size was 101 patients per group.
For the first wave, the sampling was done randomly among 814 chest CTs without contrast injection and 100 with contrast injection. For the second wave, 711 chest CTs without contrast injection and 100 with contrast injection were included. The clinical and biological data were recorded from the patient records of CHU Brugmann. Qualitative and quantitative lung parenchyma damage on chest CT was established based on examination reports and by the experimenters. The quantitative analysis was performed using the Pneumonia Analysis application of the Syngo.Via software (Siemens Healthcare, Erlangen, Germany), with automatic contouring of the opacities based on a density threshold value in Hounsfield units (HU) (Figure 1). The chest CT severity score [10] was used and calculated by AI (Table 1).
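As a concrete illustration of the chest CT severity score, the sketch below assumes the commonly used 25-point scheme, in which each of the five lobes is scored 0-5 according to its percentage of involvement; the cutoffs used here are the usual ones and may differ from those defined in Table 1.

```python
# Hedged sketch of a 25-point chest CT severity score, assuming the
# common per-lobe scheme; the exact cutoffs in Table 1 may differ.
def lobe_score(pct: float) -> int:
    """Map a lobe's percentage involvement to a 0-5 score."""
    if pct <= 0:
        return 0
    if pct < 5:
        return 1
    if pct < 25:
        return 2
    if pct < 50:
        return 3
    if pct <= 75:
        return 4
    return 5

def ct_severity_score(lobe_pcts: list) -> int:
    """Sum per-lobe scores over the five lobes (maximum 25)."""
    assert len(lobe_pcts) == 5, "expect five lobes"
    return sum(lobe_score(p) for p in lobe_pcts)

# Example: AI-quantified involvement per lobe (hypothetical values).
print(ct_severity_score([10.0, 3.5, 27.0, 55.0, 0.0]))  # 2+1+3+4+0 = 10
```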
FIGURE 1: Quantitative analysis using AI.
A: Ground-glass opacities tracked by AI. B: Volume-rendering reconstruction showing lung involvement.
Regarding the CT elements, those considered included the presence or absence of ground-glass opacities, their distribution (central, peripheral, mixed), consolidations, crazy paving, the spider web sign (Figure 2), pleural effusions, and adenopathy. We compared the pulmonary damage determined by AI to the percentage of damage visually determined by a radiologist or radiology resident. A low-dose acquisition protocol was performed using Somatom Definition AS/AS+ and Somatom Drive scanners (Siemens Healthcare, Erlangen, Germany).
FIGURE 2: Spider web sign.
CT coronal view showing ground-glass opacities, consolidation, and spider web sign (arrow).
CT: computed tomography
The biological data were recorded from the first available blood sample after the diagnosis of COVID-19 but not after the seventh day from the onset of symptoms.
Univariate analysis was performed by comparing the two groups and subgroups within and between waves, using the t-test for continuous quantitative variables and Fisher's exact test or the chi-square test for discrete variables. The kappa test was performed to assess intermethod agreement between AI and visual evaluation of pulmonary damage. Binary logistic regression was performed for groups and subgroups with mortality and respiratory failure as dependent variables and the different clinical and biological elements collected during the study as predictive variables.
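For readers who want to reproduce this kind of analysis, the sketch below strings together the named tests using standard SciPy and scikit-learn routines. All arrays and variable names are hypothetical placeholders generated at random, not the study's records.

```python
# Illustrative sketch of the statistical workflow described above,
# on randomly generated placeholder data.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# t-test for a continuous variable (e.g., age) between waves.
wave1_age = rng.normal(65, 15, 101)
wave2_age = rng.normal(63, 15, 101)
t_stat, p_t = stats.ttest_ind(wave1_age, wave2_age)

# Chi-square and Fisher's exact test for a 2x2 table (e.g., obesity by wave).
table = np.array([[30, 71], [38, 63]])
chi2, p_chi, _, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

# Kappa agreement between AI and observer extension categories.
ai_cat = rng.integers(0, 4, 202)
obs_cat = np.clip(ai_cat + rng.integers(-1, 2, 202), 0, 3)
kappa = cohen_kappa_score(ai_cat, obs_cat)

# Binary logistic regression: mortality vs. predictors
# (e.g., CT severity score, age, CRP; columns are placeholders).
X = rng.normal(size=(202, 3))
y = (X[:, 0] + rng.normal(size=202) > 0).astype(int)
model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # OR per predictor

print(p_t, p_chi, p_fisher, kappa, odds_ratios)
```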
Results
In this study, 202 patients were included, 101 per wave. The subgroups consisted of 17 patients with respiratory failure in the first wave and 14 in the second. Mortality was 23% and 27% for the first and second waves, respectively. Among patients with respiratory failure, mortality rose to 59% in the first wave and 78.5% in the second. In the first wave, the average percentage of pulmonary damage estimated by the observer was 26% (SD = 19) and by AI was 19.7% (SD = 20); the average chest CT severity score was 10/25 (SD = 5). In the second wave, the corresponding values were 34.5% (SD = 22), 23.3% (SD = 21), and 11/25 (SD = 5), respectively. The percentage of hospitalization in the COVID unit and the ICU was 77% and 13% for the first wave and 78% and 20.6% for the second wave, respectively.
The intermodality concordance (observer-AI) was low (K = 0.4). Five patients required invasive ventilation in the first wave and four in the second. Between the first and second waves, the parameters that significantly differed included high blood pressure (p = 0.046), obesity (p = 0.038), and asthma (p = 0.09), which were predominant in the second wave. Among the subgroups of the first wave, significant differences were seen in crazy paving (p = 0.…; Table 2). In the first wave, the risk factors established by logistic regression for mortality included respiratory failure (odds ratio (OR) = 6.7, p = 0.003) and the chest CT severity score (OR = 1.2, p = 0.003); for respiratory failure, only the chest CT severity score was a risk factor (OR = 1.25, p = 0.0001). In the second wave, risk factors for mortality included respiratory failure (OR = 5.9, p = 0.022) and complications (OR = 20.8, p = 0.006); for respiratory failure, only diabetes (OR = 12.7, p = 0.002) was a risk factor. Lung damage was characterized by ground-glass opacities in 98% and 99% of patients per wave, respectively, with an average involvement of four lobes. A mixed distribution predominated in the second wave (p = 0.017). The frequency of the other CT elements is shown in Table 2. The mean values of the different biological parameters in our cohort were hemoglobin 12.9 g/dL (SD = 2.2), C-reactive protein 89.5 mg/dL (SD = 78.4), mild lymphopenia at 1,093 (SD = 654), normal neutrophil and platelet counts, increased D-dimer at 1,752 ng/mL (SD = 3,416), saturation slightly decreased to 91% (SD = 7), and an average oxygen requirement of 2.4 L (SD = 3.6).
The most frequent complications were respiratory failure (17/101 vs. 14/101), cardiac decompensation (11/101 vs. 8/101), and bacterial infections (8/101 vs. 13/101). Among the least frequent were Kawasaki-like manifestations (one in the first wave), encephalopathies (two in the first wave), pulmonary embolism (two vs. two), and pericarditis (one in the second wave). Figures 3 and 4 show the differences in the estimated percentage of pulmonary damage between the observers and AI (negative values correspond to underestimates and positive values to overestimates). An error interval greater than 10% was found in 40% of cases during both waves. The maximum overestimate and underestimate values were 39.2% and -23% for the first wave and 33.48% and -25.08% for the second wave, respectively. For 8% of the patients in our cohort (14/181), a discrepancy between the observer and AI implied a false-positive hospitalization criterion. For 21 patients, parenchymal disease quantification did not appear in the examination report.
FIGURE 3: Difference between visual estimation and AI during the first wave.
AI: artificial intelligence
Discussion
Ground-glass hyperdensities represent an averaging by the CT system of attenuation from structures smaller than the spatial resolution of the system. They may originate from the alveolar, interstitial, or capillary compartment, which explains their low specificity [11]. In our cohort, there was a significant difference in the distribution of lesions between the two waves (a mixed distribution predominating in the second wave). The presence of central ground-glass opacities can be explained by bronchovascular involvement, by active heart problems (although cardiac complications did not differ between the waves), and by coinfection with other respiratory viruses reported in the literature. Davis et al. [12] have shown a variable coinfection frequency (16.8-26.8%), which can explain this difference. The second wave occurred in autumn, when multiple respiratory viruses are endemic in our region.
Three stages of radiological evolution have been described: the first, called the rapid progressive period, lasting from one to seven days after the onset of symptoms; the second, called the advanced period (8-14 days), when the pulmonary damage is most severe; and the third, after the 14th day, when the pulmonary damage begins to decrease [13].
In this study, we investigated the clinical and radiological stages of the first and second waves and the risk factors, including the presence of respiratory failure, complications, diabetes, the extent of radiological involvement, and the chest CT severity score. The biology during the initial period remained mildly inflammatory, characterized by a higher-than-normal level of D-dimers and moderate lymphopenia. This biological presentation was constant during both waves, highlighting the unpredictability of the disease, which evolved rapidly in the second phase without any biological marker able to serve as an initial predictor of its decline. Risk factors for a poor prognosis include age, male sex, heart disease, chronic pneumonia, the presence of two or more comorbidities, a high Sequential Organ Failure Assessment score, and obesity [14][15][16]. In our cohort, we found similarity between the two waves despite a significant difference in some comorbidities associated with a poor prognosis (asthma and obesity in the second wave).
The criteria for hospitalization are multiple and primarily clinico-biological [17,18], although severe impairment on imaging remains a criterion used in some institutions [19]. At CHU Brugmann, a parenchymal involvement threshold of 50% is used as a criterion for hospitalization. AI can be used at different levels of COVID-19 management in radiology [20]. The most common applications include lesion detection, quantitative estimation, and differential diagnosis with other lung pathologies [21].
In our cohort, the agreement between the observers and AI was low, with an error greater than 10% in approximately 40% of cases. Moreover, in 8% of cases, this discrepancy resulted in a false-positive hospitalization criterion.
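To make the practical impact of such discrepancies concrete, the following minimal Python sketch (our illustration, not code from the study) compares paired observer and AI estimates of pulmonary involvement, flags the >10% error interval used above, and checks whether a disagreement flips the 50% hospitalization criterion used at CHU Brugmann; the case values are hypothetical.

```python
# Hypothetical illustration of the observer-vs-AI comparison described above.
# Each pair is (observer_estimate_pct, ai_estimate_pct) of lung involvement.
HOSPITALIZATION_THRESHOLD = 50.0  # parenchymal involvement criterion (%)

cases = [(45.0, 55.0), (60.0, 52.0), (30.0, 28.0), (48.0, 39.2)]  # made-up data

for i, (observer, ai) in enumerate(cases, start=1):
    diff = ai - observer           # positive = AI overestimates vs. the observer
    large_error = abs(diff) > 10.0 # the >10% error interval used in the study
    # A "false-positive criterion" arises when AI crosses the threshold
    # but the observer does not (or vice versa).
    flipped = (observer >= HOSPITALIZATION_THRESHOLD) != (ai >= HOSPITALIZATION_THRESHOLD)
    print(f"case {i}: diff = {diff:+.1f}%, error>10%: {large_error}, "
          f"criterion flipped: {flipped}")
```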
Given the utility of the chest CT severity score as a predictor for respiratory failure and mortality [8], AI can be a useful tool for the estimation of pulmonary involvement and calculation of the chest CT severity score.
Wawina-Bokalanga et al. analyzed the genome of SARS-CoV-2 and its evolution during the first wave in Belgium, identifying more than 42 different SARS-CoV-2 lineages [22]. The first variants of concern, alpha and beta, likely had a significant impact in Europe according to the European Centre for Disease Prevention and Control in September 2020 [23]; according to the January 21, 2021, report of the Belgian genomic monitoring group for SARS-CoV-2, variants of concern represented 10-15% of cases [24].
Our study did not show significant changes in patient status during the initial period of the disease, whether radiological, clinical, or epidemiological, nor statistical differences in outcomes (mortality, respiratory failure, and complications).
A limitation of the study is its sample size. This is a retrospective monocentric study, and we do not know the genotype of the SARS-CoV-2 that infected the patients because sequencing was not performed at the time in our institution.
Conclusions
In our cohort, between March and December 2020, for the majority of the parameters analyzed, we did not record significant changes between the two waves, either radiologically or clinico-biologically. This trend suggests that the mutations in progress may not have been more virulent, at least in the time window explored. In daily practice, AI is a useful tool for estimating pulmonary damage from COVID-19 pneumonia and can serve as one of the hospitalization criteria in an environment where hospital beds are limited.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Comité d'Éthique Hospitalier du CHU Brugmann issued approval CE 2021/35. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-02-16T16:04:53.044Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "3b65bed7c77867f6fb1fdf2fa894e6d79e2ff12f",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/81435-comparison-of-chest-computed-tomography-between-the-two-waves-of-coronavirus-disease-2019-in-belgium-using-artificial-intelligence.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e4eee6d4feef3b90d4a8251e5c394893f5eed63",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222211632 | pes2o/s2orc | v3-fos-license | A Study on Improving Secure Routing Performance Using Trust Model in MANET
Introduction
MANET is a network consisting of only mobile nodes without a fixed infrastructure. It does not require a wired network, access points, or base stations in the process of configuring the network, and it can be constructed quickly and at low cost because there is no restriction on host movement [1][2][3]. The utilization range of MANET has expanded to situations where network configuration is difficult, driven by the rapid development of wireless networks and the spread of mobile terminals. The mobile nodes composing a MANET not only perform the transmission and reception of data that an existing host performs but also act as routers. In route setting, a node can support multipathing to neighbor nodes and perform routing dynamically because the mobile nodes act as routers [4,5]. However, MANET is exposed to many security vulnerabilities due to the nature of its dynamic topology and the wireless network created by the movement of nodes. In order to solve this problem, various routing techniques for stable data transmission and reception between nodes have been studied. Among the existing routing protocols, the demand-based AODV protocol has shown excellent performance under various mobility patterns, densities, and traffic loads. However, it has the problem that the number of control packets increases in order to maintain the route to the destination. Various routing attacks by malicious nodes have also been studied. In order to cope with such routing attacks, techniques that use the reliability of the mobile nodes participating in routing, or that involve authentication nodes in routing by issuing certificates to mobile nodes, have been studied. In particular, if packet loss or route connection failure occurs due to the various attacks existing on a route, it takes a long time to reconstruct a new path from the source node to the destination node, the number of control packets increases, and the resulting overhead also increases. Therefore, the study of safer and more efficient secure routing techniques is necessary in order to increase the reliability of MANET [6,7].
In this paper, we propose a trust model-based secure routing technique to improve the efficiency of trust evaluation and the secure routing performance problems found in existing studies. This technique consists of a trust evaluation step and a secure routing step. In the trust evaluation step, a hierarchical structure is applied to increase the efficiency of the reliability measurement for each node. In the secure routing step, a secure communication function through reliability-based routing and key exchange is provided in order to improve secure routing performance. The main functions of the proposed technique are secure data communication through secure routing based on the reliability evaluation of nodes, and the detection of anomalous nodes through traffic and Destination Sequence Number (DSN) checks. The proposed technique uses a cluster hierarchy to improve the efficiency of the reliability evaluation. The reliability evaluation is performed by measuring the packet forwarding rate of the neighbor nodes of all nodes. The trust management node manages the measured reliability of the mobile nodes in each cluster, and the measured reliability is used to set a route between the source and destination nodes. For secure data communication, key generation and exchange between nodes without the help of a Certification Authority (CA) is applied. In this way, the key generation process is simplified, and the processing speed can be improved while protecting the communication data. The secure routing performance can be improved by excluding malicious nodes from participating in the network. Also, the traffic on the route is checked to detect anomalous nodes on the path. If the traffic on the route is higher than the average traffic in the cluster, the technique checks the DSN of the intermediate nodes existing between the source node and the destination node and detects an anomalous node that transmits a packet using a wrong DSN or a node ID that does not exist. The improved performance of the trust-based model secure routing technique proposed in this paper is confirmed by minimizing routing inefficiency and the number of control packets through performance analysis experiments against SAODV, based on the proposed simulation parameters and performance metrics. The composition of this paper is as follows. In Section 2, we discuss the kinds of routing attacks and secure routing techniques existing in MANET. In Section 3, we describe the trust-based model secure routing technique proposed in this paper. In Section 4, we verify the performance of the proposed technique through experiments, and we finally conclude in Section 5.
Routing Protocols.
The routing protocols in MANET can be classified into table-driven routing protocols using the Bellman-Ford algorithm, on-demand routing protocols, and hybrid methods that combine the advantages of the table-driven and on-demand routing protocols [8][9][10]. The table-driven routing protocol is a method that maintains the latest network information by storing the entire path for all nodes in each entry of a table and broadcasting routing information periodically or when the network topology changes. When there is a connection request due to traffic occurrence, it has the benefit that connection setup is fast because the path information is already available. However, it has the problems that the broadcasting overhead of the control messages for path management is large and that resources are consumed in discovering paths that are not used, owing to frequent topology changes. Therefore, research aims to minimize the number of control messages. Routing protocols of this type include Destination Sequenced Distance Vector (DSDV), Wireless Routing Protocol (WRP), and Source-Tree Adaptive Routing (STAR) [11][12][13].
This routing method can be divided into two different methods according to the way data is transferred. First, in the source routing method, a transmitting node calculates the routing information for transmitting data, and the data, including the routing information in the header, is transmitted to the destination. Link Quality Source Routing (LQSR) is a typical protocol. The intermediate node only refers to the information in the header and delivers the data to the next node, but the payload of the frame is reduced. Second, in the hop-by-hop routing method, all nodes have the information on the next hop for delivering data to the destination. The intermediate node delivers the frame to the next hop in its routing information by referring to the destination information in the header.
There is less overhead because it is a simple method. However, loops can occur in the step of setting routing metrics, so a method to avoid this is necessary. Table 1 shows the main characteristics of the two routing techniques. The on-demand routing protocol does not always maintain the full path for all mobile nodes; instead, the path acquisition procedure is performed when data transmission is required.
This means that a routing table to a destination node is generated after performing a path search process only when data transmission is required. Therefore, there is a disadvantage in that the delay time for path discovery is increased. However, there is an advantage in that an accurate path can be set because the mobility of the mobile nodes can be reflected immediately when the path is set. In addition, if the path to the destination node cannot be found, problems such as a broadcast storm can be caused because messages requesting the path are generated continuously until the path is found.
Thus, on-demand routing protocols focus on finding the optimal path and minimizing the delay time of the path search. These routing protocols include Dynamic Source Routing (DSR), Ad Hoc On-Demand Distance Vector (AODV), and Dynamic MANET On-Demand Routing (DYMO) [14][15][16].
Hybrid routing protocols mix the proactive and reactive methods. They perform mixed routing in which the proactive method is used for nodes in environments where there is little change in topology due to small movements of nodes, and the reactive method is used where the nodes move frequently. This can achieve efficient routing since it uses a mixture of the advantages of the existing methods, but it is not easy to implement and has a complicated operation. Table 2 shows the characteristics of MANET routing techniques.
Energy-Aware AODV (EAODV) utilizes backup routing techniques based on AODV. Since this technique sets a path in consideration of the remaining energy of a node, it can reduce link errors due to energy exhaustion, and the network can be maintained for a long time. Also, by setting a threshold energy level for nodes, if the energy level of a node becomes less than the threshold, the data loss and transmission delay that occur in cases of path change and path resetting can be reduced by transmitting error packets to the source node [17].
PS-AODV is a technique for determining routing based on the load situation between nodes. A node first checks its current load before forwarding the RREQ packet for route discovery to its neighbor nodes. The RREQ packet is discarded if the node load is very high. Subsequently, if the load of the node decreases, the next RREQ packet is forwarded again. In this way, load-aware routing is achieved, because the higher the load of a node is, the more energy it consumes [18].
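As an illustration of the forwarding decision just described, the sketch below implements a load-gated RREQ handler in Python; the load measure (queue occupancy) and the threshold value are our assumptions, since the paper does not specify them.

```python
# Hypothetical sketch of PS-AODV's load-aware RREQ handling (not the authors' code).
from dataclasses import dataclass, field

LOAD_THRESHOLD = 0.8  # assumed fraction of queue capacity considered "very high"

@dataclass
class Node:
    queue_capacity: int
    tx_queue: list = field(default_factory=list)

    def broadcast(self, rreq):
        # Stand-in for the real radio broadcast to neighbor nodes.
        print(f"forwarding {rreq}")
        return rreq

def handle_rreq(node: Node, rreq: str):
    """Forward an RREQ only when the node's current load permits it."""
    load = len(node.tx_queue) / node.queue_capacity  # assumed load measure
    if load >= LOAD_THRESHOLD:
        return None              # discard the RREQ while heavily loaded
    return node.broadcast(rreq)  # later RREQs are forwarded once load drops

n = Node(queue_capacity=10, tx_queue=["pkt"] * 9)
handle_rreq(n, "RREQ-1")        # discarded: load 0.9 >= 0.8
n.tx_queue = n.tx_queue[:2]
handle_rreq(n, "RREQ-2")        # forwarded: load 0.2
```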
Routing Attacks.
MANET is vulnerable to various routing attacks because its structure makes attacks such as packet eavesdropping or tapping easy, owing to the nature of the wireless environment, and because routing and data transmission are performed in a hop-by-hop manner by the mobile nodes. Routing attacks can be divided into passive attacks, which can cause a lot of damage through the eavesdropping or tapping of packets, and active attacks, which prevent routing or make packet transmission impossible by inserting, discarding, or modifying incorrect information in the routing process [19][20][21][22]. Typical routing attacks include the black hole attack, wormhole attack, Jellyfish attack, and Sybil attack. The black hole attack is an attack in which an attack node changes the route by sending incorrect routing information to the source node. In other words, it intercepts all packets to be transmitted to the destination node by analyzing the RREQ packet for route discovery and replying to the source node as if the shortest route to the destination node were through itself [23][24][25]. The wormhole attack works in two ways. One is to eavesdrop on data packets: two attack nodes pretend to be close neighbor nodes so that the route formed through the two nodes appears optimal. The other is to deplete the energy of target nodes by including them in many routes [26,27].
The Jellyfish attack is an attack that interrupts data transmission by delaying the transmission of data packets or discarding them, after the attack node normally transmits the RREQ or RREP packets for route discovery and the route through itself is set [28,29]. The Sybil attack is an attack in which the attack node generates multiple IDs and makes other nodes recognize it as multiple identities. It is very threatening to routing methods using geographic information.
The Jamming attack is a type of denial-of-service attack that is detrimental to the reliability of wireless communication. This attack interferes with communication between nodes and causes data transmission failure by transmitting meaningless signals on the corresponding wireless channel. This leads to continuous attempts at message retransmission by the nodes trying to recover the failed path and consumes a lot of energy on each node. As a result, in a wireless sensor network composed of sensor nodes with limited power, it is an important issue to apply a routing technique that efficiently and effectively considers energy in the defence against Jamming attacks [30].
Secure Routing Method.
Secure Ad Hoc On-Demand Distance Vector (SAODV), a typical secure routing technique in MANET, uses digital signatures for RREQ and RREP authentication and authenticates the hop count using hash chains [31]. First, a maximum number of hops is set, and a one-way hash chain with a length one greater than the number of hops is created. Then, the RREQ transformed by the hash function is transmitted. The nodes receiving the RREQ authenticate the RREQ packet and, if it is correct, update and transmit the RREQ in the same way. In this way, a secure route is set through a signature check on RREQ and RREP.
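The hash-chain mechanism can be sketched as follows (a simplified Python illustration, not the SAODV reference implementation): the originator publishes the hop count together with a hash value and a top hash, and each intermediate node can verify that the hop count was not maliciously decreased before applying the hash once more and forwarding.

```python
# Simplified sketch of SAODV-style hash-chain hop-count protection.
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def h_n(x: bytes, n: int) -> bytes:
    for _ in range(n):
        x = h(x)
    return x

# Originator: choose a random seed and publish Top_Hash = h^max_hops(seed).
max_hops = 10
seed = os.urandom(32)
rreq = {"hop_count": 0, "hash": seed, "top_hash": h_n(seed, max_hops)}

def verify_and_forward(rreq: dict) -> bool:
    """Check that the claimed hop count matches the hash chain, then advance it."""
    ok = h_n(rreq["hash"], max_hops - rreq["hop_count"]) == rreq["top_hash"]
    if ok:
        rreq["hash"] = h(rreq["hash"])   # apply the hash once before forwarding
        rreq["hop_count"] += 1           # a node cannot decrease this undetected
    return ok

print(verify_and_forward(rreq))  # True at each honest hop
rreq["hop_count"] -= 1           # a malicious decrement...
print(verify_and_forward(rreq))  # ...fails verification: False
```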
Secure Energy-Efficient Routing (SEER) authenticates data using a one-way hash chain and uses a shared secret key between the mobile node and the base station to improve confidentiality [32]. This technique creates a tree rooted at the base station and initializes the one-way hash chain. Then, if a mobile node detects an event through its neighbor nodes, the data can be transmitted to the base station through the selected intermediate nodes. Each node uses the single one-way hash chain that it manages in order to transmit data securely to the base station.
The feedback-based secure routing protocol (FBSR) is an energy-efficiency-oriented routing protocol using evaluation functions [33]. This technique provides security by using a one-way hash function for authentication at the MAC layer. The evaluation function uses a combination of energy level and distance, and the energy level is used in a threshold evaluation function. This technique provides two methods to prevent routing attacks. First, the feedback from the neighbor nodes is signed using a one-way hash chain. The second is to utilize feedback to the base station in order to distinguish attack nodes [34]. The Ariadne technique is a DSR-based secure routing technique and uses authentication based on MACs and shared keys. The source node creates a MAC value using a shared secret key in order to search the route to the destination node and includes it in the RREQ. The destination node receiving it authenticates the RREQ packet and transmits the RREP message, which is then authenticated by the source node. Through this process, the source node can set up a secure route with the destination node [35][36][37].
Trust-Based Routing Protocol.
In MANET, secure routing protocols have been studied in various ways that utilize key management, encryption, or continuous monitoring of neighbors. However, most of these methods have the disadvantage that they are too costly for secure routing and are not well suited to MANET. Therefore, the structures of various trust-based secure routing protocols are discussed. The trust-based AODV routing protocol is a technique for isolating malicious nodes and applies public keys [38].
It has the disadvantage that route path discovery is delayed considerably because it does not allow intermediate nodes on the path to transmit RREP packets. The trust-embedded AODV (T-AODV) technique is an extension of the trust-based AODV technique in which the reliability is calculated in a distributed manner and updated [39]. This update is performed only when malicious nodes send erroneous information. Each node consumes more memory because it scans and maintains the table periodically. This technique assumes that all nodes have the same frequency range. In [39], it is proposed that an intermediate node plays the role of a trust gateway maintaining the trust level in order to avoid malicious nodes. Each node monitors its neighbors and maintains their reliability directly. The source node calculates the optimal path by using this trust information. The reliability calculation is based on the forwarding behavior of nodes. The trusted gateway node has to consume a lot of energy and should be less mobile. In TAODV, the reliability is determined by the opinion used in subjective logic. Other nodes increase the opinion if a node behaves normally; otherwise, they decrease it. The nodes authenticate each other by verifying their certificates.
This protocol cannot detect internal attacks in which malicious nodes refuse packet forwarding. Trust-Based Minimum Cost Aware (TMCQA) proposes a technique for efficient data collection on the network. This technique uses machine learning to evaluate the trust of data reporters, and a selection strategy for an optimized data reporter based on three key evaluations is used [40]. Trust Detection-Based Secured Routing (TDSR) uses sensor nodes to evaluate the trust of intermediate nodes for a secure path between a source node and a destination node. The TDSR technique has the advantage of not affecting the network lifetime, by using node selection and path discovery that consider energy [41].
The Trust-Based Model Secure Routing Technique
The trust-based model secure routing technique proposed in this paper uses a cluster structure for reliability evaluation, management, and secure routing. A trust management node and trust agent nodes are used for the reliability evaluation and management of nodes. The trust management node is responsible for managing the reliability of the nodes in each cluster and providing this information. A trust agent node collects the reliability of its neighbor nodes while supporting the trust management node.
The trust-based model secure routing technique proposed in this paper consists of three modules: a trust management module, a secure path module, and a secure data communication module. First, the trust management module stores the reliability values of the nodes collected by the trust agents in each cluster and periodically updates the neighbor trust management nodes and the reliability information. The reliability measurement for nodes is based on the traffic received from the neighbor nodes and checks whether the traffic consists of packets generated by the neighbor nodes or forwarded packets. The average reliability value for the nodes in the cluster is calculated periodically. Second, the secure path module performs secure routing based on the measured reliability when the path is set from the source node to the destination node. For setting the secure path between the source node and the destination node, the reliability of each node and the average reliability value of the cluster are reflected. It also detects anomalous nodes based on traffic measurements on the set path. Third, the secure data communication module performs data communication after a key exchange between the source node and the destination node for secure data communication. In particular, this key exchange can provide integrity and nonrepudiation as a technique for providing the security functions of a routing protocol without CA assistance. It is possible to perform a more rapid authentication process and to solve the certificate management problem because there is no certificate issuance process from a CA. Figure 1 shows the system structure of the trust-based model secure routing technique proposed in this paper.
Reliability Measurement and Security Routing.
In this paper, we use a hierarchical cluster structure for the efficient trust evaluation and management of nodes. The node with the highest number of connections to other nodes within each cluster is designated as the trust management node, and this node manages the reliability values of the nodes in its cluster. In addition, the Member Trust Table (MTT) storing the reliability is updated periodically while exchanging information with the trust management nodes of adjacent clusters. In order to improve security when setting a route, the average value of the reliability is calculated periodically and used as a threshold value. The reliability measurement for the nodes within each cluster is performed by all nodes, which act as trust agents. That is, the reliability is calculated using the ratio of packets forwarded by each node. However, the reliability may not be measured accurately if only the delivery of packets is used, because the rate of packet transmission may vary for various reasons such as traffic increase, the communication state of the wireless network, and malicious attacks.
Therefore, the quality of packet forwarding is reflected to improve the accuracy of the reliability measurement. In order to measure the reliability of a specific node, the contents of the packets received from the neighbor node are analyzed. First, the IP header of the received packet is checked to determine whether the packet was generated by the neighbor node or is simply a forwarded packet. Then, the reliability for each node is calculated by equation (1), where α and β are weights according to the time that node i and node j have participated in the network. G_i(p_j) means that node j delivers a packet it generated to node i, and F_i(p_j) means that node j forwards to node i a packet received from a neighbor node. Likewise, G_j(p_i) means that node i delivers a generated packet to node j, and F_j(p_i) means that node i forwards to node j a packet received from a neighbor node. This is a way to measure the selfish behavior of a node: the reliability is decreased if a node does not forward packets received from its neighbors and transmits only its own data. The secure path between the source node and the destination node is set based on the reliability of each node calculated by the abovementioned method. The reliability information for all nodes participating in the network is stored in the Trust Information Table (TIT) in the trust management node. Figure 2 shows the structure of the trust information table.
As shown in Figure 2, the reliability of node A is stored by the neighbor nodes that received packets transmitted from node A, and the value is calculated by node H and node S. The reliability value measured by each neighbor node is recalculated by equation (2). In the trust management node of each cluster, the average reliability value of the cluster is calculated periodically after the reliability values for all nodes have been measured; this is calculated by equation (3), where Ci means the number of clusters constituting the network, and the expression calculates the average reliability of each cluster. The source node (S) broadcasts an RREQ message to establish the path to the destination node (D). The nodes that receive this message forward the packet toward the destination node, and the paths to the destination node are found through the responses to the RREQ. The source node deletes nodes whose reliability is less than the average cluster reliability from the various paths to the destination node collected through the responses. Then, the path with the highest reliability value is selected. Figure 3 shows an example of reliability-based path setting. As shown in the figure, there are several paths from the source node (S) to the destination node (D). Among them, node F, node J, and node L are excluded from the route setting because their reliability is less than the average reliability value of the cluster. Therefore, for the reliability-based secure path, the path with the highest average reliability across all candidate paths is selected, rather than the shortest path.
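Equations (1)-(3) did not survive the source scan. The following Python sketch shows one plausible reading of the scheme described above: a forwarding-ratio trust score weighted by α and β, a cluster-wide average used as the threshold, and reliability-based path selection. The exact functional form of the original equations is our assumption.

```python
# Hypothetical reconstruction of the trust evaluation and path selection
# described in the text (the original equations were lost in extraction).

def node_trust(generated, forwarded, alpha=1.0, beta=1.0):
    """Trust rises with packets forwarded for neighbors and falls for nodes
    that mostly send their own traffic (selfish behavior). alpha/beta stand
    in for the participation-time weights mentioned in the text."""
    total = alpha * generated + beta * forwarded
    return (beta * forwarded) / total if total else 0.0

def cluster_average(trust_table):
    """Periodic average reliability over the nodes of a cluster (cf. eq. (3))."""
    return sum(trust_table.values()) / len(trust_table)

def select_path(paths, trust_table):
    """Drop paths containing a node below the cluster average, then pick the
    path with the highest mean reliability (not the shortest one)."""
    threshold = cluster_average(trust_table)
    candidates = [p for p in paths
                  if all(trust_table[n] >= threshold for n in p)]
    return max(candidates,
               key=lambda p: sum(trust_table[n] for n in p) / len(p),
               default=None)

trust = {"A": 0.9, "B": 0.8, "F": 0.3, "H": 0.85, "J": 0.2, "L": 0.25}
paths = [["A", "B", "H"], ["A", "F", "H"], ["B", "J", "L"]]
print(select_path(paths, trust))  # ['A', 'B', 'H']: F, J, L fall below average
```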
Security Data Transmission Technique.
In the method mentioned in the previous section, a key exchange technique is applied for secure data transmission after the secure path is established between a source node and a destination node. The path is set based on the reliability check of the nodes for secure path setup, and the key exchange is applied to enhance the security and integrity of data transmission, because malicious nodes cannot be completely excluded through this process. Also, a rapid security function is provided through key exchange between nodes without the CA's help for certificate issuance. Each node periodically receives its own reliability information from the trust management node. The information is signed using the public key shared between trust management nodes to prevent falsification by nodes.
This trust information is used to guarantee a node's identity for secure data transmission at the time of key exchange. The key exchange between nodes is performed as follows. First, the source node sends its public key and a hash signature of the public key to the nodes on the established secure path for secure data transmission. The destination node that receives the packet transmits a response message including its public key and an Integrity Detection Code (IDC) of the public key. The source node generates a shared key, encrypts it with the public key of the destination node, and transmits it. The source node then encrypts the data to be transmitted to the destination node and transmits it. This technique improves the safety and integrity of data transmission. The process described above is shown in Figure 4. The source node requests its trust information from its trust management node as a preparation step for key generation with the destination node. The received trust information is transmitted to the destination as well. The destination node that receives it verifies the source node through the process of requesting the identity of the node from the trust management node.
Key_Req means a key agreement request, S_(pub_key) means the public key of the source node, and IDC(hash(S_(pub_key))) means an integrity check code for its public key. Key_Rep means a key agreement response, S_(sec_key) means the secret key of the source node, and IDC(hash(S_(sec_key))) means an integrity check code for its secret key.
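The message flow above can be sketched in Python as follows; the IDC is modeled as a SHA-256 digest, and the asymmetric encryption of the shared key is replaced by a clearly marked placeholder, since the paper does not specify the cipher.

```python
# Illustrative sketch of the CA-less key exchange (not the authors' code).
import hashlib
import os

def idc(data: bytes) -> bytes:
    """Integrity Detection Code, modeled here as a SHA-256 digest."""
    return hashlib.sha256(data).digest()

# Step 1: Key_Req -- the source sends its public key plus IDC(hash(pub_key)).
src_pub_key = os.urandom(32)                 # placeholder key material
key_req = (src_pub_key, idc(src_pub_key))

# Step 2: the destination checks integrity and answers with Key_Rep.
assert idc(key_req[0]) == key_req[1]         # tampering would break this check
dst_pub_key = os.urandom(32)
key_rep = (dst_pub_key, idc(dst_pub_key))

# Step 3: the source generates a shared session key and sends it encrypted
# under the destination's public key (real asymmetric encryption is stubbed
# out here with a simple XOR stand-in).
assert idc(key_rep[0]) == key_rep[1]
shared_key = os.urandom(16)
encrypted_key = bytes(a ^ b for a, b in zip(shared_key, dst_pub_key))

# Step 4: data is then encrypted under shared_key before transmission.
```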
Anomaly Detection.
The performance of a routing protocol is reduced by malicious nodes in the network. In this section, the following process is performed in order to detect anomalous nodes in the routing process. First, a suspicious node is detected in the secure path module through traffic checks on the nodes. Second, a malicious node is detected by a DSN check on the path table entries of the node. The traffic from the source node to the destination node is measured for t hours, where the value of t uses the Round Trip Time (RTT) between the source node and the destination node, and the average value of the traffic is calculated by equation (4), where T_0 represents the timeout and p represents the packet loss rate. If the value measured by equation (4) is higher than the average traffic of the cluster, it is judged that a malicious node exists on the path. Then, the DSNs of the packets transmitted by the nodes existing on the path are checked in order to detect a wrong DSN. The false DSN check is an important factor for detecting anomalous nodes, because the protocol relies on the DSN to guarantee loop-free routes to the destination node. Routing information checks are performed in preparation for attacks that may occur in the data transmission step. In this process, an anomalous node that responds with a nonexistent node ID or transmits a packet using an invalid DSN is detected. The information of the detected node is transmitted to the trust management node, its reliability value is set to 0, and the node is excluded from routing participation. Figure 5 shows the anomaly detection process described above.
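Equation (4) also did not survive the scan; since it involves the RTT, a timeout T_0, and the packet loss rate p, it suggests a throughput-style estimate. The sketch below therefore takes the measured path traffic as an input and illustrates the two-stage check described above (traffic above the cluster average, then DSN validation); the data structures and the exact DSN rule are our assumptions.

```python
# Hypothetical sketch of the two-stage anomaly detection described above.

def detect_anomalies(path_traffic, cluster_avg_traffic,
                     path_packets, known_ids, last_dsn):
    """Stage 1: flag the path if its measured traffic exceeds the cluster
    average. Stage 2: inspect DSNs on the flagged path and report nodes that
    use an unknown node ID or a DSN lower than the one already recorded."""
    if path_traffic <= cluster_avg_traffic:
        return []  # nothing suspicious on this path
    malicious = []
    for node_id, dsn in path_packets:
        if node_id not in known_ids or dsn < last_dsn.get(node_id, 0):
            malicious.append(node_id)  # reported to the trust management node;
        else:                          # its reliability is then set to 0
            last_dsn[node_id] = dsn
    return malicious

seen = {"B": 7}
packets = [("B", 8), ("C", 3), ("X", 5), ("B", 6)]
print(detect_anomalies(12.4, 9.1, packets, {"A", "B", "C"}, seen))  # ['X', 'B']
```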
Simulation Parameters.
In this section, we evaluate the main performance of the trust-based model secure routing technique proposed in this paper. The simulations are conducted in NS2. The experimental environment for the simulation is as follows. The mobile nodes used in the experiments follow a random waypoint model that changes location freely while moving through the network. In our simulation, the mobile speed is varied among 5, 10, 15, and 20 m/s, and the battery consumption of the nodes was not considered. The total experiment time was 300 s, and, during the experiment, Hello flooding attacks, Jellyfish attacks, and Jamming attacks each occurred 5 times. The type of Jamming attack used in this experiment was deceptive Jamming operating on the network layer. Table 3 shows the experimental variables used for the experiment.
Performance Metrics.
We experimented in two ways in order to evaluate the performance of the technique proposed in this paper. The first experiment evaluated secure routing performance according to the presence or absence of an attack in comparison with SAODV, and the second experiment evaluated routing performance according to the network structure in comparison with EAODV. The performance evaluation criteria are set as the packet delivery ratio, end-to-end delay time, number of control packets, network throughput, routing overhead, and average path length, as defined below.
Packet delivery ratio: the ratio of the number of packets received successfully to the total number of packets transmitted.
End-to-end delay time: the end-to-end delay averaged over all surviving data packets from the sources to the destinations.
Control packets: the total number of control packets, such as RREQ, RREP, and RERR, transmitted for data transfer between the source node and the destination node.
Network throughput: the amount of data packets transmitted between a source node and a destination node over a certain period of time.
Routing overhead: the total number of routing packets for route discovery and route maintenance.
Average path length: the average number of hops between the source node and the destination node over which data is transmitted.
Figure 6 shows the measurement results for the packet delivery ratio, which is the main performance evaluation criterion of a routing protocol. As shown in the figure, we confirmed that the performance difference between the two techniques was not large when no attack was present, but the difference was large when an attack was present. The SAODV technique showed low performance under the Hello flooding attack.
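A minimal sketch of how these six metrics could be computed from a simplified simulation trace is given below (the field names are our assumptions; parsing of actual NS2 trace files is omitted).

```python
# Illustrative metric computation from a simplified trace (not NS2 parsing).

def metrics(sent, received, delays, control_pkts, routing_pkts, hop_counts,
            bytes_delivered, duration):
    return {
        "packet_delivery_ratio": received / sent,
        "end_to_end_delay": sum(delays) / len(delays),  # over surviving packets
        "control_packets": control_pkts,                # RREQ + RREP + RERR
        "throughput_bps": bytes_delivered * 8 / duration,
        "routing_overhead": routing_pkts,               # discovery + maintenance
        "avg_path_length": sum(hop_counts) / len(hop_counts),
    }

print(metrics(sent=1000, received=930, delays=[0.08, 0.11, 0.09],
              control_pkts=410, routing_pkts=520,
              hop_counts=[3, 4, 5], bytes_delivered=1.2e6, duration=300))
```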
Results and Analysis
This technique sets the path after performing authentication of RREQ and RREP for path discovery, and no special security technique is applied when the data is transmitted. Therefore, we confirmed that the performance was greatly degraded under the Hello flooding attack, which behaves normally until the path is set. However, the proposed technique showed excellent performance under the Hello flooding attack because data transmission takes place after performing the key exchange process between the source node and the destination node even after setting the path. Figure 7 shows the result of measuring the packet delivery ratio according to the presence of a Jellyfish attack. As the results show, the performance of SAODV was not good when the Jellyfish attack occurred. The SAODV technique performs authentication of RREQ and RREP for path discovery and sets the path; no special security technique is applied when data is transmitted. It is confirmed that the performance is degraded greatly under the Jellyfish attack, which performs normal actions until the path is set. However, the proposed technique showed excellent performance against the Jellyfish attack because it performs the key exchange process between the source node and the destination node and data is then transmitted securely even after routing. Figure 8 shows the result of confirming the effect of the Jamming attack on packet delivery between the source node and the destination node. As the results show, the performance of SAODV was not good in the event of the Jamming attack. In the detection of inserted abnormal packets, the packet delivery performance was degraded because discovery was made only after data transmission was completed. On the other hand, the proposed technique achieves good results even against the Jamming attack by blocking packet reception from malicious attack nodes through the process of key exchange between nodes before data transmission. Figure 9 shows the measurement results for the transmission delay time between the source node and the destination node under the Hello flooding attack, Jellyfish attack, and Jamming attack. The SAODV technique uses TTL values and digital signatures of RREQ and RREP for secure routing.
Figure 4: Internodes key exchange process for secure data communication (key exchange steps between the source node, intermediate nodes, and the destination node to set up the secure path).
The delay time exists due to this authentication process, and it is longer when an attack occurs. In particular, this is also the cause of the low security performance against attacks after setting the path. We confirmed that the proposed technique was not significantly affected by the attacks, but its end-to-end latency appeared rather long because data is transmitted after the path setting based on the reliability of the nodes and the key exchange process between the source node and the destination node. The number of control packets can influence the overall performance of the network. Figure 10 shows the measurement results for the number of control packets generated by each technique during the experiment time. The SAODV technique requires an authentication process for secure path setup, so the number of control packets increases; also, the more the nodes moved, the more this amount increased. The proposed technique showed stable performance with little change in the number of control packets even in the event of an attack, although the control packet count is somewhat higher due to the key exchange between nodes for secure data transmission, even though it does not go through an authentication server. Figure 11 shows the results for the network throughput depending on the existence of a Jellyfish attack. The network throughput, the amount of data transmitted from the source node to the destination node per unit time, is an important indicator of the performance of a routing protocol. SAODV showed a large difference depending on the existence of the Jellyfish attack because no security technique is applied during data transmission. The proposed technique, however, applies the average reliability of the cluster and the reliability of the nodes existing on the path during path setting and goes through an anomaly detection process based on traffic. Therefore, the technique is not influenced by the presence of the Jellyfish attack and shows superior performance compared to SAODV. Figure 12 shows the measurement results for the average path length between the source node and the destination node according to the movement speed of the nodes and the attacks. The average path length becomes longer as the movement speed of the nodes increases. The proposed technique shows a longer path length because it sets the path with higher reliability rather than shorter length. The results also show that the path length under attack is long and that the proposed technique is less influenced by attacks due to secure data communication through key exchange and the traffic-based malicious node detection process. The routing overhead describes the number of routing packets for route discovery and route maintenance needed to propagate the CBR packets. Figure 13 shows the comparison of routing overhead between SAODV and the proposed technique. As the number of malicious nodes increases, the routing overhead also increases. SAODV is not significantly affected by attacks in the route discovery step because it authenticates control packets there, but its routing overhead increases greatly as it is vulnerable to attacks in the data transmission process.
The proposed technique transmits data through a key exchange process even after setting a secure route between the source node and the destination node. Therefore, the routing overhead does not increase significantly under attacks, although the key exchange occurs. Figure 14 shows the experimental results against EAODV for the packet transmission rate when the number of nodes is 50 and 100, to evaluate the routing performance. The proposed technique selects the shortest path, without considering the residual energy of the nodes, through a cluster-based path discovery process. Also, the cluster head manages the information of the nodes in the cluster, and routes are set based on this, so more efficient routes are set; EAODV, by contrast, selects nodes with a high energy level, a long path lifetime, and fewer hops. EAODV showed good results when the movement of the nodes was small, but its results became lower, owing to the energy threshold calculation, as the movement speed of the nodes increased.
In Figure 15, the throughput, which is an important metric, is shown as a function of the number of nodes and the moving speed. We can see that the network throughput gradually decreases as the moving speed of the nodes increases. This means that as link failures caused by the movement of the nodes increase and the demand for new routing grows, the consumption of bandwidth increases. As the results show, the proposed technique, in which the nodes are managed by the cluster head, shows better performance than EAODV. The comparison between the number of routing packets and the node speed is shown in Figure 16. As the nodes move faster, the number of routing packets of both protocols increases. However, the proposed technique generates fewer routing packets than EAODV; therefore, the number of routing packets for route discovery and maintenance can be reduced.
Conclusions
The routing protocol plays a very important role in determining overall network performance because a MANET consists of mobile nodes with limited resources. The dynamic topology caused by the movement of nodes and the hop-by-hop path setting expose the network to many security threats; internal attacks by malicious nodes, especially, are more damaging. It is necessary to provide a technique to eliminate the participation of malicious nodes in routing and data transmission through proper trust evaluation of nodes. To this end, a cluster structure was used in this paper to measure the reliability of the nodes participating in the network. In order to improve the accuracy of the reliability, the quality of the packets as well as the number of packets transmitted between the nodes was included; that is, whether a packet received from a neighbor node was generated by that node is reflected in the reliability calculation. The reliability information and management of the nodes in each cluster were handled by the trust management node. The trust management node calculated the average reliability value of the cluster and transmitted the information to the neighbor trust management nodes every time the reliability value for each node was updated. In this way, even if the nodes move, the trust information of each node can be known. Also, the trust information of each cluster node is digitally signed and transmitted. The path setting was made by combining the measured reliability of the nodes and the average reliability value of each cluster. Among the various paths existing between the source node and the destination node, nodes having a value smaller than the average reliability value of the cluster were excluded from the path setting, and the path with the highest reliability among the remaining nodes was selected. Once the path had been set, the data was transmitted after the key exchange process between the source node and the destination node. The key exchange between nodes was performed without a CA, and the trust information received from the trust management node was used to guarantee the identity of the node. We also measured the traffic on the path between the source node and the destination node in order to detect anomalous nodes. If the traffic occurring on a specific path was higher than the average traffic in the cluster, the nodes on the path checked the DSN of the transmitted packets; a node transmitting a wrong DSN was recognized, and its network participation was excluded. In order to evaluate the performance of the proposed technique, experiments were performed in comparison with the SAODV technique for the packet delivery ratio, end-to-end delay time, number of control packets, network throughput, and average path length. In addition, to evaluate the routing performance, experiments were performed on the packet transmission rate, throughput, and routing packet criteria in comparison with EAODV. Through the experiments, it was confirmed that the management of nodes and route discovery using a cluster-based network structure becomes more effective as the moving speed of the nodes increases. As can be seen from the experiments, the better performance of the proposed technique compared to SAODV is confirmed in the presence of attacks. This shows the superiority of the proposed trust evaluation and secure path setting for nodes. In the future, research on an energy-aware trust model will be conducted to improve the efficiency of the secure routing protocol.
Data Availability
The simulated evaluation data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2020-10-09T13:08:06.773Z | 2020-09-16T00:00:00.000 | {
"year": 2020,
"sha1": "f84571c45ca994679c1a709b826dc19edd390612",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/misy/2020/8819587.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b08b0d5d6a96ff3af01e3e09f166f15433c6b216",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
216496936 | pes2o/s2orc | v3-fos-license | In Situ Epoxidation of Sesame Seed Oil for Synthesis of a Bio-based Resin
This study is aimed at the modification of sesame seed oil to develop a biobased resin. Neat sesame seed oil was epoxidized via the in-situ conventional method with peroxyacetic acid at 75 °C. Biobased resins were synthesized by further modifying the epoxidised oil with acrylic and methacrylic acid, respectively, at 90 °C in the presence of hydroquinone. A conversion rate of 76.54% and a high oxirane content of 3.83% were achieved at a reaction time of 7 hours for the epoxidation process. FT-IR spectra of the acrylated and methacrylated epoxy resins of sesame seed oil were identified at 2710 and 3002 cm-1, respectively. This confirmed the functionalization with the acids used for modification and clearly demonstrates the potential of sesame seed oil for biobased resin synthesis.
Currently, the synthesis of biobased polymeric resins from readily available, sustainable, eco-friendly vegetable seeds has paved the way for their utilization as functional base materials for the production of resins (Blavo et al., 2001; Campanella et al., 2004; Xia & Larock, 2010).
Plant oils are being considered as important renewable raw materials for the production of bio-based polymer materials due to the unsaturated fatty triglycerides they contain (Leveneur, 2017; Nikesh et al., 2015; Rakotonirainy & Padua, 2001). The unsaturated C=C bonds present in the oil can be modified into reactive groups via epoxidation (Chen et al., 2019); hence, this has been actualized by various researchers. Sinadovic-Fiser et al. (2001) studied the kinetics of the epoxidation of soybean oil in bulk by peracetic acid formed in situ, in the presence of an ion exchange resin as the catalyst; the kinetic model proposed was validated, and the optimal conditions for obtaining 91% conversion and 5.99% epoxide content in the product were found to be 0.5 mole of glacial acetic acid and 1.1 mol of hydrogen peroxide per mole of ethylenic unsaturation at 75 °C, over a reaction time of 8 hours. Goud et al. (2007) worked on the kinetics of epoxidation of Jatropha oil using peroxyacetic/peroxyformic acid in the presence of an acidic ion exchange resin as catalyst, with or without toluene. The kinetic model proposed was validated, and the activation energy for the epoxidation was found to be 53.6 kJ/mol. The kinetics of the epoxidation of rubber seed oil was investigated by Okiemen et al. (2002); there was a good fit between the experimental and predicted data, which indicates that the oil was suitably epoxidised.
Epoxidized vegetable oil not only improves the stability of oils but also provides adequate reactivity to form chemical linkages with other polymer chains (Cai et al., 2008; Campanella et al., 2004); hence the need to further modify epoxidised oil via acrylation, methacrylation, or hydroxylation to produce a wide range of thermosets. These thermosets present improved physical properties such as higher flexibility, adhesion, and resistance to water and chemicals (Xia & Larock, 2010). Nwosu-Obieogu et al. (2017) developed biobased polymeric resins from modified linseed and sunflower oils; the FTIR analysis showed that the oils were suitably modified.
Sesame (Sesamum indicum L.), a member of the family Pedaliaceae, is among the earliest crops processed for oil production (Anilakumar et al., 2010; Hassan, 2012). It is grown in Asia and Africa and is applied in medicine, pharmaceuticals, and nutrition. It is very rich in unsaturated fatty triglycerides (Crews et al., 2006) but is not good for frying due to its decomposition at low temperature; hence the need to utilize it effectively through conversion to biobased resins and biofuels (Bang et al., 2014; Musik & Milchert, 2017; Saydut et al., 2008). Gharby et al. (2015) characterized the chemical and nutritional constituents of sesame seed oil grown in Morocco; the results revealed a high degree of unsaturation, with linoleic acid (46.9%) followed by oleic acid (37.4%), which is an indication that it can be functionalized. Musik et al. (2018) compared the epoxidation of sesame oil with performic acid and peracetic acid; performic acid gave a higher epoxy number than peracetic acid. Hence, this work adds value to sesame seed oil by epoxidation and further modification via acrylation and methacrylation of the epoxidised oil.
Epoxidation Procedure
The sesame oil was extracted from the seeds by mechanical expression. The epoxidation method reported by Goud et al. (2007) was adopted in this work with slight variations in procedure. Sesame oil (60 g) was placed in the flask; 4.3 g of acetic acid was added to the flask after about five minutes, and the mixture was stirred continuously for 30 min. Then 29.89 g of 30 wt% aqueous hydrogen peroxide was added dropwise to the reaction mixture, as the oxygen donor, at a rate such that the hydrogen peroxide addition was completed within half an hour; the completion of the hydrogen peroxide addition was taken as zero time. The mole ratio of the components used was 1:1.5:0.5. After the complete addition of hydrogen peroxide, the mixture was heated under reflux at the desired temperature of 75 °C with rapid stirring. Rapid stirring at 200 rpm was maintained throughout the experiment to achieve uniform dispersion of the oil and to avoid zones of high peroxide concentration that could lead to explosion. The reaction setup was repeated for 5, 6, and 7 hours after the first 4-hour run. The collected samples (ESO) were then immediately washed with sodium carbonate dissolved in distilled water to remove free acids and other unreacted components. 20 g of Na2CO3 was first dissolved in 100 ml of distilled water; then, another 100 ml of distilled water was added to the mixture. The total mixture was separated using a separation funnel.
Iodine value
The iodine value of the oil sample was determined by the Wijs method of the Association of Oil Chemists. 0.5 g of the sample was poured into a conical flask. 10 ml of carbon tetrachloride was added to the oil, and the flask was shaken to allow the oil to dissolve. 20 ml of Wijs iodine solution was then added to the mixture. It was stirred vigorously, stoppered, and kept in the dark for 30 minutes. Subsequently, 15 ml of potassium iodide solution followed by 100 ml of distilled water was added. The mixture was titrated against 0.01 N sodium thiosulphate solution. A reagent blank was also titrated.
The iodine value of the epoxidized samples was calculated after analysis using the following formula:
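The formula itself did not survive the source scan; the standard Wijs titration expression, which the procedure above implies, is

Iodine value (g I2/100 g oil) = 12.69 × N × (B − S) / W,

where B and S are the titre volumes (mL) of thiosulphate for the blank and the sample, N is the normality of the thiosulphate solution, W is the sample mass in grams, and 12.69 converts milliequivalents of iodine to grams of I2 per 100 g of oil.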
Oxirane Oxygen content
The percentage of oxirane oxygen was determined by the direct method using a hydrobromic acid solution in glacial acetic acid. The oxirane oxygen (OO) content was calculated according to the consumed amount of the halogen (Swern et al., 1947).
The oxirane oxygen content of the analyzed samples was calculated using the following formula:
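The formula is likewise missing from the scan; the standard expression for the direct HBr method, consistent with the procedure above, is

Oxirane oxygen content (%) = (V × N × 1.6) / W,

where V is the volume (mL) of hydrobromic acid solution consumed, N is its normality, W is the sample mass in grams, and 1.6 = 100 × 16/1000 converts milliequivalents of oxirane oxygen to a percentage by mass.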
Synthesis of Acrylated Sesame Seed Oil
The epoxidized sesame seed oil (30.0 g) was kept at room temperature while acrylic acid (9.79 g) containing hydroquinone (0.02 g, 0.25 wt%) was added over 30 minutes. The reaction mixture was then heated under reflux for 6 hours at 90 °C with constant stirring. The mixture was then cooled to room temperature. The obtained product, acrylated epoxidized sesame oil (AESO), was washed with distilled water and then isolated.
Synthesis of Methacrylated Sesame Seed Oil
The epoxidized sesame oil (30.0 g) was kept at room temperature while methacrylic acid (9.79 g) containing hydroquinone (0.02 g) was added at 30 minutes into the experiment. The reaction mixture was heated under reflux for 6 hours at 90 °C with constant stirring. The mixture was then cooled to room temperature and washed with distilled water. The obtained product, methacrylated epoxidized sesame oil (MESO), was isolated.
FT-IR Analysis
FT-IR spectroscopic analysis was performed using a Nicolet iS50 instrument to characterize the samples of pure, epoxidised, acrylated, and methacrylated epoxidised sesame seed oils.
RESULTS AND DISCUSSION
The results indicated that the percentage conversion of the iodine value and the oxirane value of the epoxidized sesame oil increased with reaction time; a maximum conversion of 76.54% (with an iodine value of 24.68 g I2/100 g oil), shown in Table 1, as well as an oxirane value of 3.83%, shown in Table 2, was achieved at a 7-hour reaction time, which is in agreement with the findings of Goud et al. (2007). From the FT-IR spectra shown in Figures 1-4, it is observed that at the wave numbers 3890 and 700 the pure sesame oil shows a hydroxyl group and an unsaturated carbon-carbon bond (Figure 1). The epoxidized sesame oil became saturated and formed an epoxy group at a wave number of 1120 (Figure 2). The acrylated and methacrylated epoxy resins of sesame oil were obtained at the wave numbers of 2710 and 3002, respectively, corresponding to the acrylic and methacrylic groups in Figures 3 and 4, respectively, which indicates that the sesame seed oil has been modified.
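For reference, the percentage conversion reported in Table 1 is conventionally defined as

Conversion (%) = 100 × (IV0 − IVt) / IV0,

where IV0 is the iodine value of the neat oil and IVt that of the epoxidized sample. Assuming this definition, back-calculating from the reported values gives IV0 ≈ 24.68 / (1 − 0.7654) ≈ 105.2 g I2/100 g oil, which is in the range typically reported for sesame oil.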
CONCLUSION
Increasing concern about environmental pollution from petrochemical-derived polymer products has made plant seed oils a potential replacement. From the results obtained in this work, it was found that sesame seed oil is a good starting material for oil epoxy synthesis and for further modification via acrylation and methacrylation to produce biobased resins. Hence, this information supports the possible utilisation and application of sesame seed oil for the synthesis of bioresins and the preparation of composites. | 2020-03-19T10:27:36.440Z | 2020-03-17T00:00:00.000 | {
"year": 2020,
"sha1": "0884f7000be607a4823e30df199c419213f94a89",
"oa_license": "CCBY",
"oa_url": "https://www.ejosdr.com/download/in-situ-epoxidation-of-sesame-seed-oil-for-synthesis-of-a-bio-based-resin-7830.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "586085c4afd58cdb6ce864930a822ff3d7adf432",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Phytotoxicity of silver nanoparticles (AgNPs) prepared by green synthesis using sage leaves (Salvia officinalis)
Silver nanoparticles (AgNPs) are widely investigated with regard to their physical, chemical, but also biological properties. Antibacterial and antitumor properties of AgNPs have been intensively studied. In addition, the synthesis using a green approach brings further significant biological properties. However, it is also necessary to monitor the potential toxicity of such nanoparticles in different ecosystems. In this study, the effect of AgNO3 and AgNPs on germinated plants of Zea mays was studied. Effects on basic growth and physiological parameters were observed. There was a statistically significant difference between the variants tested.
Introduction
In the current decade, nanomaterials and nanoparticles (NPs) are being intensively investigated, and knowledge about their toxicity and behavior in the eco-environment is very important [1,2]. Silver nanoparticles are defined as individual silver particles or small silver aggregates with a size of 1-100 nm [3][4][5].
Due to their small size, large surface area and antimicrobial effects, they are widely used in many industries including electronics, medical diagnostics, the textile industry and personal care [6]. Biological synthesis of nanoparticles using plants and microorganisms has been described in previous review papers [7,8] and is still given great attention [9].
Fig. 1. The expected biological effects of AgNPs on plants. (A) The soil particles are surrounded by water potential gradients that affect plant root growth (due to positive hydrotropism, the growth of the roots is directed towards the water) and thus also the nutrient intake. Hydropatterning changes the distribution of root hairs and lateral roots along the root circumference. It has recently been discovered that the transcription factor ARF7 activates the LDB16 gene, which induces the initiation of lateral root formation when water is available. These recent molecular findings can bring new information on the NPs [10]. (B) AgNPs enter the plant through conductive elements and are transported to its organs and cells. AgNPs induce reactive oxygen species (ROS) formation and distinct reactions in peroxisomes in the cells. The most important predicted effects of AgNPs phytotoxicity concern the nucleus (DNA damage, transcription factor binding, etc.), the mitochondria (e.g., respiratory chain damage, damage to DNA and membranes) and the chloroplasts (e.g., damage to DNA, photosystems, and membranes). These changes lead to activation of caspases and cell death. An important effect is a decrease in GSH concentration and consequent damage to cellular components. These complex changes result in changes in overall metabolism.
Increased use of silver nanoparticles (AgNPs) enhances the risk of their release into the eco-environment, and there is growing concern about health and safety risks [11]. It is therefore essential to address the potential effects of AgNPs on plants as primary producers and an essential component of the ecosystem [4,6]. These effects may vary depending on the size, shape and concentration of the nanoparticles, as well as on the age and type of plant. Terrestrial plants can be exposed to nanoparticles in many ways, the main ones being potential leakage into the environment, irrigation with contaminated surface water, application of contaminated biosolids, or wastewater discharge [11]. However, only a very limited number of publications have so far dealt with the interaction between higher plants and AgNPs.
The aim of this work was to study the effect of different concentrations of silver nitrate (AgNO3) and AgNPs on germinated plants of maize (Zea mays) of the Silen variety.
Chemicals
Silver nitrate, methanol, NaCl and other chemicals were purchased from Merck (USA) at a purity >99%. All chemicals used for gel electrophoresis were purchased from VWR (Germany). All plastic materials used in this study (tubes, tips) were purchased from Eppendorf (Hamburg, Germany). Deionised water was prepared using the reverse osmosis equipment Aqual 25 (Brno, Czech Republic) and was further purified using an ELGA apparatus equipped with a UV lamp (Lane End, United Kingdom); the resistance was 18 MΩ. The pH was measured using a WTW pH meter. Electrochemical analysis of silver was performed by the DPV method in 0.2 M acetate buffer (pH 5), with a scan from -0.1 to 0.6 V and a polarization rate of 25 mV/s (working carbon electrode, Metrohm, Switzerland). Characterization of the nanoparticle surface was performed by previously optimized methods [12][13][14][15][16][17][18].

The Ferric Reducing Antioxidant Power (FRAP) assay is based on the reduction of the 2,4,6-tripyridyl-s-triazine (TPTZ) complex with FeCl3 · 6H2O; the absorbance was measured at 605 nm. For the ABTS assay, the radical of 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS, 7 mM) and potassium peroxodisulfate (5 mM) were mixed in water; the solutions were then diluted with water at a ratio of 1:9 v/v, stored for 12 hours in the dark at 4 °C prior to use, and the absorbance was measured at 660 nm. The DMPD (N,N-dimethyl-1,4-diaminobenzene) method is based on quenching the color of the radical, whose absorbance is measured at 450 nm. Total phenols were determined with the Folin-Ciocalteu reagent (1.5 mL of reagent was mixed with 200 mL of the sample); the mixture was left at 25 °C for 5 min, followed by the addition of 1.5 mL of 6% (w/v) Na2CO3, and after 90 min at 25 °C the absorbance was measured at 725 nm. A calibration curve of gallic acid was prepared. The total flavone content was determined as follows: 0.5 mL of the sample was mixed with 1.5 mL of methanol, 0.1 mL of 10% aluminium chloride, 0.1 mL of 1 M potassium acetate and 2.8 mL of water; the resulting solution was left at 25 °C for 30 min and the absorbance was measured at 415 nm. A quercetin calibration curve was prepared to determine the flavone concentration. Total protein was determined by the biuret method (510 nm) and the pyrogallol red method (520 nm).
Surface morphology of the nanoparticles
The surface morphology of the nanoparticles was investigated with field emission scanning electron microscopy (FESEM) at an operating voltage of 10 kV in a Zeiss SEM instrument. The surface charging effect was minimized by coating the samples, mounted on copper stubs, with gold using a coating instrument. Transmission electron microscopy (TEM) and high-resolution TEM (HRTEM; JEOL) samples were prepared on a copper stub with carbon glue and coated with gold before analysis. The samples for TEM and HRTEM were placed in vials containing absolute ethanol and ultrasonicated for 10 min. Thereafter, holey/lacey carbon grids (10 μm) were dipped into the vials containing the ultrasonicated samples and dried before microstructural determination.
Preparation of AgNPs by green synthesis
Silver nanoparticles were prepared using green synthesis. The sage leaves (Salvia officinalis) were purchased in single packs (50 g, Valdemar Grešík - Natura s.r.o., Czech Republic). The leaves were first homogenized by milling to a particle size of 1-2 mm and then extracted: the mixture was stirred in ultrapure water (80 °C, 60 min) at a ratio of 5 g DW/100 mL (w/v). The leachate was centrifuged (30 min, 4000 g) and then filtered through filter paper (100 μm). The leachate was mixed with 0.1 M silver nitrate (1:1) and stirred on a magnetic stirrer (80 °C, 24 h). The particles were precipitated with methanol (1:1) and left on a magnetic stirrer for 60 min. After purification, the supernatant was pipetted off and the particles were allowed to dry (24 h, 60 °C, VWR drier, USA). The particles were then dispersed in ultrasound-irradiated water at a concentration of 3 mg/mL for 40 minutes.
Analysis of photosynthetic dyes
To determine the chlorophyll content, 1 g of the above-ground portion of the plant was weighed, placed in a mortar and ground with sea sand. Then, 1 mg of magnesium oxide was added and, after a short period of grinding, 10 mL of acetone was added. The sample was filtered through filter paper (100 μm pore size) and the filtrate was made up to 25 mL with acetone in a volumetric flask. The extracted chlorophyll was diluted with acetone at a ratio of 1:9 in a 2 mL glass cuvette. Chlorophyll measurements were performed in the range 350-650 nm with a 2 nm scan step (UV-3100PC spectrophotometer; VWR International, USA).
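The paper does not state which absorbance equations were used to convert the 350-650 nm spectra into pigment amounts. A minimal sketch of how such a conversion is commonly done for acetone extracts, using Lichtenthaler-type coefficients (an assumption on our part, not the authors' stated method), is:

# Sketch: chlorophyll a, chlorophyll b and total carotenoids (ug/mL) from
# the absorbances of an acetone extract. The coefficients are the commonly
# used Lichtenthaler values for 100% acetone; the paper does not specify
# which equations it applied, so treat this as illustrative only.

def pigments_ug_per_ml(a662: float, a645: float, a470: float) -> dict:
    """Absorbances at ~662, ~645 and 470 nm -> pigment concentrations."""
    chl_a = 11.24 * a662 - 2.04 * a645
    chl_b = 20.13 * a645 - 4.19 * a662
    carotenoids = (1000 * a470 - 1.90 * chl_a - 63.14 * chl_b) / 214
    return {"chl_a": chl_a, "chl_b": chl_b, "carotenoids": carotenoids}

# Hypothetical readings; the 1:9 cuvette dilution and the 25 mL extract
# volume per 1 g of tissue would still have to be folded in to express
# the result per gram of fresh weight.
print(pigments_ug_per_ml(a662=0.80, a645=0.35, a470=0.55))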
Microscopic and photographic analysis
To evaluate the effects induced by AgNO3 and AgNPs, the plants were photographed (Canon, Full HD, 20.3 Mpx). Microscopic analysis was performed using a computer-connected VisiScope microscope (VWR, USA) allowing photo collection (10 Mpx) on a PC. Image analysis was performed by ColorTest (Prevention Medicals, Czech Republic).
Statistical evaluation of data
Experimental work was performed in at least three independent experiments, and each sample was analysed at least five times. The data presented in this paper are average values, and no experimental points were excluded from the study. All obtained data were stored in the Qinslab database (Prevention Medicals, Czech Republic) and, where possible, were processed and evaluated mathematically and statistically there. The results are expressed as mean ± standard deviation (SD). Photos were processed by the ColorTest program, which assigns an intensity to the individual pixels of the studied image in a given color area [19]. For preparing the publication, the data were processed using Microsoft software (USA).
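As an illustration of the stated replication scheme (at least three independent experiments, each sample measured at least five times), the mean ± SD reporting can be reproduced in a few lines of Python; the numbers below are placeholders, not data from this study:

import statistics

# Placeholder data: 3 independent experiments x 5 technical measurements.
experiments = [
    [12.5, 12.9, 12.6, 12.8, 12.7],
    [12.3, 12.6, 12.4, 12.7, 12.5],
    [13.0, 12.8, 13.1, 12.9, 13.0],
]

# Average the technical replicates first, then summarize across experiments.
experiment_means = [statistics.mean(run) for run in experiments]
mean = statistics.mean(experiment_means)
sd = statistics.stdev(experiment_means)
print(f"{mean:.2f} +/- {sd:.2f}")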
Characterization of prepared plant extracts from S. officinalis leaves
The total protein content of extracts measured using pyrogallol red decreased with increasing preparation temperature (from 85 to 55 g/L). Using the biuret test, the highest protein content (93.5 g/L) was found in the extract prepared at 20 °C. The concentration of phenolic compounds in the extracts increased in direct proportion to the preparation temperature of the phytoextract (3-5 mg/L). The content of flavonoids in extracts obtained from S. officinalis increased with extraction temperature (from 1 up to 5.5 mg/L). Based on the DPPH method, free radicals were most strongly quenched (10.5 g GA eq./L) by sage leaf extracts prepared at 80 °C.
Using the ABTS method, the ability of extracts to quench free radicals was significantly reduced with increasing preparation temperature. The color evaluation of the prepared extracts at different temperatures indicated that the color intensity decreased with increasing temperature applied for the preparation of the extract. At the highest temperature used, the color intensity decreased by 15% compared to the lowest preparation temperature.
The prepared extract was mixed with silver nitrate (1:1, 500 rpm, 25 °C) and AgNPs formation was monitored spectrophotometrically. A signal at around 450 nm in the UV-VIS absorption spectrum confirmed the presence of AgNPs in the mixture [20]. The AgNPs were characterized as follows: the hydrodynamic size ranged from 20-60 nm and the absorption spectra reached a maximum peak at 455 nm. AgNPs formation rate constants were determined by the integration method and were experimentally around 0.3 µM/s/AU. AgNPs were produced most rapidly using an extract prepared at 60 °C. The optimal time for producing the largest amount of AgNPs in solution was found to range between 24-48 h. The yield of AgNPs produced by green synthesis using sage was 65%. Simple reactions (total phenols, flavones, ability to quench free radicals, total protein) were used for basic characterization of the AgNPs surface properties. Chemical properties: the ABTS method, 40-80% of control; the DPPH method, a decrease in radical quenching by 15-55% after 15 min; total phenols (extract), 1200-1800 mg/mL eq. GA. The SEM analysis showed that the particles were mostly spherical in shape with a size of 50 nm. The SPR method determined the particle size in the interval of 20-60 nm and the zeta potential in the range of -20 to -5 mV (Fig. 2).
The effect of AgNPs on germinated plants of Zea mays
The interaction between AgNPs and autotrophs was studied. The uptake, translocation, and accumulation of AgNPs in cells depend on the cellular structure, its permeability, and the size of the nanoparticles [8]. As summarized in Fig. 1, water potential gradients around soil particles direct root growth (positive hydrotropism) and thus nutrient intake, and hydropatterning, mediated by the transcription factor ARF7 and the LDB16 gene, changes the distribution of root hairs and lateral roots along the root circumference; these recent molecular findings can bring new information on the NPs [10,21]. The plants were exposed to AgNO3 and AgNPs (0, 50 and 150 mg/L) and collected on days 4, 5 and 6 of the experiment. With increasing concentrations of AgNO3 and AgNPs, significant growth retardation, discoloration and leaf tip drying were observed. Fig. 3 shows the dependence of the mean length and weight of plant biomass on the variants tested (control, AgNO3, AgNPs). Plant biomass decreased with increasing concentration and exposure day. For all studied variants, the effect on the root system was noticeable: with increasing applied amounts of AgNO3 and AgNPs, the roots became slightly brown. The overall reduction in plant biomass was due to the plant stress response; the plant probably needed energy and substances to transport and immobilize the heavy metal. The increased root biomass is probably related to maintaining an equalized water balance of the plants.
Biological effects of AgNPs on maize plants -growth characteristics
The length of the longest root was measured for each sample. Its mean length was 12.7, 8.7 and 9.9 cm in the control sample, in the presence of AgNO3 and in the presence of AgNPs, respectively. Another observed parameter was the length of the above-ground portion of the plants. Its mean length was 15.9, 10.8 and 11.6 cm in the control sample, in the presence of AgNO3 and in the presence of AgNPs, respectively (growth reduction of about 40%).
Subsequently, the average number of roots was counted; there were no differences among the group means (control 7.4; AgNO3 group 7; AgNPs group 7.5). The fresh root weight was also determined: compared to control (246.3 mg), there was a decrease of 27.2% in the presence of AgNO3 and of 17.8% in the presence of AgNPs. We also monitored the fresh weight of the above-ground portions of the plants, which was 377.3 mg in control; for AgNO3 we recorded a decrease of 47.8% and for AgNPs a decrease of 35.6%. Compared to control, statistically significant differences were found in all studied variants at the 95% significance level (analysis was performed using Qinslab, with differences at the 95% level considered statistically significant).
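The percentage effects quoted in this section follow directly from the reported group means; a quick recomputation of the length reductions (our arithmetic on the paper's numbers) is:

# Percentage reductions relative to control, recomputed from the reported
# group means for the two length parameters.
def pct_reduction(control: float, treated: float) -> float:
    return 100.0 * (control - treated) / control

means = {
    # parameter: (control, AgNO3, AgNPs)
    "longest root (cm)": (12.7, 8.7, 9.9),
    "shoot length (cm)": (15.9, 10.8, 11.6),
}

for name, (ctrl, agno3, agnps) in means.items():
    print(f"{name}: AgNO3 -{pct_reduction(ctrl, agno3):.1f}%, "
          f"AgNPs -{pct_reduction(ctrl, agnps):.1f}%")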
Effect of AgNPs on photosynthetic pigments
Fig. 1 shows the expected effect of AgNPs on chloroplasts and thus also on the content of photosynthetic pigments. The amounts of chlorophyll a, chlorophyll b, carotenes and xanthophylls were calculated from the absorbance measurements. We found that the content of photosynthetic pigments increased (control: 675 µg/mL, AgNO3 group: 827 µg/mL, AgNPs group: 1261 µg/mL). The increase of more than 50% in the presence of AgNPs is likely caused by plant defensive reactions due to increased oxidative stress; however, this possible link should be studied further and carefully [22]. A statistically significant difference compared with control was found in all studied variants at the 95% significance level.
Conclusion
AgNPs were prepared by green synthesis using sage and were biophysically characterized. These particles were subsequently tested for their phytotoxicity in germinated maize plants. The AgNPs were chemically stable throughout the experiment and exhibited a number of biological effects on most of the analyzed parameters; however, these effects need to be further investigated.
"year": 2019,
"sha1": "bc35a4626a4fa81b140b1209a0eb0721f305b9d0",
"oa_license": "CCBYNC",
"oa_url": "https://rgu-repository.worktribe.com/preview/824238/GARGULAK%202019%20Phytotoxity%20of%20silver.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "ba9837f4d1d9956dfa624af8291f7674ae0e5151",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Singularity theorems in the effective field theory for quantum gravity at second order in curvature
In this paper we discuss singularity theorems in quantum gravity using effective field theory methods. To second order in curvature, this effective field theory contains two new degrees of freedom which have important implications for the derivation of these energy theorems: a massive spin-2 field and a massive spin-0 field. Using an explicit mapping of this theory from the Jordan frame to the Einstein frame, we show that both the cosmological and the black hole singularity theorems may not hold due to the presence of a massive spin-2 field in the particle spectrum of quantum gravity. Furthermore, we show that the massive scalar field can lead to a violation of the assumptions used to derive Hawking's singularity theorem. On the other hand, it does not affect Penrose's singularity theorem.
Introduction
The significance of singularity theorems in general relativity, first presented in the seminal papers of Penrose and Hawking [1,2], cannot be overemphasized. Since these foundational works, several adaptations and refinements of the singularity theorems have been developed (see e.g. [3][4][5][6][7]). In general, all these theorems boil down to the same principle: the assumption of some energy condition together with some global statement about space-time leads to the prediction of geodesic incompleteness somewhere in the space-time. Geodesic incompleteness is then often taken as equivalent to the existence of a singularity, although the latter is a slightly stronger statement (see e.g. [8]).
A crucial ingredient for the proof of most singularity theorems is the Raychaudhuri equation, which can be derived from the Einstein field equations. It is therefore crucial to assume classical general relativity for the singularity theorems to hold, and for any deviation from general relativity one would have to reassess the derivation of the singularity theorems, as was done, for example, for f(R) gravity [10].
It is clear that general relativity needs to be embedded in a gravitational theory which can be quantized, i.e. a theory of quantum gravity, if one accounts for the quantum properties of matter and space-time. Such a theory of quantum gravity is not known yet, but many different approaches to such a theory have been formulated. Furthermore, any theory of quantum gravity should, in the infrared limit, reduce to general relativity. Despite the lack of a unique theory of quantum gravity, quantum corrections to general relativity solutions can be calculated using effective field theory methods [11][12][13][14][15][16][17]. Calculations done in this framework apply to any ultra-violet complete theory of quantum gravity and are valid at energy scales up to the Planck mass, and thus in the entire spectrum that can potentially be probed experimentally.
It is expected that in a theory for quantum gravity singularities will be resolved, since singularities lead to pathologies both in general relativity and quantum field theory. However, singularities cannot be avoided as long as the singularity theorems hold. It is therefore an important question whether the assumptions of the singularity theorems break down in a theory for quantum gravity. A discussion of possible quantum loopholes for the singularity theorems can, for example, be found in [18].
In this work we discuss the validity of the singularity theorems in the framework of the effective field theory approach to quantum gravity. A drawback of this approach is that the theory is not valid at energy scales larger than the Planck mass, which correspond to regions of large curvature, where singularities are expected to form. We shall assume that the physics responsible for the avoidance of singularities becomes relevant at energies below the Planck mass and can thus be described within our mathematical framework; an example would be a bounce solution in FLRW cosmology, which would avoid a Big Crunch, see for example [19]. We note that this approach goes beyond general relativity and is applicable to any theory of quantum gravity. This paper is organized as follows: in the next section we derive the action for effective quantum gravity in the Einstein frame. In section 3 we discuss singularity theorems in effective quantum gravity using this action. In section 4 we conclude. Furthermore, in appendix A we discuss the classical Hawking and Penrose singularity theorems, and in appendix B we discuss a refined statement of Hawking's theorem using weakened energy conditions.
In this paper we work in the (+, −, −, −) metric signature.
Effective quantum gravity in the Einstein Frame
In this section we map the effective field theory for quantum gravity to the Einstein frame. Such mappings for R and R_µν theories have been discussed in [20][21][22][23][24][25], and the case of effective gravity without non-local interactions has been discussed in [26]. Here we adapt these approaches to include the non-local terms of the effective quantum gravity formalism. The effective action for quantum gravity can be obtained by integrating out the graviton fluctuations and potentially other massless degrees of freedom. It is known that the graviton self-interactions [27] make the form factors ill-defined, as the Wilson coefficients become gauge dependent; however, there is a well defined procedure to resolve these ambiguities [12,13]. Using the Gauss-Bonnet theorem, the resulting effective action can be rewritten in terms of the Weyl tensor C. We apply a Legendre transform to the function of the Ricci scalar, integrate the first of the resulting equations, and fix the integration constant accordingly; here we use the notation L_1^{-1} to denote the Green's function of the operator L_1. We then apply a conformal transformation to the metric, introducing a new field χ, and rewrite the action, using the fact that the Weyl tensor does not transform under a conformal rescaling of the metric. Furthermore, X represents all matter fields.
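For orientation, the curvature expansion that plays the role of the effective action in this literature (see e.g. [11][12][13]) has the schematic form sketched below; sign and normalization conventions vary between references, so this should be read as a representative sketch rather than as the paper's own equation:

\[
\Gamma[g] = \int d^4x \sqrt{|g|}\,\Big[\frac{R}{16\pi G_N}
+ c_1(\mu)R^2 + c_2(\mu)R_{\mu\nu}R^{\mu\nu} + c_3(\mu)R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}
- \alpha R \ln\frac{\Box}{\mu^2} R
- \beta R_{\mu\nu} \ln\frac{\Box}{\mu^2} R^{\mu\nu}
- \gamma R_{\mu\nu\rho\sigma} \ln\frac{\Box}{\mu^2} R^{\mu\nu\rho\sigma}\Big] + S_m[g, X].
\]

The local Wilson coefficients c_i(µ) are free parameters of the effective theory, while the non-local coefficients α, β, γ are calculable and depend only on the massless field content.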
We can drop the total divergence term, since it does not affect the equations of motion, and apply the Gauss-Bonnet theorem to rewrite the Weyl tensor. We then consider the analogous function of the Ricci tensor and apply a Legendre transform to this part of the action; we again integrate the first of the resulting equations and fix the integration constant (the resulting potential V_2 is real, which can easily be shown by evaluating the expression). We then perform another metric transformation and obtain the transformed action; note that the spin-2 field is symmetric in its indices, since R_µν is symmetric. We again drop the total derivative terms and define a new spin-2 field ξ. After this transformation, we expand the terms containing a potential using L̃ = L + O(κ), where indices on ξ are raised and lowered with g̃. We then obtain the equations of motion for the scalar field and solve the equation of motion for the Green's function (6κ²L_1)^{-1} by Fourier transformation. This yields the mass of the scalar field, which corresponds to earlier results (see e.g. [31]). A similar analysis can be carried out for the tensor field, yielding its mass (cf. [31]). From the resulting action we can then derive the equation of motion for the metric, which can be rewritten in the form of an Einstein equation with effective source terms.

3 Singularity theorems in effective quantum gravity
Massive scalar field
It is known that a massive scalar field always satisfies the null energy condition but can easily violate the strong energy condition (cf. [32,33]). Contracting the energy-momentum tensor with an arbitrary null vector v yields a manifestly non-negative quantity, so we conclude that the null energy condition is satisfied. However, contracting the combination that enters the strong energy condition with an arbitrary normalized time-like vector t yields an expression that can be either larger or smaller than 0; consequently, the strong energy condition does not necessarily hold. We conclude that the scalar field arising in effective quantum gravity could resolve cosmological singularities, but not black hole singularities.
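The omitted expressions are standard for a minimally coupled massive scalar in the (+, −, −, −) signature; the following reconstruction (not a verbatim copy of the paper's equations) makes the two statements explicit:

\[
T_{\mu\nu} = \nabla_\mu\chi\,\nabla_\nu\chi - g_{\mu\nu}\Big(\tfrac{1}{2}\nabla^\rho\chi\,\nabla_\rho\chi - \tfrac{1}{2}m_0^2\chi^2\Big),
\]
\[
T_{\mu\nu}v^\mu v^\nu = \big(v^\mu\nabla_\mu\chi\big)^2 \ \geq\ 0 \qquad (v^\mu v_\mu = 0),
\]
\[
\Big(T_{\mu\nu} - \tfrac{1}{2}T\,g_{\mu\nu}\Big)t^\mu t^\nu = \big(t^\mu\nabla_\mu\chi\big)^2 - \tfrac{1}{2}m_0^2\chi^2 \qquad (t^\mu t_\mu = 1),
\]

so the null energy condition holds identically, while the strong energy condition fails whenever the mass term dominates the gradient term, i.e. for a sufficiently massive, slowly varying field.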
Bounds on the mass of the massive scalar field
Using the results from appendix B we can derive a bound on the mass of the scalar field for which the cosmological singularity theorem still holds. First consider the action (2.34) containing only the massive scalar; eq. (2.36) then reduces to the Einstein equation sourced by the scalar field alone. Let us consider a globally hyperbolic 4-dimensional space-time with compact Cauchy hypersurface S, and assume that |χ| < χ_max is bounded towards the past of S.
Here γ̇ is a normalized tangent vector to a past directed time-like geodesic, and we have used the strong energy condition in the first line. The resulting estimate holds for any C > 0, and the right-hand side is maximized for C = (3/2) κ m_0 χ_max. By Theorem 3 we then find that M is past geodesically incomplete, provided this condition holds.
We can use the expression (2.32) for the mass of the scalar to find a condition on the Wilson coefficients. Let us first ignore the non-local terms α, β, γ; we then find the bound (3.10). We thus find that the singularity theorem holds only for a restricted combination of the Wilson coefficients, where we have assumed 3c_1(µ) + c_2(µ) + c_3(µ) > 0, as the opposite would imply that the scalar field is tachyonic. If we include the non-local contributions, we find instead a shifted condition in which only the logarithm has a complex part, accounting for the decay width of the field [34][35][36].
We can estimate the expansion parameter for our universe by assuming the FLRW metric and assuming that we live on a compact Cauchy hypersurface with a Hubble parameter that is constant along the surface; the Hubble parameter is fixed by experiment. In addition, we require an estimate for χ_max, which will rely on theoretical prejudice. However, for the effective action to be consistent, one would expect that both the scalar and tensor fields arising in the Einstein frame do not exceed the Planck scale; we thus make the rough estimate χ_max ≈ κ^{-1}. Furthermore, the non-local part leads to a correction for which we have used the known values of α, β, γ assuming only standard model fields [27], and we have set the cutoff scale µ ≈ κ^{-1}. These non-local corrections are thus negligible compared to the local contributions. We conclude that the singularity theorem holds only for sufficiently large Wilson coefficients; the singularity theorem can thus be violated for a large range of values.
The scalar and spin-2 particles give rise to corrections to the Newtonian potential. The Eöt-Wash experiment [37] sets bounds on deviations from this potential. Assuming that the corrections do not cancel each other, both corrections should satisfy these experimental bounds, i.e. m_0, m_2 ≥ 10^{-3} eV/c² (3.20). Hence, the singularity theorem can be violated for all feasible values of the Wilson coefficients.
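The correction referred to here is presumably of the classic Stelle form for gravity quadratic in curvature, in which the two massive modes contribute Yukawa terms of opposite sign; we quote it as a standard result of the literature rather than as the paper's own equation:

\[
\Phi(r) = -\frac{G m}{r}\Big(1 + \tfrac{1}{3}e^{-m_0 r} - \tfrac{4}{3}e^{-m_2 r}\Big).
\]

The repulsive sign of the spin-2 term reflects the ghost nature of that mode, discussed in the next subsection.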
It might seem counterintuitive that tiny Wilson coefficients already lead to a breakdown of the assumptions of the singularity theorems, while large Wilson coefficients do not, particularly since the smaller the Wilson coefficients, the closer the action is to the Einstein-Hilbert action. However, small Wilson coefficients lead to very massive scalar fields, which can violate the strong energy condition, as can be seen in eq. (3.4). Furthermore, the Einstein equation is a second order differential equation, while the introduction of the terms quadratic in the Ricci scalar and tensor makes it a fourth order equation. As is well known, solutions of differential equations are generically not stable against perturbations that change the class of the differential equation (cf. [38] for a discussion of this fact in the context of general relativity).
Spin-2 massive ghost
Let us now turn to the massive spin-2 field. Since this field is a ghost, one would expect it to violate the null energy condition. Indeed, we can write its energy-momentum tensor explicitly and, in order to show that the field can violate the null energy condition, construct a counterexample. We consider the special case in which the tensor field is aligned with the metric. The resulting energy-momentum tensor, contracted with an arbitrary null-like vector v, can then become negative, where we have assumed the field ξ to be an eigenfunction of ∇_µ∇_ν with eigenvalue k_µ k_ν, as is the case if the field exhibits sinusoidal behavior with wave vector k.
Since the spin-2 field can violate the null energy condition, it can violate the strong energy condition as well. We conclude that the massive spin-2 field can resolve both kinds of singularities, since it does not satisfy any of the required energy conditions. The fact that the ghost field can resolve singularities is less of a surprise if one takes into account that the ghost field leads to a repulsive contribution to Newton's potential [39,40], and could thus result in an effective repulsive force at small distances.
Conclusion and Outlook
It is well known that the classical singularity theorems [1,2] only hold if general relativity is assumed. Quantum gravity, however, leads to deviations from general relativity, as can easily be shown using effective field theory methods. Furthermore, one of the main objectives of quantum gravity theories is to resolve singularities. In this work, we have discussed the validity of the singularity theorems in the context of an effective field theory for quantum gravity at second order in curvature.
We have considered singularity theorems by making an explicit mapping to the Einstein frame. It is well known that the local terms in this theory give rise to an additional scalar and tensor field at second order in curvature. We have shown that the inclusion of the nonlocal terms at this order only give rise to a shift in the mass of these fields.
We have then shown that the massive spin-2 ghost field can easily violate the null energy condition and thus the strong energy condition as well. Although this is expected from a ghost field, it shows that the ghost field can be useful for resolving singularities in quantum gravity. We stress that the ghost field in effective theories for quantum gravity is not problematic, since it must be treated as a classical field in this framework [40].
Furthermore, we have shown that the scalar field cannot resolve black hole singularities but can for certain values of the Wilson coefficients lead to resolution of cosmological singularities. These bounds follow purely from the singularity theorems formulated for weakened energy conditions in [6]. It should be noted that cosmological singularity avoidance in this framework has already been found in [19]. On the other hand, black hole solutions do not get corrected at this order [28] in the effective field theory framework, which is an indication that the classical black hole singularity persists at this order in an effective theory. However, other examples of singularity resolution in various theories such as higher derivative gravity [41,42], string theory [43] and polynomial gravity models [44] have been found.
It is important to notice that the breakdown of the assumptions of Hawking's and Penrose's singularity theorems does not imply the non-existence of singularities. However, it does imply that singularities can potentially be avoided, which is impossible if the assumptions hold. In particular, in the black hole case, where the ghost field violates the conditions for the singularity theorem, it is known that there are no corrections to the metric at second order in curvature. The standard general relativity singularity is still present at this order in the effective field theory. A potential resolution of the singularity must come from higher order curvature terms in the action. Alternatively, one could also hope that other black hole solutions [45][46][47] arising at second order in curvature may not be affected by singularities and are thus the solutions which are relevant physically.
Furthermore, we should note that these results only hold up to second order in curvature. Inclusion of higher orders might force us back into a regime where the singularity theorems hold, or might draw us further away from this regime. The effects of these terms are not negligible, since singularities form in highly curved regions of space-time. However, it is interesting that singularities can potentially already be resolved at second order in curvature, which can help guide the way to singularity resolution in ultra-violet complete theories of quantum gravity.
Acknowledgments
This work is supported in part by the Science and Technology Facilities Council (grant number ST/P000819/1).
A Classical singularity theorems

A.1 Hawking's cosmological singularity theorem
In this appendix we state and prove Hawking's singularity theorem [2]. Theorem 1. Let M be a globally hyperbolic n-dimensional space-time with n ≥ 2 and a Cauchy surface S. Assume that ∃C > 0 such that θ_x < −C ∀x ∈ S, where θ = (1/2) g^{µν} ∂_τ g_{νµ} is the expansion parameter. Furthermore, assume that matter within this space-time satisfies the strong energy condition for every normalized time-like vector t^µ everywhere in the future of the Cauchy surface S. Then the space-time M is geodesically incomplete towards the future of S. Moreover, if θ_x > C ∀x ∈ S and the strong energy condition is satisfied everywhere in the past of S, then M is geodesically incomplete towards the past of S.
Proof. Consider an n-dimensional globally hyperbolic space-time M with Cauchy surface S. Then we can find an open neighborhood Ŝ ⊃ S and a coordinate system on Ŝ in which the metric takes the Gaussian normal form ds² = dτ² − g_{ij}(τ, x) dx^i dx^j. To prove Hawking's singularity theorem [2], we write down the Raychaudhuri equation [48] for the expansion θ and the shear σ_µν, defined in terms of V = det(g) (A.6) and its time-derivative V̇ = ∂_τ V. Here θ and σ_µν are taken along a time-like path γ parametrized by τ with normalized tangent vectors t^µ, and γ(0) ∈ S.
If we use the Einstein field equation, we can rewrite the Raychaudhuri equation in terms of the energy-momentum tensor. Assuming the strong energy condition, and assuming ∃C > 0 such that θ_x(0) < −C ∀x ∈ S, we can integrate (A.10) and find that θ diverges to −∞ within proper time τ ≤ (n−1)/C (A.12). Rewriting in terms of V and integrating, we find that V vanishes there. We thus conclude that any geodesic emanating from the Cauchy surface will develop a focal point for 0 < τ ≤ (n−1)/C. Furthermore, since S is a Cauchy surface and M is globally hyperbolic, any point y ∈ M is connected to a point x ∈ S through a causal path of maximal proper time. We thus conclude that no geodesic γ(τ) can be extended to τ ≥ (n−1)/C. Therefore, the space-time is geodesically incomplete towards the future. This proves the future version of the theorem. The past version immediately follows by inverting the time direction in the proof.
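The displayed equations referenced here are standard; for a hypersurface-orthogonal time-like geodesic congruence the Raychaudhuri equation and the focusing estimate used in this proof read, in a common sign convention (a reconstruction, with the A-series equation numbers omitted):

\[
\frac{d\theta}{d\tau} = -\frac{\theta^2}{n-1} - \sigma_{\mu\nu}\sigma^{\mu\nu} - R_{\mu\nu}t^\mu t^\nu \ \leq\ -\frac{\theta^2}{n-1},
\]

where the inequality uses R_{µν}t^µt^ν ≥ 0, which follows from the strong energy condition via the Einstein equation. Integrating,

\[
\frac{1}{\theta(\tau)} \ \geq\ \frac{1}{\theta(0)} + \frac{\tau}{n-1},
\]

so if θ(0) < −C then θ → −∞ at some τ ≤ (n−1)/C, which is the focal point used in the argument.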
We conclude this subsection by mentioning an immediate consequence of the theorem: if there exists a Cauchy surface S such that the Hubble parameter satisfies H ≥ H_0 > 0 on the entire surface S, and the strong energy condition holds everywhere in the past of this surface, then the space-time is geodesically incomplete towards the past. More precisely, no geodesic can be extended beyond τ = H_0^{-1} towards the past. To see this, we recall that for the FLRW metric the Hubble parameter is H = ȧ/a, so that the expansion parameter defined above satisfies θ = (n − 1)H.
A.2 Penrose's black hole singularity theorem
In this appendix we state and prove Penrose's singularity theorem [1]. Here we closely follow the proof provided in [8].
Theorem 2. Let M be a globally hyperbolic n-dimensional space-time with n ≥ 3 and a non-compact Cauchy surface S. Assume that M contains a compact trapped surface U. Furthermore, assume that matter within this space-time satisfies the null energy condition for every null-like vector v^µ everywhere in the future of the trapped surface U. Then the space-time M is null-geodesically incomplete towards the future of U.
Proof. Consider a globally hyperbolic n-dimensional space-time with non-compact Cauchy surface S, and a compact trapped surface U. Then we can find an open neighborhood Û ⊃ U and an adapted coordinate system on Û (cf. [8,49]), where x^A is an arbitrary but fixed local coordinate system on the (n − 2)-dimensional surface U, and q and c are respectively a scalar and a vector function of the coordinates. In this metric we can evaluate the Ricci tensor. We can define the area of a bundle of orthogonal null geodesics locally, which allows us to define the null expansion θ, where the dot represents a derivative with respect to the affine parameter u, as well as the null shear. We then obtain the null Raychaudhuri equation. Furthermore, we can use the Einstein equation and the fact that g_uu = 0 to express the curvature term through the energy-momentum tensor. Imposing the null energy condition, and using that U is a trapped surface, so that ∃C > 0 with θ_x < −C ∀x ∈ U, one can integrate this equation in a similar way as was done in the proof of Theorem 1. One finds that all future going null-like geodesics develop a focal point within an affine distance 0 < u ≤ (n−2)/C.
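The null analogue of the focusing argument is again standard; in a reconstruction with our notation (ℓ^µ the null tangent, σ̂_µν the null shear), the null Raychaudhuri equation and the null energy condition give

\[
\frac{d\theta}{du} = -\frac{\theta^2}{n-2} - \hat\sigma_{\mu\nu}\hat\sigma^{\mu\nu} - R_{\mu\nu}\ell^\mu\ell^\nu \ \leq\ -\frac{\theta^2}{n-2},
\]

and integrating exactly as in the time-like case shows that θ_x < −C on U forces a focal point within affine distance u ≤ (n−2)/C.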
Let us now assume that all null geodesics can be extended beyond this focal point, and let us pick such a geodesic l arbitrarily. Then at least a small segment of this geodesic is prompt, and lies in the lightcone ∂J⁺(U). Furthermore, the part of l that lies in ∂J⁺(U) is connected, and the part beyond its first focal point cannot be in ∂J⁺(U), since it is not prompt. Therefore l ∩ ∂J⁺(U) is a finite non-empty interval, which has to be closed, since ∂J⁺(U) is closed in M.
If we take an arbitrary point p ∈ ∂J⁺(U), then this point can be reached by a null geodesic originating from U. This point is thus determined by the point q ∈ U where the geodesic emanates, the value of the affine parameter u measured along the geodesic, and the direction (i.e. ingoing or outgoing) of the geodesic. Since U is compact and since the affine parameters measured along the geodesics range over a compact interval, we find that ∂J⁺(U) is compact.
However, by construction ∂J⁺(U) is an achronal codimension-1 submanifold of M. Furthermore, by assumption M is a globally hyperbolic manifold with non-compact Cauchy hypersurface S, and thus does not allow for a compact achronal codimension-1 submanifold (see e.g. [8]). Hence, we arrive at a contradiction and conclude that at least one of the future going null geodesics orthogonal to U cannot be extended beyond an affine distance (n − 2)/C, which proves the theorem.
B Singularity theorems for weakened energy conditions
In this section, we state a theorem and its proof from [6]. The theorem is similar to Hawking's cosmological singularity theorem, but uses relaxed conditions on the energy-momentum tensor.
Theorem 3. Let M be a globally hyperbolic n-dimensional space-time (n ≥ 2) with a compact Cauchy surface S. Assume that ∃C ≥ 0 such that along every future directed geodesic γ issuing orthogonally from S a suitable bound on the initial expansion in terms of the Ricci curvature along γ holds, where x_0 = γ(0) ∈ S, θ(x_0) is the expansion at x_0, and γ̇(τ) is the normalized time-like tangent vector of γ(τ). Then M is geodesically incomplete towards the future of S.
If the analogous bound holds with γ a past directed geodesic, then M is geodesically incomplete towards the past of S.
For the proof we will use the following lemma which is proved in [6].
Lemma 1. Consider the initial value problem ẋ(t) = x(t)² q(t) + p(t), where q(t) and p(t) are continuous on [0, ∞) and q(t) > 0 ∀t ∈ [0, ∞). If suitable integral conditions on p and q are satisfied, the solution x(t) cannot be continued to all of [0, ∞).

Proof of Theorem 3. We follow the same argument as in the proof of Theorem 1, find the corresponding Raychaudhuri equation, and apply Lemma 1 to it.

Let us finally note that one can derive a similar theorem for the black hole case [6].
"year": 2020,
"sha1": "d0872950883458e6e28f36627505dcdbcf0e11eb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-1997/6/10/171/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "ee885d2b963ae86a3ffe546cc54f0780837c39d8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
On the Role of Bonding Time on Microstructure and Mechanical Properties of TLP Bonded Al/Mg2Si Composite
Transient liquid phase diffusion bonding of an aluminum metal matrix composite with 15 wt.% Mg2Si reinforcement particles, using a Cu powder interlayer at 560 °C for different bonding times, has been studied. Three different zones were identified at the bonding line: athermally solidified zone, isothermally solidified zone and base metal. By increasing the bonding time, due to the diffusion of copper into the substrate, the width of the athermally solidified zone decreased, the zone became more homogeneous and the amount of intermetallic phase (CuAl2) decreased. Therefore, the shear strength increased to a maximum of 60 MPa for the samples with a holding time of 5 h at the bonding temperature.
Introduction
Aluminum alloys are substantially employed in the automotive and aerospace industries due to their formability, superior strength-to-weight ratio, corrosion resistance, recyclability and castability [1,2]. Recently, the substantial need for lightweight materials in the aerospace and automotive industries has prompted research to improve the mechanical properties of lightweight alloys such as aluminum and magnesium. In this sense, efforts have been made to develop aluminum metal matrix composites (AlMMCs) by adding different types of reinforcement particles, such as SiC and Al2O3, to improve the mechanical properties of the MMC [3][4][5].
The emergence of additive manufacturing processes such as selective laser melting (SLM) and selective laser sintering (SLS) has changed the traditional way of manufacturing metallic components [6][7][8][9]; however, traditional manufacturing processes such as casting, forging and extrusion are still the preferred techniques for producing aluminum alloy and AlMMC parts, due to the low absorptivity of the laser beam, tenacious oxide films, and the low boiling point of aluminum. Also, aluminum alloys are reactive and require a high vacuum or a high-purity inert gas atmosphere for SLM and SLS processes [1,10,11]. A newly developed type of aluminum composite, reinforced with Mg2Si particles, has been used in the automotive industry due to the need for a lower weight-to-strength ratio, good wear resistance and good castability [4], especially in brake systems, where a wear-resistant composite is preferable [12]. The Al/Mg2Si composite is produced by an in-situ technique, thus offering lower production cost and scalability to large production volumes [4]. However, difficulties in joining AlMMCs hinder widespread industrial application [3]. Different techniques have been used to join AlMMCs, but each has its own drawbacks [5,13,14]. For example, mechanical fastening may damage the reinforcement particles of the composite, and fusion welding may form a low-viscosity melt, resulting in the evolution of occluded gas; this occluded gas can cause extensive cracking in the heat affected zone and weld porosities. Also, the high temperature of the melt can cause a reaction between the reinforcement particles and the matrix, producing detrimental intermetallic compounds at the interface [13]. Furthermore, in solid state diffusion bonding of AlMMCs, the presence of a tenacious and stable aluminum oxide layer on the surface can inhibit metal-to-metal contact, resulting in an improperly joined interface. Overcoming this problem requires the application of high pressure to disrupt the oxide layer and force the surfaces into close contact, causing excessive plastic deformation and loss of dimensional accuracy [5].
Transient liquid phase (TLP) bonding has been widely used for joining AlMMCs [5,13,14]. TLP diffusion bonding involves applying an interlayer between the two pieces to be joined, heating to a temperature at which a thin liquid layer forms between the two pieces, and solidification of the melt at constant temperature due to inter-diffusion of atoms out of the liquid [5]. The TLP process has the advantages of low bonding temperature, low bonding pressure, absence of a heat affected zone, disruption of the surface oxide film by the transient liquid phase, and a low probability of unfavorable reactions.
Maity et al. [15] investigated the role of holding times of up to 6 h in the microstructure evolution of TLP bonded extruded Al 6061-15 wt.% SiCp composite and achieved a bond strength of 90% of that of the as-received composite using a copper foil interlayer. The common interlayer for TLP bonding of aluminum composites is copper, owing to the low bonding temperature: the eutectic temperature of the Al-Cu system is 548 °C, far below the solidus temperature of the base material, which prevents melting or distortion of the base material [15][16][17]. However, other elements such as Ni [18], or a mixture of Cu and Ni [19], have also been adopted to join aluminum composites using the TLP technique. Ghayoor et al. [19] demonstrated the feasibility of using mixed Cu-Ni powder as an interlayer in TLP bonding of Al/Mg2Si, achieving 90% of the shear strength of the as-received composite.
In this study, the TLP bonding process using copper powder was adopted to join the Al/Mg2Si composite, and the role of holding time on microstructure evolution and mechanical properties was investigated.
Materials and Preparation
The base material, the Al/Mg2Si composite, was produced by gravity metal mold casting and contained 15 wt.% Mg2Si particles with an average particle size of 30 µm. The chemical composition of the Al/Mg2Si is listed in Table 1. The bonding specimens were cut into 8 mm × 8 mm × 5 mm samples out of the parent ingot using wire electrical discharge machining (EDM). Cu powder (Merck Co.) with a particle size of <67 µm and a purity of >99% was utilized as the interlayer for TLP bonding of the Al/Mg2Si composite (Table 2).
Joining Procedure
The bonding specimens were polished to 1000 grit size and ultrasonically cleaned in an acetone bath for 10 min. Afterward, a batch of 30 mg of the Cu powder was applied on the ultrasonically cleaned surface of one specimen, and a slurry was formed on the surface with the aid of ethanol to make a uniform interlayer. The other specimen was placed on the spread interlayer and pressed to make a joining unit. The joining unit was then held in a graphite jig fixture, and a pressure of 0.2 MPa was applied during the entire process to keep the joining surfaces firmly in close contact. After that, the assembly was placed in a tube furnace purged with argon (99.995%) gas and heated at a rate of 10 °C/min to the bonding temperatures of 540 °C, 550 °C and 560 °C. The bonding specimens were held in the furnace for four different times, 30 min, 1 h, 2 h, and 5 h, and then the furnace was cooled down to room temperature. The specimens were labeled Cu-0.5, Cu-1, Cu-2, and Cu-5, respectively.
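As a geometric sanity check (our arithmetic, not stated in the paper), the 30 mg of Cu powder spread over the 8 mm × 8 mm faying surface corresponds to a nominal fully dense interlayer thickness of

\[
t = \frac{m}{\rho_{\mathrm{Cu}}\,A} = \frac{0.030\ \mathrm{g}}{8.96\ \mathrm{g\,cm^{-3}} \times 0.64\ \mathrm{cm^{2}}} \approx 52\ \mu\mathrm{m},
\]

comparable to the <67 µm powder particle size; the as-placed interlayer is therefore essentially one particle layer thick, and the real, porous powder bed would be somewhat thicker, consistent with the interlayer porosity discussed later.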
Microstructural Analyses
In order to analyze the microstructure of the joined interface, the bonded specimens were cross-sectioned and polished using standard techniques. Microstructure characterization was done by optical microscopy and scanning electron microscopy (SEM, TESCAN) equipped with an Oxford Instruments X-Max 50 silicon drift energy dispersive X-ray spectroscopy (EDS) system, at a beam acceleration voltage of 15 kV, in both secondary electron (SE) and backscattered electron (BSE) modes at a working distance of 12.3 mm. X-ray diffraction (XRD) analysis was conducted on the bonded interface using a Philips X'Pert Pro X-ray diffractometer with a Cu Kα radiation source at 45 kV and 40 mA; the continuous scan mode was used and the scan rate was 5°/min. For detection of the different phases, Philips PANalytical X'Pert software was adopted. The shear strength of the bonded joints was evaluated by means of a specially designed fixture, as illustrated in Figure 1, in a tensile testing machine (SANTAM) with a loading speed of 0.1 mm/min. For each bonding condition, three specimens were tested and the average value was reported as the shear strength (bond strength).
Bonding Temperature
The samples bonded at 540 °C fell apart after being pulled out of the fixture, due to insufficient bond strength between the specimens. At 540 °C, the temperature was not high enough to form a liquid in the interlayer; therefore, no bond was formed between the specimens. Furthermore, the specimens bonded at 550 °C showed very low bond strength and there was a considerable gap between the specimens, probably due to the low flowability of the formed liquid. The results of these experiments led us to select a bonding temperature of 560 °C, which is above the eutectic temperature of the Al-Cu system and below the solidus temperature of the base material [20], and which could form adequate liquid with high flowability to fully cover the bonding surface and fill the pores, as illustrated in detail in the following sections.
Microstructure of Base Material and the Bonding Zone

Figure 2a shows the microstructure of the as-polished Al/Mg2Si MMC. The microstructure consists of primary Mg2Si reinforcement particles, eutectic α-Al/Mg2Si, and the α-Al phase, as labeled in Figure 2a. Figure 2b shows the bonding interface of sample Cu-2. The interface line was not straight across the bonding line, probably for the following reasons: (1) the as-received base material contained many porosities due to the casting process, so if these porosities were located at the bonding interface, the liquid formed at the bonding temperature would flow into them and change the shape of the interface; and (2) the Cu powder interlayer, owing to the inherent interstitial space between the powder particles, could act like porosities in the matrix and hinder the liquid from spreading homogeneously over the bonding interface.

Three distinct zones could be identified at the bonding line:

(1) Athermally Solidified Zone (ASZ), which solidified on cooling from the bonding temperature to room temperature.
(2) Isothermally Solidified Zone (ISZ), which solidified at the constant bonding temperature and in which segregation of reinforcement particles was evident.
(3) Base Material (BM), which was not affected by the elevated temperature.
Formation of the three different zones in TLP bonding of the Al/Mg2Si MMC can be explained by the Al-Cu phase diagram (Figure 4). In the first stage, copper atoms from the interlayer diffuse into the base metal (aluminum), and aluminum atoms diffuse from the base metal into the copper interlayer. As the diffusion of copper into the aluminum continues, the copper content of the contact region between the base metal and the interlayer can exceed the C_αL composition, and these regions start to melt, forming a liquid at the interface. Then the copper interlayer and the adjoining top layer of the base metal dissolve completely, so the liquid at the bonding interface reaches a chemical composition between C_Lα and C_Lβ. Gradually, this liquid layer widens (second stage), dissolving the reinforcement particles trapped in the Al matrix.
The concentration of copper atoms at the interface is higher than in the top layer of the base metal in contact with the liquid. The surface of the base metal (Al) in contact with the liquid has a chemical composition of CαL and the liquid has a chemical composition of CLα. From this moment onwards, isothermal solidification can start by nucleation of solid embryos on the base metal surface (third stage); these nuclei have a chemical composition equal to CαL. The solid/liquid interface tries to maintain a thermodynamic balance between CLα in the liquid and CαL in the solid, but since establishing such a balance requires solid-state diffusion, the solid/liquid interface moves very slowly and this step is time-consuming (fourth stage).

As shown in the Cu-0.5 sample micrograph (Figure 3), an athermally solidified zone had formed, implying that the isothermal solidification stage (third stage) had not been completed. In TLP bonding, if sufficient time for isothermal solidification is provided, the melt solidifies completely isothermally and the final structure is a solid phase with CαL composition. However, if the bonding time is not sufficient, as was the case in our study, the melt continues to solidify isothermally only while it is held at the bonding temperature; when the holding time ends, the temperature drops and the remaining melt solidifies conventionally, forming an athermally solidified zone [20].

Figure 5 shows the SEM micrograph of sample Cu-2. According to the color contrast in Figure 5, three different phases can be detected along the bonding line: a darkest phase, a dark phase, and a bright phase. According to the EDS analyses, the darkest phase, with an irregular morphology (pointed out in Figure 5), contained 64 at.% Mg and 35 at.% Si, representing Mg2Si, the reinforcement particles. The dark phase, which is spread over the entire sample, contained 98 at.% Al, representing α-Al. Lastly, the bright phase contained 64 at.% Al and 34 at.% Cu, representing CuAl2, the most probable phase according to the Al-Cu phase diagram [20]. The XRD analyses of the fracture surface of the bonded joint (Figure 6) also confirmed the results of the EDS analyses.
Athermally Solidified Zone (ASZ)
Figure 7 shows optical and SEM micrographs of the bonding zone, demonstrating porosities in the interlayer. Because copper powder with a large particle size (<67 µm) was used as the interlayer, some porosities remained in the ASZ after the interlayer melted and solidified, even though the joining unit was pressed. The SEM micrograph in Figure 7b and the EDS analyses illustrated the presence of two different phases in the ASZ: α-Al (dark phase) and CuAl2 (bright phase). The EDS spot analyses showed that the composition of the bright phase was 62 at.% Al and 36 at.% Cu; given the diffusion of Al into the Cu interlayer, the most probable phase according to the Al-Cu phase diagram (Figure 4) is the intermetallic CuAl2.
Isothermal Solidification Zone (ISZ)
During isothermal solidification, nucleation of the α-Al phase initiates at the interface of the liquid and the Mg2Si particles, which requires lower activation energy because the nucleation is heterogeneous [21]. According to the Al-Mg2Si phase diagram [22], the melting temperature of Mg2Si is 580 °C; therefore, at the bonding temperature of 560 °C, the eutectic and fiber-like Mg2Si did not melt. Figure 8 shows the agglomeration of reinforcement particles in the ISZ. The reinforcement particles attached to each other and agglomerated owing to the tendency to lower the surface energy, as agglomerated particles reduce the surface energy between the particles and the liquid [21]. The eutectic Mg2Si was engulfed in the α-Al phase or joined to the primary Mg2Si, changing the rectangular shape of the primary Mg2Si to a more spherical shape (Figure 8).
As shown in Figure 3, Mg2Si particles were segregated and distributed non-uniformly on the two sides of the bonding line in the ISZ. According to the literature [13,15-17], primary α-Al is very efficient at rejecting insoluble particles, pushing them ahead of the solid/liquid interface; thus, particles segregate at the last stage of solidification. In this regard, a critical velocity of the solid/liquid interface has been reported: below the critical velocity, insoluble particles are pushed by the moving interface, and above it, the particles are engulfed. Stefanescu et al. [23] defined the critical velocity of the solid/liquid interface as:

Vc = (Δσ·α)/(12·η·R)   (1)

where Vc is the critical velocity, Δσ is the surface energy, α is the shape factor of the interface, η is the viscosity of the melt, and R is the particle size. The critical velocity (Vc) is directly related to the surface energy (Δσ) and inversely related to the particle size (R) [23]. Since the first two stages of TLP bonding (dissolution of the interlayer and widening of the liquid) take place almost instantly, the velocity of the solid/liquid interface motion is substantially high [15]. As a result, Mg2Si particles were not pushed away from the bond centerline by the solid/liquid interface during widening of the liquid.
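As a rough illustration of how Eq. (1) separates pushing from engulfment, the sketch below evaluates the critical velocity for a few particle sizes; the numerical values of Δσ, α, and η are placeholder assumptions for this kind of melt, not measurements from this study.

```python
# Sketch: particle pushing vs. engulfment via the critical-velocity criterion of Eq. (1).
# All material constants below are illustrative placeholders, not measured values.

def critical_velocity(delta_sigma, alpha, eta, radius):
    """Critical solid/liquid interface velocity, Vc = (delta_sigma * alpha) / (12 * eta * radius)."""
    return (delta_sigma * alpha) / (12.0 * eta * radius)

DELTA_SIGMA = 0.1   # J/m^2, assumed surface-energy difference
ALPHA = 1.0         # dimensionless interface shape factor, assumed
ETA = 1.3e-3        # Pa*s, assumed viscosity of the melt

for radius_um in (5.0, 20.0, 67.0):  # particle radii in micrometres
    vc = critical_velocity(DELTA_SIGMA, ALPHA, ETA, radius_um * 1e-6)
    print(f"R = {radius_um:5.1f} um -> Vc = {vc:.3e} m/s")

# An interface moving slower than Vc pushes the particle ahead of it (segregation,
# as observed in the ISZ); one moving faster than Vc engulfs the particle.
```

Because Vc scales as 1/R, larger particles are engulfed at lower interface speeds, consistent with the fast early stages engulfing particles and the slow isothermal stage pushing them toward the centerline.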
In the next stage (isothermal solidification), the velocity of the solid/liquid interface motion is very slow, because solid-state diffusion controls the process, which takes several hours to complete. During isothermal solidification, owing to the low velocity of the solid/liquid interface, most Mg2Si particles were pushed by the moving solid/liquid interface, as evident in the present work (Figure 3). Therefore, particles segregated at the bond centerline, along with the residual liquid phase that solidified during the cooling stage. At the isothermal solidification stage, the α-Al nucleates. If there is sufficient time for the isothermal solidification stage to be completed, all the liquid solidifies at the bonding temperature; on subsequent cooling, as the solubility limit of Cu in Al decreases, CuAl2 precipitates in the Al matrix. In our study, due to insufficient time for completion of the isothermal solidification stage, even in the Cu-5 sample, the residual liquid solidified conventionally, resulting in a final microstructure of:
(1) isothermally solidified α-Al;
(2) primary α-Al precipitated from the liquid (CαL), solidifying from the bonding temperature down to the eutectic temperature; and
(3) the eutectic (CuAl2 + α) phase, which solidified from the liquid below the eutectic temperature.

The intermetallic phase CuAl2 decreases strength. All of the samples showed some quantity of CuAl2; it is therefore concluded that isothermal solidification had not been completed even after 5 h. With increasing bonding time and growth of the isothermal α-Al, the amount of CuAl2 (the brightest phase contrast) decreased, as shown in Figure 9 (optical micrographs of the bonding interface of samples Cu-0.5 and Cu-5).

Figure 10 shows the central bonding zones of all the samples. The width of the ASZ was calculated as the mean of ten measurement points (reported in Table 3). As shown in Figure 10, with increasing bonding time more Cu could diffuse from the interlayer into the base metal; therefore, the width of the ASZ decreased with increasing holding time. The width of the segregated zone is related to the particle size and to the amount of liquid phase (which in turn depends on the thickness of the interlayer and the bonding temperature) [17]; therefore, when the amount of liquid phase in the bonding zone decreased, the number of reinforcement particles trapped in the liquid phase also decreased, resulting in less particle segregation in the bonding zone.
Shear Strength of Samples
The shear strength results are given in Figure 11. Shear strength was calculated by dividing the maximum force by the area of the bonding zone. With increasing bonding time and continued diffusion, the bonding zone became more homogeneous and, furthermore, the number of microporosities, which are preferred sites for crack initiation, decreased [24]. Therefore, with increasing bonding time, the shear strength increased to a maximum of 60 MPa, which is 75% of that of the as-received composite. Moreover, with increasing bonding time, the width of the ASZ decreased due to the diffusion of Cu; thus, the formation of intermetallic phases, which could potentially weaken the bonding zone, could be avoided. It can be concluded that the increase in shear bonding strength is related to the decrease in the width of the bonding zone due to diffusion and to the smaller amount of the brittle phase (CuAl2). In another study by the same authors [19], a mixed Cu-Ni powder was used as the interlayer for TLP bonding of Al/Mg2Si. The mixed Cu-Ni powder yielded a higher shear strength (70 MPa) than the Cu interlayer, owing to the combined effect of a lower amount of the brittle intermetallic phase (CuAl2) and the higher viscosity of the formed liquid, which produced a narrower ISZ in the samples bonded with the Cu-Ni mixed powder.
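Since shear strength is simply the failure force divided by the bonded area, the computation can be shown in a few lines; the force and area below are hypothetical values chosen to reproduce the reported 60 MPa, and the as-received strength is the value implied by the stated 75% ratio.

```python
# Sketch: shear strength from a lap-shear test, tau = F_max / A_bond.
# The force and area are hypothetical values chosen to reproduce the reported 60 MPa;
# the as-received strength (80 MPa) is implied by the stated 75% ratio.

f_max_newton = 1500.0    # assumed maximum force at failure, N
bond_area_mm2 = 25.0     # assumed bonded area, mm^2

tau_mpa = f_max_newton / bond_area_mm2        # N/mm^2 is numerically equal to MPa
as_received_mpa = 60.0 / 0.75                 # strength of the as-received composite

print(f"shear strength = {tau_mpa:.1f} MPa "
      f"({100.0 * tau_mpa / as_received_mpa:.0f}% of the as-received composite)")
```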
Conclusions
(1) Three zones were identified at the bonding interface: (i) an athermally solidified zone at the center of the bonding line, which contained porosities due to the use of Cu powder; (ii) an isothermally solidified zone, characterized by the segregation of reinforcement particles; and (iii) the base metal, whose microstructure was not affected even with increasing temperature.
(2) With increasing bonding time, the width of the ASZ decreased owing to the diffusion of copper from the interlayer into the base metal. In addition, as the amount of liquid formed at the bonding temperature decreased, fewer reinforcement particles were engulfed in the liquid; therefore, the width of the segregated zone (ISZ) also decreased.
(3) With increasing bonding time and diffusion of Cu from the interlayer, the amount of the intermetallic phase (CuAl2) in the ISZ decreased.
(4) The maximum shear strength, 60 MPa (75% of the shear strength of the as-received composite), was obtained for the Cu-5 sample. The decrease in the amount of CuAl2, the reduced width of the ASZ, and the homogenization of the bonding zone due to the diffusion of copper are considered the reasons for the increase in shear strength with increasing bonding time.
"year": 2019,
"sha1": "50d342274df96c0387275c766a7c9a3a5e87ce89",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-477X/3/3/66/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "78db426e3e43ce7d389e253a1490076b14386a73",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Comparative study of norepinephrine and phenylephrine infusion for prophylaxis against post-spinal hypotension in patients undergoing elective cesarean section
Background: Maternal hypotension is a physiological response during cesarean section (CS) with spinal anesthesia (SA) and can cause adverse maternal and fetal outcomes. Aim: The present study aimed to compare the efficacy and safety of norepinephrine and phenylephrine infusion during CS under SA. Methods: In a randomized clinical trial, 164 ASA I and II parturients undergoing CS under SA were randomized to receive a prophylactic infusion of norepinephrine 0.05 µg/kg/min (group N) or phenylephrine 0.75 µg/kg/min (group P). The primary outcome was the incidence of post-spinal hypotension. Secondary outcomes were the incidences of severe post-spinal hypotension, reactive hypertension, and bradycardia; the total number of rescue vasopressor bolus doses required; the number of physician interventions; nausea and vomiting; and Apgar scores at 1 and 5 minutes. Results: The incidence of post-spinal hypotension was 24% in group P and 29.26% in group N, and the incidence of severe post-spinal hypotension was 3.6% in group P and 2.4% in group N; both were comparable (p-value > 0.05). The number of rescue vasopressor bolus doses required and the incidences of bradycardia and reactive hypertension were comparable between the two groups. Nausea and vomiting were infrequent in both groups and comparable. The number of physician interventions needed was significantly higher in group P (39.02%) than in group N (28.04%) (p-value < 0.05). Conclusion: Norepinephrine is associated with a lower number of physician interventions than phenylephrine; otherwise, hemodynamics is comparable when either is used to prevent hypotension.
Regional central-neuraxial anesthesia, primarily spinal anesthesia (SA), is the anesthetic technique of choice for elective cesarean section (CS). 1 Maternal hypotension is a physiological response during CS with SA and is thought to be a major factor in the development of adverse maternal outcomes such as nausea, vomiting, dizziness, and even cardiovascular collapse. Additionally, fetal acidosis, hypoxia, and even postnatal neurological injury are concerns arising from compromised placental perfusion. Therefore, it is crucial from a clinical standpoint to prevent and treat maternal spinal hypotension effectively. Sympathetic block resulting in peripheral vasodilation is cited as the main mechanism leading to a decrease in systemic vascular resistance (SVR). 2 SA also decreases splanchnic blood flow by approximately 20%. The resulting splanchnic hypoperfusion releases emetogenic factors such as serotonin from the gastrointestinal tract. In addition, acute sympathetic blockade may cause unopposed vagal action and subsequent hyperactivity in the gastrointestinal tract. 3 The use of a prophylactic vasopressor reduces the incidence of intraoperative nausea and vomiting induced by hypotension. Vasopressors counteract the primary physiological derangements induced by sympathetic block, such as arteriolar vasodilatation and decreased systemic vascular resistance, and also maintain vascular tone in venous and splanchnic vessels, thereby maintaining venous return and cardiac filling. 4 However, determining the best course of action to prevent and treat maternal spinal hypotension remains one of the biggest problems in obstetric anesthesia. The preferred vasopressor for the management of post-spinal hypotension (PSH) during CS is phenylephrine (PE), although ephedrine and mephentermine are also widely used. 3 PE has an immediate onset and a moderate duration of action; as a direct α1-receptor agonist it can cause baroreceptor-mediated bradycardia, which subsequently lowers cardiac output. It is a sympathomimetic amine that causes arteriolar vasoconstriction to raise mean blood pressure and systemic vascular resistance. It is less likely to cause neonatal acidosis than ephedrine while still maintaining uteroplacental blood flow. 5 Noradrenaline or norepinephrine (NE) has potent α1- and modest β-receptor agonist effects, leading to significant vasoconstriction with some direct inotropic effects. Its administration leads to higher heart rates than comparable doses of PE. However, its use has mostly been limited to septic shock in intensive care units (ICUs) and to hypovolemic shock in operating theatres. Recently, NE has been tried as a possible alternative to PE for controlling maternal hypotension under SA. 5 Compared with relying solely on rescue dosing, prophylactic continuous infusion with rescue bolus dosing improves hemodynamic stability while decreasing clinician workload and improving maternal comfort. The null hypothesis of our study is that there is no difference in the hemodynamics following SA in elective CS when a prophylactic infusion of NE or PE is used. The current study aimed to compare the effectiveness of PE and NE infusions in patients undergoing elective CS under SA.
Methods
This was a randomized clinical trial conducted in the obstetrics operation theatre of the Department of Anaesthesiology and Critical Care, Guwahati Medical College and Hospital, Guwahati, over one year from 1 August 2021 to 31 July 2022, with prior permission and approval from the Institutional Ethics Committee (No. MC/190/2007/Pt-11/July-2021/TH-20). Our study included parturients of American Society of Anesthesiologists (ASA) physical status I and II, aged between 18 and 40 years, with a gestation of 37 weeks or more and an uncomplicated pregnancy, undergoing elective CS under SA. Patients with ASA status above II, multiple pregnancies, premature rupture of membranes, pregnancy-induced hypertension (PIH), antepartum hemorrhage (APH), active labor, diabetes mellitus, ischemic heart disease, cerebrovascular disease, hepatic and renal disease, and patients with contraindications to spinal anesthesia were excluded from the study.
One hundred and sixty-four patients were randomized into two groups following computer-generated random numbers using a randomizer website and allocated with concealed envelopes. Group N (n = 82) received an NE infusion at the rate of 0.05 µg/kg/min, and group P (n = 82) received a PE infusion at the rate of 0.75 µg/kg/min. Both the patients and the attending anaesthesiologists were blinded to the study drug. The study drug was given as a vasopressor infusion after the subarachnoid block according to the allocated study group. The primary outcome was the incidence of PSH (post-spinal hypotension).
The secondary outcomes were: the incidence of severe post-spinal hypotension (SPSH); the total number of rescue vasopressor bolus doses required; the number of physician interventions; Apgar scores at 1 and 5 minutes; nausea and vomiting; and the incidence of reactive hypertension (RH). All parturients were visited the night before the study and the study was explained to them; written informed consent was obtained. The patients were kept nil orally for 6 hours. All patients received intravenous (IV) pantoprazole 40 mg and intramuscular (IM) metoclopramide 10 mg as premedication. In the operating room, standard monitoring devices, including a pulse oximeter, non-invasive blood pressure (NIBP), and electrocardiogram (ECG), were connected. The baseline NIBP and heart rate (HR) were measured and recorded. SA was performed at the L2-L3 or L3-L4 vertebral interspace with the patient in the lateral decubitus position using a 25 G Quincke needle (Spinocan® G25) under all aseptic precautions; after free flow of cerebrospinal fluid (CSF), 2.5-3 mL of hyperbaric bupivacaine (Bupivac Heavy) with buprenorphine 0.2 mL (60 µg) was administered at a rate of 0.2 mL/sec as per our institutional protocol. The patients were then positioned supine with a wedge under the left side. Block success was assessed after the intrathecal injection using the pinprick method. Supplemental oxygen was given through a facemask at a flow rate of 3 liters/min. After a T6-T4 sensory level to pinprick was obtained, surgery was allowed to proceed. Co-loading was performed with crystalloid solution (Ringer lactate) at the rate of 20 mL/kg, divided into two halves: the first half (10 mL/kg) was given before and the second half (10 mL/kg) was infused after spinal anesthesia. After the subarachnoid block (SAB), patients received the vasopressor infusion according to the allocated study group. The vasopressor was infused in the same line as the IV fluids using a three-way cannula. A junior resident who was not involved in the study administered the SAB, performed the intraoperative and postoperative assessment of the patient's parameters, and started the infusion of the study drug as per group allocation.
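Because both infusions are dosed by body weight, the pump rate in mL/h depends on the syringe dilution actually prepared. The study does not report its dilutions, so the concentrations in the sketch below are assumptions for illustration only.

```python
# Sketch: converting the weight-based protocol doses into pump rates.
# Syringe concentrations are assumed for illustration; the study does not report its dilutions.

def pump_rate_ml_per_h(dose_ug_kg_min, weight_kg, conc_ug_ml):
    """Infusion rate in mL/h for a dose in ug/kg/min at a given syringe concentration."""
    ug_per_min = dose_ug_kg_min * weight_kg
    return ug_per_min * 60.0 / conc_ug_ml

weight_kg = 65.0                                                  # example parturient weight
ne_rate = pump_rate_ml_per_h(0.05, weight_kg, conc_ug_ml=10.0)    # norepinephrine, group N
pe_rate = pump_rate_ml_per_h(0.75, weight_kg, conc_ug_ml=100.0)   # phenylephrine, group P

print(f"Group N (NE 0.05 ug/kg/min): {ne_rate:.1f} mL/h at an assumed 10 ug/mL")
print(f"Group P (PE 0.75 ug/kg/min): {pe_rate:.1f} mL/h at an assumed 100 ug/mL")
```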
In case of failure to achieve an adequate spinal block, the patient was converted to general anesthesia, the study drug was discarded, and the patient was excluded from the study. After delivery of the anterior shoulder, the patient received 10 IU of oxytocin in 500 mL of normal saline (0.9% NaCl solution) IV (150 mL/hour) and 10 IU of oxytocin IM.
After the administration of the SAB, the following parameters were noted: the hemodynamics of the patient for 60 minutes; episodes of hypotension, RH, and bradycardia; the total number of vasopressor boluses used; maternal side effects such as nausea, vomiting, and chest discomfort; and Apgar scores at 1 and 5 minutes. Hemodynamic parameters were defined and managed as follows.
Post-spinal hypotension (PSH) was defined by two criteria, i.e., systolic blood pressure (SBP) ≤ 100 mmHg or SBP < 80% of baseline. It was corrected by giving a vasopressor bolus: PE 50 µg IV if the HR was > 75/min, or ephedrine 6 mg IV if the HR was < 75/min. Severe post-spinal hypotension (SPSH) was defined as SBP < 60% of the baseline reading. It was corrected by administration of either PE 100 µg IV (if HR > 75/min) or ephedrine 15 mg IV (if HR < 75/min). 3 Reactive hypertension was defined as SBP > 120% of the baseline reading. It was managed by stopping the infusion until the next SBP reading; the infusion was then restarted at a reduced rate (50% of the initial dose) once SBP had decreased back to within 20% of the baseline reading. Bradycardia (< 50/min) was treated with incremental doses of atropine 0.3 mg. Physician intervention was defined as any of the following: a vasopressor bolus, an atropine bolus, or cessation, restarting, or changing of the vasopressor infusion rate.
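The management rules above amount to a small decision table; the sketch below is a literal encoding of the protocol definitions as stated, not software used in the trial.

```python
# Sketch: the trial's hemodynamic-management rules as a decision function.
# A literal encoding of the protocol definitions; not software used in the study.

def intervention(sbp, hr, baseline_sbp):
    """Return the protocol action for one SBP/HR reading."""
    if hr < 50:
        return "bradycardia: atropine 0.3 mg IV (incremental)"
    if sbp < 0.60 * baseline_sbp:  # severe post-spinal hypotension
        return ("SPSH: phenylephrine 100 ug IV" if hr > 75
                else "SPSH: ephedrine 15 mg IV")
    if sbp <= 100 or sbp < 0.80 * baseline_sbp:  # post-spinal hypotension
        return ("PSH: phenylephrine 50 ug IV" if hr > 75
                else "PSH: ephedrine 6 mg IV")
    if sbp > 1.20 * baseline_sbp:  # reactive hypertension
        return "RH: stop infusion; restart at 50% rate once SBP within 20% of baseline"
    return "no intervention"

print(intervention(sbp=92, hr=80, baseline_sbp=120))   # -> PSH: phenylephrine 50 ug IV
print(intervention(sbp=68, hr=70, baseline_sbp=120))   # -> SPSH: ephedrine 15 mg IV
```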
The sample size was calculated using G-Power 3.1.9.7 statistical software. The sample size required for this study was estimated from a previous study, which demonstrated an incidence of PSH of 32% in the PE group and 30% in the NE group, with an effect size of 0.23. 4 Based on α = 0.05, β = 0.20, and a mean difference of 20%, with an estimated standard deviation of 20 ± 5.7, a sample size of 75 per group was required. Allowing for an attrition rate of 10%, 82 patients were included in each group.
Figure 1: Consort flow diagram
A total of 240 patients were assessed for eligibility, of whom 86 were excluded from the study. Sixty-six patients were excluded at the preoperative visit for not meeting the inclusion criteria. Eighteen patients had an inadequate block or were converted to general anesthesia. Two patients declined to participate and were therefore excluded. A total of 164 patients were enrolled in our study, with 82 patients in each group (Figure 1).
Statistical analysis: The data were entered into Microsoft Excel spreadsheets. Data are described as mean ± SD for quantitative variables and as percentage proportions for qualitative (categorical) variables. Chi-square and Fisher's exact tests were used to evaluate associations between categorical variables. Data were checked for normality using the Kolmogorov-Smirnov and Shapiro-Wilk tests. Within the same group, the dependent t-test was used to compare mean differences. The unpaired t-test was used to compare mean differences between the two independent groups for continuous variables, depending on fulfilment of the normality assumption; for non-normal data, the Mann-Whitney test was used. The statistical analyses were done using PSW software version 21.0. A p-value < 0.05 was considered significant.
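As a worked example of the categorical comparison described here, the sketch below applies scipy's chi-square test to the primary-outcome counts reported in the Results section that follows (20/82 with PSH in group P versus 24/82 in group N).

```python
# Sketch: chi-square test on the primary outcome (PSH incidence), using the
# counts reported in the Results (20/82 in group P, 24/82 in group N).
from scipy.stats import chi2_contingency

table = [[20, 82 - 20],   # group P: PSH, no PSH
         [24, 82 - 24]]   # group N: PSH, no PSH

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```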
Results
The patient characteristics are shown in Table 1. There was no statistical difference between the two groups in demographic data. Duration of operation, baseline HR, SBP, DBP, and MAP were not statistically different either.
Hemodynamic parameters were measured at the start and at fixed time intervals for 60 minutes. The mean HR was comparable between the two groups (p-value > 0.05) (Figure 2). The mean SBP, DBP, and MAP were also comparable between the two groups (p-value > 0.05) (Figure 3). The incidence of PSH was 24% (n=20) in group P and 29.26% (n=24) in group N, with no statistically significant difference between the groups (p-value > 0.05) (Table 1). The incidence of SPSH was 3.6% (n=3) in group P and 2.4% (n=2) in group N, again with no statistically significant difference (p-value > 0.05) (Table 2). The number of rescue vasopressor bolus doses required was comparable between the two groups, with no statistically significant difference (p-value > 0.05) (Table 3). Reactive hypertension was noticed in both groups, in 14.20% (n=12) of group P and 7.30% (n=6) of group N, with no statistically significant difference (p-value > 0.05) (Table 4). The Apgar scores of the babies, measured at 1 and 5 minutes in both groups, were comparable, with no statistically significant difference (p-value > 0.05) (Table 3).

Discussion

Many studies have shown PE to be effective, with a potent direct α1-effect and no β-effects at clinical doses, and have established PE as the vasopressor of choice in obstetric anesthesia. However, when administered at higher than required doses, it may induce baroreceptor-mediated bradycardia with a consequent reduction in maternal cardiac output. NE, being a potent α1-adrenergic agonist with modest β-agonist activity, causes marked vasoconstriction with some direct inotropic effects, resulting in higher heart rates than comparable doses of PE. 5 Therefore, the primary outcome of our study was to evaluate the effect of PE and NE infusions for prophylaxis of PSH in patients undergoing CS under SA.
Figure 2: Comparison of HR variation between the groups
The results of our study showed that NE had efficacy similar to PE in maintaining blood pressure during SA for CS. The mean SBP observed in group P and group N did not differ significantly. The incidence of PSH was 24% (n=20) in group P and 29.26% (n=24) in group N, which was comparable. None of the previous studies in patients undergoing elective CS has shown a significant difference in systolic blood pressure between PE and NE infusions used to prevent PSH. A study by Cho WJ et al compared the effects of NE and PE used as intermittent boluses in elective CSs under SA. They assigned groups to receive intermittent bolus dosing of either PE (100 µg/mL) or NE (5 µg/mL) and found significant within-group differences in SBP, HR, and SVR. 6 We observed a very low incidence of SPSH in both of our groups: 3 patients (3.6%) in group P and 2 patients (2.4%) in group N. This finding was statistically insignificant and comparable to Hasanin A et al, who also found the occurrence of SPSH statistically non-significant, with doses comparable to those in our study. 7 No other relevant study was found that evaluated SPSH. In our study, we also evaluated the incidence of intraoperative bradycardia in the two groups. The incidence of bradycardia was 8.50% (n=7) in group P and 10.90% (n=9) in group N, which was comparable. This finding is consistent with the studies of Hasanin A et al and Vallejo MC et al, in which both drugs were used as infusions. 7,8 In a study by Mohta M et al comparing the effects of PE 100 µg and NE 5 µg administered as boluses for the treatment of PSH during elective CS, no statistically significant difference in the incidence of maternal bradycardia was found, attributed to the bolus technique and the low doses administered. 9 However, studies by Osmani et al, Wang X et al, Abdelmaboud MA et al, and Sharkey AM et al showed a significantly lower incidence of bradycardia in the NE group than in the PE group. 10-13 This may be attributable to the fact that the drugs were given as bolus doses rather than as infusions.
Chen Z et al conducted an RCT in 100 parturients with twin gestation undergoing CS under SA and found a significantly lower incidence of bradycardia in the NE group. 14 The probable reason behind this finding might be that the vasopressor was infused at a fixed rate for all patients, rather than as a manually adjusted or closed-loop feedback computer-controlled infusion, and that the study population consisted of twin gestations.
Ngan Kee WD et al conducted an RCT in healthy patients scheduled for CS under SA to compare computer-controlled infusions of PE (0-100 µg/min) and NE (0-5 µg/min) used to maintain arterial blood pressure. They found that the incidence of bradycardia was lower in the NE group than in the PE group. 4 The cause of this heterogeneity may be the higher dosage of PE considered in their study: they compared NE at a concentration of 5 µg/mL versus PE at a concentration of 100 µg/mL, based on their estimated potency ratio of 20:1. However, they found that the median infusion rate required to maintain blood pressure was greater in the NE group, and they concluded that the true NE:PE potency ratio for maintaining blood pressure under the conditions of their study is probably less than 20:1, whereas in our study we used NE and PE infusions at a potency ratio of 15:1.
There was no significant difference between the two groups in the total number of rescue vasopressor bolus doses in our study. This finding is consistent with Vallejo MC et al, 8 in which patients received PE infused at 0.1 µg/kg/min and NE infused at 0.05 µg/kg/min, rates comparable to those in our study. Goel et al also found no statistically significant difference in the use of rescue boluses for the treatment of hypotensive episodes, attributable to the fact that patients in both groups were receiving prophylactic doses of NE and PE. 15 Our findings were inconsistent with those of Mohta M et al, where the total number of boluses used was significantly higher in the PE group. 9 Puthenveettil et al compared NE and PE boluses for the treatment of hypotension during SA for CS, with group P receiving PE 50 µg and group N receiving NE 4 µg as IV boluses to treat spinal hypotension; the number of vasopressor boluses required to treat hypotension was significantly lower in group N. 16 This might be attributable to the fact that both of these studies used bolus doses of the study drugs.
The Apgar scores at 1 and 5 minutes in our study were comparable between the groups, consistent with the findings of Vallejo MC et al and Ngan Kee et al. 4,8 The occurrence of reactive hypertension was also compared between the two groups, and no significant difference was found. This finding is consistent with Mwaura L et al, Mohta M et al, and Hasanin A et al, 7,9,17 whose doses were comparable to those in our study. Jaitawat SS et al found significant reactive hypertension when using a PE 100 µg bolus compared to a PE 75 µg bolus in their trial; in our study, an infusion was used instead of a bolus. 18 We also compared the incidence of nausea and vomiting between the two study groups and found no significant difference: only 7 patients in group P and 9 patients in group N developed nausea and vomiting. Our findings are consistent with Goel et al and Hasanin A et al. 7,15 In our study, there was a statistically significant difference in the number of physician interventions required between the two groups. This finding is consistent with Hasanin A et al, who also found a significant difference in the physician interventions required, 7 which might be attributable to the fact that the infusion doses of the study drugs were similar in both studies. Chen Z et al, in an RCT of 100 parturients with twin gestation undergoing CS under SA randomized to prophylactic NE (3.2 µg/min) or PE (40 µg/min) infusion, found that the requirement for physician interventions to correct maternal hemodynamic abnormalities was similar in both groups. 14 This inconsistency might be due to the fact that they used 2.5 mL of 0.5% isobaric bupivacaine for SA without any opioid adjuvant, with the patient in the left lateral position, and measured blood pressure only until the delivery of the second baby.
In our study, we decided to use a vasopressor infusion to prevent hypotension. Many studies have recommended prophylactic infusions of PE and NE for PSH. Infusions of PE and NE were effective in decreasing the incidence of hypotension and resulted in more stable blood pressure control compared with control groups that received only rescue boluses of these drugs; infusions also offer the advantage of limiting clinician workload and increasing maternal comfort. In our study, prophylactic infusions were dosed by body weight, as the incidence of hypotension has been found to be significantly lower with weight-adjusted dosing than with fixed-dose regimens. 5 Limitations: This is a single-hospital study; a multicenter study would be preferable. Our study population was not large enough to assess differences in the occurrence of adverse effects and postoperative nausea and vomiting. Umbilical arterial blood gas analysis was not performed, so biochemical abnormalities in the neonates due to effects on cardiac output could not be evaluated. We excluded pregnant women with uteroplacental insufficiency and fetuses with intrauterine growth retardation.
Conclusion
Hemodynamics in patients undergoing elective CS was comparable when a PE or NE infusion was used to prevent hypotension following SA. NE is associated with a lower number of physician interventions than PE.
"year": 2023,
"sha1": "7105422c110f514bd5b173c91b4d93ba81f05b4e",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.21276/obgyn.2023.10.1.8",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a2ffc5f9ba461c9a53cb69c9fb8c421b85aca1db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
An equivalent circuit model for onset and offset exercise response
Background: Switching exercise (e.g., interval training) is a commonly used exercise protocol for enhancing an exerciser's cardiovascular fitness. The current difficulty in simulating human onset and offset exercise responses for switching exercise is to ensure the continuity of the outputs during onset-offset switching, as well as to accommodate the exercise intensities at both onset and offset of exercise. Methods: Twenty-one untrained healthy subjects performed treadmill trials following both a single switching exercise protocol (a single-cycle square wave) and a repetitive switching exercise protocol (interval training). During exercise, heart rate (HR) and oxygen uptake (VO2) were monitored and recorded by a portable gas analyzer (K4b2, Cosmed). An equivalent single-supply switching resistance-capacitor (RC) circuit model was proposed to accommodate the observed variations of the onset and offset dynamics. The single-cycle square wave protocol was utilized to investigate the respective dynamics at onset and offset of exercise within the aerobic zone of approximately 70%-77% of HRmax, and to verify the adaptation feature for accommodating different exercise strengths. The interval training protocol was designed to verify the transient properties during onset-offset switching. A verification method including root-mean-square error (RMSE) and the correlation coefficient was introduced for comparisons between the measured data and the model outputs. Results: The experimental results from the single-cycle square wave exercises clearly confirm that the onset and offset characteristics of both HR and VO2 are distinctly different. Based on the experimental data for both the single and repetitive square wave exercise protocols, the proposed model was used to simulate the onset and offset exercise responses, which were well correlated with the observations, indicating good agreement. Conclusions: Compared with existing works, this model can accommodate different exercise strengths at both onset and offset of exercise while depicting human onset and offset exercise responses, and it guarantees the continuity of the outputs during onset-offset switching. A unique adaptation feature, which allows the time constant and steady state gain to re-shift back to their original states, more closely mimics the different exercise strengths encountered during normal daily exercise activities.
Keywords: Heart rate, Oxygen uptake, Mathematical modeling, Cardiovascular system, Single-cycle square wave, Interval training
Background
One of the greatest public health challenges confronting many industrialised countries is the obesity epidemic. Low-to-moderate intensity exercise, suitable for every fitness level, remains one of the healthiest and lowest-risk methods for reducing body fat [1]. Heart rate (HR) and oxygen uptake (VO2) are commonly applied to assess metabolic demands [2-7]. To develop an effective exercise protocol for improving human cardiovascular fitness, this study first explores the dynamic responses of HR and VO2 using a portable gas analyzer (K4b2, Cosmed) during treadmill experiments. Twenty-one untrained healthy subjects performed treadmill exercise following the predefined single-cycle square wave and interval training protocols. The single-cycle square wave protocol was utilized to investigate the respective dynamics at onset and offset of exercise with a certain submaximal exercise capacity (an approximate range of 70%-77% of HRmax, or 56%-65% of VO2max [8]). An interval training protocol [9] generally includes three different periods: warm-up, exercise (three cycles of a high-intensity period followed by a recovery period), and cool-down. In this study, the interval training protocol was designed to verify the transient properties during onset-offset switching.
Previous studies [10-12] have examined human cardiorespiratory responses at onset and offset of exercise and found different dynamic characteristics (i.e., time constants and steady state gains) at onset and offset. We further explored the dynamics in the particular aerobic zone (approximately 70%-77% of HRmax, or 56%-65% of VO2max [8]), which confirmed the observations reported in the literature [13]. Past works have also focused on building models to estimate HR and/or VO2 responses to exercise; see [14-21] for examples. These models utilized only a single non-switching model for either onset or offset exercise. The traces of onset and/or offset dynamics may be accurately described by such models, but the transient properties during onset-offset switching are almost entirely overlooked. Switching models produce much better results than single non-switching models. The switching resistance-capacitor (RC) circuit introduced by [13] used a dual-supply threshold-based solution to simulate HR and VO2 responses to the interval training protocol. Although better performance was observed (versus the non-switching models), particularly for the transient behaviors during switching, there are still limitations, since the dynamical characteristics of the model (i.e., time constant and steady state gain) are not allowed to re-shift back to their original states, especially at the offset of exercise.
In this paper we propose an innovative single-supply switching RC circuit model to depict and analyze HR and VO2 dynamics during exercise, consisting of only one power supply linked with onset and offset RC switching circuits. The main advantages of this model are that it can accommodate the observed onset and offset dynamics, guarantee the continuity of the model outputs during switching, and adaptively match the measured output for different exercise strengths at both onset and offset of exercise. The nomenclature used in this paper is listed in Table 1. The remainder of the paper is organized as follows. Section 'Experiment' introduces the experimental equipment, exercise procedures, and protocols. Section 'Data analysis' presents the data analysis for parameter identification of the proposed model. Section 'The proposed modeling and verification methods' describes the proposed single-supply switching RC circuit model and its verification methods. Section 'Results' provides the parameter configuration, verifications, and discussion. Finally, Section 'Conclusion' concludes this study.
Experiment
In order to investigate HR and VO2 responses at a certain submaximal exercise capacity [8], twenty-one healthy untrained male subjects participated in the single-cycle square wave and interval training exercises. The UTS Human Research Ethics Committee (UTS HREC 2009000227) approved this study, and informed consent was obtained from all participants before the commencement of data collection. The physical characteristics of the participants in the single-cycle square wave exercise are presented in Table 2.
Prior nutritional intake, physical activity, and environmental conditions were standardized for all participants. The participants consumed a standardized light meal at least two hours before the experiment and were instructed not to engage in any exercise for one day prior to each experiment [22,23]. The temperature and humidity of the laboratory were set at 20-25°C and 50% relative humidity, respectively.
The step responses of HR and VO2 at onset and offset of exercise were measured following the two predefined protocols: the single-cycle square wave and interval training protocols. Figure 1 shows the exercise intensities and durations of these protocols. The single-cycle square wave protocol (see Figure 1a) was performed repetitively by twenty subjects to minimize the effects of intra-subject variability. Inter-subject variability (e.g., the fast response of vagal withdrawal, sudden increases in body temperature, or nervousness at the start of exercise) was likewise addressed through an initial warm-up, in which subjects walked gently on the treadmill at 5 km/h before the onset of the experiment. Figure 2 shows a typical experimental result: the ensemble averages of the HR and VO2 responses following this protocol across the twenty subjects. To explore the transient behaviors during onset and offset of exercise, an additional male subject, AZAM (age = 30 years, height = 185 cm, mass = 84 kg), was invited to run on the treadmill following the interval training protocol proposed in Figure 1b.
In order to investigate cardiorespiratory responses at a moderate exercise intensity level, the aerobic zone of approximately 70%-77% of HRmax (or 56%-65% of VO2max) was targeted for exercisers following both exercise protocols [8], since the relationship between HR and VO2 in this zone is nearly linear [24]. To determine HRmax for any individual subject, the equation employed for this study was the one developed by Inbar [25]: HRmax = 205.8 - 0.685 × age (in years). All physiological measurements in this study were collected by a Cosmed portable gas analyzer (K4b2, Cosmed, Rome, Italy). The Cosmed system includes a compatible HR monitor consisting of a transmitter in an elastic belt and a receiver; the two parts are placed as close together as possible to capture the most effective communication signals. The K4b2 gas analyzer and its compatible products were chosen because they have been reported to be valid, accurate, and reliable [26-28]. To avoid random errors and improve the accuracy of the recorded data, each exercise was repeated twice by the subjects and the obtained data were filtered, interpolated, and averaged.
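A minimal sketch of how the target aerobic band follows from the Inbar estimate (the age used here is that of the interval-training subject; the formula is as given above):

```python
# Sketch: target aerobic zone (70%-77% of HRmax) from the Inbar estimate of HRmax.

def hr_max_inbar(age_years):
    """Inbar et al. [25]: HRmax = 205.8 - 0.685 * age."""
    return 205.8 - 0.685 * age_years

age = 30                      # example subject age (AZAM in the interval-training trial)
hr_max = hr_max_inbar(age)
lo, hi = 0.70 * hr_max, 0.77 * hr_max
print(f"HRmax = {hr_max:.1f} bpm; target zone = {lo:.0f}-{hi:.0f} bpm")
```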
Data analysis
It is well known that the step responses of HR and VO2 can be approximated as a first-order process [29], K/(Ts+1), where K is the steady state gain and T is the time constant. On the basis of the experimental data from the single-cycle square wave protocol, the Matlab System Identification Toolbox was used to fit the first-order process for both the HR and VO2 responses over all trials. The coefficients (K and T) for each trial were identified, and the mean and standard deviation (STD) over the twenty subjects at onset and offset of exercise are presented in Table 3. Those results indicate that the steady state gain (K) at offset of exercise is noticeably smaller than that at onset of exercise for both HR and VO2.
The mean values of the time constant (T) at offset of exercise, however, are notably larger than those at onset for both HR and VO2. In Table 3, Ton, Toff, Kon, and Koff represent the time constant (T) and steady state gain (K) at onset and offset of exercise, respectively.
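The identification step does not depend on the Matlab toolbox; a minimal least-squares fit of the first-order step response, here applied to synthetic data standing in for a measured HR trace, illustrates how K and T are recovered:

```python
# Sketch: identifying K and T of a first-order step response y(t) = K * (1 - exp(-t/T)),
# equivalent to fitting K/(Ts+1) to step data. Synthetic data stand in for a measured trace.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, T):
    return K * (1.0 - np.exp(-t / T))

rng = np.random.default_rng(0)
t = np.arange(0.0, 300.0, 5.0)                                      # seconds
y = step_response(t, K=30.0, T=42.0) + rng.normal(0, 0.8, t.size)   # noisy "HR rise"

(K_hat, T_hat), _ = curve_fit(step_response, t, y, p0=(20.0, 30.0))
print(f"K = {K_hat:.1f} (true 30.0), T = {T_hat:.1f} s (true 42.0)")
```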
The proposed modeling and verification methods
The single-supply switching RC circuit model

Figure 3a shows an overview of the proposed single-supply switching RC circuit model, which consists of one DC power supply (V), one diode, one double-pole double-throw (DPDT) switch, two capacitors (C1 and C2), and three resistors (R1, R2, and R3). Figures 3b and 3c-1/3c-2 show the subcircuits of the proposed model, linked by the DPDT switch, representing cardiorespiratory behavior at onset and offset of exercise, respectively. The voltage of C1 with respect to exercise time represents the amplitude of the HR and/or VO2 dynamics during moderate exercise and the subsequent recovery, since during moderate exercise both HR and VO2 behave similarly [13,24]. The function of D1 is to configure the resistance of the onset and offset circuits: it shorts out R2 while the onset circuit is active. The process of modeling the HR and VO2 dynamics at onset and offset of exercise and during long-term recovery is as follows. First, the onset behavior is simulated by switching the DPDT to poles a1 and b1 (see Figure 3b), with the diode D1 shorting out R2. In this period, the DC power supply V charges the capacitor C1 from the baseline up to V1, which approximately equals the supply voltage V. Figure 4 shows the dynamic variations of the capacitors C1 and C2 in the proposed model during exercise and recovery. The voltage of C1 is expressed as

Vc1(t) = V·(1 - e^(-t/(R1·C1)))   (2)

where the steady state value of Vc1(t) is V.
During the offset period from t1 to t2 (see Figure 4), both circuits c-1 and c-2 would be applicable to the analysis of this period. However, if we assume that R3 is sufficiently large, the current passing through R3 is negligible, meaning that the two circuits (c-1 and c-2) are approximately equivalent under this assumption. The offset processes for C1 and C2 can be described as

V_C1(t) = V2 + (V1 - V2) e^(-(t-t1)/T_off),    (3)
V_C2(t) = V3 (1 - e^(-(t-t1)/T_off)),    (4)

where T_off is the time constant of the offset subcircuit: the capacitor C1 discharges, its voltage following an exponential decay down to V2 at time t2, while the capacitor C2 charges, its voltage growing exponentially from 0 at time t1 to V3 at time t2. It is also required that V2 ≈ V3 ≈ C1 V1 / (C1 + C2) at the end of the offset portion, t2. The particular offset dynamics of C1 is intended to mimic a repetitive switching training behavior (e.g., interval training [30]). At this stage, the steady state level of C1 shifts from a high level (e.g., V1) to a low level (e.g., V2), relative to the initial level at warm-up (the baseline level), herein taken to be zero. The high and low levels can easily be implemented by manipulating the resistances and capacitances of the proposed model. For a single switching exercise (e.g., the single-cycle square wave exercise introduced in Section 'Experiment'), however, the steady state level must shift back to the baseline, since human metabolic rates generally return to their baseline levels during long-term recovery. This is achieved by switching to the alternative subcircuit c-2, which dissipates all the energy stored in capacitors C1 and C2 through the resistance R3. Figure 4 shows this long-term recovery process, in which the C1 and C2 voltages fall to the baseline at time t3.
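The switching behavior just described can be sketched numerically. The Python snippet below generates a piecewise-exponential C1 voltage over onset, offset, and long-term recovery; all component values, time constants, and switching times are illustrative assumptions, not the paper's tuned parameters.

import numpy as np

# Illustrative parameters (assumed values, not the tuned ones in Table 4)
V = 1.0                  # supply voltage, normalized amplitude
T_on, T_off, T_rec = 30.0, 45.0, 120.0   # time constants in seconds
C1, C2 = 1.0, 1.5        # capacitances, arbitrary units
t1, t2 = 300.0, 600.0    # switch times: end of onset, end of offset

def v_c1(t):
    # Onset charge toward V, offset decay toward the charge-sharing
    # level C1*V1/(C1+C2), then long-term recovery to baseline via R3
    Vf = C1 * V / (C1 + C2)
    if t <= t1:
        return V * (1 - np.exp(-t / T_on))
    V1 = V * (1 - np.exp(-t1 / T_on))
    if t <= t2:
        return Vf + (V1 - Vf) * np.exp(-(t - t1) / T_off)
    V2 = Vf + (V1 - Vf) * np.exp(-(t2 - t1) / T_off)
    return V2 * np.exp(-(t - t2) / T_rec)

t = np.linspace(0, 1200, 601)
trace = np.array([v_c1(ti) for ti in t])
print(trace[[0, 150, 300, 450]])   # samples at t = 0, 300, 600, 900 s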
Based on equations (2)-(4), the normalized time constants and steady state gains for both the onset and offset processes can be derived (equations (5) and (6)), where K_on, T_on, K_off, and T_off represent the steady state gains and the time constants at onset and offset respectively. Newly defined parameters K̃_on and K̃_off are used to normalize the steady state gains. If K_on, K_off, T_on, and T_off are given, and R2 is treated as a pre-defined free parameter, the values of the capacitors and resistor (C1, C2, and R1) can then be readily configured.
Quantitative description for the concept of 'oxygen debt'
The physiological interpretation of the dynamics of the HR/VO2 responses at onset and offset of exercise may be associated with the term 'oxygen debt', first coined by A. V. Hill and others [31]. According to this concept [31], the body's carbohydrate stores are linked to energy 'credits'. If these stored credits are expended during the onset of exercise, a 'debt' is incurred. The greater the energy 'deficit', or use of the available stored energy credits, the larger the energy 'debt' incurred [10]. The oxygen uptake that continues after the onset of exercise is then thought to represent the metabolic cost of repaying this debt. This concept uses financial-accounting terms to describe exercise metabolism and indeed remains popular to this day.
Moreover, this study attempts to develop an electronic analogy with which to analyze the switching exercise processes quantitatively. First of all, the onset circuit supports the hypothesis underlying the term 'oxygen debt' [31]. During this period, shown in Figure 4, V_C1(t) grows exponentially, implying an increase in HR. It is well known that the cardiac output (Q), the total volume of blood pumped by the heart per unit time, can be expressed as Q = stroke volume (SV) × HR. Since SV is assumed to be constant during moderate exercise, the integral of HR with respect to time is proportional to the total volume pumped, which can also be depicted by the integral of equation (2); see the white area of the onset period in Figure 4(a). In the concept of 'oxygen debt', this white area is regarded as the energy 'credits', and the line-shadowed area as the energy 'deficit', representing the amount of ATP that cannot be supplied quickly enough to satisfy the tissues' urgent demands. In keeping with the proposed circuit model, a simple RC series circuit is employed to approximate the onset dynamics: since V_C1(t) cannot instantaneously reach the steady state level (V) at the beginning of exercise, energy 'credits' and a 'deficit' arise. Currently, a precise biochemical explanation for the offset of exercise is not possible, because the specific chemical dynamics remain unclear [10]. A. V. Hill [31] first hypothesized that all the energy generated during the offset period (the line-shadowed area plus the cross-line-shadowed area between t1 and t2 in Figure 4(a)) represents the metabolic cost of repaying the energy 'debt'. However, this study indicates that the energy 'debt' is much larger than the energy 'deficit', which means that the energy 'debt' is only part of the energy generated during the offset period. Instead, glycogenesis and all the other processes involved in restoring the body to its pre-exercise condition also take place during the offset period.
The experimental observations (see Section 'Experiment') show that the time constant at offset of exercise is larger than that at onset, meaning that the line-shadowed area plus the cross-line-shadowed area in the offset period (see Figure 4(a)) is greater than the energy 'deficit' area in the onset period. If the two line-shadowed areas (the energy 'deficit' and 'debt' areas in Figure 4(a)) are taken to be equal (the debt equals the deficit), a question arises: what does the extra area (the cross-line-shadowed area in Figure 4(a)) represent? According to the mass-energy equivalence relation (E = mc^2), any change in the energy of an object causes a change in the mass of that object. Thus, the extra cross-line-shadowed area perhaps implies an energy storage process, which converts the energy into 'molecules' and thereby changes the body's mass. As the specific chemical dynamics are still unclear [10], it can only be safely concluded that any physiological process contributing to the recovery of the body to its pre-exercise condition may give rise to such an extra area, e.g., glycogenesis (the synthesis of glycogen). For this reason, the proposed element C2 plausibly stores this kind of energy, much as the liver stores glycogen. Overall, the model outputs indicate that the cross-line-shadowed area in Figure 4(b) is presumably equal to the correspondingly marked area in Figure 4(a).
Model verification
In order to verify the proposed model, two independent and widely used metrics were computed for comparative purposes. The root-mean-square error (RMSE), as described in Equation (7), was calculated to provide a measure of the average error between the two waveforms.
where x_{1,i} and x_{2,i} are the ith samples from the measured data and the model output respectively, and n is the number of samples. The correlation coefficient, as described in Equation (8), was used to provide a measure of the similarity in shape between the model outputs and the averaged experimental results.
where P1 and P2 are the measured and estimated data for the HR and VO2 responses at onset and offset of exercise respectively.
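A minimal Python implementation of these two metrics, under the standard definitions of the RMSE and the Pearson correlation coefficient, might look as follows; the toy data are purely illustrative.

import numpy as np

def rmse(measured, estimated):
    # Root-mean-square error between two equally sampled waveforms
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    return np.sqrt(np.mean((measured - estimated) ** 2))

def correlation(measured, estimated):
    # Pearson correlation coefficient between measured and model data
    return np.corrcoef(measured, estimated)[0, 1]

m = np.array([60, 72, 85, 95, 100, 102.0])   # e.g., measured HR samples
e = np.array([58, 70, 87, 96, 99, 103.0])    # e.g., model output samples
print(rmse(m, e), correlation(m, e))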
Parameter configuration
Based on the dynamic characteristics of the observed HR and VO2 and on the normalization process, shown in Table 3 and Equations (5)-(6) respectively, the tuned parameters for the proposed single-supply switching RC circuit model are given in Table 4. The tuned averaged models for HR and VO2 were then employed to simulate the dynamic variations following both the single-cycle square wave and the interval training protocols. The simulation was performed with the Matlab/Simulink module, and the timing of the DPDT switching between exercise intensities strictly follows the reference protocols shown in Figure 1.
Comparison for the single-cycle square wave protocol
Figure 5 shows an example of the model accuracy for the single-cycle square wave protocol. The proposed model was first tuned with the parameter settings of Table 4, then run in Matlab/Simulink following the duty cycles of the predefined single-cycle square wave protocol. The means and variances of the distributions can be found in Table 5.
Across the averaged measurements of all subjects, the RMSE of the proposed model was 3.13 bpm for HR and 97.35 ml/min for VO2, and the correlation coefficients between the actual measurements and the model estimates were 98.11% and 97.98% respectively. As can be seen in Figure 5, the proposed model accurately estimated the HR and VO2 dynamics of an averaged general-population dataset following the single switching exercise protocol.
Comparison for interval training protocol
Subject AZAM was invited to perform the predefined interval training for model verification under repetitive switching exercise. The experimental results for both the HR and VO2 dynamics are shown as the red curves in Figure 6. For the three-cycle interval training exercise, the model parameters were determined from the first-cycle measurements, and the tuned onset and offset circuit models were switched exactly according to the predefined protocol durations illustrated in Figure 1b. The dashed blue curves in Figure 6 show the model outputs for the HR and VO2 dynamics of subject AZAM during the proposed interval training exercise. Comparing the model accuracy against the observations from the subject-specific data following the repetitive switching exercise, the correlation coefficients in Table 5 show that the model outputs describe the HR and VO2 dynamics with high similarity (97.34% and 83.85%, respectively). Examining the RMSE for HR and VO2, the model output for HR was again fairly accurate, but that for VO2 had an error of 234.42 ml/min. This was primarily due to random errors, which caused greater variability in the subject-specific data for the repetitive exercise than in the averaged general-population data.
Discussion
The model was tested with exercise protocols containing only a few iterations of onset and offset periods, but even with more iterations it can estimate the dynamic responses of HR and VO2. The employed switching mechanism unifies the differing dynamics at onset and offset of exercise while satisfying the requirement of continuity of the model outputs during switching. This feature yields an accurate quantitative analysis of human exercise responses, and can further be applied to regulating and improving cardiorespiratory fitness. Recently, Azzam et al. developed a dual-supply threshold-based solution to simulate HR and VO2 responses to interval training protocols, which employs dual power supplies to set a threshold value for each onset and offset scenario [13]. Figure 7 shows the RC circuit introduced by Azzam et al. Although that model describes the switching behavior at onset and offset of exercise well, it has limitations: the dynamical characteristics of the model (i.e., time constant and steady state gain) cannot re-shift back to their original states, owing to the effect of V_off. It is therefore likely to be ineffective for a single switching exercise, which requires the metabolic rate to vary adaptively from V1 down to zero (see Figure 4). Compared with the circuit shown in Figure 7, the proposed model provided sound results for both single and repetitive switching exercises. Further investigation will explore subject-specific models across a population of individuals, although the proposed model has been found to work on the averaged experimental observations with acceptable correlations.

Figure 7. The RC circuit introduced by Azzam et al., where the voltage of capacitor C is used to simulate the HR and VO2 dynamics under interval training protocols; V_on, R_on, V_off, and R_off are the onset and offset supplies and resistances respectively, switched by a designed SPDT switch.
Moreover, to regulate the proposed switching model, the implementation of bumpless switching between two or more higher-dimensional systems based on multi-realization theory will also be discussed in the next step [32,33].
Conclusion
In this work, a novel single-supply switching RC circuit model is presented to accommodate the variation of the onset and offset dynamics following both single-cycle square wave and interval training protocols. Twenty-one healthy untrained subjects were invited to participate in the treadmill exercises. The portable gas analyzer K4b2 was used to measure breath-by-breath VO2 and beat-by-beat HR values. The observed results can be reliably described by the proposed model. Unlike some existing modeling work, it provides accurate analyses of the different responses at onset and offset of exercise, guarantees the continuity of the model outputs during onset-offset switching, and is capable of accommodating different exercise strengths. The validity of the proposed model is confirmed by comparing the simulated model outputs with the averaged experimental observations. As a next step, a subject-specific model will be investigated, and a general framework for the implementation of bumpless switching between two or more higher-dimensional systems based on multi-realization theory [32,33] will then be developed for the problem of human exercise regulation. |
"year": 2014,
"sha1": "6effa3f9e4488139c8f6c63b90eccbb0ddc30def",
"oa_license": "CCBY",
"oa_url": "https://biomedical-engineering-online.biomedcentral.com/track/pdf/10.1186/1475-925X-13-145",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd9bb78f3dfe05e7622857abe5a47dbbb3f5d1a8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
7249647 | pes2o/s2orc | v3-fos-license | Uniqueness of Bounded Solutions to a Viscous Diffusion Equation
In this paper we prove the uniqueness of bounded solutions to a viscous diffusion equation based on an approximate Holmgren's approach.
Introduction
We consider the uniqueness of bounded solutions to a viscous diffusion equation in one dimension, equation (1.1), with the initial and boundary conditions including u(x, 0) = u_0(x), x ∈ [0, 1], (1.3), where λ > 0 is the viscosity coefficient, Q_T = (0, 1) × (0, T), A(s) and B(s) are given functions, and f is a function of x and t only. If λ = 0, then the equation (1.1) reduces to equation (1.4). In the case that A'(s) ≥ 0, the equation (1.4) is the one-dimensional form of the well-known nonlinear diffusion equation, which is degenerate at the points where A'(u) = 0 and has been studied extensively. In particular, discussions of the uniqueness of solutions can be found in many papers, see for example [1], [3]-[7]. If A'(s) is permitted to change sign, (1.4) is called the forward-backward nonlinear diffusion equation. For the case of λ > 0, Cohen and Pego [10] considered the equation (1.1) with B(s) = 0 and f = 0, where A(s) has no monotonicity. Their interest centers on the steady state solutions of the equation (1.5); the uniqueness of the solution of the Neumann initial-boundary value problem and of the Dirichlet initial-boundary value problem for the linear case of the equation (1.5) has been discussed by Chen, Gurtin [11] and Ting, Showalter [12].
In this paper, we establish the uniqueness of solutions to the initial-boundary value problem for the equation (1.1) by using an approximate Holmgren's approach. It is worth recalling the work of [1] concerning the related parabolic problem (1.4). Due to the degeneracy, the problem (1.1)-(1.3) in general admits only weak solutions, so our result concerns the generalized solutions to the problem (1.1)-(1.3).
Our main result is the following theorem.
Theorem 1.1. The problem (1.1)-(1.3) has at most one generalized solution in the sense of Definition 1.1.
Preliminaries
Let u_1, u_2 ∈ L^∞(Q_T) be solutions of the boundary value problem (1.1)-(1.3). By the definition of generalized solutions, an integral identity holds for the difference of the two solutions. For small η > 0, let λ_η be defined accordingly, and let Ã_ε and λ_{η,ε} be C^∞ approximations of Ã and λ_η respectively, with the usual convergence properties. For given g ∈ C^∞_0(Q_T), consider the approximate adjoint problem (2.1)-(2.3).
Proof of Theorem 1.1
Given g ∈ C^∞_0(Q_T), let ϕ be a solution of (2.1)-(2.3). Then, as indicated above, from the definition of generalized solutions we obtain the identity (3.1). We are now ready to estimate all the terms on the right-hand side of (3.1), starting with the bounds supplied by Lemma 2.1.
Here and in the sequel, we use C to denote a universal constant, independent of η and ε, which may take different values on different occasions.
It is easy to see that the solution to the problem (2.1)-(2.3) is in C^∞, owing to the smoothness of g in (2.1). Lemma 2.1. The solution ϕ of the problem (2.1)-(2.3) satisfies a priori estimates that are used in the proof above. | 2014-10-01T00:00:00.000Z | 2003-01-01T00:00:00.000 | {
"year": 2003,
"sha1": "5ac57bd76cc3b4d6bf888a1a88d148c6f8d76f78",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14232/ejqtde.2003.1.17",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "5ac57bd76cc3b4d6bf888a1a88d148c6f8d76f78",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
214204506 | pes2o/s2orc | v3-fos-license | Muscular ventricular septal defects: how I close them
A ventricular septal defect (VSD) is a communication between the two ventricular chambers. Muscular ventricular septal defects have exclusively muscular borders and are more likely to be multiple. Their location and multiplicity can sometimes make them a very challenging clinical problem.
In cases of multiple VSDs, if the combined shunt is high, patients may become symptomatic in early infancy.
The management of a single muscular VSD in the inlet area is very similar to the management of a perimembranous VSD. Medical management can be used until the age of 3 months to control symptoms, and surgical closure of the VSD can then be performed. My preferred patch material is autologous pericardium sutured in with a continuous prolene suture, though other techniques, such as interrupted sutures or other patch materials, can also be used. Attention must be paid to the location of the conduction system, because in these patients the atrioventricular conduction axis penetrates the ventricular septum on the superior side of the VSD, and extreme care must be taken while placing sutures on the superior edge of the VSD.
In patients who have both a perimembranous and an inlet muscular VSD, I prefer to use a single patch to close both defects. This avoids taking sutures in the narrow muscle bundle between these two defects, in which the bundle of His most likely travels in this subgroup.
Isolated mid-muscular VSDs are the most accessible to closure in the catheterization lab. Most of these patients will never see a cardiac surgeon, though hybrid approaches have been advocated by some surgeons and cardiologists.
I consider mid-muscular VSDs, apical muscular VSDs and anterior trabecular VSDs as a "field of defects". Often, these VSDs have one or two openings on the compact left ventricular side but several openings on the trabeculated, non-compact right ventricular (RV) side. Closure of this "field of defects" cannot be accomplished with the typical technique used in the closure of perimembranous VSDs, in which the margins of the VSD are easily defined and should be covered with an exactly sized patch using fine needles that pass only through the exact margin of the defect. In muscular VSDs, I always get an approximate idea of where the "field of defects" lies from the echocardiogram. Intraoperatively, I approach these defects through the right atrium and then use an oversized patch that extends a little beyond this "field of defects". In infants, I use a 5.0 prolene suture with a bigger needle, which would otherwise rarely be used on neonatal or infant VSDs (Figure 1). The larger needle allows me to compensate for any additional extensions of the "field of defects" that I might encounter while sewing in the patch; any suspicious trabeculations at the periphery of the field are included.
In mid-muscular VSDs, the moderator band may lie at the bottom of the defect or may traverse it. Some surgeons describe dividing the moderator band completely to visualize the defect; this is not my preference. Since I am only looking for the approximate "field of defects", most of the time I can use an oversized patch to exclude that field from the remaining RV cavity. Minor muscle bundles may be divided to position the patch under or over the moderator band.
Other techniques may be used for closure, such as a hybrid approach with perventricular device closure, or placing the patch on the left ventricular side either through the VSD or via a separate left ventriculotomy. The long-term consequences of a left ventriculotomy are always a concern with the latter approach. When the patch is placed through the VSD onto the left ventricular side, the left ventricular pressure, being higher than the RV pressure, helps fix the patch in place (2).
Apical VSDs share all the characteristics described above for mid-muscular VSDs. However, their location further distally, near the ventricular apex, can make exposure more difficult. In these apical VSDs, I have not used the typical short retractors, such as a vein retractor, often used in perimembranous VSDs. I have found that a narrow Teflon-coated malleable retractor, bent to the appropriate shape, can provide excellent exposure deeper into the RV cavity; Teflon-coated retractors have better traction on the RV muscle and do not slip as easily as a plain metal malleable retractor would in this circumstance. Once this exposure is obtained, I again identify the field of defects at the apex and then use another oversized patch to close these defects. Rather than trying to define the exact margin of each individual defect, a more effective strategy is to exclude this field of defects from the main RV chamber. Dimpling of the epicardium can often be seen, because the larger needle allows bigger bites of the trabeculations at the margin of this field of defects. Care must be taken not to get close to the left anterior descending artery (LAD); if this happens, epicardial scoring incisions can be made to ensure the LAD lumen is not compromised. If, at the completion of the procedure, the apical VSD patch is felt to be too patulous and to impinge on the RV cavity, a few pledgeted anchor sutures may be passed through the patch and brought out through the apical epicardium of the ventricle. Normal cardiovascular surgical needles do not have the length for this maneuver, so I have used straight 23-gauge hollow-bore hypodermic needles to pass these anchoring sutures. Closure of anterior apical VSDs can be the most challenging because of the proximity to the LAD, so extra caution must be taken.
Anterior VSDs away from the apex can be closed by suturing the patch to the annulus of the pulmonary valve and using the remainder of the patch to exclude this field of defects from the RV chamber. This patch should also be generous, but not so oversized that it billows into the RV outflow tract.
An additional maneuver that is helpful in cases of multiple VSDs is to pass a right-angled clamp through the VSD closest to the inlet in order to define the field of defects distally.
At the end of the procedure, saline may be insufflated through the atrial septal defect (ASD) and mitral valve to check whether the ventricular septum can be made to bulge into the right ventricle. If this happens, it is highly likely that most of the interventricular shunt has been closed. Also, if the left ventricular (LV) vent is draining mostly blood rather than a mixture of air and blood, this can be an indirect sign of complete closure.
Muscular VSDs may also be associated with coarctation or transposition. In the case of transposition, a useful technique is to introduce a right-angled clamp through the neoaortic valve and probe the VSDs. Since the clamp is on the LV side, it is more likely to "fall into" the VSD, and a heavy silk suture can then be passed out through the tricuspid valve; this is a useful marker of the "field of defects" that needs to be closed. At the end of the procedure, if any residual VSD is seen on the post-operative echocardiogram, I estimate the shunt fraction by drawing superior vena caval blood and pulmonary arterial blood at 30% FiO2. If the shunt fraction is less than 1.5, I consider it a satisfactory result. For higher shunt fractions, I make a judgement call, depending on whether there are accessible additional VSDs, between further attempts at closure and a pulmonary artery (PA) band. I have personally found a right ventriculotomy of limited use, since it allows access only to the anterior part of the interventricular septum.
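As a generic illustration of this estimation (not the author's specific protocol), the shunt fraction Qp/Qs can be derived from the sampled oxygen saturations via the standard Fick relationship. The sketch below assumes the pulmonary venous saturation is approximated by a near-complete value, and all numbers are illustrative.

def qp_qs(sa_o2, sv_o2, spv_o2, spa_o2):
    # Fick-based shunt fraction, using saturations as a proxy for content:
    # Qp/Qs = (systemic arterial - mixed venous) / (pulm venous - pulm arterial)
    return (sa_o2 - sv_o2) / (spv_o2 - spa_o2)

# Illustrative numbers: SVC (mixed venous proxy) 65%, PA 80%,
# systemic arterial 95%, pulmonary venous assumed ~98%
print(round(qp_qs(95, 65, 98, 80), 2))   # ~1.67, above the 1.5 threshold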
In conclusion, muscular VSDs are a challenging problem in neonates and infants when they present with significant congestive heart failure from interventricular shunting. However, with careful adjustments to technique, most of these defects can be closed via the right atrium, rarely requiring a ventriculotomy or a PA band.
Footnote
Provenance and Peer Review: This article was commissioned and reviewed by the Guest Editor (Raghav A. Murthy) for the series "Management of Congenital Heart Disease" published in Journal of Thoracic Disease. The article did not undergo external peer review.
Conflicts of Interest: The series "Management of Congenital
Heart Disease" was commissioned by the editorial office without any funding or sponsorship. The author has no other conflicts of interest to declare.
Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | 2020-03-19T19:45:10.201Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "02cf029d84e12ca1ce602393a8db17dd057bb3aa",
"oa_license": "CCBYNCND",
"oa_url": "https://jtd.amegroups.com/article/viewFile/33927/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c31f883fb3ccea8facfd65bcc3127c7014de428e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
88521274 | pes2o/s2orc | v3-fos-license | Integrating summarized data from multiple genetic variants in Mendelian randomization: bias and coverage properties of inverse-variance weighted methods
Mendelian randomization is the use of genetic variants as instrumental variables to assess whether a risk factor is a cause of a disease outcome. Increasingly, Mendelian randomization investigations are conducted on the basis of summarized data, rather than individual-level data. These summarized data comprise the coefficients and standard errors from univariate regression models of the risk factor on each genetic variant, and of the outcome on each genetic variant. A causal estimate can be derived from these associations for each individual genetic variant, and a combined estimate can be obtained by inverse-variance weighted meta-analysis of these causal estimates. Various proposals have been made for how to calculate this inverse-variance weighted estimate. In this paper, we show that the inverse-variance weighted method as originally proposed (equivalent to a two-stage least squares or allele score analysis using individual-level data) can lead to over-rejection of the null, particularly when there is heterogeneity between the causal estimates from different genetic variants. Random-effects models should be routinely employed to allow for this possible heterogeneity. Additionally, over-rejection of the null is observed when associations with the risk factor and the outcome are obtained in overlapping participants. The use of weights including second-order terms from the delta method is recommended in this case.
Some authors have used fixed-effect meta-analysis for the combination of estimates from different genetic variants (Nelson et al., 2015), whereas other authors have used random-effects meta-analysis (Ahmad et al., 2015).
In this paper, we compare the bias and coverage properties of estimates from the inverse-variance weighted method for different choices of weights, and using fixed-effect, additive random-effects, and multiplicative random-effects models for combining the estimates. In Section 2, we introduce the inverse-variance weighted method, and demonstrate its equivalence both to a two-stage least squares analysis and to a weighted linear regression of the association estimates. We also present the different versions of the method that are investigated further in this paper. In Section 3, we provide the example analysis that was the motivation for this work; in this example, subtly different choices in the analysis method result in estimates that differ considerably and lead to substantively different conclusions. In Section 4, we perform a simulation study to compare the bias and coverage properties of the different versions of the method. Finally, in Section 5, we discuss the findings of this paper and their relevance to applied practice.
Methods.
We provide a brief introduction to Mendelian randomization, the use of genetic variants as instrumental variables; further introductory references to the subject area are available (Davey Smith and Ebrahim, 2003; Lawlor et al., 2008; Schatzkin et al., 2009). The objective of Mendelian randomization is to judge whether intervention on a modifiable risk factor would affect a disease outcome. This is achieved by testing whether genetic variants that satisfy the assumptions of an instrumental variable for the risk factor are associated with the outcome. An instrumental variable is a variable that is associated with the risk factor, but is not associated with confounders of the risk factor-outcome association, and has no causal pathway to the outcome except via the risk factor (see Greenland (2000); Martens et al. (2006) for further information on instrumental variables). This means that the genetic variant is an unconfounded proxy for variation in the risk factor, and can therefore be treated as similar to treatment assignment in a randomized trial, where the treatment is a change in the level of the risk factor (Nitsch et al., 2006). Similarly to an intention-to-treat analysis in a randomized trial, an association between such a genetic variant and the outcome implies a causal effect of the risk factor (VanderWeele et al., 2014). Additionally, under further parametric assumptions, the magnitude of the causal effect of the risk factor on the outcome can be estimated (Didelez, Meng and Sheehan, 2010). In this paper, we assume that the effect of the risk factor on the outcome is linear with no effect modification, and that the associations of the genetic variants with the risk factor and with the outcome are linear without effect modification (Didelez and Sheehan, 2007). Here X is the risk factor, G_1, ..., G_J are the genetic variants, Y is the outcome, U is an unmeasured confounder, do(X = x) is the do-operator of Pearl, meaning that the value of the risk factor is set to x by intervention (Pearl, 2000), and the causal effect parameter satisfies β = β_Yj / β_Xj for all j = 1, ..., J, where β_Xj and β_Yj denote the associations of variant j with the risk factor and with the outcome. We also assume that the effects of the genetic variants on the risk factor are the same in all individuals. Although these assumptions are not necessary to identify a causal parameter (weaker assumptions have been proposed (Swanson and Hernán, 2013)), alternative assumptions mean that the causal parameters identified by different instrumental variables are likely to be different. While these assumptions are restrictive, a causal estimate has an interpretation as a test statistic for the null hypothesis that the risk factor is not causal for the outcome, without requiring the assumptions of linearity and homogeneity of the genetic effects on the risk factor (Burgess, Butterworth and Thompson, 2015).
We assume that summarized data are available in the form of association estimates (beta-coefficients and standard errors) with the risk factor and with the outcome for j = 1, ..., J genetic variants that are instrumental variables. The association estimates with the risk factor are denoted β̂_Xj with standard errors σ_Xj; the association estimates with the outcome are denoted β̂_Yj with standard errors σ_Yj. The genetic variants are assumed to be independently distributed (that is, not in linkage disequilibrium).
2.1. Standard inverse-variance weighted method. The ratio estimate of the causal effect of the risk factor on the outcome based on the jth genetic variant is β̂_Yj / β̂_Xj (Lawlor et al., 2008). We refer to this as β̂_IVj. The variance of the ratio of two random variables can be calculated using the delta method; the formula including first- and second-order terms for the variance of β̂_IVj is:

var(β̂_IVj) = σ_Yj^2 / β̂_Xj^2 + β̂_Yj^2 σ_Xj^2 / β̂_Xj^4 - 2 θ β̂_Yj σ_Xj σ_Yj / β̂_Xj^3    (2)

where θ is the correlation between β̂_Yj and β̂_Xj (Thomas, Lawlor and Thompson, 2007). This can be rewritten in terms of the causal estimate β̂_IVj as:

var(β̂_IVj) = (σ_Yj^2 + β̂_IVj^2 σ_Xj^2 - 2 θ β̂_IVj σ_Xj σ_Yj) / β̂_Xj^2    (3)

Assuming that the correlation between β̂_Yj and β̂_Xj is zero (this would be the case if the associations with the risk factor and with the outcome were estimated in non-overlapping datasets, known as a two-sample analysis (Pierce and Burgess, 2013)), the variance is:

var(β̂_IVj) = (σ_Yj^2 + β̂_IVj^2 σ_Xj^2) / β̂_Xj^2    (4)

If only the first-order term from the delta method is taken, then the variance is:

var(β̂_IVj) = σ_Yj^2 / β̂_Xj^2    (5)

The inverse-variance weighted (IVW) estimate is a weighted mean of the causal estimates from each genetic variant considered individually:

β̂_IVW = Σ_j β̂_IVj σ_IVj^{-2} / Σ_j σ_IVj^{-2}

where σ_IVj^2 is the chosen variance estimate of β̂_IVj. This is equivalent to meta-analysing the causal estimates from each genetic variant using the standard inverse-variance weighted formula (hence the name "inverse-variance weighted estimate") under a fixed-effect model (Borenstein et al., 2009). Using the first-order variance estimates (equation 5), the IVW estimate is:

β̂_IVW = Σ_j β̂_Xj β̂_Yj σ_Yj^{-2} / Σ_j β̂_Xj^2 σ_Yj^{-2}

This is the same estimate as would be obtained from a weighted linear regression of the β̂_Yj coefficients on the β̂_Xj coefficients with no intercept term, using the σ_Yj^{-2} as weights.
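As an illustration, the short Python sketch below computes the per-variant ratio estimates and their delta-method variances corresponding to equations (3)-(5); the function name and interface are our own, and the default θ = 0 corresponds to the two-sample setting.

import numpy as np

def ratio_estimates(bx, by, sx, sy, theta=0.0):
    # bx, by: genetic associations with the risk factor and the outcome
    # sx, sy: their standard errors; theta: correlation between them
    bx, by, sx, sy = map(np.asarray, (bx, by, sx, sy))
    b_iv = by / bx
    # second-order delta-method variance (reduces to eq. 4 when theta = 0)
    var2 = (sy**2 + b_iv**2 * sx**2 - 2 * theta * b_iv * sx * sy) / bx**2
    # first-order approximation (eq. 5)
    var1 = sy**2 / bx**2
    return b_iv, var1, var2

b, v1, v2 = ratio_estimates([0.10], [0.02], [0.01], [0.01])
print(b, v1, v2)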
Using the first-order weights and assuming a fixed-effect model (Section 2.3), the standard error is:

se(β̂_IVW) = sqrt( 1 / Σ_j β̂_Xj^2 σ_Yj^{-2} )

This is the form of the inverse-variance weighted estimate as it was initially proposed (Johnson, 2013; Ehret et al., 2011; Dastani et al., 2012).
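The fixed-effect IVW estimate and its standard error under first-order weights can then be computed directly from summarized data, as in the following minimal sketch (toy numbers only):

import numpy as np

def ivw_fixed(bx, by, sy):
    # Fixed-effect IVW estimate with first-order weights sy^-2
    bx, by, sy = map(np.asarray, (bx, by, sy))
    w = bx**2 / sy**2
    beta = np.sum(bx * by / sy**2) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, se

bx = np.array([0.10, 0.08, 0.12])     # variant-risk factor associations
by = np.array([0.021, 0.015, 0.026])  # variant-outcome associations
sy = np.array([0.010, 0.009, 0.012])  # standard errors of by
print(ivw_fixed(bx, by, sy))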
2.2. Equivalence to two-stage least squares estimate. The inverse-variance weighted estimate using first-order weights is also equal to the estimate obtained from the two-stage least squares method that is commonly used with individual-level data (sample size N). If we write the risk factor as X (usually an N × 1 matrix, although the result can be generalized for multiple risk factors (Burgess, Dudbridge and Thompson, 2015a)), the outcome as Y (an N × 1 matrix), and the instrumental variables as Z (an N × J matrix), then the two-stage least squares estimate of the causal effect (Baum, Schaffer and Stillman, 2003) is:

β̂_2SLS = (X^T Z (Z^T Z)^{-1} Z^T X)^{-1} X^T Z (Z^T Z)^{-1} Z^T Y

This estimate can be obtained by sequential regression of the risk factor on the instrumental variables, and then of the outcome on the fitted values of the risk factor from the first-stage regression.
Regression of Y on Z gives beta-coefficients β̂_Y = (Z^T Z)^{-1} Z^T Y, with standard errors the square roots of the diagonal elements of the matrix (Z^T Z)^{-1} σ^2, where σ is the residual standard error. If the instrumental variables are perfectly uncorrelated, then the off-diagonal elements of (Z^T Z)^{-1} σ^2 are all zero. Regression of X on Z gives beta-coefficients β̂_X = (Z^T Z)^{-1} Z^T X. Weighted linear regression of the beta-coefficients β̂_Y on the beta-coefficients β̂_X using the inverse-variance weights (Z^T Z) σ^{-2} gives the estimate:

β̂ = (β̂_X^T (Z^T Z) β̂_X)^{-1} β̂_X^T (Z^T Z) β̂_Y = (X^T Z (Z^T Z)^{-1} Z^T X)^{-1} X^T Z (Z^T Z)^{-1} Z^T Y

which coincides with the two-stage least squares estimate. The assumption of uncorrelated instrumental variables ensures that the regression coefficients from univariate regressions (as in the regression-based methods) equal those from multivariable regression (as in the two-stage least squares method). In practice, the two-stage least squares and weighted regression-based estimates will differ slightly, as there will be non-zero correlations between the genetic variants in finite samples even if the variants are truly uncorrelated in the population. However, these differences are likely to be slight, and to tend to zero asymptotically (Burgess, Dudbridge and Thompson, 2015b).
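A quick numerical check of this equivalence, using simulated data with uncorrelated variants, might look as follows; the simulation settings are arbitrary, and the two estimates agree only up to the finite-sample correlations between variants noted above.

import numpy as np

rng = np.random.default_rng(1)
N, J = 5000, 5
Z = rng.binomial(2, 1/3, size=(N, J)).astype(float)
u = rng.normal(size=N)
x = Z @ rng.uniform(0.1, 0.3, J) + u + rng.normal(size=N)
y = 0.2 * x + u + rng.normal(size=N)

# Center variables (equivalent to including intercepts)
Zc, xc, yc = Z - Z.mean(0), x - x.mean(), y - y.mean()

# Two-stage least squares: regress y on the fitted values of x from Z
xhat = Zc @ np.linalg.solve(Zc.T @ Zc, Zc.T @ xc)
b_2sls = (xhat @ yc) / (xhat @ xhat)

# Summarized-data route: univariate slopes and SEs, then first-order IVW
bx = np.array([np.polyfit(Z[:, j], x, 1)[0] for j in range(J)])
by = np.array([np.polyfit(Z[:, j], y, 1)[0] for j in range(J)])
szz = (Zc**2).sum(0)
res = np.array([yc - by[j] * Zc[:, j] for j in range(J)])
sy = np.sqrt((res**2).sum(1) / (N - 2) / szz)
b_ivw = np.sum(bx * by / sy**2) / np.sum(bx**2 / sy**2)
print(b_2sls, b_ivw)   # nearly equal, differing only via finite-sample correlations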
2.3. Fixed- versus random-effects. A fixed-effect meta-analysis assumes that the causal effects targeted by each genetic variant are all equal. While this would be true if all the genetic variants were valid instrumental variables, and also under the additional linearity assumptions stated above, it may not be true in practice. For instance, genetic variants may affect the exposure via different mechanisms, leading to different magnitudes of effect on the outcome. Alternatively, some variants may have direct effects on the outcome that do not pass via the risk factor, in which case not all genetic variants are valid instrumental variables. To accommodate heterogeneity in the causal effects identified by each genetic variant, a random-effects meta-analysis may be preferred. We outline two ways to model this heterogeneity: an additive random-effects model and a multiplicative random-effects model.
2.4. Additive and multiplicative random-effects models. In a fixed-effect meta-analysis, we assume that the estimates from each instrumental variable, β̂_IVj, can be modelled as normally distributed with common mean β_j = β and variance σ_IVj^2. In a random-effects meta-analysis, the mean values β_j are additionally assumed to vary (Higgins, Thompson and Spiegelhalter, 2009). In an additive random-effects model, the β_j are assumed to be normally distributed with mean μ_β and variance φ_A^2. Any additional variability beyond that predicted by the fixed-effect model (φ_A > 0) is interpreted as heterogeneity between the causal effects targeted by each instrumental variable. An estimate of the heterogeneity parameter, φ̂_A, is often obtained by a method-of-moments estimator developed by DerSimonian and Laird (DerSimonian and Laird, 1986).
In a multiplicative random-effects model, the β̂_IVj estimates are assumed to be normally distributed with mean β and variance φ_M^2 σ_IVj^2. This model can be fitted by linear regression of the β̂_Yj on the β̂_Xj using the σ_Yj^{-2} as weights. A fixed-effect model can be fitted by setting the residual standard error in the regression model to one; this can be achieved after fitting the regression model by dividing the standard error by the estimate of the residual standard error (Thompson and Sharp, 1999). A multiplicative random-effects model can be fitted by allowing the residual standard error (which equals the heterogeneity parameter φ_M) to be estimated as part of the model; the multiplicative random-effects model is therefore equivalent to an overdispersed regression model. In the case of underdispersion (that is, when the estimated residual standard error is less than one), the standard errors should be corrected by setting φ̂_M = 1, as any underdispersion is assumed to occur by chance and not to be empirically justified.
The point estimate from a fixed-effect meta-analysis is identical to that from a multiplicative random-effects meta-analysis (Thompson and Sharp, 1999). However, it differs from that of an additive random-effects meta-analysis when φ̂_A > 0, as the weights in the additive random-effects meta-analysis are inflated to account for heterogeneity. As heterogeneity increases, the weights become more similar, which means that estimates with low weights are (relatively speaking) upweighted in an additive random-effects meta-analysis.
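The following Python sketch computes both random-effects variants on top of first-order weights: the DerSimonian-Laird moment estimator for the additive model, and the overdispersion parameter, truncated at one, for the multiplicative model. The implementation details are a plausible reading of the description above, not the authors' Appendix code (which is in R).

import numpy as np

def ivw_random_effects(b_iv, se_iv):
    # b_iv: per-variant causal estimates; se_iv: their standard errors
    b_iv, se_iv = map(np.asarray, (b_iv, se_iv))
    w = 1.0 / se_iv**2
    beta_fe = np.sum(w * b_iv) / np.sum(w)     # fixed-effect estimate
    J = len(b_iv)
    Q = np.sum(w * (b_iv - beta_fe) ** 2)      # Cochran's Q statistic

    # Additive model: DerSimonian-Laird moment estimate of phi_A^2
    phiA2 = max(0.0, (Q - (J - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wA = 1.0 / (se_iv**2 + phiA2)
    beta_add = np.sum(wA * b_iv) / np.sum(wA)
    se_add = np.sqrt(1.0 / np.sum(wA))

    # Multiplicative model: same point estimate, overdispersed SE,
    # with phi_M^2 = Q/(J-1) truncated below at 1 (no underdispersion)
    phiM2 = max(1.0, Q / (J - 1))
    beta_mult, se_mult = beta_fe, np.sqrt(phiM2 / np.sum(w))
    return (beta_add, se_add), (beta_mult, se_mult)

print(ivw_random_effects([0.2, 0.1, 0.3], [0.05, 0.04, 0.06]))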
2.5. Weak instrument bias. Although instrumental variable estimates are consistent (and so asymptotically unbiased), they can suffer from substantial bias in finite samples (Staiger and Stock, 1997; Stock, Wright and Yogo, 2002). This bias, known as 'weak instrument bias', occurs when the instrumental variables explain only a small proportion of the variance in the risk factor. In a conventional Mendelian randomization analysis in which the risk factor and outcome are measured in the same participants (a one-sample analysis), weak instrument bias is in the direction of the observational association between the risk factor and the outcome (Burgess, Thompson and CRP CHD Genetics Collaboration, 2011). It can also lead to overly narrow confidence intervals and over-rejection of the causal null hypothesis. Bias from the inverse-variance weighted method using first-order weights and a fixed-effect model has been shown to be similar to that from the two-stage least squares method in a realistic simulation study (Burgess, Butterworth and Thompson, 2013). However, bias and coverage properties have not been investigated for different choices of weights or for random-effects models.
3. Motivating example: analysis of the causal effect of early menopause on triglycerides. This paper was motivated by a particular implementation of two versions of the inverse-variance weighted method with different choices of weights that gave substantially different answers. A Mendelian randomization analysis was performed to assess the causal effect of early menopause risk on triglycerides using 47 genetic variants. Associations of the genetic variants with early menopause (and their standard errors) were obtained from Day et al. (2015); the associations represent the number of years earlier menopause per additional effect allele. Associations of the genetic variants with triglycerides (and their standard errors) were obtained from the Global Lipids Genetics Consortium (2013). These associations are provided in Appendix Table A1 and displayed graphically in Appendix Figure A1. Analyses for the motivating example were performed in Microsoft Excel (Windows 2000 version) and R (version 3.1.2) (R Core Team, 2014).
Fixed-effect inverse-variance weighted analyses were performed using the second-order weights (equation 4) and the first-order weights (equation 5). The weights were substantially the same in both cases: 35 of the 47 weights differed by less than 5%, and 44 differed by less than 10%. Using the second-order weights (equation 4), the causal effect of early menopause on triglycerides was estimated as 0.0021 (standard error 0.0037; 95% confidence interval: -0.0052, 0.0095). Using the first-order weights (equation 5), the causal effect estimate was 0.0103 (standard error 0.0036; 95% confidence interval: 0.0032, 0.0175). These estimates represent the change in triglycerides in standard deviation units per 1 year earlier menopause. The applied implications of this analysis are not the focus of this paper, and depend on the validity of the instrumental variable assumptions for the genetic variants used in the analysis. However, the magnitude of the difference between the estimates (over twice the standard error of either estimate) is striking, and the conclusions of the two analyses would be diametrically opposite: in the first case, the causal null hypothesis (that early menopause does not affect triglycerides) would not be rejected (p = 0.57), whereas in the second case it would be rejected (p = 0.005). By comparison, using the first-order weights and a multiplicative random-effects model, the standard error is 0.0103, meaning that the causal null hypothesis would not be rejected (p = 0.32).
It turns out that the genetic variant with the greatest difference between the first- and second-order weights is rs704795, the variant that also has the largest causal estimate. The estimate from this variant is heavily downweighted in the analysis using the second-order weights compared with the first-order weights. Omitting this variant from the analysis led to similar estimates using the second- and first-order weights (0.0000 versus -0.0001). Another interesting observation is that use of the second-order weights reduced the heterogeneity between the causal estimates from the individual genetic variants (for example, in the multiplicative random-effects model, φ̂_M was 1.69 using the second-order weights compared with 2.83 using the first-order weights). This suggests that, even though the second-order standard errors for the causal estimates from the individual variants are always greater than the first-order standard errors, the precision of the overall causal estimate under a random-effects model may be improved by using the second-order weights when there is heterogeneity between the causal estimates (in this example, se(β̂_IVW) = 0.0063 in the multiplicative random-effects model using the second-order weights, versus se(β̂_IVW) = 0.0103 using the first-order weights).
Estimates from each of the methods are summarized in Table 1. Genetic variants whose ratio estimates β̂_Yj / β̂_Xj are large relative to the precision of β̂_Xj will be downweighted by the second-order weights. This means that genetic variants that have large and heterogeneous effects on the outcome compared with other variants, and/or are weak instruments, will be downweighted. Further methodological investigation is therefore needed into the impact on the bias and coverage properties of inverse-variance weighted methods for Mendelian randomization analyses, and into which version of the method should be preferred in applied practice.
4. Simulation study. In this manuscript, we consider estimates from the inverse-variance weighted method using the weights from equations (4, second-order) and (5, first-order), and fixed-effect, additive random-effects, and multiplicative random-effects models for combining the estimates from different genetic variants. Code for implementing these methods is provided in the Appendix. Analyses for the simulation study were performed in R (version 3.1.2).
The data-generating model is as follows:

z_ij ~ Binomial(2, 1/3) independently for j = 1, ..., 20
x_i = Σ_j α_j z_ij + u_i + ε_Xi
y_i = Σ_j β_Zj z_ij + β_X x_i + β_U u_i + ε_Yi

Individuals are indexed by i. The 20 genetic variants z_ij, indexed by j, are drawn from binomial distributions, corresponding to single nucleotide polymorphisms (SNPs) with minor allele frequency 1/3. The risk factor x_i is a linear combination of the genetic variants, a confounder (u_i) that is assumed to be unmeasured, and an independent error term (ε_Xi). The outcome y_i is a linear combination of the genetic variants, the risk factor, the confounder, and a further independent error term (ε_Yi). The per-allele effects of the genetic variants on the risk factor (α_j) are drawn from a normal distribution with mean α and variance 0.02^2. The direct effects of the genetic variants on the outcome (β_Zj; these effects are not via the risk factor) are zero when the genetic variants are valid instrumental variables. The causal effect of the risk factor on the outcome, the main parameter of interest, is β_X. The effect of the confounder on the outcome is β_U.
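A sketch of this data-generating model in Python is given below. The error terms are taken as standard normal, an assumption where the text does not state the distributions, and the genotype coding Binomial(2, 1/3) reflects an additive SNP with minor allele frequency 1/3.

import numpy as np

def simulate(N=5000, J=20, alpha=0.08, beta_X=0.0, beta_U=1.0,
             pleiotropy=False, seed=0):
    # One dataset from the data-generating model described above;
    # error terms are standard normal (assumed, not stated in the text)
    rng = np.random.default_rng(seed)
    Z = rng.binomial(2, 1/3, size=(N, J)).astype(float)  # SNPs, MAF 1/3
    a = rng.normal(alpha, 0.02, J)       # per-allele effects on x
    bz = rng.normal(0, 0.02, J) if pleiotropy else np.zeros(J)
    u = rng.normal(size=N)               # unmeasured confounder
    x = Z @ a + u + rng.normal(size=N)
    y = Z @ bz + beta_X * x + beta_U * u + rng.normal(size=N)
    return Z, x, y

Z, x, y = simulate()
print(x.mean(), y.mean())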
We consider four scenarios:
1. a one-sample analysis in which the genetic variants are all valid instrumental variables;
2. a one-sample analysis in which the genetic variants have direct effects on the outcome;
3. a two-sample analysis in which the genetic variants are all valid instrumental variables; and
4. a two-sample analysis in which the genetic variants have direct effects on the outcome.
In scenarios 1 and 2, data are generated for N = 5000 participants, and associations with the risk factor and with the outcome are estimated in these participants. In scenarios 3 and 4, data are generated for N = 10 000 participants. Associations with the risk factor are estimated in the first 5000 participants, and associations with the outcome in the second 5000 participants. Two-sample analyses are common in Mendelian randomization, particularly when the association estimates are obtained from publicly available data sources . In a two-sample analysis, weak instrument bias acts in the direction of the null, and hence should not lead to misleading inferences (Pierce and Burgess, 2013). However, it is common that many participants in large genetic consortia overlap, such that even if the associations with the risk factor and with the outcome are obtained from separate consortia, they may not be estimated in separate participants. Hence, the one-sample and two-sample settings are both of interest in this paper.
In scenarios 1 and 3, the β Zj parameters are all set to zero, and the genetic variants are all valid instrumental variables. In scenarios 2 and 4, the β Zj parameters are drawn from a normal distribution with mean 0 and variance 0.02 2 . This is a situation known as "balanced pleiotropy" . Pleiotropy refers to a genetic variant having an independent effect on the outcome that is not via the risk factor (Davey Smith and Hemani, 2014). Balanced pleiotropy means that the pleiotropic effects for all strengths of instrument have mean zero. Such pleiotropic effects should induce heterogeneity between the causal estimates using different genetic variants. Simulations conducted under a multiplicative random-effects model with balanced pleiotropy have suggested that estimates may not be biased on average . Additional simulations for the case of directional (that is, unbalanced) pleiotropy are considered in the Appendix.
Four sets of parameters are considered: two values of the causal effect, β_X = 0 (null causal effect) and β_X = 0.2 (positive causal effect); and two values of the confounder effect, β_U = +1 (positive confounding) and β_U = -1 (negative confounding). Additionally, four values of instrument strength are considered for each set of parameters: α = 0.03, 0.05, 0.08, 0.10. 10 000 simulated datasets are generated in each case. 4.1. Results. Scenarios 1 and 2: Results from scenario 1 (one-sample, valid instruments) and scenario 2 (one-sample, invalid instruments) are presented in Table 2. For each value of instrument strength, set of parameters, and scenario, the mean estimate and the empirical power of the 95% confidence interval (estimate plus or minus 1.96 times its standard error) to reject the null hypothesis are given. The coverage is 100% minus the power; power under the null hypothesis should be 5%. The Monte Carlo standard error of the mean estimate is around 0.001 or less, and that of the power is 0.2% under the null and at most 0.5% otherwise. Additionally, to indicate instrument strength, the mean F statistic and the mean coefficient of determination (R^2 statistic) are given in each case.
With a null causal effect, the results demonstrate the well-known bias and inflated Type 1 error rate of instrumental variable estimates with weak instruments in a one-sample setting. Although the bias is similar for both choices of weights (slightly less with the first-order weights), coverage rates are much worse with the first-order weights. With the second-order weights, neither the additive nor the multiplicative random-effects model detects heterogeneity in the vast majority of cases (particularly for weaker instruments); with the first-order weights, heterogeneity is detected in a greater proportion of simulated datasets. In scenario 1, heterogeneity is not present in the underlying data-generating model and is only estimated by chance; in scenario 2, heterogeneity is expected. For the second-order weights, coverage properties are similar in scenarios 1 and 2, whereas for the first-order weights, coverage properties are worse in scenario 2 for the fixed-effect model, but improved for the random-effects models. For weaker instruments, coverage properties are best using the second-order weights, whereas for stronger instruments, estimates using the first-order weights and a random-effects model perform almost as well, and occasionally better, particularly when there is heterogeneity (scenario 2). However, Type 1 error rates are inflated even in the best-case scenarios.
With a positive causal effect, estimates using the first-order weights generally have better power to detect a causal effect than those using the second-order weights, particularly with weaker instruments. However, in light of the Type 1 error rate inflation, this property should not be overvalued: making fewer Type 2 errors (fewer false negative findings) at the expense of making more Type 1 errors (more false positive findings) is not generally a desirable trade-off.
Additional results from scenarios 1 and 2 are presented in Appendix Table A3. For each value of instrument strength, the (Monte Carlo) standard deviation and the mean standard error of the estimates are presented. This helps judge whether the uncertainty in the effect estimates is correctly accounted for in the standard errors.
The estimates using second-order weights are the least variable throughout, with the lowest standard deviations. The standard deviation of estimates using second-order weights was always less than the mean standard error of the estimates. In contrast, in scenario 1, the estimates using first-order weights were more variable but generally had lower average standard errors; this was always true for the fixed-effect analyses and usually true for the random-effects analyses. However, when there was heterogeneity in the causal estimates identified by the instrumental variables (scenario 2), mean standard errors for the random-effects analyses using first-order weights could exceed those using second-order weights, despite the second-order standard errors for each individual causal estimate being uniformly larger than the first-order standard errors. In scenario 2, mean standard errors for the fixed-effect analyses were generally similar to those in scenario 1, but the standard deviations of the estimates were increased. For the random-effects analyses using the first-order weights in scenario 2, mean standard errors and standard deviations were similar in magnitude. However, mean standard errors using the second-order weights were typically slightly lower, with no loss of coverage (recall Table 2).
Under the null, the standard deviations and mean standard errors are similar whether there is positive or negative confounding, whereas under the alternative, standard errors appear to be wider when confounding is in the same direction as the causal effect, and narrower when it is in the opposite direction. This has previously been observed; see Figure 3 of that reference for a potential explanation.
Scenarios 3 and 4: Results from scenario 3 (two-sample, valid instruments) and scenario 4 (two-sample, invalid instruments) are presented in Table 3 for the mean and power, and in Appendix Table A4 for the standard deviation and standard error. These results demonstrate the well-known bias towards the null in the two-sample setting.
With a null causal effect, no bias is observed. Coverage levels for the second-order weights are conservative, with power below the nominal 5% level. By contrast, in scenario 3, coverage levels with the first-order weights are close to nominal levels, with slight undercoverage for random-effects models. In scenario 4, there is inflation of Type 1 error rates with the first-order weights for a fixed-effect model, but coverage for both the additive and multiplicative random-effects models is close to nominal levels.
With a positive causal effect, bias is in the direction of the null. The bias is more severe using the second-order weights. Power to detect a causal effect is substantially lower using the second-order weights than using the first-order weights, particularly for weaker instruments.
For the first-order weights, mean standard errors are fairly close to the standard deviations of estimates for the fixed-effect model when there is no heterogeneity in the causal effects, and for the random-effects models when there is heterogeneity in the causal effects. In contrast, for the second-order weights, the mean standard errors are larger than the standard deviations throughout. This corresponds with the coverage properties: in a two-sample setting using first-order weights, estimates are unbiased under the null with correct rejection rates, whereas using second-order weights, rejection rates are conservative.
Choice of random-effects model: As for choosing between the additive and multiplicative random-effects models, with the second-order weights there was little difference between the results, or even compared with the results for a fixed-effect model. However, as seen in the motivating example, a difference will emerge if the level of heterogeneity is increased. With the first-order weights, bias was generally slightly less with the additive random-effects model. Coverage under the null was better with an additive random-effects model, and power to detect a causal effect was better with a multiplicative random-effects model. However, differences were slight. Because of the better properties under the null, we therefore prefer the additive random-effects model for the scenarios considered in this paper, although the preference is not a strong one.
Directional pleiotropy: Results with directional pleiotropy are presented in Appendix Table A5. In brief, the results echo those with no pleiotropy and with balanced pleiotropy: the importance of random-effects models, and the preference for use of second-order weights in a one-sample setting and first-order weights in a two-sample setting.

Table 2 Simulation study results for scenarios 1 and 2 (one-sample setting, valid and invalid instrumental variables): mean estimate and power (%) of 95% confidence interval for various inverse-variance weighted methods with four sets of parameter values (null and positive causal effect, positive and negative confounding). The strength of the genetic variants as instruments is indicated: mean per allele effect on the risk factor (α), mean F statistic (F) and mean coefficient of determination (R²) from regression of the risk factor on the genetic variants.

Table 3 Simulation study results for scenarios 3 and 4 (two-sample setting, valid and invalid instrumental variables): mean estimate and power (%) of 95% confidence interval for various inverse-variance weighted methods with four sets of parameter values (null and positive causal effect, positive and negative confounding). The strength of the genetic variants as instruments is indicated: mean per allele effect on the risk factor (α), mean F statistic (F) and mean coefficient of determination (R²) from regression of the risk factor on the genetic variants.
4.2. Additional scenario: extreme outlying variants. In the motivating example, the difference between estimates seemed to be driven by a single rogue variant. In order to better evaluate bias and coverage in this scenario, we considered an additional simulation scenario 5. Rather than generating the direct effects of the genetic variants on the outcome (the β_Zj parameters) from a normal distribution with mean 0 and variance 0.02², we instead generated them from a t distribution with 2 degrees of freedom and multiplied the result by 0.02. The t distribution with a small number of degrees of freedom has much heavier tails than a normal distribution, and so extreme outliers will be more frequent; with 2 degrees of freedom, the variance of the t distribution is not even finite. Simulation results are only considered in the one-sample setting and under the null (β_X = 0), as inflated Type 1 error rates in this scenario are the primary concern.
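As a minimal sketch of this data-generating step (the number of variants here is an assumption for illustration only; it is not taken from the paper), the heavy-tailed draws could be produced as follows:

```python
# Illustrative sketch of the scenario 5 pleiotropic effects: direct effects
# beta_Z drawn from a scaled t(2) distribution instead of N(0, 0.02^2).
import numpy as np

rng = np.random.default_rng(2015)
n_variants = 25                                   # assumed number of variants
beta_Z_normal = rng.normal(0.0, 0.02, n_variants)             # scenarios 2 and 4
beta_Z_heavy = 0.02 * rng.standard_t(df=2, size=n_variants)   # scenario 5

# The t(2) draw occasionally produces extreme outlying direct effects,
# mimicking a single "rogue" variant as in the motivating example.
print(np.round(np.sort(np.abs(beta_Z_heavy))[-3:], 3))
```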
In Table 4, results are given for the inverse-variance weighted methods with different choices of weights and different models for combining the estimates. With a fixed-effect model, coverage rates for the second-order weights are similar to those in scenario 2 with the normally distributed direct effects. For the first-order weights, Type 1 error rates are substantially inflated, well above the nominal 5% level even for the strongest instruments considered in this paper, although bias is similar to that in scenario 2. This corresponds to the motivating example, in which the outlying variant had a large influence on the pooled estimate using the first-order weights, but was heavily downweighted using the second-order weights. However, for a random-effects model using the second-order weights, particularly with the multiplicative random-effects model and for the additive random-effects model with weaker instruments, results were similar to those with a fixed-effect model. In contrast, for a random-effects model with the first-order weights, mean estimates were generally closer to the null (with one notable exception - scenario 5b, α = 0.05 - that was mostly driven by a single aberrant estimate) and coverage levels were much improved. Coverage levels with a random-effects model were generally slightly better with the first-order weights than with the second-order weights, although not uniformly, and the difference was slight. As observed in the motivating example, and particularly with weaker instruments, heterogeneity is more often detected using the first-order weights, as the second-order weights tend to downweight the influence of the outlying variants.

Table 4 Simulation study results for scenario 5 (one-sample setting, invalid instrumental variables with extreme outliers): mean estimate and power (%) of 95% confidence interval for various inverse-variance weighted methods with two sets of parameter values (null causal effect, positive and negative confounding). The strength of the genetic variants as instruments is indicated: mean per allele effect on the risk factor (α), mean F statistic (F) and mean coefficient of determination (R²) from regression of the risk factor on the genetic variants.
5. Discussion. Several high-profile Mendelian randomization analyses have employed summarized data and some version of an inverse-variance weighted method. These include analyses of the causal effect of blood pressure on coronary heart disease risk (Ehret et al., 2011), height on coronary heart disease risk (Nelson et al., 2015), adiponectin on type 2 diabetes risk (Dastani et al., 2012), lipids on type 2 diabetes risk (Fall et al., 2015), and telomere length on risk of various cancers (Zhang et al., 2015), amongst several others. The statistical properties of estimates from the inverse-variance weighted method are therefore of considerable interest.
In this paper, we demonstrated that Type 1 error rates for the inverse-variance weighted method as it was initially proposed (first-order weights, fixed-effect model) are likely to be inflated in a one-sample Mendelian randomization setting either when the instruments are weak, or when there is heterogeneity between the causal estimates targeted by different genetic variants. This can be resolved either by using second-order weights or a random-effects model to combine the estimates from multiple genetic variants. These approaches affect the analysis in different ways: the second-order weights tend to downweight the influence of weak and heterogeneous variants on the overall causal estimate, whereas the random-effects models tend to increase standard errors by allowing for heterogeneity between the causal estimates in the model. While both approaches can be applied simultaneously, our simulations indicate that heterogeneity is less substantial when using the second-order weights. However, there is little disadvantage in assuming a random-effects model, as in the absence of heterogeneity, the fixed-effect analysis is recovered, and in the presence of heterogeneity, the random-effects analysis is more appropriate. Our results give a slight preference for an additive random-effects model over a multiplicative random-effects model.
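To make the distinction concrete, a minimal sketch of the inverse-variance weighted estimator with both choices of weights and the three models for combining per-variant estimates is given below. This is an illustration, not the authors' code: the second-order expression takes θ = 0 (as in the simulations), the additive model uses the common DerSimonian-Laird estimator of τ², which may differ from the estimator used in the paper, and the multiplicative model bounds the dispersion parameter below by 1 so that the fixed-effect analysis is recovered when there is no excess heterogeneity.

```python
import numpy as np

def ivw(beta_x, beta_y, se_x, se_y, weights="second", model="additive"):
    """Combine per-variant ratio estimates beta_y/beta_x by inverse-variance weighting."""
    beta_x = np.asarray(beta_x, float); beta_y = np.asarray(beta_y, float)
    se_x = np.asarray(se_x, float); se_y = np.asarray(se_y, float)
    ratio = beta_y / beta_x                          # per-variant causal estimates
    if weights == "first":                           # first-order delta method
        var_j = se_y**2 / beta_x**2
    else:                                            # second-order, taking theta = 0
        var_j = se_y**2 / beta_x**2 + beta_y**2 * se_x**2 / beta_x**4
    w = 1.0 / var_j
    est = np.sum(w * ratio) / np.sum(w)              # fixed-effect estimate
    k = len(ratio)
    q = np.sum(w * (ratio - est)**2)                 # Cochran's Q heterogeneity statistic
    if model == "fixed":
        se = np.sqrt(1.0 / np.sum(w))
    elif model == "multiplicative":                  # inflate SEs by sqrt(phi), phi >= 1
        phi = max(q / (k - 1), 1.0)
        se = np.sqrt(phi / np.sum(w))
    else:                                            # additive: DerSimonian-Laird tau^2
        tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w2 = 1.0 / (var_j + tau2)
        est = np.sum(w2 * ratio) / np.sum(w2)
        se = np.sqrt(1.0 / np.sum(w2))
    return est, se
```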
In a two-sample Mendelian randomization setting, weak instruments do not lead to inflated Type 1 error rates, but rather to attenuation of estimates towards the null. The use of second-order weights was demonstrated to lead to conservative inference, whereas first-order weights gave correct coverage rates under the null. When there was heterogeneity in the causal estimates from different genetic variants, which was simulated to arise due to genetic variants having pleiotropic effects, a fixed-effect model with first-order weights was shown to lead to undercoverage, although this was corrected by use of a random-effects model.
A conclusion from this paper is the need to assess heterogeneity between the causal estimates from different genetic variants prior to performing a Mendelian randomization analysis based on multiple genetic variants, for example by a scatter plot of the gene-risk factor and gene-outcome associations (Appendix Figure A1). The presence of heterogeneous variants is likely to indicate violation of the instrumental variable assumptions for some of the variants, and can lead to misleading estimates as observed in the motivating example. Assessment for heterogeneity is also relevant when performing an analysis using individual-level data, for example using a two-stage least squares or allele score method.

5.1. Limitations of simulation studies. Our conclusions are limited as they are based on simulation studies. This is by necessity, as the properties of the estimators that we want to assess are finite-sample properties, not asymptotic properties.
Our findings may have differed if we had considered a different data-generating mechanism, or more substantial heterogeneity between estimates from genetic variants. However, the findings are in line with theoretical considerations, and we believe the scenarios that we have chosen to be representative of a typical Mendelian randomization investigation in practice.
5.2. Unbalanced pleiotropy and robust methods (Egger regression, median-based approaches). In particular, we mostly considered scenarios in this paper corresponding to balanced pleiotropy. In the case of unbalanced (or directional) pleiotropy, causal estimates from inverse-variance weighted methods are biased and Type 1 error rates are inflated in all settings, even in the asymptotic limit. This can be resolved in a number of ways. In Egger regression, we perform a weighted linear regression of the gene-outcome association estimates (β_Yj) on the gene-risk factor association estimates (β_Xj) in the same way as in an inverse-variance weighted method, except that an intercept term is included in the regression model. This intercept term represents the average direct effect of the genetic variants on the outcome. (It is additionally required that all genetic variants are orientated such that the β_Xj estimates are all positive, or are all negative.) The causal estimate from Egger regression is the slope parameter from this regression model. It is a consistent estimate of the causal effect under the alternative assumption that the direct effects of the genetic variants are uncorrelated with the instrument strength; this is known as the InSIDE (instrument strength independent of direct effect) assumption. In the notation of the data-generating model of equation (9), the α_j parameters must be uncorrelated with the β_Zj parameters; in the balanced pleiotropy examples of this paper, these parameters are drawn from independent distributions. This is a weaker assumption than the standard instrumental variable assumptions (the β_Zj parameters all equal zero) or the assumption of balanced pleiotropy (the β_Zj parameters have mean zero).
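A minimal sketch of Egger regression from summarized data is given below, assuming first-order weights as in the original proposal; the input names and the orientation step are illustrative, not the authors' implementation.

```python
import numpy as np

def egger(beta_x, beta_y, se_y):
    """Weighted regression of beta_y on beta_x with an intercept (Egger regression)."""
    beta_x = np.asarray(beta_x, float); beta_y = np.asarray(beta_y, float)
    se_y = np.asarray(se_y, float)
    flip = np.sign(beta_x)                        # orient variants so all beta_x >= 0
    bx, by = beta_x * flip, beta_y * flip
    w = 1.0 / se_y**2                             # first-order weights
    X = np.column_stack([np.ones_like(bx), bx])   # intercept + slope design matrix
    W = np.diag(w)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ by)
    intercept, slope = coef                       # intercept: average direct effect
    return intercept, slope                       # slope: causal estimate
```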
Similar considerations as to the choice of weights apply to Egger regression; the original proposal was equivalent to using the first-order weights. Informal simulations (not presented) have suggested that the same conclusions from this paper also hold for Egger regression (particularly the use of random-effects models). However, a full investigation would require simulating data with unbalanced pleiotropy (potentially both when the InSIDE assumption is satisfied and when it is violated); this is considered to be beyond the scope of this paper.
One notable difference about Egger regression is that if the genetic variants are allowed to have direct effects on the outcome, then heterogeneity in the causal estimates from individual variants is expected. Therefore, while heterogeneity in an inverse-variance weighted analysis is unwelcome and a potential sign that the assumptions are not satisfied, heterogeneity in the Egger method is a natural consequence of weakening the instrumental variable assumptions and does not necessarily invalidate the analysis.
Another approach for dealing with unbalanced pleiotropy is a median-based approach. The median of the causal estimates from each of the genetic variants taken individually is a consistent estimate of the causal effect under the assumption that at least 50% of the genetic variants are valid instrumental variables (Han, 2008). This is a different assumption to the InSIDE assumption, and neither assumption includes all cases of the other. Confidence intervals for the median can be obtained by bootstrapping; we suggest estimating a bootstrap standard error and forming confidence intervals from the standard error. A weighted median estimator can also be obtained using inverse-variance weights in a weighted median function. This method may have better asymptotic properties than an inverse-variance weighted method in a number of cases, as outlying estimates do not influence the median of the distribution. Simulations performed using second- and first-order weights from the delta method suggested that weighted median estimates were not sensitive to the particular choice of weighting function. In a median-based approach, the choice of weights influences not only the bias and variability of estimates, but also the identification condition, as the consistency criterion for a weighted median estimator is that 50% of the weight in the analysis corresponds to valid instrumental variables. Hence, in some cases, the simple (unweighted) median estimator may be preferred even if it is less efficient.
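The following sketch illustrates one way such a weighted median with a bootstrap standard error could be computed; the parametric bootstrap of the summary estimates and the decision to hold the weights fixed across iterations are choices made here for illustration and may differ from the cited implementations.

```python
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    midpoints = np.cumsum(w) - 0.5 * w            # cumulative weight midpoints
    return float(np.interp(0.5 * np.sum(w), midpoints, v))

def weighted_median_estimate(beta_x, beta_y, se_x, se_y, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    beta_x = np.asarray(beta_x, float); beta_y = np.asarray(beta_y, float)
    se_x = np.asarray(se_x, float); se_y = np.asarray(se_y, float)
    w = beta_x**2 / se_y**2                       # inverse-variance (first-order) weights
    est = weighted_median(beta_y / beta_x, w)
    boots = []
    for _ in range(n_boot):                       # parametric bootstrap of summary data
        bx = rng.normal(beta_x, se_x)
        by = rng.normal(beta_y, se_y)
        boots.append(weighted_median(by / bx, w)) # weights held fixed, a simplification
    return est, float(np.std(boots))
```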
5.3. Overlap between the samples in a 'two-sample' analysis. In practice, before following the recommendation to use first-order weights in a two-sample Mendelian randomization setting, it is advisable to check whether the samples used to estimate the gene-risk factor and the gene-outcome associations truly do not overlap. In the motivating example of the paper, genetic associations with early menopause are obtained from a consortium of 33 studies, and genetic associations with triglycerides from a consortium of 23 studies. Although the consortia appear to be different, in fact at least 17 of the studies are included in both consortia, meaning that the analysis is not a true two-sample analysis. The exact extent of the overlap is not clear without the individual-level data, but it is likely to be substantial.
Although the full second-order expression for the variance of a causal estimate (equation 2) includes a term θ that depends on the overlap between the two datasets, in this paper we have set θ = 0 even in a one-sample setting. This was undertaken for computational simplicity in the simulation study setting. If the individual-level data were available, an estimate of θ could be obtained by bootstrapping the samples, and calculating the correlation between the bootstrapped distributions of β_Yj and β_Xj for each j. However, this was infeasible in the simulation study. Additionally, if the individual-level data are not available, it is unclear how to estimate θ. A sensitivity analysis can be performed for the value of θ; results for the motivating example of this paper are shown in Appendix Table A2. We see that different choices of θ lead to similar causal estimates and 95% confidence intervals for each of the inverse-variance weighted methods.
5.4. Interpretation of a random-effects estimate. A theoretical concern in recommending the use of random-effects models for Mendelian randomization is the interpretation of the random-effects estimate. Under the assumptions of linearity and no effect modification, and in particular under the stable unit treatment value assumption (SUTVA (Cox, 1958) - this states that the effect on the outcome of modifying the risk factor should be the same for all possible interventions on the risk factor, also expressed as "no multiple versions of treatment" (VanderWeele and Hernán, 2013)), the causal estimates from different instrumental variables should target the same causal parameter. However, in reality, taking the context of the motivating example, different interventions on age at menopause (such as oophorectomy, hysterectomy, and hormone therapy) may have different effects on triglyceride levels; similar heterogeneity is expected for genetic variants that affect age at menopause via different biological pathways. By allowing for heterogeneity in causal estimates from different genetic variants, the notion of a single causal effect of the risk factor on the outcome is lost, and it is not clear which intervention on the risk factor the causal estimate targets. Additionally, if the choice of genetic variants changes, then the causal parameter also changes, as the random-effects distribution is taken across a different set of variants. The random-effects estimate is correctly interpreted not as targeting a common causal effect, but as targeting the average value of the distribution of causal effects identified by the different variants (Riley et al., 2011). This subtlety is not unique to causal estimation; rather, it is relevant in meta-analysis more widely (Higgins, Thompson and Spiegelhalter, 2009). However, heterogeneity is more forgivable in meta-analysis; in Mendelian randomization, it could be argued that any deviation from homogeneity should be interpreted as evidence that the instrumental variable assumptions are violated for at least one of the genetic variants, and so a causal estimate based on all the genetic variants should not be presented.
We take a practical approach, and view these theoretical concerns as secondary to the primary concern of obtaining reliable causal inferences (Burgess, Butterworth and Thompson, 2015). Our view is that a literal interpretation of causal effect estimates from Mendelian randomization is rarely justified, due to differences between the way in which genetic variants influence the risk factor and any potential clinical intervention on the risk factor in practice. However, if there is substantial heterogeneity, or if there are individual genetic variants that are clear outliers, then the overall causal estimate is likely to be unreliable even as a test of causality, and the instrumental variable assumptions should be examined carefully, particularly for the outlying variants.

5.5. Conclusion. In conclusion, in a Mendelian randomization analysis using summarized data in a (strict) two-sample setting (that is, when there is no overlap between the datasets in which associations with the risk factor and with the outcome are estimated), the inverse-variance weighted method with first-order weights may be preferred, although a random-effects model for combining the causal effects from the individual genetic variants should be used. In a one-sample setting, or if there is any overlap between the datasets, then a random-effects model using the second-order weights should be preferred to avoid false-positive findings. If the overlap is not substantial, then an analysis using the first-order weights may be presented as a sensitivity analysis, as it may have increased power to detect a causal effect.
We conduct inverse-variance weighted analyses using weights derived from equation (2) and fixed-effect, additive random-effects, and multiplicative random-effects models for θ = −0.2, −0.1, 0, 0.1, 0.2, 0.3. The causal estimates and 95% confidence intervals from each analysis are presented in Appendix Table A2. We see that the estimates and confidence intervals do not change substantially despite the wide range of values of θ considered.
The true value of θ should be zero if the associations with the risk factor and outcome are estimated in non-overlapping samples, and similar to the correlation between the risk factor and the outcome if the associations are estimated in the same individuals. With partial overlap, the value of θ will be between these two values.

Table A2 Estimates and 95% confidence intervals (CI) from inverse-variance weighted analyses using second-order weights from equation (2) with different values of the correlation parameter θ.
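As an illustration of such a sensitivity analysis, the sketch below assumes that equation (2) takes the standard second-order delta-method form with a covariance term proportional to θ; since the real association estimates are not reproduced in this appendix, the summary data here are synthetic.

```python
import numpy as np

def second_order_var(bx, by, sx, sy, theta):
    # Assumed form of equation (2): delta-method variance of by/bx with
    # correlation theta between the two association estimates.
    return sy**2 / bx**2 + by**2 * sx**2 / bx**4 - 2 * theta * by * sx * sy / bx**3

def ivw_fixed(bx, by, sx, sy, theta):
    ratio = by / bx
    w = 1.0 / second_order_var(bx, by, sx, sy, theta)
    est = np.sum(w * ratio) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

rng = np.random.default_rng(1)                        # synthetic summary data
bx = rng.normal(0.1, 0.02, 20); sx = np.full(20, 0.01)
by = 0.2 * bx + rng.normal(0.0, 0.02, 20); sy = np.full(20, 0.02)
for theta in (-0.2, -0.1, 0.0, 0.1, 0.2, 0.3):        # grid as in Appendix Table A2
    est, se = ivw_fixed(bx, by, sx, sy, theta)
    print(f"theta = {theta:+.1f}: estimate = {est:.3f}, SE = {se:.3f}")
```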
A.4. Additional results from simulation study. Additional results from scenarios 1 and 2 are presented in Appendix Table A3, and from scenarios 3 and 4 in Appendix Table A4. For each value of the instrument strength, the (Monte Carlo) standard deviation and the mean standard error of estimates are presented. Using second-order weights, only results from the fixed-effect analyses are presented, as heterogeneity was not detected in the vast majority of datasets, and so results were the same up to 3 decimal places in almost all cases.

Table A3 Further simulation study results for scenarios 1 and 2 (one-sample setting, valid and invalid instrumental variables): standard deviation (SD) of estimates and mean standard error (SE) for various inverse-variance weighted methods with four sets of parameter values (null and positive causal effect, positive and negative confounding) for different strengths of instrument (α).

Table A4 Further simulation study results for scenarios 3 and 4 (two-sample setting, valid and invalid instrumental variables): standard deviation (SD) of estimates and mean standard error (SE) for various inverse-variance weighted methods with four sets of parameter values (null and positive causal effect, positive and negative confounding) for different strengths of instrument (α).
A.5. Additional simulation with directional pleiotropy. To provide some guidance as to the performance of the inverse-variance weighted method when there is directional pleiotropy, we perform a further simulation under this scenario. The parameters and scenarios are taken to be the same as those in the main body of the paper, except that rather than drawing the genetic effects on the risk factor (α_j) and the direct effects of the genetic variants on the outcome (β_Zj) from independent normal distributions as in scenarios 2 and 4, we draw them from a bivariate normal distribution. The univariate distributions of these parameters are the same (the α_j parameters have mean α and variance 0.02²; the β_Zj parameters have mean 0 and variance 0.02²), but the correlation between the distributions is set to 0.4. This correlation means that the direct effects of genetic variants on the outcome are greater for those variants that have stronger effects on the risk factor, and so for those variants that receive more weight in the analysis. Hence, although the overall pleiotropic effect has mean zero, the pleiotropic effects of weak and strong instruments separately do not have mean zero. We refer to the one-sample setting with directional pleiotropy as scenario 6, and the two-sample setting with directional pleiotropy as scenario 7.

Results for the mean estimate and empirical power to detect a causal effect are given in Appendix Table A5. In the one-sample setting (scenario 6), there is bias in the direction of confounding in all cases. While Type 1 error rates under the null are inflated throughout, there is a clear preference for the use of second-order weights and random-effects models, as well as a slight preference for the additive random-effects model (based on slightly more conservative coverage properties with first-order weights). This mirrors the advice in the main paper. In the two-sample setting (scenario 7), bias under the null is in the positive direction, whereas bias under the alternative is towards the null. Type 1 error rates under the null with random-effects models are close to nominal levels, with conservative coverage for second-order weights, and slightly anti-conservative coverage for first-order weights. However, the advice from the main paper to use first-order weights in a two-sample setting would not lead to overly misleading inferences, as Type 1 error rates with first-order weights are close to the nominal 5% level. Power to detect a causal effect is greater using first-order weights in this case. Hence, on the basis of these simulations, the advice in the main body of the paper also holds with directional pleiotropy.
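A minimal sketch of this correlated draw is shown below; the number of variants and the value of α are assumptions for illustration only.

```python
# Sketch of the correlated parameter draws for scenarios 6 and 7: effects on
# the risk factor (alpha_j) and direct effects on the outcome (beta_Zj) from
# a bivariate normal with correlation 0.4.
import numpy as np

rng = np.random.default_rng(6)
n_variants, alpha, sd, rho = 25, 0.1, 0.02, 0.4
cov = np.array([[sd**2, rho * sd**2],
                [rho * sd**2, sd**2]])
draws = rng.multivariate_normal([alpha, 0.0], cov, size=n_variants)
alpha_j, beta_Zj = draws[:, 0], draws[:, 1]
# Stronger instruments (larger alpha_j) tend to carry larger direct effects,
# so the weighted mean pleiotropic effect is no longer zero.
print(np.corrcoef(alpha_j, beta_Zj)[0, 1])
```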
In practice, we repeat that estimates from the inverse-variance weighted method will typically be biased if the genetic variants are not valid instruments (and the example of directional pleiotropy considered here is far from extreme), and recommend the use of robust methods (such as the Egger method and median-based methods introduced in the discussion of the paper) as sensitivity analyses for applied Mendelian randomization investigations. | 2015-11-27T17:12:33.000Z | 2015-11-27T00:00:00.000 | {
"year": 2015,
"sha1": "2fbe3e3e8f77272eac86a716a1c32493219ab5ad",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2fbe3e3e8f77272eac86a716a1c32493219ab5ad",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
12732677 | pes2o/s2orc | v3-fos-license | Is a persistent central canal a risk factor for neurological injury in patients undergoing surgical correction of scoliosis?
Background Scoliosis patients with associated syringomyelia are at an increased risk of neurological injury during surgical deformity correction. The syrinx is therefore often addressed surgically prior to scoliosis correction to minimize this risk. It remains unclear if the presence of a persistent central canal (PCC) within the spinal cord also poses a similar risk. The aim of this study is to determine whether there is any evidence to suggest that patients with a PCC are also at a higher risk of neurological injury during surgical scoliosis correction. Methods Eleven patients with a PCC identified on pre-operative magnetic resonance imaging who had undergone correction of adolescent idiopathic scoliosis (AIS) over a 7-year study period at our institution were retrospectively identified. The incidence of abnormal intra-operative spinal cord monitoring (SCM) traces in this group was in turn compared against 44 randomly selected age- and sex-matched controls with no PCC who had also undergone surgical correction of AIS during the study period. Fisher’s exact test was applied to determine whether there was a significant difference in the incidence of abnormal intra-operative SCM traces between the two groups. Results Statistical analysis demonstrated no significant difference in the incidence of abnormal intra-operative SCM signal traces between the PCC group and the control group. Conclusions This study demonstrates no evidence to suggest a PCC increases the risk of neurological complications during scoliosis correction. We therefore suggest that surgical correction of scoliosis in patients with a PCC can be carried out safely with routine precautions.
Background
One of the most devastating potential complications of scoliosis correction surgery is iatrogenic neurological injury [1,2]. Numerous factors have been implicated as increasing the risk of such a complication including the presence of abnormalities within the spinal cord [3]. The incidence of spinal cord pathology in paediatric patients with scoliosis has previously been reported to be between 3 and 20%, with pre-operative magnetic resonance imaging (MRI) demonstrating various intra-spinal abnormalities including syringomyelia, Chiari malformation, diastematomyelia, tethered cord and spinal cord tumours [1,4,5]. The mechanism of neurological injury arising from surgical correction of scoliosis can be from an instrument or implant striking the spinal cord, from a vascular injury related to the implant causing stretching or compression of vessels or from vascular compromise not directly related to the implant such as ischaemia secondary to hypotension [6].
Previous studies have demonstrated that patients with spinal cord pathology undergoing surgical correction of scoliosis are at an increased risk of sustaining intra-operative iatrogenic neurological injury [1, 7-9]. However, to the authors' knowledge, there has not been a published study addressing the question as to whether the presence of a persistent central canal (PCC) also poses an increased risk of intra-operative neurological injury during surgical correction of scoliosis. The aim of this study is therefore to address this question.
Methods
The null hypothesis to be tested was defined as patients with a PCC are at an equal risk of developing intraoperative neurological complications during surgical correction of scoliosis as patients without a PCC.
In order to test this hypothesis, all patients who had undergone surgical correction of adolescent idiopathic scoliosis (AIS) over a 7-year period between June 2004 and October 2011 at our institution and who had a co-existing PCC confirmed with routine pre-operative whole spine MRI were retrospectively identified using an electronic database and were included in the study. MRI was performed at 1.5 Tesla with a phased array coil, and image sequences included T1-weighted spin echo (SE) and T2-weighted fast SE (FSE) sagittal and axial images. All MRIs were reported by a consultant musculoskeletal radiologist.
A PCC was defined as a filiform or slit-like, centrally located, intra-medullary cavity of a maximum diameter of 4 mm [10], not communicating with the fourth ventricle and extending over at least two vertebral levels [11], in the absence of any co-existing neuro-axis abnormality (NAA) on neuro-radiological imaging potentially responsible for cerebrospinal fluid (CSF) flow disturbances, and with no prior history of spinal trauma, spinal infection or previous spinal/neurosurgical intervention. An additional inclusion criterion was the use of intra-operative spinal cord monitoring (SCM) during the surgical correction of the spinal deformity.
Eleven patients in total met these criteria and were included in the study. Forty-four sex- and age-matched control group patients who had also undergone surgical correction of AIS during the same study period, with no underlying NAA evident on pre-operative MRI screening, were randomly selected from a list of 1150 patients using Stata/IC version 12 software (StataCorp, College Station, TX, USA). Therefore, in total, 55 patients were included in the study.
The pre- and post-operative neurological status as determined by clinical examination findings up to the time of the 3-month outpatient follow-up, the type of deformity correction and the intra-operative SCM traces were identified in the medical records of each patient included in the study. The intra-operative SCM traces for each patient were analysed by the Department of Neurophysiology, with somato-sensory evoked potentials (SSEPs) being used throughout the surgery to monitor for potential neurological compromise. Deviation from baseline SCM traces was classified as either 'Green' (no trace change), 'Amber' (an event causing an indirect effect with partial trace change) or 'Red' (an event causing a direct effect with partial to complete trace loss).
The SCM equipment used to monitor intra-operative SSEPs during the study period were Nihon Kohden Neuromaster (Tokyo, Japan) and Nicolet Biomedical (Viking Madison, WI, USA).
In order to assess whether there was any significant difference in the incidence of abnormal intra-operative SCM traces between the PCC group and the control group, Fisher's exact test was performed given the categorical nature of the data, again using Stata/IC version 12 software. In order to perform Fisher's exact test, a 2 × 2 contingency table was created using the results of the intra-operative SCM traces for each of the 55 patients included in the study. For the purposes of entering these data into the contingency table, the patients in the PCC group were subdivided into those with normal ('Green') and abnormal ('Amber' or 'Red') intra-operative SCM traces. The patients in the control group were similarly subdivided into those with normal and abnormal intra-operative SCM traces.
In addition to comparing the incidence of abnormal intra-operative SCM traces between the PCC and control group, a comparison was also made of the incidence of post-operative neurological deficit apparent on clinical examination between the two groups to assess for any difference.
Results
During the 7-year study period, 1161 AIS corrections were conducted, out of which 11 patients met the criteria of having a PCC identified on pre-operative MRI. There were one male and ten females in the PCC group, with an average age of 15.9 years (range 14-20). Only one patient in the PCC group had a pre-operative clinical neurological deficit, in the form of mildly diminished sensation in the S1 distribution of the right foot with no associated motor weakness. No definite cause for this was demonstrated on pre-operative MRI and nerve conduction studies. Four patients in the PCC group underwent an anterior instrumented fusion (AIF), six patients underwent posterior instrumented fusion (PIF) and one patient underwent combined anterior release (AR) and PIF (Table 1 and Fig. 1).
There were 44 age- and sex-matched AIS controls. These included 4 males and 40 females with an average age of 15.9 years (range 14-20). Eleven patients in the control group underwent an AIF and 33 patients underwent a PIF (Table 2 and Fig. 2). No patient in the control group had a clinical neurological deficit pre-operatively.
In both the PCC and control groups, baseline SSEPs were obtained for all patients. In the PCC group, no patient had an intra-operative deviation from the baseline traces, compared to four (9.1%) patients in the control group. These four patients all had a PIF procedure. Of these four patients, two had 'Amber' warning signal changes intra-operatively and two had 'Red' warning signal changes intra-operatively (Fig. 3). In all four control group patients who developed abnormal intra-operative SCM traces, the traces returned to normal when appropriate intra-operative measures were taken to reverse the precipitating factor, such as reducing the degree of distraction being applied to the spine. No patient in either the PCC or the control group had a new onset neurological deficit post-operatively evident on clinical examination.
The exact anatomical level of the PCC in each patient within the PCC group, the vertebral levels instrumented and the percentage deformity correction achieved were also recorded (Table 3). The PCC was found to be located either entirely or partially within the instrumented spinal levels in all patients within the PCC group. The average percentage deformity correction achieved was 70%.
Fisher's exact test did not demonstrate any statistically significant difference in the incidence of abnormal intra-operative SCM traces between patients in the PCC group and those in the control group (p = 0.57). The null hypothesis 'patients with a PCC are at an equal risk of developing intra-operative neurological complications during surgical correction of scoliosis as patients without a PCC' could therefore not be rejected.
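For illustration, the reported comparison can be reproduced from the counts above (0 of 11 PCC patients and 4 of 44 controls with abnormal traces) using, for example, SciPy rather than Stata:

```python
# Sketch reproducing the reported Fisher's exact test on the 2 x 2 table.
from scipy.stats import fisher_exact

table = [[0, 11],   # PCC group: abnormal, normal SCM traces
         [4, 40]]   # control group: abnormal, normal SCM traces
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 2))   # 0.57, matching the reported p-value
```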
Discussion
The central canal is an ependymal-lined structure in the spinal cord that extends inferiorly from the fourth ventricle to the conus medullaris [12]. Anatomical studies suggest the central canal is only seen in foetal and newborn spinal cords and undergoes age-related stenosis such that it is obliterated in the vast majority of adults [13][14][15][16]. In a PCC, a degree of age related stenosis has occurred such that the central canal no longer extends all the way from the fourth ventricle to the conus medullaris. A partial remnant may persist, however, as shown in autopsy studies, and although reported to be seen in only 1.5% of MRI studies of the spinal cord, it can normally be regarded as an incidental finding [10,14,15]. The 4 mm maximum diameter used to form our definition of a PCC is based on the paper by Petit-Lacour et al. [10] which was the first study published describing the visibility of the central canal on MRI. The central canal can communicate with the fourth ventricle beyond infancy, but this is uncommon and is usually associated with hydrocephalus which excludes it from being a PCC which is essentially idiopathic. The typical appearance of a PCC on T2-weighted coronal and axial spinal MRI is demonstrated in Figs. 4 and 5, respectively.
The term 'hydromyelia' is often used to refer to an ependymal-lined, CSF-filled spinal cord cavity which most likely represents persistence into adulthood of a foetal configuration of the anatomy of the central canal of the spinal cord [17]. Hydromyelia can therefore be used interchangeably with PCC as they represent the same entity, although it could be argued that calling it a 'persistent central canal' is a more literal description.
In contrast to syringomyelia, the literature defining a PCC and determining its clinical significance remains limited. There is currently no widely accepted definition of a PCC in the literature, and debate continues on the criteria for distinguishing between a PCC and syringomyelia. Syringomyelia tends to be used to refer to a CSF-filled cavity within the spinal cord which is surrounded by a wall comprised of glial cells (which therefore implies it is related to a pathological process) and may present with abnormal neurological signs and symptoms. The term PCC is generally used to refer to an ependymal-lined, CSF-filled cavity within the spinal cord. They are usually asymptomatic and have no identifiable underlying cause. Much of the confusion arises from the fact that in practice, it is often not possible to distinguish between the two radiologically and therefore the umbrella term 'syrinx' is applied despite the fact this encompasses more than one entity. Previous studies have recommended neurosurgical intervention should be performed prior to any orthopaedic procedure to reduce the higher risk of neurological injury associated with surgical correction of scoliosis in patients with neurological etiologies compared with patients with AIS [7][8][9][18][19][20][21][22][23][24]. Whether or not a PCC should also be regarded as a risk factor for neurological injury during scoliosis correction is currently not clear and has to date not been addressed in the medical literature.
The general consensus is that a PCC may represent an anatomical variant with no identifiable underlying cause that is generally asymptomatic and most probably represents a different clinical entity from syringomyelia [11][12][13]25]. Whether the presence of a PCC poses a risk of an individual ultimately developing syringomyelia in the future at present remains uncertain and open to discussion.
Of the 11 patients in the PCC group, all had 'adolescent idiopathic scoliosis', with the vast majority being female as would be expected in a group of patients with this condition. In order to minimise the risk of neurological injury during scoliosis correction, SCM is now routinely used during spinal deformity surgery and SSEPs represent the standard of care, their reliability in alerting the surgeon to a potential cord injury having been clearly established [26]. None of these patients had any intra-operative deviation from their baseline SCM traces or developed any post-operative neurological deficit as a complication of the surgical correction of their spinal deformity. There were no instances of any false negative SCM traces. When compared to their matched controls with regard to incidence of intra-operative deviation from baseline SSEPs, there was no statistically significant difference between the two groups (p = 0.57). Therefore, the null hypothesis could not be rejected, i.e. the presence of a PCC does not increase the risk of iatrogenic neurological injury during surgical correction of scoliosis. The PCC was found to be located either entirely or partially within the instrumented spinal levels in all patients within the PCC group, and a satisfactory percentage curve correction was achieved in all 11 PCC patients. The absence of abnormal SCM traces in any of the 11 patients in the PCC group therefore cannot be attributed either to the PCC being anatomically remote from the instrumented levels or to a minimal curve correction.

Table 3 (rows 1-6 shown; remaining rows truncated in the source): PCC level, curve type, levels instrumented and Cobb angles for the PCC group.
Patient | PCC level | Curve type | Levels instrumented | Pre-operative Cobb angle | Post-operative Cobb angle | % correction
1 | T3-L1 | Thoraco-Lumbar | T11-L3 | 52 | 12 | 77
2 | T3-T8 | Thoracic | T2-T12 | 55 | 20 | 64
3 | C5-L1 | Thoracic | T2-L1 | 59 | 22 | 62
4 | T8-T12 | Thoraco-Lumbar | T11-L3 | 54 | 19 | 65
5 | C4-T3 | Thoracic | T2-L2 | 61 | 15 | 75
6 | C6-T4 | Thoracic | T2-L1 | 62 | 19 | -
This study does have inherent weaknesses, the major one being the relatively small numbers involved. In order to increase the statistical power of the study, four times as many sex-and age-matched controls were included. However, the study spanned a 7-year period during which 1161 surgical scoliosis corrections were performed. This suggests that a PCC in a patient presenting with scoliosis is a rare finding, in the order of 0.95% in our study group. This is in a similar range to the 1.5% described by Petit-Lacour et al. [10], which at present remains the only published estimation of the prevalence of PCCs in the general population, based on a retrospective study of 794 whole spine MRI scans of patients who had initially been investigated for a variety of symptoms. Significantly increasing the numbers involved in our study would entail having to either lengthen the already considerable study period by a substantial amount of time or conduct a multicentre study, both of which have inherent difficulties.
Another weakness of this study is that it is retrospective and therefore has deficiencies inherent to all investigations of this nature. However, once again owing to the relative scarcity of PCCs, conducting a prospective study to address this research question is not practicable and is therefore very unlikely to ever be performed.
The significance of the results of this study relates to the fact that with the resolution of MRI scans progressively increasing and routine pre-operative MRI screening of all patients presenting with scoliosis becoming commonplace across a greater number of institutions, it is very likely that a growing number of patients with PCCs will be identified pre-operatively. At present, the pre-operative identification of a fluid collection within the spinal cord of a child with scoliosis but no other NAA often causes uncertainty for clinicians. The significance of such findings and the degree of additional risk of intra-operative neurological injury they may pose, if any, currently remains uncertain. Once identified, these patients are often pre-operatively referred for a neurosurgical opinion on the appropriate management of these entities. At present, this remains largely unknown due to the absence of any literature to guide clinicians in these circumstances. This is therefore an issue this study may help to address.
Conclusion
Despite being based on relatively small numbers, our study does not provide any evidence to suggest that the presence of a PCC increases the risk of a neurological injury secondary to surgical correction of scoliosis. We therefore suggest that surgical correction of scoliosis in patients with a PCC can be carried out safely with routine precautions. | 2017-10-19T00:05:27.003Z | 2017-09-14T00:00:00.000 | {
"year": 2017,
"sha1": "133c3b28f3fdcd138f6b055ac9e287862dc643bd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13013-017-0133-z",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "133c3b28f3fdcd138f6b055ac9e287862dc643bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
170078482 | pes2o/s2orc | v3-fos-license | Integrate CRISPR/Cas9 for protein expression of HLA-B*38:68Q via precise gene editing
The determination of null- or low-expressed HLA alleles is clinically relevant in both hematopoietic stem cell transplantation and solid organ transplantation. We studied the expression level of a questionable (Q) HLA-B*38:68Q allele, which carries a 9-nucleotide (nt) deletion at codons 230–232 in exon 4 of HLA-B*38:01:01:01, using CRISPR/Cas9 gene editing technology. CRISPR/Cas9 gene editing of an HLA-B*38:01:01:01 homozygous EBV B cell line resulted in one HLA-B*38:68Q/B*38:01:01:01 heterozygous and one HLA-B*38:68Q homozygous clone. Flow cytometric analysis using a monoclonal anti-Bw4 antibody showed that the protein expression of HLA-B*38:01:01:01 in homozygous cells was 2.2 fold higher than in HLA-B*38:68Q/B*38:01:01:01 heterozygous cells, and the expression in HLA-B*38:68Q/B*38:01:01:01 heterozygous cells was over 2.0 fold higher than in HLA-B*38:68Q homozygous cells. The HLA-B*38:68Q expression was further confirmed using an anti-B38 polyclonal antibody. Similarly, the expression of the HLA-B*38:01:01:01 homozygous cells was 1.5 fold higher than that of HLA-B*38:68Q/B*38:01:01:01 heterozygous cells, and that of the HLA-B*38:68Q/B*38:01:01:01 heterozygous cells was over 1.6 fold higher than that of HLA-B*38:68Q homozygous cells. The treatment of HLA-B*38:68Q homozygous cells with IFN-γ significantly increased its expression. In conclusion, we demonstrate that HLA-B*38:68Q is a low-expressing HLA allele. The CRISPR/Cas9 technology is a useful tool to induce precise gene editing in HLA genes to enable the characterization of HLA gene variants on expression and function.
remains challenging. Therefore, it is important to determine the expression patterns of abnormally expressed HLA variants 7.
The CRISPR (clustered regularly interspaced short palindromic repeats) system is an adaptive immune system in bacteria that protects the bacterium from invading foreign genetic elements such as plasmids and bacteriophages 10. The CRISPR/Cas9 system contains two components: a guide RNA (gRNA) and a CRISPR-associated endonuclease (Cas protein) 11,12. The gRNA is a short RNA composed of a scaffold sequence needed for Cas-binding and a user-defined ∼20 nucleotide spacer that defines the genomic target to be modified 13. The gRNA spacer sequence can be designed to target DNA sites adjacent to a protospacer adjacent motif (PAM) 14,15. The most common PAM sequence recognized by Cas9 is NGG, found directly downstream of the target DNA. The CRISPR/Cas9 complex cuts double-stranded DNA to generate double strand breaks (DSBs) 3-4 bp upstream of the NGG PAM under the guidance of the gRNA 16. The DSBs can be repaired by non-homologous end joining (NHEJ), which is an error-prone process that introduces unpredictable insertions and deletions (indels); DSBs can also be repaired by homology directed repair (HDR) in the presence of a DNA template, which induces the desired DNA editing 11,12,17. Two types of DNA template can be used for HDR: a small single-stranded DNA (ssDNA) oligonucleotide with 30-67 nt homology arms flanking the gene editing site 18 or a double-stranded DNA (dsDNA) plasmid with long homology arms of 1-3 kb 19.
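As an illustration of how candidate target sites are constrained by the PAM requirement, the sketch below scans the forward strand of a sequence for NGG motifs and reports the 20-nt protospacer and the expected cut position; it is a simplified illustration, not the GeneArt design tool used in this study, and the example sequence is synthetic.

```python
# Minimal sketch: enumerate candidate SpCas9 target sites on the forward
# strand by scanning for NGG PAMs, reporting the 20-nt protospacer and the
# expected blunt cut site 3 bp upstream of the PAM.
import re

def find_spcas9_sites(seq):
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):   # overlapping NGG PAMs
        pam_start = m.start(1)
        if pam_start >= 20:                         # need a full 20-nt spacer
            spacer = seq[pam_start - 20:pam_start]
            cut_index = pam_start - 3               # cut between positions -3 and -4
            sites.append((spacer, seq[pam_start:pam_start + 3], cut_index))
    return sites

example = "A" * 25 + "TGG"                          # synthetic sequence for illustration
print(find_spcas9_sites(example))                   # one site, cut at index 22
```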
The recent discovery of the CRISPR/Cas9 system provides a faster and more economical approach to gene editing 20 compared to the traditional zinc-finger nuclease (ZFN) 21 and transcription activator-like effector nuclease (TALEN) methods 22. The goal of this study was to generate homozygous and heterozygous cells carrying HLA-B*38:68Q, with the deletion at codons 230-232 in exon 4, using CRISPR/Cas9 gene editing to study the effect of this mutation on HLA-B*38:01:01:01 expression.
Results
crRNA design and selection. We identified a new HLA-B allele that is identical to HLA-B*38:01:01:01 except for a nine-nucleotide deletion (5′-CTTGTGGAG-3′) at codons 230 to 232 that results in a coding shift in the α3 domain of HLA-B38 (Fig. 1A). The sequence was submitted to the GenBank database (accession number MF069211) and IMGT/HLA databases (submission number HWS10028807). Since the expression level of this novel B*38 allele is unknown, it was named HLA-B*38:68Q. To determine whether the deletion at codons 230-232 affected the level of protein expression, the homozygous HLA-B*38:01:01:01 EBV B cell line TEM665 was used to generate homozygous HLA-B*38:68Q alleles to study its expression (Fig. 1B). The GeneArt TM CRISPR Search and Design Tool was used to design crRNAs targeting the DNA sequences close to the 9-nt deletion site in exon 4 of HLA-B*38:01:01:01 (Fig. 1C). crRNA1 (5′-GGATGGCGAGGACCAAACTC-3′) was designed to recognize the sequence −12 to −31 bp upstream of the 9-nt deletion, and crRNA2 (5′-TGGTCTGGTCTCCACAAGCT-3′) was designed to recognize the −2 to +9 bp sequence around the 9-nt deletion. Both crRNAs share an ssDNA target with a 67-nt left arm and a 30-nt right arm flanking the deletion site (Fig. 1C). crRNA1 and crRNA2 were then mixed with universal tracrRNA to form gRNA1 and gRNA2. Next, Cas9 protein (1.5 µg) and gRNA1/gRNA2 (360 ng) were mixed to form Cas9 ribonucleoprotein (RNP) complexes, respectively 18. Twenty-four electroporation conditions were tested to optimize transfection efficiency using the Neon transfection system 23 (Supplemental Table 1). The program with the highest transfection efficiency (58.4%) was selected for transfecting the HLA-B*38:01:01:01 homozygous EBV B cell line. Our results showed that gRNA1 induced 22.6% DSB cleavage and gRNA2 induced 13.8% DSB cleavage using the GeneArt ® Genomic Cleavage Detection assay. gRNA1 was chosen for transfection with the ssDNA target due to its higher efficiency.
Allograft rejection often correlates with increased cytokine production, including IFN-γ 26. We therefore tested whether the expression of HLA-B*38:68Q would be stimulated by IFN-γ in an inflammatory environment. Our results showed that in the HLA-B*38:68Q homozygous cells, treatment with IFN-γ significantly increased HLA-B expression and resulted in a binding signal of 36,036 ± 887 MFI, 1.9 fold higher than in the HLA-B*38:68Q homozygous cells without IFN-γ treatment (19,379 ± 900 MFI, P < 0.0001, Fig. 6). Similarly, in the HLA-B*38:01:01:01 homozygous cells, the expression of HLA-B treated with IFN-γ was 57,646 ± 357 MFI, which was 1.2 fold higher than in the untreated group (46,642 ± 231 MFI, P < 0.0001). The results showed that although the expression of HLA-B*38:68Q can be upregulated by IFN-γ treatment, the expression of HLA-B remained much lower than in IFN-γ-treated HLA-B*38:01:01:01 homozygous cells (Fig. 6).
Discussion
In this study, we reported efficient HLA-B*38:01:01:01 gene modification and expression in an EBV B cell line using the CRISPR/Cas9 system. We successfully introduced gene editing in 84% of clones and achieved precise deletion at codons 230-232 in exon 4 in 5 alleles. Similar to other publications, the CRISPR/Cas9 gene editing of HLA-B*38:01:01:01 involved DNA repair via either the NHEJ or the HDR pathway 27. However, even in the presence of guided DNA templates, 72% of gene editing was through the NHEJ repair pathway compared to 10% through the HDR pathway. The HDR pathway provides the desired repair of the target DNA in the presence of template DNA. The low incidence of HDR makes the selection of precisely edited clones challenging. In order to achieve higher HDR gene editing efficiency, the DSBs induced by the CRISPR/Cas9 nuclease should be in close proximity to the edit site 18. The homologous recombination rate can be increased with larger flanking sequences; therefore, standard gene deletion/disruption protocols typically use flanking regions over 1 kb on either side of the target gene to increase HDR 28. Evidence that cell lines deficient in NHEJ pathways show increased levels of HDR suggests these two pathways are competitive 29. Recent studies have demonstrated that suppression of key NHEJ molecules such as KU70, KU80 or DNA ligase IV can increase use of the HDR pathway 30.
Our study successfully demonstrated that the deletion at codons 230-232 results in low expression of HLA-B*38:68Q. Currently, there are 44 questionable HLA alleles (Q) in the IMGT/HLA database 2. The frequency of these questionable alleles has not been well established, particularly as HLA allele frequencies are largely based on the Caucasian population. Therefore, the frequency of these questionable alleles may be underestimated in other populations. With the advancement of full gene HLA sequencing by NGS technology, the laboratory is able to obtain high resolution HLA typing with minimum ambiguities. However, due to additional sequencing information on exons outside the antigen recognition sites (ARS) and on introns, it is likely that more questionable alleles will be discovered. Knowledge of the expression level of these questionable alleles may be important for donor selection in HSCT. Petersdorf et al. demonstrated that increasing expression level of the patient's mismatched HLA-C allele was associated with increased risk of grades III-IV acute GVHD, with an odds ratio of 1.34, in HSCT 34. In addition, understanding of the expression level can also help to identify donors with the least immunogenic mismatches, or to select donors to cross permissive immunological barriers for highly sensitized patients in solid organ transplantation. These questionable alleles could also be potential null alleles. Failure to identify HLA null alleles in donors may cause severe GVHD in HSCT 7. Increasing knowledge of the expression level of HLA variant alleles will help to improve the understanding of HLA allogenicity in both HSCT and solid organ transplantation. The CRISPR/Cas9 system provides an effective tool to study the expression level of these variant HLA alleles. In addition, CRISPR/Cas9 can also introduce insertions and deletions in the UTRs, exons and introns to study the regulation and function of HLA genes.
CRISPR/Cas9 has been used to facilitate correction of mutated genes in various diseases. Recently, CRISPR/Cas9 has been used to correct a single nucleotide polymorphism in the β-globin gene to treat sickle cell disease in a mouse model 35. Chimeric antigen receptor (CAR)-modified T cells have been applied to various cancers, especially B cell hematologic malignancies 36. With the application of the CRISPR/Cas9 system, Liu et al. 37 have successfully downregulated the expression of HLA class I and TCRα to generate universal CAR T cells. The CRISPR/Cas9 technology has also come under the spotlight in transplantation. Entry of human immunodeficiency virus (HIV) into target cells requires both CD4 and CCR5 receptors 38. A 32-base pair deletion in CCR5 (CCR5-Δ32) is associated with reduced HIV transmission risk and delayed disease progression 39. In HIV+ patients with hematological malignancies, gene editing using CRISPR/Cas9 40 has been used to generate homozygous CCR5-Δ32 deletions in CD34+ cells to introduce HIV resistance. Currently, there are several ongoing clinical trials evaluating the safety of transplantation of CRISPR-modified CCR5-Δ32 CD34+ cells in HIV+ patients with hematological malignancies 41,42. In conclusion, the CRISPR/Cas9 system is a powerful gene editing tool that can be used to study HLA gene expression and function and to improve HLA matching in hematopoietic stem cell and solid organ transplantation. Modification of HLA gene expression by CRISPR/Cas9 also promises to provide new approaches for cellular therapies in transplantation.
Material and Methods
HLA-B*38:68Q was submitted to the GenBank database (accession number MF069211) and IMGT/HLA database (submission number HWS10028807) with the full genomic allele sequence as a questionable allele, due to its unknown surface expression, by our center in 2017 43. Research approval for performing CRISPR/Cas9 on the sample was granted by the UCLA Institutional Review Board (IRB#14-000516). The HLA-B*38:01:01:01 homozygous EBV B cell line TEM665 (Fig. 1B) was selected for gene editing; in addition, a Bw6 homozygous EBV B cell line AOH749 (from AOH Workshop, AOH9004) and/or K562 (from UCLA Immunogenetics Center), which lacks HLA expression, were used as negative controls for the monoclonal or polyclonal antibody tests. AOH749 expresses HLA A31, B65, Bw6, C8, DR1, DQ5, DP3 and DP0401, which do not cross-react with the anti-Bw4 antibody. All cell lines were cultured in RPMI-1640 (GE Healthcare Life Sciences, USA) containing 20% FBS (Omega, USA), 1%
ssDNA target design/Homologous recombination assays. To create homologous recombination (HR) assays, two gRNAs targeting the sequence near the deletion site within the HLA-B gene were designed and synthesized 18 . The Cas9 RNPs were then used to transfect cells via Neon® electroporation (Invitrogen, USA) in a 24-well electroporation transfection format for the B cell line. Genomic cleavage efficiency was evaluated with the GeneArt® Genomic Cleavage Detection kit (ThermoFisher, USA) at 48 h post transfection. Cleavage efficiencies were calculated, following the manufacturer's instructions, from relative band intensities quantified on a high-sensitivity DNA chip on the TapeStation 2200 (Agilent, USA). A program with the voltage set at 1700 V, the pulse width set at 20 ms, and one pulse was used for the subsequent study. The gRNA with the highest editing efficiency was selected for the subsequent HR assays. For ssDNA target design, the mutation site was typically positioned at the center, flanked by 30- to 67-nt homology arms on each side (Fig. 1C). To measure homologous recombination efficiency, the ssDNA target was co-transfected with Cas9 RNPs into cells via electroporation. The genomic loci were PCR-amplified using the corresponding primers and then subjected to the GeneArt® Genomic Cleavage Detection assay (ThermoFisher, USA) for restriction enzyme digestion.
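Since the detection kit reports only band intensities, the percent modification must be computed from them. Below is a minimal sketch of the band-intensity arithmetic commonly used with mismatch-cleavage assays of this kind; the band values are hypothetical, and the square-root term reflects the standard correction for re-annealing of edited and unedited strands into cleavable heteroduplexes.

```python
# Estimate % gene modification from cleavage-assay band intensities.
# Band values are hypothetical; the square-root correction accounts for
# random re-annealing of edited/unedited strands into heteroduplexes.

def cleavage_efficiency(parental_band: float, cleaved_bands: list[float]) -> float:
    fraction_cleaved = sum(cleaved_bands) / (parental_band + sum(cleaved_bands))
    return (1.0 - (1.0 - fraction_cleaved) ** 0.5) * 100.0

# Example: uncut band of 600 units, cleavage products of 220 and 180 units.
print(f"{cleavage_efficiency(600.0, [220.0, 180.0]):.1f}% modified")  # ~22.5%
```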
Culture of single cell derived clones. Transfected cells were washed with 500 μL of PBS buffer (Corning, USA) and resuspended at a density of 8 cells/mL in a total volume of 50 mL. 100 μL of the cell suspension was transferred into each well of 96-well tissue culture plates so that, on average, each well received fewer than one cell. The plates were incubated in a 37 °C, 5% CO2 incubator (Thermo Scientific, USA) and scanned for single cell colonies as soon as small aggregates of cells were visible under a 4× microscope (usually after the first week, depending on the growth rate of the cell line) to confirm that the colonies were derived from a single cell. The cells were incubated for an additional 2-3 weeks to expand the clonal populations for further analysis and characterization.
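The plating density above (8 cells/mL × 100 μL per well, i.e., 0.8 cells per well on average) makes the single-cell assumption a Poisson question. A short sketch of that arithmetic, using only the numbers from the text:

```python
import math

# Limiting dilution: 8 cells/mL x 0.1 mL/well -> expected 0.8 cells per well.
lam = 8 * 0.1

p_empty = math.exp(-lam)            # well receives no cell
p_single = lam * math.exp(-lam)     # exactly one cell (a true clone)
p_multi = 1.0 - p_empty - p_single  # two or more cells

print(f"empty {p_empty:.2f}, single {p_single:.2f}, multi {p_multi:.2f}")
print(f"clonal among occupied wells: {p_single / (1.0 - p_empty):.2f}")  # ~0.65
```

Roughly a third of the wells that grow would contain more than one founding cell, which is why the plates are scanned early for single colonies.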
Harvest of single cell derived clones. Single cell derived clones were washed with 100 µL of 1× PBS buffer (Corning, USA). 1 × 10⁵ cells from each clone were transferred into a PCR plate containing 25 µL of "direct lysis buffer", made by adding 10 μL of Proteinase K (Thermo Scientific, USA) to 1 mL of DirectPCR Lysis Reagent (Thermo Scientific, USA). The PCR plate was incubated at 55 °C for 30 min to lyse the cells, followed by 95 °C for 45 min to inactivate the Proteinase K.
HLA gene amplification and next generation sequencing. Multiplex long-range PCR was employed using the AllType NGS assay (One Lambda, USA) to co-amplify 11 HLA loci: HLA-A, -B, -C, -DRB1/3/4/5, -DQB1, -DPB1, -DQA1, and -DPA1. HLA-A, -B, -C, -DQA1, and -DPA1 were amplified from the 5′UTR to the 3′UTR; the remaining loci were amplified from intron 1 to the 3′UTR. Library construction was automated on the Biomek FX (Beckman Coulter, USA). Sequence-ready libraries were validated and quantitated on the High Sensitivity D1000 ScreenTape (Agilent Technologies, USA) to allow library normalization and equimolar pooling of all study samples on the Biomek FX (Beckman Coulter, USA). Pooled libraries were diluted and loaded on the Ion Chef (Thermo Scientific, USA) for template amplification. Sequencing on the Ion S5 XL followed the manufacturer's instructions (Thermo Scientific, USA). Sequencing data were analyzed with TSVEngine v1.2.0 (One Lambda, USA).
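Equimolar pooling depends on converting each library's mass concentration and average fragment size (from the ScreenTape trace) into molarity. A minimal sketch of that arithmetic; the concentrations, fragment sizes, and target amount below are hypothetical, since the text does not state the pooling parameters:

```python
# Hypothetical illustration of the arithmetic behind equimolar library pooling:
# convert ng/µL plus average fragment size into molarity, then draw volumes
# contributing equal moles from each library.

def molarity_nM(conc_ng_per_ul: float, avg_fragment_bp: float) -> float:
    # ~660 g/mol per base pair of double-stranded DNA
    return conc_ng_per_ul * 1e6 / (660 * avg_fragment_bp)

libraries = {"lib_A": (12.0, 450), "lib_B": (8.5, 470), "lib_C": (15.2, 440)}
target_pmol = 0.05  # equal molar contribution per library (assumed value)

for name, (conc, size) in libraries.items():
    nM = molarity_nM(conc, size)       # nM == pmol/mL == fmol/µL
    vol_ul = target_pmol * 1000 / nM   # pmol / (pmol/mL) -> mL -> µL
    print(f"{name}: {nM:.1f} nM, pool {vol_ul:.2f} µL")
```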
Single antigen antibody testing. Neat serum and serum at a 1:2 dilution were treated with DTT and tested for HLA antibodies using the IgG-SAB assay from One Lambda (Canoga Park, CA) as previously described 44 . Antibodies were considered positive if the MFI was >1000 for HLA-A, -B, -DR, and -DQ; a threshold of >2000 was used for HLA-C and -DP to correct for the greater amount of HLA-C and -DP antigen conjugated to the Luminex beads relative to their lower cell surface expression on lymphocytes.
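The locus-specific cutoffs above translate directly into a small decision rule; the helper below is an illustrative sketch (the thresholds come from the text, everything else is hypothetical):

```python
# Locus-specific positivity cutoffs for single-antigen bead (SAB) MFI values,
# as described in the text. The helper itself is illustrative, not clinical.
CUTOFFS = {"A": 1000, "B": 1000, "DR": 1000, "DQ": 1000, "C": 2000, "DP": 2000}

def is_positive(locus: str, mfi: float) -> bool:
    return mfi > CUTOFFS[locus]

print(is_positive("B", 1500))  # True:  >1000 cutoff for HLA-B
print(is_positive("C", 1500))  # False: HLA-C requires >2000
```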
Flow cytometric analysis of HLA protein expression. Expression of HLA-B locus antigens in the edited cell lines containing the HLA-B*38:68Q allele and in control cells was determined by flow cytometry using a FITC-conjugated anti-Bw4 IgG monoclonal antibody (One Lambda, USA). Approximately 10⁵ cells were incubated with 0.5, 1, or 2 μL of anti-Bw4 monoclonal antibody (10 mg/mL) on ice for 30 min. Isotype control cells were incubated with 0.5 μL of FITC-conjugated mouse IgG secondary antibody (Jackson ImmunoResearch, USA). After staining, cells were washed with 1× PBS buffer (Corning, USA) containing 2% FBS (Omega, USA) and suspended in 300 μL of PBS/2% FBS. The TEM665 EBV-transformed B cell line, homozygous for HLA-B*38:01:01:01, was used as the positive control; AOH749 and K562 were used as negative controls. Samples were tested in triplicate. Analysis of HLA-B locus expression was performed using FlowJo software version 10 (BD, USA).
Expression of HLA-B38 in the edited cell lines containing the HLA-B*38:68Q allele was determined using a UCLA Serum Exchange sample that contains anti-B38 antibody but lacks Bw4 reactivity. Approximately 1.5 × 10⁵ cells/tube were incubated with 25 μL of the serum, neat and at a 1:2 dilution, for 30 min at room temperature. After incubation, cells were washed 4 times with PBS/2% FBS, followed by labeling for 20 min at 4 °C with an anti-human IgG FITC-conjugated antibody (Jackson ImmunoResearch, USA). Negative sera routinely used in the clinical lab served as controls. Samples were tested in triplicate. Analysis of HLA-B38 expression was performed using FlowJo software version 10 (BD, USA).

Statistical analysis. Each protein expression experiment was performed in triplicate, and expression levels are shown as mean fluorescence intensity (MFI) ± SD. Statistical analysis was performed with Student's t test or ANOVA in GraphPad Prism 7 (GraphPad, USA). P < 0.05 was considered significant.
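As an illustration of the triplicate comparison described above, the sketch below reproduces the same mean ± SD summary and a two-sample t test with SciPy in place of GraphPad Prism; the MFI values are hypothetical:

```python
from statistics import mean, stdev
from scipy import stats

# Hypothetical triplicate MFI readings for an edited clone and a control.
edited = [5200.0, 4950.0, 5100.0]
control = [8900.0, 9150.0, 9000.0]

print(f"edited:  {mean(edited):.0f} ± {stdev(edited):.0f} MFI")
print(f"control: {mean(control):.0f} ± {stdev(control):.0f} MFI")

t, p = stats.ttest_ind(edited, control)  # two-sample Student's t test
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```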
Data Availability
All data generated or analyzed during this study are included in this article.
The clinician's guide to prevention and treatment of osteoporosis
Osteoporosis is the most common metabolic bone disease in the USA and the world. It is a subclinical condition until complicated by fracture(s). These fractures place an enormous medical and personal burden on individuals who suffer from them and take a significant economic toll. Any new fracture in an adult aged 50 years or older signifies imminent elevated risk for subsequent fractures, particularly in the year following the initial fracture. What a patient perceives as an unfortunate accident may be seen as a sentinel event indicative of bone fragility and increased future fracture risk even when the result of considerable trauma. Clinical or subclinical vertebral fractures, the most common type of osteoporotic fractures, are associated with a 5-fold increased risk for additional vertebral fractures and a 2- to 3-fold increased risk for fractures at other sites. Untreated osteoporosis can lead to a vicious cycle of recurrent fracture(s), often resulting in disability and premature death. In appropriate patients, treatment with effective antifracture medication prevents fractures and improves outcomes. Primary care providers and medical specialists are critical gatekeepers who can identify fractures and initiate proven osteoporosis interventions. Osteoporosis detection, diagnosis, and treatment should be routine practice in all adult healthcare settings. The Bone Health and Osteoporosis Foundation (BHOF) – formerly the National Osteoporosis Foundation – first published the Clinician’s Guide in 1999 to provide accurate information on osteoporosis prevention and treatment. Since that time, significant improvements have been made in diagnostic technologies and treatments for osteoporosis. Despite these advances, a disturbing gap persists in patient care. At-risk patients are often not screened to establish fracture probability and not educated about fracture prevention. Most concerning, the majority of highest risk women and men who have a fracture(s) are not diagnosed and do not receive effective, FDA-approved therapies. Even those prescribed appropriate therapy are unlikely to take the medication as prescribed. The Clinician’s Guide offers concise recommendations regarding prevention, risk assessment, diagnosis, and treatment of osteoporosis in postmenopausal women and men aged 50 years and older. It includes indications for bone densitometry as well as fracture risk thresholds for pharmacologic intervention. Current medications build bone and/or decrease bone breakdown and dramatically reduce incident fractures. All antifracture therapeutics treat but do not cure the disease. Skeletal deterioration resumes sooner or later when a medication is discontinued—sooner for nonbisphosphonates and later for bisphosphonates. Even if normal BMD is achieved, osteoporosis and elevated risk for fracture are still present. The diagnosis of osteoporosis persists even if subsequent DXA T-scores are above − 2.5. Ongoing monitoring and strategic interventions will be necessary if fractures are to be avoided. In addition to pharmacotherapy, adequate intake of calcium and vitamin D, avoidance of smoking and excessive alcohol intake, weight-bearing and resistance-training exercise, and fall prevention are included in the fracture prevention armamentarium. 
Where possible, recommendations in this guide are based on evidence from RCTs; however, relevant published data and guidance from expert clinical experience provide the basis for recommendations in areas where RCT evidence is currently deficient or not applicable to the many osteoporosis patients not considered for RCT participation due to age and morbidity.
Synopsis of major recommendations to the clinician
These recommendations apply to postmenopausal women and men aged 50 years and older.
Universal recommendations
- Counsel individual patients on their risk for osteoporosis, fractures, and the potential consequences of fractures (functional deterioration, loss of independence, increased mortality).
- Recommend a diet with adequate total calcium intake (1000 mg/day for men aged 50-70 years; 1200 mg/day for women ≥ 51 years and men ≥ 71 years), incorporating calcium supplements if intake is insufficient.
- Monitor serum 25-hydroxyvitamin D levels.
- Maintain serum vitamin D sufficiency (≥ 30 ng/mL but ≤ 50 ng/mL) [1-3]. Prescribe supplemental vitamin D (800-1000 units/day) as needed for individuals aged 50 years and older to achieve a sufficient vitamin D level. Higher doses may be necessary in some adults, especially those with malabsorption. (Note: in healthy individuals a serum 25(OH) vitamin D level ≥ 20 ng/mL may be sufficient, but in the setting of known or suspected metabolic bone disease ≥ 30 ng/mL is appropriate.)
- Identify and address modifiable risk factors associated with falls, such as sedating medications, polypharmacy, hypotension, gait or vision disorders, and out-of-date prescription glasses.
- Provide guidance for smoking cessation and avoidance of excessive alcohol intake; refer for care as appropriate.
- Counsel or refer patients for instruction on balance training, muscle-strengthening exercise, and safe movement strategies to prevent fracture(s) in activities of daily life.
- In community-dwelling patients, refer for at-home fall hazard evaluation and remediation.
- In post-fracture patients who are experiencing pain, prescribe over-the-counter analgesia, heat/ice home care, limited bed rest, physical therapy, and alternative non-pharmacologic therapies when appropriate. In cases of intractable or chronic pain, refer to a pain specialist or physiatrist.
- Coordinate post-fracture patient care via fracture liaison service (FLS) and multidisciplinary programs in which patients with recent fractures are referred for osteoporosis evaluation and treatment, rehabilitation, and transition management.
Diagnostic assessment recommendations
- Investigate any broken bone in adulthood as suspicious for osteoporosis, regardless of cause [4,5].
- Measure height annually, preferably with a wall-mounted stadiometer (without shoes).
- Record history of falls.
- Perform BMD testing in the following:
  - Women aged ≥ 65 years and men aged ≥ 70 years.
  - Postmenopausal women and men aged 50-69 years, based on risk profile.
  - Postmenopausal women and men aged ≥ 50 years with a history of adult-age fracture.
  - In DXA facilities that employ accepted quality assurance measures.
  - At the same facility and on the same densitometry device for each test whenever possible.
- Maintain the diagnosis of osteoporosis in patients diagnosed by fracture in adulthood or by T-score (−2.5 or below), even if a subsequent DXA T-score is above −2.5.
- To detect subclinical vertebral fractures, perform vertebral fracture imaging (X-ray or DXA vertebral fracture assessment) in the following:
  - Women aged 65 years and older if the T-score is less than or equal to −1.0 at the femoral neck [6].
  - Women aged 70 years or older and men aged 80 years or older if the T-score is less than or equal to −1.0 at the lumbar spine, total hip, or femoral neck.
  - Men aged 70-79 years if the T-score is less than or equal to −1.5 at the lumbar spine, total hip, or femoral neck.
  - Postmenopausal women and men aged ≥ 50 years with the following specific risk factors:
    - Fracture(s) during adulthood (any cause).
    - Historical height loss of ≥ 1.5 in. (defined as the difference between the current height and peak height) [7].
    - Prospective height loss of ≥ 0.8 in. (defined as the difference between the current height and the last documented height measurement) [7].
    - Recent or ongoing long-term glucocorticoid treatment.
    - Diagnosis of hyperparathyroidism [8].
- Rule out secondary causes of bone loss, osteoporosis, and/or fractures.
- In appropriate untreated postmenopausal women, selectively measure bone turnover markers to help gauge the rapidity of bone loss.
- Prior to elective orthopedic procedures, evaluate skeletal health and measure BMD as indicated by risk profile (e.g., inflammatory arthritis, osteoarthritis, chronic kidney disease, or adverse events from surgery or other risk factors) [9-11].
- The decision to treat should be individualized in persons with a fracture of the proximal humerus, pelvis, or distal forearm who do not have osteopenia or low BMD [12,13].
- Initiate antiresorptive therapy following discontinuation of denosumab, teriparatide, abaloparatide, or romosozumab.
Monitoring patients and treatment response
- Perform BMD testing 1 to 2 years after initiating or changing medical therapy for osteoporosis and at appropriate intervals thereafter according to clinical circumstances.
  - More frequent BMD testing may be warranted in higher-risk individuals (multiple fractures, older age, very low BMD).
  - Less frequent BMD testing may be warranted as follow-up for patients with initial T-scores in the normal or slightly below normal range (osteopenia) and for patients who have remained fracture free on treatment.
- In patients receiving osteoporosis pharmacologic treatment:
  - Routinely reassess risk for fracture, patient satisfaction and adherence with therapy, and the need for continued or modified treatment. The appropriate interval between initiation and reassessment differs with the agent prescribed.
  - Serially measure changes in BMD at the lumbar spine, total hip, or femoral neck; if the lumbar spine, hip, or both are not evaluable, or according to clinical judgment, consider monitoring at the 33% distal radius.
  - Reassess patient and BMD status for consideration of a drug holiday after 5 years of oral and 3 years of intravenous bisphosphonate therapy in patients who are no longer at high risk of fracture (T-score ≥ −2.5, no new fractures) [14].
  - At each healthcare encounter, ask open-ended questions about treatment to elicit patient feedback on possible side effects and concerns. Communicate risk-benefit trade-offs and confirm understanding: both the risk of adverse events with treatment (usually very low) and the risk of fractures and their negative consequences without treatment (usually much higher).
Osteoporosis: impact and overview
Osteoporosis is a disease characterized by low bone density, deterioration of bone tissue, disrupted bone microarchitecture, compromised bone strength, and fracture. According to the World Health Organization (WHO) diagnostic classification, osteoporosis is defined by BMD at the hip or lumbar spine that is less than or equal to 2.5 standard deviations below the mean BMD of a young adult reference population (T-score).
Osteoporosis is a risk factor for fracture, just as hypertension is for stroke and hypercholesterolemia is for heart disease. While risk is highest in individuals with extremely low BMD, the majority of fractures occur in patients with T-scores better than − 2.5. Non-BMD factors contribute to fracture risk, such as falls, frailty, and poor bone quality.
Scope of the problem
Osteoporosis affects an enormous number of people, both men and women, of all races. Among Caucasian adults in the USA aged 50 years and older, about 50% of women and 20% of men will experience an osteoporotic fracture in their remaining lifetime [15]. Rates of fracture differ by ethnic/racial population and skeletal site.
For fracture at any site in women, after adjusting for BMD, weight, and other covariates, non-Hispanic white and Hispanic-American women have the highest risk for fracture, followed by Native Americans, African Americans, and Asian Americans [16,17]. For hip fracture in men, the age-adjusted incidence was highest for non-Hispanic white men, similar among Hispanic-American and black men, and lowest in Asian men.
Many factors are thought to contribute to these divergent fracture rates, including BMD, cortical thickness, access to healthcare, comorbidities (such as diabetes), and skeletal geometry (e.g., hip axis length) [20]. Fracture rates do not track uniformly with the risk of osteoporosis among different racial/ethnic groups. For example, while fewer African Americans have osteoporosis, those diagnosed with osteoporosis experience fracture rates comparable to non-Hispanic Whites and experience worse overall post-fracture outcomes [19]. Native Americans have BMD similar to non-Hispanic Whites but higher rates of hip fracture, possibly reflecting challenges with screening, nutrition, lifestyle, and follow-up (Fig. 1).
Based on data from the National Health and Nutrition Examination Survey III (NHANES III), BHOF previously estimated that more than 10.2 million Americans have osteoporosis and an additional 43.4 million have low bone density [21]. The prevalence of fractures continues to increase as the population ages. It is currently projected that 12.3 million Americans have osteoporosis [22]. At present, the 2 million new osteoporotic fractures per year exceed the annual number of new cases of myocardial infarction, breast cancer, and prostate cancer combined [23-25]. Annual fracture incidence is expected to increase 68%, to 3.2 million, by 2040 [26].
Osteoporosis remains a disease that is underdiagnosed and undertreated despite effective antifracture interventions and the potentially lethal consequences of fractures [27]. Hip fractures significantly increase risk of death in the year following fracture and are highly predictive of additional fractures. Nonetheless, as many as 80-95% of patients in some practice settings are discharged following hip fracture repair with no antifracture treatment or management plan [28][29][30].
Crisis in osteoporosis patient care
The benefits of timely diagnosis and treatment have been well documented. Treatment reduces fracture incidence, forestalling injury, disability, and excess mortality. This effect is seen in Medicare claims analyses demonstrating a significant drop in age-adjusted risk for hip fracture in the ten years between 2002 and 2012. This decade-long decline coincided with the advent of bone density testing and application of effective osteoporosis therapies.
However, after declining for decades, incidence rates plateaued between 2013 and 2015 (Fig. 2) [31]. Although more data are needed to draw causal conclusions, it is likely that multiple factors have contributed. In the USA, patient access to osteoporosis care has declined. There are fewer office-based DXA facilities performing smaller numbers of DXA studies. Fewer women and men are diagnosed with osteoporosis and/or treated to prevent fractures. Not surprisingly, we have seen an uptick in fractures.
The osteoporosis treatment gap (difference between number meeting treatment indications and number receiving treatment) is recognized globally as a crisis in patient care [21,32,33]. Since many factors contribute to this crisis, multifactorial approaches should be considered to reverse the trend, including cultivating trust in at-risk patients; generating more data on comparative effectiveness and safety of current osteoporosis drugs; engaging physicians, governmental, and public health organizations; improving insurance coverage for key fracture prevention services, including FLS programs; and adopting quality measures to incentivize clinicians, hospitals, and health systems to routinely screen and treat high-risk patients.
Medical impact
Fractures and their complications are the clinical sequelae of osteoporosis. The most common fractures are those of the vertebrae (lumbar spine), proximal femur (hip), and distal forearm (wrist). Most fractures in older adults are due at least in part to low bone mass, even when they result from considerable trauma. All fractures are associated with some degree of low BMD and increased risk of subsequent fracture in older adults [5]. In fact, a large cohort study found high-trauma and low-trauma fractures to be comparably predictive of low BMD and elevated future fracture risk [4].
A recent fracture at any major skeletal site in an adult ≥ 50 years of age should be considered a sentinel event that indicates urgent need for further assessment and treatment. Fractures of fingers, toes, face, and skull are not considered osteoporotic fractures since they are typically traumatic and unrelated to bone fragility.
Fractures may be followed by full recovery or by chronic pain, disability, and premature death. Hip, vertebral, and distal radius fractures lead to a substantial reduction in quality of life, with the greatest hardship among hip fracture patients [34]. Low-energy fractures of the pelvis and/or humerus are common in people with osteoporosis and contribute to increased morbidity and mortality. Psychosocial symptoms, most notably depression and loss of self-esteem, are common consequences of fracture, as patients grapple with pain, physical limitations, and loss of independence.
Hip fractures
Hip fractures are associated with 8.4-36% excess mortality at 1 year, with higher mortality in men than in women [26,35]. Hip fracture can have devastating impacts on a patient's life. Approximately 20% of hip fracture patients require long-term nursing home care, and 60% do NOT fully regain pre-fracture independence [27]. In addition, hip fractures are associated with a 2.5-fold increased incidence of secondary fractures [36].
Vertebral fractures
Although the majority of vertebral fractures are subclinical, they can cause pain, disability, deformity, and premature death [37]. Pain and postural changes associated with multiple vertebral compression fractures (kyphosis) can limit mobility and independent function, resulting in significantly diminished quality of life [38]. Multiple thoracic fractures can cause restrictive lung disease. Lumbar fractures can alter abdominal anatomy, leading to constipation, abdominal pain, early satiety, and weight loss. Vertebral fractures, whether clinically apparent or silent, are associated with a 5-fold increased risk for additional vertebral fractures and a 2-to 3-fold increased risk for fractures at other sites.
Wrist fractures
Wrist fractures are five times more common in women than men. They tend to occur earlier in life than other fractures (i.e., between 50 and 60 years of age). When wrist fractures are recognized as evidence of bone fragility and appropriate osteoporosis treatment is prescribed, future fractures could be avoided. While less disabling than hip or vertebral fractures, wrist fractures can be equally detrimental to quality of life, causing pain and limiting activities necessary for independent living.
Wrist fractures are strongly predictive of future fractures, as demonstrated in longitudinal studies of women in the Women's Health Initiative (WHI) and men in the Osteoporotic Fractures in Men Study (MrOS) [39-41]. Among Medicare recipients, the increased risk of other fractures following a wrist fracture (regardless of BMD) is comparable to the risk following hip or spine fracture in the year after the index event [12]. Low BMD at the spine, hip, or forearm is a risk factor for wrist fractures in women and men; however, BMD alone is an imperfect predictor of fracture. In women with forearm fractures, advanced imaging with high-resolution peripheral quantitative computed tomography (HR-pQCT) has identified poor bone quality in fracturing women and girls compared with their nonfracturing peers at similar BMDs: lower total and trabecular bone density, decreased trabecular number and thickness, and lower cortical density and thickness. These differences in bone quality remained after adjusting for age and BMD at the hip and 33% radius [42].
Unfortunately, rates of evaluation and treatment for osteoporosis after wrist fractures are low in women and even lower in men [43]. Seventy-nine percent of adult male wrist fracture patients in one prospective, randomized study did not receive a bone density test following fracture repair [44]. This is significant because patients who received BMD measurement were more likely to be prescribed effective antifracture therapy.
As the population ages, it is critical for clinicians to intervene after a sentinel fracture. Appropriate, timely intervention offers the best opportunity to prevent the cycle of recurrent fractures, disability, and premature death in these patients [45].
Economic toll
The personal and economic costs of fractures are enormous. Fractures result in more than 432,000 hospital admissions, almost 2.5 million medical office visits, and about 180,000 nursing home admissions in the USA [26]. Annual fracture-related costs are expected to increase from $57 billion to over $95 billion by 2040 [26]. This heavy toll could be significantly reduced with routine use of effective treatments and screenings, including VFA in women aged 65 and older with osteopenia (T-score ≤ −1.0) [23,27].
Basic pathophysiology
The human skeleton is comprised of living tissue. Critical to locomotion, skeletal bone houses much of the hematopoietic system and is the major repository for calcium and phosphorus, minerals essential to multiple physiologic systems. Constant serum calcium and adequate cellular calcium and phosphorus are maintained by a complex system of regulatory hormones that act directly on bone and indirectly on other tissues, such as the intestine and kidney. These demands can challenge skeletal equilibrium. When inadequate mineral is present in serum, it is withdrawn from skeletal stores. Over time, continued removal of bone tissue degrades skeletal microarchitecture, thereby elevating risk for fractures that occur spontaneously or from minimal trauma.
Skeletal lifecycle
During childhood and adolescence, bones undergo a process called modeling, during which new bone is formed at one site and old bone is removed from another site within the same bone. This process enables individual bones to develop in size, shape, and position. Childhood and adolescence are critical periods of skeletal accrual. This is particularly important for girls, who acquire 40-50% of their total bone mass during early teen years.
During rapid skeletal growth in childhood and adolescence, it takes several months to mineralize the protein scaffolding for new bone, called osteoid. This lag between formation and mineralization produces periods of relatively low bone density and increased propensity to fracture, particularly between ages 10 and 14 years [46]. In the early 20s, fracture rates level off with attainment of peak bone mass. Mineral density stabilizes in most adults by their early 40s, when it begins a gradual decline, which accelerates at menopause in women (~2%/year for the 10 years following menopause) [47]. Age-related bone loss thins trabecular bone and increases cortical porosity, creating the preconditions for future fragility and fractures.
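To put the quoted menopausal loss rate in perspective, a simple compounding calculation (illustrative arithmetic only, not a figure from this guide) shows the cumulative effect of ten years at roughly 2% annual loss:

\[
\mathrm{BMD}_{10} = \mathrm{BMD}_0 \times (1 - 0.02)^{10} \approx 0.82\,\mathrm{BMD}_0 ,
\]

that is, nearly a fifth of baseline bone density can be lost across the first postmenopausal decade in the absence of intervention.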
Genetic factors appear to account for 60-80% of total adult bone mass [48]. Substantial contributions are made by multiple modifiable factors that include nutrition, physical activity, smoking, chronic illness, and bone-damaging medications. Suboptimal bone acquisition is associated with fracture earlier in adulthood. Conversely, high peak adult bone mass, all other things being equal, protects against osteoporosis later in life.
Bone remodeling
The skeleton responds dynamically to hormonal, mechanical, and pharmacologic stimuli through the resorption and formation processes of bone remodeling, or turnover. After epiphyseal closure, the skeleton repairs damage through bone remodeling, which occurs on bone surfaces throughout the skeleton. The majority of bone surface area resides in trabecular bone, the resilient bony latticework predominantly found inside vertebrae. Remodeling is initiated by bone-resorbing cells, osteoclasts, which break down and remove damaged bone in a process called resorption. Excavated bone is replaced with new bone produced by osteoblasts.
The mechanisms that regulate bone formation involve complex interactions but are mediated, in part, by cells called osteocytes. Osteocytes play a role in both bone modeling and remodeling. For example, at sites of specific mechanical strain, osteocytes produce less sclerostin, a cytokine and powerful inhibitor of bone formation. The result is stimulation of new bone formation. In several RCTs, a fully human neutralizing sclerostin antibody drug called romosozumab has blocked sclerostin, thereby markedly increasing bone formation and decreasing bone resorption [49].
Osteocytes make RANK ligand (RANKL), a cytokine required for osteoclast formation. The fully human monoclonal antibody to RANKL, denosumab, is a potent antiresorptive drug that directly inhibits osteoclast formation, causes apoptosis of mature osteoclasts, and leads to decreased bone resorption and higher BMD. In addition to these agents, the anabolic PTH analogs (teriparatide and abaloparatide) affect remodeling- and modeling-based bone formation, leading to a net increase in BMD (see US FDA-Approved Drugs for Osteoporosis).
Pathogenesis of osteoporosis
In healthy young adults, the bone turnover cycle is balanced such that resorption is matched by formation. Bone remodeling accelerates in settings of chronic disease, aging, and a variety of mechanical, hormonal, and biochemical exposures such as glucocorticoids. Over time, this process leads to greater and greater deficits in mineralized bone.
Accelerated bone turnover affects cortical and trabecular bone somewhat differently. Bone resorption takes place on the surface of the bone. Because of its higher ratio of surface area to mass, trabecular bone is depleted more rapidly than cortical bone. With each remodeling cycle, there is a net loss of bone tissue. When bone remodeling rates increase, for example in the setting of estrogen deficiency at menopause, bone loss is seen first at skeletal sites rich in trabecular bone, such as the spine, while sites that have a mix of cortical and trabecular bone, such as the hip, develop clinically apparent loss of bone later (Fig. 3).
Diagnostic considerations
BHOF recommends a multimodal, comprehensive approach to diagnosis of osteoporosis: detailed assessment of individual fracture risk, personal and family history, physical examination, and in patients with suggestive presentations (such as height loss, back pain, and/or fractures), focused studies to rule out secondary causes of bone fragility and vertebral imaging to detect prevalent fractures. This is a process of screening and evaluation. Fracture risk increases exponentially with age and BMD declines with age. Screening of all older persons on this basis is appropriate. In persons with fractures or conditions associated with elevated fracture risk, more detailed evaluation is needed to monitor and manage their skeletal health. Referral to a metabolic bone specialist may be appropriate [51].
Fracture risk assessment
All postmenopausal women and men aged 50 years and older should be evaluated for osteoporosis risk in order to determine need for BMD testing and/or vertebral imaging. In general, the more risk factors, the more likely a patient will break a bone.
Osteoporotic fractures are preventable. Even after a fracture, osteoporosis is treatable. However, because there are no warning signs, many people with osteoporosis are not diagnosed until a fracture occurs. Factors that have been associated with an increased risk of osteoporosis-related fracture are listed in Table 1. Primary among these is a history of broken bones in adulthood, with the highest risk in the first 1-2 years after the initial fracture [52,53]. Patients must be evaluated soon after a fracture and receive appropriate treatments to optimize risk reduction.
Most fractures in older adults are associated with a fall. Falls occur in approximately one third of adults aged 65 years and older, and this risk increases with age. Fall risk assessment is, therefore, a key component of primary and secondary fracture prevention. Factors associated with falls include muscle weakness, gait and balance disturbances, sedating or hypnotic medications, visual impairment, and any condition associated with dizziness, such as dehydration and orthostatic hypotension [55,56]. Importantly, multiple studies have demonstrated the safety and efficacy of physical therapy and exercise regimens targeted to fall risk reduction.
Evaluation of patients with fractures
In patients aged 50 years or older, consider hip, vertebral, and/ or forearm fractures to be highly suggestive of osteoporosis or other metabolic bone disease, unless excluded by clinical evaluation and imaging. Risk for fracture at all sites rises substantially in the period immediately following an initial fracture. Therefore, any fracture in adulthood should be viewed as a red flag signaling urgent need for focused attention [57].
Secondary skeletal etiologies should be investigated in all patients who present with fractures, low bone mass, or osteoporosis (Table 3). Chronic kidney disease, hyperparathyroidism, osteomalacia, and other diseases can cause skeletal fragility, multiple vertebral fractures, and very low bone density. For some metabolic bone diseases, osteoporosis therapies are not appropriate and may be harmful (e.g., osteomalacia or aplastic bone disease). Relevant blood and urine studies (Table 3) to rule out secondary etiologies should be obtained prior to initiating antifracture therapy. Patients found to have secondary, treatable causes of bone fragility may require no additional therapy once the underlying condition is addressed (Table 1). Osteoporosis affects a significant number of men, yet largely goes undetected and untreated. Some of the laboratory testing to assess secondary etiologies in men differs from that in women. Screening BMD and vertebral imaging recommendations are outlined in Tables 6 and 7. For additional guidance, readers should refer to Osteoporosis in Men: an Endocrine Society Clinical Practice Guideline, which provides a detailed approach to evaluation and treatment of osteoporosis in men [58].
Bone mineral density (BMD) measurement and classification

DXA measurement of the hip and lumbar spine is the preferred method for establishing and/or confirming a diagnosis of osteoporosis, predicting future fracture risk, and monitoring patients. Areal BMD by DXA is expressed in absolute terms of grams of mineral per square centimeter scanned (g/cm²) and as a relationship to two BMD norms: an age-, sex-, and ethnicity-matched reference population (Z-score), or a young-adult reference population (T-score). The International Society for Clinical Densitometry (ISCD) recommends using a Caucasian (non-race-adjusted) young female normative database for women AND men of ALL ethnic groups. Recommendations may vary with use of sex- and race-adjusted young normal controls for T-scores, and these are used by some co-authors of this guide [59].
The difference between a patient's BMD and the mean BMD of the reference population, divided by the standard deviation of the reference population, is used to calculate Z-scores and T-scores. An individual's BMD is reported as the number of standard deviations above or below the mean BMD, as outlined in Table 4. The BMD diagnoses of normal bone mass, low bone mass (osteopenia), and osteoporosis are based on the WHO classification outlined in Table 4 [60]. BMD has been shown to correlate well with bone strength. The recent FNIH Bone Quality Study found that improvements in DXA-based BMD predicted reductions in fracture risk. In a meta-regression analysis of 38 placebo-controlled trials of 19 osteoporosis medications, with ~111,000 study participants, the FNIH study group found that increased BMD at the total hip and lumbar spine predicted fracture risk reduction at both of these sites [61]. Larger increases in BMD were associated with greater reductions in risk. For example, a 2% increase in total hip BMD could be expected to reduce vertebral fracture risk by 28% and hip fracture risk by 16%, while a 6% increase in hip BMD would result in a 66% reduction in vertebral fracture risk and a 40% reduction in hip fracture risk (Table 5).
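Restating the score arithmetic above in symbols:

\[
\text{T-score} = \frac{\mathrm{BMD}_{\text{patient}} - \overline{\mathrm{BMD}}_{\text{young adult}}}{\mathrm{SD}_{\text{young adult}}},
\qquad
\text{Z-score} = \frac{\mathrm{BMD}_{\text{patient}} - \overline{\mathrm{BMD}}_{\text{matched}}}{\mathrm{SD}_{\text{matched}}} .
\]

As a worked example with hypothetical reference values (real reference data are device- and population-specific), a femoral neck BMD of 0.70 g/cm² against a young-adult mean of 0.86 g/cm² with SD 0.12 g/cm² gives T = (0.70 − 0.86)/0.12 ≈ −1.3, placing the patient in the low bone mass range.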
DXA scans are associated with exposure to trivial amounts of radiation. These highly sensitive measurements of lumbar spine, hip, and/or forearm must be performed by trained technologists on well-calibrated instruments. For meaningful interpretation, serial scans should be performed on the same densitometry device at the same facility.
In postmenopausal women and men aged 50 years and older, the WHO diagnostic T-score criteria (normal, low bone mass, and osteoporosis) are applied to BMD measurement by central DXA at the lumbar spine and femoral neck [62]. BMD measured by DXA at the 33% radius is used for diagnosing osteoporosis when the hip or lumbar spine cannot be measured, when scans are unusable or cannot be interpreted, in clinical conditions associated with low forearm BMD, or as dictated by clinical judgment [59,62].
It is important to note that DXA of the lumbar spine can be difficult to accurately interpret. This is in large part due to degenerative changes in the lumbar spine, very common in older adults, that are typically characterized by localized bone proliferation. In this setting, DXA findings can overestimate spinal BMD and underestimate fracture risk. Patients with degenerative spinal changes may benefit from trabecular volumetric BMD (vBMD) measured with quantitative computed tomography (QCT), which is less affected by these changes, although this technology is not widely available [63,64].
These diagnostic classifications should not be applied to everyone. Premenopausal women, men less than 50 years of age, and children cannot be diagnosed on the basis of densitometric criteria alone. In populations between 20 and 50 years of age, the ISCD recommends that ethnicity- or race-adjusted Z-scores be used instead. Z-scores of −2.0 or lower are classified as low BMD for chronological age, and those above −2.0 are classified as within the expected range [59]. In children, the height-for-age Z-score (HAZ) adjustment (BMC/BMD-HAZ) has been demonstrated to most effectively offset the effect of short or tall stature on BMC/BMD Z-scores. A calculator for pediatric Z-score adjustment is available at https://zscore.research.chop.edu.

Table 4 Diagnostic criteria for osteoporosis: WHO BMD-based classification system and clinical-factor-based diagnostic criteria. (Note: these criteria are sufficient for a diagnosis of osteoporosis. However, they should not serve as the sole determinant of fracture risk and/or dictate treatment decisions. Non-BMD risk factors that affect bone quality independently contribute to bone fragility and fractures.)

BMD criteria for osteoporosis diagnosis in postmenopausal women and men aged ≥ 50 years:
- Normal: BMD within 1.0 SD of the mean for a young-adult reference population (T-score −1.0 and above).
- Low bone mass: BMD between 1.0 and 2.5 SD below the mean for a young-adult reference population (T-score between −1.0 and −2.5).
- Osteoporosis: BMD 2.5 SD or more below the mean for a young-adult reference population (T-score at or below −2.5).

Clinical criteria for osteoporosis diagnosis in postmenopausal women and men aged ≥ 50 years:
- Incident fracture: hip, vertebral, and/or forearm fractures are consistent with osteoporosis (unless excluded by clinical evaluation and imaging).
- FRAX® score: T-score between −1.0 and −2.5 at the femoral neck or total hip by DXA, accompanied by a FRAX-projected 10-year risk of ≥ 3% for hip fracture and/or > 20% for major osteoporosis-related fracture (i.e., clinical vertebral, hip, forearm, or proximal humerus), based on the US-adapted FRAX® model.
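The T-score cutoffs in Table 4 amount to a three-way classification; a minimal sketch (illustrative helper, not a clinical tool):

```python
# WHO T-score classification per Table 4. Cutoffs are from the guide;
# the helper itself is illustrative only.

def who_classification(t_score: float) -> str:
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "low bone mass (osteopenia)"
    return "osteoporosis"

for t in (0.3, -1.8, -2.5, -3.1):
    print(f"T-score {t:+.1f}: {who_classification(t)}")
```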
Who should be tested?
The decision to perform initial bone density measurement should be based on an individual's fracture risk profile and skeletal health assessment. Measuring bone density is not indicated unless test results will influence treatment and management decisions. The BHOF recommends screening densitometry in women aged ≥ 65 years and men aged ≥ 70 years, younger postmenopausal women aged 50-64 years, and men aged 50-69 years with risk factors for osteoporosis. The BHOF also recommends BMD testing for women and men with fracture(s). These recommendations are in concert with those of the ISCD and Endocrine Society clinical practice guidelines for osteoporosis in men [58,59]. BHOF recommendations for BMD testing are listed in Table 6. Routine bone density measurement is not recommended for children or adolescents and is not routinely indicated in healthy young men or premenopausal women unless there is a significant fracture history or specific risk factors for bone loss (such as glucocorticoid use).
Recommended screening densitometry in men
BHOF (formerly NOF) and other societies recommend BMD testing in men to inform clinical decisions regarding treatment ( Table 6). This includes men aged 70 years and older regardless of risk factors, men aged 50-69 years with clinical risk factors for fracture, and men who have broken a bone at age 50 years or older. In addition, men with conditions or on treatments associated with bone loss or low bone mass should be considered appropriate candidates for BMD screening (in its 2018 report, the US Preventive Services Task Force [USPSTF] confirmed the utility of BMD by DXA in predicting fracture in both women and men, but they found insufficient evidence at that time to recommend routine testing in men) [22,65].
Vertebral fracture assessment
Vertebral fracture in an adult aged 50 years or older is diagnostic of osteoporosis, even in the absence of a bone density diagnosis. The presence of a single vertebral fracture signifies a 5-fold increased risk for additional vertebral fractures and a 2-to 3-fold increased risk for hip or other fractures [66]. Unfortunately, most vertebral fractures are subclinical and/ or completely asymptomatic. As a result, they may go undiagnosed for many years. At the same time, a high proportion of women with asymptomatic vertebral fractures have BMD levels that would not warrant treatment based on BMD alone [67]. The finding of a previously unrecognized vertebral fracture may change a patient's diagnostic classification, alter fracture risk calculations, and determine treatment decisions [68]. Proactive investigation is required to detect these fractures so that further bone damage can be prevented.
Traditionally, conventional lateral thoracic/lumbar spine X-ray has been considered the gold standard for identification of vertebral fractures and minor vertebral deformities. However, DXA-assisted vertebral fracture assessment (DXA-VFA) is emerging as an alternative to radiography for its convenience, low cost, and minimal radiation exposure. Recently performed MRI or CT imaging studies done for other purposes can and should also be evaluated for presence of vertebral fractures or evidence of vertebral deformity.
Because subclinical vertebral fractures are so prevalent in older individuals, vertebral fracture assessment is recommended for the high-risk individuals listed in Table 7 [7,8,69]. As demonstrated in a recent study, incorporation of DXA-VFA into routine DXA screening for postmenopausal women with osteopenia or osteoporosis (T-score ≤ −1, aged ≥ 65 years) has demonstrated cost-effectiveness for predicting increased risk of osteoporotic fractures [6]. Baseline DXA-VFA imaging provides a benchmark for future comparison when DXA-BMD is reassessed or when suggestive symptoms present, such as prospective height loss, new back pain, or postural changes [7]. Follow-up vertebral imaging may also be appropriate for patients being considered for a bisphosphonate holiday (temporary suspension of pharmacotherapy), since discontinuing antifracture therapy would not be advisable in patients who have recent vertebral fractures [70].

Table 6 Indications for BMD testing. Consider BMD testing in the following individuals:
- Women ≥ 65 years of age and men ≥ 70 years of age, regardless of clinical risk factors.
- Younger postmenopausal women, women in the menopausal transition, and men aged 50 to 69 years with clinical risk factors for fracture.
- Adults who have a fracture at age 50 years and older.
- Adults with a condition (e.g., rheumatoid arthritis, organ transplant) or taking a medication (e.g., glucocorticoids, aromatase inhibitors, androgen deprivation therapy) associated with low bone mass or bone loss.
Using the US-adapted Fracture Risk Assessment Tool (FRAX®)

The Fracture Risk Assessment Tool (FRAX®) was developed to calculate 10-year probabilities of hip fracture and major osteoporotic fracture (defined as clinical vertebral, hip, forearm, or proximal humerus fracture). The FRAX® algorithm takes into account the validated clinical risk factors for fractures shown in Table 8. FRAX® is validated for women and men aged 40-90 years. FRAX® was tested in treatment-naïve patients not on osteoporosis medications. It may, however, be useful for assessing risk in previously treated individuals who have discontinued bisphosphonate therapy for 2 years or nonbisphosphonate therapy for 1 year [65,71].
A country-specific FRAX® score can be calculated with BMD, without BMD, with BMD and body mass index (BMI), or with BMI alone. Studies have demonstrated modest agreement between assessments of FRAX®-with-BMD and FRAX®-with-BMI (correlation coefficient ~0.5) [72]. While FRAX®-with-BMI may overestimate probability in older frail adults, it may underestimate fracture risk in younger patients compared to FRAX®-with-BMD [73,74].
FRAX® can be calculated with either femoral neck BMD or total hip BMD (in g/cm²), but, when available, femoral neck BMD is preferred. The use of BMD from non-hip sites is not recommended. Caution should be taken when using FRAX® without BMD to estimate fracture risk. (Although FRAX® allows input of a T-score, we do not recommend this, since the reference database for T-score calculation with clinical DXA systems may not be the same as that used in the FRAX® algorithm.) Therapeutic intervention recommendations in FRAX® incorporate data on risk-benefit analyses, cost-effectiveness of treatments, and competition for resources in the USA [75,76]. These recommendations exist for guidance purposes only and are not absolute rules. Developers of FRAX® determined that for many secondary causes of osteoporosis, fracture risk is mediated primarily through impact on BMD [77]. For this reason, when low femoral neck BMD is entered into FRAX®, the secondary causes of osteoporosis button is automatically inactivated.
FRAX® scores should not deter clinicians or patients from considering intervention strategies when clinically assessed risk indicates utility. Conversely, these recommendations do not mandate treatment, particularly in patients with bone mass that is low but above the osteoporosis range. For patients with scores above FRAX® treatment thresholds who do not have a prevalent fracture of the hip or spine or secondary risk factors for accelerated bone loss, it is currently unclear whether pharmacologic treatment significantly reduces fracture risk with a reasonable number needed to treat. Management decisions must be made on a case-by-case basis [78,79].
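The US treatment thresholds referenced above (a 10-year hip fracture probability of ≥ 3% and/or a major osteoporotic fracture probability of > 20% in a patient with low bone mass; see Table 4) reduce to a simple check. A sketch, with the caveat stressed in the text that thresholds guide rather than mandate treatment:

```python
# US-adapted FRAX treatment thresholds for patients with low bone mass,
# as given in Table 4. Guidance only: thresholds do not mandate treatment.

def meets_treatment_threshold(hip_risk_pct: float, mof_risk_pct: float) -> bool:
    """10-year FRAX probabilities, in percent."""
    return hip_risk_pct >= 3.0 or mof_risk_pct > 20.0

print(meets_treatment_threshold(3.4, 14.0))  # True: hip risk alone qualifies
print(meets_treatment_threshold(1.2, 18.0))  # False: below both thresholds
```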
FRAX and US ethnicity data
The US adaptation of FRAX requires selecting 1 of 4 ethnicities for each patient (Caucasian, Black, Hispanic, Asian). Among these populations, data indicate differences in fracture risk even at the same BMD. Although many limitations to this methodology have been described, it provides fracture risk stratification that can direct treatment to high-risk individuals most likely to benefit and avoid treatment of those at low risk [80]. Other countries, including some with considerable ethnic diversity, have used an alternative approach, with a single version of FRAX regardless of ethnicity.

(Fragment of Table 8, clinical risk factors included in FRAX®: smoking (current); secondary causes of osteoporosis: type 1 diabetes, osteogenesis imperfecta in adults, untreated long-standing hyperthyroidism, hypogonadism or premature menopause (< 40 years), chronic malnutrition or malabsorption, and chronic liver disease.)

FRAX® with trabecular bone score

Trabecular bone score (TBS) is an assessment of how evenly or unevenly mineral is structurally distributed in trabecular bone. A TBS is generated from lumbar spine BMD images using software installed on a DXA machine. No additional scan time or radiation exposure is required. The TBS gray-scale texture model captures local differences in mineral concentrations, providing an index of bone microarchitecture that predicts fracture risk independent of BMD and FRAX® scores. TBS is correlated with BMD at the spine and hip as well as with FRAX® risk projections for hip and major osteoporotic fracture [81,82]. Adding TBS to FRAX®, which is possible on late-model densitometry devices, increases the ability of FRAX® to predict fractures (TBS-adjusted FRAX®) [83]. TBS is most applicable to patients who have low bone mass, rather than those with osteoporosis according to BMD criteria, for whom treatment is already indicated [84,85]. TBS is FDA approved and provides additional utility in fracture risk assessment among people with secondary causes of bone loss and fractures, such as type 2 diabetes [83,86,87].
Potential limitations of FRAX®
The FRAX® tool is not a perfect predictor of fracture, and its use requires clinical judgment. Because data validating their relative weights are not yet available, a number of known risk factors are not included in the FRAX® algorithm. These variables include risks associated with falls, non-DXA bone density measurements, rapidity of bone loss, specific secondary causes of osteoporosis (e.g., type 2 diabetes), and multiple fractures experienced in a short period of time. Other risks that are important in older adults but not included in FRAX are frailty, multiple comorbid conditions, multiple medications associated with falls/fractures, and life expectancy.
The FRAX® tool is most useful in patients with low femoral neck BMD. The FRAX® algorithm has not been validated for use with lumbar spine BMD. Utilizing FRAX® in patients with low BMD at the lumbar spine, but relatively normal BMD at the femoral neck, underestimates fracture risk (Fig. 4).
The yes/no scoring employed by FRAX® computes the average risk associated with individual clinical variables. As a result, dose-response effects of risk factors included in FRAX® are lost; for such variables, presumably higher doses increase risk more than lower doses. (Adjustments to FRAX® to better account for the effect of glucocorticoid dose have been proposed [88].) The FRAX® algorithm is available at http://www.bonehealthandosteoporosis.org as well as at http://www.sheffield.ac.uk/FRAX. It is available on newer DXA systems or with software upgrades that provide the FRAX® scores as well as the TBS-adjusted FRAX® on the bone density report.
Alternative bone densitometry technologies
Technologies other than DXA can be used to assess BMD, bone structure, bone strength, and fracture risk. These include quantitative computed tomography (QCT), which measures volumetric BMD (vBMD) of the spine and proximal femur and can derive areal BMD values usable for diagnostic classification with the WHO criteria and as input for FRAX. Opportunistic QCT uses QCT images performed for non-skeletal indications to detect fractures and measure BMD with synchronous or asynchronous calibration [89]. Quantitative ultrasound (QUS) measures non-BMD parameters of bone strength that are correlated with fracture risk. Imaging technologies used in research settings and sometimes in clinical practice include pulse echo ultrasound (PEUS) and finite element analysis (FEA) with biomechanical computed tomography (BCT) [90,91]. Other bone imaging tools largely used in research include peripheral QCT (pQCT), high-resolution pQCT (HR-pQCT), and magnetic resonance imaging (MRI).
Biochemical markers of bone turnover
While not currently FDA approved for diagnosis of osteoporosis, measurements of biochemical bone turnover markers (BTMs) can play a role in assessing fracture risk in appropriate individuals: for example, to gauge rate of bone loss in women following treatment for breast cancer.
Products of the remodeling process can be measured as indicators of turnover activity. Biochemical markers of bone remodeling include the resorption markers serum C-telopeptide (CTX) and urinary N-telopeptide (NTX) and the formation markers serum amino-terminal propeptide of type 1 procollagen (P1NP), bone-specific alkaline phosphatase (BALP), and osteocalcin (OC).
BTMs may [92]:
- Predict rapidity of bone loss in untreated postmenopausal women.
- Predict the extent of fracture risk reduction when repeated after 3-6 months of treatment with FDA-approved therapies.
- Predict the magnitude of BMD increases with FDA-approved therapies.
- Characterize patient compliance and persistence with osteoporosis therapy, using serum CTX for an antiresorptive medication and P1NP for an anabolic therapy (the least significant change [LSC] is approximately a 40% reduction in CTX; see the sketch after this list).
- Potentially be used during a bisphosphonate holiday to suggest when medication should be restarted, although more data are needed to support this recommendation.
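The LSC figure above gives a concrete decision rule for judging whether a CTX change reflects a real treatment response rather than assay noise. A minimal sketch (the ~40% LSC is from the text; the serum values are hypothetical):

```python
# Compare a follow-up serum CTX against baseline using the ~40% least
# significant change (LSC) quoted in the text. Values are hypothetical.

def ctx_response(baseline: float, followup: float, lsc: float = 0.40) -> bool:
    """True if the fractional CTX reduction meets or exceeds the LSC."""
    return (baseline - followup) / baseline >= lsc

print(ctx_response(0.60, 0.25))  # True:  ~58% reduction exceeds the LSC
print(ctx_response(0.60, 0.45))  # False: ~25% reduction is within noise
```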
The FNIH Bone Quality Project conducted a large analysis of antiresorptive therapies to evaluate the utility of BTM changes as a surrogate for fracture risk reduction in drug development. In a recent pooled meta-regression analysis of antiresorptive therapies, changes in CTX or NTX did not predict antifracture efficacy. Changes in the bone formation markers BALP and P1NP, however, were strongly predictive of risk reduction for vertebral fractures, but these changes did not reach significance for non-vertebral or hip fractures [93].
Universal bone health recommendations
Several interventions to preserve bone strength can be recommended to the general population. These include adequate intake of calcium and vitamin D, cessation of tobacco use, identification and treatment of excessive alcohol intake, regular weight-bearing and muscle-strengthening exercise, and fall prevention.

Adequate intake of calcium

Sufficient calcium intake is necessary for acquisition of peak bone mass and maintenance of bone health across the lifespan. The skeleton contains 99% of the body's calcium stores; when the exogenous supply is inadequate, bone tissue is resorbed from the skeleton to maintain constant serum calcium levels.
BHOF supports the Institute of Medicine's (IOM) calcium intake recommendations: 1000 mg/day for men aged 19-70 years and women aged 19-50 years; 1200 mg/day for women 51 years and older and men 71 years and older (Tables 9 and 10) [95]. There is no evidence that calcium intakes in excess of recommended amounts confer additional bone benefit. However, there is evidence that intake of supplemental calcium above 1200 to 1500 mg/day can increase risk for developing kidney stones in at-risk individuals [96].
A balanced diet rich in low-fat dairy products, select dark greens, fish with bones, fruits, vegetables, and fortified foods (such as calcium-fortified nondairy beverages, including orange juice and soy or almond milk) provides calcium as well as numerous other nutrients needed for good health. Table 9 illustrates a simple method for estimating the calcium in a patient's diet. Most people do not get enough: average daily dietary calcium intake for adults aged ≥ 50 years is 600 to 700 mg/day. Increasing dietary calcium is the first-line approach, but calcium supplements should be used when an adequate dietary intake cannot be achieved [97,98].
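As a rough illustration of this kind of estimate (a common heuristic, not the exact Table 9 worksheet, which is not reproduced here): dietary calcium is often approximated as a baseline of roughly 250 mg/day from non-dairy foods plus about 300 mg per serving of dairy or fortified substitute. The baseline and per-serving figures in the sketch below are assumptions for illustration only.

```python
# Illustrative sketch of a dietary calcium estimate. The ~250 mg
# non-dairy baseline and ~300 mg per dairy serving are assumed
# heuristic values, not the exact Table 9 method.
def estimate_dietary_calcium_mg(dairy_servings: int,
                                nondairy_baseline_mg: int = 250) -> int:
    """Approximate daily dietary calcium intake in mg."""
    MG_PER_DAIRY_SERVING = 300  # assumed average per serving
    return nondairy_baseline_mg + dairy_servings * MG_PER_DAIRY_SERVING

# Example: one cup of milk plus one serving of yogurt
print(estimate_dietary_calcium_mg(2))  # -> 850 mg, below the 1000-1200 mg targets
```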
Calcium intake recommendations refer to milligrams of elemental calcium in the supplement. Content varies: calcium carbonate contains 40% elemental calcium by weight, whereas calcium citrate contains 21%. Patients should be advised to read the Supplement Facts panel for elemental calcium content when choosing a supplement.
Supplemental calcium is most widely available as calcium carbonate and calcium citrate. Calcium carbonate requires stomach acid for absorption and so is best taken with food, while calcium citrate is absorbed equally well on an empty stomach. Calcium of all types is best absorbed in doses of ~500 mg or less, so splitting doses may be needed to ensure optimal absorption [99].
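The arithmetic above can be made concrete. The following minimal sketch (illustrative only; the function names are ours) converts a supplement's compound weight to elemental calcium using the percentages quoted in the text, and splits a daily supplemental target into doses of ≤ ~500 mg for absorption.

```python
import math

# Illustrative only; the Supplement Facts panel on the product label
# is authoritative for elemental calcium content.
ELEMENTAL_FRACTION = {"calcium_carbonate": 0.40, "calcium_citrate": 0.21}

def elemental_calcium_mg(compound_mg: float, salt: str) -> float:
    """Elemental calcium delivered by a given weight of a calcium salt."""
    return compound_mg * ELEMENTAL_FRACTION[salt]

def split_doses(daily_target_mg: float, max_per_dose_mg: float = 500.0) -> list:
    """Split a daily elemental-calcium target into equal doses of <= ~500 mg."""
    n = math.ceil(daily_target_mg / max_per_dose_mg)
    return [round(daily_target_mg / n, 1)] * n

# 1250 mg of calcium carbonate provides 500 mg of elemental calcium:
print(elemental_calcium_mg(1250, "calcium_carbonate"))  # 500.0
# An 800 mg/day supplemental target splits into two ~400 mg doses:
print(split_doses(800))  # [400.0, 400.0]
```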
Calcium citrate is useful for people with achlorhydria, inflammatory bowel disease, or absorption disorders, and for those on proton pump inhibitors that reduce gastric acid. Individuals who experience gastrointestinal side effects from calcium carbonate may benefit from taking multiple small doses, taking calcium carbonate with meals, and/or switching to calcium citrate. Other forms of calcium commonly found in supplements or fortified foods include gluconate, lactate, and phosphate. Calcium citrate malate is a well-absorbed form of calcium found in some fortified juices. Elemental calcium content in fortified foods varies.
Some studies have reported increased risk of cardiovascular disease linked to calcium supplements with or without vitamin D, but the data conflict [100][101][102][103]. A large systematic review and meta-analysis including RCTs and cohort studies found no evidence that calcium with or without vitamin D increased cardiovascular disease risk [104]. The large VITamin D and OmegA-3 Trial (VITAL), sponsored by the NIH, tested supplemental vitamin D (2000 units/day) on cardiovascular outcomes and found no adverse effects [105].
Adequate intake of vitamin D
Vitamin D facilitates the calcium absorption necessary for mineralization of bone. The BHOF recommends a daily intake of 800 to 1000 units of vitamin D for adults aged 50 years and older [94]. A slightly higher serum 25(OH)D level (approximately 30 ng/mL) is associated with optimal calcium absorption and so is preferred by the BHOF [106][107][108][109][110]. The upper limit for vitamin D intake according to the IOM is 4000 units/day for adults, above which there is potential for adverse effects. The current normal range for 25(OH)D levels is 20 to 50 ng/mL. Some studies suggest that excessive intake of vitamin D may adversely affect bone through increased risk for falls and fractures [110,111]. Chief dietary sources of vitamin D include fortified milk (400 units per quart) and breakfast cereals (generally 40-300 units per serving), saltwater fish (e.g., salmon, mackerel, tuna), and cod liver oil. Some, but not all, non-dairy milk substitutes, such as rice or soy milk, are supplemented with vitamin D and calcium, so it is important to read the labels. Some calcium supplements and most multivitamin tablets contain vitamin D. Supplementation with either vitamin D2 (ergocalciferol) or vitamin D3 (cholecalciferol) is effective, but cholecalciferol, the form produced in humans, is preferable. Vitamin D2 is derived from plant sources and may be preferred by individuals on a strict vegan/vegetarian diet.
Many conditions prevalent in older patients contribute to vitamin D deficiency, such as chronic renal insufficiency and limited sun exposure due to disability. Of note, a high prevalence of vitamin D deficiency is seen in patients with advanced osteoarthritis presenting for total hip replacement as well as in hip fracture patients with osteoporosis (including those on antifracture medications) [9,112]. Vitamin D deficiency should be corrected to optimize surgical and/or pharmacologic outcomes.
Supplemental vitamin D should be administered in amounts capable of raising the serum 25(OH)D level to approximately 30 ng/mL (75 nmol/L) and maintaining it at this level. Adults who are vitamin D deficient are typically treated with 50,000 units of vitamin D2 or vitamin D3 once a week (or the equivalent daily dose of 7000 units of vitamin D2 or D3) for 5-8 weeks to achieve a 25(OH)D blood level of approximately 30 ng/mL. This regimen should be followed by maintenance therapy of 1000 to 2000 units/day, or whatever dose is needed to maintain the target serum level [113,114]. Adults with ongoing malabsorption may require higher replacement doses of vitamin D to reach and sustain sufficiency.
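To make the repletion arithmetic explicit, here is a minimal sketch of the regimen described above (the default values come directly from the text; the function itself is illustrative, and actual dosing must be individualized by the clinician):

```python
# Minimal sketch of the loading-then-maintenance vitamin D regimen
# described above. Defaults come from the text; illustrative only.
def repletion_plan(weeks: int = 8, weekly_units: int = 50_000,
                   maintenance_units_per_day: int = 1_000) -> str:
    """Summarize a vitamin D repletion schedule as a plain-text plan."""
    daily_equivalent = weekly_units // 7  # ~7000 units/day, as quoted in the text
    return (f"Loading: {weekly_units} units once weekly "
            f"(~{daily_equivalent} units/day) for {weeks} weeks; then "
            f"maintenance: {maintenance_units_per_day} units/day, adjusted "
            f"to hold 25(OH)D near 30 ng/mL.")

print(repletion_plan())
```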
Supplemental vitamin D and BMD
Systematic reviews and meta-analyses have found insufficient or conflicting evidence to support the use of supplemental vitamin D alone (without calcium) to promote musculoskeletal health in community-dwelling adults [115][116][117][118][119]. The large VITAL study, in generally healthy women (≥ 55 years) and men (≥ 50 years) not selected for low bone mass or vitamin D insufficiency, reported no effect of high-dose supplemental vitamin D (cholecalciferol 2000 units/day) versus placebo on BMD or bone structural measures over 2 years [120,121]. Effects did not vary by sex, race/ethnicity, body mass index, or baseline 25(OH)D level. The mean baseline 25(OH)D level was 27 ng/mL, suggesting that VITAL participants may already have had serum vitamin D levels sufficient to support normal bone health. These findings do not apply to persons with extremely low vitamin D levels, persons with osteoporosis, or younger adults. Ongoing VITAL analyses are examining effects of supplemental vitamin D on incident fractures among 25,871 women and men nationwide [121,122].
Supplemental vitamin D and fall risk
A possible role for supplemental vitamin D in fall prevention has been studied extensively, with inconclusive results. The VITAL study, the largest placebo-controlled RCT of supplemental vitamin D on health outcomes, did not support the use of supplemental vitamin D (2000 units/day versus placebo) to prevent falls in a generally healthy population not selected for high fall risk or vitamin D insufficiency [123]. These findings are consistent with recent meta-analyses and other randomized controlled studies in populations around the world that have not found supplemental vitamin D to be effective in reducing fall risk [118][124][125][126].
Vitamin D absorption and synthesis
Gastrointestinal absorption of vitamin D differs between individuals and can be significantly decreased in patients with celiac disease, inflammatory bowel disease, bariatric surgery, and other disorders. Variability in skin activation and synthesis of vitamin D results from differences in pigmentation, season (weak UV light in the winter and fall), time spent outdoors, and use of sunscreens. For example, African Americans have lower 25(OH)D levels than non-Hispanic white Americans due to decreased skin activation (and possibly differences in vitamin D binding proteins). People who live in northern latitudes typically experience a decrease in serum vitamin D in winter that rebounds in spring and summer.
Cessation of tobacco use and avoidance of excessive alcohol intake
The use of tobacco products is detrimental to the skeleton as well as to overall health [127][128][129][130]. BHOF strongly recommends smoking cessation to support primary and secondary prevention of osteoporosis.
Moderate alcohol intake has no known negative effect on bone and may even be associated with slightly higher bone density and lower fracture risk in postmenopausal women. However, alcohol intake of more than two drinks a day for women or three drinks a day for men may be detrimental to bone health: such intake has been associated with reduced calcium absorption and increased risk for falls. Clinicians should identify patients at risk for chronic heavy drinking and/or binge drinking who require further evaluation and treatment [131].
Regular weight-bearing and muscle-strengthening physical activity
The BHOF strongly endorses physical activity at all ages, both for fracture prevention and overall fitness. In childhood and adolescence, consistent weight-bearing and high-impact activities contribute to acquisition of optimal peak bone mass [132]. Weight-bearing exercises (in which bones and muscles work against gravity with feet and legs bearing body weight) include walking, jogging, tai chi, stair climbing, dancing, and tennis. Muscle-strengthening exercises include weight training and resistive exercises, such as yoga, Pilates, and boot camp calisthenics. To avoid injury, patients should be evaluated before initiating a new exercise program, particularly one involving compressive or contractile stressors (such as running or weightlifting).
A multicomponent program is recommended for people with osteoporosis: one that includes progressive resistance training, balance training, back extensor strengthening, core stabilization, cardiovascular conditioning, and impact or ground-reaction forces to stimulate bone. In people with osteoporosis, improved fall outcomes have been documented following high-intensity exercise programs that combine resistance, balance, and weight-bearing activities [133][134][135][136]. In research settings, structured exercise programs have resulted in modest increases in bone density [137][138][139]. Muscle growth has been reported even in frail elderly individuals with established sarcopenia (age-related muscle loss) who participate in short-burst high-intensity exercise. For safety, any such program of physical activity must be developed and supervised by certified fitness personnel experienced with skeletal fragility in geriatric patients. (See "Protecting fragile bones in daily life and recreation" section.)
Motivating patients to stick with a program of physical activity
Sticking with any lifestyle change can be difficult. However, persistence is easier when the change is linked to something the individual values. In this case, what probably matters most is preserving independence by avoiding an injury that results in nursing home admission. Visual aids that show graphical comparisons of risk can help patients see the connection between bone health recommendations and quality of life.
Consultation with a trained physical therapist and/or participation in group exercise led by certified fitness personnel help ensure patient safety, motivate daily participation, and promote social engagement. As long as principles of safe movement are followed, walking and daily activities such as housework and gardening are practical ways to contribute to maintenance of fitness and bone mass.
Fall prevention strategies
Among adults aged 65 years or older, falls are the leading cause of both fatal and nonfatal injuries, accounting for the majority of all fractures and over 90% of hip fractures [142][143][144]. According to CDC statistics, in 2018 more than 32,000 adults aged ≥ 65 years died from unintentional fall injuries [145].
Major risk factors for falls are shown above in Table 2. Many of these are modifiable: muscle strength and balance can be improved through targeted exercise; visual impairment can be addressed; severe vitamin D deficiency can be corrected; fall hazards in the home and work environment can be remediated; and medications that induce dizziness and disorientation can be replaced or reduced.
Multiple studies have demonstrated the efficacy of therapeutic physical activity in reducing falls. A recent meta-analysis of RCTs investigating moderate-intensity multicomponent physical activity (aerobic, balance, and strength training) 3 times a week for 1 year or more reported significant fall reductions: 22% lower risk for falls and 26% lower risk for injurious falls.
Risk of fractures was reduced by 16%, although the significance of this finding is weakened by the small number of fractures in the study (p = .05) [146]. For individuals who have already experienced a fall, regular weight-bearing and muscle-strengthening physical activity may reduce the risk of future falls and fractures [124][147][148][149].
A 12-month, single-blinded RCT among 345 high-risk older adults aged ≥ 70 years who had fallen in the prior year compared usual care (geriatrician-provided fall prevention instruction) with a home-based exercise program focused on strength and balance training. At 1 year, fall incidence was 74% lower in the home-based exercise group than in the group that received usual care. No adverse events related to the intervention were reported [150].
Regarding fracture outcomes among persons with osteoporosis, there are few exercise/physical activity studies with fractures as a primary endpoint. However, a recent meta-analysis examining physical activity and fall outcomes in older adults in the general population provides evidence that physical activity may prevent fractures in older adults [135]. Another meta-analysis of 10 studies (n = 4047) reported that physical activity may reduce the number of older community-dwelling adults experiencing ≥ 1 fall-related fracture (RR 0.73, 95% CI 0.56 to 0.95), but the evidence is judged to be of low certainty [151].
In the WHI, among 77,206 postmenopausal women across the USA followed for a mean of 14 years, there was an association between higher levels of physical activity and lower total fracture risk and lower risk for hip fracture. It is important to note that even low-intensity activities such as walking or gardening reduced risk for hip fracture when compared to sedentary activities [152].
There are few studies in men and few RCTs of exercise with fracture outcomes comparing those who exercised to those who did not.
Antifracture benefits of FDA-approved drugs for osteoporosis have been studied primarily in postmenopausal women. Fracture efficacy data are more limited for patients with secondary causes of osteoporosis (e.g., diabetes, glucocorticoid use) and for men diagnosed with osteoporosis by fracture or T-score.
Potential benefits and risks of therapy should be assessed in the context of a drug's fracture efficacy, onset of effect, duration parameters, magnitude of effect, and site of optimal fracture prevention (spine vs hip). In general, a therapy that has been shown to reduce risk of both vertebral and non-vertebral fractures (alendronate, risedronate, zoledronic acid, denosumab, teriparatide, abaloparatide, or romosozumab) should be considered over one that has not (raloxifene, calcitonin, ibandronate). In most of these pivotal studies, participants were on appropriate amounts of calcium and vitamin D.
The BHOF does not advocate the use of drugs that are not approved by the FDA for prevention and/or treatment of osteoporosis.
Bisphosphonates (alendronate, ibandronate, risedronate, zoledronic acid)
Bisphosphonates are a class of potent antiresorptive agents. Composed of two phosphate groups, bisphosphonates have also been called diphosphonates. All bisphosphonates can affect renal function and are contraindicated in patients with estimated glomerular filtration rate (GFR) below 30-35 mL/min. Bisphosphonates may cause or exacerbate hypocalcemia, and therefore, hypocalcemia must be corrected before treatment [140,141].
Alendronate, brand name: Fosamax®, Fosamax Plus D, Binosto™ (liquid preparation), and generic alendronate
Alendronate sodium is approved by the FDA for prevention (5 mg daily and 35 mg weekly tablets) and treatment of postmenopausal osteoporosis (10 mg daily tablet; 70 mg weekly tablet [most commonly used dose]; 70 mg weekly tablet with 2800 units or 5600 units of vitamin D3; and 70 mg effervescent tablet). Alendronate is also approved as treatment to increase bone mass in men with osteoporosis and for treatment of osteoporosis in men and women taking glucocorticoids [154].
Drug efficacy Alendronate reduces incidence of spine and hip fractures by about 50% over 3 years in patients with a prior vertebral fracture and in patients with hip T-scores diagnostic of osteoporosis (≤ − 2.5) [155,156]. It reduces incidence of vertebral fractures by 48% over 3 years in patients without prior vertebral fracture.
Administration Oral alendronate (generic and Fosamax®) tablets must be taken at least 30 min before the first food, beverage, or medication of the day with plain water only. Tablets must be swallowed whole with a full glass of plain water (6 to 8 oz). Effervescent alendronate (Binosto) must be dissolved in 4 oz of room temperature water and taken on an empty stomach first thing in the morning. Patients should remain upright and eat/drink nothing for 30 min following ingestion.
Side effects and drug safety Side effects are similar for all oral bisphosphonate medications and include gastrointestinal problems (such as difficulty swallowing, esophageal inflammation, and stomach pain) and rare cases of atypical femur fractures (AFF) and osteonecrosis of the jaw (ONJ). (See boxed discussion below.) Ocular inflammation (anterior uveitis and episcleritis) has been documented. All bisphosphonates can affect renal function and are contraindicated in patients with estimated GFR below 30-35 mL/min.
Ibandronate, brand name: Boniva® and generic ibandronate
Oral and intravenous ibandronate sodium are approved by the FDA for treatment of postmenopausal osteoporosis (150 mg monthly tablet and 3 mg every 3 months by intravenous injection). Oral ibandronate is also approved for prevention of postmenopausal osteoporosis and is available as a generic in the USA.
Drug efficacy Ibandronate reduces incidence of vertebral fractures by about 33-50% over 3 years but does not reduce risk of non-vertebral fracture (hip/nonhip) [157].
Administration Oral ibandronate must be taken on an empty stomach, first thing in the morning, with 8 oz of plain water (no other liquid). Tablets must be swallowed whole with a full glass of plain water (6 to 8 oz). After taking ibandronate, patients must remain upright and wait at least 60 min before eating, drinking, or taking any other medication. Intravenous ibandronate, 3 mg/3 mL prefilled syringe, is administered over 15 to 30 s once every 3 months. Serum creatinine should be checked before each injection.
Side effects and drug safety Side effects are similar for all oral bisphosphonate medications and include gastrointestinal problems (such as difficulty swallowing, esophageal inflammation, and stomach pain) and rare cases of AFF and ONJ. (See boxed discussion below.) Ocular inflammation has been documented. Like other bisphosphonates, ibandronate may cause or exacerbate hypocalcemia, and therefore, hypocalcemia must be corrected before treatment. All bisphosphonates can affect renal function and are contraindicated in patients with estimated glomerular filtration rate (GFR) below 30-35 mL/min.
Risedronate, brand name: Actonel®, Atelvia™, and generic risedronate
Risedronate sodium is approved by the FDA for prevention and treatment of postmenopausal osteoporosis (5 mg daily tablet; 35 mg weekly tablet; 35 mg weekly delayed-release tablet; 75 mg tablets taken on two consecutive days every month; and 150 mg tablet taken monthly). Actonel® is also approved to increase bone mass in men with osteoporosis and to prevent and treat osteoporosis in men and women who are either initiating or taking glucocorticoids [158,159].
Drug efficacy Compared with placebo, risedronate reduced incidence of vertebral fractures by 39%, hip fractures by 27%, and non-vertebral fractures by 22% in a meta-analysis conducted by Barrionuevo et al. in 2019 [160]. Significant risk reduction occurred within 1 year of treatment in patients with a prior vertebral fracture.
Administration Oral risedronate (generic and Actonel®) must be taken on an empty stomach, first thing in the morning, with 8 oz of plain water (no other liquid). Tablets must be swallowed whole with a full glass of plain water (6 to 8 oz). After taking risedronate, patients must remain upright and wait at least 30 min before eating, drinking, or taking any other medication. Oral delayed-release risedronate (Atelvia®) is taken not on an empty stomach, but directly after breakfast with ≥ 4 oz of plain water (no other liquid). Patients should remain upright (sitting or standing) for at least 30 min.
Side effects and drug safety Side effects are similar for all oral bisphosphonate medications and include gastrointestinal problems (such as difficulty swallowing, esophageal inflammation, and stomach pain) and rare cases of AFF and ONJ. (See boxed discussion below.) Ocular inflammation (anterior uveitis and episcleritis) has been documented. Because risedronate can cause or exacerbate hypocalcemia, hypocalcemia must be corrected before treatment. All bisphosphonates can affect renal function and are contraindicated in patients with estimated GFR below 30-35 mL/min.
Zoledronic acid, brand name: Reclast®
Zoledronic acid is approved by the FDA for prevention and treatment of osteoporosis in postmenopausal women (5 mg once yearly for treatment and once every 2 years for prevention). It is approved to improve bone mass in men with osteoporosis and for prevention and treatment of osteoporosis in men and women expected to be on glucocorticoid therapy for at least 12 months. (Efficacy of less-frequent dosing is currently being investigated.) Zoledronic acid is indicated for prevention of new clinical fractures in patients (both women and men) who have recently had a low-trauma hip fracture. A recent placebo-controlled study in women aged ≥ 65 years with low hip BMD found that zoledronic acid administered every 18 months for 6 years reduced vertebral and nonvertebral fractures. In this study, the number needed to treat to prevent 1 incident fracture was 15 [161].
Drug efficacy Zoledronic acid reduces incidence of vertebral fractures by 62-70% (with significant reduction at 1 year), hip fractures by 41%, and non-vertebral fractures by 21-25% over 3 years in patients with osteoporosis defined by prevalent vertebral fractures and/or osteoporosis by BMD of the hip [160].
Zoledronic acid administered every 18 months, compared with placebo, in postmenopausal women with low bone mass reduced vertebral fractures by 55%, non-vertebral fractures by 34%, and forearm and wrist fractures by 44% at 6 years [161].
Administration Zoledronic acid (generic and Reclast®), 5 mg in 100 mL, is given once yearly by intravenous infusion administered over at least 15 min; some physicians infuse it over 30 min. Flu-like symptoms (arthralgia, headache, myalgia, fever) have occurred in 32% of patients after the first dose, 7% after the second dose, and 3% after the third dose. To reduce the likelihood of acute-phase reactions, patients should be well hydrated, drink two glasses of water before the infusion, and pre-treat with acetaminophen (unless contraindicated).
Side effects and drug safety We recommend that a 25(OH)D level be obtained and any vitamin D deficiency or insufficiency corrected before treatment. Zoledronic acid may cause or exacerbate hypocalcemia, and therefore, hypocalcemia must be corrected before treatment. Zoledronic acid is contraindicated in patients with creatinine clearance less than 35 mL/min or in patients with evidence of acute renal impairment; creatinine clearance should be measured prior to each dose [162]. Ocular inflammation (anterior uveitis and episcleritis) has been documented [163]. (See boxed discussion below.)
Estrogen-related therapies (ET/HT, raloxifene, conjugated estrogens/bazedoxifene)
A variety of medications that act on estrogen receptors in bone are prescribed to prevent the bone loss associated with postmenopausal osteoporosis.
Drug efficacy The Women's Health Initiative (WHI) found that 5 years of oral HT (Prempro®) reduced incidence of clinical vertebral fractures and hip fractures by 34% and of other osteoporotic fractures by 23% [164]. A meta-analysis sponsored by the Endocrine Society found that HT reduced fractures of the spine by 35%, hip by 28%, and non-vertebral skeleton by 22% [160].
Drug administration ET/HT is available in a wide variety of oral and transdermal preparations that contain estrogen only, progestin only, and combination estrogen-progestin. ET/HT dosages include cyclic, sequential, and continuous regimens. When treatment is discontinued, bone loss can be rapid. Follow-on antifracture agents should be considered to maintain BMD.
Side effects and drug safety Potential risks for women include biliary issues, breast cancer (with combined estrogen-progestin), and endometrial hyperplasia/cancer (with inadequately opposed estrogen). Initial WHI data found elevated risk of myocardial infarction, stroke, pulmonary emboli, and deep vein thrombosis during 5 years of treatment with conjugated equine estrogen and medroxyprogesterone acetate (Prempro®) [165,166]. Subsequent analyses of WHI substudy data showed no increase in cardiovascular disease in women starting treatment within 10 years of menopause [167].
The North American Menopause Society (NAMS) and American Association of Clinical Endocrinologists (AACE)/ American College of Endocrinology (ACE) recommend tailoring ET/HT formulation, dose, and route of administration to individual postmenopausal women. Risk-benefit profiles differ by patient age, time since menopause, and other factors [168,169].
The Endocrine Society guidelines recommend ET/HT to prevent fractures in some high-fracture-risk postmenopausal women < 60 years of age or < 10 years past menopause who are experiencing vasomotor and/or climacteric symptoms and cannot take bisphosphonates or denosumab [170].
When ET/HT use is considered solely for fracture prevention, the FDA recommends that approved non-estrogen treatments first be carefully considered.
Raloxifene, brand name: Evista® and generic raloxifene
Raloxifene is an estrogen agonist/antagonist (selective estrogen receptor modulator/SERM) approved by the FDA for both prevention and treatment of osteoporosis in postmenopausal women. Raloxifene is indicated for the reduction in risk of invasive breast cancer in postmenopausal women with osteoporosis [171][172][173][174]. Raloxifene does not reduce the risk of coronary heart disease.
The Endocrine Society guidelines recommend raloxifene or combination conjugated equine estrogen/bazedoxifene to prevent vertebral fractures in postmenopausal women who have low risk of deep vein thrombosis for whom bisphosphonates or denosumab are not appropriate or for women with a history of or high risk for breast cancer [166].
Drug efficacy Raloxifene reduces incidence of vertebral fractures by about 30-40% in patients with a prior vertebral fracture and by about 55% in patients without a prior vertebral fracture. Raloxifene does not reduce risk of non-vertebral fractures.
Drug administration Raloxifene is available as a 60-mg tablet, which may be taken with or without food.
Side effects and drug safety Raloxifene increases risk for deep vein thrombosis to a degree similar to that observed with estrogen. It can increase hot flashes and cause leg cramps.
Conjugated estrogens/bazedoxifene, brand name: Duavee®
Conjugated estrogens/bazedoxifene is FDA approved as an oral tablet for women who suffer from moderate-to-severe hot flashes associated with menopause and to prevent osteoporosis after menopause.
Conjugated estrogens/bazedoxifene combines conjugated estrogens with bazedoxifene, an estrogen agonist/antagonist. Bazedoxifene reduces risk for endometrial hyperplasia, eliminating the need for progestins in women who have not undergone hysterectomy.
Drug efficacy In pivotal trials, this combination drug significantly increased mean lumbar spine BMD (treatment difference 1.51%) at 12 months compared to placebo in women who had been postmenopausal between 1 and 5 years. Treatment with conjugated estrogens/bazedoxifene also increased total hip BMD. The treatment difference in total hip BMD at 12 months was 1.21% [175][176][177][178].
Drug administration Available as a tablet containing conjugated estrogens 0.45 mg and bazedoxifene 20 mg, taken once daily without regard to meals.
Conjugated estrogens/bazedoxifene is intended only for postmenopausal women who have not had hysterectomy. Like other products containing estrogen, its use should be consistent with treatment goals and risks for the individual woman. When being considered solely for the prevention of osteoporosis, such use should be limited to women who are at significant risk of fracture and only after carefully considering alternatives that do not contain estrogen. When treatment is discontinued, bone loss can be rapid. An antifracture agent should be considered to maintain BMD.
Side effects and drug safety Side effects of conjugated estrogens/bazedoxifene include muscle spasms, nausea, diarrhea, dyspepsia, upper abdominal pain, oropharyngeal pain, dizziness, and neck pain. Because this product contains estrogen, it is approved with the same Boxed Warning and other Warnings and Precautions that have been approved with estrogen products.
Parathyroid hormone analogs (teriparatide, abaloparatide)
Parathyroid hormone (PTH) regulates calcium homeostasis. Constant high exposure to PTH causes bone resorption, while intermittent administration of exogenous recombinant PTH stimulates bone formation. Two anabolic agents derived from synthetic analogs of PTH are currently FDA approved: teriparatide and abaloparatide.
Teriparatide, brand name: Forteo® and the bioequivalent Bonsity™
Teriparatide is a synthetic fragment of human PTH approved by the FDA for treatment of osteoporosis in men and women at high risk for fracture (defined as a history of osteoporotic fracture, multiple risk factors for fracture, or failure of/intolerance to other available osteoporosis therapy). It is also approved to treat glucocorticoid-induced osteoporosis in men and women at high risk for fracture [179]. The FDA has approved an expanded indication for teriparatide for treatment of osteoporosis associated with sustained systemic glucocorticoid therapy (≥ 5 mg/day of prednisone). Forteo® is currently available as a 20 μg daily subcutaneous injection. Biosimilar preparations are now available, as the patent expired in 2019.
Drug efficacy Teriparatide reduces risk of vertebral fractures by 65-77% and non-vertebral fractures by 35-53% in patients with osteoporosis after an average of 18 months of therapy [180]. The VERO trial, which compared teriparatide and risedronate in postmenopausal women with severe osteoporosis, reported ~56% fewer new vertebral fractures in the teriparatide group after 24 months [181]. It is important to follow teriparatide treatment with an antiresorptive agent, usually a bisphosphonate or denosumab, to maintain or further increase BMD.
Drug administration Teriparatide is administered by 20 μg daily subcutaneous injection. When treatment is discontinued, bone loss can be rapid and alternative agents should be considered to maintain BMD. Treatment duration was previously restricted to 24 months, but this was recently changed to open the possibility of longer treatment in high-risk patients.
Side effects and drug safety Side effects of teriparatide include transient orthostatic hypotension, leg cramps, and nausea. Teriparatide transiently increases serum calcium which may predispose patients to digitalis toxicity. It should be used with caution in patients with active or recent kidney stones, hypercalcemia and hypercalcemic disorders, and/or cutaneous calcification.
Until recently, teriparatide treatment was restricted to 2 years in response to elevated rates of osteosarcoma seen in rodent studies. Increased osteosarcoma was not observed in humans during 15 years of post-marketing studies. As a result, the revised teriparatide label now states that use for more than 2 years during a patient's lifetime can be considered if a patient remains at, or has returned to, high risk for fracture.
Its use should be avoided in settings of increased risk for osteosarcoma: Paget's disease of the bone, prior radiation therapy involving the skeleton, open epiphyses (children and young adults), history of bone metastases or malignancies, unexplained elevated alkaline phosphatase, and hereditary disorders predisposing to osteosarcoma [182].
Abaloparatide, brand name: Tymlos®
Abaloparatide is a synthetic peptide analog of human PTH-related protein approved by the FDA for treatment of osteoporosis in postmenopausal women at high risk for fracture, defined as a history of osteoporotic fracture, multiple risk factors for fracture, or failure of/intolerance to other available osteoporosis therapy.
Drug efficacy Abaloparatide reduces risk of new vertebral fractures by about 86% and non-vertebral fractures by about 43% in postmenopausal women with osteoporosis, after an average of 18 months of therapy [183]. In an extension study (ACTIVE-Extend) after 18 months of abaloparatide or placebo, the addition of 6 months of oral alendronate for a total of up to 24 months of therapy resulted in a relative risk reduction of radiographic spine fractures by 87%, non-vertebral fractures by 52%, and major osteoporotic fractures by 58% [184].
Drug administration Abaloparatide is administered by 80 μg daily subcutaneous injection in the periumbilical area of the abdomen. When treatment is discontinued, bone loss can be rapid. An antiresorptive agent should be considered to maintain BMD. Abaloparatide treatment duration is recommended not to exceed 24 months.
Side effects and drug safety Side effects of abaloparatide include leg cramps, nausea, and dizziness. Avoid use in patients with increased risk of osteosarcoma (e.g., Paget's disease of bone, bone metastases, prior skeletal radiation). Patients with hypercalcemia or a history of unexplained elevated alkaline phosphatase or skeletal malignancy should not receive abaloparatide therapy. Abaloparatide may increase urinary calcium and should be used with caution in patients with active or recent kidney stones because of the potential to exacerbate this condition. It is common practice to follow abaloparatide treatment with an antiresorptive agent, usually a bisphosphonate or denosumab, to maintain or further increase BMD.
RANKL inhibitor (denosumab)
The cytokine RANK-ligand (RANKL) produced by osteocytes is required for osteoclast formation. Suppressing RANKL blocks osteoclast formation, leading to less bone resorption and higher bone density.
Denosumab, brand name Prolia®
Denosumab is a fully human monoclonal antibody against RANKL approved by the FDA for treatment of men and women at high risk for fracture (which is defined as a history of osteoporotic fracture and/or multiple risk factors for fracture). It is approved for treatment of patients who have failed or are intolerant to other available osteoporosis therapy, to treat postmenopausal women with osteoporosis at high risk for fracture, to increase bone mass in men with osteoporosis at high risk for fracture, to treat glucocorticoid-induced osteoporosis in men and women at high risk for fracture, to increase bone mass in men at high risk for fracture receiving androgen deprivation therapy for nonmetastatic prostate cancer, and to increase bone mass in women at high risk for fracture receiving adjuvant aromatase inhibitor therapy for breast cancer.
Drug efficacy Denosumab is one of the most potent antiresorptive drugs available to treat osteoporosis because it directly inhibits osteoclast formation and causes apoptosis of mature osteoclasts. Denosumab reduces incidence of vertebral fractures by about 68% at 1 year, hip fractures by about 40% and non-vertebral fractures by about 20% at 3 years, with continued fracture reduction in studies extended to 5 years [160,185,186]. Longer-term use is associated with a significant 48% reduction in the risk of all upper limb fractures and a 43%, 43%, and 58% reduction in risk of forearm, wrist, and humerus fractures at 7 years [187,188].
Drug administration Denosumab is administered as a 60 mg subcutaneous injection by a health professional every 6 months.
Side effects and drug safety Denosumab may cause or exacerbate hypocalcemia, and therefore, hypocalcemia must be corrected before treatment. Denosumab has been associated with hypersensitivity reactions, including angioedema, erythema multiforme, dermatitis, rash, and urticaria. Studies have reported higher incidence of serious infection in women taking denosumab; however, no clear clinical pattern has emerged to suggest a relationship to duration of exposure to denosumab [189]. Safety profiles overall are similar to bisphosphonates and placebo, with no new safety concerns emerging in extension trials up to 10 years, although a theoretical infection risk exists with RANKL inhibition and prescribing information states that patients on concomitant immunosuppressant agents or with impaired immune systems may be at increased risk for serious infections [190,191]. Denosumab has been associated with very rare cases of AFF and ONJ. (See boxed discussion below.) Discontinuation of denosumab treatment is associated with rapid bone loss that may result in multiple vertebral fractures, especially in patients with a prior vertebral fracture [192]. For this reason, a drug holiday is not appropriate with denosumab. During periods of suspended treatment, and as recommended by the FDA, alternate antiresorptive therapy should be considered to maintain gains in bone density. Following denosumab with alendronate has been shown to preserve bone mass, while following it with teriparatide has been associated with bone loss at some skeletal sites [193].
Romosozumab-aqqg, brand name EVENITY™
Romosozumab is a fully human monoclonal antibody to sclerostin. It is currently FDA approved for treatment of osteoporosis in postmenopausal women at high risk for fracture, defined as a history of osteoporotic fracture, multiple risk factors for fracture, or poor response or intolerance to other available osteoporosis therapies. (Romosozumab is approved for men with osteoporosis at high risk of fracture in some countries but not in the USA.)
Drug efficacy Romosozumab reduces fractures and increases BMD at the lumbar spine and total hip more than placebo, alendronate, and teriparatide in postmenopausal women with low bone mass [194][195][196]. In the pivotal FRAME trial, romosozumab compared to placebo for 12 months reduced risk of new vertebral fracture by 73% and clinical fractures by 36% [196]. In the ARCH study, high-risk postmenopausal women had significantly fewer fractures when treated with romosozumab than with alendronate for 12 months (48% fewer new vertebral fractures, 19% fewer non-vertebral fractures, and 38% fewer hip fractures) [197].
Extension studies have reported BMD trending back towards pretreatment levels after discontinuing therapy. Follow-on therapy with denosumab and, to a lesser degree, alendronate preserve or continue to accrue BMD benefits following romosozumab therapy [196,198,199].
Drug administration Romosozumab (210 mg) is administered in monthly doses by subcutaneous injection for 12 months. Each dose consists of two injections (105 mg each) that are given one immediately following the other by a healthcare professional. Use is limited to 1 year due to the waning of bone-forming effect after 12 months/doses.
Side effects and drug safety Romosozumab received FDA approval with a boxed warning stating that it may increase risks for myocardial infarction, stroke, and cardiovascular (CV) death. It should not be taken by women who experienced a stroke or CV event in the previous year. Romosozumab may cause hypocalcemia, and therefore, hypocalcemia must be corrected before treatment. In studies, romosozumab has been associated with hypersensitivity reactions, including angioedema, erythema multiforme, dermatitis, rash, and urticaria. Romosozumab has been associated with rare cases of AFF and ONJ (fewer cases than denosumab). (See boxed discussion below.)
Calcitonin salmon
Calcitonin is a hormone endogenous in humans that is found in salmon and other fish, reptiles, birds, and mammals. It works by preventing bone breakdown, thereby increasing bone density. Because more effective drugs are available for prevention of bone loss and reduction of fracture risk, calcitonin salmon is considered second-line therapy reserved for women in whom alternative treatments are not suitable.
Calcitonin, brand name: Miacalcin® or Fortical®, and generic calcitonin
Calcitonin is FDA approved for the treatment of osteoporosis in postmenopausal women who are at least 5 years following menopause.
Drug efficacy In two RCTs, calcitonin salmon nasal spray increased lumbar vertebral BMD relative to placebo in women with low bone mass who were greater than 5 years post menopause. No increase in BMD has been demonstrated in cortical bone of the forearm or hip.
Calcitonin reduces vertebral fracture occurrence by about 30% in those with prior vertebral fractures but does not reduce the risk of non-vertebral fractures [200]. Calcitonin significantly reduces pain associated with vertebral crush fractures in many patients, making early mobilization possible [201,202].
Drug administration Calcitonin is administered in 200-unit doses delivered as a single daily intranasal spray. Subcutaneous administration by injection also is available.
Side effects and drug safety Intranasal calcitonin can cause rhinitis, epistaxis, and allergic reactions. A meta-analysis of 21 RCTs using long-term post-marketing data found that cancer risk was higher among calcitonin salmon-treated patients (4.1%) than among placebo-treated patients (2.9%); therefore, the need for continued therapy should be reevaluated periodically. Because of its risk-benefit profile, calcitonin has been withdrawn in Canada and Europe and is infrequently used in the USA [203,204].
Possible Adverse Events Associated with Antiresorptive Therapies: ONJ and AFF
People using bisphosphonates and denosumab are at low but increased risk for ONJ, a condition in which bone is persistently exposed (usually following an extraction), and AFF, in which a femur breaks spontaneously, often with no warning. According to current studies, romosozumab use has rarely been associated with ONJ and AFF.
Osteonecrosis of the Jaw (ONJ)
ONJ is more frequently associated with high-dose intravenous bisphosphonate treatment for cancer (96% of reported cases). For patients taking oral bisphosphonates to manage osteoporosis, the incidence of ONJ is estimated at between 1/10,000 and 1/100,000, only slightly higher than the ONJ incidence in the general population [205][206][207]. The risk of ONJ appears to increase with bisphosphonate treatment beyond 5 years. ONJ has been reported in > 2% of studied cancer patients taking high doses of denosumab (XGEVA®). The American Dental Association (ADA) reports that sound oral hygiene practices and regular dental care may be the optimal method for lowering risk of drug-related ONJ. No validated diagnostic technique is currently available to determine which patients are at increased risk. The magnitude of risk reduction associated with discontinuing antiresorptive therapy, even in those with ONJ, is not known but must be weighed against the known negative outcomes of low bone density and fractures [207,209,210].
Atypical Femur Fracture (AFF)
While reports show that ONJ is more common in cancer patients treated with bisphosphonates, rates of AFF appear lower in these patients, possibly related to shorter duration of use or other mechanisms [205,211,212]. AFFs can occur with little or no trauma and may be bilateral. AFF incidence is very low in the general untreated population. Higher risk is associated with Asian ethnicity (North American), lateral bowing of the femur, autoimmune disease, and glucocorticoid use [213]. AFF has been reported in people taking bisphosphonates, denosumab, and romosozumab (association with duration of use is not established).
AFFs are often preceded by pain in the thigh and/or groin area. Clinicians should closely monitor symptoms related to these unusual fractures, proactively questioning patients about occurrence of any thigh and/or groin pain. Patients who present with this prodrome may have experienced stress fracture in the subtrochanteric region or femoral shaft. Bilateral femoral X-rays should be ordered, followed by an MRI or a radionuclide bone scan when clinical suspicion is high enough [214].
Another option, available on newer DXA systems, is single-energy X-ray absorptiometry, an imaging method that detects early signs of AFF [215]. The femur is imaged using a single X-ray beam to detect localized cortical abnormalities characteristic of an incomplete atypical femur fracture. The test is generally rapid (under 1 min) and can be used to identify AFF in patients on bisphosphonates, denosumab, or romosozumab who are experiencing groin or thigh pain suggestive of stress fracture in the subtrochanteric region or femoral shaft. Surgical fixation of one or both femurs is required in some cases of AFF, whereas conservative medical treatment is appropriate in others. If AFF is confirmed, bisphosphonates should be discontinued [14]. Although off-label treatment with an anabolic agent following AFF in association with bisphosphonate use is promising, there are limited data to support this regimen [216]. For patients taking bisphosphonates for osteoporosis, the absolute risk of AFF is low, ranging between 3.2 and 50 cases/100,000 person-years; this estimate appears to double with prolonged duration of bisphosphonate use (> 3 years, median duration 7 years) and to decline rapidly with discontinuation [206,217]. AFF has been seen in patients taking denosumab for osteoporosis (1/2343 patients in the FREEDOM Trial extension followed for 10 years) [218,219]. Denosumab treatment should be discontinued in the event of the rare occurrence of AFF, and another antiresorptive therapy should be started for a few years after stopping denosumab (post AFF) [220].
Romosozumab has rarely been associated with ONJ or AFF. However, because it is a weak antiresorptive, these adverse side effects are biologically plausible.
When discussing risk of ONJ and AFF with high-risk adults, it is important to make clear that the risk for fracture associated with not treating far exceeds the risk for these unusual adverse effects of treatment [212,221,222].
Treatment considerations: pharmacologic therapy
(Note: Risk reduction data for vertebral and non-vertebral fractures discussed in this Guide come from the FDA Prescribing Information, which includes RCTs. In the absence of head-to-head trials, direct comparisons of risk reduction among drugs cannot be made.)
All patients being considered for osteoporosis treatment should be counseled on risk factor reduction, including the importance of calcium, vitamin D, elimination of tobacco use, moderation of alcohol intake, physical activity, and fall prevention (Table 12). Prior to initiating treatment, patients should be evaluated for secondary causes of bone fragility and have BMD measured by central DXA, when available, and vertebral imaging studies when appropriate. (See vertebral imaging above.)
Postmenopausal women and men aged 50 years and older presenting with the following should be considered for treatment (see the decision sketch after this list):
• A hip or vertebral fracture (clinically apparent or found on vertebral imaging), regardless of T-score. There are abundant data showing that fracture incidence goes down in patients with spine or hip fractures treated with approved pharmacologic agents. This is true for patients with previous fractures whether the T-score classification is normal, low bone mass (i.e., osteopenia), or osteoporosis [155, 157, 185, 200, 223-227]. In patients with a hip or spine fracture, T-score is not as important as fracture history in predicting future fracture risk and antifracture efficacy of treatment.
• A fracture of the pelvis, proximal humerus, or distal forearm in a person with low bone mass or osteopenia, whether a postmenopausal woman or a man aged ≥ 50 years [40,41,228]. In persons with fractures of the pelvis, proximal humerus, or distal forearm who do not have osteopenia or low BMD, the decision to treat should be individualized [12,13].
• T-score ≤ − 2.5 at the femoral neck, total hip, lumbar spine, or 33% radius (significant correlation among T-scores at the wrist, hip, and lumbar spine has been reported in research). Decades of high-quality evidence demonstrate that pharmacotherapy prevents fracture in patients with osteoporosis by BMD-DXA at any clinically relevant site [65, 164, 180, 183-185, 196, 198, 224, 228-237].
• Low bone mass and a FRAX® score above the recommended treatment threshold. High fracture risk and need for pharmacologic intervention are indicated by a T-score between − 1.0 and − 2.5 at the femoral neck or total hip and a 10-year probability of hip fracture ≥ 3% or a 10-year probability of major osteoporosis-related fracture ≥ 20% based on the US-adapted FRAX® algorithm [17,18,76,238]. A major osteoporotic fracture is defined as a fracture at the hip, wrist, humerus, or spine. Although FRAX®-calculated fracture risk prediction has been confirmed in multiple studies, there are relatively few data confirming fracture risk reductions in patients selected for treatment on the basis of FRAX® score alone.
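The indications above lend themselves to simple decision logic. The sketch below encodes them directly (a simplified illustration; variable names are ours, and it omits the individualized clinical judgment and secondary-cause workup the Guide requires):

```python
# Simplified encoding of the treatment-consideration criteria listed
# above. Illustrative only: clinical judgment and evaluation for
# secondary causes are required before any treatment decision.
def consider_treatment(hip_or_vertebral_fracture: bool,
                       other_major_fracture_with_low_bone_mass: bool,
                       lowest_t_score: float,
                       frax_hip_pct: float,
                       frax_major_pct: float) -> bool:
    """Return True when the Guide's treatment-consideration criteria are met."""
    if hip_or_vertebral_fracture:
        return True  # treat regardless of T-score
    if other_major_fracture_with_low_bone_mass:
        return True  # pelvis, proximal humerus, or distal forearm fracture
    if lowest_t_score <= -2.5:
        return True  # osteoporosis by BMD at a clinically relevant site
    if -2.5 < lowest_t_score <= -1.0 and (
            frax_hip_pct >= 3.0 or frax_major_pct >= 20.0):
        return True  # low bone mass with FRAX above treatment threshold
    return False

# Example: osteopenic patient (lowest T-score -1.8) with a 10-year
# hip fracture probability of 3.5% meets the FRAX-based criterion:
print(consider_treatment(False, False, -1.8, 3.5, 15.0))  # True
```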
Setting and reaching goals of therapy
With the availability of measurable benchmarks such as BMD, fracture incidence, and biochemical markers of bone turnover, the "treat-to-target" strategy of outcomes-focused therapy, monitoring, and reassessment can be applied to management of osteoporosis. For appropriate patients initiating therapy, a reasonable 3-year target outcome could be to increase T-score from − 2.8 to > − 2.5 and have no fractures. Stable BMD and a year with no new fractures could be a measurable goal for someone with low BMD and prior fragility fractures. In both cases, if the patient is not on track to reach the target or fails to reach the target, consideration should be given to clinical reassessment and possibly a change in therapy.
However, fundamental to the concept of "treat-to-target" is the principle that response to therapy is not necessarily sufficient to achieve an acceptable level of risk. A patient may reach their "target" BMD and still be at unacceptably high risk for fracture. This principle has implications for the selection of initial therapy to reduce fracture risk [239]. For example, while an oral bisphosphonate alone can reduce risk to an acceptable level in a moderate-risk patient (T-score > − 2.5, no fractures, low FRAX®), it may not be sufficient in a high-risk patient (T-score < − 2.5, multiple fractures, high FRAX® score). In the high-risk patient, an anabolic agent followed by antiresorptive therapy might have a better chance of achieving meaningful increases in bone density than antiresorptive therapy alone.
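As an illustration of this treat-to-target logic, the sketch below checks whether a patient is on track (a simplified, hypothetical check mirroring the example above; it is not a validated clinical algorithm):

```python
# Simplified treat-to-target check mirroring the example in the text;
# not a validated clinical algorithm.
def on_track(baseline_t: float, current_t: float,
             target_t: float, new_fracture: bool) -> bool:
    """True when the patient is progressing toward the BMD target
    without an incident fracture; otherwise reassess therapy."""
    improving = current_t > baseline_t
    return (not new_fracture) and (current_t >= target_t or improving)

# Example from the text: baseline T-score -2.8, 3-year target above -2.5
print(on_track(-2.8, -2.4, -2.5, new_fracture=False))  # True: target reached
print(on_track(-2.8, -2.9, -2.5, new_fracture=False))  # False: losing BMD
```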
Treat-to-target management recommendations
The ideal medication for initiating therapy is one best able to sufficiently reduce risk, while accommodating a patient's needs and preferences. Consistent with the treat-to-target concept, individual patients with osteoporosis should be risk stratified before initiating treatment. Site-specific vulnerabilities can be factored in, such as recent wrist or vertebral fracture, and presented to the patient along with fracture reduction data for each of the treatments.
Speed of effect onset should be considered in relation to a patient's imminent fracture risk. In some settings, such as recent fracture or very low BMD, an agent with rapid effect onset may be preferable to one that takes longer to act. Many RCTs of osteoporosis therapies have shown benefit for fracture reduction at the spine within the first year of treatment (e.g., zoledronic acid, denosumab, and romosozumab) [33,240]. It is important to treat patients promptly after a fracture to reduce future risk. A patient with a recent fracture and/or very low BMD (e.g., T-score < − 3.0) is at especially elevated risk and more rapid-acting aggressive antifracture therapy should be considered.
A systematic review and meta-analysis of 107 RCTs of osteoporosis interventions in postmenopausal women (mean age 66 years) with primary osteoporosis was performed and included in the 2019 Endocrine Society Clinical Practice Guideline [166]. The Endocrine Society's treatment algorithm provides guidance on the management of postmenopausal osteoporosis according to fracture risk:
Low risk: (No previous spine or hip fracture; T-score at hip and spine above − 1.0; and FRAX® score below treatment thresholds.) Reassess fracture risk in 2 to 4 years.
Moderate risk: (No previous spine or hip fracture; T-score between − 1.0 and − 2.5; and FRAX® score below treatment thresholds.) Reassess fracture risk in 2 to 4 years.
High risk: (Prior spine or hip fracture; or a lumbar spine or hip T-score of − 2.5 or below; and/or a FRAX® 10-year absolute fracture risk above the treatment threshold.)
• Initial treatment with bisphosphonates (alendronate, risedronate, or zoledronic acid); denosumab as alternative initial therapy to reduce fracture risk. (Ibandronate is not recommended to reduce hip and non-vertebral fractures.)
• Raloxifene or bazedoxifene to prevent vertebral fractures in women with a high risk of breast cancer.
• In postmenopausal women, estrogen treatment to reduce the risk of vertebral fractures in women with a low risk for deep vein thrombosis and for whom bisphosphonates or denosumab are not appropriate.
• Nasal spray calcitonin only in women who cannot tolerate raloxifene, bisphosphonates, estrogen, denosumab, abaloparatide, or teriparatide or for whom these therapies are not considered appropriate.
Very high risk: (Multiple spine fractures and/or hip fracture, and T-score of − 2.5 or lower at the lumbar spine or hip.) Teriparatide or abaloparatide treatment for up to 2 years, or romosozumab for 1 year. Following a course of anabolic therapy, antiresorptive osteoporosis therapy should be used to maintain bone density gains.
More information on the Endocrine Society treatment algorithm is presented in the Endocrine Society published Clinical Practice Guideline [166].
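For readers who find the tiers easier to scan as explicit conditions, this sketch restates them (a simplified paraphrase of the stratification above, not the Society's algorithm itself; names are ours):

```python
# Simplified restatement of the Endocrine Society risk tiers described
# above; not the Society's published algorithm.
def risk_tier(prior_spine_or_hip_fracture: bool,
              multiple_spine_or_hip_fractures: bool,
              t_score: float,
              frax_above_threshold: bool) -> str:
    """Map a patient's findings to the risk-tier labels used above."""
    if multiple_spine_or_hip_fractures and t_score <= -2.5:
        return "very high"  # anabolic first, then antiresorptive
    if prior_spine_or_hip_fracture or t_score <= -2.5 or frax_above_threshold:
        return "high"       # bisphosphonate or denosumab as initial therapy
    if t_score <= -1.0:
        return "moderate"   # reassess fracture risk in 2-4 years
    return "low"            # reassess fracture risk in 2-4 years

print(risk_tier(False, False, -1.6, False))  # "moderate"
```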
Sequential and combination therapy
Patients with recent fractures and/or very low BMD (e.g., T-score < − 3.0) are at especially high risk for future fracture(s). Monotherapy with antiresorptives may not be sufficient to lower risk to acceptable levels in such patients, and more aggressive therapy with combination or sequential use of antifracture medications may be warranted [197][241][242][243][244][245].
General principles
• Obtain a detailed patient history pertaining to clinical risk factors for osteoporosis-related fractures and falls.
• Perform physical examination, measure height, and obtain diagnostic studies to evaluate for signs of osteoporosis and its secondary causes.
• Modify diet/supplements, lifestyle, and other modifiable clinical risk factors for fracture.
• Perform vertebral imaging when appropriate to complete risk assessment.
• Decisions on whom to treat and how to treat should be based on clinical judgment using this Guide and all available clinical information.
Consider FDA-approved medical therapies based on the following in adults ≥ 50 years
• Fracture of vertebrae (clinical or subclinical), hip, wrist, pelvis, or humerus.
• DXA T-score − 2.5 or lower in the lumbar spine, femoral neck, or total hip. Predictive value of isolated measurement of 1/3 radius is currently being investigated (use clinical judgment). • Low bone mass (osteopenia) and a US-adapted WHO 10-year probability of a hip fracture ≥ 3% or 10-year probability of any major osteoporosis-related fracture ≥ 20%. • Patient preferences may indicate treatment for people with 10-year fracture probabilities above or below these levels.
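The criteria in the list above reduce to a simple rule, sketched below under stated assumptions. The variable names are illustrative, and the patient-preference criterion is deliberately omitted because it cannot be mechanized.

```python
# Minimal sketch of the pharmacotherapy-consideration criteria listed above,
# for adults >= 50 years. FRAX(R) probabilities are the US-adapted WHO
# 10-year values, in percent. Illustration only.

def consider_pharmacotherapy(qualifying_fracture: bool,
                             lowest_t_score: float,
                             hip_fracture_10yr_pct: float,
                             major_fracture_10yr_pct: float) -> bool:
    """True when FDA-approved therapy should be considered per the criteria above."""
    if qualifying_fracture:      # vertebra, hip, wrist, pelvis, or humerus
        return True
    if lowest_t_score <= -2.5:   # lumbar spine, femoral neck, or total hip
        return True
    osteopenia = -2.5 < lowest_t_score <= -1.0
    return osteopenia and (hip_fracture_10yr_pct >= 3.0
                           or major_fracture_10yr_pct >= 20.0)
```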
Consider non-medical therapeutic interventions
• Evaluate and address modifiable risk factors related to bone loss and/or falling.
• Referral for physical and/or occupational therapy evaluation (e.g., walking aids and other assistive devices).
• Encourage weight-bearing, muscle-strengthening, and balance-training activities and refer as needed.
Follow-up
• Patients not requiring medical therapies at the time of initial evaluation should be clinically reevaluated as medically appropriate.
• Patients taking FDA-approved medications should have laboratory and bone density reevaluation after 2 years, or more frequently when medically appropriate.
• To identify any new vertebral fractures that have occurred in the interval, vertebral imaging should be repeated if there is documented height loss, new back pain, postural change, or a suspicious finding on chest X-ray following the last (or first) vertebral imaging test, and in patients being considered for a temporary cessation of bisphosphonate therapy.
• Regularly assess compliance and persistence with the therapeutic regimen (at least annually).
Combination and/or sequential use of an anabolic agent (e.g., teriparatide) and a potent antiresorptive (e.g., denosumab) has been shown to increase BMD and improve bone microarchitecture and strength more effectively than monotherapy with any one agent [239,241,242,246]. Combination therapy in which an anabolic agent and antiresorptive therapy are co-administered may be appropriate in a setting of very high risk, such as multiple vertebral fractures. Further studies are needed to test effects of combination therapy on incident fractures. There are no indications for combining two antiresorptive treatments.
There is accumulating evidence that BMD and fracture outcomes are significantly influenced by the order in which antifracture agents are administered. An anabolic agent administered following antiresorptive therapy has demonstrably less impact on BMD than if the anabolic is administered first [247][248][249]. Anabolic therapy after a potent antiresorptive agent may be followed by an attenuation of effect or even bone loss [193,250]. When sequential treatment is considered, starting with anabolic therapy and following with an antiresorptive agent is preferred.
Multiple variables affect outcomes: agent prescribed, patient characteristics, and duration of treatment, for example. More research is needed to determine the best order and most appropriate drugs for combination and sequential therapy in individual patients.
Improving patient adherence with prescribed treatment
An estimated 25-30% of osteoporosis patients do not start taking their prescribed medication, and 50% or more do not continue treatment after 1 year [251,252]. The consequences are significant: a 30% higher incidence of fracture in nonadherent patients compared to adherent patients, with attendant higher morbidity, mortality, and healthcare costs [253,254].
Patients may unintentionally fail to initiate treatment due to forgetfulness, complexity of treatment regimen, and/or drug affordability [255]. In patients who intentionally do not adhere to recommended treatment, the main reasons cited in studies include limited knowledge of osteoporosis, fear of side effects, distrust of physicians or medication in general, and a lack of belief in the need for medication and/or its effectiveness [256][257][258][259].
Acceptance of risk is sometimes influenced by competing priorities. This is reflected in findings from a systematic review of research on women's preferences and values in relation to osteoporosis management published by Barrionuevo et al. in 2019 [260]. The top-ranked consideration was a tie between drug effectiveness and side effects. Not as important were convenience and frequency of doses. (Oral doses were preferred except in the case of biannual or annual dosing, in which case, injection ranked higher.) Even less important were cost and duration of treatment.
Patients often do not understand their personal risk for fractures and the profoundly negative impact that fractures could have on their quality of life, particularly their ability to live independently [261]. This is a challenge inherent to treating "silent diseases" like osteoporosis in which symptoms do not get observably better or worse in response to therapy.
Patient awareness of risk for fractures and their devastating consequences does not guarantee acceptance of antifracture treatment. The 2019 Patient Oriented Value Report commissioned by BHOF appears to indicate that even when awareness of risks and available treatments was high, most individuals at risk for a fragility fracture chose not to take medications needed to reduce their risk. Various factors were associated with willingness to start or continue treatment: dual anabolic-antiresorptive action increased acceptance of a novel treatment agent; history of fragility fracture increased willingness to continue treatment. In a subset of patients, side effects and/or cost burden severely limited willingness to start and stay on treatment [262].
Getting off to a good start matters. Population studies of patients taking oral bisphosphonates demonstrate a strong association between optimal adherence during the first year of treatment and higher rates of adherence in subsequent years. This suggests that focused support and monitoring early in treatment may help improve a patient's long-term adherence and fracture outcomes.
When discussing medication options with patients, solicit their questions and concerns regarding the drug, dosing regimen (daily, weekly, monthly, every 6 months, or yearly), its benefits, and side effects. Asking questions about patient preferences and addressing fears and misconceptions as part of the medication selection process can promote better adherence to prescribed treatment and better outcomes in the form of fractures and disability prevented.
Duration of treatment
Like any lifelong chronic disease, osteoporosis is most successfully managed with continued therapy and monitoring. Therapeutic benefits can be maintained only with treatment. Once pharmacologic therapy is stopped, BMD and fracture risk can be expected to return to baseline or worse: slowly in the case of bisphosphonates, and quickly in the case of non-bisphosphonates, for which discontinuation is associated with accelerated bone turnover, rapid bone loss, and increased risk for spontaneous fractures.
Successful treatment can increase BMD, reduce fracture risk, and improve T-score to the low bone mass or even the normal range. However, in a person with a history of osteoporosis, a T-score in the osteopenic or normal range does not change their diagnosis. The patient still has osteoporosis. BMD may be improved, and fracture risk reduced; however, microarchitectural deterioration remains, as do disease processes responsible for that deterioration.
With this in mind, serial DXA scans must be interpreted in the context of past DXA T-scores, fracture history, and the other factors that established the original osteoporosis diagnosis [263]. Changing a patient's diagnosis to osteopenia from osteoporosis could limit that patient's treatment options and may be detrimental to their bone health.
Available evidence indicates the incidence of rare adverse events such as AFF increases with longer-term antiresorptive therapy (over 3 or 5 years depending on agent) [217,264]. Consideration of potential risks associated with continued therapy must be weighed against potential risks of discontinuing therapy.
Bisphosphonate holiday
For patients on bisphosphonates who appear to be at modest risk of fracture (e.g., T-score > − 2.5 and no recent fracture), temporary discontinuation ("holiday") can be considered after 3 years on an intravenous therapy or 5 years on an oral therapy. A bisphosphonate holiday is defined as a temporary suspension of bisphosphonate therapy (up to 5 years) [166,265]. For patients who continue to demonstrate high fracture risk (e.g., T-score ≤ − 2.5 and/or recent fracture), continued treatment with a bisphosphonate or an alternate therapy should be considered, up to 10 years with an oral bisphosphonate and up to 6 years with annual IV zoledronic acid. This suggestion is consistent with ASBMR task force recommendations on managing patients on long-term bisphosphonate therapy [14].
The rationale for a bisphosphonate holiday is the expectation that prolonged skeletal retention will confer antifracture benefits for some period of time, perhaps several years, in appropriately selected patients. A period off the drug may reduce risk for ONJ and AFF [221,229]. Decisions about how long to treat with a particular drug must be tailored to individual patients, applying the best available clinical guidelines and expert recommendations [266].
For patients treated with a non-bisphosphonate, therapeutic effect rapidly dissipates with discontinuation. Studies indicate that discontinuing denosumab results in increased bone turnover markers, reduced BMD, and increased risk of multiple vertebral fractures, especially in patients with a prior vertebral fracture [192,267]. The Endocrine Society guideline for treatment of postmenopausal osteoporosis recommends that denosumab be continued for 5 to 10 years depending on fracture risk [166]. After discontinuing treatment with denosumab, it is recommended by the FDA that patients be switched to another antiresorptive agent, such as a bisphosphonate, to preserve bone density gains [268]. Studies are ongoing to assess the time course for starting antiresorptive therapies after stopping denosumab.
The management algorithm for bisphosphonate treatment in postmenopausal osteoporosis shown in Fig. 6 is based on ASBMR task force evaluation of data from the Fracture Intervention Trial Long-term Extension (FLEX) and the Health Outcomes and Reduced Incidence with Zoledronic Acid Once Yearly (HORIZON) extension studies [14]. It suggests that women who experience a fracture before or after being treated with bisphosphonates (oral 5 years, IV 3 years) should continue bisphosphonate therapy (oral up to 10 years, IV up to 6 years). Patients who fracture on therapy should be assessed for adherence and secondary causes of osteoporosis. (Note: We lack sufficient data to make specific recommendations regarding alternative antifracture therapy after prolonged bisphosphonate treatment.) High fracture risk in this algorithm is defined by older age (70-75 years), 1 or more clinical risk factors for fracture, and/ or FRAX score above country-specific intervention thresholds. Recommended reassessment includes clinical evaluation, risk assessment, and bone density measurement by DXA. The interval between DXA scans should be based upon changes that are detectable and clinically significant. Reassessment may be necessary at less than 2 years in patients with a new fracture or in patients who can be expected to experience rapid bone loss due to new clinical risk factors (such as initiation of aromatase inhibitor or androgen deprivation therapy) (See Fig. 6).
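As a rough schematic of the Fig. 6 decision flow just described, the following hedged sketch encodes the oral/IV course lengths and the continue-versus-holiday branches. The function and parameter names are our own, "high_risk" stands in for the algorithm's composite definition (age 70-75 years, clinical risk factors, and/or FRAX above country-specific thresholds), and the returned strings paraphrase rather than reproduce the ASBMR recommendations.

```python
# Hedged sketch of the Fig. 6 flow for long-term bisphosphonate (BP) therapy
# in postmenopausal women. Illustration only; patients who fracture on therapy
# should also be assessed for adherence and secondary causes of osteoporosis.

def bisphosphonate_next_step(route: str, years_on_therapy: float,
                             fracture_before_or_during: bool,
                             high_risk: bool) -> str:
    initial_course = 5 if route == "oral" else 3   # oral 5 y, IV 3 y
    maximum_course = 10 if route == "oral" else 6  # oral up to 10 y, IV up to 6 y
    if years_on_therapy < initial_course:
        return "continue therapy; reassess at the end of the initial course"
    if fracture_before_or_during or high_risk:
        if years_on_therapy < maximum_course:
            return "continue BP (or alternative antifracture therapy); reassess"
        return "individualize; data on alternatives after prolonged BP are limited"
    return "consider a drug holiday; re-evaluate every 1 to 2 years"
```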
Pharmacotherapy should be periodically reviewed to determine whether treatment should be continued, changed, stopped, or resumed. It is reasonable to evaluate patients every 1 to 2 years during any hiatus from active bisphosphonate treatment.
Further research is needed to clarify best practices in this area, although, as noted by the ASBMR in their report, due to advanced age, life expectancy, and comorbidities, it is unlikely that future RCTs will provide data for formulating definitive recommendations in this patient population.
Antifracture treatment in men with osteoporosis
Medications currently FDA approved for osteoporosis treatment in men include: the bisphosphonates alendronate, risedronate, and zoledronic acid; the bone anabolic teriparatide; and the RANKL inhibitor denosumab. Unless contraindicated, osteoporosis treatment in hypogonadal men with testosterone levels < 200 ng/dL and symptoms of androgen deficiency should include consideration of testosterone therapy. In hypogonadal men at high risk for fracture who are receiving testosterone, addition of a proven antifracture therapy is indicated [58].
All FDA-approved medications to treat osteoporosis in men have been demonstrated in RCTs to increase BMD. Comparable RCT data for fracture risk reduction exist but are more limited. Fixed-effects meta-analyses of 22 studies demonstrated significantly fewer vertebral fractures in men taking alendronate (67% reduction) and risedronate (57% reduction), but not in men taking calcitonin or denosumab [269]. Another meta-analysis, conducted for the USPSTF, found that available data suggest zoledronic acid reduces risk of morphometric vertebral fractures in men by 67%, with no comparable reduction in risk of clinical vertebral fractures or hip fractures [22].
None of the RCTs evaluating efficacy of bisphosphonates in treating men with cancer treatment-induced bone loss (CTIBL) have been powered to evaluate fracture rates as a primary outcome. However, the denosumab Hormone Ablation Bone Loss Trial (HALT) was adequately powered to demonstrate a statistically significant decrease in new vertebral fractures in men treated for 3 years with denosumab (1.5% versus 3.9% with placebo, relative risk = 0.38; 95% CI = 0.19-0.78; P = 0.006) [270,271].
Antifracture treatment in patients treated with glucocorticoids
An estimated 3% of adults aged 50 years and older are treated with glucocorticoids [272]. Glucocorticoid therapy is associated with an early increased risk of fractures through multiple mechanisms, including accelerated bone resorption; alterations in PTH pulsatility; and reduction in bone formation, sex steroids, and renal calcium reabsorption [273]. Glucocorticoids cause a dose-dependent loss of BMD in the spine and hip, with the greatest loss in vertebral trabecular bone [274]. Among glucocorticoid users, fracture incidence rises with longer-term use of prednisone (over 5 years), higher doses (> 7.5 mg/day), older age (> 55 years), female sex, and Caucasian ethnicity [275].
The American College of Rheumatology (ACR) 2017 guidelines recommend risk stratifying patients when making decisions about antifracture treatment. Adults ≥ 40 years of age receiving long-term glucocorticoids should be designated as either moderate-to-high risk or low risk of fracture based on BMD, fracture history, and 10-year FRAX® fracture score (with glucocorticoid use selected on FRAX calculator). FRAX® calculations assume a prednisolone dose of 2.5-7.5 mg/day (prednisolone and prednisone doses are nearly equivalent). For people taking higher doses (> 7.5 mg/day), proportional increases in fracture risk can be approximated by raising the FRAX® score: a relative 15% for major osteoporotic fracture and 20% for hip fracture risk [88]. For example, a hip fracture risk estimated at 2.0% with glucocorticoid use checked in FRAX® should be increased to 2.4% if the patient's prednisone dose is higher than 7.5 mg/day. Regardless of glucocorticoid dose, patients who exceed the adjusted FRAX® intervention threshold should receive antifracture pharmacotherapy. Likewise, treatment should be initiated in postmenopausal women and men ≥ 50 years of age on glucocorticoid therapy who experience a fragility fracture and/or have a T-score of − 2.5 or lower.
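The dose adjustment described above is simple arithmetic; the following sketch (function and parameter names are our own assumptions) applies the relative 15%/20% inflation and reproduces the worked example from the text.

```python
# Worked example of the adjustment above: for prednisone > 7.5 mg/day,
# inflate FRAX(R) estimates (computed with glucocorticoid use checked) by a
# relative 15% (major osteoporotic fracture) and 20% (hip fracture).

def adjust_frax_for_high_dose_glucocorticoids(major_pct: float, hip_pct: float,
                                              prednisone_mg_per_day: float):
    if prednisone_mg_per_day > 7.5:
        return major_pct * 1.15, hip_pct * 1.20
    return major_pct, hip_pct

# Example from the text: a 2.0% hip fracture estimate becomes 2.4%.
print(adjust_frax_for_high_dose_glucocorticoids(10.0, 2.0, 10.0))
# -> approximately (11.5, 2.4)
```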
Antifracture treatment in glucocorticoid users has been shown in a Cochrane analysis of RCTs to reduce new vertebral fractures by 43%, similar to effects seen in postmenopausal osteoporosis [276]. In a 3-year study reported by Saag et al., teriparatide produced greater increases in BMD and fewer new vertebral fractures than alendronate in comparable glucocorticoid-treated patients [277]. No significant difference was observed in hip or non-spine fracture outcomes.
Meta-analysis of 3 large RCTs suggests that denosumab is effective in treating patients on glucocorticoids, outperforming bisphosphonates in its effects on lumbar spine and total hip BMD in patients with GIOP. The studies were not sufficiently powered for fracture outcomes [278].
There has been concern that, theoretically, denosumab could increase infection risk in patients on glucocorticoids or concomitant biologic therapies. Data currently available suggest any such increased risk is low and/or comparable to that seen with risedronate and zoledronic acid [279][280][281][282].
Fig. 6 Management of long-term bisphosphonate (BP) treatment in postmenopausal women. Note: This flowchart illustrates ASBMR task force recommendations for management of patients taking bisphosphonates. All other osteoporosis drugs lose effect rapidly when discontinued and must be promptly followed by alternative antifracture therapies. Adler RA, et al. (2016), J Bone Miner Res [14]
Antifracture treatment for older-old adults
Current data show that antifracture treatment confers benefits throughout old age. In healthy community-dwelling adults over age 75 years, reported fracture reduction with zoledronic acid, denosumab, teriparatide, and abaloparatide is similar to that seen in younger community-dwelling adults [237,[283][284][285]. In frail elderly long-term care patients, safety and BMD improvement have been demonstrated in RCTs of alendronate and zoledronic acid treatment [286,287].
Monitoring treatment response
Appropriate response to treatment and the need for continued medication to treat osteoporosis should be reviewed annually. Clinical assessment should be performed to identify new fractures, falls, and/or new or worsening comorbidities. Repeat bone densitometry and vertebral imaging should be done in patients exhibiting signs of vertebral fracture, such as height loss or back pain. It may be appropriate to measure biochemical markers of bone turnover in specific patients.
Ongoing clinical assessment
It is important to have accurate baseline values against which to compare serial test results. For example, significant height loss detected through yearly measurement may be an indicator of disease progression. Wall-mounted stadiometers are more reliable than freestanding devices. Patients who lose 0.8 in. or more in height acutely, or 1.5 in. cumulatively, should have repeat vertebral imaging to determine if fractures have occurred since prior tests. Vertebral fracture while on treatment is associated with very high fracture risk. Consideration of untreated secondary causes of bone loss and/or changes to therapy is appropriate in such patients.
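As a small illustration of the imaging triggers just described (names are assumptions; the thresholds are those in the text):

```python
# Sketch of the repeat-imaging triggers described above. Clinical signs such
# as new back pain or postural change also warrant imaging. Illustration only.

def repeat_vertebral_imaging(acute_height_loss_in: float,
                             cumulative_height_loss_in: float,
                             new_back_pain_or_postural_change: bool = False) -> bool:
    return (acute_height_loss_in >= 0.8
            or cumulative_height_loss_in >= 1.5
            or new_back_pain_or_postural_change)
```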
Typically, subclinical morphometric vertebral fractures are diagnostic of osteoporosis. In a patient with significant height loss, diagnosis can be confirmed with VFA performed at the same time as BMD on most modern DXA systems or with conventional lateral thoracic and lumbar spine X-ray.
Serial BMD measurement
Central DXA assessment of the total hip, femoral neck, or lumbar spine is the "gold standard" for serial assessment of BMD. Biological changes in BMD are small compared to inherent error in the test itself, and accurate interpretation of serial BMD studies requires knowing the smallest change in BMD that exceeds testing error. This least significant change (LSC) differs with the densitometry device used, patient assessed, measurement site, and technologist's skill with patient positioning and test analysis [288]. BMD changes of less than 3-6% at the hip and 2-4% at the spine may be due to precision error of the testing itself. The BHOF recommends considering monitoring BMD at the 33% radius in patients for whom BMD cannot be measured at the spine or hip and in those with hyperparathyroidism or hyperthyroidism or on androgen deprivation therapy for prostate cancer, in those undergoing orthopedic surgery of an upper extremity, or according to clinical judgment [8,11]. Information on how to assess precision and calculate the LSC for a particular device and/or facility is available at http://www.ISCD.org.
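The LSC arithmetic can be shown with a short sketch: at the 95% confidence level, LSC = 2.77 × precision error. The same multiplier is applied to biochemical markers of bone turnover (see that section below). The precision values used here are placeholders; each facility must determine its own, as described at http://www.ISCD.org.

```python
# Sketch of the least significant change (LSC) computation described above.
# Precision error may be in g/cm2 (DXA) or percent (bone turnover markers).

def least_significant_change(precision_error: float) -> float:
    """LSC at the 95% confidence level."""
    return 2.77 * precision_error

def change_is_significant(baseline: float, follow_up: float,
                          precision_error: float) -> bool:
    return abs(follow_up - baseline) >= least_significant_change(precision_error)

# Example with a placeholder spine precision of 0.010 g/cm2: only changes of
# at least ~0.028 g/cm2 exceed testing error.
print(change_is_significant(0.940, 0.975, 0.010))  # True (0.035 >= 0.0277)
```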
Serial central DXA testing is an important component of osteoporosis management. Measurements for monitoring patients should be performed in accordance with medical necessity, expected response, and in consideration of local regulatory requirements. According to the ISCD, intervals between testing should be guided by the clinical status of each patient. A follow-up BMD should be done after 1 year of initial therapy or a change in therapy, with longer intervals once an effective treatment is established. The American College of Physicians recommends against monitoring BMD in postmenopausal women within a 5-year treatment interval. However, this recommendation was based on low-quality evidence and was rated as a weak recommendation [289]. The BHOF recommends repeating BMD assessments every 2 years in adults ages 65 and older, with the understanding that testing less or more frequently may be warranted in individual patients.
DXA is currently the preferred approach for monitoring treatment response. According to the ISCD, if DXA is not available, QCT of the spine or hip or pQCT of the radius can be used in high-risk individuals for decisions regarding treatment. Information about the use of these measures and QCT-based finite element analysis for clinical decisions regarding monitoring and treatment can be found on the ISCD website at https://iscd.org/learn/official-positions/adultpositions/ [59,290,291]. Of note, central QCT entails high exposure to ionizing radiation [292].
Biochemical markers of bone turnover
Monitoring bone turnover markers is an alternative way of identifying poor response or nonadherence to therapy. In large RCTs, decreased biochemical markers of bone resorption after 3-6 months of treatment with specific antiresorptive therapies, and increased biochemical markers of formation after 1-3 months of specific anabolic therapies, have been predictive of greater BMD responses and (in some cases) fracture risk reduction [93,293]. In order to be meaningful, changes in biochemical markers must exceed the LSC for the specific biomarker being measured. The LSC is calculated by multiplying the "precision error" of a biochemical marker (laboratory provided) by 2.77 (95% confidence level). Tests should be obtained in the early morning after an overnight fast to offset effects of diurnal variation and diet. Serial measurements should be made at the same time of day at the same laboratory. (See "Biochemical markers of bone turnover" section.)
Vertebral imaging/vertebral fracture assessment (VFA)
When current imaging by MRI and/or CT performed for other purposes is available, it should be evaluated for identification of vertebral fractures. Vertebral fractures can be directly imaged using standard lateral spine X-ray or DXA-based VFA. Once the first vertebral imaging test has been performed to determine prevalent vertebral fractures (indications above), repeat testing should be performed to identify incident vertebral fractures if there is a change in the patient's status suggestive of new fracture, including documented height loss, undiagnosed back pain, postural change, or a finding of new vertebral deformity on chest X-ray [67]. If patients are being considered for a bisphosphonate holiday, vertebral imaging can be done to identify any fractures that have occurred during treatment, which would indicate the need for continued treatment with bisphosphonates or another antifracture agent. (See "Vertebral fracture assessment" section.)
Rehabilitation following fragility fracture
Patient care following fragility fracture is a complex process involving three components: minimizing pain, reducing fracture risk, and improving function. Such multifaceted care is most effectively accomplished by a coordinated team of health professionals, often overseen by a primary care provider or, in ideal circumstances, by dedicated fracture liaison service (FLS) personnel.
Ongoing physical activity that supports healing and maintenance of bone mass is a key part of rehabilitation following fracture. For patients with fractures or at high risk for fractures, instruction in safe body mechanics can reduce disability, improve physical function and quality of life, and lower risk for injurious falls.
The most common fragility fractures are those of the proximal femur (hip), vertebrae (spine), and distal forearm (wrist) [294]. All contribute to disability, pain, and reduced quality of life. An estimated 21% of hip fracture patients 60 years and older die in the year following fracture [295,296]. Vertebral fractures, which can cause pain and disability, confer smaller but significant increases in hospitalization and mortality risk [297,298].
Hip fracture rehabilitation
Hip fracture typically requires surgical repair or replacement (proximal femur and/or acetabulum). While RCT data are sparse on the impact of specific rehabilitation protocols, settings, and durations, large observational studies conducted in Italy and Taiwan suggest a mortality benefit for patients who receive intensive, inpatient rehabilitation following hip fracture [299,300]. Patients who received continuous inpatient rehabilitation had lower death rates at 6 and 12 months than those receiving no therapy or, in the case of the Italian study, those receiving outpatient physical therapy. Furthermore, in a small, randomized trial of functionally limited older adults who had received standard rehabilitation after hip fracture, an additional program of home-based function-oriented activities resulted in modest improvement at 6 and 9 months after randomization. Additional RCTs are needed to assess the clinical relevance of these findings [301].
Fewer than half of hospitalized hip fracture patients recover their pre-fracture competence in activities of daily living [302]. Only one-fourth regain previous levels of social functioning [303]. Six months after a fracture, just 15% of hip fracture patients can walk across a room unaided [304]. Consequently, 10-20% of those living independently before a hip fracture require institutional long-term care afterwards [305].
Vertebral fracture rehabilitation
Two thirds of vertebral fractures are subclinical "silent" fractures. The typical symptomatic vertebral compression fracture is characterized by intense back pain lasting more than a couple of days that gets better when the patient lies down. If a spine fracture is suspected, further evaluation by X-ray, MRI, CT, or VFA can confirm the diagnosis.
Vertebral fractures do not usually require hospitalization [306]. However, multiple thoracic and lumbar fractures can cause spinal deformity, leading to restrictive lung disease, constipation, pain, distention, and reduced appetite [307,308]. Chronic pain, postural weakness, and altered gait can result in impairment equal to that following a hip fracture.
Treatment for acute vertebral fracture includes use of analgesics, bracing (for 2 to 6 weeks), and partial bed rest (4 days or less). If bed rest is recommended, a few 30-to 60-min periods each day of sitting upright and walking around are valuable to avoid stiffness and prevent loss of bone and muscle tissue. Prolonged inactivity should be avoided. Removal of mechanical loads and/or resistive stresses stimulates bone resorption, further weakening bone and muscle [309,310].
A variety of light-weight back braces and postural supports are available that restrict spinal motion near a fracture site to ease pain and promote healing. Bracing may facilitate stimulation of proprioception to improve spinal extensor muscle control. These orthoses are custom molded and can be fitted by a physiatrist, physical therapist, or other trained clinician. A systematic review, including 4 RCTs (n = 281), investigated effects of spinal orthoses after a vertebral fracture during the acute and chronic phases post-fracture. Evidence for the benefit of bracing on pain in the acute phase (3-12 weeks after fracture) is lacking. However, there is low-quality evidence (high risk of bias due to no blinding) that bracing may have beneficial effects on pain, spinal strength, kyphosis, pulmonary volume, and quality of life at 6 months following fracture. Bracing worn 2 hours a day over 6 months appears beneficial. Type of brace does not appear to make a difference. There is no evidence that bracing improves physical function or disability [311].
Wrist fracture rehabilitation
Osteoporosis-related forearm or wrist fractures (fractures of the distal radius, ulna, or both) are the most common fractures of the upper extremities. Depending on the type of fracture, treatment may consist of splint, cast, or brace immobilization. If a radius fracture is not displaced, a cast or functional brace is used until there is radiographic evidence of union. Surgical treatment has been used more recently because of faster functional recovery. Open reduction with internal fixation (ORIF) and closed reduction with percutaneous pinning (CRPP) are procedures often used for unstable distal radius fractures [39,312,313]. During the cast or bracing stage, arm elevation, early mobilization, and edema-control measures are implemented.
There is literature to suggest that early rehabilitation focused on digital mobility yields superior functional outcomes and patient satisfaction [314]. Targeted therapy can improve finger dexterity, even while the hand is immobilized in a cast. Unfortunately, 90% of wrist fracture patients are not referred to physical/occupational therapy during this critical period.
Management of acute fracture pain
Because pain is a barrier to movement and activity, effective pain management is a cornerstone of fracture rehabilitation, preservation of bone tissue, and ongoing fracture prevention. Conservative therapeutic options for acute pain from recent vertebral fractures include analgesics such as acetaminophen, nonsteroidal anti-inflammatory drugs, narcotics, and calcitonin, as well as limited bed rest, bracing, physical therapy, nerve root blocks, and epidural injections.
Multifactorial pain management strategies are currently underutilized. The recent US National Pain Strategy Report emphasizes the need for development and implementation of effective interdisciplinary pain treatment programs focused on patient-directed self-care that employ a range of approaches, both pharmacologic and non-pharmacologic [315].
Multimodal pain management is now a mandated performance measure for hospitals and medical facilities accredited by The Joint Commission (USA). These modalities include acupuncture therapy, chiropractic therapy, ice/heat, massage therapy, physical therapy (PT), electrical stimulation (E-Stim), relaxation therapy, and cognitive behavioral therapy (CBT) [316].
In the 3-5 days immediately following fracture, acetaminophen and/or low-dose narcotics administered around the clock (rather than as needed for pain) can work very well in appropriate patients [317]. When given on a regular schedule over several weeks, this regimen allows patients to remain active and avoid disuse-related muscle and bone loss. Specialist referral is advisable if neurologic involvement is suspected.
Calcitonin salmon has been shown to dramatically reduce acute pain due to recent, nontraumatic osteoporotic vertebral crush fractures. One small RCT that randomized patients to calcitonin nasal spray or placebo spray plus high-dose acetaminophen reported that calcitonin-treated patients had significantly better pain control. This was associated with mobilization and functional improvement (sitting, standing, walking) weeks earlier.
To prevent falls, it is essential to consider disorientation, sedation, and other potential side effects of pain medications, either alone or in combination with other drugs. Because many fracture patients are medicated simultaneously for multiple comorbid conditions, a medical history should include careful attention to potential polypharmacy and drug interactions that could contribute to fall-inducing side effects.
Surgical procedures for acute painful vertebral fracture
A primary source of the intense pain caused by vertebral fracture is movement of fracture margins and/or bone fragments against one another. This is a particular problem in the lumbar spine, which is highly articulated to allow free flexion and rotation. Immobilizing fractured vertebral bone dramatically reduces pain. Prolonged bed rest is not an ideal remedy given resultant deconditioning and bone loss. Extended bracing and physical therapy have been used for this purpose.
Patients with severe acute fracture pain may benefit from referral to a pain specialist and/or interventional radiologist. Unremitting pain that persists despite conservative therapy may respond to short-term specialist treatment and/or minimally invasive vertebral augmentation surgery [318,319].
Although RCTs comparing vertebroplasty/kyphoplasty to medical management (but not to placebo) have reported conflicting results, some studies found short-term pain control with vertebral augmentation [320][321][322][323]. However, when the second ASBMR task force compared vertebral augmentation procedures to sham procedures (with/without injected analgesia) in 2019, it reported little benefit of vertebroplasty for pain control in either acute or sub-acute fracture and insufficient evidence to recommend kyphoplasty over nonsurgical management [324].
Serious complications reported with these procedures include cement pulmonary embolism, osteomyelitis, and epidural cement leak. While fractures of adjacent vertebrae have been reported, analyses of study data are inconclusive [325][326][327][328]. Additional long-term data from large well-designed, placebo or sham-operated controlled RCTs are needed to clarify issues related to safety and efficacy of these procedures. Treatment for severe pain should be individualized. Whether recommending specialist surgical or nonsurgical management for pain associated with spine fractures, clinicians should prescribe antifracture pharmacotherapy for the underlying osteoporosis.
Managing chronic post-fracture pain
Acute pain typically resolves 6-8 weeks following vertebral fracture. However, some people have pain for months or years after a fracture heals. Persistent pain like this can make it difficult to sleep, walk, and eat; it can make a person irritable or depressed by depriving him or her of independence and meaningful participation in self-care and community life.
The need for continued activity to prevent loss of bone and muscle mass underlines the importance of pain control. Untreated pain is a strong incentive to avoid potentially painful activities and develop sedentary behavior. This can quickly lead to musculoskeletal deterioration and frailty. Early and sustained physical engagement is essential to restoration of function and quality of life.
Complications of analgesic drugs, such as addiction, kidney failure, and gastrointestinal bleeding, limit their long-term use for many patients. Increasingly, clinicians are employing a variety of non-pharmacologic approaches to managing persistent pain, including cognitive behavioral therapy, hypnosis, mindfulness training, biofeedback, and stress management. As there are few studies of psychological therapies for chronic pain, available evidence is of low-to-moderate quality, and data in support of one modality over another are not currently available [329][330][331]. Additional research is needed that focuses on risks and benefits for people with osteoporosis and related fractures [332] (Table 13).
Patients with pain following fragility fractures may benefit from one or more of the therapeutic interventions described in Table 13. Recommendations are based on available evidence with limited RCT data to support the clinical effectiveness of many of these practices. It is highly recommended that patients work alongside trained professionals and/or an interprofessional team for a given modality.
Protecting fragile bones in daily life and recreation
Following a fragility fracture, modifications to standard activities of daily life and recreation should be considered to prevent subsequent injury. A trained physical therapist and/or occupational therapist can be instrumental in educating patients about safe body dynamics (Fig. 7).
Avoidance of prolonged or excessive loading of individual skeletal sites is a fundamental principle of safety for people with osteoporosis. Distribution of skeletal load is achieved by alignment of the head, shoulders, spine, hips, knees, and ankles, which centers the body's mass over the lower extremities.
Fig. 7 example (vacuuming): step to turn so that the leading foot, torso, and extended arm face the same direction; modification: shift weight from front to back foot with a straight spine to move the vacuum back and forth.
Recreational pursuits and athletic activities that exert intense forces on weakened bone and/or involve abrupt or high-impact loading can break bones in people with osteoporosis (http://www.bonehealthandosteoporosis.org/wp-content/uploads/BoningUpBrochure_8.5x11.pdf) [355][356][357]. Fortunately, many can be modified for safety with input from a trained physical therapist. Ensuring that patients understand potential risks, while focusing on safe approaches to preferred pastimes and sports, enables patients to stay active. Potentially injurious activities for individuals with osteoporosis include the following:
• Jumping rope or jumping on a trampoline
• Horseback riding, downhill skiing, parasailing, sky diving
• Running/jogging (beneficial for hip BMD, can be dangerous for low spinal BMD)
• Golf, tennis/racquetball, and bowling (done conventionally with twisting at the waist)
The fear of fracture can be a powerful incentive to avoid physical activity, causing predictable harm to bone, muscle, and general health. Spine-sparing strategies for approaching tasks and pastimes help prevent injury while promoting continued mobility and self-confidence. Rather than blanket restrictions (e.g., no bending, no lifting > 10 lb), BHOF recommends guidance on spine-sparing techniques (e.g., hip hinge) by trained occupational and/or physical therapy professionals who have experience working with older individuals.
Safety considerations for physical activity
Older adults with low bone density, osteoporosis, and fractures can safely benefit from activities that promote muscle strength and balance. In the LIFTMOR study, supervised high-intensity physical activity increased bone density, improved function, and reduced kyphosis in postmenopausal women aged 65 ± 5 years with osteoporosis and osteopenia, without elevating risk for vertebral fractures [358,354].
On the other hand, when done incorrectly, high-intensity and/or impact activities can cause musculoskeletal injuries, especially in people with vertebral fractures, sarcopenia, or cognitive impairment. However, with appropriate technique, intensity, and therapeutic progression, even these vulnerable populations can realize improvements in physical performance [359,360].
Supervision is recommended to ensure physical activities are safe and sustainable given an individual's health status, bone fragility, and overall fitness. Individuals with low bone density, osteoporosis, or spinal kyphosis should engage in physical activities with a straight or supported back. Activities that are typically performed with flexion (forward bending under stress) should be avoided unless they are modified to protect the spine. Extreme, end-of-range flexion or rotation should be avoided, especially when loaded (as in lifting objects from the floor). Slow, controlled twisting with the spine supported is acceptable, as is midrange (but not end-range) spine flexion/extension [357] in which some of the body's weight is supported by extremities (bent knee, arm behind back, etc.) (Fig. 8).
Table 13 Pain management strategies and interventions for osteoporotic fractures [333][334][335][336]
Acetaminophen: 650 mg orally every 4-6 h; maximum dose 4000 mg/day for treatment of mild to moderate pain. No evidence of benefit for neuropathic pain. Liver damage risk (overdose) [336].
Acupuncture: Demonstrated to control pain in patients with chronic low back pain. Many health insurance providers now offer coverage for these therapies; however, the quality of evidence for their efficacy is low (issues of study design, placebo effect, etc.) [337].
Anti-inflammatories (NSAIDs): Dose depends on drug. Beneficial for suppressing mild-to-moderate inflammation-related pain. May delay bone healing following fracture, except anti-COX-2 NSAIDs. Over-the-counter NSAIDs taken every 6 h following fracture, or alternating with acetaminophen, can help with pain relief. Adverse reactions of concern include gastrointestinal bleeding, renal insufficiency, myocardial infarction, stroke, and dizziness. No evidence of benefit for neuropathic pain.
Gabapentin/pregabalin: First-line therapies for neuropathic pain. Gabapentin 900-3600 mg orally in 3 divided doses; pregabalin 300-600 mg/day orally in 2 divided doses [336]. Side effects in common: dizziness, somnolence, headache, peripheral edema, nausea, blurred vision, and increased suicidal thoughts. Use with caution in patients with impaired renal function. Abuse and dependence have been reported. Additional side effects/risks of gabapentin: fever, infection, lack of coordination. Additional side effects of pregabalin: weight gain and disorientation.
Antispasmodics: Efficacy in relieving pain is not well established, and risk for adverse (anticholinergic) effects is high [339]. May increase risk for falls, constipation, and indigestion.
Aspirin: 350-650 mg orally every 4 h; maximum dose 3600 mg/day [336]. Beneficial for mild pain (temporary use). Adverse reactions of concern include gastrointestinal bleeding, tinnitus, insomnia, and dizziness. No evidence of benefit for neuropathic pain.
Bed rest (limited/intermittent): While prolonged bed rest causes bone and muscle loss, immediately following vertebral compression fracture patients are generally prescribed an initial period of strict bed rest (no sitting or standing) [340]. Even when a patient is back on his/her feet, lying flat for 10 min every couple of hours, for example, is recommended to support activity by keeping pain under control. Further RCT evidence is needed to support specific protocols for rest during recuperation from vertebral fracture [341].
Bracing and spinal orthoses: A variety of soft, semirigid, rigid, and dynamic braces are available for use following vertebral fracture to control pain, promote fracture consolidation, support posture, and improve balance, physical function, and quality of life [342]. Patients typically are instructed to wear orthoses for 12 to 24 weeks until resolution of pain and vertebral instability. RCT data are currently lacking to make evidence-based recommendations [311].
Calcitonin salmon: Found to mitigate acute pain from recent vertebral fractures. Limiting duration of use is recommended due to potential increased risk for cancer. Not shown to be effective at ameliorating chronic pain from vertebral fractures [343].
Cognitive behavioral therapy (CBT): Although RCT data are not available, studies have demonstrated that CBT and other psychosocial complementary therapies can improve function and quality of life in patients suffering from chronic pain [344,345].
Complementary therapies: Deep breathing, progressive muscle relaxation, guided imagery, and other relaxation techniques can help release muscle tension and direct a patient's attention away from pain and related anxiety. Biofeedback therapy can be helpful for managing acute and/or chronic pain due to fractures; referral should be made to a biofeedback specialist [336].
Electric stimulation (E-Stim): E-Stim, also called transdermal electrical nerve stimulation (TENS), is considered an effective non-pharmacologic therapy for chronic pain; it uses transmission of a mild electrical current applied to a patient's skin at the site of injury or pain [346]. Referral to physiatry or physical therapy is required.
Ice and heat: Application of ice and/or heat, alternating or individually, can promote healing and be effective in reducing swelling, improving blood flow, and relieving pain of muscle spasms. The specific injury dictates the appropriate method, purpose, and application (e.g., heat may not be appropriate for acute fracture with inflammation).
Massage: Although no large-scale RCT data exist, evidence from small studies suggests that massage may improve post-fracture pain and disability compared to sham therapies and other non-manipulative interventions (such as relaxation techniques). The ACP guideline on management of chronic low back pain includes a strong recommendation for massage therapy, chiropractic therapy, or spinal manipulation (acknowledged low-quality evidence) [347]. Intense or deep-tissue massage therapy should be avoided in people who have experienced fragility fractures; cases of massage-induced fractures have been reported [348].
Nerve root block injection: Percutaneous dorsal root ganglion block (nerve block) has been demonstrated to provide immediate and prolonged improvement of chronic pain from vertebral osteoporotic compression fracture in patients who failed conservative treatment or had residual pain after vertebroplasty [349,350]. Lidocaine injection provides significant short-term (up to 2 weeks) pain relief in new fractures [351] and may promote early mobilization. The AAOS includes nerve root block in its recommended treatments of acute pain following vertebral fracture [352].
Opioids: Very effective analgesia for acute pain. However, if used chronically, they lose potency, induce dependence, raise risk for addiction, and lead to constipation, falls, and central sensitization. Recommended only for very short-term use with acute fractures; non-narcotic treatments are preferred.
Topical pain relievers (capsaicin, lidocaine): Lidocaine 1.8% or 5% patch applied to intact skin at the site of pain for up to 12 h daily is recommended for chronic peripheral neuropathic pain. Capsaicin 8% patch is a second-line therapy that can be applied in a clinical setting every 3 months [336]. Side effects common to both: application-site pain/skin irritation, pruritus, and erythema. Capsaicin can increase blood pressure transiently and can lead to desensitization. Over-the-counter preparations of menthol, methyl salicylate, or OTC capsaicin have shown little to no effect on chronic pain.
Vertebroplasty/kyphoplasty (not generally recommended): Little benefit of vertebroplasty for pain control, and insufficient evidence to recommend kyphoplasty over nonsurgical management [324].
Progressive resistance training, balance training, and increased loading exercises are recommended. The American Board of Physical Therapy Specialties offers certification to qualified physical therapists who specialize in geriatrics. Patients can find a board-certified geriatric physical therapist in their area through the public portal on the American Physical Therapy Association's website (http://apta.org).
Secondary fracture prevention
Ideally, all at-risk individuals could be identified and managed to prevent their first fracture (primary prevention). Improvements have been made in detection and management of osteoporosis in women aged 65 years and older: Medicare utilization data show that the proportion of women in this age group screened by DXA in compliance with HEDIS measures increased from 64.4% in 2006 to 72.5% in 2017. Improvements have also been seen in treatment following fracture (secondary prevention): Medicare utilization data show testing and treatment rates following any fracture increased from 20.4% in 2007 to 41.1% in 2020 [361]. However, analysis of Medicare data from 2008 to 2014 found that following hip fracture repair, fewer than 1 in 5 women received recommended interventions, despite being at very high risk for future fractures [362].
Other studies have shown even worse rates, with up to 95% of patients discharged following hip fracture repair with no antifracture treatment and a 2.5-fold increased risk of future fracture [29,30,363]. Failure to treat high-risk patients can lead to disability and premature death that might have been avoided with appropriate care.
Patient perceptions and beliefs contribute to underutilization of effective osteoporosis therapies. As detailed in the ASBMR report on secondary fracture prevention, most patients do not recognize fracture as a symptom of disease [363,364]. Clinicians may find it challenging to convince a patient that tripping and breaking a bone is not bad luck or a particularly hard fall: it is osteoporosis, and it will lead to additional fractures if untreated, particularly in the short term.
Understanding the link between treatment and fracture is critical to motivating patients to undertake the many individual steps required to reduce their risk. Simple interventions to preserve bone strength can be recommended at each office visit. In addition to antifracture medication, these interventions include adequate intake of calcium, vitamin D, and protein; regular participation in weight-bearing and muscle-strengthening physical activity; cessation of tobacco use; and recognition and treatment of alcohol abuse.
There are structural factors that contribute to the problem of osteoporosis underdiagnosis and undertreatment as well. Skeletal health overlaps multiple specialties of practice, in both inpatient and outpatient settings. In today's fragmented healthcare environment, it can be unclear who is responsible for bone health. The orthopedic surgeon who repairs a hip fracture may assume the primary care doctor has it covered, while the primary care doctor assumes the orthopedist took care of any needed bone-related diagnosis and/or treatment when the patient was hospitalized. Continuity of care is complicated by multiple handoffs, particularly after hospitalization: skilled nursing stay, home health, etc. There is also the challenge of identifying patients at highest risk, because most fractures occur in people with bone density above the threshold diagnostic of osteoporosis. They have low bone density, but not low enough to meet bone density criteria for intervention [365].
Institutional approaches to secondary fracture prevention have been initiated in the USA and abroad to ensure that patients who fracture are evaluated, treated, and followed so that the potential cascade of fractures is stopped after the first. Evidence-based practice models have emerged that can be adapted for various clinical practice settings. One such model gaining acceptance is the fracture liaison service (FLS).
The fracture liaison service model of care
The FLS system of care in the USA was developed through the National Bone Health Alliance (NBHA), a public-private partnership of 50-plus member organizations along with representatives from the Centers for Disease Control and Prevention, Centers for Medicare & Medicaid Services, National Institutes of Health, and the US Food and Drug Administration [13].
In an FLS system, a multidisciplinary team of healthcare providers works in coordination to implement evidence-based diagnostic and treatment protocols to follow for post-fracture care. The process is overseen by an FLS coordinator (a nurse or other allied health professional) who is charged with overall organization, tracking, and documentation of post-fracture patient care. It is a simple concept, yet its implementation is complicated, requiring planning, division of responsibilities, coordination of staff, systematic and consistent patient monitoring, and knowledge of billing and coding technicalities. Because management of osteoporosis is a multidimensional and long-term undertaking, treatment plan coordination is critical to its effectiveness. Equally critical is patient collaboration. Every aspect of the plan must accommodate patient needs, goals, values, habits, abilities, and living conditions [366,367].
Since early pilot programs began a decade ago, FLS programs have been successful in the USA and abroad. They have markedly reduced recurrent fractures, particularly in closed medical systems, by targeting interventions at postfracture patients, recognizing that this group is at highest risk of future fractures.
FLS pilot program outcomes to date include the following:
• Kaiser Permanente's Healthy Bones program has led to an overall 38% reduction in the program's expected hip fracture rate since 1998.
• Geisinger Health System's osteoporosis disease management program achieved $7.8 million in cost savings over 5 years through reduction of secondary fractures.
• The American Orthopaedic Association's Own the Bone program has significantly improved rates of treatment and counseling, BMD testing, initiation of pharmacotherapy, and coordination of care for patients following fragility fracture [368].
• The NBHA FLS Demonstration Project, a turnkey FLS solution created for sites to automate, benchmark, and improve performance related to selected osteoporosis/post-fracture quality measures, demonstrated an increase in DXA and vitamin D level testing and treatment following implementation of the FLS program in three academic hospital settings [45].
The goal of the FLS model, like that of any practice management program, is to ensure that patients with a fracture are evaluated and treated for their underlying osteoporosis while making the best use of clinician time and expertise. Creative approaches optimize use of electronic medical records and practice management software, delegate tasks, automate as much as possible, take advantage of the patient's waiting room time, and team up colleagues, specialists, allied health professionals, and support staff. There are many tools available for every type of practice, from sole practitioner to hospital-based multispecialty clinic.
Recommendations for secondary fracture prevention
In 2019, a coalition convened by the ASBMR published Clinical Recommendations for Secondary Fracture Prevention to treat osteoporosis in women and men aged 65 years or older who suffer a spine or hip fracture; a concise summary of the coalition's recommendations is provided in that publication [363]. The Bone Health and Osteoporosis Foundation (BHOF) is committed to continuing the effort to answer open questions related to this debilitating disease, with the goal of eliminating osteoporosis as a threat to the health of present and future generations. For additional resources on osteoporosis and bone health, visit http://www.bonehealthandosteoporosis.org.
Summary
The osteoporosis treatment gap is truly a public health crisis, putting patients at risk for fragility fractures that cause avoidable suffering, disability, dependence, and premature death and cost millions in healthcare expenditures. To close this gap in care, we need to engage physicians, governmental entities, and public health organizations in efforts to improve access and insurance coverage for key fracture prevention services. Osteoporosis detection, diagnosis, and treatment must become routine components of clinical practice. Healthcare providers of all types can lend their support by raising awareness of fracture prevention and bone preservation interventions and lifestyle modifications among patients, caregivers, and fellow health professionals.
We have the tools at our disposal. Proven diagnostic technologies and bone-sparing therapies are widely available at low cost. Pharmacologic agents that build bone and/or decrease bone breakdown dramatically reduce fracture incidence. Non-pharmacologic interventions preserve bone tissue, build muscle, and help prevent falls and fall-related fractures. However, these and other effective strategies are underutilized at every stage of healthcare delivery from inpatient to at-home and continuing care.
However effective each may be, no single intervention or modality is adequate to preserve bone and prevent fractures in vulnerable patients. Collaborative approaches piloted in FLS programs are multifactorial and holistic. They start with the recognition that a fracture in an adult is a clinical sign of osteoporosis that warrants further investigation to identify and mitigate underlying conditions that contribute to bone loss and fractures. Multifaceted patient care must be coordinated to ensure implementation of the full range of pharmacologic, dietary, fall prevention, physical therapy, and exercise recommendations.
As our population ages, preservation of skeletal health becomes more important every year. By applying recommended fracture risk assessment, pharmacologic treatment, risk reduction counseling, and long-term monitoring, clinicians across the healthcare spectrum who care for adults can contribute to extending the healthy independent lives of their patients.
Glossary
Abaloparatide (Tymlos®): An anabolic therapy approved for the treatment of osteoporosis. The pivotal study indicates that abaloparatide, compared with placebo, reduced the risk of new vertebral fractures by 86% and non-vertebral fractures by 43% after 18 months of therapy in patients with osteoporosis.
Alendronate (Fosamax®, Binosto™): A bisphosphonate approved by the US Food and Drug Administration for prevention and treatment of osteoporosis; accumulates and persists in the bone. Studies indicate about a 50% reduction in vertebral and hip fractures in patients with osteoporosis.
Atypical femur fractures (AFF): These are atraumatic or spontaneous fractures characterized by distinct radiographic and clinical features that resemble stress fractures (transverse fracture line, periosteal callus formation at the fracture site, little or no comminution, prodromal pain, and bilaterality in some instances). These fractures are thought to be associated with long-term use of potent antiresorptive medications and are distinguished from ordinary osteoporotic femoral diaphyseal fractures.
Biochemical markers of bone turnover: Biochemical markers of bone remodeling can be measured in serum and urine. These include the resorption markers serum C-telopeptide (CTX) and urinary N-telopeptide (NTX) and the formation markers serum bone specific alkaline phosphatase (BALP), osteocalcin (OC), and amino-terminal propeptide of type 1 procollagen (P1NP). Elevated markers of bone turnover may predict bone loss, while declines in these markers after 3-6 months of treatment may suggest fracture risk reduction.
Bone Health and Osteoporosis Foundation (BHOF): In October 2021, the National Osteoporosis Foundation (NOF) changed its name to the Bone Health and Osteoporosis Foundation (BHOF) to reflect the Foundation's dual focus on preventing osteoporosis and fracture in addition to osteoporosis diagnosis and treatment across the lifespan.
Bone mineral density (BMD): A risk factor for fractures. By DXA, BMD is expressed as the amount of mineralized tissue in the area scanned (g/cm²); with QCT, BMD is expressed as the amount per volume of bone (mg/cm³). Hip BMD by DXA is considered the best predictor of hip fracture; it appears to predict other types of fractures as well as measurements made at other skeletal sites. Lumbar spine BMD may be preferable to assess changes early in menopause and after bilateral ovariectomy and may be better than hip BMD in predicting risk of spine fractures, especially in women in their 50s and 60s.
Calcitonin (Miacalcin® or Fortical®): A polypeptide hormone that inhibits the resorptive activity of osteoclasts. Second-line antifracture treatment (less effective than alternatives). Nasal spray and injection available. Documented to significantly reduce acute pain of recent vertebral crush fractures. Short-term use advised due to cancer risk.
Calcium: A mineral that plays an essential role in development and maintenance of a healthy skeleton. The vast majority of the body's calcium is stored in bone. If intake is inadequate, calcium is mobilized from the skeleton to maintain a normal blood calcium level. In addition to being a substrate for bone mineralization, calcium is an inhibitor of bone remodeling through suppression of circulating parathyroid hormone.
Cancellous bone: The spongy, or trabecular, tissue in the middle of bone (e.g., vertebrae) and at the end of the long bones. Also called trabecular bone.
Cortical bone: The dense outer layer of bone.
Denosumab (Prolia®): A fully human monoclonal antibody to RANK-ligand (RANKL) approved by the FDA for the treatment of osteoporosis in postmenopausal women at high risk of fracture and other indications. In the pivotal study, denosumab reduced the incidence of vertebral fractures by about 68%, hip fractures by about 40%, and non-vertebral fractures by about 20% over 3 years.
Dual-energy X-ray absorptiometry (DXA): A diagnostic test used to assess bone density at various skeletal sites using radiation exposure about one-tenth that of a standard chest X-ray. Central DXA (lumbar spine, hip) is the preferred measurement for definitive diagnosis of osteoporosis and for monitoring the effects of therapy.
Estrogen: One of a group of steroid hormones that control female sexual development; directly affects bone mass through estrogen receptors in bone, reducing bone turnover and bone loss. Indirectly increases intestinal calcium absorption and renal calcium conservation and, therefore, improves calcium balance. See hormone therapy.
Estrogen agonists/antagonists: A group of compounds that act on a subset of estrogen receptors in the body, also known as selective estrogen receptor modulators (SERMs). Examples are the pharmaceutical agents raloxifene and bazedoxifene.
Exercise: An intervention long associated with healthy bones, despite limited evidence for significant beneficial effect on BMD or fracture risk reductions. Studies evaluating exercise are ongoing; however, enough is known about the positive effect of exercise on fall prevention to support its inclusion in a comprehensive fracture prevention program.
Food and Drug Administration (FDA): The US FDA is responsible for protecting the public health by assuring the safety, effectiveness, quality, and security of human and veterinary drugs, vaccines and other biological products, and medical devices. The FDA is responsible for the safety and security of most of our nation's food supply, all cosmetics, dietary supplements, and products that give off radiation.
Fracture: Breakage of a bone, either complete or incomplete, whether from trauma, repetitive stress, or bone insufficiency. Osteoporosis can contribute to any fracture at any skeletal site, but overwhelmingly affects sites rich in trabecular bone: femoral neck, total hip, spine, and forearm. Fractures at sites of dense cortical bone, such as the fingers, toes, skull, and face, are less likely to be attributed to osteoporosis. Vertebral compression fractures are the most common type of osteoporotic fracture.
Fracture liaison service (FLS): A coordinated care system headed by an FLS coordinator (a nurse practitioner, physician's assistant, nurse or other health professional) who ensures that individuals who suffer a fracture receive appropriate diagnosis, treatment and support.
Hormone/estrogen therapy (HT/ET) (HT-Activella®, Femhrt®, Premphase®, Prempro®; ET-Climara®, Estrace®, Estraderm®, Estratab®, Ogen®, Ortho-Est®, Premarin®, Vivelle®): HT is a general term for all types of estrogen replacement therapy when given along with progestin, cyclically or continuously. HT is generally prescribed for women after natural menopause or bilateral ovariectomy with progestin required to protect the uterus from unopposed estrogen. ET is prescribed for postmenopausal women who have had a hysterectomy. Studies indicate that 5 years of HT may decrease vertebral fractures by 35 to 50% and non-vertebral fractures by about 25%. Ten or more years of use might be expected to decrease the rate of all fractures by about 50%.
Ibandronate (Boniva®): A bisphosphonate approved by the FDA for the prevention and treatment of postmenopausal osteoporosis. Ibandronate reduces incidence of vertebral fractures by about 50% over 3 years. Ibandronate in the large RCTs did not reduce hip or non-spine fractures.
Least significant change (LSC): A measure utilized as part of DXA precision assessment that helps to determine if a BMD change can be ascribed to treatment effects or is due to measurement error.
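A worked example of the LSC calculation may help here: under the widely used convention, LSC = 1.96 × √2 × precision error, i.e., about 2.77 × the precision error at 95% confidence (the √2 arises because two scans are being compared). The precision value in the sketch below is a placeholder, not a value from this report.

```python
import math

def least_significant_change(precision_sd_gcm2, confidence_z=1.96):
    """Smallest BMD change (g/cm^2) exceeding measurement error when two
    DXA scans are compared: LSC = z * sqrt(2) * precision error
    (about 2.77 x precision at 95% confidence)."""
    return confidence_z * math.sqrt(2) * precision_sd_gcm2

# Example: a center with 0.010 g/cm^2 lumbar-spine precision error
lsc = least_significant_change(0.010)
print(f"LSC = {lsc:.3f} g/cm^2")   # ~0.028 g/cm^2
print(abs(0.950 - 0.915) > lsc)    # True: this change exceeds the LSC
```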
Low bone mass (osteopenia): The designation for bone density between 1.0 and 2.5 standard deviations below the mean BMD of a young adult reference population (T-score between − 1.0 and − 2.5).
Modeling: The term for the skeletal processes that shape bone during growth and replace damaged bone with new bone throughout the life cycle. Modeling occurs on bone surfaces without prior bone resorption.
Non-vertebral fractures: Fractures of the hip, wrist, forearm, leg, ankle, foot, and other sites.
Normal bone mass: The designation for bone density within 1 standard deviation of the mean BMD of a young adult reference population (T-score at − 1.0 and above).
Osteopenia: See low bone mass.
Osteoporosis: A chronic, progressive disease characterized by low bone mass, microarchitectural deterioration of bone tissue, decreased bone strength, bone fragility, and a consequent increase in fracture risk; BMD 2.5 or more standard deviations below the mean BMD of a young adult reference population (T-score at or below − 2.5).
Peak bone mass: The maximum bone mass accumulated during young adult life (late teens to early 20s).
Peripheral DXA: A DXA test used to assess bone density in the forearm, finger, and heel.
Physiatrist: A physician who specializes in physical medicine and rehabilitation (physiatry).
Previous fracture: A risk factor for future fractures, defined here as a history of a previous fracture after age 40 years.
PTH (1-34), teriparatide (Forteo®): An anabolic therapy approved for the treatment of osteoporosis. The pivotal study indicates a 65% reduction in vertebral fractures and a 40 to 50% reduction in non-vertebral fractures after 18 months of therapy in patients with osteoporosis.
Quantitative computed tomography (QCT): A diagnostic test used to assess volumetric bone density; reflects three-dimensional BMD. Usually used to assess the lumbar spine but has been adapted for other skeletal sites (e.g., hip). It is also possible to measure trabecular and cortical bone density in the periphery by peripheral QCT (pQCT) or high-resolution pQCT (HRpQCT).
Quantitative ultrasound densitometry (QUS): A diagnostic test used to assess bone density at the calcaneus or tibia. Ultrasound measurements correlate only modestly with other assessments of bone density in the same patient, yet some prospective studies indicate that ultrasound may predict fractures as effectively as other measures of bone density.
Raloxifene (Evista®): An estrogen agonist/antagonist (or selective estrogen receptor modulator) approved by the FDA for prevention and treatment of osteoporosis. It lowers the risk of vertebral fracture by about 30% in patients with and about 55% in patients without prior vertebral fracture. Raloxifene is approved for the prevention of breast cancer.
RANKL: Receptor activator of nuclear factor kappa-B (RANK) ligand (RANKL) Remodeling: Also called bone turnover, remodeling is the process by which the skeleton repairs damage and maintains serum calcium levels through the ongoing lifelong dual processes of bone resorption (breakdown) and formation.
Resorption: The breakdown and removal of bone tissue during bone remodeling.
Risedronate (Actonel®, Atelvia®): A bisphosphonate approved by the FDA for prevention and treatment of osteoporosis. It lowers the risk of vertebral fracture by about 41-49% and non-vertebral fractures by about 36%.
Risk factors: For osteoporotic fractures, risk factors include low BMD, parental history of hip fracture, low body weight, previous fracture, smoking, excess alcohol intake, glucocorticoid use, secondary causes of osteoporosis (e.g., rheumatoid arthritis), and history of falls. These readily accessible and commonplace factors are associated with the risk of hip fracture and, in most cases, with that of vertebral and other types of fracture as well.
Romosozumab (Evenity™): An FDA-approved bone anabolic agent; a humanized monoclonal antibody to sclerostin that both increases BMD and decreases fracture incidence in women with postmenopausal osteoporosis. The pivotal study reported a 73% (95% CI 53-84%) relative risk reduction in morphometric vertebral fracture after 12 months.
Secondary causes of osteoporosis: Osteoporosis that is drug-induced or caused by many disorders such as malabsorption, hyperthyroidism, renal disease, and chronic obstructive pulmonary disease.
Secondary fracture prevention: While primary fracture prevention comprises measures to promote and maintain BMD above − 2.5 so as to prevent an initial osteoporosis-related fracture, secondary fracture prevention is antifracture treatment after a patient has had an osteoporosis-related fracture, to prevent second and subsequent fractures.
Standard deviation (SD): A statistical measure of variance in a population.
T-score: In describing BMD, the number of standard deviations above or below the mean BMD of a young adult reference population.
Vitamin D: A group of fat-soluble sterol compounds that includes ergocalciferol (vitamin D2) and cholecalciferol (vitamin D3). These compounds are ingested from plant and animal sources; cholecalciferol is also formed in skin on exposure to ultraviolet light. When activated in the liver and then the kidney, vitamin D promotes calcium absorption. Vitamin D replacement increases muscle strength in patients with severe vitamin D deficiency. A 25(OH)D level of approximately 30 ng/mL (75 nmol/L) is considered by many bone health experts to be optimal.
Zoledronic acid (Reclast®): A bisphosphonate approved by the FDA for treatment of postmenopausal osteoporosis and to reduce risk of subsequent fracture in those with prior hip fracture. It lowers risk of vertebral fractures by about 70%, hip fractures by about 41% and non-vertebral fractures by about 25%.
Z-score: In describing BMD, the number of standard deviations above or below the mean BMD for persons of the same age, sex, and ethnicity.
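The densitometric categories defined above (normal bone mass, low bone mass, osteoporosis) reduce to simple arithmetic on the T-score, as the short worked example below shows. The reference mean and SD used here are placeholders standing in for a densitometer's reference database, not values from this report.

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """Standard deviations relative to a young adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Standard deviations relative to an age/sex/ethnicity-matched mean."""
    return (bmd - age_matched_mean) / age_matched_sd

def classify_by_t_score(t):
    """Densitometric categories as defined in this glossary."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "low bone mass (osteopenia)"
    return "normal bone mass"

# Illustrative lumbar-spine value (g/cm^2); reference mean/SD are placeholders.
t = t_score(bmd=0.820, young_adult_mean=1.047, young_adult_sd=0.110)
print(f"T-score = {t:.1f} -> {classify_by_t_score(t)}")  # T-score = -2.1
```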
"year": 2022,
"sha1": "51a41eda93253520404bca0d4e426256f5d25bcc",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00198-021-05900-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "2eb6107f25cb0abb739782aba0848f8f29d1502a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fracture of dual lumen cannula leading to cerebrovascular accident in a patient supported with ECMO
Extended-duration extracorporeal membrane oxygenation (ECMO) using dual-lumen cannulas is being used with increasing frequency to support patients, including those with COVID-19, both as a bridge to transplant and as a bridge to lung recovery. During such an extended duration of support, several factors may degrade the physical structure of the ECMO cannulas, predisposing them to fracture. Although rare, fracture of an ECMO cannula can be a potentially lethal event. Here, we present a case in which fracture of a dual lumen cannula during veno-venous (VV) ECMO support resulted in a cerebrovascular accident. We discuss the potential contributing factors and suggest steps to mitigate the risks of such a complication.
A 63-year-old woman with medical comorbidities, including hypertension and type II diabetes mellitus, was admitted with COVID-19 pneumonia. Two days later, she was intubated and VV-ECMO was initiated for acute hypoxic respiratory failure refractory to maximal medical and ventilator therapy. A 28Fr Crescent™ dual lumen cannula (Medtronic, MN) was placed through the left subclavian vein and connected to a centrifugal pump with oxygenator (Cardiohelp, Getinge, Sweden), with blood flow of 4 L/min at 2700 RPM and sweep gas flow of 4 L/min at 100% FiO2. Two weeks after cannulation, the patient failed ECMO weaning trials but could be separated from the ventilator. Six weeks after the onset of illness, she was deemed to have developed pulmonary fibrosis and was transferred to our center for consideration of lung transplant [1,2]. Upon arrival, the patient was on 6 L high flow nasal cannula and VV-ECMO. The day after arrival, the patient developed acute aphasia and altered mental status. Head computed tomography scan was unrevealing. Video electroencephalogram did not demonstrate seizure or epileptiform activity. Within 24 h of the event, neurologic status returned to baseline. However, the following day, the patient lost consciousness again and experienced a witnessed seizure of the left arm. A sucking/hissing sound was heard from the left subclavian cannulation site and the ECMO bubble detector alarmed. The cannula insertion site was inspected with no obvious anomalies detected. On the chest radiograph, there was no apparent abnormality with the cannula. Repeat imaging of the brain and chest similarly demonstrated no abnormalities. The patient was reintubated for encephalopathy and subsequently underwent tracheostomy. The patient regained normal neurologic function, was able to wean to intermittent trach collar, and was interactive with the care team, with time spent sitting up in the chair, over the next 7 days.
However, soon thereafter, the bubble sensor alarmed again; a sucking/hissing sound was heard at her cannulation site again; and there was visible "foam" within the oxygenator. Due to concerns of air entrainment and possible cannula malfunction, the decision was made to change to a bicaval VV-ECMO configuration via the right internal jugular vein (Fig. 1A). She remained stable following the revision of VV-ECMO and did not develop new neurological episodes. Transplant work-up was successfully completed, and the patient was listed for lung transplant two weeks later. Unfortunately, the patient progressed to bacteremia with sepsis and multi-organ dysfunction while awaiting a lung transplant. She was removed from the lung transplant list and care was withdrawn. At autopsy, a patent foramen ovale (PFO) was identified, suggesting air embolism to the brain as the likely cause of the cerebrovascular accidents that she sustained.
Discussion
This report presents an example of a cannula fracture in a patient supported with VV-ECMO. The typical duration of ECMO for acute respiratory distress syndrome is 7-10 days, and a run is considered prolonged if continued for more than 14 days [3]. The correlation between a patient's duration on ECMO and survival is controversial, with some studies showing no correlation while others show worsening survival [4,5]. Posluszny et al. concluded that survival on prolonged ECMO support has improved over the years, but survival rates are still lower with prolonged support than with a duration of less than 2 weeks [6]. Despite these findings, prolonged support can be justified in many cases [7]. An increasing number of patients are undergoing prolonged VV-ECMO runs for severe acute respiratory distress syndrome secondary to COVID-19 pneumonia. Lung transplantation has now been shown to be a life-saving treatment for select patients with COVID-19-associated lung failure, further increasing the duration of ECMO support as patients without evidence of lung recovery are bridged to lung transplantation [1,2]. With the duration of ECMO support stretching up to hundreds of days, it is important to understand the risks of prolonged ECMO cannulation, particularly in patients being transferred across institutions and during ambulation. Efforts promoting mobilization of patients can place stress on the physical structure of the dual lumen cannula and result in pressure point-related cannula fracture, as evident in this case. Additionally, given the longer length of the dual lumen cannulas and the greater surface of the wire reinforcement, kinks or turns in the cannula can result in the wire reinforcement eroding through the outer plastic, leading to fracture of the cannula.
Disruption of the cannula, or of any component of the ECMO circuit, can have devastating consequences for patients and has been associated with increased odds of death [8]. In our case, the fracture likely occurred at the pressure point under the left clavicle where the cannula was compressed in the thoracic inlet, augmented by the bend at the exit site. Although clot formation at the fracture site prevented hemorrhage, air embolism occurred. A small amount of air might be inconsequential, as it is typically caught in the oxygenator before entering the patient. However, pumping large amounts of air into the patient can cause cardiovascular collapse [9]. In the presence of a right-to-left shunt, as in this case where a PFO was present, even small amounts of air can cause neurologic deficits from systemic air embolism [10].
In our experience, cannula fracture is more likely in certain scenarios. In this patient, the cannula was secured to the skin with multiple silk sutures, and Dermabond dressing was applied at the insertion site to prevent air embolism. In our institution, we inspect the ECMO cannula carefully 2-3 times a day. However, as seen in this patient, bending of the Crescent™ cannula at the wire reinforcement (Fig. 1) can increase the risk of fracture, and in this case the fracture site was hidden behind the clavicle and difficult to detect. Additionally, cannulas inserted into the internal jugular vein in the neck can bend at the insertion site or under the clavicle. The latter is also seen with subclavian insertion sites and can cause fracture as the cannula travels under the clavicle (Fig. 2B), especially when the external portion is secured on the neck. Our group has also observed fractures of the PROTEKDuo™ cannula at the bifurcation, a point of flexion (Fig. 1B-D), and where it turns within the right ventricle into the right ventricular outflow tract (Fig. 2A). To reduce the chance of cannula fracture in the right ventricle, we suggest pulling back the cannula after initially advancing it, to "burp" the cannula [11].
In conclusion, in patients on prolonged ECMO runs, air detection warnings from bubble sensors should raise a high clinical suspicion for cannula fracture and impending air embolism, even if the defect is not found on inspection and is not evident on imaging studies. Changing the cannulation strategy should be considered in these cases. Factors such as prolonged ECMO support and increased transportation events, amidst the ongoing COVID-19 pandemic, may predispose to cannula fracture. As more patients undergo prolonged ECMO support while awaiting post-COVID-19 lung transplantation, future studies will further enumerate the risks associated with prolonged ECMO runs.
"year": 2022,
"sha1": "8e2fd7b8f862b053df11127ed8afe159efa0d426",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10047-021-01306-z.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "21c706015e603beca9cfa420ad1cd23b38cef500",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Dynamics of earthquake nucleation process represented by the Burridge-Knopoff model
Dynamics of earthquake nucleation process is studied on the basis of the one-dimensional Burridge-Knopoff (BK) model obeying the rate- and state-dependent friction (RSF) law. We investigate the properties of the model at each stage of the nucleation process, including the quasi-static initial phase, the unstable acceleration phase and the high-speed rupture phase or a mainshock. Two kinds of nucleation lengths L_sc and L_c are identified and investigated. The nucleation length L_sc and the initial phase exist only for a weak frictional instability regime, while the nucleation length L_c and the acceleration phase exist for both weak and strong instability regimes. Both L_sc and L_c are found to be determined by the model parameters, the frictional weakening parameter and the elastic stiffness parameter, hardly dependent on the size of an ensuing mainshock. The sliding velocity is extremely slow in the initial phase up to L_sc, of order the pulling speed of the plate, while it reaches a detectable level at a certain stage of the acceleration phase. The continuum limits of the results are discussed. The continuum limit of the BK model lies in the weak frictional instability regime so that a mature homogeneous fault under the RSF law always accompanies the quasi-static nucleation process. Duration times of each stage of the nucleation process are examined. The relation to the elastic continuum model and implications to real seismicity are discussed.
I. INTRODUCTION
There is a widespread expectation that a large earthquake might be preceded by a precursory nucleation process which occurs prior to the high-speed rupture of a mainshock. The nucleation process is localized to a compact "seed" area with its rupture velocity orders of magnitude lower than the seismic wave velocity [Dieterich, 1992; Ohnaka, 2000, 2003; Scholz, 2002; Dieterich, 2009]. The fault spends a very long time in this nucleation process, and then at some point exhibits a rapid acceleration process accompanied by a rapid expansion of the rupture zone, finally getting into the final high-speed rupture of a mainshock. Although such features of the nucleation process have been more or less confirmed by laboratory rock experiments [Latour et al., 2013; McLaskey and Kilgore, 2013], its nature, or even its very existence, remains less clear for real earthquakes. Nevertheless, such a precursory phenomenon preceding mainshocks is of paramount importance in its own right as well as in its possible connection to an earthquake forecast. It thus remains most interesting and important to elucidate the nature of the possible nucleation process of earthquakes.
We note that a similar nucleation process is ubiquitously observed in various types of failure processes in material science and in engineering. Because of a slow character of the slip, earthquake nucleation process might also be regarded as a type of more general slow-slip phenomena, including afterslips and slow earthquakes, which have attracted much recent research interest. While the relation between these different types of slow seismic pro-cesses poses an interesting and important question, we focus in the present paper on the nucleation process realized prior to the high-speed rupture of a mainshock.
It has been suggested that the earthquake nucleation process might proceed via several distinct steps or "phases". For example, Ohnaka proposed that it starts with an initial quasi-static process [Ohnaka, 2000, 2003]. When the nucleus diameter L exceeds a nucleation length L_sc, the fault gets into the acceleration phase where the system gets out of equilibrium and rapidly increases its slip velocity. Then, when the nucleus diameter exceeds another nucleation length L_c (> L_sc), the fault eventually exhibits a high-speed rupture of a mainshock. In this picture, there appear two characteristic length scales for the nucleus, L_sc and L_c. These two nucleation lengths divide the nucleation process into "the initial phase" in which the nucleus size L is smaller than L_sc (L < L_sc), "the acceleration phase" in which the nucleus size exceeds L_sc but is still smaller than L_c (L_sc < L < L_c), and "the high-speed rupture phase" of a mainshock (L > L_c).
Under such circumstances, a theoretical or a numerical study based on an appropriate model of an earthquake fault would be important and helpful. In such modelings, the friction force is a crucially important part. The friction force now standard in seismology is the so-called rate- and state-dependent friction (RSF) law [Dieterich, 1979; Ruina, 1983; Marone, 1998]. In a pioneering study, Dieterich derived a formula describing the nucleation length based on such an RSF law [Dieterich, 1992]. The most standard form of the nucleation length reported in the literature might be

η G L / [σ_n (B − A)],   (1)

where σ_n is the normal stress, G is the rigidity, L is a characteristic slip distance and η is a constant, while A and B are the frictional parameters associated with the RSF law, each representing the velocity-strengthening and the frictional-weakening parts of the friction. Dieterich also suggested that under certain conditions the nucleation length might be given by [Dieterich, 1992]

η G L / (σ_n B),   (2)

in which the A parameter did not appear. The RSF law has been used in many numerical simulations of earthquakes, mostly in the continuum model [Tse and Rice, 1986; Stuart, 1988; Horowitz and Ruina, 1989; Rice, 1993; Ben-Zion and Rice, 1997; Kato and Hirasawa, 1999; Kato, 2004; Cocco, 2006a, 2006b], including the earthquake nucleation process. In particular, Ampuero and Rubin studied the properties of the nucleation process for the continuum model under the RSF law, with two representative evolution laws, i.e., the aging law [Rubin and Ampuero, 2005] and the slip law [Ampuero and Rubin, 2008], within the quasi-static approximation neglecting the inertia effect.
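For concreteness, the short sketch below evaluates eqs. (1) and (2) numerically. The parameter values are illustrative placeholders, not observational estimates from the original study, and the (B − A) denominator of eq. (1) follows the reconstruction above (implied by the remark that A drops out of eq. (2)).

```python
# Illustrative evaluation of the Dieterich-type nucleation lengths,
# eqs. (1) and (2); all numbers below are placeholder assumptions.
def nucleation_length(G, L, sigma_n, A, B, eta=1.0):
    """Eq. (1): eta*G*L / (sigma_n*(B - A)); SI units, returns metres."""
    return eta * G * L / (sigma_n * (B - A))

def nucleation_length_no_A(G, L, sigma_n, B, eta=1.0):
    """Eq. (2): eta*G*L / (sigma_n*B); the A parameter drops out."""
    return eta * G * L / (sigma_n * B)

G, L, sigma_n, A, B = 30e9, 20e-6, 100e6, 0.005, 0.015  # Pa, m, Pa, -, -
print(nucleation_length(G, L, sigma_n, A, B))      # 0.6 m
print(nucleation_length_no_A(G, L, sigma_n, B))    # 0.4 m
```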
Meanwhile, a further simplified discrete model has also been used in earthquake studies. Particularly popular is the so-called spring-block model or the Burridge-Knopoff (BK) model [Burridge and Knopoff, 1967], in which an earthquake fault is modeled as an assembly of blocks mutually connected via elastic springs which are subject to the friction force and are slowly driven by an external force mimicking the plate drive.
The model might be better justified in the situation where there exists a well-developed fault layer, presumably corresponding to the low-velocity fault zone [Huang and Ampuero, 2011] observed in many mature faults [Ueda et al, 2014]. The fault layer is supposed to be uniformly pulled by the more or less rigid crust contingent to it. Because of its simplicity, the BK model is particularly suited to the study of statistical properties of earthquakes, since it often enables one to generate sufficiently many events, say, hundreds of thousands of events, to reliably evaluate its statistical properties.
In many numerical simulations of the BK model, a simple velocity-weakening friction law, in which the friction force is assumed to be a single-valued decreasing function of the velocity, has often been used [Carlson and Langer, 1989a, 1989b; Carlson et al., 1991; Carlson, 1991a, 1991b; Shaw et al., 1992; Carlson et al., 1994; Schmittbuhl et al., 1996; Mori and Kawamura, 2005, 2006, 2008a, 2008b, 2008c; Kawamura et al., 2012], while a more realistic RSF law was employed in some recent numerical simulations of the model. For example, Cao and Aki performed a numerical simulation combining the 1D BK model with the RSF law in which various constitutive parameters were set nonuniform over blocks [Cao and Aki, 1986]. Ohmura and Kawamura extended the earlier calculation by Cao and Aki to study the statistical properties of the 1D BK model combined with the RSF law with uniform constitutive parameters [Ohmura and Kawamura, 2007; Kawamura et al., 2012]. Clancy and Corcoran also performed a simulation of the model based on a modified version of the RSF law [Clancy and Corcoran, 2009].
Of course, the space discretization in the form of blocks is a crude approximation to the original continuum crust. It introduces the short-length cut-off scale into the problem in the form of the block size, which could in principle give rise to an artificial effect not realized in the continuum. Indeed, such a criticism against the BK model was made in the past [Rice, 1993].
Rice criticized that the discrete BK model with the simple velocity-weakening law was "intrinsically discrete", lacking in a well-defined continuum limit, arguing that the spatiotemporal complexity observed in the discrete BK model was due to an inherent discreteness of the model, which should disappear in continuum [Rice, 1993]. In contrast to the simple velocity-weakening law, the RSF law possesses an intrinsic length scale corresponding the characteristic slip distance L. Rice argued that, if the grid spacing d was taken smaller than the characteristic slip distance L, the system tended to exhibit a quasi-periodic recurrence of large events, whereas, if the grid spacing d was taken larger than it, the system exhibited an apparently complex or critical behavior. This problem of the continuum limit of the BK model was also addressed within the velocity-weakening friction law by Myers and Langer [Myers and Langer, 1993], by Shaw [Shaw, 1994], and by Mori and Kawamura [Mori and Kawamura, 2008c], where the Kelvin viscosity term was introduced to produce a small length scale allowing for a sensible continuum limit.
In fact, the problem of the small length scale of the BK model is closely related to the nucleation phenomena. According to Rice, the continuum system under the RSF law always exhibits a quasi-static nucleation process prior to a mainshock [Rice, 1993]. In view of Rice's claim, we wish to clarify in the present paper how the nucleation process of the discrete BK model behaves in its continuum limit, by systematically varying the extent of the discreteness of the model. Note that the extent of the discreteness may be regarded as a measure of the underlying spatial inhomogeneity [Rice, 1993]. The related issue of the characteristic/critical features of mainshocks versus the discreteness will be addressed in a forthcoming paper.
In the simplest version of the BK model, the nearest-neighbor interaction is assumed between blocks. In real earthquake faults, the crust perpendicular to the fault plane mediates an effective long-range interaction even between blocks far apart on the fault plane. Indeed, such an elastic long-range interaction between blocks was assumed in some of the previous studies, especially on its statistical properties such as the magnitude distribution [Mori and Kawamura, 2008b]. In the present study, we concentrate on the nucleation process of the simplest version of the BK model, i.e., the model with the nearest-neighbor interaction, with the aim of clarifying the properties of the nucleation process of this simplest version. This type of model might also be relevant to the description of other stick-slip-type phenomena such as landslides [Viesca, 2012].
On the basis of the 1D BK model with the nearest-neighbor interaction, we wish to shed light on the nucleation process of a mature fault, e.g., the nucleation dynamics, the nucleation lengths and the duration times of each phase of the nucleation process: how these quantities depend on material parameters, and whether they are related to the size of the ensuing mainshock. Such an issue would be of special significance from the standpoint of utilizing earthquake nucleation phenomena in a possible earthquake forecast. For example, if the nucleation length L_sc or L_c is correlated with the mainshock size, e.g., a larger earthquake for a larger L_sc or L_c, one might have a chance to predict the size of the mainshock from the measurement of the nucleation lengths. If, on the other hand, the nucleation length L_sc or L_c is not correlated with the mainshock size, the prediction of the mainshock size from the measurement of the nucleation lengths would be impossible. By its nature, the fault sliding velocity in the nucleation process tends to be very low. Hence, for any practical detection, it would crucially be important to clarify how fast the fault sliding velocity is, and how much time is left before the ensuing mainshock. With these motivations in mind, we conduct a systematic numerical and analytic study of the BK model in the following part of the paper. A preliminary account was reported in [Ueda et al., 2014].
The rest of the paper is organized as follows. In section II, we define our model, the 1D BK model obeying the RSF law, and present the equation of motion. Its continuum limit is also given. In section III, we report on the results of our numerical simulations on the dynamics of the nucleation process of the model. In subsection IIIA, we first illustrate the main features of the nucleation process. Two distinct parameter regimes exist, i.e., the weak frictional instability regime and the strong frictional instability regime. The two kinds of nucleation lengths L_sc and L_c are identified. In the subsequent subsections (B)-(D), we present our numerical data on the dynamics of the model at each stage of its nucleation process, i.e., (B) the initial phase of the weak frictional instability regime, (C) the acceleration phase of the weak frictional instability regime, and (D) the acceleration phase of the strong frictional instability regime. In section IV, we report on the results of our theoretical analyses of the nucleation process of the model. After explaining in subsection (A) the basic scheme of the perturbation method employed, we examine the dynamics of the model in some detail in the following subsections (B)-(D), i.e., (B) the initial phase, (C) the acceleration phase at which the epicenter-block sliding velocity v is smaller than the crossover velocity v*, v < v*, and (D) the acceleration phase at v > v*. Analytic expressions for the nucleation length L_sc and for the condition discriminating the weak and the strong frictional instability regimes are derived in subsection (B). In subsection (E), we perform a mechanical stability analysis to re-derive L_sc and the weak/strong instability condition, which confirms the results of the perturbation analysis. In section V, we present the results of our numerical simulations focusing on various statistical properties characterizing the nucleation process, including the nucleation lengths L_sc and L_c, and the duration times of each phase, averaged over many events. Their continuum limits are also examined. Finally, section VI is devoted to summary and discussion. Implications to real seismicity and possible extensions of the present analysis are discussed.
II. THE MODEL AND ITS CONTINUUM LIMIT
The 1D BK model consists of a 1D array of N identical blocks of the mass m, which are mutually connected with the two neighboring blocks via elastic springs of the spring stiffness k_c, also connected to the moving plate via springs of the spring stiffness k_p, and are driven with a constant rate ν′. All blocks are subject to the friction force Φ, which is the source of the nonlinearity in the model. The equation of motion for the i-th block can be written as

m d²U_i/dt′² = k_p(ν′t′ − U_i) + k_c(U_{i+1} − 2U_i + U_{i−1}) − Φ_i,   (3)

where t′ is the time, U_i is the displacement of the i-th block, and Φ_i is the friction force at the i-th block. For simplicity, the motion in the direction opposite to the plate drive is inhibited by imposing an infinitely large friction for dU_i/dt′ < 0. For the friction law, we assume the RSF friction law given by

Φ_i = N [ C + A log(1 + V_i/V*) + B log(V*Θ_i/L) ],   (4)

where V_i = dU_i/dt′ is the sliding velocity of the i-th block, Θ_i(t′) is the time-dependent state variable (with the dimension of the time) representing the "state" of the slip interface, V* is a crossover velocity underlying the RSF law, N is an effective normal load, L is a critical slip distance which is a measure of the sliding distance necessary for the surface to evolve to a new state, and A, B and C are positive constants describing the RSF law. The first term (C-term) is a constant taking a value around 2/3, which dominates the total friction in magnitude; the second term (A-term) is a velocity-strengthening direct term describing the part of the friction responding immediately to the velocity change; the third term (B-term) is an indirect frictional-weakening term dependent on the state variable. Laboratory experiments suggest that the A- and B-terms are smaller than the C-term by one or two orders of magnitude, yet they play an essential role in stick-slip dynamics (Marone, 1998; Scholz, 1998, 2002).
Note that, in the standard RSF law, the A-term is often assumed to be proportional to log(V/V*). Obviously, this form becomes pathological in the V → 0 limit because it gives a negatively divergent friction. In other words, the pure logarithmic form of the A-term cannot describe the state at rest. We cure this pathology by phenomenologically introducing the modified form given above. The modified form, where the A-term becomes proportional to the block velocity V at V << V* but reduces to the purely logarithmic form at V >> V*, enables one to describe a complete halt. The characteristic velocity V* represents a crossover velocity, describing the low-velocity cutoff of the logarithmic behavior of the friction.
For the evolution law of the state variable, we use here the so-called aging (slowness) law given by

dΘ_i/dt′ = 1 − V_iΘ_i/L.   (5)

Under this evolution law, the state variable Θ_i grows linearly with the time at a complete halt V_i = 0, reaching a very large value at the outset of the nucleation process, while it decays very rapidly during the seismic rupture. The equation of motion can be made dimensionless by taking the length unit to be the critical slip distance L, the time unit to be ω^{-1} = √(m/k_p) and the velocity unit to be Lω, where the dimensionless variables are defined by t = ωt′, u_i = U_i/L, v_i = V_i/(Lω), θ_i = ωΘ_i and ν = ν′/(Lω). The dimensionless equation of motion then reads

d²u_i/dt² = νt − u_i + l²(u_{i+1} − 2u_i + u_{i−1}) − φ_i,  φ_i = c + a log(1 + v_i/v*) + b log(v*θ_i),   (6)

together with the dimensionless evolution law

dθ_i/dt = 1 − v_iθ_i,   (7)

where l² = k_c/k_p is the dimensionless stiffness parameter, v* = V*/(Lω), and a = (N/(k_pL))A, b = (N/(k_pL))B and c = (N/(k_pL))C are the dimensionless frictional parameters. It is sometimes more convenient to rewrite the equation of motion in terms of the velocity variable v_i instead of the displacement u_i. By differentiating (6) with respect to t and by using (7), one gets

d²v_i/dt² = ν − v_i + l²(v_{i+1} − 2v_i + v_{i−1}) − [a/(v* + v_i)] dv_i/dt − b(1 − v_iθ_i)/θ_i.   (8)

The block displacement u_i can be obtained up to a constant by numerically integrating the velocity v_i with respect to t. One sees from eqs. (8) and (7) that the constant frictional parameter c no longer remains in the governing equations, meaning this parameter is essentially irrelevant to the dynamical properties of the model. In our simulations, we use either eq. (6) or (8) depending on the situation. In solving the high-speed motion, we use eq. (6), while in solving the low-speed motion as realized in the initial phase or the early stage of the acceleration phase, we use eq. (8).
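To make eqs. (6) and (7) concrete, here is a minimal, self-contained numerical sketch of the dimensionless model. It is an illustrative toy integrator only, not the integration scheme used for the simulations reported in this paper: the explicit Euler step, the crude stick-slip handling, the free-end boundary treatment, and the near-failure initial preload are all simplifying assumptions introduced for the sketch.

```python
# Toy explicit-Euler integrator for the dimensionless 1D BK model under the
# modified RSF law, eqs. (6)-(7); illustrative only (see caveats above).
import numpy as np

N, l, a, b, c = 100, 4.0, 3.0, 5.0, 1000.0   # parameter set of Fig. 1(a)
v_star, nu, dt = 1.0, 1e-8, 1e-3

theta = np.full(N, 1e9)                        # state variable after a long halt
phi_static = c + b * np.log(v_star * theta[0]) # friction at onset (v -> 0+)
t = phi_static / nu                            # preload the plate drive near failure
rng = np.random.default_rng(0)
u = 1e-3 * rng.random(N)                       # small heterogeneity seeds a nucleus
v = np.zeros(N)                                # one-sided sliding velocities, v >= 0

def elastic_force(u, t):
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]   # free-end boundaries
    return nu * t - u + l**2 * lap

for step in range(200000):
    f = elastic_force(u, t)
    phi = c + a * np.log1p(v / v_star) + b * np.log(v_star * theta)
    acc = f - phi
    stuck = (v == 0.0) & (acc <= 0.0)   # a stuck block stays at rest until
    acc[stuck] = 0.0                    # the elastic force exceeds the friction
    v = np.maximum(v + acc * dt, 0.0)   # back-slip is forbidden (v >= 0)
    u += v * dt
    theta += (1.0 - v * theta) * dt     # aging law, eq. (7)
    t += dt

print("moving blocks:", int((v > 0).sum()), " max v:", v.max())
```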
The frictional parameters a and b tend to suppress and to enhance the frictional instability, respectively. The earthquake instability is driven primarily by the velocity-weakening b-term, while the velocity-strengthening a-term tends to mitigate the unstable slip toward aseismic slip. Since the frictional parameters a and b compete in their functions, whether a < b or a > b might affect the dynamics significantly. When a is close to b (a ≲ b), the compensation effect due to the a-term tends to induce a slow slip succeeding a mainshock, an afterslip, while, when a >> b, it gives rise to the so-called "slow earthquake", no longer accompanying the high-speed rupture at any stage of the event. Earthquake properties in this regime of a > b will be reported in a separate paper, with emphasis on the slow-slip phenomena intrinsic to this regime.
Meanwhile, we find that the properties of the precursory nucleation process of the model, which occurs preceding a mainshock, do not much depend on the relative magnitude of a and b. Although we study in the present paper the nucleation process in the parameter range a < b where the unstable seismic character is dominant in a mainshock, main qualitative features of the nucleation process would not change much even for a > b.
The setting assumed in the BK model in terms of an earthquake fault embedded in the 3D continuum crust was examined in [Ueda et al, 2014]. The block assembly of the BK model is supposed to represent a deformable "fault layer" of the width W which is uniformly pulled by the more or less rigid plate contingent to it. An estimate of W of order ∼ 2 [km] has been given [Ueda et al, 2014]. The fault layer modeled by the BK model is likely to be related to the so-called "low-velocity fault zones (LVFZ)" observed in most mature faults, with 20% ∼ 60% wavevelocity reduction relative to the host rock [Huang and Ampuero, 2011]. In the BK model, a uniform plate drive is applied not at infinity as boundary conditions as often assumed in the continuum model, but is applied rather close to the fault plane of order the distance W ≃ 2 [km]. Such a direct plate drive yields a term proportional to the displacement −u i in its equation of motion, eq.(6), which is absent in the standard elasto-dynamic equation.
We try to estimate typical values of the model parameters with natural earthquake faults in mind. The dimensionful rise time of an event, i.e., the time elapsed from when a given block involved in a mainshock rupture begins to move until it stops, is found to be ≃ ω^{-1}. This is true for a single-block system, while our simulations indicate it is also the case for a many-block system. Since the typical rise time of an earthquake is a few seconds, we get an estimate of ω^{-1} ≃ 1 [s]. The reported values of the critical slip distance L are largely scattered in the literature depending on the observation scale [Scholz, 1988; Scholz, 2002; Toro et al., 2011]. Here, from our numerical observation that the typical block sliding velocity at the mainshock rupture is 10^2 ∼ 10^3 in units of Lω while it is around 1 [m/s] in real seismicity, we take L to be a few [cm], which is not far from the value at the seismic depth deduced in [Scholz, 1988; Scholz, 2002; Toro et al., 2011]. Since the speed of the plate motion is typically a few [cm/year], the dimensionless loading rate is ν ≃ 10^{-7} − 10^{-8}.
Let the dimension of the block be D × D′ × W, where D is the dimension along the plate drive, D′ the dimension perpendicular to the plate drive within the fault plane, and W the dimension perpendicular to the fault plane as described above. The spring constant k_p may be related to the rigidity G as k_p = GDD′/W. This can be derived by noting that the shear force F_shear acting on a block with the displacement U is given by F_shear = GDD′U/W. The relation k_p = mω² = ρWDD′ω² (ρ is the mass density) and the s-wave velocity v_s = √(G/ρ) then yield W = v_s/ω. Putting v_s ≃ 2 [km/s], which is taken somewhat smaller than the standard value of v_s ≃ 3 [km/s] due to the possible lower wave-velocity in the fault zone, and ω^{-1} ≃ 1 [s], we get an estimate of W ≃ 2 [km] as given above. The proportionality between the fault-zone width W and the rise time ω^{-1} obtained here seems consistent with the observation on the LVFZ [Huang and Ampuero, 2011]. The width of the LVFZ was reported to be 100 [m] ∼ 2 [km], which is a bit smaller than, but does not much differ from, the present estimate of the fault-zone width W.
With N = σ_n DD′, where σ_n is the normal stress, we have

N/(k_pL) = (σ_n/G)(W/L).

Putting σ_n/G ≃ 10^{-3}, we get N/(k_pL) ≃ 10^2 − 10^3. As C is known to take a value around 2/3 [Scholz, 2002], c would be of order 10^2 − 10^3, a and b being one or two orders of magnitude smaller than c. The crossover velocity V* and its dimensionless counterpart v* are hard to estimate, though v* should be much smaller than unity, and we take it as a parameter in our simulations.
The continuum limit of the BK model corresponds to making the dimensionless block size d, defined by d = Dω/v_s, infinitesimal, d → 0, simultaneously making the system infinitely rigid, l → ∞, with d = 1/l [Mori and Kawamura, 2008]. The dimensionless distance x between the blocks i and i′ is given by x = |i − i′| d = |i − i′|/l. Notice that the continuum limit considered here concerns only the fault direction (the fault plane in case of 2D), whereas the perpendicular direction (W-direction) is kept fixed. Thus, the possible internal motion in the fault layer along the perpendicular direction is suppressed in the model setting.
As discussed in [Mori and Kawamura, 2008], the 1D equation of motion in the continuum limit is given in the dimensionful form by

∂²U/∂t′² = v_s² ∂²U/∂x² + ω²(ν′t′ − U) − Φ′,

where U(x, t′) is the displacement at the position x and the time t′, Φ′ is the friction force per unit mass, while ω and v_s are the characteristic frequency and the characteristic wave-velocity (s-wave velocity), respectively. As mentioned, the term −ω²U representing the plate drive is absent in the standard elasto-dynamic equation. If one discretizes the space into blocks of the size D, one recovers the equation of motion of the discrete BK model, eq. (3).
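As a worked consistency check (not spelled out in the original text), discretizing the continuum equation above on a grid of spacing D recovers the discrete equation of motion with the identifications used in the parameter estimates:

```latex
\[
\frac{\partial^2 U}{\partial x^2}\;\longrightarrow\;\frac{U_{i+1}-2U_i+U_{i-1}}{D^2},
\qquad
m\,\ddot U_i = \frac{m v_s^2}{D^2}\left(U_{i+1}-2U_i+U_{i-1}\right)
             + m\omega^2\left(\nu' t' - U_i\right) - m\,\Phi'_i ,
\]
\[
\text{i.e., } \quad k_c=\frac{m v_s^2}{D^2},\qquad k_p=m\omega^2,\qquad
l^2=\frac{k_c}{k_p}=\frac{v_s^2}{\omega^2 D^2}=\frac{1}{d^2},
\]
```

consistent with the relation d = 1/l quoted above.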
III. SIMULATION RESULTS I
In this section, we present the results of our numerical simulations on the dynamical properties of the model. After surveying their main features in subsection A, we present detailed data in the following subsections separately for each phase of the nucleation process.
A. Weak versus strong frictional instability regimes
The first question might be whether the 1D BK model under the RSF law ever exhibits a nucleation process prior to a mainshock, and if it does, under what conditions. Remember that our constitutive law allows for a complete stick (i.e., v_i = 0 for all i) during the interseismic period, which enables us to unambiguously define the onset of the nucleation process as the point where one of the blocks gains a nonzero velocity. We illustrate in Fig.1 typical examples of seismic events realized in the 1D BK model under the RSF law, where the time evolution of the movement of each block is shown as a color plot for each case of (a) the weak frictional instability, and of (b) the strong frictional instability. The model parameters are set to a = 3 and b = 5 in Fig.1(a), and to a = 1 and b = 40 in Fig.1(b), with c = 1000, l = 4, v* = 1 and ν = 10^{-8} in common. The origin of the time (t = 0) is taken to be the onset of the nucleation process where an epicenter block begins to move. Examples shown in Fig.1 are events occurring in the stationary state of the seismic sequence of the model, realized after transient initial events where the memory of the initial conditions is still remnant.
As can clearly be seen from the figure, a slow nucleation process with a long duration time of order t ∼ 10^8 is observed in the case of the weak frictional instability of Fig.1(a). Such a long-lasting slow nucleation process is absent in the case of the strong frictional instability of Fig.1(b). Note that the apparently nucleation-like process seen in Fig.1(b) just before the high-speed rupture propagation is not a quasi-static initial phase, but an unstable acceleration phase. Its duration time is around t ∼ 10, which is by many orders of magnitude shorter than the duration time of the quasi-static initial phase seen in Fig.1(a), t ∼ 10^8. The acceleration phase in the stronger frictional instability regime can sometimes be longer, say, t ∼ 10^2 − 10^3, particularly for smaller v*. Yet, the dynamics is already irreversible there.
In the case of the weak frictional instability, large events always accompany a precursory nucleation process irrespective of each individual event, or the choice of the initial conditions, while, in the case of the strong frictional instability, a precursory nucleation process is always absent. Hence, for a given set of model parameters, the presence or absence of the quasi-static nucleation process is uniquely determined, not depending on each individual event. As shown shortly below, the condition of the weak/strong frictional instability is determined by the friction parameter b being either smaller or greater than the critical value b_c(l), which is solely determined by the stiffness parameter l as b_c(l) = 2l^2 + 1.
In Fig.1, we also illustrate the two types of nucleation lengths, L_sc and L_c (L_sc < L_c). L_sc is the length separating stable and unstable ruptures, and exists only in the weak frictional instability case. When the nucleus size L is less than L_sc, the rupture process is stable and reversible, whereas, when L exceeds L_sc, it becomes unstable and irreversible.
One illustrative way to demonstrate the expected borderline behavior across the nucleation length L_sc may be to artificially stop the external loading in the course of a simulation. Indeed, when the external loading is stopped at a point before L = L_sc, the rupture itself also stops there, as demonstrated in Fig.2(a), whereas, if the external loading is stopped at any point beyond L = L_sc, the subsequent seismic rupture is no longer stoppable and evolves until its very end, as demonstrated in Fig.2(b). (An even better criterion might be whether the block sliding velocity is increased or decreased when the loading is artificially stopped, rather than whether the block is completely stopped or not.) Ohnaka suggested that, in addition to the nucleation length L_sc, there exists another nucleation length L_c (> L_sc), which discriminates between the acceleration phase and the high-velocity rupture phase [Ohnaka, 2000, 2003]. In the high-speed rupture phase beyond L_c, the rupture propagates with a nearly constant speed in both directions in the form of two separate packets of moving blocks, as can be seen in Fig.1. In the figure, the high-speed rupture of a mainshock corresponds to the linear portion of the rupture propagation line, with its slope being the propagation speed of ∼ l.
While there might be several ways to define the nucleation length L_c (> L_sc), we tentatively give one definition here (other definitions will be presented later). As can be seen from Fig.1, the number of simultaneously moving blocks L tends to be maximum around L_c. Hence, we define L_c by the size of the nucleus at which the number of simultaneously moving blocks becomes maximum for a given event, which we denote L′_c: see Fig.9(a) below. At or very close to this point, the epicenter block ceases to move and the group of moving blocks is detached into two parts, each part propagating in opposite directions.
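This measurement is straightforward to automate. Below is a minimal sketch of the L′_c extraction from a stored velocity history (such as the array produced by the toy integrator sketched in section II); the array layout and the toy numbers are assumptions made for the sketch.

```python
# Sketch of the L'_c measurement: given a record v_record[t_index, i] of
# block velocities, L'_c is the maximum number of simultaneously moving
# blocks over the event.
import numpy as np

def nucleus_size_history(v_record, v_moving=0.0):
    """Number of simultaneously moving blocks at each stored time."""
    return (v_record > v_moving).sum(axis=1)

def l_c_prime(v_record):
    sizes = nucleus_size_history(v_record)
    k = int(np.argmax(sizes))
    return int(sizes[k]), k     # (L'_c, time index where it is reached)

# Toy record: 4 snapshots of 6 blocks
v_record = np.array([
    [0, 0, 1e-8, 0, 0, 0],
    [0, 1e-6, 1e-5, 1e-6, 0, 0],
    [1e-3, 1e-2, 1e-1, 1e-2, 1e-3, 0],
    [1e-1, 0, 0, 0, 1e-1, 0],    # nucleus detached into two fronts
])
print(l_c_prime(v_record))        # -> (5, 2)
```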
In order to demonstrate the spatiotemporal evolution of the nucleation process of the model, we show in Figs.3 and 4 the time evolutions of the spatial profile of (a) the block sliding velocity v, (b) the state variable θ, and (c) the multiple of the two quantities vθ, in a typical nucleation process of a large event realized in the stationary state in the weak frictional instability regime. Fig.3 covers the time regime from the onset of the nucleation process till the system reaches L = L_c, whereas Fig.4 covers the regime from the point of L = L_c till an earlier stage of the high-speed rupture phase. The model parameters are set to a = 1, b = 9, l = 4, v* = 10^{-2} and ν = 10^{-8}. From these figures, the manner in which the nucleus grows and the nucleation process transforms into the high-speed rupture of a mainshock is clearly visible.
Some characteristic points of the nucleation process corresponding to L = L_sc and v = v_inertia (in Fig.5), L = L_c (in Figs.5 and 6), and L = L′_c (in Fig.6) are indicated by blue curves. Here v = v_inertia is a characteristic crossover velocity at which the inertia effect becomes significant, to be defined below in §IVD. Note that, beyond the point v ≃ v_inertia, the inertia effect plays an important role, and the quasi-static approximation is no longer valid. As such, the time range beyond v ≃ v_inertia is not covered by [Rubin and Ampuero, 2005], who employed the quasi-static approximation. In the range up to v ≃ v_inertia, the profiles obtained here look similar to the ones given in [Rubin and Ampuero, 2005] for the continuum model.
As can be seen from Fig.3(a), the sliding velocity v gets larger until L = L_c. Beyond this point, first the epicenter block, and subsequently the neighboring blocks, begin to decelerate, and eventually come to a stop (Fig.4(a)). The nucleus is detached into two parts, each of which propagates in the opposite direction forming a rupture front of the mainshock.
As can be seen from Fig.3(b), in an earlier period of the nucleation process, the state variable θ maintains its large value acquired during the halt period between mainshocks, while it rapidly decreases in the later period as the block movement accelerates, and eventually reaches a minimum value around L = L_c, first at the epicenter block, and subsequently at the neighboring blocks. After this point, the θ-value tends to recover again (Fig.4(b)).
The multiple of v and θ, vθ, plays an important role in the healing process since it appears on the r.h.s. of the equation of motion of the state variable, eq.(7). As can be seen from Figs.3(c) and 4(c), this quantity tends to increase in the earlier period of the nucleation process, first gradually and then more rapidly beyond L = L_sc, reaches a maximum at a point between L = L_sc and L = L_c, then drops very sharply until it comes to stay around a value close to unity. Note that vθ = 1 is a special point corresponding to the stationary condition for the time evolution of the state variable: see eq.(7). Such a plateau-like behavior of vθ arises around L_c in the epicenter region, and transmits outwards in the nucleus. Further beyond L_c, vθ tends to decrease again, first in the epicenter region, and subsequently in the outer region of the nucleus.
In the following subsections, we present our simulation data in some detail for each phase of the nucleation process, i.e., (B) the initial phase of the weak frictional instability regime, (C) the acceleration phase of the weak frictional instability regime, and (D) the acceleration phase of the strong frictional instability regime.

[Displaced figure caption (likely Fig.5): the time evolutions of (a) the sliding velocity v, (b) the state variable θ, and (c) the product of the two quantities, vθ, of an epicenter block in a typical nucleation process of a large event realized in the stationary state; the origin of time (t = 0) is set to the onset of the nucleation process of the event.]

B. The initial phase of the weak frictional instability regime
The model parameters are set to a = 1, b = 9, l = 4, v* = 10^−2 and ν = 10^−8. The inequality b < b_c = 2l² + 1 is well satisfied, indicating that the system is in the weak frictional instability regime. The nucleation length L_sc estimated from eq.(32), to be given below, is L_sc = 3.35.
The discreteness of the model is prominent in this regime. At an early stage of the nucleation process, only the epicenter block moves. After some time, the neighboring blocks join this motion one by one, causing a spatial expansion of the nucleus. As can be seen from Fig.5(a), the velocity of the epicenter block exhibits a step-like behavior, i.e., it exhibits an almost discontinuous rise whenever the block adjacent to the moving blocks begins to move, joining the nucleation process. As L_sc = 3.35 here, the system gets into the acceleration phase as soon as the number of blocks is increased from 3 to 4, and the epicenter-block sliding velocity begins to increase sharply. The block motion in the subsequent acceleration phase will be examined in the next subsection (Fig.5, continued in Fig.7).
One important general observation is that the epicenter-block sliding velocity in the initial phase stays very low up to L = L sc , of order the pulling speed of the plate ν. This property can also be derived analytically as shown in §IVA below. In real faults, the plate motion is extremely slow, a few [cm/year] ≃ 1 [nm/sec]. Detection of such a slow sliding motion would practically be impossible.
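The quoted correspondence is simple arithmetic; as a check (the 3 cm/year figure below is merely an illustrative plate speed, not a value taken from the text):

```python
# A few cm/year expressed in nm/s.
cm_per_year = 3.0
seconds_per_year = 365.25 * 24 * 3600        # ≈ 3.16e7 s
print(cm_per_year * 1e7 / seconds_per_year)  # ≈ 0.95 nm/s, i.e. of order 1 nm/s
```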
The state variable θ of the epicenter block initially takes a large value, as shown in Fig.5(b). This is simply because θ increases linearly during the interseismic period according to eq.(7), acquiring a large value just before the onset of the nucleation process. During the initial phase, θ still keeps its large value since the velocity on the r.h.s. of eq.(7) is still small, while it drops steeply beyond L_sc. The quantity vθ increases with time beyond L_sc, as can be seen from Fig.5(c).
We note that the dynamics of the model as shown here does not change much depending on the v * -value or on the a-value as long as a is taken smaller than b, though the time evolution tends to be milder for smaller v * or larger a. This tendency can naturally be understood because the smaller v * or the larger a in eq.(6) means a larger contribution of the velocity-strengthening a-term. The velocity-strengthening force serves to soften an abrupt change, causing a smoother time-evolution of observables.
The parameter choice of Fig.5 corresponds to L_sc = 3.35, and the discreteness of the model tends to be important around L_sc. In order to examine the effect of the discreteness on the nucleation dynamics, and to examine the approach to the continuum limit, we show in Fig.6 the corresponding behavior in a less discrete, near-continuum case. A characteristic feature of the block motion in the quasi-static initial phase is that there exist two different time scales: a slow motion of time scale O(1/ν) and a faster one of time scale O(1). The former might be better described by the slow time variable τ ≡ νt. Indeed, a perturbative treatment, to be given in §IVA, yields the time evolution of the sliding velocity v of the epicenter block as v(τ, t) = C_+ e^{λ_+ t} + C_- e^{λ_- t}, together with a corresponding expression for the state variable θ, where C_± are constants to be determined by the initial conditions, θ_0 is the τ = 0 value of θ, λ_± are the rates given in eq.(15), and ξ_L is defined by ξ_L = 2l²(1 − cos(π/(L+1))) + 1. In the solution, the number of simultaneously moving blocks (the nucleus size) L is assumed to be fixed during the block movement. When the number of moving blocks, or the nucleus size L, is small such that b − ξ_L < 0, both λ_+ and λ_- are negative. When the condition b − ξ_L = 0 is reached, λ_+ changes its sign, leading to the instability. In fact, this condition b = ξ_L determines the point L = L_sc. From eq.(13), one can show that the block sliding velocity stays of order ν throughout the initial phase up to L = L_sc.
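The instability condition can be checked directly against the quoted numbers. A sketch (not our production code) tabulating ξ_L for the parameters of Fig.5 (b = 9, l = 4):

```python
import numpy as np

b, l = 9.0, 4.0
for L in range(1, 7):
    xi_L = 2 * l**2 * (1 - np.cos(np.pi / (L + 1))) + 1
    print(L, round(xi_L, 2), "unstable" if b > xi_L else "stable")
# xi_3 ≈ 10.37 > b = 9 (stable), while xi_4 ≈ 7.11 < b (unstable):
# the instability sets in between L = 3 and L = 4, consistent with L_sc = 3.35.
```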
C. The acceleration phase of the weak frictional instability regime

Next, we proceed to the acceleration phase of the weak frictional instability regime, which occurs beyond L_sc, succeeding the initial phase. In the acceleration phase, the block movement exhibits a prominent acceleration and is no longer quasi-static or reversible.
In Figs.7 and 8, we show the corresponding time evolutions in the acceleration phase. The sliding velocity grows rapidly in this phase, reaching a maximum of order v ≃ 10^0 ∼ 10^2, then decreases sharply and finally stops around L_c. The state variable, which stayed nearly constant at its large value of order 1/ν throughout the initial phase, begins to drop in the acceleration phase and eventually becomes of order unity. Since the increase in v dominates over the decrease in θ at an earlier stage of the acceleration phase, vθ increases for some period, reaches a maximum, then drops sharply until it becomes close to unity: see Figs.7(c) and 8(c). Note that, around vθ = 1, the time variation of vθ tends to level off, exhibiting a much slower time dependence, as can be seen from the inset. This is an inevitable consequence of the equation of motion, eq.(7). To a good precision, the maximum sliding velocity is reached when the relation vθ = 1 is met. Eq.(7) indicates that, when the condition vθ = 1 is met, the state variable θ takes a minimum. Meanwhile, vθ sticks to a value close to unity in this range, yielding the relation v = 1/θ. This means that the sliding velocity v takes a maximum at the point where the condition vθ = 1 is met.
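The argument condenses into one line, assuming eq.(7) is the aging law dθ/dt = 1 − vθ:

```latex
\frac{d\theta}{dt} = 1 - v\theta
\;\Longrightarrow\;
\left.\frac{d\theta}{dt}\right|_{v\theta = 1} = 0
\quad\text{($\theta$ minimal there)},
\qquad
v\theta \simeq 1
\;\Rightarrow\;
v \simeq \frac{1}{\theta}
\quad\text{($v$ maximal at the same point).}
```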
In Fig.9(a), we show the time evolution of the number of simultaneously moving blocks, i.e., the nucleus size L. The data exhibit a sharp peak at which the number of simultaneously moving blocks becomes maximum. This point was taken in subsection (A) as our tentative criterion of L_c (L′_c). At or very close to this point, the epicenter block ceases to move (the double arrow in the figure), beyond which the group of simultaneously moving blocks is detached into two parts, each part propagating in the opposite direction.
The point where vθ takes the value unity and the epicenter-block sliding velocity reaches its maximum might also be used as a reasonable criterion of L_c. This definition of L_c tends to yield an L_c-value somewhat smaller than our previous definition L′_c, i.e., the maximum of the number of simultaneously moving blocks. One justification of the new criterion might be the observation that the epicenter-block motion in the time range after vθ levels off around vθ = 1 has already become similar to a typical block motion in the high-speed rupture phase. In this sense, the high-speed rupture has already set in in the epicenter region when the epicenter block satisfies the relation vθ ≃ 1. Thus, in the following, we adopt as our criterion of L_c the relation vθ = 1 being reached at the epicenter block. This point agrees with the point of θ taking a minimum, or of v taking a maximum. In fact, the L_c-values indicated in Figs.1, 3, 4, 7 and 8 above were the ones defined in this way unless otherwise stated.
In earthquake dynamics, there generally exist two different types of velocities. One is the fault sliding velocity (particle velocity), corresponding in our model to the block sliding velocity v. The other is the rupture-propagation velocity (phase velocity), corresponding in our model to the propagation speed of the rim of the rupture zone, v_r. In the nucleation process, the latter is also related to the growth speed of the nucleus size (being ∼ half of it, since the nucleus expands in both directions). Although the definition of the rupture-propagation velocity v_r is somewhat obscure in the discrete BK model, especially in the strongly discrete case, it might be well-defined in the near-continuum case as a (coarse-grained) growth rate of the rim of the nucleus. Namely, if the rim of the nucleus advances from the block j to the block j + 1 in a time interval ∆t_j, the rupture-propagation velocity might be defined by v_r = 1/∆t_j. We show in Fig.9(b) the time evolution of the rupture-propagation velocity v_r computed in this way in the near-continuum case.
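A sketch of this coarse-grained definition, using a hypothetical array `t_rim` whose k-th entry is the time at which the rim first reaches the k-th block beyond its starting position:

```python
import numpy as np

def rupture_velocity(t_rim):
    # Coarse-grained v_r: one block is advanced per time interval dt,
    # so v_r ≈ 1/dt in units of blocks per (dimensionless) time.
    dt = np.diff(t_rim)
    return 1.0 / dt
```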
In the acceleration phase between L = L_sc and L = L_c, we identify two characteristic points where the block motion appears to change its behavior. One is the point where the epicenter-block sliding velocity exceeds the crossover velocity v*, across which the a-term gradually changes its character. The other is the crossover velocity v_inertia, at which the inertia effect becomes important. The inertia effect as meant here is borne by the first term of the r.h.s. of the equation of motion (6) or (8). This term tends to suppress the rapid acceleration, giving rise to the saturation and the subsequent drop of the sliding velocity v. These two characteristic points also manifest themselves in our theoretical analysis of §IV below. One sees from Fig.9(b) that the rupture-propagation velocity v_r grows exponentially with time until around v ≃ v*, beyond which it grows faster than exponentially (super-exponentially). By contrast, as can be seen from Fig.8(a), the epicenter-block sliding velocity exhibits a faster-than-exponential growth even in the acceleration phase at v < v*. Namely, the acceleration of the sliding velocity dominates over that of the nucleus-size expansion. Meanwhile, the super-exponential rapid growth of both the sliding velocity and the rupture-propagation velocity tends to be suppressed beyond the crossover velocity v_inertia, which is caused by the inertia effect borne by the first term of eq.(8).
In Fig.10, we show (a) the epicenter-block sliding velocity v, and (b) the rupture-propagation velocity v_r, versus the nucleus size L normalized by L_sc, L/L_sc, instead of the time t. The theoretical curves to be derived in §IV are also shown in the figure for comparison. The comparison with the analytical results is sometimes more direct in this form.

D. The acceleration phase of the strong frictional instability regime

Next, we study how the dynamics evolves during the acceleration phase for the case of the strong frictional instability. Remember that the model in the strong frictional regime lacks the quasi-static initial phase.
The block motion here turns out to be similar to that of the weak frictional instability regime with a stronger discreteness. In Fig.11, we show the time evolutions of (a) the sliding velocity v, (b) the state variable θ, and (c) the product of the two, vθ, of an epicenter block in the acceleration phase in the strong frictional instability regime. The parameters are taken to be a = 5, b = 40, c = 1000, l = 4, v* = 10^−2 and ν = 10^−8. In the event of Fig.11, the nucleation length L_c is L_c = 1, i.e., the condition vθ = 1 has been met during the one-block motion. Note that, in contrast to the weak frictional instability case, this one-block motion is already irreversible.
One noticeable feature appears at an early stage of the high-speed rupture phase in the strong frictional instability regime. Namely, the block velocity often exhibits prominent oscillations with time. The maximum sliding velocity realized at each oscillation is quite high, comparable to that of a mainshock. In the inset of Fig.11, we show an expanded view around L_c. Such an oscillatory behavior is rarely seen in the case of the weak frictional instability.
A closer look at the color plot in the inset of Fig.1(b) suggests that such an oscillation of the block velocity is borne by the propagation and the multiple reflections of the rupture front originally ejected at L = L_c from the epicenter block. This rupture front propagates along the fault with an elastic-wave velocity ∼ l and eventually becomes a rupture front of the mainshock. In the early stage of the high-speed rupture phase, this propagating rupture front is reflected every time it reaches a neighboring block, generating the second, third, ... rupture fronts and forming an oscillatory pattern. The period of oscillation should be given by 2/l, which, in the example of Fig.11, yields 0.5. This period is expected to be independent of parameters like the plate loading velocity ν or the crossover velocity v*, while it might increase weakly with the frictional parameter b because of the slowing-down effect due to the friction. Because of the friction, the velocity of the subsequent rupture-front propagation tends to be reduced, making the oscillation period a bit longer at a later time. We find that such expectations are consistent with the observation. For example, in the example shown in Fig.11, the observed oscillation period is 0.8–1.5, a bit longer than the expected value of 0.5.
Thus, in the strong frictional instability regime, the beginning of the high-speed rupture phase seems to be characterized by the multiple reflections of the propagating rupture front originally ejected from the epicenter site. After some time, the leading propagating rupture alone survives and propagates with an elastic-wave velocity ∼ l for the major part of the mainshock.
IV. ANALYTICAL TREATMENTS
In this section, we wish to report the results of our analytical treatments of the dynamical properties of the model, based either on perturbation theory (A–D) or on a mechanical stability analysis (E). Readers interested only in the simulation results may skip to §V.
A. Perturbation theory
We begin with the equation of motion (8) for the velocity variable v_i. As mentioned, the plate pulling speed ν is an extremely small number, say, ν ∼ 10^−7 − 10^−8. Furthermore, throughout the nucleation process, the state variable θ tends to keep a very large value of order 1/ν. It is then convenient to introduce a reduced state variable of order unity, θ̄, defined by θ̄ ≡ νθ, which yields a closed set of equations of motion in terms of v_i and θ̄_i. Then, we introduce the "first Fourier-mode approximation", which states that, for most of the nucleation process, the spatial form of observables, i.e., the i-dependence of the block sliding velocity v_i or of the block displacement u_i, is given by that of the first Fourier mode. Namely, when a total of L blocks from i = 1 to i = L are moving, v_i or u_i is proportional to sin(πi/(L+1)). In this approximation, it is implicitly assumed that the nucleus keeps a highly symmetrical form and that the central block is the epicenter block.
We show in Fig.12 the spatial form of the block sliding velocity v_i observed in our numerical simulations at several representative points of a typical nucleation process in the weak frictional instability regime, including (a) the initial phase, (b) the acceleration phase at v < v_inertia, and (c) the acceleration phase at v > v_inertia, together with the first Fourier-mode forms. Except for the time range beyond v = v_inertia close to L_c in panel (c), this approximation turns out to be reasonably good, allowing one to reproduce the motion of an arbitrary block within the nucleus by tracing only the motion of the central block i = (L+1)/2 (for odd L). Under this first Fourier-mode approximation, the equations of motion for the central block take a closed form in which the interblock coupling enters only through ξ_L given in eq.(16); the subscript i is dropped here and below. Now, we expand v and θ̄ with respect to the small quantity ν, i.e., we perform a perturbation expansion in ν. At the zeroth order in ν, one gets a set of equations in which the equation (23) for the zeroth-order velocity v^(0) has two types of solutions, i.e., [A] v^(0) = 0 and [B] v^(0) ≠ 0. The solution [A] describes the situation where the block is at rest when the plate drive is turned off (ν → 0). By contrast, the solution [B] describes the situation where the block is moving even when the plate drive is turned off. Hence, one expects that the solution [A] describes the initial phase, while the solution [B] describes the acceleration phase. In the following subsections, we analyze each case separately in some more detail.
B. The initial phase
Here the block sliding velocity at the zeroth order is zero, v^(0) = 0. As was seen in §III, a prominent feature of the block motion in the initial phase is that there exist two time scales: one is a slow motion of order the loading velocity ν << 1, and the other is a faster one of order unity. One way to deal with these two different time scales within the perturbative scheme is to introduce two kinds of time variables, τ = νt associated with the slow motion and t associated with the fast motion, and to regard various observables as functions of both τ and t, like v^(1)(τ, t), θ̄^(0)(τ, t) and θ̄^(1)(τ, t). The original time derivative in the equation of motion is replaced by d/dt → ∂/∂t + ν ∂/∂τ.
The equations of motion at O(ν) then read as eqs.(25) and (26). Since the zeroth-order quantity θ̄^(0) is bounded in the t → ∞ limit, the corresponding first-order quantity θ̄^(1) needs to remain finite in the t → ∞ limit in order for the perturbation analysis to remain meaningful, i.e., the relation ∂θ̄^(1)/∂t = 0 is required in the t → ∞ limit. This relation is met if the corresponding equality holds. From eq.(25), one obtains v^(1); substituting this into eq.(26) and taking the t → ∞ limit, one gets an equation determining the hitherto undetermined τ-dependence of θ̄^(0)(τ), which can be solved in terms of the initial value θ̄^(0)_0 = θ̄^(0)(τ = 0). Substituting this into eq.(25), one can get the full solution of eq.(25) as a combination of e^{λ_+ t} and e^{λ_- t}, where C_± are numerical constants to be determined via the initial condition and λ_± is given by eq.(15). While λ_- is always negative, λ_+ is either negative or positive depending on whether b < ξ_L or b > ξ_L. When b < ξ_L, both C_± terms vanish quickly in eq.(31), whereas, when b > ξ_L, the C_+ term grows quickly, leading to the instability. In fact, the borderline case b = ξ_L represents the nucleation length L_sc discriminating the stable nucleation process corresponding to the initial phase from the unstable nucleation process corresponding to the acceleration phase. Now, from the condition b = ξ_L = 2l²(1 − cos(π/(L+1))) + 1, we reach an analytical expression of L_sc, eq.(32): L_sc = π/arccos(1 − (b − 1)/(2l²)) − 1. In the discrete BK model, the initial phase realized at L < L_sc is possible only when L_sc is greater than the lattice spacing or the block size, i.e., L_sc > 1. The condition of this nucleation length being greater than the block size, L_sc > 1, then yields the condition of the weak frictional instability, b < b_c = 2l² + 1, under which the quasi-static nucleation process is realizable in the discrete BK model. In other words, when b > b_c, L_sc is less than the block spacing and the quasi-static nucleation process cannot be realized in the BK model due to its intrinsic discreteness. This is exactly the point discussed by Rice [Rice, 1993]. Hence, whether the frictional instability is weak or strong is determined by the relation between the two parameters b and l only: a strong instability for b > b_c = 2l² + 1 and a weak instability for b < b_c. We emphasize that the continuum limit of the model corresponds to l → ∞, so that the continuum limit of the BK model with spatially homogeneous parameters always lies in the weak frictional instability regime, which accompanies the quasi-static nucleation process. Another derivation of L_sc and b_c, based on the mechanical stability analysis, will be given in the following subsection E. We note that the same formula can also be derived from the linear stability analysis around the steady-state solution of the equation of motion, v = v_ss = const. and θ = θ_ss = 1/v_ss, along the lines of [Rice et al, 2001].
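Inverting b = ξ_L gives eq.(32) in closed form, which can be checked against the value L_sc = 3.35 quoted for the parameters of Fig.5 (a sketch, not the authors' code):

```python
import numpy as np

def L_sc(b, l):
    # Closed form of eq.(32): solve b = 2 l²(1 - cos(π/(L+1))) + 1 for L.
    # Valid for b < b_c = 2 l² + 1 (weak frictional instability regime).
    return np.pi / np.arccos(1 - (b - 1) / (2 * l**2)) - 1

b, l = 9.0, 4.0
print(L_sc(b, l))      # ≈ 3.35, as quoted for Fig.5
print(2 * l**2 + 1)    # b_c = 33; b = 9 < b_c, so the regime is indeed weak
```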
We note in passing that the analytic formula of L_sc given by eq.(32) is in excellent agreement with the L_sc-value determined numerically by artificially stopping the external loading, as explained in §IIIA. Precisely speaking, the L_sc-value determined by artificially stopping the external loading could deviate slightly from the analytical result. Two reasons for such a deviation can be identified. First, a nonzero loading speed ν sometimes causes an "overshooting", giving a bias toward the instability. Second, the spatial patterns of the block displacement and the block sliding velocity within the nucleus sometimes deviate from the one assumed in deriving the analytic form of §IV, i.e., from the first Fourier-mode form.
C. Acceleration phase at v < v*

Next, we perform the perturbation analysis of the acceleration phase. Here, the block motion is no longer stable or quasi-static, but is essentially unstable and irreversible. There is no slow process, so there is no need to consider τ. In contrast to the initial phase, the zeroth-order velocity v^(0) describing this regime should be nonzero (the solution [B]).
We divide our analysis of the acceleration phase into two time regimes for a technical reason, i.e., the regime of v < v* and that of v > v*. In this subsection (C), we deal with the regime v < v*; the regime v > v* will be dealt with in the next subsection (D). For v << v*, eq.(23) reduces to a linear differential equation whose solution is a combination of the exponentials e^{λ_± t}, with λ_± given by eq.(15). Since b > ξ_L in the acceleration phase, λ_+ is positive, leading to the instability. The time evolution of the state variable θ̄^(0) follows accordingly. In the analysis, the size of the nucleus L is assumed to be fixed. Of course, an important part of the nucleation process, particularly in the unstable acceleration phase, is how the nucleus size L expands with time and how various observables evolve under the spatial expansion of the nucleus. In order to deal with such a nucleus expansion, we need additional information about the condition under which the block adjacent to the moving blocks, located at the rim of the nucleus, begins to move. This condition actually depends on the stress state of the block assembly at the beginning of the nucleation process in question, which was basically set by the previous large event preceding the event in question.
We find from our numerical simulations that, in the steady state of an earthquake sequence, the excess stress ∆F, defined as the elastic-force difference at a given block between the initial value at the beginning of the nucleation process and the threshold value at which that block eventually begins to move, joining the nucleation process, is more or less constant over the blocks involved in a given event, even though this quantity is scattered considerably over the various events of an event sequence. This feature originates from the fact that the stress distribution after a large event tends to be flat over the blocks involved in that event.
Equivalently, the threshold displacement ∆u, defined as the displacement that the block located at the rim of the nucleus must exhibit in order for the neighboring block, initially at rest, to begin to move, also turns out to be more or less constant over blocks. In fact, there is a relation ∆F = l²∆u. In Fig.13, we show typical distributions of ∆u divided by its average over the blocks involved in a given event, ∆u/⟨∆u⟩, for various parameter sets. The data for each parameter set are an average over 10^4 events. As can be seen from the figure, ∆u/⟨∆u⟩ tends to obey a common distribution characterized by a single-peak structure, suggesting that the approximation of regarding ∆u (or ∆F = l²∆u) as constant over the blocks involved in an event may not be so bad.
For convenience of description, we introduce a reduced time variable t′ whose origin t′ = 0 is taken at the point where the L-block movement begins. In the symmetric block motion of the first Fourier-mode type we are considering here, the two blocks adjacent to the nucleus begin to move, initiating the nucleus motion of size L + 2, at the reduced time t′ = t′_L. We consider the series of nucleus sizes L_sc, L_sc + 2, ..., L − 2, L, L + 2, .... Let the sliding velocity and the displacement of the central block at the transition from the L-block motion to the (L+2)-block motion be v_L and u_L; from eq.(35), one has their expressions during the L-block motion. Within the first Fourier-mode approximation, the displacement ∆u of the block located at the rim of the nucleus corresponds to a displacement ∆u/sin(π/(L+1)) of the central block. Hence, the ∆u-constant condition for the L → L + 2 transition can be given as a condition for the central block, which, together with eq.(38), yields the equation determining t′_L. Eqs.(37) and (40) then yield a recursion relation for v_L, which is solved as eq.(42).
To proceed further, we consider the situation where L is large enough, L >> 1. For L >> 1, ξ_L ≃ 1 + (πl/L)², with L_sc given by eq.(32). Replacing the summation by an integral, one gets the sliding velocity of the central block as a function of the nucleus size L (eq.(44)), where we put y ≡ L/L_sc, and ∆F = l²∆u is the excess stress defined above. Substituting this into eq.(40), one gets t′_L (eq.(45)). This expression of t′_L tends to diverge in the limit y → 1, i.e., L → L_sc. This is because, just at L = L_sc, the block motion is infinitely slow in t. (Remember that the relevant time scale at L ≤ L_sc was set by τ, i.e., was of O(1/ν) in t.) In the continuum limit, L is taken to be large such that the dimensionless distance in the continuum, L̃ = Ld (d being the dimensionless block size, i.e., the block size D measured in units of v_s/ω), is kept finite [Mori and Kawamura, 2008c]. To have a sensible continuum limit, one needs to set d = 1/l, so that the continuum limit means L → ∞ and l → ∞ with L̃ = L/l kept finite. As can be seen from eq.(45), t′_L goes to zero in the continuum limit due to the factor l in the denominator. This is simply because, in the continuum limit, the portion occupied by each fixed number of blocks becomes infinitesimally small.
The physically meaningful time in the continuum limit is the cumulative time t_L ≡ t′_{L_sc} + ··· + t′_L, which is calculated as eq.(46), where y = L/L_sc = L̃/L̃_sc as above, and a small number ε takes care of removing the aforementioned divergence associated with the infinitely slow motion in t around L = L_sc. This t_L remains nonzero even in the continuum limit.
The t-derivative of eq.(46) yields another important quantity, the rupture-propagation velocity v_r (≃ half the growth rate of the nucleus size, dL/dt). The dimensionless rupture-propagation velocity appropriate in the continuum limit, ṽ_r ≡ v_r d = v_r/l, is given by eq.(48), where y = L̃/L̃_sc. If one compares this expression of ṽ_r with that of the sliding velocity v of eq.(44), both v and ṽ_r are proportional to v*/a, meaning that a larger a-value or a smaller v*-value tends to lead to slower block sliding and to slower nucleus expansion. Meanwhile, v is proportional to ∆F, in contrast to ṽ_r, the latter being independent of ∆F (and of ∆u). This means that a low stress state at the onset of the nucleation process tends to induce a high sliding velocity, whereas the rupture-propagation velocity is rather insensitive to the stress state.
Comparison of the y-dependence of eqs.(44) and (48) suggests that the acceleration is relatively more suppressed in the rupture propagation than in the sliding velocity because of the factor y > 1 in the denominator of eq.(48). In fact, for larger y, the rupture-propagation velocity and the nucleus size grow exponentially with the time t, since dy/dt is proportional to y for y >> 1, which is consistent with our simulation data of Fig.9(b). By contrast, the sliding velocity grows faster than exponentially, which has also been confirmed by our numerical simulations shown in Fig.8(a). Namely, the acceleration of the block sliding dominates over that of the nucleus expansion.
In reality, y = L/L_sc is not necessarily much larger than unity in this regime. Even in this case, however, the r.h.s. of eq.(49) may be regarded as approximately linear in y with a modified proportionality coefficient, i.e., with the exponent in eq.(50) modified from the original one to an effective one. In fact, the fits made in Figs.9(b) and 10(b) were made with the associated exponent as a fitting parameter.
D. Acceleration phase at v > v*
Now we wish to move on to the later part of the acceleration phase, where the sliding velocity of the central block exceeds the crossover velocity v*. In this situation, the equation of motion for the central block becomes nonlinear, and the treatment of the previous subsection does not apply in the same form. To proceed, we introduce an additional approximation, the "overdamped approximation".
The l.h.s. of the equation of motion for v, eq.(8), consists of three terms: the first "inertia term" proportional to the second time derivative d²v/dt², the second term proportional to the first time derivative dv/dt, and the third term proportional to the velocity v itself. In the low-velocity region, the first term is much smaller in magnitude than the other two terms and might safely be neglected (the "overdamped approximation"). Our simulation results shown in Fig.14 indicate that the first term is indeed much smaller than the other two terms not only in the initial phase and in the acceleration phase at v < v*, but also in the acceleration phase at v > v* up to a certain point preceding L_c. The velocity at which the first term becomes comparable to the other two terms, where the overdamped approximation fails, gives another crossover velocity, which we denote v_inertia. In the following, we take the convention of defining v_inertia as the v-value at which the first (inertia) term grows to 10% of the second, first-derivative term. The overdamped approximation enables one to go into the later part of the acceleration phase up to v ≃ v_inertia. Within this approximation, the sliding velocity and the displacement of the central block are calculated for a fixed L as eqs.(51) and (52). Note that these expressions lead to an apparent divergence at a finite time. Of course, this is an artificial divergence caused by the overdamped approximation employed. In reality, when the velocity exceeds the crossover velocity v_inertia, the neglected inertia term becomes important, suppressing the artificial divergence, and the system exhibits an entirely different behavior, as can be seen from Fig.14.
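The 10% convention is straightforward to apply to stored simulation output. A sketch, with hypothetical time/velocity arrays `t`, `v` and placeholder coefficients `c2`, `c1` for the second- and first-derivative terms of eq.(8) (their actual values depend on the normalization of the model):

```python
import numpy as np

def find_v_inertia(t, v, c2=1.0, c1=1.0):
    # First velocity at which the inertia term reaches 10% of the
    # first-derivative term, following the convention in the text.
    dv = np.gradient(v, t)
    d2v = np.gradient(dv, t)
    crossed = np.abs(c2 * d2v) >= 0.1 * np.abs(c1 * dv)
    return v[np.argmax(crossed)]  # argmax returns the first True index
```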
One might describe the growth of the nucleus, i.e., the time dependence of L, along the lines of the previous subsection. Adopting the first Fourier-mode approximation and the constant-∆u approximation, one gets from eqs.(51) and (52) a recursion relation for v_L. In the case v_L >> v* of interest here, one may safely neglect the term proportional to v* (<< v_L), which simplifies the recursion relation. As the initial state of the recursion relation, we take here, somewhat arbitrarily, the state at v = v*, where the nucleus size is L = L*. In the large-L limit, the summation is replaced by an integral, yielding the epicenter-block sliding velocity v as a function of the nucleus size L or y, with y = L/L_sc = L̃/L̃_sc as above and a constant C fixed by y* = L*/L_sc = L̃*/L̃_sc. The cumulative time t_L is obtained accordingly, as is the normalized rupture-propagation velocity ṽ_r. If one compares the expression of ṽ_r with that of the sliding velocity v, the acceleration is relatively more suppressed in the rupture propagation than in the sliding velocity, as in the case of v < v*.
Beyond v = v inertia , the inertia effect becomes important and the system gets into the final stage of the acceleration phase, eventually approaching L c . In this final time regime, the overdamped approximation fails and the equation becomes highly nonlinear so that we have no efficient analytical solution, unfortunately.
Our numerical solution has revealed that, in this final time regime, the inertia term suppresses the acceleration, vθ drops further mitigating the acceleration, and eventually reaches the point vθ = 1 yielding L c , which signals the onset of the high-speed rupture of a mainshock. Beyond the point L = L c , the epicenter block rapidly decelerates and soon comes to a complete stop. Meanwhile, neighboring blocks begin a high-speed motion, and the system gets into the high-speed rupture phase where the rupture front propagates with the elastic wave velocity ∼ l in both directions.
E. Mechanical stability analysis
In this subsection, we re-derive the expression of L sc , eq.(32), based on the mechanical stability analysis, i.e., from the condition of the balance between the elastic force and the friction force acting on a block [Dieterich, 1992;Scholz, 2002]. As mentioned, one may regard L sc as the length separating the stable and the unstable ruptures. When the nucleus size L is less than L sc , the rupture process is stable and reversible, whereas, when L exceeds L sc , it becomes unstable and irreversible.
An appropriate physical condition describing the stable/unstable sliding across L_sc might be whether the elastic stiffness K, defined by K = δf_elastic/δu, which represents the change of the elastic force f_elastic due to an infinitesimal slip δu of the block, is greater or smaller than the frictional weakening rate, defined by |δφ/δu|, which represents the change of the friction force φ due to an infinitesimal slip of the block. If the frictional weakening rate |dφ/du| is greater than the elastic stiffness K, an infinitesimal sliding δu induces a dominance of the friction-force drop over the elastic-force drop, causing a dynamical instability, i.e., a slip weakening. By contrast, if the frictional weakening rate is smaller than the elastic stiffness, further sliding is suppressed by the frictional force, leading to a stable slip, i.e., a slip strengthening.
Consider a hypothetical instantaneous process in which the state of each block changes from (u_i, θ_i) to (u_i + δu_i, θ_i + δθ_i). The aging law (7) entails the relation δθ_i = δt − θδu_i ≃ −θδu_i. The frictional-weakening rate is then obtained as |dφ/du| = b (for vθ >> 1). Meanwhile, the stiffness of the L-block system may be given by the smallest nonzero eigenvalue of the L × L matrix K defined via the relation (δf_elastic,1, ..., δf_elastic,L) = K(δu_1, ..., δu_L), which is K_min = 2l²(1 − cos(π/(L+1))) + 1.
The eigenfunction associated with the smallest eigenvalue K_min just corresponds to the first Fourier mode which we employed in our approximate solution of the equation of motion. As the size of the nucleus L is increased, the stiffness K_min given by eq.(63) decreases. Note, however, that even in the L → ∞ limit K_min does not vanish altogether, retaining a nonzero value, unity, in contrast to the elastic-continuum case [Dieterich, 1992; Rubin and Ampuero, 2005; Ampuero and Rubin, 2008], where K vanishes as 1/L. Matching K and |dφ/du|, the condition of the frictional instability is obtained as b > K_min = 2l²(1 − cos(π/(L+1))) + 1, yielding the expression of L_sc given by eq.(32). In Fig.15, the stiffness K of an epicenter block computed in the course of the nucleation process of our simulation is plotted versus the number of moving blocks L, together with the theoretical curve (63). The two agree very well. At an earlier stage of the slip, the inequality K > |dφ/du| holds, indicating a stable slip, while, at a certain point, the equality K = |dφ/du| is reached, signaling L_sc, beyond which the opposite inequality K < |dφ/du| holds, indicating an unstable slip. The system then gets into the unstable acceleration phase. Eq.(63) suggests that, if b < 1, then b < K_min for any value of L. This means that the earthquake-like frictional instability is no longer possible in the region b < 1 of the model. Indeed, we observe in our simulations that, in the region b < 1, the model exhibits a creep-like continuous movement without showing an earthquake-like instability any more.
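Eq.(63) is easy to verify numerically: the stiffness matrix of the L-block nucleus (nearest-neighbor coupling l², pinned ends, unit plate spring) is tridiagonal, and its smallest eigenvalue and eigenvector can be compared with the analytic forms. A sketch:

```python
import numpy as np

l, L = 4.0, 10
# Stiffness matrix: (2 l² + 1) on the diagonal, -l² on the off-diagonals.
K = (2 * l**2 + 1) * np.eye(L) - l**2 * (np.eye(L, k=1) + np.eye(L, k=-1))
eigvals, eigvecs = np.linalg.eigh(K)
print(eigvals[0])                                    # numerical K_min
print(2 * l**2 * (1 - np.cos(np.pi / (L + 1))) + 1)  # analytic K_min, eq.(63)
# The eigenvector of K_min is (up to sign) the first Fourier mode:
mode = np.sin(np.pi * np.arange(1, L + 1) / (L + 1))
print(np.allclose(np.abs(eigvecs[:, 0]), mode / np.linalg.norm(mode)))
```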
V. SIMULATION RESULTS II
When the nucleation process precedes a mainshock, one might naturally ask how the properties of the nucleation process are related, or unrelated, to the properties of the ensuing mainshock itself. This question would be of particular interest in its possible connection to earthquake forecasting. In this section, we investigate the statistical properties associated with the nucleation process, e.g., the nucleation lengths L_sc and L_c and the duration times of each phase of the nucleation process, averaged over many events, in connection with the mainshock properties.
Of course, difficulties accompany such a forecast. The fault sliding is generally very slow during most of the nucleation process, which makes the detection of the nucleation process difficult. Especially in the initial phase, the fault motion is extremely slow, being of "atomic scale", ≃ 1 [nm/s]. In the acceleration phase, the sliding velocity increases by several orders of magnitude towards the nucleation length L_c, eventually becoming comparable to the maximum sliding velocity at the main rupture. An important point here is how much time is left before the onset of a mainshock. We study in this section how the dynamics evolves during the acceleration phase in some detail, mainly for the case of the weak frictional instability relevant to the continuum limit.
A. The nucleation lengths L_sc and L_c
As was revealed in the previous sections, the nucleation length L sc is determined only by the material parameters as given in eq. (32), meaning that L sc cannot be used as an indicator of the size of the ensuing mainshock which may be small or large.
What about the nucleation length L_c? We plot in Fig.16(a) the mean L_c computed in our simulations, normalized by the corresponding L_sc, i.e., L_c/L_sc, versus the final rupture-zone size L_r for various choices of the model parameters in the weak frictional instability regime. The b-value is fixed to b = 9 while the parameters l, a and v* are varied. The data for each parameter set are an average over 10^4 events in the strong frictional instability regime and 10^5 events in the weak frictional instability regime, except for the case of l = 10, where the corresponding numbers are 3500 and 24000, respectively. As can be seen from Fig.16(a), the data approximately collapse onto a common curve. Since L_sc given by eq.(32) does not depend on a and v*, this indicates that L_c is also insensitive to a and v*, while its l-dependence is the same as that of L_sc. One also sees that L_c tends to be independent of L_r except for smaller events, implying that one cannot predict the size of the upcoming mainshock even from the information of L_c.
We examine the b-dependence of L_c/L_sc ≡ r, and plot in Fig.16(b) the mean L_c/L_sc-value versus b for various l-values, including not only the weak but also the strong frictional instability regime. As can be seen from Fig.16(b), L_c/L_sc exhibits a nontrivial b-dependence accompanied by a cusp-like change of behavior at b = b_c, discriminating the weak and the strong instability regimes. The data in the weak frictional instability regime tend to increase almost linearly with b, lying on a common line even for different l, while those in the strong frictional instability regime tend to decrease with b. We find that the data in the weak frictional instability regime of b < b_c exhibit a near-linear behavior well fitted by the relation r(b) = L_c/L_sc ≃ 0.1b + 4.4.

B. The duration times of each nucleation phase

Next, we consider the duration times of each stage of the nucleation process, including that of the initial phase, T_α (L < L_sc), of the acceleration phase, T_β (L_sc < L < L_c), and of the high-speed rupture phase, T_γ (L > L_c). The ultimate utility of the nucleation phenomenon may be forecasting the upcoming mainshock. As mentioned, practical detection, if any, would become possible only in the acceleration phase. Since the system has by then already passed the point of no return, a mainshock should already be "deterministic" there. The remaining problem is how much time is left.
We tentatively set the detectable sliding velocity of the nucleus motion to v = 10^−4 = 10^4 ν, which corresponds in real units to ≃ 10^−2 [mm/sec]. Then, the time interval between the point of v = 10^−4 and the point of L = L_c (the onset of a mainshock) is denoted T′_β. This T′_β would give a realistic measure of the remaining time available for a mainshock forecast.
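The conversion behind the quoted threshold is elementary (using ν = 10^−8 in model units and a plate speed of ≃ 1 nm/s in physical units):

```python
nu = 1e-8           # plate loading speed in model units
v_detect = 1e-4     # assumed detectable sliding velocity, model units
print(v_detect / nu)                           # = 1e4 plate speeds
plate_nm_per_s = 1.0                           # ≃ 1 nm/s physically
print(v_detect / nu * plate_nm_per_s * 1e-6)   # nm/s -> mm/s: ≈ 0.01 mm/s
```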
In Fig.17(a), we show the duration times (T_α, T_β, T′_β and T_γ) for the case of the weak frictional instability versus the associated final rupture-zone size L_r. The numbers of averaged events are the same as those of Fig.16, except for the case of v* = 10^−4, where the corresponding number is 225. Quite naturally, the duration time of the mainshock itself, T_γ, gets longer for a larger mainshock. By contrast, the duration times of the nucleation process, T_α, T_β and T′_β, are nearly independent of the size of the ensuing mainshock. This observation means that it is again hard to predict the size of the ensuing mainshock based on the duration times of the nucleation process. A closer look at the data reveals that there is even a weak anti-correlation between the duration time of the initial phase T_α and the size of the ensuing mainshock. Namely, T_α tends to be a bit shorter for larger earthquakes, though the tendency is not pronounced.
In Fig.17(b), we plot the mean duration times averaged over all L r versus b in the main panel, and versus a in the inset. One sees from the figure that the duration times depend on b and a only weakly. In Fig.17(c), we plot these mean duration times versus v * in the main panel, and versus 1/l in the inset. One sees from the main panel that the duration times T α and T γ depend on v * only weakly, but the duration times T β and T ′ β depend on v * rather sensitively, increasing with decreasing v * .
For v* = 10^−4, T_β is greater than T_γ by a factor of 700, while T′_β is greater by a factor of 20. For smaller v*, T′_β could be even longer, although a saturating behavior seems to set in for v* ≲ 10^−4. Unfortunately, taking the data for v* ≤ 10^−5 is beyond our present computational capability. The 1/l-dependence of these duration times shown in the inset turns out to be rather weak. We then conclude that the remaining time available for a mainshock forecast could be longer than the mainshock duration time by one or two orders of magnitude, but perhaps not much longer than that.

[Displaced figure caption (Fig.17(c)): the mean duration times plotted versus the crossover velocity v* with l = 20 (main panel), and versus the inverse stiffness parameter 1/l with v* = 10^−2 (inset). The other parameters are a = 1, b = 9, c = 1000, l = 20 and ν = 10^−8.]
C. The continuum limit
In view of the intrinsic discreteness of the BK model, it would be important to clarify the fate of the nucleation process in its continuum limit. We have shown above that whether the block size, an intrinsic short-length cutoff scale of the model, is larger or smaller than the nucleation length L_sc largely affects the nature of the nucleation process. In particular, the continuum limit of the BK model always lies in the weak frictional instability regime. This gives us an important suggestion that an earthquake at a mature homogeneous fault obeying the RSF law is always accompanied by the quasi-static nucleation process, corroborating Rice [Rice, 1993].
As mentioned, the continuum limit of the BK model corresponds to making the block size infinitesimally small, d → 0, while simultaneously making the system infinitely rigid, l → ∞, with d = 1/l [Mori and Kawamura, 2008c]. The equation of motion in the continuum limit has been given in dimensionful form by eq.(12). It should be emphasized that the length unit scaling the block size is v_s/ω, while the length unit scaling the block displacement is the characteristic slip distance L. Note that the former length scale, v_s/ω, is absent in the standard continuum elasto-dynamic equation. The appearance of such a second length scale, in addition to the length scale of the critical slip distance L, has become possible due to the existence of the characteristic time scale ω^−1 borne by the −ω²U term in eq.(12), which represents the plate drive directly applied to the fault layer as modeled by the block assembly of the BK model. Let us examine the continuum limit of the two types of nucleation lengths, L_sc and L_c, beginning with L_sc. The continuum limit of L_sc in dimensionless form is given by L̃_sc = lim_{d→0} L_sc d = lim_{l→∞} L_sc/l. From the obtained analytical expression of L_sc, eq.(32), one can easily get L̃_sc = π/√(b − 1). Remembering that the length unit here is v_s/ω and that b = BN/(k_p L) (N being the normal load), with the relation (10) one can derive the expression of the dimensionful nucleation length in the continuum limit, L^×_sc. Among the frictional parameters, B, not B − A, enters the formula. This is consistent with the earlier observation by Dieterich [Dieterich, 1992], who derived an expression of the nucleation length dependent only on B, eq.(2), on the assumption of vθ >> 1, which is also the condition we employed. The derived expression of L^×_sc is a decreasing function of the frictional parameter B and the normal stress σ_n, and an increasing function of the characteristic slip distance L and the rigidity G. This tendency is qualitatively consistent with the one indicated by the standard form, eq.(2). However, the present formula of L^×_sc differs from eq.(2) in that L^×_sc is proportional to the square root of GL/(σ_nB), not to GL/(σ_nB) itself as in eq.(2), the remaining part being complemented by the square root of the second length scale v_s/ω. This difference originates from the difference in the expression of the stiffness K, eq.(63), versus the standard continuum form K ∝ 1/L. As mentioned, this difference can further be traced back to the existence of the two length scales in the BK model, i.e., the critical slip distance L and the length scale v_s/ω = W, in contrast to the single length scale L in the standard elasto-dynamic model.
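The dimensionless limit quoted above follows from eq.(32) by a small-argument expansion; this is a reconstruction consistent with the quoted monotonicities (the dimensionful form, eq.(67), additionally requires relation (10) and is not reproduced here):

```latex
L_{sc} = \frac{\pi}{\arccos\!\left(1 - \frac{b-1}{2l^{2}}\right)} - 1
\;\xrightarrow[\;l \to \infty\;]{}\;
\frac{\pi l}{\sqrt{b-1}}
\quad\text{since } \arccos(1-x) \simeq \sqrt{2x} \text{ for } x \to 0,
\qquad
\tilde{L}_{sc} = \lim_{l\to\infty}\frac{L_{sc}}{l} = \frac{\pi}{\sqrt{b-1}} .
```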
Concerning the continuum limit of L_c, since the ratio r = L_c/L_sc turns out to be hardly dependent on l in the weak frictional instability regime relevant to the continuum limit, the dimensionful nucleation length in the continuum limit, L^×_c, is given by L^×_c = r(b) L^×_sc ≃ (0.1b + 4.4) L^×_sc, where b is a number characterizing the fault interface. We also examine the continuum limit of the duration times of the nucleation process, T_α, T_β, T′_β and T_γ. As shown in Fig.17(c), the 1/l-dependence of these duration times turns out to be rather weak. This means that the duration times in the continuum limit should be close to the ones computed here for the discrete model.
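Combining the fit r(b) ≃ 0.1b + 4.4 from §VA with eq.(32) gives a quick block-unit estimate of L_c (a sketch using the Fig.5 parameters):

```python
import numpy as np

def L_sc(b, l):
    return np.pi / np.arccos(1 - (b - 1) / (2 * l**2)) - 1

b, l = 9.0, 4.0
r = 0.1 * b + 4.4       # empirical ratio L_c / L_sc ≈ 5.3
print(r * L_sc(b, l))   # L_c ≈ 17.7 blocks for the Fig.5 parameters
```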
VI. SUMMARY AND DISCUSSION
We studied the nature of the nucleation process of the one-dimensional BK model obeying the RSF law. The model turned out to exhibit qualitatively different nucleation phenomena depending on whether the frictional instability is "strong" or "weak". The condition of the strong or the weak frictional instability is simply given by b > b_c or b < b_c, respectively, with b_c = 2l² + 1. The quasi-static nucleation process, i.e., the initial phase, exists only for the weak frictional instability. Two kinds of nucleation lengths were identified: L_sc, separating the initial and the acceleration phases, and L_c, separating the acceleration and the high-speed rupture phases. The nucleation length L_sc and the initial phase exist only in the weak frictional instability regime, while L_c and the acceleration phase exist in both regimes. The analytic expression of L_sc was obtained in eq.(32), which took the forms of eqs.(65) and (67) in the continuum limit, while that of L_c in the continuum limit was obtained in eq.(68). In fact, both L_sc and L_c are determined by the material parameters only, independently of the size of the ensuing mainshock. This means that the information on L_sc or L_c cannot be used for predicting the size of the subsequent mainshock. Since the continuum limit of the BK model lies in the weak frictional instability regime, an earthquake at a mature homogeneous fault under the RSF law is always accompanied by the quasi-static nucleation process. When the discreteness or the inhomogeneity is strong, by contrast, an earthquake is not accompanied by the quasi-static nucleation process.
Throughout the initial phase up to L_sc, the block sliding is extremely slow, of order the loading speed of the plate. Beyond L_sc, the system gets into the irreversible acceleration phase, where both the block sliding and the rupture propagation accelerate rapidly. Two characteristic points are identified within the acceleration phase. One is the point v ≃ v*, where the block sliding velocity exceeds the friction crossover velocity, beyond which the rupture propagation changes from exponential to super-exponential growth. The other is the point v ≃ v_inertia, where the inertia effect becomes relevant, beyond which the block acceleration tends to be suppressed at the epicenter block due to the inertia effect. At L ≃ L_c, the sliding velocity v of the epicenter block reaches its maximum, while the state variable θ of the epicenter block reaches its minimum. Beyond L = L_c, the epicenter block rapidly decelerates and stops. The system then gets into the high-speed rupture of a mainshock, where the rupture front propagates in both directions at a nearly constant speed, the elastic-wave velocity. In the case of the strong frictional instability, a characteristic oscillatory behavior takes place at an early stage of the high-speed rupture, which is caused by multiple reflections of the rupture front.
Various duration times of each stage of the nucleation process were studied. The duration times also show no pronounced correlation with the size of the ensuing mainshock. Particular attention was paid to the duration time of the acceleration phase, T_β, and the remaining time available for a mainshock forecast, T′_β. Both T_β and T′_β hardly depend on the model parameters, with the exception of the friction crossover velocity v*: both tend to increase with decreasing v*. We argue that the remaining time for an earthquake forecast could be one or two orders of magnitude longer than the duration time of a mainshock, but perhaps not much longer than that.
Next, with our present findings on the BK model in mind, we wish to discuss possible implications of the results to the nucleation process of real seismicity. Of course, since the reliability of the 1D BK model in connection with real seismicity may be limited at the quantitative level, such implications to real seismicity should be taken only as indications.
The typical scales of these nucleation lengths can be estimated on the basis of eqs.(67) and (68). Any possibility of an earthquake forecast lies in the acceleration phase. The remaining time T′_β plays an especially important role here. Let us estimate the various duration times on the basis of our present results. If we revive the normalization units and substitute typical parameter values, we get, for v* = 10^−4, T_α ≃ 10^2 [year], T_β ≃ 1 [day], T′_β ≃ 1 [hour] and T_γ ≃ 1 ∼ 2 [min]. For smaller v*, T′_β could be even longer. However, as can be seen from Fig.17(c), the increase of T′_β with decreasing v* tends to be suppressed and to saturate for v* ≲ 10^−4. Hence, we deduce that, irrespective of the detailed value of the friction crossover velocity v*, the remaining time available for a mainshock forecast would not be much longer than several hours. The time left thus seems not so long even under the best conditions.
The duration times T_β and T′_β turn out to depend on the friction parameter v*. The friction crossover velocity v* was introduced in our analysis to describe the state at rest phenomenologically. In view of the very slow speed of the plate drive, ν ≃ 1 [nm/sec], of "atomic" scale, the question of whether the stuck region of the fault is completely stuck with a zero sliding velocity, or is moving with a speed much lower than ν, sounds too "academic". In describing a macroscopic earthquake phenomenon, it would perhaps be more realistic to regard the stuck state as being completely at rest with v = 0, and to modify the relevant friction law so that it can describe the state at rest. Remember that the standard a-term proportional to ln v gives an infinitely negative friction for v → 0, and does not allow anything to stop whatsoever. In other words, we feel that regarding the "stuck" state as a state with a sliding velocity 0 < v << ν is not very meaningful. Then, in order to describe such a state at complete rest, v = 0, we need a modified a-term with a nonzero crossover velocity v* (> ν), as was done phenomenologically here.
Predicting the size of an earthquake would be even more difficult. Any quantity related to the nucleation process studied here, including the nucleation lengths L_sc and L_c and the various duration times of the nucleation process, has no pronounced correlation with the size of a mainshock, at least for larger ones. The problem of how big a mainshock is going to be is related to the stress state of the entire area, not limited to the nucleus area. The information from the nucleus area alone is not enough to predict the ensuing mainshock size. If so, a wide-area survey of the stress state would be necessary to anticipate the mainshock size.
Finally, we wish to discuss possible extensions of our present analysis. First, as the present model is one-dimensional, an obvious extension is to study the properties of the corresponding two-dimensional model. In two dimensions, the geometry could be more complex than in one dimension, which might modify at least a part of the results obtained here for the one-dimensional model.
Second, in the present model, a nearest-neighbor interaction has been assumed between blocks. In real earthquake faults, the crust extending perpendicular to the fault plane mediates a long-range interaction even between blocks far apart on the fault plane. In fact, the long-range interaction has been employed in the elastic-continuum analysis [Dieterich, 1992; Rubin and Ampuero, 2005; Ampuero and Rubin, 2008]. Even within the discrete BK model, the effects of the elastic long-range interaction were investigated, mainly concerning its statistical properties such as the magnitude distribution [Mori and Kawamura, 2008b]. It would be desirable to study the nature of the nucleation process of such a long-range BK model and to compare it with that of the short-range model studied here.
Third, the present model is homogeneous except for its intrinsic discreteness in the form of blocks. Real faults are more inhomogeneous, with the elastic and the frictional parameters exhibiting inhomogeneous distributions. The form of such a spatial inhomogeneity might be either random or more organized, e.g., hierarchical [Ide and Aochi, 2005]. Within the BK model, it is possible to take account of such an inhomogeneity by letting the model parameters vary from block to block [Cao and Aki, 1986].
Fourth, the effects of viscosity or relaxation were not taken into account in the present model. Such relaxation effects should more or less exist in real faults. It would also be desirable to clarify their role not only in the earthquake nucleation process but also in the mainshock itself. We leave these extensions and open problems as future tasks.
In summary, we studied the properties of the earthquake nucleation process of a mature fault both numerically and analytically on the basis of the spring-block BK model obeying the RSF law. We find that this simplified model successfully reproduces various features of the expected earthquake nucleation process. We analyzed the dynamical properties of the model at each stage of the nucleation process in detail, including their continuum limits, and further discussed the connection to a possible earthquake forecast.
The authors are thankful to T. Okubo, N. Hatano, N. Kato, T. Uchide and N. Ito for useful discussion. This study was supported by Grant-in-Aid for Scientific Research on Priority Areas 19052006. We thank ISSP, Tokyo University for providing us with the CPU time.
"year": 2015,
"sha1": "924220ca672a889edf02dbb5aae5073e7c071d96",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1407.2693",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6896bb4c0b436c5e2d4393f61435034ccc680297",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Efficacy of Periprostatic Anesthesia according to Lidocaine Dose during Transrectal Ultrasound-Guided Biopsy of the Prostate
Purpose: The aim of this study was to evaluate the efficacy of periprostatic lidocaine injection according to lidocaine dose during transrectal ultrasound-guided prostate biopsy.

Materials and Methods: The subjects of this study were 92 patients who had undergone transrectal ultrasound-guided 12-core biopsy of the prostate. The patients were randomly assigned to three groups: group 1 (n=31, no lidocaine injection), group 2 (n=30, periprostatic injection of 10 ml of 1% lidocaine), and group 3 (n=31, periprostatic injection of 20 ml of 1% lidocaine). The patients were assessed for pain by use of a 10-point visual analogue scale (VAS) and for other complications after the procedure.

Results: The mean VAS scores during probe insertion in groups 1 through 3 were 0.93±0.89, 1.32±1.37, and 1.13±1.10, respectively, with no statistically significant differences between the three groups. The mean VAS scores for biopsy pain, however, were 5.0±1.48, 3.93±1.94, and 3.60±2.15 in the same groups, respectively, with statistically significant differences between group 1 and the other groups. Patients in groups 2 and 3 reported significantly less biopsy pain than did group 1 patients (p=0.004, 0.021), with no statistically significant difference in VAS score between groups 2 and 3 (p=0.533). With respect to post-biopsy complications, there were no significant differences in the incidence of hematuria, hematospermia, rectal bleeding, or infection among the three groups.

Conclusions: Periprostatic injection of local anesthesia with lidocaine was associated with significantly less pain than no anesthesia. Furthermore, a 20-ml dose of lidocaine produced no better pain control than did a 10-ml dose for prostate biopsy.
INTRODUCTION
Transrectal ultrasound (TRUS)-guided prostate biopsy is the most commonly used procedure for detecting prostate cancer. However, pain is the main morbidity and the main hindrance to the acceptance of TRUS-guided prostate biopsy by patients. Several studies have shown that 19 to 30% of patients experience moderate to severe pain during prostate biopsy [1,2]. There has been a shift recently from the standard sextant biopsy to a 10-to 12-core biopsy protocol to increase the cancer detection rate. This extended biopsy protocol is associated with increased pain, discomfort, and anxiety [3,4]. Two factors usually responsible for pain during prostate biopsy are anal pain due to the ultrasound probe and insertion pain of the needle through the prostate [5].
Currently, there is no universally accepted method of anesthesia for prostate biopsy, as evidenced by the numerous methods that have been tried and published in the literature [6][7][8][9]. Among the various methods of periprostatic anesthesia, periprostatic lidocaine injection appears to be the most popular. The lidocaine doses used for periprostatic anesthesia vary between studies. One study [7] reported that the effectiveness of periprostatic anesthesia did not differ between basal injection and apical injection. Furthermore, in that study, patients were randomly assigned into three groups depending on the dose of 1% lidocaine applied during periprostatic anesthesia at the basal lesion: 2.5 ml (group 1), 5 ml (group 2), and 10 ml (group 3). In that study, injection of 2.5 or 5 ml did not result in a significant difference in pain control, whereas use of 10 ml of 1% lidocaine produced better pain control. Because higher doses seem to result in better pain control, at least according to this single study, the effect of doses exceeding 10 ml by basal injection needs to be determined. To address this shortcoming, we conducted a prospective randomized controlled study to evaluate the efficacy for pain control and tolerability of periprostatic lidocaine injection according to lidocaine doses of more than 10 ml by basal injection during TRUS-guided prostate biopsy.
Patients
This prospective randomized controlled trial comprised a series of 92 consecutive men (median age, 65.4 years; range, 39 to 75 years) with an abnormal prostate-specific antigen (PSA) level (>4 ng/ml) or an abnormal result on a digital rectal examination who underwent TRUS-guided biopsy and prostatic biopsy for the first time between January 2006 and December 2008. Informed consent was obtained from all patients.
Procedure
Patients were randomly assigned to three groups by using the restricted randomization method to achieve balance in group size. The random-number table was drawn up by the urologist and an appropriate anesthetic procedure was assigned to each number. Group 1 received 10 ml of 2% lidocaine gel instilled rectally as a control. Group 2 received 10 ml of 1% lidocaine at the bilateral basal periprostatic lesions following rectal instillation of 10 ml of 2% lidocaine gel. Group 3 received 20 ml of 1% lidocaine at the bilateral basal periprostatic lesions after 10 ml of 2% lidocaine gel was instilled rectally.
Patients were unaware of their group assignment.
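The following sketch illustrates one common form of restricted randomization (permuted complete blocks) that keeps group sizes balanced during accrual; the block size and seed are hypothetical, since these details are not reported above.

```python
import random

def block_randomize(n_patients, groups=(1, 2, 3), block_size=6, seed=42):
    """Assign patients to groups in shuffled complete blocks so that
    group sizes stay balanced throughout accrual (restricted randomization)."""
    assert block_size % len(groups) == 0
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

schedule = block_randomize(92)
print({g: schedule.count(g) for g in (1, 2, 3)})  # near-equal group sizes
```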
All patients had suppository enemas the day before and on the day of the biopsy and received intravenous antibiotics on the day before the biopsy and oral antibiotics for 7 days after the biopsy. All biopsies were performed by one individual using a 9.5 MHz HD 11 XE (Philips, New York, NY, USA). Each patient was placed in the left lateral decubitus position during the prostate biopsy. Periprostatic lidocaine injections were performed near the junction of the seminal vesicle with the base of the prostate with an 18-gauge AceCut biopsy needle (TSK Laboratory, Tochigi, Japan). The accuracy of the block was determined by detecting the collection of local anesthetic fluid on TRUS. Each biopsy was performed 5 minutes after the lidocaine injection. The 12-core biopsies were obtained by using an automatic, spring-loaded device with an 18-gauge needle. All patients underwent an equal number of biopsies. After the biopsy procedure, the patients completed a questionnaire regarding the level of pain they experienced during probe insertion and biopsy. The pain score was assessed by using a 10-point linear visual analogue scale (VAS; 0 for no pain, 10 for excruciating pain). After discharge, complications such as hematuria, hematospermia, rectal bleeding, and infection were determined by interviewing each patient on his next visit to the hospital.
Statistical analysis
Statistical analysis was performed by using SPSS ver. 12.0 (SPSS, Inc., Chicago, IL, USA). The groups were compared statistically by use of the Kruskal-Wallis test. Various parameters that could be related to the degree of pain during the prostate biopsy (VAS score, patient's age, prostate volume, PSA, and the detection of cancer) were statistically analyzed by Pearson's correlation test. Pain scores were compared between groups by use of Wilcoxon's signed-rank test. Statistical significance was defined as a p-value ≤0.05.
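For illustration, the analysis described above could be reproduced along the following lines with open-source tools. The scores below are synthetic values drawn to mimic the reported group means and SDs, and the pairwise step uses the Mann-Whitney U test, the unpaired counterpart of the Wilcoxon test named above, since the three groups here are independent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical VAS biopsy-pain scores mimicking the reported group
# means/SDs (group 1: 5.0+/-1.48; group 2: 3.93+/-1.94; group 3: 3.60+/-2.15).
g1 = np.clip(rng.normal(5.0, 1.48, 31), 0, 10)
g2 = np.clip(rng.normal(3.93, 1.94, 30), 0, 10)
g3 = np.clip(rng.normal(3.60, 2.15, 31), 0, 10)

h, p_overall = stats.kruskal(g1, g2, g3)  # three-group comparison
print(f"Kruskal-Wallis: H={h:.2f}, p={p_overall:.4f}")

# Pairwise comparisons between independent groups.
for name, a, b in [("1 vs 2", g1, g2), ("1 vs 3", g1, g3), ("2 vs 3", g2, g3)]:
    u, p = stats.mannwhitneyu(a, b)
    print(f"group {name}: U={u:.0f}, p={p:.4f}")
```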
RESULTS
The mean age of the patients was 64.0±11.7 years, their mean prostate volume was 49.0±22.5 ml, and their mean PSA level was 11.0±14.6 ng/ml. There were no statistically significant differences in baseline characteristics between the three groups (Table 1). With respect to the correlation between the VAS score and each parameter, such as age, prostate volume, PSA, and the detection of cancer, no statistically significant correlations were found by Pearson's correlation test (Table 2). The mean pain VAS scores during probe insertion were 0.93±0.89, 1.32±1.37, and 1.13±1.10 in groups 1, 2, and 3, respectively, and there were no statistically significant differences between the three groups (Table 3). The mean pain VAS scores during prostate biopsy were 5.0±1.48, 3.93±1.94, and 3.60±2.15 in groups 1, 2, and 3, respectively (Table 3). Patients in groups 2 and 3, who received a periprostatic injection of 1% lidocaine, reported significant pain reduction compared with the control group (p=0.004, 0.021). However, there was no statistically significant difference in VAS score between groups 2 and 3 (p=0.533) (Fig. 1).
With respect to the incidence of complications after prostate biopsy, the three groups did not show significant differences (Table 4). One patient experienced temporary vasovagal syncope and recovered after conservative management with intravenous fluid therapy. All complications resolved with conservative management.
DISCUSSION
Although well tolerated by most men, 65 to 90% of patients reportedly have discomfort during TRUS-guided prostate biopsy [7,10,11]. One study reported that 64% of patients who underwent TRUS-guided prostate biopsy reported anxiety concerning pain before the procedure, with 20% of patients experiencing severe post-biopsy pain [4]. Pain during TRUS-guided prostate biopsy can occur during transrectal probe insertion and when the needle pierces the capsule of the prostate through the rectal wall. Lidocaine gel is usually instilled transrectally for pain reduction, but its efficacy when instilled transrectally is controversial [8,12,13].
Several nerve block methods have been investigated for better pain control. These include periprostatic injection, prostatic injection, apical anesthetic injection, and prostate plexus anesthetic injection [6]. Among them, the most commonly used method is periprostatic injection of anesthetics into the sites around the neurovascular bundle between the seminal vesicle and periprostatic tissue [6,11,14].
The technique of periprostatic injection into the basal lesion of the prostate was adapted for local anesthesia in the present study. In the process of periprostatic injection for local anesthesia, confirmation of the appropriate injections is important to maximize the anesthetic effects for pain relief during prostate biopsy. The hypoechoic wheal (the collection of local anesthetic fluid between the rectal wall and the prostate detected by TRUS during periprostatic injection) is the key point for determining proper local injection [11,14] (Fig. 2). Since Nash et al. [15] reported the efficacy of periprostatic anesthesia during prostate biopsy, numerous studies have also reported the effectiveness of a periprostatic nerve block. Schostak et al. [6] reported no significant difference in pain control between those receiving an injection of a total of 20 ml of 1% lidocaine into the apical and basal lesions and the group injected with a total of 10 ml of 1% lidocaine only into the basal lesions. Trucchi et al. [16] showed that an injection of 20 ml of 1% carbocaine near the junction of the seminal vesicle with the base of the prostate achieves better pain control than does 20 ml of 1% lidocaine.
Whereas most studies to date have demonstrated good efficacy of periprostatically injected lidocaine during prostate biopsy, there is no information or consensus about the efficacy of dose escalation of lidocaine for pain relief or of the optimal dosage of lidocaine, especially concerning injection into the junction between the seminal vesicle and the base of the prostate. Presently, we assessed the efficacy of periprostatic anesthesia according to the dosage of lidocaine during TRUS-guided prostate biopsy. No statistically significant differences were evident in the VAS score between group 2 (10 ml of 1% lidocaine) and group 3 (20 ml of 1% lidocaine). This result suggests that 10 ml of lidocaine was enough to induce maximum prostatic anesthesia. Therefore, 10 ml of 1% lidocaine was judged to be sufficient for pain control.
Post-procedural infection accounts for about 14.4% of all complications. In particular, a septic condition after prostate biopsy can be life-threatening. According to Obek et al. [17], periprostatic anesthesia is associated with a higher incidence of infectious complications, which is attributed to the extra punctures and infiltration through a highly colonized rectum into a highly vascularized space. However, Song et al. [11] and Lee et al. [18] showed that periprostatic anesthesia was not associated with a higher rate of infectious complications. Our study concurs with these prior findings. Furthermore, other complications such as hematuria, hematospermia, and rectal bleeding resolved with conservative management.
Our study had several limitations. The first concerns the study design and the statistical power related to sample size; the lack of a placebo group and the small sample size may have influenced the statistical results. A second limitation was that we could not determine the optimal dosage of lidocaine for periprostatic anesthesia; we only know that there was no significant difference between the group that received 10 ml and the group that received 20 ml of lidocaine for periprostatic anesthesia.
CONCLUSIONS
For pain control during prostate biopsy, the combination of periprostatic nerve block and lidocaine gel provides better pain control than does lidocaine gel alone. Furthermore, 20 ml of lidocaine for periprostatic nerve block does not achieve better pain control than 10 ml of lidocaine. To determine the optimal dose of lidocaine for periprostatic anesthesia, further well-designed, placebo-controlled prospective studies involving larger populations will be needed.
Role of Colocasia esculenta L. schott in arsenic removal by a pilot-scale constructed wetland filled with laterite soil
The role of the plant Colocasia esculenta L. schott (C. esculenta) in arsenic removal was investigated in a pilot-scale constructed wetland (PCW) filled with laterite soil (19.90–28.25% iron by weight). The PCW consists of 2 sets of flow systems in parallel, with C. esculenta planted at a density of 20 plants/m2 in one system and the other without any plants. The synthetic water contained an arsenic concentration of 0.50 mg/l, with its pH controlled at 7.0 and the influent flow at 1.5 m3/day. With C. esculenta, the arsenic in water decreased from 0.485 mg/l to 0.054 mg/l (89% removal), whereas, without C. esculenta, the arsenic decreased from 0.485 mg/l to 0.233 mg/l (52% removal). As for the fate of the influent arsenic, C. esculenta was responsible for 65% of the arsenic accumulation. Note that the arsenic was found mostly within the root zone depth (20–40 cm). It appears that such a high capacity of arsenic removal was enhanced both by the plants through rhizostabilization and by the iron-adsorption process within the laterite soil bed. In addition, the arsenic removal was observed to increase with time from 30 to 90 days; it reached a maximum around 90 days and then decreased by 122 days. Thus, the arsenic removal efficiencies and mechanisms found here can be applied in the design of constructed wetlands for arsenic treatment of gold mine drainage with similar site/soil characteristics.
Introduction
Constructed wetlands are known to provide a complex biological and physical environment, which can change the chemical nature of contaminants (Shi et al., 2018). According to the literature, arsenic can be removed in a wetland system by transforming arsenite (As(III)) to the less soluble form, arsenate (As(V)). Besides, arsenic may accumulate in the wetland sediment through precipitation, coprecipitation, and sorption (Lizama et al., 2011). These mechanisms remove arsenic from the aqueous phase by direct formation of insoluble arsenic complexes or by incorporation of trace amounts of arsenic into newly formed insoluble compounds (Henke and Hutchison, 2009).
Arsenic in nature coexists in mineral veins with other elements such as copper, manganese, lead, tin, silver, and gold. Mining of these minerals may release arsenic into the surrounding area. Inappropriate management of mining causing arsenic contamination has been reported in many areas around the world.
For example, the Wangsaphung district of Loei province in the northeast of Thailand is an area with naturally occurring arsenic-rich material. According to the report, the arsenic concentrations were 0.003–0.107 mg/l in the surface water, 0.001–0.130 mg/l in the groundwater and water supply wells, and 28.32–429 mg/kg in the sediment and soil (PCD, 2012). Interestingly, in this district, there exists a gold mining site, and a small natural wetland is nearby, namely Phu Lek Creek, which receives potentially arsenic-contaminated runoff from the mining site. As a result of long-term monitoring, it was reported that reduction of arsenic takes place after passing through this natural wetland (PCD, 2006–2010). Based on the survey of this study, the soil in this area is mostly laterite soil or red clay, at 0.2–0.4 m bed depth, which contains a high amount of iron. The laterite soil, originating from hematite (Fe2O3) and goethite (FeO(OH)), is capable of removing arsenic from water via chemical adsorption and precipitation because of its high content of iron (Ramaswami et al., 2001; Maiti et al., 2007).
Besides, the dominating plant species in this wetland is C. esculenta (taro), at a density of approximately 20 plants/m2. In 2011, a preliminary study was performed and the results showed that the arsenic in water was reduced through precipitation in the soil and uptake by plants in this natural wetland. This agrees with some reports, which describe that arsenite and arsenate may be removed through coprecipitation with iron oxyhydroxides (Fe(OH)3(s)) and iron-oxidizing bacteria (IOB) (Hedin et al., 1994; Emerson et al., 2010; Lizama et al., 2011). Specifically, in a low iron content environment, especially under acidic conditions, As(III) may precipitate as arsenopyrite (FeAsS) (Wilkin and Ford, 2006). In addition, aquatic plants can retain arsenic in the wetland through sorption onto the roots and submerged shoots, as well as translocation to emergent shoots and tips (Blute et al., 2004; Sundberg-Jones and Hassan, 2007). Furthermore, the plant roots can alter the chemical conditions of the surrounding sediment, thus enhancing the rate of transformation and fixation of metals (Wang and Peverly, 1999). Many aquatic plants in the wetland, including Typha latifolia (broadleaf cattail), translocate oxygen from the atmosphere to the rhizosphere via radial oxygen loss from roots (Doyle and Otte, 1997).
Therefore, in this study, it was attempted to elucidate the role of C. esculenta in the arsenic removal by a pilot-scale constructed wetland (PCW), which was filled with the local laterite soil. The operation of this PCW was designed to last for 122 days, and the arsenic contents were monitored in the phases of water, soil, and the plants.
Consequently, the role of selected plant species was identified and the relationship between arsenic in the laterite soil and in the plants was illustrated.
Laterite soil
The laterite soil filled in this PCW was taken from the area surrounding Phu Lek creek within a 1 km radius of the gold mine area. The soil sample was collected at a bed depth of 15–30 cm, air-dried for 7 days, and cleared of debris before being installed in the PCW. The soil sample was characterized for both physical and chemical properties, namely particle size, Eh, pH, organic matter, and chemical composition.
Plant material
C. esculenta seedlings were collected at a height of 10 cm from Phu Lek creek. After that, the seedlings were moved and cultured in the greenhouse for 15 days. The seedlings (approximately 15 cm in size) that grew in the greenhouse were then transplanted into the PCW experimental plot.
Note that the rootlets were removed from the seedlings and the stalks were cut to approximately 10 cm in order to induce new rootlets and new leaves, respectively. The 10 cm C. esculenta stalks without rootlets were planted in the three planted PCW units at 22 plants/unit (a density of 20 plants/m2) for another 15 days. After these 15 days, the experiments were started by pumping the arsenic-contaminated water into the PCW systems.
Pilot-scale constructed wetland
The pilot-scale constructed wetland setup consists of 2 sets with triplicate units each (3 PCW units and 3 control units), with the dimensions of each unit being 1.80 × 0.50 × 0.60 m, as illustrated in Fig. 1. To determine the effect of the laterite soil on arsenic removal, the first set of the PCW was filled with a 0.4 m bed height of laterite soil without any aquatic plants planted in it. The second set of the PCW was constructed with plants at a density of 20 plants/m2 and laterite at 0.4 m of bed height (based on the results of the preliminary study in Phu Lek creek). The 2 sets of PCW were placed in the greenhouse in order to minimize the impact of rainfall. The dimensions of each basic unit were designed to allow adequate contact time and sufficient space for plant growth (Yeh et al., 2009; Aksorn and Visoottiviseth, 2004).
The wetland bed was installed with a liner of polyethylene plastic in order to prevent both water infiltration and adsorption of arsenic onto the surface of the water flow system (Stottmeister et al., 2006). The experimental period in this study was set to 4 months to ensure that the C. esculenta grew long enough to provide the best performance of arsenic removal. The greenhouse was installed in an open area with proper airflow. The roof of the greenhouse was constructed using a 6 mm clear durable polyethylene plastic sheet to allow light similar to the outside environment. The main functions of the greenhouse were to keep rainwater out of the experimental plots and to protect against contamination from outside soil. Other conditions in the greenhouse were similar to the outside environment, namely airflow, sunlight, humidity, etc. The experiments were carried out during the rainy season (May–Oct. 2017). In the operation of the PCW, it was fed continuously with arsenic-containing water, with the arsenic concentration prepared at 0.50 mg/l, the solution pH adjusted to 7, and a constant flow rate controlled at 1.5 m3/day. Note that these conditions were reproduced from those of the nearby natural wetland system. The influent water was prepared and stored in a 3,000 L fiberglass container for use throughout the experiment. This container was installed at an elevated level to provide the desired gravity flow of the influent by adjusting the control valve.
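For reference, the nominal hydraulic retention time (HRT) per unit implied by these dimensions and the 1.5 m3/day flow can be back-calculated as below. The bed porosity is our assumption (it is not reported); a value of about 0.6 reproduces the ~3.44 h detention time quoted later for the planted units.

```python
# Nominal hydraulic retention time (HRT) of one wetland unit:
# HRT = wetted volume / flow = (L * W * bed depth * porosity) / Q
L, W, depth = 1.80, 0.50, 0.40        # m (unit footprint, laterite bed depth)
Q = 1.5 / 24.0                        # m^3/h (1.5 m^3/day influent)
porosity = 0.60                       # assumed effective porosity of the bed

hrt_h = (L * W * depth * porosity) / Q
print(f"HRT ~ {hrt_h:.2f} h")         # ~3.46 h, close to the reported 3.44 h
```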
Sampling and analyses
Water samples (1,000 mL) were collected daily at the inflow and outflow by the grab sampling method at the locations shown in Fig. 1.
Samples were acidified with HNO3 to pH < 2 and stored at 4 ± 0.5 °C until being analyzed for metal concentrations with an ICP Optima 2100 DV (Perkin Elmer, U.S.A.).
The bed soil samples were collected at 4 different depths at the center of each unit (0–10, 10–20, 20–30 and 30–40 cm). Soil was collected by core sampling at the sediment surface (0–20 cm). Samples were air-dried, sieved, then dried in an oven at 105 °C for 24 h, weighed, and digested to solution. Digestion was performed with HNO3:HClO4 (1:3, v/v). Samples of plants and soil were taken monthly. Plants were collected at the center of each unit. Plant samples were washed to remove clay and sand particles, and then dried in an oven at 105 °C for 24 h to a constant weight. The dry weight was measured. Dried samples were ground to a fine powder with a ceramic mortar. The digestion method and chemicals used were the same as for the sediment digestion mentioned above.
All samples were prepared and analyzed at the Science Center Laboratory, Loei Rajabhat University. After being digested, arsenic and iron in solution were analyzed using Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES; Perkin Elmer, Optima 8000), located in the laboratory of the Center for Scientific and Technological Equipment, Suranaree University of Technology. The details of the methods for sampling and analysis are depicted in Table 1.
The translocation factor (TF) reflects the ability of plants to translocate arsenic to the plant's aerial parts (stems and leaves) (Marchiol et al., 2004; Wang and Peverly, 1999; Vanlop T., 2018). TF is the ratio of the arsenic concentration in above-ground plant tissues (foliage and leaf stalk) to the arsenic concentration in the rootlets, and was calculated using Eq. (2):

TF = (As_above / As_rootlets) × 100  (2)

where As_above is the arsenic concentration in above-ground plant tissues (sum of concentrations in foliage and leaf stalk; mg/kg, plant dry weight) and As_rootlets is the arsenic concentration in the rootlets (mg/kg, plant dry weight).
The bioconcentration factor (BCF) reflects the ability of plants to accumulate arsenic. It is the ratio of the arsenic concentration in plant parts (foliage, leaf stalk, rootlets and rhizome) to the arsenic concentration in the soil (Liu et al., 2014; Mac Farlane et al., 2007; Wu et al., 2015; Vanlop T., 2018), and was calculated using Eq. (3):

BCF = (As_plant / As_soil) × 100  (3)

where As_plant is the arsenic concentration in plant tissue (sum of arsenic concentrations in foliage, leaf stalk, rootlets and rhizome; mg/kg, plant dry weight) and As_soil is the arsenic concentration in the sediment (mg/kg).
Concerning the ability of arsenic accumulation (AC), it is defined from the arsenic concentrations in the laterite soil with and without plant installation (Vanlop T., 2018), as expressed in Eq. (4):

AC (%) = [(As_(wp) − As_(wo)) / As_(wp)] × 100  (4)

where As_(wp) is the arsenic concentration in the laterite soil with plants (mg/kg) and As_(wo) is the arsenic concentration in the laterite soil without plants (mg/kg).
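For clarity, Eqs. (2)-(4) can be expressed directly as the following helper functions; the example call uses the Day-90 soil arsenic values reported later in the text, and simply illustrates Eq. (4) rather than reproducing the study's overall 65% AC figure.

```python
def translocation_factor(as_foliage, as_leaf_stalk, as_rootlets):
    """TF (%) per Eq. (2): above-ground arsenic relative to rootlet arsenic."""
    return (as_foliage + as_leaf_stalk) / as_rootlets * 100

def bioconcentration_factor(as_foliage, as_leaf_stalk, as_rootlets,
                            as_rhizome, as_soil):
    """BCF (%) per Eq. (3): whole-plant arsenic relative to soil arsenic."""
    return (as_foliage + as_leaf_stalk + as_rootlets + as_rhizome) / as_soil * 100

def accumulation_ability(as_with_plants, as_without_plants):
    """AC (%) per Eq. (4): share of soil arsenic attributable to the plants."""
    return (as_with_plants - as_without_plants) / as_with_plants * 100

# Day-90 average soil arsenic (mg/kg) reported in the Results: 111.98 with
# plants, 56.67 without plants.
print(f"AC = {accumulation_ability(111.98, 56.67):.1f}%")  # ~49.4%
```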
Statistical analysis
All statistical data analysis was performed using SPSS v.17.0 (IBM Corp., Armonk, NY, USA). The measured data are expressed as means ± standard deviation (SD). Comparisons between groups were performed with the t-test and analysis of variance (one-way ANOVA), where a value of P < 0.05 was considered statistically significant. Quality assurance (QA) and quality control (QC) were applied in the planning, sampling, analysis and reporting of data in all processes throughout this study.
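As a minimal sketch of the group comparison described above (with hypothetical triplicate outflow values centered on the means reported below, since the raw replicate data are not given):

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate outflow arsenic concentrations (mg/l) for the
# planted and unplanted units, around the reported means.
with_plants    = np.array([0.054, 0.060, 0.049])
without_plants = np.array([0.233, 0.240, 0.226])

t, p = stats.ttest_ind(with_plants, without_plants)
print(f"t = {t:.2f}, p = {p:.4f}  (significant if p < 0.05)")
```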
Soil and water characterization
In this study, the characteristics of the PCW bed soil are depicted in Table 2. The composition of the installed soil was mostly coarse sand and clay, with a particle size range of 0.025–2.20 mm. It was slightly acidic, since the pHzpc (defined as the pH of zero point charge of the soil) fell within the range of 4.80–6.23. According to this study, the soil was characterized as laterite soil or red clay containing a relatively high content of iron (19.90–28.25%). As reported, the major forms of iron in laterite soil are hematite (Fe2O3), magnetite (Fe3O4) and pyrite (FeS2) (Mutembei, 2013). Besides, a high content of aluminum (~24%) was also measured for the soil applied in this PCW.
The results of the water sample analyses are shown in Table 3, which summarizes the water quality variables monitored at the inflow and outflow of each unit in this PCW, depending on the presence and absence of plants. With the plants, the pH was 6.68–7.05 at the inflow and 6.75–7.32 at the outflow. This indicates that the water in the PCW system was in a neutral condition. Also, the data for both Eh (236.10–422.20 mV) and DO (4.21–5.42 mg/l) implied an oxidation condition of the water. The decreases of both EC and TDS at the outflow indicate that inorganic ions in the water were adsorbed by the bed soil. In addition, the DOC increased from 1.85–2.34 mg/l at the inflow to 4.50–6.41 mg/l at the outflow.
The reason might be its release from the bed soil (organic matter content of 1.26–1.98%) and the plants. Furthermore, the sulfate concentrations at the inflow and outflow were less than 0.01 mg/l, whereas the iron concentration was less than 0.01 mg/l at the inflow and 0.07–1.24 mg/l at the outflow. This demonstrates that some iron was desorbed from the bed soil into the water stream. Interestingly, the arsenic content in water decreased from 0.485 mg/l at the inflow to 0.087–0.139 mg/l at the outflow. In other words, the arsenic was removed by 71–98% over the detention time of 3.44 h in each unit.
Without the plants, similar to the case with the plants, a neutral condition of the water was observed at both the inflow (pH = 6.85–7.07) and outflow (pH = 6.88–7.05), and an oxidation condition was monitored based on the Eh of 223.78–352.60 mV and the DO of 4.05–4.80 mg/l. Besides, both EC and TDS dropped between the inflow and outflow, implying that inorganic ions in the water were adsorbed onto the bed soil. As for the DOC, it decreased from 1.70–2.01 mg/l at the inflow to <0.01 mg/l at the outflow. The sulfates in water were less than 0.01 mg/l at both the inflow and outflow. On the other hand, the iron content increased from less than 0.01 mg/l at the inflow to 0.15–0.40 mg/l at the outflow. In contrast to the case with the plants, the arsenic in water decreased from 0.485 mg/l at the inflow to 0.137–0.317 mg/l at the outflow. That is to say that, without the plants, the arsenic was removed by 35–72% over the detention time of 5.45 h in each unit, which is significantly lower than the case with the plants in terms of arsenic removal efficiency.
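The removal percentages quoted above and in the abstract follow from a simple mass-balance ratio of inflow and outflow concentrations, e.g.:

```python
def removal_efficiency(c_in, c_out):
    """Percent arsenic removed between inflow and outflow."""
    return (c_in - c_out) / c_in * 100

# Reported inflow/outflow means (mg/l):
print(f"with plants:    {removal_efficiency(0.485, 0.054):.1f}%")  # ~88.9%
print(f"without plants: {removal_efficiency(0.485, 0.233):.1f}%")  # ~52.0%
```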
Arsenic distribution within the bed soil
According to this study, the arsenic content in the bed soil (laterite) was 0.06–100.12 mg/kg in the presence of the plants and, without the plants, 0.06–54.53 mg/kg. It appears that the arsenic accumulation within the bed soil was significantly different with and without the plants. As understood, the removal of arsenic was due to co-precipitation and sorption onto the iron oxides. As mentioned earlier in the soil characterization (see Table 2), the iron content in the laterite soil was as high as 19.90–28.25%. In addition, the PCW condition was in the oxidation state, with Eh = 223.78–352.60 mV, DO = 4.05–4.80 mg/l, and DOC = 4.70–6.45 mg/l. Hence, it was very possible that the arsenic in the form of H2AsO4− tends to precipitate with iron to form FeAsO4(s) under the oxidation state of water (Bang et al., 2005; Kadlec and Wallace, 2009). On the other hand, as presented in Table 4, the arsenic content in the bed soil was time-dependent (p < 0.05). With the plants, the average arsenic content increased with time until it reached its maximum (111.98 mg/kg) at Day 90, and then decreased to 100.12 mg/kg at Day 122. A similar pattern was observed in the absence of the plants: the average arsenic content increased to a maximum (56.67 mg/kg) at Day 90, and then dropped to 54.53 mg/kg at Day 122.
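A simplified stoichiometric sketch of the iron-arsenate precipitation invoked above is given below; the assumption that free ferric iron reacts with the monovalent arsenate species is ours, for illustration only:

$$\mathrm{Fe^{3+} + H_2AsO_4^{-} \longrightarrow FeAsO_4(s) + 2\,H^{+}}$$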
It's also interesting to point out that the arsenic content at different depths was time-dependent. Fig. 2 shows the arsenic content profiles at different depths. With the plants, there was no significant difference in arsenic content between depths at Day 0 (0.06–0.07 mg/kg). Yet, over time, the arsenic started to move and accumulate at lower depths of the bed soil. Mostly, the arsenic accumulated at the depth of 10–20 cm (root zone). Lin et al. (2015) reported that the vertical distribution of arsenic content in wetland bed soil is controlled by the distribution of adsorbents, arsenic deposition and biogeochemical processes.
The emergent plant rootlet and rhizome can stabilize heavy metals around their tissue via rhizostabilization in the presence of rhizospheric microbes (Kumar et al., 2017). Without the plants, at the beginning of the experiment (Day 0), the arsenic concentration showed no significant difference among depths. Over time, the arsenic was transported to lower depths of the bed soil and consequently accumulated mostly at the depth of 0–10 cm. Note that arsenic accumulation at lower depths might also occur through transport with water, with arsenic remaining within the soil pores.
Arsenic distribution within the plants
To understand the arsenic distribution within the plants, the plants were harvested monthly and analyzed for their arsenic contents in various parts, including foliage, leaf stalk, rootlet and rhizome. As shown in Fig. 3, it can be seen clearly that the arsenic content was significantly high in the rootlets for all samples. The arsenic content was in the following order: rootlet > rhizome > foliage > leaf stalk. The arsenic contents of the four different parts were found to increase with time up to 90 days, and then started to decrease. The C. esculenta plants used are emergent biennials. According to this study, the plants reached their maximum growth after two months, and they started to lose their leaves after 3 months. Visual changes of the above-ground mass were observed, which might be due to the toxicity of heavy metals. These results agreed with the report by Bindu et al. (2010), who described that C. esculenta exposed to lead and chromium showed a decreased ability to accumulate metals and started to lose its above-ground mass as the metal content increased.
Furthermore, both BCF and TF increased with time and started to decrease after 90 days, as depicted in Table 5. This result is in agreement with the reports by Ye et al. (2003) and Singhakant et al. (2009), who concluded that arsenic is taken up more by the plant root than by the shoot.
Role of laterite soil and plant
Based on the outcomes of this study, the possible roles that the laterite bed soil and the plants played in removing arsenic are elaborated below, together with the factor of time in the system.
Role of laterite soil
As presented in Table 3, the arsenic removal by the laterite soil alone was 35–72% in the absence of the plants. This demonstrates that the laterite soil was effective in arsenic removal via co-precipitation and sorption onto the iron oxides (Jahan et al., 2010; Maiti et al., 2007; Maji et al., 2008; Canales et al., 2012). The dominant species of arsenic under such experimental conditions as pH = 6.75–7.32 and Eh = 223.78–401.25 mV will be arsenate (HAsO4 2−). Under such an oxidation condition, the arsenate could be precipitated with iron to form FeAsO4(s) (Bang et al., 2005; Kadlec and Wallace, 2009). In addition, the surface of the laterite soil particles was positively charged (pHZPC = 4.80–6.23). According to Maji et al. (2007), under the condition of a positively charged environment, the arsenic adsorbed onto laterite soil is mostly retained by coulombic and van der Waals forces between the solute and the laterite soil surface.

Role of plant

Both the bioconcentration factor and the translocation factor indicate that arsenic is taken up more by the plant roots than by the shoots. As reported, plants can retain arsenic in the wetland through sorption onto roots and submerged shoots, and through translocation to emergent shoots (Blute et al., 2004; Sundberg-Jones and Hassan, 2007). Since C. esculenta is a non-hyperaccumulator, sorption onto such plants plays a minor role.
A comparison of arsenic removal in the presence and absence of the plants is shown in Fig. 5. Obviously, higher arsenic removal was observed in the presence of the plants. It appears that the capacity of arsenic accumulation (AC) depends greatly on the plants, the arsenic content and the duration of operation.
Notably, the results indicated that the plants enhance the transformation and fixation of arsenic in the soil. The mechanisms by which C. esculenta enhances arsenic accumulation in the soil can be explained in three aspects. Firstly, the enhancement may occur through physical effects of the roots, such as filtering, flow reduction, increased sedimentation and decreased resuspension (Stottmeister et al., 2006; Vymazal, 2011). Secondly, the plant roots can release oxygen into the rhizosphere (Vymazal, 2011). Note that the wetland condition can enhance the development of iron-oxidizing bacteria by oxygen relocation into the rhizosphere. Such a condition also provides an oxidizing environment for the precipitation process within the laterite soil bed (Niu et al., 2007; Shelef et al., 2013). In this study, with the plants, the highest arsenic accumulation in the unit occurred at the depth of 10–20 cm (root zone), whereas it was at the depth of 0–10 cm in the absence of the plants. Lastly, the promoting effect may occur through the roots acting as surfaces for precipitates, thus retaining the arsenic that co-precipitates with iron as FeAsO4(s) around the root zone (Wang and Peverly, 1999; Blute et al., 2004). In addition, the plant root system, as stated in the second and third aspects, can stabilize heavy metals via rhizostabilization in the presence of rhizospheric microbes (Singhakant et al., 2009; Lizama et al., 2011; Vymazal, 2011; Kumar et al., 2017).
Conclusion
The role of the plant in arsenic removal was investigated in a pilot-scale constructed wetland. The results showed that arsenic in water decreased from 0.485 to 0.054 mg/l in the cells with plants and from 0.485 to 0.233 mg/l in the cells without plants. The arsenic removal efficiency was significantly different between cells with plants (88.77%) and cells without plants (52.06%). The constructed wetland system with laterite soil and C. esculenta removed arsenic more effectively than laterite soil alone, with an arsenic accumulation ability (AC) attributable to C. esculenta of 65.13%. This high enhancement by the plant might be due to rhizostabilization and increased oxidation in the precipitation process in the laterite soil, since the arsenic was found mostly at a depth of 20–40 cm, which is the root zone depth. Removal efficiency increased with time from 30 to 90 days, reached an optimum around 90 days, and then decreased by Day 122. From the plant analysis, the order of the bioconcentration factor (BCF) was as follows: rootlet (0.28–0.80), rhizome (0.15–0.21), foliage (0.17–0.38), leaf stalk (0.00–0.26). The order of the translocation factor (TF) was as follows: foliage/rootlet (0.00–0.60), leaf stalk/rootlet (0.00–0.40). Design criteria for the constructed wetland were set according to our experimental pilot scale. The pilot-scale constructed wetland was effectively applied for arsenic removal using C. esculenta (p < 0.05). The design criteria are summarized in Table 6.
"year": 2019,
"sha1": "d40633cf34b77bcf0dac9e4ec3c966295e27d493",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844018364193/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d40633cf34b77bcf0dac9e4ec3c966295e27d493",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Impact of Methyl-β-Cyclodextrin and Apolipoprotein A-I on The Expression of ATP-Binding Cassette Transporter A1 and Cholesterol Depletion in C57BL/6 Mice Astrocytes
Objective Dysregulation of cholesterol metabolism in the brain is responsible for many lipid storage disorders, including Niemann-Pick disease type C (NPC). Here, we have investigated whether cyclodextrin (CD) and apolipoprotein A-I (apoA-I) induce the same signal to inhibit cell cholesterol accumulation, focusing on the main proteins involved in cholesterol homeostasis in response to CD and apoA-I treatment. Materials and Methods In this experimental study, astrocytes were treated with apoA-I or CD and then lysed in RIPA buffer. We used Western blot to detect protein levels of 3-hydroxy-3-methyl-glutaryl coenzyme A reductase (HMGCR) and ATP-binding cassette transporter A1 (ABCA1). Cell cholesterol content and cholesterol release into the medium were also measured. Results ApoA-I induced a significant increase in ABCA1 and a mild increase in HMGCR protein level, whereas CD caused a significant increase in HMGCR with a significant decrease in ABCA1. Both apoA-I and CD increased cholesterol release into the medium. A mild but not significant increase in cell cholesterol content was seen with apoA-I; however, a significant increase in cell cholesterol was detected when the astrocytes were treated with CD. Conclusion CD, like apoA-I, depletes cellular cholesterol. This depletion occurs in a different way from that of apoA-I, which acts through cholesterol efflux. Depletion of cell cholesterol with CDs led to reduced protein levels of ABCA1 along with increased HMGCR and accumulation of cell cholesterol. This suggests that CDs, unlike apoA-I, could impair the balance between cholesterol synthesis and release, and interfere with cellular functions that depend on ABCA1.
Introduction
Beta-cyclodextrin (β-CD) is reported to be effective in the exit of cholesterol from the plasma membrane (1, 2); however, relatively few studies have investigated its mechanism of action in influencing either in vivo or in vitro cholesterol metabolism, especially in diseases such as Niemann-Pick disease type C (NPC). A number of candidate proteins involved in cholesterol synthesis/trafficking and efflux have been introduced. In this research, we focused on two proteins of this type, ATP-binding cassette subfamily A member 1 (ABCA1) as the main protein for cholesterol efflux and 3-hydroxy-3-methyl-glutaryl coenzyme A reductase (HMGCR) as an important and rate-limiting enzyme in cholesterol synthesis (3).
There is increasing evidence that deregulation of lipoprotein and/or lipid metabolism is coupled to the progression of neurodegenerative diseases like Alzheimer's disease (AD) and NPC (4,5). Cholesterol is a primary lipid that regulates brain cell structure and function during the developmental period and adult life (4). The blood brain barrier (BBB) separates the brain's cholesterol metabolism from the periphery (6); therefore, maintaining the steady-state content of cholesterol in the brain is of particular importance for its physiological function (4). HMGCR acts as a rate-limiting enzyme in cholesterol synthesis and is the primary site of feedback regulation in the biosynthesis of cholesterol (7). ABCA1, a member of the ATP-binding cassette transporters family, is responsible for the majority of cholesterol efflux to deliver cholesterol to an acceptor like apolipoprotein A-I (apoA-I) for high-density lipoprotein (HDL) generation (8). There is abundant evidence that ABCA1-mediated cholesterol efflux to apoA-I can occur at the plasma membrane (9). Thus, the mentioned enzymes are targets of the highly successful blood cholesterol-lowering drugs, and their inhibition is a rapid mechanism for switching off cholesterol synthesis. Altered brain lipid metabolism, such as that of cholesterol, has been implicated in the progression of neurodegenerative diseases like NPC and AD (10). Cholesterol reduction in experimental animal models delays the progression of Alzheimer's pathology. These findings raise the possibility that treating humans with cholesterol-lowering medications might reduce the risk of developing AD (11). Similarly, it has been reported that the loss of cholesterol shuttling in NPC disease is associated with reduced activity of ABCA1, which is responsible for low HDL cholesterol levels in NPC patients (12).
ApoA-I, a natural cholesterol lowering agent, is one of the main apolipoproteins in the brain. It is an HDL cholesterol transporter that prevents brain cholesterol deposition and holds neuroprotective properties. Decreased serum HDL cholesterol and apoA-I concentration is shown to be highly correlated with AD severity (13). In the human brain, an association has been found between apoA-I with amyloid beta deposits; complexes between apoA-I and amyloid beta can be detected in cerebrospinal fluid (CSF) from AD patients (14).
Cyclodextrins (CDs), namely synthetic cholesterol lowering agents, are a family of cyclic polysaccharide compounds widely used to bind cholesterol. The use of CDs, in particular β-CDs, is increasing in biomedical research because they are able to interact with cell membranes and are known to extract cholesterol and other lipids from these membranes (15). β-CD is a biologically active molecule, and studies have shown that β-CD and its derivatives significantly reduce intracellular cholesterol levels in NPC mutants (16). CDs may also be useful for AD because of intriguing parallels between NPC1 and AD, including neurofibrillary tangles and prominent lysosome system dysfunction (17).
β-CD has been reported to play a role in cholesterol exit from the plasma membrane (1) but relatively few studies have dealt with its mechanism of action to influence in vivo or in vitro cholesterol metabolism, especially in certain diseases such as NPC (18,19). There are a number of candidate proteins implicated in cholesterol synthesis/ trafficking and efflux. Here we focused on two of them: ABCA1, as the main protein of cholesterol efflux, and HMGCR as an essential rate-limiting enzyme in cholesterol synthesis. In the present study, we used a cell culture model to elucidate and compare the mechanism of CD-mediated cholesterol depletion with apoA-I mediated cholesterol efflux from astrocytes through investigating the protein expressions of ABCA1 and HMGCR.
Primary isolation and culture of astrocytes
In this experimental study, 18 mice were housed in a temperature-controlled room (24 ± 1˚C) under 12 hours light/dark conditions with free access to food and water. The mice were fed with a standard commercial chow diet and water for a week to stabilize their metabolic condition. The animal procedures were in accordance with the guidelines for animal care prepared by the Committee on Care and Use of Laboratory Animal Resources, National Research Council (USA), and approved by the Institute of Animal Ethics Committee (IAEC) in Ahvaz Jundishapur University of Medical Sciences (AJUMS) for the Purpose of Control and Supervision of Experiments on Animals (IR.AJUMS.REC.1395.637). Astrocytes were isolated from P0 C57BL/6J wild-type mice based on a previously described protocol (20). Briefly, after brain dissection and removal of the meninges, the minced brain pieces were incubated with 0.1% trypsin solution in Dulbecco's phosphate-buffered saline (DPBS) for 3 minutes at 37˚C to obtain single cells. The cell suspension was centrifuged at 1000 rpm for 1 minute and the cell pellet was cultured in DMEM, low glucose + 10% FBS + 1% penicillin/streptomycin for one week for the primary culture and a subsequent week for the secondary culture (21).
Experimental design and treatment
Astrocytes were plated at a density of 3×10 6 in DMEM/10% FBS medium, incubated at 37˚C and 5% CO 2 , and allowed to adhere. Astrocytes that were 75% confluent were treated with 5 µg/ml apoA-I or 5 µM beta-cyclodextrin for 24 hours. Vehicle-treated cells were used as the control dish.
Immunoblotting
An equal amount of proteins (150 µg protein/lane) in the cell lysate were separated by 10% sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a polyvinylidene difluoride membrane. Bands of HMGCR and ABCA1 were detected after overnight immunostaining of the membrane with specific primary antibodies against HMGCR (1:5000 dilution, Abcam) and ABCA1 (1:2000 dilution, Invitrogen), followed by a subsequent incubation for 2 hours with the corresponding HRP-conjugated anti-IgG (1:4000 dilution, Sigma) as secondary antibodies. Rabbit anti-GAPDH (1:4000 dilution, Abcam) was used as an internal control for equal loading, and immunoreactive proteins were quantified with enhanced chemiluminescence (ECL) reagent followed by densitometric analysis with ImageJ software.
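A typical way to turn such densitometric readings into the fold-change values reported below is to normalize each band to its GAPDH loading control and then to the control lane. The band densities in this sketch are hypothetical (chosen to echo the ~52% ABCA1 increase with apoA-I reported later), not measured values.

```python
import numpy as np

def fold_change(target, loading_control, control_index=0):
    """Normalize band densities to the loading control (e.g. GAPDH) and
    express each lane relative to the control lane."""
    target = np.asarray(target, dtype=float)
    loading_control = np.asarray(loading_control, dtype=float)
    norm = target / loading_control
    return norm / norm[control_index]

# Hypothetical ImageJ band densities (arbitrary units): control, apoA-I, beta-CD
abca1 = [1200, 1850, 610]
gapdh = [1000, 1020, 980]
print(fold_change(abca1, gapdh))  # ~[1.00, 1.51, 0.52]
```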
Extraction of lipid from astrocytes
To determine the cellular cholesterol content, the culture medium was removed and the cells were washed with DPBS. Next, the cell plates were dried with a dryer. We added 1.5 ml of hexane: isopropanol (3:2) solution to each culture plate to extract lipids by shaking the samples for 1.5 hours at room temperature. Then, the supernatant was transferred to a tube and this step was repeated with the same volume of hexane: isopropanol (3:2) for another hour. After evaporating the organic solvent in a 40˚C water bath under nitrogen gas, the dried lipids were dissolved in 200 µl cholesterol assay buffer and vortexed until the mixture was homogenized and stored at -20˚C for further cholesterol assay.
Cholesterol assay in cell and conditioned media
We determined the cholesterol content of the astrocytes and conditioned media based on the protocol presented in the Sigma cholesterol quantitation kit (MAK043-1KT). Briefly, a set of cholesterol standards were prepared by diluting 2 µg/µl stock solution of standard cholesterol provided with the kit. Reaction mixtures were set up according to the kit's protocol and the absorbance of samples was measured at 570 nm. All samples and standards were run in triplicate and the cholesterol content of the samples was determined from a standard curve.
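The standard-curve read-back described above amounts to a linear fit and its inversion; the absorbance values below are hypothetical, as the actual curve is not reported.

```python
import numpy as np

# Hypothetical standard curve: absorbance at 570 nm vs cholesterol (ug/well)
std_chol = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
std_abs  = np.array([0.02, 0.11, 0.20, 0.29, 0.38, 0.47])

slope, intercept = np.polyfit(std_chol, std_abs, 1)  # linear fit A = m*c + b

def cholesterol_from_absorbance(a570):
    """Invert the standard curve to obtain cholesterol content."""
    return (a570 - intercept) / slope

print(f"{cholesterol_from_absorbance(0.25):.2f} ug")  # sample read-back
```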
Statistical analysis
Statistical analysis of this experimental study was performed with SPSS (version 18) software. Descriptive statistics presented data as mean ± SD and analysis of variance (ANOVA) was used to check significant differences between groups in the results from Western blotting analysis. In all triplicate experiments, significant differences were noted at * P≤0.05 and ** P≤0.01.
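A minimal sketch of the ANOVA step described above, applied to hypothetical normalized densitometry values from triplicate experiments, with the significance thresholds used in this study:

```python
import numpy as np
from scipy import stats

# Hypothetical normalized densitometry values from triplicate experiments
control = np.array([1.00, 0.95, 1.05])
apoa1   = np.array([1.52, 1.46, 1.58])
bcd     = np.array([0.55, 0.49, 0.60])

f, p = stats.f_oneway(control, apoa1, bcd)   # one-way ANOVA across groups
stars = "**" if p <= 0.01 else "*" if p <= 0.05 else "ns"
print(f"F = {f:.2f}, p = {p:.4f} ({stars})")
```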
Characterization of astrocytes
In the previous study, astrocytes isolated by the same method were characterized immunohistochemically with specific anti-glial fibrillary acidic protein (GFAP) antibody. The results showed that the cellular population contained 95% GFAP-positive cells, which are a marker for astrocyte characterization (20,21). No morphology changes were detected before and after treatment (Fig. S1). (See Supplementary Online Information at www.celljournal.org).
Effects of apolipoprotein A-I and beta-cyclodextrin on protein levels of 3-hydroxy-3-methyl-glutaryl coenzyme A reductase
In order to check the effect of apoA-I and β-CD on the protein level of HMGCR, which is the main rate-limiting enzyme involved in cholesterol synthesis, we treated the cultured astrocytes with 5 µg/ml of apoA-I or 5 µM of β-CD for 24 hours. Once the cells were harvested, cell lysates were subjected to SDS-PAGE and HMGCR was detected by western blot. As indicated in Figure 1, both apoA-I and β-CD increased the protein level of HMGCR, although the increase was significant only for β-CD treatment, with a 51% increase in comparison to the control group (Fig. 1).
Effect of apolipoprotein A-I and beta-cyclodextrin on protein levels of ATP-binding cassette transporter A1
We sought to investigate the effects of β-CD and apoA-I on the protein level of ABCA1 as the main protein involved in cholesterol efflux. Cultured astrocytes were treated with 5 µg/ml of apoA-I or 5 µM of β-CD for 24 hours. Following cell lysis, the lysates were loaded onto SDS-PAGE and the protein level of ABCA1 was analysed by western blot. We found a significant increase in the ABCA1 protein (52%) after apoA-I treatment. However, β-CD significantly down-regulated the protein level of ABCA1 compared with the control group (Fig. 2).
Cholesterol content in the cell and conditioned medium
To determine the effect of apoA-I and β-CD on cholesterol release into the conditioned medium and on cellular cholesterol content, a quantitative cholesterol kit (Sigma) was used following treatment with 5 µg/ml of apoA-I or 5 µM of β-CD for 24 hours. Cholesterol from both cells and media was extracted and further measured based on the protocol provided in the Sigma quantitative kit for the three experimental groups. Figure 3A shows a significant increase of approximately 66% in cholesterol level in the conditioned medium when the astrocytes were treated with apoA-I. β-CD increased cholesterol release by approximately 24%, which was still significant.
Our western blot data showed a significant increase in HMGCR after the astrocytes were treated with either apoA-I or β-CD. We checked to see if the HMGCR enhancement caused an abundance of cholesterol by assessing the cell cholesterol content in the treated astrocytes. Results shown in Figure 3B indicated an increase in cell cholesterol level by both apoA-I (about 15%) and β-CD (about 33%) in astrocytes compared with the control group. However, this increase was significant for β-CD, but not apoA-I (Fig. 3B).
Discussion
Abnormal accumulation of intracellular cholesterol results from impaired cholesterol trafficking/efflux (22). In healthy cells there are pathways involved in cholesterol delivery to the extracellular acceptors like apoA-I to provide a balance between cholesterol synthesis, trafficking, and efflux. This process regulates the cell cholesterol content and is mediated by many proteins, including HMGCR and ABCA1 as the two pivotal members of cholesterol homeostasis (7,8). β-CD has been reported to be effective in regulating cholesterol metabolism (23), but relatively few studies have investigated its mechanism of action to influence in vivo or in vitro cholesterol metabolism, especially in the brain (24). The present study was carried out to investigate the effects of apoA-I, as a natural and well-established signal inducer for cell cholesterol homeostasis, and β-CD, as a cholesterol-lowering synthetic reagent, on protein levels of HMGCR and ABCA1 as a possible regulatory mechanism for cellular cholesterol depletion.
Based on many reports, it is worth noting that apoA-I signalling activates the entire cholesterol metabolic cycle in astrocytes through promotion of cholesterol synthesis/ trafficking, and its subsequent efflux in order to inhibit cellular cholesterol accumulation. Here, we first checked the apoA-I signalling on protein level of ABCA1, HMGCR, and on cell cholesterol content and release.
Our data showed that the ABCA1 protein level was significantly increased, and a mild increase in HMGCR was observed in astrocytes treated with apoA-I. Consistent with this finding, several studies have shown that apoA-I initially interacts with ABCA1 to generate HDL through promotion of cholesterol efflux (8). This interaction is believed to subsequently contribute to an increase in the cellular content of ABCA1, suggesting an effect of apoA-I on the stability of ABCA1 protein levels, which is in line with our results. HMGCR, along with cell cholesterol content and release, was up-regulated by apoA-I treatment, which suggested that the entire cell cholesterol pathway was under the control of apoA-I signalling in astrocytes. Astrocytes are the most abundant and supporting cells in the central nervous system (CNS). They should provide enough cholesterol to deliver it, in the form of HDL cholesterol, to the neurons (25). These results supported the findings of Ito et al., who reported increased synthesis of cholesterol and phospholipids in rat astrocytes after apoA-I treatment (26). β-CD, like apoA-I, is an acceptor for excess cell cholesterol (27); therefore, it is believed to be useful as a cholesterol-lowering medicine in some neurodegenerative diseases such as NPC to reduce cell cholesterol overload (19). Unlike the apoA-I effect, we observed an increased level of HMGCR and a decreased ABCA1 protein level in comparison to the control group in astrocytes treated with β-CD. In support of our findings, Coisne et al. reported a significant decrease of the ABCA1 protein level in β-CD-treated bovine smooth muscle cells (24). Also, compared to apoA-I and in agreement with our western blot data, we observed a reduction in cholesterol release in conditioned media of astrocytes treated with β-CD. This confirmed that ABCA1, which is the main protein responsible for cholesterol release, is affected by β-CD treatment.
In contrast to the report showing that CD treatment blocked cholesterol efflux (28), our data demonstrated that CD, which is a cholesterol acceptor, significantly increased cholesterol secretion into the conditioned media. β-CD possibly depletes cholesterol only from the plasma membrane, because the cell cholesterol content increased at the same time. Depletion of cholesterol from the plasma membrane may induce a positive feedback that increases HMGCR protein expression and results in increased cholesterol synthesis.
Overall, apoA-I regulates not only cholesterol efflux but also intracellular cholesterol trafficking and regulates all elements in cholesterol metabolism. However, due to the accumulation of cellular cholesterol, CD only releases cholesterol from the plasma membrane and does not support intracellular cholesterol trafficking. We have suggested that this regulation may be due to the decreased protein level of ABCA1 after CD treatment.
Since ABCA1 is involved in a variety of cell functions, its protein levels are tightly controlled by transcriptional and post-translational regulatory pathways (29). The cell cholesterol content in particular has a regulatory effect on ABCA1 abundance through the post-translational regulatory pathways. Although both apoA-I and β-CD are cholesterol acceptors that can deplete cell cholesterol (30) and increase cholesterol secretion into conditioned media, they have a different effect on ABCA1 abundance. Our findings suggest that, unlike apoA-I, β-CD lacks the ability to stabilize ABCA1, a crucial mediator of cholesterol efflux. Thus, it is likely that the action of β-CD inhibits ABCA1 signalling pathways, including cholesterol efflux, which results in abnormal cholesterol accumulation with long-term exposure (31).
Conclusion
Our study provides new evidence that β-CD, like apoA-I, can increase the HMGCR protein. Unlike apoA-I, it can reduce ABCA1, which may interfere with many cell functions and signalling that originate from ABCA1. Our findings are of great importance in the understanding of cellular events related to β-CD treatment. Further studies are necessary to clarify all unrecognized aspects of using CDs in treating neurodegenerative disorders like NPC and AD.
Gamma-glutamylcysteine synthetase and tryparedoxin 1 exert high control on the antioxidant system in Trypanosoma cruzi contributing to drug resistance and infectivity
Trypanothione (T(SH)2) is the main antioxidant metabolite for peroxide reduction in Trypanosoma cruzi; therefore, its metabolism has attracted attention for therapeutic intervention against Chagas disease. To validate drug targets within T(SH)2 metabolism, the strategies and methods of Metabolic Control Analysis and kinetic modeling of the metabolic pathway were used here to identify the steps that mainly control the pathway fluxes and that could be appropriate sites for therapeutic intervention. For that purpose, gamma-glutamylcysteine synthetase (γECS), trypanothione synthetase (TryS), trypanothione reductase (TryR) and the cytosolic tryparedoxin isoform 1 (TXN1) were separately overexpressed to different levels in T. cruzi epimastigotes, and their degrees of control on the pathway flux as well as their effects on drug resistance and infectivity were determined. Both experimental in vivo and in silico analyses indicated that γECS and TryS control T(SH)2 synthesis by 60-74% and 15-31%, respectively. γECS overexpression prompted up to a 3.5-fold increase in T(SH)2 concentration, whereas TryS overexpression did not render an increase in T(SH)2 levels as a consequence of high T(SH)2 degradation. The peroxide reduction flux was controlled 64-73% by TXN1, 17-20% by TXNPx and 11-16% by TryR. TXN1 and TryR overexpression increased H2O2 resistance, whereas TXN1 overexpression increased resistance to the benznidazole plus buthionine sulfoximine combination. γECS overexpression led to an increase in infective capacity, whereas that of TXN1 increased trypomastigote bursting. The present data suggest that inhibition of highly controlling enzymes such as γECS and TXN1 in the T(SH)2 antioxidant pathway may compromise the parasite's viability and infectivity.
1. Introduction
Trypanosoma cruzi is the etiological agent of human Chagas disease. The World Health Organization estimates that 6-7 million people, mainly in the Americas, are infected with this parasitic protist, with ≈7500 annual deaths, and that ≈70 million persons are at risk of becoming infected because they live in endemic regions [1,2,63,64]. Moreover, the disease has now also been found in non-endemic countries due to emigration of infected persons, with consequent non-vectorial transmission [2,3].
The current drugs available to treat the infection, benznidazole (Bnz) and nifurtimox, have several drawbacks: (i) high toxicity, which causes severe side effects [4]; (ii) lack of efficacy in the treatment of the chronic stage of infection [5]; and (iii) poor medical infrastructure: less than 1% of infected people have access to diagnostics and treatment [65]. Therefore, there is a need for new therapeutic strategies, safer and affordable drugs, and validated drug targets against Chagas disease [6,7]. Indeed, many T. cruzi enzymes and processes have been proposed as drug targets [7], including the trypanothione-dependent antioxidant pathway [8][9][10][11]. Trypanothione (T(SH)2) is a conjugate of two glutathione (GSH) molecules and one spermidine (Spd) molecule that replaces the antioxidant functions that GSH has in most cells, including mammalian ones [12]. The antioxidant system of T. cruzi is constituted by two modules, the T(SH)2 synthesis pathway (Fig. 1A) and the T(SH)2-dependent hydroperoxide reduction pathway (Fig. 1B). In the first one, cysteine (Cys) and glutamate (Glu) are covalently linked by gamma-glutamylcysteine synthetase (γECS) to form gamma-glutamylcysteine (γEC), which is then bound to glycine (Gly) by glutathione synthetase (GS), thus producing GSH. The other precursor, Spd, can be imported from the extracellular environment or synthesized from putrescine (Put) and decarboxylated S-adenosylmethionine (dAdoMet) by spermidine synthase (SpdS). Finally, trypanothione synthetase (TryS) synthesizes T(SH)2 by binding two GSH molecules to a Spd molecule [8].
The cytosolic enzymes belonging to the main hydroperoxide reduction pathway catalyze peroxide reduction and oxidized trypanothione (TS2) reduction. First, T(SH)2 reduces tryparedoxin 1 (TXN1), which then transfers its electrons either to tryparedoxin peroxidase (TXNPx), which has preference for H2O2 and short-chain alkyl/aryl hydroperoxide reduction, or to a TXN1-dependent non-selenium glutathione peroxidase-like enzyme (GPx), which has preference for long-chain alkyl peroxides, although it also uses other peroxides with one order of magnitude lower affinity [13,14]. These reactions produce TS2, which is reduced back to T(SH)2 by trypanothione reductase (TryR) using NADPH [8].
Abbreviations: Cys, cysteine; Glu, glutamate; γEC, gamma-glutamylcysteine; Gly, glycine; GSH, glutathione; Spd, spermidine; Put, putrescine; AdoMet, S-adenosyl methionine; dAdoMet, decarboxylated S-adenosyl methionine; T(SH)2, reduced trypanothione; TS2, oxidized trypanothione; ROOH, hydroperoxide; γECS, gamma-glutamylcysteine synthetase; GS, glutathione synthetase; TryS, trypanothione synthetase; SpdS, spermidine synthase; AdoMetDC, S-adenosyl methionine decarboxylase; PutT, putrescine transporter; SpdT, spermidine transporter; TryR, trypanothione reductase; TXN1, tryparedoxin isoform 1; TXNPx, tryparedoxin peroxidase; GPx, non-selenium glutathione peroxidase-type tryparedoxin peroxidase; OE, overexpressing; C^J_ai, flux control coefficient; t-butOOH, tert-butyl hydroperoxide; Wt, wild type; Bnz, benznidazole; BSO, buthionine sulfoximine.

Fig. 1. T. cruzi antioxidant pathway. (A) The trypanothione synthesis pathway starting from intracellular Cys. (B) The TXN1-dependent hydroperoxide reduction pathway. Metabolites are: Cys, cysteine; Glu, glutamate; γEC, gamma-glutamyl cysteine; Gly, glycine; GSH, glutathione; Spd, spermidine; Put, putrescine; AdoMet, S-adenosyl methionine; dAdoMet, decarboxylated S-adenosyl methionine; T(SH)2, reduced trypanothione; TS2, oxidized trypanothione; ROOH, hydroperoxide. Transporters and enzymes are: γECS, gamma-glutamylcysteine synthetase; GS, glutathione synthetase; TryS, trypanothione synthetase; SpdS, spermidine synthase; AdoMetDC, S-adenosyl methionine decarboxylase; PutT, putrescine transporter; SpdT, spermidine transporter; TryR, trypanothione reductase; TXN1, tryparedoxin 1; TXNPx, tryparedoxin peroxidase; GPx, glutathione peroxidase-type tryparedoxin peroxidase.

The arguments supporting the notion that T(SH)2 metabolism enzymes may serve as drug targets are: (i) TryS, TXN, TXNPx and TryR have no counterparts in the host; (ii) through gene expression manipulation, all the pathway enzymes (Fig. 1) have been proved to be essential in Trypanosoma brucei and Leishmania spp. (reviewed in [8,11,15]); (iii) TryR, the most intensively studied enzyme in drug-target design and screening studies, seems to be druggable [16][17][18]; and (iv) TryS as well as the mitochondrial and cytosolic TXNPx's have been proposed as virulence factors [19][20][21]. Nevertheless, their metabolic validation as potential sites for therapeutic intervention is still a pending experimental issue. In this regard, one approach is to determine the role that each enzyme has in controlling the T(SH)2 metabolism pathway, because inhibition of the most controlling pathway enzymes would affect the pathway function more than inhibition of enzymes exerting limited control (for a review, see [22,23]). Metabolic control analysis (MCA) is a theoretical and experimental framework for the study of the control and regulation of metabolic pathways [24,25]; it can be applied to identify suitable drug targets in the intermediate metabolism of parasites [22,23]. Experimental MCA studies in several microorganisms and mammalian cells have demonstrated that there is no single "rate-limiting" or "bottleneck" enzyme in metabolic pathways. Instead, they have shown that control of a metabolic pathway flux is shared among all the pathway enzymes/transporters, with only a few (2-3) steps showing the highest control [24,25]. MCA makes it possible to quantify the degree to which a metabolic pathway flux depends on the activity of each individual pathway step, a value called the flux control coefficient (C^J_ai), where J is the pathway flux and ai is the activity of pathway enzyme i. The sum of the positive C^J_ai (steps that favor the pathway flux) and the negative C^J_ai (flux-draining steps, e.g. pathway leaks) of all the pathway components should add up to one (summation theorem). An enzyme/transporter with a C^J_ai approaching one has a predominant (but not unique) role in determining the pathway flux, whereas enzymes with C^J_ai approaching zero have negligible control on it. Therefore, enzymes with high C^J_ai in essential metabolic pathways of the parasite are promising drug targets from a metabolic perspective [22,23,25].
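For reference, the quantities just described can be written compactly. These are the standard MCA definitions and the summation theorem [24,25], restated in symbols rather than a new result:

```latex
% Flux control coefficient of step i (activity a_i) on pathway flux J
C^{J}_{a_i} = \frac{\partial J}{\partial a_i}\cdot\frac{a_i}{J}

% Summation theorem: control is shared among all n pathway steps
\sum_{i=1}^{n} C^{J}_{a_i} = 1
```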
The C^J_ai are systemic properties, i.e. their determination requires a whole functional pathway, which is achieved only when all the involved players interact with each other. Hence, the C^J_ai cannot be determined by analyzing or manipulating an enzyme in isolation, either in vitro or in vivo. The C^J_ai can, however, be theoretically predicted by in silico kinetic modeling of the metabolic pathway. Using this latter strategy, it was previously predicted that γECS and TryS exert most of the control on the T(SH)2 synthesis flux, with C^J_ai values of 0.58-0.7 and 0.49-0.58, respectively, and with other steps displaying negative C^J_ai to preserve the summation theorem [26]. On the other hand, by in vitro reconstitution of the hydroperoxide-reducing pathway using the recombinant enzymes, it was predicted that TXN1 and either TXNPx or GPx are the steps that mostly control the hydroperoxide reduction flux, with C^J_ai values of 0.9 [14]. In contrast, TryR showed negligible control on both the T(SH)2 synthesis [26] and hydroperoxide reduction [14] fluxes.
The present work aims to expand the previous MCA studies and to test the in silico and in vitro predictions of the flux control distribution of T(SH)2 metabolism in T. cruzi by in vivo experimentation. To this end, the C^J_ai of γECS and TryS on the T(SH)2 synthesis flux, and of TXN1 and TryR on the peroxide reduction pathway, were determined by modulating the expression of these enzymes in the parasites and measuring the effects on the pathway fluxes. In addition, the previously reported metabolic model of T(SH)2 synthesis [26] was updated and expanded with additionally determined kinetic data, and a new kinetic model of the peroxide reduction pathway was constructed. Furthermore, correlations between the degree of control of the pathway enzymes and Bnz and peroxide resistance, as well as infectivity in human cells, were analyzed. The results indicated that γECS and TXN1 have high control on their respective pathway fluxes and are relevantly involved in drug resistance and infectivity.
2. Materials and methods

2.1. Cell culture
The Mexican T. cruzi Queretaro strain (DTU I) [27] was used throughout the study. Epimastigotes of the non-transfected wild type (Wt), and stable clones created by transfection with the empty plasmid (mock) or with plasmids for overexpressing (OE) specific enzymes, were grown in liver infusion-tryptose (LIT) medium (0.5% tryptose, 0.5% liver infusion (DIFCO; Detroit, MI, USA), 0.4% NaCl, 0.04% KCl, 0.42% Na2HPO4, 0.2% glucose) supplemented with 10% fetal bovine serum (FBS; Biowest; Nuaillé, France), 25 μg hemin/mL and 100 U penicillin/mL plus 100 μg streptomycin/mL, and maintained at 28°C. Where indicated, 300 μg G418/mL (Cayman; Ann Arbor, MI, USA) was added. To determine the generation time (G), epimastigotes were cultured at an initial concentration of 0.8 × 10^6 parasites/mL and incubated at 28°C for 96 h. Every 24 h the number of parasites was determined by direct counting of motile parasites in a Neubauer chamber or by absorbance at 600 nm. G was calculated as the inverse of the growth rate constant (μ), the latter being the slope of a Log2(OD) vs. time curve.
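As a minimal sketch of this growth-curve arithmetic (the function and example values below are ours, not taken from the paper): μ is the slope of log2(OD600) versus time during exponential growth, and G = 1/μ.

```python
import numpy as np

def generation_time(hours, od600):
    """Generation time G (h): 1/mu, where mu is the slope of
    log2(OD600) vs. time during exponential growth."""
    mu, _ = np.polyfit(hours, np.log2(od600), 1)  # doublings per hour
    return 1.0 / mu

# Hypothetical culture doubling roughly once per day:
hours = np.array([0, 24, 48, 72, 96])
od600 = np.array([0.10, 0.21, 0.39, 0.82, 1.60])
print(f"G = {generation_time(hours, od600):.1f} h")  # ~24 h
```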
2.2. Gene amplification
The TryR and TXN1 genes were ligated into the pTREXn expression vector specific for T. cruzi [28]. The genes were amplified by PCR using standard methods with the following primers: TryR, sense 5′-gcggcggcggcgaagcttatgatgtcaaagatttttg-3′, antisense 5′-ggcggccgcttacagagatgcttctgaagg-3′; TXN1, sense 5′-ggcaagcttatgtctggtttggcgaag-3′, antisense 5′-ggcggccgcttagtcggaccaggggaag-3′; both pairs contain HindIII and NotI restriction sites for oriented ligation. For TryR, a previously reported plasmid containing the gene from the T. cruzi Ninoa strain [26] was used as template for gene amplification. For TXN1, genomic DNA from the T. cruzi Ninoa strain was used. The PCR products were ligated into the pTREXn vector by standard methodologies. Construction of the plasmids for γECS and TryS overexpression has been reported previously [29]. The genes' nucleotide sequences and in-frame plasmid constructs were verified by automated Sanger DNA sequencing.
2.3. Parasite transfection
Epimastigotes overexpressing γECS and TryS were created as reported elsewhere [29]; the same protocol was used to obtain clones overexpressing TryR and TXN1. Briefly, 3 × 10^8 epimastigotes resuspended in 350 μL of cold non-supplemented LIT medium were transfected with 100 μg of cesium chloride-purified plasmid DNA by electroporation with a BTX ECM 830 electroporator (Harvard Apparatus; Holliston, MA, USA) at 300 V for 70 ms in 2 mm gap BTX electroporation cuvettes. The parasites were maintained at 4°C for 5 min, then resuspended in fully supplemented LIT medium and incubated at 28°C. After 48 h, 500 μg G418/mL was added to the culture to select plasmid-containing parasites. After 7 days, an aliquot (500 μL) of the culture was diluted 10 times in medium lacking the antibiotic and the cells were grown for 5 days; they were then diluted in medium and exposed again to the antibiotic for 7 days. This procedure was repeated twice to ensure that most of the parasites were transfectants. The selection procedure took approximately two months, after which the parasites were maintained under an antibiotic concentration of 300 μg G418/mL. As a control of the transfection and selection procedure, epimastigotes were transfected with a pTREXn-GFP construct and treated likewise, and the total number of transfected parasites was monitored by direct counting using a fluorescence microscope. Whenever ≈100% of parasites were expressing GFP, it was assumed that the parasites transfected with the other constructs were also completely selected. Parasites grown without the drug and then returned to G418 maintained their resistance and/or fluorescence (GFP control), indicating that the transfectants obtained were stable.
2.4. Cloning of transfected parasites
A heterogeneous population (pop) of stably transfected parasites, all selected for antibiotic resistance indicative of harboring an expression plasmid, was obtained and processed to obtain clones with different levels of enzyme expression by the following protocol. Serial dilutions (1:2) were performed, starting with 58 parasites in 200 μL of complete LIT medium with antibiotics and G418 in a 96-well plate, and the cells were grown at 28°C for at least 4 weeks. From the last parasite dilution yielding clones, at least three clones (numbered 1, 2 and 3 as the overexpression level increased) were selected for each overexpressed protein.
2.5. Protein content in the overexpressing clones
Soluble cell protein (0.1 mg) from selected OE-clones was prepared as for enzymatic activity determination (section 2.6). The proteins were separated by SDS-PAGE. For OE-γECS, OE-TryS and OE-TryR, the separating gel was prepared at 10% polyacrylamide, whereas for OE-TXN1 it was prepared at 20%. The gels were processed for Coomassie Blue staining. As specific antibodies against these enzymes were not available, Western blot analyses were not performed.
2.6. Enzyme activities in parasites
Control (Wt or mock) and different stable OE-epimastigote clones were cultured to the late logarithmic phase, harvested by centrifugation at 4250 x g and washed with PBS. The soluble protein-enriched fractions were prepared as described before [29] by lysing the parasites with three cycles of freezing/thawing; the lysate was centrifuged, and the soluble fraction was collected and immediately used for activity determination.
Activities of γECS and TryS in the OE clones and GS in all parasites were determined as described before [29] by coupling the ADP production via pyruvate kinase/lactate dehydrogenase (PyK/LDH) to NADH oxidation and the change in absorbance was monitored in real time under initial velocity conditions at 340 nm and 37°C in a diode array spectrophotometer (Agilent, Santa Clara, CA, USA). In all assays it was made sure that the activity was linear with respect to the amount of soluble cell protein. The 0.5 mL reaction contained 100 mM Hepes buffer pH 7.4, 1 mM EDTA, 5 mM MgSO 4 , 100 mM KCl, 0.2 mM NADH, 2 mM ATP, 2 mM phosphoenolpyruvate (PEP) and at least 600 mU PyK and 900 mU LDH. In addition, the following components were added (the specific thiol substrates were added to start the reactions): for γECS activity, 0.15-0.25 mg soluble cell protein, 1.3 mM Glu and 2.1 mM Cys; for GS activity, 0.005-0.040 mg soluble cell protein, 8 mM Gly and 0.4 mM γEC; and for TryS activity, 0.01-0.2 mg soluble cell protein, 11 mM Spd and 3 mM GSH. To subtract the high spurious ATPase activity present in the cell samples (accounting for 150-300 nmol/min x mg soluble cell protein in the enzymatic assay), a master mix for two reactions was prepared with all components (including the soluble cell protein) except the specific thiol substrate; the mixture was divided in two reactions and only one was supplemented with the corresponding thiol substrate and changes in the absorbance of the two reactions were followed in parallel. The activity in the absence of the thiol substrate accounted for the non-specific activity and was always subtracted. γECS and TryS basal activities in Wt and mock cells could not be determined with this protocol since no reliable increased rates above the high ATPase activity could be distinguished. TryS activity was determined in soluble cell protein fraction of Wt cells by HPLC. A mixture of reaction buffer (100 mM Hepes pH 7.4, 100 mM KCl, 1 mM EDTA, 5 mM MgSO 4 ), 2 mM PEP, 2 mM ATP, 0.6-0.9 U PyK/LDH and 0.2-0.4 mg of soluble cell protein was prepared and then divided in three: one was supplemented with 3 mM GSH, the second with 11 mM Spd (both were control reactions) and the third was supplied with both substrates. Aliquots of 90 μL were transferred to 1.5 mL tubes and incubated at 37°C for 0, 15, 30 or 60 min. After each time, the reaction was stopped by adding perchloric acid (PCA) at 3% v/v final concentration and T(SH) 2 was determined by HPLC as described in section 2.7 [29].
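The rate arithmetic behind the coupled assay can be sketched as follows (our own helper function; it assumes the standard molar absorption coefficient of NAD(P)H at 340 nm, 6.22 mM^-1 cm^-1, and uses the paired no-thiol control described above to subtract the spurious ATPase rate):

```python
EPSILON_NADH = 6.22  # mM^-1 cm^-1, molar absorptivity of NAD(P)H at 340 nm

def specific_activity(slope_full, slope_no_thiol, protein_mg,
                      volume_ml=0.5, path_cm=1.0):
    """Thiol-specific activity in nmol/min x mg soluble cell protein.

    slope_* are dA340/min of the full reaction and of the paired
    control lacking the thiol substrate (ATPase background).
    """
    specific_slope = abs(slope_full) - abs(slope_no_thiol)
    rate_mM_per_min = specific_slope / (EPSILON_NADH * path_cm)
    rate_nmol_per_min = rate_mM_per_min * volume_ml * 1000.0  # mM x mL -> nmol
    return rate_nmol_per_min / protein_mg

# Hypothetical traces: 0.045/min with Cys, 0.030/min without, 0.15 mg protein
print(f"{specific_activity(0.045, 0.030, 0.15):.1f} nmol/min x mg")  # ~8.0
```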
TryR, TXN1 and TXNPx activities were determined as previously described [14] by monitoring the T(SH)2-dependent peroxide reduction associated with NADPH oxidation. For TryR activity, the mixture contained 40 mM Hepes pH 7.4, 1 mM EDTA, 0.16 mM NADPH and 0.23 mM TS2, and the cellular sample (5-50 μg soluble cell protein from Wt clones; 0.1-0.2 μg soluble cell protein from OE-clones) was added to start the reaction. For TXN1 and TXNPx, the mixture contained 40 mM Hepes pH 7.4, 1 mM EDTA, 0.16 mM NADPH, 0.45 mM T(SH)2, 0.5 μM TryR, and > 24 μM TXNPx (for TXN1 determination) or > 22 μM TXN1 (for TXNPx determination), to which 0.1 mM cumene hydroperoxide (CumOOH) was added. The reason for using CumOOH instead of H2O2 is that the inhibitory effect of the former on TXNPx occurs at a higher concentration than that of the latter [14], making it feasible to monitor the steady state of the reaction for a longer period of time. After 3 min of baseline stabilization, the reaction was started by adding the soluble cell protein; when endogenous enzyme activity levels were determined, the amounts were 0.03-0.4 mg and 0.03-0.35 mg soluble cell protein for TXN1 and TXNPx, respectively, and 3-30 μg soluble cell protein for the OE-TXN1 clones.
2.7. Metabolite determination
Concentrations of thiol metabolites were determined in the parasites as previously described [29]. The cells were grown to the late exponential phase, harvested and washed twice with PBS. They were resuspended in lysis buffer (20 mM Hepes pH 7.4, 1 mM EDTA, 0.15 mM KCl) plus 20 mM dithiothreitol, disrupted by freezing and thawing, and centrifuged at 17000 x g for 10 min, and the soluble fraction was separated. The samples were strongly reduced with NaBH4, acidified with PCA (3% v/v final concentration) and centrifuged at 16,843 x g for 2 min. Twenty microliters of the supernatant were analyzed by HPLC. The thiol molecules were post-column derivatized with DTNB and detected at 412 nm. It was previously demonstrated that no significant T(SH)2 degradation occurs with this protocol [29]. To calculate millimolar intracellular concentrations, it was determined here that 1 × 10^6 epimastigotes contain 5 ± 0.8 μg total cell protein (n = 10), and it was considered that 1 × 10^9 epimastigotes have an intracellular water volume of 30 μL [30].
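With the two cell constants just quoted (5 mg protein and 30 μL of water per 10^9 cells), the conversion from nmol/mg cell protein to intracellular millimolar concentration reduces to multiplication by 5/30; a small check of this arithmetic (our own code):

```python
PROTEIN_MG_PER_1E9_CELLS = 5.0  # 5 +/- 0.8 ug per 1e6 cells
WATER_UL_PER_1E9_CELLS = 30.0   # intracellular water volume [30]

def nmol_per_mg_to_mM(content):
    """nmol/mg protein -> mM: (nmol/mg * mg per 1e9 cells) / (uL per 1e9 cells)
    gives nmol/uL, which is numerically equal to mM."""
    return content * PROTEIN_MG_PER_1E9_CELLS / WATER_UL_PER_1E9_CELLS

# The mock-cell thiol contents reproduce the stated concentrations:
for name, c in [("Cys", 6.8), ("GSH", 9.8), ("T(SH)2", 4.5)]:
    print(f"{name}: {nmol_per_mg_to_mM(c):.2f} mM")  # ~1.1, 1.6 and 0.7-0.8
```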
2.8. Supplementation of thiol molecules and polyamine to epimastigotes
Cultures of epimastigotes (controls and OE-clones) were initiated at a concentration of 0.8 × 10^6 parasites/mL and incubated at 28°C for 48 h (reaching ≈3 × 10^6 parasites/mL), after which they were separated into 25 mL aliquots. One was used as a control (no supplementation) and the others were supplemented with different concentrations of Cys (0.03, 0.06, 0.1 mM; ≈10-33 fmol/cell), GSH (0.3, 0.6, 1.0 mM; ≈100-333 fmol/cell), Spd (0.1 mM; ≈33 fmol/cell) or the combination of Cys plus Spd (0.1 mM each; ≈33 fmol/cell each). The parasites were incubated at 28°C for a further 24 h and processed for metabolite determination (section 2.7). The difference in metabolite concentrations before and after the 24 h supplementation was calculated and normalized as a percentage of the control condition (no supplementation).
2.9. Ex vivo T(SH)2 synthesis flux
Control (Wt and mock) and OE-γECS or OE-TryS epimastigotes were cultured to the late logarithmic phase and soluble cell protein fractions were prepared as for enzyme activity determination (section 2.6). The T(SH) 2 synthesis flux was determined ex vivo (using saturating concentrations of the precursors Cys, Glu, Gly, ATP and Spd) at room temperature in 0.5 mL of 100 mM Hepes pH 7.4, 5 mM MgSO 4 , 100 mM KCl, 1 mM EDTA, 2.1 mM Cys, 1.3 mM Glu, 8 mM Gly, 11 mM Spd, 3 mM ATP, 2 mM PEP, 20 mM DTT, at least 600 mU PyK and 900 mU LDH (the latter two to maintain a high and constant ATP concentration) and 0.8-1.2 mg of soluble cell protein were added to start the reaction. A control reaction lacking Cys was made in parallel. At 0, 1, 3, 5, 10 and 15 min, 90 μL of the reaction were withdrawn and mixed with PCA (3% final concentration) to stop the reaction. The time-dependent T(SH) 2 formation was determined by HPLC as described in the metabolite determination assay (section 2.7). The T(SH) 2 formation rate of the control reaction lacking Cys was subtracted from the full reaction. For these experiments it was made sure that the flux was linear with respect to the amount of protein used.
2.10. Ex vivo hydroperoxide reduction flux
A soluble cell protein fraction was prepared as for the enzyme activity determination assay (section 2.6). The ex vivo peroxide reduction flux was determined as previously described [14]. Briefly, the 0.5 mL reaction mixture contained 40 mM Hepes pH 7.4, 1 mM EDTA, 0.2 mM NADPH, 0.45 mM T(SH)2 prepared in-house according to [31], and 0-2.5 mg of soluble cell protein. The reaction was initiated by adding 0.1 mM CumOOH, and NADPH oxidation was monitored at 340 nm and 37°C. Also for these experiments, it was ensured that the flux was linear with respect to the amount of protein used.
2.11. Flux control coefficients
For rigorous determination of flux control coefficients, actual enzyme activities are required, regardless of the protein contents. In order to determine the C^J_ai of the enzymes of T(SH)2 metabolism, three clones of the OE-epimastigotes with different levels of enzyme activity were selected. The effects of changes in enzyme activity on either the ex vivo T(SH)2 synthesis flux or the ex vivo peroxide reduction flux were determined. The C^J_ai is obtained from the slope of the tangent (i.e., the derivative) to a curve of pathway flux versus enzyme activity, multiplied by a scalar factor (a_i^o/J^o) that represents the ratio of enzyme activity to flux at the reference metabolic state (for further details, see [23][24][25]). To simplify the procedure, given the high variability in the biological samples, the percentage of pathway flux versus the percentage of enzyme activity relative to control cells (Wt or mock) was plotted, and the C^J_ai was calculated from the derivative at the 100% (control) activity level in each individual experiment. A requisite for C^J_ai determination is that no significant modifications of the other pathway enzymes be attained. Therefore, the activities of the other enzymes of the pathway were also determined in the OE-clones.
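A minimal sketch of the derivative-at-100% procedure (our own implementation; the hyperbolic fit is an illustrative smoothing choice, not necessarily the one used by the authors):

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_control_coefficient(activity_pct, flux_pct):
    """C^J_ai from normalized flux vs. enzyme activity data.

    Fits J = a*v/(b + v) through the clustered points and evaluates
    (dJ/dv)*(v/J) at the 100% (control) reference state; for this
    rate law the result simplifies to b/(b + 100).
    """
    hyperbola = lambda v, a, b: a * v / (b + v)
    (a, b), _ = curve_fit(hyperbola, activity_pct, flux_pct, p0=(200.0, 100.0))
    v0 = 100.0
    return (a * b / (b + v0) ** 2) * v0 / hyperbola(v0, a, b)

# Hypothetical titration (percent of control activity and flux):
act = np.array([100, 300, 800, 1300, 2000])
flx = np.array([100, 180, 260, 300, 320])
print(f"C^J_ai ~ {flux_control_coefficient(act, flx):.2f}")  # ~0.6-0.7
```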
2.12. Pathway modeling
Kinetic models for T(SH) 2 synthesis and T(SH) 2 -dependent peroxide reduction pathways were built using the metabolic simulator software GEPASI/COPASI [32,33]. The full information for their construction is included in Supplementary Material 2 (SM2). The models included the reactions displayed in Fig. S2.1 and Fig. S2.3. A summary of the reaction codification in the software, the kinetic parameters and reaction kinetic mechanisms [62] are shown in Tables S2.1 and S2.3. The initial and fixed metabolite concentrations used in the models are provided in Table S2.2 and Table S2.4. The full kinetic rate equations are described in SM2. The model files are available on request from the corresponding author. The characteristics of each model are outlined below.
For the T(SH) 2 synthesis, the previously published kinetic model [26] was improved ( Fig. S2.1) by (i) including the kinetic parameters of the Cys supply reaction (Cys transport; CysT) to simulate the effects of Cys supplementation on the T(SH) 2 pool; the reaction included the Km value for external Cys previously reported [34], whereas the other required kinetic and thermodynamic parameters were parameterized to simulate the internal Cys concentration as experimentally determined.
(ii) including a new TryS rate equation with substrate inhibition by GSH, with kinetic parameters obtained from OE-TryS cell samples and recently reported (Fig. S2 in [29]; Km GSH = 1.6 mM and Ki GSH = 7.3 mM); (iii) removing the GSH and Spd leaks to allow higher variation of T(SH)2 (the reasons for these changes are described in Results section 3.7); and (iv) replacing the TryR and NADPH supply reactions with a T(SH)2-demand reaction with the kinetic parameters of the peroxide-reducing enzymes as previously reported [14] (see further details in SM2). This kinetic model was also parameterized to simulate the increase in the thiol pools of parasites supplemented with 0.1 mM Cys.
The kinetic model of the T(SH) 2 -dependent peroxide reduction pathway reported here for the first time included the reactions of TryR, TXN1, TXNPx and NADPH supply ( Fig. S2.3). The rate equations were bi-bi ordered reversible for TryR; ping-pong kinetics for TXN1 and TXNPx and mass action reversible for NADPH supply (Table S2.3). The enzyme kinetic parameters Vmax, Km and Ki were those previously determined by our research group under near-physiological conditions using the recombinant enzymes [14] (Table S2.3).
The models were refined until they were able to simulate (i) the steady-state metabolite concentrations as determined within the parasites under the different experimental settings used; and (ii) the experimentally determined fluxes, i.e. the ex vivo pathway flux determined here with parasite cell samples and with the in vitro reconstituted pathway [14].
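The COPASI/GEPASI model files themselves are available from the corresponding author; purely as a self-contained illustration of the modeling approach (the species, rate laws and parameter values below are simplified placeholders, not the fitted parameters of [14]), a reduced version of the peroxide reduction cycle can be integrated with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (units: mM, min)
VM_TRYR, KM_NADPH, KM_TS2 = 1.0, 0.02, 0.05  # simplified multiplicative MM
K_TXN1, K_TXNPX = 200.0, 400.0               # mass-action constants (cf. [38])
K_NADPH, NADPH_SET = 10.0, 0.2               # first-order NADPH resupply
TXN_TOTAL = 0.01                             # total TXN1 pool

def rhs(t, y):
    nadph, ts2, tsh2, txn_red, rooh = y
    v_tryr = VM_TRYR * nadph / (KM_NADPH + nadph) * ts2 / (KM_TS2 + ts2)
    v_txn1 = K_TXN1 * tsh2 * (TXN_TOTAL - txn_red)  # T(SH)2 reduces TXN1
    v_px = K_TXNPX * txn_red * rooh                 # reduced TXN1 -> TXNPx -> ROOH
    v_sup = K_NADPH * (NADPH_SET - nadph)
    return [v_sup - v_tryr,   # NADPH
            v_txn1 - v_tryr,  # TS2
            v_tryr - v_txn1,  # T(SH)2
            v_txn1 - v_px,    # reduced TXN1
            -v_px]            # hydroperoxide

y0 = [0.2, 0.0, 0.45, 0.0, 0.1]  # mM; cf. the ex vivo assay conditions above
sol = solve_ivp(rhs, (0.0, 5.0), y0, method="LSODA")
print(f"ROOH remaining after 5 min: {sol.y[4, -1]:.4f} mM")
```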
2.13. Benznidazole (± BSO) and peroxide resistance assays
Two different protocols were used for Bnz and peroxide resistance assays. The first protocol included a 24 h exposure, and direct counting of motile parasites in a Neubauer chamber or the OD 600nm determination. Epimastigotes were cultured at an initial concentration of 0.8 × 10 6 parasites/mL and incubated at 28°C for 48 h, after which the parasite concentration was determined by direct counting and different concentrations of Bnz (0.5-11 μM; ≈0.16-3.6 fmol/cell) or Bnz (2.0-25 μM; ≈0.66-8.3 fmol/cell) plus 0.1 mM (≈33 fmol/cell) buthionine sulfoximine (BSO) were added. The parasites were further incubated at 28°C and then counted again 24 h later. The difference in parasite concentrations before and after 24 h drug exposure was considered as the relative growth, which then was normalized in percentage versus the control condition (no drug added). The concentration at which the relative growth was decreased by 50% (IC 50 ) was then calculated. For the second protocol, a bolus addition of different H 2 O 2 concentrations (60-250 μM; ≈3.2-13.3 fmol/cell) was used, and the OD 600nm was determined every 24 h over a period of 96 h. The growth rate constant (μ) was calculated (i.e. the slope of a Log 2 (OD) vs. time curve), and the concentration at which the μ was decreased by 50% (IC 50 ) was then calculated.
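The IC50 arithmetic described above can be sketched with a simple two-parameter dose-response fit (our own code; the Hill model is a common choice, not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, h):
    """Percent of control growth at drug concentration c."""
    return 100.0 / (1.0 + (c / ic50) ** h)

def fit_ic50(conc, growth_pct):
    (ic50, _h), _ = curve_fit(hill, conc, growth_pct,
                              p0=(np.median(conc), 1.0))
    return ic50

# Hypothetical Bnz titration (uM vs. % relative growth after 24 h):
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 11.0])
growth = np.array([95.0, 85.0, 65.0, 40.0, 20.0, 12.0])
print(f"IC50 ~ {fit_ic50(conc, growth):.1f} uM")
```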
2.14. Parasite host cell infection
Human foreskin fibroblasts (HFF-1) and Rhesus monkey kidney epithelial cells (LLC-MK2) were grown in culture-treated flasks in Dulbecco's-MEM with high glucose (GIBCO; MD, USA), supplemented with 10% FBS (Biowest; Nuaillé, France), 100 U penicillin/mL and 100 μg streptomycin/mL, and incubated under 5% CO2 at 37°C until 70% confluence was reached. The infection protocol was carried out as previously described [35] with some modifications. Primary infections of HFF-1 cells cultured in 25 cm^2 flasks were initiated with Wt and newly transfected and selected populations of mock and OE-epimastigotes (2 × 10^6 parasites/mL) in Dulbecco's-MEM high glucose supplemented with 2% FBS; the interaction of parasites with the human cells was allowed to last for 48 h. Then, the cells were washed daily with serum-free medium for epimastigote removal and replenished with fresh medium supplemented with 2% FBS. At 7-8 days post infection (dpi), trypomastigotes derived from the first burst (0.2 × 10^6 parasites/mL) were used for a secondary infection of LLC-MK2 cells cultured in 25 cm^2 flasks and incubated for 48 h, after which the culture was processed as in the previous step. Trypomastigotes from the first burst of the secondary infection were used to infect HFF-1 cells (tertiary infection) cultured over coverslips in 24-well plates in quadruplicate; after 2 h of incubation, the cells were washed twice to remove non-internalized parasites and replenished with fresh medium. At 18, 48 or 66 h post-infection, the coverslips were exhaustively washed, fixed with 4% formaldehyde and stained with DAPI by standard methods, and the number of infected cells as well as the number of internal parasites was analyzed by fluorescence microscopy. At least 500 cells per coverslip were analyzed.
To examine trypomastigote bursting, HFF-1 cells were grown in 24-well plates (in the absence of coverslips) and infected as above. The trypomastigotes in the supernatants were resuspended and collected on the fourth and fifth days to determine, by direct counting in the Neubauer chamber, the number of trypomastigotes that had burst into the extracellular medium.
3. Results

3.1. Cell growth in stable transfectants
The generation time (G) in at least three independent cultures of the different OE-TryS (23.6 ± 2.5 h), OE-TryR (23.4 ± 3.1 h) and OE-TXN1 (25.6 ± 4.1 h) clones was similar to that of Wt or clones of mock cells (24 ± 2.7 h and 24.3 ± 2.1 h, respectively), except for OE-γECS, whose value (26.3 ± 3.7 h) was higher (p < 0.05); these parasites thus grew more slowly than Wt and mock cells. In addition, the generation time of mock cells in the absence or presence of G418 showed no difference vs. control cells (23.9 ± 1.4 h). The G values were similar whether optical density or direct cell counting was used.
3.2. Protein contents and enzyme activities in control and enzyme-overexpressing parasites
SDS-PAGE analysis of the soluble cell protein fractions from Wt, mock and OE-parasites showed that the targeted proteins in the OE-TryS, OE-TryR and OE-TXN1 clones were indeed overexpressed (Fig. S1.1 in supplementary material 1; SM1). However, no clear overexpression of γECS was apparent in the cell samples.
The basal γECS activity in the absence of overexpression could not be accurately determined due to the high spurious ATPase activity in the assay (150-300 nmol/min x mg soluble cell protein). By performing the appropriate control reactions as described in the Methods section and subtracting the unspecific ATPase reaction, it was established that a reliable difference in specific activity should be at least 3 nmol/min x mg soluble cell protein above the unspecific rates; hence, the basal γECS activity was below this threshold value. Despite the high ATPase activity, γECS and TryS activities were reliably determined above the threshold in their respective OE-clones using the PyK/LDH coupled assay.
An effort was made to determine the TryS activity of Wt cells by quantifying T(SH)2 production by HPLC as described in Methods. A typical HPLC profile (Fig. S1.2 in SM1) shows a time-dependent increase in T(SH)2 using 3 mM GSH. A TryS activity of 0.63 ± 0.19 nmol/min x mg cell protein was determined in 5 independent parasite cultures. This value does not correspond to the actual Vmax, since TryS shows substrate inhibition by GSH. To circumvent this problem, it was determined, from GSH saturation curves of TryS activity in OE-TryS clones and by fitting of the experimental data, that the TryS Vmax was underestimated by nearly 50% (Table S1.1 in SM1). Therefore, it was assumed that in Wt epimastigotes the TryS Vmax was ≈1.2 ± 0.4 nmol/min x mg cell protein.
The increases in activity were on average ≈8-fold for OE-γECS, ≈75-fold for OE-TryS, ≈31-fold for OE-TryR and ≈13-fold for OE-TXN1 (Table 1), indicating that the parasite clones indeed functionally overexpressed their respective enzymes. These overexpression levels did not vary significantly among clones of the same OE-parasites, except in some cases (Table 1). Unless overexpressed, the basal activities of TryR, TXN1 and TXNPx were unaltered in the OE-clones, whereas a decreased GS activity was determined in several clones. Randomly evaluated OE-clones showed that their activities did not depend on the G418 concentration used (300 or 500 μg G418/mL) or on its absence for about 2 months (data not shown). Nevertheless, the parasites were always grown in the presence of G418 to prevent unexpected changes in the enzyme activities.
3.3. Metabolite pools in transfected cells
Mock and Wt cells showed similar thiol-molecule contents (Fig. 2A). OE-γECS3 cells showed a 3.5-fold increased T(SH)2 concentration in comparison to mock cells (Fig. 2A). Analysis of the thiol contents in different OE-γECS clones revealed that, when the γECS activity reached > 10 nmol/min x mg soluble cell protein (clone C in Fig. 2B and Table S1.2 in SM1), the T(SH)2 concentration became clearly higher than in mock cells, reaching up to 4-fold more when the activity was 22 nmol/min x mg soluble cell protein. Moreover, the T(SH)2 level increased ≈6-fold when OE-γECS cells were supplied with 0.1 mM Cys or 1 mM GSH, which was ≈2-fold above the increases attained in similarly supplemented mock cells (Fig. S1.3 in SM1). Higher concentrations of added Cys (0.2-1 mM) or GSH (up to 5 mM) resulted in lower T(SH)2 increases of 4.5- and 3.5-fold, and even some inhibition (data not shown). On the other hand, OE-TryS1 cells showed T(SH)2 levels similar to those of Wt and mock cells, whether supplemented with Cys, GSH or Spd or not; this unexpected result is examined in the next section. The OE-TXN1-2 cells consistently showed Cys and GSH concentrations increased ≈75% and 60%, respectively, above control cell levels, but no changes in T(SH)2 levels (Fig. 2A). OE-TryR3 cells showed a slight increase in thiol contents (≈30-40%) compared to control cells, although it was not statistically significant (Fig. 2A).
3.4. TryS overexpression did not induce increased T(SH)2 levels
Unexpectedly, OE-TryS1 cells showed non-significant changes in T(SH)2 vs. mock cells (Fig. 2A), despite the at least 64-fold increase in enzyme activity (Table 1). This has also been observed by others [36]: TryS-overexpressing epimastigotes (but not trypomastigotes) did not increase their T(SH)2 pool. To further examine this counterintuitive result, the OE-TryS1 cells were grown with supplementation of (i) Cys, to circumvent a possible Cys deficit for endogenous γECS activity and hence low GSH synthesis; (ii) GSH, to directly increase the TryS substrate; and (iii) Spd, to determine whether this precursor was limiting. OE-TryS1 cells supplemented with 1 mM GSH showed an increase in T(SH)2 similar to that observed in control cells, and lower levels when supplemented with 0.1 mM Cys (Fig. S1.3). On the other hand, Spd seemed not to be limiting for T(SH)2 synthesis because (i) its basal concentration in Wt cells is high (0.8 ± 0.2 mM) [29]; (ii) otherwise no increase in T(SH)2 would have been attained in the OE-γECS cells by merely supplementing with Cys or GSH (Fig. S1.3); and (iii) OE-TryS1 parasites supplemented with Spd alone did not increase their T(SH)2 (Fig. S1.4 in SM1).
Furthermore, in experiments to determine the ex vivo T(SH)2 synthesis flux in OE-TryS clones, no net increase in T(SH)2 was attained, despite the presence of saturating Cys, Glu, Gly, ATP and Spd concentrations (see the Methods section for details). Instead, a time-dependent increase in GSH was observed in the cell protein samples of OE-TryS cells, which was not evident in the ex vivo flux determinations with Wt, mock and OE-γECS cell samples. Such a pattern in the OE-TryS clones suggested that (i) T(SH)2 might be simultaneously synthesized and degraded at similar rates or (ii) the overexpressed TryS was not active. The latter possibility can be ruled out, because high TryS activities were indeed determined in the OE-TryS parasites (Table 1), although it is unknown whether any type of negative metabolic regulation may occur in intact cells.
To analyze the possibility of T(SH) 2 degradation in the OE-TryS cells, soluble cell protein fractions from the OE-TryS1 clone were incubated in the absence or presence of 1 mM T(SH) 2 . Time-dependent increases in GSH (40-80 nmol/min x mg soluble cell protein) and decreases in T(SH) 2 (10-30 nmol/min x mg soluble cell protein) were observed (Fig. S1.5 in SM1), indicating T(SH) 2 degradation. The calculated rates of T(SH) 2 degradation in the OE-TryS cell samples were 13 ± 5 and 21 ± 9 nmol/min x mg soluble cell protein in the absence or presence of added T(SH) 2 , respectively. In contrast, cell soluble protein fractions from Wt and mock cells did not show significant T(SH) 2 degradation (Fig. S1.5 in SM1). These observations may explain the lack of net increase in T(SH) 2 in the OE-TryS1 epimastigotes in comparison to control cells ( Fig. 2A).
3.5. Trypanothione synthesis and peroxide reduction ex vivo fluxes
T(SH) 2 is not a metabolic pathway end-product (such as lactate or ethanol for glycolysis or CO 2 for the Krebs cycle) since there are still enzymes or processes using it. Hence, at a specific metabolic steady state, the T(SH) 2 moiety pool in the cell is the result of the dynamic balance between its synthesis (i.e. supply) and consumption (i.e. demand). For these reasons, determination of fluxes of the T(SH) 2 metabolism in intact parasites would require more sophisticated techniques such as labeling studies or fluxomics. To circumvent this limitation, the fluxes were determined ex vivo, in parasite soluble cell protein fractions. However, it has to be considered that (i) the values determined represent the maximal fluxes of synthesis with the enzymes expressed in the cell sample, because the pathway precursors Cys, Glu, Gly, Spd and ATP are saturating and thus the limiting factor should only be the content of active enzymes in the cell sample; and (ii) in the ex vivo system most likely some physiological regulatory interactions are lost.
The T(SH) 2 synthesis flux was linear for up to 5 min using 0.3-0.8 mg soluble cell protein when Wt or OE-γECS3 soluble cell protein fractions were used (data not shown). If the specific pathway substrate Cys was not added to the reaction, the basal T(SH) 2 content did not change over time, indicating that it was not significantly synthesized or degraded. Under such conditions, the maximal ex vivo T(SH) 2 synthesis flux in Wt cells was 0.6 ± 0.2 nmol T(SH) 2 /min x mg soluble cell protein (n = 5) ( Table 2), whereas in the OE-γECS3 clone with the highest activity (20 nmol/min x mg soluble cell protein) the flux was 2 ± 0.4 nmol T(SH) 2 /min x mg soluble cell protein (Table S1.2 in SM1). Since in the OE-γECS3 cells γECS activity was in excess in comparison to GS and TryS activities, these results also suggested that (i) the basal TryS activity should not exceed 2 nmol/min x mg soluble cell protein, i.e. the maximal T(SH) 2 synthesis flux in OE-γECS3 cells; such a value is in the range of the TryS activity calculated from the HPLC results; and (ii) the basal γECS activity should not exceed 1.2 nmol/min x mg soluble cell protein, i.e. the pathway flux in the Wt cells multiplied by two (considering the pathway stoichiometry). In contrast, for the OE-TryS1 cells, which have a TryS activity of 77 nmol/ min x mg soluble cell protein, a maximal flux output of only 0.3 nmol T(SH) 2 /min x mg soluble cell protein was obtained, which was even lower than in Wt cells.
For the peroxide reduction flux, the maximal flux outputs in the Wt and mock cells were 11 ± 5 and 11 ± 4 nmol/min x mg soluble cell protein, respectively, values that were not significantly different in the OE-TryR3 cells (14 ± 4 nmol/min x mg), which showed the highest level of TryR overexpression. In contrast, in the OE-TXN1-3 clone with the highest TXN1 activity, the flux reached a value of 46 nmol/min x mg soluble cell protein. This latter observation suggests that the basal TXN1 levels are limiting for the peroxide reduction flux.

[Fig. 2 caption (partial): clone designations as in Table 1; the values for clones A-D are shown in Table S1.2 in Supplementary Material 1. 100% thiol concentrations in mock cells correspond to: Cys = 6.8 ± 0.8 nmol/mg cell protein (1.1 ± 0.1 mM); GSH = 9.8 ± 3.8 nmol/mg cell protein (1.6 ± 0.6 mM); T(SH)2 = 4.5 ± 1.7 nmol/mg cell protein (0.7 ± 0.3 mM). Student's t-test for non-paired samples, *p < 0.05, **p < 0.01 vs. mock.]
3.6. Flux control distribution of the T(SH)2 synthesis and peroxide reduction pathways from parasites
To determine the C^J_ai of the pathway enzymes, the dependence of the pathway flux on the enzyme activities has to be analyzed. It must be emphasized that the actual enzyme activity in the cell is required for C^J_ai determination; protein contents (e.g. determined by Western blotting) may provide inaccurate values, since a considerable protein fraction may have low or no activity. This was the reason why, in the present study, the enzyme activities within the parasites were rigorously established. The levels of the enzymes were specifically varied in the OE-clones without substantial changes in the other pathway enzymes (Table 1), which is another mandatory requisite for appropriate C^J_ai determination. The enzyme variability shown among the different biological replicates of each cell clone allowed clustered points to be obtained along the curve of pathway flux versus enzyme activity using only three clones per enzyme. Fig. 3A shows the variations in the ex vivo T(SH)2 synthesis flux when the activity of γECS was varied using the different parasite clones. A C^J_ai of 0.69 ± 0.15 (Table 2) was obtained for γECS at the Wt level of activity (taken to be 3 nmol/min x mg soluble cell protein, the threshold confidence value of the enzymatic assay). Unfortunately, the C^J_ai for TryS could not be determined because of the lack of variation of the T(SH)2 content in the parasites, as described above. Notwithstanding this inconvenience for the ex vivo analysis, it was estimated that TryS may have a C^J_ai of at most 0.31, by using the MCA summation theorem and assuming negligible control exerted by GS [26], the latter based on its higher activity in the parasites with respect to γECS and TryS (Table 1).
On the other hand, the peroxide reduction flux showed non-significant variations when TryR activity was increased up to ≈50-fold above the Wt level (Fig. 3B); the calculated C^J_ai for TryR was 0.15 ± 0.09 (n = 8). In contrast, when TXN1 was varied (Fig. 3C), the flux changed almost linearly near the Wt level, with a concomitant C^J_ai of 0.73 ± 0.29 (n = 7). As TXNPx was not overexpressed in the parasites, its ex vivo C^J_ai could not be obtained in a similar fashion to that for TryR and TXN1. In an attempt to determine the C^J_ai of TXNPx, the peroxide reduction flux of soluble cell protein fractions from Wt and mock cells was titrated by adding recombinant TXNPx (Fig. S1.6 in SM1). The data showed that the flux increased with increasing TXNPx activity in a similar fashion to that of TXN1, yielding a C^J_ai near 0.57-0.64 for TXNPx; hence, control of the flux is shared to a similar extent by TXN1 and TXNPx.
3.7. Flux control distribution of T(SH)2 synthesis by pathway modeling
An updated kinetic model of T(SH)2 synthesis was constructed based upon a previous version published by our group [26]. In the previous version of the model, the predicted C^J_ai of TryS (0.46-0.58) was higher than that obtained ex vivo here using parasite samples. In addition, the predicted C^J_ai of SpdT (0.22-0.24) was also high in the previous model; however, the Spd supplementation experiments in parasites showed no increases in T(SH)2 (Fig. S1.4 in SM1), suggesting lower control than previously predicted for the polyamine supply. Simulations for the present study using the published model indicated that the high control values predicted for TryS and SpdT were due to the inclusion of GSH and Spd leak reactions. These two leak reactions decrease substrate availability to TryS, decreasing its rate, with a concomitantly higher predicted C^J_ai value for TryS. The GSH and Spd leaks also increased the C^J_ai predicted for γECS, resulting in the sum of the C^J_ai of γECS and TryS being higher than one, which was compensated by the negative C^J_ai of the GSH and Spd leaks. For all these reasons the latter reactions were removed. In contrast, the supplementation experiments with Cys (Fig. S1.3 in SM1) indicated that the basal Cys level was limiting for the γECS activity and therefore an extracellular Cys uptake (CysT reaction) was now included, for which some reported kinetic parameters are available. However, it should be noted that CysT may actually represent any other reaction or metabolic pathway providing Cys to the cell.
A refined model was thus obtained, which was able to closely reproduce the metabolite concentrations found experimentally in the parasites without Cys supplementation (the Cys concentration in the medium was 0.04 mM), although the simulated flux was lower than that obtained ex vivo (Table 2). The C^J_ai of 0.74 for γECS in the model simulation (Table 2) agreed with that obtained ex vivo (C^J_ai = 0.69; Fig. 3A). The model predicted a C^J_ai of 0.15 for TryS, a lower value than that previously reported, but this enzyme was still the second most flux-controlling enzyme in the T(SH)2 synthesis pathway. The CysT C^J_ai of 0.09 was low, whereas GS showed negligible control.
To further establish the degree of control of the Cys supply in vivo, elasticity analysis [23-25,37] was performed (Fig. S2.2 in SM2) using data on the Cys, GSH and T(SH)2 thiol changes after the Cys supplementation and BSO inhibition experiments previously published (Figs. 3A and 2, left panel, respectively, in [29]). In elasticity analysis, intact cells are used and no modification of the enzymes is required; instead, the intracellular steady-state levels of the pathway intermediates are manipulated by varying the pathway's initial substrate and by inhibiting the downstream pathway steps (see reference [23] for further details on elasticity analysis). The elasticity coefficients toward internal Cys were determined for the Cys-supply reactions (in this case CysT) and for the Cys-consuming group of reactions (γECS, GS and TryS). Their respective C^J_ai were determined by applying the summation and connectivity theorems of MCA [23-25]. C^J_ai values of 0.13 for CysT and 0.87 for the Cys-consuming group of reactions were obtained (Table 2), in agreement with the sum of the C^J_ai of γECS, GS and TryS obtained by modeling.
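In this two-group (supply/demand) form, the summation and connectivity theorems around the shared intermediate (internal Cys) yield a closed-form solution for the two control coefficients; this is standard MCA algebra [23-25]:

```latex
C^{J}_{\text{supply}} + C^{J}_{\text{demand}} = 1, \qquad
C^{J}_{\text{supply}}\,\varepsilon^{\text{supply}}_{\text{Cys}} +
C^{J}_{\text{demand}}\,\varepsilon^{\text{demand}}_{\text{Cys}} = 0

\Rightarrow\quad
C^{J}_{\text{supply}} =
 \frac{\varepsilon^{\text{demand}}_{\text{Cys}}}
      {\varepsilon^{\text{demand}}_{\text{Cys}} - \varepsilon^{\text{supply}}_{\text{Cys}}},
\qquad
C^{J}_{\text{demand}} =
 \frac{-\varepsilon^{\text{supply}}_{\text{Cys}}}
      {\varepsilon^{\text{demand}}_{\text{Cys}} - \varepsilon^{\text{supply}}_{\text{Cys}}}
```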
Afterwards, it was tested whether the model was able to simulate the increases in thiol intermediates found in parasites supplied with 0.1 mM Cys; however, merely increasing the Cys concentration did not reproduce the in vivo metabolite contents. The model required simultaneous increases in the Vmax values of CysT, γECS and TryS (parameterized in the model; Table S2.1 in SM2) to become able to simulate the metabolite concentrations found in Cys-supplied parasites (Table 2). With these modifications, the pathway flux increased 2.6-fold, which was still within the experimentally determined range, and the model predicted high control by γECS, an increased C^J_ai for TryS of 0.26 and, again, negligible control exerted by GS. Thus, the TryS control was lower than that of γECS, but higher than that of the Cys supply.
3.8. Flux control distribution of the peroxide reduction flux by pathway modeling
The kinetic model for the T(SH)2-dependent peroxide reduction flux considered TXN1 as an enzyme with ping-pong kinetics, using the kinetic parameters previously determined [14]. The model closely simulated the pathway flux determined ex vivo using soluble cell protein fractions from Wt and mock cells (Table 3), as well as the flux attained in the in vitro pathway reconstitution with the recombinant enzymes previously reported [14]. Likewise, the model simulated with high accuracy the C^J_ai determined here ex vivo by TXN1 and TryR titrations in the parasite, and predicted a C^J_ai for TXNPx of 0.17-0.2.
On the other hand, it has been proposed that the kinetics of redoxins such as TXN1 should be described using a mass-action equation, since Michaelis-Menten kinetics appears to be an inaccurate descriptor of redoxin activities [38]. Thus, the model was modified with a reversible mass-action equation for TXN1, which required parameterization of the mass-action k values of the TXN1 and NADPH supply reactions (described in SM2). With this modification, the model closely predicted the pathway flux (6.4 nmol/min x mg cell protein) and the C^J_ai values for TXN1 (0.74) and TXNPx (0.2), as well as for TryR (C^J_ai = 0.001) and the NADPH supply (C^J_ai = 0.06).
3.9. Resistance to Bnz and peroxides
γECS overexpression led to increased T(SH)2 synthesis and T(SH)2 pool, whereas TXN1 overexpression led to an increase in the peroxide reduction flux. To determine whether these changes in the parasite antioxidant metabolism could affect other cellular functions, Wt, mock and OE-clones were exposed to the antichagasic drug Bnz, alone or in combination with BSO. No statistically significant differences in the IC50 on growth were observed between mock and the OE-clones using Bnz alone, except for OE-TryR, which was more sensitive (Fig. 4A, Table 4). For comparative purposes, the IC50 of Wt epimastigotes was determined, which was higher than that of mock cells (Table 4). A possible explanation for the difference in their IC50 values is that mock cells (and OE-clones) were simultaneously exposed to G418 and Bnz, whereas Wt cells were not treated with the antibiotic.
The susceptibility of the cells to Bnz was increased by exposing them to a combination of this drug with 0.1 mM BSO (Fig. 4B). It was previously demonstrated that BSO inhibits not only γECS but also TryS [29]. The reported IC50 on growth for BSO alone in Wt and OE-clones is > 3.3 mM [29]; hence, BSO at a concentration of 0.1 mM is expected to have no effect on parasite growth by itself; nonetheless, BSO could potentiate the Bnz effect. With the Bnz + BSO combination, only OE-TXN1 showed significantly increased resistance compared to mock parasites (Fig. 4B and Table 4).
Next, the parasites were exposed to H2O2. OE-TryR3 and OE-TXN1-2 clones were more resistant than the other clones analyzed (Fig. 4C, Table 4), with IC50 values higher than 250 μM.
[Table 3 (values not shown): Ex vivo, in vitro and kinetic modeling results for the T(SH)2-dependent peroxide reduction pathway. Columns: Ex vivo; In vitro pathway reconstitution; Predictions of the kinetic model. Kinetic parameters as in Table S2.3 in SM2.]
3.10. HFF-1 cell infection
To investigate whether these antioxidant enzymes are involved in the infection process of T. cruzi, the infectivity of OE parasites was assessed. Epimastigotes have to differentiate into trypomastigotes in order to acquire the capability of invading host cells; therefore, differences in infectivity in a primary infection with epimastigotes may be due to differences in differentiation, leading to misinterpretation. Thus, to rule out any epimastigote involvement, trypomastigotes derived from the secondary infection were used to assess infectivity in tertiary infections of non-phagocytic cells, since these cells are major targets for T. cruzi infection in vivo. After 2 h of interaction between parasites and HFF-1 cells, the invasion capability of the trypomastigotes was analyzed at 18 h post-infection (hpi) by determining the number of infected cells. The results showed that Wt and mock parasites have similar infection rates; in contrast, OE-γECS trypomastigotes displayed a significantly (p < 0.01) higher infection rate of 30%, whereas OE-TryS (32%, p < 0.05) and OE-TryR (34%, p < 0.05) trypomastigotes showed only a tendency toward higher infection rates in comparison to controls (Fig. 5A).
Afterwards, the evolution of the infection was analyzed by determining the number of intracellular amastigotes, as indicative of the transformation of trypomastigotes into amastigotes inside the cell (at 18 hpi) and of amastigote proliferation (at 48 and 66 hpi). There were no differences between controls and OE-parasites, showing that they were able to transform and replicate to the same extent (Fig. 5B).
Finally, to determine whether the overexpression of any of the enzymes confers an advantage in parasite burst, the trypomastigotes released at 4 and 5 dpi were counted (Fig. 5C). Remarkably, OE-TXN1 parasites showed a 3-fold increase in trypomastigote burst vs. the other OE- and control parasites.
4. Discussion
In the studies on T(SH)2 metabolism in T. cruzi reported so far, the pathway enzymes have mostly been analyzed individually, focusing on the effects of their manipulation on some metabolic or physiological functions, or on characterizing the resulting phenotypes (for instance [19,36,39,40]). Therefore, the aim of the present work was to perform an integral analysis of the complete pathway by modulating several enzyme activities in the parasites and performing parallel determinations of enzyme activities, metabolite concentrations and pathway fluxes. This approach made it possible to identify which enzymes have most of the control on the T(SH)2 synthesis and peroxide reduction pathways, and why, and to establish whether their degree of control correlates with essential physiological functions such as peroxide management, antichagasic drug resistance and infectivity.
4.1. In silico and ex vivo flux control coefficients of the T(SH)2 synthesis pathway
The high control attained by γECS may be explained by its very likely low activity in the parasites (< 2 nmol/min x mg soluble cell protein), a value predicted from the ex vivo T(SH)2 synthesis fluxes using the OE-clones (section 3.5) and by pathway modeling (section 3.7). Determination of the basal γECS and TryS activities in trypanosomatids is not a trivial task; the high spurious endogenous ATPase activity makes it cumbersome to discern specificity using enzymatic coupled assays. However, we were able to determine the TryS activity in Wt epimastigotes by HPLC using appropriate control reactions; the value was remarkably similar to that initially predicted by pathway modeling. Recently, a basal TryS activity in T. cruzi epimastigotes (Silvio strain) of ≈8 nmol ATP transformed to ADP/min x mg cell protein, determined by an end-point colorimetric assay of Pi release, was reported [36]. Unfortunately, control enzymatic reactions, such as whether the ATPase activity was subtracted or whether the assay was conducted under initial-velocity conditions with respect to protein sample, substrate saturation and time, were not described; therefore, that basal TryS activity value may have been overestimated. To our knowledge, our study represents the first significant effort to determine the activities of the T(SH)2 synthesis enzymes within trypanosomatids. Unfortunately, γECS activity could not be determined by HPLC due to the overlapping of the γEC and GSH peaks.
On the other hand, OE-TryS epimastigotes did not increase the T(SH)2 content above the wild-type level under any condition. A similar observation was made in [36] when TryS was overexpressed in epimastigotes of the Silvio strain. In the latter study, the authors proposed that polyamine uptake exerted a higher metabolic control of the pathway; however, this seemed unlikely, because Spd supplementation of our OE-TryS clones did not bring about a further increase in T(SH)2. Since in the present study a high rate of T(SH)2 degradation was observed in the OE-TryS soluble cell protein extracts under the conditions of the ex vivo flux determination, it was hypothesized that the unaltered T(SH)2 levels in the OE-TryS1 cells could be due to simultaneous active T(SH)2 synthesis and degradation. This is supported by the previously reported in vitro TryS amidase activity [41]. It is clear that further experimental analyses in intact cells are required to test this hypothesis, which, however, are beyond the main objective of the present study. Moreover, T(SH)2 degradation was not detected in Wt and mock parasites, which have wild-type TryS levels, suggesting that T(SH)2 degradation has no physiologically relevant role at normal TryS levels.
Finally, the predicted control exerted by CysT depended on the availability of external Cys surrounding the parasite; the flux control could be low for the intracellular parasite stages in mammalian cells, since Cys concentrations higher than those in blood can be found in the cytosol of human host cells [42]. However, further experimentation is required to establish the control exerted by the Cys-supplying reactions (such as Cys transport and other metabolic pathways like de novo Cys synthesis and trans-sulfuration) on T(SH)2 synthesis.
In silico and ex vivo flux control coefficients of the T(SH) 2 -dependent peroxide reduction pathway
The flux control distribution analysis by in vitro reconstitution of the pathway with the recombinant enzymes, previously reported by our group [14], indicated that both TXN1 and TXNPx showed flux control coefficients of 1.0, whereas that of TryR was 0.2. In that experimental setting, the sum of the flux control coefficients was close to 2, not one as implied by the 'classical' summation theorem of MCA. The explanation is that, unlike canonical metabolic pathways, this is an electron-transfer pathway in which each individual redox reaction (process) compulsorily involves two enzymes, leading to a stoichiometry of one reaction/two enzymes (and a second-order dependence of the reaction on enzyme concentration) instead of the usual one reaction/one enzyme (and a first-order dependence on enzyme concentration). Therefore, the sum of the enzymes' flux control coefficients on the transfer of groups, as in redox pathways, must add up to two, whilst the sum of the flux control coefficients on the whole process of peroxide reduction remains one (the pathway flux maintains a first-order dependence on enzyme activities) [43]. Kinetic modeling of the redox pathway, reported here for the first time, allowed us to obtain the C^J_ai values on the peroxide reduction process, dissecting C^J_ai values of 0.11-0.16 for TryR, 0.64-0.73 for TXN1 and 0.17-0.2 for TXNPx, considering that the sum of all C^J_ai must add up to 1.0. These in silico predicted values were in agreement with those obtained here by titration of the activity in the parasites, whose C^J_ai values were 0.73 and 0.15 for TXN1 and TryR, respectively. Moreover, overexpression of TXN1, but not of TryR, led to steady increases in the pathway flux, which agreed with the high control attained by TXN1. Therefore, variations in TryR activity do not change the hydroperoxide reduction flux, but variations in TXN1 do, demonstrating that TXN1 has high pathway control. Regarding TXNPx, its overexpression in the parasites was not addressed in this work; however, in a mixed reconstituted system using soluble cell protein extracts of mock and Wt cells, the ex vivo peroxide reduction flux was titrated with recombinant TXNPx. The C^J_ai predicted by this strategy was similarly high to that of TXN1. This result suggested that the control of the flux is shared equally by TXN1 and TXNPx.

[Fig. 5 legend: Results are the mean ± SD of three experiments, each started from an independent primary infection with epimastigotes, except for C, in which n = 2 but the difference between the values from the two experiments was less than 30%. **p < 0.01, *p < 0.05, Student's t-test for non-paired samples versus mock.]
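For reference, a standard formulation of the metabolic control analysis (MCA) quantities discussed above; the notation follows common MCA usage rather than any equation reproduced from the source:

```latex
% Flux control coefficient of enzyme a_i on pathway flux J (standard MCA definition)
C^{J}_{a_i} = \frac{\partial \ln J}{\partial \ln a_i}

% Classical summation theorem for a canonical pathway
\sum_i C^{J}_{a_i} = 1

% For the group-transfer (redox) pathway discussed here, in which each reaction
% involves two enzymes, the coefficients on the individual redox reactions sum
% to two, while control on the overall peroxide reduction still sums to one [43]:
\sum_i C^{J}_{a_i} = 2 \ \text{(individual redox reactions)}, \qquad
\sum_i C^{J}_{a_i} = 1 \ \text{(overall peroxide reduction)}
```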
TXN1 and TXNPx are abundant proteins in trypanosomatids (1-6% of the total cellular protein) [20,44,45]. In T. cruzi, the values calculated from activity were 0.1% and 1% for TXN1 and TXNPx, respectively, whereas the TryR protein content was 0.02% [14]. Reduction of TXN1 by T(SH)2 is the step with the lowest catalytic efficiency in the TXN-dependent peroxide reduction pathway [14,45], which may explain the high control that TXN1 exerts on the pathway flux. Certainly, TXNPx overexpression prompts increased resistance to peroxides [19,39], and infective stages overexpress this enzyme [20,21]. However, in those reports TXN1 was not evaluated, and a higher TXN1-TXNPx stoichiometric coupling may favor higher peroxide reduction fluxes. On the other hand, despite its lower protein content, TryR exhibited a comparatively high activity in the cells and a high catalytic efficiency, as well as a lack of regulatory properties, which makes it a low-controlling enzyme.
It should also be emphasized that the kinetic properties of the enzymes, analyzed separately, reveal little about how and by which steps the pathway is controlled. However, when all pathway enzymes (i.e., at the content of active enzyme present in the cells) are allowed to interact with each other and with all ligands (in their physiological concentration ranges), the most controlling steps can be identified and the control mechanisms clearly emerge. Such controlling enzymes can now be proposed as very attractive targets for therapeutic intervention, since even a modest inhibition of them can have the greatest negative effect on the pathway flux and metabolite levels [23].
Fluxes and resistance to hydroperoxides and anti-chagasic compounds
Wt and mock cells exhibited ex vivo hydroperoxide reduction fluxes of ≈11 nmol/min × mg soluble cell protein, which is similar to those reported for intact cells of the T. cruzi Y strain (3.3-12.9 nmol H2O2/min × mg cell protein) [46]. Remarkably, these maximal outputs of peroxide reduction were one order of magnitude higher than those of T(SH)2 synthesis (0.6 ± 0.2 nmol/min × mg soluble cell protein) in Wt cells. These results indicated a 20-fold lower maximal output of de novo T(SH)2 synthesis versus its usage for peroxide reduction. In consequence, the peroxide detoxification pathway, with its highly active and efficient enzymatic machinery, seems to function during immediate and acute (short-term) responses to oxidants, whereas changes in the T(SH)2 synthesis flux are expected to have their metabolic effect over a longer period of time after the insult (long-term response).
TXN1 down-regulation in T. brucei causes growth arrest and increased sensitivity to H2O2 [47,48]. So far, there have been no reports on the effects of TXN1 overexpression in trypanosomatids. Remarkably, our results showed for the first time that overexpression of TXN1 induced higher resistance to this peroxide (Fig. 4). Furthermore, the OE-TryR cells also showed enhanced H2O2 resistance, despite TryR having low control on the flux. As OE-TryR epimastigotes showed no changes in the TXN1 and TXNPx activities that could contribute to the peroxide resistance, the effect was probably due to the two-orders-of-magnitude higher TryR activity compared to the other OE parasites. On the other hand, no increased H2O2 resistance was observed in the OE-γECS and OE-TryS epimastigotes. The lack of resistance in our OE-TryS epimastigotes contrasts with the result obtained by others [36], where a higher (70%) H2O2 resistance of epimastigote growth was found when TryS was overexpressed; however, in that study the activities of TryR, TXN and TXNPx were not assessed to verify that they were unchanged. Furthermore, our results with TXN1 overexpression (≈10-fold) were similar in terms of hydroperoxide resistance to those reported when TXNPx was overexpressed [19,20,39], where the authors found that a 2.5-fold TXNPx overexpression increased by 50% the viability of epimastigotes exposed to H2O2 and helped the cells to contend with ONOO−. A plausible explanation is that both enzymes interact tightly and function in channeling, with TXN1 being the only reducing partner for TXNPx. However, the higher activity in the parasite and the higher catalytic efficiency of TXNPx mean that it will require higher levels of inhibition than TXN1 to affect the peroxide reduction pathway. Thus, TXN1 (or the TXN1-TXNPx interaction) may be a better target for inhibiting the antioxidant machinery, as well as many other processes in which TXN1 is involved (e.g., DNA synthesis, polyamine metabolism, protein translation and degradation [11,49,50]). In this regard, a high-throughput screening of a compound library identified some compounds that preferentially inhibited TXN in the T. brucei peroxide reduction pathway over its human counterpart, thioredoxin. The compounds showed a selectivity index > 5 for bloodstream trypomastigotes versus HeLa cells [51], rendering TXN1 a suitable target for therapeutic intervention.
Regarding TryR, it was demonstrated here that its control on the hydroperoxide reduction flux is low because it is one of the most efficient enzymes of the pathway. Consistently, its overexpression in the T. cruzi Silvio strain showed no effect on the cells' hydroperoxide reduction capacity [40], and it has been demonstrated that, in order to affect the redox homeostasis in T. brucei, TryR has to be decreased by more than 90% of its wild-type level [52]. Therefore, TryR seems to be a very difficult target for therapeutic intervention.
Remarkably, TXN1 overexpression induced increased resistance to Bnz (in combination with BSO). The proposed mode of cytotoxic Bnz action is adduct formation with thiol metabolites such as GSH, T(SH)2, and other molecules, rather than ROS formation [53][54][55]. Our results suggested that a small dithiol protein such as TXN1 may also provide some protection to the parasite against Bnz (and the effects of BSO), a hypothesis that needs to be analyzed experimentally in further detail. In this regard, it has recently been proposed that some small proteins such as TXN or glutaredoxins (Grx) may contribute to the reduction of glutathione disulfide (GSSG) in organisms lacking glutathione reductase [56]. The authors found that reduction of GSSG directly by T(SH)2 is a slow process, whereas it is faster when mediated by Grx and/or TXN. If both proteins are present at the physiological concentrations analyzed, the reduction is preferentially performed by Grx (75%) rather than by TXN (25%). Whether TXN overexpression could favor GSSG reduction was not evaluated here.
Parasite infectivity
As an intracellular parasite, T. cruzi faces oxidative stress as a major challenge, suggesting that its antioxidant enzymes may contribute to its survival and persistence. In fact, some of these enzymes have been studied during the infection process of the parasite in phagocytic and non-phagocytic cells, demonstrating their participation in survival, replication and differentiation of the parasite and leading to their proposal as virulence factors [19][20][21]. Therefore, we determined the role of γECS, TryS, TryR and TXN1 in the infection process of T. cruzi in non-phagocytic cells, since it is only in these cells that the pathogenesis of the disease is established. It has been determined for some T. cruzi strains that replication within in vitro cultured host cells occurs from 18-72 hpi [57]; for this reason, and based on some preliminary observations, the infection process was analyzed over time by examining the percentage of infected cells at 18 hpi, as well as the number of internal parasites at 18, 48 and 66 hpi. In addition, the trypomastigotes that burst into the extracellular medium were monitored at 4 and 5 dpi.
At 18 hpi, OE-γECS trypomastigotes were more infective than Wt or mock trypomastigotes (Fig. 5A), suggesting that this enzyme may contribute to invasion. These findings concur with those reported for Leishmania infantum, where inactivation of one allele of the γECS genes resulted in decreased GSH and T(SH)2 contents and decreased survival inside activated macrophages [58], which in turn showed an increased production of reactive oxygen and nitrogen species. Moreover, the γECS content is also increased in naturally antimony-resistant isolates of L. donovani [59,60].
On the other hand, OE-TryS and OE-TryR trypomastigotes showed a tendency toward higher infectivity (Fig. 5A). It has been reported that TryS (and TXNPx) are involved in infectivity in mouse models and cell cultures; hence, both enzymes have been proposed as virulence factors [20,21]. These two enzymes have also been found to be overexpressed in (i) T. cruzi acute vs. chronic Chagas disease isolates [61]; (ii) virulent vs. attenuated strains; and (iii) metacyclic trypomastigotes vs. epimastigotes (independently of the strain virulence) [21]. In the present work, the correlation between TryS overexpression and infectivity was low, most probably because of the side effects observed in the OE clones. On the other hand, Piacenza et al. found no correlation between the TryR protein content and the degree of virulence in different T. cruzi strains [21]; moreover, Kelly et al. [40] also found no correlation between TryR overexpression and peroxide resistance. Here, in contrast, OE-TryR showed an increased capacity to cope with H2O2 (Fig. 4C), which could contribute to the slightly increased infectivity at 18 hpi (Fig. 5A). The different results may be related to the higher overexpression level of 50-fold attained in our OE-TryR clones in comparison to the 15-fold attained in the transfectant of the Kelly et al. study. Lastly, to the best of our knowledge, this is the first time that the correlation between TXN1 (the electron donor of TXNPx) and infectivity has been analyzed, finding a statistically non-significant correlation between its overexpression and the capacity to infect cells at 18 hpi.
The different OE parasites showed a similar capacity to transform into amastigotes, because at 18 hpi all parasites exhibited the same number of internal amastigotes (Fig. 5B). In turn, the internal parasites showed a similar capacity to replicate within the cell at 48 and 66 hpi (Fig. 5B). However, a remarkable finding emerged at 4 and 5 dpi, when OE-TXN displayed a notable increase in trypomastigote burst (Fig. 5C). A possible explanation is that the OE-TXN amastigotes have a higher rate of differentiation to trypomastigotes; however, this hypothesis has to be analyzed in more detail to understand how TXN could be involved in this process.
Conclusion
The results of the present investigation allowed us to identify, by both in silico and ex vivo experimentation, that γECS and TXN1 are enzymes with high control on the T(SH)2 synthesis and peroxide reduction fluxes, respectively. TryS makes a lower, but meaningful, contribution to the control due to its very low activity in the parasite; thus, a slight inhibition of this enzyme may also negatively impact the pathway flux. Moreover, when T(SH)2 metabolism is compromised by thiol-conjugating drugs such as Bnz, high T(SH)2 or thiol-protein contents may confer drug resistance. Hence, to prevent parasite resistance against this type of anti-chagasic drug, γECS and TXN1 (and TryS) activities should be blocked. Therefore, therapeutic targeting of γECS and TXN1 will affect parasite viability more severely than intervention at other, lower-controlling pathway steps, which would require much higher levels of inhibition. In addition, despite its lower control, TryS should still be considered an adequate drug target, since its specific inhibition will affect both the T(SH)2 synthesis and peroxide reduction pathways. Furthermore, γECS and TXN may contribute at different levels of the infection process, strengthening their candidacy as drug targets.
"year": 2019,
"sha1": "e7e8d07f58129d24c766787a923256557e1c6bcd",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.redox.2019.101231",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e7e8d07f58129d24c766787a923256557e1c6bcd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Safety and immunogenicity of a tetravalent and bivalent SARS-CoV-2 protein booster vaccine in men
The safety and immunogenicity of a protein-based tetravalent vaccine, SCTV01E, which contains the spike protein ectodomain (S-ECD) of the Alpha, Beta, Delta and Omicron BA.1 variants, are assessed and compared with the bivalent protein vaccine SCTV01C (Alpha and Beta variants) and a monovalent mRNA vaccine (NCT05323461). The primary endpoints are the geometric mean titers (GMT) of live virus neutralizing antibodies (nAb) to Delta (B.1.617.2) and Omicron BA.1 at day 28 post-injection. The secondary endpoints include safety, day 180 GMTs against Delta and Omicron BA.1, day 28 GMTs to BA.5, and seroresponse rates of neutralizing antibodies and T cell responses at day 28 post-injection. 450 participants, comprising 449 males and 1 female, with a median age (range) of 27 (18–62) years, are assigned to receive one booster dose of BNT162b2, 20 µg SCTV01C or 30 µg SCTV01E and completed a 4-week follow-up. All SCTV01E-related adverse events (AEs) are mild or moderate, and no Grade ≥3 AE, serious AE or new safety concerns are identified. Day 28 GMTs of live virus neutralizing antibodies and seroresponse rates against Omicron BA.1 and BA.5 with SCTV01E are significantly higher than those with SCTV01C and BNT162b2. These data indicate an overall neutralization superiority with tetravalent booster immunization in men.
More than three years after the COVID-19 pandemic began, the incessant evolution and emergence of new SARS-CoV-2 variants have held a tight grip on the world 1. Omicron and its sublineages have emerged as the most antigenically divergent variant to date, with >30 mutations in the spike protein, 15 of which are clustered in the receptor binding domain. Studies that investigated the effectiveness of primary and booster vaccination with approved vaccines have shown decreased efficacy against Omicron and its sublineages and waning immunity over time, although protection against hospitalization and severe disease is maintained [2][3][4][5][6][7].
A multivalent vaccine increases the diversity of antibody responses and may improve cross-strain protection. The WHO Technical Advisory Group on COVID-19 Vaccine Composition (TAG-CO-VAC) and the 175th meeting of the Vaccines and Related Biological Products Advisory Committee (VRBPAC) on June 28, 2022 recommended developing multivalent or broadly protective vaccines against current and future SARS-CoV-2 variants and updating vaccine strain compositions 8. Moderna recently reported encouraging immunogenicity data on mRNA-1273.211 (original and Beta variant), mRNA-1273.214 (original and Omicron B.1.1.529) and mRNA-1273.222 (original and Omicron BA.4/5) [9][10][11]. Likewise, Pfizer also reported on its bivalent mRNA vaccines (original and Omicron BA.1 or BA.4/5) 12. Both reports showed superiority of the neutralizing antibody (nAb) response against Omicron BA.1 and a similar nAb status against the original strain compared to their monovalent progenitor vaccines.
We have previously reported the results of three phase 1/2 safety and immunogenicity trials of a protein-based bivalent adjuvanted vaccine, SCTV01C, containing equal amounts of the spike protein ectodomain (S-ECD) of the SARS-CoV-2 Alpha and Beta variants. SCTV01C, administered as a two-dose primary series (NCT 05148091) in vaccine-naïve people and as one booster dose in people previously vaccinated with an inactivated vaccine (NCT 05043285) or an mRNA vaccine (NCT 05043311), demonstrated favorable safety and tolerability profiles in a total of 922 participants, and induced high levels of spike-protein-binding IgG and broad neutralizing antibody responses against the Alpha, Beta, Delta and Omicron variants [13][14][15]. On December 2, 2022, SCTV01C was granted Emergency Use Authorization (EUA) by the National Health Commission of the People's Republic of China as a booster dose, and as a primary dose for individuals who had already been infected during the COVID-19 pandemic.
SCTV01E was manufactured by the same process as SCTV01C but has a tetravalent design containing a blend of spike-ECD proteins derived from the SARS-CoV-2 variants Alpha (B.1.1.7), Beta (B.1.351), Delta (B.1.617.2), and Omicron BA.1, in a proportion of 1:1:1:3, with a total quantity of 30 μg. The selection of a 1:1:1:3 antigen ratio was based on empirical animal data indicating that a higher dose of the Omicron BA.1 antigen is required to elicit an optimal immune response as a booster vaccine against the newer BA.1 variant. Both SCTV01C and SCTV01E are adjuvanted with a squalene-based oil-in-water emulsion, SCT-VA02B, to boost the immune responses, and possess a trimerization auxiliary domain (T4-Foldon) to stabilize the trimeric protein conformation; they are temperature stable at 25 °C for over six months and at 2–8 °C for over 24 months 16,17.
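As a simple check of the stated composition, the 1:1:1:3 ratio over a 30 μg total implies the following per-antigen amounts (our arithmetic, not figures quoted in the trial report):

```latex
30\,\mu g \times \tfrac{1}{6} = 5\,\mu g \ \text{each for Alpha, Beta and Delta}, \qquad
30\,\mu g \times \tfrac{3}{6} = 15\,\mu g \ \text{for Omicron BA.1}
```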
Herein, we present the interim analysis results of the safety and immunogenicity of one booster dose of SCTV01E in people who had previously received authorized mRNA vaccines, using SCTV01C and a monovalent ancestral-strain mRNA vaccine as controls, from an ongoing phase 3 study.
Demographic and baseline characteristics
Between May 30, 2022 and September 28, 2022, 451 participants who had a prior diagnosis of COVID-19 and/or had received BNT162b2 vaccines were enrolled, and 149, 154 and 147 participants were assigned to receive one dose of BNT162b2, 20 µg SCTV01C or 30 µg SCTV01E, respectively (Fig. 1 and Supplementary Table 1); one participant withdrew before vaccination. Notably, six participants in the BNT162b2 group, four in the SCTV01C group, and eight in the SCTV01E group were excluded from the immunogenicity analysis due to missed or out-of-window scheduled visits (Fig. 1). Out of the 451 participants, only four had chronic medical conditions (diabetes), with three in the BNT162b2 group and one in the SCTV01C group (Supplementary Table 2). All participants completed the Day 28 visit, and the median (min, max) time of follow-up was 73 (60, 79) days (Fig. 1). Notably, no cases of COVID-19 infection were reported during the available follow-up period up to the point when the data were locked for analysis. The median (min, max) age was 27 (18, 62) years. 0.4%, 95.6%, and 4.0% of participants had previously received 1, 2 and 3 doses of an mRNA vaccine, respectively, and 3.5% of all participants had previously been diagnosed with COVID-19. The demographic and baseline characteristics were generally comparable across all groups. For all trial participants, the intervals between the investigational vaccination and the prior COVID-19 vaccination were 3-5 months (11.3%), 6-8 months (31.3%), 9-12 months (25.5%) and 13-24 months (31.9%). Regarding sex and gender, the study considered sex in its design and relied on self-reported information from participants. As only one female participant was enrolled, there were insufficient data to carry out a sex or gender analysis. Generally, participants in each group had similar intervals from prior vaccination (Supplementary Table 1).
Adverse events
For the SCTV01E group, all vaccine-related adverse events (AEs) were mild or moderate. There were no vaccine-related AEs with a frequency ≥10%, and no Grade ≥3 AEs, serious AEs (SAEs) or AEs of special interest (AESIs) were reported. The overall incidence of adverse reactions was similar or numerically lower with SCTV01E compared to SCTV01C and BNT162b2. In the BNT162b2, SCTV01C and SCTV01E groups, 25 (16.8%), 27 (17.5%) and 18 (12.2%) participants experienced at least one treatment-emergent adverse event (TEAE), and 19 (12.8%), 24 (15.6%) and 14 (9.5%) participants experienced at least one treatment-related adverse event (TRAE), respectively. The frequencies of solicited AEs were 15 (10.1%) in the BNT162b2 group, 19 (12.3%) in the SCTV01C group, and 7 (4.8%) in the SCTV01E group. The occurrence of vaccine-related unsolicited AEs within 28 days after the injection was also numerically lower in the SCTV01E group (4.8%) compared to the SCTV01C (7.5%) and BNT162b2 groups (6.0%). For all three groups, the most frequent solicited AEs included pain at the injection site, headache and pyrexia (Table 1 and Supplementary Fig. 1).
Post hoc analysis of nAb responses to Omicron BA.1 and BA.5
To analyze the influence of pre-existing nAb on vaccine immunogenicity, participants were divided into three groups based on their pre-dose GMT levels for each specific variant: a low baseline titer group (equal to or lower than four times the LLOQ, ≤80), a medium baseline titer group (80-320), and a high baseline titer group (>320) (Fig. 3). Day 28 GMTs of SCTV01E against BA.1 were 1627, 1347 and 2560, with 28.67, 5.58 and 2.67-fold changes over baseline, and against BA.5 were 2153, 2010 and 2765, with 28.51, 8.67 and 2.78-fold changes over baseline, for the low, medium and high baseline titer groups, respectively. The nAb responses with SCTV01E were consistently superior to those with SCTV01C and BNT162b2, irrespective of the baseline GMT levels of the participants. In addition, SCTV01E elicited relatively uniform GMTs across the different baseline groups, whereas BNT162b2 showed 2.55- and 2.79-fold lower titers in the low baseline groups than in the high baseline groups for Omicron BA.1 and BA.5 on day 28, respectively.
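A minimal sketch of the kind of calculation behind these figures (geometric mean titer, fold change over baseline, and the baseline stratification); the titer values and the LLOQ handling are illustrative assumptions, not trial data:

```python
import numpy as np

LLOQ = 20  # assumed lower limit of quantification for PRNT50 titers

def gmt(titers):
    """Geometric mean titer: exponential of the mean of log titers."""
    return float(np.exp(np.mean(np.log(titers))))

def baseline_group(titer):
    """Stratify by pre-dose titer: low (<= 4 x LLOQ), medium (80-320), high (>320)."""
    if titer <= 4 * LLOQ:
        return "low"
    return "medium" if titer <= 320 else "high"

# Illustrative paired day-0 / day-28 titers for a handful of participants.
day0 = np.array([40, 160, 640, 80, 320])
day28 = np.array([1280, 1280, 2560, 640, 2560])

print("day 28 GMT:", round(gmt(day28)))
print("GMT fold change over baseline:", round(gmt(day28) / gmt(day0), 2))
print("baseline groups:", [baseline_group(t) for t in day0])
```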
T cell responses
Peripheral blood mononuclear cells were collected to assess specific Th1 (IFN-γ release) and Th2 (IL-4 release) responses. At day 28 post-injection, the mean number of IFN-γ-expressing T cells had increased 1.3, 1.2, and 1.2-fold over baseline, and that of IL-4-expressing T cells 1.3, 1.1, and 1.4-fold over baseline, for the BNT162b2, SCTV01C and SCTV01E groups, respectively (Supplementary Fig. 5).
Discussion
A tetravalent protein-based vaccine, SCTV01E, designed to provide broad protection against SARS-CoV-2 variants, is currently being evaluated in an ongoing positive-controlled phase 3 trial, using its progenitor bivalent vaccine SCTV01C and an mRNA vaccine as the controls.
The tetravalent vaccine SCTV01E was developed as a modified version of the bivalent (Alpha + Beta) vaccine SCTV01C by adding two subsequent variants of concern, Delta and Omicron BA.1. During this clinical study, SCTV01C demonstrated significant cross-neutralizing capability against the Omicron BA.1 and BA.5 variants, which emerged two years after its initial development. SCTV01E showed an even greater breadth of cross-neutralizing capability against a variety of Omicron variants in pre-clinical studies 18. Seven clinical trials have been conducted for SCTV01C and/or SCTV01E, collectively demonstrating their potential as an important vaccine platform in the context of a challenging epidemiological situation in which multiple major variants are prevalent simultaneously. The flexibility of this platform enables the rapid replacement of up to four new variant antigens to adapt to immune-evading variants. The findings of this investigation suggest that a tetravalent recombinant protein vaccine may be an effective approach to address both current and potential future epidemiological challenges. Currently, a phase 3 efficacy study with SCTV01E is underway in China (NCT05308576).
The results of the interim analysis indicate that the tetravalent vaccine SCTV01E, given to individuals who previously received two or three doses of an authorized mRNA vaccine, has a clinically acceptable safety and tolerability profile. All vaccine-related AEs were mild or moderate (Grade 1-2). There were no Grade ≥3 AEs, SAEs or AESIs reported in the SCTV01E group. The incidence of adverse reactions was similar or numerically lower with SCTV01E compared to SCTV01C and BNT162b2. These findings are consistent with previous clinical studies of SCTV01C [13][14][15], which identified no new safety concerns. It is important to note that during the repeat-dose toxicity test of SCTV01E in rats, certain abnormalities were observed. These included increases in neutrophils and eosinophils, fibrinogen, and globulin, as well as decreases in reticulocyte and albumin levels. In addition, glomerulonephritis was observed in the kidneys of two out of twenty rats; however, these changes were not observed in the present trial.
In this study, 16.8% of participants in the BNT162b2 group reported experiencing at least one treatment-emergent AE, and 19 individuals (12.8%) experienced at least one treatment-related AE. The total frequency of AEs in this study is comparable to that reported in a phase 3 trial of a BNT162b2 booster, which found that, among 5050 participants, 25.0% experienced at least one AE after receiving a third dose of the vaccine, with 23.4% of these being related to vaccine administration 19. However, these incidence rates are much lower than those reported in the Vaccines and Related Biological Products Advisory Committee Briefing Document (17 September 2021), which showed that, within one month following the administration of a third dose of the BNT162b2 vaccine, 77.2% of 306 participants reported a systemic reaction 20. Possible reasons for these inconsistencies include differences in the definition, measurement, and reporting of AEs across studies, as well as variations in population characteristics such as age distribution, comorbidities, prior vaccination, and infection history. It is worth noting that the high rate of prior infections and the predominance of young male participants in this trial might have contributed to the lower occurrence of AEs observed.
A strong correlation between the viral neutralizing antibody level and protection from symptomatic SARS-CoV-2 infection has been shown in vaccinated people 21,22. This study evaluated the live virus nAb GMTs and seroresponse rates against the Omicron BA.1, BA.5 and Delta variants. At day 28 post-vaccination, the GMTs of nAb against Omicron BA.1 showed 4.06, 3.60, and 5.96-fold changes over baseline, and those against Omicron BA.5 showed 4.34, 3.19 and 4.94-fold changes over baseline, in the BNT162b2, SCTV01C and SCTV01E groups, respectively. The superiority immunogenicity objectives were met for the GMRs of SCTV01E/BNT162b2 and SCTV01E/SCTV01C against Omicron BA.1 and BA.5. Similarly, SCTV01E elicited significantly higher seroresponse rates for Omicron BA.1 than BNT162b2 and SCTV01C, based on the predefined definition of seroresponse. While statistically significant differences in post-booster antibody titers were observed between study groups, further clinical evidence is needed to demonstrate whether the numerically higher antibody titers translate into superior clinical efficacy or durability of protection.
The data showed highly diverse neutralizing antibody titers to both the Delta and Omicron variants at baseline. We conducted post hoc analyses to evaluate the impact of pre-existing SARS-CoV-2 immunity on the nAb responses. The participants were assigned to three groups based on their pre-dose GMT levels. The nAb responses with SCTV01E were consistently superior to those with SCTV01C and BNT162b2, irrespective of the baseline GMT levels of the participants. Notably, SCTV01E induced high GMTs in participants with low baseline titers that were comparable to those in participants with high baseline titers.
The capability to boost immune responses in persons with low baseline nAb levels is important, given that breakthrough infections can occur in vaccinated persons. To a large extent, individuals with low nAb levels have a much higher risk of subsequent infection than individuals with high nAb levels. The underlying mechanisms of the enhanced nAb responses with multivalent vaccines have yet to be elucidated, but could be associated with the generation of immune memory and the evolution of humoral responses 23.
The study had several limitations. First, the trial was conducted in an environment of high Omicron variant circulation, and a large portion of the trial participants might have had asymptomatic infections according to published reports 24,25. Our study revealed a wide range of baseline neutralizing antibody titers against the Omicron variant; notably, the GMT levels were considerably higher than those reported in earlier studies of individuals vaccinated with two doses of mRNA vaccines. However, there was no standard way to identify asymptomatic individuals. Second, the immunogenicity of the booster vaccination was assessed over a short period, so immune persistence data are not yet available. In addition, the study was designed to evaluate the prominent circulating variants; thus, nAb responses to the ancestral SARS-CoV-2 (D614G) and the vaccine prototype variants (Alpha and Beta) were not assessed. Finally, the study's sample population was mostly composed of young male adults. This lack of diversity may affect the generalizability and applicability of the results. Although previous clinical studies involving SCTV01C did not reveal any significant differences in AEs or immunogenicity between male and female participants or between younger and older adults, further investigations of SCTV01E with a more balanced demographic representation are necessary.
In summary, the 30 μg tetravalent protein vaccine SCTV01E, when administered to individuals who previously received mRNA vaccines, had a clinically acceptable safety and tolerability profile and induced uniformly high nAb responses against the Omicron BA.1, BA.5 and Delta variants, showing immunogenicity superiority over the bivalent vaccine SCTV01C and BNT162b2. The tetravalent vaccine may be a new tool for responding to the continuous emergence of SARS-CoV-2 variants.
Study design and participants
This ongoing randomized, double-blinded, positive-controlled phase 3 booster study is being conducted at Al Kuwait Hospital, Emirates Health Services in Dubai, and Burjeel Medical City in Abu Dhabi, United Arab Emirates (UAE); the data reported here were collected between May 30, 2022 and September 28, 2022. The study included two cohorts that aimed to evaluate the immunogenicity and safety of one booster dose of SCTV01E administered to adults who previously received authorized mRNA vaccines or inactivated vaccines. The interim analysis results of cohort 2 are presented here. Details of the inclusion and exclusion criteria are provided in the protocol (Supplementary Materials). Briefly, eligible participants for cohort 2 were adults aged 18 years and older who had previously been vaccinated with 1, 2 or 3 doses of an mRNA COVID-19 vaccine (Pfizer BNT162b2 or Moderna mRNA-1273) and/or had been diagnosed with COVID-19 3-24 months before. Participants who tested positive (real-time polymerase chain reaction assay) for COVID-19 during the screening period, had fever within three days, had a history of allergic reactions to any vaccine or drug, had a history of infection or disease related to severe acute respiratory syndrome (SARS) or Middle East respiratory syndrome (MERS), or were HIV-positive were excluded. The trial was conducted in accordance with the ethical requirements of Good Clinical Practice and the Declaration of Helsinki. The protocol, informed consent and amendments were approved by the Ministry of Health and Prevention (reference number: RCMOHP/CT1/0123/2021). All participants enrolled voluntarily and provided written informed consent before any study procedure.
Randomization and masking
Eligible participants were randomized into three groups to receive one dose of BNT162b2 (0.3 mL), 20 µg SCTV01C (0.5 mL), or 30 µg SCTV01E (0.5 mL) at a ratio of 1:1:1 using an Interactive Web Response System (IWRS). The participants were stratified by age (18-54 years, ≥55 years), the number of doses of COVID-19 vaccine previously received (0, 1, 2, or 3), previous COVID-19 history (yes or no), the interval between the previous vaccination and the study vaccination (3-5 months, 6-8 months, 9-12 months, 13-24 months) and baseline nAb level. The randomization codes were generated by block randomization using SAS software (version 9.4). The syringes used for injection were identical in appearance and covered with stickers to mask the solution inside. All participants, investigators, clinical research associates, data analysts, and laboratory staff were blinded to group assignment.
Procedures
SCTV01C and SCTV01E are recombinant protein vaccines developed and manufactured by Sinocelltech Ltd. in Chinese hamster ovary (CHO) cells (these cell lines have not been identified as misidentified by the International Cell Line Authentication Committee) according to good manufacturing practice guidelines. The main active ingredients of SCTV01C comprise the trimeric spike protein S-ECD of the SARS-CoV-2 variants Alpha (B.1.1.7) and Beta (B.1.351). SCTV01E has a tetravalent design containing the S-ECD sequences of the Alpha, Beta, Delta, and Omicron BA.1 variants. Both vaccine candidates are adjuvanted with a squalene-based oil-in-water emulsion, SCT-VA02B. SCTV01C and SCTV01E were supplied in single-use vials as a sterile, emulsified, white solution, 0.5 mL/vial, stored and transported at 2-8 °C protected from light, with a validity period of 24 months. BNT162b2 was used as the positive control, and its dosage form, packaging and route of administration were consistent with those of the study vaccines.
One day before vaccination, all participants received a full physical examination and provided blood samples for baseline safety laboratory testing. Participants were randomized into three subgroups to receive one dose of BNT162b2, 20 µg SCTV01C, or 30 µg SCTV01E at a ratio of 1:1:1. Post-injection, solicited adverse events (AEs) within 7 days, unsolicited AEs within 28 days, and SAEs and AESIs within 180 days were monitored and recorded. AEs and abnormal changes in laboratory tests were graded according to the FDA standard 26. Serum samples were collected to evaluate the geometric mean titers (GMTs) of nAb activity against live SARS-CoV-2 Delta, Omicron BA.1 and BA.5 variants on days 0, 28, and 180 using the plaque reduction neutralization test (PRNT). The peripheral blood mononuclear cells of the first 150 participants were collected, and Th1 (interferon gamma (IFN-γ) release) and Th2 (interleukin-4 (IL-4) release) responses were measured before and at day 28 post-boost, using the T-SPOT.COVID test and an enzyme-linked immunospot (ELISpot) IL-4 COVID TEST assay. For the Th1 (IFN-γ release) test, spike antigens were used as stimulation antigens along with bovine serum albumin and antimicrobial agents. For the Th2 (IL-4 release) test, spike protein peptides were used for stimulation. The live virus neutralization and ELISpot assays were performed according to the supplier's guidelines (Biogenix, Abu Dhabi, United Arab Emirates) as previously described 27,28. In detail, the PRNT assay was verified and performed by Biogenix Labs and G42 Healthcare. The serum samples were first subjected to a 30-minute incubation at 56 °C in a water bath. The sera were then initially diluted five times and subsequently serially diluted from 1:10 to 1:640. These dilutions were mixed with the SARS-CoV-2 variants (Delta, Omicron BA.1, and BA.5) and transferred in duplicate to sub-confluent Vero E6 cell monolayer plates. Following an incubation period of 3-5 days at 37 °C and 5% CO2 in 6-well plates, antibody titers were determined as the highest serum dilution that resulted in a >50% (PRNT50) reduction in the number of plaques compared to the negative control. The negative control had a plaque count ≥50, while the positive control had a plaque count ≤50% of that of the negative control. A cut-off for positivity was established at 1:20. ELISpot assays were performed on cryopreserved peripheral blood mononuclear cells (PBMCs). The cells were rapidly thawed and rested overnight before being stimulated with a pool of peptides containing spike antigens (from the SARS-CoV-2 ancestral strain), bovine serum albumin, and antimicrobial agents. The cells were then incubated at 37 °C for 24-48 h. Phytohemagglutinin (PHA)-stimulated cells were used as a positive control during the assay. To detect IFN-γ and IL-4, mouse monoclonal antibodies (Oxford Immunotec UK, lot numbers: VEC7000001 and VEC7000003, catalog number: COV.435/300) were used following the manual protocols. The spots secreted by antigen-specific T cells were counted directly from the well using a stereomicroscope or from a digital image captured with a microscope or plate imager. The analysis included only subjects with both baseline and post-baseline data, and the counting results were reported as spots per million PBMCs.
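A minimal sketch of the PRNT50 read-out logic described above (highest serum dilution giving >50% plaque reduction versus the negative control, with a 1:20 positivity cut-off); the plaque counts are illustrative, not assay data:

```python
# Serial serum dilutions and the plaque counts observed at each,
# plus the plaque count of the negative (virus-only) control.
dilutions = [10, 20, 40, 80, 160, 320, 640]  # reciprocal dilution factors
plaques = [3, 5, 12, 22, 30, 41, 52]         # illustrative counts
neg_control = 55                             # must be >= 50 for a valid run

def prnt50(dilutions, plaques, neg_control):
    """Return the highest reciprocal dilution with >50% plaque reduction."""
    passing = [d for d, p in zip(dilutions, plaques) if p < 0.5 * neg_control]
    titer = max(passing) if passing else None
    # Titers below the 1:20 positivity cut-off are reported as negative.
    return titer if titer is not None and titer >= 20 else None

titer = prnt50(dilutions, plaques, neg_control)
print(f"PRNT50 titer: {'1:%d' % titer if titer else 'negative (<1:20)'}")
```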
Abbreviations: AE, adverse event; TEAE, treatment-emergent adverse event; TRAE, treatment-related adverse event; IP, investigational product; AESI, adverse event of special interest.
Fig. 2 | GMTs of live virus nAb against Omicron BA.1, BA.5, and Delta. A The geometric mean titers (GMTs) of live virus neutralizing antibodies (nAb) against Omicron BA.1 at day 28 post-injection, measured using the 50% plaque reduction neutralization test (PRNT50). B The GMTs of nAb against Omicron BA.5 at day 28 post-injection. C The GMTs of nAb against the Delta variant at day 28 post-injection. For (A-C), bars show the GMTs with 95% CIs at day 0 and day 28. Dots represent the values for individual participants. The centre of the error bars represents the GMT. Only those with available baseline and post-baseline data were included in the BNT162b2 group (grey), SCTV01C group (blue) and SCTV01E group (red). GMR, geometric mean ratio; LS GMR, least squares geometric mean ratio. Note: subjects who were infected with COVID-19 between Day 0 and Day 28 were excluded from the analysis. Source data are provided as a Source Data file. *superiority; +non-inferiority.
Fig. 3 | GMTs of neutralizing antibodies against live Omicron BA.1 and BA.5 in groups with low, medium and high baseline titers. A The geometric mean titers (GMTs) of live virus neutralizing antibodies (nAb) against Omicron BA.1 at day 28 post-injection in groups with low, medium and high baseline titers. B The GMTs of nAb against Omicron BA.5 at day 28 post-injection in groups with low, medium and high baseline titers. For (A, B), participants from the BNT162b2 group (grey), SCTV01C group (blue) and SCTV01E group (red) were assigned to three groups based on their GMT levels at baseline. GMTs at baseline equal to or lower than 4 times the LLOQ (80), in the range of 80-320, and over 320 were considered low, medium and high baseline titers, respectively. The centre of the error bars represents the GMT, and error bars indicate the 95% confidence interval. Note: subjects who were infected with COVID-19 between Day 0 and Day 28 were excluded from the analysis. Source data are provided as a Source Data file.
Table 1 | Summary of AEs
Table 2 | Seroresponse to Omicron BA.1, BA.5 and Delta. Seroresponse for participants with a pre-dose titer < LLOQ is defined as a post-dose titer equal to or above the LLOQ; seroresponse for participants with a pre-dose titer ≥ LLOQ is defined as a ≥4-fold increase in titer compared to the pre-dose titer.
a 95% CI of the seroresponse rate is based on the Clopper-Pearson exact method. b The comparison is based on the Cochran-Mantel-Haenszel (CMH) test stratified by the randomization stratification factors.
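A minimal sketch of the seroresponse definition and the Clopper-Pearson exact interval used in Table 2; the counts are illustrative assumptions, not trial data:

```python
from scipy.stats import beta

LLOQ = 20  # assumed lower limit of quantification

def is_seroresponse(pre, post):
    """Table 2 definition: pre < LLOQ -> post >= LLOQ; pre >= LLOQ -> >=4-fold rise."""
    return post >= LLOQ if pre < LLOQ else post >= 4 * pre

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# Illustrative example: 120 seroresponders out of 140 participants.
k, n = 120, 140
lo, hi = clopper_pearson(k, n)
print(f"seroresponse rate {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```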
"year": 2023,
"sha1": "06d5738302c2c37db5678e35c830557769b8e4c3",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-023-39766-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8d91f10141c86d0fe24c783a98a38b3aefbeaef",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Use of Hydromulching as an Alternative to Plastic Films in an Artichoke (Cynara cardunculus cv. Symphony) Crop: A Study of the Economic Viability
Abstract: The use of mulching in agriculture suppresses the weeds around crop plants, enhances the nutrient status of the soil, controls the soil structure and temperature, and reduces soil water evaporation. Excessive use of low-density polyethylene mulches is contributing to the accumulation of high amounts of plastic waste, an environmental problem for agricultural ecosystems. Fragments of plastic from such waste can be found in soils, in water resources, and in organisms, including humans. The objective of this work was to study the economic viability of the use of different hydromulches in an artichoke crop. Three blends were prepared by mixing paper pulp (recycled from used paper) and cardboard (from paper mills) with different additives: wheat straw (WS), rice hulls (RH), and substrate used for mushroom cultivation (MS). These were compared with low-density polyethylene (Pe), a treatment without mulching on bare soil where hand weeding was performed (HW), and a treatment without mulching on bare soil where herbicide was applied (H). The results indicate that the use of hydromulch in an artichoke crop represents a good alternative for reducing plastic waste in agriculture. The net profits of the hydromulch treatments (MS, WS, RH) were higher than those of HW and H, and slightly lower than that of Pe. The most profitable treatment was Pe (€0.69 m−3), followed by RH (€0.59 m−3), WS (€0.58 m−3), MS (€0.47 m−3), HW (€0.36 m−3), and H (€0.32 m−3). A sensitivity analysis showed a probability of negative results of 0.04 for Pe, 0.13 for MS, 0.08 for WS, and 0.07 for RH, so the probability that the grower will make a profit is greater than 0.9 with the use of hydromulch (except the mushroom substrate) or polyethylene.
Introduction
One of the great advances with regard to improving agronomic yields in areas where water and environmental conditions are limiting has been the use of plastic films for mulching [1][2][3]. Plastic mulches began to be used in the 1950s, due to their ability to increase the soil temperature [4]. Soil covers enhance total yields, control soil erosion, and suppress weed growth [1,[5][6][7][8]. In addition, mulches, by influencing the temperature and improving the physical structure of the soil, produce a microclimate that improves the productivity of water and fertilizer use [9] and modifies the energy balance of the soil [10].
The most used mulching material is low-density polyethylene (LDPe) since, among its physical properties, it exhibits good impact resistance, very good processability, thermal and chemical resistance, flexibility, and impermeability to water, in addition to being a cheap material [6]. However, the use of LDPe provokes serious environmental concerns, because its manufacture requires fossil fuels and, since it is non-biodegradable, it causes a residue problem and has detrimental effects on the ecosystem [11][12][13][14].
Excessive use of LDPe mulches is leading to the accumulation of high amounts of plastic wastes, an environmental problem for agricultural ecosystems. After their use, most of these mulches (around 80%) accumulate in landfills or in natural ecosystems [15]. Pieces of plastic from such wastes can be found in soils, water resources, and organisms, including humans [16][17][18][19][20], with effects on the environment and on human health [21][22][23]. The risks in terrestrial systems are less evident than in aquatic ones, since the waste is either buried or burned; however, in the aquatic environment it floats and travels long distances, or is deposited on the seabed [24].
Recycling rates for mulches are significantly lower than the global plastic recycling rate, and are estimated to be below 30% [25]. Furthermore, the problem is aggravated since the fragments that remain after the use of the mulch are difficult to collect and are of low value [24].
Today, one of the new objectives of many governments is the transition to a more circular economic model, where the value of products, materials, and resources is maintained in the economy for as long as possible, and the generation of waste is minimized, reducing adverse health and environmental impacts [26].
In this respect, the use of low-cost and available organic agricultural residues has been proposed for the production of biodegradable mulch materials [27,28], as a way to make weed management practices cost-effective, labor-efficient, and environmentally sound [29].
Hydromulches have been proposed as an alternative to plastic mulches made of LDPe. Hydromulch can be defined as a mixture of water with some type of lignocellulosic material or polymers, plus other additives suitable for the particular purpose, which is applied not as a film but as a liquid [30].
Hydromulching can be useful for the suppression of weed growth, where mechanical and chemical weeding are very difficult [31,32]. Hydromulches can persist for a long time on the ground, although this depends on environmental factors (temperature and moisture) [33].
In recent years, some studies have been published in relation to the effect of hydromulching on the soil, but very few have dealt with its application in agriculture and the potential benefits for weed control, yield, and sustainability.
The specific aim of this work is to contribute new data to the literature by comparing the economic outcomes of two different forms of artichoke cultivation: with mulch (one plastic and three hydromulches) and without mulch (with and without herbicide). For each of these, the yields of two consecutive years will be valued, according to the market prices, to obtain the income, which will be compared with the operational costs to determine the viability of each alternative. Finally, a sensitivity analysis will determine the variability of the results obtained and the probabilities that the grower will incur losses or not.
Materials and Methods
The hydromulches consisted of different mixtures (blends). Recycled paper pulp and Pinus paper pulp were used as the basic components, and sodium silicate was used as a matrix for the hydromulch samples. To prepare the blends, in addition to paper pulp, the following crop by-products were used: wheat straw (WS), used mushroom (Agaricus bisporus) substrate (MS), and rice husk (RH). Three randomized cultivation blocks were established, with six treatments each: two-color low-density Pe (white/black, top/bottom), the three hydromulches (WS, RH, and MS), a treatment without mulching on bare soil where hand weeding was carried out (HW), and a treatment without mulching on bare soil where an herbicide was used (H). Each block comprised 25 plants.
Plants of artichoke (Cynara cardunculus var. scolymus L.) cv. Symphony (Nunhems-BASF), grown from seed, were cultivated at the IMIDA agricultural experimental farm, located in Murcia (Spain) (latitude 37°45' N, longitude 0°59' W). They were transplanted on 8 August in the first year (2019) and on 1 August in the second (2020), the final harvests taking place on 28 and 16 March, respectively. The crop density was 5000 plants/ha. A standard nutrient solution for artichoke was used, applied through an underground drip irrigation system at a depth of 5 cm, with emitters of 4 L/h. The trials were conducted following the agricultural practices commonly used in commercial artichoke production in this area. Herbicide was applied four times in each of the two years of study, the herbicides employed being Assistan® 40SC, Reglone®, and Lentagam®. Twelve and 10 harvests were carried out in the first and second years of study, respectively; the artichokes were harvested at their optimum collection time and weighed individually.
Economic Analysis.
A cost-benefit analysis determines the benefit by comparing the income and the costs of an investment project. As in other works [34], an operational structure representative of southeastern Spain was used. Income was obtained as the product of the weekly yield (for each of the two years analyzed) and the average weekly price. Information on the market prices for artichokes was obtained from CARM [35]; the average market prices from 2000 to 2020 were used, as well as their variability, measured as the standard deviation over said period, for the subsequent sensitivity analysis.
The costs are the averages of the values for the two years studied. The costs per hectare were estimated, separating the structure costs from the annual costs, in line with other studies [36,37]. Among the structure costs, a toolshed (with a useful life of 25 years), an irrigation pumping head (15 years), a localized irrigation network (10 years), a regulating reservoir (30 years), and various auxiliary materials (5 years) were considered. For all these costs, only the annual depreciation was allocated, obtained as the ratio between the acquisition price and the useful life. In addition, the two years of cultivation were considered, so the costs of preparation and planting, as well as those of the mulching materials and their installation, were distributed between the two years. The costs of the mulching materials were obtained from the amount (kg) needed to cover one square meter and the number of square meters that needed to be covered in one hectare (4600 m2). The annual costs were classified into weeding, herbicides, phytosanitary products, fertilizers, irrigation water, etc. With regard to herbicides, where appropriate, four treatments were carried out and the amount of herbicide used was measured. Five phytosanitary treatments were carried out each year. The cost of water was €0.24 m−3, based on the average cost of recent years, and 5250 m3 ha−1 were used. This cost is similar to that used in other studies, such as that of García-García et al. [38], who considered a water cost of €0.23 m−3. The cost of water shows great volatility by area [39] and can reach €0.26 m−3 [40]. The harvesting costs were based on the weight (kg) of artichokes collected, considering a cost of €0.13 kg−1. For the fixed personnel expenses, it was considered that one employee can manage 20 ha, while the hourly cost of the operators was established at €7.50 h−1 and that of the tractor at €36 h−1.
Productivity and Efficiency Analysis
For the analysis of the productivity with regard to the use of water, the water productivity (in Euros per cubic meter of water) [41,42] was calculated. Also, the yield, the income per cubic meter, and the net profit per cubic meter were determined. Finally, the maximum price of water that the operation could support without incurring losses, the water viability threshold (WVT), was obtained [43,44].
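A minimal sketch of these per-cubic-meter indicators and the water viability threshold as defined here (the water price at which net profit falls to zero); the per-hectare figures are illustrative assumptions, not the study's results:

```python
# Illustrative per-hectare figures (assumptions, not the paper's data).
income = 14000.0             # EUR/ha
costs_excl_water = 11000.0   # EUR/ha, all costs except irrigation water
water_use = 5250.0           # m3/ha, as stated in the text
water_price = 0.24           # EUR/m3, as stated in the text

net_profit = income - costs_excl_water - water_price * water_use
profit_per_m3 = net_profit / water_use       # net profit per m3 of water
wvt = (income - costs_excl_water) / water_use  # water viability threshold

print(f"net profit: {net_profit:.0f} EUR/ha")
print(f"net profit per m3 of water: {profit_per_m3:.2f} EUR/m3")
print(f"water viability threshold: {wvt:.2f} EUR/m3")
```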
Regarding employment, the NAJ (number of agricultural jobs) per hectare and per cubic hectometer was calculated. To determine the employment generated, the labor used in the different tasks, including the handling of machinery, was quantified. One unit of NAJ corresponds to 1840 h.
The envelopment DEA-CCR model was used to analyze the efficiency of each technology, considering water and labor efficiency as the input variables [45]:

min θ
subject to: Yλ ≥ y₀; Xλ ≤ θx₀; λ ≥ 0,

where Y is the output matrix for the six technologies used (production in kilos and net revenues were considered as outputs), X is the input matrix (considering water and working hours), λ = (λ₁, λ₂, ..., λₙ)' is the weighting or intensity vector (n×1), with λⱼ the intensity of unit j, and θ denotes the technical efficiency score of unit 0. The use of the DEA-CCR model for the efficiency analysis presents several limitations: the estimated frontier can be influenced by outliers, especially when the number of observations is small; the model gives no information with regard to the theoretical optimum; and it is difficult to make estimations or to test hypotheses for the estimated parameters.
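A minimal sketch of solving this input-oriented CCR envelopment model with a linear programming solver; the input/output data below are illustrative placeholders for the six treatments, not the study's figures:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for 6 units (treatments): inputs = [water m3, labor h],
# outputs = [yield kg, net revenue EUR]. Placeholders, not the paper's data.
X = np.array([[5250, 900], [5250, 1100], [5250, 950],
              [5250, 960], [5250, 1300], [5250, 1000]], dtype=float)
Y = np.array([[14000, 9600], [10000, 5000], [13500, 8100],
              [13800, 8300], [9500, 4700], [13900, 8200]], dtype=float)

def ccr_efficiency(X, Y, unit):
    """Input-oriented DEA-CCR score for one unit:
    min theta s.t. Y'lambda >= y0, X'lambda <= theta*x0, lambda >= 0."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Output constraints: -Y' lambda <= -y0  (i.e. Y' lambda >= y0).
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[unit]
    # Input constraints: X' lambda - theta * x0 <= 0.
    A_in = np.hstack([-X[unit].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.r_[b_out, b_in],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for j in range(len(X)):
    print(f"unit {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```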
Sensitivity Analysis
In order to determine the effect of possible changes in the variables used on the results of this study, Monte Carlo simulation was used. This methodology is based on the simultaneous variation of all the variables that influence the variable under study, in this case the yield per hectare. In this regard, Monte Carlo simulation is especially suitable for the study of the effect of different variables on a given variable [46], and is much more complete than other approaches, such as the coefficient of variation used by Smith et al. [47] and Kiwia et al. [48]. For this, the distribution function of each of the variables was estimated and data were generated from these distributions. The generation of 40,000 iterations allowed the results obtained to be studied. In each iteration, the net return for each year was obtained as the difference between income and expenses. Income was considered as the product of the yield and the prices. The yield was treated as a normal variable, its mean being the average of both years, with a standard deviation obtained from the experiment itself. For the prices, we proceeded in a similar way, but considering the information from the last 20 years. Regarding the costs, although the overheads were considered fixed, since they are incurred at the first planting and cannot be altered during the rest of the useful life, the annual costs were also treated as normal variables, with the mean values used throughout the text and standard deviations obtained from the information provided by the various sources. Combined with this analysis, the Value at Risk (VaR) was used to determine the probability that the profit of each alternative (Pe, H, HW, etc.) is positive (or, in general, greater than a previously defined value).
Although the VaR was originally designed for use in financial institutions, its use is currently spreading to other sectors such as agriculture, as can be seen in the work of Manfredo and Leuthold [49] and Brotons et al. [50], who calculated the VaR to quantify the market risk of cattle feeders. In this sense, if we assume that $X$ is a random variable with cumulative distribution function $F(x)$ and let $\mathrm{VaR}_\alpha$ be a fixed value such that $P(X \le \mathrm{VaR}_\alpha) = \alpha$, then the VaR is defined as the inverse of the cumulative distribution function: $\mathrm{VaR}_\alpha = F^{-1}(\alpha)$. So, the VaR is the lowest value of a variable for a certain confidence level $\alpha$; that is, the value below which $\alpha$% of the possible values of the variable lie, while the remaining $(1-\alpha)$% lie above it.
The confidence level $\alpha_{I=0}$ for which the net income $I$ equals zero can thus be obtained as the probability that $I$ is less than or equal to zero: $\alpha_{I=0} = P(I \le 0) = F_I(0)$.
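Continuing the simulation sketch above, the empirical VaR and the confidence level at which net income is zero can be read directly off the simulated profit distribution.

```python
import numpy as np  # `profit` comes from the Monte Carlo sketch above

var_5 = np.quantile(profit, 0.05)     # VaR at alpha = 5%
alpha_zero = np.mean(profit <= 0)     # confidence level at which I = 0
print(f"VaR(5%) = {var_5:.0f} EUR/ha; P(I <= 0) = {alpha_zero:.3f}")
```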
Statistical Analysis
For the analysis of the yield and income of each treatment, the Levene test was used for the analysis of the homogeneity of variances. The non-existence of significant differences in the variance (p > 0.05) allowed the application of a one-way ANOVA to determine the existence of significant differences in yield and income among the treatments. When a difference was significant (p < 0.05), the treatment means were separated by Tukey's honestly significant difference (HSD) multiple-range test, using lowercase letters to indicate significant differences between treatments. The statistical package used was SPSS 25 (Chicago, IL, USA).
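A minimal open-source equivalent of the SPSS pipeline described above (Levene's test, one-way ANOVA, Tukey's HSD) could look as follows; the per-plot yield arrays are hypothetical placeholders, not the study's data.

```python
# Levene -> one-way ANOVA -> Tukey HSD (sketch).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

pe = np.array([21.1, 20.4, 22.0, 21.5])   # yields per plot, t ha^-1
h = np.array([17.8, 18.2, 17.5, 18.6])
hw = np.array([15.0, 15.9, 14.7, 15.4])

# homogeneity of variances: proceed to ANOVA only if p > 0.05
print(stats.levene(pe, h, hw))
print(stats.f_oneway(pe, h, hw))

values = np.concatenate([pe, h, hw])
groups = ["Pe"] * len(pe) + ["H"] * len(h) + ["HW"] * len(hw)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```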
Results and Discussion
Next, the income and expenses calculated for each of the treatments, the net profit, and the sensitivity analysis are presented.

Yield

Tables 1 and 2 show the weekly yield for each production system in each of the years under study. A weekly analysis is normally used for this kind of horticultural crop [51]. As can be seen, the yields were similar in the majority of treatments in the two years, being lower in the treatments without mulching: H (15% lower) and, especially, HW (29% lower). The hydromulches gave yields similar to those of the polyethylene and so could be used without a significant yield loss.
Income
Given their variability throughout each season, a weekly study of the prices was chosen, in line with other work such as that of Heuvelink [51] or López-Marín et al. [34,52,53]. In the first place, the evolution of the artichoke prices [54] in the first weeks of the year, the period when harvesting took place in both years, was analyzed. The prices remained stable or showed a slight increase until mid-February (week 6), when they began to decrease, reaching €0.33 kg⁻¹ in week 15, a price 61% lower than that at the beginning of the year (they did not vary greatly in the rest of the season). This decrease was motivated by the increase in supply or, as Prestamburgo and Saccomandi [55] indicated, the greater the supply, the faster the decline in agricultural prices. This volatility in prices is transmitted along food supply chains, thereby exposing all chain actors to risk and uncertainty [56].

Figure 1 summarizes, by treatment, the annual profit (the average yield of the two years multiplied by the average prices of the period 2000-2020), while Figure 2 shows the average income at the prevailing market prices. Regarding the yield, the Levene test performed to study the homogeneity of the variances had a significance level of 0.916, indicating that there were no significant differences among the variances, and the ANOVA showed, with p = 0.000, that there were differences among the treatments. The Tukey HSD test showed that treatments H and HW gave significantly worse yields than the other treatments, which did not differ significantly among themselves. Levene's test carried out for the income showed that there were no differences among the variances, with a significance level of 0.738, and the ANOVA concluded, with p = 0.000, that there were differences among the treatments. The Tukey HSD test showed, as with the yield, that the H and HW treatments were clearly inferior to the rest regarding the income. This confirms that the temporal variability in the yield and price did not offset the differences in yield. As mentioned in the previous section, weed removal by herbicide use (treatment H) or by hand (HW), in both cases without mulching, gave worse results: 15 and 29% lower, respectively.

Costs

Table 3 shows the costs of the structures, with their corresponding useful lives and depreciation. The costs of preparation and planting have also been included, since artichoke is a biennial crop, being replanted every two years. The importance of the structure costs is much lower in outdoor plantations than in greenhouse production, as reported in López-Marín et al. [53]. Table 4 shows the annual costs; the preparation costs are biennial, since the plantation has a useful life of two years, so only half of the €3706 ha⁻¹ is considered (€1853 ha⁻¹ year⁻¹). Among the tax costs, the Real Estate Tax was considered. The indirect taxes are included in the purchase and sale prices since, in general, farmers are covered by the special agricultural tax regime, and direct taxes are not included because they differ greatly from one taxpayer to another (especially regarding personal income tax).
The herbicide costs were calculated according to Table 5, which shows the four treatments carried out (the values for the two years were practically identical, so only the average values are shown). These treatments were applied between July and December (when the weeds had stopped growing) in each of the seasons.
The harvesting costs considered were €0.10 kg⁻¹, regardless of the yield.
In summary, it can be seen that the MS treatment had the highest costs. Among the other treatments, the most important differences were due to the harvesting costs, which were higher in those treatments with higher yields, since a harvesting cost of €0.10 kg⁻¹ was considered. In particular, the costs (without considering the structure costs) were lower for HW (8.60% lower than Pe) and H (25.07% lower than Pe), since mulch was not used. The treatments WS, RH, and MS had slightly higher costs than Pe (between 2.14 and 4.88% higher). Studies dealing with cost accounting in artichoke are practically non-existent. Among them, Sgroi et al. [58] reported costs on the island of Sicily of €17,119.75 ha⁻¹, higher than those found in our study. As in that study, the variable costs in the present work were much higher than the fixed ones. García-García et al. [59] found that the fixed costs represented around 20-30% of the total costs, a percentage lower than that obtained here. In the present study, the distribution of costs between fixed and variable was strongly influenced by the costs related to the plastic mulch and hydromulches: although in all cases the percentage of fixed costs was less than 60%, it was 41.32% for HW and 49.58% for H. This is due to the high percentage of the fixed costs that the hydromulch or polyethylene represents, as well as the manufacturing and removal costs, where appropriate. In addition, part of the personnel cost was considered as fixed.
Finally, given that this is an incipient technology, research should focus on obtaining mulches that can compete in price with polyethylene or even be cheaper. Public policies should be reoriented towards such materials, which can become highly competitive and benefit the environment in a double sense: they allow a reuse of "waste" and they contribute to a lower consumption of plastics in agriculture.

Net Profit

Figure 3 shows the net profit, obtained as the difference between the income and the total costs (structure plus annual). The net profit was highest for Pe: for WS and RH it was lower than for Pe, by 16.54% and 14.95%, respectively, and it was lowest for HW and H (53.37 and 31.56% lower, respectively). For H, the yield was lower than for the rest of the treatments and, although it had the lowest costs, the net profit was also lower. With regard to the HW treatment, although the yield was quite high, the high costs of weeding prevented a high net profit from being obtained. The profit was higher with the WS and RH hydromulches. The higher costs observed for MS are due to the costs of transport from the site of generation of the waste to the application area. This suggests that the mulch should be applied in areas close to the generation of the waste, to avoid the costs of the transport and the pollution it generates. The average sale price was between €0.72 and €0.73 kg⁻¹ (Table 6). The differences among the treatments were due to the distinct temporal distributions of the harvest. The unit cost was calculated as the ratio between the total cost and the yield, so that lower values are indicative of higher yields and vice versa. The average cost was lowest for Pe (€0.56 kg⁻¹), while it was highest for HW and H (€0.61 kg⁻¹). The difference between the average sale price and the unit cost is the unit margin for the grower.
Water Productivity
Water productivity (in kilos) has been used on a widespread basis: for example, by Azorín et al. [43] and Goldhamer et al. [60] for the cultivation of almonds in Spain and California, respectively, and by Goldhamer et al. [61] and Dichio [62] for peach cultivation in California and Italy, respectively. For the Sao Francisco Valley (Brazil), Bassoi et al. [63] calculated the water productivity for different irrigation strategies such as partial root-zone drying and regulated deficit irrigation. Alkhamis et al. [64] and Neal [65], for herbaceous crops, and García et al. [59], for artichoke, also did so. Table 7 shows the indicators of the productivity of water use. The productivity values of the hydromulches (WS, RH, and MS) are similar to those obtained for Pe.
Although the productivity indicators in kilos (kg m⁻³) and in euros (€ m⁻³) may indicate that a crop is produced efficiently, this does not mean that it is economically efficient. It is necessary to analyze the net profit that the activity generates per cubic meter of water consumed.
In this case, the most profitable treatment was Pe.

Mulching improves the soil moisture regime by limiting the evaporation rate of water at the surface; in general, mulching gives higher soil moisture contents compared to bare soil [67,68], which means that the yields are lower in treatments without mulching, as happened in our work. The power of plastic mulches to retain soil moisture is greater than that of organic mulches [69]. However, in our work, in both growing cycles, there were no statistical differences between the hydromulches and the treatment with plastic. This may have been because these organic mulches (hydromulches), with the intervention of the soil moisture and temperature, affected the dynamics of the soil organic matter, augmenting the contents of dissolved organic carbon (C) and nitrogen (N) through the decomposition of plant materials, as has been found with other organic mulches [70,71].
The WVT shows the maximum price of water that the grower could bear and the strategies that are profitable at each price. In the treatment with herbicide, the grower could only withstand a maximum price of €0.65 m⁻³ (assuming that the rest of the costs remain constant). By contrast, in the WS and RH treatments the grower could withstand prices of up to €0.91 and €0.92 m⁻³, respectively; this indicates that in periods of scarcity, when the price of water is very high, the grower could bear such prices. The highest price could be borne in Pe (€1.02 m⁻³). In this regard, García et al. [59] obtained lower maximum prices (between €0.17 and €0.53 m⁻³, depending on the form of irrigation).
Generation of Employment
These types of indicators are used in agricultural policy [72][73][74]. The National Hydrological Plan of Spain [75] estimates a water productivity in the Segura basin (within which this work was carried out) of between 24 and 62 NAJ hm⁻³ for horticultural and fruit crops and of 190 NAJ hm⁻³ for greenhouse-grown crops. For the cultivation of artichoke, García et al. [65] obtained values between 26 and 45 NAJ hm⁻³. Table 8 shows the results obtained for the two indicators: the employment generated per hectare and per cubic hectometer consumed. The highest generation of employment was achieved in Pe (71.14 NAJ hm⁻³), the lowest corresponding to H (45.36 NAJ hm⁻³).

Efficiency Analysis

The efficiency analysis was carried out by solving optimization program (1); the results are presented in Table 9. The technologies Pe, WS, and RH are efficient. As a result, Pe can be replaced by WS or RH without loss of efficiency. In order to achieve maximum efficiency in the remaining technologies, HW should reduce its input consumption by 7.52% for the output obtained (radial reduction) and, additionally (slack movement), reduce the working-hours input by 700 h and increase production by 404 kg. The analysis for the remaining inefficient technologies (H and MS) is similar. Table 10 shows the percentages of potential input and output improvement for the inefficient technologies. To achieve efficiency in the H technology, reductions of 29.09% in water and 15.80% in working hours are required. A similar interpretation can be made for HW and MS. The efficiency analysis must be understood in a relative way: the aim of this section was to show that treatments HW, H, and MS are not efficient relative to Pe, WS, and RH. However, further analysis with many more data will be required to check the efficiency of these treatments.
Sensitivity Analysis
The variables that influence the net profit were found to be normal, with the means and standard deviations shown in this work. Figure 4 shows the results obtained, using these variables, from the Monte Carlo simulation, which can be easily implemented in a spreadsheet [76,77]. It displays the probability that the net income is equal to or less than each of the values on the x axis. The probability of obtaining negative results is 0.04 in Pe, 0.13 in MS, 0.08 in WS, and 0.07 in RH; so, the probability that the grower will obtain a profit is greater than 0.9 when using hydromulch (except mushroom substrate) or polyethylene. In HW and H, the probability of obtaining negative results is 0.16 and 0.14, respectively. When analyzing the chances of obtaining a high net profit (for example, €4000, which is approximately the average net profit of Pe), the probability that the profit does not reach this figure is 0.58 in Pe. Among the hydromulches it is 0.76, 0.69, and 0.68 for MS, WS, and RH, respectively; that is, this value will be reached less often. Contrastingly, for the HW and H treatments, values of 0.88 and 0.93, respectively, were obtained. This shows that a net profit of €4000 can reasonably be expected, even if the variables were to suffer alterations with respect to the initial values considered. López-Marín et al. [78] used this approach to compare the NPV with other methodologies, such as the decoupled net present value and the use of decreasing discount functions such as the gamma function, given the existence of high initial costs that must be spread over the useful life of the greenhouse. A similar methodology was applied by Smith et al. [79], who used descending cumulative probability curves for 10-year disease loss and control costs for five different control strategies in their sensitivity analysis. The probability distribution of maize yields in relation to the target yield was used by Kiwia et al. [48].
Future research should address possible cost reductions in mulch manufacture, so that the costs for the grower are reduced; this would imply a reduction in the probability of incurring losses.

In Figure 5, the density function shows a higher concentration of income around the mean for Pe, HW, and H, with the distribution for treatment Pe being shifted a little more to the right; that is, the probability of obtaining high profits is higher. For treatments MS, WS, and RH, the distributions show greater dispersion and are somewhat shifted to the right; that is, the probability of obtaining high profits is greater. However, it should be noted that the probabilities of obtaining very high values with these latter three treatments (above €7000 ha⁻¹) are similar to that of the Pe treatment.
Conclusions
Hydromulches are a good alternative for artichoke cultivation and reduce plastic waste. In addition, their costs may be reduced by mechanising their installation on the ground. Their use can reduce the carbon footprint and is more sustainable, profitable, and eco-friendly. The technique can easily be used in many other horticultural crops, although the local availability of the plant waste used to make the hydromulch is important for its economic viability.
The main conclusions of the study are:
- The yields, which showed little variability between the two years analyzed, were lower in the treatments without mulch: H (15% lower) and especially HW (29% lower). Therefore, the use of hydromulch increases the yield in a similar way to polyethylene, but also has environmental advantages.
- The sale prices remained practically stable until they began to decrease in mid-February, reaching €0.33 kg⁻¹ in April, 61% lower than at the beginning of the year.
- Organic mulching had the highest costs (up to 5% higher than Pe), since the costs of the mulching materials were higher than for Pe. Research should now focus on reducing these costs in order to make such materials economically competitive with plastics.
- The net profits with the mulching materials MS, WS, and RH were higher than for HW and H, but lower than for Pe. The profitability of the mulching materials may be reduced by the cost of transport if they are not available near the site of cultivation. It is clear that, if the objective is to reduce the environmental impact of the use of plastics and other polluting elements, mulching should be carried out close to where the corresponding waste is generated, to avoid the externalities due to its transport.
- Pe gave the highest productivity in the use of water regarding yield and income, followed by WS and RH. When considering the productivity in euros, the most profitable treatments were RH and WS; the profit per cubic meter was lower for Pe due to the high acquisition costs of this material.
- The highest generation of employment (greatest number of jobs) corresponded to Pe (71.14 NAJ hm⁻³) and the lowest to H (45.36 NAJ hm⁻³).
- According to the sensitivity analysis, the probability of negative results is 0.04 in Pe, 0.13 in MS, 0.08 in WS, and 0.07 in RH; so, the probability that the grower will obtain a profit is greater than 0.9 when using hydromulch (except mushroom substrate) or polyethylene. A future reduction in mulch costs would greatly reduce the probability that the grower will make a loss.
New work is required to corroborate the reasons for the agronomic differences among the different mulches, as well as studies on the decomposition of hydromulch remains (the C and N cycles in soil, and the availability of C and N to plant roots) and evaporation. Such work will reveal the system that is most effective and profitable, due to both the reduced evaporation and the enhanced bioavailability of nutrients resulting from the decomposition and mineralization of the mulch organic matter. The mulching materials selected in this way, and originating close to the site of cultivation, will be the most sustainable, both economically and environmentally.
Conflicts of Interest:
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
"year": 2021,
"sha1": "1db5c00cbd6a52775b86103643eeff49987b4e2f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/9/5313/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b9f71d2249b9baf73751beb701d052034e509758",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
The validity and reliability of the cross-national comparison of degree programme levels in European countries. What have students learnt?
A cross-national comparison of degree programme levels became relevant when the borders of European countries opened for students and graduates, and higher education institutions were restructured into bachelor’s and master’s programmes. This new situation foregrounded the questions of what students are learning in the degree programmes of European countries and how to compare their achievements. Therefore, we conceptualised a valid and reliable ‘level’ construct that included a cognitive (‘disciplinary thinking’) and an affective aspect (‘professional attitude’). The main research question for our exploratory study was: ‘What procedure can lead to a valid and reliable cross-national comparison of degree programme levels?’ To achieve this comparison, we designed a Three-Step Procedure, in which level was operationalised (step 1), measured and analysed (step 2), and compared cross-nationally (step 3). The study was conducted in collaboration with four bachelor programmes in Hotel Management from four European countries; a total of 783 participants were involved. Four themes were generated to operationalise the concept of level: professional management, hospitality business research, leading management, and strategic management; their respective learning outcomes were measured with a questionnaire. Principal component analysis identified the conceptualised themes and measured their components with eigenvalues ≥1, which explained 66 % of the variance. The reliability of the components exceeded a Cronbach’s alpha coefficient of 0.70. Analysis of the components and of the single samples showed strong validity and reliability for the learning outcomes. Thus, we believe this study has produced a rigorous means to compare degree programme levels across countries.
Introduction
The cross-national comparison of degree programme levels became relevant when the borders of European countries opened for students and graduates, and higher education institutions (HEIs) were restructured into bachelor's and master's programmes in accordance with the Bologna Agreement, which was signed by the European Ministers of Education (1999). Huisman and Westerheijden (2010) indicate that since this European cooperation in quality assurance began, much has been realised in a new system of accreditation (European Association for Quality Assurance [ENQA] 2009) that functions at a supranational level through the development of the European Standards and Guidelines (ESG), the launch of the European Network of Quality Assurance Agencies, and the establishment of the Register of European Higher Education Quality Assurance Agencies. However, they conclude that 'there is too much stress on compliance to rigid procedures and mechanisms, at the cost of a focus on quality improvement and the learning experience' (p. 63; emphasis added). In the USA, a comparable problem is mentioned by Ewell (2010), who concluded that 'changes on quality assurance have rendered the process more intentional, more focused on undergraduate teaching and learning, and far more transparent. But the goal of providing adequate evidence of student learning remains elusive' (p. 173; emphasis added).
The Assessment of Higher Education Learning Outcomes (AHELO) project of the Organisation for Economic Cooperation and Development (OECD) started with a feasibility study (Nusche 2008) and declared that 'in most countries, assessment results are inaccessible… If HEIs would specify the expected student outcomes explicitly and in a measurable way, comparative assessment of learning outcomes would become feasible' (p. 5). AHELO's full report on the feasibility study has since been published, in which they state: Testing of discipline specific skills was considered useful on a global scale but…the diversity of local contexts and disciplines would create difficulties. In general, this type of testing was thought to be easier and cheaper if you test one discipline but the costs would add up for each discipline you add to the test. Achieving consensus could be hard work but the test could prove more intrinsically interesting and engaging for the participants, provided there is no oversimplification of the test (and the results remain relevant). While several suggestions were put forward…to achieve a blended approach the most prevalent answer was to find a way to assess generic skills within a discipline context. (OECD/AHELO 2013, p. 42; emphases added) Based on the recommendations in the full report, we designed a blended procedure (the so-called Three-Step Procedure), paying particular attention to the validity and the reliability of the instruments that we selected. Indeed, 'achieving consensus could be hard work', but it leads to valid and reliable outcomes. These outcomes are achieved by a procedure, outlined in this article, which can be used for about 6 years, at which point it should be updated; this is less costly in the longer term than the approach suggested above in the AHELO report.
In this transnational pilot study, our Three-Step Procedure leads to an elaboration of the degree programme level concept. The concept is operationalised using analysis-based themes and learning outcomes that characterise a typical professional bachelor's programme in Hotel Management. We conducted the research in collaboration with teachers and students from four bachelor programmes in Hotel Management from four European countries. The choice of discipline was a pragmatic one, based on the availability of similar programmes and the promise of international cooperation.
To create a procedure relevant to student learning experiences, we had to reconsider how the level of degree programmes should be described and defined. Our participating graduates had completed their professional bachelor's degrees within the binary systems that prevail in these four countries. Basically, higher education consists of two separate systems: one provides professional education, and the other delivers academic education that includes professional education, but on an academic level. The extent to which the two systems are separate is not absolute in the four countries. For example, Norwegian legislation provides space for an institute of professional degree programmes to become an academic institute and to award PhD degrees. The Dutch binary system, by contrast, is currently more rigid. However, various differentiations appear within the system, which suggests that the system is not static.
We proposed defining the level of a degree programme based on two crucial questions that students ask themselves when embarking on higher education: 'What do I have to learn?' and, 'What is expected from a professional in this particular field?' These questions are of paramount importance because the world that graduates enter is an international one. They can apply for jobs in other countries, which means that employers need to know the details of each graduate's degree programme. Employers want to know what the applicant has learned (i.e. 'disciplinary thinking') and how they will behave as a professional (i.e. 'professional attitude'). Thus, we consider these two aspects the basic pillars of employment and used them to conceptualise the degree programme level and thereby lay a solid base for delivering empirical evidence for a comparison of the learning outcomes of the respective programmes. In formulating the two aspects of disciplinary thinking and professional attitude, we realised the potential efficacy of a questionnaire designed for alumni; having graduated from the institutes in question, they would be credible sources of information about the opportunities they were offered during their years of higher education.
Our aim, therefore, was to create a procedure appropriate for measuring and analysing data cross-nationally. The main research question was: What procedure can lead to a valid and reliable cross-national comparison of degree programme levels?
A valid comparison requires a clear concept and unambiguous methods. Thus, in the following sections, we conceptualise 'degree programme level' and then map the critical factors that might affect the validity of a cross-national comparison. Next, in the 'Methods' section, we discuss how the conceptual framework can be used to design a Three-Step Procedure focused on minimising bias and show how our procedure can provide valid empirical evidence from a cross-national comparison, which we carried out in collaboration with four bachelor programmes in Hotel Management across four European countries.
Conceptual framework
The degree programme level concept

The concept of the degree programme level is connected to the mental activities of students in higher education. Vermunt (1996) studied these activities and concluded that learning in higher education taps into mental processes that can be categorised as cognitive, affective, or regulative activities. These activity types have been confirmed by various studies, such as that of Martínez-Fernández and Vermunt (2013), which showed that students in South American countries also use cognitive, affective, and regulative activities when learning in higher education. Cognitive activities are used by students to process content. They lay the foundation for learning results in terms of knowledge, understanding, and skills. Examples include looking for relationships between parts of the subject matter (relating) and looking for applications (applying). Affective activities are used by students to cope with the feelings that arise during their studies and may positively or negatively affect the progression of the learning process. Examples include motivating oneself, attributing learning results to causal factors, attaching subjective appraisals to learning tasks, and controlling emotions that impede learning (Liu et al. 2012). Regulative activities are used by students to organise and manage their cognitive and affective activities and therefore lead indirectly to positive learning results. Examples include monitoring the progress of a learning process, diagnosing the cause of difficulties, and adjusting learning processes when necessary. Given that the Three-Step Procedure focused on disciplinary thinking and professional attitude, we decided not to include regulative activities in the outline of the Procedure. Regulative activities are linked only indirectly to the degree programme level and consequently have a different position with respect to it. In our Three-Step Procedure, we outlined the desired learning outcomes related only to disciplinary thinking and professional attitude.
Disciplinary thinking. The cognitive activities of the level concept involve the content (knowledge) of the respective discipline or domain, i.e. disciplinary thinking (Shulman and Shulman 2004; Sternberg 2003). Disciplinary thinking includes higher-order cognitive processes such as analysing, evaluating, critical thinking, and creating (Kek and Huijser 2011; Robinson 2011; Biggs and Collis 1982; Koh et al. 2012), which are applied to complex discipline-specific problems. Biggs and Tang (2007) indicate that sound knowledge is based on interconnections, and that cognitive growth lies not just in knowing more but also in restructuring and reconceptualising what is already known in order to connect it with new knowledge.
Professional attitude. The affective activities of the level concept include the main emotive characteristics of the studied discipline or domain, which we call the professional attitude: the accuracy of the bookkeeper, the conscientiousness of the academic worker, or the discretion of the nurse (O'Connor and Paunonen 2007). The affective aspect of the level thus subsumes the professional attitude and refers to the most emotive characteristic of the domain in which the programme exists.
We explore the levels of disciplinary thinking and professional attitude under 'Instrumentation' in the 'Methods' section, below.
Critical factors for validity
It is of utmost importance that the comparison between the levels in different countries is valid. Therefore, 'the beast of bias' must be recognised and minimised (Couper and de Leeuw 2003, p. 173; Field 2013, p. 163). Validity means that what was aimed for conceptually was actually measured. Bias refers to the presence of confounding factors that challenge the comparability of measurements across national groups. According to Harkness et al. (2003a, p. 13), construct bias and method bias are the critical factors for achieving validity in a cross-national project.
Construct bias occurs when the construct being measured is not identical across groups. It can be recognised by overlaps in the definitions of the construct across cultures or by incomplete coverage of all relevant aspects of the construct (Harkness et al. 2003b, p. 145). Construct validity is indicated by the distinction between the constructs, the strength of the loadings, and the extent of reliability.
Bias in the methods can be caused by the type of measurement, ambiguous instructions for the respondents, poor translations, and uncertainty about the meaning of terms. Method bias can also result from such factors as sample incomparability, instrument differences, tester and interviewer effects, and the mode of administration (e.g. communication problems and differential familiarity with material). For these reasons, method bias is not a concern of test developers, administrators, or data analysts exclusively but also applies to graduate coordinators, teachers, students, and other members of the educational and examination committees that participate in this type of study (Van de Vijver and Leung 1997, p. 11).
Cross-national comparisons remain complex because of the many triggers for bias (Van de Vijver 2003). In this study, we aimed to minimise bias by carefully designing an outline of a Three-Step Procedure based on a clear concept of the degree programme level in higher education. We believe that a valid measurement requires a clear definition of the concept being used (Koh et al. 2012).
Methods
Having conceptualised the degree programme level in terms of disciplinary thinking and professional attitude, we designed the Three-Step Procedure and carried it out in collaboration with four bachelor programmes from four European countries. Each of the three steps aims to deliver outcomes that validly and reliably reflect the levels of the participating degree programmes to facilitate a cross-national comparison. In this section, we introduce our participants and explain our instrumentation.
This study was initiated by the professional Hotel Management bachelor's programme in a large institute of higher professional education in the Netherlands (i.e. the Dutch HBO). For external quality assurance purposes, this institute required reliable data that reflected the level of the Hotel Management degree programme. Hotel Management schools are inherently internationally oriented and are interested in a rigorous cross-national comparison of their degree programme levels with those of schools in other countries. Additionally, this school was considering developing a more in-depth collaboration with an equivalent institution abroad, including a possible joint degree.
Participants
In total, 783 participants from four Hotel Management bachelor's programmes were involved in this study. The programmes were based in Austria, Belgium, the Netherlands, and Norway. The participants comprised the four deans of the degree programmes; 13 teachers who were members of educational and/or examination committees, some of whom were experts in the professional and/or scientific domain; two members of a central test office; 12 final year students; 18 stakeholders, including teachers and graduate coordinators; one project manager; and 733 graduates. The participating graduates had completed their professional bachelor's degrees within the binary systems that exist in these four countries.
SOLO taxonomy for disciplinary thinking
The taxonomy of the Structure of the Observed Learning Outcome (Biggs and Tang 2007), or SOLO, classifies the learning outcomes related to disciplinary thinking in terms of their complexity (Table 1). SOLO consists of five levels of a student's understanding in a domain that is new for them. The levels are distinguished from one another and reflect increasing structural complexity. Furthermore, students learn disciplinary thinking in two stages: one quantitative and the other qualitative. Students start learning in the quantitative stage, which contains three of the five SOLO levels: pre-structural, uni-structural, and multi-structural. At the pre-structural level students demonstrate almost no understanding of the task and might use tautology to cover this deficiency. At the uni-structural level, students concentrate on a part of the information and thus their conclusions are limited. The multi-structural level ranges from picking up several aspects of the domain information but without elaborating, to picking up many aspects and explaining them. This is illustrated by an example from a study of programmers (Lister et al. 2006), in which the programming students had to learn how to understand codes. Data were collected in the form of written and think-aloud responses from students (novices) and educators (experts), using examination questions. Lister et al. (2006) formulate this by offering another way of describing the multi-structural level of understanding: 'The multi-structural SOLO Response is a response where the student manifests an understanding of all parts of the problem, but does not manifest an awareness of the relationships between these parts-the student fails to see the forest for the trees' (p. 119). The qualitative stage comprises two of the five SOLO levels. The relational level is the first level that is relevant for higher education. Students are capable of relating parts of the domain knowledge and placing them in an appropriate context. They start thinking and understanding as professionals who integrate the parts of the problem into a coherent structure and use that structure to solve a given task. Finally, at the extended abstract level, students are able to transform declarative knowledge into functional knowledge. They are able to theorise, generalise, and reflect across the borders of the relational level and the domain: 'The coherent whole is conceptualised at a higher level of abstraction and is applied to new and broader domains… The trouble is that today's extended abstract is tomorrow's relational' (Biggs and Tang 2007, p. 78).

Table 1 SOLO taxonomy: levels, descriptions, and typical verbs
5 Extended abstract. The student is able to transform declarative knowledge into functional knowledge; they theorise, generalise, and reflect across the borders of the relational level and the domain.
4 Relational. The student is skilled in relating parts of domain knowledge (e.g. by comparing, contrasting, and explaining causes). This is the first level of understanding in higher education. The student is capable of placing domain knowledge in a perspective. They think more as a professional in the domain. They make initially limited, and then more refined, generalisations of ideas. Verbs: apply, integrate, analyse, explain, conclude, review, argue, transfer, make a plan, debate, construct, solve a problem.
3 Multi-structural. The student's learning results range from picking up a number of independent facets (of the domain knowledge) without elaborating on them, to picking up many facets and explaining them. The student tells what they know (knowing-telling). Verbs: classify, describe, report, discuss, illustrate, select, compute, sequence, outline, separate.
2 Uni-structural. The student focuses on a part of the (domain) information or task so that their conclusion is limited and probably dogmatic. They are able to recognise, identify, and define one facet. Verbs: write, label, count, find, match, memorise, quote.
1 Pre-structural. The student misses the quintessence of the (higher education) task or question. They demonstrate hardly any understanding of the question and might use tautology to cover their lack of understanding. Verbs: show little evidence of relevant learning.
PA taxonomy for professional attitude
Professional attitude is the affective aspect of the level concept. It involves essential affective characteristics of the profession or domain in which a particular higher education programme exists. To measure the learning outcomes for the expression of professional attitude, we constructed the Taxonomy for Professional Attitude (PA) ( Table 2). This taxonomy provides a means of indicating how a learner's attitude develops in complexity when learning the affective aspects of a domain or profession.
The SOLO taxonomy was a source of inspiration for the development of our PA; for the taxonomy content, we drew on the taxonomy of Krathwohl, Bloom, and Masia (1974, p. 107-170) and on Zimmerman's (2006) self-regulation cycles. The learner's development runs through five levels. At the first level, a student becomes aware of the features of the professional attitude for which they are being educated. Then they 'accept' by moving toward one or more features as intended, but in a very inconsistent manner. At the third level, a student gradually 'demonstrates', albeit inconsistently, characteristics of a specific attitude. At the fourth level, a student 'integrates' more features into a consistent behaviour pattern. At the highest level, a student 'internalises' the various features of professional attitude, which means that they consistently place those features into control of their own attitude.

Table 2 Taxonomy for professional attitude (PA): PA level and description
5 Internalising. At the highest level, the student is able to place most features of the professional attitude consistently into control of their own behaviour. Their behaviour consistently incorporates the characteristics of the domain's professional attitude.
4 Integrating. The student integrates more features of the professional attitude into a consistent behaviour pattern.
3 Demonstrating. The student gradually demonstrates, albeit inconsistently, characteristics of a specific attitude.
2 Accepting. The student moves toward one or more features of the professional attitude as intended, but in a very inconsistent manner.
1 Becoming aware. The student becomes aware of the features of the professional attitude for which they are being educated.

The degree of complexity in the attitudinal development as indicated in this PA taxonomy was tested for inter-rater reliability; a Cohen's kappa coefficient of 0.83 indicated strong reliability (Bryman 2012, p. 280).
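As an illustration of this inter-rater check, Cohen's kappa for two raters assigning PA levels can be computed as follows; the rating vectors are invented purely for the example.

```python
# Cohen's kappa for two hypothetical raters (sketch).
from sklearn.metrics import cohen_kappa_score

rater_a = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]    # PA levels assigned by rater A
rater_b = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]    # PA levels assigned by rater B

# values of roughly 0.8 and above are commonly read as strong agreement
print(cohen_kappa_score(rater_a, rater_b))
```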
Questionnaire
After the completion of their degree at an institute for higher professional education, graduates received an e-mail invitation to participate in our study. Using a questionnaire, we solicited the graduates' opinions about the acquisition of disciplinary thinking and professional attitudes; the items were based on both the SOLO and the PA taxonomies and reflected the learning outcomes of the desired level of disciplinary thinking and professional attitudes. The questionnaire was to be used for data collection in four different countries, with different languages and cultures, albeit within the common context of Europe. Thus, there was a great risk that the questions could be misinterpreted. To minimise the threat of construct and method bias, we pretested the draft questionnaire extensively. For this pretest, we made use of the Questionnaire appraisal coding system (Snijkers 2002; Van de Vijver and Leung 1997), a necessary instrument for minimising question ambiguity in an international survey. The draft questionnaire was pretested by teachers and final-year students from the participating programmes. The pretest took the form of an in-depth interview conducted by a teacher from a participating degree programme with individual students as interviewees. The students answered each question and the interviewer observed the question-and-answer process while using the questionnaire appraisal coding system to detect possible problems (Table 3). The interviewees thought aloud and often made suggestions for improvements to the questionnaire.
They were asked to indicate the extent to which they had mastered the learning outcomes in their respective programmes. It was most important that the students immediately understood the intended meaning of each question; if they hesitated, it suggested that there might be ambiguities in the wording of the questionnaire.
Three-Step Procedure
We started the development of the Three-Step Procedure by conceptualising the degree programme level from a learning psychology perspective, which resulted in the two aspects, disciplinary thinking and professional attitude. These aspects had to be operationalised to reflect the specific domain of Hotel Management. Following this, we measured, analysed, and compared the degree programme level(s). The final design of the Three-Step Procedure, which focuses on minimising or avoiding ambiguities, comprised (Table 4):

Step 1. Operationalising the degree programme level concept: What is the content of a degree programme and which learning outcomes should be discerned?
Step 2. Measuring and analysing the degree programme levels: How should the data collection related to the student performances be organised?
Step 3. Cross-national comparison of the degree programme levels: What conclusion(s) can be drawn?
Results
The 'Results' section is composed of an explication of the three steps.
Step 1: Operationalising the degree programme level concept

The aim of the first step was to determine what content was relevant for the level of the degree programmes. To this end, we analysed various related contexts, created and affirmed themes, and developed and validated learning outcomes.

Table 4 Outline of the Three-Step Procedure
Step 1. Operationalising the degree programme level concept: analysing contexts; creating and confirming themes; developing and validating learning outcomes.
Step 2. Measuring and analysing the degree programme levels: selecting and constructing a measuring instrument; pretesting and adjusting the questionnaire; measuring the level of the degree programmes; analysing representativeness, construct validity, and reliability; analysing the components of the full sample and the single samples; determining the validity and reliability of the degree programme levels.
Step 3. Cross-national comparison of the degree programme levels: calculating and presenting the cross-national comparison; discussing the outcomes of each programme.
Conclusion. The validity and reliability of the cross-national comparison.

Analysing contexts. Experts of the participating programmes carried out a study of four contexts: the (1) professional and (2) academic domains; the (3) external surroundings of the domain, for example, a professional organisation, a council of the sector, or specific legislation; and the (4) current curriculum. The experts analysed these areas and proposed six themes.
Creating themes. The proposed themes were substantiated by peer-reviewed literature, to improve objectivity and transparency. These themes were then mapped and discussed by the experts, a process that produced newly refined themes to be used in the next step: hospitality business research, professional management, internationalisation, leading management, customer service, and strategic management.
Affirming themes. The aim of this step was to facilitate agreement about the validity of the themes. The themes created by the experts were presented to various stakeholders from the participating programmes, namely, managers involved in educational and examination committees, students, and teachers. They came to an agreement about four themes underlying a bachelor's curriculum in Hotel Management: hospitality business research, professional management, leading management, and strategic management (Table 5). The stakeholders selected these four themes based on their high degree of relevance to the programme level, having rated the six themes on a five-point scale ranging from 'not relevant' to 'very relevant'. Face validity was employed in accordance with Kane's (2006) definition.

Table 5 Themes underlying a bachelor's curriculum in Hotel Management (theme, description, key references)
Hospitality Business Research. This research is related to marketing and finance, specifically for the hospitality sector. The most important topics in hospitality marketing today are consumer behaviour, service management, and e-marketing. Hospitality finance includes several aspects, such as risk management, financing, bankruptcy, and capital structure. (Yoo et al. 2011; Jang and Park 2011)
Professional Management. Introductory Hotel Management and processes of self-regulation are combined in this theme. Self-regulation concerns the personal and professional growth of the hotel manager that is crucial for the continuation of the company or hotel. It has been established by the educational and examination committees of the degree programmes that hospitality is the main emotive characteristic of the hotel manager's professional attitude. The affective aspects are expressed by adequately operating with guests from different countries, managing in a results-oriented way to obtain satisfied guests, and acquiring staff possessing the ability to deliver hospitality. (Watson 2008; Lee et al. 2011)
Leading Management. This management requires the capacity to deal with complicated situations that may occur with guests and colleagues. Leading management asks for curriculum topics that are often difficult to realise in degree programmes and are also included in other business core disciplines such as accounting, finance, and human resource management. The hospitality manager should acquire an international orientation and knowledge about cross-cultural differences between guests and staff members. (Becket and Brookes 2008)
Strategic Management. Strategic management refers to management that is grounded in well-considered policy. It implies clarity about the organisation's objectives and has become a key subject in many undergraduate and postgraduate programmes in hospitality schools worldwide. The main curriculum topics are organisational strategy and how to deal with uncertainty in the environment and with changes or differences, while concurrently enhancing effectiveness. (Okumus and Wong 2005; Harrington and Ottenbacher 2011)

Developing learning outcomes. Once the themes were accepted, they were categorised into intended learning outcomes and worded from the students' perspectives. The SOLO taxonomy was used for the learning outcomes that reflected the necessary level of disciplinary thinking (Table 1), and the PA taxonomy was used for the learning outcomes that reflected the desired level of professional attitude (Table 2). It was decided that the 'relational' SOLO level (Table 1) was necessary for the Hotel Management professional bachelor's programmes because at this level the students demonstrate higher understanding and begin to think as professionals in their domain (Biggs and Tang 2007). Furthermore, it was decided that the PA level characterised by 'integrating', and preferably 'internalising' (Table 2), was necessary for the same programmes; at this level, the students are able to integrate aspects of professional attitudes without inconsistencies.
Validating learning outcomes. The description of the developed learning outcomes was discussed and finally assessed by other expert teachers and members of examination or curriculum committees. They indicated the degree to which the learning outcomes addressed the respective themes. Based on the two taxonomies, 31 learning outcomes were developed, discussed, and finally assessed by the expert teachers for content validity, resulting in a Cohen's kappa coefficient of 0.71, which signified good inter-rater reliability (Bryman 2012, p. 280).
Step 2: Measuring and analysing the degree programme levels

The aim of this step was to collect 'unbiased' clean data that were suitable for proper analysis. We collected these data using our own questionnaire, consisting of five-point Likert rating scales (see Appendix 2 for the items in a shortened formulation).
Selecting and constructing a measuring instrument. The potential for bias in a questionnaire exists in both the constructs and the methods. Construct bias often results from misinterpretations of the questions. It can be identified as overlaps in the definitions of the constructs across cultures or as incomplete coverage of all relevant aspects of the construct. Method bias can be caused by factors such as poor translations and uncertainty about the meaning of terms. We assumed that these types of biases would be best managed by conducting an extended pretest.
Pretesting and adjusting the questionnaire. The draft questionnaire was pretested on 12 final-year students from the four Hotel Management bachelor programmes in the four countries: Austria, Belgium, the Netherlands, and Norway. The pretest was supported by an adapted Questionnaire appraisal coding system (Table 3).
Measuring the level of the degree programmes. The managers of the four participating bachelor programmes invited recent graduates (no more than 1.5 years prior) to participate in this study. In their letters of invitation, they provided the internet address for the questionnaire, along with a unique access code. They assured the participants that their answers would be anonymised.
We analysed the data from the completed questionnaires to establish the representativeness of the sample, the construct validity, and the reliability of the created themes forming the components of the degree programmes.
Representativeness. Representativeness refers to quantitative as well as qualitative characteristics of a sample. As mentioned, we drew our sample from the population of recent graduates. We chose the criterion 'recent graduates' to reduce the potential for bias introduced by influences other than the degree programme. We excluded participants who had graduated more than 1.5 years prior. Using this criterion, a total of 733 graduates were invited to participate in the survey. The gross number of respondents was 535 (73 %). The numbers of responses across the four degree programmes ranged from 124 (17 %) to 154 (21 %). However, 142 respondents were eliminated: 107 appeared to have graduated more than 1.5 years before, and another 35 did not complete their questionnaire (30 % or more missing values, or 9 or more unanswered questions out of 31). The net number of respondents was 393. This number was sufficient for a quantitative pilot study (Snijkers 2002, p. 65), in that it could generate a meaningful analysis of construct validity and reliability (Field 2013) (Appendix 2).
The qualitative data from the responding recent graduates are presented in Appendix 1, which demonstrates the even distribution of respondents across the four degree programmes (about 98 (25 %) per programme). Of these respondents, 94 (24 %) graduated from the Belgian programme, 95 (24 %) from the Norwegian programme, 100 (25 %) from the Austrian programme, and 104 (26 %) from the Dutch programme. Of the 393 respondents, 364 (93 %) graduated from bachelor degree programmes oriented to Hotel and Hospitality Management. For 188 (48 %) of the graduates, the time to completion of their degrees was 3 years, while 179 (46 %) took 4 years; 206 (52 %) graduates worked inside and 122 (31 %) outside the hospitality industry during their time in the programme. The recent graduates who held a managerial position numbered 107 (27 %), versus 286 (73 %) who did not. Thus, the sample also matches the group's characteristics (Appendix 1).
Degree programme. Most of the respondents confirmed that they had graduated from a Hotel Management programme or an international hospitality management school. While most of the Austrian participants had graduated in Travel and Tourism, their programme involved a strong focus on Hotel Management. Furthermore, some of the Austrian respondents indicated that they had graduated in applied science in Hospitality administration. Thus, most of the respondents graduated from a programme within the same general domain.
Duration of study. The Belgian and Norwegian bachelor's programmes took 3 years, while the Austrian and Dutch lasted 4 years.
Type of organisation. Most respondents (65 %) from the Belgian programme worked in a hotel; some of the Austrian (41 %), Dutch (43 %), and Norwegian (26 %) respondents worked outside the hotel domain.
Graduates' positions. The meaning of 'manager' is limited to positions with executive responsibilities (e.g. directing, governing, and making decisions). An example of a non-managerial position is a food and beverage controller. Two hundred and eighty-six (73 %) of the 393 responding graduates from the participating degree programmes worked in a non-managerial position. Graduates from the Austrian (40 %) and Dutch (28 %) degree programmes worked in managerial positions relatively often. Appendix 1 gives an overview of these features of the sample.
Construct validity. The full sample of the four degree programmes (N = 393) was analysed using principal component analysis (PCA), which is appropriate for exploratory data analysis and for the assessment and evaluation of treatments. In line with the four conceptualised themes, i.e. Hospitality Business Research, Professional Management, Leading Management, and Strategic Management, we measured four components with eigenvalues greater than 1 that together explained 66 % of the variance, which is a good percentage for a cross-national study. Consequently, the conceptualised themes were affirmed as existing constructs (Creswell 2007). Of the original learning outcomes, 23 with loadings ≥.40 remained, meaning a reduction of eight items. The significant loadings ranged from 0.54 to 0.88, a good result in the stable sample of N = 393 (Field 2013). The loadings were high, and the components demonstrated no overlap. The data of the full sample thus met the criteria of construct validity, suggesting that there was sufficient evidence for claiming that the content of the test corresponded to the content of the construct that it was designed to cover (Field 2013, p. 783) (Appendix 1).
Reliability. The components were measured for scale reliability using Cronbach's alpha coefficient. The alpha coefficients exceeded 0.70, meeting the norm of 0.70 for measurements in groups (Committee On Test Affairs Netherlands [COTAN] 2011) (Appendix 2).
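A minimal sketch of this kind of analysis — PCA loadings with the ≥.40 retention criterion, plus Cronbach's alpha — is given below. It assumes scikit-learn is available and uses randomly generated ratings, so it illustrates the pipeline rather than reproducing the study's numbers; with real questionnaire data, themed items would load far more strongly.

```python
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# X: (n_respondents, 31) matrix of Likert ratings (illustrative random data).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(393, 31)).astype(float)

Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)     # standardise items
pca = PCA(n_components=4).fit(Xs)

# Component loadings = eigenvectors scaled by sqrt(eigenvalues).
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
kept = np.abs(loadings).max(axis=1) >= 0.40           # the >=.40 criterion

print("variance explained:", pca.explained_variance_ratio_.sum())
print("items retained:", kept.sum())

# Alpha for the items loading most strongly on the first component.
top = np.argsort(-np.abs(loadings[:, 0]))[:10]
print("alpha (top items of component 1):", cronbach_alpha(X[:, top]))
```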
Analysis of the components. The first component, Professional Management, explained 45.59 % of the variance; the loadings ranged from 0.60 to 0.80, which is good, as shown at the bottom of Appendix 2. This component provides a sound measurement of the learning outcomes, confirmed by high scale reliability (α = 0.91).
The second component, Hospitality Business Research, explained 9.15 % of the variance; it included strong loadings ranging from 0.83 to 0.88 and sufficient loadings of 0.62 to 0.63. The component provides a sound measurement of the learning outcomes, confirmed by good scale reliability (α = 0.90).
The third component, Leading Management, included five learning outcomes that explained 5.89 % of the variance. This component included more than sufficient loadings, ranging from 0.62 to 0.77, and good scale reliability (α = 0.82).
The fourth and final measured component, Strategic Management, included three learning outcomes that explained 5.07 % of the variance. The loadings were good for a fourth component; the scale reliability was sufficient (α = 0.80).
Analysis of the single samples. The single samples of the four degree programmes (N = 94-104) were also analysed using PCA. The four themes were measured in components with eigenvalues ≥1 and explained 69-80 % of the variance (Austria 69 %, the Netherlands 70 %, Norway 73 %, Belgium 80 %), which were good percentages. The data from the single samples met the criteria of construct validity and of scale reliability (0.70-0.93), suggesting that there was sufficient evidence to support the claim that the content of the test corresponded in the different countries to the content of the construct it was designed to represent (Field 2013, p. 13).
Determining the degree programme level. The analysis of the components of the full as well as the single samples indicated that the measured outcomes were valid and reliable, which is a necessary foundation for the third step in our Three-Step Procedure.
Step 3. Cross-national comparison of degree programme levels
For calculating and presenting the cross-national comparison of the levels of the four programmes, we proposed the following. The level achieved by the students was calculated using the grand mean and grand standard deviation of the themes in this study (Table 5). The respondents indicated on a five-point Likert scale the extent to which they had mastered the intended learning outcomes, from 'too little' through 'somewhat' to 'more than satisfactory'. Most gave ratings between about 2.00 and 3.00, which is not high. The standard deviation was often ≥1, which indicated a wider spread than desired and produced ambiguous interpretations. To overcome this problem, we calculated z-scores from the grand mean and grand standard deviation. The norm was set rather low (≥3.00) because it was the first time that the level had been conceptualised using taxonomies for both disciplinary thinking and professional attitude. The difference between the norm and the grand mean was divided by the grand standard deviation, so that the computational model, z = (norm − grand mean) / grand standard deviation, makes the outcomes comparable. The resulting z-score was converted, with the support of the cumulative standard normal distribution, into a percentage that reflects the degree to which the level component has been achieved. It was determined that the level had been achieved if ≥50 % had indicated that they had mastered the component in question.
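A small numeric illustration of this computational model follows. We read the conversion as the share of a normal score distribution at or above the norm, i.e. 1 − Φ(z); that reading, and the example values, are ours rather than the study's.

```python
from statistics import NormalDist

def percent_achieved(norm: float, grand_mean: float, grand_sd: float) -> float:
    """Convert a theme's grand mean/SD into the % of respondents at or
    above the norm, assuming normally distributed scores."""
    z = (norm - grand_mean) / grand_sd
    return (1 - NormalDist().cdf(z)) * 100

# Invented example: grand mean 2.85, grand SD 1.05, norm 3.00.
pct = percent_achieved(3.00, 2.85, 1.05)
print(f"{pct:.0f}% achieved -> level {'met' if pct >= 50 else 'not met'}")
```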
Discussion
What have students learnt?
The outcomes of the cross-national comparison demonstrate that students from three countries/degree programmes (Austria, the Netherlands, and Belgium) successfully achieved the learning outcomes represented in the theme, Professional Management, while students from two countries (Austria and the Netherlands) achieved those of the theme, Leading Management. Professional Management includes introductory aspects of the 'professional attitude' of the hotel manager, and Leading Management involves dealing with more complicated situations that may occur with guests and colleagues. Both themes involve 'professional attitude' skills and deliver valid and reliable results.
However, the outcomes from the programmes in the Netherlands, Belgium, and Norway indicate less success for the theme, Hospitality Business Research. All the programmes, moreover, were less successful with the theme, Strategic Management. Hospitality Business Research involves the most theoretical aspects of the programmes and relates most strongly to 'disciplinary thinking'. Strategic Management implies research-based action and reflection that is often aligned with Hospitality Business Research. Neither theme may seem an obvious fit for a professional bachelor's programme, but dismissing them would be problematic, given that these qualities are so necessary: employers appreciate and need valid and reliable outcomes from Hospitality Business Research (Appendix 2), and the theoretical themes are characteristic of higher education.
We conclude that most of the problems in these programmes are related to 'disciplinary thinking' (Hospitality Business Research, Strategic Management) rather than 'professional attitude' (Professional Management and Leading Management).
Degree programmes in Norway and Belgium had low scores in this pilot study, outcomes that we verified with the relevant participating institutions. It appeared that one of the programmes was in the process of dissolving, while the other programme was more practice-oriented than its stated level indicated. This information helped explain and affirm our results. The outcomes of the degree programmes in Austria and the Netherlands met the norm.
The Austrian and Dutch degree programmes were the most successful in this study (Table 6), as indicated by the percentages of graduates who fill managerial positions with executive responsibilities (directing, governing, and making decisions): 40 % of the Austrian graduates, 28 % of the Dutch graduates, 27 % of the Norwegian graduates, and 13 % of the Belgian graduates (Appendix 1).
What is crucial for students' learning?
Having discussed our results with the managers of the participating institutes, we sought to interpret the results and to draw conclusions from them. We argue that the regulative activities of teaching are crucial for students' successful achievement of disciplinary thinking and professional attitude at a professional bachelor's level. Students can learn the higher-order thinking processes required for solving complex problems from a discipline or domain, with its main as well as its emotive characteristics, if the teaching strategies are suitable for students' learning activities. According to Vermunt and Verloop (1999), the development of learning and thinking activities does not occur if the regulative teaching activities are not compatible with the required regulative learning activities. For example, in higher professional education, students' learning styles are often application-directed in nature, and learning activities like concretising and applying are learner-initiated. Many teachers in higher professional education nevertheless employ application-oriented teaching methods: they give many tasks, questions, and assignments in which students are asked for possible examples and applications of what they learn. It is superfluous to stimulate students to employ learning activities that they already use on their own initiative. In these situations, other learning activities, such as structuring concepts, relating theories, and critically processing ideas, are often left out of the learning process; thus, students do not learn to initiate them, nor do teachers stimulate students to use them. For students to learn disciplinary thinking and professional attitude, then, the more suitable teaching strategy is one with a 'shared form of regulation', as opposed to one with a 'strong teacher regulation'.
Conclusion
This study started with the research question: 'What procedure can lead to a valid and reliable cross-national comparison of degree programme levels?' To answer this question, we had to establish the concept of a degree programme level and understand what the most salient threats were to a valid and reliable measure of degree programme levels. The level was conceptualised as comprising a cognitive aspect (i.e. 'disciplinary thinking') and an affective aspect (i.e. 'professional attitude'). Construct bias and method bias were identified as the most relevant risk factors for validity and reliability in cross-national comparisons. Based on this understanding, steps were developed to make the degree programme level concept measurable (step 1), to measure it as accurately as possible, and to analyse the validity and reliability of the outcomes. Once the outcomes met these criteria and a representative sample was assembled, the data could be used for the next step of measuring and analysing (step 2). Following this process, calculations were made using the valid and reliable data and the data were presented for the cross-national comparison of degree programme levels (step 3). This Three-Step Procedure was carried out in collaboration with teachers and students from four bachelor programmes in Hotel Management in four European countries; the outcomes were deemed valid and reliable based on a representative sample.
While this Three-Step Procedure was developed in cooperation with degree programmes in the hospitality domain, we believe that its general outline makes it applicable in other domains in higher professional education. Broadly, the procedure focuses on a learning-psychology perspective, creates necessary themes for evaluating the content of a programme, and uses two taxonomies that deal with disciplinary thinking and professional attitude. Based on these taxonomies, learning outcomes were developed for use (in this case, in Hotel Management) in the construction of a questionnaire. The data were analysed for representativeness, construct validity, and reliability. The Three-Step Procedure facilitates, as we have concluded, a successful and valid comparison of degree programme levels.
"year": 2017,
"sha1": "7a8ad5442788bd53e8edc418c7e302ca86ecdf51",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10212-016-0311-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "cf17db60f7636a76b6147e2e7935552bf3a1bd0e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
What do we really know about the appropriateness of radiation emitting imaging for low back pain in primary and emergency care? A systematic review and meta-analysis of medical record reviews
Background Since 2000, guidelines have been consistent in recommending when diagnostic imaging for low back pain should be obtained to ensure patient safety and reduce unnecessary tests. This systematic review and meta-analysis was conducted to determine the pooled proportion of CT and x-ray imaging of the lumbar spine that were considered appropriate in primary and emergency care. Methods PubMed, CINAHL, The Cochrane Database of Systematic Reviews and Embase were searched for synonyms of "low back pain", "guidelines", and "adherence", limited to studies published after 2000. Titles, abstracts, and full texts were reviewed for inclusion, with forward and backward tracking on included studies. Included studies had data extracted and synthesized. Risk of bias was assessed for all studies, and GRADE was applied to included studies that provided data on CT and x-ray separately. A random-effects, single-proportion meta-analysis model was used. Results Six studies were included in the descriptive synthesis, and 5 studies were included in the meta-analysis. Five of the 6 studies assessed appropriateness of x-rays; two of the six assessed appropriateness of CTs. The pooled estimate for appropriateness of x-rays was 43% (95% CI: 30%, 56%) and the pooled estimate for appropriateness of CTs was 54% (95% CI: 51%, 58%). Studies did not report adequate information to fulfill the RECORD checklist (reporting guidelines for research using observational data). Risk of bias was high in 4 studies, moderate in one, and low in one. GRADE for x-ray appropriateness was low-quality and for CT appropriateness was very-low-quality. Conclusion While this study determined a pooled proportion of appropriateness for both x-ray and CT imaging for low back pain, there is limited confidence in these numbers due to the downgrading of the evidence using GRADE. Further research on this topic is needed to inform our understanding of x-ray and CT appropriateness in order to improve healthcare systems and decrease patient harms.
Introduction
Guidelines for the assessment and treatment of low back pain (LBP) have been in circulation since the 1980s with more than 11 countries publishing their own LBP clinical guidelines in the last two decades. [1] While most early versions of LBP guidelines did not recommend routine use of radiographic imaging for assessment of LBP, there were discrepancies about when to image (e.g., some guidelines provided specific criteria or timeframes for imaging and others did not). In the 1980s and 1990s, x-ray imaging was commonly recommended in the assessment of LBP persisting longer than four weeks [1] and Computed Tomography (CT) was often recommended in patients experiencing neurological deficits, including radicular symptoms. [2,3] For the last 25 years, there has been increased congruence among LBP guidelines regarding when and under what circumstances to use diagnostic imaging. Since 2000, the recommendations typically state that diagnostic imaging is warranted only when patients with LBP present with red flag symptoms that suggest the presence of one of four known specific spinal pathologies (severe cauda equina, infection, fracture, and cancer). [4,5] Guidelines have also been updated with respect to the potential direct and indirect patient harms of diagnostic imaging, particularly x-ray and CT, as well as their lack of clinical utility for non-specific LBP. While MRI is another form of diagnostic imaging, it does not expose patients to the ionising radiation that x-ray and CT both emit; thus we are focusing only on those two imaging modalities.
Harms of over-testing
Patient harms. Both x-ray and CT imaging expose patients to ionizing radiation, a known mutagen that can increase risk of cancer, with CT exposing patients to more radiation than x-ray. [6] The human body can tolerate some radiation, but the more exposure that a patient has to radiation, the greater their cancer risk. This risk of radiation is even greater for children and young adults, as radiation can affect both male and female fertility. [7] Thus, radiologists typically recommend using x-ray and CT only when medically necessary and clinically justified for patient care. [8,9] In addition to the harms from radiation, imaging can reveal incidental findings, such as anatomical abnormalities, that are extremely common in asymptomatic patients, and only weakly correlated with patient symptoms. [10] For example, a systematic review in 2014 found that disc degeneration was present in 96% of asymptomatic adults aged 80 and up, and disc bulges found in 80%. [11] Moreover, patients who receive diagnostic imaging do not have better patient outcomes compared to those treated without imaging. [5,10] Chou et al. performed a systematic review and meta-analysis to compare physical outcomes of patients with LBP who received imaging to those who did not. [12] They found that patients who received immediate imaging for non-serious LBP had similar pain and function outcomes both in the short and long term compared to patients who received usual care without imaging. [12] The harm of incidental findings is that patients may have to be sent for further tests or procedures to confirm that the finding is in fact benign, which may delay the patient receiving the appropriate treatment.
Health system burden. In addition to patient harms, over-testing results in a substantial economic burden to healthcare systems. [13] In the United States, spending on all CTs in 2000 was $975 million; by 2006, it had increased to $2.17 billion. [13,14] In countries with a public healthcare system, it is difficult to quantify in dollars the cost of unnecessary imaging, but in Canada the rate of CT imaging has almost doubled since 2003, [15] suggesting that the cost of imaging has also drastically increased. This financial increase is also associated with trickle-down effects such as increased need for follow-up, further investigations of incidental findings, referrals to specialists, and even surgery. [10,16]
Importance of assessing appropriateness
Given the potential patient harms and added health care costs of using diagnostic imaging, it is essential to understand if these tests are being used appropriately according to the current guidelines. This information allows healthcare providers to understand whether and to what degree patient safety and quality of care are compromised with the use of unnecessary tests. A recent systematic review of diagnostic imaging appropriateness for LBP found that approximately one third of imaging referrals were not appropriate; however, this review included imaging referrals from any healthcare provider for any imaging modality (including MRIs). [17]
X-ray and CT pose the most direct harm to patients due to their radiation emissions; thus we intend to provide a focused estimate of appropriateness for these tests only. Additionally, since family practice and emergency department settings are the most common sources of imaging referrals for patients with LBP, and physicians in these settings follow the same guidelines for imaging ordering, we will focus our question on this provider population. This will also allow us to reduce any heterogeneity in our estimate due to potentially different ordering practices or guidelines amongst different providers.
Aim
We aim to synthesize the evidence from all studies investigating the appropriateness of physician-made referrals for CTs and x-rays for LBP in primary and emergency care, both of which we will refer to from here on as primary care. Our review adds to the literature by providing clinicians, implementation researchers and policy makers with an estimate of imaging appropriateness for CT imaging and x-ray imaging separately that is specific to physicians working in family practice and emergency department settings.
Methods
This study was performed according to the PRISMA methodology. Records identified through the database search were imported to EndNote (version 10), and duplicates were removed before screening. Forward and backward citation tracking, as well as screening of the reference lists of relevant systematic reviews and policy documents, was done on all included papers in order to ensure our database search captured all applicable published research articles.
Inclusion criteria
Studies were included if (i) the design was a retrospective or prospective review/audit of medical records, (ii) the data included lumbar CT and x-ray images, (iii) the imaging referrals were made by a physician in either general practice or emergency department settings, (iv) the analysis compared the reason for imaging referral to a guideline source, and (v) the outcome was the proportion of appropriate or inappropriate referrals based on adherence to the guidelines. All LBP types were eligible for inclusion. We excluded studies that looked at appropriateness of imaging referred by other providers such as chiropractors, physiotherapists, or nurse practitioners. Only studies that reported individual or aggregate data from chart reviews for CT and x-ray imaging were included. If other tests or imaging modalities (e.g., MRI) were combined with x-rays or CTs, the study authors were contacted to confirm whether x-ray and CT data could be reported separately; if not, the study was excluded. Other study designs, such as self-reported surveys or simulated patient visits, were excluded. Since there was potential for variation in imaging recommendations found in guidelines published prior to the year 2000 that could impact the definition of appropriateness, we excluded all studies in which the data and guidelines were from 2000 or earlier.
Two reviewers (GL, AH) screened titles and abstracts and created a shortlist of full texts to be screened. Full texts were scrutinized by two reviewers (GL, AH) to assess eligibility against the inclusion/exclusion criteria. Any discrepancy was resolved upon discussion of the difference and consensus of the categorization for inclusion. Authors of studies that did not have a full text available (abstract or conference proceedings only) were contacted to determine if there was a published full-text. Authors of studies that did not report imaging modalities included were contacted to determine if MRI was included in the aggregate data.
Data extraction
An electronic data collection form was developed to extract information from all included studies on study characteristics and outcome data. For each study the healthcare setting, LBP type, sample size, and outcome data were extracted. Outcomes included both the proportion of appropriate and inappropriate images. Additional outcome information extracted included: the guidelines source used for comparison, the definition used to assess appropriateness (or inappropriateness), the outcome denominator (if outcome reported the number of patients, images, visits), and measurement error (if reported).
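The extraction form can be pictured as one record per study; the field names below are our paraphrase of the items listed above, not the authors' actual form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    study_id: str
    healthcare_setting: str          # e.g. "ED", "general practice", "mixed"
    lbp_type: str                    # population / duration of low back pain
    sample_size: int
    n_appropriate: Optional[int]     # appropriate images (if reported)
    n_inappropriate: Optional[int]   # inappropriate images (if reported)
    guideline_source: str            # guideline(s) used as the comparator
    appropriateness_definition: str  # how appropriateness was defined
    outcome_denominator: str         # patients, images, or visits
    measurement_error: Optional[str] # reliability of coding, if reported
```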
Quality of reporting and risk of bias assessment
Quality of reporting was assessed for each study according to the "Reporting of studies Conducted using Observational Routinely-collected health data" (RECORD) Statement checklist, which is an expansion of the "Strengthening the Reporting of Observational Studies in Epidemiology" STROBE Statement checklist. [18][19][20][21] Every included study was compared to the RECORD Statement's 35-item checklist to determine if the study reported pertinent information to fulfill the checklist.
No widely accepted tool exists for assessing Risk of Bias (RoB) for this type of observational study. Guidance was provided by a review authored by Sanderson et al., which provides a list of specific domains to be considered. [22] RoB for these observational, non-randomised studies was determined by using items that related to the following 4 domains: representativeness of patients, misclassification of patients, misclassification of outcome measurement, and inconsistent data. Overall study RoB was judged to be low if 4 out of the 4 domains were judged as low risk, moderate if 3 domains were considered low risk, and high if two or fewer domains were low risk.
Data synthesis and analysis
Our main outcome was appropriateness of x-rays or CTs. For this review, CT and x-ray appropriateness was broadly defined as suspicion of any of the red flag conditions (fracture, cauda equina, infection, malignancy). Since there is some variation in the guidelines about the exact criteria for appropriateness, we anticipated some clinical heterogeneity in the definitions used by studies. Data were summarized separately for appropriateness of x-rays and appropriateness of CTs. We extracted estimates of the proportion of appropriate x-rays or CTs (and 95% confidence intervals) from each included study. In one case, the study only included an estimate of inappropriateness; [48] in this case the authors were contacted to confirm that we could accurately use the inverse of their estimate as the proportion of appropriate x-rays. When studies did not provide CIs for their appropriate percentage, we calculated the 95% CI using the formula for calculating confidence intervals for a single proportion in Stata (v 15). Meta-analysis for a single proportion using a random effects model was completed on studies that were determined to be clinically homogeneous. [23] The pooled proportion was calculated with Stata (v 15).
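The pooling itself was done in Stata. As a rough, self-contained illustration of the same idea, the sketch below implements DerSimonian-Laird random-effects pooling of logit-transformed proportions — the logit transformation is our assumption, since the paper does not state one — with placeholder study counts rather than the review's data.

```python
import math

def pool_proportions(events, totals):
    """DerSimonian-Laird random-effects pooled proportion (logit scale)."""
    # Logit transform with per-study variance 1/x + 1/(n-x).
    y = [math.log(x / (n - x)) for x, n in zip(events, totals)]
    v = [1.0 / x + 1.0 / (n - x) for x, n in zip(events, totals)]
    w = [1.0 / vi for vi in v]

    # Fixed-effect estimate and Cochran's Q heterogeneity statistic.
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))

    # Between-study variance tau^2 (DerSimonian-Laird estimator).
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)

    # Random-effects weights, pooled logit, and 95% CI back-transformed.
    w_re = [1.0 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    inv = lambda s: 1.0 / (1.0 + math.exp(-s))
    return inv(y_re), inv(y_re - 1.96 * se), inv(y_re + 1.96 * se)

# Placeholder study counts (appropriate x-rays / total referrals).
est, lo, hi = pool_proportions([40, 150, 300, 90], [120, 310, 700, 160])
print(f"pooled proportion: {est:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```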
We applied the GRADE (Grading of Recommendations, Assessment, Development and Evaluation) approach to assess certainty of the estimates of appropriateness. [24] Certainty was downgraded based on four factors (coded schematically in the sketch after this list):
• Risk of Bias: Twenty-five percent or more of the participants were from studies rated as having a high RoB.
• Inconsistency in results: Determined by examining whether the estimates were similar in magnitude (overlapping confidence intervals).
• Indirectness of evidence: More than 50% of the participants were outside the target group (e.g., differences in populations, outcome measures, and interventions).
• Imprecision of evidence: Determined based on the width of the confidence interval (CI) associated with the proportion of appropriateness (+/-3%) and the overall sample size (at least 2000 participants).
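Read mechanically, the four downgrade rules above can be encoded as a small scoring function. This is only our schematic reading of the criteria — in particular, we interpret the imprecision criterion as requiring both a wide interval and a small sample — and the inputs shown are illustrative.

```python
def grade_certainty(pct_high_rob: float, cis_overlap: bool,
                    pct_indirect: float, ci_halfwidth: float,
                    n_total: int) -> str:
    """Start at 'high' certainty and downgrade once per violated criterion."""
    levels = ["high", "moderate", "low", "very low"]
    downgrades = sum([
        pct_high_rob >= 25,                        # risk of bias
        not cis_overlap,                           # inconsistency
        pct_indirect > 50,                         # indirectness
        ci_halfwidth > 3 and n_total < 2000,       # imprecision
    ])
    return levels[min(downgrades, len(levels) - 1)]

# Illustrative inputs loosely mirroring the x-ray estimate:
# non-overlapping CIs and one ED-only study, but a large pooled sample.
print(grade_certainty(pct_high_rob=20, cis_overlap=False,
                      pct_indirect=60, ci_halfwidth=13, n_total=4184))
# -> "low"
```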
Results
We identified a total of 919 publications from database searching (n = 918) and additional sources (n = 1), which was reduced to 696 studies after deduplication (Fig 1). We reviewed 185 full texts of which 22 were excluded for very specific reasons (see S2 Appendix). Of the six final included studies, [47][48][49][50][51][52] one study was published in Spanish but was translated for analysis, [52] and two studies were abstracts only for which there was no full publication according to the authors of the abstracts. [47,48]
Study characteristics
The studies were conducted in Finland, Ireland, Spain, and the United States (Table 1). In all studies, imaging referrals were made by physicians from a mixture of primary care clinic and hospital settings. Sample sizes ranged from 30 to 3908. The duration of LBP in the different studies was undefined. Five of the six studies assessed appropriateness of x-rays; two of the six assessed appropriateness of CTs. The studies used a range of different guidelines to select the criteria for determining appropriateness. Across the six included studies, nine different guidelines were used; some studies were directed by more than one guideline source.
Study design. The included studies were all retrospective chart reviews/audits (see S2 Appendix), though not all used common terms to indicate that. [47] The majority of studies were a general chart audit/review done specifically to quantify appropriate imaging for LBP. However, one study's objective was to quantify the appropriateness of CT imaging in young patients, and it included more than CT imaging of the lumbar spine (e.g., thoracic spine, head, etc.). [49]
Setting. All included studies were general chart reviews of medical records, were conducted in a primary care provider setting, and reported adequate information for the settings according to the RECORD checklist. The settings were identified as a hospital or health centre, with only one study mentioning data coming from the ED setting alone. [51]
Participants and study size. Participants were largely identified either by patient records or by records of images. The coding used to identify the included records was clearly described in only two studies, [51,52] and these two studies were the only ones to justify their sample sizes.
Data sources/variables. Most studies took the information from the patients' hospital or clinic charts directly. If a specific database or computer program was accessed, it was not communicated in the published paper. Electronic medical records were specified in three studies, but the applications were not identified by name. [48,51,52] One study utilized an insurance claims database. [51]
Data access, cleaning, linkage, and supplementary information. These reporting criteria were poorly discussed, or not discussed at all, in the studies. If linkage was involved it was not clarified, and if data cleaning occurred the details were not explained sufficiently. No study mentioned the level of database access researchers had. Only Schlemmer et al. provided supplementary data that was available for access online. [51]
Risk of bias. The four domains that were assessed for RoB were representativeness of patients, misclassification of patients, misclassification of outcome measurement, and inconsistency in data reporting (Fig 2). Four studies were judged to have a high risk of bias, one to have moderate RoB, [52] and one to have low RoB. [51]
Estimates of appropriateness
X-rays. We found five studies with 4,598 participants that reported the appropriateness of x-rays, of which four used the reason for referral to determine appropriateness (Table 1). [47,50,51,52] One study, by Culleton et al., used the radiology report interpreting the image to determine appropriateness; [48] it was excluded from the meta-analysis due to the heterogeneity of outcome assessment and data source. From the four studies with 4,184 participants, we found low-quality evidence that 43% (95% CI: 30%, 56%) of x-rays were appropriate (Fig 3). The quality of evidence was downgraded for two reasons: inconsistency and indirectness (Table 2). The estimate was determined to be inconsistent based on non-overlapping confidence intervals of individual estimates across studies. As well, the estimate was downgraded due to indirectness, as one of the studies was conducted solely in an ED setting while all others were in mixed-setting health centres with both general and ED physicians.
CTs. We found two studies with 678 participants that reported the appropriateness of CTs (Table 1). Both studies used the reason for referral to determine appropriateness but used different criteria to define the outcome. Schlemmer et al. [51] defined appropriateness as any red flag condition or pain that had persisted for more than 6 weeks, while Oikarinen et al. [49] restricted the definition to only situations of trauma. Using both studies, we found very low-quality evidence that 54% (95% CI: 51%, 58%) of CTs for LBP were appropriate (Fig 3). Similar to the outcome of x-ray appropriateness, the certainty of the estimate for CT appropriateness was downgraded due to inconsistency, because of non-overlapping confidence intervals, and indirectness, because there were differences in the setting that would influence the outcome. Additionally, the estimate was downgraded due to imprecision: although the confidence intervals were somewhat narrow, the estimate is based on a sample size of fewer than 2000 participants, which challenges the certainty of the estimate (Table 2).
Discussion
Few studies have been published reporting on the appropriateness of x-ray and CT scans ordered individually by primary care physicians (in general practice or emergency medicine) for patients with LBP. Among the studies we identified, most were conducted in European countries. No audit was conducted in countries such as Canada and Australia, despite these countries having ongoing national campaigns to reduce unnecessary imaging for LBP (e.g., Choosing Wisely Canada). [7] From the available evidence, we found that only half of x-rays and CTs are being ordered according to guidelines. However, due to several factors related to inconsistency and indirectness, we have low certainty in this estimate (Table 2).
Table 2. GRADE summary of findings for the outcome of appropriateness of x-ray and CT imaging for patients with low back pain, ordered by primary and emergency care physicians. Population: patients with any type of low back pain. Setting: emergency department, general practice, hospital. Comparison: back pain guidelines for imaging, assumed to focus on red flag indicators.
Appropriateness of x-ray: 43% (95% CI: 30% to 56%); n = 4,184 (four studies); certainty: Low (downgraded for inconsistency and indirectness).
Appropriateness of CTs: 54% (95% CI: 51% to 58%); n = 678 (two studies); certainty: Very low (downgraded for inconsistency, indirectness, and imprecision).
Certainty ratings follow the GRADE Working Group grades of evidence.
Our lack of certainty stems largely from the variation in, and lack of reporting of, how appropriateness was defined in these studies. Moreover, the majority of the studies we identified were conducted with very small sample sizes (and were thus underpowered to provide reliable estimates) and were of low methodological and reporting quality. In order to advance the science in this area, better quality studies that are adequately powered and adhere to guidelines for conducting and reporting clinical audits using routinely collected data are required. While another systematic review has investigated imaging appropriateness, it included referrals from multiple provider types and multiple imaging modalities, including MRI, which introduced heterogeneity. [17] Our review adds to the current knowledge base in this area by answering a specific question regarding the appropriateness of radiation emitting x-ray and CT for patients with LBP in settings where patients typically seek care. Given that there have been several recent (past 5 years) international campaigns targeting physicians in general practice and emergency departments to reduce imaging, providing a robust assessment of the appropriateness specific to this recommendation is necessary to help clarify the issue and set targets for change. [7]
With respect to the estimate of imaging appropriateness, it is important to discuss that we found wide variation in the methods and reporting of the included studies. The six included studies cited nine different guideline sources, which were not always internationally recognized. In addition, although the names and sometimes references of guidelines were mentioned as the source for determining appropriateness, it was not clear which criteria were used to define the outcome. For example, many guidelines recommended imaging only when red flags were present, while others provided additional criteria, which recommended imaging after a certain duration of LBP and non-response to treatment. It was unclear how these criteria were operationalized to code the reasons for referral as appropriate or not. This could lead to misclassification of the outcome or low reliability of the results. Better reporting of criteria for defining appropriateness and examples of operationalizing the coding protocol would improve our understanding of possible heterogeneity in the outcomes across studies.
Other sources of potential heterogeneity included the differences in inclusion criteria regarding patient population, the setting in which imaging referrals were made, and the medical record data sources. For example, two studies looked at patients who were under the age of 40, while one study looked only at patients older than 65 years. While most studies included a mixture of settings with referrals made from hospital-based or general practice-based physicians, one study focused solely on referrals made within an emergency department setting. Lastly, one study collected data from an insurance database, two used EMRs, and three did not describe the database other than to mention medical records. These potential sources of clinical heterogeneity may explain some of the inconsistency in the estimates across studies.
Strengths
As recommended for systematic reviews and meta-analyses, we adhered to the PRISMA guidance for conducting and reporting systematic reviews and meta-analyses using observational data. [53,54] This included a) having two reviewers screen studies and extract data, b) providing an assessment of methodological quality and heterogeneity among the included studies, and c) forward and backward citation tracking to ensure all relevant studies were captured. We focused on a precise question — what proportion of radiation-emitting imaging ordered for patients with LBP in ED and primary care settings is appropriate — which allowed us to estimate how frequently these potentially harmful tests are ordered appropriately. Exclusion of older guidelines allows us to focus on recent studies that are most applicable to the current guideline recommendations and current health care provider practice. Finally, we used the "RECORD checklist" to provide a robust assessment of the quality of reporting, which allowed us to make sound recommendations for advancing the quality and replicability of the science in these types of study designs.
Limitations
Despite its strengths, this study is limited in a few ways. First, due to resource constraints we chose to use a more specific search strategy meaning that it may not have been sufficiently sensitive to identify an exhaustive list of all potentially relevant studies. However, after consultation with a research librarian about this decision we included forward and backward citation tracking to enhance our specific search of electronic databases. While additional citation tracking did identify several potentially relevant studies all but one [51] were later excluded for various reasons (see S2 Appendix).
Other limitations of this systematic review involve the quality, risk of bias assessments, and heterogeneity of the included studies. Many of the studies were not described in sufficient detail to assess the quality for replicability. Since a tool does not already exist to help grade the studies that are reporting routinely collected health data, the domains for potential introduction of bias were selected based on expert opinion. This makes it difficult to compare to other systematic reviews. As mentioned, the clinical heterogeneity of the included studies with respect to the definition of appropriateness and differences in the inclusion criteria of patient ages also limits the certainty of our findings around the estimate of appropriateness, which we have reflected in our GRADE assessment.
Future research
Based on this review's findings, we identified several areas for future research that would improve our knowledge about the appropriateness of LBP imaging. First, only 2 studies assessed the appropriateness of CT images for LBP that were ordered by physicians. One of these studies had a very small sample size and high risk of bias, and the other was methodologically sound but was conducted in an ED setting. Future studies in other countries, using similar methods to Schlemmer et al. in both general practice and emergency settings, would be helpful to confirm appropriateness of CTs for LBP. This would involve adhering to the RECORD statement for improved reporting quality. Additionally, for both outcomes of x-rays and CTs, we found that the definition of appropriateness varied among studies and in many cases the definition was often unclear or too vague to allow meaningful interpretation or replication. Thus, as a first essential step, we recommend future research clearly report the definition of appropriateness they are using and the operationalization of the definition for coding purposes. Second, and possibly most important, this field of research would benefit from a standardized definition of appropriateness for x-rays and CTs. This could be based on a spectrum to reflect some variation in the guidelines, ranging from a very strict cut-off (e.g., appropriate only when trauma-indicated, as used in the Oikarinen et al. study) to more inclusive definitions (e.g., any red flag indication and/or pain lasting longer than 6 weeks, as used in Schlemmer et al.). [49,51]
Implications for practice
The results of this systematic review show that in several countries about half of the referrals for LBP imaging (x-rays and CTs) are not appropriate according to the guidelines. Due to the associated patient harms of x-ray and CT scans, including radiation exposure, high rates of incidental findings, and risk of delayed recovery, non-adherence to the guidelines represents low-value care for patients. [27] Hence, it is important to better understand why these referrals are made through future research.
Conclusion
Recently there has been a push to reduce unnecessary and inappropriate imaging, not only to save costs, but also to provide better patient care. [10] This review provides an estimate of appropriateness for radiation emitting imaging for LBP, which indicates that only about half of imaging is appropriate according to recent guidelines. However, due to lack of published research, this estimate was not informed by data from many of the countries promoting the reduction of inappropriate imaging, such as Canada, Australia and the UK. Moving forward, what we need is for more countries to undertake high-quality studies with sufficiently large sample sizes using clear definitions of appropriateness.
"year": 2019,
"sha1": "7ce9b1e7ccf51bcf76e1e21f9186f7692dd6545c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0225414&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a43681e2676c4a114837f5e1db9ca7e1224a807",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Nonminimal torsion-matter coupling extension of f(T) gravity
We construct an extension of f(T) gravity with the inclusion of a non-minimal torsion-matter coupling in the action. The resulting theory is a novel gravitational modification, since it is different from both f(T) gravity, as well as from the non-minimal curvature-matter-coupled theory. The cosmological application of this new theory proves to be very interesting. In particular, we obtain an effective dark energy sector whose equation-of-state parameter can be quintessence or phantom-like, or exhibit the phantom-divide crossing, while for a large range of the model parameters the Universe results in a de Sitter, dark-energy-dominated, accelerating phase. Additionally, we can obtain early-time inflationary solutions too, and thus provide a unified description of the cosmological history.
I. INTRODUCTION
The recent observational advances in cosmology have provided a large amount of high-precision cosmological data, which has posed new challenges for the understanding of the basic physical properties of the Universe, and of the gravitational interaction that dominates its dynamics and evolution. The observation of the accelerated expansion of the Universe [1] has raised the fundamental issue of the cause of this acceleration, which is usually attributed to a mysterious and yet not directly detected dominant component of the Universe, called dark energy [2]. In this context, the recently released Planck satellite data of the 2.7 degree Cosmic Microwave Background (CMB) full sky survey [3] have generally confirmed the standard Λ Cold Dark Matter (ΛCDM) cosmological model. On the other hand, the measurement of the tensor modes from large angle CMB B-mode polarisation by BICEP2 [4], implying a tensor-to-scalar ratio $r = 0.2^{+0.07}_{-0.05}$, has provided a very convincing evidence for the inflationary scenario, since the generation of gravitational wave fluctuations is a generic prediction of the early de Sitter exponential expansion. However, the BICEP2 result is in tension with Planck limits on standard inflationary models [5], and thus alternative explanations may be required. In principle, magnetic fields generated during inflation can produce the required B-mode, for a suitable range of energy scales of inflation [5]. Moreover, the existence of the fluctuations of cosmological birefringence can give rise to CMB B-mode polarization that fits BICEP2 data with r < 0.11, and no running of the scalar spectral index [6].
The above major observational advances require some good theoretical explanations, with the role of giving a firm foundation to cosmology, and the underlying theory of gravity. However, up to now, no convincing theoretical model, supported by observational evidence that could clearly explain the nature of dark energy, has been proposed. Moreover, not only the recent accelerated expansion of the Universe, but also observations at the galactic (galaxy rotation curves) and extra-galactic scale (virial mass discrepancy in galaxy clusters) [7] suggest the existence of another mysterious and yet undetected major component of the Universe, the so-called dark matter. From all these observations one can conclude that the standard general relativistic gravitational field equations, obtained from the classic Einstein-Hilbert action $S = \int \left( R/2 + L_m \right) \sqrt{-g}\, d^4x$, where R is the scalar curvature, and $L_m$ is the matter Lagrangian density, in which matter is minimally coupled to the geometry, cannot give an appropriate quantitative description of the Universe at astrophysical scales going beyond the boundary of the Solar System. To explain dark energy and dark matter in a cosmological context requires the ad hoc introduction of the dark matter and dark energy components into the total energy-momentum tensor of the Universe, in addition to the ordinary baryonic matter.
From a historical point of view, in going beyond the Einstein-Hilbert action, the first steps were taken in the direction of generalizing the geometric part of the standard gravitational action. An extension of the Einstein-Hilbert action, in which the Ricci scalar invariant R is substituted with an arbitrary function of the scalar invariant, f(R), has been extensively explored in the literature [8]. Such a modification of the gravitational action can explain the late acceleration of the Universe, and may also provide a geometric explanation for dark matter, which can be described as a manifestation of geometry itself [9]. Furthermore, quadratic Lagrangians, constructed from second order curvature invariants such as $R^2$, $R_{\mu\nu}R^{\mu\nu}$, $R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}$, $\varepsilon^{\alpha\beta\mu\nu}R_{\alpha\beta\gamma\delta}R^{\gamma\delta}{}_{\mu\nu}$, $C_{\alpha\beta\mu\nu}C^{\alpha\beta\mu\nu}$, etc., have also been considered as candidates for more general gravitational actions [10], which can successfully explain dark matter and the late-time cosmic acceleration. Alternatively, the interest for extra-dimensions, which goes back to the unified field theory of Kaluza and Klein, led to the development of the braneworld models [11]. In braneworld models, gravitational effects due to the extra dimensions dominate at high energies, but important new effects, which can successfully explain both dark energy and dark matter, also appear at low energies.
Most of the modifications of the Einstein-Hilbert Lagrangian involve a change in the geometric part of the action only, and assume that the matter Lagrangian plays a subordinate and passive role, which is implemented by the minimal coupling of matter to geometry. However, a general theoretical principle forbidding an arbitrary coupling between matter and geometry does not exist a priori. If theoretical models, in which matter is considered on an equal footing with geometry, are allowed, gravitational theories with many interesting and novel features can be constructed.
A theory with an explicit coupling between an arbitrary function of the scalar curvature and the Lagrangian density of matter was proposed in [12]. The gravitational action of the latter model is of the form $S = \int \left\{ \frac{1}{2} f_1(R) + \left[1 + \lambda f_2(R)\right] L_m \right\} \sqrt{-g}\, d^4x$. In these models an extra force acting on massive test particles arises, and the motion is no longer geodesic. Moreover, in this framework, one can also explain dark matter [13]. The early "linear" geometry-matter coupling [12] was extended in [14], and a maximal extension of the Einstein-Hilbert action with geometry-matter coupling, of the form $S = \int d^4x \sqrt{-g}\, f(R, L_m)$, was considered in [15]. An alternative model to $f(R, L_m)$ gravity is the f(R,T) theory [16], where T is the trace of the matter energy-momentum tensor $T_{\mu\nu}$, and the corresponding action is given by $S = \int \left[ \frac{f(R,T)}{16\pi G} + L_m \right] \sqrt{-g}\, d^4x$. The dependence of the gravitational action on T may be due to the presence of quantum effects (conformal anomaly), or of some exotic imperfect fluids. When the trace of the energy-momentum tensor T is zero, T = 0, which is the case for electromagnetic radiation, the field equations of f(R,T) theory reduce to those of f(R) gravity.
However, the $f(R, L_m)$ or f(R,T) gravitational models are not the most general Lagrangians with nonminimal geometry-matter couplings. One could further obtain interesting gravity models by introducing a term of the form $R_{\mu\nu}T^{\mu\nu}$ into the Lagrangian [17,18]. Such couplings appear in Einstein-Born-Infeld theories [19], when one expands the square root in the Lagrangian. The presence of the $R_{\mu\nu}T^{\mu\nu}$ coupling term has the advantage of entailing a nonminimal coupling of geometry to the electromagnetic field.
All the above gravitational modifications are based on the Einstein-Hilbert action, namely on the curvature description of gravity. However, an interesting and rich class of modified gravity can arise if one modifies the action of the equivalent torsional formulation of General Relativity. As it is well known, Einstein also constructed the "Teleparallel Equivalent of General Relativity" (TEGR) [20][21][22][23][24], replacing the torsion-less Levi-Civita connection by the curvature-less Weitzenböck one, and using the vierbein instead of the metric as the fundamental field. In this formulation, instead of the curvature (Riemann) tensor one has the torsion tensor, and the Lagrangian of the theory, namely the torsion scalar T, is constructed by contractions of the torsion tensor. Thus, if one desires to modify gravity in this formulation, the simplest thing is to extend T to an arbitrary function f(T) [25,26]. An interesting aspect of this extension is that although TEGR coincides with General Relativity at the level of equations, f(T) is different from f(R); that is, they belong to different modification classes. Additionally, although in f(R) theory the field equations are fourth order, in f(T) gravity they are second order, which is a great advantage. f(T) gravity models have been extensively applied to cosmology, and amongst other applications they are able to explain the late-time accelerating expansion of the Universe without the need for dark energy [26][27][28]. Furthermore, following these lines, and inspired by the higher-curvature modifications of General Relativity, one can construct gravitational modifications based on higher-order torsion invariants, such as the $f(T, T_G)$ gravity [29], which also proves to have interesting cosmological implications.
Another gravitational modification based on the teleparallel formulation is the generalization of TEGR to the case of a Weyl-Cartan space-time, in which the Weitzenböck condition of the vanishing of the curvature is also imposed (Weyl-Cartan-Weitzenböck (WCW) gravity), with the addition of a kinetic term for the torsion in the gravitational action [30]. In this framework the late-time acceleration of the Universe can be naturally obtained, determined by the intrinsic geometry of the space-time. A further extension of the WCW gravity, in which the Weitzenböck condition in a Weyl-Cartan geometry is inserted into the gravitational action via a Lagrange multiplier, was analyzed in [31]. In the weak field limit the gravitational potential explicitly depends on the Lagrange multiplier and on the Weyl vector, leading to an interesting cosmological behavior.
In this work, we are interested in proposing a novel gravitational modification based on the torsional formulation, by allowing the possibility of a nonminimal torsion-matter coupling in the gravitational action. In particular, for the torsion-matter coupling we adopt the "linear" model introduced in the case of f(R) gravity in [12]. Hence, the gravitational field can be described in terms of two arbitrary functions of the torsion scalar T, namely $f_1(T)$ and $f_2(T)$, with the function $f_2(T)$ linearly coupled to the matter Lagrangian. This new coupling induces a supplementary term $[1 + \lambda f_2(T)]\, L_m$ in the standard f(T) action, with λ an arbitrary coupling constant. When λ = 0, the model reduces to the usual f(T) gravity. We investigate in detail the cosmological implications of the torsion-matter coupling for two particular choices of the functions $f_1(T)$ and $f_2(T)$. For both choices the Universe evolution is in agreement with the observed behavior, and moreover it ends in a de Sitter type vacuum state, with zero matter energy density. The details of the transition depend on the numerical values of the free parameters that appear in the functions $f_1(T)$ and $f_2(T)$.
The paper is organized as follows. In Section II we briefly describe the basics of the f(T) gravity model. The field equations of the f(T) theory with linear nonminimal torsion-matter coupling are obtained in Section III. The cosmological implications of the theory are analyzed in Section IV. Finally, we conclude and discuss our results in Section V.
II. f(T) GRAVITY AND COSMOLOGY
In this Section, we briefly review the f(T) gravitational paradigm. We use the notation where Greek indices run over the coordinate space-time and Latin indices run over the tangent space-time. As we mentioned in the Introduction, the dynamical variables are the vierbein fields $\mathbf{e}_A(x^\mu)$, which at each point $x^\mu$ of the manifold form an orthonormal basis for the tangent space, that is $\mathbf{e}_A \cdot \mathbf{e}_B = \eta_{AB}$, with $\eta_{AB} = \mathrm{diag}(1,-1,-1,-1)$. Additionally, they can be expressed in terms of the components $e^\mu_A$ in the coordinate basis as $\mathbf{e}_A = e^\mu_A \partial_\mu$. Hence, the metric is obtained from the dual vierbein through

$$g_{\mu\nu}(x) = \eta_{AB}\, e^A_\mu(x)\, e^B_\nu(x). \qquad (1)$$

In this formulation, instead of the Levi-Civita connection one uses the Weitzenböck one, $\Gamma^\lambda_{\ \nu\mu} \equiv e^\lambda_A \partial_\mu e^A_\nu$ [32], and thus instead of curvature we acquire the torsion tensor

$$T^\lambda_{\ \mu\nu} = e^\lambda_A \left( \partial_\mu e^A_\nu - \partial_\nu e^A_\mu \right). \qquad (2)$$

It proves convenient to define the contorsion tensor $K^{\mu\nu}_{\ \ \rho} \equiv -\frac{1}{2}\left( T^{\mu\nu}_{\ \ \rho} - T^{\nu\mu}_{\ \ \rho} - T_\rho^{\ \mu\nu} \right)$, as well as the tensor $S_\rho^{\ \mu\nu} \equiv \frac{1}{2}\left( K^{\mu\nu}_{\ \ \rho} + \delta^\mu_\rho\, T^{\alpha\nu}_{\ \ \alpha} - \delta^\nu_\rho\, T^{\alpha\mu}_{\ \ \alpha} \right)$. Using these one can write down the teleparallel Lagrangian (torsion scalar) [21][22][23][24][33]

$$T = S_\rho^{\ \mu\nu}\, T^\rho_{\ \mu\nu}, \qquad (3)$$

which used in the action and varied in terms of the vierbeins gives rise to the same equations as General Relativity. That is why such a theory is called "Teleparallel Equivalent of General Relativity" (TEGR). One can be based on the above torsional formulation of General Relativity in order to construct classes of modified gravity. The simplest one is to extend T to a function T + f(T), that is writing an action of the form

$$S = \frac{1}{16\pi G} \int d^4x\, e\, \left[ T + f(T) \right], \qquad (4)$$

where $e = \det(e^A_\mu) = \sqrt{-g}$, G is the gravitational constant, and we have used units where the speed of light is c = 1. Note that TEGR, and thus General Relativity, is restored when f(T) = 0. Moreover, we stress that although TEGR coincides with General Relativity at the level of equations, f(T) is different from f(R).
Let us now proceed to the cosmological application of f(T) gravity. Introducing additionally the matter sector, the total action becomes

$$S = \frac{1}{16\pi G} \int d^4x\, e\, \left[ T + f(T) + L_m \right], \qquad (5)$$

where the matter Lagrangian is assumed to correspond to a perfect fluid with energy density $\rho_m$ and pressure $p_m$ (for simplicity we neglect the radiation sector, although its inclusion is straightforward). Varying the action (5) with respect to the vierbeins we obtain the field equations

$$\left[1 + f'(T)\right] e^{-1}\partial_\mu\!\left(e\, e_A^{\ \rho}\, S_\rho^{\ \mu\nu}\right) + f''(T)\, e_A^{\ \rho}\, S_\rho^{\ \mu\nu}\, \partial_\mu T - \left[1 + f'(T)\right] e_A^{\ \lambda}\, T^\rho_{\ \mu\lambda}\, S_\rho^{\ \nu\mu} + \frac{1}{4}\, e_A^{\ \nu}\left[T + f(T)\right] = 4\pi G\, e_A^{\ \rho}\, \overset{\mathbf{em}}{T}{}_\rho^{\ \nu}, \qquad (6)$$

where $\overset{\mathbf{em}}{T}{}_\rho^{\ \nu}$ denotes the matter energy-momentum tensor and primes denote differentiation with respect to T. Proceeding forward, we impose the standard homogeneous and isotropic geometry, that is we consider

$$e^A_\mu = \mathrm{diag}(1, a, a, a), \qquad (7)$$

which corresponds to a flat Friedmann-Robertson-Walker (FRW) universe with metric

$$ds^2 = dt^2 - a^2(t)\, \delta_{ij}\, dx^i dx^j, \qquad (8)$$

where a(t) is the scale factor. In summary, inserting the vierbein ansatz (7) into the equations of motion (6) we extract the modified Friedmann equations as

$$H^2 = \frac{8\pi G}{3}\rho_m - \frac{f}{6} - 2 f' H^2, \qquad (9)$$

$$\dot{H} = -\frac{4\pi G\left(\rho_m + p_m\right)}{1 + f' - 12 H^2 f''}, \qquad (10)$$

with $H \equiv \dot{a}/a$ the Hubble parameter, and dots denoting derivatives with respect to t. Note that we have also used the relation

$$T = -6 H^2, \qquad (11)$$

which arises immediately for an FRW geometry using Eqs. (2), (3) and (7).
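As a quick consistency check of relation (11), the torsion scalar for the vierbein (7) can be evaluated symbolically. The following sketch (assuming sympy is available) implements definitions (2)-(3) by brute-force index summation; it is an illustrative verification aid rather than part of the derivation above.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
X = [t, x, y, z]
R = range(4)

e = sp.diag(1, a, a, a)                      # vierbein e^A_mu, Eq. (7)
einv = e.inv()                               # inverse vierbein, einv[mu, A]
g = sp.diag(1, -a**2, -a**2, -a**2)          # metric, Eq. (8)
gi = g.inv()

# Torsion tensor T^lam_{mu nu}, Eq. (2): T[lam][mu][nu].
T = [[[sum(einv[l, A] * (sp.diff(e[A, n], X[m]) - sp.diff(e[A, m], X[n]))
           for A in R) for n in R] for m in R] for l in R]

# Raised-index versions: Tuu[mu][nu][rho] = T^{mu nu}_rho,
# Td[rho][mu][nu] = T_rho^{mu nu}.
Tuu = [[[sum(gi[n, s] * T[m][s][r] for s in R)
         for r in R] for n in R] for m in R]
Td = [[[sum(g[r, l] * gi[m, al] * gi[n, be] * T[l][al][be]
            for l in R for al in R for be in R)
        for n in R] for m in R] for r in R]

# Contorsion K^{mu nu}_rho and superpotential S_rho^{mu nu}.
K = [[[-(Tuu[m][n][r] - Tuu[n][m][r] - Td[r][m][n]) / 2
       for r in R] for n in R] for m in R]
v = [sum(Tuu[al][n][al] for al in R) for n in R]      # T^{alpha nu}_alpha
S = [[[(K[m][n][r] + (v[n] if r == m else 0) - (v[m] if r == n else 0)) / 2
       for n in R] for m in R] for r in R]

# Torsion scalar, Eq. (3): T = S_rho^{mu nu} T^rho_{mu nu}.
Tscalar = sp.simplify(sum(S[r][m][n] * T[r][m][n]
                          for r in R for m in R for n in R))
H = sp.diff(a, t) / a
print(sp.simplify(Tscalar + 6 * H**2))       # prints 0, confirming T = -6 H^2
```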
III. f (T ) GRAVITY WITH NONMINIMAL TORSION-MATTER COUPLING
Having presented f(T) modified gravity in the previous section, in this section we extend it, allowing for a nonminimal coupling between the torsion scalar and the matter Lagrangian. In particular, we consider the action

S = (1/16πG) ∫ d⁴x e {T + f1(T) + [1 + λ f2(T)] L_m},

where f_i(T) (with i = 1, 2) are arbitrary functions of the torsion scalar T and λ is a coupling constant with units of mass⁻². Varying the action with respect to the tetrad e^A_ρ yields the field equations (13), where primes denote differentiation with respect to the torsion scalar. As expected, Eq. (13) reduces to Eq. (6) when λ = 0.
Since the Lagrangian density of a perfect fluid is the energy scalar, representing the energy in a local rest frame for the fluid, a possible "natural choice" for the matter Lagrangian density is L m /(16πG) = −ρ m [35,36].
In this case we have ᵉᵐS_A^{ρµ} = 0, since the perfect-fluid Lagrangian does not depend on the derivatives of the vierbein, and the energy-momentum tensor takes the usual perfect-fluid form T_{µν} = (ρ_m + p_m) u_µ u_ν − p_m g_{µν}. In summary, inserting the flat FRW vierbein choice (7) and the above matter Lagrangian density into the field equations (13), we obtain the modified Friedmann equations (15) and (16). In the limit λ = 0, f1(T) ≡ f(T), and f2(T) ≡ 0, Eqs. (15) and (16) reduce to Eqs. (9) and (10), respectively. The generalized Friedmann equations can be rewritten in the standard form (17) and (18), where the effective energy density ρ_DE and the effective pressure p_DE of the dark energy sector are defined in Eqs. (19) and (20). Furthermore, we can define the dark-energy equation-of-state parameter in the standard form

w_DE ≡ p_DE/ρ_DE.   (21)

One can easily verify that the above effective dark energy density and pressure satisfy the usual evolution equation

ρ̇_DE + 3H(ρ_DE + p_DE) = 0.   (22)

Finally, we can introduce the deceleration parameter q, given by

q ≡ −1 − Ḣ/H²,   (23)

whose sign indicates the decelerating/accelerating nature of the cosmological expansion. Cosmological models with q < 0 are accelerating, while those having q > 0 experience a decelerating evolution.
IV. COSMOLOGICAL IMPLICATIONS
Since we have extracted the basic background equations of motion of the f(T) gravity model with a nonminimal matter-torsion coupling, we are now able to investigate its phenomenological implications. Due to the relation (11), for convenience in the following we will change the T-dependence to the H-dependence in the involved expressions, so that f1(T) ≡ f1(H) and f2(T) ≡ f2(H); the derivatives of the functions transform accordingly through the chain rule, f_i'(T) = (df_i/dH)/(dT/dH), i = 1, 2, respectively. Finally, in the following we fully adopt the natural system of units by taking 8πG = c = 1.
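As a quick consistency check of this change of variables, the chain rule can be verified symbolically. The short sketch below is illustrative only: it assumes the common FRW convention T = 6H² (the sign of this relation differs between conventions, but the rescaling α = 36α₁ quoted in the following subsection is insensitive to it, since it involves T²), and it uses the model-A function f1(T) = −Λ + α₁T².

```python
# Illustrative check of the T -> H change of variables (assumes T = 6 H^2;
# the sign convention is an assumption, see the text above).
import sympy as sp

H, Lam, a1 = sp.symbols('H Lambda alpha_1', positive=True)
T = 6 * H**2                          # assumed FRW relation between T and H

f1_T = lambda T_: -Lam + a1 * T_**2   # model A: f1(T) = -Lambda + alpha_1 T^2
f1_H = sp.expand(f1_T(T))             # f1 expressed through H

print(f1_H)                           # -> -Lambda + 36*alpha_1*H**4, i.e. alpha = 36 alpha_1

# Chain rule: f1'(T) = (d f1 / dH) / (dT / dH)
f1_prime_T = sp.simplify(sp.diff(f1_H, H) / sp.diff(T, H))
print(f1_prime_T)                               # -> 12*alpha_1*H**2
print(sp.simplify(f1_prime_T - 2 * a1 * T))     # -> 0, i.e. equals 2*alpha_1*T as expected
```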
The basic cosmological equations describing the time evolution of nonminimally coupled f(T) gravity are given by Eqs. (15) and (16). From Eq. (15) we can express the matter density ρ_m in terms of the Hubble function, obtaining Eq. (24). By substituting the matter density ρ_m into Eq. (16) we obtain the basic equation describing the cosmological dynamics in nonminimally matter-coupled f(T) gravity, Eq. (25). Once the functions f1(T) and f2(T) are fixed, Eqs. (24) and (25) become a system of two ordinary differential equations for three unknowns, (H, ρ_m, p_m). In order to close the system of equations, the matter equation of state p_m = p_m(ρ_m) must also be given. Finally, the deceleration parameter is given by Eq. (26), while the dark energy equation-of-state parameter can be expressed as in Eq. (27). In the following we will investigate the system of Eqs. (24) and (25) for different functional forms of f1(T) and f2(T).
A. f1(T) = −Λ + α1T² and f2(T) = β1T²

As a first example, we examine the case where f1(T) = −Λ + α1T² and f2(T) = β1T², where α1 and β1 are constants, since these are the first non-trivial corrections to TEGR, that is, to General Relativity. As we mentioned above, it proves convenient to express the involved functions in terms of H. In particular, in terms of H the functional dependencies of f1 and f2 are given by f1(H) = −Λ + αH⁴ and f2(H) = βH⁴, respectively, with α = 36α1, β = 36β1, and the derivatives f1'(T) and f2'(T) transform accordingly. Moreover, we restrict our analysis to the case of dust matter, that is, we take p_m = 0.
In this case the gravitational field equations (24) and (25) take the forms (28) and (29), respectively. The time variation of the Hubble function H, of the scale factor a, of the matter energy density ρ_m, and of the deceleration parameter q, obtained by numerically integrating Eqs. (28) and (29) for different numerical values of the free parameters Λ, α, β and λ, are presented in Figs. 1-5. As depicted in Fig. 1, the Hubble function is a monotonically decreasing function of time for all t > 0. In the limit of large times the Hubble function tends to a constant value, lim_{t→∞} H(t) = h₀ = constant. Hence, for the considered range of values of the free parameters, in the f(T) model with torsion-matter coupling the Universe ends its evolution in an accelerating, de Sitter-type phase. The scale factor a, shown in Fig. 2, increases monotonically in time, indicating an expansionary behavior. The matter energy density, depicted in Fig. 3, tends progressively to zero. Furthermore, the deceleration parameter q, presented in Fig. 4, indicates a large variety of dynamical behaviors of the f(T) model with matter-torsion coupling. In particular, for some values of the free parameters the Universe starts its evolution in the matter-dominated phase from a decelerating phase, and ends in a de Sitter-type accelerated behavior. Other values of the parameters produce Universe models starting from a marginally accelerating phase (q = 0) and ending in a de Sitter state. Finally, for other parameter choices, at the beginning of the matter-dominated phase the Universe is already in an accelerating phase, that is, with q < 0. Lastly, as depicted in Fig. 5, for these specific choices of the parameters the dark energy equation-of-state parameter w_DE is very close to the value −1, to which it rigorously tends in the large-time limit. This is an advantage, since in this model the effective torsion-matter coupling can successfully mimic the cosmological constant, in agreement with observations. After the above numerical elaboration, we examine whether we can obtain analytical expressions in various limits. In particular, we analyze the properties of the equations in the limit of small and large H(t), respectively.
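As a point of reference for this kind of numerical elaboration, the sketch below integrates only the GR limit of the model (λ = 0, f1(T) = −Λ, f2(T) = 0, dust matter, units 8πG = c = 1), where the dynamics reduce to the exactly known form Ḣ = −(3/2)(H² − Λ/3) with ρ_m = 3H² − Λ; this is not the paper's Eqs. (28)-(29), whose right-hand side would simply replace the one used here. The qualitative late-time behavior — H decreasing monotonically to a constant h₀ and q → −1 — is the same as described above. The parameter values are illustrative.

```python
# Baseline integration in the GR limit (lambda = 0, f1 = -Lambda, f2 = 0),
# units 8*pi*G = c = 1, dust matter (p_m = 0). Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

Lam = 1.0                       # cosmological constant (illustrative)
H0 = 3.0                        # initial Hubble rate H(0) (illustrative)

def dH_dt(t, y):
    (H,) = y
    return [-1.5 * (H**2 - Lam / 3.0)]   # Hdot = -(3/2)(H^2 - Lambda/3)

sol = solve_ivp(dH_dt, (0.0, 10.0), [H0], dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 10.0, 400)
H = sol.sol(t)[0]
rho_m = 3.0 * H**2 - Lam                  # matter density from the Friedmann constraint
Hdot = -1.5 * (H**2 - Lam / 3.0)
q = -1.0 - Hdot / H**2                    # deceleration parameter, cf. Eq. (26)

print(f"h0 (numerical, late time)  = {H[-1]:.6f}")
print(f"h0 (analytic, sqrt(Lam/3)) = {np.sqrt(Lam / 3.0):.6f}")
print(f"q at t=0: {q[0]:+.3f}, q at late time: {q[-1]:+.3f}")  # near +1/2 -> -1 (de Sitter)
print(f"rho_m at late time: {rho_m[-1]:.2e}")                  # -> 0
```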
The limit of small H
In the limit of small H(t), that is, at the late phases of the cosmological evolution, Eq. (29) reduces to a simpler first-order equation, yielding the approximate solution (31), where a₀ is an arbitrary constant of integration. Similarly, the deceleration parameter (26) can be evaluated in this limit. Additionally, the matter energy density (28) can be approximated accordingly, and using Eq. (31) its explicit time dependence acquires the form (35). Finally, the dark-energy equation-of-state parameter follows from (27). Interestingly enough, we observe that, according to the parameter values, w_DE can be either above or below −1, that is, the effective dark-energy sector can be quintessence-like or phantom-like. This feature, which is expected to happen in modified gravity [37], is an additional advantage of the scenario at hand.
Lastly, we mention that in the large-time limit the Hubble function (31) becomes almost constant, implying that a de Sitter-type evolution is possible in the framework of the present model.
The limit of large H
In the limit of large H, corresponding to the early phases of the cosmological evolution, in the first-order approximation the differential equation (29) describing the cosmological dynamics of the Hubble function simplifies considerably, and its general solution can be obtained analytically. The behavior of the scale factor is then determined solely by the parameter α. Similarly, the deceleration parameter (26) can be evaluated in this limit. Moreover, for the matter energy density (28) we obtain an approximately constant value, showing that during the time interval for which this approximation is valid the energy density of the matter is approximately constant. Finally, the dark-energy equation-of-state parameter follows from (27). Again, we mention that, according to the parameter choice, w_DE can be either above or below −1, that is, the effective dark-energy sector can be quintessence-like or phantom-like.
B. f1(T) = −Λ and f2(T) = α1T + β1T²

As a second example, we examine the case where f1(T) = −Λ and f2(T) = α1T + β1T², where Λ > 0, α1 and β1 are constants, since this scenario is also a first non-trivial correction to TEGR, that is, to General Relativity. Equivalently, we impose f1(H) = −Λ and f2(H) = αH² + βH⁴, with α and β the correspondingly rescaled constants. The gravitational field equations (24) and (25) in this case take the forms (44) and (45), respectively. The time variation of the Hubble function H, of the scale factor a, of the matter density ρ_m, and of the deceleration parameter, obtained by numerically elaborating the system of Eqs. (44) and (45) for different values of the free parameters and assuming the matter to be dust (w_m = 0), are presented in Figs. 6-10, respectively. The Hubble function, presented in Fig. 6, decreases monotonically in time, and tends to a constant value in the large-time limit. Therefore, for all the parameter choices the Universe ends in a de Sitter phase. The time variation of the scale factor, depicted in Fig. 7, indicates that all considered models are expanding. The matter energy density, shown in Fig. 8, monotonically decreases in time, as expected. In the large-time limit the Universe ends in a vacuum state, with negligible matter density, thus being completely dominated by the effective dark energy sector. The deceleration parameter, presented in Fig. 9, indicates a very strong dependence on the model parameters; after a parameter-dependent time t_a the Universe enters an accelerated phase, with q(t) < 0, ∀t > t_a. Similarly to the model of the previous subsection, the Universe always ends in a de Sitter phase, with q = −1.
Finally, as depicted in Fig. 10, in the large-time limit the dark energy equation-of-state parameter w_DE tends to the value −1, namely lim_{t→∞} w_DE(t) = −1, thus showing that this choice of the functions f1(T) and f2(T) can also successfully mimic an effective cosmological constant. Note, however, that for these specific parameter choices w_DE lies in the phantom regime, which is an advantage of the scenario at hand, revealing its capabilities. After this numerical elaboration, we examine whether we can obtain analytical expressions in various limits. In particular, we examine the properties of the equations in the limit of small and large H(t), respectively.
The limit of small H
In the limit of small values of the Hubble function H, that is, at late times, Eq. (45) can be approximated by a simpler equation, and the resulting dark-energy equation-of-state parameter can lie both in the quintessence and in the phantom regime, depending on the specific choices of the free parameters of the model, namely on α, β, λ and Λ, respectively.
The limit of large H
In the opposite limit of large H, that is, at early times, at first-order approximation Eq. (45) reduces to a simple equation for H with initial condition H(t₀) = H₀, and its general solution can be obtained analytically.
The scale factor then follows accordingly, while the deceleration parameter is obtained as q = −5/2; that is, at early times the universe always starts with acceleration, which corresponds to an inflationary stage. Finally, for the time variation of the matter energy density in the large-H regime we find ρ_m(t) ≈ 0, which is consistent with the interpretation of this stage as inflationary.
We mention that the above expressions for H, a, q and ρ_m at first-order approximation are independent of the free parameters of the model α, β, λ and Λ, and are determined only by the initial value of H at t = t₀.
V. DISCUSSIONS AND FINAL REMARKS
In the present paper we have considered an extension of the f(T) gravity model, introducing a nonminimal coupling between torsion and matter. The geometric part of the action was extended through the introduction of two independent functions of the torsion scalar T, namely f1(T) and f2(T), with the function f2(T) being nonminimally coupled to the matter Lagrangian L_m. The resulting gravitational model presents some formal analogies with the nonminimal geometry-matter coupling introduced in [12]. However, the resulting equations, as well as their physical and geometrical interpretations, are very different. The theory of nonminimal torsion-matter coupling is therefore a novel class of gravitational modification.
From the physical point of view, in this theory matter is not just a passive component in the space-time continuum: it plays an active role in the overall gravitational dynamics, which is strongly modified due to the supplementary interaction between matter and geometry. Moreover, the major advantage of f(T)-type models, namely that the field equations are second order, is preserved under the torsion-matter coupling.
As an application of the nonminimal torsion-matter coupling scenario we have considered the dynamical evolution of a flat FRW universe. We have investigated the time dependence of the cosmologically relevant physical parameters for two different choices of the functions f1(T) and f2(T), corresponding to the simplest departures from General Relativity. In these specific models the dynamics of the Universe is determined by the free parameters that appear in the functions f1(T) and f2(T), as well as by the matter-torsion coupling constant. Depending on the numerical values of these parameters, a large number of cosmological behaviors can be obtained. In our analysis we have considered the matter-dominated phase of the Universe's evolution, that is, we neglected the matter pressure. More general models with p_m ≠ 0 can be easily constructed and analyzed.
We restricted our analysis to expanding evolutions, although contracting or bouncing solutions can easily be obtained as well. We have found a universe evolution in agreement with observations, that is, a matter-dominated era followed by an accelerating phase. Additionally, the effective dark-energy equation-of-state parameter can lie in the quintessence or phantom regime, which reveals the capabilities of the scenario. Furthermore, a general and common property of the considered models is that they all end in a de Sitter phase with zero matter density, that is, in complete dark-energy domination. Finally, these models also accept solutions with an almost constant Hubble function, which can describe the inflationary regime. Thus, the scenario of nonminimal torsion-matter coupling can offer a unified description of the universe's evolution, from its inflationary to its late-time accelerated phases.
Apart from the exact numerical elaboration, we have extracted approximate analytical expressions in the limit of a small Hubble parameter, corresponding to the large-time limit, as well as for large Hubble parameters, corresponding to the beginning of the cosmological expansion. These expressions verify the physical features that were extracted through the numerical analysis.
In conclusion, based on the torsional formulation of gravity, we have proposed a novel modified gravitational scenario which contains an arbitrary coupling between the torsion scalar and the matter Lagrangian. The cosmological implications of this theory prove to be very interesting. However, in order for the present scenario to be considered a good candidate for the description of Nature, additional investigations should be performed, such as a detailed comparison with cosmological observations, a complete perturbation analysis, etc. These necessary studies lie beyond the scope of the present work and are left for future projects. | 2014-06-21T23:22:37.000Z | 2014-04-24T00:00:00.000 | {
"year": 2014,
"sha1": "f7ed728e1d606520726d0abf80b5d38303e1ac6d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.6212",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f7ed728e1d606520726d0abf80b5d38303e1ac6d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
55437995 | pes2o/s2orc | v3-fos-license | An analysis of relationship between food safety and pesticides usages of grape growers in Manisa province
This study was carried out in Manisa, which has the largest vineyard area and grape production in Turkey. In this research, growers' environmental awareness, their pesticide-use attitudes, and the effects of pesticide use on food safety were investigated. The main data were collected through a survey of 117 grape growers in Manisa province, where sultana production is widespread. Applying the Analytic Hierarchy Process (AHP), preference priorities between conventional and environmentally friendly pesticides were estimated for the goal of producing quality raisins and table grapes. The AHP was applied to determine the conventional and environmentally friendly pesticide usage of grape growers in relation to food safety. In conclusion, growers assigned environmentally friendly pesticides an overall priority of 66.8% for reaching this goal.
Introduction and objectives
The study was conducted in Manisa province, the largest grape-producing province of Turkey. The total vineyard area was 75 401 hectares and the total grape production was 1 114 466 tons in 2013. Of the total grape production, 854 117 tons were grapes grown for drying (yielding 212 000 tons of raisins) and 260 544 tons were table grapes. 87.49% of the raisins and 15.46% of the table grapes produced in Turkey were produced in Manisa province [8].
Pesticides are commonly used on the food we eat to control pests that may damage crops during production, storage or transport. Pesticides allow growers to increase the amount of usable food from each crop at the time of harvest. Pesticides may also improve the quality, safety, and shelf-life of certain foods. For consumers, this means access to a wide variety of affordable foods, grown locally or imported from other states or countries. As with other crops, pesticides are widely used in growing grapes, and grape growers in Turkey use several different pesticide groups. In this study, a survey was conducted and farmers' preferences regarding food safety and pesticide use in Manisa province were analyzed.
The main goal of this study is the determination of growers' pesticide-use preferences between environmentally friendly and conventional pesticide groups.
Data and methodology
In the study, three districts of Manisa that are the most important in grape production were selected. The survey population was composed of table grape and raisin producers in these three districts. At the second stage, nine villages were selected on the basis of sultana production potential, after interviews with people and institutions expert in this subject. Farmers' preferences are based on the data collected in the study area. The data used in this study come from a survey of 117 farmers in Manisa province in the Aegean Region. The survey was based on a standardized and pretested questionnaire, consisting of both open-ended and closed-ended questions, which was used to collect data in face-to-face interviews. The survey questionnaire had the following subsections: demographic and socioeconomic information, farm and marketing information, and the perceptions of the farmers.
Then, the AHP was applied to determine farmers' pesticide-use preferences in terms of food safety, in relation to quality, price, production cost and ease of marketing.
The AHP model was built by taking into account the pesticide-use preferences of grape growers in terms of food safety and of achieving high-quality table grapes and raisins. The AHP model for the growers' preferences is shown in Fig. 1.
Explanation of Analytical Hierarchy Process (AHP)
The AHP was developed by Thomas L. Saaty [6,7].This model is one of the most commonly applied multicriteria decision making techniques [2,5].The AHP is a decision-support tool to cope with complex multicriteria problems.The method helps to structure and analyze decision problems by breaking down the complex problem in a hierarchic order and by employing pair-wise comparisons of its elements to determine the preferences among the set of alternatives.
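To make the mechanics concrete, the sketch below derives local priorities from a pairwise comparison matrix via the principal right eigenvector and checks Saaty's consistency ratio. The 4×4 matrix of judgments is purely hypothetical (it is not the actual judgment data of this survey); only the criterion names follow the model described in the text.

```python
# Minimal AHP sketch: local priorities from a pairwise comparison matrix.
# The judgments below are hypothetical, not the survey's actual data.
import numpy as np

criteria = ["quality", "price", "production cost", "ease of marketing"]

# A[i, j] = how much more important criterion i is than criterion j (Saaty's 1-9 scale)
A = np.array([
    [1.0, 3.0, 5.0, 1/2],
    [1/3, 1.0, 3.0, 1/3],
    [1/5, 1/3, 1.0, 1/5],
    [2.0, 3.0, 5.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                             # normalized priority vector

lam_max = eigvals.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)                # consistency index
RI = 0.90                                   # Saaty's random index for n = 4
CR = CI / RI                                # consistency ratio; CR < 0.10 is acceptable

for name, weight in zip(criteria, w):
    print(f"{name:>18}: {weight:.3f}")
print(f"lambda_max = {lam_max:.3f}, CR = {CR:.3f}")
```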
Findings and results
In the AHP hierarchy, growers choose between environmentally friendly (EF) and conventional (C) pesticides. Under the good-shape (appearance) sub-criterion, environmentally friendly (EF) pesticides (0.577) are again more favorable than conventional (C) pesticides (0.423), but the difference is not as large as for the food safety sub-criterion.
The price and ease-of-marketing criteria also incline growers toward the environmentally friendly pesticide groups. Only under the production cost criterion do growers choose conventional pesticides.
When the importance of the criteria that influence pesticide choices is determined, Manisa grape growers rank ease of marketing first, followed by the quality, price and production cost criteria. This shows that growers primarily want to market their products in a guaranteed way, although producing high-quality grapes is regarded as nearly as important.
Considering the food safety and appearance criteria, Manisa grape growers stated that the use of environmentally friendly pesticides carries priority for the production of high-quality grapes.
Finally, it can be said that, across all criteria, grape growers assign environmentally friendly pesticides a higher priority than conventional pesticides for carrying out the production of quality grapes.
The first stage of AHP is problem structuring. The AHP decision problem is structured hierarchically at different levels, each level consisting of a finite number of decision elements [1-4]. A basic hierarchical model consists of a goal, criteria and alternatives. The top level of the hierarchy represents the overall goal, while the lowest level is composed of criteria and all possible alternatives. The second stage is the assessment of local priorities. The relative importance of the decision elements is assessed indirectly from comparison judgments during the second step of the decision process. The third stage is the calculation of global priorities. The last step of the AHP aggregates all local priorities from the decision table by a simple weighted sum [1-4].

Figure 1. Problem definition of the AHP model (E.F. = environmentally friendly pesticide; C = conventional pesticide).
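The weighted-sum aggregation of the third stage can likewise be sketched in a few lines. The criterion weights and most of the local priorities below are hypothetical placeholders, except for the good-shape pair (0.577 vs. 0.423) reported above; with the study's actual judgments the overall result would be the reported 66.8% priority for environmentally friendly pesticides.

```python
# Global priorities as a weighted sum of local priorities (illustrative numbers;
# only the 0.577/0.423 pair is taken from the text, the rest are placeholders).
criteria_weights = {"quality": 0.30, "price": 0.15,
                    "production cost": 0.10, "ease of marketing": 0.45}

local = {   # local priority of each alternative under each criterion
    "quality":           {"EF": 0.577, "C": 0.423},   # good-shape sub-criterion values
    "price":             {"EF": 0.60,  "C": 0.40},
    "production cost":   {"EF": 0.35,  "C": 0.65},    # conventional preferred on cost
    "ease of marketing": {"EF": 0.70,  "C": 0.30},
}

global_priority = {alt: sum(criteria_weights[c] * local[c][alt] for c in criteria_weights)
                   for alt in ("EF", "C")}
print(global_priority)   # weights sum to 1, so the two global priorities also sum to 1
```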
Table 1. Determination of AHP criteria and choices.

Table 2. Determination of priorities in terms of the criteria for producing high-quality grapes.

Table 3. Determination of mixed priorities in terms of the quality criterion. | 2018-12-07T21:51:29.321Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "73440cd3d813d27cf0a03c1772cab7759ae6711b",
"oa_license": "CCBY",
"oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2015/02/bioconf-oiv2015_04005.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "73440cd3d813d27cf0a03c1772cab7759ae6711b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
1976398 | pes2o/s2orc | v3-fos-license | Polytraumatization in an adult national sample and its association with psychological distress and self-esteem
Objective The objective of this study was to examine the prevalence of self-reported experiences of potential childhood traumas and polytraumatization, and to find cut-off values for different kinds of potential traumatic events, in a nationally representative sample of adults in Sweden; in addition, to analyse the association between polytraumatization and both psychological distress and global self-esteem. Method A web-based survey - containing the SCL-25, the Rosenberg Self-Esteem Scale, and the Linköping Difficult Life Events Scale - Adult - was sent out to a nationally representative sample, and 5062 people chose to participate in the study. Results Results showed that almost everyone (97%) had experienced at least one potential traumatic event and that polytraumatization (the 10% of participants with the most reported traumas) was significantly (Z = 12.57, P < 0.001, r = 0.18) associated with psychological distress and global self-esteem. Gender differences were significant (Z = 8.44, P < 0.001, r = 0.12), in that men experience more noninterpersonal traumas but women report more symptoms. The effect sizes regarding the impact of potential trauma on self-esteem were largest for women with experience of polytraumatization in the age group 18-25 (r = 0.48). There was an almost linear increase in psychological distress, and a linear decrease in self-esteem, with an increasing number of traumatic events experienced. Conclusion Experience of polytrauma can be considered an important factor to take into account in psychiatric settings as well.
Introduction
There is an increasing number of studies showing an association between having experienced polytrauma - that is, multiple types of potential traumas - and a broad variety of psychosocial and somatic health problems later in life, both among children and adults (Maschi et al. 2012). In the research literature, repeated and/or multiple types of potential trauma have been labeled in different ways, e.g., cumulative trauma, polyvictimization, and polytraumatization (Scott-Storey 2011).
Most of the studies have been conducted in adolescent populations (Finkelhor et al. 2007b; Nilsson et al. 2010, 2012; Soler et al. 2012, 2013; Zetterqvist et al. 2012), and Finkelhor and coworkers have, in several studies, demonstrated the negative impact of polyvictimization (multiple types of victimization) among adolescents (Finkelhor et al. 2007a,b, 2009a; Turner et al. 2010). Finkelhor and colleagues have identified victims and polyvictims by counting the different types of youth victimizations over both the last year and the (youth) lifetime, and have suggested classifying polyvictims as the 10% most victimized in the population (Finkelhor et al. 2009a).
Among adult studies of childhood trauma, the large Adverse Childhood Experience (ACE) studies have been at the forefront of this research for many years (Feletti et al. 1998). The ACE studies have shown how adverse childhood experiences - abuse (emotional, physical, and sexual), neglect (emotional and physical) and a dysfunctional family/household (mother physically abused, drugs in the family, mental illness, divorce, somebody in the family in prison) - are clearly associated with worsened psychosocial as well as somatic health in adult life (Feletti et al. 1998; Anda et al. 1999, 2001, 2002; Dietz et al. 1999; Dube et al. 2001a,b, 2003; Dong et al. 2003, 2004). Other researchers have also recognized the cumulative effect of experiences of different types of potentially traumatic life events, in some studies of community samples (Chiara and Straus 2008; Widom et al. 2008; Richmond et al. 2009) and some of clinical populations (Briere et al. 2008; Cloitre et al. 2009). In the studies by Chiara and Straus (2008) and Richmond et al. (2009), polyvictimization accounted for a significant proportion of the variability in scores of psychological distress, and also for unique variance.
It has also been shown that there is an increased risk of revictimization after a person has been victimized once (Noll et al. 2003;Finkelhor et al. 2007b;Widom et al. 2008).
To experience one potential trauma is not uncommon in adolescent populations (Finkelhor 2008; Finkelhor et al. 2009a) or in adult populations (Arata et al. 2005; Richmond et al. 2009). Richmond and colleagues (Richmond et al. 2009) found in two study samples that exposure to at least one individual type of potential trauma was reported by 98%, and that almost half of the population (40-49%) reported at least one in five of the categories (property crime, physical assault, child maltreatment, peer/sibling victimization, sexual victimization, and witnessed indirect victimization).
Both being a polyvictim and having experienced polytrauma have been shown to have greater predictive value for mental health than single traumas or one type of trauma, even if repeated (Chiara and Straus 2008; Briere et al. 2008; Cloitre et al. 2009; Chartier et al. 2010); the same holds for effects on symptom complexity (Briere et al. 2008; Cloitre et al. 2009) and self-esteem (Soler et al. 2012, 2013). However, even if the experience of polytrauma in the above-mentioned studies was shown to have the greatest predictive value for mental health, it was also shown that physical abuse and sexual abuse, including rape, predicted poor health as well, though to a lesser extent than polytrauma (Chiara and Straus 2008; Briere et al. 2008; Cloitre et al. 2009; Chartier et al. 2010).
Even if there is strong evidence for the negative effects of experiences of trauma and polytraumatization, we need to know more about this in normative samples and in different cultures. It is important to look at the prevalence of childhood trauma and its consequences at ages between 18 and 65. It is also essential for research to cover the broad spectrum of potential traumas - noninterpersonal (nIPE), interpersonal (IPE), and adverse childhood circumstances (ACC) - and the effects of polytraumatization (PT); otherwise the results will be interpreted too narrowly. Since there is a lack of studies concerning polytraumatization in adult populations, and in order to highlight the above-mentioned aspects of potential traumatic events in a representative adult population, this study was carried out.
Aims of the Study
This study aims to explore the prevalence of self-reported potential traumatic events before the age of 18, in different age groups, in a representative Swedish adult population, and to identify cut-off values for self-reported experienced trauma of both noninterpersonal and interpersonal character, as well as for adverse childhood circumstances.
A second aim was to investigate the interactions and associations between polytraumatization, psychological distress (anxiety and depression), and global self-esteem and also to look at possible education and gender differences.
Materials and Method
This paper used data derived from a large representative sample of the Swedish population.
The epidemiological study was carried out in 2011 within the project "Prostitution in Sweden: Mapping and evaluation of the three Swedish prostitution units for support to people who are selling or buying sex, and experiences and attitudes in the general population". The project consisted of eight parts, of which this study was one, took place between 2009 and 2012, and resulted in a main report to the Swedish government.
Participants
From a national web panel of 71,446 people between 18 and 65 years old, a stratified representative sample of the Swedish population was drawn. This sample consisted of 9999 people who were invited to participate. Of these, 4215 did not answer, 701 started to answer but did not complete the questionnaire, 12 refused, and 5071 chose to participate. The participation rate of 50.7% is in line with what has been obtained in earlier national studies (Lewin 1998).
Procedure
The survey consisted of 81 questions. The participants were asked about their experience of buying or selling sex as well as potential traumatic childhood experiences and different aspects of psychological well-being. The survey included several standardized scales. In this study the following questionnaires were used.
Questionnaires
Linköping's youth life events scale-adult, LYLES -A The Linköping's Youth Live Events Scale -Adult (LYLES-A) is a recently developed trauma history inventory (Nilsson et al. 2010), intended to cover several important areas of potentially traumatic events and circumstances during childhood, up to the age of 18. It contains 23 main questions and 18 more detailed secondary items, making a total of 41 questions. Eighteen items are designed to identify noninterpersonal (nIPE) traumas, 13 items identify interpersonal (IPE) traumas and 10 items ask questions about more enduring Adverse Childhood Circumstances, (ACC), see Table 1 for the whole scale. There are subquestions on several items to identify the respondent's proximity to the event, i.e. whether the person has experienced the event him/herself, seen it or just heard about it. The test-retest reliability has been found to be r = 0.79 (P < 0.01) and kappa statistics (Cohen's kappa) item per item range between 0.44 and1.0 (Finkelhor et al. 2007b). This is the first time the LYLES has been used in a population based group with adults. The response rate of the LYLES in this study was 99.8%, i.e. only nine participants answered "Don't want to answer" to (all of) the LYLES questions.
In this study, the number of potentially traumatic experiences was summarized in an index of polytraumatization (PT). For each of the aspects of the LYLES-A (total score, nIPE, IPE, and ACC), the 90th percentile was set as the limit for polytraumatization (PT), and initial analyses were based on three groups: (a) no trauma at all, (b) at least one trauma but no PT, and (c) PT. Further analyses were based on two groups: (a) no PT (nPT) and (b) PT.
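The 90th-percentile rule translates directly into a few lines of analysis code. The sketch below uses simulated counts only (hypothetical data, not the survey responses) to show how the cut-off and the three initial groups would be derived.

```python
# Deriving the polytraumatization (PT) cut-off as the 90th percentile
# of reported event counts (simulated data, for illustration only).
import numpy as np

rng = np.random.default_rng(0)
n_events = rng.poisson(lam=6, size=5000)     # hypothetical LYLES-A total counts

cutoff = np.percentile(n_events, 90)         # PT limit: the 90th percentile
group = np.where(n_events == 0, "no trauma",
         np.where(n_events >= cutoff, "PT", "trauma, no PT"))

print(f"90th-percentile cut-off: {cutoff:.0f} events")
# With integer counts, ties can make the PT share deviate slightly from 10%.
for g in ("no trauma", "trauma, no PT", "PT"):
    print(f"{g:>14}: {np.mean(group == g):.1%}")
```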
The Hopkins symptom check list-25
The Hopkins Symptom Check List-25 (SCL-25) is a self-administered instrument widely used for the assessment of psychological distress. The SCL-25 was developed from the SCL-90 and is one of its shortened versions (Derogatis et al. 1976). The SCL-25 basically consists of two (anxiety and depression) of the nine original symptom dimensions of the SCL-90 (Derogatis et al. 1974). The scale has been used in several cultural settings, has been investigated psychometrically, and has proved to have satisfactory psychometric characteristics, such as validity and reliability (Nettlebladt et al. 1993; Moreau et al. 2009; Strand et al. 2003). The SCL-25 has 25 items on a four-point Likert scale ranging from 1 = "not at all" to 4 = "extremely". Based on several studies, an average item score of 1.75, calculated by dividing the total score by the number of items answered, has been recommended as a valid predictor of clinical psychological distress - especially concerning depression but also anxiety (Strand et al. 2003). Cronbach's alpha in this study was α = 0.95.
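Both the distress flag and the internal-consistency coefficient reported here are simple computations; a minimal sketch (with simulated item responses, not the study data) is given below. The ≥1.75 mean-item-score rule and the standard Cronbach's alpha formula are as described above.

```python
# SCL-25 scoring sketch: mean item score, the >= 1.75 distress flag,
# and Cronbach's alpha (simulated responses, for illustration only).
import numpy as np

rng = np.random.default_rng(1)
items = rng.integers(1, 5, size=(500, 25))      # 500 respondents x 25 items, values 1-4

mean_item = items.mean(axis=1)                  # total score / number of items answered
distressed = mean_item >= 1.75                  # recommended clinical cut-off
print(f"clinically distressed: {distressed.mean():.1%}")

def cronbach_alpha(x):
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Independent simulated items give alpha near 0; real scale data yield ~0.95 as reported.
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```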
Rosenberg self-esteem scale
The Rosenberg Self-Esteem Scale (RSES) is a widely used scale for the measurement of global self-esteem. It was developed by Rosenberg (1965), with self-esteem as a one-dimensional concept that reflects a positive or a negative orientation toward the self. Its psychometric qualities have been investigated in several studies and cultures (Rosenberg 1965). Psychometric studies have supported a one-dimensional scale approach (Hatcher and Hall 2009), but there are also studies proposing that the RSES is two-dimensional (Schmitt and Allik 2005; Hatcher and Hall 2009; Marsh et al. 2010; Mullen et al. 2013). In this study, we have chosen the one-dimensional approach with reference to the study by Schmitt and Allik (2005). The RSES has ten items, five positively and five negatively worded. There are four possible answer choices, from 3 = strongly agree to 0 = strongly disagree; a score is derived by reversing the five negative items and summing them with the five positive ones, giving values between 0 and 30, where high values are considered to indicate good self-esteem. Cronbach's alpha in this study was found to be α = 0.89. No cut-off point for the RSES has been presented in the research literature (The Morris Rosenberg Foundation 2011).
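The reverse-scoring step is the only non-obvious part of RSES scoring; a short sketch follows. The indices of the negatively worded items below are placeholders chosen for illustration, not the instrument's actual item ordering — the point is the 3 − x reversal and the 0-30 total.

```python
# RSES scoring sketch: reverse the five negatively worded items (0-3 scale)
# and sum to a 0-30 total. Item indices below are placeholders, not the
# instrument's actual ordering.
import numpy as np

NEGATIVE_ITEMS = [2, 4, 5, 7, 8]        # 0-based positions of reverse-scored items

def rses_total(responses):
    """responses: 10 answers coded 0 (strongly disagree) .. 3 (strongly agree)."""
    r = np.asarray(responses, dtype=int).copy()
    r[NEGATIVE_ITEMS] = 3 - r[NEGATIVE_ITEMS]   # reverse-code negative items
    return int(r.sum())                          # 0-30; higher = better self-esteem

print(rses_total([3, 2, 3, 0, 1, 0, 2, 3, 1, 0]))   # -> 14 for this example
```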
Statistical analyses
LYLES-A: gender, age, and education. The LYLES-A was considered in four different aspects (total, IPE, nIPE, and ACC). For each condition, the 90th percentile was set as the limit for polytraumatization (PT) and participants were organized in three groups accordingly: (a) no trauma at all, (b) at least one trauma but no PT, and (c) PT. Thereafter, the group variable was put into a log-linear analysis together with gender (woman, man), age group (18-25, 26-39, 40-49, 50-65), and education (junior high school, high school, and university degree) in order to examine differences in distribution over different categories. Significant interactions were further analysed using chi-square statistics. For significant differences in the distribution of PT, odds ratios and corresponding 95% CIs are reported. In further analyses, two groups (nPT and PT) were used. The distribution of Rosenberg total scores was negatively skewed and the distribution of SCL-25 scores was positively skewed; group comparisons between men and women (for nPT and PT, respectively) and between nPT and PT (for men and women, respectively) were therefore made with Mann-Whitney U tests (comparing two groups) and Kruskal-Wallis tests (comparing more than two groups). Due to the large sample size, even small differences were expected to be significant, and therefore the effect size r is reported (r = 0.1: small effect; r = 0.3: moderate effect; r = 0.5: large effect).
Differences in the distribution of PT and nPT between distressed and nondistressed participants were examined using chi-square statistics.
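The effect size r reported throughout is obtained from the standardized test statistic, r = Z/√N. A sketch of this computation (on simulated scores, ignoring the tie correction for brevity) is shown below; scipy's mannwhitneyu supplies U and the p-value, and Z follows from the normal approximation.

```python
# Mann-Whitney U with effect size r = Z / sqrt(N) (simulated data; the
# normal approximation below omits the correction for ties, for brevity).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
women = rng.normal(loc=30, scale=5, size=2500)   # e.g. RSES totals, hypothetical
men = rng.normal(loc=31, scale=5, size=2500)

U, p = mannwhitneyu(women, men, alternative="two-sided")

n1, n2 = len(women), len(men)
mu_U = n1 * n2 / 2.0
sigma_U = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
Z = (U - mu_U) / sigma_U
r = abs(Z) / np.sqrt(n1 + n2)                    # 0.1 small, 0.3 moderate, 0.5 large

print(f"U = {U:.0f}, p = {p:.2e}, Z = {Z:.2f}, r = {r:.2f}")
```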
Descriptives
The total distribution of the different potential traumas on the LYLES-A over age groups and gender is shown in Table 1. The most common events among both men and women were of nIPE character, such as "Has anyone in your family been in hospital?", "Has anyone close to you died?", "Has anybody close to you been in hospital?", "Have you been in hospital?" and "Has anybody in your family died?", all endorsed by more than 40 percent of the sample. The most commonly endorsed IPE question was "Have you witnessed anybody else been beaten or wounded?" (men 29.7 and women 16.0 percent). Finally, "Have you been bullied?" was the most commonly endorsed ACC item, endorsed by 34.5 percent, almost equally across the genders. 97% of the participants reported at least one potential trauma.
Gender specific cut-off values
Log-linear analysis resulted, after elimination of nonsignificant higher-order effects, in a small but significant two-way interaction between PT and gender, χ²(2, N = 5062) = 31.31, P < 0.001, Cramér's V = 0.08. In the PT group there were unexpectedly many men (std. residual = 3.9) and unexpectedly few women (std. residual = −3.8), but there were no such differences for the two groups who were not polytraumatized (nPT). Further analyses were therefore based upon nPT and PT groups. Due to the differences in gender distribution among the polytraumatized, gender-specific cut-off values for PT, i.e., the 90th percentiles (by definition the 10% who reported the most traumas), were estimated, Table 2.
LYLES-A, total scale
The 90th percentile was set at 14 reported potential traumas. There were significant differences in distribution for women and men over different groups of trauma, χ²(2, N = 5062) = 31.31, P < 0.001. The odds ratio of PT between men and women was 1.38, 95% CI [1.38, 1.97], i.e., for every 100 PT women there are 138 PT men.
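Odds ratios of this kind, with their Wald 95% confidence intervals, come from a simple 2×2 table. The counts below are hypothetical (the paper reports the ratios, not the underlying cells); the formula is OR = (a/b)/(c/d), with CI = exp(ln OR ± 1.96·SE) and SE = √(1/a + 1/b + 1/c + 1/d).

```python
# Odds ratio of PT between men and women with a Wald 95% CI
# (hypothetical 2x2 cell counts, for illustration only).
import math

a, b = 280, 2170    # men:   PT / not PT   (placeholder counts)
c, d = 225, 2387    # women: PT / not PT   (placeholder counts)

OR = (a / b) / (c / d)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = (math.exp(math.log(OR) + z * se) for z in (-1.96, 1.96))

print(f"OR = {OR:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```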
There were also significant differences in distribution for different educational levels over different groups of trauma, χ²(4, N = 5024) = 14.18, P = 0.007. Lower educational levels were more often associated with PT than higher ones.
LYLES-A, nIPE
The 90th percentile was set at 10 reported potential noninterpersonal traumas (nIPE). There were significant differences in distribution for women and men over different groups of potential traumas, χ²(2, N = 5061) = 93.53, P < 0.001. The odds ratio of nIPE-PT between men and women was 2.51, 95% CI [2.07, 3.04], i.e., for every 100 nIPE-PT women there are 251 nIPE-PT men. There were also significant differences in distribution for different educational levels over different groups of potential traumas, χ²(4, N = 5023) = 10.55, P = 0.032, indicating higher levels of nIPE polytraumatization with lower educational level. These differences were small (Cramér's V = 0.03).
LYLES-A, IPE
The 90th percentile was set at three reported potential interpersonal traumas (IPE). There were significant differences in distribution for women and men over different groups of trauma, χ²(2, N = 5056) = 31.71, P < 0.001. The odds ratio of IPE-PT between men and women was 1.50, 95% CI [1.29, 2.75], i.e., for every 100 IPE-PT women there are 150 IPE-PT men. There was no interaction with educational level.
LYLES-A, ACC
The 90th percentile was set at three reported potential adverse childhood circumstances (ACC). The ACC questions of the LYLES-A did not interact with gender, age group, or educational level, i.e., there were no significant differences in the distribution of traumatization and nontraumatization between different age groups or genders. When the recommended SCL-25 cut-off for clinically distressed individuals is used, 11.2% of the total sample was found to be clinically distressed; for women it was 14.2% and for men 8.1% (Table 3).
Self-esteem (RSES) by gender and age groups
RSES levels were significantly but marginally higher for men (Mdn = 36) than for women (Mdn = 34), Z = 8.44, P < 0.001, r = 0.12. RSES total scores also differed significantly between the age groups, H(3) = 127.33, P < 0.001, with older age groups showing higher Rosenberg scores.
Self-esteem and PT
The 90th percentiles of the LYLES-tot were set as a cut-off for PT, for women and men separately. RSES total scores were thereafter compared between PT and nPT, for women as well as for men. Comparisons were also made between women and men for PT and nPT.
In general, there was a moderate effect of gender on self-esteem when comparing PT women and men, Z = 6.33, P < 0.001, r = 0.27: PT women had lower RSES scores (Mdn = 30) than PT men (Mdn = 34). This effect was shown to be strong when analysing 18-25 year olds separately, Z = 3.47, P < 0.001, r = 0.48. Note that there were only 11 men (Mdn = 35) compared to 42 women (Mdn = 25.5) in this comparison. Even though the small number of men might have introduced some random variation in the measures, the difference between women and men was nevertheless remarkable in terms of RSES scores. There was also a moderate effect of PT when comparing RSES scores for women only in this age group, Z = 4.73, P < 0.001, r = 0.26: PT women had lower RSES scores (Mdn = 16) than nPT women (Mdn = 23). For all other comparisons, the effect sizes were considered small and in some cases nonsignificant, Table 4.
RSES, SCL-25, and PT
The number of reported potential traumas on the LYLES-A showed almost linear relations with self-esteem (RSES decreasing with increasing LYLES-A score) and psychological distress (SCL-25 increasing with increasing LYLES-A score); see Fig. 1.
Discussion
This study has examined the prevalence of self-reported experiences of potential traumatic events (before the age of 18), using the LYLES-A, in a representative national sample of adults 18-65 years old, and their association with psychological distress (SCL-25) and global self-esteem (RSES). Polytraumatization (PT), defined as the 90th percentile, i.e., the 10% of the sample who reported the most potential traumas, has been identified, together with its association with psychological distress and self-esteem. The results can be summarized in five main findings.
First, having experienced at least one potentially traumatic event before the age of 18 is common, especially concerning noninterpersonal (nIPE) traumatic life events, with 92% reporting such an experience - something that has been shown in several other studies (Arata et al. 2005; Richmond et al. 2009). Traumatic interpersonal (IPE) events were reported by approximately half of the population (44% of women and 51% of men), a figure rather close to what Richmond and colleagues (Richmond et al. 2009) have reported. Finally, adverse childhood circumstances (ACC) were reported by around sixty percent (64% of women and 59% of men), which is more than reported in two earlier studies (Chiara and Straus 2008; Bellis et al. 2013) but about the same as what has been reported from the ACE studies (Brown et al. 2009).
Second, cut-off values were identified for persons at the 90th percentile, the definition of PT, for the different LYLES-A scales. Identifying the 10% of the population who have experienced the most potential traumas has been suggested by Finkelhor et al. (2009a) and has also been used in other studies (Soler et al. 2012, 2013). Identifying this group can be considered important both for research and clinically, as this group has been shown to be vulnerable to different physical and psychological difficulties (Anda et al. 1999, 2001, 2002; Dietz et al. 1999; Dube et al. 2001a,b, 2003; Dong et al. 2003, 2004) and could be a risk group for revictimization (Widom et al. 2008). However, the consequences of PT experiences need to be further investigated.

Table 3. Comparisons of SCL-25 scores between polytraumatized (PT) and nonpolytraumatized (nPT) participants (for men and women separately) and between men and women (for PT and nPT separately).
Table 4. Comparisons of Rosenberg scores between polytraumatized (PT) and nonpolytraumatized (nPT) participants (for men and women separately) and between men and women (for PT and nPT separately).

Third, the impact of PT across the different aspects of potential traumas showed significant differences between men and women, with men reporting more experiences of potential traumas of all kinds - except for ACC, where no difference was found. Epidemiological studies in national samples concerning exposure to different sorts of traumas and gender differences are few, but Kessler et al. (1995) found that men were more exposed to trauma than women, 60% compared to 50%. They also highlighted that men and women are often exposed to different kinds of potential traumas, but men are likely to experience almost every type of traumatic event - with the exception of sexual assault and rape. Regarding gender differences and ACC, no such differences have been reported from other studies; men and women report a similar prevalence of ACC to what we have found here (U.S. Department of Health and Human Services 2010). This can be understood in terms of there being no gender dissimilarities in the distribution of children across certain families, or in exposure to divorce, and in that boys and girls are equally exposed to bullying. Significant differences were found concerning educational level, with more reports of polytrauma from those having a lower educational level - something which has also been found in other studies (Chan et al. 2011). So education may be interpreted as a protective factor. Fourth, the study also showed that women have higher SCL-25 scores than men, something which has also been found in other studies (Nettlebladt et al. 1993). People who have experienced PT have significantly higher values on the SCL-25 than those without PT, although these significant differences come with moderate to low effect sizes. Using the recommended cut-off for clinical psychiatric cases of ≥1.75 on the SCL-25 (Nettlebladt et al. 1993; Strand et al. 2003) showed that significantly more persons with PT than without PT scored above the cut-off, with an effect size that could be considered low to moderate.
Fifth, the impact of PT on global self-esteem, measured by the Rosenberg Self-Esteem Scale (RSES), was found to be significant (P < 0.001) between men and women. Women with PT had lower RSES scores than men, with a moderate effect size (r = 0.27). For women in the age group 18-25, global self-esteem was remarkably lower in the PT group, significant (P < 0.001) and with a strong effect size (r = 0.47). The strong effect size for women in this age group is a key finding. It must be seen as being of great importance to helping professionals and also important for understanding the difference between men and women. More women than men seek psychiatric help, and as men have often been exposed to more traumas than women, it can be easy to overlook the greater impact PT has on women - who seem to be more vulnerable. Women's vulnerability to developing posttraumatic symptoms compared to men, despite lower rates of trauma exposure, has been well documented (Kessler et al. 1995; Tolin and Foa 2006). In a Spanish study it was found that global self-esteem could be seen as both a moderating and a mediating factor, buffering the effect of polyvictimization on mental health (Soler et al. 2013). However, the pathways to polytraumatization, for both men and women, need to be further investigated (Finkelhor et al. 2009b).
The almost linear association between the number of self-reported potential traumas and an increase in psychological distress, depression, and anxiety, together with a decrease in reported self-esteem, is in line with previous research (Williams et al. 2007; Soler et al. 2012, 2013). This almost linear association between PT and an increase in SCL-25 scores must be taken seriously, as both anxiety and depression have detrimental effects on health and have also been found to be associated with early death (Edmonson et al. 2013; Wedegaetner et al. 2013). Self-esteem has also been shown in studies to have an impact on mental health (Merianos et al. 2013) and, in other studies, to be associated with polyvictimization (Soler et al. 2012, 2013). These relationships need to be further examined.
It is worth taking into consideration the difference between men and women in respect of experienced PT; more women than men seek psychiatric help, and their experienced potential traumas need to be taken seriously and addressed by clinicians. How best to address polytrauma is still a matter of speculation, but it cannot be neglected. It is essential that methods for routine screening and appropriate interventions are developed and implemented.
A further aspect worth discussing, which Scott-Storey (2011) points out, is that today there are many definitions that appear to describe almost the same concept, and she suggests that research needs to clearly conceptualize and operationalize what is meant by polyvictimization (Finkelhor et al. 2007a), lifetime polyvictimization (Finkelhor et al. 2009a), revictimization (Widom et al. 2008), polytraumatization (Gustafson et al. 2009), and cumulative trauma (Chiara and Straus 2008). It is also necessary to clearly operationalize what has been measured as potential trauma and adversity. We have, in this study, chosen to cover a broad spectrum of self-reported potential traumas - noninterpersonal and interpersonal - and have also asked about adverse childhood circumstances such as bullying and mental illness in the family. These aspects of potential traumatic experiences and difficult life events have, in separate studies, been shown to be important for mental health. Noninterpersonal potential traumas may be seen as less difficult experiences, but natural disasters like the 2004 tsunami in Thailand can be harrowing for many people (Wahlström et al. 2008). In a Swedish follow-up study after the 2004 tsunami, by which many Swedes were struck, it was shown that not only exposure to life-threatening situations and the loss of loved ones, but also prior life events, were related to an elevated risk of worsening mental health, as measured with the General Health Questionnaire (Wahlström et al. 2010).
A limitation of this study, even if the sample is large, is that the participation rate was only 53% of those asked. Another limitation is recall bias - especially when asking older people about what happened 40 years ago - although the questions concern things people often do remember when asked (Hardt and Rutter 2004). The cross-sectional character of the study can also be seen as a limitation. In addition, it is possible that if we had used questionnaires especially developed to identify symptoms related to experienced potential traumas, such as, for example, the Trauma Symptom Inventory-2 (Briere 2011), the effect sizes might have been stronger.
In this study, we have screened for multiple types of traumas in a nationally representative sample, something that has been lacking in previous research (Widom et al. 2008), and we have found no other study looking at how experiences of polytrauma impact global self-esteem and psychological distress measured by the SCL-25. | 2016-05-12T22:15:10.714Z | 2014-12-04T00:00:00.000 | {
"year": 2014,
"sha1": "38cef597035241a8a7e1e37ce5f4a117827150a2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/brb3.298",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c829a9ce18256cd497ab5807aaa3ed73f9e03ad",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
245148299 | pes2o/s2orc | v3-fos-license | Neuropathic Pain in Multiple Sclerosis and Its Animal Models: Focus on Mechanisms, Knowledge Gaps and Future Directions
Multiple sclerosis (MS) is a multifaceted, complex and chronic neurological disease that leads to motor, sensory and cognitive deficits. MS symptoms are unpredictable and exceedingly variable. Pain is a frequent symptom of MS and manifests as nociceptive or neuropathic pain, even at early disease stages. Neuropathic pain is one of the most debilitating symptoms that reduces quality of life and interferes with daily activities, particularly because conventional pharmacotherapies do not adequately alleviate neuropathic pain. Despite advances, the mechanisms underlying neuropathic pain in MS remain elusive. The majority of the studies investigating the pathophysiology of MS-associated neuropathic pain have been performed in animal models that replicate some of the clinical and neuropathological features of MS. Experimental autoimmune encephalomyelitis (EAE) is one of the best-characterized and most commonly used animal models of MS. As in the case of individuals with MS, rodents affected by EAE manifest increased sensitivity to pain which can be assessed by well-established assays. Investigations on EAE provided valuable insights into the pathophysiology of neuropathic pain. Nevertheless, additional investigations are warranted to better understand the events that lead to the onset and maintenance of neuropathic pain in order to identify targets that can facilitate the development of more effective therapeutic interventions. The goal of the present review is to provide an overview of several mechanisms implicated in neuropathic pain in EAE by summarizing published reports. We discuss current knowledge gaps and future research directions, especially based on information obtained by use of other animal models of neuropathic pain such as nerve injury.
INTRODUCTION
Multiple sclerosis (MS) is the most common chronic inflammatory and demyelinating disease of the central nervous system (CNS) (1,2). MS is a multifaceted disease. Both genetic and environmental factors contribute to the risk of developing the disorder. It is estimated that close to a million people live with MS in the United States alone (3). Young adults are the most affected, with onset typically between 20 and 40 years of age. The prevalence of the disease is three times higher in women than in men (4,5). Although the majority of patients initially present with relapsing-remitting (R-R) MS, the disease eventually progresses into a secondary-progressive form within 10-20 years of onset (6). The major symptoms include limb weakness, fatigue, spasticity, sensory impairments, loss of coordination, cognitive decline, pain and paralysis. Because MS is not curable, the goal of the various therapeutic approaches is to slow disease progression and alleviate the symptoms to improve quality of life (7).
Among the MS population, pain is a frequent symptom, affecting 28% to 87% of individuals, with variations in time of onset and type of pain. Pain impacts both the physical and emotional well-being of the individual (8) and interferes with most daily life activities, such as sleep, work, and participation in recreational and social activities (9), reducing quality of life and leading to depression and other comorbidities (10-12).
The pain associated with MS can be classified into four groups: musculoskeletal pain (painful tonic spasms, pain secondary to spasticity), intermittent central neuropathic pain (trigeminal neuralgia, Lhermitte's sign), continuous central neuropathic pain, and mixed neuropathic and non-neuropathic pain (headache). Regardless of the type of pain, there is a correlation between pain and the disease course, its duration, and the age of the affected individual (13). Neuropathic pain is more common in women with higher disability and longer disease duration (14).
Painful spasms, especially in the lower limbs, are due to ectopic impulses generated from the motor fibers as a result of axonal damage and demyelination. These painful spasms are more frequent at night (15). Headaches and low back pain are very common among affected individuals throughout the course of the disease (16)(17)(18). Importantly, other manifestations of pain occur as the disease progresses. Spasticity and progressive weakness compromise the posture and motility of the individual, leading to osteoporosis and dysfunction of tendons, ligaments and/or joints, which evoke secondary pain (15). Even the pharmaceutical treatments commonly used for MS symptoms can exacerbate some of the most common pain types (19). For example, Interferon-β exacerbates headaches and migraines (20).
Pain Circuits and Mechanisms Underlying Peripheral and Central Sensitization
Primary sensory neurons of the dorsal root ganglia (DRG) sense noxious stimuli via their peripheral projections, and convey the pain information to the dorsal horn (DH) of the spinal cord (SC) through their central projections. The central projections synapse with second-order sensory neurons and excitatory or inhibitory interneurons in the DH. The DH also receives projections from supraspinal locations which modulate pain transmission. These signals are integrated in the DH and then conveyed to various brain regions where pain perception and affective responses to pain develop (21). The cingulate and insular cortices, the amygdala and the brainstem are among the brain regions implicated in pain states (22)(23)(24)(25).
Neuropathic pain is caused by damage or disease that affects the central or peripheral somatosensory systems, and is referred to as central or peripheral neuropathic pain, respectively (26, 27). The pathophysiological changes associated with neuropathic pain include hyperexcitability of neurons in pain pathways. Neuronal hyperexcitability is an essential mechanism underlying the increased sensitivity to pain in various pathological conditions. Central sensitization manifests as increased sensitivity to innocuous (allodynia) or noxious (hyperalgesia) stimuli, or spontaneous pain. It occurs in many chronic pain conditions (28-30) including MS or its animal models (31) and is independent of peripheral damage or disease. Central sensitization is the consequence of maladaptive changes in pain circuits of the CNS and heightened excitability of neurons, which is partly attributed to increased synaptic efficacy and reduced inhibition (32). The major mechanisms underlying central sensitization have been discussed in earlier comprehensive reports (33, 34) and include glutamate and glutamate receptors. The efficacy of excitatory synapses is enhanced, partly as a consequence of increased glutamate release, impaired glutamate uptake and overactivation of glutamate receptors in DH neurons involved in pain processing. Increased activity, expression and trafficking of ionotropic glutamate receptors, such as α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartate (NMDA) receptors, are observed in second-order sensory neurons of the DH (34-36). Metabotropic glutamate receptors (mGluRs) also participate in the modulation of pain transmission by inducing Ca2+ release from intracellular stores, which, in turn, activates kinases, including phosphatidylinositol 3 kinase (PI3K) and mitogen-activated protein kinase (MAPK). Activated kinases phosphorylate ion channels and receptors implicated in pain mechanisms, altering their activity and leading to increased synaptic efficacy (33). These events, together with decreased inhibitory activity of gamma-aminobutyric acid (GABA)ergic interneurons (34) and the additional effects of non-neuronal cells, enhance neuronal excitability. In particular, activated glia and infiltrating immune cells secrete pro-nociceptive cytokines, including Tumor Necrosis Factor α (TNFα) and Interleukin 1β (IL-1β), which increase the excitatory and reduce the inhibitory currents in DH neurons, supporting central sensitization (37). Reactive astrocytes have been associated with hyperalgesia under pathological conditions (38). Furthermore, descending noradrenergic and serotoninergic projections to the DH inhibit or facilitate pain transmission (23). Therefore, injury- or disease-induced aberrations in descending pain pathways modulate chronic pain (39). For example, in individuals with MS, lesions in the periaqueductal gray (PAG) are often reported (40). The PAG is an important control center for the modulation and propagation of pain along the descending pathways, with an analgesic effect upon stimulation (40). In contrast, GABAergic cells of the rostral ventromedial medulla (RVM) project to the DH, where they facilitate mechanical nociception by inhibiting the spinal GABAergic interneurons (41). Finally, supraspinal glia also contribute to the modulation of chronic pain along the descending pathways, particularly through the release of soluble mediators (42).
Peripheral sensitization manifests as an intensification of the responsiveness of primary sensory neurons in the DRG and lowered pain thresholds when there is damage or pathology in tissues that they innervate. Stimulus-independent spontaneous pain is also one of the outcomes of peripheral sensitization. Multiple effectors, including chemokines, prostaglandins, calcitonin gene-related peptide (CGRP), adenosine triphosphate (ATP), substance P and nerve growth factor (NGF) are released at the affected site (43)(44)(45)(46). These effectors initiate a molecular cascade partly mediated by tyrosine kinase- and G-protein-coupled receptors. Protein kinases are activated and phosphorylate ion channels and receptors at the peripheral terminals of nociceptors, and by doing so, alter their activity (47)(48)(49). In addition, the same modulators modify the expression, localization and stability of nociceptive ion channels and receptors (50). The overall outcomes of these alterations are increased activity of nociceptors and decreased pain thresholds (51).
Manifestation of Neuropathic Pain in MS
Neuropathic pain is widely experienced among individuals with MS and can take several forms. Dysaesthetic (lower extremity) pain is the most common form of neuropathic pain, described as a constant burning, tingling and throbbing, painful sensation in the legs and feet (10). Even though the specific mechanisms underlying the onset of dysaesthetic pain have not been elucidated, it has been suggested that lesions in the SC could lead to the disruption of pain transmission along the spinothalamic tract, or dysfunction of GABAergic interneurons (52). Accordingly, magnetic resonance imaging (MRI) of individuals with dysaesthetic extremity pain shows plaques in the cervical and thoracic spinal cord (53,54).
Trigeminal neuralgia is described as a brief electric shock sensation resulting from the irritation of the trigeminal nerve. It affects various facial regions, and pain can be induced even by mild stimulation of the face. Although the attacks can be brief (up to 2 min), they occur frequently during the day (up to 50/day) (55). Demyelination of the pons, which affects the trigeminal root entry zone, and neurovascular compression of the trigeminal root are observed in MS, and could potentially be a cause of trigeminal neuralgia (55). Similar to trigeminal neuralgia, Lhermitte's sign is described as an electric-shock sensation associated with neck movement, running along the back and down the limbs (56). Lhermitte's sign is not specific to MS and manifests in other pathological conditions that include compression or lesion of the cervical SC (57). Consistent with this idea, MRI of individuals with MS shows demyelinated plaques in the dorsal columns at the cervical level (58).
Treatment of Neuropathic Pain in MS
The treatment of neuropathic pain is challenging, mostly due to the low efficacy and side effects of pharmacological agents (59)(60)(61). Tricyclic antidepressants and serotonin/norepinephrine reuptake inhibitors have been used as first-line pharmacological treatments. Anticonvulsants, including gabapentin and pregabalin, are also considered first-line treatments (62). Although cannabinoids and opioids have been utilized for the alleviation of neuropathic pain in MS, they are not considered first-line treatment options (63,64). Neuromodulation therapies are becoming more common in clinical practice (65). Both brain and SC chemical and/or electrical stimulation and inhibition have been used for the management of different chronic pain conditions (66). Transcranial direct current stimulation (67), peripheral nerve field stimulation (68) and transcutaneous spinal direct current stimulation (69) are among the treatments used to manage neuropathic pain in MS. Alternative strategies such as water exercise and yoga have also been utilized (70).
The goal of the present review is to highlight select mechanisms implicated in the development and persistence of chronic neuropathic pain in MS and its animal models. We first focus on the cellular mechanisms, and then discuss potential molecular mechanisms with particular emphasis on ion channels, pumps and exchangers.
ANIMAL MODELS OF MS: EXPERIMENTAL AUTOIMMUNE ENCEPHALOMYELITIS (EAE)
Experimental Autoimmune Encephalomyelitis (EAE) is one of the best-characterized animal models utilized for the study of MS (71)(72)(73). For over 30 years, the use of EAE in different susceptible animal strains has proven essential for investigations of various aspects of MS pathophysiology, and for the discovery of drugs that are widely used to treat MS, such as Interferon-β and Glatiramer acetate (74)(75)(76). Despite the limitations inherent to any animal model, conventional EAE models and the spontaneous EAE model observed in T-cell receptor transgenic mice (77) have shed light on many aspects of MS etiology, pathophysiology and its pharmacological treatments (72,(78)(79)(80).
Induction of EAE
EAE can be induced by immunization of susceptible animal strains with myelin components and encephalitogenic peptides, or by adoptive transfer of encephalitogenic T cells (81). There are various EAE models in different species and susceptible strains, and they mimic distinct hallmarks of MS (82). In particular, a clear distinction can be made between two forms of EAE: chronic-progressive and R-R EAE. To induce chronic-progressive EAE, mice are inoculated with myelin oligodendrocyte glycoprotein (MOG). The MOG antigen in CNS tissue homogenates has long been recognized to be crucial for provoking the demyelinating lesions characteristic of MS (83). Currently, the use of recombinant MOG or synthetic encephalitogenic fragments such as MOG35-55 for immunization provides a reproducible animal model for the study of MS pathology (73). Alongside MOG, myelin basic protein (MBP) has been used to induce chronic-progressive EAE (84). Proteolipid protein (PLP) and its encephalitogenic fragment (PLP139-151) have been used as a standard approach to induce R-R EAE in susceptible rodent strains. The recombinant protein or synthetic peptide of choice is dissolved in a mineral oil-based adjuvant, Complete Freund's adjuvant (CFA), containing heat-inactivated Mycobacterium tuberculosis (MT), which activates the innate immune system through actions on pattern recognition receptors. Injections of pertussis toxin enhance the immune response and perturb blood-brain barrier integrity. The onset, course and severity of EAE differ depending on the antigen and adjuvant utilized, and on the species, strain, sex and age of the animals used. Within the same species and strain, susceptibility to EAE can differ based on intrinsic and environmental factors such as colonization of the gut and the type of commensal flora, elements that are challenging to control (85).
Alongside the classical EAE forms, induced by sensitization to a specific myelin protein, there are spontaneous and humanized EAE models, which have been discussed in a comprehensive review (86). Among those models are two that have been developed by use of transgenic mouse lines: opticospinal EAE and spontaneous R-R EAE. These models have been useful to unravel the role of B cells and B and T cell interactions in disease pathogenesis (87). Mice manifesting opticospinal EAE develop a chronic progressive EAE-like disease that affects primarily the optic nerve and the SC, but not the brain. The neurological deficits that spontaneously develop are predominantly reminiscent of neuromyelitis optica (88). However, transcriptome profiling indicated that human MS risk genes are among the differentially regulated transcripts, supporting the applicability of this model to the study of MS etiology (89). Mice affected by spontaneous R-R EAE develop the disease at high frequency and manifest unique clinical features with distinct symptoms at onset (e.g., ataxia) and during relapse (e.g., hindlimb paralysis). This is especially observed in female mice. The clinical symptoms are paralleled by formation of lesions in relevant CNS regions (e.g., lesions in the cerebellum and brainstem in mice manifesting ataxia, and SC lesions in mice with hindlimb paralysis) (90). Therefore, spontaneous R-R EAE is a useful animal model for the study of specific disease aspects and treatments because it mimics the most frequent form of MS (88,90).
Neuroinflammation, demyelinating lesions, axonal damage, and oligodendrocyte and neuronal death are observed in classical EAE, with the SC being the CNS region most affected. These histopathological features are similar to MS neuropathology (91,92).
EAE Symptoms
Ascending paralysis is the most pronounced symptom in animals with classical EAE. It manifests several days post-immunization and is associated with disease progression (93). In chronic-progressive EAE, flaccid paralysis starts at the tail with loss of tone, proceeds to the hind and fore limbs, and can lead to death. Aside from locomotor impairment, sensory and cognitive dysfunctions are also observed, even before the manifestation of motor deficits (94,95). This is in concordance with the impairment in cognitive function in MS, which is observed even at early phases of the disease and is paralleled by demyelination and neuronal damage that affects the gray matter (96,97).
As in the case of MS, rodents with EAE manifest pain (98)(99)(100). Pain behavior has been assessed in both chronic-progressive and R-R EAE (98)(99)(100). Mechanical and thermal allodynia have been documented at onset of EAE symptoms, concomitant with deficits in cognitive behavior (101), which has also been reported in MS patients (102). Similar to MS, pain behavior and hypersensitivity are variable among different EAE models, revealing the heterogeneity of the disease and mimicking the diversity among individuals with MS (103). Neuroinflammation and disease severity modulate mechanical hypersensitivity in a MOG35-55-induced EAE model (104). Similar to the caudal-to-rostral progression of the neuroinflammatory reaction observed in the SC, ascending sensitization might manifest during EAE. Both morphological and functional changes influence the cortical synapses participating in central sensitization and pain hypersensitivity during EAE (105). An increase in immune cell infiltration, glial activation and release of soluble inflammatory mediators (106), together with alterations in glutamate and GABA levels as well as changes in neurotransmission, lead to an overall increase in neuronal excitability in EAE (107,108).
Because anxiety and depression are strongly associated with pain in individuals with MS, EAE has been widely used for the study of these comorbidities (109)(110)(111). Mice affected by a mild form of EAE that does not substantially affect motor function were evaluated at a later disease stage. Increased anxiety- and depression-like behavior correlated with elevated TNFα, neuronal dysfunction, synapse loss, and neuroinflammation-induced hippocampal damage (112).
Cellular and Molecular Mechanisms Underlying CNS Damage in EAE
EAE has been useful to unravel several cellular and molecular mechanisms implicated in MS pathogenesis, including the involvement of T cells and other infiltrating immune cells, and the role played by humoral components of the immune system in neurodegeneration. In fact, during the initial phase of EAE, T cells play the major role. Upon their activation in the periphery, T cells produce and secrete pro-inflammatory cytokines and cross the blood-brain barrier. The entry of T cells into the CNS is mediated by integrins and cell adhesion molecules, which are upregulated. Once in the CNS, these T cells are re-activated by antigen presenting cells (113). The presentation of myelin-derived antigens by macrophages to T cells, the infiltration of additional immune cells including macrophages and B cells, and glial activation lead to a complex and extensive neuroinflammatory reaction, ultimately resulting in demyelination, loss of oligodendrocytes and neurons, and axonal degeneration (72). The cytotoxic effects of cytokines, the activation of complement proteins and the production and secretion of reactive oxygen and nitrogen species (ROS and RNS, respectively) are among the molecular mechanisms that cause CNS damage (114). The free radicals have a severe and deleterious impact on neuronal metabolism, mitochondrial integrity and energy balance, leading to increased intracellular calcium (Ca2+) and, eventually, neuronal death (115). Because of the important role played by oxidative stress in MS, some of the latest therapies focus on reducing the deleterious effects that result from ROS and RNS accumulation (116).
Oxidative stress does not only damage neurons, but it affects glial cells as well, with oligodendrocytes being the most sensitive cell type (117). In active lesions during the early stages of R-R MS, oxidative damage affects primarily oligodendrocytes and myelin (117,118). The accumulation of ROS or RNS in mitochondria compromises oligodendrocyte function and impairs the differentiation of oligodendrocyte progenitor cells. This could be a mechanism underlying the impairment of remyelination (119) since mitochondria are essential for the biosynthesis of lipids to produce myelin (120,121). Oxidative stress affects oligodendrocytes and their precursors in more than one way and therefore, could contribute directly and indirectly to MS pathophysiology (122,123).
ROLE OF ACTIVATED GLIA AND GLIA-NEURON INTERACTIONS IN NEUROPATHIC PAIN
Following injury or disease, glia undergo morphological and functional changes, switching into an active state. As indicated above, glial activation also occurs in MS and EAE, and is considered an essential determinant of disease pathology (124). In addition to myelin phagocytosis and antigen presentation, glia release pro-inflammatory and pro-nociceptive cytokines, chemokines, brain derived neurotrophic factor (BDNF), ROS, and ATP, leading to both neurotoxicity (125) and the development and maintenance of neuropathic pain (126)(127)(128)(129)(130)(131). Attenuation of astrocyte and microglia activation using fingolimod has been associated with amelioration of pain hypersensitivity in mice with EAE (31).
Activated glia are also a source of glutamate. Glutamate homeostasis is impaired in MS and EAE. Glutamate levels in the cerebrospinal fluid (CSF) of individuals with MS correlate with disease severity (132). Increased serum glutamate levels and elevated glutamate in active white matter lesions have been observed in MS (133,134). Since glutamate and glutamate receptors play essential roles in the hyperexcitability of sensory neurons and sensitization to pain, increased glutamate release by glia potentially contributes to neuropathic pain in MS/EAE. Glutamate can also induce excitotoxicity through the activation of ionotropic receptors which participate in the neurodegeneration observed in MS and EAE and other CNS disorders (135,136). In addition to neurons, glia express ionotropic and metabotropic glutamate receptors, and therefore, their function could be modulated via direct stimulation by glutamate (137,138).
Astrocytes are essential for the clearance of synaptic glutamate through uptake by the glutamate transporters, glutamate aspartate transporter (GLAST/EAAT1) and glutamate transporter 1 (GLT-1/EAAT2). A decrease in EAAT1 and EAAT2 levels was observed in the SC at the peak of EAE, which persisted even after remission (139). A reduction of astrocyte-mediated glutamate transport could elevate glutamate levels, causing neuronal hyperexcitability as well as excitotoxicity. Oligodendrocytes also express EAAT1 and EAAT2, and aberrant glutamate uptake by oligodendrocytes also contributes to pathology in EAE/MS (140). Paradoxically, an increase in glial glutamate transporter levels in the MS optic nerve has been implicated in neuroprotection (141). A loss of EAAT1 and EAAT2 in MS lesions and an upregulation of EAAT2 in the adjacent cortex with intact myelin have been documented (142). Since both astrocytes and white matter oligodendrocytes express glutamate transporters (143), these contradictory findings could be the result of many factors, including cell type, CNS region, disease stage and whether EAE as opposed to post-mortem MS tissue was analyzed. Collectively, the studies support the notion that glial EAAT1 and EAAT2 could play a role in EAE/MS pathology. However, their precise contribution to neuropathic pain in these diseases requires further investigation.
Glutamate receptor antagonists have been assessed in EAE, primarily in the context of neuroprotection, prevention of demyelination, and improvement of motor deficits (144)(145)(146)(147)(148). Although glutamate receptor antagonists, including memantine (149)(150)(151) or modulators of the glutamatergic system such as ketamine (152,153) have been used for the relief of chronic pain in various diseases or injuries, their effectiveness in the alleviation of neuropathic pain during EAE/MS has not been adequately investigated.
Ion Channels
Ca2+, Na+ and K+ channels are essential components of pathways underlying pain mechanisms (154,155). They have also been implicated in several aspects of MS pathophysiology (156). Voltage-gated Na+ and Ca2+ channels (VGSCs and VGCCs, respectively) are expressed in sensory neurons of the DRG and in DH neurons (157,158). Under physiological conditions, these channels participate in neuronal excitability and neurotransmitter release (159,160). Following CNS and peripheral nervous system (PNS) injury and disease, changes that affect their expression and distribution lead to neuronal hyperexcitability, resulting in an exaggerated and repetitive response to subthreshold sensory stimuli, and increasing synaptic strength (161,162). The increased activity of Na+ and Ca2+ channels in first-order sensory neurons of the DRG promotes neurotransmitter release, which, in turn, enhances the excitatory inputs to the SC (34).
CaV2.2
The N-type VGCC, CaV2.2, is one of the major ion channels regulating neurotransmitter release in the PNS and CNS (163). Compelling evidence indicates that CaV2.2 is involved in pain mechanisms. In fact, CaV2.2 has been extensively investigated, especially with the goal of finding pharmaceutical approaches for the treatment of chronic pain (164)(165)(166).
Several lines of evidence support the involvement of CaV2.2 in neuropathic pain.
Following CFA-induced inflammatory pain in mice, CaV2.2 mRNA and protein levels are upregulated in the lumbar DRG and correlate with the increase in thermal hyperalgesia (167). Electrophysiological recordings from the DRG showed an increase in CaV2.2 currents and a rise in the frequency of spontaneous action potentials that were directly related to CaV2.2. Taken together, these findings suggested an overall enhancement of the excitability of DRG neurons (167).
Furthermore, CaV2.2 knockout mice manifest decreased pain responses in models of neuropathic and inflammatory pain (168). Blockade of CaV2.2 in nociceptors reduces chronic inflammatory pain in a mouse model of rheumatoid arthritis (169). Additionally, interference with CaV2.2 trafficking to the membrane of DRG neurons abrogates Ca2+ currents, decreases stimulus-induced pro-nociceptive neuropeptide release, and reduces excitatory synaptic transmission in lamina II DH neurons, which receive DRG afferents that express CaV2.2. These changes result in the attenuation of pain responses in several rodent models of evoked inflammatory and neuropathic pain (170). Following chronic constriction injury (CCI) of the sciatic nerve, CaV2.2 is upregulated in lamina II of the lumbar DH (171). In the lumbar L5-spinal nerve ligation (L5-SNL) model, IL-1β and IL-10, pro- and anti-nociceptive cytokines, respectively, modulate CaV2.2 expression in the DRG in opposite directions. Injury at the L5 level increases IL-1β release in the adjacent, uninjured L4 DRG, which upregulates CaV2.2, resulting in neuronal hyperexcitability (172) and mechanical allodynia (173). In contrast, L5-SNL upregulates both IL-1β and IL-10 in the corresponding, injured L5 DRG and lumbar SC. However, the effects of IL-10 predominate, and a significant reduction in CaV2.2 is observed (172). Collectively, the aforementioned studies illustrate the role played by CaV2.2 in pain.
Various mechanisms have been implicated in CaV2.2-mediated pain responses.
In peripheral somatosensory neurons, CaV2.2 is located close to synaptic vesicles containing pro-nociceptive neurotransmitters such as glutamate, substance P and CGRP (174). Inhibition of N-type VGCC in rat DRG neuronal cultures decreases the entry of Ca2+ and reduces neurotransmitter release (175). As indicated above (170), by regulating the release of pro-nociceptive neurotransmitters, CaV2.2 can enhance pain responses.
CaV2.2 forms a complex with NMDA receptors (NMDA-R) in the mouse and human SC (176). Following nerve injury, CaV2.2 promotes the trafficking and synaptic targeting of NMDA-R, suggesting an additional mechanism by which it could contribute to pain signaling (176).
Ectopic fiber sprouting in targets of DRG neurons, which occurs following nerve injury, has been associated with neuropathic pain (177). Inhibition of CaV2.2 activity in neurons derived from the DRG of mice manifesting CFA-induced inflammatory pain prevented neurite outgrowth. It has been proposed that excess neurite outgrowth causes an increase in nerve terminal density, which leads to increased nociceptive inputs and pain (167,178).
During MS and EAE, a CaV2.2 subunit is overexpressed in active MS lesions and MS/EAE plaques (179). It has been suggested that the overexpression of CaV2.2 impacts neuronal function and demyelination in the lesions (180,181). Although a direct study has not yet confirmed the link between CaV2.2 and neuropathic pain in MS/EAE, the involvement of this channel in EAE pathophysiology, its role in neuropathic pain, and the use of N-type Ca2+ channel blockers in the treatment of chronic pain in MS patients (182)(183)(184) point toward the need for further investigations to determine whether and how CaV2.2 contributes to EAE-associated pain.
Nav1.6 and Nav1.8
Nav1.6 and Nav1.8 are among the VGSCs participating in inflammation and neuronal pathology in retinal ganglion cells and the lumbar DRG during EAE (185)(186)(187)(188). Nav1.6 is associated with Na+/Ca2+ exchangers (NCX). It has been proposed that in EAE, increased Nav1.6 expression reverses Na+/Ca2+ exchanger activity and, by doing so, promotes Ca2+ influx into neurons and causes neurodegeneration. It is possible to speculate that such interactions between Nav1.6 and the NCX also occur in sensory neurons and lead to elevated intracellular Ca2+, which, in turn, modulates pain mechanisms. Despite studies showing the role of Nav1.6 in various pain models, illustrated by the examples described below, its contribution to MS- or EAE-associated pain has been underappreciated.
In neuropathic pain induced by SNL, siRNA-mediated Nav1.6 knockdown decreases the hyperexcitability of DRG neurons by reducing the frequency of action potentials and restoring their resting potential. In addition, a decrease in sympathetic sprouting and in the ectopic firing of Aβ fibers in the lumbar DRG has been documented (189). Moreover, in spared nerve injury, a model of neuropathic pain, conditional adeno-associated virus (AAV)-mediated Nav1.6 knockdown in DRG neurons reduces excitability and ameliorates pain in adult mice. Furthermore, the accumulation of Nav1.6 in newly formed nodes of Ranvier, which has been implicated in pain, is mitigated by Nav1.6 knockdown (190). Accordingly, an amelioration of pain sensitivity is observed. Nav1.6 is also upregulated in a mouse model of diabetes and diabetic neuropathy (191). A gain-of-function mutation in Nav1.6 was found in individuals with trigeminal neuralgia. The mutated form shows a decreased current threshold, an increased frequency of evoked action potentials and an overall potentiation of the Na+ currents, with higher excitability of the trigeminal ganglion (192). Taken together, the investigations mentioned above stress the pivotal role played by Nav1.6 in pain states. The role of Nav1.6 could be a promising research direction to further explore in MS- and EAE-associated pain.
In the context of MS and EAE, NaV1.8 was first associated with alterations in the firing pattern of cerebellar Purkinje cells, causing dysfunction and disruption of motor coordination, one of the symptoms observed in EAE and MS (188). A few studies have shed light on its involvement in DRG sensitization. There is a significant increase in Nav1.8 levels in DRG neurons in CFA-induced inflammatory pain. TNFα and IL-1β directly modulate Nav1.8-like currents by activating the p38/MAPK signaling pathway. This is paralleled by increased excitability of DRG neurons (160). Nav1.8 channels are expressed in more than 90% of the nociceptive neurons in the DRG (193). Since medium-to-large sensory neurons of the DRG are hyperexcitable at onset of EAE, and IL-1β is significantly elevated in the DRG at the same stage of the disease (194), it is possible that similar mechanisms contribute to peripheral sensitization and neuropathic pain associated with EAE. This possibility requires further investigation.
K+ Channels
The calcium-sensitive large-conductance K+ channels (BK) have recently been implicated in pain mechanisms during EAE (195). These channels are found in DRG neurons of mice with EAE and play a role in the endoplasmic reticulum (ER) stress-mediated pain response. It has been hypothesized that the prolonged neuroinflammation that occurs in the CNS during MS/EAE affects DRG neurons by conveying retrograde stress signals, which, in turn, induce ER stress (195). In fact, ER stress markers are elevated in post-mortem DRG of individuals with MS. Furthermore, 4-phenylbutyric acid (4-PBA), an inhibitor of ER stress, suppresses mechanical hypersensitivity in mice with EAE (195). The potential mechanisms by which ER stress leads to pain states in EAE have been investigated. ER stress causes variations in intracellular Ca2+ transients. Under physiological conditions, because of their Ca2+ sensitivity, BK channels respond to transient changes in Ca2+ concentrations by mediating the efflux of K+ in order to maintain the membrane potential of excitable cells. In mice affected by EAE and in individuals with MS, there is a reduction in the β4 auxiliary subunit of BK channels in DRG neurons (195). This subunit modulates the Ca2+ sensitivity of BK channels and, therefore, their electrical properties (196). The decrease in the β4 subunit changes BK channel physiology, resulting in a more depolarized resting potential due to decreased BK activity and reduced K+ efflux from neurons. A more depolarized resting potential facilitates firing, promotes ectopic discharges and increases neurotransmitter release, leading to hyperexcitability of sensory neurons. In mice with EAE, administration of 4-PBA restores the membrane potential of DRG neurons to physiological values by increasing the expression of β4 and by restoring BK channel properties.
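The link between reduced K+ conductance and a depolarized resting potential can be made concrete with the Goldman-Hodgkin-Katz (GHK) voltage equation. The sketch below is not from the studies cited above: the ion concentrations and relative permeabilities are generic textbook values chosen purely for illustration, with the drop in K+ permeability standing in for decreased BK channel activity.

```python
# Illustrative GHK calculation (not from the source) showing why reduced K+
# permeability shifts the resting potential toward depolarization. All
# concentrations (mM) and relative permeabilities are generic textbook values.
import math

def ghk_potential(pK, pNa, pCl, K_out=5.0, K_in=140.0, Na_out=145.0,
                  Na_in=12.0, Cl_out=116.0, Cl_in=4.0, temp_k=310.0):
    """Resting membrane potential (mV) from the GHK voltage equation."""
    rt_f = 8.314 * temp_k / 96485.0 * 1000.0    # RT/F in mV
    num = pK * K_out + pNa * Na_out + pCl * Cl_in   # Cl- terms are inverted
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return rt_f * math.log(num / den)

print(f"Normal K+ permeability:  {ghk_potential(1.0, 0.05, 0.45):6.1f} mV")
print(f"Reduced K+ permeability: {ghk_potential(0.5, 0.05, 0.45):6.1f} mV")
# The second value is less negative (depolarized), consistent with the easier
# firing and ectopic discharges described above.
```

Running this gives roughly -70 mV for the normal case and about -63 mV when K+ permeability is halved, illustrating the direction of the shift rather than any measured EAE value.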
A link between ER stress and pain behavior has been observed in other models of neuropathic pain. An increase in ER stress is recorded in the DH of rats following SNL, and it corresponds with pain hypersensitivity. Inhibition of ER stress pathways in the SC alleviates pain (197,198). ER stress in DH neurons can influence the environment and promote spinal sensitization and increased pain sensitivity (199). ROS production is an outcome of ER stress. GABAergic interneurons are sensitive to ROS. The increased production of ROS following ER stress causes long-term depression of GABAergic interneurons in the DH and impairs GABA release following SNL (197,200). It is possible that similar mechanisms occur in EAE. In addition, astrocytes and macrophages release pro-nociceptive cytokines including IL-1β and IL-6 in response to ER stress (201,202). These cytokines are upregulated in MS/EAE and they have been implicated in pain (130,203). X-Box Binding Protein 1 (XBP1), a transcription factor associated with ER stress and a regulator of the unfolded protein response, has been implicated in neuropathic pain (199,204). XBP1 increases in the brain white matter in MS and in SC white matter in EAE. However, the potential contribution of XBP1 to neuropathic pain in MS/EAE has not received attention. In sum, the involvement of ER stress in pain responses in EAE or MS, and its relation to BK channels, is an intriguing research direction that warrants further studies.
Ion Pumps and Exchangers
Plasma membrane calcium ATPases (PMCAs) and NCX are among the ion pumps and exchangers, respectively, that are modulated during MS/EAE and implicated in pain mechanisms. Their main physiological function is to regulate intracellular Ca2+ concentrations. Both PMCA and NCX expel intracellular Ca2+. PMCAs are essential in maintaining cytosolic Ca2+ concentrations due to their high Ca2+ affinity and low Ca2+ capacity, whereas NCX, which has low Ca2+ affinity, participates more dynamically in re-establishing Ca2+ homeostasis during large cytosolic Ca2+ increases (205). Aberrant Ca2+ clearance in neurons can cause hyperactivity (206).
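The division of labor between a high-affinity, low-capacity pump (PMCA) and a low-affinity, high-capacity exchanger (NCX) can be illustrated with a toy Michaelis-Menten comparison. This is a sketch under stated assumptions, not a quantitative model: all parameter values are hypothetical and chosen only to reproduce the qualitative behavior described above.

```python
# Toy comparison (not from the source) of Ca2+ extrusion by PMCA (high
# affinity, low capacity) versus NCX (low affinity, high capacity), using
# Michaelis-Menten kinetics with purely illustrative parameters.

def flux(ca_um, vmax, km_um):
    """Extrusion flux (arbitrary units) at a cytosolic [Ca2+] in micromolar."""
    return vmax * ca_um / (km_um + ca_um)

PMCA = dict(vmax=1.0, km_um=0.2)    # high affinity, low capacity (hypothetical)
NCX = dict(vmax=10.0, km_um=10.0)   # low affinity, high capacity (hypothetical)

for ca in (0.1, 1.0, 10.0):  # resting, moderate and large Ca2+ loads (uM)
    print(f"[Ca2+] = {ca:5.1f} uM | PMCA flux: {flux(ca, **PMCA):5.2f} | "
          f"NCX flux: {flux(ca, **NCX):5.2f}")
# At resting Ca2+ the PMCA flux dominates; during large Ca2+ loads the NCX
# flux takes over, mirroring the division of labor described above.
```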
Studies on MOG35-55-induced chronic EAE have shown that one of the PMCA isoforms, PMCA2, which is exclusively expressed in neurons, is downregulated in the DH of mice at onset of the disease. This is paralleled by a concomitant increase in mechanical and thermal pain sensitivity (207). PMCA2 is also downregulated in motor neurons (208,209) and photoreceptors at early stages of EAE, causing aberrant Ca2+ signaling and contributing to synaptic dysfunction (210). Since PMCA2 is not expressed in the DRG (211), it is likely that the modulation of pain responses by PMCA2 is primarily mediated through changes in DH neurons. It has been postulated that a delay in Ca2+ clearance, as a result of decreased PMCA2 in DH neurons of EAE mice, could lead to increased intracellular Ca2+, resulting in the hyperactivation of DH neurons. Furthermore, increased intracellular Ca2+ could activate Ca2+-dependent transcription factors involved in the transcription of pro-nociceptive genes (207). It is worth noting that glutamate, which is one of the major players in the initiation and maintenance of chronic pain (138,212,213) and central sensitization (33), modulates PMCA2 levels in SC neurons. Kainic acid, an analog of glutamate, reduces PMCA2 levels in cultured SC neurons, and administration of 2,3-dioxo-6-nitro-7-sulfamoyl-benzo[f]quinoxaline (NBQX), an AMPA/kainate receptor antagonist, restores PMCA2 levels in EAE mice (148). IL-1β is also a trigger that reduces PMCA2 expression, by mechanisms that remain elusive (207). Further studies are needed to establish a direct link between PMCA2 and pain mechanisms in DH neurons.

NCX
NCX has been studied in animal models of neuropathic pain (214). Because of its intrinsic properties as an exchanger, this transporter allows the passage of Ca2+ and Na+ across the membrane in either direction, depending on ion gradients. In the forward mode, NCX mediates intracellular Ca2+ extrusion, whereas in the reverse mode it facilitates Ca2+ influx. Reverse activity of NCX has been reported in animal models of neuropathic pain and EAE (215,216). When the reversal of NCX activity is inhibited by pharmacological agents in rodents with peripheral nerve injury, the Ca2+ overload in the lumbar DRG is reduced and an overall decrease in pain sensitivity is observed (214).
During EAE, the number of NCX-expressing neurons in the lumbar SC increases. NCX co-localizes with Nav1.6 channels in the dorsal column white matter (216). An increase in the number of Na+ channels, and the consequent elevation in Na+ currents, reverses NCX activity, leading to Ca2+ overload and axonal injury in the CNS during EAE (216,217). The co-localization of Nav1.6 and NCX is also observed in cervical SC tissue of MS patients (218). NCX is expressed in cortical astrocytes, where it facilitates glutamate release (219). Reactive astrocytes participate in the modulation of neuropathic pain by releasing glutamate, among other modulators of pain (220). Taken together, these investigations raise the possibility that NCX contributes not only to axonal damage but also to mechanisms underlying pain hypersensitivity by modulating Ca2+ signaling. This mechanism has not been adequately explored in EAE-related pain and could be an engaging research direction.
The involvement of ion channels, pumps and exchangers in pain mechanisms is summarized in Table 1.
ADDITIONAL CONTRIBUTORS TO PAIN MECHANISMS IN EAE: TRANSCRIPTION FACTORS, SIGNALING PATHWAYS AND INFLAMMATORY MEDIATORS, E.G., CHEMOKINES
Several molecular mechanisms have been implicated in the pathogenesis and maintenance of neuropathic pain in MS/EAE. Most of these studies were undertaken to investigate molecules that have been implicated at different stages of disease pathophysiology.
Wnt Signaling Pathways
The Wnt signaling pathways are involved in the development of EAE-associated chronic pain by increasing the expression of pro-inflammatory and pro-nociceptive cytokines (221). The physiological roles of Wnt include CNS development (222), synaptic plasticity (223), regulation of oligodendrocyte maturation and differentiation (224) and cytokine production by cortical and SC neurons (225,226). These functions are exerted through β-catenin-dependent and -independent pathways (227,228). During EAE, both of these pathways are overactivated, and pharmacological inhibition of either pathway attenuates mechanical allodynia (221). Similar results were obtained by use of inflammatory and neuropathic pain models, further supporting the involvement of Wnt in pain mechanisms. Wnt signaling enhances synaptic plasticity and spine morphogenesis in DH neurons, which was associated with hypersensitization and chronic pain in adult mice (229).
C-X3-C Motif Chemokine Ligand 1 (CX3CL1) and CX3C Receptor 1 (CX3CR1)
Chemokines are potent mediators of inflammation in MS, with T cells being the major source during the initial phase of the disease. They chemoattract leukocytes to affected sites (230). An increase in chemokines and their receptors is observed in the blood and CSF of individuals with MS (231) as well as in animals affected by EAE (232).
The chemokine CX3CL1 and its receptor CX3CR1 have been implicated in pain mechanisms. In the CNS, CX3CL1 is primarily expressed in neurons, whereas CX3CR1 is predominantly found in microglia. The involvement of CX3CR1 in pain mechanisms is indicated by the fact that CX3CR1 knockout mice do not develop thermal hyperalgesia in an inflammatory pain model (233). Moreover, upon interaction with its ligand in the DH, CX3CR1, which is a G protein-coupled receptor (234), initiates a cascade of events in microglia, with the ultimate production and release of pro-nociceptive mediators such as IL-1β, IL-6 and NO (235). The link between pain and CX3CL1 was further demonstrated in a R-R EAE model. The authors showed that induction of pain by ligation of the middle, sensory branch of the trigeminal nerve during a remission increases CX3CL1 expression in the lumbar SC, leading to the recruitment of immune cells, which eventually results in EAE relapse (236). Another study found that CX3CL1 and CX3CR1 are upregulated in the DRG and DH of EAE rats at the early stage of the disease, prior to demyelination and axonal injury (126). This corresponded with the onset of hypoalgesia (126). The authors postulated that the parallelism between hypoalgesia, a sensory aberrance manifesting prior to the development of hyperalgesia in MS (237), and the increase in CX3CL1 and CX3CR1 in the DRG and DH supports the idea that they play a role in nociception and pain.
BDNF, Tyrosine Receptor Kinase B (TrkB) and Extracellular Signal-Regulated Kinase (ERK) Signaling Pathway
BDNF has been studied in the context of MS/EAE because of its involvement in MS-associated neuroinflammation (238) and the possibility of using it as a marker for disease progression (239), but its role in chronic pain has been controversial. A variety of animal models have shown both anti-nociceptive and pro-nociceptive effects of BDNF. Interestingly, ERK and c-fos, which are among the downstream signaling molecules activated by BDNF and its receptor TrkB, are considered markers of central sensitization (240). Phosphorylated ERK (pERK) directly increases the activity of AMPA and NMDA receptors while decreasing the activity of K+ channels in the lumbar DH. The overall result is an increase in the excitability of superficial laminae neurons (241). These rapid events associated with pERK activation are followed by slower but long-lasting effects, such as the transcription of genes implicated in pain, including c-fos, Cox-2 and TrkB. Thus, pERK plays a critical role in the central sensitization of DH neurons (241).
In the DH of EAE mice, cellular markers of central sensitization, including pERK, are increased in neurons, and this is paralleled by mechanical and cold hyperalgesia (31). In R-R EAE, attenuation of BDNF-TrkB-ERK signaling in the lumbar DH is associated with the alleviation of mechanical allodynia in the diseased mice (242). Since white blood cells of individuals with R-R MS express elevated BDNF levels (243), it is likely that immune cells infiltrating the CNS are a source of BDNF, in addition to glia and neurons. In rats with EAE, BDNF in the DRG and SC was upregulated at the peak of the disease (244). In agreement, TrkB levels were elevated in the lumbar DH and associated with increased mechanical pain sensitivity in R-R EAE (242).
IL-1β-NF-κB Signaling Pathway
In the context of neuropathic pain, IL-1β, IL-6 and TNFα have been implicated in the maintenance of pain states following injury and disease (131,245,246). These mediators exert their actions in a synergistic manner. Upon binding to their respective receptors, they can lead to the production, secretion and modulation of additional cytokines (131). Both activated glia and infiltrating immune cells release IL-1β, IL-6 and TNFα (247,248).
IL-1β is produced in an inactive precursor form. Cleavage of the precursor by caspase-1 produces mature IL-1β, which is the active form. The binding of IL-1β to its receptor induces the activation of transcription factors such as NF-κB, and this results in the further production and release of pro-inflammatory molecules including other cytokines, ROS, NO, NOS and cyclooxygenases (COXs) (249,250). It has been proposed that in rodents experiencing inflammation-evoked hyperalgesia, IL-1β activates NF-κB in DRG and SC neurons, inducing COX-2 (251). COX-2 is a potent stimulator of prostaglandins, which are lipids that exert several physiological functions, including the maintenance of inflammation and pain (252,253). IL-1β also enhances mechanical and thermal pain sensitivity by perturbing the activity of glutamate receptors (254)(255)(256). IL-1 receptor I, which is expressed in nociceptors (257), colocalizes with the NR1 subunit, one of the NMDA-R subunits (246). IL-1β selectively induces the phosphorylation of the NR1 subunit, facilitating pain transmission (248). Furthermore, administration of an IL-1 receptor antagonist (IL-1ra) significantly reduces inflammatory hyperalgesia in rats by inhibiting NR1 phosphorylation in the SC (255).
IL-1β, IL-6 and TNFα have been investigated in MS pathology and related pain (258,259). As stated before, the majority of the intracellular signaling pathways that lead to neuropathic pain during MS/EAE include the transcription of pro-inflammatory and pro-nociceptive cytokines such as TNFα, IL-6 and IL-1β. IL-1β is necessary for the neuroinflammatory reaction that develops during EAE as indicated by the resistance of IL-1β knockout mice to EAE (260). IL-1β has been associated with pain hypersensitivity in mice with EAE, as systemic administration of an IL-1 receptor antagonist alleviates pain (203).
The molecular mechanisms underlying pain responses in pathological conditions including EAE have been summarized in Figure 1.
CONCLUSIONS
In spite of a wealth of knowledge about the immune system reaction and the crosstalk between the immune and nervous systems in MS and EAE, information regarding the mechanisms underlying neuropathic pain has been limited. Given the high frequency of pain states in individuals affected by MS, it is imperative to advance the understanding of events leading to the onset and maintenance of chronic pain. Revealing the specific aberrant mechanisms and major players involved in the development and/or maintenance of pain in MS/EAE would greatly alter the pharmacological approach to its treatment, thus improving the quality of everyday life in affected individuals.
Considering the type of pain experienced in MS, it is highly likely that diverse and complex mechanisms regulate pain processing. In particular, neuropathic pain remains a challenge, not only in MS but in other pathological conditions of the CNS. Whereas such mechanisms may be somewhat distinct in each pathological condition, commonalities also exist. Because neuropathic pain has been extensively studied in animal models of nerve injury, findings obtained in such studies could facilitate future investigations of MS/EAE-associated pain. For example, glutamate excitotoxicity-induced death of inhibitory GABAergic neurons in the superficial DH has been implicated in the transition of acute neuropathic pain into chronic neuropathic pain following nerve injury (261). As mentioned above, a role for glutamate excitotoxicity has been shown in EAE, and high glutamate levels have been reported in the CSF of individuals with MS. Therefore, mechanisms similar to those reported in nerve injury (261) could also underlie chronic neuropathic pain in EAE/MS. Studies on EAE have also shown that demyelination and inflammation occur not only in the CNS but also in the PNS (262). Peripheral nerve demyelination has been reported in a subset of individuals with MS (263). This can affect the function of sensory neurons in the DRG. In fact, electrophysiological studies indicated membrane hyperexcitability of DRG sensory neurons, and therefore peripheral sensitization, in EAE (195,264).
Regarding the use of EAE for the study of pain states in MS, the multitude of EAE models used in investigations could challenge the acquisition of cohesive information (265). On the other hand, corroborating findings in more than one model is needed to determine the wider applicability of the findings to different species and experimental paradigms.
With respect to the management of chronic pain, the challenges experienced in other pathological conditions associated with neuropathic pain are also relevant to MS. Both pharmacological and non-pharmacological approaches are used, although further studies are needed to establish the effectiveness of these therapies (69,266,267). Importantly, the evaluation of pharmacological therapies in EAE or other models of MS needs to be performed in a manner that is applicable in the clinic.
AUTHOR CONTRIBUTIONS
EM and SE conceived and wrote this article and approved the submitted version.
"year": 2021,
"sha1": "25436c0122f983fb85c346cee1e6a01785df105a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "25436c0122f983fb85c346cee1e6a01785df105a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Study of Chattri Dam for Selected Metal Concentrations in Water, Sediments and Fish Species
The research was conducted on Chattri Dam in Haripur district. The study aimed to determine metal (Zn, Cu, Fe, Mn, Cd and Ni) concentrations in water, sediment and fish tissues (muscles, gills and liver) in order to assess possible metal hazards and potential health risks. Twenty-four water samples, 16 sediment samples and three fish species were selected for analysis. Water samples were analyzed directly by atomic absorption spectrometer, while sediment and fish samples were extracted and then aspirated into the atomic absorption spectrometer (Model AAS 700, Acetylene Flame). Results showed that metal concentrations in water samples were within permissible limits when compared with international guidelines for water (World Health Organization) [1]. Higher concentrations were recorded in samples collected after the rainy season. In sediments, average metal concentrations were below reference ranges and followed the order Fe>Zn>Mn>Ni>Cu>Cd at both upstream and downstream sites. The enrichment factor of Cu (1.0) indicated minor enrichment linked with anthropogenic sources, and that of Zn (9.7) indicated moderately severe contamination, while Cd (0.24), Ni (0.80) and Fe (0.07) were within limits, though their concentrations were trending upward. In Cirrhinus mrigala, Hypophthalmichthys molitrix and Ctenopharyngodon idella, metal concentrations were below USEPA limits, and no metal had a THQ > 1. However, sediments may have harmful impacts on fish species in the future, which would contaminate the food chain. It is therefore recommended to monitor the dam for pollution load, regulate agricultural activities and initiate reforestation in the surrounding area.
Introduction
Water scarcity has become a worldwide problem. In Pakistan, water is scarce in the arid and semi-arid regions. Small dams are constructed to store water for the dry season to overcome water scarcity [2]. Small dam construction is increasing throughout the world, and the major purpose of dam construction is irrigation. In rural areas of Pakistan (almost 64% of the total population), where livelihoods directly or indirectly depend upon agricultural activities, the available water is not sufficient (Agricultural Census [3], Pakistan Report). Some Non-Government Organizations (NGOs) and government agencies have constructed many dams as artificial sources of water to reduce water shortage problems [2]. Dams receive suspended load and sediments from nearby mountains and agricultural fields due to erosion and weathering. Sometimes the sediment load contributes to metal contamination in reservoirs [4]. More than 90% of heavy metal contamination has been associated with suspended solids and sediments. Sediments have a high nutrient content, accelerate various microbial activities, and may release metals that accumulate in the food chain [5]. In China, Rauf [6] investigated metal contamination in sediments: the highest concentrations of Cu, Zn and Cr were recorded in the Fo-Tan tributaries, while Al and Cd were found in the Shing Mun River. Metal pollution in reservoir sediments has also been investigated in the Kurang Nallah, a feeding tributary of the Rawal Lake, Pakistan [5,6].
Metal concentrations in fish tissues are common pollution indicators. The literature indicates contamination of metals in muscle and liver tissues of fishes in Pakistan [7][8][9][10]. Accumulation of higher levels of these metals in the environment is toxic to aquatic as well as terrestrial life [11]. For humans, toxic effects include headache, hypertension, irritability, abdominal pain, nerve damage, liver and kidney problems, anemia, intellectual disabilities and carcinogenesis [12].
Various other sources of metal pollution have been identified, owing to rapid population growth, unplanned human settlements, and domestic and recreational activities in the catchment area of a reservoir [5]. Agricultural activities, such as the application of various agro-chemicals within the catchment area of a dam, contribute to higher metal concentrations [13]. The geology of a reservoir's catchment area plays a role in the quality and quantity of the sediment load and its metal content. Rocks along the spring contribute to metal contamination in dams. The literature reveals that excessive metals entering the water through springs can pose health risks for both humans and aquatic life [11]. Health risk assessment is commonly used to determine potential health hazards related to human activities. For the determination of potential health risks due to metals, two methods are commonly used: carcinogenic and non-carcinogenic assessment [5]. Non-cancer risk assessment uses the Target Hazard Quotient (THQ) as adopted by the United States Environmental Protection Agency (USEPA) [5,14].
The enrichment factor is used to differentiate between natural and anthropogenic sources of pollution. It is a method commonly used to compare present-day metal concentrations in sediments against pre-civilization background levels, using concentrations in standard earth materials such as average shale as reference [15]. To assess the risk to humans from metal-contaminated foods, actual dietary intakes of metals should be estimated and compared with oral toxicity reference doses [14]. Estimating the actual intake of metals is essential to determine the effect on humans of frequent ingestion of particular pollutants. The target hazard quotient (THQ) was developed by the United States Environmental Protection Agency (USEPA) for the estimation of potential health risks associated with long-term exposure to chemical contaminants [16,17].
Chattri Dam, located in Haripur district (Khyber Pakhtunkhwa), is susceptible to metal contamination because, besides irrigation, the dam is used for fish farming and animal watering. Metal concentrations in water and sediments affect the fish species, and since fish are part of the aquatic food chain, there is a potential risk to humans. Therefore, this study aimed to analyze sediment, water and fish for various metals, to assess the level of metal contamination using the enrichment factor (EF), and to make a potential health risk assessment using the target hazard quotient (THQ).
Study area
The study was conducted on Chattri Dam (33.94414 N, 73.030634 E), located in district Haripur, Khyber Pakhtunkhwa (Pakistan). The dam lies about 800 m above sea level and was constructed in 1967 by blocking a natural spring coming from the mountains. The dam is a source of water for irrigation and fish farming, and provides opportunities for recreation [18]. The catchment area of Chattri Dam is 6.68 km2. The maximum height of the dam is 85 feet, the crest length 530 feet and the throwback 560 m. Fish species in the dam are Tor putitora (Mahseer), Cirrhinus mrigala (Mori), Catla catla (Thela fish), Hypophthalmichthys molitrix (Silver Carp), Ctenopharyngodon idella (Grass Carp), Labeo rohita (Rohu), and Puntius chola (Chiddu).
Methodology
For the determination of the water quality of Chattri Dam, 24 composite water samples were collected over a year (January to December 2012), twice a month. Water samples were analyzed in the laboratory for concentrations of selected metals (Zn, Cu, Fe, Mn, Cd and Ni) using an atomic absorption spectrometer, Model AAS 700 (Acetylene Flame); detection limits for Cd, Cu, Zn, Fe, Ni and Mn were 0.8, 1.5, 1.5, 5, 6 and 1.5 ppb, respectively. Methods were adopted according to the standard methods for the examination of water and wastewater [19]. A total of 16 sediment samples were collected from upstream and downstream locations in the dry season (May, June, December and January 2012). Sediments were then analyzed in the laboratory for metals (Zn, Cu, Fe, Mn, Cd and Ni) by the microwave aqua-regia digestion method [20]. In this method, 1 gram of each sample was digested with 15 ml of aqua regia (HCl, HNO3 and HClO4 in a ratio of 3:1:1); the flasks were covered and allowed to sit overnight. Samples were placed inside the digester at 120 °C until the vapors and the sample inside the flask turned clear. Samples were then cooled to room temperature, filtered, diluted with distilled water up to 50 ml, and aspirated into the atomic absorption spectrometer for analysis, with the same detection limits as above [20].
Three frequently consumed fish species, Cirrhinus mrigala (Mori), Hypophthalmichthys molitrix (Silver Carp) and Ctenopharyngodon idella (Grass Carp), were collected from the dam for analysis. Fish were sampled with the help of a fisherman using a fishing net. The organs (muscles, gills and liver) were digested with aqua regia on a digestion block. In brief, 1 gram of each organ was digested with 15 ml of aqua regia (HCl, HNO3 and HClO4 in a ratio of 3:1:1), and the flasks were covered and allowed to sit overnight to dissolve the tissue completely. Samples were placed in the digester at 120 °C until the vapor and sample inside the flask turned clear. Samples were cooled to room temperature, filtered and diluted to 50 ml. Finally, the samples were subjected to the atomic absorption spectrophotometer for analysis of Cd, Cu, Zn, Fe, Ni and Mn; detection limits were 0.8, 1.5, 1.5, 5, 6 and 1.5 ppb, respectively [20].
Enrichment factor (EF)
The source and extent of metal contamination in Chattri Dam, whether from natural or anthropogenic sources, was determined using the metal enrichment factor, which compares metal concentrations in sediments with reference values of the elements. EF was calculated with the following equation used by Zahra et al. [5]:

EF = (C_M / C_Mn)_sample / (C_M / C_Mn)_background    (1)

where (C_M/C_Mn)_sample is the ratio of the concentration of the metal of concern to that of the reference element in the sediment sample (mg kg-1), while (C_M/C_Mn)_background is the same ratio in an unpolluted reference sample. Manganese (Mn) was used as the reference element to calculate anthropogenic metal enrichments, and world average concentrations of the metals in unpolluted material (Zn, Cu, Fe, Mn, Cd, and Ni) were used as reference values [5]. Based on EF values, sediments were categorized into different classes: EF < 1 natural; EF 1-3 minor anthropogenic enrichment; EF 3-5 moderate; EF 5-10 moderately severe enrichment; EF 10-25 severe; EF 25-50 very severe; EF > 50 extremely severe [21].
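As a concrete illustration of Equation (1), the following Python sketch is ours, not the paper's: the sediment values are hypothetical, and the background concentrations stand in for the world-average reference values of [5]. It computes EF with Mn as the reference element and maps the result onto the categories of [21].

```python
# Hypothetical illustration of the EF calculation (Eq. 1) with Mn as reference.
BACKGROUND = {"Zn": 95.0, "Cu": 45.0, "Fe": 47200.0, "Cd": 0.3, "Ni": 68.0, "Mn": 850.0}

def enrichment_factor(sample, metal, ref="Mn"):
    """EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background."""
    return (sample[metal] / sample[ref]) / (BACKGROUND[metal] / BACKGROUND[ref])

def ef_category(ef):
    """Categories of Pheiffer et al. [21]."""
    for upper, label in [(1, "natural"), (3, "minor anthropogenic enrichment"),
                         (5, "moderate"), (10, "moderately severe"),
                         (25, "severe"), (50, "very severe")]:
        if ef < upper:
            return label
    return "extremely severe"

# Made-up sediment concentrations (mg/kg) for demonstration only.
sediment = {"Zn": 180.0, "Cu": 20.0, "Fe": 9000.0, "Cd": 0.1, "Ni": 25.0, "Mn": 190.0}
for m in ("Zn", "Cu", "Fe", "Cd", "Ni"):
    ef = enrichment_factor(sediment, m)
    print(f"{m}: EF = {ef:.2f} ({ef_category(ef)})")
```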
Health risk assessment (THQ)
For non-carcinogenic health effects, the target hazard quotient (THQ) was calculated using the equation given by Iwegbue [17] and Akoto et al. [22].
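A commonly used USEPA-style form of the THQ equation, given here as a reference point rather than as the paper's exact equation, is:

THQ = (EF_r × ED × FIR × C) / (RfD × BW × AT) × 10^-3

where EF_r is the exposure frequency (days/year), ED the exposure duration (years), FIR the fish ingestion rate (g/day), C the metal concentration in fish (mg/kg), RfD the oral reference dose (mg/kg/day), BW the average body weight (kg) and AT the averaging time for non-carcinogens (days). THQ < 1 indicates that no non-carcinogenic effect is expected.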
Results and Discussion
The concentrations of zinc (Zn), copper (Cu), iron (Fe), manganese (Mn), cadmium (Cd) and nickel (Ni) in water samples were within the normal range when compared with the WHO guidelines (Table 1). The trend was Fe>Zn>Cu>Ni>Mn>Cd in the summer season, while in winter the order was Fe>Zn>Cu>Mn>Ni>Cd. The results suggest that metal concentrations in the summer and winter seasons were almost the same. Higher concentrations were observed in samples collected in the rainy period. This is in agreement with Zahra et al. [5], who suggested that dilution during rainfall mixes polluted and unpolluted water and decreases heavy metal concentrations in the post-rainy season.
Overall, Fe showed higher concentrations than the other metals. Perez et al. [23] reported higher concentrations of Fe in water and sediments of the Alzate Reservoir because the Lerma River drains a terrigenous sedimentary formation rich in Fe-oxyhydroxides; our findings agree with Perez et al. [23]. In our study, metal contamination resulted from both natural and anthropogenic sources, such as erosion and the discharge of municipal waste. This is supported by Perveen et al. [24], who reported that metal contamination in water is caused by both natural and human activities. In the future, continuous pollution of such water over a longer period of time may cause accumulation of Cd up to levels toxic to living biota.
In sediments, concentrations of metals were below the range of the international sediment quality guidelines (Table 1). The order of metals was Fe>Zn>Mn>Ni>Cu>Cd at both the upstream and downstream sites. Higher concentrations of metals were recorded upstream; anthropogenic activities such as sewage discharge, municipal waste and agricultural activities in the surrounding villages were suggested to be responsible. Other sites were less contaminated, and a decline was observed downstream. This finding supports the study of Iqbal et al. [25] on Rawal Lake, Pakistan, where sediments collected upstream, at receptor sites for domestic waste, agricultural runoff and the entrance of the natural spring from the mountains, showed high concentrations of all metals [25].
Fe was found to be high in sediments compared with the other elements. A relatively high concentration of Fe was also recorded in the sediments of Rawal Lake, Pakistan [25]. The presence of metals in sediments is associated with the nature of the source from which they enter the reservoir and with anthropogenic activities in the surrounding areas, e.g., runoff from cropland. For example, cadmium occurs as an impurity in phosphate fertilizer and is a possible source in sediment coming from the surrounding agricultural fields [13]. Similarly, farmers use natural fertilizer, which is a major source of Ni and Cd, since people dispose of rechargeable batteries along with the solid waste used in compost preparation. Possible sources of Cu and Zn are the sewage water used for irrigation in the surroundings and the compost [26].
In comparison, metal contamination in water and sediments clearly indicated that sediments were more contaminated than water. The higher concentration of metals (Cu, Mn, Zn, Cr, Ni) in sediments compared with water may result from adsorption to sediment particles [6]. Concentrations of the selected metals in sediments were many-fold higher than in the dam's water; water quality in the dam was relatively better than that of the sediments.
EF values were compared with the categories given by Pheiffer et al. [21]. Zinc, with an EF value of 9.7, showed moderately severe enrichment; copper, with an EF value of 1, showed minor anthropogenic enrichment at the downstream site; and the other metals showed no enrichment (Table 1).
The metal enrichment values thus indicate that the accumulation of metals such as Cd, Ni and Fe was due to natural sources (erosion), while Cu showed an anthropogenic source of contamination (municipal sewage discharge) according to the categories in [21].
In the selected fish species, concentrations of the selected metals were below the adverse level (Table 2). Metal concentrations were in the order Fe>Ni>Zn>Cu>Cd>Mn in Cirrhinus mrigala, Fe>Ni>Cu>Cd>Mn>Zn in Hypophthalmichthys molitrix and Fe>Ni>Cu>Zn>Mn>Cd in Ctenopharyngodon idella. The level of metal accumulation varied among tissues. Among the selected organs (muscles, gills and liver), the liver was the most contaminated. The higher concentration of metals in the liver agrees with the findings of Khan et al. [9] from the Shah Alam River, Pakistan, who reported that muscles are not considered an active tissue for accumulation of metals, while the higher concentration in the liver is attributed to its storing ability [9]. In Cirrhinus mrigala, Hypophthalmichthys molitrix and Ctenopharyngodon idella, average values of Fe and Cu were high compared with the other selected metals but are considered negligible in comparison with the USEPA limits [27]. The high concentration of Fe is considered safe due to its beneficial effects for organisms and its vital role in hemoglobin formation. The presence of metals is an indicator of the extent of pollution of the dam. Metals are not biodegradable, stay in fish tissues for a long time and enter the food chain through consumption [13]. The results of the health risk assessment considered in this study are presented in Table 2. They indicate that all metals had THQ < 1; this level of exposure will not cause any harmful effect in humans [28]. Thus, no risk was associated with the consumption of these fishes from Chattri Dam. Exposure to metals was within limits, but Cd showed a potential risk for the future.
From a human health point of view, Cu, Zn and Cd values were safe and below the tolerable daily dietary intake limits of the USEPA [27] given in Table 2. It is concluded that the concentrations of metals in all parts of the fish species were within acceptable levels. The aquatic environment accumulates metals to nearly 100 times the concentration of metals in water; therefore, metals are of high concern [29]. Although metal contamination in the dam is not alarming, its accumulation can be dangerous in the future.
Conclusion and Recommendation
The enrichment factor showed that the value for Cu was minor and linked with anthropogenic sources, zinc showed moderately severe contamination, while Cd, Ni, Fe and Mn were within limits but their concentrations were increasing. The dam under study can be considered safe in terms of the target hazard quotient (THQ) for all studied metals. To cope with the increasing levels of different metals, periodic monitoring is recommended [30][31][32].
"year": 2017,
"sha1": "1e0e5521ec1dab985ab5dab23dc4046de7d6ccf7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19080/artoaj.2017.10.555788",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "89738856170c8495b69ff9bbd335167574710c7c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Designing a new sustainable Test Kit supply chain network utilizing Internet of Things
The advent of COVID-19 put great economic pressure on countries worldwide, especially low-income countries. Providing test kits for COVID-19 posed a huge challenge at the beginning of the pandemic, especially for low-income and less developed countries that did not have the technology to produce the kits and had to import them, which itself imposed high purchase and distribution costs. This paper proposes a sustainable COVID-19 test kits supply chain network (STKSCN) for the first time to fill this gap. Distribution and transportation of test kits, location of distribution centers, and management of used test kits are considered in this network. A mixed-integer linear programming, multi-objective (MO), multi-period, multi-resource mathematical model is developed for the proposed supply chain. Another contribution is the design of a platform based on the Internet of Things (IoT) to increase the speed, accuracy and security of the network: patients set their appointments online by registering their personal details and clinical symptoms. An augmented ε-constraint2 (AUGMECON2) method is proposed for solving small and medium-sized instances of the problem. Also, two meta-heuristic algorithms, namely NSGA-II and PESA-II, are presented to solve the small, medium and large instances. The Taguchi method is utilized to tune the parameters, and five performance metrics are suggested for comparing the meta-heuristics. In addition, a case study in Iran is presented to validate the proposed model. Finally, the results show that PESA-II is more efficient and performs better than the other methods based on the assessment metrics and computational time.
Introduction
In late 2019, the outbreak of an unknown virus in Wuhan, China, took the world by surprise. The COVID-19 virus quickly spread around the world. According to the World Health Organization, more than 220 countries have been affected by the pandemic so far, more than 169 million people have been infected, and more than 3 million have died. The mortality rate of the virus is estimated at 1 to 5 percent, varying with each patient's immune system (Chen et al., 2020). The medical and health facilities and equipment available in each country certainly influence the number of infected people and the death rate. Table 1 shows the morbidity, mortality, and recovery rates of some countries according to the WHO.
The coronavirus pandemic has devastated governments and people in many ways. Under COVID-19 conditions, the demand for plastic products such as masks, gloves, and corona test kits increased dramatically, leading to an increase in plastic waste. If these products are not appropriately disposed of or recycled after consumption, they can pose serious risks to the environment; at the same time, the need for such equipment in healthcare and various industries is becoming increasingly felt. Demand for equipment such as masks, gloves, and corona test kits for hospitals and clinics has significantly increased during the pandemic. Due to the novelty of this issue, little research has addressed SC design and allocation under epidemic conditions while including aspects of sustainability. Nowadays, digital technologies are widely used in healthcare. The use of the Internet of Things (IoT) in this area has accelerated processes, increased accuracy, and made processes safer from violations. While it can reduce additional costs, it can also increase customer satisfaction. The IoT is a network of intelligent devices, sensors, and applications that can be used to collect data, analyze existing systems, and make informed decisions (Rahman et al., 2020). Using the IoT makes it possible to control SC violations and support SC management by tracking and sending sensory data to the cloud (Hasan et al., 2019).
In this paper, a sustainable SC network for coronavirus test kits is designed. Given the importance of the concept of sustainability, all three of its aspects are considered in the proposed network. To this end, a MO MILP model is proposed to decrease the network's total cost, adverse environmental effects, and negative social impact. IoT, which has rarely been utilized in healthcare studies, is used to estimate the required demand and allocate hospitals to patients. Three methods, one exact and two meta-heuristics, are used to solve the proposed model. Small and medium-sized instances are solved using the augmented ε-constraint2 (AUGMECON2) method, and for large instances, two meta-heuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the Pareto envelope-based selection algorithm (PESA-II), are developed. The results of the proposed methods are then compared with each other.
The rest of the paper is organized as follows. Related studies are reviewed in the next section. Section 3 describes the structure of the presented network and presents the notation and mathematical model. The solution approach and encoding scheme are illustrated in Section 4. A numerical example, computational experiments, parameter tuning, the case study, sensitivity analysis, and managerial insights are provided in Section 5, and the conclusion and future work are discussed in the final section.
The related works
Since the outbreak of COVID-19 has had a severe effect on people's lives, appropriate management methods in the economic, social, and environmental dimensions are necessary to cope with it. According to evaluations, Waste Management (WM) during the outbreak has faced serious challenges that can also affect the environment. Also, the lack of an integrated SC for the production, allocation and distribution of test kits, disinfectants, and plastic equipment, including masks, gloves, syringes, etc., has created problems during the pandemic. This section reviews related works to further explore the field.
Regarding the healthcare SC, Moslemi et al. (2017) presented a MO model for a multi-echelon, multi-product closed-loop medicine SC with quality considerations. They provided a conceptual model of the medical SC that takes environmental concepts into account. The proposed model includes three objective functions: minimizing production costs, including transportation, purchase, maintenance, breakdown, commissioning, collection, and disposal costs; maximizing the quality of products; and minimizing the environmental impact of products and transportation. Savadkoohi et al. (2018) proposed a three-echelon, multi-period medicine SC network with a distribution-location-inventory model considering perishable products. They developed a possibilistic programming approach to tackle uncertain parameters and presented a real case study to give a practical description of the proposed model. Sabouhi et al. (2018) investigated a combined method based on Data Envelopment Analysis and mathematical programming. A two-stage possibilistic-stochastic programming model was developed to design an integrated pharmaceutical SC network and perform supplier selection. Assumptions such as minor and complete disruption of suppliers and small discounts for raw materials were considered, and a case study from the pharmaceutical industry was applied. Roshan et al. (2019) developed a two-stage method to manage the medicine SC with perishable products in crisis, intending to minimize unmet demand, decrease total costs, and increase social responsibility satisfaction. A robust optimization method for reverse drug SC coordination under reversible strategies was proposed by Taleizadeh et al. (2019) to maximize the benefits of reverse drug chain members. Zhang et al. (2019) developed a two-stage pharmaceutical SC from the perspective of the pharmaceutical manufacturer, aiming to decrease the total cost; they first show that the problem is NP-hard and then provide a pseudo-polynomial-time algorithm to solve it. Nasrollahi and Razmi (2019) developed a mathematical model for an integrated multi-period, multi-echelon pharmaceutical SC network with maximum expected coverage under uncertainty; a particle swarm optimization algorithm and a non-dominated sorting genetic algorithm were used to solve the model. Aghababaei et al. (2019) proposed a two-stage fuzzy optimization model for the rationing problem of rare medicines under uncertainty; the first stage minimizes the maximum profit of the supplier, and the second stage minimizes the maximum amount of shortage. Goodarzian et al. (2020) designed a comprehensive integer nonlinear MO mathematical model for a medicine SC network that includes production, distribution, procurement, ordering, inventory maintenance, allocation, and routing. Objectives of the proposed model include minimizing network costs, minimizing total network flow time, and maximizing SC reliability; five MO meta-heuristic algorithms, named MOSEO, MOSA, MOPSO, MOKA, and MOFFA, are proposed to find optimal solutions. Sazvar et al. (2021) developed a scenario-based MO integer linear programming model to design a sustainable pharmaceutical closed-loop SC. The SC includes the reverse flow of expired medicines, divided into three categories: disposable, reproducible, and recyclable. They then combined the LP-metrics method and a heuristic algorithm into a new hybrid method.
A network for vaccine distribution in developing countries was provided by Yang et al. (2021). A MIP model that reduces the cost of the entire network is presented, together with an innovative algorithm suitable for very large problems; the performance of the model and algorithm was measured with real data from four countries in South Africa. Singh et al. (2021) proposed a network distribution system simulation model with three scenarios to investigate problems in the food SC under pandemic conditions. They placed great emphasis on resilient SC flexibility during the pandemic and claimed that the proposed simulation model could create a resilient and responsive food SC. An outbreak of a pandemic can also profoundly affect the workforce. Another study summarized the effects of the virus in six cases, three of which were positive and three negative. Increased organic and inorganic waste, disruption of the waste recycling cycle, and specific adverse effects such as contamination of sewage systems due to the use of disinfectants to prevent the spread of the virus in some countries such as China were among the negative effects of the coronavirus. The authors also claimed that COVID-19 reduced air pollution and the emission of gases such as nitrogen dioxide, in addition to helping to clean up beaches and reduce harmful noise. However, they concluded that some of the positive effects of the virus do not appear to be lasting and that serious, intelligent management is required to deal with its negative effects in the long run. Ivanov and Dolgui (2021) investigated various SCs and identified the disruptions in them; their goals are to collect SC disorders and to identify and classify methods to deal with them. Nagurney (2021) presented an optimization model for a SC in a pandemic situation, considering the workforce as an important element linking the economic activities of the SC network and its related capacities; the chain includes medical and protective equipment and some special food items. Goodarzian et al. (2021a), due to the lack of mathematical models addressing sustainability in the pharmaceutical industry and the new conditions created by COVID-19, designed a comprehensive medicine SC covering production, distribution, inventory control, allocation and location. A MO, multi-product and multi-level model was presented. They proposed three hybrid algorithms, based on the ant colony optimization algorithm, the fish swarm algorithm and the firefly algorithm, hybridized with a variable local search, to solve the corresponding network model. Due to the sensitivity of meta-heuristic algorithms to input parameters, they used the response surface approach to adjust the parameters. They eventually used a real case study and concluded from the numerical results that the fish swarm algorithm was more efficient than the other algorithms. A systematic review of studies focusing on the impact of the COVID-19 epidemic on SCs has been conducted by Chowdhury et al. (2021). A model for medical WM during the outbreak was proposed by Yu et al. (2020); this model looks for optimal locations for temporary facilities and different transportation methods for medical WM. With the prevalence of COVID-19, the consequent adherence to personal hygiene protocols, and increased referrals to medical centers, the amount of medical and infectious waste has risen significantly.
An efficient and reliable infectious medical waste reverse logistics network was designed by Kargar et al. (2020a). A linear programming model is presented that aims to decrease the total costs and associated risk and to minimize the amount of uncollected waste at medical waste generation centers; a real case study from Tehran is used to validate the model. Kargar et al. (2020b) provided a multi-objective linear programming model under uncertain parameters to design a medical waste reverse SC. A robust possibilistic programming method is used to handle the uncertain parameters, and a fuzzy goal programming approach is suggested to solve the MO model. Tirkolaee et al. (2021) presented a MILP model for a sustainable medical WM problem with time windows in the coronavirus pandemic; minimizing travel time, violations of time windows, and the total environmental impact inflicted on the population around disposal sites are the aims of the model. The classification of the reviewed papers is shown in Table 2.
According to the literature, very few studies examine all three aspects of sustainability under pandemic conditions with operations research (OR) methods. The coronavirus detection test kit is an essential tool that some developing or poor countries may not be able to produce, or may take a long time to reach production capacity for. In this research, a test kit SC network is designed that includes distribution, allocation, location and WM. The proposed model is solved by exact and meta-heuristic algorithms. The novelties of the current study that distinguish it from other studies are as follows:
• Designing a new distribution, allocation, and WM SC network for coronavirus test kits for the first time.
• Proposing all sustainability aspects (economic, environmental, and social) in the proposed test kit SC.
• Considering the population density rate on the routes between different echelons of the network to select the best route, with the aim of reducing destructive social effects.
• Providing an IoT application to identify patients and to accelerate and organize healthcare services.
• Using one exact method (AUGMECON2) and two meta-heuristic algorithms to solve the presented model at small, medium and large sizes; then comparing and validating the presented methods using five performance metrics and a real case study in Iran.
The description of the model and mathematical modeling
Problem statement, related figures, notations and mathematical modeling are provided in this section.
Problem statement
In this study, a sustainable SC network for coronavirus test kits (STKSCN) is designed. An MO, multi-period, multi-resource mathematical model is formulated to implement this network. The proposed network has four levels: suppliers, Distribution Centers (DCs), hospitals and WM centers, which are interconnected in that order. Fig. 1 depicts the network. Since the health of patients is a priority, a sudden increase in hospital referrals and in the need for test kits may occur, and a large number of kits may be lacking in the country; therefore, the network uses both an internal manufacturer of coronavirus test kits and an external supplier. There are two types of test kits in this network: PCR and Antibody tests. Patients with suspected COVID symptoms should have a PCR test, while patients who have been through the disease and want to make sure they have antibodies in their body take the Antibody test. The kits are first transported to DCs that are located and constructed by the government. After packing, they are sent to hospitals according to the declared demand. After use, the test kits are immediately placed by the operator in a special box (safety box) placed in each hospital room; they are collected at the end of the day and sent directly to the waste disposal unit to be incinerated and disinfected. Of course, not all hospitals have incinerators. Due to the weakness of the country's health system and the critical situation in this chain, hospitals that do not have incinerators take their medical waste to appropriate locations to be buried following health protocols.
An IoT-based application that can be installed on a mobile phone or computer is designed. Data is collected by this application, and after analysis, decision-making and allocation, the necessary information is sent to hospitals and patients. The data-gathering process is shown in Fig. 2. The application works as follows: after entering the application and registering their first and last name and address, the patient has to answer a few questions about their clinical symptoms or previous tests. For the PCR test, if at least 2 out of 5 questions are answered in the affirmative, the patient is allowed to take the test, and the nearest hospital is assigned to them based on their geographical location; the hospital address is sent to the patient in the same application. For the Antibody test, if the patient meets the two necessary conditions, they are allowed to take the test, and the same steps are performed. The steps that the patient must go through in the application are shown in Figs. 3 and 4, and a small sketch of the decision rule is given below.
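The following Python fragment is illustrative only (the function and field names are ours, not from the paper's application); it encodes the "2 of 5 symptoms" rule for the PCR test and the nearest-hospital assignment by geographical distance:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pcr_test_allowed(symptom_answers):
    """Permit the PCR test if at least 2 of the 5 clinical questions are 'yes'."""
    return sum(symptom_answers) >= 2

def assign_hospital(patient_loc, hospitals):
    """Allocate the geographically nearest hospital to the patient."""
    return min(hospitals, key=lambda h: haversine_km(*patient_loc, h["lat"], h["lon"]))

# Hypothetical hospitals and a patient reporting 2 of 5 symptoms.
hospitals = [{"name": "Hospital A", "lat": 35.74, "lon": 51.38},
             {"name": "Hospital B", "lat": 35.70, "lon": 51.42}]
answers = [True, False, True, False, False]
if pcr_test_allowed(answers):
    print("Assigned:", assign_hospital((35.72, 51.40), hospitals)["name"])
```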
Cost management in the health logistics network is essential during a pandemic. On the other hand, the amount of plastic and infectious waste has increased significantly. In the long run, the increase in plastic waste will cause irreparable damage to the environment, and improper management of infectious waste is dangerous for public health. The purposes of the proposed model are to minimize network costs, minimize negative environmental impacts and improve social indicators, so that all three aspects of sustainability are covered. The assumptions of the current problem are as follows:
• There are two types of test kits. The first category is PCR test kits, which indicate the presence or absence of the virus; the second category is Antibody test kits, which are used to assess the patient's immune system.
• The DCs and landfills must be located.
• The hospitals, manufacturers, and foreign suppliers are pre-determined.
• Some hospitals are not equipped with incinerators; these hospitals transfer their waste to the landfill at the end of each period.
• DCs and landfills have a fixed establishment cost.
• The DCs, manufacturers, and foreign suppliers have limited capacity.
Notations
The notations, parameters, and decision variables are stated.
Indices, parameters and decision variables
[Notation list: the index sets cover test kit types, manufacturers, suppliers, DCs, hospitals, landfill centers and time periods; the parameters cover costs, capacities and demands; the decision variables include, for example, the number of vehicles used on the route from hospital h′ to a landfill center in each period.]
Mathematical modeling
A MO, multi-period, multi-resource, distribution-location-allocation-inventory-WM model is presented in this subsection, aiming to minimize total network costs, negative environmental effects and negative social effects.
The objective function (1) seeks to minimize the total costs of the network. Its terms are, respectively, the purchasing cost, the transportation cost between manufacturers and DCs, the fixed establishment cost of DCs, the transportation cost between suppliers and DCs, the transportation cost between DCs and hospitals, the incineration cost, the establishment cost of landfill centers, the transportation cost between hospitals and landfill centers, and the landfilling cost; the last two terms account for the inventory holding cost and the shortage penalty cost.
Objective function (2) aims to reduce the adverse environmental impact of the SC. The first term contains the amount of carbon emitted during transportation from manufacturers to DCs, suppliers to DCs, DCs to hospitals, and hospitals to landfill centers; the second term calculates the amount of carbon emitted when burning used kits in the incinerators.
Equation (3) aims to minimize the adverse social index by considering the population density rate on the routes from manufacturers to DCs, suppliers to DCs, DCs to hospitals, and hospitals to landfill centers. Constraints (4)-(6) ensure that DCs, manufacturers and suppliers cannot receive, produce or supply kits in excess of their limited capacities in each period.
Constraint (7) is the demand constraint and determines the amount of shortage in each hospital per period.
Constraints (8)-(11) ensure that kits are transported to or from a DC, or transported to a landfill center, only if these facilities are opened.
Constraints (12)-(15) calculate the number of vehicles used between manufacturers and DCs, suppliers and DCs, DCs and hospitals, and hospitals and landfill centers in each period.
Equation (16) is the inventory balance for each DC in each period.
Constraint (17) specifies an upper limit on the amount of each kit type transported to a DC in each period.
Constraints (18) and (19) determine the number of used kits that must be burned and the number of used kits that must be transported to landfill, respectively.
Constraints (20)-(23) ensure that, at most, each DC is allocated to one manufacturer, each DC is allocated to one supplier, each landfill center is allocated to one hospital, and each hospital is allocated to one DC. A minimal code sketch of this constraint structure follows.
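The following minimal PuLP sketch is a simplified single-period toy, not the paper's full model; all numbers and names are assumed. It encodes a DC-opening decision, capacity linked to opening, and demand with a shortage variable, mirroring the roles of objective (1) and constraints (4), (7) and (8)-(11):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

kits, dcs, hospitals = ["PCR", "AB"], ["DC1", "DC2"], ["H1", "H2", "H3"]
cap = {"DC1": 500, "DC2": 400}                  # DC capacities (assumed values)
open_cost = {"DC1": 1000, "DC2": 800}           # fixed establishment costs (assumed)
demand = {(k, h): 60 for k in kits for h in hospitals}
ship_cost, shortage_pen = 2.0, 50.0

prob = LpProblem("mini_STKSCN", LpMinimize)
x = LpVariable.dicts("ship", [(k, d, h) for k in kits for d in dcs for h in hospitals],
                     lowBound=0)
short = LpVariable.dicts("short", [(k, h) for k in kits for h in hospitals], lowBound=0)
y = LpVariable.dicts("open", dcs, cat=LpBinary)

# Objective: shipping + shortage penalty + fixed opening costs (cf. objective (1)).
prob += (lpSum(ship_cost * x[k, d, h] for k in kits for d in dcs for h in hospitals)
         + lpSum(shortage_pen * short[k, h] for k in kits for h in hospitals)
         + lpSum(open_cost[d] * y[d] for d in dcs))

# Shipping allowed up to capacity, and only if the DC is opened (cf. (4), (8)-(11)).
for d in dcs:
    prob += lpSum(x[k, d, h] for k in kits for h in hospitals) <= cap[d] * y[d]

# Demand is met up to a shortage variable (cf. constraint (7)).
for k in kits:
    for h in hospitals:
        prob += lpSum(x[k, d, h] for d in dcs) + short[k, h] >= demand[k, h]

prob.solve()
print({d: int(y[d].value()) for d in dcs})
```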
Solution approach
In this section, to solve the current model, cope with the three objectives, and find the Pareto optimal solutions, the augmented ε-constraint2 (AUGMECON2) method is applied. This method is effective in producing the Pareto set for mathematical programming problems compared with the original ε-constraint method and some other methods, such as the weighted sum method.
Because of the complexity of the problem, it is hard, or very time-consuming, to solve large instances with this method; NSGA-II and PESA-II are therefore utilized to cope with the NP-hard problem. These algorithms were selected because they are swift and flexible and search a wide range to find Pareto solutions; furthermore, their implementation is not complex.
Augmented ε-constraint2 (AUGMECON2)
AUGMECON2 improves AUGMECON, which uses slack variables in each iteration; redundant iterations are eliminated and calculation times are reduced (Mavrotas and Florios, 2013). For the presented (minimization) model, the method takes the following form:

min f1(x) − δ · (s2/r2 + 10^-1 · s3/r3)
s.t. f2(x) + s2 = e2
     f3(x) + s3 = e3
     x ∈ S, s2 ≥ 0, s3 ≥ 0

where e2 and e3 are the right-hand-side parameters that are varied over a grid of points for objectives 2 and 3, r2 and r3 are the ranges of the respective objective functions, δ is a small positive constant, and s2 and s3 are slack variables.
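Procedurally, AUGMECON2 sweeps e2 and e3 over a grid built from the objective ranges and solves the augmented problem at each grid point. The following schematic Python sketch is ours; solve_eps is a hypothetical routine standing in for the single-objective solve above:

```python
def augmecon2(solve_eps, f2_range, f3_range, grid=10, delta=1e-3):
    """Schematic AUGMECON2 grid loop over the second and third objectives.

    solve_eps(e2, e3, r2, r3, delta) should solve the augmented problem and
    return a solution, or None when the e-constrained problem is infeasible.
    """
    (f2_min, f2_max), (f3_min, f3_max) = f2_range, f3_range
    r2, r3 = f2_max - f2_min, f3_max - f3_min
    pareto = []
    for i in range(grid + 1):
        e2 = f2_min + i * r2 / grid
        for j in range(grid + 1):
            e3 = f3_min + j * r3 / grid
            sol = solve_eps(e2, e3, r2, r3, delta)
            if sol is not None and sol not in pareto:   # keep feasible, new points
                pareto.append(sol)
    return pareto
```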
Non-dominated sorting genetic algorithm (NSGA-II)
NSGA-II is one of the most successful and widely used algorithms, introduced by Deb et al. (2002). Its advantages include low computational complexity and compatibility with constraints. The NSGA-II process consists of three main phases: generating the population, computing the crowding distance of members, and selecting the solutions that best satisfy all objective functions. The crowding distance is a factor for better selection of solutions in terms of dispersion; a small sketch of this step is given below. More information about this method can be found in other studies (Taleizadeh et al.).
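As an illustration of the crowding-distance phase, the following Python sketch (ours, with made-up objective vectors) computes the standard NSGA-II crowding distance for a small front:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors (a standard NSGA-II step).

    front: list of tuples, one tuple of objective values per solution.
    Returns one distance per solution; boundary solutions get infinity.
    """
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / span
    return dist

# Example: three hypothetical cost/emission trade-off points.
print(crowding_distance([(1.0, 9.0), (2.0, 5.0), (4.0, 1.0)]))
```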
Pareto envelope-based selection algorithm (PESA-II)
PESA-II was introduced by Corne et al. (2001). It is an evolutionary optimization algorithm that solves MO problems based on Pareto selection and the concepts of the genetic algorithm, and it improves on the PESA algorithm. Region-based selection is its distinctive feature: selective fitness is assigned to regions of the objective space rather than to individuals, which enhances the Pareto frontier. For more information about PESA-II, readers may consult other studies (Arjmand et al., 2020; Omidi Brojeni et al., 2021). The main steps and the pseudo-code of PESA-II are depicted in Figs. 7 and 8.
Computational experiment
In this study, to solve the proposed multi-objective model, AUGMECON2 is implemented in GAMS (24.1.2) with the BARON solver on a computer with 16 GB of RAM and a 2.70 GHz CPU. In addition, the two meta-heuristic algorithms, NSGA-II and PESA-II, are utilized to solve medium and large instances of the problem; they are implemented in MATLAB R2018b on the same PC. Finally, a real case study in Iran is presented to validate the current model.
Numerical example
The prices of foreign and domestic kits are estimated according to information announced by the Ministry of Health. The distances from manufacturers and suppliers to DCs, from DCs to hospitals, and from hospitals to landfill centers are generated randomly from a uniform distribution. Other parameters are estimated based on expert opinion. Several numerical test problems at small, medium, and large scale are presented in Table 3. The examples specify the numbers of test kits, manufacturers, suppliers, DCs, landfills, hospitals, and periods. The ranges of the parameters are shown in Table 4.
The results of solving the model by the exact method are shown in Fig. 9 and Table 5; the small and medium instances are solved by AUGMECON2.
Performance metrics
To measure the quality of the solutions obtained by the proposed algorithms, several performance metrics are presented. Convergence, diversity and the number of solutions are the aspects these metrics consider (Riquelme et al., 2015). To take these features into account and compare the meta-heuristic algorithms, a number of performance metrics, namely SM, CM, SNS, IGD and MID, are proposed.
Parameter tuning
Adjusting the parameters can raise the performance of the proposed methods and makes the conditions for comparing them fairer. In this study, the Taguchi method is utilized to tune the parameters. The method was introduced by Taguchi (1986) and differs from common engineering methods: Taguchi emphasizes quality control during product and process design, while other common methods are based on inspection during or after production. Taguchi uses common statistical tools in his quality improvement methods and has simplified them by identifying a set of powerful solutions for designing experiments and analyzing the results. Parameters are divided into controllable and uncontrollable (noise) factors. The Taguchi design uses orthogonal arrays: two different orthogonal designs are used for these two sets of parameters, an internal array for the controllable variables and an outer array for the noise variables. The combination of the two arrays provides a crossed array that gives information about the interactions between controllable and noise variables. By facilitating the experimental design process, orthogonal arrays make it possible to examine main and interaction effects with the fewest experiments in a reasonable amount of time. The factors and levels of the algorithms (e.g., the population size of NSGA-II at three levels) are shown in Table 6. In this approach, after adjusting the parameters of the algorithms using orthogonal arrays, the total number of experiments is reduced (Goodarzian et al., 2021a). For NSGA-II and PESA-II, the Orthogonal Arrays L9 and L27 with three levels are used, as specified in Tables 7 and 8, respectively. To find the optimum level among the proposed levels for each method, the Relative Percent Deviation (RPD) and S/N indices have been calculated.

Table 3. Test problem sizes (test kits / manufacturers / suppliers / DCs / landfills / hospitals / periods):
Small: problem1: 2/1/1/2/1/2/2; problem2: 2/2/2/2/1/2/3; problem3: 2/2/2/2/2/2/5; problem4: 2/2/2/3/2/3/6.
Medium: problem5: 4/4/5/6/5/6/12; problem6: 5/5/7/8/6/8/15; problem7: 5/6/10/12/10/10/20; problem8: 6/8/15/18/15/15/30.
Large: problem9: 8/10/18/20/18/16/32; problem10: 8/10/20/25/19/18/35; problem11: 9/12/25/30/20/20/40; problem12: 10/15/30/40/25/…

Fig. 9. CPU time of AUGMECON2 for small and medium size of problem.
Since the model's objective functions are to be minimized, the smaller-the-better forms are used: the loss function is calculated as in Eq. (28) and the S/N ratio as in Eq. (29):

L = (1/n) Σ_{i=1}^{n} y_i^2    (28)
S/N = −10 · log10(L)    (29)

where n indicates the number of rows of the orthogonal array and y_i illustrates the value of the solution for the i-th row of the orthogonal array. Figs. 10-11 show the S/N ratios for the proposed algorithms, respectively.
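For illustration, the following Python sketch (ours, with hypothetical run values) evaluates Eqs. (28)-(29) for one row of an orthogonal array, together with the RPD measure mentioned above:

```python
import math

def taguchi_loss(values):
    """Smaller-the-better loss, Eq. (28): mean of squared responses."""
    return sum(v * v for v in values) / len(values)

def sn_ratio(values):
    """Smaller-the-better S/N ratio, Eq. (29): -10 * log10(loss)."""
    return -10.0 * math.log10(taguchi_loss(values))

def rpd(alg_sol, min_sol):
    """Relative percent deviation against the best-known (minimum) solution."""
    return (alg_sol - min_sol) / min_sol

runs = [102.0, 98.5, 101.2]   # hypothetical objective values for one array row
print(round(sn_ratio(runs), 2), [round(rpd(v, 95.0), 3) for v in runs])
```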
Comparison of the proposed methods
In this section, the solutions of the meta-heuristic algorithms are compared by their CPU times and the proposed assessment metrics. The objective function values for each algorithm are provided in Table 9. The behavior of the algorithms is also compared by their CPU time in Fig. 12; based on the figure, it is obvious that PESA-II is swifter than NSGA-II on each test problem. As mentioned earlier, five performance metrics were proposed for a better and fairer comparison of the algorithms; the results are provided in Tables 10 and 11. In Figs. 13 and 14, some examples are run and the obtained Pareto fronts of each algorithm are shown. Furthermore, following the Pareto optimal analyses, a set of statistical comparisons is presented to find the best method, using the common Relative Deviation Index (RDI) metric.
The formulation of RDI is as follows (Mehdizadeh et al., 2015):

RDI = |Alg_sol − Best_sol| / (Max_sol − Min_sol)    (30)

where Alg_sol shows the value obtained by the method on a specific measurement scale, Best_sol indicates the best solution among all algorithms, and Max_sol and Min_sol are the maximum and minimum values among all results. The results are indicated in Fig. 15; a lower value of RDI is better, so PESA-II is more efficient.

Table 7. The proposed Orthogonal Array L9 for the NSGA-II algorithm.
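As a small worked example of Eq. (30), the following sketch (ours, with hypothetical objective values) computes RDI for two algorithms on one instance:

```python
def rdi(alg_sol, best_sol, max_sol, min_sol):
    """Relative Deviation Index; lower values indicate a better algorithm."""
    return abs(alg_sol - best_sol) / (max_sol - min_sol)

# Hypothetical first-objective values from the two algorithms on one instance.
vals = {"NSGA-II": 1520.0, "PESA-II": 1480.0}
best, hi, lo = min(vals.values()), max(vals.values()), min(vals.values())
print({a: rdi(v, best, hi, lo) for a, v in vals.items()})
```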
Comparison of results
In this article, to compare and evaluate the presented meta-heuristic algorithms, five different assessment metrics, named MID, IGD, SNS, NPS, and MS, are used. According to the results in Tables 10 and 11, we conclude that the PESA-II algorithm is more efficient than the NSGA-II algorithm: the higher the MS, NPS and SNS metrics, and the lower the IGD and MID metrics, the more efficient the algorithm. Also, based on Fig. 12, the time needed by PESA-II is less than that of NSGA-II; as a result, this algorithm is faster and smoother. For additional analysis, we use the RDI index, calculated according to Eq. (30), with results shown in Fig. 15. It is worth pointing out that a lower RDI value indicates better quality and efficiency; therefore, PESA-II is more efficient.
Case study
In order to validate the proposed model and bring the conditions closer to the real world, a case study in Tehran, the capital of Iran, is investigated; Fig. 16 indicates its location. With the outbreak of the corona pandemic, the distribution and allocation of COVID test kits became a big challenge for decision makers due to the shortage of test kits and the sudden increase in suspected COVID-19 cases. Eight hospitals, six DCs, four landfills, four suppliers, and three manufacturers are considered in the case study; their locations are shown in Fig. 17, and other information is provided in Tables 12-14. Also, two types of test kits and fifteen time periods are considered. The dataset of the case study is obtained from the IoT application, Google Maps, the Ministry of Health and Treatment, and the Municipality of Tehran. The distances between network components are calculated with Google Maps; the demand of each hospital in each period is obtained from the IoT application, and the Ministry of Health and the Municipality of Tehran provided the other data.
Case study results
The outcomes of the case study are obtained with the PESA-II algorithm; Tables 15-20 show the case study solutions. It should be noted that Nikan, Milad and Erfan hospitals are equipped with incinerators and do not need to transfer their infectious waste to landfills.
As can be seen in Table 15, Saadat Abad, Nobonyad, Piroozi and Rah Ahan are the locations selected for DCs, while Haft-e-Tir and Jomhoury are rejected. Also, according to Table 16, Ariashahr, Khaksefid and Darband are selected as landfill locations, whereas Yaftabad is rejected. Table 17 indicates the case study solution value for each objective function. Tables 18-20 show the allocation relationships between manufacturers and DCs, suppliers and DCs, DCs and hospitals, and hospitals and landfill centers.
Sensitivity analysis
A set of sensitivity analyses is performed to evaluate the proposed model, using the case study setting (two kinds of COVID test kits, three manufacturers, four suppliers, six DCs, four landfills, eight hospitals and fifteen time periods). The objective functions and several important parameters, such as demand, vehicle capacity (CC) and DC capacity (CI), are considered in the sensitivity analysis, and the variations of the objective functions relative to these parameters are measured. In Fig. 18, a 3D plot of the three objective functions is shown. According to Fig. 19, as the capacity of vehicles increases, the first objective function decreases, because the number of vehicles used in a period decreases dramatically and the transportation cost falls, which ultimately reduces the final cost; for example, when vehicle capacity increases by 10%, the total cost decreases by 3653. Objective functions 2 and 3 are also reduced, because environmental pollution and social harm are reduced by reducing vehicle traffic. Increasing the capacity of vehicles can therefore yield better results.
As can be seen in Fig. 20, as the capacity of DCs increases, the total shortfall for a hospital decreases dramatically. Here Nikan hospital has been selected as a sample: when the capacity of the DC increases by 20%, the total shortage of Nikan hospital decreases to 45.
Managerial insight
As mentioned in the literature section, there is a lack of mathematical modeling in research investigating COVID-19; most studies are conceptual. In this study, a MILP model with three objective functions and several mathematical constraints is presented, solved by exact and meta-heuristic algorithms, and the results are compared. Healthcare decision makers can utilize the methods used in this paper to handle their plans for a pandemic and reduce its harmful effects. Nowadays, cities are moving toward smartness in every aspect, and the Internet of Things (IoT) is one of the elements of this smartness that plays a key role in decision makers' plans. In recent years, IoT has made a major contribution to medical progress in developed countries; Fig. 21 indicates the influence that IoT can have in healthcare management, though in developing countries it still does not play a remarkable role in this respect.
Using a platform based on the IoT for a health supply chain during a pandemic primarily increases the speed of the process, which is the most important point in crisis control and management. In addition, by reducing direct human intervention and human error, it improves the accuracy and security of the network. It reduces in-person visits for appointment booking and crowding under these conditions, and as a result costs and environmental damage are also reduced. Therefore, in all respects, the use of such a network can be effective and profitable for supply chain managers. This research, investigated in Iran, can help decision makers develop and use IoT systems to prioritize suspected patients for COVID testing. IoT enables appropriate allocation, location and distribution in healthcare systems. Also, in these fatal situations, where the number of test kits is low in developing countries, more critical patients will have access to kits sooner with proper planning and prioritization.
Furthermore, if the capacity of the vehicles that carry test kits or other pandemic-related medical products is increased, total costs can be reduced. Due to the decrease in the number of required vehicles, the emission of polluting gases is also reduced and the environmental indicators improve, provided, of course, that managers pay attention to the emission rates of the vehicles that will replace them.
Conclusion
In the current study, a multi-period, multi-echelon sustainable COVID-19 test kit SC network (STKSCN) is designed to address the location-allocation-distribution problem for COVID-19 test kits. Manufacturers/suppliers, DCs, hospitals and landfill centers are the four echelons of the proposed network. Two types of test kits (domestic and foreign) are considered. There are two types of hospitals: some have incinerators and others do not; hospitals without incinerators must transport their used test kits to landfill centers at the end of each period. A MILP MO model is presented that aims to minimize total network costs, negative environmental effects, and negative social impacts. To estimate the demand of each hospital in each period, an IoT-based application is provided that collects information from patients, gives them permission to test, and determines which hospital is allocated to them. One exact method (AUGMECON2) and two meta-heuristic algorithms (NSGA-II and PESA-II) are used to solve the proposed STKSCN. Finally, a case study in Iran is presented, and a set of sensitivity analyses is performed to validate the model and compare parameters and objective functions.
Some of the limitations we faced include the lack of a database on transportation costs: we had to ask taxi drivers as experts, received different answers, and averaged them to obtain the cost values used in the case study. Another problem is internet access: this issue may be trivial for developed countries, but some poor and underdeveloped countries face problems accessing the internet in rural areas, which can disrupt an IoT-based healthcare supply chain.
The final results of the model solving show that PESA-II is better than the other presented algorithms, which is confirmed by comparing the results of the assessment metrics and the CPU time chart. The results of the case study indicate that four distribution centers and three landfill centers must be established. As can be seen, by increasing the capacity of vehicles, with regard to their carbon emissions, managers can reduce costs.
For future research, the following directions are suggested: machine learning methods can be used for demand forecasting, supplier selection or other matters; the concept of uncertainty can be considered, for example through fuzzy or stochastic programming; and the concept of resiliency, which can be crucial for healthcare SCs, should be addressed.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2023,
"sha1": "2a5adc164b4afa9f12eab7e37b9a668d28af29e4",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10282662",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a5adc164b4afa9f12eab7e37b9a668d28af29e4",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Strong Designated Verifier Signature Scheme with Undeniability and Strong Unforgeability in the Standard Model
Strong designated verifier signatures can provide an efficient way to protect the identity privacy of the signer and the integrity of data transmitted over a public channel. These characteristics make them very useful in outsourcing computation, electronic voting, electronic bidding, electronic auctions and other fields. However, most strong designated verifier signature schemes are unable to identify the real signature generator when the signer and the designated verifier dispute a signature. In addition, the existing strong designated verifier signature schemes in the standard model rarely satisfy strong unforgeability, and thus cannot prevent an attacker from forging a valid signature on any previously signed message. Therefore, designing a strong designated verifier signature scheme without random oracles that satisfies strong unforgeability and undeniability is very attractive in both practice and theory. Motivated by these concerns, we design the first undeniable strong designated verifier signature scheme without random oracles, in which the arbiter can independently perform the judgment procedure to prove whether a controversial signature was generated by the signer or the designated verifier. Under standard assumptions, the scheme is proved to be strongly unforgeable in the standard model. Furthermore, it not only achieves non-transferability and privacy of the signer's identity but also satisfies the undeniable property of traditional digital signature schemes. Performance analysis shows that the lengths of the signer's private key, the designated verifier's private key and the signature are 40 bits, 40 bits and 384 bits, respectively. Compared with the related schemes, the proposed scheme has higher performance in terms of signature length, private key size and computational overhead. Finally, we show how to apply it to implement outsourcing computation in cloud computing.
Introduction
Digital signature is a very important information security technology, which can realize data integrity, non-repudiation, identity authentication and other functions. It plays an important role in network security communication [1], e-commerce [2], e-government [3] and other systems [4][5][6]. To deal with specific application scenarios, some digital signature schemes with special properties have been proposed. Among them, designated verifier signature (DVS) [7] is a significant variant of digital signature. In DVS, the signer is allowed to designate a verifier to confirm the authenticity of a signature, but the designated verifier is unable to convince anyone that the signature was generated by the real signer. The reason is that the simulated signature produced by the designated verifier is computationally indistinguishable from the original signature created by the signer for the same message. This feature of DVS is called non-transferability, which is very useful in the fields of electronic voting, electronic tendering and software copyright [8,9]. To avoid the signer's identity information being leaked, Jakobsson et al. [7] introduced the concept of strong designated verifier signature (SDVS). In SDVS, the validation of a signature must require the designated verifier's private key, and any third party cannot determine the real creator of the signature. That is to say, only the designated verifier knows the real identity of the signer. Thus, SDVS enhances the privacy of the signer's identity (PSI) and can be applied to some new fields [10]. For example, in cognitive computation [11], an intelligent robot authenticates the identity of its owner, but it must protect the owner's identity information.
However, in a SDVS scheme, no third party knows who generated a signature when the signer and the designated verifier dispute it. In this scenario, the undeniability property is very essential for SDVS. There are a few SDVS schemes with undeniability, and they were proved secure in the random oracle model [12-14]. Unfortunately, Canetti et al. [15] showed that a cryptographic scheme proved secure in the random oracle model may be insecure when the random oracle is instantiated by a concrete hash function. Therefore, it is of practical significance to study SDVS schemes without random oracles.
Most existing SDVS schemes in the standard model only possess existential unforgeability [16][17][18]. Namely, an adversary can easily obtain a new legal signature of the same message by modifying an existing message-signature pair. Strong unforgeability can prevent the above-mentioned modification and protect the integrity of a signature [19]. A SDVS scheme is said to be strongly unforgeable if it satisfies existential unforgeability and the adversary cannot produce a legal signature of a message that has previously been signed. Although strong unforgeability has already been considered in several SDVS schemes [20], none of them has the undeniable property in the standard model.
Our Contribution
Motivated by the above concerns, we construct a new SDVS scheme with undeniability and strong unforgeability, which is named the SDVS-USU scheme in this paper. The main contributions of this paper are as follows.
• The proposed scheme is the first strongly unforgeable SDVS scheme with the undeniability property in the standard model, while the existing SDVS schemes are secure in the random oracle model.
• In the SDVS-USU scheme, the signer assigns a verifier to validate the signature, and designates an arbiter to determine the actual generator of the signature. For a controversial signature, the arbiter can independently identify the real signature generator without the help of the signer or the designated verifier.
• The SDVS-USU scheme is proved to be strongly unforgeable against adaptive chosen message attacks under the bilinear Diffie-Hellman (BDH) assumption, while the privacy of the signer's identity relies on the decisional bilinear Diffie-Hellman (DBDH) assumption. At the same time, it has the property of non-transferability.
• Compared with the existing SDVS schemes without random oracles, the SDVS-USU scheme has better performance in terms of signature length, private key size and computational cost.
Paper Organization
The rest of this paper is organized as follows. Section 2 describes the work related to SDVS. Section 3 introduces some preliminaries, such as bilinear parings, complexity assumptions and the security definition of SDVS. Section 4 presents the SDVS-USU scheme. Section 5 demonstrates the security of the SDVS scheme. Section 6 analyzes the performance of the SDVS-USU scheme. Section 7 illustrates the application of the SDVS-USU scheme in outsourcing computation. Section 8 is the conclusions.
Related Work
The concept of SDVS was first introduced by Jakobsson et al. [7] and formalized by Saeednia et al. [21]. Since then, a number of efficient SDVS schemes have been proposed [22][23][24][25][26][27], but their security relies on the ideal random oracle. To deal with this problem, Hung et al. [17] designed an SDVS scheme in the standard model; however, its security depends heavily on the security of the underlying pseudo-random function. If the pseudo-random function leaks the associated index, an attacker can easily generate legitimate signatures for arbitrary messages on behalf of the signer or the designated verifier. Based on the q-Strong Diffie-Hellman assumption, Zhang et al. [16] constructed another SDVS scheme without random oracles, but their scheme does not protect the privacy of the signer's identity and was given no formal security proof. Asaar et al. [18] presented a secure SDVS scheme based on Waters' scheme [28], but their scheme is malleable. Tian et al. [20] showed that the above three SDVS schemes [16][17][18] do not satisfy strong unforgeability. Later, Tian et al. [20] used the OR proof [29] and Kang et al.'s scheme [30] to design a basic signature scheme with existential unforgeability, and then constructed an SDVS scheme by combining their basic scheme with the Cramer-Shoup scheme [31]. To shorten the signature length, Tian et al. [20] proposed another SDVS scheme based on their basic signature scheme and Tian et al.'s encryption scheme [32]. Although Tian et al.'s two SDVS schemes [20] satisfy strong unforgeability, neither provides undeniability. To overcome this shortcoming, Yang et al. [12] designed an undeniable SDVS scheme using a chameleon hash function [33]; however, the signer must store all previous signature data to identify the real generator of a signature, and the judgment process requires the signer's help. To improve the fairness of the judgment, Hu et al. [14] designed two undeniable SDVS schemes in which the arbiter can independently identify the real signer of a disputed signature. However, both Yang et al.'s scheme [12] and Hu et al.'s schemes [14] are provably secure only in the random oracle model. To date, there is no strongly unforgeable SDVS scheme with the undeniability property in the standard model; in this paper, we put forward such a construction.
Bilinear Pairing
Suppose p is a prime, G_1 and G_2 are two cyclic groups of order p, and g is any generator of G_1. A map e : G_1 × G_1 → G_2 is called a bilinear pairing if it satisfies the following conditions [18]:
• Bilinearity: For any x, y ∈ Z_p, e(g^x, g^y) = e(g, g)^{xy} = e(g^y, g^x).
• Non-degeneracy: e(g, g) ≠ 1.
• Computability: For any x, y ∈ Z_p, e(g^x, g^y) can be computed efficiently.
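The bilinearity and non-degeneracy properties can be checked numerically. The sketch below uses the py_ecc library's bn128 pairing as a stand-in; note that bn128 is an asymmetric (Type 3) pairing e : G_1 × G_2 → G_T rather than the symmetric e : G_1 × G_1 → G_2 assumed above, so this illustrates the properties only, not the scheme's exact setting.

```python
# Numerical check of bilinearity with the py_ecc library's bn128 pairing.
# bn128 is asymmetric (e: G1 x G2 -> GT); it stands in here for the symmetric
# pairing assumed in the paper, purely to illustrate the two properties.
import random
from py_ecc.bn128 import G1, G2, FQ12, multiply, pairing, curve_order

x = random.randrange(1, curve_order)
y = random.randrange(1, curve_order)

# Bilinearity: only the product of the exponents matters, so swapping x and y
# between the two arguments must give the same target-group element.
assert pairing(multiply(G2, x), multiply(G1, y)) == \
       pairing(multiply(G2, y), multiply(G1, x))

# Non-degeneracy: e(g2, g1) is not the identity of the target group.
assert pairing(G2, G1) != FQ12.one()
print("bilinearity and non-degeneracy hold")
```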
Complexity Assumptions
Given (g, g^x, g^y, g^z) ∈ G_1^4, where x, y, z ∈ Z_p are unknown, the BDH problem is to calculate e(g, g)^{xyz}.
Definition 1. The BDH assumption states that the probability of any probabilistic polynomial-time (PPT) algorithm solving the BDH problem is negligible.
Given (g, g^x, g^y, g^z) ∈ G_1^4 and Z ∈ G_2, where x, y, z ∈ Z_p are unknown, the DBDH problem is to determine whether Z = e(g, g)^{xyz} holds.
Definition 2. The DBDH assumption states that no PPT algorithm can solve the DBDH problem with probability non-negligibly greater than 1/2 [18].
Strong Designated Verifier Signature
An SDVS scheme with the undeniability property is defined by the following algorithms:
• Setup: On input a security parameter λ ∈ Z, this algorithm produces the public parameters params.
• KeyGen: On input params, this algorithm produces a public/private key pair (pk_S, sk_S) for a signer S, (pk_V, sk_V) for a designated verifier V and (pk_A, sk_A) for an arbiter A.
• Sign: On input the public keys of S, V and A, the signer's private key sk_S and a message m, this algorithm produces a signature σ on m.
• Verify: Given the public keys of S, V and A, this algorithm returns 1 if the designated verifier's private key sk_V verifies that σ is a legal signature on a message m; otherwise, it returns 0.
• Sim: On input the public keys of S, V and A, the designated verifier's private key sk_V and a message m, this algorithm produces a simulated signature σ′ that is indistinguishable from σ.
The correctness of SDVS requires that both the original signature and the simulated signature are valid. That is, for any key pairs (pk_S, sk_S), (pk_V, sk_V) and (pk_A, sk_A), any message m, any signature σ = Sign(pk_S, pk_V, pk_A, sk_S, m) and any simulated signature σ′ = Sim(pk_S, pk_V, pk_A, sk_V, m), the following two equations must hold: Verify(pk_S, pk_V, pk_A, sk_V, m, σ) = 1 and Verify(pk_S, pk_V, pk_A, sk_V, m, σ′) = 1.
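The five algorithms can be summarized as the following interface skeleton. This is a structural sketch only: the names and types are illustrative placeholders, and the bodies deliberately omit the concrete SDVS-USU construction given in Section 4.

```python
# A structural skeleton of the five SDVS algorithms as Python signatures.
# Types and bodies are placeholders for illustration; they do not implement
# the concrete SDVS-USU construction.
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class KeyPair:
    pk: Any
    sk: Any

def setup(security_parameter: int) -> Any:
    """Produce the public parameters params."""
    ...

def keygen(params: Any) -> Tuple[KeyPair, KeyPair, KeyPair]:
    """Produce key pairs for the signer S, verifier V and arbiter A."""
    ...

def sign(pk_s, pk_v, pk_a, sk_s, m: bytes):
    """Signer S produces a signature sigma on m."""
    ...

def verify(pk_s, pk_v, pk_a, sk_v, m: bytes, sigma) -> bool:
    """Designated verifier V checks sigma with its private key sk_v."""
    ...

def sim(pk_s, pk_v, pk_a, sk_v, m: bytes):
    """V produces a simulated signature indistinguishable from a real one."""
    ...
```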
A secure SDVS scheme with the undeniability property should achieve the security requirements of strong unforgeability, non-transferability, privacy of the signer's identity (PSI) and undeniability.
Unforgeability requires that only the signer and the designated verifier can produce a valid signature. Formally, the strong unforgeability of an SDVS scheme is defined by the following game between a challenger C and an adversary F.
• Setup: C executes the Setup algorithm to output the public parameters params, and runs the KeyGen algorithm to generate the signer's key pair (pk_S, sk_S), the designated verifier's key pair (pk_V, sk_V) and the arbiter's key pair (pk_A, sk_A). Then, C sends (params, pk_S, pk_V, pk_A) to F.
• Signing queries: When F initiates a signature query for a message m_i, C runs the Sign(pk_S, pk_V, pk_A, sk_S, m_i) algorithm to obtain a signature σ_i on m_i and returns σ_i to F.
• Simulating queries: When F asks for a simulated signature on a message m_i, C runs the Sim(pk_S, pk_V, pk_A, sk_V, m_i) algorithm to obtain a signature σ_i on m_i and returns σ_i to F.
• Verifying queries: When F submits a signature σ_i on a message m_i, C sends the signature verification result output by the algorithm Verify(pk_S, pk_V, pk_A, sk_V, m_i, σ_i) to F.
• Output: Finally, F outputs a message/signature pair (m*, σ*). F wins the game if
1. Verify(pk_S, pk_V, pk_A, sk_V, m*, σ*) = 1; and
2. (m*, σ*) is not among the tuples (m_i, σ_i) produced during the Signing queries.
Definition 3. If the probability of any PPT attacker F winning the above game is negligible, then the SDVS scheme is said to be strongly unforgeable against adaptive chosen message attacks.
Non-transferability requires that no third party can tell whether the signature on a message was created by the signer or simulated by the designated verifier.
Definition 4. An SDVS scheme is said to be non-transferable if it is infeasible for any PPT algorithm A_1 to determine whether a given signature was produced by the signer or the designated verifier without knowing the signer's private key sk_S, the designated verifier's private key sk_V or the arbiter's private key sk_A. That is, the probability ε of A_1 distinguishing simulated signatures from real signatures is negligible.
In other words, a signature generated by the signer is computationally indistinguishable from a signature simulated by the designated verifier. PSI requires that no one other than the designated verifier knows the identity of the signer; no third party is able to identify the signer from a signature. That is, if there are two signers S_0 and S_1, it is infeasible for any PPT adversary to determine whether the signature on a message was produced by S_0 or S_1 without knowing the designated verifier's private key. PSI is formally defined by the following security game between a distinguisher D and a challenger B.
• Setup: B runs the Setup algorithm to produce the public parameters params, and runs the KeyGen algorithm to generate the signer S_0's key pair (pk_{S_0}, sk_{S_0}), the signer S_1's key pair (pk_{S_1}, sk_{S_1}), the designated verifier V's key pair (pk_V, sk_V) and the arbiter A's key pair (pk_A, sk_A). Then, B sends (params, pk_{S_0}, pk_{S_1}, pk_V, pk_A) to D.
• Query phase 1: D adaptively issues a series of queries to B as follows.
- Signing queries: When D issues a signature query on a message m_i and an index d_i ∈ {0, 1}, B executes the Sign(pk_{S_{d_i}}, pk_V, pk_A, sk_{S_{d_i}}, m_i) algorithm to obtain a signature σ_i on m_i and returns σ_i to D.
- Simulating queries: When D issues a simulated signature query on a message m_i and an index d_i ∈ {0, 1}, B runs the Sim(pk_{S_{d_i}}, pk_V, pk_A, sk_V, m_i) algorithm to obtain a signature σ_i on m_i and returns σ_i to D.
- Verifying queries: After receiving a message m_i, a signature σ_i and an index d_i ∈ {0, 1}, B responds to D with the output of the algorithm Verify(pk_{S_{d_i}}, pk_V, pk_A, sk_V, m_i, σ_i).
• Challenge: After receiving the challenge message m* submitted by D, B obtains a random value d ∈ {0, 1} by flipping a coin. Then, B returns to D the signature σ* generated by the algorithm Sign(pk_{S_d}, pk_V, pk_A, sk_{S_d}, m*).
• Query phase 2: D continues to make queries as in Query phase 1, except that D may not submit a signature verification query on (m*, σ*).
• Output: D outputs a guess d′ ∈ {0, 1} and wins the game if d′ = d.
Definition 5. An SDVS scheme is PSI-secure if no PPT distinguisher D can win the above game with probability non-negligibly greater than 1/2.
For a controversial signature, undeniability requires that the arbiter can correctly identify the real generator of the signature.
Definition 6. An SDVS scheme is said to be undeniable if there exists a PPT arbiter that, on input the signer's public key pk_S, the designated verifier's public key pk_V, the arbiter's private key sk_A and a disputed signature σ̃ on a message m̃, can determine with overwhelming probability whether the signer S or the designated verifier V generated σ̃. Here, the output S indicates σ̃ was created by the signer, while the output V indicates σ̃ was generated by the designated verifier.
The SDVS-USU Scheme
In this section, we design a strongly unforgeable SDVS scheme with the undeniability property on the basis of a variant of Waters' scheme [28]. Although a few SDVS schemes [12][13][14] satisfy undeniability, their security depends on ideal random oracles, which might be insecure in practice. Most SDVS schemes without random oracles [17,18] are malleable, so they cannot achieve strong unforgeability.
To overcome these problems, the SDVS-USU scheme uses two collision-resistant hash functions to protect the integrity of the signature. This method not only yields non-malleable signatures but also achieves strong unforgeability and undeniability. Since we design the SDVS-USU scheme by direct construction rather than through a general conversion method, it essentially preserves the performance of Waters' scheme [28] in terms of signature size and computational overhead. Additionally, it should be emphasized that the employed collision-resistant hash functions are not treated as random oracles in our security proofs.
There are three participants in the SDVS-USU scheme: the signer S, the designated verifier V and the arbiter A. In the following, we assume that all signed messages are bit strings of length n. To achieve this, messages of arbitrary length can be converted into messages of fixed length n by using a secure hash function H : {0, 1}* → {0, 1}^n. The SDVS-USU scheme is described as follows.
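The fixed-length assumption is easy to realize in practice. A minimal sketch, assuming SHA-256 as the hash H (so n = 256), is:

```python
# Map an arbitrary-length message to a fixed n-bit string, as assumed in the
# scheme. SHA-256 (n = 256) is an illustrative choice of H, not one mandated
# by the paper.
import hashlib

def to_n_bits(message: bytes, n: int = 256) -> list[int]:
    digest = hashlib.sha256(message).digest()          # 256-bit digest
    # Expand to individual bits, most significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in digest for i in range(7, -1, -1)]
    return bits[:n]                                    # m = (m_1, ..., m_n)

m = to_n_bits(b"an arbitrary-length message")
print(len(m), m[:8])
```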
• Setup: Let G_1 and G_2 be two multiplicative cyclic groups of prime order p, let g be a generator of G_1, and let e : G_1 × G_1 → G_2 be a bilinear pairing. The public parameters are params = (G_1, G_2, p, g, e, u_0, v, u, H_1, H_2), where u_0, v ∈ G_1 and the vector u = (u_1, …, u_n) ∈ G_1^n are random group elements and H_1, H_2 are two collision-resistant hash functions.
• KeyGen: The signer S picks two random elements k_{S,1}, k_{S,2} ∈ Z*_p as the private key sk_S = (sk_{S,1}, sk_{S,2}) = (k_{S,1}, k_{S,2}), and computes the corresponding public key pk_S = (pk_{S,1}, pk_{S,2}) = (g^{k_{S,1}}, g^{k_{S,2}}). Similarly, sk_V = (sk_{V,1}, sk_{V,2}) = (k_{V,1}, k_{V,2}) and pk_V = (pk_{V,1}, pk_{V,2}) = (g^{k_{V,1}}, g^{k_{V,2}}) are the designated verifier V's private and public keys, respectively. The arbiter A's public/private key pair is (pk_A, sk_A) = (g^{k_A}, k_A) for a random k_A ∈ Z*_p.
• Sign: To generate the signature of an n-bit message m = (m_1, …, m_n) ∈ {0, 1}^n, the signer proceeds as follows.
1. Select r ∈ Z_p at random and compute σ_2 = g^r.
2. Compute T = (pk_A)^{k_{S,1} k_{S,2} H_1(m, σ_2)}.
3. Compute h = H_2(m, σ_2, T), w = u_0 ∏_{j=1}^{n} u_j^{m_j} and σ_1 = e(g^{k_{S,1} k_{S,2}} (w v^h)^r, pk_{V,1}), and output the signature σ = (σ_1, σ_2, T).
• Verify: After receiving a signature σ = (σ_1, σ_2, T) on an n-bit message m = (m_1, …, m_n) ∈ {0, 1}^n from the signer, the designated verifier computes h = H_2(m, σ_2, T) and w = u_0 ∏_{j=1}^{n} u_j^{m_j}, and uses its private key to check whether σ_1 = (e(pk_{S,1}, pk_{S,2}) · e(w v^h, σ_2))^{k_{V,1}}. If it holds, the designated verifier believes that σ is legal and outputs 1; otherwise, it considers σ illegal and outputs 0.
• Sim: To produce a simulated signature on a message m = (m_1, …, m_n) ∈ {0, 1}^n, the designated verifier performs the following:
1. Select s ∈ Z_p at random and compute σ′_2 = g^s.
2. Compute T′ = (pk_A)^{k_{V,1} k_{V,2} H_1(m, σ′_2)}.
3. Compute h′ = H_2(m, σ′_2, T′), w = u_0 ∏_{j=1}^{n} u_j^{m_j} and σ′_1 = (e(pk_{S,1}, pk_{S,2}) · e(w v^{h′}, σ′_2))^{k_{V,1}}, and output σ′ = (σ′_1, σ′_2, T′).
The verification equation above shows that a signature σ on a message m generated by the signer with the private key sk_S is accepted by the verification algorithm Verify; that is, σ is a legal signature. If σ′ = (σ′_1, σ′_2, T′) is correctly produced by the Sim algorithm, then the verification equation holds as well, which shows that the simulated signature σ′ produced by the designated verifier with its private key sk_V is also accepted by Verify. Therefore, the SDVS-USU scheme satisfies correctness.
Compared with previous similar schemes, the novelty of the SDVS-USU scheme is as follows (a sketch of the Waters-style message hash used below appears after this list):
• In the Sign algorithm, h = H_2(m, σ_2, T) is embedded in the component σ_1 = e(g^{k_{S,1} k_{S,2}} (w v^h)^r, pk_{V,1}) of a signature σ = (σ_1, σ_2, T). Since the hash function H_2 is collision-resistant, any modification of m, σ_2 or T will make σ fail the signature verification equation. In other words, an attacker cannot generate a legitimate signature for a previously signed message without knowing the private key of the signer or the designated verifier. Hence, the SDVS-USU scheme possesses strong unforgeability.
• The value T = (pk_A)^{k_{S,1} k_{S,2} H_1(m, σ_2)} involves the arbiter's public key pk_A, the signer's private key sk_S = (sk_{S,1}, sk_{S,2}) and the hash value H_1(m, σ_2), which means that only the arbiter can use its own private key sk_A together with T to identify the real generator of a signature. In addition, H_1 and H_2 are collision-resistant, and T is an input of h = H_2(m, σ_2, T) and a component of the signature σ = (σ_1, σ_2, T). Therefore, any modification of T will cause the validation of σ to fail. That is, the SDVS-USU scheme provides undeniability.
• Waters' scheme [28] is malleable and satisfies only existential unforgeability in the standard model. The proposed SDVS scheme is based on Waters' scheme [28], but the SDVS-USU scheme is non-malleable and strongly unforgeable in the standard model. Therefore, the SDVS-USU scheme differs from Waters' scheme [28] in both design and security proof.
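As referenced in the list above, the message-dependent group element w = u_0 ∏_{j=1}^n u_j^{m_j} is a Waters-style hash of the message bits. The sketch below computes it over a toy multiplicative group Z_p* with small, insecure parameters, just to make the indexing concrete.

```python
# Waters-style message hash w = u_0 * prod_j u_j^{m_j}, computed over a toy
# multiplicative group Z_p* instead of a pairing group. The prime and the
# generators are small illustrative values, not secure parameters.
import random

p = 1_000_003                      # toy prime modulus (illustrative only)
n = 16                             # message length in bits

u0 = random.randrange(2, p)
u = [random.randrange(2, p) for _ in range(n)]

def waters_hash(m_bits):
    """Return w = u_0 * prod over j with m_j = 1 of u_j, mod p."""
    w = u0
    for uj, mj in zip(u, m_bits):
        if mj:                     # u_j^{m_j} contributes only when m_j = 1
            w = (w * uj) % p
    return w

m = [random.randint(0, 1) for _ in range(n)]
print("w =", waters_hash(m))
```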
Security Analysis
In this section, we demonstrate that the SDVS-USU scheme holds strong unforgeability, non-transferability, PSI and undeniability. Theorem 1. If the BDH assumption holds, then the SDVS-USU scheme is strongly unforgeable against adaptive chosen message attacks in the standard model.
Proof of Theorem 1. Suppose there exists a polynomial-time adversary F who breaks the strong unforgeability of the SDVS-USU scheme with non-negligible probability, where F makes at most q_S signing queries, q_Sim simulating queries and q_V verifying queries. Then we construct an algorithm C that solves the BDH problem using F's forgery. Given a random BDH problem instance (g, g^a, g^b, g^c) ∈ G_1^4, the goal of C is to calculate e(g, g)^{abc}. C acts as F's challenger and responds to F's queries as follows.
• Setup: C simulates the Setup algorithm as follows. It selects two random values k_1, k_2 ∈ Z_p, and sets the signer's public key pk_S = (pk_{S,1}, pk_{S,2}) = (g^a, g^b), the designated verifier's public key pk_V = (pk_{V,1}, pk_{V,2}) = (g^c, g^{k_2}) and the arbiter's public key pk_A = g^{k_1}; note that a, b and c are unknown to C. It then selects a random integer z ∈ Z_p, assigns v = g^z, u_0 = (g^b)^{p−kl+x_0} g^{y_0} and u_j = (g^b)^{x_j} g^{y_j} for 1 ≤ j ≤ n, and sets the vector u = (u_1, …, u_n). For an n-bit message m = (m_1, …, m_n), define the two functions F(m) = p − kl + x_0 + Σ_{j=1}^{n} x_j m_j and J(m) = y_0 + Σ_{j=1}^{n} y_j m_j, so that u_0 ∏_{j=1}^{n} u_j^{m_j} = (g^b)^{F(m)} g^{J(m)}.
• Signing queries: C maintains a table T_r, which is initially empty. If there is a tuple (m_i, r_i) in T_r, C extracts r_i from T_r; otherwise, C randomly selects r_i ∈ Z_p and adds (m_i, r_i) to T_r. Then, C picks a random element T_i ∈ G_1 and computes w_i = u_0 ∏_{j=1}^{n} u_j^{m_{i,j}} together with the signature components σ_{i,1} and σ_{i,2}, by means of the functions F and J. Finally, C returns the signature σ_i = (σ_{i,1}, σ_{i,2}, T_i) on m_i to F.
Correctness: one can check, exactly as in the real scheme, that σ_i = (σ_{i,1}, σ_{i,2}, T_i) is a valid signature on m_i.
• Simulating queries: C responds to this kind of query in the same way as in the Signing queries.
• Verifying queries: When F requests a verification query on a signature σ_i for a message m_i, C looks up the table T_r and extracts r_i. Then, C computes h_i = H_2(m_i, σ_{i,2}, T_i), F(m_i) and J(m_i), and checks the verification equation of the scheme. If the equation holds, C returns 1 to F; otherwise, C returns 0 to F.
• Output: Eventually, F outputs a forgery (m*, σ*), from which C extracts and outputs e(g, g)^{abc}. Here, we discuss the probability of C successfully solving the BDH problem instance. If C does not abort in the above simulation, the abort-avoiding events E_i (for the queries) and E* (for the forgery) must all occur, so the probability that C completes the whole simulation is Pr[E_i ∩ E*]. According to Waters' proof [28], Pr[E_i ∩ E*] ≥ 1/(8(n+1)(q_S + q_Sim + q_V)). Therefore, if F breaks the strong unforgeability of the SDVS-USU scheme with probability ε, then C solves the BDH problem with probability at least ε/(8(n+1)(q_S + q_Sim + q_V)).
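The reduction is not tight. The following purely arithmetic illustration evaluates the loss factor 8(n+1)(q_S + q_Sim + q_V) for hypothetical parameter choices:

```python
# Purely arithmetic illustration of the tightness loss in Theorem 1:
# eps' >= eps / (8 (n+1) (q_S + q_Sim + q_V)). The parameter values below
# are hypothetical choices for illustration.
n = 256                       # message length in bits
q_S = q_Sim = q_V = 2**20     # hypothetical query bounds

loss = 8 * (n + 1) * (q_S + q_Sim + q_V)
print(f"security loss factor: {loss} (~2^{loss.bit_length() - 1})")
# A forger with advantage eps yields a BDH solver with advantage >= eps/loss.
```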
Theorem 2. The SDVS-USU scheme satisfies non-transferability.
Proof of Theorem 2. The randomness of a real signature (σ_1, σ_2, T) is determined by the random value r ∈ Z_p, and the randomness of a simulated signature (σ′_1, σ′_2, T′) depends on the random value s ∈ Z_p. Since r and s are chosen uniformly at random from Z_p, the distributions of the real signature (σ_1, σ_2, T) and the simulated signature (σ′_1, σ′_2, T′) are computationally indistinguishable. Namely, it is infeasible to distinguish σ from σ′ without knowing the private key of the signer, the designated verifier or the arbiter. Hence, the SDVS-USU scheme satisfies the non-transferability property.
Theorem 3. The SDVS-USU scheme preserves the privacy of the signer's identity under the DBDH assumption.
Proof of Theorem 3. Suppose there exists a PPT distinguisher D who breaks the privacy of the signer's identity in the SDVS-USU scheme. Then we can construct an algorithm B that solves the DBDH problem. Given a random instance (g, g^a, g^b, g^c, Z) of the DBDH problem, where a, b, c ∈ Z_p are unknown and Z ∈ G_2, B's goal is to determine whether Z is equal to e(g, g)^{abc}.
• Setup: B simulates the Setup algorithm using the DBDH instance: it generates the key pairs of the two signers S_0 and S_1, the designated verifier V and the arbiter A from (g, g^a, g^b, g^c) and two random values k_1, k_2 ∈ Z_p, sets sk_{S_0↔V} = Z as the common secret key between S_0 and V and sk_{S_1↔V} = e(g^{k_1}, g^c)^{k_2} as the common secret key between S_1 and V, picks two collision-resistant hash functions H_1 and H_2, and sends the public parameters params = (G_1, G_2, p, g, e, u_0, v, u, H_1, H_2) and the public keys (pk_{S_0}, pk_{S_1}, pk_V, pk_A) to D.
• Query phase 1: To answer a signing query on a message m_i and an index d_i, B proceeds as follows:
1. Select a random integer r_i ∈ Z_p and compute σ_{i,2} = g^{r_i}.
2. Pick a random element T_i ∈ G_1, compute w_i = u_0 ∏_{j=1}^{n} u_j^{m_{i,j}} together with the remaining signature component σ_{i,1}, and return σ_i = (σ_{i,1}, σ_{i,2}, T_i) to D.
Correctness: as in the real scheme, one can verify that the signature σ_i = (σ_{i,1}, σ_{i,2}, T_i) produced for a signing query is valid. Simulating and verifying queries are answered analogously.
• Challenge: When D submits a challenge message m* = (m*_1, …, m*_n), B obtains a random value d ∈ {0, 1} by flipping a coin, generates the challenge signature σ* for m* under the signer S_d using the corresponding common secret key, and returns σ* to D.
• Query phase 2: D continues to issue queries as in Query phase 1, except that D cannot make a signature verification query on (m*, σ*, d*) for any d* ∈ {0, 1}.
• Output: D outputs a value d′ ∈ {0, 1}. If d′ = d, B outputs 1, indicating Z = e(g, g)^{abc}; otherwise, B outputs 0, indicating that Z is a random element of G_2.
From the above simulation, we can see that B never aborts. Therefore, if D breaks the PSI property of the SDVS-USU scheme with probability ε, then B solves the DBDH problem instance with probability 1/2 + ε.
Theorem 4. The SDVS-USU scheme satisfies undeniability.
Proof of Theorem 4. Given a disputed signature σ̃ = (σ̃_1, σ̃_2, T) on a message m̃, the arbiter proceeds as follows:
1. Compute T_S = e(pk_{S,1}, pk_{S,2})^{sk_A · H_1(m̃, σ̃_2)}.
2. Compute T_V = e(pk_{V,1}, pk_{V,2})^{sk_A · H_1(m̃, σ̃_2)}.
3. Check whether e(T, g) = T_S or e(T, g) = T_V. If e(T, g) = T_S, the arbiter confirms σ̃ was created by the signer; if e(T, g) = T_V, the arbiter confirms σ̃ was produced by the designated verifier.
In the proposed scheme, a signature from the signer has T = (pk_A)^{k_{S,1} k_{S,2} H_1(m, σ_2)}, while a signature from the designated verifier has T = (pk_A)^{k_{V,1} k_{V,2} H_1(m, σ_2)}. The arbiter can therefore independently determine the real generator of any valid signature by checking e(T, g) = T_S or e(T, g) = T_V, with probability 1. Hence, the SDVS-USU scheme holds the undeniability property.
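The arbiter's test can be sanity-checked algebraically. The sketch below works entirely in the exponent space of an imagined symmetric pairing (representing e(g^a, g^b) by a·b mod q), so it is an insecure check of the equations only, with all keys and the hash value chosen as random placeholders.

```python
# Toy check of the arbiter's undeniability test, carried out in the exponent
# space of an imagined symmetric pairing: e(g^a, g^b) is represented by
# a*b mod q. Insecure algebraic sanity check only, not the scheme itself.
import random

q = (1 << 61) - 1                       # toy prime group order
h1 = random.randrange(1, q)             # stands in for H_1(m, sigma_2)
kA = random.randrange(1, q)             # arbiter's private key sk_A
kS1, kS2 = (random.randrange(1, q) for _ in range(2))  # signer's keys
kV1, kV2 = (random.randrange(1, q) for _ in range(2))  # verifier's keys

# T = pk_A^{kS1 kS2 h1}, so the exponent of e(T, g) is kA*kS1*kS2*h1.
e_T_g = (kA * kS1 * kS2 * h1) % q

# The arbiter recomputes both candidates from public keys and its own sk_A:
T_S = (kA * kS1 * kS2 * h1) % q   # exponent of e(pk_S1, pk_S2)^(sk_A * h1)
T_V = (kA * kV1 * kV2 * h1) % q   # exponent of e(pk_V1, pk_V2)^(sk_A * h1)

print("signer" if e_T_g == T_S else "verifier" if e_T_g == T_V else "invalid")
```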
Comparison
The SDVS-USU scheme is compared with other SDVS schemes [14,18,20] in terms of performance and security properties. In Tables 1 and 2, the Size, Sign and Verify columns represent the size of a signature and the computational costs of signature generation and signature verification, respectively. The SU column shows whether the scheme is strongly unforgeable, the PSI column indicates whether the scheme has the PSI property, the Undeniability column shows whether the scheme is undeniable, and the SM column indicates whether the scheme is secure in the standard model. Let p and q be two primes such that p = 2q + 1. Since the computational cost of some cryptographic operations, such as modular multiplication, hash functions or inversion, is relatively small after optimization [34], we consider only the computationally expensive bilinear pairing and exponentiation operations in Table 1. We use the symbol P to denote one pairing operation, while E_1, E_2 and E_p denote one exponentiation operation in G_1, G_2 and Z_p, respectively. |G_1|, |G_2|, |p| and |q| represent the length of an element in G_1, G_2, Z_p and Z_q, respectively.
Table 2. Security comparison of the SDVS schemes.
Scheme               SU   PSI  SM   Undeniability
Scheme I in [14]     Yes  No   No   Yes
Scheme II in [14]    Yes  No   No   Yes
Asaar et al. [18]    No   No   Yes  No
Scheme I in [20]     Yes  Yes  Yes  No
Scheme II in [20]    Yes  Yes  Yes  No
Our scheme           Yes  Yes  Yes  Yes

As can be seen in Tables 1 and 2, the two SDVS schemes of Hu et al. [14] outperform the other schemes in both signature length and computational overhead, but they are not proven secure in the standard model. With respect to signature length, the SDVS-USU scheme has one more element in G_1 than Asaar et al.'s scheme [18] but is superior to Tian et al.'s two schemes [20]. The SDVS-USU scheme allows some pre-computation, such as g^{k_{S,1} k_{S,2}} in the signature generation phase and e(pk_{S,1}, pk_{S,2})^{k_{V,1}} in the verification phase; thus it has computational complexity comparable with the other schemes [18,20]. However, Asaar et al.'s scheme [18] possesses neither strong unforgeability nor the PSI property, and neither Asaar et al.'s scheme [18] nor Tian et al.'s schemes [20] hold the undeniability property. The SDVS-USU scheme achieves strong unforgeability and the PSI property in the standard model, and it additionally achieves undeniability. Therefore, the SDVS-USU scheme offers stronger security.
We carried out simulation experiments to evaluate the performance of the SDVS-USU scheme. The experimental environment was a laptop with an Intel Core i7-6500 CPU @ 2.5 GHz and 8 GB memory; all simulation programs ran on the Microsoft Windows 10 operating system and were based on the PBC-0.47-VC library. Figure 1 illustrates that the signature sizes of the SDVS-USU scheme, Asaar et al.'s scheme [18] and Tian et al.'s two schemes [20] are 384 bits, 256 bits, 532 bits and 404 bits, respectively; hence, the SDVS-USU scheme has a shorter signature than Tian et al.'s two schemes. As shown in Figure 2, the length of the signer's private key in the SDVS-USU scheme is 40 bits, the same as in Asaar et al.'s scheme [18] but larger than in Tian et al.'s two schemes [20]. Moreover, the length of the designated verifier's private key in the SDVS-USU scheme is 40 bits, which is larger than in Asaar et al.'s scheme [18] but smaller than in Tian et al.'s two schemes [20]. In the signing phase, Asaar et al.'s scheme [18] requires two exponentiations and one pairing operation; the first and second SDVS schemes of Tian et al. [20] need six and five exponentiations, respectively; and the SDVS-USU scheme requires four exponentiations and one pairing operation. Figure 3 shows that the computational performance of signature generation in the SDVS-USU scheme is comparable with the other schemes [18,20]. We optimized the verification process by pre-computation so that the signature verification algorithm of each scheme achieves its best performance. In the verification phase, Asaar et al.'s scheme [18] requires one exponentiation and one pairing operation; the first SDVS scheme of Tian et al. [20] needs three hash evaluations, three exponentiations, one inverse and two pairing operations; the second SDVS scheme of Tian et al. [20] requires three hash evaluations, two exponentiations, one inverse and two pairing operations; and the SDVS-USU scheme requires two exponentiations and one pairing operation. Figure 4 demonstrates that the computational cost of signature verification in the SDVS-USU scheme is higher than in Asaar et al.'s scheme [18] but lower than in Tian et al.'s two schemes [20].
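The reported signing-phase operation counts can be tabulated programmatically. In the sketch below, the per-operation timings are hypothetical placeholders, not the measurements behind Figures 3 and 4:

```python
# Tabulate the signing-phase operation counts reported above and convert them
# to rough time estimates. PAIRING_MS and EXP_MS are hypothetical placeholder
# timings, not measured values.
PAIRING_MS = 4.0   # hypothetical cost of one pairing (ms)
EXP_MS = 1.0       # hypothetical cost of one exponentiation (ms)

sign_ops = {                      # (exponentiations, pairings) in Sign
    "Asaar et al. [18]":   (2, 1),
    "Tian et al. I [20]":  (6, 0),
    "Tian et al. II [20]": (5, 0),
    "SDVS-USU":            (4, 1),
}

for scheme, (e, p_) in sign_ops.items():
    est = e * EXP_MS + p_ * PAIRING_MS
    print(f"{scheme:22s} {e}E + {p_}P  ~= {est:.1f} ms (hypothetical)")
```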
Application to Outsourced Computation in Cloud Computing
Cloud computing offers vast computing power and storage capacity for big data, and it allows resource-constrained users to outsource expensive computations to a cloud service provider (CSP). However, the CSP is not trusted by the user and may steal the user's private information or deceive the user. Hence, it is very important to ensure the integrity of a computing task and the authenticity of the remote user's identity. Because the user's computing ability is limited, heavy computing tasks are outsourced to the CSP, which can authenticate a task submitted by the user through a signature-based protocol. To protect private information, the user wants the designated CSP to be the only entity able to verify the legality of the signature on a computing task, and the CSP must not be able to reveal the signature to any third party at will. Since an ordinary digital signature is publicly verifiable and transferable, anyone could verify the validity of signatures using the signer's public key and learn the signer's real identity; an ordinary digital signature scheme is therefore unsuitable for this scenario. An SDVS scheme is one solution to these problems: it provides a confidential authentication service to the user in an outsourced computation task, guarantees that the designated CSP can validate the user's signature on a computing task, and at the same time ensures that the designated CSP cannot convince others that the user is involved in the task.
However, most SDVS schemes cannot identify the real signature generator when the user and the CSP dispute a signature, which may cause substantial economic losses to either party; an SDVS scheme without undeniability therefore cannot handle a controversial computing task. For example, if the user denies having submitted a computing task for some reason, the CSP is forced to stop it; conversely, if the CSP forges a user's signature on a computing task, the user is left responsible for paying the expensive computing cost. Such economic losses are undesirable to both the user and the CSP. The SDVS-USU scheme given in Section 3.2 is undeniable and strongly unforgeable, so it is well suited to outsourced computation in a cloud computing environment. The system model of outsourced computation in cloud computing based on the SDVS-USU scheme is shown in Figure 5.
There are three entities in the system (Figure 5): the user, the CSP and the arbiter. The process of outsourced computation is as follows.
1. A user with limited computing resources uses his private key and the SDVS-USU scheme to generate a signature σ_1 for a computing task m_1 and sends (m_1, σ_1) to the CSP.
2. The CSP, which has powerful computing resources, verifies the validity of the signature σ_1 on m_1 to confirm the submission and then performs the computational task m_1. The CSP uses its private key and the SDVS-USU scheme to generate a signature σ_2 on the corresponding calculation result m_2 and returns (m_2, σ_2) to the user.
3. If σ_2 is a valid signature on m_2, the user accepts the calculation result returned by the CSP; otherwise, the user refuses to accept m_2 and accuses the CSP of malicious behavior.
4. For a controversial computing task, the arbiter determines whether the user or the CSP is responsible for the economic loss of the computing task based on (m_1, σ_1) and (m_2, σ_2). A sketch of this message flow appears after this list.
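The four-step flow referenced above can be sketched as follows. The functions sign, verify and run_task are hypothetical placeholders standing in for the SDVS-USU algorithms and the outsourced computation; real key material and the arbitration step are elided.

```python
# Sketch of the four-step outsourcing flow. sign, verify and run_task are
# hypothetical placeholders for the SDVS-USU algorithms and the outsourced
# job; keys are dummies and arbitration is omitted.
def sign(sk, msg: bytes) -> bytes:               # placeholder for Sign
    return b"sig:" + msg[:8]

def verify(vk, msg: bytes, sig: bytes) -> bool:  # placeholder for Verify
    return sig.startswith(b"sig:")

def run_task(task: bytes) -> bytes:              # placeholder outsourced job
    return b"result-of-" + task

def user_submits(task: bytes, user_sk):
    return task, sign(user_sk, task)             # step 1: send (m1, sigma1)

def csp_handles(task, sigma1, csp_sk, csp_vk):
    if not verify(csp_vk, task, sigma1):         # step 2: CSP validates m1
        raise ValueError("invalid task signature")
    result = run_task(task)
    return result, sign(csp_sk, result)          # CSP returns (m2, sigma2)

def user_accepts(result, sigma2, user_vk) -> bool:
    return verify(user_vk, result, sigma2)       # step 3: user checks m2

m1, s1 = user_submits(b"matrix-inversion", user_sk=None)
m2, s2 = csp_handles(m1, s1, csp_sk=None, csp_vk=None)
print(user_accepts(m2, s2, user_vk=None))        # step 4 arbitration omitted
```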
The SDVS-USU scheme is easily implemented as software in cloud computing environments: for example, the signing algorithm Sign is installed on the user side and the verification algorithm Verify on the CSP side. The user sends the computing task and the corresponding signature to the designated CSP, and only the designated CSP can check the integrity of the computing task and the authenticity of the user's identity by verifying the validity of the signature, and vice versa for the results returned by the CSP. According to the performance analysis results in Section 5, the SDVS-USU scheme has good computational performance while achieving the undeniability property. The lengths of the signer's private key, the designated verifier's private key and the signature are 40 bits, 40 bits and 384 bits, respectively. If the message length is 900 bits, the time cost of signing and verifying is approximately 0.12 s and 0.06 s, respectively. At present, an ordinary laptop is configured with at least an Intel Core i3 CPU @ 2.1 GHz, 4 GB memory and 256 GB of hard disk storage, and the CSP has far more computing power; thus the SDVS-USU scheme can be practically applied to cloud computing environments.
Conclusions
In this paper, we constructed an undeniable SDVS scheme that satisfies strong unforgeability in the standard model. The performance analysis shows that the SDVS-USU scheme performs well in terms of private key size, signature length and computational overhead. In the SDVS-USU scheme, strong unforgeability prevents attackers from using an existing message/signature pair to create a new legal signature on the same message; non-transferability ensures that attackers cannot learn the identity of the real signer from a signature; PSI further protects the privacy of the signer's identity; and undeniability ensures that neither the signer nor the designated verifier can deny signatures they have previously generated. Therefore, our SDVS scheme can guarantee the integrity of outsourced computing tasks and authenticate the identity of users in cloud computing. In the future, we will design an instance scenario to illustrate the feasibility of implementing the SDVS-USU scheme in the real world.
Author Contributions: X.Y. and G.C. wrote the paper; T.L. and R.L. proved the security; and M.W. and C.W. designed the experiments.
"year": 2019,
"sha1": "ef819356ad25c37613faf60e19ad8280756fa8e7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/9/10/2062/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "58741cf460fcf50b5e693ca01a126114b71f0fdb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Ameliorative Effects of Raphanus sativus L., Nyctanthes arbor-tristis L. and Ficus palmata Forssk. on Calcium Oxalate Crystallization Events of Stone Formation In Vitro
The study demonstrates the in vitro antiurolithiatic potential of three important plant species of the western Himalayan region, viz. Ficus palmata fruits, Raphanus sativus leaves and Nyctanthes arbor-tristis leaves. Nucleation, growth and aggregation assays, along with microscopic analysis of calcium oxalate crystals, were employed to investigate the antilithic effect of the hydroethanolic extracts of Ficus palmata fruits, Raphanus sativus leaves and Nyctanthes arbor-tristis leaves on the crystallization events of calcium oxalate stone formation. Fourier-transform infrared spectroscopy and high performance liquid chromatography analyses were employed to characterize the phytoconstituents present in the extracts. All three plant extracts produced inhibition of nucleation, growth and aggregation and a reduction in the number and size of calcium oxalate crystals. A favorable morphological transformation of calcium oxalate crystals was also witnessed in the presence of the hydroethanolic extracts of Raphanus sativus and Nyctanthes arbor-tristis. Phytochemical investigation of the extracts revealed the presence of saponins, tannins, flavonoids and polyphenolic compounds, while Fourier transform-infrared spectroscopy and high performance liquid chromatography analyses further substantiated the presence of polyphenolic compounds, which are known to be involved in producing the anticrystallization effect of the tested extracts. The study confirmed that Ficus palmata fruits, Raphanus sativus leaves and Nyctanthes arbor-tristis leaves possess significant anticrystallization activity against calcium oxalate crystals, which may translate into substantial antiurolithiatic activity given the effect of these extracts on the various phases of urinary stone formation witnessed in the present study.
Urolithiasis, or kidney stone disease, is usually described as a disease resulting from the disruption of the equilibrium between promoters and inhibitors of stone formation [1]. It is an enigmatic disease of complex etiology that is persistently on the rise and has emerged as a common yet excruciating affliction accounting for frequent emergency department visits [2]. Urolithiasis currently afflicts approximately 12 % of the inhabitants of the world's industrialized nations [3,4], and owing to global warming a further 10 % rise in this figure is anticipated within the next 50 years [5]. Increasing prevalence has likewise been documented in reports from the National Health and Nutrition Examination Survey 2000 [6]. India falls in the Afro-Asian stone belt of high stone prevalence [7], where urolithiasis contributes to numerous cases of chronic renal disease and renal failure [8].
Calcium Oxalate (CaOx) stones have been the most studied stone type over the last few decades [9], since CaOx is the predominant chemical constituent of urinary stones [10] and the most recurrent of all stone types [11]. CaOx stones also present the most challenging class of stone disease owing to their largely idiopathic nature [12] and complex etiology [13]; consequently, the available treatment options have not proved completely effective so far [14]. The current study therefore pertains to mitigating CaOx urolithiasis, the most prevalent of all urinary stone diseases [2].
Herbs have long served as a remedy for a wide range of afflictions and ailments, and plants continue to be a vital part of therapeutics and medicine worldwide. A renaissance of phytotherapy is being witnessed, wherein plants and phytoconstituents are increasingly attracting interest as a potential source of drug discovery and development [15]. Phytoconstituents like caffeine have been reported to prevent urolithiasis by inducing translocation of crystal-binding annexin A1 proteins from the apical surface of renal tubular cells to the cytoplasm [16]. Other phytoconstituents like catechin [17], resveratrol [18], rutin, curcumin [19], quercetin and hyperoside [20] have shown promising antiurolithiatic outcomes in animal models.
The Himalayan region is a treasure of biodiversity and harbors immensely rich flora and fauna. The present study addresses Ficus palmata Forsk. (F. palmata) (Moraceae), Raphanus sativus L. (R. sativus) (Cruciferae) and Nyctanthes arbor-tristis L. (N. arbor-tristis) (Oleaceae) of the western Himalayan region for their antiurolithiatic activity in vitro. F. palmata, the Wild Himalayan Fig (Bedu), is an underexplored plant of high medicinal value [21]. In contrast, R. sativus (radish) and N. arbor-tristis, the Night Jasmine (Harsingar), have been reported for their therapeutic potential in a wide range of diseases and ailments. F. palmata possesses reported nephroprotective activity [22], and N. arbor-tristis has been shown to possess diuretic activity [23]. Roots of R. sativus have been reported to be antilithic [24] and diuretic [25]. The antioxidant property reported for all three plants [26][27][28] is a unifying feature of utmost significance in the context of the present study. Despite these indications in various nephrological disorders, none of the selected plant parts has been evaluated for plausible antiurolithiatic activity. Hence, this study was conducted to demonstrate the in vitro antilithic potential of the fruits of F. palmata and the leaves of R. sativus and N. arbor-tristis. The choice of the fruits of F. palmata [29] and the leaves of R. sativus [30] and N. arbor-tristis [31] is based on the traditional use of these plant parts in urinary stone treatment or as diuretics.
Plant collection:
The plant samples of R. sativus L. and F. palmata Forsk. were collected from the Bhimtal region, and those of N. arbor-tristis L. from the Haldwani region, of Uttarakhand, situated in the foothills of the Himalaya. Plant specimens were authenticated by the Botanical Survey of India (BSI), Dehradun, and voucher specimens with accession numbers 116594, 116591 and 12611, respectively, were deposited in the herbarium of the BSI. Leaves of R. sativus were collected in the month of April, while ripe fruits of F. palmata and leaves of N. arbor-tristis were collected in the month of July.
Extract preparation:
The collected plant parts were dried in shade, powdered and subjected to extraction by cold maceration in 70 % v/v ethanol for 96 h. The marc was separated using grade 1 Whatman filter paper, and the extracts were dried in a rotary evaporator under reduced temperature and pressure [32][33][34]. The extractive yield of the R. sativus Leaf Extract (RSLE) was 14.15 %, and those of the F. palmata Fruit Extract (FPFE) and the N. arbor-tristis Leaf Extract (NALE) were 17.403 % and 10.62 %, respectively.
Quantification of total phenolic and flavonoid content:
Total Phenolic Content (TPC) of the extracts was determined by the Folin-Ciocalteau method as described by Singleton et al., with minor modifications [37]. Briefly, to 0.5 ml of 1 mg/ml extract, 2 ml of Folin-Ciocalteau reagent (10 %) was added, followed by 2 ml of sodium carbonate solution (7.5 %). The reaction mixture was allowed to stand at room temperature for 1 h, and the absorbance was recorded at 760 nm. A standard calibration curve for gallic acid (5-100 mg/l) was plotted, and the TPC of each extract was expressed as mg of Gallic Acid Equivalent (GAE) per g of dry weight of extract [38].
Total Flavonoid Content (TFC) of the extracts was determined by the Aluminium Chloride (AlCl3) colorimetric method. To 0.5 ml of 1 mg/ml extract, 1.5 ml of ethanol, 0.1 ml of AlCl3 solution and 0.1 ml of potassium acetate solution were added, followed by 3 ml of distilled water. The reaction mixture was allowed to stand at room temperature for 1 h, and the absorbance was measured at 415 nm. A standard calibration curve for quercetin (5-100 mg/l) was plotted, and the TFC of each extract was expressed as mg of Quercetin Equivalent (QE) per g of dry weight of extract [38].
Fourier Transform-Infrared (FT-IR) characterization of the extracts: The three extracts, viz. NALE, FPFE and RSLE, were characterized using a PerkinElmer FT-IR spectrometer by the attenuated total reflectance technique [39].
High Performance Liquid Chromatography (HPLC) analysis of the extracts:
HPLC analysis of FPFE, NALE and RSLE was performed by Reverse Phase High Performance Liquid Chromatography (RP-HPLC) on an Agilent 1200 series HPLC system, using an Agilent Zorbax Eclipse Plus RP-C18 column (4.6×250 mm; particle size 5 µm) at 45° with a solvent flow rate of 1.0 ml/min and an injection volume of 20 μl, monitored at a wavelength of 254 nm. The mobile phase consisted of water (eluent A) and acetonitrile (eluent B). The following gradient program was used for the separation of analytes: 0-…
Nucleation assay:
The nucleation assay of CaOx crystallization was used to evaluate the effect of the extracts on CaOx crystal formation. For this, 100-1000 µg/ml concentrations of the extracts were prepared in distilled water. To 1 ml of each concentration of the extract, 3 ml of 5 mmol/l Calcium Chloride (CaCl2) solution and 3 ml of 7.5 mmol/l Sodium Oxalate (Na2C2O4) solution were added, both prepared in a Tris (Hydroxymethyl) Aminomethane Hydrochloride (Tris-HCl) (0.05 mol/l) and Sodium Chloride (NaCl) (0.15 mol/l) buffer at pH 6.5. The final solutions were vortexed and incubated at 37° for 30 min, and their Optical Density (OD) was measured using a Shimadzu UV-1601 Ultraviolet-Visible (UV-Vis) spectrophotometer at a wavelength of 620 nm. The extent of nucleation in the presence and absence of the extracts was determined and expressed as percent (%) inhibition of nucleation by incorporating the recorded OD into the formula: % Inhibition = (1 - OD_Test/OD_Control) × 100 [40]. Cystone (Himalaya Herbal Healthcare), a polyherbal formulation commonly employed as a standard substance in antilithiatic studies [41,42], was evaluated in a similar setup and served as the standard.
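The percent-inhibition computation used throughout the assays is a one-line formula; a minimal sketch with illustrative OD values (not data from the study) is:

```python
# Percent inhibition as used in the nucleation, aggregation and growth assays.
def percent_inhibition(od_test: float, od_control: float) -> float:
    """% Inhibition = (1 - OD_test / OD_control) * 100."""
    return (1.0 - od_test / od_control) * 100.0

# Illustrative OD values, not data from the study:
print(round(percent_inhibition(od_test=0.12, od_control=0.30), 2))  # 60.0
```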
Microscopic characterization:
CaOx crystals formed in the metastable solutions prepared by the addition of CaCl2 and Na2C2O4 solutions were viewed using a Leica DM 2500 LED microscope, and their number, size and morphology were determined [41].
Aggregation assay:
To determine the effect of the extracts on the aggregation of CaOx crystals, seed CaOx crystals were prepared by mixing 50 mmol/l each of CaCl2 and Na2C2O4 solutions. The crystal slurry thus produced was dried, and a 0.8 mg/ml solution of CaOx crystals was prepared in a Tris-HCl (0.05 mol/l) and NaCl (0.15 mol/l) buffer (pH 6.5). To 3 ml of the CaOx solution, 1 ml of varying concentrations (100-1000 µg/ml) of the extracts or Cystone was added, and the OD of the test samples and standard was read on the UV-Vis spectrophotometer at 620 nm after 30 min of incubation at 37°. Percent inhibition of aggregation was calculated using the formula: % Inhibition = (1 - OD_Test/OD_Control) × 100 [40].
Growth assay:
The effect of the extracts on CaOx crystal growth was determined by means of an oxalate depletion assay. For this, to 1.5 ml of a buffer system containing 10 mM Tris-HCl and 90 mM NaCl (pH 7.4), 1 ml each of CaCl2 solution (4 mM) and Na2C2O4 solution (4 mM) was added. Finally, 30 µl of a 1.5 mg/ml CaOx crystal slurry prepared in 50 mM sodium acetate buffer (pH 5.7) was added, and the depletion of oxalate from the solution was recorded over a period of 600 s at 214 nm on the UV-Vis spectrophotometer as a measure of CaOx crystal growth. The growth inhibitory effect of the extracts and Cystone was then recorded at varying concentrations (100 µg/ml, 500 µg/ml and 1000 µg/ml) by the addition of 1 ml of extract solution. The difference in the rate of oxalate depletion before and after the addition of the extracts was taken into account and expressed as percent inhibition of growth using the formula: % Inhibition = (1 - OD_Test/OD_Control) × 100 [43].
Statistical analysis:
Quantitative data were expressed as mean±Standard Error of Mean (SEM). Statistical computations and analysis of the data were performed using one-way Analysis of Variance (ANOVA) followed by Tukey-Kramer's multiple comparison test with the help of GraphPad Prism 6 software; p values less than 0.05 were considered statistically significant.
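A sketch of the equivalent statistical workflow in Python, using SciPy and statsmodels in place of GraphPad Prism; the group values are illustrative placeholders, not data from the study:

```python
# One-way ANOVA followed by Tukey's multiple comparison test, mirroring the
# analysis described above. Group values are illustrative placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([10.1, 11.2, 9.8])
rsle = np.array([55.3, 58.1, 60.2])
cystone = np.array([48.7, 50.9, 52.4])

f_stat, p_value = f_oneway(control, rsle, cystone)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, rsle, cystone])
groups = ["control"] * 3 + ["RSLE"] * 3 + ["Cystone"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```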
RESULTS AND DISCUSSION
The presence of carbohydrates, steroids, saponins, flavonoids, tannins and phenols was confirmed in all three extracts, while alkaloids and glycosides were additionally detected in RSLE.
Substantial amounts of phenols and flavonoids were confirmed in all the evaluated plant extracts. Among the three extracts, the highest concentration of phenolic compounds was recorded for NALE, followed by RSLE and FPFE, while the highest flavonoid content was present in RSLE, followed by NALE and FPFE (Table 1). The FT-IR spectrum of NALE showed absorption bands corresponding to arbortristoside B, similar to those reported by Purushothaman et al. [44].
The FT-IR spectrum of RSLE (fig. 1C) showed the presence of an O-H stretching band at 3265.74 cm⁻¹, C-H stretching at 2929.11 cm⁻¹, and -NH bending and -CH3 bending at 1586.48 cm⁻¹ and 1392.68 cm⁻¹, respectively, representative of primary amines. A sharp peak at 1054.79 cm⁻¹ may be due to C-O stretching vibration of alcohols or phenols, or to C-N stretching vibration of amines.
HPLC analysis of RSLE (fig. 2C) showed the presence of 10 compounds, two of which were correlated to catechin (Rt: 12.989 min) and caffeic acid (Rt: 13.984 min), as in previous studies [5,49]. Marker compounds were not quantified in the extracts, which is a limitation of the study.
A concentration-dependent increase in the inhibition of CaOx crystallization was witnessed with all three extracts and Cystone. RSLE showed significantly better inhibition of the nucleation of CaOx crystals than Cystone at higher concentrations, i.e. 800 µg/ml (p<0.05) and 1000 µg/ml (p<0.01), followed by FPFE and NALE; the percent inhibition of nucleation at the highest concentration (1000 µg/ml) was recorded to be 60.14±3.57 % (fig. 3). CaOx crystals in the control group mainly exhibited the monoclinic or rectangular habit characteristic of Calcium Oxalate Monohydrate (COM) crystals; the Calcium Oxalate Dihydrate (COD) crystals present in the control group were few in number and had sharp edges. A morphological transformation of crystals from COM to tetragonal bipyramidal COD crystals with smooth surfaces and edges was witnessed in the presence of the extracts and Cystone. This effect was most prominent with RSLE (fig. 6), which produced a favorable morphological change in the majority of crystals even at the lowest concentration, similar to Cystone (fig. 7). NALE promoted COD crystal formation at concentrations of 600 µg/ml and above (fig. 8), while this effect on CaOx crystal morphology was less apparent with FPFE (fig. 9).
A concentration-dependent increase in the reduction of CaOx aggregation was witnessed with all three extracts and Cystone, with NALE showing the highest percent inhibition of aggregation, approximately 55 % (fig. 11).
CaOx urolithiasis was addressed in the present study as it is the most challenging type of urolithiasis owing to its largely idiopathic nature [12] and complex etiology [13]; it also represents the most prevalent and recurrent class among all urinary stone diseases [50].
Nucleation, growth and aggregation are key events among the myriad steps involved in stone formation; hence, any alteration in the course of these events brought about by synthetic or natural substances can promote or inhibit calculi formation [51]. The nucleation, growth and aggregation assays used in the present study for evaluating the antiurolithiatic efficacy of the plant extracts are essentially simulations of the crucial predisposing factors for CaOx stone formation inside the body. The OD of the turbid solutions produced by the formation of CaOx crystals on combining CaCl2 and Na2C2O4 solutions was measured spectrophotometrically, as OD is directly proportional to turbidity (τ). This inference derives from the expression τ = 2.303 (OD/l), first devised by Melik and Fogler in 1983, in which 'l' stands for the path length [52].
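The Melik-Fogler relation can be applied directly; a minimal sketch with illustrative values:

```python
# Turbidity from optical density via the Melik-Fogler relation
# tau = 2.303 * (OD / l), where l is the optical path length.
def turbidity(od: float, path_length_cm: float = 1.0) -> float:
    return 2.303 * od / path_length_cm

print(turbidity(0.25))  # OD 0.25 in a 1 cm cuvette (illustrative values)
```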
Nucleation is the preliminary event that marks the process of spontaneous crystallization in a supersaturated solution and serves as a prerequisite to the crystallization and stone-forming events that follow [51]; hence, inhibition of nucleation can play a significant role in inhibiting stone formation. The present study showed beneficial outcomes of RSLE, NALE and FPFE in inhibiting the nucleation of CaOx crystals, even better than Cystone. This was also supported by the microscopic studies of the metastable solutions of CaOx crystals, which showed fewer crystals in the presence of these extracts. This clearly reveals the ability of these extracts to form complexes with calcium and oxalate ions, which would have served as a possible mechanism to reduce relative supersaturation with respect to CaOx and thus inhibit CaOx crystallization. A similar mechanism has been reported for the anticrystallization activity of Sargassum wightii by Sujatha et al. [53]. Moreover, phytochemical investigation of the three extracts revealed the presence of tannins, which are known to aid calcium complexation and thus inhibit CaOx crystallization [54].
Since the FT-IR analysis of RSLE, FPFE and NALE showed the presence of O-H, C-N and -NHCO groups, which are anionic in nature, it can be surmised that calcium complexation by these extracts would have been the more effective mechanism for reducing CaOx supersaturation [55].
Crystal growth denotes an increase in the dimensions of the crystals as a result of the deposition of atoms and molecules onto the existing crystal lattice [56]. Crystal size is a crucial determinant of stone formation, as large crystals pose a risk of occlusion and retention while smaller crystals are spontaneously excreted in urine [57].
In the present study, RSLE, NALE and FPFE showed promising potential in inhibiting CaOx crystal growth. This growth inhibitory effect was also evident from the smaller crystals produced in the presence of these extracts, as inferred from the microscopic investigation. The effect may have resulted from adsorption of the phytoconstituents onto the crystal surface, hindering the addition of cations and anions to the crystal lattice and thereby interfering with the growth of the crystals [58,59].
Crystal aggregation is a key determinant of the stone formation process, as it accounts for crystal retention. Aggregation involves the clustering of numerous crystals into masses of enormous size, and crystal aggregates are a common finding in urolithiatic urine and in CaOx stone matrix [56]. In the present study, RSLE, NALE and FPFE showed promising potential in inhibiting CaOx crystal aggregation. This antiaggregatory effect would have been a consequence of the adsorption of the various phytoconstituents of the extracts onto the CaOx crystal surface, raising the zeta potential of the crystals and rendering them more electronegative, thus hindering crystal-crystal interaction [60] by overcoming the van der Waals attraction forces that hold the crystals together in aggregates [61]. This seems fairly plausible, as the presence of numerous anionic moieties was confirmed through the FT-IR analysis of the extracts, and these can impart negative charge to the crystals. Phytochemical analysis also revealed the presence of saponins and flavonoids in RSLE, NALE and FPFE; flavonoids and saponins are known to induce the disintegration and dissolution of CaOx crystals [62].
The polymorphic forms of CaOx crystals have a remarkable impact on the course of disease progression in urolithiasis. Compared with COM crystals, COD crystals are less injurious to renal epithelial cells owing to their reduced adhesive ability, which deters their attachment to the renal epithelium and hinders their agglomeration [30]. Therefore, the transformation of COM crystals to COD crystals witnessed in the presence of RSLE and NALE shows their immense potential as candidates for drug development for urolithiasis. These observations are in agreement with those reported for Herniaria hirsuta [57] and Holarrhena antidysenterica [30].
The IR spectra of FPFE, NALE and RSLE showed the presence of functional groups characteristic of phenolic compounds, carboxylic acids, amines [63], flavonoids and amino acids [64]. Moreover, peaks in the FT-IR spectrum of NALE indicated the presence of arbortristoside B, an iridoid glycoside [44].
HPLC analysis of FPFE showed the presence of gallic acid, 1,3-O-caffeoylquinic acid and epicatechin; that of NALE showed the presence of gallic acid, chlorogenic acid and iridoid glycosides; and the HPLC chromatogram of RSLE showed the presence of catechin and caffeic acid. These polyphenolic compounds present in the extracts are of added advantage: they possess antioxidant activity which, by inhibiting oxidation-mediated renal tissue damage, prevents the crystal-cell interaction of CaOx crystals with renal tissue and thus further disease progression [3]. Chlorogenic acid, caffeic acid and gallic acid, present in NALE, RSLE and FPFE respectively, have been reported to be strong iron chelators and hence possess a strong ability to inhibit free radical generation and lipid peroxidation [65]. Moreover, chlorogenic acid [66], catechin [67] and caffeic acid, present in NALE and RSLE, also possess anti-inflammatory activity [68], which may be significant in providing symptomatic relief in urolithiasis [3]. Caffeic acid, chlorogenic acid [69] and catechin have also been reported to possess Angiotensin Converting Enzyme (ACE) inhibitory activity [70] and may therefore prove efficacious in ameliorating renal stone disease by inhibiting inflammation of renal tissue and CaOx crystal deposition [3]. Catechin has furthermore been reported to possess antiurolithiatic activity against CaOx crystallization in in vitro and in vivo models of urolithiasis [19].
In vitro studies provide valuable insight into the potential activity-related outcomes of the extracts or compounds under investigation and serve as a platform for preliminary investigations that help in devising future studies. The present study demonstrated the promising antiurolithiatic potential of the hydroethanolic extracts of the fruits of F. palmata and the leaves of N. arbor-tristis and R. sativus in an in vitro setting, of which N. arbor-tristis and R. sativus produced the more pronounced effects in modulating each step of CaOx crystallization. Taking into account the effects at the lowest tested concentration (100 µg/ml), N. arbor-tristis possessed the maximum anti-aggregatory and growth inhibitory activity against CaOx crystallization, which may be an outcome of the higher phenolic content of NALE (3.504±0.137 mg GAE/g extract) compared with RSLE and FPFE. Although the anticrystallization activity of FPFE tested in vitro in the present study was comparatively lower than that of the other extracts, F. palmata has been reported to be a plant of high use value with analgesic activity, which may prove an added advantage in combating urinary stone disease in vivo [71].
The findings of the present study demonstrate the efficacy of R. sativus leaves, F. palmata fruits and N. arbor-tristis leaves in favorably modulating the nucleation, growth and aggregation phases of CaOx crystallization in in vitro settings. Prominent promotion of COD crystallization and suppression of COM crystal formation were witnessed in the presence of R. sativus leaves and N. arbor-tristis leaves. All these effects can be attributed to the saponins, tannins, flavonoids and polyphenolic principles of the tested extracts, and to the ability of the extracts to raise the zeta potential of the CaOx crystals, inhibiting the attachment of ionic entities to the growing crystal lattice and crystal-crystal interaction. Further exploration of the antiurolithiatic potential of these plant extracts in preclinical and clinical settings, and characterization of the active constituents, may lead to the development of new plant-based molecules or products for the treatment and prevention of urolithiasis. R. sativus leaves are widely grown and consumed worldwide and hence could address the enigmatic recurrence of urinary stone disease in afflicted individuals in the form of a commonly consumed household commodity.
"year": 2022,
"sha1": "fc05452424610a061e23dae302376634d7c5f8aa",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ijpsonline.com/articles/ameliorative-effects-of-emraphanus-sativusem-l-emnyctanthes-arbortristisem-l-and-emficus-palmataem-forssk-on-calcium-oxa.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dd52b800902c85afd682b844feb79921af9c6d1d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
3013457 | pes2o/s2orc | v3-fos-license | Contribution of clinical trials to gross domestic product in Hungary
Aim To determine the contribution of clinical trials to the gross domestic product (GDP) in Hungary. Methods An anonymous survey of pharmaceutical companies and clinical research organizations (CROs) was conducted to estimate their clinical trial-related employment and revenues. Clinical trial documents at the National Institute of Pharmacy (NIP) were analyzed to estimate trial-related revenues at health care institutions and the value of investigational medical products (IMPs) based on avoided drug costs. Financial benefits were calculated as 2010 US $ purchasing power parity (PPP) values. Results Clinical trials increased the revenue of Hungarian health care providers by US $165.6 million. The value of IMPs was US $67.0 million. Clinical trial operation and management activities generated 900 jobs and US $166.9 million in revenue among CROs and pharmaceutical companies. Conclusions The contribution of clinical trials to the Hungarian GDP in 2010 amounted to 0.2%. Participation in international clinical trials may result in health, financial, and intangible benefits that contribute to the sustainability of health care systems, especially in countries with severe resource constraints. Although a conservative approach was employed to estimate the economic benefits of clinical trials, further research is necessary to improve the generalizability of our findings.
Active participation in international clinical trials may provide health benefits to patients and financial and professional benefits to health care providers. In lower income economies, such as those in Central-Eastern Europe (CEE), the relative benefits of clinical trials are even greater than in the high income countries of Western Europe and North America. Consequently, the contribution of emerging markets to international clinical trials is growing substantially (1). This phenomenon is especially visible in CEE, where the number of clinical trials has increased significantly over the past 15 years and is expected to increase even further in the near future (2). In CEE, international clinical trials offer opportunities for site personnel to improve their professional networking and be remunerated at higher-than-average income levels. For health care institutions with substantial budget constraints, trial-related payments can represent an important source of liquid cash. A supportive attitude of hospital management toward clinical trial activities, in terms of providing a better working environment or increased remuneration, may help to prevent the migration of qualified professional staff to higher income countries. In CEE countries, the health status of the population is worse than in higher income Western European countries (3) and the accessibility of new medicines is relatively limited (4). Therefore, through clinical trials, CEE patients can obtain access to standardized modern health care services, technologies, and investigational drugs without waiting lists or co-payments. However, investigational medical products (IMPs) may represent considerable health risks for patients.
The societal gain associated with clinical trials is multifactorial. Clinical trials contribute to the evolution of evidence-based medicine. They systematically investigate side effects and health outcomes not only for IMPs but also for the control treatment arms. Therefore, safety information, even about marketed therapies, is captured and no public investment is necessary.
The most tangible benefit may be the financial impact, including the contribution of trials to the revenues of health care providers and clinical research organizations (CROs). However, there are also indirect benefits, such as avoided health care expenses due to the free delivery of IMPs and services.
Few scientific publications have addressed the financial benefits of clinical trials. These publications examined avoided drug costs and additional revenues primarily from the viewpoint of health care institutions (5-9). There is also one Polish study on the national economic impact of clinical trials, but the approach was not comprehensive enough to capture all direct and indirect financial benefits (10).
Hungary currently has a favorable position for implementation of clinical trials (11). It has high-level professionalism at investigational centers, rapid regulatory and ethical endorsements of applications, complex but manageable contracting processes at clinical sites, sufficient contributions to patient recruitment, and high Good Clinical Practice (GCP) quality according to Food and Drug Administration (FDA) inspections (12). However, similarly to other CEE countries, the capacity for clinical trial participation in Hungary has not been maximized. The aim of this study was to determine the contribution of clinical trials to the national economy in Hungary. We estimated the clinical trial-related revenues of CROs, investigators, and health care institutions and the financial benefits of avoided drug costs due to IMPs as the percentage of the gross domestic product (GDP).
Methods
The economic impact of clinical trials was measured from several different perspectives. In 2009, the Hungarian Clinical Trial Management Society (CTMS) and the International Society for Pharmacoeconomics and Outcomes Research Hungary Chapter (ISPOR HCh) (13) obtained information about clinical trial-related revenues among health care institutions and CROs. In 2012, the ISPOR HCh collected additional information about the value of investigational drugs in clinical trials.
In the first step, to estimate the operational costs and the number of clinical research associates and other medical professionals involved in clinical trial activities in 2008, the CTMS conducted an anonymous survey among CRO managers with operations in Hungary and medical directors at research-based pharmaceutical companies. The questionnaire was mailed three times to 65 companies, and 12 questionnaires with a full set of data were returned. The aggregate survey results were assumed to be proportional to the total.
Information on clinical trial-related revenues and employment was validated and consolidated based on the annual balance sheets of Hungarian CROs for the 2008 fiscal year. In Hungary, public and private companies are obliged to provide annual financial data, and the Court of Registration makes these reports publicly available.
In the second step, after having signed the National Institute of Pharmacy (NIP) confidentiality agreement, two health economists from the ISPOR HCh reviewed the master files of clinical trials that were approved in 2008. The NIP approves and controls clinical trials in Hungary and archives master files of all interventional clinical trials, including information on trial budget estimates. The researchers assessed a randomly selected sample of clinical trial master files that were submitted to the NIP for approval. They calculated the total clinical site-related budgets of the clinical trials, including investigator fees and institutional costs. In total, 313 clinical trial applications were approved. Of 59 randomly selected studies, 9 files were excluded because of insufficient information on the site-related budget. The 50 remaining trials were representative of the overall allocation of studies in different clinical trial phases (χ² test, P = 0.6) (Table 1).
As no information was available on the allocation of trial budgets by calendar year, we assumed that the clinical trial revenues of health care providers before a given calendar year were equal to their estimated revenues in subsequent years. This assumption was supported by the fact that the number of clinical trials in Hungary remained relatively constant from 2006-2011 (11). The third step was to estimate the indirect value of IMPs based on avoided drug costs for patients treated in clinical trials. All phase II-IV trials that were licensed in Hungary in 2010 were selected. Phase I and bioequivalence studies were excluded because their participants are healthy volunteers without a need for treatment. After signing the confidentiality disclosure agreement, a health economist retrieved information at the NIP from the master files of the clinical studies approved in 2010, including the European Clinical Trials database (EudraCT) number, detailed characteristics of the investigational compound and its comparator, the dosages of IMPs that were provided free of charge to study participants, and the full study protocol. From the EudraCT database, the following additional information was retrieved by the NIP experts (only authorized individuals are allowed to retrieve data from this international database): clinical trial authorization date, planned number of study participants in Hungary, and Medical Dictionary for Regulatory Activities (MedDRA) categories for the therapeutic area. The value of the investigational compounds was conservatively estimated based on the public price of the study comparator drug or a similar marketed product in the same ATC group or therapeutic area. The price was obtained from the drug list of the Hungarian National Health Insurance Fund (NHIF). In the three cases in which the Hungarian price for a first-in-class IMP or comparator was not available, German drug prices listed on www.medizinfuchs.de were used. The value of rescue medications was assumed to be zero because their use depends on IMPs and not on routine medical care. No additional technological costs were included in the value of IMPs (eg, additional diagnostics). Because the number of clinical trials and the proportions of different trial phases in Hungary had been constant in the years before the study, we assumed that the value of IMPs from clinical trials approved before a given calendar year was equal to the estimated value of IMPs in the subsequent years. In 2010, 262 phase II-IV clinical trial applications were approved. Fourteen clinical trials were excluded due to incomplete data. Therefore, the value of IMPs was estimated based on an analysis of 248 clinical trial master files.
The clinical trial master files contained information only on the planned number of patients. Actual recruitment is usually lower than the planned number due to competitive recruitment among countries. As the actual number of recruited patients was not known, 6 senior managers at different CROs were interviewed. Based on their consensus estimate, actual recruitment was assumed to be 80% of the planned number of patients.
Results
Based on the CTMS survey and data from the annual balance sheets of CROs, approximately 900 professionals, or 1 out of every 4350 Hungarian employees, worked in clinical trial-related functions at CROs and pharmaceutical companies. Based on data from the CTMS survey, the total value of trial management activities was US $166.9 million (in 2010 PPP values). This amount included the gross income of clinical trial professionals at pharmaceutical companies and CROs and other operational costs, such as traveling, office and storage costs, communication and IT costs, and legal and financial counseling expenses, but excluded spending at clinical sites.
According to data from the NIP, the annual revenue of health care professionals and their institutions at clinical sites was US $165.6 million in 2010 PPP values (Table 2), which represented an additional 2.84% of revenues relative to NHIF-funded traditional health care services. A major proportion of clinical site-related revenues represented a personal income source for physicians and nurses.
Significant savings were generated from avoided drug costs for clinical trial participants. The estimated annual financial value of IMPs in phase II-IV clinical trials was US $67.0 million, which was equal to 2.52% of the NHIF pharmaceutical budget. Phase III trials accounted for 65% of the total amount, whereas phase II and phase IV trials accounted for 15% and 20%, respectively. Three disease areas (neoplasms, diseases of the nervous system, and musculoskeletal diseases) represented 75% of the total value of IMPs, although they included only 30% of enrolled patients.
The revenues of health care providers (ie, investigators, hospitals) and the clinical trial industry, and the value of IMPs together amounted to US $399.5 million, which was equal to 0.2% of the GDP (Table 3).
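The aggregation behind these figures is simple arithmetic; a minimal sketch in Python is shown below. The GDP denominator is a hypothetical round figure (roughly US $200 billion at PPP), chosen only so that the share matches the reported 0.2%; the paper itself does not state the denominator it used.

```python
# Minimal sketch of the benefit aggregation reported above; all values
# are in millions of 2010 US $ at purchasing power parity.
components = {
    "trial management (CROs + pharma)": 166.9,
    "clinical site revenues": 165.6,
    "value of IMPs (avoided drug costs)": 67.0,
}
total = sum(components.values())  # 399.5

# Hypothetical GDP denominator (~US $200 billion PPP), chosen only so the
# share matches the reported 0.2%; not stated in the paper.
gdp_2010_millions = 200_000.0

print(f"total benefit: {total:.1f} million US $")
print(f"share of GDP:  {total / gdp_2010_millions:.3%}")
```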
Discussion
Hungarian physicians and patients have been participating in international clinical trials for more than twenty years. However, the related economic benefits have never been assessed. The shared effort of two professional associations, namely, the CTMS (clinical trial managers) and ISPOR HCh (health economics researchers), aims to provide real-world economic data to government officials and politicians in order to obtain strategic support for and engagement with the implementation of international clinical studies.
The three-step survey about the financial impact of clinical trials was based on the best available data, but the generalizability of our findings is limited for several reasons. The sample size of the CTMS survey was relatively small. The review of annual balance sheets included CROs but excluded pharmaceutical companies, as clinical trial-related functions could not be separated from sales and marketing activities. The Directorate General of the NIP does not collect information on the number of patients who complete trials, so the 80% ratio of planned to actual patient recruitment and the average drop-out rate were based on expert opinions. However, the Tufts Center also found a similar value of actual trial enrolment at Eastern European sites in 153 international phase II-III clinical trials (76%) (14). Furthermore, we could not capture the financial benefit related to improved health outcomes, the potential risks related to IMPs, or the economic multiplier effect of clinical trial activities. We also did not capture the revenues or costs related to capacity building of clinical research, including the implementation of GCP training for clinical site personnel and infrastructural development at research sites (eg, phase I research centers). In general, we employed a conservative approach to estimate the economic benefits of clinical trials.
In conclusion, participation in international clinical trials may result in health, financial, and intangible benefits that contribute to the sustainability of health care systems with limited resources. The direct financial benefit of clinical trials, in the form of the revenues of CROs and investigators, contributed 0.163% to the Hungarian GDP, and avoided pharmaceutical spending represented additional indirect benefits that accounted for 0.033% of the GDP. It is difficult to compare our findings to those from other countries as we are unaware of similar studies that have considered various aspects of clinical trial-related economic benefits.
Individual countries can strengthen their market position if policymakers, relevant authorities, and management teams at investigational sites support the implementation of clinical trials (2,15). Long-term national strategies in different areas, including postgraduate education, streamlined ethical and regulatory approval of trials, infrastructural development at trial sites, and the promotion of specific trial management skills and capacities, may improve the competitiveness of countries with severe health care resource constraints. Additional efforts should be made to develop regional cooperation in CEE. Countries with similar backgrounds and geographical and economic statuses can learn from the successful strategies of other countries and eventually improve the competitiveness of the entire CEE region in attracting clinical trials.
Funding None.
ethical approval Not required.
declaration of authorship JA, MP, LN, and ZK conducted the empirical part of the study. ZsSz and CsP participated in the study design. ZK, JA, and MP prepared the draft manuscript. ZsSz, CsP, and LN reviewed and approved the manuscript. The scientific guarantor was ZK. | 2016-06-21T08:51:46.632Z | 2014-10-01T00:00:00.000 | {
"year": 2014,
"sha1": "df4b38cb43a99db23b2ea4d49166fc41f0164785",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4228288/pdf/CroatMedJ_55_0446.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df4b38cb43a99db23b2ea4d49166fc41f0164785",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253752530 | pes2o/s2orc | v3-fos-license | Direct-current-dependent shift of theta-burst-induced plasticity in the human motor cortex
Animal studies using polarising currents have shown that induction of synaptic long-term potentiation (LTP) and long-term depression (LTD) by bursts of patterned stimulation is affected by the membrane potential of the postsynaptic neurone. The aim of the present experiments was to test whether it is possible to observe similar phenomena in humans with the aim of improving present protocols of inducing synaptic plasticity for therapeutic purposes. We tested whether the LTP/LTD-like after effects of transcranial theta-burst stimulation (TBS) of human motor cortex, an analogue of patterned electrical stimulation in animals, were affected by simultaneous transcranial direct-current stimulation (tDCS), a non-invasive method of polarising cortical neurones in humans. Nine healthy volunteers were investigated in a single-blind, balanced cross-over study; continuous TBS (cTBS) was used to introduce LTD-like after effects, whereas intermittent TBS (iTBS) produced LTP-like effects. Each pattern was coupled with concurrent application of tDCS (<200 s, anodal, cathodal, sham). Cathodal tDCS increased the response to iTBS and abolished the effects of cTBS. Anodal tDCS changed the effects of cTBS towards facilitation, but had no impact on iTBS. Cortical motor thresholds and intracortical inhibitory/facilitatory networks were not altered by any of the stimulation protocols. We conclude that the after effects of TBS can be modulated by concurrent tDCS. We hypothesise that tDCS changes the membrane potential of the apical dendrites of cortical pyramidal neurones and that this changes the response to patterned synaptic input evoked by TBS. The data show that it may be possible to enhance LTP-like plasticity after TBS in the human cortex.
Introduction
In the motor cortex, repetitive transcranial magnetic stimulation (rTMS) can increase or decrease the response to a standard TMS pulse for 30 min or more after stimulation. Since these effects are abolished by drugs that block NMDA receptors (NMDAR), they are thought to involve cortical long-term potentiation (LTP)- or long-term depression (LTD)-like synaptic plasticity.
A common type of rTMS that was introduced in 2005 is theta-burst stimulation (TBS) (Huang et al. 2005). TBS can produce significant and long-lasting LTP- and LTD-like effects within very short stimulation periods. If TBS is applied in an intermittent pattern (iTBS) to the human motor cortex, it enhances the amplitudes of motor-evoked potentials (MEP), whereas the application of a continuous train of stimuli (cTBS) suppresses MEPs (Huang et al. 2005). Huang et al. (2005, 2010) speculated that all patterns of TBS produce a mixture of excitatory and inhibitory after effects and that the final effect of any particular paradigm depends on the balance between them. To explain this, they suggested that glutamate release during TBS activates NMDAR of the postsynaptic neurone, which in turn depolarises the neuronal membrane and allows Ca²⁺ to enter the cell. Following one version of the Ca²⁺ hypothesis of synaptic plasticity (Artola and Singer 1993), the authors proposed that LTD-like effects are related to the total amount of Ca²⁺ entry, whereas LTP-like effects are related to the rate of Ca²⁺ entry.
Transcranial direct-current stimulation (tDCS) is thought to polarise cortical neurones and alter their discharge rates by biasing their membrane potentials towards a more depolarised or hyperpolarised state. If tDCS is applied for longer than 5 min, NMDA receptor- and Ca²⁺-dependent long-lasting after effects on cortical excitability are induced (Nitsche and Paulus 2001). Short periods of tDCS (<2 min) have no lasting effects on cortical excitability. There have been a number of studies on the interaction between sequential application of long-lasting tDCS (>5 min) and rTMS (Ridding and Ziemann 2010); however, there are no studies of the effects of concurrent tDCS and rTMS/TBS, apart from one experiment which combined paired associative stimulation (PAS) and tDCS (Nitsche et al. 2007).
We aimed to explore the impact of short-lasting, concurrent tDCS on cTBS and iTBS protocols. Experiments on slice preparations of adult rat visual cortex have shown that the induction of LTP and LTD by burst stimulation depends on the membrane potential of the postsynaptic neurons and that the effects of burst stimulation can be modified by external hyper-/depolarisation (Artola et al. 1990). It was postulated that biasing the membrane potential changed the rate or amount of Ca²⁺ entry into the postsynaptic neurone. Thus, we hypothesised that in humans, concurrent tDCS will modify or bias the responses to TBS according to the polarity of DC stimulation in a similar fashion.
Materials and methods
Nine healthy subjects (two women, seven men, mean age = 30.3 ± 1.5 years) participated in this study after giving informed consent. All subjects were right-handed according to the Edinburgh handedness inventory (Oldfield 1971), and none of the subjects had a history of neurological or mental illness or had metallic cerebral implants. No subject had a history of alcohol or drug abuse and nobody was taking any neuroactive medication. The study protocol, which is in accordance with the Declaration of Helsinki, was approved by the Ethics Committee of University College London.
Theta-burst stimulation

TBS was applied according to previously published protocols (Huang et al. 2005). In short, each burst consists of three stimuli with a repetition rate of 50 Hz, and the bursts were repeated with a frequency of 5 Hz. We applied a continuous train of 200 bursts (cTBS, 600 pulses, LTD-like plasticity) and an intermittent pattern of 20 trains of 10 bursts of 2 s duration with a break duration of 8 s between each train (iTBS, 600 pulses, LTP-like plasticity). The conditioning intensity was set at 80% of the active motor threshold (AMT) elicited by a biphasic stimulator. As we performed TBS through the tDCS electrode (thickness approximately 5 mm), we had to increase the stimulator output to account for the distance between TMS coil and scalp. Therefore, all biphasic thresholds were measured through the electrodes in all experimental conditions.

Transcranial direct-current stimulation

tDCS was applied with an intensity of 1 mA using a commercially available DC stimulator (Eldith, Electro-Diagnostic & Therapeutic Systems GmbH, Germany; distributed by Magstim Co., Whitland, Dyfed, UK) through saline-soaked surface sponge electrodes (35 cm²). In each experimental session, the motor cortex electrode (anode or cathode) was placed over the hot spot as identified by TMS, and the other electrode was placed above the right orbit (Nitsche and Paulus 2000).
Transcranial magnetic stimulation
During all experiments, subjects were placed in a comfortable armchair with head and arms at rest. We recorded surface electromyography (EMG) from the right first dorsal interosseous muscle (FDI) via Ag/AgCl electrodes in a belly-tendon montage. Raw signals were amplified (Digitimer 360, Digitimer Ltd., Welwyn Garden City, Herts, UK), band-pass filtered (10 Hz to 3 kHz) and digitised using a 1401 data acquisition interface (Cambridge Electronic Design Ltd., Cambridge, UK) controlled by Signal software (Cambridge Electronic Design). All data were stored on a computer and analysed offline using Signal software. We controlled for complete relaxation of the target muscle through visual feedback of EMG activity on a computer screen.
During the experiments, the coils were placed tangentially to the skull above the left primary motor cortex (M1) with the handle pointing away at a 45° angle from the midsagittal line. This orientation leads to a posterior-anterior directed current, which is oriented perpendicular to the central sulcus and which is optimal for a predominantly transsynaptic activation of motor cortex neurons (Di Lazzaro et al. 1998). The optimal stimulation point ("hot spot") was defined as the position where single-pulse TMS consistently induced the largest motor-evoked potentials (MEP). To ensure a constant coil position during the experiment, the hot spot was marked with a skin marker.
Monitoring of excitability changes
All measures were performed with a monophasic transcranial magnetic stimulator. Before and 3 min after each stimulation procedure, the resting motor threshold (RMT) and the active motor threshold (AMT) were obtained according to standard publications (Rothwell et al. 1999; Ziemann et al. 1996a). To determine corticospinal excitability (MEP size) before and after each stimulation procedure, single-pulse TMS was performed at an intensity to evoke MEPs of about 1 mV (S1 mV, peak to peak, 0.7-1.3 mV) over the left motor cortical representation of the right FDI. We measured 40 MEPs at baseline and 20 MEPs at different time points (0, 5, 10, 15, 20, 25, 30 min) after the stimulation.
Short-latency intracortical inhibition (SICI) and intracortical facilitation (ICF) were recorded with a standardised paired-pulse protocol (conditioning stimulus: 80% RMT, test stimulus: S1 mV, interstimulus intervals (ISI): 2, 3, 7, 10 and 12 ms; Kujirai et al. 1993). The test pulse was applied 16 times, and all paired pulses were applied 8 times in a randomised order at 0.25 Hz. SICI and ICF were recorded at baseline and 5 min after stimulation. The intensity of the test pulse was not adjusted since percent SICI/ICF is known to be unaffected by test pulse amplitude over the range of MEP sizes used in the present experiments (Ridding et al. 1995).
Experimental design
To assess the effect of the simultaneous application of tDCS and TBS, all nine subjects were tested on six different days, resulting in 54 experimental sessions. The study was designed as a single-blind and balanced complete crossover study in a repeated measurement design. Each subject received the following experimental conditions: cathodal-tDCS + iTBS, anodal-tDCS + iTBS, sham-tDCS + iTBS, cathodal-tDCS + cTBS, anodal-tDCS + cTBS and sham-tDCS + cTBS, in different sessions separated by at least 4 days from each other.
As tDCS and TBS were performed simultaneously, the TMS coil was placed above the tDCS electrode and the TBS stimulation had to be performed through the tDCS electrode (coil-scalp distance: approximately 5 mm). In all iTBS conditions, the duration of the TBS train was 190 s and the duration of tDCS was 180 s (plus 10 s each of fade-in and fade-out). In all cTBS conditions, the TBS train lasted for 40 s and the overlapping tDCS duration was 30 s (plus 10 s each of fade-in and fade-out). For tDCS, this short stimulation period (<200 s) is known not to produce any after effects, although it changes motor cortical excitability during the stimulation (Nitsche and Paulus 2000, 2001). The usual form of sham tDCS applies a ramp of stimulation at the start and end of the stimulation period, with a total duration of 1 min or so, depending on the exact parameters used. Our stimulation period for cTBS was shorter than this, so in this study our sham tDCS consisted of placing the stimulating pads on the scalp without passing any current between them. Since tDCS was always accompanied by concurrent TBS, participants were unable to distinguish this from real tDCS.
Statistical methods

SPSS 18 for Windows was used for all analyses, and the level of significance was defined as α = 0.05. Normal distribution of the data was confirmed using the Kolmogorov-Smirnov test (P > 0.05) for all dependent variables.
Pearson correlation analyses (two-tailed) were performed within the iTBS and cTBS conditions to investigate correlations between the plasticity responses themselves, expressed as the post/pre ratio of MEP size.
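For illustration, the plasticity response and its correlation across conditions can be computed as sketched below; the per-subject ratios are hypothetical stand-ins, not the study's data (cf. the slope of 1.66 reported in Fig. 3).

```python
import numpy as np
from scipy import stats

def plasticity_response(pre_meps_mv, post_meps_mv):
    """Post/pre ratio of mean MEP amplitude for one subject and condition."""
    return np.mean(post_meps_mv) / np.mean(pre_meps_mv)

# Hypothetical post/pre ratios for nine subjects under two conditions
# (e.g. sham-tDCS + iTBS vs. cathodal-tDCS + iTBS); not the study's data.
sham_itbs = np.array([1.10, 0.92, 1.41, 1.18, 1.03, 1.30, 0.85, 1.52, 1.21])
cathodal_itbs = np.array([1.32, 1.01, 1.88, 1.49, 1.12, 1.75, 0.90, 2.10, 1.58])

r, p = stats.pearsonr(sham_itbs, cathodal_itbs)
slope = np.polyfit(sham_itbs, cathodal_itbs, 1)[0]
print(f"Pearson r = {r:.2f}, P = {p:.4f}, slope = {slope:.2f}")
```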
In the linear models, sphericity was tested with Mauchly's test and, if necessary (Mauchly's test P < 0.05), the Greenhouse-Geisser correction was used. All data are presented as mean ± standard error of the mean (SEM), unless otherwise indicated.
Results and statistical analysis
Modification of cortical plasticity

Figure 1 illustrates the effects of the 6 conditioning protocols on MEP amplitude. Figure 1a, b shows the mean data at all timepoints from pre-TBS to 30 min post-TBS (timecourse). Figure 1e provides a visual summary of the results in terms of the percentage change of MEP after application of TBS. As reported in the statistical analysis below, iTBS alone weakly facilitated MEPs, an effect that was strengthened by simultaneous cathodal tDCS, whereas it was unaffected by simultaneous anodal tDCS. In contrast, cTBS alone suppressed MEPs, but in the presence of simultaneous cathodal or anodal tDCS, this was abolished (turned into facilitation).
Since there was no significant "timecourse × condition" interaction in the first RM-ANOVA, we used the mean post-TBS MEP amplitude for the main analysis below (see Table 1). The RM-ANOVA with main factors of "condition" (i.e. TBS/tDCS paradigm) and "time" (pre/post-TBS) revealed significant effects of "condition" (P = 0.030), "time" (P = 0.039) and a "time" × "condition" interaction (P = 0.031). Given the significant interaction, the remainder of the analysis concentrates on comparing conditions within the cTBS and iTBS subsets (Fig. 2).
Effects on intracortical inhibition and facilitation
Since SICI and ICF are thought to be due to different mechanisms (Ziemann et al. 1996b), we analysed the data separately for ISIs of 2/3 ms (SICI), 7 ms (intermediate) and 10/12 ms (ICF). None of the interventions had any effect on these measures, apart from a significant effect of "ISI" in the ICF paradigm. This was due to the fact that ICF was larger at ISI = 12 ms than at ISI = 10 ms (see Table 3).
Discussion
The present results show that the LTP-/LTD-like after effects of iTBS/cTBS on motor cortex excitability are significantly modulated by simultaneous tDCS. Cathodal tDCS increases the facilitatory effect of iTBS and abolishes the effect of cTBS; anodal tDCS has no significant influence on the response to iTBS, but suppresses and even reverses the effect of cTBS. None of the protocols influenced SICI/ICF or threshold measures. The interpretation of these effects is complex because tDCS has three potential consequences that could interact with the LTP/LTD-like changes caused by TBS. Thus, tDCS can (1) bias the membrane potential of cortical neurones and thereby change their response to theta-burst protocols (Artola et al. 1990); (2) change the ongoing level of activity in cortical networks, with anodal stimulation increasing basal activity and cathodal stimulation reducing it (Bindman et al. 1964), which could interact with TBS according to "homoeostatic" rules; (3) if applied for longer than 3 min (Nitsche and Paulus 2000, 2001), cause long-term LTP- and LTD-like changes in synaptic connections that could change the response to TBS.
Interestingly, each of these three possible explanations has been used to account for the results of previous studies of tDCS and TMS. In their initial report, Nitsche & Paulus interacted tDCS and single-pulse TMS. They found that MEPs were enhanced when applied during a short period of anodal tDCS; in contrast, MEPs were reduced by simultaneous cathodal tDCS (Nitsche and Paulus 2001). This finding was replicated in a more recent study (Nitsche et al. 2005) which showed that a short period of tDCS, which elicits no after effects, had no effect on resting or active motor thresholds, nor on the amount of paired-pulse inhibition and facilitation (Kujirai et al. 1993), when measured during tDCS. These changes were interpreted in terms of the tDCS effect on membrane potential. It was suggested that short-lasting tDCS de- or hyperpolarises the cell bodies of corticospinal pyramidal neurones, making the neurons more/less likely to respond with an action potential to a given excitatory input. The lack of effect on SICI during short-lasting tDCS appeared to indicate that GABA-A interneurons were unaffected by tDCS. This is in line with the observation that application of Lorazepam, a GABA-A receptor agonist, had no effect on the response to short-lasting tDCS (Nitsche et al. 2004). The only other experiment investigating simultaneous tDCS and TMS has involved a more complex combination of long-lasting tDCS with the facilitatory paired associative stimulation protocol (PAS) (Nitsche et al. 2007), which is thought to induce a form of NMDA-dependent spike-timing-dependent plasticity (STDP) within motor cortex. Simultaneous application of cathodal tDCS prolonged the LTP-like effects of PAS, whereas anodal tDCS turned the LTP-like effects to inhibition. The authors interpreted these changes in terms of tDCS effects on ongoing cortical activity. They used a homoeostatic argument to suggest that cathodal tDCS reduces background activity (Bindman et al. 1964), which will favour the induction of LTP, whereas enhanced background activity, generated by anodal tDCS, will turn LTP into LTD (Nitsche et al. 2007).

Fig. 3: Correlation (P = 0.008) between the post/pre MEP ratios of cathodal-tDCS + iTBS (y axis) and sham-tDCS + iTBS (x axis). The slope is 1.66.

Table 2: Mean values of resting motor thresholds (RMT), active motor thresholds (AMT), active motor thresholds obtained with a biphasic stimulator (AMT_Biphasic), and TMS intensity to evoke a MEP with a 1 mV peak-to-peak size (S1 mV). Thresholds were measured before and after the stimulation. Data are presented as means ± SEM.
Several other studies have examined what happens when longer periods of tDCS (>5 min) are followed by another plasticity protocol. However, consecutive application of plasticity protocols in awake humans leads to a variety of complex interactions that may or may not follow simple homoeostatic logic (Ridding and Ziemann 2010). This could be due to the fact that long periods of tDCS not only change the prior history of ongoing activity in the network, but can themselves produce LTD- and LTP-like after effects (see Introduction) that may have separate interactions with the plasticity protocol.
As in these previous studies, we can only speculate on which type of interaction might be most likely to account for our present results. We will consider them in turn.
Homoeostatic explanation
The combination of cathodal tDCS with iTBS resulted in an enhanced LTP-like plasticity, and cathodal tDCS abolished LTD-like plasticity after cTBS. Both of these effects could result from "homoeostatic" interactions as proposed in the tDCS/PAS study by Nitsche et al. (2007). Cathodal tDCS might reduce ongoing activity and enhance LTP-like effects of iTBS whilst reducing LTD-like effects of cTBS. However, the effects of anodal tDCS are less compatible with this explanation. Anodal tDCS had no significant effect on the response to iTBS, although it can be argued that since the effect of iTBS-sham was relatively small, changes produced by anodal tDCS may not have been detectable in the present experiments. A more important exception to the homoeostatic explanation was the fact that anodal tDCS changed the response produced by cTBS from inhibition to facilitation. This is contrary to the expectations of homoeostatic rules, in which it should promote LTD-like effects.

Membrane potential explanation

Artola et al. (1990) found that in slice preparations, depolarisation of the postsynaptic membrane favoured production of LTP, whereas hyperpolarisation initially favoured LTD; if hyperpolarisation was too strong, then neither LTP nor LTD could be produced. First, consider the effects of cathodal tDCS, which were well fitted by the homoeostatic explanation above. Cathodal tDCS hyperpolarises the cell bodies of pyramidal neurones in the cortex whilst simultaneously depolarising their distal dendrites (Creutzfeldt et al. 1962). If the synapses activated by TBS are onto distal dendrites, then cathodal tDCS should favour LTP-like changes. The result would be the same as the homoeostatic explanation above: cathodal tDCS would increase the response to iTBS and reduce the response to cTBS. In contrast, anodal tDCS depolarises the cell bodies of pyramidal neurones whilst hyperpolarising their dendrites: hyperpolarisation of the postsynaptic membrane initially favours induction of LTD-like effects, but then at greater levels of hyperpolarisation, no LTD-like effects can be produced. If anodal tDCS had this latter effect, then the response to cTBS would be abolished. However, anodal tDCS did not simply suppress the response to cTBS, it changed its polarity from depression to facilitation. One explanation for this relates to the basic mechanism of inhibition following cTBS600. Gentner and colleagues noted that short periods of cTBS (300 stimuli) can produce facilitation rather than inhibition, and they suggested that longer periods of cTBS600 yielded inhibition because of excess Ca²⁺ entry into the neurones provoked by continuous stimulation (Gentner et al. 2008). Hyperpolarisation of the dendrites by anodal tDCS might lower excitability and reduce the Ca²⁺ overflow induced by cTBS. A reduction of Ca²⁺ overflow after cTBS600 might lead to the facilitatory effect seen after cTBS300.
Interactions with LTP/LTD-like effects of tDCS and TBS
In our study, tDCS was given for a very short period of time (<3.5 min when simultaneous with iTBS; <1 min when simultaneous with cTBS). When applied alone, tDCS of this duration has no lasting LTP- and LTD-like effects that might interact with those of TBS (Nitsche and Paulus 2001). Thus, any effect that tDCS has on the response to TBS is likely to occur because it changes the way neurones react when TBS is applied, rather than because of some complex interaction between their respective LTP/LTD-like after effects. However, it is not possible to rule out subthreshold persisting changes from even short-duration periods of tDCS, which could potentially interact with the lasting effects of TBS. This could perhaps be addressed in future experiments in which sequential applications of very short periods of tDCS and TBS could be explored to test for time-dependent interactions between protocols.
Effects on SICI/ICF and motor thresholds
None of the protocols had any effect on SICI/ICF or thresholds. Since neither AMT nor RMT are affected individually by tDCS or TBS, it is not surprising that the combined intervention had no effect on these parameters (Huang et al. 2005; Nitsche and Paulus 2001). Both SICI and ICF have been reported to change after separate application of both TBS and long periods of tDCS. However, a very short period of tDCS as used here (<3 min) is not known to have any lasting effects on either circuit. At first sight, our SICI/ICF findings seem contrary to the original observations of Huang et al. (2005), who reported that SICI was suppressed after cTBS and enhanced after iTBS; in addition, ICF was reduced after iTBS. However, these effects were maximal about 10 min after stimulation. Because of the need to evaluate changes in MEP as well as thresholds, our measures of SICI/ICF were taken at 5 min and therefore we may have missed these baseline effects. In addition, our strong conditioning pulse (80% RMT) may be a methodological limitation and may result in floor or ceiling effects in our paired-pulse experiments. Finally, it is possible that the SICI and ICF networks are not involved in the generation of the observed effects.
Limitations
In contrast to the original TBS publication (Huang et al. 2005), we found only a numeric enhancement of MEPs after iTBS (20%, compared with up to 100% in the original publication) and only a moderate inhibition of MEPs after cTBS (25%, up to 50% in the original publication). Nevertheless, MEPs were significantly larger following sham-tDCS + iTBS compared with sham-tDCS + cTBS. Our data are more similar to those of Todd et al. (2009), who reported no significant effect for either iTBS or cTBS and who postulated that this was related to the large inter-subject variability in response to TBS. Finally, it should be considered that we performed TBS through a tDCS electrode (thickness 5 mm). Although we corrected for this coil-to-scalp distance by increasing the stimulus intensity, a change in the distribution of the electrical field may have contributed to the reduced TBS effects. One study found modulations in cortical network activation related to the distance between coil and skull (Cukic et al. 2009), and it is possible that this could have confounded our findings.
Conclusions
In summary, the present study indicates that tDCS has the potential to interact with motor cortex plasticity generated by TBS. Our results give slightly stronger support to the membrane polarisation hypothesis than to the "homoeostatic" interaction, but we would not dismiss more complex accounts involving mixtures of several effects. With regard to the clinical application of non-invasive brain stimulation, it may be important to combine stimulation protocols to obtain optimal stimulation effects. Our results show that concurrent cathodal tDCS can stabilise the effect of iTBS and turn the usual cTBS effect into LTP-like plasticity, which may be useful in future clinical applications. This might have further importance for diseases with ongoing activity modifications of cortical areas (e.g. schizophrenia, Tourette's disease) which are treated with TBS.
"year": 2011,
"sha1": "0bec5a3ebfa81a354fbf6caba4ee6b35e58a7b7e",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00221-011-2968-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "0bec5a3ebfa81a354fbf6caba4ee6b35e58a7b7e",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": []
} |
54440172 | pes2o/s2orc | v3-fos-license | Learning Representations of Social Media Users
User representations are routinely used in recommendation systems by platform developers, targeted advertisements by marketers, and by public policy researchers to gauge public opinion across demographic groups. Computer scientists consider the problem of inferring user representations more abstractly; how does one extract a stable user representation - effective for many downstream tasks - from a medium as noisy and complicated as social media? The quality of a user representation is ultimately task-dependent (e.g. does it improve classifier performance, make more accurate recommendations in a recommendation system) but there are proxies that are less sensitive to the specific task. Is the representation predictive of latent properties such as a person's demographic features, socioeconomic class, or mental health state? Is it predictive of the user's future behavior? In this thesis, we begin by showing how user representations can be learned from multiple types of user behavior on social media. We apply several extensions of generalized canonical correlation analysis to learn these representations and evaluate them at three tasks: predicting future hashtag mentions, friending behavior, and demographic features. We then show how user features can be employed as distant supervision to improve topic model fit. Finally, we show how user features can be integrated into and improve existing classifiers in the multitask learning framework. We treat user representations - ground truth gender and mental health features - as auxiliary tasks to improve mental health state prediction. We also use distributed user representations learned in the first chapter to improve tweet-level stance classifiers, showing that distant user information can inform classification tasks at the granularity of a single message.
David Yarowsky convinced me to come to Hopkins with a hard pitch for JHU during a meal at the Helmand. The pitch was effective.
Despite being my first and only Ph.D. advisor, I can confidently say that Mark Dredze is an excellent advisor. He is inexplicably optimistic when presented with lukewarm results and has always given me the freedom to pursue questions that interest me. I value the time he gave me to implement and understand the models I worked with, and the opportunity to help manage other student projects. I am grateful to him not only because he is my advisor, but also because he is a pretty good one. I hope that one day Mark will have a bubble soccer-amenable lab.
iii The CLSP is one of the strongest NLP groups in the world and the quality of the other students and faculty continually impressed me; it was an honor to be accepted as a student here. Adam Teichert and Michael Paul were responsible for mentoring me on several projects and gave me a crash course in machine learning debugging. Michael Paul, in Most of all, I owe this whole grad school stint to Kika's patience, emotional support, and ultimately gentle nudging towards graduation. Thank you for sticking around this long. I love you and I hope you decide to stick around a while longer. v Li, Ritter, and Jurafsky (2015) to learn user representations in an MTL setting. The user representations that are learned are depicted by the blue vectors e u and e v , and components for the text modeling and friendship prediction tasks are separated by dotted lines. The text prediction task is addressed by a multinomial logistic regression model -the feature set is the mean of average context word embeddings and a user's vector representation.
The friendship prediction task is defined as a parameterless logistic 4.1 Graphical model of LDA (left) and DMR (right) in plate notation.
The key difference between these topic models is that DMR includes document-dependent features, α, that affect the document-topic prior through log-linear weights, η, shared across all documents. LDA conversely shares the same document-topic prior for all documents.
Motivation
Social media platforms offer researchers and data scientists a massive source of usergenerated data including not only what users say, but who they are friends with, their self-reported descriptions, and which posts they like. Social media data is valuable to two major groups of stakeholders: Technologists and social science Researchers.
Technologists are focused on engineering and are concerned with either maintaining and augmenting the social media platforms themselves, or building tools that perform well on social media data. Social science researchers treat social media data as a lens on society, an imperfect version of how humans communicate with each other naturally. They use social media data to answer deep questions about people and the world at large, and only care about building strong tools insofar as these tools can help judge hypotheses. Each of these groups has a different set of tasks to complete and questions they want to answer. 1 Technologists Maintainers of social media platforms routinely use user-generated data to improve their products. These include improving the platform's friend recommendation system (Hannon, Bennett, and Smyth, 2010;Konstas, Stathopoulos, and Jose, 2009) and content recommendation or feed optimization (Kramer, Guillory, and Hancock, 2014;Chen et al., 2012;Yan, Lapata, and Li, 2012;Guy et al., 2010). These features are tuned to retain users and increase the "addictiveness" of the platform. Advertising revenue is the foundation of many social media platforms' business models. Platforms such as Facebook specifically attract advertisers by using user data to better predict advertisement click-through rate.
Natural language processing (NLP) can be conceptually decomposed into an array of subtasks around automatically extracting information from human-generated text. Practitioners build tools to address each of these subtasks: part-of-speech taggers, syntactic parsers, sentiment analyzers, semantic parsers, etc. These tools were traditionally trained to perform well on standard text such as newswire, and extending them to perform well on social media posts is an active area of research (Gimpel et al., 2011;Rosenthal, Farra, and Nakov, 2017;Strauss et al., 2016;Daiber and Goot, 2016).
Researchers

Social media data can also be used to test theories of how information flows through social networks (Wu et al., 2011), how these networks are structured (Martin et al., 2016), and how to identify and quantify social influence (Bakshy et al., 2011). Hypotheses which were theoretically motivated, or were empirically validated by painstakingly compiling word-of-mouth data, can be tested at scale in observational studies on social media (Jansen et al., 2009). The persuasiveness of these observational studies hinges on arguing that the online behavior is evidence of a causal relationship, and this causal argument often relies on controlling for potential confounds in social media data (Tan, Lee, and Pang, 2014).
Showing that social media data merely has predictive power for real-world happenings may also be sufficient when building predictive models. Predictive models of real-world trends such as disease incidence, stock market prices, and sentiment on public policy issues based on messages people post to Twitter can be used as surrogates for more traditional surveys (Tumasjan et al., 2010;Paul and Dredze, 2011;O'Connor et al., 2010a;Bollen, Mao, and Zeng, 2011).
In general, user-generated social media data is attractive to both groups for the following properties:

1. Social media data can be used as a proxy for actual human interactions. This is most relevant to Researchers, but also to Technologists who would like to extend their systems to noisier, more naturally-produced language than news articles.
2. The data are also multi-modal and offer several views of these interactions. Take for example users' friending and messaging behavior against images or videos they post. Multiple views of user behavior can be used to build stronger tools, but can also suggest different hypotheses to test.
3. The sheer quantity of data is massive.
4. Social media data is produced and effectively updated in realtime. Consequently, models can be frequently updated to stay fresh and relevant. Hypotheses can also be tested as time progresses to validate that past findings hold in the present.
Some benefits such as quantity and rate of updating are shared with other online data sources such as server and web search logs. However, these other data sources do not capture natural human interactions.
Problems Associated with Social Media Data
Social media data can be a source for building robust NLP tools, predictive models of real-world trends on online activity, and testing social scientific theories. However, there are several fundamental problems that anyone who wants to build tools using these data must overcome. Figure 1.1 is an example of a message posted on Twitter (tweet) that demonstrates many of these problems.
Feature Sparsity

NLP models often rely on token n-gram features to make predictions. Longer document lengths allow these models to generalize well. On social media, messages with only a handful of tokens are common, leading to very sparse feature vectors. Additional sparsity arises from the fact that conversations on social media span many domains, even when one is restricting to messages made in English, for instance. This is further exacerbated by frequent typos, butter fingers, and intentionally alternate spellings. The vocabulary size is larger for social media messages than restricted domains such as Wall Street Journal articles.

Figure 1.2: Following tweet with the mysterious "covfefe".
Context

Most importantly, the context of this tweet is absent from the tweet itself.
"Despite the negative press covfefe" is not a complete English sentence. However, knowing that the user posting this tweet is the current President of the United States of America who has a confrontational history with the press, one can infer that "covfefe" was meant to be the word "coverage" and that this was meant to be followed by some self-aggrandizing statement. Context can also include previous activity within the social media platform. In this case, context can be previous messages within a discussion, or other messages posted with similar content. Figure 1.2 shows another message where humor is dependent on awareness of the original presidential covfefe.
Lack of context is a fundamental problem in NLP, since natural language regularly refers to events and entities in the real world, but it is exacerbated on social media primarily because of short message length.
User Features to Alleviate Problems with Social Media
Knowing the context around a social media message is key to understanding its meaning. Author demographic features, such as age, gender, and socio-economic class are an old and critical piece of this context. Demographic features are traditionally treated as categorical, where users that fall in the same demographic category can be thought of as belonging to the same "hard cluster".
Hard User Clusters

Partitioning a population by some subset of stable properties and then describing the behavior of each of these subgroups is one way to use user features to improve social media systems. This idea has been applied in several fields:

1. Marketing: Market segmentation, particularly demographic segmentation, is a classic marketing strategy. Smith (1956) described market segmentation and contrasted this strategy with product differentiation, which refers to supply-side heterogeneity (distinguishing one's product from the competition).
Although Smith considered market segmentation abstractly: "Segmentation is based upon developments on the demand side of the market and represents a rational and more precise adjustment of product and marketing effort to consumer or user requirements", demographic quantities such as gender and age typically defined the boundaries between different market segments. This was because these boundaries aligned with dominant stereotypes (e.g. men buy lawnmowers and women buy dish soap) and these characteristics could be reliably quantified. This strategy has been transplanted to political campaigns as well -instead of hawking soap, campaigns sell a candidate or policy platform.
More fine-grained targeting has also been used to better appeal to consumers. Approaches include using psychographic properties to define groups or identifying consumers with specific interests or habits (gardeners, bicyclists, latte-drinkers), as is offered by the Facebook advertising platform. Nevertheless, partitioning the market into broad clusters based on a set of categorical indicators is the norm.
2. Computational social science/policy: Social scientists and public policy researchers are interested in characterizing a population by different groups for essentially the same reason as marketing scientists: policy opinions or beliefs are not homogeneously distributed throughout a population. Public policy surveys reflect this by disaggregating public opinion by different groups, often along demographic features. Exploration of the best subset of features to segment a population into homogeneous subgroups is an active area of research in public health (Boslaugh et al., 2004).
3. NLP: There has also been interest in using author features to improve performance at standard NLP tasks including sentiment analysis and part-of-speech tagging (Hovy, 2015). Inferring latent user features from social media has been thoroughly explored in NLP, either predicting typical demographic properties (Volkova et al., 2015a) or less typical features such as profession or interests (Beller et al., 2014).
Proposal: User Embeddings
We instead propose learning distributed user embeddings based on a user's online behavior. A user embedding, in this context, is a vector of real numbers, that succinctly captures properties of that user. Users with similar embeddings should behave similarly on social media.
The process of learning user embeddings is inspired by work in NLP on learning word and generally text embeddings. Word embeddings are vector representations of words where closeness is able to capture semantic and syntactic relationships between words. The key component in learning word embeddings is that they are trained to be predictive of the word's context, where context is defined as the identity of surrounding words (collocations) (Mikolov et al., 2013a). Surrounding words make for good context when learning word embeddings, but there is no clear analog for "user context".
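For concreteness, a minimal sketch of training such skip-gram word embeddings with the gensim library (4.x API assumed) follows; the toy corpus and all hyperparameter values are illustrative only.

```python
from gensim.models import Word2Vec

# Toy corpus: one tokenized message per list; real training would use
# millions of tweets.
sentences = [
    ["despite", "the", "negative", "press", "coverage"],
    ["the", "press", "coverage", "was", "negative"],
    ["user", "context", "has", "no", "clear", "analog"],
]

# Skip-gram (sg=1) with negative sampling, as in Mikolov et al. (2013a).
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1,
                 sg=1, negative=5, epochs=50, seed=0)

print(model.wv.most_similar("press", topn=3))
```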
Social media user activity is arguably much richer than the appearance of a word in documents. A user can be characterized by the friends they connect to, the messages they post, or the articles and images they share. In order to incorporate multiple behaviors, we propose using multiview representation learning techniques to learn user representations that capture multiple views of online activity simultaneously. We also learn user embeddings with standard dimensionality reduction techniques and evaluate their effectiveness at several downstream social media tasks.
Continuous Relaxations of Other Hard Clusters
Using vector-valued user embeddings rather than categorical user features to improve classifiers is an old idea and has analogs in several other domains:

• Recommendation systems can be broadly categorized into those that make recommendations based on which subgroup a user belongs to (content-based filtering) vs. those that make recommendations based on preferences of similar users, where similarity is defined as "rating items similarly" (collaborative filtering). Content-based methods cluster users by provided features, such as given demographics or stated genre preferences, and make recommendations based on similarity in this feature space, although these user representations may also be distributed (e.g. a user embedding learned from a free-text description field) (Adomavicius and Tuzhilin, 2005). Collaborative filtering approaches based on factorization of a large user-item matrix are similar in spirit to our distributed user embeddings.
• In NLP, distributed word embeddings were preceded by Brown clusters, which are hierarchical cluster representations of words (Brown et al., 1992). Words that share more parent nodes in this hierarchy tend to be more similar semantically than those that do not.
Contributions
In this thesis we learn distributed social media user embeddings and evaluate how well these embeddings and traditional user features improve downstream tasks. In the process, we present machine learning models to both learn and use user features.
There are two main thrusts of this thesis: methodological contributions in learning user embeddings, and evaluating these user features at improving downstream tasks.
We learn the following user embeddings:

• Principal component analysis (PCA) embeddings of the ego user's message text.
We also consider PCA reductions of different user activity views such as the friend, follower, mentioned user networks, as well as reductions of the text of those groups.
• Generalized canonical correlation analysis (GCCA) derived embeddings, where user views are different types of user activity. We present two novel, orthogonal extensions of GCCA: to discriminatively weight the reconstruction error of each view, and to learn nonlinear transformations from observed to latent space.
We evaluate user features and pretrained user embeddings at improving performance on the following tasks:

• User-level hashtag prediction: Predicting whether or not a Twitter user will use a particular hashtag in a future tweet.
• Friend recommendation: Predicting whether Twitter users have established a friend relationship with each other.
• Demographic prediction: Predicting users' age, gender, or political affiliation.

Figure 1.3: Classification scheme of methods we explore for learning user embeddings and how they are evaluated.
• Topic model fit: Improving the quality and fit of supervised topic models on corpora of social media posts.
• Tweet-level stance classification: Predicting the opinion expressed in a tweet with respect to a specific issue.
Model Implementations: A main contribution of this thesis is the set of publicly released implementations of the models presented here. Many of these methods cannot be applied naïvely to real-world datasets: scaling them up to the number of examples and feature dimensionality present in real-world datasets, accounting for missing data, and differentially weighting the importance of views are all non-trivial challenges. Since publication, these implementations have been used by researchers to learn relationships between many different kinds of data sources, such as substance abuse and social media language (Ding, Bickel, and Pan, 2017) and speech and cognitive impairment features, 4 as well as to learn multimodal representations of video (Tsai and Kender, 2017).
Overview
We present several methods for learning embeddings and evaluate according to many objectives. Figure 1.3 classifies the different methods we explore, the evaluation tasks we perform, and what function each serves in this thesis. Newly-developed models and new datasets are denoted by * and †, respectively.

4 The wGCCA implementation was shared with the Remote Monitoring of Neurodegeneration through Speech team at the Third Frederick Jelinek Memorial Summer Workshop (JSALT 2016). The dGCCA implementation was also extended and applied by the Grounded Sequence to Sequence Transduction team at JSALT 2018.
Chapter 2 contains background on user features and embeddings in social media research as well as the methods we use in this thesis to learn and utilize learned user embeddings: multiview representation learning and multitask learning.
Chapter 3 describes how user embeddings can be learned by unsupervised multiview learning techniques, and analyzes the efficacy of different views on downstream embedding performance in predicting which hashtags a Twitter user will mention, who they will friend, and their demographic features. We present an extension of generalized canonical correlation analysis (GCCA), weighted GCCA, to learn user embeddings. Weighted GCCA and the demographic prediction experiments were presented in Benton, Arora, and Dredze (2016), published as a short paper in the Proceedings of the Conference of the Association for Computational Linguistics (ACL) in 2016. The experiments with deep GCCA were presented in an arXiv preprint. Robust LasCCA was developed and implemented during an internship at Amazon Research.
In Chapter 4, we evaluate using (distant) author-level features to better fit topic models to social media messages. We then describe a supervised topic model, deep Dirichlet Multinomial Regression, that can make effective use of user embeddings to better fit social media text in lieu of explicit author features. This model was originally presented in Benton and Dredze (2018a).

Chapter 5 describes work in leveraging several mental health conditions of Twitter users to better predict suicide risk from their social media posts. It also considers how including user features such as demographics as auxiliary tasks can improve mental condition prediction. This work was published in Benton, Mitchell, and Hovy (2017), a long paper in the Proceedings of the European Chapter of the Association for Computational Linguistics (EACL), 2017.
Chapter 6 describes a final application of embeddings, where we show how the embeddings learned in Chapter 3 can be used to learn stronger tweet stance classification models in a multitask learning framework. Neural classifiers can be pretrained to predict generic user embedding features for a general set of users before being fine-tuned on a specific task. This can alternately be read as an extension of Chapter 5, treating learned user embeddings, rather than categorical labels, as auxiliary tasks. This work was presented at the 4th Workshop on Noisy User-Generated Text (W-NUT) (Benton and Dredze, 2018b).
Chapter 7 summarizes the contribution of each chapter and provides direction for future work.
Chapter 2: Background

This chapter begins in Section 2.1 with a discussion of existing work in applying user features to improve downstream systems, followed by sections on methods we will use to learn user representations and integrate them into existing models. Section 2.2 can be read as a primer on multiview representation learning that covers the basics of canonical correlation analysis and extensions to more than two views and nonlinear mappings. We primarily use these methods to learn user embeddings in Chapter 3. Section 2.3 finally describes the multitask learning paradigm, which is used to inject user information into trained models in Chapters 5 and 6.
Applications of User Features
User features and representations have been shown to help in a variety of downstream tasks. Here we give a selection of tasks that benefit from stronger information about the user.
Inferring Latent User Features
We rarely have direct access to latent user features such as gender, personality, socioeconomic class, or political affiliation. Models that can infer these traits are particularly desirable, since the inferred demographics can be used as proxies for many different kinds of behaviors we may want to predict.
Standard features include word (Rao et al., 2010) and character n-gram features (Pennacchiotti and Popescu, 2011) of messages users post, output from NLP systems such as word stems or part-of-speech tags (Al Zamal, Liu, and Ruths, 2012; Nguyen et al., 2013; Preoţiuc-Pietro and Ungar, 2018), and topic and word embedding features (Pennacchiotti and Popescu, 2011; Preoţiuc-Pietro and Ungar, 2018). Dictionary features such as the Linguistic Inquiry and Word Count (LIWC) are also popular, especially since these features have been shown to correlate with meaningful demographic and psychometric user properties (Tausczik and Pennebaker, 2010). It is also common to draw on features of the local network, such as the identities of neighboring users or the text that they post (Culotta, Ravi, and Cutler, 2016; Yang and Eisenstein, 2017). This is done either by aggregating information from friends or followers of the source user into a feature vector (Al Zamal, Liu, and Ruths, 2012), or by sharing predictions made on neighboring users through the social graph (Yang and Eisenstein, 2017). Work that takes the latter approach exploits homophily, the tendency of similar users to establish connections with others in the social graph.
Mental Properties Although more recent, there is also a community around predicting less traditional user properties such as mental health (De Choudhury et al., 2013; Coppersmith et al., 2015a; Coppersmith et al., 2016) and user personality (Schwartz et al., 2013a; Schwartz et al., 2014; Preoţiuc-Pietro et al., 2015). Similar to predicting demographics, the typical approach is to train supervised models to predict these features. A major difficulty with learning mental health classifiers is that, unlike demographic features such as gender which are relatively easy to annotate, mental health is a particularly sensitive characteristic. Not only are subjects reluctant to divulge this information, but care must be taken by researchers even when mental health status is inferred (Benton, Coppersmith, and Dredze, 2017).
One clever approach is to treat public messages in which users self-report having a particular condition as genuine diagnoses (Harman, Coppersmith, and Dredze, 2014; Coppersmith et al., 2015b). Padrez et al. (2015) assembled a parallel corpus of electronic health records alongside Twitter and Facebook posts. However, these subjects opted in at a single hospital emergency department, and therefore the number of positive examples for any single mental health condition is small.
Personality is a less sensitive target to predict, since users regularly subject themselves to online personality tests (Kosinski, Stillwell, and Graepel, 2013;Plank and Hovy, 2015). Nevertheless, knowing user personality has implications for predicting future behavior (Cadwalladr and Graham-Harrison, 2018), and it is unclear how comfortable users are with personalized inferences made about them without their consent.
Recommendation Systems
Recommendation systems can be grouped into two main classes based on how recommended items are ranked: collaborative filtering and content-based. Content-based systems are more strongly dependent on the user profile, since recommendations are based on representations of the user and the item being recommended. Collaborative filtering systems make recommendations based on prior consumption. Collaborative filtering systems have a hard time making useful recommendations early on because they rely on a history of the user's consumption; this is known as the cold-start problem (Adomavicius and Tuzhilin, 2005). Content-based systems are not as susceptible to the cold-start problem, since they can make recommendations based on extraneous user factors (e.g. a user description that is populated at enrollment). Although systems rarely fall squarely in one category or the other, this remains a useful dichotomy.
Recommendation systems on social media platforms either recommend content to consume or recommend friends to connect with (Phelan, McCarthy, and Smyth, 2009; Leskovec, Huttenlocher, and Kleinberg, 2010). Facebook and Twitter news feeds are examples of content recommendation systems operating over the space of other users' messages. Predicting whether or not a user would click on an advertisement can also be viewed as a recommendation system, where the items that are recommended are advertisements for various products (Lohtia, Donthu, and Hershberger, 2003; Dembczynski, Kotlowski, and Weiss, 2008). In this case, clicking on an advertisement constitutes consuming the item. Note that a content-based approach, modeling the user, is critical since ad clicks are very rare events (Wang et al., 2011).
Social Science
Social media data provides researchers with a platform to study the effects of human relationships and social networks on behavior. One goal of social media analytics is to replace traditional survey mechanisms (Thacker and Berkelman, 1988; Krosnick, Judd, and Wittenbrink, 2005) by monitoring messages posted on social media. Although the surveys that are simulated are most regularly seen in political polling (Tumasjan et al., 2010), they also appear in tracking disease and public health (Paul and Dredze, 2011; Culotta, 2014), and opinion related to public policy issues (O'Connor et al., 2010a; Stefanone et al., 2015; Benton et al., 2016a).
However, social media users are a biased sample relative to the general population (Ruths and Pfeffer, 2014). This presents difficulties when predicting survey responses directly from online messages. One way to account for difference in the populations is to adjust one's predictions based on demographic features of the social media population you are measuring. Inferred demographics can be used to appropriately adjust for bias on social media (Culotta, 2014).
User features are also important to control for as potential confounds when measuring influence in social networks (Hill, Provost, and Volinsky, 2006). Aral, Muchnik, and Sundararajan (2009) find that features such as a user's demographics explain most of the tendency to adopt a mobile application in an instant messaging network. Not controlling for homophily in the network means that the effect of social influence (one user adopting the application leading their friends to adopt it) is overestimated, since users with a natural propensity to adopt will share other features that also make them more likely to be linked. Global position in the social network may also be used as a substitute for latent user features (Hill et al., 2011).
Message-Level Prediction
User information can help systems make better predictions for single messages/documents, even when not clearly related to the message-level prediction task. Work on improving NLP systems by conditioning on user demographics is a key example. Hovy (2015) shows that training separate classifiers for product reviewers of different gender and age can improve accuracy at predicting product category and rating. Johannsen, Hovy, and Søgaard (2015) show that author gender is predictive of certain types of syntactic patterns in online product reviews. This suggests that knowing features of the user writing a review could improve syntactic parsing of sentences. Similarly, Hovy and Søgaard (2015) show that author age affects the performance of already-trained part-of-speech taggers, suggesting a disparity in the way younger vs. older authors use language.
Instead of relying on ground truth user features, messages can be conditioned on generic user embeddings. For example, Amir et al. (2016) condition a sarcasm detection model on user embeddings learned from each author's previous posts.

Multiview Representation Learning

The goal of multiview representation learning is to learn representations for a class of objects that capture correspondences between multiple feature sets, or views, associated with each object. We learn these representations because we believe they will be predictive of some latent object property, a useful component in a downstream system. These feature sets often correspond to multiple modalities. Take for example the X-ray microbeam dataset, a corpus of speech utterances containing acoustic measurements paired with the position of speech articulators (Westbury, 1994). Multiview learning methods have successfully been applied to this data to learn representations predictive of what phone a person is uttering at each frame.
Multiview methods are applied under the assumption that each view is sufficient to predict a target of interest given enough training data (Kakade and Foster, 2007).
However, we almost never have enough training data, so variance in our small training set will obscure the mapping from input features to target. By learning a representation of what is common between views, discarding uncorrelated noise, we ignore uninformative variance in our input features and yield better downstream performance.
Single-view dimensionality reduction techniques such as principal component analysis may discard variance in the data that happens to be correlated across views, simply because it treats the input features as a single feature set.
In this thesis, we learn multiview user embeddings derived by applying variants of canonical correlation analysis (CCA), an old statistical technique for finding linear transformations of two random variables such that they are maximally correlated (Hotelling, 1936). In this section we describe the CCA problem and present solution derivations. We also describe objectives extending CCA to learn nonlinear mappings between two views, nonlinear kernel CCA and deep CCA. We finally discuss MAXVAR generalized CCA, an extension to maximizing correlation between more than two views. See Uurtio et al. (2017) for another CCA tutorial, a discussion about using it as an analysis tool, and interpreting the learned embeddings.
Other Multiview Techniques Although in this thesis we learn user embeddings with methods related to CCA, there is a long history of using multiview techniques to learn representations as well as classifiers. Below is a selection of related methods.
Co-training is a semi-supervised approach for training a robust classifier from few labeled examples (Blum and Mitchell, 1998). In this method, the feature set is partitioned into two views, and a separate classifier is trained independently on each view. An unlabeled dataset is then tagged by each classifier, and the unlabeled data along with the predicted labels are used to augment the other classifier's training set. This entrains each classifier to make similar predictions from different feature sets. This framework has applicability beyond learning classifiers, and has also been applied to the problem of multiview clustering (Kumar, Rai, and Daume, 2011).
Siamese networks are a class of neural models that can be applied to multiview representation learning (Bromley et al., 1994). In this framework, each view is passed through a network and network weights are trained to minimize the ℓ 2 distance between the siamese network output layers. This is similar in spirit to CCA, where as we will show, correlation between two views is maximized.
Another related class of models are multiview probabilistic generative models, where a latent variable is assumed to govern the distribution of several observed views.
Topic models that infer a shared distribution over topics for multiple document views (e.g. the body and title of a news article) are one class of models (Ahmed and Xing, 2010). CCA has a corresponding probabilistic model as well (Section 2.2.2.4).
Canonical Correlation Analysis
CCA is a statistical technique used to learn a linear relationship between two sets of random variables. These two sets of variables are referred to as views. CCA is applied when one wants to maximize correlation between views and discard independent variation as noise.
Problem Definition
Suppose we are given two data matrices X ∈ R n×p and Y ∈ R n×q where X corresponds to view 1, and Y corresponds to view 2. n is the number of examples in your data, p is the number of features in view 1, and q is the number of features in view 2.
The one-dimensional CCA problem is as follows:

max_{u ∈ R^p, v ∈ R^q} CORR(Xu, Yv)  subject to  ||Xu||_2 = ||Yv||_2 = 1

The solutions to this problem, u and v, are points in the feature space that are mapped to points in R^n by the view data matrices. u and v are called the canonical weight vectors or canonical weights, and their images under X and Y, z_X = Xu and z_Y = Yv, are called the canonical variates. This is the one-dimensional CCA problem, as we are finding a single pair of canonical weights. It can be extended to finding more than one set of canonical weight vectors by solving for u_i and v_i that satisfy the above problem, with the additional constraints that z_X^i and z_Y^i are orthogonal to all other z_X^j and z_Y^j.
For all i ∈ 1 ... k, the k-dimensional CCA problem then becomes:

max_{u_i ∈ R^p, v_i ∈ R^q} CORR(Xu_i, Yv_i)
subject to  ||Xu_i||_2 = ||Yv_i||_2 = 1  and  (Xu_i)^T (Xu_j) = (Yv_i)^T (Yv_j) = 0 for all j < i

Why Correlation Analysis? If we consider z_X and z_Y to be n draws of two scalar-valued random variables (with zero mean), then the empirical correlation between these variables is

CORR(z_X, z_Y) = (z_X^T z_Y) / (||z_X||_2 ||z_Y||_2)

The constraints in the CCA objective ensure that z_X and z_Y are both unit-norm, so:

CORR(z_X, z_Y) = z_X^T z_Y

This quantity is maximized when the inner product between z_X and z_Y is maximized.
Notation and Terminology
As in the problem definition above, let X ∈ R^{n×p} and Y ∈ R^{n×q} be the data matrices for views 1 and 2, where n is the number of examples, p the number of features in view 1, and q the number of features in view 2.

To simplify the following derivations, we assume that the columns of each of these matrices are normalized such that they have zero mean and unit variance. We assume, without loss of generality, that p ≤ q.
Useful Definitions The sample auto-covariance matrices for views 1 and 2:

C_XX = X^T X,  C_YY = Y^T Y

The sample cross-covariance matrices between views 1 and 2:

C_XY = X^T Y,  C_YX = Y^T X = C_XY^T

Finally, the joint covariance matrix for both views:

C = [ C_XX  C_XY ]
    [ C_YX  C_YY ]

This is the covariance matrix in the single-view setting, the auto-covariance matrix for the concatenation of both views. However, it will be useful to consider each block of this matrix separately, since the blocks correspond to the auto-covariance and cross-covariance matrices of the individual views.
Solution
Below are two sketches of derivations for solving the CCA problem. The first derivation was given in Hotelling (1936), and the second was published over 50 years later in Ewerbring (1990).
Original Hotelling Derivation At a high level, the original solution presented by Hotelling gives the first pair of canonical weights and variates. It boils down to the following steps:

1. Form the augmented Lagrangian of the CCA problem.
2. Take the partial derivatives of the augmented Lagrangian with respect to the unknowns.
3. Mathematically massage these equations to yield an eigenvalue problem.
4. Show that the eigenvectors solutions to this equation are one set of canonical weights, and the eigenvalues are correlations between canonical variates of the CCA problem.
5. Solve for the other set of canonical weights by substitution.
The first observation is that CORR(z_X, z_Y) does not change if we scale z_X or z_Y, so let us scale u and v to ensure that z_X and z_Y are both unit-norm.
Thus, we can rewrite the CCA problem as:

max_{u, v} u^T C_XY v  subject to  u^T C_XX u = 1, v^T C_YY v = 1

First we use the Lagrange multiplier technique to fold the constraints into the objective with Lagrange multipliers λ_X and λ_Y:

L(u, v, λ_X, λ_Y) = u^T C_XY v − λ_X (u^T C_XX u − 1) − λ_Y (v^T C_YY v − 1)

We take the partial derivatives of the right-hand side with respect to u and v, and set each equal to 0; a solution of the objective must necessarily also be a stationary point of the Lagrangian for some non-negative values λ_X and λ_Y:

∂L/∂u = C_XY v − 2λ_X C_XX u = 0
∂L/∂v = C_YX u − 2λ_Y C_YY v = 0    (2.5)

Multiply each equation on the left by u^T and v^T, respectively:

u^T C_XY v = 2λ_X u^T C_XX u
v^T C_YX u = 2λ_Y v^T C_YY v

We know that a solution to the CCA problem must satisfy the unit-norm constraints, so we insert these:

u^T C_XY v = 2λ_X
v^T C_YX u = 2λ_Y

and since the left-hand side of the first equation is just the transpose of the second (and is a scalar), we know that the multipliers must be the same value λ = λ_X = λ_Y.

Substituting for λ back into Equation 2.5 yields:

u = (1 / 2λ) C_XX^{-1} C_XY v    (2.7)

Note that C_XX and C_YY are invertible because they are both symmetric (X^T X = (X^T X)^T) and positive definite (for all w ∈ R^p, (Xw)^T (Xw) > 0). A final substitution of u from Equation 2.7 into Equation 2.5 yields:

C_YY^{-1} C_YX C_XX^{-1} C_XY v = 4λ^2 v

This is in the form of an eigenvalue problem, where v is the principal eigenvector of the left-hand side matrix and 4λ^2 is its associated eigenvalue. We can use an eigensolver to solve for v. Once solved, we can substitute v back into Equation 2.5 and finally solve for u.
This derivation can be extended to finding k pairs of canonical weights by replacing u and v with matrices U ∈ R^{p×k} and V ∈ R^{q×k}, where each successive pair of columns U_i and V_i maximizes the objective subject to orthogonality with the previously found canonical variates. In summary, assuming that C_XX and C_YY are invertible, a solution to the k-dimensional CCA problem can be found by solving for the top k eigenvectors of C_YY^{-1} C_YX C_XX^{-1} C_XY to obtain V, then solving for U by substitution.

SVD of Joint Covariance Matrix A second derivation of the CCA solution expresses the CCA objective in terms of the data's joint covariance matrix (Ewerbring, 1990). The constraint that successive canonical variates be orthogonal to each other and unit-norm can be written as:

U^T C_XX U = I,  V^T C_YY V = I

where, if U ∈ R^{p×k} and V ∈ R^{q×k} with k ≤ p, then I is the k × k identity matrix. The objective can also be written as:

U^T C_XY V = Λ

where Λ ∈ R^{k×k} is a diagonal matrix whose diagonal values, Λ_{1,1} ... Λ_{k,k}, are the canonical correlations (z_X^i)^T z_Y^i. We can express both the constraints and the objective in a single equation:

[U  0]^T [C_XX  C_XY] [U  0]     [I  Λ]
[0  V]   [C_YX  C_YY] [0  V]  =  [Λ  I]    (2.10)

Defining Ũ = C_XX^{1/2} U and Ṽ = C_YY^{1/2} V, we can rewrite Eq. 2.10 as:

Ũ^T Ũ = I,  Ṽ^T Ṽ = I,  Ũ^T (C_XX^{-1/2} C_XY C_YY^{-1/2}) Ṽ = Λ

From the on-diagonal blocks, we know that Ũ and Ṽ must be orthogonal matrices, and we can manipulate the off-diagonal elements to show that Ũ, Λ, and Ṽ form a singular value decomposition of the whitened cross-covariance matrix C_XX^{-1/2} C_XY C_YY^{-1/2}. We can solve for Ũ, Ṽ, and Λ by a rank-k truncated singular value decomposition (SVD) of this matrix, then solve for the canonical weights by U = C_XX^{-1/2} Ũ and V = C_YY^{-1/2} Ṽ.

Why these Derivations are Interesting The first derivation was originally presented in Hotelling (1936). The second derivation is worth seeing since we can learn both sets of canonical weights using an SVD of a particularly constructed matrix. It also relates the CCA weights to the joint sample covariance matrix. One can draw connections to other linear techniques such as principal component analysis, where an eigendecomposition of the joint covariance matrix, C = WΛW^T, yields the principal components as the columns of W.
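The SVD derivation translates directly into code. Below is a minimal numpy sketch (illustrative only, not the released implementations from this thesis): it assumes column-centered data matrices and adds a small diagonal regularizer so that the auto-covariance matrices are invertible, a point discussed later in this section.

import numpy as np

def cca_svd(X, Y, k, reg=1e-8):
    """Solve k-dimensional CCA via SVD of the whitened cross-covariance.

    X: (n, p) and Y: (n, q) data matrices, assumed column-centered.
    Returns canonical weights U (p, k), V (q, k) and canonical correlations.
    """
    C_xx = X.T @ X + reg * np.eye(X.shape[1])  # auto-covariance, view 1
    C_yy = Y.T @ Y + reg * np.eye(Y.shape[1])  # auto-covariance, view 2
    C_xy = X.T @ Y                             # cross-covariance

    def inv_sqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, Q = np.linalg.eigh(C)
        return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

    Wx, Wy = inv_sqrt(C_xx), inv_sqrt(C_yy)
    # SVD of the whitened cross-covariance gives U~, Lambda, V~.
    U_t, corrs, Vt_t = np.linalg.svd(Wx @ C_xy @ Wy)
    # Undo the whitening to recover the canonical weights.
    return Wx @ U_t[:, :k], Wy @ Vt_t[:k].T, corrs[:k]

# Usage on synthetic views that share a 2-dimensional latent signal.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
X = Z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = Z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))
X -= X.mean(0); Y -= Y.mean(0)
U, V, corrs = cca_svd(X, Y, k=2)
print(corrs)  # first two canonical correlations should be close to 1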
Probabilistic Interpretation
Bach and Jordan (2005) showed that the solution for the CCA objective is equivalent to the maximum likelihood solution of latent weights in a particular generative model.
The generative story of this model is simply as follows:

z ~ N(0, I_k)
x | z ~ N(W_X z + μ_X, ψ_X)
y | z ~ N(W_Y z + μ_Y, ψ_Y)

where z is the latent vector representation for an example, W_X and W_Y are weight matrices mapping this embedding to the observed views, and x and y are the observed views of this example. ψ_X, ψ_Y are positive definite covariance matrices and μ_X, μ_Y are arbitrary means that parameterize independent noise in each view. Bach and Jordan (2005) show that the maximum likelihood estimates of W_X and W_Y are:

W_X = C_XX U M,  W_Y = C_YY V M    (2.14)

Here, C_XX and C_YY are the sample auto-covariance matrices, U and V are the canonical weight matrices for each view, and M is the square root of the diagonal matrix of canonical correlations.
This probabilistic model demonstrates when CCA is appropriate for learning a representation: when variation in the observed views is independent conditional on the latent representation, with independent Gaussian noise applied to each view. The model also suggests that the CCA problem can be approximately solved by iterative algorithms for estimating latent variables in probabilistic models, such as Expectation Maximization.
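To make the generative story concrete, the following sketch draws samples from the probabilistic CCA model; all parameter values are arbitrary and chosen purely for illustration. The two views are correlated only through the shared latent variable z:

import numpy as np

rng = np.random.default_rng(1)
k, p, q, n = 2, 5, 4, 10000

# Arbitrary model parameters, for illustration only.
W_x = rng.normal(size=(p, k))
W_y = rng.normal(size=(q, k))
mu_x, mu_y = rng.normal(size=p), rng.normal(size=q)
psi_x = 0.05 * np.eye(p)  # independent per-view noise covariances
psi_y = 0.05 * np.eye(q)

# Generative story: draw latent z, then each view conditional on z.
z = rng.normal(size=(n, k))
x = z @ W_x.T + rng.multivariate_normal(mu_x, psi_x, size=n)
y = z @ W_y.T + rng.multivariate_normal(mu_y, psi_y, size=n)

# The views co-vary only through z, so CCA applied to (x, y) should
# recover high canonical correlations for the first k components.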
Nonlinear Variants
One drawback of CCA is that it can only uncover linear relationships between views. Although less work has been devoted to maximizing correlation between views after nonlinear transformation, there are two prominent methods for doing so: kernel CCA and deep CCA.
Kernel CCA
One method to uncover a nonlinear relationship between two views is kernel CCA (KCCA) (Lai and Fyfe, 2000). Similar to kernel PCA, the practitioner defines kernel functions that independently define the similarity between points in view 1 and view 2, and these kernel functions are used in lieu of the inner product when computing correlation.
Problem Let K_X ∈ R^{n×n} and K_Y ∈ R^{n×n} be symmetric positive semi-definite Gram matrices expressing the similarity between examples according to features from views 1 and 2. The kernel CCA problem is defined as:

max_{α, β ∈ R^n} α^T K_X K_Y β  subject to  α^T K_X^2 α = 1, β^T K_Y^2 β = 1

where α, β ∈ R^n take the place of the canonical weights u, v in vanilla CCA.
Note also that K_X and K_Y replace X and Y. The similarity between this problem formulation and linear CCA comes from the fact that the canonical variates, z_X = K_X α and z_Y = K_Y β, lie in the span of the data in both problems.
Derivation Similar to the Hotelling solution, we can form the augmented Lagrangian, take the derivatives with respect to the weights α and β, and set them equal to zero:

∂L/∂α = K_X K_Y β − 2λ_X K_X^2 α = 0
∂L/∂β = K_Y K_X α − 2λ_Y K_Y^2 β = 0

We then multiply each derivative on the left by α^T and β^T respectively, substitute in the unit-norm constraints, and find that at the solution λ = λ_X = λ_Y. If we substitute λ back in and solve for α in ∂L/∂α = 0 (assuming K_X is invertible), we find that:

α = (1 / 2λ) K_X^{-1} K_Y β

Substituting for α in ∂L/∂β = 0 yields:

(1 − 4λ^2) K_Y^2 β = 0

This suggests that if K_Y is invertible, then β is completely unconstrained, and the canonical correlation is 1. A standard solution is to change the unit-norm constraints to be α^T (K_X + ε_X I) α = 1 and β^T (K_Y + ε_Y I) β = 1.
The derivation is instructive since we see how closely the kernel CCA derivation follows the original Hotelling solution, and it is important since it underscores the necessity of regularizing the Gram matrices (otherwise the problem is not well-defined). This idea of regularization is also applicable to the original CCA objective, where a small amount of diagonal weight can be added to the sample auto-covariance matrices, mostly to ensure invertibility.
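Since the Gram matrices stand in for the data matrices in this formulation, regularized KCCA can be implemented by running the whitened-SVD linear CCA solution on K_X and K_Y directly. The sketch below is illustrative only; the RBF kernel, its bandwidth, and the regularization strength are arbitrary choices, not those used in this thesis:

import numpy as np

def inv_sqrt(C):
    w, Q = np.linalg.eigh(C)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

def kcca(K_x, K_y, k, reg=1e-2):
    """Kernel CCA: the Gram matrices play the role of the data matrices.

    reg adds diagonal weight to the (Gram-based) auto-covariance matrices;
    without it the problem is degenerate (correlation 1, as derived above).
    """
    n = K_x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n       # center in feature space
    K_x, K_y = H @ K_x @ H, H @ K_y @ H
    Cxx = K_x @ K_x + reg * np.eye(n)          # K_x symmetric: K_x^T K_x = K_x^2
    Cyy = K_y @ K_y + reg * np.eye(n)
    Cxy = K_x @ K_y
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, corrs, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T, corrs[:k]

def rbf_gram(X, gamma=0.5):
    """Gram matrix under an RBF kernel (the kernel choice is arbitrary here)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = np.tanh(X @ rng.normal(size=(3, 2)))       # nonlinearly related second view
alpha, beta, corrs = kcca(rbf_gram(X), rbf_gram(Y), k=2)
print(corrs)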
Deep CCA
Although KCCA maximizes correlation between views subject to an implicit nonlinear mapping, the nonlinear mapping solely depends on the choice of kernel and the training set. In addition, computing and inverting the Gram matrices are very expensive operations in space and computation time. One model that solves these problems is Deep CCA (DCCA), a CCA variant that alternates between maximizing correlation between views and updating a nonlinear mapping from observed views to shared space. The nonlinear mappings for each view are parameterized by two neural networks. In addition to learning the nonlinear mappings to shared space, DCCA also avoids computing and inverting large n × n Gram matrices. The crux of fitting DCCA to a dataset lies in the gradient used to update the per-view neural networks.
Problem The DCCA problem for the first canonical component is defined as follows:

max_{θ_X, θ_Y, u, v} CORR(f_X(X; θ_X) u, f_Y(Y; θ_Y) v)
subject to  ||f_X(X; θ_X) u||_2 = ||f_Y(Y; θ_Y) v||_2 = 1

Here θ_X and θ_Y are the sets of weights parameterizing fixed neural network architectures f_X and f_Y, respectively. These are functions that map each example's view to a fixed-length vector with the dimensionality of the network output layer, p_f and q_f.
Note that if the network weights are fixed, the solution for canonical weights u and v is just that given by linear CCA with respect to the output layer activations on the training set. In addition, if we fix the canonical weights then we can update network weights θ X and θ Y by backpropagation (assuming we can differentiate the objective with respect output layer activations f X (X; θ X ) and f Y (Y ; θ Y )).
This suggests an optimization scheme where at each iteration we alternate between updating network weights by backpropagation and then solving for canonical weights.
This way the orthonormality constraint on canonical components is maintained after each iteration.
DCCA Gradient Let the output layer activations of the two views passed through their associated networks be X_f and Y_f, and assume that each has zero mean. Here we overload the notation for C_XX and C_YY to be the sample auto-covariance matrices of the output layer activations X_f and Y_f; similarly, C_XY is the cross-covariance matrix of X_f and Y_f, and U, Λ, and V are solved by a singular value decomposition of the whitened cross-covariance matrix C_XX^{-1/2} C_XY C_YY^{-1/2}, as in the Ewerbring derivation. The gradient of the correlation objective with respect to X_f is given as:

∂CORR/∂X_f = (1 / (n − 1)) (2 X_f Δ_XX + Y_f Δ_YX)

where

Δ_XX = −(1/2) C_XX^{-1/2} U Λ U^T C_XX^{-1/2}
Δ_YX = C_YY^{-1/2} V U^T C_XX^{-1/2}

The partial derivative with respect to Y_f is similar.
(Many-View) Generalized Canonical Correlation Analysis
Can we apply CCA to maximize correlation between more than just two views? Unfortunately, there is no single generalization of correlation to more than a pair of random variables. The extensions to more than two views, generalized CCA (GCCA), frame the multiview correlation analysis problem as one of optimizing some function φ of the matrix of correlations between all pairs of views:

max_{u_1, ..., u_V} φ(R),  where R_{ij} = CORR(X_i u_i, X_j u_j)

where V is the number of views in our data, X_i ∈ R^{n×p_i} is the data matrix, u_i ∈ R^{p_i} are the canonical weights, and z_i = X_i u_i ∈ R^n are the canonical variates for view i. Kettenring (1971) gives five different formulations of the linear many-view CCA objective.
Problem Formulations
In these formulations, the canonical variates are learned, one at a time, with the constraint that the variates are orthogonal to each other. For each canonical variate, we want to find the canonical weights u_i that satisfy one of the following optimization problems:

• SUMCOR: maximize the sum of pairwise correlations, ∑_{i≠j} R_{ij}.
• SSQCOR: maximize the sum of squared pairwise correlations, ∑_{i≠j} R_{ij}^2.
• MAXVAR: maximize the largest eigenvalue of the correlation matrix R.
• MINVAR: minimize the smallest eigenvalue of the correlation matrix R.
• GENVAR: minimize the determinant of the correlation matrix R.

In this thesis we focus on the MAXVAR objective to learn user embeddings.
The MAXVAR formulation is attractive since the optimal canonical weights U i can be found by standard linear algebra operations and singular value decompositions, much like the two-view CCA objective.
MAXVAR GCCA Problem
Kettenring (1971) shows that the MAXVAR GCCA formulation is equivalent to a problem presented earlier by Carroll (1968). This formulation introduces an auxiliary variable to the problem, a matrix G ∈ R^{n×k} that acts as a low-dimensional shared representation across views. The MAXVAR objective with the auxiliary variable formulation is as follows:

min_{G, U_1, ..., U_V} ∑_{i=1}^{V} ||G − X_i U_i||_F^2  subject to  G^T G = I_k

Assuming the columns of each X_i are centered, the optimal solution for the shared representation G and canonical weights U_i is given as:

G = top k eigenvectors of M = ∑_{i=1}^{V} X_i (X_i^T X_i)^{-1} X_i^T
U_i = (X_i^T X_i)^{-1} X_i^T G    (2.20)

Multiview LSA Unfortunately, the MAXVAR GCCA solution given above does not scale well as the number of examples in one's dataset or the dimensionality of each view increases. Note that the matrix whose eigenvectors are G has n rows and columns. As the number of examples increases, this matrix will quickly become impossible to store in RAM on most computers. Similarly, inverting the p_i × p_i auto-covariance matrix, X_i^T X_i, will quickly become intractable as the dimensionality of view i increases. Rastogi, Van Durme, and Arora (2015) offer several important tweaks to make this solution tractable: the Multiview LSA algorithm. The key contribution of Multiview LSA is that they consider a truncated SVD decomposition of each view's data matrix:

X_i ≈ A_i S_i B_i^T

They use these low-rank decompositions to avoid forming the full n × n matrix in 2.20. They show that G is approximately the left singular vectors of the following matrix:

[A_1, A_2, ..., A_V]

In practice, they also regularize the auto-covariance matrices, which leads to a slight scaling of the columns of each A_i.
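A minimal numpy sketch of this closed-form MAXVAR solution follows (illustrative only; it is not the released implementation and omits the scaling tricks above). It forms the n × n matrix M explicitly, so it is only practical for small n:

import numpy as np

def maxvar_gcca(views, k, reg=1e-6):
    """MAXVAR GCCA via the closed-form solution sketched above.

    views: list of column-centered data matrices X_i, each (n, p_i).
    Returns shared representation G (n, k) and per-view weights U_i (p_i, k).
    """
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        C_ii = X.T @ X + reg * np.eye(X.shape[1])  # regularized auto-covariance
        M += X @ np.linalg.solve(C_ii, X.T)        # projection matrix for view i
    # G: top-k eigenvectors of the sum of per-view projection matrices.
    eigvals, eigvecs = np.linalg.eigh(M)           # ascending order
    G = eigvecs[:, -k:][:, ::-1]
    # Canonical weights map each view onto the shared representation.
    Us = [np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ G)
          for X in views]
    return G, Us

# Usage on three synthetic views sharing a 3-dimensional latent signal.
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 3))
views = [Z @ rng.normal(size=(3, d)) + 0.1 * rng.normal(size=(300, d))
         for d in (6, 5, 4)]
views = [X - X.mean(0) for X in views]
G, Us = maxvar_gcca(views, k=3)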
Details One mild assumption for these methods is that the covariance matrices be invertible, i.e., that they have full rank. This assumption can be fulfilled by adding a small value to the diagonal of each of the covariance matrices, for example replacing X_i^T X_i with X_i^T X_i + r_i I for some small r_i > 0. It is also important to consider the presence of missing data within views when applying GCCA to real data. Van De Velden and Bijmolt (2006) introduce masking matrices K_i, for all i ∈ 1 ... V, into the MAXVAR objective to address this problem:

min_{G, U_1, ..., U_V} ∑_{i=1}^{V} ||K_i (G − X_i U_i)||_F^2

Each mask, K_i ∈ R^{n×n}, is a diagonal matrix whose diagonal elements are either 0 or 1. Examples with data missing in a view are encoded by a zero, whereas examples with data present are encoded by a one. Although this change appears cosmetic, it is important to include these masking terms; otherwise, canonical weights will be artificially forced to map views towards zero (assuming views with missing data are represented as zero vectors).
Neural Alternatives to GCCA
Neural architectures that maximize a correlation objective are popular alternatives and scale better to large numbers of examples than classic solutions to GCCA problems. Kumar, Rai, and Daume (2011) elegantly outline two main approaches these methods take to learn a joint representation from many views: either by (1) explicitly maximizing pairwise similarity/correlation between views or by (2) alternately optimizing a shared, "consensus" representation and view-specific transformations to maximize similarity.
Models such as the Siamese network proposed by Masci et al. (2014) fall in the former camp, minimizing the squared error between embeddings learned from each view, which leads to a quadratic increase in the number of loss function terms as the number of views increases. Rajendran et al. (2015) extend Correlational Neural Networks to many views and avoid this quadratic explosion in the loss function by only computing correlation between each view embedding and the embedding of a pivot view. Although this model may be appropriate for tasks such as multilingual image captioning, there are many datasets where there is no clear method of choosing a pivot view. The MAXVAR-GCCA objective does not suffer from a quadratic increase in computational complexity with respect to the number of views, nor does it require a privileged pivot view, since the shared representation is learned from all of the per-view representations.
Nonlinear (Deep) GCCA
In spite of encouraging theoretical guarantees, multiview learning techniques cannot freely model nonlinear relationships between arbitrarily many views. Either they are able to model variation across many views but can only learn linear mappings to the shared space (Horst, 1961), or they simply cannot be applied to data with more than two views using existing techniques based on kernel CCA (Hardoon, Szedmak, and Shawe-Taylor, 2004) and deep CCA. Deep Generalized Canonical Correlation Analysis (dGCCA) is one recently-introduced model that fills this gap. Here we briefly describe the dGCCA model; see the arXiv preprint introducing it for further details.
Model dGCCA is a model that can benefit from the expressive power of deep neural networks and can also leverage statistical strength from more than two views in data, unlike Deep CCA, which is limited to only two views. Once the model is trained, new data can be projected by feeding them through the learned network for each view.
In the dGCCA problem, we consider J views in our data and let X_j ∈ R^{d_j×N} denote the j-th input matrix (our notation for this section closely follows that of Andrew et al. (2013)). The network for the j-th view consists of K_j layers.
Assume, for simplicity, that each layer in the j-th view network has c_j units, with a final (output) layer of size o_j. The output of the k-th layer for the j-th view is h_k^j = s(W_k^j h_{k−1}^j), where s : R → R is a nonlinear activation function applied componentwise and W_k^j ∈ R^{c_k × c_{k−1}} is the weight matrix for the k-th layer of the j-th view network. We denote the output of the final layer as f_j(X_j).
dGCCA can be expressed as the following optimization problem: find weight matrices W^j = {W_1^j, ..., W_{K_j}^j} defining the functions f_j, and linear transformations U_j (of the output of the j-th network), for j = 1, ..., J, such that

arg min_{U_j ∈ R^{o_j × r}, G ∈ R^{r×N}} ∑_{j=1}^{J} ||G − U_j^T f_j(X_j)||_F^2  subject to  G G^T = I_r

where G ∈ R^{r×N} is the shared representation we are interested in learning.
Gradient Derivation Sketch Next, we show a sketch of the gradient derivation.
See the arXiv preprint describing dGCCA for the full gradient derivation with respect to the network output layers. It is straightforward to show that the solution to the GCCA problem is given by solving an eigenvalue problem. In particular, define

C_jj = f_j(X_j) f_j(X_j)^T ∈ R^{o_j × o_j}

to be the scaled empirical covariance matrix of the j-th network output, and let P_j = f_j(X_j)^T C_jj^{-1} f_j(X_j) ∈ R^{N×N} be the corresponding projection matrix that whitens the data; note that P_j is symmetric and idempotent. We define M = ∑_{j=1}^{J} P_j. Since each P_j is positive semi-definite, so is M. Then, it is easy to check that the rows of G are the top r (orthonormal) eigenvectors of M, and U_j = C_jj^{-1} f_j(X_j) G^T. Thus, at the minimum of the objective, we can rewrite the reconstruction error as follows:

∑_{j=1}^{J} ||G − U_j^T f_j(X_j)||_F^2 = rJ − Tr(G M G^T)

Minimizing the GCCA objective (w.r.t. the weights of the neural networks) means maximizing Tr(G M G^T), which is the sum of eigenvalues L = ∑_{i=1}^{r} λ_i(M). Taking the derivative of L with respect to each output layer f_j(X_j), we have:

∂L/∂f_j(X_j) = 2 U_j G − 2 U_j U_j^T f_j(X_j)

Thus, the gradient is the difference between the r-dimensional auxiliary representation G embedded into the subspace spanned by the columns of U_j (the first term) and the projection of the actual data in f_j(X_j) onto said subspace (the second term). Intuitively, if the auxiliary representation G is far away from the view-specific representation U_j^T f_j(X_j), then the network weights receive a large update.

There is nothing preventing one from using a GCN as a view transformation layer, which can subsequently be tuned according to a dGCCA objective.
The dGCCA learning algorithm is agnostic to the architecture of the transformation network as well as the form of input view representation, so long as the objective is differentiable with respect to the neural network weights.
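As an illustration, the following numpy sketch computes the shared representation G and the gradient of the dGCCA objective with respect to each view's output activations, following the derivation above. It is a toy sketch, not the released implementation, and assumes row-centered activations:

import numpy as np

def dgcca_grad(outputs, r, reg=1e-8):
    """Shared representation G and per-view gradients for dGCCA.

    outputs: list of per-view output activations f_j(X_j), each (o_j, N),
             assumed row-centered.
    """
    N = outputs[0].shape[1]
    M = np.zeros((N, N))
    Cinv_F = []
    for F in outputs:
        C_jj = F @ F.T + reg * np.eye(F.shape[0])  # covariance of outputs
        CiF = np.linalg.solve(C_jj, F)             # C_jj^{-1} f_j(X_j)
        Cinv_F.append(CiF)
        M += F.T @ CiF                             # whitening projection P_j
    # Rows of G are the top-r orthonormal eigenvectors of M.
    eigvals, eigvecs = np.linalg.eigh(M)
    G = eigvecs[:, -r:][:, ::-1].T                 # (r, N)
    grads = []
    for F, CiF in zip(outputs, Cinv_F):
        U_j = CiF @ G.T                            # (o_j, r)
        # Difference between G embedded via U_j and the projection of
        # the activations onto that subspace.
        grads.append(2 * U_j @ G - 2 * U_j @ (U_j.T @ F))
    return G, grads

# Usage on random, centered stand-in activations for three views.
rng = np.random.default_rng(0)
outputs = [rng.normal(size=(o, 100)) for o in (8, 6, 5)]
outputs = [F - F.mean(1, keepdims=True) for F in outputs]
G, grads = dgcca_grad(outputs, r=4)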
Multitask Learning and Neural Models
In chapters 5 and 6, we use a machine learning framework called multitask learning (MTL) to inject user information into classification models. In this section we present the MTL setting at a high level and discuss why neural networks are particularly convenient models to train in this framework.
Motivation
MTL was first presented in Caruana (1993) and is discussed in detail in Caruana's dissertation (Caruana, 1997). MTL is a machine learning framework for exploiting related auxiliary tasks to improve a classifier's generalization performance at some main task that the practitioner cares about. The classifier is trained to perform well according to these auxiliary tasks along with the main task, updating weights or a representation common across the tasks. Caruana describes MTL as introducing a human "inductive bias" to the main task model. This inductive bias is encoded by which additional tasks the practitioner believes will serve as useful guides to a model that must perform well at the single main task.
Consider an example from Collobert et al. (2011). In this paper, the authors want to improve a semantic role labeling system. This system takes a sequence of tokens as input and generates a sequence of labels encoding semantic roles, one for each token, as output; this is their main task. They then consider a related task of language modeling, the auxiliary task. The auxiliary language modeling task is formulated as maximizing the score that the model assigns to real English sentences, while minimizing the score assigned to fake, generated English sentences. They hypothesize that a model that can successfully discriminate between real and fake English text will be better at assigning semantic role labels than models that have a poor sense of what constitutes well-formed English. They report reducing semantic role labeling word error rate from 16.5 to 14.4 after joint MTL training with the language modeling auxiliary task, over a 10% relative reduction in word error rate.
Benefits
There are several benefits to the MTL framework aside from improving classifier generalization over traditional single-task learning.
The first is that auxiliary tasks are only necessary during training, not at test time.
The auxiliary tasks serve as beneficial regularizers for the classifier being learned, and can be discarded. This is analogous to the way that neural network weights may be trained with dropout, weight decay, or other regularization techniques, but those terms only influence how weights are updated during training, not how predictions are ultimately made. This was the major motivation behind using multitask learning to improve pneumonia risk prediction given medical history in Caruana, Baluja, and Mitchell (1996). In this work, they use the results from lab tests as auxiliary tasks.
These lab tests are time-consuming, expensive, and are only available after a patient has been hospitalized. However, they are predictive of pneumonia risk, so a classifier that can predict these results will also better predict pneumonia risk.
Learning Setting
In supervised MTL, we want to train a classifier such that it achieves low expected loss across multiple tasks. We are given a total of T tasks and one classifier for each task, f_1, ..., f_T. The classifier for task t, f_t, maps examples from domain X to predictions in domain Y_t. 6 Each classifier f_t is determined by two sets of parameters:

• Θ: parameters shared by all classifiers
• θ_t: the task-specific parameters for task t

One can also consider all the f_t together as a single model that generates vector-valued predictions ⟨y_1, y_2, ..., y_T⟩.
If L_t : Y_t × Y_t → R_+ is the loss function for task t and examples are drawn from the joint probability distribution P_t, then the MTL objective is as follows:

min_{Θ, θ_1, ..., θ_T} (1/T) ∑_{t=1}^{T} E_{(x,y) ~ P_t} [ L_t(f_t(x; Θ, θ_t), y) ]

In other words, we want to learn task-shared parameters, Θ, and task-specific parameters, {θ_1, ..., θ_T}, that minimize the average expected loss across all tasks. 7

6 The assumption that the domain of each classifier is the same is actually a simplification. In general, the domain of each classifier may be different (either subsets of a shared feature set or completely different feature sets), so long as there exist parameters shared across classifiers (Zhang and Yeung, 2011).

7 In practice the objective will also include a regularization term to penalize large parameter weights. This term is orthogonal to the multitask objective, though.

Neural Models

Caruana (1993) first introduces MTL in the context of neural models, and for good reason: MTL is trivial to implement in a neural model. This is because neural models are simple to optimize with respect to multiple loss functions, so long as each loss function is differentiable with respect to the model parameters.
Take the model presented in Li, Ritter, and Jurafsky (2015) for learning Twitter user representations in an MTL framework that capture both similarity in posted text and closeness in the social network.
Example: Multitask Learning of User Representations A simplified version of their model is diagrammed in Figure 2.2. Consider two tasks to aid learning vector representations for Twitter users: a user-conditioned language modeling task and a friend prediction task. Their language modeling task maximizes the probability of each word a user posts, conditioned on its surrounding words and the author's user embedding; here L is a word embedding lookup table mapping word indices to vectors. The friend prediction task is modeled more simply:

p(friend(u, v)) = σ(e_u^T e_v)

where e_u and e_v are embeddings for two distinct users u and v. The probability that user u is friends with v is determined by passing the dot product of their embeddings through a sigmoid function.
The empirical log-likelihood for both of these tasks, for a set of users U, the sequence of words w_u each user posts, and pairs of friends F, is then:

ℓ = ∑_{u ∈ U} log p(w_u | e_u) + ∑_{(u,v) ∈ F} log p(friend(u, v))

Model parameters can be learned by alternately sampling pairs of users for the friend prediction task, and sampling words in context for the language modeling task, to update user representations. For each task, user representations are updated by stochastic gradient descent. This is a consequence of the fact that the joint loss for both of these tasks is a linear combination of the per-task losses, and so the joint loss remains differentiable with respect to the user representations.
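A minimal PyTorch sketch of this kind of alternating multitask training follows. It is illustrative rather than a reimplementation of Li, Ritter, and Jurafsky (2015): the language modeling task is simplified to predicting individual posted words from the user embedding alone, and all sizes and data are hypothetical:

import torch
import torch.nn as nn

n_users, n_words, dim = 1000, 5000, 64
user_emb = nn.Embedding(n_users, dim)   # shared across both tasks
word_out = nn.Linear(dim, n_words)      # task 1: predict words a user posts
opt = torch.optim.SGD(
    list(user_emb.parameters()) + list(word_out.parameters()), lr=0.1)

def lm_step(users, words):
    """Task 1: maximize log-likelihood of observed (user, word) pairs."""
    logits = word_out(user_emb(users))
    loss = nn.functional.cross_entropy(logits, words)
    opt.zero_grad(); loss.backward(); opt.step()

def friend_step(u, v, labels):
    """Task 2: p(friend) = sigmoid of the dot product of user embeddings."""
    scores = (user_emb(u) * user_emb(v)).sum(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(scores, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Alternate between tasks; both update the shared user embeddings.
for _ in range(100):
    users = torch.randint(0, n_users, (32,))
    words = torch.randint(0, n_words, (32,))   # stand-in for real posts
    lm_step(users, words)
    u, v = torch.randint(0, n_users, (32,)), torch.randint(0, n_users, (32,))
    friend_step(u, v, torch.randint(0, 2, (32,)).float())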
Other Multitask Models Although this thesis only considers neural networks trained in multitask fashion, a wide variety of machine learning models have been extended to the MTL setting.
Along with neural networks, Caruana (1997) presents MTL extensions for feature-weighted k-nearest neighbor regression and decision trees. In the former model, the per-feature weights for computing ℓ2 distance are selected to minimize average mean squared error across tasks rather than for a single task. Multitask decision trees are learned by maximizing not just the information gain of a single task, but a weighted average of information gain across all tasks.
Evgeniou and Pontil (2004) present a generic regularization-based framework inspired by fitting support vector machines. In this framework, each task's model is assumed to have parameters which are close to each other. This "closeness" is enforced by penalizing large deviations from a set of shared parameters while maximizing the margin from the decision boundary (in the case of max-margin classification). MTL has also influenced learning of unsupervised tasks such as clustering (Gu and Zhou, 2009) and nonlinear regression models such as Gaussian processes (Yu, Tresp, and Schwaighofer, 2005).
Options for MTL
The MTL framework is extremely flexible in how models can be trained. This is both a blessing and a curse: flexibility means that MTL is applicable to improving just about any predictive model, but it also means that there is potentially more space to explore to find the most effective way to deploy MTL.
Task Selection and Weighting How does one define related tasks? This is the fundamental problem in deriving generalization improvements from MTL training, and one without a clear answer. Simple measures such as correlation between class labels do not reliably identify related tasks (Caruana, 1997). The best definition of task-relatedness offered in Caruana (1997) is not particularly illuminating: "The most precise definition for relatedness we have been able to devise so far is the following: Tasks ..." This cannot be easily used as a heuristic for task selection, since the definition of relatedness amounts to going ahead and training a model on both tasks (showing tasks are unrelated is even more difficult, requiring a sweep over all learning algorithms/models). Similar to selecting tasks, it is typical to weight auxiliary tasks differently within the loss function based on which are believed to be the most beneficial. This is akin to a soft selection of auxiliary tasks.
Although there are methods to jointly infer how "related" tasks are as well as learn weights for each task (Bakker and Heskes, 2003; Kang, Grauman, and Sha, 2011), there is no panacea, since they typically make strong assumptions on how data were generated. Ultimately, the auxiliary tasks that will best improve generalization performance must be selected by the practitioner based on their domain knowledge, the data and tasks available, and empirical evaluation. Weighting a task more heavily within the joint loss encourages parameter updates that improve that task more. A final pass of fine-tuning toward the main task is often helpful. This, however, may be liable to discard the benefits of MTL if one does not freeze shared parameters or limit the number and size of single-task updates.
Discussion: Relationship to Multiview Methods
The models we consider in this thesis integrate auxiliary information into embeddings and models to improve generalization performance at some downstream task. The integration of auxiliary information can come in the form of a multiview method (learning embeddings that capture correlation between views), supervision used to condition the distribution over topics in a topic model, or an auxiliary task for a supervised classifier.
Since many of the approaches we present here are naturally extended to semi-supervised learning settings (multitask learning in particular), one might be tempted to think of these methods as transductive. However, unlike transductive algorithms, which infer labels for a batch of unlabeled examples at training time (Gammerman, Vovk, and Vapnik, 1998), the multiview and multitask algorithms we present here do not need to infer labels for unlabeled examples. Multiview algorithms have no notion of a target or label which the embedding is entrained to predict (although they can easily be extended to have one). Multitask training (particularly for neural models) can easily be extended to incorporate new examples labeled for any of the main or auxiliary tasks; main task labels can be inferred by the model, but they need not be used in the training process.
It is more useful to view multiview and multitask learning approaches as ways of inserting inductive biases into learned model features. The major difference between these two approaches, multiview and multitask learning, is in the kinds of biases that each can express. Multiview methods suppose that the latent features one wants to learn are best captured by that which is common between a set of "auxiliary" features, or views. Under this interpretation, we presume that observed views are generated conditional on this latent feature (consider the probabilistic interpretation of CCA (Bach and Jordan, 2005)). In neural multitask learning, one supposes that this latent feature vector (e.g. a hidden layer in one's network) is predictive of the auxiliary tasks, and is generated by a separate set of input features.
To take an example from chapter 3, suppose we have collected the past tweets and list of local network friends for a large set of Twitter users. We can take two separate approaches to model these users. We could apply a multiview representation learning method such as CCA to map the text and network views to a shared space.
This approach assumes that the text and network features are generated independently conditioned on some latent feature vector. If we took a supervised multitask learning approach, we would either predict a user's local network from their text, their text from their local network, or would predict both text and local network from a completely separate set of input features.
One critical difference between multiview and multitask learning is the flexibility afforded by the multitask learning setting. The multiview methods we consider in this thesis are constrained to maximizing correlation between views, and often make strong assumptions on the distribution of observations (e.g. that observations are Gaussian distributed). On the other hand, multitask learning is closer to a philosophy of model training that happens to be easily translated to neural network training.
Multiview Embeddings of Twitter Users
In this chapter we present methods to learn unsupervised embeddings for a general set of users from different views of their online behavior. We evaluate these embeddings both intrinsically according to how well they capture hashtag usage and friending behavior and extrinsically according to how well they predict demographic features.
This chapter was adapted mainly from Benton, Arora, and Dredze (2016). In NLP, distributed embeddings have been learned for words (Mikolov et al., 2013a), sentences (Kiros et al., 2015), and entire documents (Le and Mikolov, 2014). These embeddings exhibit desirable properties, such as capturing some aspects of syntax or semantics and outperforming their sparse counterparts at downstream tasks.
While there are many approaches to generating embeddings of text, it is not clear how to learn embeddings for social media users. There are several different types of data (views) we can use to build user representations: the text of messages they post, neighbors in their local network, articles they link to, images they upload, etc.
Although user embeddings can always be finetuned for a supervised objective, it is unclear which unsupervised views and methods perform best across a variety of tasks.
Multiview embedding methods such as Generalized Canonical Correlation Analysis (GCCA) (Carroll, 1968;Van De Velden and Bijmolt, 2006;Arora and Livescu, 2014;Rastogi, Van Durme, and Arora, 2015) are attractive methods for simultaneously capturing information from multiple user views. These methods may be more appropriate for learning user embeddings than concatenating views into a single vector, since views may correspond to different modalities (image vs. text data) or have very different distributional properties. Treating all features as equal in this concatenated vector would not be appropriate.
In this chapter we present an extension of the MAXVAR-GCCA problem that offers more flexibility in learning user embeddings than standard GCCA: weighted GCCA (wGCCA). wGCCA allows the practitioner to discriminatively weight the per-view loss, forcing user embeddings to capture variation in some views more closely than others. View weighting is chosen based either on a prior notion of which views will be the most informative or by tuning to improve some downstream metric; this is up to the embedder's discretion. We also consider an algorithm to approximately solve the (linear) SUMCOR-GCCA problem, large-scale CCA (LasCCA) (Fu et al., 2016), as another multiview user embedding method. We adapt the LasCCA implementation presented in Fu et al. (2016) to support data with missing views (especially important when considering data compiled from social media).
We evaluate multiview embeddings on how well they capture hashtag usage and friending behavior, and on how well they predict user demographic features. We compare their performance at these tasks to single-view baselines and show that the locations of users in embedding space capture ordinary people's notions of what constitutes a similar group of users. This is analogous to how word embeddings capture semantic and syntactic properties of word types.
In Section 3.1 we first describe the different types of user behavior used to learn embeddings and how this dataset was assembled. Sections 3.2 and 3.3 describe the baseline and multiview methods we use to learn embeddings. Section 3.4 describes how embeddings were evaluated and Section 3.5 finally contains both quantitative and qualitative evaluation of user embeddings.
User Behavior Data
What is the best type of behavior to learn user embeddings on? Although the answer ultimately depends on how these embeddings will be used, some types of user behavior and embedding methods will be more appropriate for a variety of tasks. To answer this question, we assembled a dataset of general Twitter users, with multiple aspects of user behavior. Knowing how the dataset was assembled is critical to understanding what kind of user behavior is available to each embedding method.
Data Collection
We uniformly sampled 200,000 users from a stream of publicly available tweets from the 1% Twitter stream from April 2015. We removed users with verified accounts, more than 10,000 followers, or non-English profiles, to restrict to typical, English-speaking users. For each user we collected their 1,000 most recent tweets and then filtered out non-English tweets. We removed users without English tweets in January or February 2015, yielding a total of 102,328 users. Although limiting tweets to only these two months restricted the number of tweets we were able to work with, it also ensured that our data are drawn from a narrow time window, controlling for differences in user activity over wide stretches of time. This allows us to learn distinctions between users, and not temporal distinctions of content. These data support our evaluation tasks as well as the four sources of behavior/content for each user: their tweets, tweets of mentioned users, friends, and followers.
User Views
We consider four main views/sources of information about a user: ego information, represented by the text of the public tweets the user posts; mentioned information, represented by messages made by people mentioned in tweets posted by the ego user; friend information, from the people the ego user follows; and follower information, from those who follow the ego user. Although there are other views we could have collected (e.g., the user description or profile image), prior work has shown that these four views are predictive of latent user attributes, and therefore useful for learning user embeddings (Volkova, Coppersmith, and Van Durme, 2014b).
Two main representations are considered when constructing views: either text representations or a direct representation of the friend or follower IDs.
Text
For each text source we aggregate the many tweets into a single document, e.g. all tweets written by accounts mentioned by a user. We represent this document as a bag-of-words (BOW) in a vector space model with a vocabulary of the 20,000 most frequent word types after stopword removal, and weight the BOW vectors by TF-IDF. This was done for tweets made by the ego user, mentions, friends, and followers.
A common problem with these representations is that they suffer from the curse of dimensionality. A natural solution is to apply a dimensionality reduction technique to find a compact representation that captures as much information as possible from the original input. Here, we consider principal components analysis (PCA), a ubiquitous linear dimensionality reduction technique. The text views that are fed into multiview embedding methods are all first reduced by PCA before learning the embedding.
We run PCA and extract up to the top 1,000 principal components for each of the above views. This speeds up fitting multiview embedding methods since the feature dimensionality of each view is reduced.
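As a concrete illustration, the per-view text pipeline described above might be implemented as in the following scikit-learn sketch; the tooling and names here are our own, not prescribed by this thesis. TruncatedSVD stands in for PCA since the TF-IDF matrix is sparse:

# Hypothetical sketch: 20,000-word TF-IDF BOW per view, reduced to
# (up to) 1,000 components before multiview embedding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def reduce_text_view(view_docs, vocab_size=20000, n_components=1000):
    # view_docs: one string per user, e.g. all tweets by accounts the user mentions
    tfidf = TfidfVectorizer(max_features=vocab_size, stop_words="english")
    X = tfidf.fit_transform(view_docs)           # (n_users, vocab_size), sparse
    svd = TruncatedSVD(n_components=n_components)
    return svd.fit_transform(X)                  # (n_users, n_components), dense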
Network
An alternative to text based representations is to use the social network of users as a representation. We encode a user's social network as a vector by treating the set of users in the social graph as a vocabulary, where users with similar social networks have similar vector representations (NetSim). This is an n-dimensional vector that encodes the user's social network as a bag-of-words over this vocabulary. In other words, a user is represented by a summation of the one-hot encodings of each neighboring friend or follower in their social network. In this representation, the number of friends two users have in common is equal to the dot product between their social network vectors.
We define the social network as one's followers or friends. The motivation behind this representation is that users who have similar networks may behave in similar ways.
Such network features are commonly used to construct user representations, as well as to make user recommendations (Lu, Lam, and Zhang, 2012).
The binary representations of local network are reduced to the top 1,000 principal components, as are the text representations.
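A minimal sketch of this NetSim encoding (our own illustration of the description above): each user's friend list becomes a sparse bag-of-friends vector, so the dot product of two rows counts the friends those users share.

import numpy as np
from scipy.sparse import csr_matrix

def network_view(user_friends, friend_index):
    # user_friends: list of friend-ID lists, one per user
    # friend_index: dict mapping friend ID -> column in the "friend vocabulary"
    rows, cols = [], []
    for i, friends in enumerate(user_friends):
        for f in friends:
            if f in friend_index:
                rows.append(i)
                cols.append(friend_index[f])
    data = np.ones(len(rows))
    X = csr_matrix((data, (rows, cols)),
                   shape=(len(user_friends), len(friend_index)))
    # (X @ X.T)[i, j] == number of friends users i and j have in common
    return X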
Baseline Embedding Methods
Each of these views can be treated as a user embedding in their own right. They can also be combined using different methods to yield aggregate user representations across views. Here we describe baseline user embeddings we evaluate.
PCA
For the following experiments, we consider the PCA representations as a baseline.
We consider up to the top 1,000 principal components within each view as the user embedding. In order to fairly compare multiview embedding methods to methods that do not maximize correlation between views, we also consider a naïve combination of PCA views as an embedding.
We consider all possible combinations of views obtained by concatenating original view features, and subsequently reducing the dimensionality by PCA. By considering all possible concatenation of views, we ensure that this method has access to the same information as multiview methods. Both the raw BOW and BOW-PCA representations have been explored in previous work for demographic prediction (Volkova, Coppersmith, and Van Durme, 2014b;Al Zamal, Liu, and Ruths, 2012) and recommendation systems (Abel et al., 2011;Zangerle, Gassler, and Specht, 2013). Only the best performing view subset evaluated on the development set is reported on test.
Word2Vec
BOW-PCA is limited to linear representations of BOW features based on global context.
Modern neural network based approaches to learning word embeddings, including the word2vec continuous bag-of-words and skipgram models, can learn representations that capture the local context around each word (Mikolov et al., 2013b). We represent each view as the simple average of the word embeddings for all tokens within that view (e.g., all words written by the ego user). Word embeddings are learned on a sample of 87,755,398 tweets and profiles uniformly sampled from the 1% Twitter stream in April 2015, along with all tweets and profiles collected for our set of users, a total of over a billion tokens. We use the word2vec tool, select either skipgram or continuous bag-of-words embeddings on dev data for each prediction task, and train for 50 epochs. We use the default settings for all other parameters.
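Averaging word vectors into a view embedding is straightforward; a sketch, assuming wv is any token-to-vector mapping (e.g., gensim KeyedVectors loaded from word2vec output):

import numpy as np

def view_embedding(tokens, wv, dim):
    # Mean of word vectors over all tokens in the view; out-of-vocabulary
    # tokens are skipped, and an empty view maps to the zero vector.
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)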
Multiview Embedding Methods
Here we describe three different methods for learning multiview user embeddings.
Each of these multiview embedding methods are evaluated against each other at the tasks described in section 3.4.
MAXVAR-GCCA
We use Generalized Canonical Correlation Analysis (GCCA) (Carroll, 1968) to learn a single embedding from multiple views. GCCA finds $G$ and $U_i$ that minimize:

$$\sum_i \| G - X_i U_i \|_F^2 \quad \text{subject to} \quad G^\top G = I$$

where $X_i \in \mathbb{R}^{n \times d_i}$ is the data matrix for view $i$, $U_i \in \mathbb{R}^{d_i \times k}$ maps from the latent space to observed view $i$, and $G \in \mathbb{R}^{n \times k}$ contains all user representations (Van De Velden and Bijmolt, 2006).
Weighted GCCA
Since each view may be more or less helpful for a downstream task, we do not want to treat each view equally in learning a single embedding. Instead, we weigh each view differently in the objective:

$$\sum_i w_i \| G - X_i U_i \|_F^2, \qquad w_i \ge 0$$

where $w_i$ explicitly expresses the importance of the $i$th view in determining the joint embedding. The columns of $G$ are the eigenvectors of $\sum_i w_i X_i (X_i^\top X_i)^{-1} X_i^\top$. In our experiments, we use the approach of Rastogi, Van Durme, and Arora (2015) to learn $G$ and $U_i$, since it is more memory-efficient than directly decomposing the sum of projection matrices.
We also consider a minor modification of GCCA, where $G$ is scaled by the square root of the singular values of $\sum_i w_i X_i (X_i^\top X_i)^{-1} X_i^\top$ (GCCA-sv). This is inspired by previous work showing that scaling each feature of multiview embeddings by the singular values of the data matrix can improve performance at downstream tasks such as image or caption retrieval (Mroueh, Marcheret, and Goel, 2015). Note that if we only consider a single view, $X_1$, with weight $w_1 = 1$, then the solution to GCCA-sv is identical to the PCA solution for data matrix $X_1$, without mean-centering.
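For intuition, a naive MAXVAR-wGCCA solver can decompose the weighted sum of projection matrices directly, as in the sketch below; note that the memory-efficient approach of Rastogi, Van Durme, and Arora (2015) is what we actually use, and this sketch assumes complete views and small n:

import numpy as np

def wgcca_naive(views, weights, k, reg=1e-8):
    # views: list of (n, d_i) matrices; weights: per-view w_i; k: embedding width
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X, w in zip(views, weights):
        C = X.T @ X + reg * np.eye(X.shape[1])   # regularized view covariance
        M += w * (X @ np.linalg.solve(C, X.T))   # w_i * X_i (X_i^T X_i)^-1 X_i^T
    eigvals, eigvecs = np.linalg.eigh(M)         # eigenvalues in ascending order
    G = eigvecs[:, ::-1][:, :k]                  # top-k eigenvectors -> embeddings
    U = [np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ G)
         for X in views]                         # per-view maps to the latent space
    return G, U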
SUMCOR-GCCA
In addition to the MAXVAR-GCCA objective, we also consider another generalization of CCA to more than two views: SUMCOR-GCCA. The SUMCOR-GCCA problem is given in Equation 3.3:

$$\max_{U_1, \ldots, U_V} \; \sum_{i \ne j} \operatorname{Tr}\!\left( U_i^\top X_i^\top X_j U_j \right) \quad \text{subject to} \quad U_i^\top X_i^\top X_i U_i = I \;\; \forall i \qquad (3.3)$$

where $V$ is the number of views and $U_i$ are the canonical weights for view $i$.
SUMCOR-GCCA seeks to find mappings that maximize the sum of total correlation captured between every pair of views while ensuring that the canonical variates for each view are orthonormal as in CCA. This differs from the MAXVAR-GCCA objective in two ways: (1) SUMCOR-GCCA requires no nuisance variable, G, to ensure views are mapped close to each other. The orthonormality of projected views is ensured by the hard constraints in the objective.
(2) The SUMCOR-GCCA problem seeks to maximize the sum of correlations between each pair of views. The MAXVAR formulation instead seeks to maximize the maximum eigenvalue of the correlation matrix between all pairs of views (Kettenring, 1971).
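To make the contrast concrete, the SUMCOR objective value for a candidate set of mappings can be computed as in the following sketch (an illustrative helper of our own, not part of the LasCCA algorithm itself):

import numpy as np

def sumcor_value(views, U):
    # Sum over ordered pairs of views of Tr(U_i^T X_i^T X_j U_j),
    # i.e. the total correlation captured between projected views.
    Z = [X @ Ui for X, Ui in zip(views, U)]
    return sum(np.trace(Z[i].T @ Z[j])
               for i in range(len(Z)) for j in range(len(Z)) if i != j)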
Jointly solving for all U i is difficult, so we run the Large-scale generalized CCA (LasCCA) algorithm (Fu et al., 2016) for a fixed number of iterations (100) to solve for the mappings for each view. LasCCA proceeds by maximizing the SUMCOR-GCCA objective with respect to each U i round-robin, holding all other view mappings fixed.
We consider this multiview objective for three reasons: (1) It allows us to compare if a slightly different multiview objective yields similarly-performing embeddings to those learned to maximize the MAXVAR-GCCA.
(2) We can assess how performant the learned embeddings are as a function of LasCCA epochs devoted to solving the GCCA problem.
(3) Although LasCCA does not guarantee an optimal solution, the algorithm is designed to scale well when the input views are very high-dimensional and sparse, avoiding keeping low-rank approximations to the sum of projection matrices as in multiview LSA. This allows LasCCA to learn multiview embeddings directly from, for example, a bag-of-words in all of a user's tweets.
Robust LasCCA Algorithm
The LasCCA algorithm is shown in Algorithm 1, and relies on the H-compute subroutine in Algorithm 2. In order to support our Twitter data, we modified the original LasCCA algorithm to ignore views with missing data, similar to the modification of multiview LSA; the terms that differentiate this algorithm from the LasCCA algorithm presented in Fu et al. (2016) are highlighted in red. LasCCA is more computationally efficient than standard GCCA algorithms when there are many high-dimensional views.
Algorithm 1: Robust LasCCA (normalization is by the number of non-zero views per example).
Algorithm 2: H-compute subroutine for LasCCA (inputs: observations, auxiliary variates, and masking matrices for each view).
Experiment Description
We selected three prediction tasks to evaluate the effectiveness of the multiview user embeddings: user engagement prediction, friend recommendation, and demographic characteristic inference. Our focus is on showing the performance of multiview embeddings relative to other representations, not on building the best system for a given task.
Learning Embedding Details
GCCA embeddings were learned over combinations of the views in Subsection 3.1.2.
When available, we also consider GCCA-net, where in addition to the four text views, we also include the follower and friend network views used by NetSim-PCA. For computational efficiency, each of these views was first reduced in dimensionality by projecting its TF-IDF-weighted BOW representation to a 1,000-dimensional vector through PCA (we excluded count vectors from the GCCA experiments since they performed similarly to TF-IDF representations in initial experiments). We add an identity matrix scaled by a small regularization constant, 10^{-8}, to the per-view covariance matrices before inverting, for numerical stability, and use the formulation of GCCA reported in Van De Velden and Bijmolt (2006), which ignores rows with missing data (some users had no data in the mention tweet view, and some users' accounts were private). We tune the weighting of each view i, w_i ∈ {0.0, 0.25, 1.0}, discriminatively for each task, although the GCCA objective is unsupervised once the w_i are fixed (this weight sweep applies only to linear GCCA embeddings).
When learning deep GCCA (dGCCA) and LasCCA embeddings, we do not apply any view-weighting, although it is not difficult to imagine altering the dGCCA and LasCCA objectives to weight per-view losses. For LasCCA, we consider embeddings learned over the following sets of views: {ego text, friend network}, all four text views, and all views (all text views along with the two network views). We run the LasCCA algorithm for a fixed 100 epochs, with a maximum of 20 iterations for solving the linear least squares subproblems. When we compare representations in the following tasks, we sweep over embedding width in {10, 20, 50, 100, 200, 300, 400, 500, 1000} for all methods. We also consider concatenations of vectors for every possible subset of views (singletons, pairs, triples, and all views) for the BOW-PCA baseline.
User Engagement Prediction
The goal of user engagement prediction is to determine which topics a user will likely tweet about, using the hashtags they mention as a proxy. This task is similar to hashtag recommendation for a tweet based on its contents (She and Chen, 2014; Zangerle, Gassler, and Specht, 2013). Purohit et al. (2011) presented a supervised task to predict if a hashtag would appear in a tweet using features from the user's network, previous tweets, and the tweet's content.
We selected the 400 most frequently used hashtags in messages authored by our users and which first appeared in March 2015, randomly and evenly dividing them into development and test sets. We held out the first 10 users who tweeted each hashtag as exemplars of users that would use the hashtag in the future. We ranked all other users by the cosine distance of their embedding to the average embedding of these 10 users.
Since embeddings are learned on data pre-March 2015, the hashtags cannot impact the learned representations. Performance is measured using precision and recall at k, as well as mean reciprocal rank (MRR), where a user is marked as correct if they used the hashtag. Note that this task is different than that reported in Purohit et al. (2011), since we are making recommendations at the level of users, not tweets.
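A sketch of this ranking procedure (names are illustrative; the exemplars are the 10 held-out users per hashtag):

import numpy as np

def rank_by_exemplars(E, exemplar_rows):
    # E: (n_users, d) embedding matrix; exemplar_rows: indices of the 10 users
    # who first used the hashtag. Remaining users are ranked by cosine
    # similarity to the mean exemplar embedding.
    q = E[exemplar_rows].mean(axis=0)
    q = q / np.linalg.norm(q)
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    scores = En @ q
    scores[exemplar_rows] = -np.inf              # exclude the exemplars themselves
    return np.argsort(-scores)                   # best-ranked users first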
Friend Recommendation
The goal of friend recommendation/link prediction is to recommend/predict other accounts for a user to follow (Liben-Nowell and Kleinberg, 2007).
We selected the 500 most popular accounts followed by our users (which we call celebrities) and randomly, evenly divided them into dev and test sets. We randomly select 10 users who follow each celebrity and rank all other users by cosine distance to the average of these 10 representations. The tweets of selected celebrities are removed during embedding training so as not to influence the learned representations. We use the same evaluation as user engagement prediction, where a user is marked as correct if they follow the given celebrity.
For both user engagement prediction and friend recommendation we z-score normalize each feature, subtracting off the mean and scaling each feature independently to have unit variance, before computing cosine similarity. We select the approach and whether to z-score normalize based on the development set performance.
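The normalization and the MRR metric used for both ranking tasks can be sketched as follows (hypothetical helper names):

import numpy as np

def zscore(X):
    # Subtract the mean and scale each feature independently to unit variance.
    sd = X.std(axis=0)
    return (X - X.mean(axis=0)) / np.where(sd > 0, sd, 1.0)

def mean_reciprocal_rank(rankings, relevant_sets):
    # rankings: list of ranked user-index arrays, one per hashtag/celebrity;
    # relevant_sets: matching sets of "correct" users for each query.
    rr = []
    for ranked, rel in zip(rankings, relevant_sets):
        first = next((i for i, u in enumerate(ranked) if u in rel), None)
        rr.append(0.0 if first is None else 1.0 / (first + 1))
    return float(np.mean(rr))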
Demographic Prediction
Our final task is to infer the demographic characteristics of a user (Al Zamal, Liu, and Ruths, 2012;Chen et al., 2015).
We use the dataset from Volkova, Coppersmith, and Van Durme (2014b) and Volkova (2015), which annotates 383 users for age (old/young), 383 for gender (male/female), and 396 for political affiliation (republican/democrat), with balanced classes. Predicting each characteristic is a binary supervised prediction task. Each set is partitioned into 10 folds, with two folds held out for test and the other eight for tuning via cross-fold validation. The provided dataset contained tweets from each user, mentioned users, friends, and follower networks. It did not contain the actual social networks for these users, so we did not evaluate NetSim, NetSim-PCA, or GCCA-net at these prediction tasks.
Each feature set was z-score normalized before being passed to a linear-kernel SVM, where we swept over {10^{-4}, ..., 10^{4}} for the penalty on the error term, C. The best GCCA setting placed weight 1 on the ego tweet view, mention view, and friend view, while the best BOW-PCA setting concatenated these same views, suggesting that these were the three most important views, but that GCCA was able to learn a better representation. There are a few other points to note. First, dGCCA outperforms the linear multiview methods according to recall at 1000 and MRR. This is exciting because it shows that this task benefits from incorporating more than just two views of Twitter users, beyond what linear multiview representation learning methods capture. These results suggest that a nonlinear transformation of the input views can yield additional gains in performance.
User Engagement Prediction
In addition, the GCCA models sweep over every possible weighting of views. That dGCCA is able to outperform GCCA at hashtag recommendation is therefore encouraging, since GCCA has much more freedom to discard uninformative views, whereas the dGCCA objective forces networks to minimize reconstruction error equally across all views.
In addition, the LasCCA embeddings learned on all views, also unweighted, perform almost as well as dGCCA. This suggests that linear multiview representation learning methods may learn embeddings as effective as nonlinear ones, given a slightly different GCCA formulation. However, it is not clear why the SUMCOR objective would be more appropriate than the MAXVAR generalized CCA objective for learning embeddings geared towards this task. It also further underscores the fact that multiview techniques seem to be more appropriate than single-view techniques for learning embeddings effective at user engagement prediction.
Effect of LasCCA Solution Quality on User Engagement Prediction
LasCCA is an iterative algorithm for solving the SUMCOR-GCCA problem. In practice we learned embeddings with a fixed 100 epochs for all embedding widths, saving intermediate solutions at increments of 5 epochs up to 100. We then examined final downstream performance at user engagement prediction as a function of how many LasCCA epochs were taken to learn an embedding, as well as other training parameters such as embedding width and which views LasCCA was applied to.
It is encouraging that performance at hashtag recommendation is completely insensitive to the number of epochs (Figure 3.4, left). In contrast, downstream performance is most influenced by which embedding width we choose (Figure 3.4, right), although we also find that LasCCA user embeddings learned over network views improve over just text views (center), echoing what we see when learning GCCA embeddings. GCCA-sv performs identically to GCCA-net, since it only placed weight on the friend network view, learning identical embeddings to GCCA-net. Table 3.3 shows the average cross-fold validation and test accuracy on the demographic prediction task. "+ BOW" indicates that BOW features were concatenated to the embeddings as an additional feature set for the classifier. The wide variation in performance is due to the small size of the datasets, so it is hard to draw many conclusions (the average development performance of all models is within one standard deviation of each other). However, Word2Vec surpasses other representations on two of the three datasets, and including TF-IDF weighted bag-of-words features tends to improve the generalization performance of most classifiers.
Demographic Prediction
It is difficult to compare the performance of the methods we evaluate here to that reported in previous work (Al Zamal, Liu, and Ruths, 2012). This is because they report cross-fold validation accuracy (not test), they consider a wider range of hand-engineered features, different subsets of networks, radial basis function kernels for SVM, and find that accuracy varies wildly across different feature sets. They report cross-fold validation accuracy ranging from 0.619 to 0.805 for predicting age, 0.560 to 0.802 for gender, and 0.725 to 0.932 for politics.
Experiment
We used these cluster exemplars to construct an intruder detection task, submitted to Amazon Mechanical Turk. For each cluster, we presented the subject with links to four of the five exemplar users' Twitter summaries, along with an intruder user sampled uniformly at random from another cluster's exemplars. The order of users was randomized for each HIT, and the subject was asked to complete two tasks: (1) given only the information provided on the users' summary pages (their most recent tweets, user text description, and profile image), identify which user is the most different from the other four; and (2) provide a short label describing what the remaining users have in common. We treat the intruder detection task as a proxy for user cluster coherence: how similar are users belonging to the same cluster. If it is easier for a subject to spot which user does not belong, that suggests that the other users share an easily identifiable, common property. This task was inspired by work in evaluating the quality of topics learned by a topic model, specifically the word intrusion task described in Chang et al. (2009). In addition, we were able to use this task to quickly collect cluster labels for qualitative analysis, without influence from our own biases.
Each cluster was labelled by three unique subjects and we compare embedding types by accuracy at the intruder detection task, averaged over all annotations.
Results
We omitted a single subject's HITs from analysis, as they completed a large number of HITs very quickly, performed only slightly above random chance (23% accuracy), and labelled clusters uninformatively (e.g., "They are all the same" or "posts are in English"). After removing this user, we calculated accuracy over a total of 311 annotations. Surprisingly, subjects found the clusters from BOW-PCA[ego] to be the most coherent (Figure 3.8).
Although the confidence intervals estimated by bootstrap samples are wide, subjects were able to detect the intruder statistically significantly more frequently than chance for all embedding types, according to a proportion z-test (p = 0.05). The BOW-PCA[ego] embeddings resulted in statistically significantly higher accuracy than PCA on all views according to this same test (p = 0.05).
One reason why subjects better detected intruders in the BOW-PCA[ego] clusters was likely the information they were allowed to act on: a short summary of the Twitter user. Grouping users together by the frequent words they post is a simple cue for someone to latch onto. These are exactly the sorts of features that methods considering only the ego text view will try to preserve. However, we find it interesting that dGCCA clusters are more coherent than BOW-PCA[all]. Although less coherent than an ego text embedding, this suggests that multiview representation learning methods yield more "natural" user embeddings when consolidating multiple types of input behavior.
Appendix A contains an exhaustive list of labels assigned to each cluster along with a few examples of Twitter users belonging to the same cluster. Many of these clusters were assigned vague labels ("They all speak English", "none"), which speaks to the difficulty of this task. Not only are the user clusters noisy and subjects are given scant information in the Twitter summary, but the user embeddings were learned over three years before the HIT was conducted.
Preprocessing Considerations
In this chapter we naïvely preprocessed the text views by removing stop words and restricting the vocabulary size to the 20,000 most frequent token types. Because of this, the user representations we learn in this chapter sometimes captured user behavior that would be considered noise in most downstream tasks.
Figure 3.9: Tweets from exemplar users from an "astrology app" cluster. Members of this cluster belonged to a range of astrological signs, and the only discernible feature shared between them was automated posts generated by the app. We intentionally obfuscated the users' names for their privacy.
The most salient examples of this were clusters of users who registered for the same Twitter app. Astrological sign apps are particularly popular, and some automatically post tweets associated with the user's astrological sign. Figure 3.9 shows exemplar tweets from one such cluster learned from GCCA embeddings. This cluster mixes users with different signs suggesting that user representations generalize to those who subscribed to this particular astrology app, rather than homing in on repetition of tokens for one astrological sign. Another cluster included users who subscribed to follower-tracking apps that automatically tweet about changes in their follower network. Although we focus on evaluating different methods of learning user representations, this underscores just how important data quality and preprocessing are when applying these methods to real-world data.
Summary
This chapter shows how unsupervised user embeddings can be learned from multiple views of Twitter user behavior. We find that although embeddings learned on friending behavior alone are the most predictive of other friends a user may have, multiview embeddings learned over views of both what the ego user posts and their friending behavior better capture which hashtags they are likely to use in the future.
Although subjects found embeddings learned only on ego text to yield more coherent user clusters than multiview embeddings, multiview user embedding clusters were more coherent than those learned by applying a single-view dimensionality reduction technique to all views.
Chapter 4 User-Conditioned Topic Models
Background: Supervised Topic Models
Social media has proved invaluable for research in social and health sciences, including sociolinguistics (Eisenstein, Smith, and Xing, 2011), political science (O'Connor et al., 2010b), and public health (Paul and Dredze, 2011). A common theme is the use of topic models (Blei, Ng, and Jordan, 2003), which, by identifying major themes in a corpus, summarize the content of large text collections. Topic models have been applied to characterize tweets (Ramage, Dumais, and Liebling, 2010), blog posts and comments (Yano, Cohen, and Smith, 2009; Paul and Girju, 2009), and other short texts (Phan, Nguyen, and Horiguchi, 2008).
Figure 4.1: Graphical model of LDA (left) and DMR (right) in plate notation. The key difference between these topic models is that DMR includes document-dependent features, α, that affect the document-topic prior through log-linear weights, η, shared across all documents. LDA conversely shares the same document-topic prior for all documents.
Supervision can be incorporated into topic models in several ways: predicting labels for each document, e.g., supervised LDA (Mcauliffe and Blei, 2008); modeling tags associated with each document, e.g., labeled LDA (Ramage et al., 2009) or tagLDA (Zhu, Blei, and Lafferty, 2006); placing priors over topic-word distributions (Jagarlamudi, III, and Udupa, 2012; Paul and Dredze, 2013); or interactive feedback from the user (Hu et al., 2014). Using the terminology of Mimno and McCallum (2008), these models can be classified as either "upstream" or "downstream", referring to whether this supervision is assumed to be generated before or after the text in the generative story. The supervised models we consider in this chapter are upstream models with document-level supervision, in particular Dirichlet Multinomial Regression (DMR).
DMR Generative Story
Figure 4.1 illustrates the generative stories of LDA and DMR in plate notation. In upstream topic models, supervision influences the priors over topic distributions in documents.
1. For each document m: compute the document-topic prior θ̃_m = exp(η_b + η^⊤ α_m) and sample θ_m ∼ Dirichlet(θ̃_m).
2. For each topic k: sample a word distribution φ_k ∼ Dirichlet(exp(ω_b)).
3. For each token n in each document m: (a) sample topic index z_mn ∼ θ_m; (b) sample word token w_mn ∼ φ_{z_mn}.
Fitting Topic Models
The experiments described in this chapter use a collapsed Gibbs sampler with regularized hyperparameter updates to infer topic model parameters. Methods such as variational expectation maximization are also possible, but we fit models only by Gibbs sampling updates, due to the simplicity of implementation and applicability to all the topic model architectures we consider.
The inference procedure for each model involves alternating between one iteration of collapsed Gibbs sampling (sampling each token's topic assignment) and one iteration of gradient ascent for the parameters η b (bias vector in the document-topic prior), ω b (bias in the topic-word prior), and η (weights determining how document supervision influences the document-topic prior).
Gibbs Sampling
The Gibbs sampling step involves sampling z_mn, each topic assignment, in turn for every word in the corpus, w_mn, where m is the document index and n is the word index within a document. Each topic assignment is drawn conditioned on all previous topic assignments as well as the document-topic and topic-word priors, Dirichlet(θ̃_m) and Dirichlet(φ̃) respectively. Formally, the probability that k is sampled as the current topic assignment is proportional to:

$$p(z_{mn} = k \mid \cdot) \propto \left( C(m, k) + \tilde{\theta}_{mk} \right) \frac{C(w_{mn}, k) + \tilde{\phi}_{k w_{mn}}}{\sum_{w'} \left( C(w', k) + \tilde{\phi}_{k w'} \right)}$$

where C(m, k) is the number of times topic k was sampled in document m (excluding the word we are currently sampling for) and C(w_mn, k) is the number of times topic k was sampled for word w_mn (Paul, 2015b). The counts are aggregated over all topic assignments except the current word being sampled. The right-hand side is converted to a probability by normalizing by the sum of the unnormalized topic sampling probabilities. This Gibbs sampling step is the same for both unsupervised LDA and supervised models like DMR; the only difference between these two models is how φ̃ and θ̃ are parameterized (Figure 4.2).
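A minimal sketch of this sampling step (our own illustration; counts are assumed to already exclude the current token):

import numpy as np

def sample_topic(m, w, C_mk, C_wk, C_k, theta_prior, phi_prior, rng):
    # C_mk: (M, K) document-topic counts; C_wk: (V, K) word-topic counts;
    # C_k: (K,) total tokens assigned to each topic; theta_prior: (M, K)
    # document-topic Dirichlet prior; phi_prior: (K, V) topic-word prior.
    p = (C_mk[m] + theta_prior[m]) \
        * (C_wk[w] + phi_prior[:, w]) \
        / (C_k + phi_prior.sum(axis=1))
    p /= p.sum()                                 # normalize to a distribution
    return rng.choice(len(p), p=p)               # draw the new topic index

# usage: rng = np.random.default_rng(0); k_new = sample_topic(m, w, ...)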
Hyperparameter Updates
The hyperparameters that parameterize the document-topic prior are learned by first-order methods. We first calculate the gradient of the joint log-likelihood of both the observed words and sampled topics with respect to the prior hyperparameters, and then update the hyperparameters along the gradient direction.
The partial derivative of an upstream topic model's log-likelihood with respect to the document-topic prior θ̃_m is:

$$\frac{\partial \mathcal{L}}{\partial \tilde{\theta}_{mk}} = \psi\Big(\sum_{k'} \tilde{\theta}_{mk'}\Big) - \psi\Big(\sum_{k'} \big(C(m, k') + \tilde{\theta}_{mk'}\big)\Big) + \psi\big(C(m, k) + \tilde{\theta}_{mk}\big) - \psi\big(\tilde{\theta}_{mk}\big)$$

where k is a topic index and ψ is the digamma function, the derivative of the natural logarithm of the gamma function (a generalization of the factorial to complex and real numbers). The partial derivative with respect to φ̃ is analogous:

$$\frac{\partial \mathcal{L}}{\partial \tilde{\phi}_{kw}} = \psi\Big(\sum_{w'} \tilde{\phi}_{kw'}\Big) - \psi\Big(\sum_{w'} \big(C(w', k) + \tilde{\phi}_{kw'}\big)\Big) + \psi\big(C(w, k) + \tilde{\phi}_{kw}\big) - \psi\big(\tilde{\phi}_{kw}\big)$$

where w and w′ are word indices. If θ̃_m is parameterized as exp(η_b + η^⊤ α_m) (as in DMR), we can simply apply the chain rule to solve for the partial derivatives with respect to the prior hyperparameters η_b and η. The same goes for the topic-word parameters ω_b.
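As an illustration, the DMR case works out to the following sketch, under the assumption that θ̃_m = exp(η_b + η^⊤ α_m) as above (C_mk are the document-topic counts; names are our own):

import numpy as np
from scipy.special import digamma

def dmr_prior_gradients(alpha, eta, eta_b, C_mk):
    # alpha: (M, F) document features; eta: (F, K) weights; eta_b: (K,) bias
    theta = np.exp(eta_b + alpha @ eta)          # (M, K) prior hyperparameters
    N_m = C_mk.sum(axis=1, keepdims=True)        # tokens per document
    s = theta.sum(axis=1, keepdims=True)
    # d logL / d theta_tilde, following the digamma expression above
    g = digamma(s) - digamma(s + N_m) + digamma(theta + C_mk) - digamma(theta)
    g *= theta                                   # chain rule through exp(.)
    return alpha.T @ g, g.sum(axis=0)            # gradients w.r.t. eta, eta_b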
In practice we also include a small amount of ℓ2 regularization on the gradient term. This is necessary to prevent hyperparameter weights from growing far too large, overfitting to the current topic samples.
Calculating Model Fit
Perplexity is the exponentiated average negative log probability of the corpus under the model:

$$\text{perplexity} = \exp\left( -\frac{\sum_m \sum_{n=1}^{N_m} \log p(w_{mn})}{\sum_m N_m} \right)$$

where N_m is the number of words in document m. Perplexity can be interpreted as encoding how "confused" the topic model is, on average, for each token in the corpus.
A topic model with lower perplexity is better at predicting which words are likely to occur in a document than one with higher perplexity (assigning higher average log-likelihood to words in the corpus).
Heldout perplexity is computed by only aggregating document-topic and topicword counts from every other token in the corpus, and evaluating perplexity on the remaining heldout tokens. This corresponds to the "document completion" evaluation method as described in (Wallach et al., 2009), where instead of holding out the words in the second half of a document, every other token is held out after shuffling the words within a document 2 . The counts C(m, k), C(w, k) are computed only over training token samples.
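Putting the two previous points together, heldout perplexity can be sketched compactly (training counts and priors as defined above; names are our own):

import numpy as np

def heldout_perplexity(C_mk, C_wk, theta_prior, phi_prior, heldout):
    # heldout: list of (document index m, word index w) heldout tokens.
    # Counts C_mk (M, K) and C_wk (V, K) come from training tokens only.
    logp = []
    for m, w in heldout:
        theta = C_mk[m] + theta_prior[m]
        theta = theta / theta.sum()              # document-topic estimate
        phi_w = (C_wk[w] + phi_prior[:, w]) \
                / (C_wk.sum(axis=0) + phi_prior.sum(axis=1))
        logp.append(np.log(theta @ phi_w))       # p(w) = sum_k theta_k * phi_kw
    return float(np.exp(-np.mean(logp)))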
(Word ordering within a document is of no consequence to the probabilistic topic models we consider, since they assume that each word is generated independently of all other words in a document, given the current document-topic distribution. We shuffle the tokens within each document before topic sampling to ensure that ordering effects do not influence the word distributions between training and heldout tokens.)
Proposal: Neuralize the Prior
DMR is restricted to low-dimensional, well-curated document features; high-dimensional or noisy supervision can cause it to overfit. One solution to addressing this restriction is to learn low-dimensional representations of document metadata before conditioning DMR on them. Neural networks have shown widespread success at learning generalizable representations, often obviating the need for hand-designed features (Collobert and Weston, 2008). A prime example is word embedding features in natural language processing, which supplant traditional lexical features (Brown et al., 1992; Mikolov et al., 2013a; Pennington, Socher, and Manning, 2014). Jointly learning networks that construct feature representations along with the parameters of a standard NLP model has become a common approach. For example, Yu, Gormley, and Dredze (2015) used a tensor decomposition to jointly learn features from both word embeddings and traditional NLP features, along with the parameters of a relation extraction model.
Additionally, neural networks can handle a variety of data types including text, images, and general metadata features. This makes them appropriate tools for addressing dimensionality reduction in DMR.
Deep Dirichlet Multinomial Regression (dDMR) is a model that extends DMR by introducing a deep neural network that learns a transformation of the input metadata into features used to form the document-topic prior. Whereas DMR parameterizes the document-topic priors as a log-linear function of document features, dDMR jointly learns a feature representation for each document along with a log-linear function that best captures the distribution over topics. Since the function mapping document features to topic prior is a neural network, we can jointly optimize the topic model and the neural network parameters by gradient ascent and back-propagation. For simplicity we make no assumptions on the type of this function, only that it can be optimized to minimize a cost on its output by gradient ascent. In practice, we define this function as a neural network, where the architecture of this network is informed by the type of document supervision, e.g. a convolutional neural network for images.
Model
We use neural networks since they are expressive, generalize well to unseen data, and can be jointly trained using straightforward gradient ascent with back-propagation.
The generative story for dDMR is as follows:
1. For each document m, compute the document-topic prior θ̃_m = exp(f(α_m) + η_b), where f is a neural network applied to the document features α_m.
2. Sample the document-topic distribution θ_m ∼ Dirichlet(θ̃_m).
3. Compute the topic-word prior φ̃ = exp(ω_b) ∈ R^V.
4. For each topic k, generate a word distribution φ_k ∼ Dirichlet(φ̃), where V is the vocabulary size and K is the number of topics.
5. For each token n in each document m: sample a topic index z_mn ∼ θ_m and a word token w_mn ∼ φ_{z_mn}.
In practice, the document features need not be restricted to fixed-length feature vectors; e.g., f may be an RNN that maps from a sequence of characters to a fixed-length vector in R^k.
DMR is a special case of dDMR with the choice of a linear function for f .
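A sketch of such a prior network in PyTorch; the (hidden, output) widths and ReLU/linear activations mirror the MLP architectures used later in this chapter, and the class name is our own:

import torch
import torch.nn as nn

class DocTopicPrior(nn.Module):
    # theta_tilde_m = exp(f(alpha_m) + eta_b), with f a single-hidden-layer MLP.
    def __init__(self, n_features, hidden_dim, n_topics):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(n_features, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_topics))     # linear output layer
        self.eta_b = nn.Parameter(torch.zeros(n_topics))

    def forward(self, alpha):
        # alpha: (batch, n_features) document supervision (tags, CNN features, ...)
        return torch.exp(self.f(alpha) + self.eta_b)

Replacing self.f with a single nn.Linear layer recovers DMR as the special case described above.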
Synthetic Experiments
Our intuition in developing dDMR was that if the document-level supervision is very high-dimensional but lies on a low-dimensional manifold, then expressing the supervision with respect to its position on this manifold will avoid overfitting a topic model to the training corpus. DMR does not perform any such dimensionality reduction and thus may be liable to overfitting when the source of supervision is high-dimensional; a neural prior topic model that learns an appropriate embedding of the supervision will not be as susceptible. We constructed a synthetic dataset to determine what sort of corpora are more appropriate to model with dDMR rather than DMR.
Data Generation
Algorithm 3 displays pseudocode for how the synthetic corpus was generated. 10,000 documents were generated with 50 tokens per document, according to the generative story of a dDMR model where f was defined as a single-hidden-layer feedforward neural network with a 5-dimensional hidden layer (sigmoid activation function) and a 100-dimensional output layer (softmax activation function).
Model Fit to Synthetic Corpora
For each corpus, we fit three 20-topic models: LDA, DMR, and dDMR. The dDMR model had an identical prior architecture to the generating model, but with randomly initialized weights. We fit models by the procedure described in Section 4.1.2 and evaluated model fit by heldout perplexity after 1,000 Gibbs sampling iterations. dDMR can exploit the noisy supervision to achieve a much lower perplexity: it achieves no worse heldout perplexity than DMR across all corpora, excelling when noise is high and supervision is wide. This suggests that dDMR is a promising model for using high-dimensional, noisy supervision, such as user features, to improve topic model fit.
dDMR Evaluation
We explore the flexibility of dDMR by considering three different datasets that include different types of metadata associated with each document. We first describe the documents and metadata associated with each dataset and then the criteria by which we evaluate topic models.
Data
All datasets were preprocessed similarly: article text was tokenized on non-alphanumeric characters, numerals were replaced by a special number token, and infrequent word types were excluded from the corpora, although the number of word types kept varies slightly between corpora.
New York Times
The New York Times Annotated Corpus (Sandhaus, 2008) contains articles with extensive metadata used for indexing by the newspaper. For supervision, we used the "descriptor" tags associated with each article assigned by archivists. These tags reflect the topic of an article, as well as organizations or people mentioned in the article. We selected all articles published in 1998, and kept those tags that were associated with at least 3 articles in that year -2424 unique tags. 20 of the 200 most frequent tags were held out from training for validation purposes: { "education and schools", "law and legislation", "advertising", "budgets and budgeting", "freedom and human rights", "telephones and telecommunications", "bombs and explosives", "sexual harassment", "reform and reorganization", "teachers and school employees", "tests and testing", "futures and options trading", "boxing", "firearms", "company reports", "embargoes and economic sanctions", "hospitals", "states (us)", "bridge (card game)", and "auctions"}. Articles contained an average of 2.1 tags each, with 738 articles not containing any of these tags. Tags were represented using a one-hot encoding to use for supervision.
Words occurring in more than 40% of documents were removed, and only the 15,000 most frequent types were retained. This resulted in a total of 89,397 articles with an average length of 158 tokens per article.
Amazon Product Reviews
The Amazon product reviews corpus (McAuley and Yang, 2016) contains reviews of products as well as images of the product. We sampled 100,000 Amazon product reviews: 20,000 reviews sampled uniformly from the Musical Instruments, Patio, Lawn, & Garden, Grocery & Gourmet Food, Automotive, and Pet Supplies product categories. We hypothesize that knowing information about the product's appearance will indicate which words appear in the review, especially for product images occurring in these categories. 66 of the reviews we sampled contained only highly infrequent tokens, and were therefore removed from our data, leaving 99,934 product reviews.
Reviews were preprocessed identically to the New York Times articles.
We include images as supervision by passing each product's image through the Caffe convolutional neural network reference model, trained to predict ImageNet object categories. We then extract the 4096-dimensional second fully-connected layer from this network to use as document supervision. Using these features as supervision in a dDMR model with a feedforward network prior is similar to fine-tuning a pretrained CNN to predict a new set of labels. Since the Caffe reference model is already trained on a large corpus of images, we chose to fine-tune only the final layers, so as to learn a transformation of the already-learned representation.
Reddit Messages
We finally constructed a corpus of online text by selecting a sample of Reddit posts made in January 2016. A standard stop list was used to remove frequent function words, and we restricted the vocabulary to the 30,000 most frequent types. We restricted posts to subreddits (collections of topically-related threads) with at least ten comments in this month (26,830 subreddits), and to users with at least five comments across these subreddits (a total of 1,351,283 users). We then sampled 10,000 users uniformly at random and used all their comments as a corpus, for a total of 389,234 comments over 7,866 subreddits (document length mean: 16.3, median: 9). We considered a one-hot encoding of the subreddit ID a comment belonged to as supervision.
This corpus differs from the others in two ways. First, Reddit documents are very short, which presents a challenge for topic models that rely on detecting correlations in token use within a document. Second, the Reddit metadata that may be useful for topic modeling is necessarily high-dimensional (e.g. subreddit identity, a proxy for topical content), so we believed that DMR will likely have trouble exploiting it.
Experiment Description
We used the same procedure to fit topic models on each dataset. Hyperparameter gradient updates were performed after a burnin period of 100 Gibbs sampling iterations.
Hyperparameters were updated with the adaptive learning rate algorithm Adadelta, with a tuned base learning rate and fixed ρ = 0.95 (we found this adaptive learning rate algorithm improved model fit in many fewer iterations than gradient descent with a tuned step size and decay rate, for all models). All models were trained for a maximum of 15,000 epochs, with early stopping if heldout perplexity showed no improvement after 200 epochs (evaluated once every 20 epochs).
We used single-hidden-layer multi-layer perceptrons (MLPs), with rectified linear unit (ReLU) activations on the hidden layer and linear activation on the output layer, as the dDMR neural prior architecture. We sampled three architectures for each dataset by drawing layer widths independently at random from [10, 500], and also included two architectures with (hidden, output) layer widths of (50, 10) and (100, 50); these two very narrow architectures ensure that some architecture learns a small feature representation, which generalizes better when features are very noisy or only provide a weak signal for topic modeling. We restricted ourselves to single-hidden-layer MLP priors to limit our search space. We compare the performance of dDMR to DMR trained on the same feature set, as well as to LDA.
For the New York Times dataset, we also compare dDMR to DMR trained on features after applying principal components analysis (PCA) to reduce the dimensionality of the descriptor feature supervision, sweeping over PCA projection width in {10, 50, 100, 250, 500, 1000}. Comparing the performance of dDMR to PCA-reduced DMR tests two modeling choices. First, it tests the hypothesis that explicitly learning a representation for document annotations to maximize data likelihood produces a better-fit topic model than learning this annotation representation in an unsupervised, two-step process. It also lets us determine whether a linear dimensionality reduction technique is sufficient to learn a good feature representation for topic modeling, as opposed to learning a non-linear transformation of the document supervision. Note that we cannot apply PCA to reduce the dimensionality of the subreddit ID in the Reddit data, since these are one-hot features.
Model Selection
Documents in each dataset were partitioned into ten equally-sized folds. Model training parameters (the ℓ1 and ℓ2 regularization penalties on feature weights for DMR and dDMR, and the base learning rate for each model class) were tuned to minimize heldout perplexity on the first fold. These were tuned independently for each model, with the number of topics fixed to 10 and the dDMR architecture fixed to narrow layer widths (50, 10). Model selection was based on the macro-averaged performance on the next eight folds, and we report performance on the remaining fold.
We selected models separately for each evaluation metric. For dDMR, model selection amounts to selecting the document prior architecture, and for DMR with PCA-reduced feature supervision, model selection involved selecting the PCA projection width.
Evaluation
Each model was evaluated according to (1) heldout perplexity, (2) topic coherence measured by normalized pointwise mutual information (NPMI) (Lau, Newman, and Baldwin, 2014), and (3) a dataset-specific predictive task. We also collect user preferences for topics learned by each model. These are all typical approaches to evaluating topic models (Paul, 2015a).
NPMI computes an automatic measure of topic quality: the sum of pointwise mutual information between pairs of the m most likely words, normalized by the negative log probability of each pair jointly occurring within a document (Equation 4.6):

$$\text{NPMI}(k) = \sum_{i < j} \frac{\log \frac{p(w_i, w_j)}{p(w_i)\, p(w_j)}}{-\log p(w_i, w_j)} \qquad (4.6)$$

A topic with a large NPMI score is one whose most probable words tend to occur in the same documents more frequently than chance. We calculated this topic quality metric on the top 20 most probable words in each topic, and averaged over the most coherent 1, 5, 10, and all learned topics. However, models were selected to maximize only the average NPMI over all topics.
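A sketch of this computation, using document frequencies as probability estimates (helper names are our own):

import numpy as np

def topic_npmi(top_words, doc_freq, joint_freq, n_docs, eps=1e-12):
    # doc_freq[w]: number of documents containing w;
    # joint_freq[(wi, wj)]: number of documents containing both words.
    vals = []
    for i, wi in enumerate(top_words):
        for wj in top_words[i + 1:]:
            p_i = doc_freq[wi] / n_docs
            p_j = doc_freq[wj] / n_docs
            p_ij = joint_freq.get((wi, wj), 0) / n_docs + eps
            vals.append(np.log(p_ij / (p_i * p_j)) / -np.log(p_ij))
    return float(np.sum(vals))                   # summed over word pairs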
For the prediction tasks, we used the sampled topic distribution associated with a document, averaged over the last 100 iterations, as features to predict a document label.
For New York Times articles we predicted 10 of the 200 most frequent descriptor tags, restricting to articles with exactly one of these descriptors. For Amazon, we predicted the product category a document belonged to (one of five), and for Reddit we predicted a heldout set of document subreddit IDs. In the case of Reddit, these heldout subreddits were 10 out of the 100 most prevalent in our data, and were held out just as in the New York Times prediction task. SVM models were fit on inferred topic distribution features and were then evaluated according to accuracy, F1-score, and area under the ROC curve. The SVM slack parameter was tuned by 4-fold cross-validation on 60% of the documents, and evaluated on the remaining 40%.
Figure 4.6: An example HIT for the Amazon data, eliciting which of two topics humans believe is more likely for a product with the displayed image (a cat feeder).
We also collected human topic judgments using Amazon Mechanical Turk (Callison-Burch and Dredze, 2010). Each subject was presented with a human-readable version of the features used for supervision. For New York Times articles we showed the descriptor tags, for Amazon the product image, and for Reddit the name, title, and public description of the subreddit. We showed the top twenty words for the most probable topic sampled for the document with those features, as learned by two different models. One topic was learned by dDMR and the other was either learned by either LDA or DMR. The topics presented were from the 200-topic model architecture that maximized NPMI on development folds. Annotators were asked "to choose which word list best describes a document . . . " with the displayed features. The topic learned by dDMR was shuffled to lie on either the right or left for each Human Intelligence Task (HIT). An example HIT for the Amazon data is shown in Figure 4.6.
We obtained judgments on 1,000 documents for each dataset and each model evaluation pair -6,000 documents in all. This task can be difficult for many of the features, which may be unclear (e.g. descriptor tags without context) or difficult to interpret (e.g. images of unfamiliar automotive parts). We chose to not present the document text as well, since we did not want subjects to evaluate topic quality based on token overlap with the actual document.
The hyperparameter updates can be distributed as well, though not as easily, ultimately making resampling topics the most expensive step in model training. Because of this, the potential difference in runtime for a single iteration between dDMR and LDA is small, with the former converging in far fewer iterations. The time taken per iteration by DMR or dDMR was at most twice as long as that of LDA across all experiments.
Sensitivity to Learning Parameters
dDMR performance is also much less sensitive to training parameters than DMR. While DMR requires heavy ℓ1 and ℓ2 regularization and a very small step size to achieve low heldout perplexity, dDMR is relatively insensitive to the regularization penalty and benefits from a higher base learning rate (Figure 4.8). We found that dDMR is easier to tune than DMR, requiring less exploration of the training parameters. This is also corroborated by the higher variance in perplexity achieved by DMR across different cross-validation folds.
Table 4.2: Top-1, 5, 10, and overall topic NPMI across all datasets. Models that maximized overall NPMI across dev folds were chosen, and the best-performing model is in bold.
Topic Quality
Results for the automatic topic quality evaluation, NPMI, are mixed across datasets. In many cases, LDA and DMR score highly according to NPMI, despite achieving higher heldout perplexity than dDMR (Table 4.2). This may not be surprising as previous work has found that perplexity does not correlate well with human judgments of topic coherence (Lau, Newman, and Baldwin, 2014).
However, in the Mechanical Turk evaluation, subjects found that dDMR-learned topics are more representative of document annotations than DMR-learned topics. Although subjects only statistically significantly favored dDMR models over LDA on the Reddit data, they favored dDMR topics over LDA by a small margin across all datasets, and statistically significantly preferred dDMR topics over DMR on two of the three datasets. This is contrary to the model rankings according to NPMI, which predict that DMR topics would be preferable.
Predictive Performance
Finally, we consider the utility of the learned topic distributions for downstream prediction tasks, a common use of topic models. Although token perplexity is a standard measure of topic model fit, it has no direct relationship with how topic models are typically used: to identify consistent themes or reduce the dimensionality of a document corpus. We found that features based on topic distributions from dDMR outperform LDA and DMR on the Amazon and Reddit data when the number of topics fit is large, although they fail to outperform DMR on New York Times (Table 4.4). Heldout perplexity is strongly correlated with predictive performance, with a Pearson correlation coefficient of ρ = 0.898 between F1-score and heldout perplexity on the Amazon data. This strong correlation is likely due to the tight relationship between the words used in product reviews and the product category: a model that assigns high likelihood to the words in a product review corpus should also be informative of the product categories. Prior work showed that upstream supervised topic models, such as DMR, learn topic distributions that are effective at downstream prediction tasks (Benton et al., 2016b). We find that topic distributions learned by dDMR improve over DMR in certain cases, particularly as the number of topics increases.
Table 4.4: Top F-score, accuracy, and AUC on prediction tasks for all dDMR evaluation datasets.
Qualitative Results
We also qualitatively explored the product image representations DMR and dDMR learned on the Amazon data. To do so, we computed and normalized the prior document-topic distribution for a sample of documents learned by the lowest-perplexity DMR and dDMR 200-topic models:

$$p(k \mid m) = \frac{\tilde{\theta}_{mk}}{\sum_{k'} \tilde{\theta}_{mk'}}$$

This is the prior probability of sampling topic k conditioned on the features for document m (before seeing any words in the document). We then marginalize over topics to yield the conditional probability of a word w given document m:

$$p(w \mid m) = \sum_k p(w \mid k)\, p(k \mid m)$$

Table 4.5 contains a sample of these probable words given document supervision.
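A two-line sketch of this marginalization (names are our own; Table 4.5 additionally subtracts each word's mean marginal probability across all images to suppress highly frequent words):

import numpy as np

def prior_words_given_doc(theta_tilde_m, phi):
    # theta_tilde_m: (K,) unnormalized document-topic prior for document m;
    # phi: (K, V) topic-word distributions.
    p_k = theta_tilde_m / theta_tilde_m.sum()    # p(k | m)
    return p_k @ phi                             # p(w | m) for every word w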
We find that dDMR identifies words likely to appear in a review of the product pictured.
However, some images lead dDMR down a garden path. For example, a bottle of "Turtle Food" should not be associated with words for human consumables like "coffee" and "chocolate", despite the container resembling some of these products. However, the image-specific document priors DMR learned are not as sensitive to the actual product image as those learned by dDMR. The prior conditional probabilities p(w|m) for "Turtle Food", "Slushy Magic Cup", and "Rawhide Dog Bones" product images are all ranked identically by DMR.
Application: Predicting Policy Surveys with Twitter Data
In section 4.2, we presented a new supervised topic model that is more resilient to noisy supervision than DMR. In this section we apply DMR and dDMR to three different Twitter public policy opinion datasets, comparing how models conditioned on inferred user location features fare against supervised models trained with distant demographics and policy-relevant features.
Motivation
One goal of social media analytics is to complement or replace traditional survey mechanisms (Thacker and Berkelman, 1988;Krosnick, Judd, and Wittenbrink, 2005).
Traditional phone surveys are both slow and expensive to run. For example, the CDC's annual Behavioral Risk Factor Surveillance System (BRFSS) is a health-related telephone survey that collects health data by calling more than 400,000 Americans.
Table 4.5: Top twenty words associated with each of the product images, learned by dDMR vs. DMR (Z = 200). These images were drawn at random from the Amazon corpus (no cherry-picking involved). Word lists were generated by marginalizing over the prior topic distribution associated with that image and then normalizing each word's probability by subtracting off its mean marginal probability across all images in the corpus. This is done to avoid displaying highly frequent words. Words that differ between each model's ranked list are in bold.
We would like to fit topic models to these data, and use the inferred topic distributions to predict survey responses at the state level. We consider two classes of supervision for guiding supervised topic models: weak author demographic and opinion supervision based on the inferred location of the tweet author. We also compare how predictive of BRFSS survey responses DMR is relative to dDMR when we use a one-hot encoding of the author's inferred location as topic model supervision, at either the state, county, or city level.
Datasets
We created three Twitter datasets based on keyphrase filtering (Table 4.6), with data collected from Dec. 2012 through Jan. 2015, to match tweets relevant to these three survey questions. We selected 100,000 tweets uniformly at random for each dataset and geolocated them to state/county using Carmen. Geolocation coverage is shown in Table 4.7.
We consider the following sources of (distant) topic model supervision along with one-hot author location indicators:
Survey
This indirect supervision uses the values of the BRFSS survey responses that we are trying to predict. Tweets whose authors are resolved to a state are assigned the proportion of "yes" survey respondents within that state. This setting reflects predicting the values for some states using data already available from other states.
This setting is especially relevant for BRFSS, since the survey is run by each state with results collected and aggregated nationally. Since not all states run their surveys at the same time, BRFSS routinely has results available for some states but not yet others.
Census
We also experimented with an alternative indirect type of supervision: demographic information from the 2010 U.S. Census 8 . Demographic variables are correlated with the responses to the surveys we are trying to predict (Hepburn et al., 2007;King, Dube, and Tynan, 2012;Gust et al., 2008), so we hypothesize that conditioning on demographic information may lead to more predictive and interpretable topic models than no supervision at all. This approach may be advantageous when domain-specific survey information is not readily available.
From the Census, we used the percentage of white residents per county as supervision for tweets whose county could be resolved. Although this feature is not directly related to the survey proportions we are trying to predict, it is sampled at a finer granularity than the state-level survey feature. The proportion of tweets tagged with this feature is also included in Table 4.7.

[Table 4.7: A summary of the three Twitter public policy datasets: size of the vocabulary, proportion of messages tagged at the state and county level, and the state-level survey question (BRFSS) asked.]

We evaluate these two types of supervision in isolation to assess the usefulness of each class of distant supervision.
User Location Features
In addition, we consider models conditioned on a one-hot encoding of location, at three levels of granularity: state, county, and city. We restricted to locations resolved within the United States, treating tweets resolved to other countries as though they were not resolved at all. As the surveys we are trying to predict are specific to American opinions, this ensured that document-level features were restricted to tweets more likely to come from United States residents. It also means that tweets tagged with a specific location are a strict subset of those tagged by the state-level Survey feature. Tweets that Carmen was unable to resolve were assigned a NOT_RESOLVED location feature, and finer-granularity features backed off to the most specific type of location resolved.
We consider these direct user location features since, like the Census feature, they are agnostic to which survey question we are trying to predict. However, unlike the Census feature, a topic model conditioned directly on location has more flexibility to learn which topics are more likely in a specific location, rather than relying on a single feature, the proportion of white residents in the county, as a proxy.
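To make the backoff concrete, the following minimal Python sketch shows how a tweet's one-hot location feature could be assigned; the field names and output format are assumptions for illustration, not Carmen's actual API.

```python
# Sketch of the one-hot location feature with backoff: finer granularities
# back off to the most specific resolved location, and tweets outside the
# U.S. or unresolved tweets get NOT_RESOLVED.
def location_feature(resolved, level):
    """resolved: e.g. {'country': 'US', 'state': 'MD', 'county': None, 'city': None}."""
    backoff = {"city": ["city", "county", "state"],
               "county": ["county", "state"],
               "state": ["state"]}
    if not resolved or resolved.get("country") != "US":
        return "NOT_RESOLVED"
    for granularity in backoff[level]:
        if resolved.get(granularity):
            return granularity + ":" + resolved[granularity]
    return "NOT_RESOLVED"

# A tweet resolved only to the state level backs off from city to state.
print(location_feature({"country": "US", "state": "MD"}, "city"))  # state:MD
```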
Experiments
We fit DMR and dDMR models conditioned on each feature set, tuning for held-out perplexity, and evaluated their ability to predict the survey proportion for each state. We also compared to an LDA model without any supervision.
The text was preprocessed by removing stop words and low-frequency words. We also removed usernames, URLs, and non-alphanumeric tokens. We applied z-score normalization to the BRFSS/Census values within each dataset, so that the mean value was 0. For tweets whose location could not be resolved, the Survey and Census document supervision was set to 0.0, and the NOT_RESOLVED one-hot location feature was active.
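A minimal sketch of this supervision preprocessing, with assumed state-level values:

```python
# Z-score the survey supervision within a dataset; unresolved tweets get the
# neutral value 0.0. The state proportions below are placeholders.
import numpy as np

state_yes = {"MD": 0.52, "CA": 0.61, "TX": 0.38}   # BRFSS "yes" rates (assumed)
vals = np.array(list(state_yes.values()))
mu, sd = vals.mean(), vals.std()

def survey_supervision(state):
    """Z-scored survey value for a tweet's resolved state, 0.0 if unresolved."""
    if state not in state_yes:
        return 0.0
    return (state_yes[state] - mu) / sd
```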
Evaluation We evaluated the utility of topics as features for predicting the survey value for each U.S. state, reflecting how well topics capture themes relevant to the survey question. We inferred θ m for each tweet and then averaged these topic vectors over all tweets originating from each state, to construct 50 feature vectors per model. We used these features in a regularized linear regression model. Average root mean-squared error (RMSE) was computed using five-fold cross-validation: 80% of the 50 U.S. states were used to train, 10% to tune the ℓ 2 regularization coefficient on the ridge regression model, and 10% were used for evaluation. In each fold, the topic models used supervision only for tweets from the training set states, while the α values were set to 0.0 (a neutral value) for the held-out states.
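The prediction step can be sketched with scikit-learn as follows; the data here are random placeholders standing in for averaged topic vectors, and for brevity the tuning split that selects the ℓ 2 coefficient is replaced by a fixed alpha.

```python
# Ridge regression from state-averaged topic vectors to survey proportions,
# evaluated by cross-validated RMSE over the 50 states.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.RandomState(0)
X = rng.dirichlet(np.ones(50), size=50)   # one averaged topic vector per state
y = rng.uniform(0, 1, size=50)            # survey proportion per state

rmses = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    rmses.append(np.sqrt(mean_squared_error(y[test_idx], preds)))
print("Average RMSE: %.3f" % np.mean(rmses))
```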
For both perplexity and prediction performance, we sweep over number of topics in {10, 25, 50, 100} and report the best result. Results are averaged across five sampling runs to mitigate variation in performance due to estimating model parameters by Gibbs sampling.
Model Selection For tuning, we held out 10,000 tweets from the guns dataset and used the best learning parameters for all datasets. We ran Spearmint (Snoek, Larochelle, and Adams, 2012) for 100 iterations to tune the learning parameters, running each sampler for 500 iterations. We used Spearmint since it allowed us to automatically explore a large space of learning parameters quickly without resorting to brute-force grid search. Spearmint was used to tune the following learning parameters: the initial values for ω b and η b , as well as ℓ 2 regularization on η b , ω b , and η.
Held-out perplexity is very sensitive to some parameters, such as the initialization of η b and ω b , while other parameters, such as the ℓ 2 regularization on ω b , had little effect.
Once tuned, all models were trained for 2,000 iterations, using AdaGrad with a master step size of 0.02, with no hyperparameter updates made in the first 200 iterations.
Replication: Comparing DMR to dDMR
One crucial detail is that the initial set of experiments with DMR conditioned on Survey and Census features were run using a Java package, sprite 9 , that implemented Sprite topic models -a class of upstream topic models with structured priors (Paul and Dredze, 2015). We attempted to replicate these experiments with a Python 3.5 library that supports defining and training dDMR models with feedforward neural network priors, deep-dmr 10 . Relying on deep-dmr was necessary as sprite does not support training dDMR models.
When replicating models in deep-dmr, we considered a different model selection scheme due to the wide space of possible dDMR models and time restrictions. For each model class conditioned on feature type, we performed a grid search on the heldout gun control tweets for ℓ 1 and ℓ 2 regularization constants in {0.0, 10 −4 , 10 −2 , 10 −1 } and {10 −4 , 10 −2 , 10 −1 , 10 0 }, respectively. These constants were then applied to models trained on all datasets. We also swept over base learning rate for each model class in {10 −2 , 10 −1 , 10 0 }, for Adadelta hyperparameter updates (this is the default hyperparameter update algorithm in this package). For all models, bias hyperparameters were initialized to η b = −2 and ω b = −4, corresponding to sparse initial Dirichlet priors.
For dDMR and each feature set, we swept over three single-hidden-layer architectures; since the location features are one-hot indicators, this amounts to a simple lookup embedding of the state, county, or city features. For DMR, we use each of the feature sets as supervision, but for dDMR we only consider state, county, or city indicator features 11 .
Results
We first present the results of comparing DMR conditioned on Survey and Census features to an unsupervised topic model, LDA. These experiments were run using the sprite package. We then present these experiments replicated using deep-dmr, with the new model learning and selection criteria described in Section 4.4.3.1. We compare perplexity and predictive performance of conditioning on location features under this replication framework.
Evaluating Survey and Census Features
Results from training models in sprite are shown in Table 4.8.

[Table 4.8: RMSE of the prediction task (left) and average perplexity (right) of topic models over each dataset, ± the standard deviation (learned under sprite). Perplexity is averaged over 5 sampling runs and RMSE is averaged over 5 folds of U.S. states. As a benchmark, the RMSE on the prediction task using a bag-of-words model was 11.50, 6.33, and 3.53 on the Guns, Vaccines, and Smoking data, respectively.]
The supervised topic models achieve lower prediction error, as might be expected, but they also have substantially lower perplexity, and thus seem to learn topics that better represent the data.
The poor performance of LDA may be partially explained by the fact that Spearmint seems to overfit LDA to the tuning set. Other models attained a tuning set perplexity of between 1500 and 1600, whereas LDA attained 1200. To investigate this issue further, we separately ran experiments with hand-tuned models, which gave us better held-out results for LDA, though still worse than the supervised topic models (e.g., RMSE of 16.44 on the guns data). Although Spearmint tuning is not perfect, it is fair to all models.
For additional comparison, we experimented with a standard bag-of-words model, where features were normalized counts across tweets from each state. This comparison is done to contextualize the magnitude of differences between models, even though our primary goal is to compare different types of topic models; the bag-of-words results are provided in the caption of Table 4.8.

We used a topic model trained with data from the universal background check (UBC) survey question as features for predicting the state values for the UBC surveys.
As in the previous experiments, we used topic features in a linear regression model, sweeping over ℓ 2 regularization constants and number of topics, and we report test performance of the best-performing settings on the tuning set. We evaluated the model using five-fold cross-validation on the 22 states.
Additionally, we sought to utilize data from a previous, topically-related survey: the "Guns" BRFSS survey used in the previous section, which measured the proportion of "yes" respondents in each state.

Conditioning on Location Features

[Figure 4.9]

We generated similar plots for dDMR models conditioned on state and county features.

Why is the Performance so Different? We took great pains to ensure that both sprite and deep-dmr optimized models identically. We made sure that initializing LDA under both frameworks with the same topic samples yielded identical training and held-out perplexity, and that they achieved similar final held-out perplexity when learning over synthetic data. In the process of uncovering the difference between the original experiments and the replications, we noticed two small discrepancies between these implementations, which were subsequently resolved:
• Treating every other token in the corpus as heldout as opposed to every other token within each document. Since words are shuffled within each document as a preprocessing step, this did not affect heldout perplexity significantly.
• The bias hyperparameters were initialized to different values in each implementation: deep-dmr initialized them to η b = −1 and ω b = −2.
The critical differences between model training and selection in the sprite-trained models and the deep-dmr replicated models are as follows:

• Hyperparameters updated with Adagrad (master learning rate fixed to 0.02) → Adadelta (tuned master learning rate).
• Spearmint-tuned model selection → Grid search for training parameters for each model class

Although these should be relatively minor choices, they clearly had a profound impact on the quality of the topic models that were learned.
Summary
This chapter presents a non-traditional application of user features and embeddings: as document-level supervision that conditions the priors of topic models, evaluated on product review and Twitter public policy data.
Motivation
Suicide is one of the leading causes of death worldwide, and over 90% of individuals who die by suicide experience mental health conditions. 1 However, detecting the risk of suicide, as well as monitoring the effects of related mental health conditions, is challenging. Traditional methods rely on both self-reports and impressions formed during short sessions with a clinical expert, but it is often unclear when suicide is a risk in particular. 2 Consequently, conditions leading to preventable suicides are often not adequately addressed.
Automated monitoring and risk assessment of patients' language has the potential to complement traditional assessment methods, providing objective measurements to motivate further care and additional support for people with difficulties related to mental health. This paves the way toward verifying the need for additional care with insurance coverage, for example, as well as offering direct benefits to clinicians and patients.

1 https://www.nami.org/Learn-More/Mental-Health-Conditions/Related-Conditions/Suicide#sthash.dMAhrKTU.dpuf
2 Communication with clinicians at the 2016 JSALT workshop (Hollingshead, 2016).
We explore some of these possibilities in the mental health space using written social media text that people with different mental health conditions are already producing. Uncovering methods that work with such text provides the opportunity to help people with different mental health conditions by leveraging a data source they are already contributing to.
However, existing studies of mental health in social media typically model each condition in isolation, which misses the opportunity to model coinciding influence factors. Tasks with underlying commonalities (e.g., part-of-speech tagging, parsing, and NER) have been shown to benefit from multi-task learning (MTL), as the learning implicitly leverages interactions between them (Caruana, 1993; Sutton, McCallum, and Rohanimanesh, 2007; Rush et al., 2010; Collobert et al., 2011; Søgaard and Goldberg, 2016). Suicide risk and related mental health conditions are therefore good candidates for modeling in a multi-task framework.
In this chapter we apply multi-task learning for detecting suicide risk and mental health conditions. The tasks in our model include the user mental health conditions of neuroatypicality (i.e. having an atypical mental condition) and suicide attempt, as well as the related mental health conditions of anxiety, depression, eating disorder, panic attacks, schizophrenia, bipolar disorder, and post-traumatic stress disorder (PTSD), and we explore the effect of task selection on model performance. We additionally include the effect of modeling a user demographic feature, gender, which has been shown to improve accuracy in tasks using social media text (Volkova, Wilson, and Yarowsky, 2013;Hovy, 2015).
Predicting suicide risk and several mental health conditions jointly opens the possibility for the model to leverage a shared representation for conditions that frequently occur together, a phenomenon known as comorbidity. Further including gender reflects the fact that gender differences are found in the patterns of mental health (WHO, 2016), which may help to sharpen the model. The MTL framework we propose allows such shared information across predictions and enables the inclusion of several loss functions with a common shared underlying representation. This approach is flexible enough to extend to factors other than the ones shown here, provided suitable data.
We find that choosing tasks that are prerequisites or related to the main task is critical for learning a strong model, similar to findings in Caruana (1996). We further find that including gender as an auxiliary task improves accuracy across a variety of conditions, including suicide risk. The best-performing model from our experiments demonstrates that multi-task learning is a promising new direction in automated assessment of mental health and suicide risk, with possible application to the clinical domain.
Findings
1. We demonstrate the utility of MTL in predicting mental health conditions from users' social media text - a notoriously difficult task (Coppersmith et al., 2015b; Coppersmith et al., 2015a) - with potential application to detecting suicide risk.
2. We explore the influence of task selection on prediction performance, including the effect of gender.
Model Architecture
A neural multi-task architecture opens the possibility of leveraging commonalities and differences between mental conditions. Previous work (Collobert et al., 2011; Caruana, 1996; Caruana, 1993) has indicated that such an architecture allows for sharing parameters across tasks, and can be beneficial when there are varying degrees of annotation across tasks. This makes MTL particularly compelling in light of mental health comorbidity, and given that different conditions have different amounts of associated data.
Previous MTL approaches have shown considerable improvements over single task models, and the arguments are convincing: predicting multiple related tasks should allow us to exploit any correlations between the predictions. However, in much of this work, an MTL model is only one possible explanation for improved accuracy. Another more salient factor has frequently been overlooked: The difference in the expressivity of the model class, i.e., neural architectures vs. discriminative or generative models, and critically, differences in the number of parameters for comparable models. Some comparisons might therefore have inadvertently compared apples to oranges.
In the interest of examining the effect of MTL specifically, we compare the multitask predictions to models with equal expressivity. We evaluate the performance of a standard logistic regression model (a standard approach to text-classification problems), a multilayer perceptron single-task learning (STL) model, and a neural MTL model, the latter two with equal numbers of parameters. This ensures a fair comparison by decoupling the unique regularization of MTL from the dimensionality-reduction aspect of deep architectures in general.
The neural models we evaluate come in two forms. The first, depicted in plate notation on the left in Figure 5.1, are the STL models. These are feedforward networks with two hidden layers, trained independently to predict each task. On the right in Figure 5.1 is the MTL model, where the first hidden layer from the bottom is shared between all tasks. An additional per-task hidden layer is used to give the model flexibility to map from the task-agnostic representation to a task-specific one. Each hidden layer uses a rectified linear unit as non-linearity. The output layer uses a logistic non-linearity, since all tasks are binary predictions.

[Figure 5.1: STL model in plate notation (left): weights trained independently for each task t (e.g., anxiety, depression) of the T tasks. MTL model (right): shared weights trained jointly for all tasks, with task-specific hidden layers. Curves in ovals represent the type of activation used at each layer (rectified linear unit or sigmoid). Hidden layers are shaded.]

The MTL model can easily be extended to a stack of shared hidden layers, allowing for a more complicated mapping from input to shared space. 3 As noted in Collobert et al. (2011), MTL benefits from mini-batch training, which both allows optimization to jump out of poor local optima and permits more stochastic gradient steps in a fixed amount of time (Bottou, 2012). We create mini-batches by sampling uniformly from the users in our data, where each user has some subset of the conditions we are trying to predict, and may or may not be annotated with gender.
At each mini-batch gradient step, we update weights for all tasks simultaneously. This not only allows for randomization and faster convergence, it also provides a speed-up over the task selection process reported in earlier work (Collobert et al., 2011).
Another advantage of this setup is that we do not need complete information for every instance: learning can proceed with asynchronous updates, dependent on what the data in each batch has been annotated for, while sharing representations throughout. This effectively learns a joint model with a common representation for several different tasks, allowing the use of several "disjoint" data sets, some with limited annotated instances.
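As a concrete illustration, here is a minimal Keras sketch of this architecture (the models in this chapter are implemented in Keras); the layer widths and task names are assumptions rather than the tuned values from the text.

```python
# MTL architecture: a shared hidden layer, one task-specific hidden layer
# per task, and sigmoid outputs since all tasks are binary predictions.
from keras.layers import Dense, Input
from keras.models import Model

N_FEATURES = 5000                    # relative char n-gram frequencies
SHARED_WIDTH, TASK_WIDTH = 256, 256  # assumed widths
TASKS = ["anxiety", "depression", "suicide_attempt", "gender"]

x = Input(shape=(N_FEATURES,), name="char_ngram_freqs")
shared = Dense(SHARED_WIDTH, activation="relu", name="shared")(x)

outputs = []
for task in TASKS:
    # Per-task hidden layer maps the task-agnostic representation to a
    # task-specific one before the binary prediction.
    h = Dense(TASK_WIDTH, activation="relu", name=task + "_hidden")(shared)
    outputs.append(Dense(1, activation="sigmoid", name=task)(h))

model = Model(inputs=x, outputs=outputs)
# One binary cross-entropy loss per task. Users missing a label for some
# task can be given zero sample weight on that output during training,
# implementing the asynchronous updates described above.
model.compile(optimizer="adagrad",
              loss=["binary_crossentropy"] * len(TASKS))
```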
Data
We train models on a union of multiple Twitter user datasets: 1) users identified as having anxiety, bipolar disorder, depression, panic disorder, eating disorder, PTSD, or schizophrenia (Coppersmith et al., 2015b), 2) those who had attempted suicide (Coppersmith et al., 2015c), and 3) those identified as having either depression or PTSD.

We use the entire Twitter history of each user as input to the model, and split it into character 1-to-5-grams, which have been shown to generalize better than words for many Twitter text classification tasks (Mcnamee and Mayfield, 2004; Coppersmith et al., 2015b). For instance, a character n-gram representation of a document is less sensitive to typographical errors than token n-gram features - although a single mistyped character will yield an entirely different token, the misspelled word will share most of its character unigram features with the correctly spelled word. We compute the relative frequency of the 5,000 most frequent n-gram features for n ∈ {1, 2, 3, 4, 5} in our data, and then feed this as input to all models. This input representation is common to all models, allowing for fair comparison.
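A sketch of this feature extraction using scikit-learn (an assumed tooling choice; for simplicity, the 5,000-feature cap here is applied across all n-gram orders jointly):

```python
# Relative frequencies of the most frequent character 1-to-5-grams over
# each user's entire tweet history.
from sklearn.feature_extraction.text import CountVectorizer

user_histories = ["all tweets of user one ...",
                  "all tweets of user two ..."]
vec = CountVectorizer(analyzer="char", ngram_range=(1, 5), max_features=5000)
counts = vec.fit_transform(user_histories).astype(float)
# Normalize counts to relative frequencies so that prolific users are
# comparable to users with few tweets.
freqs = counts.multiply(1.0 / counts.sum(axis=1))
```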
Experiments
Our task is to predict suicide attempt and mental conditions for each of the users in these data. We evaluate three classes of models: baseline logistic regression over character n-gram features (LR), feed-forward multilayer perceptrons trained to predict each task separately (STL), and feed-forward multi-task models trained to predict a set of conditions simultaneously (MTL). We experiment with a feed-forward network against independent logistic regression models as a way to directly test the hypothesis that neural classifiers can improve mental condition prediction, particularly when regularized with MTL.
We also perform ablation experiments to see which subsets of tasks help us learn an MTL model that predicts a particular mental condition best. For all experiments, data were divided into five equal-sized folds, three for training, one for tuning, and one for test (we report performance on this fold).
All our models are implemented in Keras 4 with Theano backend and GPU support.
We train the models for a total of up to 15,000 epochs, using mini-batches of 256 examples each. Training time on all five training folds ranged from one to eight hours on a machine with a Tesla K40M GPU.
Evaluation Setup
In clinical settings, we are interested in minimizing the number of false positives, i.e., incorrect diagnoses, which can cause undue stress to the patient. We are thus interested in bounding this quantity. To evaluate the performance, we plot the false positive rate (FPR) against the true positive rate (TPR). This gives us a receiver operating characteristic (ROC) curve, allowing us to inspect the performance of each model on a specific task at any level of FPR.
While the ROC gives us a sense of how well a model performs at a fixed true positive rate, it makes it difficult to compare the individual tasks at a low false positive rate, which is also important for clinical application. We therefore report two more measures: the area under the ROC curve (AUC) and TPR performance at FPR=0.1 (TPR@FPR=0.1). We do not compare our models to a majority baseline model, since this model would achieve an expected AUC of 0.5 for all tasks, and F-score and TPR@FPR=0.1 of 0 for all mental conditions -users exhibiting a condition are the minority, meaning a majority baseline classifier would achieve zero recall.
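The two summary measures can be computed as in the following sketch; the tpr_at_fpr helper is a hypothetical utility, not code from our experiments.

```python
# AUC and TPR at a bounded false positive rate (FPR = 0.1).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.1):
    """Return the true positive rate at the largest FPR <= target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # roc_curve returns FPR values in increasing order.
    idx = np.searchsorted(fpr, target_fpr, side="right") - 1
    return tpr[max(idx, 0)]

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
print(roc_auc_score(y_true, y_score), tpr_at_fpr(y_true, y_score))
```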
Optimization and Model Selection
Even in a relatively simple neural model, there are a number of hyperparameters that can (and have to) be tuned to achieve good performance. We perform a line search for every model we use, sweeping over ℓ 2 regularization and hidden layer width. We select the best model based on the development loss. Figure 5.4 shows the performance on the corresponding test sets (plot smoothed by rolling mean of 10 for visibility).
We train each model for 5,000 iterations, jointly updating all weights in our models.
After this initial joint training, we select each task separately, and only update the task-specific layers of weights independently for another 1,000 iterations (selecting the set of weights achieving the lowest development loss for each task individually). Weights are updated using mini-batch Adagrad (Duchi, Hazan, and Singer, 2011), which converges more quickly than other optimization schemes we initially experimented with. We evaluate the tuning loss every 10 epochs, and select the model with the lowest tuning loss. Both AUC and TPR (at FPR=0.1) demonstrate that single-task models do not perform nearly as well as multi-task models or logistic regression. This is likely because the neural networks learned by STL cannot be guided by the inductive bias provided by MTL training. Note, however, that STL and MTL often perform comparably in terms of F1-score, where false positives and false negatives are equally weighted.
Results
Multi-task suicide predictions reach an AUC of 0.848, and predictions for anxiety and schizophrenia are not far behind ( Figure 5.2). Interestingly however, schizophrenia stands out as being the only condition to be best predicted with a single-task model.
MTL models show improvements over STL and LR models for predicting suicide, neuroatypicality, depression, anxiety, panic, bipolar disorder, and PTSD. The inclusion of gender in the MTL models leads to direct gains over an LR baseline in predicting anxiety disorders: anxiety, panic, and PTSD. Figure 5.3 illustrates the true positive rate - that is, how many cases of mental health conditions we correctly predict - given a low false positive rate - that is, a low rate of predicting that people have mental health conditions when they do not. This is particularly useful in clinical settings, where clinicians seek to minimize overdiagnosis, especially when false positives incur unnecessary treatment and emotional costs. In this setting, MTL leads to the best performance across the board, for all tasks under consideration: neuroatypicality, suicide, depression, anxiety, eating, panic, schizophrenia, bipolar disorder, and PTSD. Including gender in MTL further improves performance for neuroatypicality, suicide, anxiety, schizophrenia, bipolar disorder, and PTSD.
Comorbid Conditions Improve Prediction Accuracy
We find that prediction of the conditions with the least amount of data - bipolar disorder and PTSD - is significantly improved by having the model also predict comorbid conditions with substantially more data: depression and anxiety. These differences in AUC are significant at p = 0.05 according to bootstrap sampling tests with 5,000 samples. The wide difference between MTL and STL can be explained in part by the increased feature set size: MTL training may, in this case, provide a form of regularization that STL cannot exploit. Further, modeling the common mental health conditions with the most data (depression and anxiety) helps improve performance in predicting rarer conditions comorbid with these common conditions. This provides evidence that an MTL model can help in predicting elusive conditions by using large data for common conditions, and a small amount of data for rarer conditions.
Utility of Author Demographic Features
Figures 5.2 and 5.3 both suggest that adding an author demographic feature, such as gender, as an auxiliary task leads to more predictive models, even though the difference is not statistically significant for most tasks. This is consistent with the findings of previous work (Volkova, Wilson, and Yarowsky, 2013; Hovy, 2015). Interestingly, though, the MTL model is worse at predicting gender itself. While this could be a direct result of data sparsity (recall that we have only a small subset annotated for gender), which could be remedied by annotating additional users for gender, this appears unlikely given the other findings of our experiments, where MTL helped specifically in these sparse scenarios.
However, Caruana (1996) notes that not all tasks benefit from an MTL setting in the same way, and that some tasks serve purely auxiliary roles. Here, gender prediction does not benefit from including mental conditions, but guides MTL models to better predict other mental health conditions. In other words, predicting gender is qualitatively different from predicting mental health conditions: it seems likely that the signals for anxiety are much more similar to the ones for depression than for, say, being male, and can therefore add to detecting depression. However, the distinction between certain conditions does not add information for the distinction of gender. The effect may also be due to the fact that these data were constructed with inferred gender (used to match controls), so there might be a degree of noise in the data.
Selecting User Features as Auxiliary Tasks
Although MTL tends to dominate STL in our experiments, it is not clear whether modeling several tasks provides a beneficial inductive bias in MTL models in general, or if there exist specific subsets of auxiliary tasks that are most beneficial for predicting suicide risk and related mental health conditions. We perform ablation experiments by training MTL models on a subset of auxiliary tasks, with prediction for a single main task. We focus on four conditions to predict well: suicide attempt, anxiety, depression, and bipolar disorder. For each main task, we vary the auxiliary tasks we train the MTL model with. Since considering all possible subsets of tasks is combinatorially infeasible, we selected the following task subsets as auxiliary:

• all: all mental conditions along with gender
• all conds: all mental conditions, no gender
• neuro: only neuroatypicality
• neuro+mood: neuroatypicality, depression, and bipolar disorder (mood disorders)
• neuro+anx: neuroatypicality, anxiety, and panic attack (anxiety conditions)
• neuro+targets: neuroatypicality, anxiety, depression, suicide attempt, bipolar disorder
• none: no auxiliary tasks, equivalent to STL

[Table 5.2: Test AUC when predicting the main task after multi-task training to predict a subset of auxiliary tasks. Significant improvement over the LR baseline at p = 0.05 is denoted by *, and over no auxiliary tasks (STL) by †.]
Restricting the auxiliary tasks to a small subset tends to hurt performance for most tasks, with the exception of bipolar disorder, which benefits from the prediction of depression and suicide attempt. All main tasks achieve their best performance using the full set of additional tasks as auxiliary. This suggests that the biases induced by predicting different kinds of mental conditions are mutually beneficial - e.g., multi-task models that predict suicide attempt may also be good at predicting anxiety.
Based on these results, we find it useful to think of MTL with user features as a framework to leverage auxiliary tasks as regularization to effectively combat data paucity and less-than-trustworthy labels. As we have demonstrated, this may be particularly useful when predicting mental health conditions and suicide risk.
Discussion
Our results indicate that an MTL framework with user feature tasks can lead to significant gains over single-task models for predicting suicide risk and several mental health conditions. We find benefit from predicting related mental conditions and demographic attributes simultaneously.
We experimented with all the optimizers that Keras provides, and found that Adagrad seems to converge fastest to a good optimum, although all the adaptive learning rate optimizers (such as Adam, etc.) tend to converge quickly. This indicates that the gradient is steeper along certain parameters than others. Default stochastic gradient descent (SGD) was not able to converge as quickly, since it is not able to adaptively scale the learning rate for each parameter in the model -taking too small steps in directions where the gradient is shallow, and too large steps where the gradient is steep. We further note an interesting behavior: all of the adaptive learning rate optimizers yield a strange "step-wise" training loss learning curve, which hits a plateau, but then drops after about 900 iterations, only to hit another plateau. Obviously, we would prefer to have a smooth training loss curve. We can indeed achieve this using SGD, but it takes much longer to converge than, for example, Adagrad. This suggests that a well-tuned SGD would be the best optimizer for this problem, a step that would require some more experimentation and is left for future work.
We also found that feature counts have a pronounced effect on the loss curves: relative feature frequencies yield models that are much easier to train than raw feature counts. This is understandable, since feature counts are sensitive to differences in the raw number of tweets between users, whereas relative feature frequencies are less sensitive.

[Table 5.3: Average development set loss over epochs 990-1000 of joint training on all tasks as a function of different learning parameters. Models were optimized using Adagrad with hidden layer width 256 (except for the rightmost column, which sweeps over hidden layer width).]
Feature representations are therefore another area of optimization, e.g. different ranges of character n-grams (n > 5). We used character 1-to-5-grams, since we believe that these features generalize better to a new domain (e.g., Facebook) than word unigrams. However, there is no fundamental reason not to choose longer character n-grams, other than time constraints in regenerating the data, and accounting for overfitting with proper regularization.
Initialization is a decisive factor in neural models, and Goldberg (2015) recommends repeated restarts with differing initializations to find the optimal model. In an earlier experiment, we tried initializing an MTL model (without task-specific hidden layers) with pretrained word2vec embeddings of unigrams trained on the Google News n-gram corpus. However, we did not notice an improvement in F-score. This could be due to the other factors, though, such as feature sparsity.
Related Work
Some of the first works on MTL were motivated by medical risk prediction (Caruana, Baluja, and Mitchell, 1996), and it is now being rediscovered for this purpose (Lipton et al., 2016). The latter use a long short-term memory (LSTM) structure to provide several medical diagnoses from health care features (yet no textual or demographic information), and find small, but probably not significant improvements over a structure similar to the STL we use here.
The target in previous work was medical conditions as detected in patient records, not mental health conditions in social text. The focus in this work has been on the possibility of predicting suicide attempt and other mental health conditions using social media text that a patient may already be writing, without requiring full diagnoses.
The framework proposed by Collobert et al. (2011) allows for predicting any number of NLP tasks from a convolutional neural network (CNN) representation of the input text. The model we present is much simpler: A feed-forward network with n-gram input layer, and we demonstrate how to constrain n-gram embeddings for clinical application. Comparing with additional model architectures is possible, but distracts from the question of whether MTL training with user features can improve mental condition prediction in this domain. As we have shown, it can.
Summary
In this chapter we showed that user mental health and gender features can be used to learn more accurate suicide risk and mental health classifiers from Twitter user text.
Integrating user features as auxiliary tasks during training is clearly a more effective way to integrate user features into a classifier than treating them as predictors, since properties like user mental condition and gender are not available at test time. This shows that user features can improve machine learning models that broadly benefit public health.
Our results show that an MTL model trained to predict all user mental health tasks performs significantly better than other models, reaching 0.846 true positive rate for predicting neuroatypicality at a false positive rate of 0.1 (TPR@FPR=0.1), and a TPR@FPR=0.1 of 0.559 for predicting suicide risk. Due to the nature of MTL, we also find pronounced gains in detecting anxiety, PTSD, and bipolar disorder. MTL predictions for anxiety, for example, reduce the error rate from a single-task model by up to 11.9%.
Our results also underscore the general challenge neural models face in defeating strong linear models with scarce training data. Logistic regression classifiers predict a single mental condition more accurately than feedforward neural networks trained on a single task. It is only with the beneficial regularization of user demographic and mental condition tasks that neural networks outperform logistic regression. This suggests that explicitly designing a neural architecture with the classification task in mind can make the critical difference between underperforming and outperforming a baseline linear model. In this case, an architecture of a "forest" of tasks corresponding to correlated user demographic and mental condition comorbidities improved mental condition prediction.
Whether user embeddings can act as useful auxiliary tasks for learning mental health classifiers is still open. However, they may be noisy surrogates for user gender, age, and other demographic features, as evidenced by the experiments in Section 3.5.3.
Therefore, it is natural to assume that user embeddings would be useful auxiliary targets in cases where predicting user demographic properties is related to the main task. Chapter 6 follows this line of research by exploring whether user embeddings are beneficial auxiliary tasks in an MTL framework to improve tweet-level stance classifiers.
Chapter 6 User Embeddings to Improve Tweet Stance Classification

Chapter 5 showed that ground truth user features - mental condition and demographic features - help learn more accurate classifiers, specifically at predicting the mental conditions a Twitter user has based on their character n-gram usage in the tweets they post. This was accomplished by training neural classifiers in an MTL framework where additional user conditions and gender were added as auxiliary tasks. The question remains: can user embeddings take the place of ground truth user features and also act as beneficial auxiliary tasks? This chapter answers this question (in the affirmative!) for the domain of tweet stance classification. This is further evidence that semi-supervised training, predicting user embeddings as an initial auxiliary task, can be used to improve a wide range of tasks beyond predicting latent user features.
In this chapter we consider recurrent neural network (RNN) tweet-level stance classifiers, and evaluate the efficacy of different pretraining schemes. We evaluate on two separate datasets: 1) the hashtag-annotated Twitter gun control opinion dataset described in Chapter 4 and 2) the stance classification dataset released as part of the SemEval 2016 6A shared task. We show that user embeddings alone are surprisingly effective at predicting gun control stance. We then use the author embeddings indirectly, as auxiliary tasks to pretrain the parameter-heavy RNN stance classifiers. We find that this pretraining improves stance classification performance on average across the five domains in the SemEval shared task, although the pretrained models still perform on par with, or worse than, linear classifiers trained on tweet token n-gram features.
Section 6.1 introduces the problem of stance classification. Section 6.4 then describes the different datasets used for training stance classifiers as well as learning the user embeddings used in pretraining. Section 6.5 discusses the experimental setting and Section 6.3 describes the model architectures we evaluate in detail. Finally, Section 6.6 presents the performance of user embeddings for predicting stance alone, along with the performance of RNNs. This work was presented at W-NUT 2018 (Benton and Dredze, 2018b).
Introduction
Social media analyses often rely on a tweet classification step to produce structured data for analysis, including tasks such as sentiment (Jiang et al., 2011) and stance (Mohammad et al., 2016) classification. Common approaches feed the text of each message to a classifier, which predicts a label based on the content of the tweet.
However, many of these tasks benefit from knowledge about the context of the message, especially since short messages can be difficult to understand (Aramaki, Maskawa, and Morita, 2011;Collier and Doan, 2011;Kwok and Wang, 2013). One of the best sources of context is the message author herself. Consider the task of stance classification, where a system must identify the stance towards a topic expressed in a tweet. Having access to the latent beliefs of the tweet's author would provide a strong prior as to their expressed stance, e.g. general political leanings provide a prior for their statement on a divisive political issue. Therefore, we propose providing user level information to classification systems to improve classification accuracy.
One of the challenges with accessing this type of information on social media users, and Twitter users in particular, is that it is not provided by the platform. While political leanings may be helpful, they are not directly contained in metadata or user-provided information. Furthermore, it is unclear which categories of user information will best inform each classification task. While information about the user may be helpful in general, what information is relevant to each task may be unknown.
We propose pretraining tweet stance classifiers to predict a user embedding given the tweet text. This is similar to the multi-task training of mental health classifiers in Chapter 5, where ground truth binary user features were used as auxiliary tasks, instead of embeddings. Since a deployed classifier will likely encounter many new users for which we do not have embeddings, we use the user embeddings as a mechanism for pretraining the classification model. By pretraining model weights to be predictive of user embeddings, a classifier will be able to generalize better on held-out data after training on a task-specific dataset. This pretraining can be performed on a separate, unlabeled dataset of tweets and user embeddings and tends to improve downstream task performance. Although semi-supervised approaches to stance classification are far from new, they have been implemented at the message level - predicting held-out hashtags from a tweet, for example (Zarrella and Marsh, 2016). Our approach leverages additional user information that may not be contained in a single message.
In this chapter, we evaluate our approach on two stance classification datasets: 1) the SemEval 2016 task of stance classification (Mohammad et al., 2016) and 2) the guns-related Twitter opinion data described in Section 4.4.2. On both datasets we compare the benefit of pretraining neural stance classifiers to predict different user embeddings derived from different types of online user activity: an author's ego text user embedding, their friend network embedding, and a multiview embedding of both of these views. We also compare pretraining on within-domain user embeddings vs.
pretraining on the generic out-of-domain user embeddings learned in Chapter 3.
Stance Classification
The popularity of sentiment classification is motivated in part by the utility of understanding the opinions expressed by a large population (Pang and Lee, 2008). Sentiment analysis of movie reviews (Pang, Lee, and Vaithyanathan, 2002) can produce overall ratings for a film, analysis of product reviews allow for better recommendations (Blitzer, Dredze, and Pereira, 2007), and analysis of opinions on important issues can serve as a form of public opinion polling (Tumasjan et al., 2010;Bermingham and Smeaton, 2011).
Although similar to sentiment classification, stance classification concerns the identification of an author's position with respect to a given target (Anand et al., 2011;Murakami and Raymond, 2010). This is related to the task of targeted sentiment classification, in which both the sentiment and its target must be identified (Somasundaran and Wiebe, 2009). In the case of stance classification, we are given a fixed target, e.g. a political issue, and want to predict the opinion of a piece of text towards that issue.
While stance classification can be expressed as a complex set of opinions and attitudes (Rosenthal, Farra, and Nakov, 2017), we confine ourselves to the task of binary stance classification, in which we seek to determine if a single message expresses support for or opposition to the given target (or neither). This definition was used in the SemEval 2016 task 6 stance classification task (Mohammad et al., 2016).
A key observation behind stance classification is that the system is designed to uncover the latent position held by the author of the message. While most work in this area seeks to infer the author's position based only on the given message, other information about the author may be available to aid in the analysis of a message.
Consider a user who frequently expresses liberal positions on a range of political topics. Even without observing any messages from the user about a specific liberal political candidate, we can reasonably infer that the author would support the candidate. Therefore, when given a message from this author whose target is the political candidate, our model should have a strong prior to predict a positive label.
This type of information is readily available on social media platforms where we can observe multiple messages from a user, as well as other behaviors such as sharing content, liking or promoting content, and the social network around the user. Additionally, this type of contextual information is most needed in a social media setting. Unlike long form text common in sentiment analysis of articles or reviews, analysis of social media messages necessitates understanding short, informal text. Context becomes even more important in a setting that is challenging for NLP algorithms to operate in.
How can we best make use of contextual information about the author? Several challenges present themselves: First, what contextual information is valuable to social media stance classifiers?
We may have previous messages from the user, social network information, and a variety of other types of online behaviors. How can we best summarize a wide array of user behavior in an online platform into a single, concise representation?
We answer this question by exploring several representations of this context, encoded as a user embedding: a low-dimensional representation of the user that can be used as features by the classification system. We include a multiview user embedding that is designed to summarize multiple types of user information into a single embedding, learned in Chapter 3.
Second, how can we best use contextual information about the author in the learning process? Ideally we would be provided a learned user representation along with every message we were asked to classify. This is unrealistic. Learning user representations requires data to be collected for each user and computation time to process that data. Neither of these are available in many production settings, where millions of messages are streamed on a given topic. It is impractical to insist that additional information be collected for each user, new representations inferred, all while the consumer of a stance classifier waits for a label to be predicted for a single tweet.
Instead, we integrate user context in a multi-task learning setting, similar to how user gender was used as an auxiliary task to improve mental condition classification in Chapter 5. We consider augmenting neural models with a pretraining step that updates model weights according to an auxiliary objective function based on available user representations. This pretraining step initializes the hidden layer weights of the stance classification neural network so that the resulting model improves even when observing only a single message at classification time.
Finally, while our focus is stance classification, this approach is applicable to a variety of document classification tasks in which author information can provide important insights in solving the classification problem.
Models
Our stance classification tasks focus on tweets: short snippets of informal text. We rely on recurrent neural networks as a base classification model, as they have been effective classifiers for this type of data (Tang, Qin, and Liu, 2015;Vosoughi, Vijayaraghavan, and Roy, 2016;Limsopatham and Collier, 2016;. Our base classification model is a gated recurrent unit (GRU) recurrent neural network classifier. The GRU consumes the input text as a sequence of tokens and produces a sequence of final hidden state activations. Prediction is based on a convex combination of these activations, where the combination weights are determined by global dot-product attention using the final hidden state as the query vector (Luong, Pham, and Manning, 2015). A final softmax output layer predicts the stance class labels based on the convex combination of hidden states. Input layer word embeddings are initialized with GloVe embeddings pretrained on Twitter text (Pennington, Socher, and Manning, 2014).
For this baseline model, the RNN is fit directly to the training set, without any pretraining, i.e. training maximizes the likelihood of class labels given the input tweet. As in Chapters 4 and 5, we have the option of exploring an entire zoo of neural architectures. This is however not the point of this thesis -we want to show how user features and embeddings can be used to improve downstream tasks; indiscriminately exploring different architectures distracts from this point.
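As a concrete illustration, here is a minimal Keras sketch of the base classifier; all sizes are assumptions, and GloVe initialization and tuning details are omitted.

```python
# Base model: a GRU over tokens whose hidden states are combined by global
# dot-product attention, with the final hidden state as the query.
from keras import backend as K
from keras.layers import GRU, Dense, Embedding, Input, Lambda
from keras.models import Model

VOCAB, EMB_DIM, HIDDEN, N_CLASSES, MAX_LEN = 20000, 100, 128, 3, 40

tokens = Input(shape=(MAX_LEN,), dtype="int32")
emb = Embedding(VOCAB, EMB_DIM)(tokens)            # init with GloVe in practice
states = GRU(HIDDEN, return_sequences=True)(emb)   # one hidden state per token

def attend(h):
    query = h[:, -1, :]                                   # final hidden state
    scores = K.sum(h * K.expand_dims(query, 1), axis=2)   # dot-product scores
    weights = K.softmax(scores)                           # combination weights
    return K.sum(K.expand_dims(weights, -1) * h, axis=1)  # convex combination

context = Lambda(attend, output_shape=(HIDDEN,))(states)
stance = Dense(N_CLASSES, activation="softmax")(context)
model = Model(inputs=tokens, outputs=stance)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```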
We now consider an enhancement to our base model that incorporates user embeddings.
RNN Classifier with User Embedding Pretraining
We augment the base RNN classifier with an additional final (output) layer to predict an auxiliary user embedding for the tweet author. The objective function used for training this output layer depends on the type of user embedding (described below). A single epoch is made over the pretraining set before fitting to the training set.
In this case, the RNN must predict information about the tweet author in the form of a d-dimensional user embedding based on the input tweet text. If certain dimensions of the user embedding correlate with different stances towards the given topic, the RNN will learn representations of the input that predict these dimensions, thereby initializing the RNN with good representations for determining stance.
The primary advantage of this semi-supervised setting is that it decouples the stance classification annotated training set from a set of user embeddings. It is not always possible to have a dataset with stance-labeled tweets as well as user embeddings for each tweet author (as is the case for our datasets). Instead, this setting allows us to utilize a stance-annotated corpus, and separately create representations for a disjoint set of pretraining users, even without knowing the identity of the authors of the stance-annotated tweets.
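The two-phase procedure can be sketched as follows; the encoder here is a simplified GRU without attention to keep the example short, and all sizes and data arrays are illustrative placeholders.

```python
# Phase 1: pretrain the encoder to predict the author's user embedding (MSE).
# Phase 2: attach a fresh softmax head and fit to stance labels; the encoder
# weights carry over from pretraining.
import numpy as np
from keras.layers import GRU, Dense, Embedding, Input
from keras.models import Model

VOCAB, MAX_LEN, HIDDEN, EMB_DIM, N_CLASSES = 20000, 40, 128, 50, 3

tokens = Input(shape=(MAX_LEN,), dtype="int32")
h = GRU(HIDDEN)(Embedding(VOCAB, 100)(tokens))   # shared encoder

aux_out = Dense(EMB_DIM, activation="linear")(h)
pretrain_model = Model(tokens, aux_out)
pretrain_model.compile(optimizer="adam", loss="mse")
X_pre = np.random.randint(VOCAB, size=(1000, MAX_LEN))  # pretraining tweets
U_pre = np.random.randn(1000, EMB_DIM)                  # author embeddings
pretrain_model.fit(X_pre, U_pre, epochs=1)              # single epoch, as above

stance_out = Dense(N_CLASSES, activation="softmax")(h)
stance_model = Model(tokens, stance_out)
stance_model.compile(optimizer="adam", loss="categorical_crossentropy")
```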
User Embedding Models
We explore pretraining on several different user embeddings. These methods capture both information from previous tweets by the user as well as social network features.
Keyphrases In some settings, we may have a set of important keyphrases that we believe to be correlated with the stance we are trying to predict. Knowing which phrases are most commonly used by an author may indicate that author's likely stance toward the given issue. We consider how an author has used keyphrases in previous tweets by computing a distribution over keyphrase mentions and treat this distribution as their user representation.
Author Text When a prespecified list of keyphrases is unknown, we include all words in the user representation. Rather than construct a high dimensional embedding -one dimension for each type in the vocabulary -we reduce the dimensionality by applying principal component analysis (PCA) to the TF-IDF-weighted user-word matrix based on tweets from authors (latent semantic analysis) (Deerwester et al., 1990). We use the 30,000 most frequent token types after stopword removal.
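A sketch of this construction with scikit-learn (an assumed tooling choice), where TruncatedSVD plays the role of PCA on the TF-IDF matrix:

```python
# Author Text embedding: latent semantic analysis, i.e. a truncated SVD of
# the TF-IDF-weighted user-word matrix. Documents here are placeholders; in
# practice each "document" is a user's recent tweets, restricted to the
# 30,000 most frequent types after stopword removal.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

user_docs = ["concatenated tweets of user one",
             "concatenated tweets of user two",
             "concatenated tweets of user three"]
tfidf = TfidfVectorizer(stop_words="english", max_features=30000)
X = tfidf.fit_transform(user_docs)
svd = TruncatedSVD(n_components=2, random_state=0)  # 50 dimensions in practice
author_text_embeddings = svd.fit_transform(X)
```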
Social Network On social media platforms, people friend other users who share common beliefs (Bakshy, Messing, and Adamic, 2015). These beliefs may extend to the target issue in stance classification. Therefore, a friend relationship can inform our priors about the stance held by a user. We construct an embedding based on the social network by creating an adjacency matrix of the 100,000 most frequent Twitter friends in our dataset (users whom the ego user follows). We construct a PCA embedding of the local friend network of the author.
MultiView Representations Finally, we consider a canonical correlation analysis (CCA) multiview embedding over the content of the user's messages as well as their social network 1 . We project both the text and friend network PCA embeddings described above, and take the mean projection of both views as a user's embedding. We use a mean squared error loss to pretrain the RNN on these embeddings since they are all real-valued vectors. When pretraining on a user's keyphrase distribution, we instead use a final softmax layer and minimize cross-entropy loss.
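A sketch of the multiview construction with scikit-learn's CCA (an assumed tooling choice; the inputs are random placeholders standing in for the two PCA views):

```python
# Multiview embedding: CCA over the text-view and network-view PCA
# embeddings, averaging the two projections into one user embedding.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.RandomState(0)
text_view = rng.randn(200, 50)      # PCA of the TF-IDF user-word matrix
network_view = rng.randn(200, 50)   # PCA of the friend adjacency matrix

cca = CCA(n_components=10)          # 50 components in the experiments
proj_text, proj_network = cca.fit_transform(text_view, network_view)
user_embeddings = (proj_text + proj_network) / 2.0
```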
For embeddings that rely on content from the author, we collected the most recent 200 tweets posted by these authors using the Twitter REST API 2 . If the user posted fewer than 200 public tweets, then we collected all of their tweets. We constructed the social network by collecting the friends of users as well 3 . We collected user tweets and networks between May 5 and May 11, 2018.
We considered user embedding widths between 10 and 100 dimensions, but selected dimensionality 50 based on an initial grid search to maximize cross validation (CV) performance for the author text PCA embedding.
Baseline Models
We compare our approach against the following two baseline models:

Hashtag Prediction Pretraining As part of the SemEval 2016 Task 6 tweet stance classification task, Zarrella and Marsh (2016) submitted an RNN-LSTM classifier that used an auxiliary task of predicting the hashtag distribution within a tweet to pretrain their model. There are a few key differences between our proposed method and this work. Their approach is restricted to the stance classification dataset, whereas we consider building representations of the user from context. Additionally, their method is restrictive in that it predicts a task-specific set of hashtags, whereas user features and embeddings offer more flexibility in that they are not as strongly tied to a specific task. However, we select it as a baseline for comparison because of how similar its pretraining objective is to ours.
SVM Baseline
We also reproduce a linear support vector machine that uses word and character n-gram features. This was the best-performing method on average in the SemEval 2016 Task 6 shared task (Mohammad et al., 2016).
We swept over the slack variable penalty coefficient to maximize macro-averaged F1-score on held-out CV folds.

We create labels based on commonly occurring hashtags that were clearly associated with one of these positions (see Table 6.2 for a list of keywords and hashtags). Tweets which contained hashtags from both sets, or which contained no stance-bearing hashtags, were excluded from our data.
We constructed stratified samples from 26,608 labeled tweets in total.

[Table 6.3: Subset of hashtags used in Mohammad et al. (2016) to identify politically-relevant tweets (e.g., Legalization of Abortion: #prochoice, #praytoendabortion, #plannedparenthood). We used this set of hashtags to build a pretraining set relevant to the stance classification task.]
User Embedding Datasets
We considered two unlabeled datasets as a source for constructing user embeddings for model pretraining. Due to data limitations, we were unable to create all of our embedding models for all available datasets. We describe below which embeddings were created for which datasets.
SemEval 2016 Related Users The SemEval stance classification dataset does not contain tweet ids or user ids, so we are unable to determine authors for these messages.
Instead, we sought to create a collection of users whose tweets and online behavior would be relevant to the five topics discussed in the SemEval corpus.
We selected query hashtags used in the shared task (Mohammad et al., 2016) and searched for tweets that included these hashtags in a large sample of the Twitter 1% streaming API sample from 2015 6 . Table 6.3 lists the example hashtags described in Mohammad et al. (2016) used to sample politically relevant tweets from the Twitter stream. This ensured that tweets were related to one of the targets in the stance evaluation task, and were from authors discussing these topics in a similar time period.
We recorded the author of each of these tweets and then queried the Twitter API to pull the tweet authors' most recent 200 tweets and local friends and followers network.
We omitted tweets made by deleted and banned users as well as those who had fewer than 50 tweets total returned by the API. In total, we were able to obtain 79,367 tweets for 49,361 unique users, and were able to pull network information for 38,337 of these users.
For this set of users, we constructed the Author Text embedding (PCA representation of a TF-IDF-weighted bag of words from the user) as well as the Social Network embedding (PCA representation of the friend adjacency matrix). For users with missing social network information, we replaced their network embedding with the mean embedding over all other users. This preprocessing was applied before learning Multiview embeddings over all users.
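A sketch of the two views, assuming docs maps each user id to their concatenated tweet text and adj is a sparse users-by-users friend adjacency matrix; TruncatedSVD is used as the usual stand-in for PCA on sparse inputs.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Author Text view: 50-d SVD of a TF-IDF-weighted bag of words per user.
tfidf = TfidfVectorizer(max_features=50000).fit_transform(docs.values())
text_emb = TruncatedSVD(n_components=50).fit_transform(tfidf)

# Social Network view: 50-d SVD of the friend adjacency matrix.
net_emb = TruncatedSVD(n_components=50).fit_transform(adj)

# Mean-impute the network view for users with no collected network, so the
# multiview step can still be run over every user.
missing = np.asarray(adj.sum(axis=1)).ravel() == 0
net_emb[missing] = net_emb[~missing].mean(axis=0)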
General User Tweets

Is it necessary for our pretraining set to be topically related to the stance task we are trying to improve, or can we consider a generic set of users? To answer this question we created a pretraining set of randomly sampled users, drawn from the set of over 102 thousand users whose embeddings were learned in Chapter 3, not specifically related to any of our stance classification topics. If these embeddings prove useful, they provide an attractive method whereby general embeddings can be created for users not specifically related to the stance classification topic.
Although there are many potential user embeddings we could consider pretraining with, we only consider the ego text, friend network, and a CCA embedding of these two views as user embeddings for pretraining. We selected these since the PCA ego text embedding is a clear baseline, the friend network embedding was shown to be most effective at friend network prediction, and a CCA representation of ego text and friend network was shown to outperform other embeddings at hashtag prediction. We avoided considering CCA embeddings of all subsets of views to narrow the model search space.
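For concreteness, a two-view CCA user embedding of the text and network views can be sketched with scikit-learn's iterative CCA, used here as a stand-in for the (generalized) CCA variants of Chapter 3; text_emb and net_emb are the per-user view matrices from above.

from sklearn.cross_decomposition import CCA

cca = CCA(n_components=50, max_iter=1000)
cca.fit(text_emb, net_emb)
text_proj, net_proj = cca.transform(text_emb, net_emb)

# One common choice is to average the projected views into a single embedding.
multiview_emb = (text_proj + net_proj) / 2.0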
To pretrain classifiers, we randomly selected three tweets that each user made in March 2015 as pretraining tweets. Embeddings were learned over tweets from January and February 2015, a disjoint sample from the three randomly selected tweets from March. This resulted in a pretraining set of 152,751 tweets for 61,959 unique users.
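The pretraining mechanism itself can be sketched as below, assuming (our reading, not a verbatim description) that a GRU encoder is first trained to regress each pretraining tweet's author embedding, after which the encoder is reused under a stance classification head; all module names and sizes are illustrative.

import torch
import torch.nn as nn

class TweetEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, tokens):
        _, h = self.gru(self.embed(tokens))
        return h[-1]  # final hidden state as the tweet representation

encoder = TweetEncoder(vocab_size=50000)
pretrain_head = nn.Linear(128, 50)  # regresses the 50-d user embedding
stance_head = nn.Linear(128, 3)     # FAVOR / AGAINST / NONE

def pretrain_step(tokens, user_emb, opt):
    opt.zero_grad()
    loss = nn.functional.mse_loss(pretrain_head(encoder(tokens)), user_emb)
    loss.backward()
    opt.step()
    return loss.item()

# After pretraining, the encoder weights are kept and stance_head is trained
# (optionally fine-tuning the encoder) on the labeled stance tweets.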
Guns User Tweets
We also kept 49,023 unlabeled guns tweets for pretraining on the gun control stance task, using the distribution over general keyphrases that an author posted across the pretraining set as the user embedding. We pretrained on the Author Text embedding of these tweets, along with a Social Network embedding (network data collected identically to the pretraining datasets above).
Model Training
We preprocessed all tweets by lowercasing and tokenizing with a Twitter-specific tokenizer (Gimpel et al., 2011). We replaced usernames with <user> and URLs with <url>.
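A rough approximation of this preprocessing is sketched below; the actual pipeline uses the Twokenize tokenizer of Gimpel et al. (2011), for which this regex splitting is only a stand-in.

import re

def preprocess(tweet):
    tweet = tweet.lower()
    tweet = re.sub(r"@\w+", "<user>", tweet)
    tweet = re.sub(r"https?://\S+", "<url>", tweet)
    return tweet.split()

print(preprocess("Check this out @SomeUser https://t.co/abc #stance"))
# ['check', 'this', 'out', '<user>', '<url>', '#stance']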
For training on the SemEval dataset, we selected models based on four-fold cross-validation macro-averaged F1-score for the FAVOR and AGAINST classes (the official evaluation metric for this task). Among the hyperparameters swept was the network directionality (forward or bidirectional). Architecture was selected to maximize cross-fold macro-averaged F1 on the "Feminist Movement" topic with the GRU classifier without pretraining. We performed a separate grid search of architectures for the pretraining models. Table 6.4 lists the range of hyperparameters swept by the grid search. Figure 6.1 displays average cross-fold F1 for an RNN pretrained on predicting the ego text embedding.

Results and Discussion

SemEval 2016 Task 6A

Table 6.5 contains the performance for each target in the SemEval 2016 stance classification task.
Considering the pretrained models versus the non-pretrained RNN, pretraining improves performance on four out of five targets. Additionally, one of our models always beats the baseline of tweet-level hashtag distribution pretraining (RNN-content-hashtag).
Notably, while topic-specific user embeddings (hsetpre) improve over not pretraining in four out of five cases, the generic user embeddings (genset) improve in three out of five cases. This suggests that even embeddings for generic users who don't necessarily discuss the topic of interest can have value in model learning.
In terms of embedding type, embeddings built on the author text tended to be best, but the pattern was not entirely consistent.
The linear SVM baseline with word and character n-gram features outperforms the neural models in two out of five tasks, and performs the best on average. This agrees with the submissions to the SemEval 2016 Task 6A stance classification task, where the baseline SVM model outperformed all submissions on average, several of which were neural models.
Guns
Using the guns dataset, we sought to understand how the amount of training data affected the effectiveness of model pretraining.

Table 6.7: Test accuracy of an SVM at predicting gun control stance based on guns-related keyphrase distribution (keyphrase), the user's Author Text embedding (text), and word and character n-gram features (tweet). ▽ marks models significantly worse (p = 0.05) than a tweet-features-only SVM according to a bootstrap sampling test with sample size 250 and 1000 iterations, and ♣ marks feature sets that did significantly better than user-text-PCA.
As with SemEval, the SVM always outperforms the neural models, though the difference is only statistically significant in the smallest data setting. Although we are unable to beat the SVM models, the improvements we observe in RNN performance after user embedding pretraining are promising. Neural architectures offer more flexibility than SVMs, particularly linear-kernel SVMs, and we only consider a single model class (recurrent networks with GRU hidden units). Further architecture exploration is necessary, and user embedding pretraining will hopefully play a role in training state-of-the-art stance classification models.
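The bootstrap test behind the significance markers in Table 6.7 can be sketched as follows (sample size 250, 1000 iterations); correct_a and correct_b are hypothetical 0/1 vectors recording each model's per-example correctness on the shared test set.

import numpy as np

def bootstrap_pvalue(correct_a, correct_b, n_samples=250, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    correct_a, correct_b = np.asarray(correct_a), np.asarray(correct_b)
    wins = 0
    for _ in range(n_iter):
        idx = rng.integers(0, len(correct_a), size=n_samples)
        if correct_a[idx].mean() > correct_b[idx].mean():
            wins += 1
    # One-sided p-value: share of resamples in which A fails to beat B.
    return 1.0 - wins / n_iter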
Since for the guns data the annotated stance data and the users for whom we have embeddings intersect, we sought to understand how much information relevant to stance classification is contained in the embeddings. As above, we trained an SVM to predict gun stance, but alternately provided the tweet text, one of the embeddings, or both together as input. Higher prediction accuracy indicates that the input is more relevant, and more helpful, in predicting stance. Table 6.7 shows test accuracy for this task across different amounts of training data. Unsurprisingly, the tweet content is more informative at predicting stance than the user embedding. However, the embeddings did quite well, with the Author Text embedding coming close to the performance of tweet text in some cases. Providing both yielded no or only a marginal improvement over tweet text alone.
Summary
This chapter shows that pretraining on unsupervised user embeddings improves tweet-level neural stance classifiers. We find that author embedding pretraining yields improvements on four out of five domains of the SemEval 2016 Task 6A tweet stance classification task over a non-pretrained neural network, although benefits are less discernible on the gun control stance classification dataset of tweets.
We expand on Chapter 5 and show that pretraining to predict unsupervised user embeddings also improves classifier performance, in spite of not having gold user features. This remains true even when pretraining on a completely generic set of user embeddings, when the domains of the training and unlabeled sets do not match.

Chapter 7
Conclusion
This thesis explores representation learning techniques to learn social media user embeddings and evaluates embeddings at improving downstream task performance.
In the process we develop novel methods to learn user embeddings and integrate them into existing models. We conclude by summarizing the contributions of each chapter: whether user embeddings are being learned (and if so, how), how they are evaluated, the tasks we try to improve with them, and any novel models described there. In section 7.1 we retrospectively summarize the main contributions of this thesis. In section 7.2 we touch on ethical considerations around social media data and the choices we made in our research to respect the privacy of users. We conclude in section 7.3 with directions for future research.
Chapter 2 begins by reviewing work on applications of user features and then provides an overview of relevant computational methods: canonical correlation analysis (CCA)-based multiview representation learning methods, and the multitask learning setting. Section 2.1 is a review of work on inferring user demographic features and applications that benefit from user features and embeddings. Section 2.2 is an overview of correlation-based multiview representation learning methods, covering how these models are fit and the data assumptions that they make. This section presents the derivation of the vanilla two-view CCA solution, the derivation of many-view generalized CCA, and describes existing extensions to these methods. This section could stand alone as a primer on correlation-based multiview representation learning. Section 2.3 describes the multitask learning setting, the motivation behind this framework, and (at a high level) how neural models are learned in this setting. The work presented in this thesis has resulted in three major contributions, as evidenced by subsequently published research.
Expanding What Constitutes Model Supervision
In chapter 3 we apply several novel variants of multiview representation learning methods for learning social media user embeddings using auxiliary user information as additional views. In chapter 4 we present a new supervised topic model that can exploit high-dimensional, noisy supervision. In chapters 5 and 6 we use neural multitask learning to improve classifier generalization at social media prediction tasks by entraining model weights to also be predictive of features associated with the tweet author.
Although these extensions belong to entirely different model classes (correlation-based multiview learning methods, probabilistic topic models, and supervised neural networks), they were all motivated by the need to exploit the wealth of unstructured, auxiliary information around social media users to improve downstream task performance. This need is not specific to social media data but pervades many applied machine learning domains, encouraging others to arrive at similar solutions. Card, Tan, and Smith (2018) developed a supervised neural topic model that allows for metadata to appear as either a covariate or a predicted variable in the model structure.
There is also a line of work that integrates information from multiple user views to learn more robust user embeddings (Li et al., 2017; Tao and Yang, 2017; Kursuncu et al., 2018; Hazarika et al., 2018). The models we present are all trying to extract value out of features that are only distantly related to a task of interest.
By developing these models, we hope to widen the set of viable signals that machine learning practitioners will consider when training semi-supervised models.
For example, instead of considering using author demographics as auxiliary tasks when training a multitask model, one may just as well consider predicting prior tweeting history as an auxiliary task (a feature that is readily accessible although higher dimensional).
Learning Social Media User Embeddings
Although representing users as a vector of real numbers is not a new idea, learning user embeddings and evaluating them as first-class objects is a new contribution to social media research. In chapter 3 we evaluate multiview user embeddings at multiple tasks, similar to how word embeddings have been subjected to a battery of syntactic and semantic similarity tasks as well as prediction tasks. Subsequent research has also taken this tack and treats user embeddings as first-class objects worth learning and evaluating in their own right.
For example, Xing and Paul (2017) take inspiration from the friend prediction task to evaluate their own user embeddings. Li et al. (2017) take a multitask approach to learning user embeddings and evaluate the embeddings according to how well they predict which text and which other users a user is likely to agree with. Although they consider a supervised objective to learn user embeddings, Kursuncu et al. (2018) also collapse features from each view into a joint user embedding. Multiview user embeddings have even been shown to be predictive of sarcasm in author tweets (Hazarika et al., 2018).
State-of-the-art for Social Media Mental Health Monitoring
At the time of publishing, the neural MTL model presented in chapter 5 marked the state-of-the-art for mental health inference from social media text. Tran and Kavuluru (2017) reference this work as follows: "There is also a quickly growing body of literature detailing machine learned models to predict mental health status based on social media data. For a detailed analysis of the current state-of-the-art in this emerging domain, readers are encouraged to refer to the deep learning architecture by Benton et al. [6]."
Subsequent work has also taken semi-supervised approaches to monitor mental health from users' social media data (Yazdavar et al., 2017; Zou, Lampos, and Cox, 2018). In particular, our finding that classification tasks with little labeled training data tend to benefit more from MTL than tasks with more training data is frequently cited as justification for the use of MTL (Bingel and Søgaard, 2017) across many application domains: news headline popularity prediction (Hardt, Hovy, and Lamprinidis, 2018), information retrieval (Salehi et al., 2018), medical concept normalization and recognition (Crichton et al., 2017; Niu et al., 2018), and NLP (Bjerva, 2017; Schulz et al., 2018).
Ethical Considerations
The work described in this thesis can lead to more robust social media systems, benefiting the users whose data these models were tuned to. At the same time, it is important to recognize that although these analyses were purely observational, the users whose data comprise our training sets may not be comfortable with being studied. The survival of social media research depends on the trust of the users whose data are studied, and betraying this trust can have far-reaching effects.
There is a wide set of ethical concerns associated with doing social media research, especially when dealing with users' physical and mental health. These concerns are extensively covered by McKee (2013), Conway (2014), and Ayers et al. (2018). In Benton, Coppersmith, and Dredze (2017) we condense these into a set of maxims and describe ethical concerns that new social media health researchers should be aware of.
In this thesis we made several explicit choices to respect the privacy of users in our studies. In Chapter 3 we chose to present only obfuscated anecdotes of users.
The user clusters in Appendix A do not include user IDs of cluster members, only a bag of words associated with exemplar users in that cluster. We also made sure not to release tweets made by these users as this would explicitly break the Twitter REST API's terms of service (although we did release the pre-trained user embeddings).
The work in Chapter 5 is the most ethically treacherous. In other chapters, user data is used to improve models that are only tangentially related to these same users, or is used in innocuous tasks (e.g. improving topic model fit, predicting hashtag use, and predicting friending behavior). Although the users in the mental health prediction work made their accounts publicly available, they probably did not expect to be enrolled in an observational study. The stigma surrounding mental illness prevented us from sharing their data or identities with outside researchers. We also explicitly chose not to release the models we trained over their tweets, nor do we release an analysis of which features were most influential in predicting mental health conditions, for fear that these character/word choice features would be used to stigmatize others.
User Embedding Evaluation Suite
Creating a benchmark evaluation suite for user embeddings, similar to the word similarity benchmarks used for word embeddings (Rubenstein and Goodenough, 1965; Miller and Charles, 1991; Finkelstein et al., 2001; Hill, Reichart, and Korhonen, 2015), would standardize comparisons between user embedding methods.

Neural architectures have also been proposed for maximizing correlation between more than two views, alongside scalable linear CCA formulations (Ma, Lu, and Foster, 2015), but these architectures make additional assumptions about which correlations are maximized between views. For example, the Bridge Correlational Networks proposed in Rajendran et al. (2015) assume that one of the views is designated a "pivot" view whose representation all other views are mapped close to. On the other hand, generalized CCA formulations such as SUMCORR-GCCA and MAXVAR-GCCA maximize a loss function dependent on correlation between all pairs of views. Neural proposals also do not come with theoretical guarantees on solution quality, and therefore may have difficulty finding good user embeddings in practice.
Accounting for Noisy Views: Certain example views will have higher variance than others. Imagine a user who has posted only one message in their online life but has friended over 1,000 other users. The ego text view for that user will be a noisier estimate of their true ego text distribution (the distribution they would exhibit given infinitely many tweets) than their friend network view, assuming they took the time to vet each potential friend. Although the example-view binary mask in MV-LSA, K, explicitly ignores example views that are missing data, it cannot tell how much to trust each view. Rastogi, Van Durme, and Arora (2015) suggest a heuristic weighting of example views in their section on "Handling Missing Data".
This could be used to downweight views whose estimates are noisier. There has also been work on Bayesian online classifiers of user demographics that gain confidence as more user information arrives, but it is unclear how to apply these models in the multiview representation learning setting (Volkova and Van Durme, 2015).

Scaling to Variable Feature Dimensionality: Another general problem in learning and updating multiview social media user embeddings is the introduction of new social media members and new vocabulary as time goes on. This will result in new features being introduced into the per-view feature vectors. We are not familiar with work on extending the GCCA problem to cases where input feature dimensionality may grow over time. One possible direction is to incorporate new view features by refreshing the embeddings as a batch periodically. This is not a very satisfying solution and is expensive to apply at scale. A second direction would be to look to the nonlinear mappings in dGCCA to map new feature elements to the same feature space. Imagine for instance that one of our views is a representation of our local friend network, but we take the mean of friend user description embeddings as our friend network feature vector. Adding a new friend would then only require embedding their user description and integrating it into our view average, as sketched below. This is very much an open problem and these solutions are not particularly satisfying.
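A minimal sketch of that incremental update, assuming the friend-network view is simply the running mean of friend user-description embeddings:

import numpy as np

class FriendView:
    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.n = 0

    def add_friend(self, description_emb):
        # Running-mean update: no refit of the full model is needed.
        self.n += 1
        self.mean += (description_emb - self.mean) / self.n

view = FriendView(dim=50)
view.add_friend(np.random.randn(50))
view.add_friend(np.random.randn(50))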
Interpretability of User Embeddings
There has been work on interpreting the dimensions of textual embeddings, for instance by relating an embedding to prototypical examples. One strength of distributed user embeddings, their ability to encode many user features in a single dense vector, is also their weakness: user embeddings are opaque. This prototype technique could be applied to user embeddings by calculating the distance between an input user and prototypical users (e.g. a man, a woman, an athlete, or a wine snob). Although this thesis focuses on learning user embeddings to improve downstream tasks, interpreting these embeddings will be important for analyzing what they capture and for convincing engineers that they are worth including in production systems.
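The prototype idea can be sketched as follows: describe an opaque user embedding by its cosine similarity to embeddings of hand-picked prototypical users, whose labels here are illustrative placeholders.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def describe(user_emb, prototypes):
    # prototypes: dict mapping a label (e.g. "athlete") to an embedding
    scores = {label: cosine(user_emb, emb) for label, emb in prototypes.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])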
Appendix A

User Embedding Clusters
Here we present the labels assigned by Mechanical Turk subjects to user clusters from three different user embeddings. Each cluster was represented by four user exemplars (with one false exemplar/intruder), corresponding to a single Gaussian in a Gaussian mixture model (subsection 3.5.4). Many of the cluster labels are vague, hinting at the difficulty of this task. For each cluster, we present the assigned labels along with tokens from the four exemplar user tweets.
One important thing to be aware of is that different Gaussians capture more users than others. This is likely because we assume all

Tokens: follower christ texan paige baby | finnish girl loves beauty stuff good music ice hockey ..... never said perfect nobody walkin earth ....
Cluster 1673
Labels: They appear to be nothing more than bots (+); NONE (-)
Tokens: without struggle progress handsome clever boy always liked boy true humorist real hustler ontop game gang smart gorgeous gurl gurl ain't gat tym type bio buh willing knw knw wah ryt #team gemini #peace
Cluster 650
Labels: Foreign accounts talking about life in their native language (-); different language (-); They all have English as common but the user 5 in some other language. (-)
Tokens: instagram hear birds summer breeze inna maal yusra optimis sesuatu yang tapi tetap
Cluster 702
Labels: User 4 is a man, the rest are women. Also, user 4 is the only one who mentions that he is a parent. They have in common that they can understand foreign language and there is no English in their comments under the profile. They might not be native Americans. (+); Foreign accounts tweeting in their native languages (+)
Tokens: fotgrafo reportero del centro argentino estudiante perro bad life bad day desde 2011 hasta ely sueo importa quien quiera antes querido gira tanto nyc life home sweet home glad back ian mitchell phd hcpc swansea city afc wales national team performance psychologist hakuna matata i'm singer/songwriter unsigned artist also follow instagram sexo son taylor lautner @avrillavigne @coldplay @eminem
Cluster 7191
Labels: They are all women (-); none
Tokens: auntie future superstar forever want maccies vivir con fuerza locura libertad msica alma instagram follow instagram put mind achieve anything
Cluster 150
Labels: The one that doesn't belong is a family with kids and the others are young and probably single. (+); the others are adults, these are children (-); Young adults expressing their opinions and interestes (+)
Tokens: motivation christ follower husband father friend bonito encontrar amor vida todos los das misma persona quiero sonrer siempre lado kidrauhl
Cluster 1103
Labels: The other profiles did not talk about band or sports (-); @DenunaArifandi is the only user tweeting in non-english language. He is an Indonesian. (-)
Tokens: free thinker wine drinker part time investor occasional putt sinker views retweet necessarily endorsement follower christ husband beautiful wife daddy amazing kids student discipleship pastor coffee lover stl cards indy colts fan love guns roses wwe big cubs fan huge fantasy football baseball player loves laugh trys things menos como forma vida lado los realidad donde sea pero con verdadero hermanas cosplayer full-time art history student otl daily astrology vivian owen author lucky stars astrology bringing ancient star wisdom modern-day life featured writer
Cluster 1348
Labels: Communicating information relevant to their person or business instead of bots (-); The others re all individuals (-); The other four seem to be more conservative (-); the other four are regular people, this appears to be a musician (-); Foreign accounts tweeting in their native languages (+)
Tokens: wife mom esthetician chocolate lover book reader embarrassing dancer slightly ocd someday traveler life lover always moments notice away crazy mature honeybee art way run away without leaving home ask.fm:
Cluster 41
Labels: The others are more into american culture (-); Young men tweeting about personal lives and interests (-); The other 4 use one language while this account uses at least two (-) | 2018-12-02T17:57:04.000Z | 2018-10-25T00:00:00.000 | {
"year": 2018,
"sha1": "311b4de42b05c22531e2afba18b0eb76a4028764",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6c7268d79331c5afda8c92ac004686dd16741cff",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
29359424 | pes2o/s2orc | v3-fos-license | Management ownership and market valuation
We investigate the relation between management ownership and corporate performance, as measured by Tobin's Q. In a cross-section of Fortune 500 firms, Tobin's Q first increases and then declines as board of directors holdings rise. For older firms there is weak evidence that Q is lower when a firm is run by a member of the founding family than when it is run by an officer unrelated to the founder.
Introduction
Many large American corporations are not run by the people who own them.
As stressed by Berle and Means (1932), when managers hold little equity in the firm and shareholders are too dispersed to enforce value-maximization, corporate assets may be deployed to benefit managers rather than shareholders. According to Jensen and Meckling (1976), these costs of deviation from value-maximization decline as management ownership rises. As their stakes rise, managers pay for a greater share of the costs of their on-the-job consumption and are less likely to squander corporate wealth. According to this "convergence of interests" hypothesis, corporate performance improves with increases in management ownership.
More recently, Demsetz (1983) and Fama and Jensen (1983) have pointed out offsetting costs of significant ownership by management. These writers recognized that, when a manager owns only a small stake, market discipline (e.g. the managerial labor market (Fama, 1980), the product market (Hart, 1983), and the market for corporate control (Jensen and Ruback, 1983)) may still force him toward value maximization. In contrast, a manager who controls a substantial fraction of his firm's equity may have enough voting power to guarantee his future employment with the firm at an attractive salary. He may then indulge his tastes for on-the-job consumption, although perhaps to a more limited extent than if he had effective control of the firm but did not have any claim to its cash flows. This "entrenchment" hypothesis predicts that performance declines as management's stake increases beyond the point where control challenges are still effective.
As the above discussion suggests, theoretical arguments alone cannot unambiguously predict the relationship between management ownership and corporate performance. While the "convergence of interests" hypothesis predicts a uniformly positive relation, the "entrenchment" hypothesis predicts a decline in performance for sufficiently high management stakes. In this paper, we study the relation between managerial ownership and performance empirically.
In section 2, we look at the relation between two measures of the firm's performance (Tobin's Q and profit rate) and the shareholdings of its board of directors. A related study was conducted by Demsetz and Lehn (1985), who estimated a linear relationship between profit rate and ownership by large shareholders (as opposed to just management), and found no correlation. We estimate a nonlinear relationship between management ownership and performance to capture the possible presence of both the "convergence of interests" and "entrenchment" effects. We also attempt to evaluate a number of reasons why the observed relationship might be spurious.
Section 3 takes a more disaggregated look at the relation between management ownership and performance. First, we segregate ownership by top corporate officers from that of other board members and evaluate the impact of ownership by these two distinct groups on performance. In part, this is done to address a frequently made claim that outside board members are puppets of top officers. Second, we evaluate the impact on corporate performance of having a founding family on the board of directors. We do this because we are interested in the possibility that a management team can become entrenched for reasons other than the number of voting shares it controls. Section 4 summarizes our findings.
2. The Relationship Between Board Ownership and Performance
In this section, we evaluate the relationship between board ownership and performance in a sample of large industrial firms. For this purpose, we use a December 1980 listing of the names and stakes of large shareholders of 456 of the Fortune 500 firms supplied by Corporate Data Exchange (CDE). The CDE identified shareholders who were members of the board of directors, with the exception of those whose stakes were below .2%. While this means that, in large firms, positions worth millions of dollars are not reported, the CDE numbers are still very useful for examining issues of corporate control, since board members holding less than .2% are never among our firms' largest shareholders.
To measure performance, we rely mainly on average Tobin's Q, equal to the ratio of the firm's market value to the replacement cost of its physical assets. Tobin's Q is high when the firm has valuable intangible assets in addition to physical capital, such as monopoly power (Lindenberg and Ross, 1981), goodwill, a stock of patents, or good managers. While Q is undoubtedly a very noisy signal of managerial performance, we believe that it is well suited to our purpose. Because we are interested in the predictable effects of a firm's ownership structure on its value, it seems natural to look at the cross-sectional relation between ownership and value. One alternative might be to study events that represent large unexpected changes in ownership structure for which there is no accompanying news to contaminate the experiment. But large changes in ownership structure are fairly rare, except for those accompanying control challenges, where there is clearly much more going on. For this reason, we feel justified in concentrating on a cross-sectional analysis of measures such as Q and the profit rate.
The measure of Q we employ was obtained from the Griliches R&D master file (Cummins, Hall and Laderman, 1982) for 1980. The numerator of Q is the firm's market value, defined as the sum of the actual market value of common stock and estimated market values of preferred stock and debt. The denominator of Q is the replacement cost of the firm's plant and inventories, A, also taken from the R&D master file. Values of Q are not available for 85 firms, primarily because of difficulty in obtaining values of long-term debt and, in some cases, the replacement cost, A. While we cannot be sure that such sample selection does not bias our results, the omitted firms do not appear to be very different from the included ones in any observable respect. Our final sample consists of 371 firms.
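The construction of Q from these ingredients can be illustrated as below; the preferred stock term follows footnote 4 (dividend capitalized at the Moody's median-risk preferred rate), and all input values are hypothetical.

def tobins_q(common_mv, preferred_div, preferred_rate, debt_mv, replacement_cost):
    preferred_mv = preferred_div / preferred_rate  # capitalized dividend
    market_value = common_mv + preferred_mv + debt_mv
    return market_value / replacement_cost

print(tobins_q(900.0, 5.0, 0.10, 300.0, 1500.0))  # -> 0.8333...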
In this sample, the mean combined stake of all board members is 10.6%. The median stake, however, is only 3.4%, suggesting that the distribution is skewed. Indeed, in 103 firms (28% of the sample), total board holdings added to no more than 1% of outstanding equity, and in 46 of our firms (12% of the sample), no board member owned more than 0.2% of the firm. Nonetheless, in 31% of our sample the board owned more than 10% of the firm; and in 20% of the sample the board owned more than 20% of the firm. These numbers accord with the findings of Lewellen (1971) and Demsetz and Lehn (1985), who also document the prevalence of significant managerial ownership in the United States. These results also corroborate the hypothesis of Fama and Jensen (1983) that firms in which management owns over 50% of the equity (and thus has complete control) should have a hard time surviving as organizations. In fact, there are only 14 such firms in our sample. Table 1 presents means of Q for different levels of the board's percentage ownership (the mean Q in the sample is .85, with a standard deviation of .67). It suggests that, at low levels of ownership, higher stakes are associated with higher Q's.
It also records a decline of Q for substantial ownership positions, although outliers strongly affect average Q in some cells.In particular, the 35%-40% ownership cell includes Hewlett-Packard with Q=3.21 and Searle with Q=1.72, which together account for the mean Q in that cell being 1.06.
Similarly, Dow-Jones alone, with Q=2.58, accounts for the mean Q of 1.46 in the 60-65% cell.While Table 1 suggests that the relationship between ownership and Q might be nonlinear, it also highlights the need for controlling for some sources of heterogeneity across firms, particularly industry.
In our econometric work, it would be impractical to use as many cells of ownership levels as appear in Table 1, primarily because of the scarcity of observations in some cells.Instead, we consider only four categories of ownership levels, and estimate regressions using dummies for these categories.
Specifically, we define:

BOARDOO = 1 if holdings of no board member exceed .2%; 0 otherwise
BOARDO5 = 1 if total reported board holdings are between 0% and 5%; 0 otherwise
BOARD2O = 1 if total reported board holdings are between 5% and 20%; 0 otherwise
BOARD99 = 1 if total reported board holdings exceed 20%; 0 otherwise

Partitioning ownership levels at 0%, 5%, and 20% can be justified as follows. Firms with close to no board ownership are probably a special group in which the convergence of interests effect might be the weakest, except for possible ownership-mimicking incentive contracts. The choice of 5% as the dividing line between low and moderate ownership is arbitrary, motivated primarily by the benefits of having a large number of observations in both the BOARDO5 and BOARD2O categories. The choice of 20% as the cutoff for high ownership stems from our prior belief that bona fide entrenchment should become important in the 20-30% range (Weston, 1977), balanced against the need to have enough observations in that cell. Later in the paper, we consider alternative specifications.
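A sketch of building these dummies and estimating an equation (1)-style regression with White (1980) standard errors is given below; df is a hypothetical DataFrame with columns 'Q', 'board_stake', 'adv_a', 'd_a' and 'A', and the SIC industry dummies are omitted for brevity.

import statsmodels.api as sm

df["BOARDO5"] = ((df.board_stake > 0) & (df.board_stake <= 0.05)).astype(int)
df["BOARD2O"] = ((df.board_stake > 0.05) & (df.board_stake <= 0.20)).astype(int)
df["BOARD99"] = (df.board_stake > 0.20).astype(int)
# BOARDOO (no member above .2%) is the omitted reference category; treating a
# zero reported stake as BOARDOO is a simplification of the text's definition.

X = sm.add_constant(df[["BOARDO5", "BOARD2O", "BOARD99", "adv_a", "d_a", "A"]])
model = sm.OLS(df["Q"], X).fit(cov_type="HC0")  # White standard errors
print(model.summary())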
The first column of Table 2 presents the regression of Tobin's Q on the board dummies (BOARDOO is omitted). This regression is essentially equivalent to a comparison of means of Q across ownership cells. To deal with the possibility of spurious correlation, we estimate a model that explicitly incorporates variables that might be correlated with both ownership and Q.
The first type of controls we use are observable measures of intangible assets that affect Q. These are (divided by A, to make them compatible with Q):

• ADV/A - 1980 advertising expenditures (COMPUSTAT).

In addition to observed assets, we consider several variables that might be correlated with unobserved intangible assets, as well as with board ownership:

• D/A - the ratio of the calculated market value of a firm's long-term debt to A. This variable may in part capture the value of corporate tax shields. Alternatively, according to the pecking-order theory, debt is negatively correlated with the profitability of the firm, and hence with Q. Managers of the more leveraged firms might hold a higher fraction of equity, on average, for the same Q.

• A - the replacement cost of assets. A measures size, and unobserved intangible assets of a firm might be correlated with size. Also, it is hard to own a large part of a bigger firm, raising the possibility that a large board stake proxies for small firm size.

• SIC3 - three-digit SIC code dummies, used to control for possible spurious correlation between ownership and Q operating through industry effects (Demsetz and Lehn, 1985).
The final equation, hereafter equation (1), takes the form:

Q = a + b1·BOARDO5 + b2·BOARD2O + b3·BOARD99 + c1·ADV/A + c2·D/A + c3·A + SIC3 dummies + e

The estimated coefficients and their heteroskedasticity-consistent standard errors are shown in the second column of Table 2, while Table 3 presents t-statistics for the pairwise null hypotheses that the coefficients on the board ownership dummies are equal.
Tables 2 and 3 suggest that, all other things equal, firms in which management owns between 5% and 20% have the highest Q's, which exceed the Q's of firms with negligible board ownership by .206 (t=3.06), the Q's of firms with negligible to 5% ownership by .085 (t=.91), and the Q's of firms with dominant ownership by .13 (t=1.61). The second best performing are firms with negligible to 5% ownership, whose Q's exceed those of firms with negligible ownership by .201 (t=2.77) and those of firms with dominant ownership by .045 (t=.62). One interpretation of these findings is that the convergence of interests hypothesis is the key to understanding the data at lower ownership levels, while the entrenchment hypothesis is operative for large board ownership.
Some potential difficulties with these regressions concern 1) the arbitrariness of the specification, 2) the stability of results over time, 3) the effect of wealth constraints on managerial ownership, and 4) the omission of a measure of growth opportunities from the right-hand side of equation (1). We presently address these issues.
To some extent, our choice of where to partition ownership cells is arbitrary. To judge the robustness of our results, we estimated equation (1) using different cutoff levels. In particular, in addition to separating low from moderate ownership at 5%, we did so at 2.5% and at 7.5%; and in addition to separating moderate from high ownership at 20%, we did so at 15% and 25%. The results of these regressions, with and without controlling for other variables, support the following conclusions. If the range of low ownership is defined as either 0-2.5% or 0-5%, then Q's in the low ownership cell are significantly lower than Q's in the moderate ownership cell (i.e., 2.5-20% or 5-20%). However, there is no support for an increase in Q as ownership rises from 7.5% to 20%. Further, there is evidence of a significant decline in Q as board ownership increases from somewhere between 15-20% to about 25%. The decline seems essentially complete when board ownership reaches 25%.
Because we only have ownership data for 1980, the stability of our results over time is in question. As a crude test of stability, we obtained 1979 and 1981 Q's for the firms in our 1980 sample, and ran the regression in the second column of Table 2 with Q for 1979 and with Q for 1981 as the dependent variables, but with 1980 values of all the independent variables. Because ownership is relatively stable over time, these regressions should be at least suggestive of the stability of our results over time. The results in fact are quite similar to the findings in Table 2.

The next issue is the effect of wealth constraints on managerial ownership. If a management team is wealth constrained, it can only afford to own a large proportion of the equity if average Q, and hence the market value of the firm, is low. That is, the managers might only be able to afford to own a large stake in a poorly performing firm. This argument predicts that there will be a spurious negative correlation between the proportion of equity owned by the board and Q. It therefore only strengthens our finding of the positive correlation of Q and ownership at lower ownership levels. On the other hand, this spurious negative correlation might account for our finding that Q falls as board ownership becomes very large.

To subject this issue to some empirical scrutiny, consider the relation between board ownership and the replacement cost of the firm, A. Holding leverage constant, market value can be lower either because Q is low or because the firm has fewer assets, i.e., A is low. If lower market value facilitates larger board ownership, we should see a negative correlation between replacement cost and the fraction of equity owned by the board. Table 4 presents the values of A at various levels of board ownership. The relationship is not monotonic, especially in the range of high board ownership. For firms for which board ownership is at least 5%, the correlation between board ownership and A is only -.02. This correlation points against the view that size is a strong deterrent to management ownership. Although we cannot rule out the possibility that our finding of low Q's for firms with very high board ownership is spurious, evidence on the replacement cost of capital points against this possibility.

Our omission of measures of firm growth rates from Q equations also raises some important issues. A high Q may in part reflect the value of future growth opportunities of the firm. If managers own large stakes in younger, faster-growing firms which tend to have high Q's, then the positive association between board ownership and Q that we observe might be spurious. On the other hand, given that fast growth is itself an important component of performance that depends on the actions of the management, we are probably understating the effect of management ownership on performance if we focus only on the effect of management ownership on Q holding growth constant. That is, much of the variation in Q across different board ownership structures may be due to the differing values of growth prospects that are achieved by managements with different incentives to maximize value. With this reservation in mind, we include the growth rate of the firm's labor force, GL, into the regression. The pattern of increases and subsequent declines in Q's as ownership rises nonetheless remains, and the estimated coefficients on BOARDO5 and BOARD2O are still significant at the 95% level.
Finally, we look at the profit rate as an alternative measure of performance. The profit rate is defined as the ratio of the firm's net cash flows (less the inflation-adjusted value of depreciation) to the previously defined replacement cost of its capital stock, A. The board ownership regressions which parallel those for Tobin's Q are presented in the right panel of Table 2.
Although the qualitative pattern of the estimated coefficients on the ownership dummies is the same as in the Q regressions, the statistical significance of the estimates is much lower. Only the estimated coefficient on BOARD2O is significant at the 95% level. The point estimate for BOARD2O implies that, all other things equal, firms with 5-20% board ownership have profit rates .017 higher than those of firms with negligible board ownership, and .012 higher than those of firms with dominant board ownership. To gauge the magnitudes of these effects, note that the mean profit rate of the sample is .055 with a standard deviation of .035.
The above results appear at odds with the finding of Demsetz and Lehn (1985) of no association between large shareholder ownership and performance. The important differences between our procedures seem to be twofold. First, we focus only on the equity stakes of the board of directors, while Demsetz and Lehn measure concentration of ownership weighting ownership by members of the board and by other large shareholders equally. To the extent that large shareholders without board seats represent competing managerial teams, they may be attracted to firms with poorly performing incumbent management. This selection effect would tend to reduce the observed correlation between ownership concentration and performance.
Second, Demsetz and Lehn estimate a linear relationship between ownership concentration and performance. When we estimate a linear relationship between their measure of performance (the profit rate) and our board stake variable, we get Π = .055 - .005·BOARD, which is consistent with their result. Even controlling for SIC codes and other factors in this regression does not yield a significant estimated coefficient on the board stake variable. We are led to conclude that Demsetz and Lehn's failure to find a relationship between ownership concentration and profitability may have been due to their use of a linear specification that does not capture what appears to be an important nonlinearity.
3. The Composition of the Board
So far we have assumed that the impact of the board's ownership stake on performance is independent of who owns that stake. This might not always be appropriate, for at least two reasons. First, ownership by officers and by outside directors might have different effects. Second, at any given level of ownership, leadership by the firm's founders or by their descendants might have different effects on performance than leadership by officers who are not related to the founders. In this section, we examine these two hypotheses.

The distinction between officers and outside board members might be important for several reasons. While it is the fiduciary duty of all directors to represent the interests of shareholders, outside directors in particular must oversee the performance of the firm's officers. But monitoring the performance of top officers requires time and effort.

In addition, an outside director serving on a board dominated by officers with more expertise and influence over votes risks losing his position if he objects to these officers' choices. Without a personal financial interest in the firm or control over a large block of votes, an outside director may be reluctant to second-guess poor corporate decisions. Presumably, the extent of the outside directors' role in disciplining officers is positively related to the equity stakes of the former.

For officers, the ownership stake is only a partial indicator of their interest in the financial success of the firm. Officers also get significant salaries, bonuses and incentive plans (Murphy, 1985) and may be subject to the discipline of the managerial labor market (Fama, 1980). In addition, top officers sometimes exercise virtually complete control over their firms with only small stakes, since their familiarity with the business and tenure with the firm enables them to dominate the board regardless of their personal equity ownership. These considerations suggest that the equity holdings of officers and outside board members might have different effects on performance.
Our analysis here parallels that of the previous section. By examining the 1980 annual reports of our 371 firms, we identified the two senior corporate officers of each firm. Returning to the CDE's listing of stock holdings, we constructed a new variable (OFFICER) giving the holdings of these top officers, who were usually the chairman and the president. The holdings of the remainder of the board of directors are denoted OUTBOARD. That variable therefore includes the holdings of junior officers, such as vice-presidents. Since junior officers generally own very little stock, this classification is unlikely to make much difference.
The two top officers owned 6.3% of their firms on average. In 117 firms (32% of our sample), however, their stake was negligible; and their median stake was approximately one half of one percent. In 60 firms (16% of our sample) their holdings were in excess of 10%, and in 43 firms (12% of our sample) their stake exceeded 20%.

The mean value of the OUTBOARD variable was 4.4%, with only 97 firms (26% of the total) having negligible outside board ownership. The median for OUTBOARD was just under one percent, and was thus greater than that of the OFFICER variable. In 50 firms (13% of the sample) the outside board's holdings exceeded 10%, and in 24 firms (6% of the sample) its stake surpassed 20%.
Column 1 of Table 5 contains the results of regressions of Q on ownership variables alone, as in Column 1 of Table 2, but with a separate set of dummy variables for the top two officers' stake and for the stake of the rest of the board. In Column 2 of Table 5, we report the results of controlling for industry effects and other determinants of Q. Although the pattern of point estimates in columns 1 and 2 is consistent with firm value being maximized at moderately high levels of ownership, the estimates are not reliable enough to draw any solid conclusions. Still, it is worth noting that the results for the outer board more closely resemble the results for the board as a whole than do the top officer results. The pattern of point estimates for top officer holdings may reflect the absence of significant unexploited gains from raising their holdings. This is not the case for outside board holdings. This difference is consistent with the importance of non-ownership-based compensation for top officers, but not for outside directors. It is also consistent with the argument that officers and free-riding shareholders will make it difficult for outside board members to profitably increase their stakes (Shleifer and Vishny, 1986).
In the previous discussion, we have explored share ownership as a means to managerial entrenchment. But managers can become entrenched even without control over a large block of votes, especially in firms where the founder is a top officer. Since founders presumably have a special claim to control of their firms, they might be instrumental in selecting the board or otherwise become entrenched even with small stakes. At the same time, the entrepreneurial ability of the founder can be a valuable asset, at least early in the life of the firm.
To discriminate between firms in which the founding family might supply entrepreneurial talent, and firms in which such families might only reduce corporate wealth, we estimate different founder effects for old and young firms. In particular, we reestimate the Tobin's Q regressions including a dummy variable that is equal to 1 if a member of the founding family is one of the top two officers, and another dummy variable (FOUNDER5O) that is equal to 1 if the founding family is top management and the firm was first incorporated in 1950 or later.
This specification aims to capture the impact of the founding family on the firm's performance independent of its stake. The results for the combined board holdings regression provide some confirmation of the expected founder effects: for pre-1950 firms, the presence of the founding family at the top of the management team is associated with a Tobin's Q that is .125 lower on average. However, the t-statistic for this difference is only -1.58, so the result must be interpreted with caution. The estimated coefficient on FOUNDER5O indicates that the effect of the founding family on Q is .351 greater in newer firms than it is in older firms. This difference is reliably different from zero, with a t-statistic of 2.10. On the other hand, one cannot confidently conclude that the net effect of the founding family in newer firms (the sum of the two dummy coefficients), estimated to be .226, is different from zero (t=1.24).
Unfortunately, results for the regression in which we segregate the holdings of top officers and other board members are plagued by multicollinearity. For example, of the 40 firms with top officer stakes between 5% and 20%, 31 have a founding family member as a top officer.

4. Conclusions

In this paper, we examined two well-known hypotheses concerning the impact of managerial ownership on a firm's performance. The "convergence of interests" hypothesis suggests that agency costs should fall, and performance should improve, as the management's stake rises. We found support for this hypothesis in the 0-10% range of ownership by the board of directors, although our results seem to be driven more strongly by holdings of the outside board members than by holdings of top officers. The "entrenchment" hypothesis predicts a decline of performance when managers are protected against the discipline of the market and are thus free to pursue their own objectives instead of value-maximization. We find evidence for this hypothesis based on lower levels of performance for firms with very large management holdings and on the finding that founding families have a negative impact on performance of older firms.
We cannot, however, rule out the possibility that our results can be explained by factors other than the "convergence of interests" and "entrenchment" hypotheses. Alternative explanations may have to do with the joint behavior of performance and management holdings over the corporate life-cycle, a spurious correlation between the fraction of equity owned by management and market value induced by wealth constraints, or signalling hypotheses such as that of Leland and Pyle (1977). In addition, a theory predicting a nonlinear relationship between management ownership and performance of the type that we have found has been proposed by Stulz (1986). In his theory, management's preference for control and consequent refusal to tender their shares forces acquirers to pay higher premia to gain control when the management's stake is higher, and may lead to an increase in the target firm's ex ante value. When the management stake is so large that no takeover can be profitable, however, the ex ante firm value includes no takeover premium, and is therefore low. While Stulz's story differs from Jensen and Meckling's at the lower end of management ownership, it is closely related to the "entrenchment" hypothesis at the higher end.
Because of the nature of our data, this paper has not dealt with several important issues that might be fruitfully pursued in future research. First, we have focused on very large (and therefore usually older) corporations. In newer, faster growing firms, managerial holdings may play a more important signalling role than they are likely to play for our firms. Moreover, as our results have suggested, founders of younger firms might have an important leadership role to play. Research on ownership structure can doubtless benefit from considering smaller firms as well. Second, a better analysis of the impact of officers' stakes on performance would incorporate other compensation data. Important work in this area is Murphy (1985). Finally, on both a theoretical and empirical level, it is very important to learn how members of boards of directors with different individual ownership positions interact, and how the distribution of ownership among board members affects performance. Our work essentially assumed a good deal of unanimity on the board; a more complex story is surely appropriate.
1. On-the-job consumption is a generic term that can refer to shirking and taking managerial perquisites, but also encompasses pursuit of non-value-maximizing objectives such as sales maximization (empire building), clean environment, or the maximization of employee welfare.
2. Numerous studies have shown that control is valued. For example, DeAngelo and DeAngelo (1985) find that, among 45 large corporations with dual classes of common stock entitled to identical cash flows but carrying different voting rights, top managers own a median of 56.9% of the votes but only 24% of the common stock cash flows. Loderer and Zimmerman (1985), using Swiss data, find that non-voting issues are priced lower than voting issues.
3. In line with this point, Walkling and Long (1984) find that the larger the officers' financial gain from a takeover, the less likely they are to resist a bid. At the same time, managerial ownership lessens the firm's vulnerability to a hostile takeover: Weston (1977) reported that no firm where insiders owned over 30% had ever been acquired through a hostile takeover.
5. We have calculated some descriptive statistics on the sample of 85 firms for which we have ownership data but do not have market-value-based measures of Q (omitted firms). The mean board stake for these firms is 12.0% (it is 10.6% for the sample of 371 firms we study). Among omitted firms, 25% are run by founding families; among included firms, this number is 24%. From the viewpoint of ownership, therefore, omitted firms do not appear exceptional. As a further check that omission from the sample is not systematic, we calculated the ratio of the replacement cost of the omitted firm to the mean replacement cost in its (3-digit SIC) industry. The average of this ratio among omitted firms is .95.

11. The growth rate in the firm's labor force is a geometric mean of the percent change in its labor force from one year to the next from 1970 to 1980. For 62 firms, this calculation could not be made. For 59 of those, we set GL equal to the mean rate of growth in the firm's 3-digit SIC industry. Three firms are omitted from the regression because GL could not be imputed in this way.
12. Lewellen (1971) nonetheless reports that top managers get four times as much of their income from ownership income as from other forms of compensation.
13. In a few cases, either only one of the positions of Chairman and President existed for that firm, or the same person occupied both positions. In those cases, the OFFICER variable is the stake of the one top officer.
14. Consistent with this hypothesis, Johnson et al. (1985) find that sudden deaths of chief executives are accompanied by price increases in their firms' stocks when those executives are founders, but not otherwise.
15. We identified the founders and their families using a history of annual reports dating back to either the incorporation of the firm or the turn of the century, whichever was more recent.
16. Year of incorporation is in most cases taken to be the year of the first incorporation of the firm, obtained from Moody's Industrial Manuals. In a few cases, Moody's noted a large discrepancy between the year the business was established and the year of first incorporation. The establishment year was used in those cases.

Notes to Tables 2 and 5: * = significant at 90% confidence level; ** = significant at 95% confidence level; *** = significant at 99% confidence level. Numbers in brackets are heteroskedasticity-consistent standard errors calculated according to White (1980).
4. The market value of common stock is taken from the Standard and Poor's Compustat tape. The market value of preferred stock is estimated by dividing the preferred stock dividend figure (reported in Compustat) by the Moody's preferred dividend rate for median-risk companies. The market value of the firm's debt is taken as the value of its short-term liabilities net of its short-term assets (from Compustat) plus an estimate of the market value of its long-term debt. Estimates of long-term debt for our firms were obtained from the N.B.E.R.'s R&D Master File (Cummins, Hall and Laderman, 1982). These estimates are constructed on the assumption that all long-term debt has an original maturity of twenty years, and using a matrix of bond prices in year t for bonds due in year s from the Moody's corporate BAA bond price series. The age structure of corporate debt is estimated from changes in the firm's book value of long-term debt in each of the twenty previous years on the Compustat tape. Using this age structure estimate and the bond price matrix, Cummins et al. (1982) calculate the value of each firm's long-term debt.
Table 1
Mean values of Tobin's Q for 371 Fortune 500 firms in 1980 grouped by level of equity ownership of the board of directors.
a. Negligible board stake means that no single member of the board of directors owned more than .2% of the firm's common stock.
Table 2
Ordinary least squares regressions of corporate performance measures (Tobin's Q and the profit rate) on measures of corporate assets and liabilities, including dummy variables indicating the level of equity ownership by the firm's board of directors.a
Table 3
t-statistics for the pairwise null hypotheses that the coefficients on the ownership dummy variables estimated in equation (1) are equal.
Table 4
The average values of various measures of firm size for a 1980 sample of 371 Fortune 500 firms. The firms are grouped based on the fractional equity ownership of the board of directors.
Table 5
Ordinary least squares regressions of corporate performance measures (Tobin's Q and the profit rate) on measures of corporate assets and liabilities, including dummy variables indicating the level of equity ownership by the firm's top two officers and by the remainder of its board of directors.a
* dummy set to one if 0% < stake ≤ 5%; ** dummy set to one if 5% < stake ≤ 20%; *** dummy set to one if stake > 20% [thresholds partly garbled in extraction; a stray figure of -.0886 follows in the source] | 2017-09-12T19:12:22.873Z | 1988-01-01T00:00:00.000 | {
"year": 1988,
"sha1": "e83913d914c48fa023246a8b264ffd348cc0571f",
"oa_license": "CCBY",
"oa_url": "https://dash.harvard.edu/bitstream/1/29407535/2/w2055%202.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "538c89c63ad5a7d00347996b5e26c299a7ac7716",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
39482884 | pes2o/s2orc | v3-fos-license | Effectiveness of health education and behavioral intervention for tobacco de‑addiction among degree students: A clinical trial
Background: The objectives of the study were to assess the prevalence of tobacco use among the degree students of Oxford institutions in Bangalore city, offer a tobacco cessation intervention to tobacco users among the degree students, and assess the effectiveness of the intervention by comparison with a control group. Materials and Methods: A randomized control trial was conducted to assess the prevalence of tobacco use and the effectiveness of a tobacco cessation behavioral intervention offered to degree students of Oxford institutions in Bangalore city. The consenting tobacco users were then randomly divided into 55 students in the study group (group A) and 60 students in the control group (group B). Results: After the intervention, 29.1% of students in group A had stopped using tobacco completely, compared to 15% in group B; the highest reduction (a 21.8% change) was noticed in the students using one to five tobacco products per day and the least reduction (a 1.8% change) was noticed in the students using one tobacco product per day. Conclusion: Findings from the present study suggest that the intervention has suggestive significance on tobacco usage.
INTRODUCTION
The World Health Organization (WHO) predicts that India will have the fastest rate of rise in deaths attributable to tobacco in the first two decades of the 21st century. Many of these deaths will occur in the productive years of adult life as a consequence of an addiction acquired in youth. [1] Although adolescence is a time of optimum health, adolescents are often inclined to adopt behaviors which could damage their health and affect their lifestyle in the future. One such behavior is nicotine dependence, which is the most prevalent, deadliest, costliest, yet most treatable type of substance dependency. [2] Adolescent tobacco use cessation promises to arrest the physical consequences of use in a rapidly growing and developing body, before the addiction becomes so ingrained that cessation becomes a much more difficult problem. [3] College students are an important target group for smoking cessation interventions. College students have a higher perception of smoking among their peers and are influenced by this perception. They have more freedom to make personal decisions now than during their schooling. Stress is cited as one of the main reasons for cigarette use among these students. Tobacco companies are more heavily targeting this population through print media, specialty item distribution, and sponsorship of public entertainment events. [4] The various treatment approaches include cognitive-behavioral strategies (self-monitoring and coping skills), motivational strategies (techniques to clarify the desire for change and reduce ambivalence toward change), and social influence strategies (addressing social influences that serve to promote or maintain smoking). The majority of systematic reviews and meta-analyses of school-based prevention programs have found that curricula using the social influences approach, specifically including normative education and practice of resistance skills, are consistently more effective than curricula adopting other approaches such as information-only or "affective" ones. [5] Very few intervention studies have been conducted on college prevention programs and there is little information on effectiveness. [4] Hence, an attempt has been made to assess the prevalence of tobacco use and the effectiveness of a tobacco cessation intervention offered to degree students of Oxford institutions in Bangalore city.
Randomized controlled trial
Ethical clearance was obtained from the ethical committee of the Oxford Dental College, Hospital and Research Centre, Bangalore, India. Consent forms were prepared in English and consent was obtained from the students of the Oxford institutions as well as from the heads of the concerned institutions. A randomized control trial was conducted to assess the prevalence of tobacco use and the effectiveness of a tobacco cessation intervention offered to degree students of Oxford institutions in Bangalore city. There are 32 educational institutions offering UG to PG courses, including Dentistry, Nursing, Pharmacy, Physiotherapy, Engineering, Computer Education, Management, Life Science, and Law. The total number of students in these institutions is about 5000. The source of data for this study was the degree students from the Oxford group of institutions, which has 12 degree colleges. A sample size of 155 was obtained while maintaining a statistical power of 90% with a 95% confidence level and a 5% margin of error (E). Initially, a self-prepared tobacco questionnaire was given to degree college students to assess their smoking behavior, by which the prevalence of tobacco use was obtained. There were 248 tobacco users in the study; out of these 248 students, only 115 gave informed consent for the study. The tobacco users were then randomly divided, by simple random sampling, into 55 students in the study group and the remaining 60 students in the control group. Block randomization of the colleges was done to prevent dissemination of the information. The study group students were selected from Dental, Hotel Management, Information Technology and Management, and Commerce. The control group students were selected from Pharmacy, Nursing, Physiotherapy, Fashion Designing, Engineering, Science and Law. The inclusion criteria for the study were being degree students of Oxford institutions (for assessing the prevalence of tobacco use) and, among them, being tobacco users who gave informed consent and were willing to participate in the study. The exclusion criterion was being a final-year degree student, as such students might not be available for the complete period of the study and were preparing for their exams. A specially prepared proforma, which included demographic data, smoking behavior, associated physical and psychological complications, and other tobacco products used, was completed for the tobacco users; the Fagerstrom test was also done using the Fagerstrom questionnaire, and a carbon monoxide (CO) grade was estimated using the smokerlyzer instrument. The Fagerstrom questionnaire used by the Arizona Smokers' Helpline was adopted. [6] The tool has been pared down to six simple questions. Scoring was recorded to assist in tailoring nicotine cessation advice to fit individual needs. The degree of nicotine dependency was assessed by Fagerstrom's test. Depending on the answer that each smoker gives to each question, a certain mark is given, and the total may vary from 0 to 10 points. A degree of slight dependency is considered when the test result ranges from 0 to 3 points; moderate dependency is from 4 to 6 points; a severe degree of dependency is 7 points or over. The Micro CO is a powerful diagnostic tool for measuring alveolar CO in ppm concentrations and percentage carboxyhemoglobin (COHb). The Smoke Check is designed as a simple screening test for cigarette consumption, giving an instant indication of CO levels in ppm, backed up with color light indicators.
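As an illustration of the scoring bands just described (0-3 slight, 4-6 moderate, 7 or more severe), a minimal helper might look as follows; the function name is ours, not part of the Fagerstrom instrument.

```python
def fagerstrom_category(score: int) -> str:
    """Map a Fagerstrom score (0-10) to the dependency band described above."""
    if not 0 <= score <= 10:
        raise ValueError("Fagerstrom score must be between 0 and 10")
    if score <= 3:
        return "slight dependency"
    if score <= 6:
        return "moderate dependency"
    return "severe dependency"
```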
The Smoke Check is the most cost-effective CO monitor available today. Conversion of ppm results to % COHb is easily done using the Smoke Check's smoking cessation guide chart. A self-help guide obtained from the National Institute of Mental Health and Neuro Sciences (NIMHANS), which contains the reasons to quit tobacco, readiness to quit, how to quit, dealing with withdrawal symptoms, and self-help tips for quitting tobacco, was given to the students in the third session. The intervener was trained at the Tobacco Cessation Center (TCC), NIMHANS, Bangalore, for a duration of 1 month; in these training sessions, the intervener was trained to give counseling regarding tobacco cessation to the subjects. The study was systematically scheduled to spread over a period of 6 months, from May 2010 to October 2010. The Chairman of the Oxford group of institutions was approached, the purpose of the study was explained, and his approval was obtained to proceed with the study. Permission from the principals of the respective colleges was also obtained to conduct the study. A pilot study was undertaken on 10% of the study population (degree students). For the main study, the study sample was divided into two groups: the study group (group A) and the control group (group B). Four sessions of intervention were administered to the students of group A, after they were grouped into four subgroups (A1, A2, A3, and A4) with 15 students in each group. In the control group (group B), no intervention was given to the students. The first session consisted of distributing the self-help material to the students. The topics for intervention included: introduction to tobacco, prevalence of tobacco use, effects of tobacco use on general health and dental health, psychosocial factors influencing tobacco use, healthy diet, and behavioral intervention for prevention of tobacco use. The second session was given within 15 days after the first; in this session, group A students were counseled in their individual subgroups (A1, A2, A3, and A4). The content for discussion included the assessment of high-risk situations, the enhancement of motivation, and the role of high-risk situations in tobacco use/quitting. The third session was given in the 4th month; in this session, group A students were again counseled in their individual subgroups. The content for discussion included reflection on the previous session's discussion; management of high-risk situations and educational material on tobacco use were given, and enhancement of self-efficacy by motivation and evaluation were done 1 week after the third session. The fourth session was given in the 5th month; in this session, group A students were counseled in their individual subgroups. The content for discussion included enhancing their self-efficacy for quitting tobacco, reinforcement for tobacco cessation, and feedback, and an evaluation was done. At the 6th month follow-up, the same proforma was used; the Fagerstrom test was also repeated using the Fagerstrom questionnaire, and a CO grade was estimated using the smokerlyzer instrument for both the study and control groups. Education and intervention were then given to the control group. Descriptive statistical analysis was carried out in the present study. Significance was assessed at the 5% level with a 95% confidence interval.
Chi-square/Fisher exact test was used to find the significance of the study parameters on a categorical scale between two or more groups.
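As a sketch of how the chi-square comparison named above could be run on the quit outcomes reported in the Results (29.1% of 55 students in group A vs 15% of 60 in group B, back-calculated to approximate counts), one might write:

```python
from scipy.stats import chi2_contingency

# Rows: group A, group B; columns: quit, did not quit (approximate counts).
table = [[16, 55 - 16],
         [9, 60 - 9]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```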
RESULTS
Prevalence and characteristics of tobacco users among the study population are presented in Table 1. A total of 2165 students were administered the questionnaire, of whom 248 (11.5%) were tobacco users and 1917 (88.5%) were non-users of tobacco. Of the 248 tobacco users, 68.5% were smokers, 19.4% used smokeless tobacco, and 12.1% used both forms of tobacco (smoking and smokeless). Out of the 248 students, only 115 (46.4%) were willing to participate and thus were included in the study. The distribution of study subjects with regard to abstaining from tobacco use showed that around 12.7% of students in group A and 21.7% in group B could abstain from tobacco use for 1 day, 21.8% in group A and 21.7% in group B could abstain for 1 week, 23.6% in group A and 20% in group B could abstain for 1 month, and 12.7% in group A and 16.6% in group B could abstain for more than 6 months. Also, 29.1% of students in group A and 20% in group B were not recorded in this respect because they had never tried to quit tobacco.
The distribution of students according to CO levels was statistically similar between the two groups (P = 0.280). The distribution of the study subjects according to tobacco usage before and after the intervention in group A is shown in Table 2. After the intervention, 29.1% of the students in group A had stopped using tobacco completely. The highest reduction in percentage change (21.8%) was noticed in the students using one to five tobacco products per day and the least reduction (1.8%) was noticed in the students using one tobacco product per day; the percentage of students using more than 10 tobacco products per day, however, increased by 3.6%. The response for the average use of tobacco per day was strongly significant after the intervention (P ≤ 0.001). A reduction in percentage change of 7.3% was found among those who said there was no association between smoking and alcohol after the intervention. In total, 29.1% of the students quit tobacco use after the intervention. The response regarding the association between smoking and alcohol was not significant after the intervention (P = 0.451). Before the intervention, 72.7% of students were willing to quit tobacco and 30.9% were not willing to quit the habit. After the intervention, 16 (29%) students quit the habit.
The response for willingness to quit tobacco use was strongly significant after the intervention (P ≤ 0.001). Before the intervention, about 55.2% of students in this category wanted to quit due to health problems, followed by 31% due to pressure from friends and family members who did not like tobacco usage and 13.8% for other reasons. The same factors were the reasons after the intervention too. The response for the reason for not being willing to quit tobacco use was not significant after the intervention (P = 0.109). The students' knowledge regarding the harmful effects of tobacco increased after the intervention. Though the majority (96.4% of students) regarded cancer as the most harmful effect of tobacco before the intervention, the intervention substantially increased knowledge regarding the other harmful effects of tobacco. The response for knowledge of the harmful effects of tobacco use was strongly significant after the intervention (P ≤ 0.001). Before the intervention, 22 (40%) students did not have any health problems, while the others faced cough (20% of students) and hair loss (16.4% of students) as the major health problems.
The major sources of knowledge regarding tobacco before the intervention were TV/newspapers (56.4% of students) and friends (30.9%). Only 10.9% of the students got the information from health professionals. After the intervention, the major sources of information were health professionals (94.5% of students), TV (65.5%), and friends (41.8%). The response for the source of knowledge on tobacco use was strongly significant after the intervention (P ≤ 0.001). The distribution of the study subjects according to the effect of the intervention on smoking status is given in Table 3. An increase (14%) in the percentage of students who stopped tobacco use was noticed among the students of group A after the intervention. The incidence of relapse was higher in group B than in group A (48.3% vs 27.3% of students), though this difference did not reach significance at the 5% level (P = 0.077). Students not willing to quit the habit were more numerous in group A (43.6%) than in group B (36.7%). The distribution of study subjects according to the Fagerstrom/smoking analysis in group A showed an increase in percentage change of 1.9% and 3.8% in the very low dependence and high dependence categories, respectively, and a reduction of 5.7% in the low dependence category. In group B, an increase in percentage change of 5.8% was seen in the very low dependence category, and reductions of 3.9% and 1.9% were seen in the low dependence and very high dependence categories, respectively. These results cannot be generalized because only students were included in the study.
DISCUSSION
Young people explore new roles, develop new skills, and begin to consider their future as adults during the teenage years. Therefore, the role of competence skills is highly relevant to understanding the course of adolescent development. [7] Tobacco use and health are intimately related; thus, tobacco use among students is an important issue. [8] Preventive approaches that focus on psychosocial factors associated with drug use initiation and those that emphasize the teaching of social resistance skills, either alone or in combination with generic personal and social tactics, are effective. [6] In India, it is generally thought that smoking by girls is socially unacceptable and that, therefore, they do not smoke, but in the Northeastern states a high smoking prevalence has been reported among girls, ranging from 28% in Mizoram to 8.3% in Arunachal Pradesh. [7] In the present study, tobacco users preferred smoking (68.5%), followed by chewing tobacco (19.4%); 12.1% of the students used both forms of tobacco. The proportion of smokers among tobacco users in this study was 68.5%, which is higher than the 14.4% reported in a study done in Iran. [9] Among smokers, 82.3% preferred cigarettes, and among smokeless tobacco users, 75.5% preferred the gutkha form. The largest share (35.9%) of them had used tobacco for 1-5 years, and 53.6% of students used tobacco more than once and less than five times a day.
The majority of the school children started to smoke between 15 and 18 years of age, whereas in the studies of Aslan et al. and Singh et al. [10,11] the habit was found to have started much earlier, at 10 years. Early smoking can be regarded as a specific health and psychosociological problem. It has been shown that young people are more reluctant to give up smoking, possibly due to greater addiction to nicotine. [12] The period of abstinence extended to 1 month and more than 6 months in group A with the intervention. The important variables favoring abstinence for more than a year were increasing age, higher socioeconomic status, male sex, and the presence of respiratory symptoms; the odds ratio was higher for Bangalore and Chandigarh versus Delhi or Kanpur. [13] In the present study, 34.4% of students in both groups attempted to quit for at least 1 day, which is less than the 54% reported in the study by Susan. [14] One of the primary reasons for the failure of smoking prevention and cessation programs among young adult smokers is that they are less likely to be concerned about the health risks of smoking than older smokers; they believe that the health consequences occur much later in life, and the health risks of smoking are not clear to them. CO levels decreased in both groups A and B; the reduction was greater in the control group (group B) after 6 months. Regarding nicotine dependence (Fagerstrom Test for Nicotine Dependence scores) and daily cigarette consumption, there was a significant reduction from baseline to each of the follow-up sessions. The treatment group also displayed an increase in Fagerstrom Test for Nicotine Dependence scores from 3 months to the subsequent follow-up sessions, relative to smokers in the comparison group, whose nicotine dependence tended to decline. [15] Finally, the students who stopped smoking were more numerous in group A after the intervention than in group B without intervention after 6 months. Relapse was more frequent in group B than in group A after the intervention. The students who were not willing to quit were more numerous in group A after the intervention than in group B. In this study, there was a greater decrease in smoking and relapse cases in group A after the intervention than in group B, but the change was only minor. Another possible reason for the small cessation effect is the smaller than anticipated sample size and the consequent reduction in statistical power to detect group differences. [10] Young adults who initiate smoking late and college students may experience greater success at quitting than early initiators. Any prevention and cessation program for adolescents or young adults needs to address the multiple ways in which tobacco is used. [16]
CONCLUSION
It is concluded from the study that the majority of tobacco-using students were smokers and started using tobacco products for style and fun and through the influence of friends, in both groups. Findings from the present study suggest that the intervention has suggestive significance on tobacco usage. The students' knowledge regarding the harmful effects of tobacco increased after the intervention. Tobacco use may be explained mainly by factors such as peer influence and psychosocial aspects related to tobacco usage.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T04:34:11.450Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "3179edb3cf724198c89f27f6462f151a54d94634",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0f9c6f36d092fdfff300723309fc8ace16c7841c",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233441610 | pes2o/s2orc | v3-fos-license | Statistical-Economic Design of Control Chart If The Vector of Target Values of Multi-Variables Shifts With Time |
ABSTRACT
In this paper, the statistical-economic design of a multivariate special triangle control chart is proposed to control the processes of the quality characteristics or financial indices shifting with time. A multi-objective programming with several constraints is used to determine optimal solutions of control region (probability of false alarm), the power of finding out assignable cause(s), sample interval and sample size. An application of the statistical-economic design of a multivariate special triangle control chart is illustrated to control the soundness of insurers of the U.S.
INTRODUCTION
Mao (2019 a) presents a new quality control technique. The optimization of the adjustment interval of a one-dimensional quality characteristic is extended to the multi-dimensional case, in which the vector of the quality characteristics or important financial indices of firms shifts with time due to seasonal changes or cyclical changes of the economic environment, and the variables are also time-varying and correlated with each other. The statistical-economic design of a multivariate special triangle control chart is proposed to control such kinds of processes. An application is illustrated to monitor the soundness of insurers of the U.S. Optimal solutions for the control region, the power of finding out assignable cause(s), the sample size and the sample interval are found.
In business processes, the process means shift gradually with time. For instance, the means of some performance-type financial indices gradually increase, or those of some consumption-type financial indices gradually decrease, and we must distinguish three different situations. One is that the shift of the means results from the normal growth and development of the firm. Another is that it is due to abnormal events happening in the firm; for example, the underwriting premium or liabilities suddenly increase in a very short time to a very high level and even exceed the normal underwriting capability. If the quality of the underwritten insurance policies is poor, this causes a very large insolvency risk to the insurance company. The third situation is that the process is affected by some causes and the financial indices shift gradually from the targets due to seasonal changes or cyclical changes of the economic environment; these changes are unavoidable within an allowable range. The important thing for insurance companies is to use suitable control techniques to dynamically control the means of the important financial indices under the target level, and to find and correct abnormal situations as soon as the control chart signals out-of-control. This is obviously different from a production process, where in any situation the target mean is always a constant, even though the quality characteristics change with time due to tool wear or some other reasons, such as material consumption; this is because the engineering specification limit or region is fixed. The control target is to keep the process mean at the preset target value by adjusting the process cyclically. Mandal (1969) (also see Montgomery (2009)) suggests using "Trends XR control charts" to jointly control the mean and the deviation of a process. The fitted regression equation is used as the central control limit, with parallel upper and lower control limits; the width of the control limits is 6σ. The process needs to be adjusted as soon as the regression line goes over the preset maximum level of the process mean. Quesenberry (1988), Wu (1998), Cheng and Fricker Jr. (1999) and Kang et al. (1999) also address the quality control of such kinds of production processes. Spiring (1991) proposes a method to evaluate process capacity when there exists an unavoidable systemic cause. Spiring and Cheng (1998) propose a single-variable control chart, the MSE chart, to monitor the location and the scale simultaneously. Mao (1995 a) presents a new technique of multivariate joint quality control, which can be used to monitor changes of the mean vector and covariance simultaneously; its application is illustrated with an example, and the characteristics of the Average Run Length (ARL) are discussed. Chao and Cheng (1996) discuss the semi-circle control chart to control both the location and the variation of a quality characteristic simultaneously. Mao et al. (2014) apply dynamic monitoring to control and predict insurers' financial strength. They present a new statistic that combines the means, variances and covariances of multivariate financial indices as a new type of control tool. They use data from U.S. property and casualty insurers from 2001 through 2010 to determine the control regions and provide two examples to illustrate the application of their proposed methodology. Mao (1997) discusses the control and optimization of the adjustment interval when a one-dimensional quality characteristic shifts with time.
Mao and Cheng (2016) extend this by using a joint trends semi-circle control chart to control this type of process effectively. An optimization model is suggested to determine the optimal interval of adjustment. They also discuss the average run length of the proposed control chart and the extension to the EWMA chart. An example is used to illustrate its application in a production process. There is a large literature on the economic or statistical-economic design of multivariable processes (for a review, please see Mao (2019 a, b)).
Some papers approach the economic or statistical-economic design based on Taguchi (1986), for example Krishnamoorthi et al. (2009) and Alexander et al. (1995). Cai et al. (2002) present the economic design of control charts for trended processes. They address the problem of the appropriate timing of adjustments of a trended process through the economic design of its control chart. However, we believe that there are some limitations to this study. Taking the shift of the quality characteristic from the acceptable level as the only assignable cause to be determined and corrected may not be the best strategy. Not considering other important and possible assignable cause(s) and the corresponding costs will cause additional control cost and result in ineffective process control. The economic design or statistical-economic design of control charts generally focuses on the optimization of three important parameters: the upper and lower limits of the control chart, the sample interval and the sample size.
In this article, we establish a multi-variable control chart and we assume that the vector of multiple financial indices changes with time. We apply a three-dimensional and multivariate special triangle control chart to jointly monitor the process mean vector shifting with time and the covariance. We also discuss the statistical-economic design of the control chart we present. Different from Cai et al. (2002), who consider the means of the quality characteristic taking unacceptable values as the out-of-control state, we divide assignable causes into two kinds, that is, avoidable and unavoidable cause(s). Our optimization problem is to establish an economic design model. It only takes into account avoidable causes to determine the optimal control limit (region) and sample interval. The total process control cost includes the control cost of production and the loss to the consumer due to out-of-control states. It should especially be noticed that our approach in this article has three obvious aspects of importance. The first is its flexibility and universal adaptability; that is, it has universal and widespread application value and can be applied in the process control of almost all industries. The second is its simplicity and effectiveness. The third is its dynamic nature; that is, it can be used in tracing and monitoring processes in which the vector of target values of multiple variables changes with time, which is a widespread phenomenon in growth-type industries. In fact, the target values of indices reflecting the growth of firms gradually increase with time, while those reflecting their consumption decrease with time. Figure 1 and Figure 2 display the time series patterns of the means of the main financial indices of an insurance company in the U.S. Figure 3 describes the change patterns of the means of asset and liability values of a main bank in the U.S.
Model for monitoring the vector of target means of multi-variables shifting with time
Similar to Mao (2019 a), we assume that there are p variables for a firm. Let X(t) be the vector of the values of the characteristics that change with time; we use a linear equation to describe this change. Let X(t) be a multivariable continuous function of t, and let the t-th random sample of size n be taken at time t from the given process. The t-th sample mean vector follows a multivariable normal distribution, with covariance matrix Σ_0 of the multivariables [the explicit distribution and matrix expressions are lost in extraction]. Assume that the expected vector of the observations x_ij belonging to the same sample changes with time; that is, the expected value vector of the i-th value of the j-th sample is time-dependent [formula lost in extraction]. The sum of squared deviations between the i-th observation and the mean of the j-th sample can be decomposed into LA(t) and LB(t), where LA(t) is the measurement of the random variation of the multivariables at time t, LB(t) is the measurement of the unavoidable assignable cause at time t, 0 ≤ t ≤ T, and T is the control period. We assume that the unavoidable assignable cause is independent of the other possible causes and that both are functions of time [the expressions for the expected values of LA(t) and LB(t) are lost in extraction]. The proof of the independence of LA(t) and LB(t) is almost the same as that in Mao (2019 a); please see Appendix 1.
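Where the displayed formulas were lost in extraction, the following LaTeX restates the model as we read it; the linear-trend form of U(t) and the normality assumption are inferred from the surrounding definitions rather than recovered verbatim.

```latex
% Inferred restatement, not verbatim from the source:
X(t) \sim N_p\bigl(U(t),\, \Sigma_0\bigr), \qquad
U(t) = B_0 + B_1 t, \qquad 0 \le t \le T,

\sum_{i=1}^{n} \bigl(x_{it} - \bar{x}_t\bigr)'\bigl(x_{it} - \bar{x}_t\bigr) = LA(t) + LB(t).
```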
Similar to Mao (2019 a), we define the statistic LT(t) [definition lost in extraction]; by the Hohollon theorem (2013), we obtain the LT(t) chart to control both the location and the variation of the process.
We have the intercept a, taken as the 100(1 - α) percentile of a chi-square distribution (the degrees of freedom, apparently p(n - 1), are garbled in extraction). We do not want to repeat the derivation here; readers with interest can refer to Mao (2019 a).
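Under this reading, the intercept and the chart's operating characteristics could be computed as below; the p(n-1) degrees of freedom and the use of a non-central chi-square for the shifted process are assumptions inferred from the surrounding text, not recovered verbatim.

```python
from scipy.stats import chi2, ncx2

def control_limit(alpha: float, p: int, n: int) -> float:
    """Intercept a: the 100(1 - alpha) percentile of chi-square, df = p(n-1)."""
    return chi2.ppf(1 - alpha, df=p * (n - 1))

def detection_power(a: float, p: int, n: int, delta: float) -> float:
    """P(statistic exceeds a) under a shift with non-centrality delta."""
    return ncx2.sf(a, df=p * (n - 1), nc=delta)
```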
Statistical-Economic Design of Dynamic Multivariable Triangle Control Chart
Regarding the economic or statistical-economic design of control charts, most of the literature focuses on the X̄ control chart and only very limited literature considers joint control charts, such as the X̄-R control chart. Since it is important to design the process control procedure by taking into account both the means and the dispersion of the variables simultaneously, in the following we establish a statistical-economic model of the dynamic multivariable special triangle control chart, which can be used to jointly control the vector of means and the covariance of the multiple variables when the vector of target means shifts with time.
Multi-objective economic-statistical design was first introduced by Evans and Emberton (1991) for joint X̄ and R control charts. In their studies, multiple objectives, including the cost function and statistical properties, are optimized simultaneously. Therefore, the optimal design of a control chart is represented as a Multiple Criteria Decision-Making (MCDM) problem. Safaei et al. (2012) present a multi-objective model of the economic-statistical design of the X̄ control chart, incorporating the Taguchi loss function and intangible external costs. The model minimizes the expected hourly loss cost while minimizing the out-of-control average run length and maintaining a reasonable in-control average run length. The Pareto optimal solutions are obtained by an evolutionary algorithm, namely NSGA-II. Bahiri et al. (2013) propose the economic-statistical design of the X̄ control chart using a multi-objective genetic algorithm. They consider the trade-off between the average time needed to detect out-of-control states (ATS) and the control cost incurred over the whole control period in their multi-objective function.
In our model, we also apply a multi-objective function. According to Taguchi (1986), any shift from the target value represents a loss. Different from Taguchi (1986), we consider in our design the social loss resulting from both the shift from the target values and the abnormal increase of the covariance. Similar to Bahiri et al. (2013), we also consider the trade-off between the total expected social loss and the ATS. In the following, we discuss the case of optimal control of multiple financial indices. In studies of the economic design of control charts applied to the control of production processes, there are three important parameters to be determined: the sample size, the sample interval and the control limits (region), while the selection of the optimal sample size generally only considers the effect of the sampling cost on the total control cost. However, one important fact in monitoring the financial performance of firms is that the sampling cost is much less important than the other costs. In the optimal design schedule discussed in this subsection, we discuss the optimal design of the multivariate special triangle control chart from the statistical and economic aspects and we neglect the sampling cost. Therefore, our optimal design of the control chart for monitoring the financial state of a firm amounts to determining three parameters: the control bound (region), the sampling interval and the sampling size. A great difference between the control of a production process and the financial performance or insolvency of a firm is that the loss resulting from any missed alarm will be much greater than that resulting from false alarms (Mao and Hao (2019)), because any alarm of out-of-control of a firm's financial indices indicates a great risk of bankruptcy of the firm, and the loss will be fatal if it really happens. While it is very difficult to estimate the loss caused by a missed alarm accurately, in establishing the optimization model we minimize the total expected control cost while, at the same time, assuring that the power of the control chart is greatest and the average time needed to detect out-of-control states is minimized. Therefore, it is an optimization problem with multiple objectives.
Assumptions
(1) Assume that the dynamic statistic of the p-dimensional quality variables is LT(t), compared against the intercept a defined above, where n is the sample size. The process may be affected by v (v ≥ 1) kinds of avoidable assignable causes, and the non-centrality parameter would change from 0 to δ_k, k = 1, 2, ..., v, when the process is out of control as a result of the occurrence of an avoidable assignable cause; here the non-centrality parameter δ_k is determined by U_k and Σ_k, the mean vector and the covariance matrix when the process is affected by the k-th avoidable assignable cause [the explicit expression is lost in extraction]. The process will not be affected by other avoidable assignable causes while it is affected by one avoidable assignable cause. It is also assumed that if the process is out of control, it remains in that state until it is detected and corrective action is taken.
(2) Assume that each assignable cause occurs during a time interval following a Poisson process; that is, the occurrence times of the assignable causes are independent exponential variables with means 1/λ_k, k = 1, 2, ..., v.
(3) Assume that the process is not shut down during the search for the assignable cause. This means that the business of the firm will not be interrupted when the out-of-control alarm is sounded.
A_0: the expected direct and indirect loss to the firm because of the out-of-control state of the firm, including the reduction of the profit or of shareholders' value and the loss resulting from the reduction of demand.
The models of the cycle of the process and the costs
Since the sampling cost is much smaller compared with the other costs, we neglect it.
Similar to Mao (1995), the cycle of the process control is given by equation (17) [expression lost in extraction]. Different from Cheng and Mao (2011), the total expected cost does not include the expected cost incurred due to a higher rate of defectives when the process is out of control (the situation in the control of a production process), and the total expected cost is also time-varying. Hence, the total expected control cost can be written as follows [expression lost in extraction],
where a_1 is the average cost per unit time of searching for the avoidable assignable cause, a_2(k) is the average cost per unit time of adjusting and dealing with the k-th avoidable assignable cause, A_0 is the expected direct and indirect loss to the firm when the business of the firm is out of control, and l is the expected direct and indirect loss to the firm per unit of the squared errors because of the out-of-control state. The multi-objective function is expressed as follows [expression lost in extraction], where G_1, LV*(t) and G_2(t) are given constants. We can find the optimal values of the parameters, namely the sampling interval h(t), the sampling size n(t) and the intercept of the triangle control region, through a grid search method.
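As a sketch of the grid search just mentioned: choose the sampling interval h, the sample size n and the false-alarm rate alpha (hence the control limit a) to minimize an expected-cost function subject to the statistical constraints. The `expected_cost` callable, the grids and the constraint bounds stand in for the paper's equations, which could not be recovered from the extraction.

```python
import itertools
from scipy.stats import chi2, ncx2

def grid_search(expected_cost, p, delta, alpha_max, power_min,
                h_grid, n_grid, alpha_grid):
    """Return (cost, h, n, a, alpha, power) minimizing cost under constraints."""
    best = None
    for h, n, alpha in itertools.product(h_grid, n_grid, alpha_grid):
        a = chi2.ppf(1 - alpha, df=p * (n - 1))        # control limit
        power = ncx2.sf(a, df=p * (n - 1), nc=delta)   # detection power
        if alpha > alpha_max or power < power_min:
            continue                                   # statistical constraints
        cost = expected_cost(h, n, a)
        if best is None or cost < best[0]:
            best = (cost, h, n, a, alpha, power)
    return best
```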
Statistical and Economic Design Using the Data of Insurers of the U.S.
We begin by discussing the statistical and economic estimation of the optimal control region, sample interval and vector of target values of the unified statistic using historical (2001 through 2010) data from all U.S. property/casualty (P/C) insurance companies in financially strong condition, as provided by SNL Financial.1 We select six key financial indices according to their impact on the soundness of insurers. Appendix 2 lists these six key indices and their detailed descriptions. We obtain the covariance matrix Σ_0 from the historical data [matrix values lost in extraction]. U_0 is the vector of means of the financial indices when the process is in control and U_1 is the vector of means when the process is out of control. Using the data given in Table 1, Table 2 and the other data given above, we can obtain the optimal solutions for the control region, sample interval, sample size and the power of finding out the assignable cause(s).
To simplify the calculation, we assume that only the vector of financial indices changes, while the volatility and covariance do not change. We also assume that the process is possibly affected by only one assignable cause, which results in the process being out of control. With the help of MATLAB and multi-objective programming, we can obtain the optimal solutions for the sample interval, control region, probability of false alarm, power of finding out the assignable cause(s) and average time of finding the assignable cause(s), based on the criteria of the statistical-economic design of the control chart. Table 3 lists the optimal solutions of the statistical-economic design of the multivariate special triangle control chart. Figure 4 displays the multivariate special triangle control chart. The correct selection of the vector of target values is very important for monitoring and improving the financial performance of a firm. Therefore, the process manager must inspect the process characteristics and adjust them periodically. The well-known expert on quality management, Dr. Deming, emphasizes the importance of quality improvement. Setting the vector of financial indices dynamically conforms to the idea that constant improvement makes things tend toward perfection. The results also show that the optimal probability of false alarm and the optimal power of finding out the assignable cause(s) are the same no matter how the optimal sampling interval changes; the optimal value of the probability of false alarm is rather small and the optimal power of finding out the assignable cause(s) is as high as 1. This means there is no missed alarm. This result is very important because any missed alarm may leave the financial indices out of control and even lead to the bankruptcy of the insurer. Finally, the results show that the total cost increases with the sampling interval; that is, the smaller the sampling interval, the smaller the control cost. Figure 4 indicates that two points lie outside the control region. This means that in the last two years the financial indices showed some problems, and the insurer must take effective measures to prevent this situation from worsening further. Compared to Mao (2019 a), the criteria of control are stricter in the statistical-economic design than in the statistical design of the multivariate special triangle control chart. That is, both the optimal probability of false alarm and the power of finding out the assignable cause(s) based on the statistical-economic criteria are greater than those based on the statistical design. In this way, the manager is helped to find out the assignable cause(s) as soon as possible. In addition, if the process is affected by other assignable cause(s) and all or some parameters have changed, it is necessary to recalculate the optimal solutions and reset the optimal control region of the control chart. Table 1 indicates that all six selected indices are growth indicators within specific ranges. However, Figure 1 and Figure 2 show that the trends of five indices, x_1, x_2, x_3, x_4 and x_6, gradually decline with time and only one index, x_5, slightly increases with time; the parameters of equation (1) are estimated by the linear regression method, as listed in Table 2. Table 2, Figure 1 and Figure 2 show obvious decline trends in five financial indices; however, the points in Figure 4 show obviously increasing trends and two points are out of control.
The manager must not only pay close attention to the out-of-control situation resulting from the gradual increase of the covariances and the interaction of the multiple financial indices and covariances, but must also be closely concerned with the decline trends of these five indices and take effective measures to let the five indices gradually increase rather than continuously decrease. Although we cannot quantitatively determine the accurate adjustment value and interval of the vector of target values of the financial indices, because we cannot obtain the relevant data, we can observe the change trends of the financial indices by combining Table 2, Figure 1, Figure 2 and Figure 4 and make an effort to improve the soundness of the insurer so as to avoid the situation becoming worse. It is important to notice that, although the sixth index is a growth index, it has been negative; it is necessary to carefully analyze the causes of this worse situation and take measures to return it to a positive value. Table 4 lists the results of the sensitivity analysis when the important cost parameters change. The results in Table 4 show that the optimal sampling interval and the power of finding out the assignable cause(s) did not change no matter how the cost parameters changed. The optimal total control cost, sample size and control region change with the values of the cost parameters. However, in one situation, when the cost parameter a_2 changes, only the total optimal control cost changes and the other optimal solutions do not change at all. On the whole, in all situations, the smaller the sampling interval and the optimal average time of finding out the assignable cause(s), the better. Therefore, collecting data, making calculations and drawing figures to carry out the analysis in time is very important for the effective control of the financial indices. This is different from the optimization and control of a production process whose quality characteristics shift with time, where the sampling and inspection costs are often high, especially when the inspection of products needs expensive equipment or destructive tests. Optimization of the sampling interval needs to balance the sampling and inspection cost against the cost of out-of-control states to determine an optimal sampling interval and an optimal average time of finding out the assignable cause(s).
Regarding the adjustment of the financial indices of a firm when they gradually change with time, we should distinguish this from the adjustment of the means of a production process. In the situation of monitoring financial indices, it is difficult to optimize the adjustment interval by quantitative methods because the financial indices generally have no strict specifications; therefore, it is difficult to evaluate the loss resulting from an unexpected shift of the financial indices, and it is even impossible to estimate the adjustment cost. As a result, what is important is to distinguish two situations. On the one hand, if the change of the financial indices of a firm results from some seasonal or other cyclical causes, the firm should take some off-season or countercyclical measures to make the affected financial indices return to their normal states. On the other hand, the change of the financial indices may result from the gradual development and growth of the firm; these changes are, in fact, sound changes expected by the firm. The important thing in this situation is to observe the change trend of each financial index and reset a new and better target value vector as soon as the original vector of target values goes over the preset maximum (minimum) level of the vector of process means.
We have discussed how to quantitatively determine the optimal adjustment interval for the production process in Mao (2019 a). Here we will not repeat it again.
CONCLUSION
In this paper, we discuss the statistical-economic design of the multivariate special triangle control chart. We consider the optimal control of multiple financial indices of a firm by using multi-objective programming. We illustrate its application in the optimal control of the financial indices of insurers in the U.S. Optimal solutions for the control region, sampling interval, sample size, probability of false alarm and power of finding the assignable cause(s) are determined. The optimal solutions satisfy the constraints of (1) power of finding out | 2021-04-29T00:39:58.485Z | 2021-01-31T00:00:00.000 | {
"year": 2021,
"sha1": "1f25214a5ac5f10ec50bc0f4eed6ca2f17095f59",
"oa_license": "CCBYSA",
"oa_url": "http://www.randwickresearch.com/index.php/rissj/article/download/187/124",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1f25214a5ac5f10ec50bc0f4eed6ca2f17095f59",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
10374726 | pes2o/s2orc | v3-fos-license | Occurrence of Black Aspergilli and Ochratoxin A on Grapes in Italy
Ochratoxin A (OTA) in wine is linked to contamination by several Aspergillus species. In 2003–2007, grape samples collected in Italy were surveyed for the presence of OTA and OTA-producing fungi. A. niger aggregate was the prevalent species. A. carbonarius, which is considered the main source of OTA in grapes, was mostly found in Southern Italy. The year and the environment had an important influence on the development of the black Aspergillus populations. Testing with ELISA showed OTA to be present in about 30% of the samples. Samples from Southern Italy showed the highest occurrence (45%) and also the highest OTA concentration, sometimes higher than 2 μg/L. The values decreased progressively the further North the samples were taken.
Introduction
Ochratoxin A (OTA) is a mycotoxin with immunosuppressive, genotoxic, carcinogenic and potently nephrotoxic properties [1]. Several nephropathies affecting animals and humans have been attributed to OTA; for instance, OTA has been proposed to be associated with the Balkan endemic nephropathy [2]. Based on the available scientific data, the International Agency for Research on Cancer has classified it as a possible human carcinogen in group 2B [3].
OTA occurs naturally in several foodstuffs, including grapes and their derivatives. The first reports of OTA in wine go back to 1996 [4,5]. Since then, many studies have focused on OTA occurrence in products derived from grapes, such as dried vine fruit, wine, grape juice, must and vinegar [6,7]. OTA has been found in wines in many European countries, as well as in America, Asia, Africa and Australia [8]. Red wines have been reported to be contaminated more frequently than white wines, probably due to the different wine-making methods involved [9]. OTA occurrence seems to be higher in wines from Southern European countries: several studies have shown an increase in the amount of OTA in warmer climates [6,10]. Given the danger posed by OTA, the European Community has recently established a toxin concentration limit of 2 μg/kg in grape juice, must, wine and dried fruit (Commission Regulation No. 123/2005/EC).
OTA is a secondary fungal metabolite produced naturally by several Aspergillus and Penicillium species; Aspergillus and Penicillium species able to produce OTA occur in temperate and cold climate areas, respectively. The most important OTA-producing species belong to the Aspergillus sections Circumdati and Nigri [11,12]; however, the presence of OTA in grapes and wine is mainly linked to contamination in the vineyard by species belonging to the Aspergillus section Nigri, the so-called black aspergilli. The major producer of OTA in grapes is A. carbonarius, though other species belonging to the Nigri and Circumdati sections have also been found to produce the toxin in different Mediterranean countries, such as Spain, Italy and Portugal, and in Australia and South America [13-18]. The percentage of A. carbonarius strains isolated from grapes and able to produce OTA has been found to range between 70 and 100% when grown in vitro, whereas the range of OTA-positive strains has been reported to be around 2-20% for A. niger and A. tubingensis [15,19]. Some reports have claimed the production of OTA also by A. japonicus; however, this remains unconfirmed [20,21]. Ponsone et al. [22] have recently found that the A. niger aggregate was the most frequent species on grapes in Argentinean vineyards, with 27% of the isolates producing OTA. The authors also reported the production of OTA by A. japonicus and A. aculeatus strains; however, that work lacks molecular identification of the strains [23]. Also A. ochraceus and other similar species in the section Circumdati, in particular A. westerdijkiae, have been reported as OTA-producing fungal species in grapes [12,24].
Black aspergilli can be present on grapes from the first stages of berry development and their occurrence increases as the season advances [25]. Meteorological parameters are, indeed, the most important factors in determining the contamination by the black aspergilli, in particular A. carbonarius [26]. However, heavy contamination of grapes by OTA-producing species does not necessarily lead to a higher amount of the toxin [27], first of all because not all the strains have the ability to produce the toxin and, secondly, because the production of OTA by these species is influenced by environmental factors, such as humidity and temperature [19].
The aim of this paper was to estimate the occurrence of OTA in grapes from different Italian vine-growing environments and to evaluate the possible correlation with the presence of OTA-producing fungi in the vineyard.
Results and Discussion
Ochratoxin A (OTA) and fungal contamination were estimated in the collected grape samples, and the values obtained were analyzed in relation to the geographic origin of the samples and the climatic data. Moreover, statistical analyses were carried out to evaluate whether significant correlations occurred between the following: OTA content and fungal contamination; OTA content and climatic data; fungal contamination and climatic data.
Ochratoxin A Content
In general, OTA was present in 30.4% of the grape samples analyzed (Table 1). Geographic origin gave rise to considerable, statistically significant differences. Indeed, the highest number of OTA-contaminated samples came from Southern Italy, where over the five-year period the toxin was detected in 45% of the samples examined. This result was significantly different from the data obtained for the samples collected in the other regions (p < 0.05). Similar contamination levels have also been recorded in other vineyards in Southern Italy by other authors, in both grapes and wine [10,28-31]. The lowest occurrence of OTA was recorded in Central Italy, where only 3.3% of the grape samples were contaminated. In Northern Italy, the mycotoxin was present in 17.5% of the tested samples. The difference between the OTA occurrence in the Central and Northern areas was not statistically significant. A low occurrence of OTA in samples collected in Northern Italy has been found in previous studies carried out in Piedmont and other regions [29,32]. Moreover, the OTA concentration in the contaminated samples differed according to the origin of the grapes. The highest concentrations of OTA were found in grapes from Southern Italy, with values higher than 2.0 μg/L; one sample showed an OTA concentration as high as 9.2 μg/L. The OTA-contaminated samples from the Northern and Central areas did not contain more than 0.02 μg/L of the toxin (Table 2). The climatic differences, related to the geographic region and the latitude, have been demonstrated to influence fungal and OTA contamination: the greatest OTA occurrence and concentration have been found at the lower latitudes [5,10,33-35].
The OTA occurrence varied greatly according to the year. In general, a higher number of samples were contaminated in 2003 (45.3%) and 2007 (41.7%), while in 2006 the occurrence of OTA-contaminated grapes was as low as 12.0% (Table 1). The highest occurrence of samples contaminated by OTA was recorded in 2003 in grapes from Southern Italy (90.9%). Chi-square analysis highlighted that the latter samples were significantly more contaminated than those collected in the other years and regions (p < 0.05). Differences in OTA levels, probably due to different weather conditions, have also been reported by Pietri et al. [10] and Lopez de Cerain et al. [36] between samples collected in the same regions in 1995-97 and 1997-98, respectively. In the present work, no differences were found between white and red grapes regarding the concentration of the toxin (p > 0.05). On the other hand, analyses carried out on wines instead of grapes have revealed a higher amount of OTA in red wines coming from Tuscany and Sicily in 2000 [37] and from 19 different Italian regions in 1995-97, particularly in wines from Central and Southern Italy [10]. As OTA synthesis during winemaking has not been observed, because alcohol inhibits fungal growth [38], the concentration of the toxin could be higher in red wines due to the presence of the skins during winemaking. In particular, maceration, which is carried out only for red wines, can cause an increase in OTA content estimated at around 20% [21], while fermentation seems to be the primary step responsible for the removal of the toxin [9,39]. Moreover, Caridi et al. [40], Bejaoui et al. [41] and Leong et al. [42] have recently shown that both dead and live yeast cells are able to adsorb OTA rapidly in vitro.
Mycological Analyses
In general, species belonging to the Aspergillus genus (mainly Aspergillus section Nigri and, sporadically, A. ochraceus, belonging to section Circumdati) occurred in more than 70% of the grapes tested. The percentage of contaminated samples varied from one geographic region to another and ranged from 82.5% in the samples from Northern Italy to 64.8% in the grapes from the Southern areas (Figure 1). In contrast, other Italian authors reported that the highest contamination levels by all Aspergillus species occurred in grapes from Apulia (Southern Italy) [28][29][30]. The A. niger aggregate was the main Aspergillus section Nigri group of species present in all the samples, with similar occurrence (from 56.8 to 69.8%) in all the different regions; this was also confirmed by the Chi-square test (p > 0.05). The species least present in the grape samples were A. ochraceus and A. carbonarius, which occurred in 0-14.3% and 0-9.9% of the samples, respectively. The occurrence of uniseriate species showed intermediate values, ranging between 23.3% and 55.6%. Black aspergilli species other than A. carbonarius (mainly A. niger aggregate and uniseriate isolates) were present in all the regions surveyed (Figure 1). A. carbonarius, which is considered the main source of OTA production in grapes, was mainly present in Southern grapes; indeed, it was found in 9.9% of the Southern samples collected in the five-year period. The Chi-square analyses confirmed that grapes from Southern vineyards were significantly more contaminated by A. carbonarius than grapes from Northern vineyards; indeed, this species was never found in the Northern grapes examined. In Central Italy, the occurrence of A. carbonarius was intermediate and not significantly different from the occurrence recorded in both Northern and Southern samples (p > 0.05). This geographical distribution of A. carbonarius agrees with data reported by other authors for Mediterranean areas [19]. Battilani et al. [29] detected A. carbonarius in several Northern and Southern Italian regions in the period 2001-2003 and found that the grape samples from Apulia were the most contaminated by this species, in agreement with the results reported in the present paper.
The strains belonging to section Circumdati were identified as A. ochraceus on the basis of their morphological features and their ability to grow at 37 °C; these characteristics were taken into account to distinguish A. ochraceus from the quite similar A. westerdijkiae [12]. A. ochraceus, which usually occurs in much warmer regions than Italy, such as tropical areas, was sporadically found in grapes from both Northern and Southern Italy, in 2003 and 2005, respectively. Using the Chi-square test, the difference in the occurrence of A. ochraceus between Northern Italy and the other areas was found to be statistically significant, while no statistical differences between Central and Southern Italy were observed. Although the occurrence of the A. niger aggregate and A. carbonarius on grapes is generally higher than that of A. ochraceus [43,44], some authors have detected a higher percentage of OTA-positive isolates among A. ochraceus in Argentina, Brazil and Spain [24,44]. Therefore, this species should be regarded as a possible contributor to the OTA presence in grapes and their derivatives. The highest occurrence of uniseriate black aspergilli was in Northern Italy, where 55.6% of the samples were contaminated by these species. This value was statistically different from that recorded in samples collected from Central and Southern Italy, where the occurrence was much lower, with average contamination being 23.3% and 25.2%, respectively. In any case, it has not yet been confirmed that the uniseriate black aspergilli are able to produce OTA.
The percentage of contaminated samples varied greatly from year to year, ranging from 94.3% in 2003 to 36
Correlation between the Occurrence of OTA and Black Aspergilli
In general, there was no clear correlation between the presence of A. carbonarius, uniseriate black aspergilli species, A. niger aggregate species and A. ochraceus on the one hand and the occurrence and concentration of OTA on the other. In the Northern and Central regions especially, the presence of high populations of these species did not necessarily lead to the production of OTA. However, all the samples from Southern Italy that showed the presence of OTA-producing Aspergillus species were contaminated by OTA.
Some interesting information is provided by the samples collected in Apulia in 2003, i.e., the region and the year with the highest production of OTA. The concentration of the toxin was compared with the CFU/g of the different Aspergillus species present in the grape samples (Table 3). The following correlations were statistically significant: A. niger aggregate contamination versus OTA concentration; A. carbonarius contamination versus OTA concentration; Aspergillus species contamination versus OTA concentration. No significant correlation was found in the other years or in the other geographical areas. The temperature trend was similar for the Northern, Central and Southern regions over the five-year period; temperatures were very similar in the North and Center, whilst the Southern areas were much hotter. The rainfall pattern was similar for Northern and Central Italy, although the North was wetter. In Southern Italy, the rainfall pattern was quite different, although the total amount of rain was similar to that of Central Italy.
It was previously determined by many authors that the highest contamination by OTA-producing fungi occurred in grape samples taken from Southern Italy, where temperatures were higher and humidity was lower. For this reason, more detailed statistical analyses and correlations with meteorological variables were carried out on these samples. Using the Pearson correlation coefficients, an assessment was made of the statistical significance of the correlation between the sums of the environmental parameters (maximum, mean and minimum temperatures; maximum, mean and minimum relative humidity; rainfall) over the month before the sampling date and the CFU/g of A. carbonarius, uniseriate isolates, A. niger aggregate and Aspergillus section Nigri. The analyses of all the data collected in 2003-2007 revealed a slightly negative correlation only between both A. niger aggregate isolates and Aspergillus section Nigri isolates on the one hand and the minimum and mean relative humidity on the other (p < 0.05). No significant correlation was found with the other parameters and species (Table 4: correlation values obtained by comparing, with the Pearson coefficients, the meteorological parameters recorded in the month before the sampling and the black aspergilli isolated from Apulian grapes; *: statistically significant, p < 0.05). The data were confirmed using the daily average of each parameter and the CFU/g of Aspergillus section Nigri (data not shown). This result highlighted that the grapes were more contaminated by these OTA-producing fungi in the years when it was very dry. Other authors have also demonstrated that the meteorological conditions play a major role in fungal colonization of bunches: the year significantly affected the number of berries colonized by black aspergilli, particularly A. carbonarius at harvest, with a positive correlation with the sum of degree-days and a negative correlation with the sum of rain between early veraison and ripening [29]. In a different paper, a positive correlation was observed between temperature and grape contamination by black aspergilli, while correlation with relative humidity and rainfall has not always been evident [45].
Moreover, OTA was mainly found in the hottest and driest years and areas. Indeed, in 2003, which was the hottest and driest year, a significantly higher number of samples contaminated by OTA was found (p < 0.05). Meteorological conditions, as well as closeness to the sea, have been shown to play a major role in determining OTA occurrence in grapes [9].
Grape Samples
Grapes were collected in 23 vineyards located in four Italian regions, Veneto (North), Tuscany and Latium (Center) and Apulia (South), during the 2003-2007 harvests. Twenty-two different grapevine varieties were included, of which 16 were wine varieties (11 red and 5 white) and 6 were table grape varieties (3 red and 3 white) (Table 5).
Sampling was carried out in a systematic manner: each row tested was divided into three parts (beginning, middle and final), which were used as replicates. In every replicate, about 50 berry clusters were randomly collected from different parts of the bunch and from different positions on the plant, in order to obtain a representative sample. In total, 204 samples (800 to 1000 g) were collected in plastic bags and maintained in refrigerated containers until they were processed (maximum 2 days). Each sample was hand-crushed and homogenized in a plastic bag and then divided into two aliquots: one for OTA detection and another for the evaluation of fungal contamination. The aliquot for OTA detection was maintained at -20 °C until the analyses, whilst the aliquot for the evaluation of fungal contamination was immediately processed. (The full list of sampled varieties, with region, province, color and grape type, is given in Table 5.)
Ochratoxin A Extraction and Determination
The protocol for OTA determination in the grape samples included an extraction of the toxin using immunoaffinity columns (IAC) and immunoassay by competitive enzyme-linked immunosorbent assay (cELISA), in accordance with Angelini et al. [46]. In order to remove any solid residue, samples were subjected to a gross filtration and centrifugation stage of 15 min at 2000 × g; 100 mL of the supernatant was then filtered with glass micro-fiber filters (Whatman, grade GF/A) under vacuum and stored at -20 °C until analysis.
OTA was extracted using RIDA Ochratoxin A columns (R-Biopharm), following the manufacturer's instructions for wine, with minor alterations. Ten milliliters of the clarified sample were diluted two-fold with a sodium phosphate buffer (0.4 M, pH 7.5), after which 10 mL of the solution obtained was applied directly to the IAC. The columns were washed with a 9:1 solution of sodium phosphate buffer (PBS 20 mM, pH 7.4) and methanol, and dried by air flushing. Any toxin present was subsequently eluted with methanol, evaporated at 40 °C overnight and then prepared for the cELISA. The dried pellets obtained from the IAC extractions were stored at -20 °C in the dark and reconstituted with 0.5 mL of a sodium bicarbonate buffer (0.13 M, pH 8.1) immediately prior to the immunoassay.
Immunoassays were carried out using commercial kits (R-Biopharm) suitable for OTA determination in different foods and feeds. The cELISA kit included a 96-well microplate, six OTA standards, an anti-OTA antibody and an enzymatic conjugate, developing and stop solutions and a washing buffer. Blank, OTA standard and sample wells were always run in duplicate. The enzyme immunoassay was performed following the protocols provided by the manufacturer, with incubation taking place in the dark at room temperature (20-25 °C). Absorbance was measured after the last incubation step at an optical density of 450 nm in a spectrophotometer (Titertek Multiscan Plus MKII, Labsystem). Whenever the absorbance was higher than the upper limit of detection, samples were diluted and analyzed again.
The analyses of the cELISA data were performed using the calibration curves, which were constructed with five points. The curve for each plate was obtained from the mean absorbance values of each OTA standard included in the kit. The values of the toxin concentration in the samples were calculated from the mean absorbance values by interpolating the corresponding OTA concentrations from the calibration curve. The software used for the analysis of the results was RIDA®SOFT Win (R-Biopharm), which was provided by the manufacturer.
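For illustration, the sketch below reproduces the principle of that interpolation in Python. All numbers are hypothetical, and a simple log-linear interpolation between calibration points is assumed in place of the curve model actually fitted by RIDA®SOFT Win, which is not specified here; the code only relies on the fact that competitive ELISA absorbance decreases monotonically with concentration.

```python
import numpy as np

# Hypothetical calibration points: OTA standard concentrations (ng/mL)
# and their mean absorbances at 450 nm (competitive ELISA: absorbance
# decreases as OTA concentration increases).
std_conc = np.array([0.05, 0.1, 0.3, 0.9, 1.8])      # ng/mL (illustrative)
std_abs  = np.array([1.65, 1.40, 1.02, 0.61, 0.38])  # OD450 (illustrative)

def ota_from_absorbance(sample_abs, dilution_factor=1.0):
    """Interpolate the OTA concentration from the calibration curve.

    Absorbance is interpolated against log10(concentration); np.interp
    needs an increasing x-axis, so both arrays are reversed.
    """
    log_conc = np.log10(std_conc)
    log_c = np.interp(sample_abs, std_abs[::-1], log_conc[::-1])
    return (10 ** log_c) * dilution_factor

print(ota_from_absorbance(0.80))                       # undiluted sample
print(ota_from_absorbance(0.55, dilution_factor=5.0))  # re-diluted sample
```

In practice one would also check that the sample absorbance falls within the range spanned by the standards, since interpolation outside that range is unreliable; this is consistent with the re-dilution step described above for out-of-range samples.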
Mycological Analyses
The mycological analyses of the grape samples were performed using a serial dilution plating method. Two hundred grams of juice and 200 g of pulp, skins and stalks were placed in a sterile flask and diluted two-fold with a sterile 0.1% bacteriological peptone solution (Oxoid), before shaking on an orbital shaker for 15 min to allow adequate suspension of the fungal conidia, spores and fragments of mycelium. Finally, 50 mL of each sample were collected and five ten-fold serial dilutions, from 1:1 to 1:10,000, were performed. One hundred microliters of each dilution was poured onto three Petri dishes and spread with a sterilized glass rod over the whole substrate surface, until fully absorbed. The culture medium used was Dichloran Yeast Extract Sucrose 18% Glycerol agar (DYSG), amended with chloramphenicol, in accordance with Pitt and Hocking [47]. The Petri dishes were placed in plastic bags and incubated in the dark for 6-7 days in a climatic cabinet at 25 °C, until colonies formed. The fungal colonies were then counted, and the strains belonging to the Aspergillus genus were isolated on Czapek Yeast extract Agar (CYA) medium for identification at species level.
The identification focused on species belonging to the Nigri and Circumdati sections, which are OTA producers, although all isolates belonging to the Aspergillus genus were isolated and identified. Identification at species level was carried out on the basis of the macroscopic and microscopic features of the fungal isolates. The following were recorded on different substrates and at different temperatures: the growth diameter of the colonies; the characteristics and color of the mycelium; the production of pigments and exudates; the characteristics of the conidiophores and conidia [47,48]. Microscopic observation of reproductive structures from colonies grown on MEA (Malt Extract Agar) was performed by glass slide preparation in absolute ethanol and lactic acid and observation at 400× and 1000× magnification with an optical microscope (Leitz Laborlux S). Aspergillus section Nigri isolates were identified as uniseriates, biseriates (mainly isolates belonging to the A. niger aggregate) or A. carbonarius. Uniseriate isolates bear uniseriate conidial heads, while biseriates bear biseriate heads; among the latter, A. carbonarius isolates were identified at species level, whilst the other biseriate isolates were, on the whole, classified as A. niger aggregate. When present, isolates of section Circumdati were identified at species level [12,[47][48][49].
For every species, the number of Colony Forming Units per gram of fresh grape weight (CFU/g) was determined. The strains were preserved for further analyses and characterization by taking pieces of mycelium and substrate and placing them at -80 °C in tubes with a sterile 10% glycerol solution.
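The back-calculation from plate counts to CFU/g follows standard dilution-plating arithmetic. A minimal sketch is given below; the correction factors used (the initial two-fold dilution, the 100 µL plated volume and the assumption that the crushed grape suspension is roughly 1 g/mL) are illustrative assumptions, not values stated in the protocol.

```python
def cfu_per_gram(colony_count, dilution, plated_volume_ml=0.1,
                 sample_dilution=2.0):
    """Estimate Colony Forming Units per gram of fresh grape weight.

    colony_count    : colonies counted on one plate
    dilution        : serial dilution factor of the plated suspension
                      (e.g., 1000 for the 1:1000 dilution)
    plated_volume_ml: volume spread on the plate (100 uL in this protocol)
    sample_dilution : initial two-fold dilution of the crushed sample in
                      peptone solution (grape material treated as ~1 g/mL,
                      an assumption made only for this illustration)
    """
    return colony_count * dilution * sample_dilution / plated_volume_ml

# Mean of three replicate plates at the 1:1000 dilution (invented counts):
counts = [12, 15, 9]
mean_count = sum(counts) / len(counts)
print(f"{cfu_per_gram(mean_count, dilution=1000):.2e} CFU/g")
```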
Meteorological and Climatic Data
Climatic data were gathered from meteorological stations located close to the vineyards from which the samples were collected. In Northern Italy, the data were obtained from the meteorological station of "C.R.A. - Centro di Ricerca per la Viticoltura", Conegliano (TV); in Central Italy, the climatic data were kindly furnished by M. D'Arcangelo (C.R.A. - Unità di Ricerca per la Viticoltura, Arezzo); in Southern Italy, the climatic data were obtained from the meteorological station of "Associazione Consorzi di Difesa della Puglia". Rainfall, maximum, mean and minimum daily air temperature and relative humidity were recorded and used in the subsequent statistical correlation analyses with the occurrence of Aspergillus species of sections Nigri and Circumdati and OTA.
Statistical Analyses
Several analyses were carried out in order to check whether there were statistically significant correlations between OTA content, fungal contamination, geographic areas and climatic conditions. The Pearson correlation coefficients, the Chi-square analysis and the linear regression (r²) were computed using Statistica 7.1 software (StatSoft, Inc., Tulsa, OK).
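As a concrete illustration of these three tests, the following sketch performs the same analyses with SciPy instead of Statistica; all input values are invented for the example and do not come from the study.

```python
import numpy as np
from scipy import stats

# Illustrative arrays: one value per Apulian sample (hypothetical data).
cfu_carbonarius = np.array([1.2e3, 4.5e3, 8.0e2, 2.1e4, 6.3e3])  # CFU/g
ota_ug_per_l    = np.array([0.4,   1.1,   0.1,   9.2,   2.0])    # ug/L

# Pearson correlation between fungal load and OTA concentration.
r, p_value = stats.pearsonr(cfu_carbonarius, ota_ug_per_l)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# Chi-square test on OTA occurrence (positive / negative counts) by region.
#                         North  Center  South   (illustrative counts)
contingency = np.array([[14,     2,      50],    # OTA-positive samples
                        [66,     58,     61]])   # OTA-negative samples
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")

# Simple linear regression and its r^2.
res = stats.linregress(cfu_carbonarius, ota_ug_per_l)
print(f"slope = {res.slope:.2e}, r^2 = {res.rvalue**2:.2f}")
```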
Conclusions
The occurrence of OTA and OTA-producing fungi in Italian grapes and in grapes from other vinegrowing countries has been reported by several studies, most of which have been carried out in the last few years. The results of the present work confirmed the high risk of OTA and fungal contamination in Southern Italy, where in very dry and hot years, as demonstrated for 2003, the OTA concentration in grapes can reach very high levels. On the other hand, the presence of OTA-producing fungi in the Central and Northern regions did not lead to the production of the toxin at levels dangerous for human health in the years studied. On the whole, OTA concentration levels over the legal limit were found in 2.5% of the more than 200 grape samples tested over five years.
OTA is a problem that originates in the vineyard. Black aspergilli, the main fungi responsible for the presence of OTA in grapes, are naturally present in vineyards, and the fungi can be isolated from bunches starting from the early stages of berry development, although their incidence becomes more relevant from early veraison onwards. Despite the widespread occurrence of OTA in various types of wine, there is limited information on the ability of black aspergilli to infect berries and produce OTA in different grape varieties. The ecological parameters of black aspergilli are not completely known, and this knowledge is critical for the development of predictive risk models for the contamination of grapes [9].
Climatic conditions and geographical location are important factors favoring OTA accumulation in grape berries. Berries damaged by abiotic or biotic causes provide preferential entry points for black aspergilli, whose efficiency in producing OTA then increases [50]. High OTA levels occur in grapes severely damaged by the grape moth, Lobesia botrana, particularly in the Mediterranean areas [9,51].
Control measures for toxigenic mycoflora in the vineyards must consider these critical control points. Moreover, it is necessary to monitor OTA in grapes and their derivative products, especially in areas with the highest risk of occurrence of the toxin and the OTA-producing fungi.
The authors are grateful to A. Zanzotto, L. Lovat and F. Autiero (CRA-VIT, Conegliano, TV, Italy) for assistance with the analyses; to M. D'Arcangelo (CRA-VIC, Arezzo, Italy) for furnishing part of the climatic data; to M. D'Arcangelo (CRA-VIC, Arezzo, Italy), L. Tarricone, G. Masi and A. Coletta (CRA-UTV, Bari, Italy) for collecting the samples from Tuscany and Apulia, respectively; and to R. Davison for the revision of the English text. | 2014-10-01T00:00:00.000Z | 2010-04-01T00:00:00.000 | {
"year": 2010,
"sha1": "416a292a0292df8c55257a78f1829a74df81f422",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/2/4/840/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "416a292a0292df8c55257a78f1829a74df81f422",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
29698143 | pes2o/s2orc | v3-fos-license | Fusion Approaches for Land Cover Map Production Using High Resolution Image Time Series without Reference Data of the Corresponding Period
Optical sensor time series images allow one to produce land cover maps at a large scale. Supervised classification algorithms have been shown to be the best option for producing such maps automatically with good accuracy. The main drawback of these methods is the need for reference data, the collection of which can introduce significant production delays. As a result, the maps are often available too late for some applications. Domain adaptation methods seem to be efficient for using past data in land cover map production. Building on this idea, the main goal of this study is to propose several simple schemes for fusing past data in order to overcome the current land cover map production delays. A single classifier approach and three voting rules are considered to produce maps without reference data of the corresponding period. These four approaches reach an overall accuracy of around 80% with a 17-class nomenclature using Formosat-2 image time series. A study of the impact of the number of past periods used is also carried out. It shows that the overall accuracy increases with the number of periods used, and that the proposed methods require at least two or three previous years of data.
Introduction
Land cover maps provide key information in many environmental and scientific applications. They can be used to monitor deforestation [1] or urban pressure [2] over croplands, for instance. Satellite imagery and, by extension, satellite image time series allow one to produce accurate land cover maps. In the past few years, the number of space-borne optical sensors has increased, making a wealth of useful data available for land use monitoring. The Landsat sensors provide useful data for land cover monitoring, especially Landsat 5 and 8, which are used to produce accurate land cover maps [3]. The Sentinel-2 system (S2), a pair of twin satellites dedicated to continental surface monitoring, is already providing high quality data for land cover maps. The first results obtained by using single date S2 images are promising [4], and by using S2 image time series, the performance should increase.
Supervised classification algorithms are the state-of-the-art approach to producing land cover maps automatically [5]. Among these, the Support Vector Machine (SVM) and the Random Forest (RF) classifiers are the most widely used. These methods nevertheless require a large amount of reference data, that is, samples for which the land cover class is known. This requirement is the main cause of the delay in producing a map. Indeed, the reference data can come from different sources. Field surveys to collect in situ data are tedious and expensive. On the other hand, the use of topographical databases is possible, but these data are usually available several years after the collection date. Generally, the delay depends on the source of reference data and also on the size of the mapped area.
Despite these delays, reference data constitute precious information gathered over the years. In standard supervised classification, the reference data are seldom reused for labeling samples, as they are valid only for the period (the reference year, for instance) corresponding to their acquisition, because the landscape changes over time. However, considering reference data over the years allows one to know the landscape history. Therefore, training a classifier using imagery of the current period and reference data of a previous period can lead to low quality land cover maps. In the case of annual land cover map production, images of the current year are used to produce a land cover map, but the reference data may have been collected more than one year earlier.
One could consider training a classifier with imagery corresponding to the period of the reference data and then apply the trained classifier to the imagery of the current period. Unfortunately, due to climate conditions (temperature, precipitations), cloud cover and other factors, the image time series of two different years can have different temporal patterns leading again to bad classification accuracy.
Some solutions to correct this kind of distortion in the data have been proposed in the literature: the Domain Adaptation (DA) techniques. In these approaches, the source domain is defined as known, i.e., corresponding reference data are available, and the target domain as unknown. The aim is to reduce the shift between the data or tuning the classifier parameters to use reference data of the source domain in the target domain. The state of the art DA methods are presented in Section 1.1, but they need some adaptations to be applied to the problem at hand. In the particular case addressed in this paper, each previous period with reference data must be considered as a source domain. In this work, instead of adapting DA techniques, we take a more pragmatic yet efficient approach in order to exploit several previous periods for the production of land cover maps without reference data for the current period. To this end, simple fusion schemes are proposed. Two studies are done in this paper, one regarding the performance of the fusion methods with a high number of previous periods and a second one to evaluate the sensitivity to the number of periods used.
The remainder of the paper is organized as follows: a short review of state-of-the-art DA methods is given in Section 1.1; the materials and methods used are presented in Sections 2.1 and 2.3, respectively; finally, the results are presented in Section 3 and discussed in Section 4 before conclusions are drawn in Section 5.
Short Review of Domain Adaptation Methods
As introduced in the previous section, the use of data from previous periods for the classification of data of the current period is addressed in this paper. This problem can be reduced to a distortion correction between past and current periods' image time series. In the literature, the methods allowing one to correct important distortions between pairs of datasets are called Domain Adaptation (DA). The distorted datasets must be at least related.
In DA, a source domain D_S is used to predict a target domain D_T. For each domain, a probability distribution is defined, P_S(X, Y) and P_T(X, Y) for D_S and D_T, respectively, where X is the input variable vector, i.e., the image time series described by spectral bands and derived indices, and Y is the output variable associated with a set of classes, i.e., the map nomenclature. D_S is the domain where enough reference data are available, and D_T has little or no reference data. The goal of a DA method is to adapt a classifier trained using D_S data, or the data directly, to predict the D_T samples.
A DA survey has recently been done by Tuia et al. [6]. In this survey, the authors present the most widely used DA methods in remote sensing. They define four categories of algorithms. The invariant feature extraction category is composed of algorithms similar to those used in feature selection approaches, for instance Principal Component Analysis (PCA), which reduces the data dimensionality by keeping the most relevant features. In DA, the aim is to determine the features that suffer the least from the distortion between the two domains, by determining a projection matrix. With the set of invariant features, the distortion can be estimated and a common space can be created. In this space, the features extracted from D_S and D_T are jointly used, as one domain. This common space is therefore stable, and it is possible to use the same classifier on both domains. In the literature, some works present different invariant feature selection methods, for instance [7,8], where the authors use several distance measures to select the features.
This approach is mostly used to correct two kinds of distortions between the source and the target domains. The first is the variations of the illumination or the sensor angle of view. In our case, the image time series used are acquired with constant viewing angles and are radiometrically corrected and expressed in surface reflectance, which makes them invariant with respect to these issues. The second use case is when only a small part of the image is labeled and the target domain is another area of the same image. In our case, this problem is not considered, as a split of the area of interest into eco-climatic areas allows one to achieve good performance [9]. In an eco-climatic area, the classes' time profiles are more homogeneous than in an entire image. In addition, a drawback of feature extraction in our case is that the increasing number of source domains requires the invariant features to be the same for each source domain. If the distortions between the different source domains are too important, a loss of discriminative information can occur, and therefore, a classifier loses generalization.
The second category of approaches deals with data distribution adaptation. Like the first category of algorithms, this approach aims to adapt the data. The main difference is that these methods keep the original features and try to create a new space where the shift between P_S(X, Y) and P_T(X, Y) is reduced. In this new space, the two domains are treated equally. A first approach considered in the literature is the use of a kernel matrix to project one domain into the other. There are many methods for matrix estimation [10,11]. A second approach aims to align the data distributions. These methods use histogram matching [12] or distance methods, such as Dynamic Time Warping (DTW) [13].
The main drawback of this approach, in our case, is that the data dimensionality will increase for each new source domain, leading to huge transformation matrices and statistical estimation issues. In the case of alignment methods, like DTW, for instance, the processing time will increase greatly since the process must be carried out between each source and target domain. In addition, the most efficient methods use the target domain labeled samples, which are not available in our case.
The two previous categories are used as a pre-processing step allowing one to use standard classifiers to predict the samples of the target domain. In our application case, these two approaches share some drawbacks, since they often rely on similarity, covariance or dependence measures, or on minimization functions. Their efficiency depends on the data dimension, which can increase considerably if many previous periods are used. In the literature, feature extraction is sometimes used before applying the data distribution algorithm to improve the performance [8].
The third category of approaches uses semi-supervised algorithms. For these methods, it is mandatory that the features and the nomenclature are the same in both domains. A classifier is defined as semi-supervised when target domain data are used to change the decision rules of a supervised classifier trained on the source domain. For instance, in the work of Bruzzone et al. [14], a classifier is trained on the source domain, and the sample distributions of the target domain are used to tune the parameters of the classifier. Another approach is to use a cascade classifier [15], more often using radial basis function neural networks, which include target domain samples in the learning step.
The main drawback of this approach, in our case, is that the training of a semi-supervised classifier is often an iterative process. At each iteration, target domain samples are added to the training sample set. This training could be long, and it is mandatory to train the classifier again when a new period is considered. The main difficulty of this approach is the target domain samples' selection, by similarity or clustering, which will have a direct impact on the classifier performance. The naive solution to this sample selection problem requires user interaction.
The last category is active learning. This approach also aims at adapting the classifier. It can be considered as a particular case of the semi-supervised approach where the target domain samples' selection is done by the user. Often, the target domain samples are labeled by hand, by visual interpretation (this is the meaning of "active"). As the sample selection quickly becomes time consuming and the learning becomes costly, the samples must be well-chosen. An active learning algorithm ends when the user is satisfied with the results. Many active learning algorithms exist in the literature, but as they are not automated, they are not considered in this work.
The interested reader is invited to refer to the survey [6] for more information. In addition, a more complete survey was done by Patel et al. [16], considering all DA methods used in machine learning, with applications to computer vision.
Looking at the particular case addressed in this paper, none of these algorithms seems appropriate. Indeed, they do not take advantage of several previous periods to predict the current one. In this particular case, each previous period must be considered as a source domain, and the target domain is the current period image time series. This means that it would be necessary to perform domain adaptation for each pair of past and current periods, which would be very costly in terms of processing time and complexity of use. As a consequence, the existing DA methods are not considered here, but this work should be a first step towards adapting existing DA methods to a multi-source domain problem. In this paper, the usefulness of multiple source domains will be shown. To this end, several fusion schemes, used in some DA works as a post-processing task, are proposed to avoid the costly domain adaptation process.
Optical Data
To study the contribution of previous periods to land cover map production, a large number of periods is required. To this end, 8 years of Formosat-2 images are available for the same area, near Toulouse in the southwest of France, shown in Figure 1. Formosat-2 has the advantage of a high resolution of 8 m and a high revisit cycle of one day, over a 24 km × 24 km area. In comparison, among the sensors currently most used for land cover mapping, Landsat 8 has a 30-m resolution and a revisit cycle of 16 days, and Sentinel-2, when fully operational, will provide 10-m resolution images every 5 days.
The Formosat-2 images are processed with MACCS (Multi-sensor Atmospheric Correction and Cloud Screening) [17] to correct atmospheric effects and provide cloud, cloud shadow and saturation masks. Formosat-2 has 4 spectral bands (blue, red, green and NIR) to which two spectral indices, the Normalized Difference Vegetation Index (NDVI) and the brightness, are added as image features. The time series is the concatenation of spectral reflectances, NDVI and brightness [3].
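A minimal sketch of the feature construction for one acquisition date is given below. The NDVI formula is standard; the brightness is computed here as the Euclidean norm of the four bands, which is one common definition, the exact formula used in the study being the one given in its reference [3]. The data are random placeholders.

```python
import numpy as np

def add_spectral_indices(blue, green, red, nir):
    """Derive NDVI and brightness from the four Formosat-2 bands.

    Inputs are surface-reflectance arrays of identical shape.
    """
    eps = 1e-10                         # avoid division by zero
    ndvi = (nir - red) / (nir + red + eps)
    brightness = np.sqrt(blue**2 + green**2 + red**2 + nir**2)
    return ndvi, brightness

# Stacking one acquisition date: 4 reflectances + 2 indices = 6 features.
blue, green, red, nir = (np.random.rand(100, 100) for _ in range(4))
ndvi, bright = add_spectral_indices(blue, green, red, nir)
features = np.stack([blue, green, red, nir, ndvi, bright], axis=-1)
print(features.shape)  # (100, 100, 6)
```

The full time series is then the concatenation of these six feature planes over all dates of the temporal grid.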
As the number of usable images varies over the periods, mainly due to cloud cover, a temporal interpolation is performed. To this end, a regular time grid is defined, with a time gap of 14 days. This interpolation does not induce a loss of accuracy [3] and allows one to have the same temporal sampling for every period. The time grid begins 1 October of the previous year and ends 31 December of the current year. This time slicing corresponds to the phenology cycle and the agricultural season in the study area. Therefore, at the end of these pre-processing steps, 7 time series (from 2007 to 2013) with identical dates for every period are available.
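The gap-filling can be illustrated per pixel and per band: the cloud-free acquisitions are resampled onto the regular 14-day grid. The sketch below assumes linear interpolation (the interpolation type is not stated here, only that it does not degrade accuracy per [3]); the dates, values and mask are invented.

```python
import numpy as np

def gapfill_to_grid(dates, values, valid_mask, grid):
    """Linearly resample one pixel's time profile onto a regular grid.

    dates      : acquisition dates as day-of-series integers
    values     : reflectance (or index) values at those dates
    valid_mask : False where the pixel is flagged cloud/shadow/saturated
    grid       : target dates, e.g., every 14 days over the season
    """
    d = np.asarray(dates)[valid_mask]
    v = np.asarray(values)[valid_mask]
    # np.interp clamps outside the valid range, which also handles
    # missing acquisitions at the start or end of the series.
    return np.interp(grid, d, v)

dates = [3, 20, 41, 66, 90, 118]
red   = [0.08, 0.07, 0.30, 0.11, 0.09, 0.25]    # 0.30 is a cloudy date
mask  = np.array([True, True, False, True, True, True])
grid  = np.arange(0, 120, 14)                   # 14-day sampling
print(gapfill_to_grid(dates, red, mask, grid))
```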
Reference Data
The reference data used in this work were obtained by field surveys. These surveys were done every year, so for each time series, corresponding reference data are available. The reference data are randomly split into two independent datasets, where 50% of the samples of each class are used for training and 50% for validation. This split was done 10 times for each period in order to compute average performance over several runs of the experiments, as well as confidence intervals.
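The split procedure can be sketched as follows: for each class, half of the samples are drawn at random for training and the rest are kept for validation, and the whole operation is repeated ten times. The implementation below is a hypothetical reconstruction of that procedure, not the authors' actual code.

```python
import numpy as np

def stratified_half_splits(labels, n_runs=10, seed=0):
    """Yield (train_idx, valid_idx) pairs: per class, 50% / 50%."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    for _ in range(n_runs):
        train, valid = [], []
        for c in np.unique(labels):
            idx = rng.permutation(np.where(labels == c)[0])
            half = len(idx) // 2
            train.extend(idx[:half])
            valid.extend(idx[half:])
        yield np.array(train), np.array(valid)

labels = np.repeat([0, 1, 2], [100, 40, 12])     # imbalanced toy classes
for train_idx, valid_idx in stratified_half_splits(labels, n_runs=2):
    print(len(train_idx), len(valid_idx))
```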
The reference data are composed of the 17 classes shown in Table 1. This set of classes includes winter (wheat, barley, . . . ) and summer (maize, sunflower, . . . ) crop classes, natural classes (water, forest, . . . ) and artificial surfaces. As can be seen in Table 1, the number of samples available for each class varies greatly. We therefore have a stratified sample, but one that is not exactly proportional to the distribution of classes in the entire study area, although the ranks are preserved. Indeed, the field campaign covers only a small part of the study area, as shown in Figure 1. There are majority classes represented by a large number of samples, such as "wheat" and "broad-leaved tree", and also minority classes with very few samples, like "wasteland" or "barley". This wide nomenclature is not restricted to any specific application, and therefore conclusions for general land cover mapping may be drawn from the results.
Methodology
Supervised classification algorithms outperform other approaches for land cover mapping using satellite image time series. Random Forests (RF) [18] achieve very good performance for these tasks [9]. The general workflow can be described as follows:

1. Data pre-processing
2. Classifier training, using labeled data to define the decision rules
3. Classification, using a trained classifier to predict the classes of unlabeled data
4. Post-processing

In the ideal case, reference data would be available for the current period and a good quality land cover map could be obtained. This ideal case (called standard supervised in the following) will be used in the experiments as an upper bound for the performance. At the other end, using a classifier trained with images and reference data from a previous period will produce a lower quality land cover map since, as explained above, the image time series for the current period may be different from the one of the period used for the training. We will use this naive case to define the lower bound of the classification performance.
The fusion approaches presented in this work aim at increasing the quality of the maps from the naive case towards the ideal standard supervised case.
In this section, the global workflow is presented first, and then, the different methods used are detailed. Finally, the validation procedure is explained.
Global Approach
The data preparation is described in Section 2.1 and is the same for all the methods. The training step, shown in Figure 2, requires an image time series and reference data to learn the decision rules; the dedicated training set of reference data is used. The generated output model contains the decision rules. The learning step can be repeated as many times as different datasets are available. For each pair of image time series and reference data, a classifier can be trained, using the N_s samples of the training set. Each sample has the same number N_f of features defined by the time series (see Section 2.1). In our case, each previous period yields a classifier. Another possibility is to use several previous periods together to train a single classifier. In this case, the number of training set samples is the sum of the N_s samples over the p previous periods considered, and N_f is the same for each period (see Section 2.1). In both cases, the procedure is the same: a training set is built using image time series pixels and their corresponding labels, and this training set is used to train a classifier.
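The sketch below illustrates both options, per-period classifiers and the pooled single classifier, using scikit-learn's RandomForestClassifier as a stand-in for the RF implementation actually used (the software stack is not named here); the data are synthetic placeholders shaped as (samples × features).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One (n_samples_p, n_features) array and one label array per past period;
# n_features is identical across periods thanks to the common 14-day grid.
rng = np.random.default_rng(1)
X_periods = [rng.random((50, 6)) for _ in range(3)]
y_periods = [rng.integers(0, 4, 50) for _ in range(3)]

# Option 1: one classifier per previous period (used by the voting schemes).
per_period_models = [
    RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    for X, y in zip(X_periods, y_periods)
]

# Option 2: single classifier (SC), pooling all periods into one training set.
X_all = np.vstack(X_periods)
y_all = np.concatenate(y_periods)
single_model = RandomForestClassifier(n_estimators=100, random_state=0)
single_model.fit(X_all, y_all)
```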
The next step is the classification, shown in Figure 3. This step requires a classifier trained in Step 1 and one time series as input. The classifier predicts the labels of the time series and also gives access to the number of trees that voted for each label. When converted to proportions, these values can be interpreted as probabilities for each class. The probability of the majority class is called the confidence. In the standard supervised case, the same time series is used for the training and the classification steps. All the available classifiers can be used to classify the image time series of the current period, producing land cover maps, confidence values and probabilities for each class. These will later be used in the fusion approaches to derive the final land cover map. The last step is post-processing, shown in Figure 4. This step groups together different optional tasks. For the standard supervised classification, the validation of the labeled image is the only task. If several classifiers are used, fusion processing is mandatory to produce a unique land cover map.
This workflow allows one to generate a land cover map from an image time series. More specific cases of use are explained in the following part of this section.
Fusion Methods
Two main fusion approaches are considered in this work. The first approach is based on the work of Flamary et al. [19]. In their work, the authors use several previous periods to train different classifiers, and the best results are obtained by training a unique classifier with all previous periods. In our work, a single RF is trained, using a training set that pools the samples of all the previous periods. The aim is to feed the classifier with all the variability present in previous periods so that no particular period is favored. The second approach uses the fusion of land cover maps. Every previous period (i.e., a set of N_s^p samples for a given period p) is used to train a classifier, and then each classifier is applied to the current period image time series. The fusion of the maps produced is performed in a post-processing step using voting. Three voting methods are proposed (a sketch of all three is given after this list):

1. Majority Voting (MV): each map votes for a label, and the majority label is chosen. In the case of a tie, a non-decision label is chosen.
2. Confidence Voting (CV): each voter selects a class, and the confidence is used as a weight to compute a score per label. The label with the highest score is chosen. This approach considers only the labels chosen by the classifiers.
3. Probability Voting (PV): each voter uses the probability values to give a weight to every possible label. The label with the highest score is chosen.
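A minimal sketch of the three voting rules follows, relying on scikit-learn's predict_proba, whose outputs for an RF are the tree-vote proportions described above. The non-decision label (-1 here) and the toy data are arbitrary choices made for the example; it also assumes all classifiers expose the same class ordering.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy per-period classifiers (synthetic stand-ins for the real models).
rng = np.random.default_rng(0)
models = [RandomForestClassifier(n_estimators=50, random_state=i).fit(
              rng.random((60, 6)), rng.integers(0, 4, 60))
          for i in range(3)]

def fuse(models, X, rule="MV", nodata=-1):
    """Fuse per-period RF predictions on current-period pixels X.

    MV: one vote per map, ties mapped to the non-decision label.
    CV: votes weighted by each classifier's confidence.
    PV: per-class probabilities summed over all classifiers.
    """
    classes = models[0].classes_
    probas = np.stack([m.predict_proba(X) for m in models])  # (M, n, K)

    if rule == "PV":
        return classes[probas.sum(axis=0).argmax(axis=1)]

    n, K = probas.shape[1], probas.shape[2]
    hard = probas.argmax(axis=2)            # each model's chosen class
    conf = probas.max(axis=2)               # and its confidence
    scores = np.zeros((n, K))
    for m in range(len(models)):
        w = conf[m] if rule == "CV" else np.ones(n)
        scores[np.arange(n), hard[m]] += w

    labels = classes[scores.argmax(axis=1)]
    if rule == "MV":                        # flag ties as non-decision
        top2 = np.sort(scores, axis=1)[:, -2:]
        labels = np.where(top2[:, 0] == top2[:, 1], nodata, labels)
    return labels

X_now = rng.random((10, 6))
for rule in ("MV", "CV", "PV"):
    print(rule, fuse(models, X_now, rule))
```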
Since the causality (the temporal order of the periods) is not used, in order to increase the amount of data available, it is possible to produce the land cover map for period N fusing classifiers trained with data of periods N − n and N + n, where n represents other periods. Therefore, each of the 7 years of the dataset will be considered as the current period and the other 6 as the previous periods.
This study is split into two parts. The first one uses all the available periods to evaluate the performance of each method. In the second part, the impact of the history size (the number of previous periods) is evaluated. For this part, each period is again considered as the current period, and the history is created by using all the combinations of 2, 3, . . . , up to 6 periods.
Validation Procedure
The land cover maps produced with each approach are validated using standard metrics. A confusion matrix [20], where the rows are predicted labels and the columns the reference labels, is computed using the validation sets. In this matrix, the diagonal elements represent the number of correctly classified pixels, and the rest of the matrix is misclassification. The Overall Accuracy (OA) is the sum of the diagonal elements divided by the sum of all elements of the confusion matrix. For each class, the F1-score (Fscore) is considered, which is the harmonic mean of precision and recall.
• The recall (also called producer's accuracy) is computed for each class: the column of the confusion matrix corresponding to that class is considered, and the number of correctly classified pixels is divided by the total number of reference data pixels of that class.
• The precision (also called user's accuracy) is computed by considering the row of the confusion matrix: it is the fraction of correctly classified pixels with regard to all pixels predicted as this class.
As voting methods are considered, a particular label is used for non-decision; it is assigned when there is a tie in a vote. This label must not be included in the metrics computation [21], as it does not represent a classification error. It is therefore necessary to study the ratio of non-decided pixels in the vote output to put the metric values into perspective.
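The following sketch computes the OA and per-class Fscore from a confusion matrix built with the convention stated above (rows: predicted labels, columns: reference labels), excluding the non-decision label and reporting its ratio separately; the label values are arbitrary.

```python
import numpy as np

def evaluate(y_true, y_pred, nodata=-1):
    """Overall Accuracy and per-class F1-score, excluding non-decision."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    undecided = np.mean(y_pred == nodata)
    keep = y_pred != nodata
    y_true, y_pred = y_true[keep], y_pred[keep]

    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[idx[p], idx[t]] += 1            # rows: predicted, cols: reference

    oa = np.trace(cm) / cm.sum()
    recall = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)     # producer's acc.
    precision = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)  # user's acc.
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-10)
    return oa, dict(zip(classes, f1)), undecided

oa, f1, undecided = evaluate([0, 0, 1, 2, 1], [0, 1, 1, -1, 1])
print(f"OA={oa:.2f}, undecided={undecided:.0%}", f1)
```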
Results
In this section, the results obtained with the different methods are presented. This section is organized as follows: first, the performances of the two baselines are analyzed, then the four fusion methods are compared, and finally, the analysis of the impact of the history size on the performance is done.
Baseline Configuration Analysis
The baseline configurations are defined by the use of a classifier trained with the image time series of one period, which is then applied to the same period (standard supervised case) or to another period (naive baseline). Table 2 presents the OA obtained for all these combinations of periods. In this table, the rows represent the period of the data used to train the classifier and the columns the period of the image time series used to produce the map, i.e., the current period. Each value is the average of the 10 runs (different draws of training and validation samples), and the 90% confidence intervals are shown. The diagonal represents the standard supervised case.
As expected, the standard supervised OA is very good with narrow confidence intervals and similar values for all the available years. In contrast, the naive baseline yields lower performance, and in most of the cases, OA is below 70% with large disparities between the different cases. The OA gap between the standard supervised and the naive baselines is larger than 20%, and therefore, there is room for improvement.
It is interesting to analyze the variability of the performance of a given classifier when applied to different periods. For instance, the classifier trained with the 2012 image time series yields an OA of 74% when applied to the 2010 image time series, but it can produce maps with OA as low as 51% for other periods. This means that changing the input time series for a given classifier does not have the same impact on the predictions as changing the decision rules, i.e., the classifier trained on a given time series, to predict a new time series. As shown in the table, the performance for a given time series (column), except for the standard supervised case, is rather stable. This effect is due in part to the robustness of RF to the noise present in the labeled data [22]. The robustness of RF is also shown by the very narrow confidence intervals.
In the following sections, the naive baselines will be summarized as the average values of the columns of Table 2 removing the diagonal.
Results of the Fusion Strategies
Two metrics are used for the evaluation: the OA for a global evaluation and the Fscore of each class. The OAs obtained by the two baselines and the four fusion methods are plotted in Figure 5. Figure 5 shows that the four fusion methods and the naive baseline curves follow the same trends. This proves the contribution of history, which provides a substantial amount of useful information. Using this information year by year (naive baseline) is not efficient, but by combining all periods, the performance increases by around 20%. The three voting methods yield similar results, and their differences lie within the confidence intervals. The single classifier approach results are slightly above the other methods, which can be interpreted as follows: fusing the history data before training the classifier yields better results than a post-classification fusion. However, the main drawback of the single classifier is that the training with all periods has to be performed again every time a new period is available. This is not the case for the other methods, for which only the training of the newly available period has to be performed. The standard supervised baseline is often 10% better than the fusion methods, but for the years 2010 and 2012, the single classifier performs better than the standard supervised case. Table 3 shows the Fscore values obtained with the six methods. The average values are computed by using all the periods for each method. The aim of this table is to show the behavior of the methods for the different classes of the nomenclature in order to give further insight. The values obtained by the standard supervised case (Sup) are as expected. Indeed, the wide variations of the Fscore values, such as broad-leaved tree at 93% and barley at 30%, are usual for the nomenclature considered: wheat and barley have very close temporal and spectral signatures, and, barley being a minority class, it is often classified as wheat; the same issue occurs with sorghum classified as maize.
Three categories of classes can be identified by looking at the Fscore variations:

1. Classes for which the Fscore is similar to the standard supervised case, with narrow confidence intervals. These classes are: broad-leaved tree, pine, wheat, maize and sunflower.
2. Classes for which the Fscore is lower than for the standard supervised case and the confidence intervals are wide. These classes are: rapeseed, artificial surfaces, wasteland, river, lake, gravel pit and grass.
3. Classes for which the Fscore is very low, with narrow confidence intervals. These classes are: barley, sorghum, soybean, fallow lands and hemp.
This categorization reflects a known situation, introduced in Section 2.2. Indeed, the first category of classes is usually well predicted for the study area, as these classes are representative of the land use of the region (majority classes). In contrast, the third category of classes is often confused by the classifiers with majority classes, such as wheat and barley for instance. Other classes, such as hemp or soybean, are not representative of the crops in the studied region, so they are minority classes. This categorization could be used to show the limits of the nomenclature, i.e., the confusions made by the classifier regardless of the reference dataset used. A possible extension of this categorization is to associate a weight with each category and to use this to reduce the imbalance between minority and majority classes. Another extension could be to propose automatically, using these categories and logical rules, a fusion of the third category classes with the other classes: for instance, wheat and barley could be fused into a winter cereal class. This kind of class fusion should reduce confusion and therefore improve the performance.
Impact of the History Size
In this section, we study the impact of the history size (the number of previous periods) on the performance of the methods. For this experiment, seven periods are available; therefore, six periods can be combined to create the history dataset. For the particular case where only one period is available, only the naive approach is possible. The other methods need at least two periods of history data. Figure 6 shows the OA obtained as a function of the history size. The x-axis is the number of periods in the history, and the y-axis is the OA value. The OA of the standard supervised case (Sup) and naive case (NBM) are extracted from Table 2 and give the lower and upper bounds. For the other history sizes, all possible combinations of periods are used, and the average is plotted. As one could have expected, performance increases with increasing history size. The two weighted voting methods provide similar results, and the disparity decreases when the number of periods increases. The single classifier is better than the other fusion methods, and the OA values increase faster than the ones of voting methods. With four periods of past data, the SC is able to provide a map with 80% accuracy. This accuracy is reached by the voting methods when six previous periods are used. The subset of previous period data used has a minor impact on the performance, as shown by the very narrow confidence intervals.
The majority voting approach has an unexpected behavior: when only two periods are used, the majority voting produces a map with 85% OA and decreases to 80% when three periods are used. This is due to the amount of undecided pixels, which are not taken into account for the OA computation. Figure 7 shows the ratio of correctly classified, incorrectly classified and undecided (not classified) pixels in the validation step. The x-axis represents the history size and the y-axis the ratio values. This figure shows that the two weighted voting approaches produce a small amount of undecided pixels, while the majority voting approach always produces over 10% of these pixels.
A high number of undecided pixels is a major drawback. Furthermore, the ratio of pixels is always between the limits defined by the baselines, except for the majority voting. Therefore, the single classifier seems to be the best method to produce a map with past data.
Discussion
The efficiency of the proposed methods is shown by comparing their performance to the baselines. The proposed fusion methods yield similar performance, and they are closer to the standard supervised case and much better than the naive baselines.
In addition, the OA trends in Figure 6 show the contribution of using different previous periods to produce the land cover map without reference data of the corresponding period. The single classifier seems slightly better than the other methods, but it requires training the classifier again, using all past periods, for every new period available. In contrast, the voting methods yield similar OA using previously trained classifiers, which is a great advantage in the case of large-scale land cover map production, since the previous time series do not need to be available once the classifier has been trained. The proposed voting methods are simpler than the DA methods, as they only require the data pre-processing procedure explained in Section 2.1. Consequently, increasing the number of previous periods only impacts the processing time and not the algorithm complexity. The methods are therefore limited only by the number of previous periods available, as explained in the previous section. Another limit of these methods is related to the nomenclature, as the minority classes are incorrectly predicted, as shown by the Fscores in Table 3. Errors in the reference data can also induce confusion of the minority classes in the voting methods. The majority voting produces many undecided pixels, which is a major drawback.
However, the accuracy obtained is very high, which could be useful for some of the DA techniques in the literature based on semi-supervised or active learning, which require labeled samples in the target domain. The small number of errors in this labeled image should reduce the number of training iterations. The pixels labeled by majority voting could also be used to reduce the amount of data in the projection matrix estimation or to align the data distributions, for instance. Another perspective would be to use the history to evaluate the methods' capabilities with regard to the nomenclature, using the Fscore variations and the confidence intervals presented in Section 3, and to propose a new voting system.
The voting methods and the single classifier give very similar results. It is therefore interesting to give some insight in terms of algorithmic complexity. The classification complexity, O(T_c), i.e., the application of the decision rules, is the same for each method. A first case to consider is the year the process begins. In the case of the voting methods, it is necessary to train N classifiers, each with a cost O(T_l) proportional to the amount of input samples. The cost of training all required classifiers is therefore O(N × T_l). In the case of the single classifier, the complexity is also O(N × T_l), because the samples of all available periods are used. Hence, for the first year, the learning time is equivalent. For the single classifier, the cost of the whole procedure is finally O(N × T_l + T_c). For the voting methods, it is necessary to carry out one classification per available period and also to perform the fusion operation, that is, a complexity of O(N × T_l + N × T_c + T_f), with T_f the fusion complexity. In this case, the processing time for the voting methods is greater than the one required for the single classifier. The second case represents the following years. In this case, the models required for the voting methods have already been computed, and it is only necessary to train the classifier for the newly available period. We therefore have a processing complexity of O(T_l + (N + 1) × T_c + T_f), whereas in the case of the single classifier, it is necessary to carry out the complete training again, including the new period. Therefore, the complexity is O((N + 1) × T_l + T_c). Since typically T_f < T_c << T_l, the voting methods are faster after the first year of processing. In addition, the voting methods do not require keeping the samples, but only the trained classifiers. This represents an advantage in terms of storage.
Conclusions
In this paper, we addressed the use of images and reference data from previous periods to produce the current period's land cover map when no current reference data are available for standard supervised classification. This work focused on showing the contribution of using data from multiple previous periods instead of only one, as is usually done in the domain adaptation literature. It constitutes a first step before adapting the DA methods to deal with multiple source domains. To this end, three voting methods were proposed: majority voting and two weighted votes, confidence voting and probability voting. The results show similar Overall Accuracy (OA), ranging from 75% to 85% depending on the dataset.
Another method was considered: a single classifier trained on data from several previous periods. This approach yields slightly better results than the others. The second part of this work evaluated the impact of the number of available previous periods on performance. As expected, the performance increases when more periods are available. The single classifier is slightly better than the other methods, as its performance increases faster when the history size grows. All these results are stable, as shown by the very narrow confidence intervals. They are encouraging and demonstrate the interest of using data from several previous periods. The single classifier requires keeping all the sample data available in order to retrain the decision rules, whereas the voting methods require only the previously trained classifiers. When working at very large scale, this can be a major drawback for the single classifier. Therefore, the voting methods represent a good alternative to DA methods for producing large-scale land cover maps without reference data from the same period.
Conflicts of Interest:
The authors declare no conflict of interest. | 2018-04-03T01:28:28.397Z | 2017-11-09T00:00:00.000 | {
"year": 2017,
"sha1": "81c12b32050afebb4fd2e3d410aa786f03a0c24d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/9/11/1151/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "50022c50e633356586989ddaa628ade1e0459f9d",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
210040975 | pes2o/s2orc | v3-fos-license | Effects of light intensity on growth and lipid production in microalgae grown in wastewater
Background Cultivation of microalgae in wastewater could significantly contribute to wastewater treatment, biodiesel production, and thus the transition to renewable energy. However, more information on the effects of environmental factors, including light intensity, on their growth and composition (particularly fatty acid contents) is required. Therefore, we investigated the biomass and fatty acid production of four microalgal species, isolated in the Northern hemisphere and grown at three light intensities (50, 150 and 300 μE m−2 s−1). Results Increases in light intensity resulted in higher biomass of all four species and, importantly, raised the fatty acid contents of both Desmodesmus sp. and Scenedesmus obliquus. Fourier-transform IR spectrometry analysis showed that the increases in fatty acid content were associated with reductions in protein, but not carbohydrate, contents. Assessment of fatty acid composition revealed that increasing light intensity led to higher and lower contents of oleic (18:1) and linolenic (18:3) acids, respectively. The microalgae consumed more than 75% of the nitrogen and phosphorus present in the wastewater used as growth medium. Conclusion The results show the importance of optimizing light intensities to improve fatty acid production by microalgae and their quality as sources of biodiesel. In addition, increases in fatty acid content are associated with decreases in protein content.
Background
The increasing demand for energy and the negative environmental impacts of fossil fuel use are prompting global searches for renewable and clean fuels [1]. Many researchers are studying microalgae-based biofuels as promising candidates to replace fossil fuels. Microalgae are a group of photosynthetic organisms that can produce organic molecules, including lipids, which can be used to generate biodiesel [2]. For biodiesel to be a viable fuel, the cultivation of algae must be cost-effective. Algal growth relies mostly on two nutrients: nitrogen and phosphorus [3]. Levels of these nutrients in wastewater, such as municipal wastewater, are often too high for safe environmental release, but they are expensive to remove [2,4]. Therefore, using municipal wastewater to grow algae may provide an efficient means to both clean the wastewater cheaply and generate biofuel.
Generally, increases in light intensity increase microalgal growth up to a photoinhibitory threshold, but both the strength of this effect and the threshold vary among species [5,6]. Light intensity also influences microalgal lipid production, which is of particular interest because lipids are the sources of biodiesel (as described below). However, increases in light intensity reduce lipid contents of some species [7], but promote or have no effect on lipid production in others [8,9]. Therefore, it is important to study the effects of light intensity on lipid production, on a species-by-species basis.
Microalgal biomass is mostly composed of lipids, carbohydrates, and proteins [10]. Therefore, if lipid contents increase there should be corresponding reductions in contents of carbohydrates, proteins, or both. Nitrogen starvation often reportedly leads to an increase in lipids and a decrease in carbohydrate content [11][12][13]. However, little is known about how light intensity affects the biochemical composition of microalgae, apart from the variable effects on lipid production mentioned above. Therefore, it is important to determine how the production of all three biochemical components changes with light intensity in order to optimize microalgal lipid production to generate biodiesel.
Biodiesel is produced from neutral lipids, primarily in the form of triacylglycerols, which contain three fatty acids linked by glycerol. Transmethylation of triacylglycerols results in fatty acid methyl esters (FAMEs), which make up biodiesel, and glycerol as a byproduct [11,12]. Fatty acid composition is an important factor to consider for the successful generation of biodiesel from algae (or any other biomaterial). For example, biodiesel with high amounts of polyunsaturated fatty acids can be readily oxidized due to the presence of double bonds in the fatty acid chains. In addition, biodiesel with high amounts of saturated fatty acids can solidify. Light may reportedly affect fatty acid composition and, therefore, biodiesel properties [13,14]. However, few studies have focused on the effects of light intensity on fatty acid composition. Therefore, we examined effects of three light intensities (50, 150 and 300 µmol m −2 s −1 ) on the biomass of four species of microalgae isolated in the Northern hemisphere and grown in wastewater. Using Fourier-transform IR spectrometry (FTIRS) analysis, we also examined the relative abundance of lipids, carbohydrate and proteins under each of the treatments. We also evaluated fatty acids content and profile using gas chromatography.
Effects of light intensity on biomass production
The highest biomass we recorded in our 8-day cultivations of the four species of microalgae was 1.1 g/L, for Desmodesmus sp. grown at 300 μE m−2 s−1 light intensity (Fig. 1). The biomass of Desmodesmus sp. cultures at this time point was positively correlated with the light intensity. However, increasing the light intensity from 150 to 300 μE m−2 s−1 did not significantly increase the biomass of C. vulgaris and S. obliquus cultures, which was ca. 0.6 and 0.8 g/L, respectively, at 150 μE m−2 s−1 light intensity after 8 days (Fig. 1). Thus, for these two species a light intensity of 150 μE m−2 s−1 was optimal for biomass production; under the present conditions their light-saturation threshold lies at 150 μE m−2 s−1 (Fig. 1). The results confirm general findings that light intensity limits growth of microalgae up to a certain taxa-dependent saturating threshold, and further increases would presumably have been photoinhibitory [5,15]. To assess effects of a longer growth period on biomass and fatty acid content, the two species with the highest biomass, Desmodesmus sp. and S. obliquus, were cultivated for 15 days. After 15 days of cultivation at 300 μE m−2 s−1, their biomass yields were 1.4 and 1.2 g/L, respectively (Fig. 1). Biomass yields of both species were still lowest at 50 μE m−2 s−1 (Fig. 1). According to these results, Desmodesmus sp. yields the highest biomass, while C. vulgaris and S. obliquus grow optimally at 150 μE m−2 s−1. Thus, all three of those species are potential sources of biofuel, but E. pseudoalveolaris would not be a suitable source due to its low biomass production, at least under any of the conditions we applied.
Effects of light intensity on fatty acid content
The only types of lipids used for biodiesel production are fatty acids. While common gravimetric methods measure total lipid content, GC methods have the advantage of measuring contents of specific fatty acids [16]. Thus, we analysed the fatty acid content of each of the species cultivated under each of the three light intensities using a GC (with a FID) system. When grown at 300 μE m −2 s −1 , Desmodesmus sp. had the highest content of fatty acids (6.2%), followed by S. obliquus (5.8%) at day 8 ( Fig. 2). Moreover, fatty acid contents of these two species were positively correlated with the light intensity ( Fig. 2). Our results are consistent with previous findings that algae grown at high light intensities often accumulate more lipids. For example, increasing light intensity from 55 to 110 μE m −2 s −1 has been found to increase lipid production by S. abundans [17], and several Chlorella species reportedly produce more lipids at a high light intensity (600 μE m −2 s −1 ) than at lower light intensities [18]. This may be at least partly because at high light intensities algae counter photooxidation by converting excess photoassimilates into fatty acids [19]. However, in some recent studies high light intensity reduced lipid contents of various microalgae, including marine strains of Chlorella, despite increasing their biomass [20,21]. The cited authors suggested that the energy produced was used for cell division instead of being stored in the form of lipids [20,21]. We also found that C. vulgaris and E. pseudoalveolaris had lower lipid contents when grown at 300 μE m −2 s −1 light than when grown at lower light intensities, despite increases in biomass (Fig. 2). Therefore, there may be differences in species' mechanisms of responses to high light intensities, which result in either higher or lower lipid contents. We grew the two species with the highest biomass yields for 15 days. During the period between 8 and 15 days, fatty acid contents of S. obliquus growing at 300 μE m −2 s −1 light doubled, from 5.8 to 11.6%, but changed little at the 50 and 150 μE m −2 s −1 light intensities. In contrast, fatty acid contents of Desmodesmus sp. slightly increased during this period under all light intensities (Fig. 2). It has been suggested that increases in lipid production under high light intensities may be partly caused by starvation [22]. However, we found that nitrogen and phosphorus were still present in the medium after 15 days (Fig. 2).
Effects of light intensity on biochemical composition
Contrary to the effects of light on biomass and lipids, the impact of light intensity on proteins and carbohydrates has received little attention. To address this gap, we examined effects of light on the protein, carbohydrate and lipid contents of the microalgae using FTIRS methods, which reportedly provide results that correlate well with those obtained using standard extraction and analysis methods [23,24]. With increases in light intensity, the fatty acid contents of Desmodesmus sp. and S. obliquus (grown for either 8 or 15 days) increased, their protein contents declined, and their carbohydrate contents did not significantly change (Fig. 3). Similarly, increases in lipid contents and reductions in protein contents of Dunaliella tertiolecta associated with increases in light intensity have been observed [25]. On the other hand, nitrogen starvation is reportedly associated with higher lipid, lower carbohydrate, and constant protein levels in S. obliquus and two Chlorella species [26][27][28]. For example, high lipid production in Chlorella sorokiniana under nitrogen starvation corresponded to starch degradation [27]. The hypothesis was that the lipid and carbohydrate pathways compete for a common carbon precursor [27,29]. Thus, it has been suggested that blocking starch synthesis could increase lipid production [27]. However, our current results show that higher lipid content is linked to lower protein content, suggesting that lipid synthesis relied mostly on protein degradation or inhibition of protein synthesis. This is supported by He et al. [14], who showed that the decrease in protein under increasing light intensity may be attributed to the consumption of nitrogen. It might also be that the provision of carbon skeletons for amino acid and protein synthesis is diverted to serve as a carbon and energy source for TAG biosynthesis [14]. In addition, although starch represents a more accessible form of carbon storage for plant cells than fatty acids, the energy recovery from fatty acid oxidation is greater than that of starch oxidation: when fatty acids are oxidized via the β-oxidation pathway and the citric acid cycle, the energy recovery is approximately 6.7 ATP equivalents per carbon for palmitic acid, for example [28]. This indicates that microalgae may have different mechanisms for synthesising fatty acids under high light intensities and/or nutrient starvation, which could affect either protein or carbohydrate content.
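To illustrate the kind of relative-composition estimate used here, the following Python sketch (our own illustration, not the authors' pipeline) compares band intensities of a baseline-corrected FTIR spectrum at the wavenumbers listed in the Methods; the simple max-in-window peak picking and the synthetic demo spectrum are assumptions.

import numpy as np

def band_intensity(wavenumbers, absorbance, lo, hi):
    """Maximum absorbance within a wavenumber window [lo, hi] (cm^-1)."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return absorbance[mask].max()

def relative_composition(wn, ab):
    """Relative carbohydrate/lipid/protein signal from FTIR bands."""
    carb = band_intensity(wn, ab, 900, 1100)       # carbohydrates
    lipid = band_intensity(wn, ab, 1730, 1745)     # ~1738 cm^-1 ester C=O
    protein = (band_intensity(wn, ab, 1535, 1545)  # amide II (~1540 cm^-1)
               + band_intensity(wn, ab, 1650, 1665))  # amide I (~1658 cm^-1)
    total = carb + lipid + protein
    return {"carbohydrate": carb / total,
            "lipid": lipid / total,
            "protein": protein / total}

# synthetic demo spectrum with Gaussian bands at the four positions
wn = np.linspace(800, 1850, 1051)
ab = (np.exp(-((wn - 1030) / 40) ** 2) + 0.6 * np.exp(-((wn - 1738) / 10) ** 2)
      + 0.8 * np.exp(-((wn - 1658) / 15) ** 2) + 0.5 * np.exp(-((wn - 1540) / 12) ** 2))
print(relative_composition(wn, ab))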
Fatty acid composition under different light intensities
GC analysis of the fatty acid composition of the algae indicated that light intensity had similar effects on the fatty acid profile of all strains except E. pseudoalveolaris, whose fatty acid composition was very similar under all three light intensities (Fig. 4). In the other three strains, 16:0 and 18:3 fatty acids were abundant, and 18:2 least abundant, at the lowest light intensity (Fig. 4). Increases in light intensity resulted in lower amounts of 18:3 and higher contents of 18:1, which became the most abundant fatty acid (Fig. 4). These results are consistent with previous findings that C. protothecoides had lower 18:3 and higher 18:1 contents when light intensity was increased from 35 to 420 μE m−2 s−1 [30]. Intriguingly, E. pseudoalveolaris had high 18:2 contents under all three light intensities, but it would be interesting to observe possible changes in its lipid composition at higher intensities. Biodiesel with a high content of polyunsaturated fatty acids such as 18:3 is prone to oxidation-dependent degradation [31]. By contrast, a high content of monounsaturated fatty acids such as 18:1, which are not susceptible to oxidation, improves biodiesel's flow properties and reduces its solidification temperature [32]. Hence, our results show that optimizing the light intensity can improve the quality of microalgae-derived biodiesel.
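As a worked example of how a fatty acid profile translates into the biodiesel properties discussed above, the short sketch below computes the shares of saturated, monounsaturated and polyunsaturated fatty acids and an average double-bond count from an illustrative, made-up composition (not measured data from this study).

# fatty acid -> (number of double bonds, share of total FAMEs in %)
profile = {"16:0": (0, 30.0), "18:0": (0, 5.0),
           "18:1": (1, 40.0), "18:2": (2, 15.0), "18:3": (3, 10.0)}

sfa = sum(p for db, p in profile.values() if db == 0)
mufa = sum(p for db, p in profile.values() if db == 1)
pufa = sum(p for db, p in profile.values() if db >= 2)
# average number of double bonds per fatty acid, weighted by abundance
dbi = sum(db * p for db, p in profile.values()) / 100.0

print(sfa, mufa, pufa, dbi)  # 35.0 40.0 25.0 1.0

A lower polyunsaturated share and a higher 18:1 share move the mix toward better oxidative stability and flow properties, which is the trend the higher light intensities produced here.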
Nitrogen and phosphorus uptake under different light intensities
To decrease the cost of biodiesel production, wastewater can be used to supply nutrients for microalgal growth, especially the major nutrients nitrogen and phosphorus [3]. The municipal wastewater used in this study had initial total nitrogen and phosphorus concentrations of 34.5 and 2.8 mg L −1 , respectively. According to the European Directive for Urban Wastewater Treatment, at least 75% of the total nitrogen and phosphorus should be removed from incoming wastewater before it can be discharged [33]. All the test strains had removed more than 75% of the total nitrogen and phosphorus content of the treated wastewater after 8 days, except Desmodesmus sp. at the light intensity of 50 μE (Fig. 5). The amounts of nitrogen and phosphorus removed by S. obliquus did not change between 8 and 15 days of cultivation (Fig. 5). Taken together, our results suggest that the microalgae used in this study could take up nitrogen and phosphorus from wastewater, and thus provide a cost-effective wastewater treatment method.
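The removal percentages reported above follow from a simple before/after calculation; the sketch below shows it using the study's initial concentrations (34.5 mg/L total nitrogen, 2.8 mg/L phosphorus), while the final concentrations are illustrative values, not measurements from this work.

def removal_pct(initial, final):
    """Percent of a nutrient removed from the wastewater."""
    return 100.0 * (initial - final) / initial

# initial concentrations from the study; final values are made up
print(removal_pct(34.5, 6.9))   # nitrogen: 80.0 %, above the 75% threshold
print(removal_pct(2.8, 0.56))   # phosphorus: 80.0 %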
Conclusions
Our analysis of effects of light on microalgae performed on algae isolated in the Northern hemisphere showed that increases in light intensity increased both biomass and fatty acid contents of two of four tested species (Desmodesmus sp. and S. obliquus). They also induced changes in fatty acid composition of those species that could improve the quality of biodiesel derived from them. Therefore, Desmodesmus sp. and S. obliquus seem to be promising candidates for further studies of approaches to optimize biomass and biodiesel production. Another interesting finding, which warrants further mechanistic and physiological attention, is that increases in fatty acid content were accompanied by reductions in protein content.
Algal strains and municipal wastewater
The four algal strains used in this study (Chlorella vulgaris, Desmodesmus sp., Ettlia pseudoalveolaris, and Scenedesmus obliquus) were isolated in Sweden and described by Ferro et al. [34]. Each strain was inoculated and grown in 100 ml of BG11 medium, with a photoperiod of 16 h light (120 μE m −2 s −1 ):8 h dark, at 25 °C, and shaking at 150 rpm. Wastewater was provided by the community wastewater plant at Umeå, in Sweden, and stored at − 20 °C. Wastewater was prepared as growth medium by autoclaving after filtration through filter paper with ca. 10 μm pores, provided by Munktell AB (Sweden).
Algal harvesting and experimental setup
Cultures of each strain were harvested in exponential growth phase by centrifugation. The temperature was 25 ± 2 °C and the tubes were aerated (0.1 L min−1). The algae were grown under these conditions for either 8 or 15 days, and OD630 was measured daily to monitor their growth. Samples (50 mL) were then harvested by centrifugation at 3700 g for 6 min and freeze-dried for 3 days. Freeze-dried algae were used for lipid extraction and FTIRS analysis. The amounts of nitrogen and phosphorus present in the wastewater were determined before and after each experiment using LCK 138 and LCK 349 kits, respectively, and a DR 3900 spectrophotometer, operated according to the manufacturer's manual (Hach Lange, Germany).
Lipid extraction
Each freeze-dried sample (2–5 mg) was ground in 5 mL of 4:1 methanol:H2O. Then, 4 mL of chloroform was added and the mixture was vortex-mixed; 1.2 mL of 0.73% NaCl solution was added, and the mixture was vortex-mixed again and centrifuged at 1250 rpm for 2 min (Wifug, Doctor, Sweden). The lower phase was collected and ¼ of its volume was dried by nitrogen sparging for later use in transmethylation.
Transmethylation to fatty acid methyl esters (FAMEs)
Dried lipids, prepared as described above, were mixed with 200 μL of a 0.514 mg mL−1 solution of pentadecanoic acid (C15:0, for use as an internal standard) in dry methanol, then 1 mL of 2% H2SO4 (in dry methanol) was added. After sparging for 2 min, the tubes were immediately closed (to prevent oxygen entering) and heated for 2 h at 80 °C. FAMEs were then extracted from the transmethylation reaction mixture by adding 1 mL of mQ water and 2 mL of petroleum ether, vortex-mixing and centrifuging at 1250 rpm for 2 min. The top phase was transferred to a new screw-cap tube. This process was repeated using only 2 mL of petroleum ether. The petroleum ether was then sparged with nitrogen gas and the dried lipids were finally resuspended in 100 μL of heptane for FAME analysis.
Fourier transform IR spectroscopy (FTIRS)
The biochemical composition of the algae was examined using FTIRS, as previously described [23,24], with slight modifications. Briefly, freeze-dried samples and KBr (1:10) were ground and loaded in an IFS 66 FTIR spectrometer equipped with OPUS 6.5 software (Bruker Optik GmbH, Ettlingen, Germany). FTIR spectra were acquired at 400-5200 cm −1 , signals spanning 800-1850 cm −1 were retained, and the baseline was corrected to remove broad background features and keep the low-intensity bands. The relative quantities of carbohydrates, proteins, and lipids were determined by comparing peak intensities at 900-1100 cm −1 (carbohydrates), 1738 cm −1 (lipids), and 1540 as well as 1658 cm −1 (proteins). | 2020-01-08T14:34:31.177Z | 2020-01-07T00:00:00.000 | {
"year": 2020,
"sha1": "a6f89dddffa2e77c967da98d9207b8a4eaf84cdf",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/s13068-019-1646-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bdfaf15082ddb3f0bb604b644fd850484575a642",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
264084926 | pes2o/s2orc | v3-fos-license | Grandiose narcissism indirectly associates with lower psychopathology across five countries
Using five independent non-clinical cross-cultural samples (total N = 3649; overall M age = 29.31; 31% male and 69% female)
Introduction
The Dark Triad refers to the three interrelated traits including Machiavellianism, narcissism, and psychopathy (Paulhus and Williams, 2002).Christie et al. (2014) derived their conceptualization of Machiavellianism from the writings of the 16th century political and military strategist and leader Niccolò Machiavelli.Hence, the construct denotes individuals who demonstrate strong agreement with the opinions of Machiavelli, such as endorsement of scheming and the use of manipulation.Machiavellianism is characterized by cynicism, indifference to morality, and distrust of others (Dahling et al., 2008).Subclinical narcissism includes facets retained from the clinical syndrome, namely grandiosity, entitlement, dominance, and superiority.Two main types of narcissism exist: grandiose and vulnerable.Grandiose narcissism encompasses exhibitionism, lack of humility/modesty, and interpersonal dominance, whereas principal features of vulnerable narcissism include negative affect, distrust, selfishness, and a need for attention/recognition (Dickinson and Pincus, 2003).Historically, psychopathy was investigated in clinical and institutional settings.Illustratively, Hare (1980) developed the Psychopathy Checklist (PCL) as a diagnostic tool to screen for clinical levels of psychopathy.In 1985, Hare established the Self-Report Psychopathy (SRP) scale, which facilitated wider assessment of psychopathy in subclinical research.
The Dark Triad was the initial taxonomy proposed to represent dark personalities.The three traits share a dark core composed of diminished empathy, ruthless exploitation of others (Jones and Figueredo, 2013), and a predisposition to high antagonism (Truhan et al., 2021).Noting these commonalities, investigators have often examined Dark Triad collectively.However, other researchers criticize this approach because Machiavellianism, narcissism, and psychopathy differ in important ways, including, for example, their propensities toward impulsivity (Jones and Paulhus, 2011).Studies also report variations within traits as a function of facets (Harms, 2022).For instance, accumulating evidence suggests that grandiose and vulnerable narcissism form two separate factors.This distinction is important because grandiose narcissism does not fit well within the Dark Triad core of callousness and manipulation (Truhan et al., 2021).
Acknowledging these points, the present cross-cultural study investigated the direct and indirect associations between the Dark Triad and anxiety, stress, and depression through mental toughness.A focus was placed on grandiose narcissism relative to the other two traits because despite being a dark trait, narcissism has consistently shown a negative indirect association with psychopathology through mental toughness.
Grandiose narcissism and mental toughness
Mental toughness is an important individual difference factor that facilitates the ability to deal effectively with life challenges and pressures (Lin et al., 2017;Denovan et al., 2022a,b).Although mental toughness was originally studied in sporting domains (Dagnall et al., 2021), the construct's importance is also recognized in a range of other applied settings (Drinkwater et al., 2019;Wheatley et al., 2023).At a conceptual level, mental toughness is an umbrella term that denotes possession of positive psychological resources that aid performance across achievement contexts (Gucciardi et al., 2015a;Perry et al., 2021).Specific features pertinent to the present study include the ability to deal with stressors, utilization of effective coping strategies (e.g., reappraising demanding situations as opportunities for self-development), and the inclination to proactively seek out opportunities for personal growth (St Clair-Thompson et al., 2015).These attributes are attendant with corresponding values, attitudes, emotions, and thoughts.
To date, several studies have established a moderate positive association between grandiose narcissism and mental toughness (Onley et al., 2013;Papageorgiou et al., 2017;2018;2019;Sabouri et al., 2016a,b).Although it is not clear why grandiose narcissism and mental toughness correlate positively, it is possible that this correlation is mainly due to mental toughness' component of confidence in one's abilities that is tapping into grandiose narcissism's self-enhancement properties.Through its positive relationship with mental toughness, grandiose (as opposed to vulnerable) narcissism has been shown to predict various positive outcomes in the context of education and psychopathology.For example, a semi-longitudinal study has shown that at the beginning of the school term, grandiose narcissism increases mental toughness by the end of term, contributing to higher school grades in adolescent students (Papageorgiou et al., 2018).
Through its positive association with mental toughness, grandiose narcissism has been shown to reduce anxiety, depression, and stress in three independent samples (Papageorgiou et al., 2019;Papageorgiou et al., 2019b).The same studies reported that vulnerable narcissism contributed to higher psychopathology through its negative association with mental toughness.Another study has used mediation analysis to show that grandiose narcissism contributed indirectly to reduced surface learning, increased strategic learning, and lower symptoms of depression in university students (Denovan et al., 2021a,b).A semi-longitudinal, cross-cultural study has also shown that grandiose narcissism exerted a negative indirect effect on anxiety, stress, and depression through mental toughness in two samples from the UK and Greece (Truhan et al., 2022).In this study, grandiose narcissism and mental toughness were assessed just before the start of the COVID-19 pandemic, while psychopathology was assessed before and during the pandemic.Finally, two cross-sectional studies with five independent Hungarian samples reported that grandiose narcissism was associated with higher mental toughness and resilience and reduced levels of psychopathology (Zhabo, Kun, Balogh, Simon, Csike, 2022).
The current study
Extant research indicates that individuals who score high on both grandiose narcissism and mental toughness may be highly goal oriented, respond proactively to stressors, and exhibit better mental health outcomes.This is consistent with the notion that including subclinical narcissism (or at least its grandiose facet) into the Dark Triad as a trait that links to poor and toxic psychosocial outcomes, requires revision (e. g., Truhan et al., 2020).This proposition is consistent with a large metaanalysis of the Dark Triad literature that failed to report statistically significant correlations between narcissism and various measures of negative psychosocial outcome, such as antisocial tactics, aggression, sex-related issues and morality problems (with the exception of a weak positive correlation between narcissism and interpersonal difficulties; Muris et al., 2017).
Noting this, additional evidence is needed to examine the relationship between grandiose narcissism and other traits, such as mental toughness, across diverse populations and contexts.This academic work will help to identify and promote narcissism's adaptive tendencies while delimiting its potential for harm.To achieve this, the present study tested and directly compared the results of a mediation model, across five independent cross-cultural samples.It was predicted that grandiose narcissism would associate negatively with symptoms of psychopathology through mental toughness.Commensurate with previous findings, it was further predicted that Machiavellianism and psychopathy would show either no relationship or a positive relationship with psychopathology.
Sample
This study used five independent national samples enrolled through advertisements on social networks and word of mouth. Data collection took place online. Preliminary data screening eliminated data points with z-scores > 3.29 or < −3.29 SDs (26 in the UK, 94 in Greece, 15 in Italy, 58 in Russia, and 1 in Canada) (Tabachnick and Fidell, 2013)
Instrument translation
For instrument translation we followed the method suggested by Brislin (1986) and Beaton et al. (2000).Specifically, to enable data collection in Greece, Russia, and Italy, two native speaking external colleagues (i.e., not co-authors) completed forward translation of the Mental Toughness Questionnaire (MTQ10), and the Short Dark Triad questionnaire (SD3; Jones and Paulhus, 2014) (for Greece only).Then, a co-author and an external colleague (an English teacher whose first language is Greek) proficient in English, whose respective native languages are Greek and Russian, evaluated unsatisfactory expressions/idioms and completed back translation.The same approach was followed for the Italian translation of the MTQ10.Back translation is an established technique that has been utilized in cross-cultural survey research throughout the past 50 years (Son, 2018;Denovan et al., 2021a,b;2022).
Measures
The Short Dark Triad questionnaire (SD3; Jones and Paulhus, 2014) is a widely used measure of subclinical narcissism, subclinical psychopathy, and Machiavellianism (Vaughan et al., 2019).The SD3 includes 27 items with nine per subscale.Responses are given on a 5-point Likert scale (1 = strongly disagree, to 5 = strongly agree).Example items include 'People see me as a natural leader' and 'Payback needs to be quick and nasty'.Satisfactory validity and internal consistency exist (Jones and Paulhus, 2014).
Unlike the MTQ48 that assesses total MT and its four facets, the MTQ10 provides an overall MT score only.
Procedure and ethics
Following advertisements on social networks, interested respondents obtained details of the study's aims and objectives via an information sheet.Participants supplied informed consent to take part and received a message containing a link to the online questionnaire alongside a unique respondent code.Completion of the measures was self-paced, and participants only advanced to the subsequent page after completing all items.Upon conclusion of the study, participants were debriefed.The project received ethical approval from the lead university.Procedures performed were in accordance with the ethical standards of the institutional and/or national research committee and concurred with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Data analysis
Assessment of bivariate correlations investigated basic associations among variables. Analysis (using Mplus 7.4; Muthén and Muthén, 2015) included an assessment of a mediation model with the total sample to investigate the hypothesized indirect/mediating role of mental toughness in the relationships between the Dark Triad traits and symptoms of depression, anxiety, and stress. The hypothesized model included direct paths from the Dark Triad traits to mental toughness and from mental toughness to depression, anxiety, and stress, while integrating the direct paths from the Dark Triad traits to depression, anxiety, and stress. This method was valuable for examining whether the associations of the Dark Triad traits with depression, anxiety, and stress occur in the absence of any intervening variable, or whether they are channeled through mental toughness (Tighezza, 2014). Model testing included bootstrapping (1000 resamples) to assess the significance of the indirect relationships relative to 95% bias-corrected confidence intervals (Preacher and Hayes, 2008).
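For readers unfamiliar with bootstrapped indirect effects, the following minimal Python sketch illustrates the idea for a single predictor; it is our own illustration, not the authors' Mplus syntax, it uses a plain percentile bootstrap rather than the bias-corrected intervals reported here, and the toy data are made up.

import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Slope of y on x (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def bootstrap_indirect(x, m, y, n_boot=1000):
    """Percentile-bootstrap CI for the indirect effect a*b,
    where a: x -> m and b: m -> y controlling for x."""
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = ols_slope(xb, mb)
        # b: partial slope of m in the regression y ~ x + m
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]
        est.append(a * b)
    return np.percentile(est, [2.5, 97.5])

# toy data: predictor -> mental toughness -> (lower) depression
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)
y = -0.6 * m + rng.normal(size=300)
print(bootstrap_indirect(x, m, y))  # CI should exclude 0 (negative effect)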
Lastly, a multigroup path analysis tested cross-cultural invariance by comparing two models: a baseline/unconstrained model with no equality constraints (i.e., equal model structure, but freely estimated coefficients) and a constrained model in which all parameters were fixed to be equal across the five countries. A significant chi-square in the model comparison infers non-invariance of the structural paths (Bang et al., 2019). Effect sizes were interpreted using Kenny's (2016) criteria of small ≤ 0.01, medium ≤ 0.09, and large ≥ 0.25.
Preliminary analysis
Preliminary assessment of univariate normality revealed that skewness values fell between −2.0 and +2.0, and kurtosis between −4.0 and +4.0 (Field and Miles, 2010) (Table 1). Bivariate correlations (Table 1) indicated small-to-moderate significant associations between the Dark Triad traits. Mental toughness evidenced weak associations with Machiavellianism and psychopathy, and a moderate relationship with narcissism. Depression, anxiety, and stress correlated negatively with narcissism and mental toughness, and positively with Machiavellianism and psychopathy.
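The screening steps described here and in the Sample section (z-score exclusion, then skewness and kurtosis checks) can be reproduced with a few lines of Python; this sketch is our own illustration on simulated data, and note that scipy reports excess (Fisher) kurtosis, an assumption that may differ from the convention behind the ±4.0 thresholds.

import numpy as np
from scipy.stats import skew, kurtosis, zscore

def screen(data, z_cut=3.29):
    """Drop univariate outliers (|z| > 3.29) and report normality stats."""
    z = zscore(data)
    kept = data[np.abs(z) <= z_cut]
    return kept, skew(kept), kurtosis(kept)  # kurtosis is excess (Fisher)

scores = np.random.default_rng(1).normal(50, 10, 500)  # simulated scale scores
kept, sk, ku = screen(scores)
print(len(scores) - len(kept), round(sk, 2), round(ku, 2))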
Model test for total sample
The mediation model with the total sample was a saturated model, containing as many parameters as data points (Kline, 2005). This is not informative insofar as model fit indices are concerned. Accordingly, non-significant paths were trimmed from the model to increase interpretability. The only non-significant path was from narcissism to depression, β = −0.03, p = .028. A model without this path revealed good fit, χ²(1, N = 3649) = 2.56, p = .109, CFI = 1.0, RMSEA = 0.02 (90% CI 0.01–0.05), SRMR = 0.01. In addition, this model did not suggest a significantly worse fit than the original model, S-Bχ²(1) = 2.56, p = .109. This model was retained for subsequent analyses. Fig. 1 displays the standardized regression coefficients of this model, which accounted for 33%, 25%, and 28% of the variance in depression, anxiety, and stress respectively.
Assessment of indirect effects using the bootstrap procedure (Table 2) indicated that mental toughness significantly mediated the association between all Dark Triad variables (Machiavellianism, narcissism, psychopathy) and depression, anxiety, and stress.Specifically, mental toughness appeared to contribute to the Dark Triad variables exerting a weaker association with depression, anxiety, and stress.
In comparison with the unconstrained model, a non-significant difference existed, S-Bχ²(24) = 35.88, p = .056. A significant difference was apparent with the fully constrained model, S-Bχ²(32) = 119.02, p < .001. These outcomes suggested that some of the paths were moderated by country, and it was necessary to retain the partially constrained model.
Lastly, the authors tested whether country moderated the indirect effects. Results (Table 2) inferred that narcissism consistently exhibited a significant indirect association with depression, anxiety, and stress via mental toughness (with effect sizes ranging from moderate to large). The most notable differences occurred for Machiavellianism, with significant indirect effects existing for the UK, Greek, and Italian samples, but not for the Russian and Canadian samples. Comparison of indirect effects (Table 4) indicated that paths relating to Machiavellianism were moderated by country, which is unsurprising given the discrepancies reported above. Differences among indirect effects for narcissism (in relation to depression and anxiety only) reflect the variance between large and medium effect sizes (i.e., greater effect sizes for the UK, Italy and Canada, lower for Greece and Russia).
Discussion
Previous research has found that grandiose narcissism associates negatively with symptoms of psychopathology indirectly through increasing resilience (e.g., Papageorgiou et al., 2019a,b,c). Five samples from distinct cultural backgrounds were analyzed to explore cross-culturally the associations of grandiose narcissism, as well as psychopathy and Machiavellianism, with resilience (mental toughness) and psychopathology (stress, anxiety, and depression).
Of the dark traits, grandiose narcissism revealed the most consistent pattern of results.Firstly, grandiose narcissism exhibited significant and negative associations with stress, anxiety, and depression through mental toughness across the five samples.The tested model explained a small-to-moderate (depending on the country and type of psychopathology) amount of variation in psychopathology scores that ranged from approximately 2%-10%.Secondly, the model demonstrated slightly stronger relationships among grandiose narcissism, mental toughness, and depression (compared to either anxiety or stress).Conceptually, grandiose narcissism can be perceived as the opposite of depression.In this context, individuals scoring high on grandiose narcissism have unrealistic and self-enhancing views about themselves.Individuals scoring high on symptoms of depression also have an unrealistic view about themselves, such that they self-devaluate.As such, grandiose narcissism may be particularly adaptive with regards to depression because it primarily encapsulates traits of self-belief.This is in line with the results of a recent meta-analysis, which reported that self-enhancement was positively associated with psychological adjustment across sex, age, cohort, and culture (Dufner et al., 2019).
Thirdly, the model explained the highest amount of variation (on average) in psychopathology in the UK, followed by Canada and Italy.Interestingly, the model revealed almost identical results for the Greek and Russian samples.Looking at differences among the samples, the Greek and Russian samples were of similar age and had the highest mean age as compared to the other three samples.It could be that with increasing age, the dark traits lose some of their significance (positive or negative) for important life outcomes.The possible decrease in malevolent traits with age aligns with the maturity principle; that is, personality may change in a way to be more socially mature (i.e., emotionally stable, agreeable, and conscientious), communal, responsible, and selfcontrolled (Luo et al., 2022;Roberts et al., 2008).
Machiavellianism and psychopathy displayed weaker (as compared to grandiose narcissism) and positive (as opposed to negative) relationships with psychopathology through mental toughness.These results were similar in terms of direction and effect size with the exception of Machiavellianism in the Russian and Canadian samples (where the indirect effects were not significant).The similar results for psychopathy and Machiavellianism could be linked to the way these two traits are typically assessed.Specifically, previous research contends that psychopathy and Machiavellianism scales measure the same concept, and that Machiavellianism assessment tools fail to capture the construct as articulated in theoretical descriptions (Miller et al., 2017).
This investigation should be viewed in light of some important limitations.Firstly, the current study cannot fully explain the cross-cultural differences discussed above.However, the results highlight the need for further cross-cultural research investigating the ways in which the Dark Triad is expressed and associated with mental health.The study is crosssectional, which precludes definitive conclusions concerning the causal order of the variables.However previous semi-longitudinal work supports the notion that mental toughness mediates the effect of grandiose narcissism on psychopathology (Denovan, et al., 2021).Despite evidence suggesting the Dark Triad is multifaceted (Truhan et al., 2022), the present study has explored dark traits as unidimensional constructs.This is problematic, especially in the context of cross-cultural research.For example, while a recent cross-cultural study failed to report significant differences in narcissism assessed with the SD3 in two samples from the UK and Russia (Papageorgiou et al., 2022), it reported a significant, theoretically relevant difference between the two countries on the narcissism Antagonism facet using the Five-Factor Narcissism Inventory Short Form (Sherman et al., 2015).Incorporating in cross-cultural research other (to grandiose) facets of narcissism-such as its antagonistic and vulnerable facets-will help to provide a more balanced account of the positive and negative correlates of narcissism.
While large samples from five distinct cultural backgrounds were collated, other variables (e.g., differences in socioeconomic conditions among the countries) that could explain cross-cultural differences in the tested model have not been assessed.Future research should investigate ways through which narcissism may differentially impact resilience (e. g., through narcissism and MT's possible association with sleep patterns, seeSabouri et al., 2016a,b) and mental health in different cultural and socioeconomic environments.Another limitation refers to the demographic differences among the cross-cultural samples.Specifically, some of the differences in the main study variables among different countries could be influenced by the age differences among the samples.As such, direct cross-cultural comparisons and their interpretation become difficult as these may be confounded by demographic differences in the samples.Finally, self-report data may be influenced by common-method variance (Podsakoff et al., 2003) and social desirability.In the present study reliance on self-report measures may be problematic.Specifically, it could be that individuals high in grandiose narcissism may not objectively possess greater mental toughness and lower levels of psychopathology; rather, their perception of mental toughness could be inflated due to overconfidence.As such, while self-report methods hold value, it is important to acknowledge the possibility that self-perceptions might not align with objective reality.Narcissistic individuals may believe themselves to be more resilient, but this should be approached cautiously and should be investigated further.Despite the common criticism of the self-report method, two points should be made here: (1) It appears that the self-report method remains the workhorse of personality research with few alternatives (Papageorgiou et al., 2023); (2) Research suggests that most of the hypothesized trait-outcome associations in personality research can be successfully replicated (Soto, 2019).
Conclusion
The present investigation has direct theoretical and indirect applied implications.The findings build on existing evidence regarding the positive association between grandiose narcissism and psychopathology, primarily depression, through increasing resilience.Although several recent papers have reported this finding, the aetiology of this association remains unexplored.Indeed, it is fascinating that grandiose narcissism correlates positively with Machiavellianism and psychopathy, as well as with higher resilience and lower levels of common psychopathology with these findings replicating consistently across cultures.Cross-cultural research has the potential to effectively uncover adaptive and maladaptive aspects of the Dark Triad.For example, previous work has shown that the Dark Triad is sensitive to socioeconomic conditions (Yu et al., 2022).As such, future work may explore whether grandiose narcissism is particularly adaptive in relation to psychopathology in the presence of extreme adversity, and whether these adaptive properties come at a cost (e.g., peer problems due to negative stereotypes about narcissism).Considering the malleability of personality traits, joint intervention programmes with general populations could promote the adaptive-rather than maladaptive-aspects of narcissism to reduce levels of psychopathology.
Fig. 1 .
Fig. 1.Mediation model depicting putative relationships between Dark Triad traits, mental toughness, depression, anxiety, and stress for the total sample.Note.Standardized regression weights between variables are shown.Error is not indicated but was specified for endogenous variables.*p < .05,**p < .001using Bootstrapping significance estimates (1000 resamples).
Table 1
Means, standard deviations and correlations for all study variables for the total sample and country-specific samples.
Note. *p < .05, **p < .001.
Table 2
Specific indirect associations of Dark Triad traits with depression, anxiety, and stress through mental toughness.
Table 3
Chi-square difference tests from the multigroup country model.
Table 4
Contrasts among country relating to indirect effects. | 2023-10-14T15:11:31.243Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "e69160e9b9fd9ae72c52261d77d65d7087383358",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jpsychires.2023.10.003",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "253d8981c37ad7743e4a133c57d6298386beb972",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
261797041 | pes2o/s2orc | v3-fos-license | Riesz capacities of a set due to Dobi\'nski
We study the Riesz $(a,p)$-capacity of the so called Dobi\'nski set. We characterize the values of the parameters $a$ and $p$ for which the $(a,p)$-Riesz capacity of the Dobi\'nski set is positive. In particular we show that the Dobi\'nski set has positive logarithmic capacity, thus answering a question of Dayan, Fernand\'ez and Gonz\'alez. We approach the problem by considering the dyadic analogues of the Riesz $(a,p)$-capacities which seem to be better adapted to the problem.
Introduction and main results
In a series of two papers [10,11] Dobiński claims that the following identity is true
$$\prod_{n\ge 0} (\tan 2^n\pi x)^{2^{-n}} = (2\sin \pi x)^2,$$
for all real numbers x ∈ [0, 1] which are not dyadic rationals. As it has already been noted in [2] and explained in detail in a recent paper [9], the situation is not quite so simple. In fact, if we consider the same identity with absolute values,
$$\prod_{n\ge 0} |\tan 2^n\pi x|^{2^{-n}} = (2\sin \pi x)^2,$$
so as to avoid issues of defining powers of negative numbers, in [2] the authors prove that the identity holds if and only if x does not belong to the so called Dobiński set D. To define D, let x ∈ [0, 1] be a real number with dyadic expansion x = (0.a_1 a_2 …)_2 and for n ≥ 1 let s_n(x) = max{r ∈ ℕ : a_n = a_{n+1} = ⋯ = a_{n+r}}. Then,
$$D := \Big\{ x\in[0,1] : \limsup_{n\to\infty} \frac{s_n(x)}{2^n} > 0 \Big\}.$$
So the Dobiński set comprises real numbers which can be approximated "exceedingly well" by dyadic rationals on every scale. Related problems of diophantine approximation by dyadic rationals have been considered in [13].
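As a quick numerical sanity check of the identity (not part of the original papers), the following Python sketch evaluates the truncated product for a rational, non-dyadic x, reducing 2^n x mod 1 exactly with rational arithmetic to avoid floating-point blow-up; the truncation level N is an arbitrary choice.

import math
from fractions import Fraction

def dobinski_lhs(x: Fraction, N: int = 30) -> float:
    """Truncated product prod_{n<N} |tan(2^n pi x)|^(2^-n),
    computed in log-space with 2^n x reduced mod 1 exactly."""
    log_sum = 0.0
    for n in range(N):
        y = (x * 2**n) % 1  # exact reduction keeps tan well-defined
        log_sum += 2.0**(-n) * math.log(abs(math.tan(math.pi * float(y))))
    return math.exp(log_sum)

x = Fraction(1, 3)                    # not a dyadic rational, not in D
lhs = dobinski_lhs(x)
rhs = (2 * math.sin(math.pi / 3))**2  # = 3
print(lhs, rhs)                       # both close to 3.0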
In a recent work [9] Dayan, Fernandéz and González prove, among other results, that D has Hausdorff dimension 0 and logarithmic Hausdorff dimension 1.Their techniques are primarily based on the mass transference principle of Beresnevich and Velani [6] which allows one to transfer measure theoretic statements for lim sup subsets of R n to statements about Hausdorff measure.The Hausdorff dimension is a precise way to talk about the size of a subset of R n .Another way to measure the size of subsets of R n is by some kind of capacity.
From now on it will be convenient to consider D as a subset of the unit circle T in R² via the usual correspondence x ↦ e^{2πix}. Let 0 < a < 1 and let f be a positive measurable function on T; the a-Riesz potential of f is defined as
$$I_a f(x) := \int_{\mathbb{T}} \frac{f(y)}{|x-y|^{1-a}}\, dy,$$
where dy is the normalized Lebesgue measure on T. Finally let 1 < p < ∞ and a as before. The (a, p)-Riesz capacity of a Borel subset E of T is defined as
$$R_{a,p}(E) := \inf\Big\{ \int_{\mathbb{T}} f(y)^p\, dy : f \ge 0 \text{ and } I_a f \ge 1 \text{ on } E \Big\}.$$
We shall refer to the capacities R_{a,p} as linear if p = 2 and as non-linear if p ≠ 2. We shall also assume that ap ≤ 1, otherwise singletons have positive capacity. There exists a remarkable relation between Hausdorff dimension, which we will denote by dim, and linear Riesz capacities for a Borel set E ⊆ T, established by Frostman [1, Corollary 5.1.14]:
$$\dim E = \sup\{1 - ap : R_{a,p}(E) > 0\}. \tag{1}$$
This fact, together with the standard comparison results between capacities [1, Section 5.5] and the fact that the Dobiński set has vanishing Hausdorff dimension, implies that R_{a,p}(D) = 0 when ap < 1. In the same work where the Hausdorff dimension of D was studied, the authors ask whether also R_{1/2,2}(D) = 0 or not [9, Section 5]. In fact they formulate the question in terms of logarithmic capacity in the complex plane, but it is well known that for subsets of T, logarithmic capacity is bounded below and above by the Riesz (1/2, 2)-capacity [7, Corollary 2.6].
We have been able to answer the above question for all Riesz (1/p, p)-capacities. Theorem 1.1. Let D be the Dobiński set and p > 1. Then,
$$R_{1/p,p}(D) = R_{1/p,p}(\mathbb{T}) \ \text{ if } 1 < p \le 2, \qquad R_{1/p,p}(D) = 0 \ \text{ if } p > 2.$$
Somewhat surprisingly, the capacity of D exhibits a jump from full to 0 at the critical value p = 2. It should be mentioned that this statement implies, via (1), that the Hausdorff dimension of D is 0 and that the logarithmic Hausdorff dimension is 1 by [1, Corollary 5.1.14]. The proof of the above theorem is presented in Section 3 and it applies to a more general class of Dobiński type sets and all (a, p) Riesz capacities (see Theorem 3.2). The proof rests on two ideas. One is the use of a discrete/dyadic version of the Riesz capacity. Discrete-type capacities have appeared in potential theory in the past (see for example [5,14]) and their "combinatorial" nature suits very well the dyadic structure of D. In concrete terms, one can show using a recursive formula (Theorem 2.2) that D has positive discrete capacity when 1 < p ≤ 2 and vanishing discrete capacity when p > 2, and this, through a comparison theorem for discrete and Riesz capacities [4, Theorem 1], allows one to deduce the corresponding statements for the Riesz (1/p, p)-capacity. Finally we prove a "Kolmogorov 0−1" type lemma (Lemma 3.1) from which we can deduce that in fact D is of full capacity, i.e. R_{1/p,p}(D) = R_{1/p,p}(T) when 1 < p ≤ 2. It is worth noticing that the same phenomenon (0−1 type law) appears in the study of logarithmic capacity of uniform G_δ-sets [12, Theorem 1.2].
Trees, dyadic capacity and the recursion formula
Let T := {0, 1}* be the free monoid generated by the language {0, 1}, with neutral element e. In this context we shall call T the dyadic tree. The length of a word x is denoted |x|. For two words x, y ∈ T we denote the largest common prefix of x and y by x ∧ y. If x ∧ y = x we write x ≤ y. Finally we use the notation x− = x0, x+ = x1. The (Poisson) boundary ∂T of T can be identified with the metric space {0, 1}^ℕ equipped with the metric d(x, y) := 2^{−|x∧y|}.
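For readers less used to the word-monoid formalism, here is a tiny Python sketch (our own illustration) encoding vertices as 0/1 strings and boundary points as truncated digit sequences; common_prefix realizes x ∧ y and d the boundary metric above.

def common_prefix(x: str, y: str) -> str:
    """Largest common prefix x ∧ y of two 0/1 words."""
    i = 0
    while i < min(len(x), len(y)) and x[i] == y[i]:
        i += 1
    return x[:i]

def d(x: str, y: str) -> float:
    """Boundary metric d(x, y) = 2^(-|x ∧ y|) on truncated sequences."""
    return 2.0 ** (-len(common_prefix(x, y)))

print(common_prefix("0110", "0101"))  # '01'
print(d("01101010", "01011010"))      # 0.25 = 2**-2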
We will write T̄ := T ∪ ∂T. There exists a natural mapping Λ from ∂T to [0, 1], sending a binary sequence w = (w_1, w_2, …) to the real number with dyadic expansion (0.w_1 w_2 …)_2; it is onto and Lipschitz continuous. Moreover, every x ∈ [0, 1] which is not a dyadic rational has a unique pre-image. Dyadic rationals have two pre-images under Λ.
Our next goal is to develop a potential theory on ∂T which parallels the one we have already seen in T. A more detailed exposition of the potential theory on the boundary of the tree can be found in [4,8,3].Here we shall present only the elements that are essential for our problem.
Let ϕ be a non-negative function defined on T. The potential of ϕ is given by
$$I\varphi(w) := \sum_{x \le w} \varphi(x), \qquad w \in \partial T,$$
where the sum runs over the vertices x ∈ T which are prefixes of w. Let π be a positive weight function defined on T. Then for a set E ⊆ ∂T we define its π (discrete) capacity as follows:
$$\mathrm{cap}_\pi(E) := \inf\Big\{ \sum_{x\in T} \varphi(x)^p\, \pi(x) : \varphi \ge 0 \text{ and } I\varphi \ge 1 \text{ on } E \Big\}.$$
When π(x) = 2^{−|x|(1−ap)} we shall refer to the capacity cap_π =: cap_{a,p} as the discrete (a, p) capacity. The relation between the Riesz and discrete capacities can be made explicit. This has been first noted in [5] and generalized further in [4].
Theorem 2.1. [4, Theorem 1] Let p > 1, 0 < a ≤ 1/p. There exists a constant c = c(a, p) > 0 such that for any compact set K ⊆ T,
$$c^{-1}\, \mathrm{cap}_{a,p}\big(\Lambda^{-1}(K)\big) \le R_{a,p}(K) \le c\, \mathrm{cap}_{a,p}\big(\Lambda^{-1}(K)\big).$$
In fact the restriction that K should be a compact set can be relaxed considerably. By Choquet's capacitability theorem [1, Theorem 2.3.11], Theorem 2.1 holds for all Suslin sets, in particular for all Borel sets.
Discrete capacities satisfy a recursive formula, which is of fundamental importance for our computations. It relates the capacity of a set to the capacities of the parts of its dyadic decomposition. Let x ∈ T and E ⊆ ∂T. Let also E_x := {w ∈ ∂T : xw ∈ E} and π_x(w) := π(xw). Then we define
$$\mathrm{cap}_\pi(E, x) := \mathrm{cap}_{\pi_x}(E_x).$$
Informally, cap_π(E, x) is the capacity of the portion of E that stays below x, "viewed" from the root x. Theorem 2.2. [4, Theorem 30] Let E ⊆ ∂T be a Borel set. For every x ∈ T the recursive formula (3) holds, expressing cap_π(E, x) in terms of cap_π(E, x−) and cap_π(E, x+). Finally let us introduce a more general class of Dobiński type sets on the boundary of the tree. This is a rather natural generalization of the set D. Suppose that κ_n is a sequence of positive integers. Let D(n, κ_n) := {w ∈ ∂T : w_{n+1} = w_{n+2} = ⋯ = w_{n+κ_n} = 0}. We define the Dobiński type set associated to κ_n as the lim sup set
$$D := \limsup_{n\to\infty} D(n, \kappa_n) = \bigcap_{N\ge 1}\, \bigcup_{n\ge N} D(n, \kappa_n).$$
Notice that if we consider the set Λ(∪_m D_m), where D_m is the Dobiński type set corresponding to the sequence κ_n = 2^n m^{−1}, we obtain "one half" of the Dobiński set D. The other half is obtained by considering the same construction where, instead of "strings of 0's" in the definition of D(n, κ_n), we consider strings of 1's.
Proof of the main result
Lemma 3.1. Let p > 1 and a > 0 be such that ap ≤ 1, and let E ⊆ T be a Borel set which is invariant under rotations by angles θ, where θ is a dyadic rational number. Then either R_{a,p}(E) = R_{a,p}(T) or R_{a,p}(E) = 0.
Proof. Assume that R_{a,p}(E) ≠ 0. By [1, Theorem 2.3.10] there exists a unique non-negative function f_E ∈ L^p(T) satisfying the extremality conditions (4) and (5), namely I_a f_E ≥ 1 on E up to a set of zero capacity, and ∫ f_E^p = R_{a,p}(E). Let θ be a dyadic rational and define ρ_θ f(x) := f(e^{2πiθ} x). Then it is clear that ρ_θ f_E also satisfies (4) and (5), and by uniqueness ρ_θ f_E = f_E. Since the dyadic rationals are dense in [0, 1], a calculation with the Fourier coefficients of f_E shows that f_E = c Lebesgue-a.e. on T for some positive constant c. By equation (5) and the fact that 0 < R_{a,p}(E) ≤ R_{a,p}(T) we get 0 < c^p ≤ R_{a,p}(T). Finally let y_0 ∈ E be such that (4) holds. We now turn to the main theorem; the calculation can be carried out for general (a, p). Theorem 3.2. Let D be a Dobiński set associated to a sequence κ_n, and let a > 0, 1 < p < ∞. In the case ap = 1 we have: Proof. Let D(n, κ_n) be as before. We start by deriving an exact formula for the discrete (a, p) capacity of the set D(n, κ_n) using the recursive formula (equation (3)). For a positive parameter r > 0 define the function Φ_r; an elementary computation shows that the family (Φ_r) satisfies a semigroup law. Next we apply the recursive formula (3) n + κ_n times to the set D(n, κ_n). In the following, c := cap_{a,p}(∂T). Using the symbol for repeated composition, and the fact that Φ_r(2x) = 2Φ_{r'}(x) for a suitable parameter r', one computes cap_{a,p}(D(n, κ_n)) as an iterated composition applied to c. Consequently, if ap = 1, it is easily verified that there exists a constant A > 0 such that | 2021-10-27T01:15:52.916Z | 2021-10-26T00:00:00.000 | {
"year": 2021,
"sha1": "70fa7b18d8607d74da8efd9709634e580b727523",
"oa_license": "CCBY",
"oa_url": "https://comptes-rendus.academie-sciences.fr/mathematique/item/10.5802/crmath.332.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "70fa7b18d8607d74da8efd9709634e580b727523",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
218693993 | pes2o/s2orc | v3-fos-license | Effects of service changes affecting distance/time to access urgent and emergency care facilities on patient outcomes: a systematic review
Background Reconfiguration of urgent and emergency care services often increases travel time/distance for patients to reach an appropriate facility. Evidence of the effects of reconfiguration is important for local communities and commissioners and providers of health services. Methods We performed a systematic review of the evidence regarding effects of service reconfigurations that increase the time/distance for some patients to reach an urgent and emergency care (UEC) facility. We searched seven bibliographic databases from 2000 to February 2019 and used citation tracking and reference lists to identify additional studies. We included studies of any design that compared outcomes for people with conditions requiring emergency treatment before and after service reconfiguration with an associated change in travel time/distance to access UEC. Studies had to be conducted in the UK or other developed countries. Data extraction and quality assessment (using the Joanna Briggs Institute checklist for quasi-experimental studies) were undertaken by a single reviewer with a sample checked for accuracy and consistency. We performed a narrative synthesis of the included studies. Overall strength of evidence was assessed using a previously published method that considers volume, quality and consistency. Results We included 12 studies, of which six were conducted in the USA, two in the UK and four in other European countries. The studies used a variety of observational designs, with before–after and cohort designs being most common. Only two studies included an independent control site/sites where no reconfiguration had taken place. The reconfigurations evaluated in these studies reported relatively small effects on average travel times/distance. Discussion For studies of general UEC populations, there was no convincing evidence as to whether reconfiguration affected mortality risk. However, evidence of increased risk was identified from studies of patients with acute myocardial infarction, particularly 1 to 4 years after reconfiguration. Evidence for other conditions was inconsistent or very limited. Conclusions We found insufficient evidence to determine whether increased distance to UEC increases mortality risk for the general population of people requiring UEC, although this conclusion may not extend to people with specific conditions.
Background
The impact of large-scale changes to the delivery of health services (often referred to as service reconfiguration) is important to health professionals, health service managers, and patients and the public. Programmes of service reconfiguration in the English National Health Service (NHS) are currently being implemented at a local level to deliver new models of care such as integrated care systems (ICS) [1]. Proposed reconfigurations may increase travel time and/or distance for some patients to reach their nearest hospital emergency department (ED) or other urgent and emergency care (UEC) facility, for example by closing EDs or replacing a full ED with an urgent care centre or minor injury unit. The rationale for reconfiguration is that by concentrating resources in fewer specialist centres, patients with severe acute conditions will receive better quality care and achieve better outcomes. Patients with less serious conditions will be catered for by a local urgent care centre/ minor injury unit or by triage at a large ED.
Many communities value their local UEC services and perceive that proposed changes which may increase travel time and/or distance could worsen outcomes for patients, particularly those requiring emergency medical or obstetric care [2]. In addition to increased morbidity/mortality, potential harmful effects of reconfiguration could include financial costs for patients/families; overcrowding and longer waiting times at large EDs; environmental effects of extra road journeys; and disruption to existing clinical relationships and pathways. Commissioners and service providers need evidence regarding the impacts of reconfiguration not only on patient outcomes, but also for the wider healthcare system [3]. For example, commissioners may have questions about effects on other provision such as ambulance and community-based services. Providers may face difficulties in staffing other services if they are no longer providing emergency care.
The recent closED study [3] analysed data from five locations in England where emergency departments (EDs) were downgraded between 2009 and 2011. While the authors found no evidence of an impact on mortality (despite patients having to travel further to access an emergency facility), the study did detect evidence of an effect on the UEC system as a whole, such as an increased burden on emergency care providers. The aim of this systematic review was to assess the international evidence on the effects of reconfiguration that increases the distance people have to travel (and/or the time taken) to access emergency care. We defined reconfiguration to include large-scale system change, such as relocation of hospitals, (re) location of specialist care, and changes in provision of urgent/emergency/out-of-hours care [2]. This definition would exclude small-scale change, for example at hospital ward level or within a general practitioner (GP) practice.
The work formed one strand of a larger project, funded by the UK National Institute for Health Research, and the full technical report will be published in due course (Chambers et al., Health Services and Delivery Research, in preparation).
Methods
The protocol for this review was registered prospectively on the PROSPERO database (registration number CRD42019123061). The research question was: what is the evidence regarding effects on patients and the health system of service reconfigurations that increase the time/distance for some patients to reach an UEC facility? A list of potentially time-sensitive conditions requiring treatment at a UEC facility was developed in advance (see inclusion and exclusion criteria below). The list prioritised conditions more likely to be affected by service reconfiguration or requiring a decision as to whether to travel further to reach a more specialist facility. However, this list was not intended to be exhaustive and studies of other conditions were included if they met the other inclusion criteria. We searched seven bibliographic databases for literature published from 2000 to February 2019 (the Medline search strategy is provided in Additional file 1). Citation tracking of included studies was performed on Web of Science (WOS) and Google Scholar in April 2019. Given the diffuse nature of the topic and associated terminology, the reference lists of all included articles were manually screened to identify additional studies.
Literature search and screening
Search results were imported into EndNote X8.2 (Philadelphia, USA: Clarivate Analytics), and automatic and manual deduplication was conducted. Records were imported into EPPI-Reviewer 4 software (London, UK: EPPI Centre) for screening, data extraction, and quality assessment. The search results were screened against the inclusion criteria by a single reviewer, with a 10% sample screened by a second reviewer. Uncertainties were resolved by discussion amongst the review team.
Inclusion and exclusion criteria

Population
Population includes adults or children with conditions that require emergency treatment including but not limited to acute myocardial infarction (AMI), stroke, major trauma, severe exacerbations of asthma, chronic obstructive pulmonary disease or complications during pregnancy and the neonatal period. In practice, eligible studies could include data on any patient wishing to access an UEC facility.
Intervention
Intervention includes changes to the delivery of healthcare services (service reconfiguration) which have an effect on the time or distance for patients to access an UEC facility. The review included reconfigurations that have an effect on access to any urgent and emergency care services including ambulance services, maternity services and hospital emergency departments.
Comparison
Comparison entails outcomes (from studies with or without control sites) before and after a service reconfiguration which has an effect on time/distance to UEC.
Outcomes
Outcomes entail any quantitative or qualitative outcomes for patients including mortality/morbidity, or other perceived or measured effects. Also outcomes or impacts on the health system such as non-transportation to hospital, emergency admissions, increase or decrease in contacts/service usage.
Setting
Setting includes the UK and other developed countries. Absolute travel distances and density of population (which will affect distribution and density of healthcare facilities) were taken into account in assessing applicability of findings to the UK.
Study design
Studies of any design were eligible for inclusion.
Other inclusion criteria
• Literature published since 2000
• Literature published in English
• Grey literature in the form of service evaluations or reports from the UK

Other exclusion criteria

• Studies that describe reconfigurations or initiatives without providing any quantitative or qualitative data
• Conceptual papers and projections of possible future developments
• Studies conducted in low- or middle-income country health systems
• Studies conducted in high-income countries that are not considered comparable to the UK health system
• Studies of air ambulance services, which were excluded because these services are not funded by the NHS in England
• Theses, conference abstracts, articles in professional magazines, books and book chapters
Data extraction and quality assessment
We extracted and tabulated key data from the included studies, including study design, population/setting, results and author-reported key limitations. The full data extraction template is provided in Additional file 2. Data extraction was performed by a single reviewer with a 10% sample checked by a second reviewer for accuracy and consistency. Quality (risk of bias) assessment was undertaken using the Joanna Briggs Institute checklist for quasi-experimental studies [4]. This nine-question checklist was chosen because the meaning of included items was considered easily understandable and because the questions are applicable to a wide range of non-randomised study designs. Quality assessment was performed by a single reviewer with a 10% sample checked for accuracy and consistency.
Evidence synthesis
We performed a narrative synthesis structured around the pre-specified research questions and outcomes. We first described the characteristics of the group of studies as a whole. We then summarised the results in terms of the types of conditions included (e.g. general UEC population, acute MI, trauma). Further analyses assessed the relevance of the study setting to the UK health system and explored rural, compared to urban and suburban, settings. The narrative synthesis was drafted by the first author and revised with input from all the authors.
Summary table reports were generated from extracted data using the EPPI-Reviewer program. Overall strength of evidence was assessed using a previously described method [5]. Evidence was rated as comparatively 'stronger', 'weaker', 'inconsistent' or 'very limited' based on volume, strength and consistency. Specifically, 'stronger evidence' represented generally consistent findings in multiple studies with a comparator group design or comparative diagnostic accuracy studies; 'weaker evidence' represented generally consistent findings in one study with a comparator group design and several noncomparator studies or multiple non-comparator studies; 'very limited evidence' represented an outcome reported by a single study; and finally, 'inconsistent evidence' represented an outcome where fewer than 75% of studies agreed on the direction of effect. All studies included in the review were included in the analysis of overall strength of evidence.
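Because these rating rules are purely mechanical, they can be written down as a short decision procedure. The sketch below is a minimal Python illustration, assuming hypothetical inputs (the number of comparator-group and non-comparator studies reporting an outcome, plus the fraction agreeing on the direction of effect); it is not the review's actual tooling.

```python
def rate_evidence(n_comparator: int, n_noncomparator: int,
                  agreement_fraction: float) -> str:
    """Rate overall strength of evidence for one outcome, encoding the
    volume/quality/consistency rules described in the text (simplified)."""
    if n_comparator + n_noncomparator == 1:
        return "very limited"      # outcome reported by a single study
    if agreement_fraction < 0.75:
        return "inconsistent"      # <75% of studies agree on direction of effect
    if n_comparator >= 2:
        return "stronger"          # consistent findings, multiple comparator designs
    return "weaker"                # consistent findings, at most one comparator design

# Example: one controlled study plus three before-after studies, all agreeing
print(rate_evidence(n_comparator=1, n_noncomparator=3, agreement_fraction=1.0))
# -> weaker
```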
Public and patient involvement
We elicited input from the Sheffield Evidence Synthesis Centre public advisory group, who contributed across all the stages of the review including helping to understand the importance of the question to patients and the public and interpreting the findings. The advisory group emphasised how international health systems may not be directly comparable to the UK and encouraged the research team to be clear regarding applicability of international evidence.
Study selection
The PRISMA flow diagram (Fig. 1) summarises the study selection process. Calculation of the Kappa coefficient demonstrated good agreement between reviewers for the sample of double-screened records (K = 0.729; 95% CI 0.542-0.916). Reasons for studies being excluded at the full-text stage included their covering access to services generally, not specifically emergency care; the intervention not being relevant (e.g. public access defibrillators); or the study discussing changes to services without relating outcomes to travel time or distance.
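For readers who wish to reproduce the agreement statistic, the fragment below shows one standard way to compute Cohen's kappa with an approximate large-sample 95% confidence interval. The paired include/exclude decisions are invented for illustration; the review team may have used different software.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary decisions, with an
    approximate large-sample 95% CI."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    po = np.mean(r1 == r2)                       # observed agreement
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)           # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po) / n) / (1 - pe)   # approximate standard error
    return kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# Illustrative data: 1 = include, 0 = exclude, for 50 double-screened records
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 50)
b = np.where(rng.random(50) < 0.9, a, 1 - a)     # ~90% raw agreement
print(cohens_kappa(a, b))
```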
Characteristics of included studies
Table 1 summarises the characteristics of the included studies. Six studies were conducted in the USA, with only two [3,13] from the UK. The remaining studies were conducted in other European countries; there were no studies from Canada, Australia or New Zealand. Six of the included studies focused on ED reconfiguration, providing data on patients with many different types of emergency conditions. Three looked specifically at patients with acute MI requiring access to percutaneous coronary interventions (PCI). Two studies examined the effects of service changes involving specialist trauma centres, and one looked at the effects of maternity unit closures in France. The studies used a variety of observational designs, with before-after and cohort designs being most common. Knowles et al. [3] and Mustonen et al. [12] were the only studies that compared reconfiguration sites with independent control sites where no reconfiguration had taken place.
Risk of bias
Results of the quality appraisal are presented in Additional file 3. Many of the studies were inherently at high risk of bias because of lack of an independent control group. The most common design was before-after and only four studies compared outcomes between settings with and without changes in distance/time [3,6,10,12].
Most of the included studies were clear about the temporal relationship of the variables of interest (i.e. which was the 'cause' and which was the 'effect'; Q1), although the issue was sometimes confused by the use of linked datasets. Similarity between populations being compared (Q2) varied across the studies. It was also sometimes unclear whether comparison groups were being treated similarly other than the intervention or exposure of interest (Q3). This related to differences over time as well as to studies recruiting clinically diverse populations. Absence of a separate independent control group (Q4) was noted in most of the studies and few studies carried out measurements at multiple time points before and after an intervention or exposure (interrupted time series design; Q5). Completeness of follow-up (Q6) did not show a clear pattern across studies. Most studies measured outcomes in a standard (Q7) and reliable (Q8) way, although again some exceptions were identified. Statistical analysis (Q9) was judged to be appropriate with the exception of one study which presented summary data without any statistical analysis [13]. As is the case for all observational studies, the possibility of unmeasured confounders affecting the results could not be ruled out.
Effects on mortality
Most of the included studies reported changes in mortality rates following reconfiguration (Table 2). For the two large studies of general UEC populations, people experienced increases in time/distance of up to 33 miles [10] or 25 min [3]. However, most increases were considerably smaller (median less than 1 mile in Hsia et al. [10]) and neither study provided evidence of an effect on mortality. For patients with MI, increases of over 30 min were associated with significant increases in mortality, but in a large US study, only 0.2% of patients fell into this group [14]. Findings for trauma centre and maternity closures were less clear because of the small number/size of studies.
In summary (Table 3), stronger evidence (derived from studies with control groups) did not support or refute the hypothesis that reconfiguration resulting in increased travel time/distance affected mortality rates. In other words, there was no evidence of an effect, making it difficult to draw firm conclusions from this evidence. By contrast, there was evidence of increased risk from studies restricted to patients with acute MI. Evidence for other conditions was inconsistent or very limited. It was notable that none of the included studies had collected data relating to stroke patients specifically (although people with stroke were an identifiable subgroup in the study by Hsia et al. [10]).
While the evidence on mortality for the trauma population was inconsistent overall, results from two studies suggested that trauma centre closure may impact negatively on outcomes at remaining trauma centres within a region [11,16]. However, this finding may be of limited relevance to the UK, where the implementation of a network of trauma centres in recent years means that availability of trauma centres is matched to needs and significant reconfiguration resulting in closures is unlikely.
Main findings
In practice, reconfiguration of UEC services in a publicly funded health system like the UK NHS sometimes means closure of EDs or downgrading by reducing the opening hours or the variety of services provided. This is generally considered likely to increase travel distance/time for the majority of patients in the affected area as well as the overall average distance/time to reach a suitable UEC facility. However, the studies included in this review suggested that such increases may be small (less than 1 mile or 10 min) for most people, with a small minority experiencing increases of 30 miles/30 min or more [3,10,14].
Overall, the studies found no convincing evidence as to whether increasing travel time or distance increased mortality risk for general populations of patients attending UEC facilities. The reconfigurations evaluated in these studies reported relatively small effects on average travel distance/time, a pattern consistent with UK experience [13]. There was some evidence of an increased risk from studies restricted to patients with acute MI, while evidence for other conditions was inconsistent or very limited. This suggests the possibility that the effect of increased distance or time may be diluted in the general UEC population by the presence of patients with less serious conditions and minimal short-term risk of death. However, one of the largest studies found no change in in-patient mortality for either the population as a whole or subgroups with specific emergency conditions [12]. The implications for the health system as a whole of reconfiguring a key part of UEC might be conjectured as being substantial. For example, attendance at remaining EDs in the area may increase, EMS staff may be required to cover a larger catchment area, and hospitals may face difficulties in staffing other services if no longer providing emergency care means that they are perceived as less-prestigious places to work and provide reduced clinical and training opportunities.
It is important to note that the findings of this review suggest that the effects of service reconfiguration on outcomes (particularly patient outcomes) may be short-lived, with health systems adapting to the new situation in the subsequent few years. In the study by Avdic [8], effects of ED closures on acute MI mortality were only statistically significant for the first year after closure, and Shen et al. identified a 4-year transition period [14]. Efforts by healthcare commissioners and providers to mitigate the effects of reconfiguration may be key to minimising the effect of changes. Avdic referred to increased investment in both emergency service provision and prevention, although the study did not evaluate whether these actually occurred [6]. A study from the USA highlighted how early notice of an ED closure was followed by close working amongst providers to minimise the effect on the EMS system in the city [8]. Also in the USA, Yaghoubian et al. reported how changes in trauma centre staffing and organisation were put in place to prepare for the closure of a nearby centre [16]. The insights provided by these studies indicate the need for greater understanding of how health service stakeholders prepare for the system-wide impacts of changes that require patients to travel further for treatment [3].
Service reconfiguration is often advocated by decision-makers who argue that increased patient volume and/or specialisation in a smaller number of UEC facilities will increase the overall quality of patient care. This review did not directly address the relationship between volume of contacts and outcomes, as this area has been the subject of a large volume of research. However, one study included in our review attributed successful outcomes following a trauma centre closure partly to staff gaining experience from treating more patients [16]. When considering the influence of treatment by highly skilled specialist staff on patient outcomes, the substantial body of evidence for the benefits of transporting patients with stroke [17], AMI or severe trauma to specialist centres (which may be further away), rather than attending nearer non-specialist facilities, should be taken into account.
Strengths and limitations
This systematic review was undertaken by an experienced team including both methodological and topic experts. We performed a thorough search for literature published since 2000, including supplementary searching methods such as citation tracking. The review also benefitted from the input of an experienced public advisory group.
Because of resource constraints, we abbreviated the review process by using a single reviewer to perform study selection, quality assessment and data extraction, with checking of a 10% sample by a second member of the review team. Double independent performance of these stages was not a viable option, but analysis of the double-screened sample showed good agreement between reviewers.

Much of the research included in our review originates from non-UK settings, and we have tried to keep applicability in mind throughout. The US health system is organised and funded very differently from the UK NHS but there is no reason to suppose that this would affect the relationship between distance/time and outcomes for patients with a particular condition. Given the low quantity and quality of evidence we expected to include in the review, we made a pragmatic decision to include studies from the USA with appropriate caveats. Absolute distances and times of travel vary within countries, including the UK, but large countries such as the USA, Canada and Australia are likely to have longer travel times/distances on average outside urban areas. This is also true for some of the Scandinavian countries, where travel times can be long because the population is centred in fewer areas.
Interpretation of the findings of systematic reviews should be guided by the quality and strength of the included evidence. We have assessed methodological quality of the individual studies and overall strength of evidence for key findings using a scheme successfully employed in previous reviews. Some of the included studies were judged to be at relatively high risk of bias because of their observational design and the absence of an independent control group. On the positive side, most studies acknowledged and attempted to adjust for the influence of confounding factors and some were large and/or long term. In view of this uncertainty, we have been conservative in assessing the overall strength of evidence for effects and associations (see Table 3).
Implications for further research
There is a need for further time series analyses along the lines of the closED study [3] to examine the longer-term effects of service reconfigurations on the whole UEC system and to take into account the impact of other service and technological changes over time. While such studies should ideally be controlled, uncontrolled time series also have some value and offer fewer logistical challenges.
Research is needed to better understand how local and regional health systems plan for, and adapt to, increases in travel distance/time. As suggested by other researchers [3], this could take the form of qualitative research and/or documentary analysis. The current programme of service reconfiguration provides opportunities for prospective studies across diverse settings. Research should aim to capture the perspectives of different stakeholders including health professionals, managers in both commissioner and provider organisations and the public.
Analysis of routine data will enable researchers to examine whether UEC reconfigurations reduce overall demand for ED care or merely displace demand to other parts of the health system. Data can also be used to examine the nature and extent of variation between different localities with a view to reducing unnecessary variation and improving overall quality of care.
Research is needed to assess patient outcomes other than mortality and hospital admission/length of stay. This could include effects of service reconfiguration on families who may incur additional social and financial costs because of increased travel distance/time to visit patients.
Conclusions
This systematic review found no convincing evidence to support or refute the perception that service changes that increase average travel time or distance increase mortality risk for general populations of patients attending UEC facilities. Large observational studies suggested that increases are small for most of the population affected. There was some evidence of an increased risk from studies restricted to patients with acute MI, while evidence for other conditions was inconsistent or very limited.
The relatively low quality of much of the research suggests that findings should be interpreted cautiously. In particular, 'no evidence of increased risk' does not necessarily mean 'evidence of no increase in risk' as the finding could be overturned by further research in the future.
Research priorities include work to examine the longer-term effects of service reconfigurations on the whole UEC system and to better understand how local and regional health systems plan for and adapt to increases in travel distance/time.
At the time of completing this paper, health services worldwide were confronted with unprecedented pressure on UEC services as a result of the coronavirus (COVID-19) pandemic. The effects of this event on attitudes to UEC service provision and reconfiguration remain to be seen.
Additional file 1. Medline search strategy.
Additional file 2. Data extraction template.
Additional file 3. Quality assessment results. | 2020-05-20T14:39:28.524Z | 2020-05-20T00:00:00.000 | {
"year": 2020,
"sha1": "01b3b4982949751923c6a5b9168f95a834b31b46",
"oa_license": "CCBY",
"oa_url": "https://bmcmedicine.biomedcentral.com/track/pdf/10.1186/s12916-020-01580-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01b3b4982949751923c6a5b9168f95a834b31b46",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251533575 | pes2o/s2orc | v3-fos-license | Diagnostic challenge presented by right atrial mass: A report of two cases
Right atrial masses raise 3 major diagnostic possibilities: tumors, thrombi, or vegetations. We present 2 cases: first, a 34-year-old male with no medical history, who presented with dyspnea, pleuritic pain, and fever; and second, a 65-year-old male with similar symptoms and a history of left renal carcinoma. Both patients had right atrial masses found on transthoracic echocardiography. Cardiac magnetic resonance imaging and 18F-FDG PET were necessary, revealing a thrombus in the first patient and a tumoral thrombus in the second. A multimodality imaging approach to right atrial masses is essential for proper diagnosis and therapeutic decision-making.
Introduction
Right atrial (RA) masses are a diagnostic challenge. Differential diagnoses include primary tumors, metastases, vegetations, thrombi, and in some cases, artifacts [1,2]. Thrombi in the RA carry a high risk of embolism and increased mortality. Anticoagulation alone may not be sufficient for RA thrombi, and other treatments such as thrombolysis or surgery should be considered. It is always necessary to rule out the possibility of tumoral thrombus, since its treatment and approach are very different. We present 2 clinical cases with similar clinical and echocardiographic findings in which clinical history and multimodality imaging were decisive for diagnosis and therapeutic decision-making.
Case 1
A 34-year-old male without past medical history was admitted to the emergency room (ER) with 3 days of fever, dyspnea, cough, and left pleuritic chest pain. Vital signs were: BP 100/85 mm Hg, HR 60 beats/min, RR 25 breaths/min, and O2 saturation 85% on room air. Physical examination revealed reduced breath sounds over the mid and lower left hemithorax.
Hemogram and blood chemistry tests were normal. ECG showed sinus rhythm with T-wave inversion in the precordial leads (V1-V4). Chest X-ray revealed prominence of the pulmonary arteries and an elevated left hemidiaphragm due to atelectasis (Fig. 1), findings that were interpreted as indirect signs of pulmonary hypertension.
Based on the symptoms and findings described above, clinical suspicion of pulmonary embolism (PE) was established, with a low pretest probability (Wells score < 4 points). The D-dimer test was positive at 3500 ng/mL (reference < 500 ng/mL). CT pulmonary angiography confirmed the PE diagnosis with left pulmonary artery occlusion. Transthoracic echocardiography showed a large mobile mass in the right atrium (RA) arising from the inferior vena cava (IVC) (Video 1). For better mass characterization, transesophageal echocardiography (Video 2) and cardiac magnetic resonance (CMR) imaging were performed. The mass was hypointense on T1W and T2W sequences, without contrast uptake on the first-pass perfusion sequence and without any late gadolinium enhancement (Fig. 2). These findings suggested the possibility of RA thrombus. An abdominal scan revealed the origin of the mass in the right renal vein, reaching the RA through the IVC (Video 3). No other masses were found. Enoxaparin was started at a dose of 1 mg/kg twice daily.
The size and characteristics of the mass led us to suspect neoplastic disease versus thrombus. The heart team decided to remove the mass for diagnostic and therapeutic purposes (Fig. 3). Histopathological findings confirmed thrombus without tumoral cells.
The patient was discharged on anticoagulation therapy. To date, clinical follow-up has been satisfactory and he has remained asymptomatic, without bleeding or new thrombotic events.
Case 2
A 65-year-old male presented to the ER with a 2-month history of right hypochondrium pain and progressive dyspnea on minimal exertion. His medical history included hypertension, diabetes, and PE diagnosed the previous year. Because the PE was unprovoked, extension studies were performed, revealing a left renal carcinoma (T3bN0M0), and left nephrectomy was performed. Medications included telmisartan 40 mg daily, metformin 850 mg twice daily, and enoxaparin 1 mg/kg twice daily. Vital signs were: BP 120/70 mm Hg, HR 120 beats/min, RR 23 breaths/min, and O2 saturation 80% on room air. Physical examination revealed reduced breath sounds on auscultation, with no other relevant findings.
Transthoracic echocardiography revealed a right atrial mass arising from the IVC, similar to Case 1. CMR and abdominal MRI were also performed. A large RA mass (50 × 41 × 39 mm) with regular borders was found, arising from the renal veins (Video 4). The mass was isointense on T1W and hyperintense on T2W sequences (Fig. 4). First-pass perfusion imaging showed early contrast uptake, and late gadolinium enhancement was seen in the periphery of the mass (Video 5).
Suspecting recurrence of the renal carcinoma, a PET-CT scan was performed, showing abnormal 18F-FDG uptake with heterogeneous distribution, a maximum SUV (standardized uptake value) of 3.64, and a late SUV of 3.31. No other areas of radiotracer uptake were identified (Fig. 5).
All findings suggested tumoral thrombus related to probable recurrence of renal carcinoma. A lesion biopsy was performed confirming the presence of clear cell renal carcinoma. Due to metastatic involvement, the lesion was considered unresectable.
Treatment with pembrolizumab and axitinib was recommended. Currently, he has completed 1 year since his last hospitalization and remains stable with outpatient follow-up by the oncology department.
Discussion
RA masses are a diagnostic challenge, and the differential diagnosis includes several lesions (Fig. 6). Differentiation between tumors and thrombotic masses is important for predicting survival and making treatment decisions.
The most frequent cardiac tumors are secondary (metastases) and often arise from malignant neoplasms of the lung, breast, hematological system, and esophagus, and from melanomas [3]; they mainly affect the right heart chambers due to hematogenous and/or lymphatic spread and should be ruled out in patients with a history of malignancy [4].
As in Case 2, other authors have reported metastatic involvement of the cardiac chambers from renal carcinoma [5], with similar imaging findings. Primary tumors are unusual, with a secondary-to-primary ratio of 20:1 [6]. Only 10% of primary heart tumors are malignant, and some of them can affect the RA, including myxoma (25% in the RA), lipomatous septal hypertrophy, paraganglioma, schwannoma, and angiosarcoma [7].
Thrombi can also arise in the RA, some of them related to implantable devices, morphologic abnormalities (chamber dilation with blood stasis, etc.), hypercoagulability states, or embolic phenomena, or they may present as tumoral thrombi.
Thrombotic lesions are masses of intermediate echogenicity which may change over time, with the presence of calcifications in older thrombi. Accurate evaluation of thrombotic RA lesions usually requires multimodal imaging. CMR is especially useful because it allows tissue characterization, helping with the differential diagnosis between thrombotic lesions and other types of masses. Soft thrombi are usually hypointense on T1W and T2W sequences. Because soft thrombi are nonvascularized masses, they do not show contrast uptake in the first-pass sequence and appear hypointense in both early and late enhancement sequences, as in the first case [4].
Tumoral thrombi are clusters of tumoral cells associated with thrombus, usually related to advanced renal cell carcinomas and seen in 12%-19% of those patients [8], with findings similar to those of both thrombi and tumoral masses. Other malignancies that can cause tumoral thrombus formation include hepatocellular carcinoma, Wilms tumor, adrenocortical carcinoma, and retroperitoneal tumors. These masses usually affect the renal veins and reach the RA through the IVC in 40% of patients [9].
A multimodality imaging approach is always necessary in these cases. Soft thrombi are highly mobile and have a worm-like shape, but show no late gadolinium enhancement. Tumoral thrombi are larger, less mobile, show late gadolinium enhancement, and are sometimes seen in association with a tumoral mass. Additionally, tumoral thrombi are metabolically active, which allows their identification through the uptake of 18F-FDG, as in patient 2 [10].
The association of right heart thrombus and PE confers a higher risk of hemodynamic compromise and death [11]. Clinical trials comparing anticoagulation, thrombectomy, and thrombolysis have shown mortality of 28.6%, 23.8%, and 11.3%, respectively [12]. Surgical results depend on the experience of the surgical team. For decision-making, the morphologic characteristics and the possibility of other diagnoses should be considered. Surgery in these cases is not only a therapeutic tool, but also an important diagnostic tool. For tumoral thrombi, the treatment is surgical resection; however, in our case the heart team considered it a tumor relapse with cardiac metastasis, which made the patient ineligible for surgery, with palliative treatment being the best option for this patient.
Conclusions
Right atrial masses are a diagnostic challenge with 3 major possibilities: tumors, thrombi, or vegetations. Right-chamber lesions, particularly large, irregular, and invasive-looking ones, usually correspond to metastatic or malignant primary tumors.
Patient consent
Written informed consent for publication of these cases was obtained from the patients. Their privacy was guaranteed.
Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.radcr.2022.07.045 . | 2022-08-13T15:06:05.031Z | 2022-08-10T00:00:00.000 | {
"year": 2022,
"sha1": "cfae839d86e30050abc403221f90d5ad8dcca6dd",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.radcr.2022.07.045",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b21ed482025a74c66c64718ae0fc88958b0633c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55781145 | pes2o/s2orc | v3-fos-license | The evaporation of the water-sodium chlorides solution droplets on the heated substrate
This work presents an experimental study of the evaporation of a sessile water-sodium chloride solution drop into the open atmosphere on a solid substrate (anodized aluminum) under varying heat flux. The main parameters defining the drop profile were obtained: contact diameter, contact angle, and drop height. The specific evaporation rate was calculated. The influence of the initial concentration of the evaporated solution on the specific evaporation rate has been determined: the specific evaporation rate decreases with increasing concentration.
Introduction
Intensification of heat and mass transfer of an evaporating liquid droplet on a solid substrate is a promising direction for the modernization of thermal energy technologies and the design of high-performance heat exchange systems.
Currently, the physical processes of droplet evaporation at the "solid-liquid-gas" contact line are insufficiently studied, which hinders the development of technologies in the chemical industry for drying liquid dispersions (pneumatic, spray, rotary, drum, and spiral dryers), in aviation in addressing the problem of aircraft icing, in mechanical engineering in the design of heat engines, in medicine in the study of DNA/RNA microstructures, and in optoelectronics in the development of semiconductor lasers and light-emitting diodes.
The parameter that characterizes the process of droplet evaporation, and in particular its heat and mass transfer, is the specific evaporation rate.
Heat and mass transfer processes during the phase transition of drops have been studied for a long time. Experimental and numerical studies have estimated the dependence of the evaporation rate on the liquid vapor pressure [1], the radius of the droplet sphere [2], the initial temperature and the droplet size [3,4], the temperature of the droplet surface [5], the gas pressure and surface temperature [6], the contact angle [7], the contact radius [8], and the properties of the wetting film on the surface [9,10]. However, all of this research was conducted using single-component liquids: acids, water, various alcohols, and hydrocarbon compounds. It should be noted that studies of water-salt solution droplet evaporation have begun only recently. An experimental study of the evaporation of aqueous potassium acetate and sodium iodide solution droplets [11] was conducted with substrate heating in the temperature range of 50-100 °C.
It was found that the average evaporation rate of the solutions is much lower than the evaporation rate of the pure liquid at the same temperature. Characteristic curves of the average evaporation rate versus droplet surface temperature were obtained. These curves show that the average evaporation rate decreases with increasing salt concentration in the solution.
It is known that a ring-like precipitate forms on the periphery of the drop during the evaporation of a crystallizing solution droplet. This effect is called the "coffee ring" [12]. Physically, this phenomenon was explained in [13,14] through experiments with solids dispersed in the liquid.
However, there is still a lack of understanding of the heat and mass transfer processes involved in the evaporation of water-salt solutions on a solid surface.
Research technique
The purpose of this paper is the experimental study of water-salt solution droplet evaporation on a heated solid surface. In particular, it is necessary to assess the impact of the initial concentration and the temperature of the solid substrate on the specific evaporation rate, and to determine the dynamics of the evaporation rate during the phase transition of the drop.
The research was conducted using the experimental setup shown in Fig. 1. It consists of shadow and Schlieren optical systems [15-17].
A high-speed camera (Fastvideo-500M) was used in each system. This camera records up to 500 frames per second at a resolution of 1280 × 1024 pixels. The substrate, made of anodized aluminum (Fig. 2), was heated with a Peltier element (thermoelectric transducer type A-2TM 8.0-127/126-1.4HR1).
It should be noted that the structure of the substrate surface is formed by concentrically arranged grooves (Fig. 2b).
Measurement of the substrate temperature was performed with an eight-channel Agilent 34901A. Three chromel-copel thermocouples (Fig. 3) with a measurement error of ±0.1 °C were used as temperature sensors.
Droplet evaporation was carried out under three heating modes of the working area (Table 1).
Based on the results of a preliminary experiment, the values of the variable factors were defined at three levels (Table 2).
Recording of the experimental data (contact angle, contact radius, height) was carried out using the KRUSS program, which processed the snapshots obtained during the experiment. The initial values of the curves plotted from the experimental data correspond to the moment a drop was placed on the substrate. Recording of the investigated parameters was performed until the formation of salt rings in the final stages of evaporation.
Results and discussion
The specific evaporation rate was determined by the following equation:

$W = \dfrac{V_i - V_{i+1}}{\tfrac{1}{2}\,(A_i + A_{i+1})\,(t_{i+1} - t_i)}$  (1)

where $V_i$, $A_i$ and $V_{i+1}$, $A_{i+1}$ are the volume and surface area of the droplet at times $t_i$ and $t_{i+1}$, respectively.
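To make the calculation concrete, the sketch below evaluates Eq. (1) for a drop approximated as a spherical cap, with volume and liquid-gas surface area derived from the measured contact radius and contact angle. The spherical-cap geometry, the exact form of Eq. (1) as reconstructed above, and the sample numbers are illustrative assumptions, not the authors' processing code.

```python
import numpy as np

def cap_volume_area(contact_radius, contact_angle_deg):
    """Volume and liquid-gas surface area of a sessile drop modeled
    as a spherical cap (assumed geometry)."""
    theta = np.radians(contact_angle_deg)
    a = contact_radius
    h = a * np.tan(theta / 2.0)                  # cap height
    V = np.pi * h * (3.0 * a**2 + h**2) / 6.0    # spherical-cap volume
    A = np.pi * (a**2 + h**2)                    # curved surface area (= 2*pi*R*h)
    return V, A

# Hypothetical time series: time (s), contact radius (mm), contact angle (deg)
t      = np.array([0.0, 30.0, 60.0, 90.0])
radius = np.array([2.10, 2.08, 2.05, 2.01])
angle  = np.array([75.0, 68.0, 60.0, 51.0])

V, A = cap_volume_area(radius, angle)
# Eq. (1): volume lost per mean surface area per elapsed time, mm^3/(mm^2*s)
W = (V[:-1] - V[1:]) / (0.5 * (A[:-1] + A[1:]) * np.diff(t))
print(W)
```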
Figures 4-6 demonstrate the calculated specific evaporation rate of water-sodium chloride solution droplets on anodized aluminum. Drop volumes: 0.02 ml, 0.04 ml, 0.06 ml; solution concentrations: 4.8%, 9.1%, 16.7%. The drops evaporated under the three heating modes.
According to the analysis of Figs. 4-6, the specific evaporation rate decreases with increasing initial salt concentration in the solution. Similar conclusions were obtained in [11], but the authors did not give an explanation for this effect. It is known that the heat of vaporization increases with increasing concentration of a water-salt solution, and as a result the amount of heat needed for evaporation increases. This fact explains the lower evaporation rate of droplets with a high initial salt concentration.
It was found that the specific evaporation rate first decreases (by up to 30%) during the evaporation process and then increases. The calculated specific evaporation rate is in excellent agreement with results obtained for drops of distilled water (104.7 ml) lying on a substrate with a surface temperature of 64 °C. (Heating modes shown in the figures: (a) U = 5 V, I = 2.8 A, t_w = 62.5 °C, t_s = 61 °C; (b) U = 6 V, I = 3.2 A, t_w = 73.5 °C, t_s = 72.9 °C; (c) U = 7 V, I = 3.7 A, t_w = 84.5 °C, t_s = 83.1 °C.) The difference is that in the final evaporation stage the specific evaporation rate of water-salt solutions does not increase as abruptly and does not reach values higher than the specific evaporation rate of the initial stage. It should be noted that this behavior is characteristic of the 4.8% and 9.1% water-salt solutions (Figs. 4-6 (a-b)). It was found that the evaporation of water-salt solution drops on the surface at 84.5 °C differs from the other cases. The initial stage of evaporation is characterized by a brief (5-10 second) reduction of the specific evaporation rate, after which it increases monotonically (Figs. 4-6 (c)). This is probably due to the high surface temperature: the drop heats up rapidly, its temperature approaches the boiling point, the evaporation process is more intensive, and diffusion from the droplet surface increases. For the drop with the maximum concentration (16.7%), it was found that after 30-50 seconds of growth the specific evaporation rate decreases abruptly. This moment is accompanied by the formation of a massive salt-ring crust around the perimeter of the drop and the almost complete drying up of the solvent (Fig. 7). This effect is probably the cause of the rate decrease.
In the first heating mode (Figs.
Conclusion
The effects of the initial concentration, surface temperature, and droplet size on the specific evaporation rate have been estimated in this experimental study of water-salt solution droplet evaporation. The dynamics of changes in the evaporation rate during the phase transition of the drop have been determined.
Table 1. Parameters of the studied substrates at different heating modes.

Table 3. Decrease of the specific evaporation rate depending on the volume and concentration in the first heating mode.

Table 4. Decrease of the specific evaporation rate depending on the volume and concentration in the second heating mode.

Table 5. Decrease of the specific evaporation rate depending on the volume and concentration in the third heating mode.
"year": 2014,
"sha1": "51ecfabd73b057032256de49c0d45165b06657d4",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2014/13/epjconf_toet2014_01039.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "51ecfabd73b057032256de49c0d45165b06657d4",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
267634891 | pes2o/s2orc | v3-fos-license | Artificial Intelligence Assistive Software Tool for Automated Detection and Quantification of Amyloid-Related Imaging Abnormalities
Key Points Question What is the clinical performance of icobrain aria, an artificial intelligence (AI)–based assistive software tool for detection and quantification of amyloid-related imaging abnormalities (ARIA)? Findings In this diagnostic study that included 16 radiologists and 199 pairs of retrospective brain magnetic resonance images from patients with Alzheimer disease receiving amyloid-β–directed antibody therapy, a statistically significant improvement in diagnostic radiological reading performance was associated with the use of the software. Meaning These results suggest that AI-based assistive software may enhance the diagnostic accuracy of safety magnetic resonance imaging monitoring of ARIA for patients receiving amyloid-β–directed antibody therapies.
Brain orientation
The brain image orientation is changed to the LAS (left-anterior-superior) convention.
Single time point CNN segmentation of ARIA-H abnormalities
TP1 and TP2 T2*-GRE images are processed independently and are provided as input to an ensemble of multi-class (ARIA-H microhemorrhage vs. ARIA-H superficial siderosis vs. background) CNN models. The probabilistic predictions of the models are averaged to obtain a probabilistic map of ARIA-H microhemorrhages and ARIA-H superficial siderosis. Postprocessing is performed, including thresholding to obtain ARIA-H binary segmentation masks ("any ARIA-H" vs. "background") for microhemorrhages and superficial siderosis separately, and classification per brain region to obtain masks of labeled ARIA-H findings per brain region.
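As an illustration only (not the vendor's code), the fragment below reproduces the ensemble-and-threshold postprocessing just described: per-model probability maps are averaged, thresholded into a binary mask, and each connected finding is assigned the brain region it overlaps most. The probability threshold, the random "model outputs" and the two-region atlas are placeholder assumptions.

```python
import numpy as np
from scipy import ndimage

def ensemble_segment(prob_maps, threshold=0.5):
    """Average an ensemble's probabilistic predictions and threshold them
    into a binary mask (threshold value is an assumed placeholder)."""
    return np.mean(np.stack(prob_maps), axis=0) > threshold

def label_findings_by_region(binary_mask, region_atlas):
    """Split a binary mask into connected components and assign each
    finding the region label (1, 2, ...) it overlaps most."""
    components, n = ndimage.label(binary_mask)
    findings = {}
    for i in range(1, n + 1):
        regions = region_atlas[components == i]
        regions = regions[regions > 0]           # ignore atlas background
        if regions.size:
            findings[i] = int(np.bincount(regions).argmax())
    return components, findings

# Toy example: two "models", an 8x8x8 volume, and a two-region atlas
rng = np.random.default_rng(1)
p1, p2 = rng.random((8, 8, 8)), rng.random((8, 8, 8))
atlas = np.ones((8, 8, 8), dtype=int)
atlas[4:] = 2
mask = ensemble_segment([p1, p2], threshold=0.9)
print(label_findings_by_region(mask, atlas)[1])
```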
Single time point CNN segmentation of ARIA-E abnormalities
TP1 and TP2 FLAIR images are processed independently and are provided as input to an ensemble of 2 binary ("any ARIA-E" vs. "background") CNN models. The probabilistic predictions of the 2 models are averaged to obtain a probabilistic map of ARIA-E. Postprocessing is performed, including thresholding to obtain an ARIA-E binary segmentation ("any ARIA-E" vs. "background") mask and classification per brain region to obtain a mask of labeled ARIA-E findings per brain region.
CNN segmentation of brain regions
Segmentation of brain regions is performed on each modality (T2*-GRE and FLAIR), using one CNN model per modality. The obtained brain region masks are used in the postprocessing steps of 1.2 "Single time point CNN segmentation of ARIA-H abnormalities" and 1.3 "Single time point CNN segmentation of ARIA-E abnormalities", respectively.
TP1 and TP2 image registration
The T2*-GRE/FLAIR image of time point 1 is transferred to the T2*-GRE/FLAIR image space of time point 2 using affine registration.
Resampling segmentations to TP2 space
Based on the transformations obtained from Step 2. "TP1 and TP2 image registration", the segmentations obtained from Step 1. "Single time point segmentations" are resampled to the TP2 image space.
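A generic version of these two steps, estimating an affine transform from TP1 to TP2 and resampling a label mask into TP2 space, might look as follows with SimpleITK. The metric and optimizer settings and the file names are assumptions made for illustration; they do not describe the product's internal registration engine.

```python
import SimpleITK as sitk

fixed  = sitk.ReadImage("flair_tp2.nii.gz", sitk.sitkFloat32)   # placeholder paths
moving = sitk.ReadImage("flair_tp1.nii.gz", sitk.sitkFloat32)
mask_tp1 = sitk.ReadImage("aria_e_mask_tp1.nii.gz")             # integer label image

# Affine registration, TP1 -> TP2 (assumed metric/optimizer choices)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(fixed, moving)

# Resample the TP1 segmentation onto the TP2 grid; nearest-neighbour
# interpolation keeps label values intact
mask_in_tp2 = sitk.Resample(mask_tp1, fixed, transform,
                            sitk.sitkNearestNeighbor, 0, mask_tp1.GetPixelID())
sitk.WriteImage(mask_in_tp2, "aria_e_mask_tp1_in_tp2_space.nii.gz")
```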
4. Longitudinal CNN segmentation of new ARIA-E findings

TP1 and TP2 FLAIR images are entered as a pair of images (input image 1 and input image 2) into a two time point CNN model. The CNN model is trained to segment ARIA-E (both with and without swelling) that is present in input image 2 but not in input image 1. The new and increasing ARIA-E segmentation is obtained by providing TP1 as input image 1 and TP2 as input image 2, while the decreasing and disappearing ARIA-E segmentation is obtained by providing TP2 as input image 1 and TP1 as input image 2. Post-processing is performed, including summing the probabilities of both types of ARIA-E (with and without swelling), thresholding on ARIA-E probability to obtain binary masks ("any ARIA-E" vs. background), and classification per brain region to obtain a mask of labeled ARIA-E findings per brain region.
Longitudinal CNN segmentation of new ARIA-H findings
TP1 and TP2 T2*-GRE images are entered as a pair of images (input image 1 and input image 2) into an ensemble of two time point multi-class ("ARIA-H microhemorrhage" vs. "ARIA-H superficial siderosis" vs. background) CNN models. The models are trained to segment ARIA-H microhemorrhages and ARIA-H superficial siderosis that are present in input image 2 and not in input image 1. The probabilistic predictions of the models are averaged to obtain a probabilistic map of ARIA-H microhemorrhages and ARIA-H superficial siderosis. Postprocessing is performed, including thresholding on the predicted probabilities of all classes ("ARIA-H microhemorrhage" vs. "ARIA-H superficial siderosis" vs. background), and classification per brain region to obtain a mask of labeled ARIA-H findings per brain region.
Joint segmentations and annotation of differences
For each type of output (ARIA-E, ARIA-H), the masks obtained from the single and two time point CNN models are combined to obtain corrected segmentation masks for TP1 and TP2. To this end, the respective binary segmentation masks are merged and potential labeling conflicts are resolved by taking into account that the two time point CNN models take precedence over the single time point CNN models.
The new, increasing, resolving and stable abnormalities are labeled based on the difference of the segmentation between TP1 and TP2. A finding is labeled "new" if it is present in TP2 but not in TP1 and does not overlap with a finding in TP1. A finding is labeled "increasing" if it is present in TP2 but not in TP1 and overlaps with a finding in TP1. A finding is labeled "resolving" if it is present in TP1 but not in TP2. A finding is labeled "stable" if it is present in both TP1 and TP2. Following this convention, the segmentation of new and stable ARIA-H microhemorrhages and ARIA-H superficial siderosis and the segmentation of new, increasing and resolving ARIA-E are obtained.
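These overlap rules translate almost directly into code. The sketch below classifies findings as new, increasing, resolving or stable from two binary masks; it deliberately ignores the precedence handling between single and two time point models described above, so it is a simplified assumption rather than the product's logic.

```python
import numpy as np
from scipy import ndimage

def label_changes(mask_tp1, mask_tp2):
    """Classify TP2 findings as new/increasing/stable and TP1 findings
    as resolving, from connected-component overlap (simplified rule)."""
    comp2, n2 = ndimage.label(mask_tp2)
    comp1, n1 = ndimage.label(mask_tp1)
    status = {}
    for i in range(1, n2 + 1):
        this = comp2 == i
        overlaps_tp1 = np.logical_and(this, mask_tp1).any()
        grown = np.logical_and(this, ~mask_tp1).any()   # voxels absent at TP1
        if not overlaps_tp1:
            status[("tp2", i)] = "new"
        elif grown:
            status[("tp2", i)] = "increasing"
        else:
            status[("tp2", i)] = "stable"
    for j in range(1, n1 + 1):
        if not np.logical_and(comp1 == j, mask_tp2).any():
            status[("tp1", j)] = "resolving"
    return status

# Toy example on a single-slice volume
a = np.zeros((1, 10, 10), bool); b = np.zeros((1, 10, 10), bool)
a[0, 1:3, 1:3] = True                                   # resolves by TP2
a[0, 6:8, 6:8] = True; b[0, 5:9, 5:9] = True            # increases
b[0, 1:3, 7:9] = True                                   # new at TP2
print(label_changes(a, b))
```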
Calculate ARIA measurements and changes
Measurements of ARIA-E, ARIA-H microhemorrhage and ARIA-H superficial siderosis are derived from the ARIA-E and ARIA-H segmentations.
For ARIA-E, the following measurements are calculated:
• Longest axis: the longest possible Euclidean distance between two points within an ARIA-E finding on the FLAIR image of TP2, in the axial plane (a sketch of this computation follows the list).
• Number of locations affected by ARIA-E in TP2 among 10 brain regions: frontal, parietal, occipital and temporal lobes in the left and right hemispheres, the cerebellum and the rest of the brain (i.e., deep gray matter and brainstem). The ARIA-E segmentation should overlap with the brain region to be considered as "affected by ARIA-E" (classification was performed in previous steps as post-processing).
• Volumes of new, increasing, decreasing and stable ARIA-E: voxel volume multiplied by the number of voxels labeled as new, increasing, decreasing and stable ARIA-E, in the whole brain and per brain region.
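One possible implementation of the longest-axis measurement referenced in the first list item: per axial slice, reduce the finding to its convex-hull vertices and take the maximum pairwise Euclidean distance, then take the maximum over slices. The voxel spacing and the toy mask are assumed values.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def longest_axis_mm(finding_mask, spacing_yx=(1.0, 1.0)):
    """Longest in-plane (axial) Euclidean distance within a finding,
    maximised over slices; spacing_yx is the in-plane voxel size in mm."""
    best = 0.0
    for axial_slice in finding_mask:              # iterate over z
        pts = np.argwhere(axial_slice) * np.asarray(spacing_yx)
        if len(pts) < 2:
            continue
        if len(pts) > 3:
            try:
                pts = pts[ConvexHull(pts).vertices]   # keep only hull vertices
            except Exception:
                pass                              # degenerate (collinear) slice
        best = max(best, float(pdist(pts).max()))
    return best

# Toy finding: an elongated blob on one slice, 0.94 mm in-plane voxels
mask = np.zeros((3, 64, 64), bool)
mask[1, 30:33, 10:40] = True
print(round(longest_axis_mm(mask, spacing_yx=(0.94, 0.94)), 1))  # ~27.3 mm
```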
For ARIA-H, the following measurements are calculated:
• Count of ARIA-H microhemorrhages at TP1 and TP2: the number of separate connected components labeled as "ARIA-H microhemorrhages" in the whole brain and per brain region.

Training dataset selection

Inclusion criteria:
• Images with ARIA annotations by at least one expert available, for cases with ARIA according to the safety read (which was performed during the clinical trial).
• Images with "No presence of ARIA" detected according to the safety read for "no-ARIA" cases. The "no-ARIA" group also includes cases with abnormal FLAIR hyperintensities or T2*-GRE hypointensities that can potentially mimic ARIA.
Exclusion criteria:
• Poor quality of ground truth annotations (e.g., annotations outside of the brain, longitudinal inconsistency, etc.).
• Low image quality (e.g., severe artifacts, low contrast-to-noise, etc.).

In total, 475 FLAIR image pairs and 326 T2*-GRE image pairs were selected for the training dataset, having the characteristics presented below separately for the subsets on which the CNN models were trained and tuned. Among the selected data for training, 81 cases were "no-ARIA" cases, of which 76.5% had FLAIR hyperintensities that were not labeled as ARIA by the experts. All expert reads were executed through an independent third party.
ARIA-E Two time point segmentation models -data distribution summary
Each case, consisting of baseline and follow-up brain MRI FLAIR and T2*-GRE sequences, was evaluated by a panel of three neuroradiologists with experience in reading ARIA for clinical trials, who were tasked with creating manual voxel-wise annotations of ARIA-E and ARIA-H findings, if present, on the FLAIR and T2*-GRE follow-up images, respectively.For each image, the three independently created manual annotation masks were analyzed to decide whether consensus reading was required for that annotation.
Consensus was triggered for ARIA-E when the number of affected ARIA-E areas (based on connected components) was not the same across the 3 experts, or when there were individual ARIA-E areas without sufficient overlap between the intersection of the 3 masks and each individual mask (the overlap thresholds were > 80% for large components of at least 275 mm³ and > 9 voxels for small components below 275 mm³). Consensus was triggered for ARIA-H when not all experts found all and the same new microhemorrhages, when the number of new superficial siderosis areas (based on individual labels used by the experts) was not the same across the 3 experts, or when there were individual superficial siderosis areas without sufficient overlap between the intersection of the 3 masks and each mask (same overlap thresholds as for ARIA-E). For masks not requiring consensus (i.e., sufficiently matching between the three experts), the ground truth mask was generated by voxel-wise majority voting. For masks requiring consensus read, consensus was obtained in a moderated session with all the experts. During the consensus read session, the panel decided whether to keep or discard the conflicting annotations labeled by each expert, resulting in consensus masks per case (i.e., ARIA-E and ARIA-H manual annotations), on which the ground truth for all cases is based. The presence or absence of ARIA-E/ARIA-H, as well as the 4 measurements that define the severity levels of ARIA-E and ARIA-H (ARIA-E longest axis, count of brain regions affected by ARIA-E, count of new microhemorrhages and count of new superficial siderosis sites), were derived from the final consensus masks using software scripts.
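The consensus-trigger thresholds are explicit enough to encode. The fragment below checks the criteria for three expert masks, using the 80%/9-voxel overlap thresholds and the 275 mm³ size cut-off from the text; it is simplified in that it evaluates whole masks rather than matched individual findings.

```python
import numpy as np
from scipy import ndimage

def needs_consensus(masks, voxel_vol_mm3, large_mm3=275.0,
                    frac_thresh=0.80, small_vox_thresh=9):
    """True if three expert masks trigger consensus reading: differing
    component counts, or insufficient overlap between the intersection
    of the 3 masks and an individual mask (simplified, per whole mask)."""
    if len({ndimage.label(m)[1] for m in masks}) > 1:
        return True                              # area counts differ across experts
    inter = np.logical_and.reduce(masks)         # voxels all three agree on
    for m in masks:
        n_vox = int(m.sum())
        if n_vox == 0:
            continue
        overlap = int(np.logical_and(inter, m).sum())
        if n_vox * voxel_vol_mm3 >= large_mm3:
            if overlap / n_vox <= frac_thresh:   # large: require >80% overlap
                return True
        elif overlap <= small_vox_thresh:        # small: require >9 overlapping voxels
            return True
    return False

# Toy example: three annotations of one lesion, 1 mm^3 voxels
base = np.zeros((16, 16, 16), bool)
base[4:8, 4:8, 4:8] = True                       # 64-voxel lesion
masks = [base.copy(), base.copy(), base.copy()]
masks[2][8, 4:8, 4:8] = True                     # third expert draws it slightly larger
print(needs_consensus(masks, voxel_vol_mm3=1.0))  # False: overlap criteria still met
```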
Technical guidelines were provided to the experts on using the annotation platform (individually and for performing consensus) and the experts acquired experience with the platform by performing annotations a priori on 30 cases, which were not included in the actual reader study.
Scoring process
When reading each case of the MRMC reader study, either assisted or unassisted by the software, each radiologist indicated the presence or absence of ARIA-E and, separately, the presence or absence of ARIA-H. If present, they reported severity according to the ARIA Radiographic Severity scale (Cogswell et al., 2022) for ARIA-E, ARIA-H microhemorrhages and ARIA-H superficial siderosis, respectively. For subsequent statistical analyses, ARIA-H severity was based on the worse of the reported microhemorrhage and superficial siderosis severities, in accordance with the current U.S. prescribing information for currently FDA-approved Aβ-directed antibody therapies. Furthermore, they recorded the location of ARIA findings, where location was defined by a list of 9 brain regions. They also reported their measurement of the ARIA-E longest axis, and the counts and brain location(s) of new microhemorrhages and superficial siderosis.
The whole reporting process is similar to how radiologists would evaluate and prepare a radiologic report on ARIA in clinical practice, albeit by way of a structured report.The image visualization and interpretation methodology used in the reader study was designed to match how brain MRI cases for ARIA monitoring would be read in standard clinical practice.
Radiologists were able to view images from two time points in a 3D MRI viewer, and were able to scroll, zoom in/out, adjust image contrast, or change viewing orientation as needed.For the assisted read, they were able to view and scroll through the native MRI scans and the software-annotated scans simultaneously, and to open the icobrain aria PDF report and consult all its content.In the case of assisted reading, the radiologists were instructed to report what they think was correct, thus they were allowed to disagree with the recommendations produced by the software and report on ARIA detection, severities and measurements according to their own judgment.
Study schedule and case randomization
The MRMC study consisted of two independent reading sessions separated by a washout period of at least four weeks in order to mitigate the impact of short-term memory/learning bias. Each case consisted of a baseline and a follow-up brain MRI examination (FLAIR and T2*-GRE) of a patient. Patient identity was fully anonymized in the retrospective data prior to starting the study, and the readers had no access to patient codes that would allow matching cases across the two reads (unassisted and assisted).
Half of the cases were randomly assigned to group A and the other half to group B. In reading session 1, group A cases underwent unassisted reading and group B cases underwent assisted reading with the software. In reading session 2, group A cases underwent assisted reading, and group B unassisted reading. Batches of cases were assigned to two groups of readers, alternating between unassisted and assisted reads (2 cycles of alternation in each reading session); approximately half of the readers started with unassisted reads (Reader group 1 in Session 1) and the other readers with assisted reads (Reader group 2 in Session 1), according to the Latin Square design depicted in eFigure 1. For each reader in each session, the order of cases within each group (A or B) was random. At the end of the two reading sessions, all readers had read all cases of groups A and B once assisted and once unassisted. A simplified sketch of this schedule is given after the figure caption below.
eFigure 1. Latin Square design for assisted and unassisted reads randomization.
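The following sketch illustrates the crossover logic described above. It is a simplification: the 2-cycle batch-wise alternation within sessions (eFigure 1) is reduced to a single alternation per session, and all names (build_schedule, group labels) are hypothetical.

```python
# Hedged sketch of the fully crossed two-session schedule: every reader reads every
# case once assisted and once unassisted, with group A/B modes swapped in session 2.
import random

def build_schedule(case_ids, n_readers=16, seed=0):
    """Return, per reader, the (case, mode) lists for the two sessions."""
    rng = random.Random(seed)
    cases = list(case_ids)
    rng.shuffle(cases)
    half = len(cases) // 2
    group_a, group_b = cases[:half], cases[half:]

    schedule = {}
    for reader in range(n_readers):
        # Random case order within each group, per reader and session.
        a1 = [(c, "unassisted") for c in rng.sample(group_a, len(group_a))]
        b1 = [(c, "assisted") for c in rng.sample(group_b, len(group_b))]
        a2 = [(c, "assisted") for c in rng.sample(group_a, len(group_a))]
        b2 = [(c, "unassisted") for c in rng.sample(group_b, len(group_b))]
        # Reader group 1 starts each session with unassisted reads, group 2 with assisted.
        first = reader < n_readers // 2
        schedule[reader] = {
            "session1": a1 + b1 if first else b1 + a1,
            "session2": b2 + a2 if first else a2 + b2,
        }
    return schedule

sched = build_schedule(range(200))
assert all(len(s["session1"]) == 200 for s in sched.values())
```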
Before the start of session 1, readers received adequate training on the reading of ARIA and on the use of the software, to equip them with the skills necessary to participate in the study. The readers were informed about the over-representation of ARIA cases (see next section) in the study sample, but did not know the exact proportion of ARIA-positive and ARIA-negative cases. The readers only had access to MRI data, not to clinical information (duration of treatment with aducanumab, presence/absence of symptoms, medical history, etc.).
Sample size justification
To assist the power and sample size computation for the primary hypothesis of the study, an MRMC simulation was conducted. A balanced design with a 1:1 ratio between the number of cases in the ARIA-positive class and the number of cases in the no-ARIA class, for both ARIA-E and ARIA-H, was considered. Note that estimates of AUC, sensitivity and specificity are independent of prevalence (Obuchowski and Bullen, 2022; Hajian-Tilaki, 2013). As such, in this study, the estimates of these metrics are unbiased, since they are not influenced by the ratio of ARIA-positive/negative cases. Since readers were unaware of the true distribution of ARIA in the study population, "context bias" on sensitivity and specificity was believed to be minimal.
The MRMC simulation included:
- generation of cases with known ground truth ARIA presence, based on a low-level simulation of the 4 ARIA variables that define the presence and severity of ARIA-E and ARIA-H (i.e., (i) the number of sites of involvement for ARIA-E, (ii) the ARIA-E longest axis, (iii) the number of new microhemorrhages, and (iv) the number of new areas of superficial siderosis);
- simulation of readers based on realistic perturbation of the 4 ARIA variables mentioned above, using inter-reader variability data that was available during product development (approximately 100 cases with ARIA-E masks and 100 cases with ARIA-H masks, read twice by 2 experts).
The simulation allowed estimating AUC for the binary classification defined for the co-primary endpoints. It was found that AUC for the detection of ARIA-E versus no ARIA-E, as well as for the detection of ARIA-H versus no ARIA-H, would be in the range [0.75, 0.90], depending on the magnitude of deviations that different readers have from the ground truth.
The MRMC simulation indicated that 15 readers and 200 cases would provide sufficient power to measure a significant difference of around 0.03-0.04 in the average AUC between assisted and unassisted reads for the detection of ARIA-E and the detection of ARIA-H.
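A hedged sketch of this kind of MRMC power simulation is shown below. The latent "ARIA burden" score model, the noise magnitudes and the assistance effect are illustrative assumptions chosen only to reproduce the general logic (cases with known truth, perturbed reader scores, per-reader AUC), not the study's actual simulation parameters.

```python
# Minimal sketch of an MRMC simulation: simulate cases with known ARIA status,
# simulate readers by perturbing a latent severity score, and compare mean AUCs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def simulate_auc(n_cases=200, n_readers=15, reader_noise=1.0, assist_shift=0.2):
    y = rng.integers(0, 2, size=n_cases)              # 1:1 ARIA-positive vs negative
    severity = y * rng.gamma(2.0, 1.0, size=n_cases)  # latent "ARIA burden" per case
    aucs_unassisted, aucs_assisted = [], []
    for _ in range(n_readers):
        noise = rng.normal(0, reader_noise, size=n_cases)
        score_u = severity + noise                       # unassisted reader score
        score_a = severity + noise * (1 - assist_shift)  # assistance shrinks deviations
        aucs_unassisted.append(roc_auc_score(y, score_u))
        aucs_assisted.append(roc_auc_score(y, score_a))
    return np.mean(aucs_unassisted), np.mean(aucs_assisted)

print(simulate_auc())  # mean unassisted vs assisted AUC across simulated readers
```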
Included Cases
Participants for this validation study were selected from Biogen's clinical trials PRIME (NCT01677572), EMERGE (NCT02484547) and ENGAGE (NCT02477800).
Only images from participants treated with aducanumab were selected. Participants that were already included in the icobrain aria model training dataset were excluded a priori. The random selection was designed to match the distributions of each trial in terms of demographics, while ensuring that the selected dataset included sufficient ARIA-free, mild, moderate and severe ARIA-E and ARIA-H cases (i.e., ARIA-positive cases were over-represented due to sample size considerations). ENGAGE and EMERGE included participants aged 50 to 85 years who met clinical criteria for MCI due to AD or mild AD dementia, with amyloid pathology confirmed by visual assessment of amyloid positron emission tomography (PET; 18F-florbetapir, 18F-flutemetamol, or 18F-florbetaben) (Budd Haeberlein et al, 2022). PRIME participants were similar, except that ages ranged from 50 to 90 and only 18F-florbetapir PET was used to confirm amyloid pathology (Sevigny et al., 2022). The safety reads from the clinical trials were employed to obtain an approximately balanced case distribution for both ARIA-E and ARIA-H. Note that no cases selected for the reader study overlapped with those used for training the models.
In total, 200 cases were selected for the reader study, with 1 case excluded due to technical issues in generating the ground truth. Among the 199 remaining cases, there were 39 cases with only ARIA-E, 36 with only ARIA-H, 84 with both ARIA-E and ARIA-H, and 40 cases with no ARIA; 77.5% of the no-ARIA cases had FLAIR hyperintensities that were not identified as ARIA-E by the expert consensus. Demographics and ARIA-E/ARIA-H severity distributions obtained from the consensus ground truth establishment process are reported in eTable 1.
The baseline and follow-up brain MRI examinations (FLAIR and T2*-GRE sequences) were acquired on 31 different MR scanner models at magnetic field strengths of 1.5 Tesla (44%) and 3 Tesla (56%), manufactured by GE (27%), Philips (17%) and Siemens (56%). The follow-up examinations selected for the study had varying elapsed times since the start of treatment, with the smallest time difference across cases being 14 weeks and the longest being more than 3 years.

Additional eResults: Detection of moderate-or-severe ARIA vs no ARIA

The results in eTable 2 present the detection performance of moderate-or-severe ARIA vs no ARIA. For reference, the detection performance of mild ARIA vs no ARIA (for ARIA-E or ARIA-H separately) is also included (replicated from Table 2). For these subgroup analyses, cases were selected based on the ground truth severity grading:
- Subgroup 1 contained all ARIA-free cases and mild ARIA cases, according to the expert ground truth;
- Subgroup 2 contained all ARIA-free and moderate-or-severe cases, according to the expert ground truth.
Note that for the analysis of no vs mild ARIA (subgroup 1), moderate-or-severe ARIA cases (per the expert consensus ground truth) were excluded, while for the analysis of no vs moderate-or-severe ARIA (subgroup 2), mild ARIA cases (per the expert consensus ground truth) were excluded. Sensitivity was defined as the fraction of ARIA cases (in each subgroup) that were detected as having ARIA by assisted readers, unassisted readers, or standalone software, irrespective of severity. In subgroup 1, specificity was defined as the fraction of ARIA-free cases for which the assisted readers, unassisted readers, or standalone software correctly detected no ARIA. In the subgroup 2 analysis, however, specificity was defined as the fraction of ARIA-free cases for which the assisted readers, unassisted readers, or standalone software either detected no ARIA or detected mild ARIA. As such, in this analysis, the assisted readers, unassisted readers, and standalone software were not penalized for grading ARIA-free cases as mild. However, occurrences of an inaccurate mild severity in ARIA-free cases for all three read modalities contribute negatively to the AUC metric, which explains why the perfect sensitivity and specificity for ARIA-H (last two rows of the table) do not translate to an AUC of 1.
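For clarity, these subgroup-specific metric definitions can be expressed in a few lines of code. The ordinal severity encoding (0 = no ARIA, 1 = mild, 2 = moderate-or-severe) and the function names are assumptions for illustration.

```python
# Sketch of the subgroup sensitivity/specificity rules described above.
def sensitivity(truth, read):
    """Fraction of ARIA cases (any ground-truth severity) detected as ARIA."""
    pos = [r for t, r in zip(truth, read) if t > 0]
    return sum(r > 0 for r in pos) / len(pos)

def specificity(truth, read, subgroup):
    """Subgroup 1 requires ARIA-free cases to be read as no ARIA;
    subgroup 2 additionally accepts a (wrong but unpenalized) mild grade."""
    neg = [r for t, r in zip(truth, read) if t == 0]
    if subgroup == 1:
        return sum(r == 0 for r in neg) / len(neg)
    return sum(r <= 1 for r in neg) / len(neg)

# Toy example: two ARIA-free cases, one graded mild by the reader.
truth, read = [0, 0, 2, 1], [1, 0, 2, 1]
print(sensitivity(truth, read), specificity(truth, read, subgroup=2))  # 1.0 1.0
```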
Concordance of severity assessments

eFigure 2 and eFigure 3 are graphical summary representations of the confusion matrices assessing the concordance and discordance in ARIA-E and ARIA-H severity grading between assisted vs unassisted study readers, compared with the expert consensus ground truth. Note that averaging the columns corresponding to one ground truth severity level (i.e., all columns marked by a circle at the same severity level) and expressing the result as a percentage leads to the corresponding column of the 4x4 confusion matrix of eFigure 3.
Reading time in unassisted and assisted ARIA reading

Reading times per case are summarized in eFigure 5.

Per-reader performance in unassisted and assisted reading

eFigure 6 shows the AUCs of assisted vs. unassisted reading per reader.
For the primary endpoint on ARIA-E detection, all but one reader reached a higher AUC in assisted reading than in unassisted reading. Notably, this reader had the highest AUC in unassisted reading of ARIA-E across all readers (0.904). This reader's AUC during assisted reading dropped very slightly, to 0.893. The magnitude of the AUC difference of -0.011 was smaller than the reader-specific standard deviation of the AUC difference (0.019).
For the primary endpoint on ARIA-H detection, all but two readers reached a higher AUC in assisted reading than in unassisted reading. Notably, these readers had the highest AUCs in unassisted reading of ARIA-H across all readers (0.849 and 0.851, respectively). The two readers' AUCs during assisted reading dropped very slightly, to 0.847 and 0.843, respectively. The magnitudes of the AUC differences of -0.002 and -0.008 were considerably smaller than the reader-specific standard deviations of the AUC difference (0.026 for both).
Similar observations can be made for the secondary endpoints. For example, all but the two best unassisted readers reached a higher AUC in discriminating no vs mild ARIA-E in assisted reading, and all but the second-best unassisted reader reached a higher AUC in discriminating no vs mild ARIA-H in assisted reading, with the reduction in performance being minor in each case.
eFigure 2. Severity assessments derived from confusion matrices of unassisted reads (left side panels) and assisted reads (right side panels) compared to ground truth for ARIA-E (top) and ARIA-H (bottom). 34.3% of mild ARIA-E cases and 49.1% of mild ARIA-H cases were correctly graded as "mild" by unassisted readers, which improved to 46.8% and 63.1% in assisted reading, respectively. In unassisted reading, 58.2% of moderate-or-severe ARIA-E cases and 70.4% of moderate-or-severe ARIA-H cases were correctly graded as "moderate" or "severe", which improved to 77.3% and 71.5% in assisted reading, respectively. Both unassisted and assisted readers identified over 80% of ARIA-free cases, with the former being slightly more accurate. Moreover, in assisted reading, ARIA-positive cases were less likely to be assessed as "ARIA-free" by readers (i.e., detection sensitivity went up, as illustrated in Table 1). This was accompanied by a slightly higher number of mild ARIA-E cases being assessed as moderate-or-severe ARIA-E (but not vice versa), and a slightly higher number of moderate-or-severe ARIA-H cases being assessed as mild ARIA-H (but not vice versa). The bars represent the percentage of cases for each combination of ground truth and reader severity (with moderate and severe taken together due to the low sample size for severe ARIA), averaged over all 16 readers. The bars are grouped by ground truth severity and color-coded by reader severity assessments. Bars highlighted with a black border represent cases with severity assessment concordance between ground truth and readers.

eFigure 3. Confusion matrices of unassisted reads (left side panels) and assisted reads (right side panels) compared to ground truth for ARIA-E (top) and ARIA-H (bottom). Percentages are computed column-wise, i.e., with respect to the number of cases for each severity level according to ground truth. This figure is an alternative visualization of the unfolded confusion matrices in eFigure 2.

Distribution of reader assessments

eFigure 4. Assessments of readers for each case versus the ground truth and versus the automated measurement of icobrain aria.

4A. Distribution of unassisted (top) and assisted (bottom) readers' assessments of ARIA-E severity for all cases.
4B. Distribution of unassisted (top) and assisted (bottom) readers' assessments of ARIA-H severity for all cases.

Legend to eFigure 4: Radiological readers assisted by icobrain aria assess ARIA-E severity and ARIA-H severity more accurately, on average, than unassisted readers. Moreover, concordance across assisted readers' assessments is generally higher than across unassisted readers' assessments. This is evident from the assisted readers' assessments being concentrated more closely around the case-wise "ground truth severity" as established by expert consensus (marked as circular targets). That is, the automatically computed severity suggested by the software (added as a dot, for reference) often successfully sways readers' judgments towards the correct severity level. Per case (i.e., per column), a deeper red square at a certain severity level signifies a higher number of readers that assessed ARIA severity at the respective level (higher inter-reader concordance); when such deep red squares are centered around the dot in the graph with assisted reads, most readers make the same assessment as the software; when deep red squares coincide with the position of the circle, most readers make the same assessment as the consensus ground truth. Cases are sorted by a measure of discordance between unassisted readers (combined for both ARIA-E and ARIA-H assessments). Several representative cases are highlighted and explored in more detail in Figures 2 and 3 in the main text.
eFigure 5. Reading times per case were similar between unassisted and assisted reading, both at the level of individual readers and pooled across readers. The median reading time was 155 seconds per case in unassisted reading and 140 seconds per case in assisted reading. 90% of assessments (readers x cases) were completed in under 560 seconds in unassisted reading and in under 454 seconds in assisted reading. Both violin plots have identical but differently sorted content, with readers sorted by increasing AUC of unassisted ARIA-E detection (top) and by increasing AUC of unassisted ARIA-H detection (bottom).
eFigure 6. Nearly all readers benefit from icobrain aria by reaching a higher performance during assisted reading, on primary as well as secondary endpoints, as measured by the area under the receiver operating characteristic curve (AUC). The few instances where a reader's assisted AUC is lower than their unassisted AUC tend to involve readers with a high "baseline" (i.e., unassisted) performance. Nevertheless, except for one outlier in ARIA-E localization, the difference between the two AUCs for such readers is generally small relative to the standard deviation of the AUCs. AUC in assisted reading is plotted vs AUC in unassisted reading, for each reader, with whiskers indicating the standard deviation of both reader-specific AUCs. Each row corresponds to a primary endpoint (top row) or secondary endpoint (bottom three rows). Performance for ARIA-E and for ARIA-H is depicted in the left column and right column, respectively.
eTable 2. Detection performance for ARIA severity subgroups: no vs mild ARIA (reported as a secondary endpoint in Table 2) and no vs moderate-or-severe ARIA detection (exploratory endpoint). Results are reported for assisted reading, unassisted reading and standalone software. Effect size is defined as the difference between assisted and unassisted AUC, and the P value corresponds to the hypothesis test on the AUC difference.
The glycerophospholipid inventory of Pseudomonas putida is conserved between strains and enables growth condition-related alterations
Summary

Microorganisms, such as Pseudomonas putida, utilize specific physical properties of cellular membrane constituents, mainly glycerophospholipids, to (re-)adjust the membrane barrier to environmental stresses. Building a basis for membrane composition/function studies, we inventoried the glycerophospholipids of different pseudomonads and challenged membranes of growing cells with n-butanol. Using a new high-resolution liquid chromatography/mass spectrometry (LC/MS) method, 127 glycerophospholipid species [e.g. phosphatidylethanolamine PE(32:1)] with up to five fatty acid combinations were detected. The glycerophospholipid inventory consists of 305 distinct glycerophospholipids [e.g. PE(16:0/16:1)], thereof 14 lyso-glycerophospholipids, revealing conserved compositions within the four investigated pseudomonads P. putida KT2440, DOT-T1E, S12 and Pseudomonas sp. strain VLB120. Furthermore, we addressed the influence of environmental conditions on the glycerophospholipid composition of Pseudomonas via long-time exposure to a sublethal n-butanol concentration of 1% (v/v), focusing on: (i) relative amounts of glycerophospholipid species, (ii) glycerophospholipid head group composition, (iii) fatty acid chain length, (iv) degree of saturation and (v) cis/trans isomerization of unsaturated fatty acids. Observed alterations consist of changing head group compositions and, for the solvent-sensitive strain KT2440, diminished fatty acid saturation degrees. Minor changes in the glycerophospholipid composition of the solvent-tolerant strains P. putida S12 and Pseudomonas sp. VLB120 suggest different strategies of the investigated pseudomonads to maintain the barrier function of cellular membranes.
Introduction
Driven by technological advances, lipidomics and detailed lipid profiling are currently gaining increasing scientific interest. Lipids play important roles in cell physiology, for example as energy storage, bioactive molecules and main constituents of cellular membranes. The assessment of detailed membrane composition has not yet reached the level of extensive characterization achieved for other cell constituents (e.g. proteins) that are accessible by 'omic' analyses. Using the now available analytical methods, the response of the membrane composition to changing environmental conditions can be monitored.
Cell membranes consist of a multiplicity of individual protein and lipid species, with the main constituents belonging to only a few distinct glycerophospholipid classes. The cytoplasmic membrane of many bacteria, including proteobacteria, as well as the inner side of the outer membrane, mainly contains phosphatidylethanolamine (PE), phosphatidylglycerol (PG), cardiolipin (CL) and the respective monoacyl-glycerophospholipids (lyso-PE, lyso-PG) (Fig. 1). The latter are part of the de-acylation/re-acylation cycle to control the overall lipid species composition, catalysed by phospholipases, such as phospholipase A2 and lyso-phospholipases that specifically release fatty acids from the sn2 position of the glycerol backbone (Scheer et al., 2011).
For P. putida, extensive evidence exists for long- and short-term alterations of the inner and outer membrane due to toxic organic solvent exposure (Ramos et al., 1997), and growth in the presence of a second phase of toluene (Isken, 2000), styrene (Park et al., 2007) or n-octanol (Blank et al., 2008) has been shown. Solvent toxicity can be correlated to low logarithms of the partitioning coefficient in an n-octanol/water mixture (logPow < 4), indicating preferred membrane partitioning with disintegration of membrane structure and vital cellular functions (Ramos et al., 2002). Aiming for constant membrane fluidity, described via the transition temperature (Tm), cells try to sustain both the proton gradient (ΔpH) and the membrane potential (ΔΨ) (Sikkema et al., 1994; Ramos et al., 2002) to enable functional protein embedding (e.g. integral efflux pumps). The transition temperature is determined by the glycerophospholipid composition (Brannigan et al., 2004; Bernal et al., 2007); it depends on the glycerophospholipid head group, Tm(CL) > Tm(PE) > Tm(PG), and increases proportionally to the chain length, saturation degree and cis/trans ratio of the acyl moieties (Weber et al., 1994; Soni et al., 2009).
The latter, i.e. cis/trans isomerization, allows short-term alterations (in the minute range) of unsaturated fatty acids and is the only post-biosynthetic modification of the acyl chain in response to organic solvents (Heipieper and de Bont, 1994). Long-term alterations embrace modifications of the glycerophospholipid properties, including: (i) the saturation degree of fatty acids (Pinkart and White, 1997), (ii) the glycerophospholipid head group composition (Cronan and Gelmann, 1975), (iii) the phospholipid turnover rates, (iv) the fatty acid chain length and (v) the total phospholipid content (Park et al., 2007). Toxic organic solvents entail the inhibition of phospholipid biosynthesis (Sikkema et al., 1994) and alterations of the outer membrane (Blank et al., 2008; Fujiwara, 2008), including denser packing of anionic membrane molecules [e.g. lipopolysaccharides (LPS) and outer membrane proteins (OMP)], to increase hydrophobicity and hamper the accumulation of toxic hydrocarbons (Segura et al., 1999). Notably, both lipid profiling and adaptation studies have mostly reported limited analyte numbers (n ≤ 21).
Equipped with this analytical toolbox, we revisited the glycerophospholipid inventory of Pseudomonas in the context of the genetic background and the metabolic pathways of both glycerophospholipid and de novo fatty acid biosynthesis. Based on the current scientific interest in biobutanol production and existing evidence of increased n-butanol tolerance in P. putida DOT-T1E and S12, as well as Pseudomonas sp. strain VLB120 (Rühl et al., 2009), we investigated the glycerophospholipid profiles of these strains during n-butanol exposure with respect to compositional variations of membrane glycerophospholipids under changing environmental conditions.
Glycerophospholipid inventory of P. putida
We revisited the glycerophospholipid inventory of organic solvent-sensitive and solvent-tolerant P. putida strains, namely the GRAS-classified strain P. putida KT2440, the solvent-tolerant strains P. putida DOT-T1E and S12, and the multipurpose [e.g. biocatalysis and solvent tolerance (Park et al., 2007), biofilm production (Gross et al., 2007)] strain Pseudomonas sp. VLB120. Distinct species of phosphatidylethanolamine, phosphatidylglycerol and cardiolipin were characterized according to the number of carbon atoms and unsaturated carbon bonds (Fig. 3A), with more detailed information given in Table S1. The novel LIT-FTICR-MS technique was thereby complemented with GC/MS analysis of the hydrolysed glycerophospholipids to obtain the relative amounts of the total fatty acid moieties (Fig. 3B).
The evaluation of these detailed profiles resulted in relative distributions of the glycerophospholipid species (head group composition) (Fig. 3C). All Pseudomonas strains investigated here revealed phosphatidylethanolamine as the main membrane component, contributing on average two-thirds of the total glycerophospholipid fraction. At low amounts of cardiolipin (< 5%), phosphatidylglycerol was the second most abundant class, accounting for approximately one-third. The lyso-forms of phosphatidylethanolamine and phosphatidylglycerol were detected in traces (< 1% of the total glycerophospholipid pool) (Fig. 3C). Notably, the four strains revealed similar glycerophospholipid compositions; only Pseudomonas sp. strain VLB120 exhibited increased phosphatidylethanolamine (72% compared with 62%) at the expense of phosphatidylglycerol (23% compared with 33%). The determined glycerophospholipid compositions are comparable to the literature: the membrane fractions of phosphatidylethanolamine in P. putida S12 and P. putida DOT-T1E were reported to be 60% (Heipieper et al., 1996) and 75% (Ramos et al., 1997), respectively.
Existing variations between our data and previous reports, for example the detection of the phosphatidylethanolamine derivatives dimethyl- and monomethyl-phosphatidylethanolamine (DMPE and MMPE) (Martinez-Morales et al., 2003), might result from different analytical methods and/or culture conditions, as shown for growth rate-dependent phosphatidylethanolamine and phosphatidylglycerol synthesis in E. coli (Ballesta and Schaechter, 1972).
Besides glycerophospholipid profiling with respect to acyl moieties and lipid species, we determined features of the fatty acid moieties such as the degree of saturation and the ratio between the cis- and trans-isomers of unsaturated fatty acids. Please note that the latter was only determined for 18:1 and not for 16:1 unsaturated fatty acids; the 16:1 isomers were not resolved by the applied GC separation. Still, cis/trans isomerization could be addressed, as the isomerization of both 16:1 and 18:1 fatty acid moieties responds qualitatively comparably to organic solvents (Guckert et al., 1986; Weber et al., 1994; Huertas et al., 2000; Heipieper et al., 2001). These modifications are important for membrane function as they define the membrane structure and hence the transition temperature (Tm), which directly influences membrane fluidity and the movement (e.g. free rotation and lateral movements) of membrane lipids and proteins (Heipieper et al., 2003; Castellanos et al., 2007). We defined the degree of saturation as the number of saturated fatty acids over the total number of fatty acids of the detected glycerophospholipids. LC/MS analyses indicated saturation degrees of about 70% (P. putida DOT-T1E 67 ± 4%, KT2440 71 ± 6%, S12 70 ± 11%, Pseudomonas sp. strain VLB120 67 ± 15%). These values are in agreement with previous studies, for example with a saturation degree of 67% for P. putida S12 grown on glucose or LB medium (Weber et al., 1994).
Membrane density changes in accordance with the existence and conformation of unsaturated carbon bonds. Saturated fatty acids have straight acyl chains and cis-unsaturated isomers introduce a kink in the acyl chain (Isken and de Bont, 1998), whereas trans-isomers almost resemble saturated fatty acids. The cis/trans ratio of 18:1 unsaturated fatty acids is therefore an interesting characteristic, whose rapid alteration (in the minute range) enables solvent-tolerant pseudomonads to survive environmental stresses (Ramos et al., 2002). Calculated for 18:1 fatty acids, a three- to fivefold excess of the cis-isomer was detected for the P. putida strains KT2440, DOT-T1E and S12 (Table 2). In contrast, Pseudomonas sp. strain VLB120 exhibited an 18:1-cis/trans ratio of 1.6 ± 0.2, which represents the main difference within the glycerophospholipid profiles of the investigated strains.

Table 1 notes: The relative amounts of 32:1/34:1/34:2 phosphatidylethanolamine and phosphatidylglycerol were calculated from their abundance within the respective glycerophospholipid class multiplied by the relative amount of these classes. The relative amount of the major fatty acid moieties 16:0/16:1/18:0/18:1 was calculated for the total glycerophospholipid pool. Free fatty acids were not considered. Errors represent the standard deviation of independent experiments (n = 2-8). n.d.: not determined because of missing replicates.

Table 2. Glycerophospholipid profiles of non-treated and treated P. putida strains: the relative amount of the glycerophospholipid classes, the degree of saturation and the cis/trans ratio.
a. The degree of saturation was calculated as the number of saturated fatty acids over the total number of fatty acids. b. The cis/trans ratio represents the ratio of 18:1-cis to 18:1-trans fatty acids. Errors represent the standard deviation of independent experiments (n = 2-8). n.d.: not determined because of missing replicates; n.a.: not analysed.
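As an illustration of how these two membrane descriptors are computed from a lipid profile, consider the sketch below. The fatty acid table and its abundances are made-up numbers, and the saturation definition follows the footnote of Table 2.

```python
# Illustrative computation of the saturation degree and the 18:1 cis/trans ratio
# from a (hypothetical) table of fatty acid moieties with relative abundances.
fatty_acids = {            # moiety -> relative abundance (%), made-up numbers
    "16:0": 30.0, "16:1": 25.0, "18:0": 5.0,
    "18:1-cis": 30.0, "18:1-trans": 10.0,
}

saturated = sum(v for k, v in fatty_acids.items() if ":0" in k)  # fully saturated chains
total = sum(fatty_acids.values())
saturation_degree = saturated / total                            # 0.35 for these numbers

cis_trans_ratio = fatty_acids["18:1-cis"] / fatty_acids["18:1-trans"]  # 3.0
print(f"saturation degree: {saturation_degree:.2f}, 18:1 cis/trans: {cis_trans_ratio:.1f}")
```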
Application: n-butanol exposure
Equipped with the comprehensive inventory of glycerophospholipid species, we wanted to understand the effect of the short-chain alcohol n-butanol [logPow 0.8, maximum membrane accumulation of 1.59 M (Neumann et al., 2005)], a promising synthon for the chemical industry, on the glycerophospholipid composition of P. putida. We previously suggested P. putida DOT-T1E, S12 and Pseudomonas sp. strain VLB120 as possible candidates for biobutanol production, as treated cells, sequentially transferred between LB agar plates in an airtight desiccator with an n-butanol-saturated gas phase, had lower energy requirements for cell maintenance than non-treated cells of the same strains when grown in the presence of 1% (v/v) n-butanol (Rühl et al., 2009). Hereafter, we refer to treated cells when strains underwent the sequential transfer procedure, while exposed cells were grown in shake flasks where 1% (v/v) n-butanol was added at the time of inoculation. At reference conditions (without n-butanol exposure), the glycerophospholipid profiles of non-treated and treated cells revealed few differences. Only minor changes (below 2.5% relative difference) were observed for the cardiolipin species CL(70:3) and CL(70:4) in P. putida DOT-T1E and KT2440. In contrast, the relative amounts of the major fatty acids (16:1/16:0) and (18:1/18:0) changed significantly towards the unsaturated species (Fig. 4B), while the total relative amount of these fatty acids (about 99%) was not affected in the solvent-tolerant strains P. putida DOT-T1E, S12 and Pseudomonas sp. VLB120 (Table 1). P. putida strain KT2440 slightly shifted the composition of fatty acid moieties during n-butanol treatment from 16:0 and 18:1 towards the fatty acids 17:0/17:1 and 20:1 (see Table 1).
Exposed to the non-lethal n-butanol concentration of 1% (v/v), the four Pseudomonas strains specifically responded by both short- and long-term adaptation mechanisms. We first analysed general modifications of the glycerophospholipid compositions, including: (i) the glycerophospholipid species, (ii) the glycerophospholipid head group composition and (iii) the chain length of fatty acid moieties.
The relative amounts of the three glycerophospholipid classes, phosphatidylethanolamine, phosphatidylglycerol and cardiolipin, were also affected by n-butanol exposure. Both non-treated P. putida DOT-T1E and KT2440 exhibited a shift from phosphatidylethanolamine to phosphatidylglycerol at slightly decreasing cardiolipin amounts (e.g. 64% compared with 53% for phosphatidylethanolamine and 33% compared with 45% for phosphatidylglycerol in P. putida DOT-T1E). A different reaction was observed for non-treated cells of P. putida S12 and Pseudomonas sp. strain VLB120, which reacted to n-butanol by enhanced phosphatidylethanolamine and cardiolipin levels at contemporaneously diminished relative amounts of phosphatidylglycerol (Table 2).
The treatment procedure resulted in membrane alterations that differed from those of non-treated cells. Treated cells of the three solvent-tolerant strains P. putida DOT-T1E, S12 and Pseudomonas sp. strain VLB120 almost maintained their original membrane lipid composition, with the relative differences of the major glycerophospholipid species mostly below ± 5% (Fig. 4A). Differing results were obtained for treated P. putida KT2440, where n-butanol treatment amplified the extent of glycerophospholipid alterations after n-butanol exposure.
Considering environmental stresses, like n-butanol exposure, the structural composition of the fatty acid moieties has to be addressed in addition to the glycerophospholipid inventory. The acyl moieties at the sn1 position, which are introduced by the sn-glycerol-3-phosphate acyltransferase (PlsB), are one possible regulation site within glycerophospholipid synthesis (Wilkison and Bell, 1997). Exposed to n-butanol, the relative amounts of the major fatty acid moieties, 16:0, 16:1, 18:0 and 18:1, of both non-treated and treated cells were maintained at a constant level of approximately 99% (Table 1). Only non-treated cells of P. putida DOT-T1E revealed significant relative amounts of the fatty acids 13:0, 14:0 and 15:0 iso/anteiso (Fig. 4). As a result, the contribution of 16:1/16:0 and 18:1/18:0 fatty acids to the total acyl moieties decreased from 99 ± 0% to 54 ± 1%.
Besides the above-depicted possibilities of compositional membrane changes, pseudomonads can alter their fatty acids in response to organic solvents by: (i) changing the degree of saturation and (ii) changing the cis/trans isomeric ratio of the unsaturated fatty acids. In this study, diverse changes in the fatty acid composition were recorded (Table 2). While the overall degree of saturation in non-treated cells did not change in the presence of n-butanol, a lower content of saturated fatty acid moieties within the phosphatidylglycerol species was observed for treated P. putida KT2440 [(Fig. 4), degree of saturation approximately 15%], which resulted in a significant decrease of the overall saturation (71 ± 4% to 46 ± 13%). In this strain no cis/trans isomerization of 18:1 unsaturated fatty acid moieties was observed, but there was an increase in the relative amount of cis-unsaturated 18:1 fatty acid moieties (Table 2).

Fig. 4. Relative differences of the glycerophospholipid species and fatty acid moieties of P. putida following n-butanol exposure. (Color scale bins of relative differences: -0.5% to 0.5%; -0.51% to -5.0%; -5.01% to -15.0%; < -15.01%; 0.51% to 5.0%; 5.01% to 15.0%; > 15.01%.)
A. Relative differences in the relative amounts of glycerophospholipid species, calculated against the average of non-treated and treated cells grown at reference conditions. B. Relative differences of the relative amounts of fatty acid moieties compared with the average abundance of fatty acids of non-treated cells grown at reference conditions. Abbreviations: wt, non-treated; t, treated cells.
A behaviour contrary to that of P. putida KT2440 was observed in solvent-tolerant P. putida DOT-T1E. Non-treated cells of this strain revealed a more than threefold lower degree of saturation, 18 ± 0% compared with 67 ± 4%, after n-butanol exposure, while treated cells were able to maintain both the saturation degree and the cis/trans ratio (Table 2). Notably, only slight changes of the saturation degree were observed for the solvent-tolerant strains P. putida S12 and Pseudomonas sp. strain VLB120. These strains reacted to both the treatment procedure and n-butanol exposure by cis/trans isomerization of 18:1 unsaturated fatty acid moieties. For example, non-treated P. putida S12 grown in the presence of n-butanol had a higher content of trans-18:1. Interestingly, the cis/trans ratio of treated Pseudomonas sp. strain VLB120 exposed to n-butanol equalled that of non-treated cells under normal growth conditions (1.6 ± 0.2 and 1.7 ± n.d., respectively), thereby indicating a higher content of cis-unsaturated fatty acids in treated Pseudomonas sp. VLB120 cells without n-butanol exposure (2.7 ± 0.6).
Differences in the change of the glycerophospholipid composition between the solvent-tolerant strains and P. putida KT2440 were observed. These differences in the response to n-butanol exposure, irrespective of the conserved genetic inventory, imply differences in the (transcriptional) regulation of both fatty acid and glycerophospholipid biosynthesis.
Glycerophospholipid inventory of P. putida
The analytical possibilities in the life sciences are rapidly expanding, including the determination and quantification of lipid species. We used these new possibilities to revisit in depth the glycerophospholipid composition of the Gram-negative γ-proteobacterium P. putida, as distinct strains can alter their membrane composition to allow growth in the presence of highly toxic organic solvents (Weber et al., 1993; Sikkema et al., 1994; Chen et al., 1995; Heipieper et al., 2001), including octanol and styrene. The Pseudomonas genus is characterized by the presence of the common bacterial phospholipids phosphatidylethanolamine, phosphatidylglycerol and cardiolipin (Diedrich and Cota-Robles, 1974; Ramos et al., 2002). We observed congruent compositions of the three major phospholipid classes and their respective lyso-forms in four strains; in total, 305 distinct glycerophospholipids. Notably, unlike Fang and colleagues (2000), we detected no dimethyl-phosphatidylethanolamine or monomethyl-phosphatidylethanolamine. These differences might be explained by glycerophospholipid composition changes that occur in dependence of growth conditions (Ohta et al., 1974; Pierucci, 1979), exposure to cyclic hydrocarbons [e.g. BTEX and phenol (Heipieper and de Bont, 1994; Weber et al., 1994; Pinkart and White, 1997)], aliphatic alcohols [e.g. ethanol (Dombek and Ingram, 1984)] and other organic solvents (Ingram, 1977; Gustafson and Tagesson, 1985).
Application: n-butanol exposure
Since P. putida has been suggested as a host for n-butanol production (Nielsen et al., 2009), we investigated the response of the glycerophospholipid composition in the presence of this aliphatic alcohol rather than to the traditionally investigated aromatic hydrocarbons (Isken and de Bont, 1998) that are important during bioremediation. Exposed to a non-lethal n-butanol concentration of 1% (v/v), the investigated strains showed changes in the cis/trans ratio and modified head group compositions; responses previously reported during the adaptation of P. putida to toluene and other aromatic hydrocarbons (Ramos et al., 1997). An increase of trans-unsaturated fatty acids compensates for the fluidizing effects of organic solvents and suggests high cis/trans isomerase (Cti) activity. The compositional alteration of the glycerophospholipid inventory of Pseudomonas was strain specific. Notably, the glycerophospholipid composition of cells that had been exposed to n-butanol before the actual experiment differed significantly from the phosphatidylglycerol compositions of non-treated cells, suggesting some kind of long-term adaptation.
High accumulation of extracellular n-butanol in the cytoplasmic membrane subsequently disintegrates the lipid bilayer. The n-butanol concentration applied here (1% v/v), at the measured n-butanol decrease of 24 ± 5 mmol l⁻¹ h⁻¹ (loss due to evaporation and consumption), fully induced adaptation mechanisms over the experimental time-course. Full activity of the cis/trans isomerase can be assumed, as the half-maximum trans/cis ratio (trans/cis50) in P. putida S12 was assigned at 41.2 mM [0.38% (v/v)] of n-butanol (Heipieper et al., 1995). With n-butanol degradation in P. putida KT2440 equal to that of the tolerant strains, n-butanol consumption can be neglected as a reason for strain-specific effects on the glycerophospholipid composition. Nevertheless, the slight changes in the glycerophospholipid profiles, relating to mostly growth- and energy-independent mechanisms [e.g. cis/trans isomerization (Heipieper et al., 1995)], support our hypothesis that the reduced n-butanol effect on treated P. putida [i.e. long-term exposed to n-butanol (Rühl et al., 2009)] originated from membrane alterations.
Our findings from both metabolic pathway analysis (Rühl et al., 2009) and glycerophospholipid profiling hint at regulation of environmental effects at the transcriptional level. For example, significantly increased fractions of shorter-chain-length fatty acids and reduced fractions of saturated fatty acids in n-butanol-exposed non-treated P. putida DOT-T1E might result from n-butanol inhibition of de novo fatty acid biosynthesis by the FasII system, as previously reported for ethanol (Heipieper and de Bont, 1994). Furthermore, regulation of fatty acid and glycerophospholipid biosynthesis at the transcriptional level can be correlated to: (i) the expression of the fabA and fabB genes, which are regulated by the transcription factors of fatty acid biosynthesis (FabR) and degradation (FadR) (Feng and Cronan, 2009; Zhu et al., 2009), (ii) the sn-glycerol-3-phosphate acyltransferase PlsB and the availability of acyl donors (acyl-ACP or acyl-CoA), and (iii) the CDP-diacylglycerol-glycerol-3-phosphate 3-phosphatidyltransferase PgsA, which plays a role in maintaining head group composition. Indeed, in experiments with P. putida S12 and Pseudomonas sp. strain VLB120 we observed slight changes in head group composition after n-butanol exposure (Table 2). The different reactions of non-treated and treated cells of these strains with respect to the glycerophospholipid classes could be correlated to the respective sensitivity towards n-butanol accumulation in the phospholipid bilayer and the required stabilizing effects of higher phosphatidylethanolamine or cardiolipin amounts. Here, the role of glycerophospholipids via feedback inhibition of the FasII enzymes (Fujita et al., 2007; Zhang et al., 2008) enables the coordination of glycerophospholipid synthesis with cell growth in dependence of the encountered environment, which is a basic function for organic solvent (n-butanol) resistance. With a high number of analytes, whether the observed changes originate from directed regulation or from enzymatic side-activity remains to be investigated. Based on our results, the biological consequences of minor changes in the glycerophospholipid profiles can now be addressed in more detail.
Conclusion
The glycerophospholipid compositions of cellular membranes of Pseudomonas were comprehensively determined. The relative abundance of molecular lipid species is the basis for biophysical models that describe and predict structure-function relationships of cellular membranes. Such detailed understanding is necessary to comprehend the different adaptation strategies that we observed for the investigated strains when exposed to n-butanol. More generally, we expect major findings from detailed glycerophospholipid analysis, which is mainly driven by the availability of new high-resolution analytical techniques, such as the one used here, for so-called lipidome analysis.
Chemicals
Acetonitrile, methanol and water were of LC/MS grade; chloroform and n-propanol were of HPLC grade. All solvents were purchased from Carl Roth GmbH & Co. KG (Karlsruhe, Germany) or Sigma-Aldrich Chemie GmbH. Ammonium acetate and acetic acid of analytical grade were obtained from Merck KGaA (Darmstadt, Germany). n-Butanol and media components were purchased from Sigma-Aldrich/Fluka Chemie AG (Buchs, Switzerland) and Difco Laboratories (Detroit, USA) at the highest grade available. Trimethylsulfonium hydroxide (0.25 M in methanol) for the derivatization of fatty acids was obtained from Macherey-Nagel (Düren, Germany).
Bacterial strains, culture media, treatment and growth conditions
Pseudomonas putida DOT-T1E, KT2440 (Ramos-Diaz and Ramos, 1998), S12 (Weber et al., 1993) and Pseudomonas sp. strain VLB120 (Park et al., 2007) were investigated in this study. Glucose-supplemented LB medium was used for cultivation, containing (per litre) 10 g of peptone/tryptone, 5 g of yeast extract and 5 g of sodium chloride. All strains were incubated at 30°C in a horizontal shaker (200 r.p.m.) using 50 ml of medium in 500 ml baffled shake flasks. Growth was monitored by measuring the optical density at a wavelength of 600 nm (OD600) using a plate reader (Infinite 200 Pro series, Tecan GmbH, Crailsheim, Germany). An OD600 value of 1.0 correlated to a cell dry weight (CDW) of 1.18 gCDW l⁻¹.
Adaptation to n-butanol was carried out as published previously (Rühl et al., 2009). Briefly, cells were exposed to n-butanol during growth on LB agar plates using an airtight desiccator with an n-butanol-saturated gas phase at 30°C. Colonies were repeatedly transferred every 2 days to new plates at least 15 times, before harvesting and storage at -80°C prior to shake-flask experiments. Cells that underwent this procedure are referred to as treated cells.
Shake-flask experiments were started from P. putida overnight cultures with an inoculation volume of 1% (v/v). Cells harvested from cultures with addition of 1% (v/v) of n-butanol are referred to as exposed cells. Bacteria were harvested at a biomass concentration of 1 gCDW l⁻¹ (OD600 = 0.8). For glycerophospholipid extraction, the method of Bligh and Dyer was modified (Bligh and Dyer, 1959), omitting the use of an aqueous phase to increase the recovery of acidic glycerophospholipids. Lipid extraction was started with 15 mgCDW using the appropriate volume of culture medium. For high analyte recovery, cell suspensions were transferred to Teflon centrifuge tubes and cells were harvested by centrifugation (5 min, 4633 g, -6°C; Heraeus Multifuge 1 R-S). The cell pellet was gently washed with 3 ml of deionized water (0°C) and centrifuged again (5 min, 4633 g, -6°C) before resuspension in 3 ml of methanol (0°C) to quench all metabolic processes. For lipid extraction, 6 ml of chloroform was added. The extraction was carried out by sonication (10 min), shaking (30 min), sonication (10 min) and shaking (60 min). Cell residues were separated by centrifugation (10 min, 4000 g, 0°C), the extracts transferred to silylated 1.5 ml glass vials, dried under a nitrogen stream at 30°C and stored at -20°C. For analysis, the samples were reconstituted in a mixture of acetonitrile/methanol/chloroform (49:49:2, by volume).
ESI-LIT-FTICR-MS experiments were carried out using an LTQ-FT Fourier transform ion cyclotron resonance hybrid mass spectrometer (Thermo Scientific, Bremen, Germany) with a 7.0 Tesla actively shielded superconducting magnet and an electrospray ionization source operated in the data-dependent mode. Survey centroid MS spectra in the mass range m/z 185-1850 were acquired in the FTICR with a resolution R = 25 000 at m/z 400 (target accumulation value 5 000 000, maximal ion accumulation time 750 ms). The two most intensive ions were sequentially isolated for accurate mass measurements by an FTICR 'SIM scan' in a narrow mass window (± 5 Da, R = 50 000, target accumulation value 100 000, maximal ion accumulation time 750 ms) in the profile mode. Subsequent fragmentations (MS², MS³) were performed in the linear ion trap by collision-induced dissociation (CID) (target accumulation value 10 000, maximal ion accumulation time 150 ms). Former target ions selected for MS/MS were dynamically excluded for 60 s, with a total cycle time of approximately 4.6 s. General MS conditions were: -3.5 kV spray voltage, 30 arbitrary units sheath gas flow, 5 arbitrary units auxiliary gas flow and 2 arbitrary units sweep gas flow. The temperature of the ion transfer tube was set to 225°C. Parameters for CID MS² and MS³ experiments: 30% normalized collision energy, activation at q = 0.25 for 30 ms. Ion selection thresholds were 500 counts for SIM scans, 500 counts for MS² and 100 counts for MS³ experiments.
Gas chromatography/mass spectrometry
Aliquots (100 µl) of the reconstituted samples were transferred to silylated 200 µl glass inserts for 1.8 ml autosampler vials and dried under a nitrogen stream at 30°C. Samples were supplemented with 30 µl of chloroform and 70 µl of trimethylsulfonium hydroxide (0.25 M in methanol), mixed thoroughly, incubated for 60 min at 60°C and cooled down. A Focus GC coupled to a Polaris Q quadrupole ion trap mass spectrometer (both Thermo Scientific, Dreieich, Germany) equipped with a HP-5 MS column (30 m; 0.25 mm i.d.; 0.25 µm film thickness; GGA, Moers, Germany) was used for analysis with the following temperature profile: 150°C (4 min), 2°C min⁻¹, 250°C. One microlitre of sample was injected in splitless mode at 250°C injector temperature and a transfer capillary temperature of 280°C. The following mass spectrometric parameters were used: acquisition delay 3 min, ion source temperature 200°C, full scan range m/z 35-500, 70 eV electron impact ionization in the positive mode.
Profiler-Merger-Viewer software
For the conversion of the raw files into text files, the file converter implemented in Xcalibur (Thermo Scientific) was used. Text files were further processed by the Profiler-Merger-Viewer tool written in Java (Hein et al., 2010).
Supporting information
Additional Supporting Information may be found in the online version of this article:

Table S1. Average distribution of the relative amounts of glycerophospholipid species for the investigated P. putida strains KT2440, DOT-T1E, S12 and Pseudomonas sp. strain VLB120. Average relative amounts are calculated from both experimental and analytical replicates for each experimental set-up. Data from replicates are provided in Tables S1a-S1d. Errors refer to the standard deviation of the data from replicates.

Table S2. Identified distinct glycerophospholipid species for the investigated P. putida strains KT2440, DOT-T1E, S12 and Pseudomonas sp. strain VLB120. Distinct glycerophospholipid species are characterized by the respective fatty acid moieties in sn1/sn2 position. Fatty acid moieties written in bold letters refer to the prominent combinations that can be found in almost all samples.

Table S3. Average distribution of the relative amounts of fatty acid moieties of the distinct glycerophospholipid species for the investigated P. putida strains KT2440, DOT-T1E, S12 and Pseudomonas sp. strain VLB120. Average relative amounts are calculated from experimental replicates for each experimental set-up. Data from replicates are provided in Table S3a. Errors refer to the standard deviation of the data from replicates.
Table S1a. Distribution of the relative amounts of glycerophospholipid species for the single measurements (experimental and analytical replicates) of P. putida KT2440.

Table S1b. Distribution of the relative amounts of glycerophospholipid species for the single measurements (experimental and analytical replicates) of P. putida DOT-T1E.

Table S1c. Distribution of the relative amounts of glycerophospholipid species for the single measurements (experimental and analytical replicates) of P. putida S12.

Table S1d. Distribution of the relative amounts of glycerophospholipid species for the single measurements (experimental and analytical replicates) of Pseudomonas sp. strain VLB120.

Table S3a. Distribution of the relative amounts of fatty acid moieties of the distinct glycerophospholipid species for the single measurements (experimental replicates) of the investigated P. putida strains KT2440, DOT-T1E, S12 and Pseudomonas sp. strain VLB120.
A Multi-Level Iterative Bi-Clustering Method for Discovering miRNA Co-regulation Network of Abiotic Stress Tolerance in Soybeans
Although growing evidence shows that microRNA (miRNA) regulates plant growth and development, miRNA regulatory networks in plants are not well understood. Current experimental studies cannot characterize miRNA regulatory networks on a large scale. This information gap provides an excellent opportunity to employ computational methods for global analysis and generate valuable models and hypotheses. To address this opportunity, we collected miRNA–target interactions (MTIs) and used MTIs from Arabidopsis thaliana and Medicago truncatula to predict homologous MTIs in soybeans, resulting in 80,235 soybean MTIs in total. A multi-level iterative bi-clustering method was developed to identify 483 soybean miRNA–target regulatory modules (MTRMs). Furthermore, we collected soybean miRNA expression data and corresponding gene expression data in response to abiotic stresses. By clustering these data, 37 MTRMs related to abiotic stresses were identified, including stress-specific MTRMs and shared MTRMs. These MTRMs have gene ontology (GO) enrichment in resistance response, iron transport, positive growth regulation, etc. Our study predicts soybean MTRMs and miRNA-GO networks under different stresses, and provides miRNA targeting hypotheses for experimental analyses. The method can be applied to other biological processes and other plants to elucidate miRNA co-regulation mechanisms.
INTRODUCTION
The growth and development of crops are often restricted by various environmental stresses, leading to poor harvests and yields below their genetic potential (Ku et al., 2015). In the past decade, microRNAs (miRNAs) have been identified as important regulators of gene expression that play an essential role in plant growth and development (Ruiz-Ferrer and Voinnet, 2009). A miRNA can target multiple genes, and multiple miRNAs can also target the same gene. miRNAs are involved in the expression of stress-responsive genes and the plant's ability to adapt to environmental change (Sunkar et al., 2007). Different stresses can induce differential expression of corresponding miRNAs in plants, while some miRNAs can simultaneously respond to several abiotic stresses (Shukla et al., 2008; Song et al., 2019; Sun et al., 2019). Therefore, studying the cooperative relationships among miRNAs and the interactions with their target genes is essential for understanding the role of miRNAs in controlling plant growth and development.
MicroRNAs may respond to adverse effects on plant growth and development caused by drought, salinity, temperature, and other abiotic environmental factors. It was shown that willow leaves exposed to drought or high temperature exhibit differential expression of some miRNAs. For example, miR169c plays a negative regulatory role under drought stress by inhibiting the expression of its target gene nuclear factor Y-A (NF-YA). miR172a (Pan et al., 2016) and miR172c endow plants with tolerance to salt stress and water deficiency. Meanwhile, miRNAs also respond to abiotic stress indirectly by regulating other biological macromolecules. For example, miR398c can negatively regulate multiple peroxisome-related genes (GmCSD1a/b, GmCSD2a/b/c, and GmCCS) and affect the drought tolerance of the soybean (Zhou et al., 2020). miR166k/o, miR390g, and miR396c/k mediate root elongation in BX10 (an Al-tolerant genotype), and miR169r triggers oxidative stress in BD2 (an Al-sensitive genotype), which in turn leads to different types of aluminum tolerance in BX10 and BD2. This indicates that miRNAs may regulate plant growth under abiotic stress through a complex network. However, current studies typically explore the roles of only a few miRNAs in response to abiotic stresses. From a global view, how miRNAs work together as a co-regulatory mechanism has not been significantly explored.
Several studies have uncovered interesting miRNA interactions. For example, miR160 and miR167 are involved in the adventitious root program of Arabidopsis (Xu et al., 2014c). miR156 and miR172 play a role in the vegetative phase transition of the soybean (Yoshikawa et al., 2013). Transgenic studies of miR482, miR1512, and miR1515 showed that their over-expression may lead to a substantial increase in the number of soybean nodules (Li et al., 2010). Another study verified networks of 365 tissue-specific miRNA-target interactions (MTIs). In addition, Ismalia et al. (2019) used SVR to study the interactions between miRNAs and lncRNAs, constructed networks of miRNA-mRNA, miRNA-lncRNA, and miRNA-mRNA-lncRNA interactions, and recognized their regulatory roles in the stress response of Arabidopsis thaliana. Tu et al. (2022) mined the miRNA-lncRNA-TF regulatory network related to leaf and flower development of Liriodendron chinense, and pointed out that lch-lnc7374-miR156h-SPL3 and lch-lnc7374-miR156j-SPL9 are potential regulators of stamen and pistil development, respectively. The miR157a-SPL and miR160a-ARF modules, both of which are involved in leaf and flower development, were validated using RLM-RACE (Tu et al., 2022). The synergistic effects of miRNAs provide a new systematic perspective on the entire microRNome (Xu et al., 2014c), which calls for a global analysis of MTIs. Yang et al. (2021) found that the differential expression of key miRNA-target modules in plants may promote root growth and development and enhance tolerance to various stresses. Fu et al. (2019) revealed the miRNA-mRNA response mechanism of potato under alkali stress. It is thus of great significance to explore the biological mechanisms of plants under abiotic stress from the miRNA-target perspective.
Several methods have been developed and applied to explore this field as miRNA-target data have grown. Shalgi et al. (2007) first constructed a miRNA network from the target genes predicted by PicTar and TargetScan. Xu et al. (2011) constructed a human miRNA-miRNA functional synergy network through co-regulated functional modules. Meanwhile, biclustering has been applied so that two different types of objects (genes and miRNAs in this case) can belong to the same cluster. Various biclustering methods have been developed (Huang and Brutlag, 2001; Yoon and De Micheli, 2005; Caldas and Kaski, 2011; Xie et al., 2019). SAMBA (Tanay et al., 2002), ISA (Bergmann et al., 2003), BIMAX (Prelic et al., 2006), QUBIC (Li et al., 2009), and FABIA (Hochreiter et al., 2010) are commonly used general-purpose algorithms. Contiguous column coherent (CCC) biclustering (Goncalves et al., 2009; Madeira et al., 2010; Medina et al., 2010; Goncalves and Madeira, 2014; Henriques and Madeira, 2014; Henriques et al., 2017) and LateBiccluster (Goncalves and Madeira, 2014) are designed for temporal data analysis. BicPAM (Henriques and Madeira, 2014; Henriques et al., 2017), BicNET (Henriques and Madeira, 2016), and MCbiclust (Bentham et al., 2017) are among the latest tools. Pio et al. (2013) applied a biclustering algorithm to predict human miRNA-mRNA modules. Applying biclustering algorithms to miRNA-target regulation module (MTRM) mining is thus feasible and important for analyzing miRNA regulation mechanisms. Compared with earlier methods such as Bimax (Prelic et al., 2006) and BiBit (Rodriguez-Baena et al., 2011), CUBiBit (Gonzalez-Dominguez and Exposito, 2019) shortened the computing time and provided an optimized method for finding modules in larger datasets. However, the results obtained by CUBiBit are mostly fully-connected bipartite graphs, whereas the relationship between miRNAs and target genes is complex and interactive.
In this study, we proposed a method to obtain the miRNA regulatory modules and analyze their relationship in response to abiotic stresses in the soybean as a means for extending our understanding of soybean resistance mechanisms. Previously, Xu et al. (2014d) provided a soybean miRNA-gene network, SoyFN, based on predicted miRNA targets. However, this work was based only on sequence comparisons, which may result in a high false discovery rate. In contrast, in our work, we collected experimentally proven miRNA-target relationships based on degradome sequencing in the soybean and the stringent homologs of miRNA-target pairs in A. thaliana and M. truncatula. Based on these reliable miRNA-target data, we performed a biclustering analysis. We iteratively fused the overlapping biclusters based on the SoyNet network to obtain the soybean miRNA-target regulatory modules in response to abiotic stresses. We provide soybean MTRMs with high confidence relevant to various stresses, verified by REVIGO analysis to have the concentration of GO functions, and present the miRNA-GO regulatory networks of these modules. Capturing these miRNA-target modules with biological significance expands our understanding of the complex regulatory mechanisms of miRNA. The methods used should be readily applicable to other plant and animal systems where sufficient data exists to perform the analyses.
MATERIALS AND METHODS
We collected soybean MTIs from A. thaliana and M. truncatula databases and from publications on miRNAs and genes involved in the soybean response to several abiotic stresses. Subsequently, we used homology prediction on the collected MTIs to expand the set of soybean MTIs. Next, we used a biclustering method to mine soybean MTRMs and performed overlap analysis to remove redundancy. Then, based on the soybean gene interaction network, biclusters were fused through multilevel iteration. Finally, based on soybean abiotic stress-related miRNAs and genes, the fused regulatory modules were screened to obtain soybean abiotic stress-related MTRMs. Figure 1 shows a flowchart of our tasks and results.
Data Collection
We collected miRNA-target data of A. thaliana, soybean, and M. truncatula based on experimentally verified degradome sequencing results from databases [DPMIND, Tarbase, mirTarbase, and Starbase (Sethupathy et al., 2006; Hsu et al., 2011; Yang et al., 2011; Li et al., 2014; Vlachos et al., 2015; Fei et al., 2018)] and publications (Supplementary Table 1). In addition, we collected the miRNA information of the three species from miRBase (Griffiths-Jones et al., 2008) and the gene annotations of the species from NCBI, EnsemblPlants, and Phytozome (Goodstein et al., 2012; Howe et al., 2020). We also downloaded the homologous genes of A. thaliana and M. truncatula from the Orthologous MAtrix (OMA) (Altenhoff et al., 2011). Furthermore, we downloaded the soybean cDNA sequences and soybean gene GO annotations from SoyBase (Grant et al., 2010) and obtained soybean gene network data from SoyNet (Kim et al., 2017).
We unified the miRNA and gene formats across the various databases and publications, then pooled the data by species. Next, we annotated the miRNA-target data based on the collected and processed miRNA details and the gene annotations of the three species, including the miRNA-target pairs, related notes, and data sources. Finally, after removing duplicated entries, we obtained the miRNA-target data of the three species.
Homologous Extension
We chose A. thaliana and M. truncatula to explore potential targets. As a model plant, A. thaliana has abundant high-quality data, while M. truncatula is closely related to soybean and shares many biological characteristics with it. We extracted the miRNA sequences and removed redundant miRNAs with identical sequences in soybean and A. thaliana. Subsequently, we extracted the target genes corresponding to each miRNA ID. Based on these target genes, we obtained the soybean genes homologous to them from the A. thaliana-soybean homologous gene pairs downloaded from OMA. We assumed that a targeting relationship may exist if a miRNA with the same sequence is present in both species and the target genes are homologous; therefore, these homologous genes may be targeted by the corresponding miRNAs in soybean.
Targets inferred from homology information alone may not be genuine; therefore, we extracted these miRNA sequences and the cDNA sequences of the candidate target genes (SoyBase) and used miRNA-target prediction tools to confirm the potential relationships. We chose psRNAtarget (Dai and Zhao, 2011), TAPIR (Bonnet et al., 2010), and Targetfinder (Bo and Wang, 2005), which have been reported to perform well in non-Arabidopsis plants (Srivastava et al., 2014), to predict potential soybean miRNA-target relationships. The three prediction tools use different scoring methods; we analyzed their respective scores and merged the results, as sketched below. The homology extension procedure for M. truncatula-soybean was the same as above.
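The text describes analyzing and merging the scores of the three tools without giving the exact rule; the following is a minimal sketch of one plausible realization, a simple vote-count consensus over parsed prediction sets. The miRNA and gene IDs are hypothetical, and parsing of each tool's raw output into (miRNA, gene) pairs is assumed to have been done beforehand.

```python
# A hypothetical consensus merge of miRNA-target predictions from three tools.
# Each argument is a set of (miRNA_id, gene_id) pairs parsed from the tool's
# output; the >=2-tool voting rule is an assumption, not the paper's method.
from collections import Counter

def consensus_targets(psrnatarget, tapir, targetfinder, min_tools=2):
    """Keep candidate pairs supported by at least `min_tools` predictors."""
    votes = Counter()
    for pairs in (psrnatarget, tapir, targetfinder):
        votes.update(set(pairs))   # count each tool at most once per pair
    return {pair for pair, n in votes.items() if n >= min_tools}

# Illustrative IDs only:
ps = {("gma-miR156a", "Glyma.02G177500"), ("gma-miR172a", "Glyma.11G119600")}
ta = {("gma-miR156a", "Glyma.02G177500")}
tf = {("gma-miR172a", "Glyma.11G119600")}
confirmed = consensus_targets(ps, ta, tf)   # pairs supported by two tools
```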
Clustering Method
Current research on miRNA targeting is mainly based on one-to-one relationships; however, miRNA targeting is a complex, many-to-many interaction. Traditional clustering methods such as k-means group only a single type of object, and their results for mining miRNA-target regulatory modules are poor because miRNA targeting is sparse. The relationship between miRNAs and target genes has a bipartite graph structure; thus, miRNA-target regulatory groups can be found by analyzing the bipartite graph. CUBiBit (Gonzalez-Dominguez and Exposito, 2019) was proposed based on Bimax (Prelic et al., 2006) and BiBit (Rodriguez-Baena et al., 2011); it shortened the computing time and provided an optimized method for finding modules in larger data. We added the miRNA-target data predicted by homology expansion from A. thaliana and M. truncatula to the collected soybean miRNA-target data. Then, we extracted the miRNA-target data with GO annotations and glyma2 IDs based on the soybean gene annotations of SoyBase. Finally, we used CUBiBit to perform biclustering and obtain the results.
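CUBiBit itself is a compiled CUDA tool, so only its input is illustrated here: a minimal sketch, assuming the MTIs have been reduced to a pair list, of the binary miRNA × gene incidence matrix that a Bimax/BiBit-style biclusterer consumes. The IDs are hypothetical.

```python
# Build the binary miRNA x gene incidence matrix for biclustering.
import pandas as pd

mti = pd.DataFrame(
    [("gma-miR156a", "Glyma.02G177500"),
     ("gma-miR156a", "Glyma.11G119600"),
     ("gma-miR172a", "Glyma.11G119600")],
    columns=["mirna", "gene"],
)
mti["hit"] = 1
# Rows = miRNAs, columns = genes; 1 where an MTI exists, 0 elsewhere.
incidence = mti.pivot_table(index="mirna", columns="gene",
                            values="hit", fill_value=0)
# A bicluster at the 6 x 2 scale is then an all-ones submatrix with at
# least six gene columns and two miRNA rows.
```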
Overlap and Iterative Fusion
The results obtained by CUBiBit were mostly fully-connected bipartite graphs. However, the relationships between miRNAs and target genes are complex and interactive. Therefore, we proposed an iterative fusion method for MTRMs based on the gene interaction network (Figure 2).
We detected the classes completely contained within other classes in the clustering results and removed them to form the initial level. First, for each class at this level, containing miRNAs and genes, we measured the degree of miRNA overlap and gene overlap with every other class to form alpha and beta matrices, both of which are upper triangular. We then set two thresholds on the miRNA and gene overlap that two classes must meet to be potentially merged, and recorded the qualifying class pairs in a Boolean matrix. The initial alpha threshold was 0.3; it increased by 0.05 at each iteration to determine fusionable modules conservatively, and was held constant after reaching 0.8. For the beta threshold, any value greater than 0 was sufficient. Next, for each class pair in the potential-merge table, we took the union of the two classes' genes and extracted, for each gene in this union, its network block to a depth of two layers from the SoyNet network. From the resulting block set, we identified the network block containing the most genes of the smaller class, assuming that this block represents the function of the smaller class's genes. Whether the genes of the two classes are functionally similar can then be judged from the number of genes of the larger class in this block: if the genes of the two classes are concentrated on one network block, their genes interact closely and the pair meets the fusion condition. We compared the number of genes of the larger class in this block with the number in all blocks of the smaller class's functional module to obtain a correlation score, recorded as gamma. When gamma > 0.3 was satisfied, the two classes were merged; otherwise they were not. Class pairs meeting the fusion condition were arranged in descending order of alpha value and fused top-down without repetition, with each class merging with at most one other class per iteration. The resulting class set formed the new level and was output as the fusion result. Iterations continued until no class pair satisfied both conditions. A simplified sketch of one fusion iteration is given below.
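The following is a simplified sketch of one fusion iteration under stated assumptions: each module is a (miRNA set, gene set) pair, the alpha/beta overlaps are measured against the smaller set, and the SoyNet network-block test is abstracted into a user-supplied callable `gamma_test`, since it depends on the graph data.

```python
# One level of the iterative fusion, simplified. The overlap definition
# (shared fraction of the smaller set) and the single-merge-per-iteration
# rule follow the text; gamma_test encapsulates the SoyNet block analysis.

def overlap(a, b):
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

def fuse_level(modules, alpha_thr, gamma_test, beta_thr=0.0):
    cand = sorted(
        ((overlap(modules[i][0], modules[j][0]), i, j)
         for i in range(len(modules)) for j in range(i + 1, len(modules))
         if overlap(modules[i][0], modules[j][0]) >= alpha_thr
         and overlap(modules[i][1], modules[j][1]) > beta_thr),
        reverse=True)                      # best alpha first, top-down
    merged, used = [], set()
    for _, i, j in cand:
        if i in used or j in used:         # each class fuses at most once
            continue
        if gamma_test(modules[i][1], modules[j][1]) > 0.3:
            merged.append((modules[i][0] | modules[j][0],
                           modules[i][1] | modules[j][1]))
            used.update((i, j))
    merged += [m for k, m in enumerate(modules) if k not in used]
    return merged

# Outer loop: alpha starts at 0.3, rises by 0.05 per level, capped at 0.8,
# and iteration stops when no candidate pair passes both conditions.
```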
Function Assessment
Although this study does not include experimental validation of our predictions, we assessed the distributions of gene functions to evaluate indirectly whether the results are biologically meaningful. For the results of the above iterative fusion, the enrichment of the classes at each level was analyzed separately. For each bicluster, we extracted its genes, used SoyBase's GO BP and GO MF annotations for enrichment analysis, and took the corrected GO ID with the smallest p-value as the best enrichment result for that cluster. When evaluating each class, the smallest p-value alone was not enough to assess its importance. Instead, we used a cluster score to evaluate the enrichment of all the GO IDs enriched by the class. For all the enriched GO IDs of a class, we retained those with a p-value of less than 0.05 and then used Eq. (1) to calculate the cluster score of the class.
Here, n is the number of gene ontologies enriched in the module, x_i is the number of genes enriched in the i-th GO, and correctP_i is the adjusted p-value of the i-th enriched GO.
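Since Eq. (1) is not reproduced in this text, the exact aggregation cannot be stated with certainty; the sketch below assumes, purely for illustration, that the score sums the gene count x_i weighted by −log10 of the adjusted p-value over the GO terms that pass the 0.05 filter.

```python
# Hypothetical cluster score: the x_i * -log10(correctP_i) aggregation is an
# assumption standing in for the paper's Eq. (1), which is not shown here.
import math

def cluster_score(enriched):
    """`enriched` is a list of (x_i, correctP_i) tuples for one module."""
    kept = [(x, p) for x, p in enriched if p < 0.05]   # keep p < 0.05 terms
    return sum(x * -math.log10(p) for x, p in kept)

print(cluster_score([(12, 1e-8), (5, 3e-4), (7, 0.2)]))  # last term filtered
```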
Abiotic Stress Response miRNA-Target Regulatory Module
We collected soybean miRNAs that respond to drought, salt, acid, and low-temperature stress from published studies (Subramanian et al., 2008; Kulcheski et al., 2011; Li et al., 2011; Sha et al., 2012; Subramanian, 2012; Sunkar et al., 2012; Dong et al., 2013; Zhang et al., 2014, 2018; Balyan et al., 2015; Xu et al., 2016; Zheng et al., 2016; Chen et al., 2018; Gupta et al., 2019; Proust et al., 2019; Yu J.-Y. et al., 2019; Wang et al., 2020). At the same time, we collected the soybean genes differentially expressed under the various stresses, screening genes with fold change ≥ 2 and t-test p-value less than 0.05 as stress-related genes. We then marked these genes in each module and calculated the p-value of the module's association with abiotic stress based on the hypergeometric distribution. Finally, we screened the modules based on the calculated cluster score, the stress-related p-value, and the proportion of stress-related miRNAs. The screening procedures for drought- and salt-stress modules followed the same steps as for the abiotic stress modules.
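The following is a minimal sketch of the hypergeometric test described above, using SciPy; the counts are illustrative and not taken from the paper.

```python
# Hypergeometric enrichment of stress-related genes in a module.
# scipy parameterization: M = population size, n = stress-related genes in
# the population, N = genes drawn (module size), k = stress genes in module.
from scipy.stats import hypergeom

def stress_pvalue(M, n, N, k):
    """P(X >= k): chance of seeing at least k stress genes in the module."""
    return hypergeom.sf(k - 1, M, n, N)

# Illustrative numbers only:
p = stress_pvalue(M=40000, n=2145, N=60, k=15)
```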
Construction of miRNA-Gene Ontology Network Under Abiotic Stress
Based on the results of MTRM mining under stress, we first filtered the GO terms in the enrichment results of the screened modules, retaining those with a p-value of less than 10^-5; we then performed REVIGO semantic relevance analysis and extracted representative GO terms. Based on the MTI data, the miRNA-GO relationships were constructed from the genes targeted by the miRNAs in each module and the enriched GO pathways to which those genes belong. The relationships among GO terms were derived from the REVIGO results and GO similarity calculations, and a threshold was applied to remove the weaker relationships. Detailed parameters are provided with the corresponding figure (Figure 3D).
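Below is a minimal sketch of assembling such a miRNA-GO network with NetworkX, assuming the module's MTI pairs, the gene-to-enriched-GO assignments, and the pairwise GO similarities are already in hand; the similarity threshold is illustrative.

```python
# miRNA-GO network: miRNAs link to GO terms via their target genes, and
# GO terms link to each other when their semantic similarity is strong.
import networkx as nx

def build_mirna_go_network(mti_pairs, gene2go, go_sim, sim_thr=0.5):
    g = nx.Graph()
    for mirna, gene in mti_pairs:                # miRNA -> GO via target gene
        for go in gene2go.get(gene, ()):
            g.add_edge(mirna, go, kind="regulates")
    for (go1, go2), s in go_sim.items():         # GO-GO semantic similarity
        if s >= sim_thr:                         # drop the weaker relations
            g.add_edge(go1, go2, kind="similar", weight=s)
    return g
```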
RESULTS
We obtained 90,064 confirmed soybean MTIs from multiple experimental data sources and 1,189 potential soybean MTIs based on homology to experimental data from A. thaliana and M. truncatula. A multi-level iterative biclustering analysis yielded 483 soybean miRNA-target regulatory modules, which were evaluated by GO enrichment. In addition, we identified 37 abiotic stress-related modules and predicted the underlying miRNA regulatory pathway networks.
Identification of miRNA-Target Interactions
We collected soybean miRNA-target data from databases and related publications. First, we gathered all the soybean MTIs verified by degradome sequencing and biological experiments by mining published data. As a result, we obtained 111,650 pairs of soybean MTIs (Sethupathy et al., 2006; Song et al., 2011; Yang et al., 2011; Shamimuzzaman and Vodkin, 2012; Turner et al., 2012; Fang et al., 2013; Xu et al., 2013, 2014a; Ye et al., 2014; Yan et al., 2015, 2016; Chen et al., 2016, 2017; Ding et al., 2016; Liu et al., 2016; Fei et al., 2018). To expand the MTIs, we predicted targeting relationships between potential miRNAs and target genes from the MTIs of A. thaliana and M. truncatula based on homology. After removing redundant MTIs, we obtained 12,094 unique pairs of Arabidopsis MTIs (Addo-Quaye et al., 2008; German et al., 2008; Ding et al., 2012; Xu et al., 2014b; Ma et al., 2018) and 4,394 unique pairs of Medicago MTIs (Devers et al., 2011; Lauressergues et al., 2012; Zhou et al., 2012; Ma et al., 2018). After removing any redundant MTIs arising from identical miRNA sequences, we further validated the homology-based MTIs using three miRNA target prediction tools that perform well in general plants, i.e., psRNAtarget, TAPIR, and Targetfinder. Among the Arabidopsis-derived MTIs, a total of 961 unique pairs were confirmed; among the Medicago-derived MTIs, a total of 986 unique pairs were confirmed, as shown in Supplementary Figure 1. There is a high overlap between the two sets of MTIs (Supplementary Table 2). After removing the redundant pairs, a total of 1,189 pairs were used to expand the soybean MTIs.
miRNA-Target Regulatory Modules
We integrated the 90,064 soybean MTIs with the 1,189 MTIs based on homology. We removed MTIs involving genes that do not have the glyma2 ID. A total of 11,018 MTIs were removed, and the remaining 80,235 MTIs were used for analysis in the following tasks.
We applied CUBiBit for biclustering analysis with a minimum scale of 2 × 2 or 6 × 2 for miRNA-target modules (i.e., at least two or six target genes and at least two miRNAs in each module), yielding 15,380 (2 × 2) or 2,461 (6 × 2) miRNA-target modules. We then merged the overlapping modules using the multi-level iterative fusion method based on the soybean gene relationship network (see section "Materials and Methods"), yielding 6,577 (2 × 2) and 812 (6 × 2) soybean miRNA-target regulatory modules after removing the modules completely contained within other modules in the preliminary clustering.
We next merged MTRMs according to the set thresholds until the levels converged stably (the level represents the number of iterations). The iterative fusion at each level is shown in Figures 4A,B. We compared the iterative results at the two scales. Soybean MTRMs at the 2 × 2 scale showed better results at level 10, which contains 2,715 MTRMs; soybean MTRMs at the 6 × 2 scale showed better results at level 7, which contains 483 MTRMs. Comparing the GO-based cluster scores between the two scales at stable convergence (Figure 4C) shows that the cluster score quality at the 6 × 2 scale is higher than that at the 2 × 2 scale (Supplementary Table 3). Hence, we used the GO enrichment analysis results on the 483 soybean MTRMs obtained at the 6 × 2 scale, level 7.
To compare the MTRMs before and after fusion, we extracted an MTRM bicluster from the level-7 clustering results at the 6 × 2 scale (Figure 4D) and plotted it alongside the corresponding MTRMs at level 1, before fusion (Figure 4E). Module 1,534 at level 1, before fusion, has 2 miRNAs and 22 target genes. At level 7, module 1,534 had fused with three additional modules, 1,539, 622, and 1,537, each of which contains miR396. From the targeting perspective, the module at level 7 has more miRNA-target interactions than the one at level 1.
Gene Ontology Analysis and Evaluation of miRNA-Target Regulatory Modules
We screened 254 GO pathways whose biological process (BP) enrichment satisfied p < 0.00001 among the GO enrichment results of the 483 soybean MTRMs obtained at the 6 × 2 scale, level 7. We analyzed the relationships among the enriched GO terms through REVIGO (Supek et al., 2011) with a parameter of 0.5. These GO pathways show clear aggregation (Figure 5A). The MTRMs, viewed globally, have several concentrated distributions of GO functions, such as cellular processes, primary metabolism, cell adhesion, hormone response, and negative regulation of biological processes; heterochronic positive regulation of growth and chalcone biosynthesis are also represented. Chalcone plays an important role in soybean and is involved in the multi-branch pathway of flavonoid and isoflavone biosynthesis (Subramanian et al., 2006). The enrichment results mainly involve positive regulation of development (heterochronic), chalcone biosynthesis, defense response, mitochondrial mRNA modification, sulfate transport, plant-type primary cell wall biogenesis, and cofactor biosynthesis, as shown in Figure 5B.
In addition, we extracted the enrichment results of the top biclusters in terms of cluster score among the 483 MTRMs and selected the top five GO terms of each module, as shown in Figure 5C and Supplementary Table 4.
We correlated the 483 soybean MTRMs obtained by clustering with the functional annotations. We selected miRNAs that respond to drought, salt, heat, cold, and acid stress and performed a statistical analysis of the miRNAs in each of the 483 biclusters. We also collected data on the differential expression of soybean genes in the MTRMs under drought, salt, heat, cold, and acid stress (Supplementary Table 6), screening differentially expressed genes with log2FC > 1 and p < 0.05. We obtained 2,145 genes differentially expressed under soybean drought and 1,752 genes differentially expressed under salt treatment. Figure 6B shows the genes in the modules together with an abiotic stress diagram. At the same time, we calculated the p-values and FDR: for the differentially expressed genes under each abiotic stress, we computed the p-value of the genes in each MTRM from the hypergeometric distribution and corrected it using the Benjamini-Hochberg procedure, as shown in Figure 6C and sketched in the code below. Figure 6A shows an UpSet plot (Lex et al., 2014) of modular genes under the various abiotic stresses: dots on the left mark the corresponding cold, acid, heat, drought, and salt stresses, the vertical dot-to-dot connections indicate intersections between the corresponding gene sets, and the upper bar graph shows the number of genes in each intersection. Panel (C) shows the differentially expressed genes in each MTRM under abiotic stress after screening. We used three indicators to filter the candidate clusters: the p-value, the stress-related miRNA purity, and the cluster score of each MTRM under the corresponding stress. We selected appropriate thresholds, obtained the stress-related MTRMs with higher reliability, and marked them as red dots in panel (C). Supplementary Figure 2 shows MTRMs under the other types of stress.
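A minimal sketch of the Benjamini-Hochberg step using statsmodels; the p-values are illustrative.

```python
# Benjamini-Hochberg FDR correction of per-module hypergeometric p-values.
from statsmodels.stats.multitest import multipletests

pvals = [2e-6, 0.004, 0.03, 0.2, 0.8]            # illustrative module p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```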
Subsequently, we screened MTRMs related to abiotic stress, drought stress, and salt stress according to the p-value of the stress-related differentially expressed genes in each MTRM (p < 0.001 overall, 0.01 for a single stress), the proportion of stress-related miRNA families (miR function ratio), and the cluster score (cluster score > median). The screening results are shown in Supplementary Table 6. We obtained 37 MTRMs related to abiotic stress, including 34 related to drought stress, 27 related to salt stress, 3 related to cold stress, and 21 related to heat stress. Figure 7A shows the set relationships of MTRMs involved in multiple stresses. The data suggest that soybean miRNAs have basic, universal functional modules in their response mechanisms to drought, high salt, high temperature, low temperature, and other abiotic stresses. Two modules (M31 and M493) are shared across all stresses, involving 6 and 11 miRNAs, respectively (Figure 7B). The six miRNAs of module M31 belong to the miR156 family; the GO pathways enriched in their regulated genes are regulation of transcription, DNA-dependent (p-value = 4.24e-10) and regulation of vegetative phase change (p-value = 9.24e-07). The 11 miRNAs of module M493 are mainly in the miR172 family, together with miR156, miR1533, miR4374, miR5782, and miR3939; the GO pathways enriched in their regulated genes involve the oxidation-reduction process (p-value = 4.66e-12) and root hair elongation (p-value = 1.63e-08). Among them, miR156 is up-regulated in response to drought stress, and miR156d and miR156c play an important role in the heat tolerance of Arabidopsis. miR172b, miR172h, and miR172j-5p are down-regulated under drought stress to cope with water deficit. miR156 is involved in the regulation of gene expression and signal transduction in the soybean response to cold stress (Xu et al., 2016), and miR156 and miR172 have been confirmed to respond to salt stress in a variety of plants (Sun et al., 2016). Moreover, we also found stress-specific regulatory modules in our results, including 14 drought-specific MTRMs, seven salt-specific MTRMs, and two heat-specific MTRMs (Supplementary Table 7).
The functions of the stress-related miRNA regulatory modules are mainly concentrated in positive regulation of developmental heterochrony, defense responses, cell wall organization, and other biological processes, as shown in Figure 7C. Besides positive regulation of development and defense response, GO functions such as cell wall organization also underlie different response mechanisms under abiotic stresses. For example, salt stress disrupts cell wall integrity (Liu et al., 2021), and cell walls are adaptively regulated under drought stress (Moore et al., 2008). Moreover, plants reduce gibberellin production to limit growth and concentrate energy against stress (Colebrook et al., 2014): sweet briar rose (Rosa rubiginosa L.) adapts to drought by regulating gibberellin (Gadzinowska et al., 2020), pea seeds adapt to heat stress by reducing gibberellin production (Leitão and Enguita, 2016), and gibberellin in A. thaliana is activated in a low-salt environment. Thus, the GO terms enriched in the MTRMs play important roles in various stress responses. The data of the top five modules are shown in Supplementary Table 8.
miRNA Regulatory Pathway Network Under Abiotic Stress
We explored the regulatory pathway networks corresponding to the miRNAs of the miRNA-target regulatory modules under soybean abiotic stress and analyzed the GO terms of the MTRM genes under the various stresses. Stringent screening conditions were used: the p-value threshold for MTRM stress association was 0.001, and GO BP pathways were selected with a p-value of less than 10^-5. The REVIGO-based GO semantic correlation analysis is shown in Figure 3; GO terms with similar functions appear closer together in the figure, because the small RNAs targeting a given cluster of GO terms are functionally close. The 37 soybean MTRMs under abiotic stress identified in this study mainly focus on resistance response, iron transport, positive growth regulation, and cell wall organization. Under abiotic stress, the cooperating miRNA regulatory modules of soybean mainly regulate these pathways in response to the stress environment. Figures 3B,C show the correlation analyses specific to drought stress and salt stress.
Subsequently, we constructed the GO BP regulatory network of cooperative miRNAs under soybean abiotic stress for the above main regulatory GO categories and miRNAs, as shown in Figure 3D. Multi-component miRNA families mainly regulate the expression of genes related to abiotic stress responses. For example, the miR167 family regulates the resistance response pathway; the miR171 family regulates the gibberellin biosynthesis pathway; and the miR395 family participates in regulating iron uptake. Moreover, some miRNAs span multiple GO functional partitions, such as miR156b, which regulates developmental growth, the timing of developmental events, the response to hormones, and the response to the heavy metal cadmium. The miRNA families and regulatory pathways involved in the MTRMs are detailed in Supplementary Table 9.

DISCUSSION

miRNAs are major regulators of plant growth and development. They can also regulate environmental responses (Aukerman and Sakai, 2003; Chen, 2004; Zhu and Helliwell, 2011; Khraiwesh et al., 2012; Mao et al., 2013; Turner et al., 2013; Yan et al., 2013; Wong et al., 2014; Wang et al., 2015; Kulcheski et al., 2016). Hence, studying the role of miRNAs is crucial, not only to understand the basic events of plant biology but also to improve breeding for higher yields and more resilient crop plants. While various papers have noted the role of one or a few miRNAs in regulating plant stress responses, a global analysis of their cooperative interactions is lacking. To study miRNA regulation in response to abiotic stress in soybean, we collected a large number of soybean MTIs. In addition, we proposed a multi-level iterative fusion method for soybean MTRMs based on soybean gene networks.
We mined 483 soybean MTRMs, which provide a data reference for analyzing cooperative miRNA mechanisms in soybean. Some MTRMs are involved in the biosynthesis of chalcone, which is derived from the general phenylpropanoid pathway and plays a wide variety of roles in soybean and other plants. In most cases, gene regulation in each MTRM involved a multi-component miRNA gene family, and in some cases these families were predicted to act cooperatively, consistent with the conclusion of Wang et al. (2019). Among the MTRMs we found under abiotic stress in soybean is regulatory module M477, which contains miR396, miR172, and miR1507, among others. Soybean miR396 and miR172 are expressed under drought, and miR396 interacts with growth-regulating factors (GRFs) to regulate plant growth, development, and stress resistance. Liu et al. (2017) showed that 7 gma-miR396s (gma-miR396a/b/c/h/e/i/k) and 20 GmGRFs (GmGRF1/2/6-11/13-24) in soybean form a many-to-many interaction network. Sahito et al. (2017) found that the expression level of soybean NNC1 (Nodule Number Control 1) affects its response to salt stress, while miR172 targets NNC1 and is induced by salt stress. In other plants, the expression of miR396 in rice and Arabidopsis affected plant tolerance under saline-alkali stress (Ning et al., 2019), while miR396 expression in rice was up-regulated under cold conditions (Sun et al., 2019). Expression of the sunflower (Helianthus annuus) HaWRKY6 gene is related to high-temperature stress, and miR396 has a regulatory effect on this gene (Giacomelli et al., 2012). It can be seen that miR396 has important regulatory functions under abiotic stresses such as drought, cold, heat, and salt. Moreover, among the target proteins of regulatory module M477, in addition to enzymes and transcription factors, we also found disease resistance-related proteins such as RPM1, RGA2, and RGA, and heat response-related proteins such as DnaJ and heat shock 70 kDa protein 14. Jiang et al. (2017) found that PWY-6842 was up-regulated in Arabidopsis under both biotic and abiotic stress. This indicates that the regulatory mechanisms of plants under abiotic stress may share common underlying components with biotic stress mechanisms. Recent studies have also shown that under biotic and abiotic stress, plants mount a series of signal regulatory networks, such as those mediated by Ca2+, ABA, and G proteins (Ku et al., 2018), and the same miRNAs are differentially expressed under adversity (Kar and Raichaudhuri, 2021). This suggests that the soybean abiotic stress MTRMs we mined are significant for understanding the regulatory mechanisms of soybean under abiotic stress and the coordinated regulation of miRNAs.
Interestingly, we found that miRNAs from different families can be involved in the same regulated gene clusters, which indicates that different miRNA families may have cross-family cooperative mechanisms in regulating certain functions. Conversely, miRNAs of the same family can appear in different MTRMs; for example, members of the miR172 family (miR172b-5p, miR172h-5p, miR172f, miR172g, miR172j, and miR172k) appear in multiple regulatory modules under drought and salt stress. Such hub miRNAs may be useful research targets for exploring soybean resistance mechanisms and for resistance breeding research under different stresses. After further combining the analysis of differentially expressed soybean genes under the various stresses, we obtained the miRNA-GO regulatory network under abiotic stress. The GO BP terms contain a variety of pathways important for understanding the common mechanisms of the stress response. Research on plant miRNA regulatory modules can analyze the coordination mechanisms of miRNAs from a global perspective and determine the regulatory relationships between modules, which may help in exploring the regulatory mechanisms of soybean miRNAs.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
HZ and DX conceived and designed the study. LS, Q-MQ, GL, XL, THZ, EZ, HYZ, and LW assembled the data. HC and TYZ performed the analyses and wrote the manuscript. HC wrote the modeling code. YL, HC, and GS assisted with interpreting the results. HZ, DX, and GS reviewed and revised the manuscript. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We would like to thank Carla Roberts for thoroughly proofreading this manuscript.
"year": 2022,
"sha1": "a13d47656650b6536b529db6b0b6d3dc083405ad",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a13d47656650b6536b529db6b0b6d3dc083405ad",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Abstract We report the case of an HIV-seronegative 57-year-old man with known classic Kaposi's disease, in whom a secondary localization in the left upper limb led to carpal and metacarpal lysis in the left hand. This unfavorable local evolution led to left transhumeral amputation.
Introduction
Kaposi sarcoma (KS) was first described by the Hungarian dermatologist Moritz Kaposi in 1872. It is a slow-growing rare cancer characterized by generally asymptomatic purplish, pink or blue macular lesions of the skin and mucous membranes. It is defined as an angio-proliferative mesenchymal disorder that affects blood and lymphatic endothelial cells and that is induced by viral growth factors, such as interleukin 6, and human herpesvirus-8 (HHV8) [1].
HHV8 is therefore considered the etiological agent not only of KS but also of other diseases including multicentric Castleman disease and primary effusion lymphoma [2]. KS came to the forefront of public attention during the acquired immunodeficiency syndrome (AIDS) epidemic, when it was commonly found in severely immunodepressed patients co-infected with human immunodeficiency virus (HIV) and the opportunistic HHV8. KS in HIV patients can lead to fatal complications, especially with Kaposi sarcoma immune reconstitution syndrome [3].
In total there are four forms of KS: the classic form (endemic to the Mediterranean basin and Eastern Europe), the African endemic form, the iatrogenic form occurring in immunodepressed patients, and the epidemic form (associated with AIDS), also known as HIV-associated KS. KS occurring in its classic endemic form is referred to as Kaposi's disease, which generally affects the aging population living in the Mediterranean basin.
The dermatological lesions are similar in all these forms but differ in terms of their severity. The diagnosis is obtained by histology of lesion biopsies and immunolabeling of HHV8 or seropositivity for HHV8 antibodies [4]. We report a case of classic Kaposi's disease leading to osteolysis of the bones in the hand.
Case report
A 57-year-old HIV-seronegative patient of Northwest African origin was first diagnosed with classic KS in 2009 upon discovery of erythematous macular lesions on the right leg. A first-line treatment with vinblastine was initiated but stopped 8 months later due to inefficacy and skin toxicity. A change of treatment to doxorubicin was well tolerated and permitted a good initial response. However, a recurrence of the lesions 3 years later required a change in treatment to etoposide, which the patient took for 2 more years and which permitted a good response.
A further recurrence of the lesions occurred 3 years later on the right lower limb but was accompanied this time by the appearance of a lesion on the left upper limb. Treatment with doxorubicin was initiated for 7 months and initially worked well, with a favorable response of the lesions on the left upper limb. However, a worsening of the lesions at this site one year later required new therapeutic options, including radiotherapy, with no success.
An X-ray showed local worsening of the Kaposi's disease leading to osteolysis at the base of the 3rd and 4th metacarpal bones (Figure 1). A bone biopsy and CT scan were performed (Figure 2), showing the evolution of the disease with carpal bone involvement. In view of the successive failed treatment attempts and the unfavorable disease evolution (Figures 3, 4, 5), the decision was made to perform a transhumeral amputation. The computed tomography showed that the tumor had spread in the soft tissue up to the distal third of the forearm, without osteolytic lesions at that level. The skin was red and swollen, with induration up to the elbow. Transhumeral amputation was chosen over elbow disarticulation to better accommodate a future upper-extremity prosthesis. Immunohistochemical investigations found cells positive for ERG (clone EP111) and HHV8 (clone 13B10). Successful resection with healthy margins was confirmed by analyses of the amputated limb part. The amputation stump healed well (Figure 6) and no local recurrence had occurred at the six-month follow-up, but a progression of the disease was found at other sites: sub-diaphragmatic, small intestinal and multifocal bone (Figure 7). The patient experienced no phantom limb pain and was still alive one year after surgery.
Discussion
Kaposi sarcoma (KS) is a malignant vascular tumor characterized by fusiform and dilated mesenchymal cells. The dermatological lesions are characterized by purplish-red erythematous macules that evolve, mainly slowly, towards plaques or nodules. The disease classically first appears in the lower limb, as in the case presented here. It is generally classified into four groups; a new category has been proposed to group together KS occurring in patients negative for HIV but having had homosexual contact [5,6]. Our patient did not report having male partners. The so-called classic form of KS affects the aging population originating in the Mediterranean basin or Central and Eastern Europe. Most reported cases involve patients over 75 years of age, and classic KS affecting patients below 50 years of age is rare, with an incidence of between 4 and 8% [7].
Bone involvement, while rare with a frequency of 4.5% according to Ritz Quillac et al. [8,9], can occur in all forms of KS but is most frequent in classic KS, with a prevalence of up to 1 in 3 patients with Kaposi's disease. It generally arises through contiguity with a soft tissue lesion, as reported in the present case. The bone lesions are generally osteolytic, with complete destruction of the bone.
To our knowledge, our case represents a rare example of classic Kaposi's disease not only affecting the upper limb but also involving osteolysis in the hand. In one review of the literature, Caponetti et al. reported 66 cases of KS with musculoskeletal system involvement [10]. Involvement of the upper limb remains rare, and only a few cases of classic Kaposi's disease with metacarpal involvement have previously been reported [11]. One case of classic KS affecting the hand but with no bone involvement was previously reported in a 68-year-old woman [12]. More recently, Bas et al. reported a case of kaposiform hemangioendothelioma in the upper limb of a newborn baby [13]. Aim et al. reported a patient with isolated KS in the finger with bone involvement but no cutaneous lesion upon clinical examination [14]. One report did concern classic KS leading to osteolysis in the hand that required amputation; however, the patient was over 100 years of age and of Moroccan origin [15]. Classic Kaposi's disease, unlike the endemic form, generally occurs in older patients (more than 75 years of age), and yet our patient developed the disease at an early age.
Involvement of the upper limb in KS, particularly when leading to osteolysis, therefore remains rare, especially in Europe. Amputation remains an efficient solution for local control and sepsis prevention, but drug therapies should be pursued to stop dissemination.
Figure 1. X-ray of the left hand showing complete lysis of the fourth metacarpal bone.
Figure 2. CT Scan with bone and vascular reconstruction.
Figure 3. Lesions of the hand with a lateral view before surgery.
Figure 4. Lesions of the hand with a dorsal view before surgery.
Figure 5. Lesions of the hand with a palmar view before surgery.
Figure 6. Appearance of the amputation stump at 6-month follow-up.
Figure 7. PET-SCAN 6 months after surgery showing sub-diaphragmatic, small intestinal and multifocal bone lesions. There was no recurrence on the upper limb.
"year": 2023,
"sha1": "1665ce66c6932b15c6c827e926f6a3b950d74fb1",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23320885.2023.2251581?needAccess=true&role=button",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "baa377993302da44e750eaf688ab31052970576d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
186202852 | pes2o/s2orc | v3-fos-license | Discussion on the Principle and Reliability Improvement of AC Magnetic Flux Leakage Detection of Steel Rod
In this paper, the principle of AC magnetic flux leakage (referred to as MFL) detection is presented by analyzing how the AC leakage field forms and how it is collected. It is found that the AC excitation frequency is a very important equipment parameter for AC MFL detection of steel rod. A high-frequency magnetizing current not only stabilizes the penetration depth of the AC magnetic field under the rod surface and the leakage field of defects, but also improves the ability to detect defects and accommodates higher detection speeds. It is concluded that the lift-off effect is the fundamental factor affecting the reliability of MFL detection of steel rod, and that there is an inherent signal amplitude deviation caused by the irremovable bouncing of the probe on the rod surface. The factors that aggravate the lift-off effect are then analyzed comprehensively, and corresponding countermeasures are put forward. This provides an effective basis for improving detection reliability and reasonably controlling the quality risk of the rod.
Introduction
The surface finish of hot-rolled steel rod is poor; a better signal-to-noise ratio (referred to as SNR) and greater detection depth can be obtained by using the MFL detection method, which is superior to the eddy current method for this application [1][2][3] 126. At present, two sets of MFL detection equipment are installed on the large-rod and middle-rod detection lines of Shaoguan Baosteel Special Steel Co., Ltd., and more MFL detection equipment will be put into use. The MFL equipment on the large-diameter rod detection line is used to detect steel rods with diameters of Ф70~180 mm; the rod is driven by V-shaped rollers, which keep its center line on the detection axis. The MFL equipment on the small-diameter rod detection line is used to detect steel rods with diameters of Ф20~80 mm; the rod is driven by three-jaw rollers, which likewise center it on the detection axis. To cope with increasing detection tasks, the detection speed has been raised in stages since the equipment was put into use, and problems of detection reliability appeared in each stage of the speed-raising trials. The factors affecting detection reliability are numerous and complex, making them difficult to classify and deal with effectively. In view of this, we began to examine the principle of MFL detection of steel rods, in the course of which we found that the AC magnetization frequency is a very important equipment parameter in MFL detection. Finally, and encouragingly, we found that the lift-off effect is the root cause affecting the reliability of the detection signal. We therefore analyze comprehensively the various factors that aggravate the lift-off effect and put forward corresponding countermeasures to ensure detection reliability and effectively control quality risks.
Mechanism of Leakage Magnetic Field Formation
The workpiece is magnetized by the excitation source (magnetic yoke) during direct current (referred to as DC) MFL detection. Under ideal conditions, all magnetic field lines pass through the workpiece and almost no flux leaks to the workpiece surface. Once there is a defect on or below the surface of the workpiece, the magnetic field near the defect is compressed so that the density of the magnetic field lines at that point increases; the magnetic reluctance near the defect then increases and the magnetic field is distorted. As a result, most of the magnetic field lines bypass the defect inside the workpiece, a small part passes through the defect, and some leave the workpiece surface to form a leakage magnetic field in the air around the defect [4] 41-42 [5] 31-33 [6], as shown in Figure 1. The leakage magnetic field is detected by a magnetic sensor and converted into an electrical signal; after amplification and filtering, the defect signal and an audible/visual alarm are given, and the defect is finally detected [3] 82-87 [6] [7]. The MFL detection method for steel rod is shown in Figure 2. During detection, the steel rod advances along a straight line, while a pair of magnetic yokes and a pair of probes, arranged symmetrically about the rod axis, rotate around the rod to realize a helical sweep of the rod surface. A high-frequency alternating current (referred to as AC) magnetization method is used for MFL detection of steel rods. High-frequency magnetization produces a skin effect [8] [9], concentrating most of the magnetic field in the surface region of the rod, so that the surface region under the magnetic yoke can be magnetized to saturation. Once there is a defect in the surface region of the steel rod, part of the magnetic field near the defect is immediately squeezed out of the rod surface to form a leakage flux [3] 20-21 [4] 12-13 [5] 36-37.
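The depth of this skin effect can be checked with the standard formula δ = 1/√(πfμσ); the sketch below uses textbook-style material constants for carbon steel, which are assumptions, since the effective permeability of a near-saturated rod depends on the working point.

```python
# Skin depth of the AC field in steel: delta = 1/sqrt(pi * f * mu * sigma).
# MU_R and SIGMA are illustrative assumptions, not measured rod properties.
import math

MU0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m
MU_R = 200.0                 # relative permeability (assumed)
SIGMA = 5e6                  # electrical conductivity, S/m (assumed)

def skin_depth(f_hz):
    return 1.0 / math.sqrt(math.pi * f_hz * MU0 * MU_R * SIGMA)

for f in (1e3, 5e3, 1e4):
    print(f"{f:>7.0f} Hz -> {skin_depth(f) * 1e3:.2f} mm")
```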
Acquisition of MFL and Signal Formation
A coil is used as the sensor in AC MFL detection of steel rod, and the defect signal is obtained through electromagnetic induction between the coil and the MFL during detection. Usually, the coil is placed perpendicular to the surface of the steel rod and collects the horizontal component of the defect leakage field [3] 79-80, 95-96 [7] [10-13], as shown in Figure 3. If a narrow crack on the surface of the steel rod is detected, the horizontal component B_t of the spatial distribution of the MFL can be represented by the simple approximate formula (1) [14]:

B_t = (w·B_0/π) · y/(x² + y²) · sin(2πft)    (1)

In the formula: w is the longitudinal width of the crack, m; B_0 is the flux density at the bottom of the defect, Wb/m²; x is the horizontal distance from a point in the MFL to the center of the defect, m; y is the vertical distance from a point in the MFL to the center of the defect, m; f is the AC frequency, Hz; t is the time, s.
In general, there are at least 10 sinusoidal cycles contained in the modulated signal of the AC MFL [3] 126 [15,16] (this determines the upper limit of the sweep speed of AC MFL detection). That is, the defect sweep time that forms the detection signal is at least one order of magnitude longer than one sinusoidal alternating cycle, so the change in coil flux caused by the sweep motion is negligible compared with the change in coil flux caused by the high-frequency alternating field. Assume that the width of the coil is 2b, the length of the coil (the dimension into the page) is l, the number of coil turns is N, and the distance from the center of the coil to the surface of the steel rod is h (the lift-off distance of the coil). Taking the B_t value at the center of the coil as the average leakage flux density through the entire coil, the induced voltage of the coil can be calculated as formula (2) from Faraday's law of electromagnetic induction [3] 74 and formula (1):

e = −4NblwfB_0 · h/(x² + h²) · cos(2πft)    (2)

In the formula: h is the distance from the center of the coil to the surface of the workpiece, m; b is the half-width of the coil, m; l is the coil length, m. From formula (2), it can be seen that when h is held constant, the induced voltage of the coil depends on the position x of the coil within the MFL and reaches a maximum at x = 0. During the test, the coil continuously spirals along the surface of the steel rod; once there is an axial crack in the surface region, the coil is certain to pass through the point x = 0 and obtain the highest induced voltage. A numerical sketch of this profile is given below.
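The following numerical sketch evaluates formula (2) with illustrative parameter values (not equipment specifications), showing that the voltage envelope peaks at x = 0:

```python
# Coil voltage envelope versus coil position x over the crack, formula (2).
import numpy as np

N, b, l = 50, 1e-3, 25e-3        # turns, half-width (m), length (m) - assumed
w, f, B0 = 0.2e-3, 5e3, 0.5      # crack width (m), frequency (Hz), T - assumed
h = 1.5e-3                       # coil-center lift-off (m), so h/b = 1.5

x = np.linspace(-10e-3, 10e-3, 401)
e_amp = 4 * N * b * l * w * f * B0 * h / (x**2 + h**2)   # envelope of e(x, t)
print(f"peak at x = {x[np.argmax(e_amp)] * 1e3:.1f} mm, "
      f"e_max = {e_amp.max():.2f} V")                    # peak at x = 0
```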
It can also be seen from formula (2) that the high AC frequency has the following effects. With the other factors in formula (2) fixed, high-frequency AC magnetization not only stabilizes the skin depth of the magnetic field in the steel rod (the skin depth settles at about 1 mm when the frequency exceeds 1000 Hz [3] 124 [16] [17]) but also effectively raises the induced voltage from the MFL and improves the SNR. At the same time, a higher AC frequency is conducive to better formation of the modulated signal; in other words, defects producing narrower leakage fields can be detected, and correspondingly the detection speed can be increased while still detecting defects of a given size. However, the AC magnetization frequency should not be raised without limit: since the MFL signal frequency equals the AC excitation frequency, the power of the yoke magnetization, the performance of the magnetic core, the hysteresis characteristics of the steel rod, and the size of the defects to be detected in the rolled surface region must all be considered comprehensively. In general, the magnetizing current frequency is designed in the range of 1~10 kHz [5] 126. In addition, for a given detection situation, w, l, N, f, and B_0 in formula (2) are all constants. Therefore, the maximum value of the coil induced voltage can be simplified as formula (3):

e_max = K / (h/b)    (3)

In the formula: K = 4wlNfB_0. The ratio h/b is called the normalized lift-off value, or simply the lift-off value. During detection, any change of h/b produces the lift-off effect and causes the amplitude of the MFL signal to fluctuate.
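A numerical sketch of formula (3), using the same illustrative parameters as the sketch above, shows how the peak voltage falls as the lift-off value grows:

```python
# Lift-off effect per formula (3): e_max = K / (h/b), with K = 4*w*l*N*f*B0.
K = 4 * 0.2e-3 * 25e-3 * 50 * 5e3 * 0.5     # = 2.5 V for the assumed values

for lift in (1.0, 1.5, 2.0, 3.0, 5.0):      # normalized lift-off h/b (>= 1)
    print(f"h/b = {lift:.1f} -> e_max = {K / lift:.2f} V")
```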
It can be seen from formula (3) that, for the MFL of a given defect, the maximum induced voltage of the coil is inversely proportional to the lift-off value of the coil. The detection signals corresponding to different lift-off values h/b are shown in Figure 4. This confirms the statement that "... when the lift-off value exceeds twice the crack width, the strength of the MFL decreases rapidly with increasing lift-off height" [3]. Regarding the relationship between h/b and signal amplitude: in Figure 4, the MFL signal amplitude increases as h/b decreases, and the ideal maximum occurs when the lower side of the coil rests on the rod surface, i.e., h/b = 1. In actual detection, however, the induction coil is encapsulated in a wear-resistant boot (see Figure 5), so the distance between the coil and the rod surface cannot be less than the thickness of the wear-resistant piece; this sets the minimum value of h/b. Over long periods of testing, the wear-resistant piece thins, the minimum h/b becomes smaller, and the detection sensitivity rises, but the noise also rises and the SNR falls; the wear-resistant boot must be replaced when the SNR no longer meets requirements. As for the arrangement of the induction coils: in actual detection, a set of coils is often arranged on each side of the probe substrate to obtain higher sensitivity and SNR, and the two sets of coils are connected differentially to form a probe channel. The modulated signal obtained from the coil channel is demodulated and rectified for unidirectional display by a bridge rectifier. Generally, the length of a single coil should not exceed 30 mm to ensure sensitivity and SNR. On the other hand, multi-channel coils are often arranged side by side to increase the scanning width and improve scanning efficiency [3] 80 [6,12,13,15], but the total length of the coils arranged side by side must not exceed the length of the magnetic yoke.
Surface Roughness of Steel Rods
The probe consists of a wear-resistant boot with an induction coil inside. Ideally (when the surface of the steel rod is smooth, without longitudinal bending, and the probe receives uniform push pressure from its spring), the wear-resistant boot slides smoothly against the surface of the steel rod during detection. The coil then maintains a constant lift-off value h/b relative to the rod surface, and when the detection parameters are unchanged, repeated scans of the same defect produce consistent signal amplitudes. In actual testing, however, the probe is subjected to the combined effects of spring pressure, rotational centrifugal force, and the support force of the rough rod surface; the wear-resistant boot bounces rapidly on the surface of the rod with a clattering sound, as shown in Figure 5. The induction coil bounces with the wear-resistant boot, so the lift-off value h/b can hardly remain constant while the coil scans a defect. The result is that the signal amplitude obtained from each scan of the same defect changes, which creates uncertainty about whether a defect will be detected effectively. The bouncing of the induction coil also introduces vibration noise, reducing the detection SNR. Once violent probe bouncing occurs during MFL detection of a steel rod, defects may be missed or falsely reported, even if the conveyor rollers and the device are well aligned and the straightness and roundness of the rod fully conform to the standard. Since the probe bounce is difficult to eliminate for the reasons described above, the deviation of detection results caused by it is often referred to as the inherent deviation of the device. The characteristic of this inherent deviation is that the signal amplitude fluctuates within a certain range, and the range of this fluctuation depends on the amplitude of the probe bounce. A false alarm or missed detection occurs easily when the reference artificial defect signal lies just at the alarm gate, but large defects will not be missed. In general, appropriate sensitivity compensation is added to the calibration sensitivity before testing to eliminate the risk of missed defects.
The amplitude and span of the probe's bouncing are related to the probe rotation speed: they increase as the rotation speed increases, aggravating the lift-off effect, which shows up as poorer detection accuracy and stability. Under the constraint that the rod surface must be fully covered, the required probe rotation speed is determined by the rod diameter and the detection speed. In addition, the scanning speed of the probe on the rod surface should not be too high. Therefore, testing procedures should be developed for different rod diameters to limit the indirect effects of scanning speed; for example, the detection speed should be reduced appropriately for large-diameter rods. A coverage calculation is sketched below.
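The following is a minimal sketch of the coverage constraint, assuming two diametrically opposed probes and an effective axial coverage width per probe per revolution; all numbers are illustrative, not equipment specifications.

```python
# Helical-scan coverage: the rod may advance at most
# (number of probes) x (coverage width) per probe revolution.

def max_feed_speed(rpm, coverage_mm, probes=2):
    """Maximum rod feed speed (mm/s) that still gives full coverage."""
    return rpm / 60.0 * probes * coverage_mm

def required_rpm(feed_mm_s, coverage_mm, probes=2):
    """Probe rotation speed needed for a given rod feed speed."""
    return feed_mm_s * 60.0 / (probes * coverage_mm)

print(max_feed_speed(rpm=300, coverage_mm=25))      # -> 250.0 mm/s
print(required_rpm(feed_mm_s=500, coverage_mm=25))  # -> 600.0 rpm
```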
The Concentric Degree of Equipment
The concentric degree here refers to the coaxiality between the probe's center of rotation and the central axis of the advancing steel rod. The reliability of MFL detection is seriously affected by the concentricity of the equipment. Figure 6a shows the scanning signal with uniform circumferential sensitivity (in Figure 6, the horizontal direction indicates the length of the sample rod, the vertical direction indicates the amplitude of the scanning signal, and the regional color difference represents the amplitude range used to evaluate the severity of the signal; Figures 8 and 9 follow the same convention). Many factors affect the coaxiality. First, the transmission rollers may position the rod axis inconsistently with the probe's center of rotation, making the circumferential sensitivity difference too large, as shown in Figure 6b. In this case, false alarms are likely on the side of the probe closer to the rod, and missed detections are likely on the side farther from it. Second, poor straightness of the roller-channel path causes the rod to vibrate during transport, producing an instantaneous lift-off effect that worsens as the path straightness deteriorates and the transport speed increases, raising the chance of missed detections and false alarms. Third, after long operation the centering rollers and the V-type transmission rollers before and after the main unit wear, degrading the coaxiality of the equipment and causing the rod to shake during transport. Based on these causes, specific maintenance measures are taken to improve the coaxiality of the equipment and the transmission rollers: the coaxiality of the roller channel relative to the probe's center of rotation is adjusted regularly, the straightness of the roller-channel transmission path is calibrated regularly, the wear of the transmission and axial-setting rollers is checked in due course, and badly worn rollers are replaced in time.
Symmetry of Rotating Probes
When the magnetic yoke supports on the two sides are asymmetric relative to the rotation center, the pressure of the wear-resistant boots on the rod surface is uneven during detection, which increases the difference in bouncing amplitude between the probes on the two sides and hence the difference in their sensitivity. This difference grows with the scanning speed. A similar result comes from a difference in thrust between the two probe springs. Both seriously affect the stability of equipment calibration.
The way to overcome this is to adjust the magnetic yoke positions to reduce the asymmetry between the two sides, and to check that the spring thrust of the two probes is the same. In the rotating body of the flux-leakage machine on the large-diameter rod detection line, the two magnetic yoke supports can be adjusted separately, so the probe asymmetry is easy to correct. In the rotating body of the machine on the small-diameter rod detection line, the two magnetic yoke supports must be adjusted synchronously, and the rollers must be adjusted in coordination to eliminate the asymmetry between the yoke supports. Normally the wear-resistant boot is parallel to the axis of rotation and points perpendicularly toward the rotation center; if the boot is assembled or operated otherwise, for example deflected or tilted relative to the rotation axis, the lift-off values become uneven and the sensitivity of the coils on the two sides of the probe becomes inconsistent, as does the sensitivity of the different coil channels within one probe set. This can be seen from the wear condition of the boots in Figure 7: Figure 7a shows the normal wear state, while Figure 7b shows an abnormal wear state.
The countermeasures for these anomalies are to check the wear condition of the wear-resistant boots frequently, to adjust the positioning accuracy of the boots according to the sensitivity differences between channels, and to perform static balance calibration for all coil channels of the probe.
Bending of Steel Rods
The bending of a steel rod includes bending of the head and/or tail and bending of the central axis. A coaxial deviation occurs when a locally bent part of the rod passes through the rotating body of the MFL detection equipment: the detection sensitivity on the convex side of the rod is too high, with a large probability of false alarms, while the concave side shows the opposite behavior. Bending of the head and tail has a greater impact, because it not only leads to false alarm signals (as shown in Figure 8) but also strikes the three-jaw rollers, loosening or damaging them through the eccentric entry and exit of the rod. The resulting unsteady clamping of the rod by the three-jaw rollers intensifies the vibration of the rod. Excessive bending of the head and tail may also damage the rotating wear-resistant boots. The solution is to install a strict "watchdog" guide in front of the device to keep over-bent rods from entering, and to tighten the three-jaw rollers before and after the device appropriately to reduce rod vibration. When a small-diameter rod (usually less than Ф30 mm) is inspected, its lack of rigidity makes head and tail swing easy to produce, and the swing becomes more serious when the three-jaw rollers are loose: when the head of the rod enters the detection device, only the roller at the entrance holds it, the roller at the exit not yet engaged; likewise, when the tail leaves the device, only the roller at the exit holds it, the roller at the entrance having already released. At these moments the head and tail of the rod swing freely, resulting in missed defects or false alarms. The remedy is to install an entry guide sleeve (for rods less than Ф28 mm) and to tighten the three-jaw rollers within the sleeve.
Oxides on the Surface of Steel Rods
The distribution of the MFL around the probe changes when oxide is adsorbed on the surface of the steel rod or on the surface of the wear-resistant boots, so the flux through the probe coil also changes; the effect is similar to a change in coil lift-off. In general, oxide scale is easily left on the surface of a rough steel rod, and iron oxide powder readily lodges in the pits and grooves of such a surface. Dust and oxide powder also attach easily to the magnetic yoke and the wear-resistant boots under the following conditions: vibration of the rod during transport, clamping by the three-jaw rollers before and after the host, impacts of the boots on the rod surface, and the magnetic field existing between the yoke and the rod. Because the leakage flux is caused mainly by the rough rod surface, and because concentrated oxide powder has a polymerizing effect on the weak magnetic field around the boots [3,6], oxide powder of high magnetic permeability is easily drawn in between the yoke and the boots (as shown in Figure 9a) or into cracks in damaged boots (as shown in Figure 9b), forming alternating magnetic poles. The result is disorderly noise, poor SNR, and even serious false alarms, as shown in Figure 9c. (In Figure 9c, the horizontal position of the small points in the lower half corresponds to the axial position and length of the defects on the bar, the vertical position represents the circumferential position on the rod section where the defects lie, and the yellow line is the time coordinate of the manual click signal.) The countermeasures are to improve the surface finish of the rod during straightening, to use compressed air and an electric brush to remove residual oxides from the rod surface, to purge and clean the detection device in a timely manner, and to find and replace damaged wear-resistant boots promptly.
Conclusion
1) This paper presented the principle of AC MFL detection of bars and found that the AC excitation frequency is a very important equipment parameter for MFL detection of steel rods. A high-frequency excitation current produces a high-frequency alternating magnetic field, which, through the skin effect in the steel rod, magnetizes the surface layer under the magnetic yoke to saturation and favors the formation of a stable leakage field corresponding to a defect. At the same time, the high frequency of the leakage flux effectively raises the voltage induced in the coil sweeping over the leakage field, improving both the SNR and the ability to detect defects. A high-frequency AC excitation current is also conducive to detecting the narrower leakage fields corresponding to small defects, so that identifiable defects can be detected at higher speed.
2) There is an inherent deviation of signal amplitude in the dynamic MFL detection of rods, caused by the dynamic lift-off effect between the probe and the rod surface, which cannot be eliminated.
3) The lift-off effect is the most serious factor affecting the reliability of AC MFL detection of steel rods. The factors that cause the lift-off effect and its dynamic variation include a rough rod surface; poor coaxiality between the equipment, the conveying rollers and the clamping rollers; asymmetry of the rotating probes (in both distance and position); differences in spring thrust; and bending of the rod. Oxides adhering to the surface have a similar effect. In actual testing these factors usually act in combination, so the reliability of the results is affected by their joint influence. Analyzing these combined factors in the testing process and taking measures to minimize their negative impact maximizes the capability of the MFL detection equipment, which is of great significance for ensuring the quality of delivered steel rods. | 2019-06-26T14:51:42.632Z | 2019-06-04T00:00:00.000 | {
"year": 2019,
"sha1": "1bb05290529f902d2c5f3a4e4cd8b04003a7c8bc",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajpa.20190702.13.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c106d9f1aaf63ff1097a18f25755fabb6d53b8e4",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
21314925 | pes2o/s2orc | v3-fos-license | Unusual Sphaerophorus species from the large intestine of man.
An obligately anaerobic, gram-negative microorganism identified as a Sphaerophorus species was recovered from the fecal material of two cancer (chronic myelogenous leukemia and idiopathic thrombocythemia) patients receiving cobalt radiation therapy. The organism, isolated on sheep blood-agar, exhibited extreme pleomorphism (rods, filaments, and spheroids) and was a major component of the anaerobic fecal microflora. In one patient the numbers of Sphaerophorus species (designated as isolate 6-13-68), Bacteroides species, and Clostridium perfringens declined after irradiation; however, they were stable in this same patient after a second therapeutic dose of radiation. The numbers of anaerobes in the other patient remained fairly consistent after radiation. The biochemical and morphological characteristics and carbohydrate fermentation reactions of isolate 6-13-68 most closely resembled those of Sphaerophorus ridiculosis.
The predominant components of the fecal microflora of normal humans are the anaerobic, nonsporeforming gram-negative bacilli of the family Bacteroidaceae (1,2,4,9). This heterogeneous group is composed almost entirely of members of the genus Bacteroides, with members of Sphaerophorus and Fusobacterium present to a lesser degree. The gram-negative anaerobes may sometimes outnumber the coliforms by 10 to 1000. In this study, we determined the levels of Sphaerophorus, designated as isolate 6-13-68, and other anaerobic microorganisms in the feces of two patients before, and for several weeks after, cobalt radiation therapy.
MATERIALS AND METHODS
Culture methods for fecal material. A weighed sample of feces (0.5 to 1.5 g) was placed in 100 ml of sterile NaCl (1%, w/v) and agitated until uniformly suspended. This initial suspension, treated as a 1:100 dilution, was further diluted with sterile NaCl to 10^-8. A portion (0.1 ml) was spread on dry agar plates with a bent glass rod; the plates were immediately placed in anaerobe jars and incubated for 3 to 7 days at 37 C.
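The plate-count arithmetic behind the viable counts reported below can be written out explicitly (the colony count here is invented for illustration; total_dilution is the overall dilution of the weighed sample, i.e. the initial 1:100 suspension carried further):

def cfu_per_gram(colonies, total_dilution, plated_ml=0.1):
    # CFU/g = colonies / (volume plated * overall dilution of the sample)
    return colonies / (plated_ml * total_dilution)

print(f"{cfu_per_gram(46, 1e-8):.1e} CFU per gram")   # 4.6e+10 CFU/g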
Bacteria. Sphaerophorus freundii (9817) and S. varius (8501) were obtained from the American Type Culture Collection. We isolated strain 6-13-68 from both patients. The isolates from both patients possessed identical morphological and biochemical properties. Cultures were maintained in thioglycollate broth (BBL) and transferred every 7 days.
Anaerobiosis. We used Gaspak anaerobe jars (BBL) with an atmosphere of 90% hydrogen and 10% carbon dioxide. Methylene blue indicators (BBL) were always used to ensure removal of oxygen from the jars.
Media for characterization. Blood-agar plates with 10% defibrinated sheep blood (BBL) were used to demonstrate hemolysis. We studied carbohydrate fermentation in basal thioglycollate broth containing (grams per liter): Trypticase, 15; Phytone, 3; sodium thioglycollate, 0.5; cysteine, 0.125; agar, 0.70; and fermentable substrate, 10. To measure acid production, we compared the change in pH after 72 hr of incubation of thioglycollate broth cultures with and without the test substrate. Indole production and nitrate reduction were determined in Indole-Nitrite medium (BBL), hydrogen sulfide production in SIM medium (BBL), and gas and odor production in thioglycollate broth. For more reduced conditions, all liquid media contained sodium thioglycollate (BBL) at a concentration of 0.05% (w/v), and were boiled and cooled just before use.
To detect hemagglutinins in fresh isolates, we mixed equal amounts of microorganisms and sheep erythrocytes (2% in normal saline) on glass slides.
The organisms were grown for 24 hr on sheep bloodagar plates, removed with sterile cotton swabs, and suspended in NaCl (1%, w/v) before testing.
RESULTS
We isolated 6-13-68 from patient A (Table 1) in concentrations of 10^8 to 10^10 viable cells per gram of feces (dry weight). Throughout the 4-week study, the numbers of isolate 6-13-68 and Bacteroides were fairly consistent; three samples (days -4, +4, and +10) of feces did not contain isolate 6-13-68 at a 10^-4 dilution (the lowest dilution plated). Isolate 6-13-68 and Bacteroides species were his predominant fecal bacteria. Clostridium perfringens was also present, but in lower numbers than the gram-negative anaerobes.
Patient B, with a predominance of isolate 6-13-68 in his fecal microflora, received therapeutic total-body irradiation (TBI) on two occasions. In June 1968, he was exposed to 150-r TBI (1.5 r/hr). The numbers of anaerobic microorganisms isolated before, during, and after the first total-body exposure of patient B are shown in Table 2. Bacteroides species and isolate 6-13-68 were consistently recovered before, and for about 2 weeks after, irradiation; however, on day 18, the numbers of Bacteroides and isolate 6-13-68 dropped below 10^4 per gram and were not recovered for 4 weeks. C. perfringens was also present in high numbers and disappeared at the same time. Ten months later, this patient was irradiated a second time with 150-r TBI at 1.5 r/hr, and 6-13-68 was again a predominant component of the anaerobic microflora at concentrations of 10^6 to 10^9 (Table 3). Isolate 6-13-68 and Bacteroides were isolated more consistently (only two samples were negative) during the second study than during the first. Although not isolated in consistently high numbers, C. perfringens was also present in the feces of patient B.
Colonies of isolate 6-13-68 on sheep blood-agar have a fried-egg shape (Fig. 1). They are circular, undulate, brownish-gray, possess a slight metallic gray sheen in the raised central portion of the colony, and are 3 to 6 mm in diameter after 96 hr of incubation at 37 C.
The morphology of isolate 6-13-68 is shown in Fig. 2. Short bacilli, long bent bacilli, long bacilli with blebs or swellings, and large spheroids were all present in the same thioglycollate broth culture grown for 24 hr at 37 C. After the organism had been subcultured repeatedly, it grew as regular bacilli (3 to 6 µm long).
The physiological characteristics of isolate 6-13-68 were compared with those of S. varius (ATCC 8501) and S. freundii (ATCC 9817), which were obtained from the American Type Culture Collection (Table 4). The data for S. ridiculosis were taken from Prevot (7). All four are found in the intestinal tract of man, produce gas, foul odor, and hydrogen sulfide, and are extremely pleomorphic. Only S. varius produced indole and only S. freundii reduced nitrate to nitrite. Colonies of isolate 6-13-68 were beta hemolytic on sheep blood-agar only after they were exposed to the atmosphere for 72 hr at room temperature. Fresh isolates of 6-13-68 also possess a hemagglutinin for sheep erythrocytes.
The capacity of some Sphaerophorus species to ferment sugars, as determined by acid production from various carbohydrates, is shown in Table 5. The data for S. ridiculosis were taken from Prevot (7). Sphaerophorus varius fermented only glucose and fructose; isolate 6-13-68 and S. freundii produced acid from several carbohydrates (Table 5).
Isolate 6-13-68 did not produce acid from sucrose but S. freundii did; isolate 6-13-68 fermented maltose and S. freundii did not. Sphaerophorus ridiculosis and isolate 6-13-68 differ only in acid production from sucrose, salicin, melibiose, and raffinose. The latter three carbohydrates were either negative or not tested by Prevot (6) when he characterized S. ridiculosis.
DISCUSSION
The predominant microorganisms found in the large bowel of normal humans are the gram-negative, anaerobic, nonsporeforming bacilli of the family Bacteroidaceae. Zubrzycki and Spaulding (9) showed that Bacteroides may outnumber coliforms by 100- or 1,000-fold in the feces of normal adults.
In two patients reported here, Bacteroides and Sphaerophorus were consistently present in feces. However, the Sphaerophorus population in the feces of patient B declined drastically after his first exposure to therapeutic TBI (150 r at 1.5 r/hr). The irradiation may have had some effect on the anaerobic flora, as the patient received no antibiotics or diet change during the study. This type of drastic change in the anaerobic population is the only one we have noticed in our microfloral studies of irradiated cancer patients. One year later the numbers of Bacteroides and Sphaerophorus in the same patient, after a similar exposure to TBI, were consistently high for 7 weeks.
The morphological and biochemical properties of the two Sphaerophorus isolates did not change during this 1-year period.
Although others have isolated Sphaerophorus species from the large bowel of man, we believe this is the first report of their presence in the feces over a long time. The type of Sphaerophorus isolated in this study has more than likely been isolated from the large bowel of man before; however, owing to the lack of adequate methods for characterizing the gram-negative anaerobes, such isolates were probably lumped into the Bacteroides species. Moore et al. (5) and Smith (5) have placed Sphaerophorus and Fusobacterium strains into one genus, Fusobacterium, on the basis of butyric acid production from peptone or glucose. Except for a few differences in carbohydrate fermentation, our isolate most closely resembles Fusobacterium ridiculosum (5). However, on the basis of guanine-cytosine ratios, as determined in W. E. C. Moore's laboratory, isolate 6-13-68 resembles Fusobacterium mortiferum, which is the same as Sphaerophorus freundii, ATCC strain 9817 (W. E. C. Moore, personal communication). On comparing S. freundii with isolate 6-13-68, we found differences in hemolysis, hemagglutination, nitrate reduction (Table 4), and acid production from maltose and sucrose (Table 5). Prevot (7) described an S. mortiferus, an obligate serophile, which produces acid from sucrose, mannitol, and sorbitol. Isolate 6-13-68 grows without blood or serum and fails to produce acid from sucrose, mannitol, and sorbitol.
"year": 1970,
"sha1": "dc50de3dfd00daa738cde9311bcfa0e880fc63f8",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/aem.19.3.458-462.1970",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "882610937c57200c9a8a982d3a33900ee7395ee5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247972185 | pes2o/s2orc | v3-fos-license | Magnetic Drug Delivery System: New Hope for Cancer Patients
1Department of Pharmacology, MET’s Institute Pharmacy, Bhujbal Knowledge City, Adgaon, Nashik, Maharashtra, India, 422003. 2Department of Quality Assurance Techniques, MET’s Institute Pharmacy, Bhujbal Knowledge City, Adgaon, Nashik, Maharashtra, India, 422003. 3Department of Pharmaceutical Chemistry, MET’s Institute Pharmacy, Bhujbal Knowledge City, Adgaon, Nashik, Maharashtra, India, 422003. 4Department of Pharmaceutics, MET’s Institute Pharmacy, Bhujbal Knowledge City, Adgaon, Nashik, Maharashtra, India, 422003.
In a conventional drug delivery system for cancer treatment, the anti-cancer drug is given intravenously and may accumulate in cancerous tissue, which contains a large number of leaky blood vessels; however, the drug also acts on healthy tissue and produces many side effects, which is the biggest disadvantage of conventional delivery. Some nanomedicines show high drug accumulation at the tumour site owing to the enhanced permeability and retention (EPR) effect, which exploits increased vascular permeability, but the EPR effect varies from person to person and with disease condition, so it is insufficient to match the challenging and complicated tumour microenvironment. In a magnetic drug delivery system, a drug carrier with a magnetic moment can be steered and held so that it acts only on cancerous cells; the drug then spares healthy tissue, improving efficacy and reducing the required dose (Fig 1). Magnetic drug delivery systems can be used in the treatment of cancers, nervous system disorders, sudden sensorineural hearing loss, gene therapy, etc.
A magnetically modulated drug is also a good agent for magnetic resonance imaging (MRI), offering the advantage of diagnosis and treatment of disease by a single agent (Table 1); such systems are also used in molecular biology, cell isolation and purification, hyperthermia, and radioimmunoassay 1 . Apart from its advantages, MDDS has several disadvantages: it is a high-cost technique, specialized magnets are required for targeting, trained personnel are needed to administer the therapy, and it cannot easily reach cancerous cells situated in organs deep inside the body, although recent advancements implant a magnet near the targeted area to overcome this problem 2 . The aim of this work is to bring focus to magnetic drug delivery systems; during the last few decades they have been a popular approach for site-specific targeting of various pharmacological agents, avoiding the reticuloendothelial system and accurately delivering medications to the target with the help of a magnetic field.
History
The magnetic drug delivery system is a comparatively new technique. Gilchrist, in a seminal 1956 publication, explained that after injection of magnetism-inducing particles into lymph nodes near a surgically removed tumour, inductive heating of the lymph nodes takes place. In 1963 Meyers explained how small iron particles were successfully guided and held in dog leg veins with the help of a horseshoe magnet 1
Magnetic nanoparticle
Particles smaller than 1 micrometer that can be manipulated using a magnetic field are called magnetic nanoparticles. Magnetic nanoparticles have many advantages: high stability, high carrier capacity, the ability to incorporate both hydrophilic and hydrophobic drugs, and the possibility of controlled release, which increases bioavailability and reduces dosing frequency 4 . Magnetic nanoparticles also have a low sedimentation rate and higher tissue diffusion 5 , and they possess no immunogenicity or virulence 6 .
Bulk magnetic material generally has a multidomain structure, but nano-sized magnetic particles have a single-domain structure and their magnetic behaviour changes to a superparamagnetic one 7 . Because of their nano size, magnetic nanoparticles therefore show superparamagnetism, which allows easy targeting to cancerous cells.
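As a rough illustration of why such particles can be steered at all, the force on a single saturated particle in a field gradient, F = Ms*V*dB/dz, can be compared with its weight (all numbers here are textbook-order assumptions, not values from this review; in flowing blood the relevant competing force is viscous drag rather than gravity):

import math

Ms = 4.8e5     # A/m, approximate saturation magnetization of magnetite
r = 50e-9      # m, particle radius (100 nm diameter, assumed)
dBdz = 10.0    # T/m, field gradient near a strong permanent magnet (assumed)
rho = 5.2e3    # kg/m^3, approximate density of magnetite

V = 4 / 3 * math.pi * r**3
F_mag = Ms * V * dBdz          # N, magnetic force on the saturated particle
F_grav = rho * V * 9.81        # N, weight of the particle
print(f"F_mag = {F_mag:.2e} N, about {F_mag / F_grav:.0f}x the particle's weight")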
Corona formation 8 , i.e. the adsorption of proteins and lipids onto nanoparticle surfaces when they come into contact with biological fluid, is a limitation of magnetic nanoparticles 9 . Magnetic nanoparticles are generally composed of 1) a magnetic core, 2) a coating, and 3) functional groups on the surface; by use of a suitable surfactant the nanoparticle surface is functionalized and made hydrophilic 10 . Coating is done to prevent aggregation and to prevent interaction with the surrounding environment. Nanoparticles containing the metal oxides Fe2O3 and Fe3O4 are the most used (Table 2). Magnetic nanoparticles can be prepared by ionic and non-ionic methods 11
Preparation of magnetic nanoparticles
Green synthesis of magnetic nanoparticles
Green nanotechnology is very efficient, as it helps to reduce or eliminate toxic substances and restore the environment. Synthesis of nanoparticles from plants is currently under development. Green synthesis of magnetic nanoparticles is a safe, non-toxic, and environmentally friendly process. Inactivated plant tissue, plant exudates, plant extracts, and other parts of living plants are used in green synthesis for the production of magnetic nanoparticles. Biological methods employing micro-organisms, fungi, and enzymes can also be used, but preparation from plants or plant extracts is preferred because it eliminates the elaborate work involved in maintaining microbial cultures.
Awwad A. M. and Salem N. M. recommended a single-step green-synthesis method for magnetic nanoparticles. Carob leaf extract, ferric chloride tetrahydrate, ferric chloride hexahydrate, and sodium hydroxide were used in the experiment. By this process, magnetic nanoparticles with a size of 4 to 8 nm and good monodispersibility are obtained at a low temperature of about 80-85°C.
Precipitation from solution
Precipitating the product from solution is one of the oldest methods of nanoparticle preparation. In this method, metal precursors are dissolved in a solvent such as water, and an insoluble solid is generated by addition of a precipitating agent. Uniform particles are synthesized by a homogeneous precipitation reaction.
Co-precipitation
This is the most widely used method for the synthesis of magnetic nanoparticles. Aqueous salt solutions are used, with base added under an inert atmosphere at room or elevated temperature. Spherical magnetic nanoparticles are synthesized in solution by two approaches: partially oxidizing an Fe(OH)2 suspension with different oxidizing agents, or aging a stoichiometric mixture of ferric and ferrous hydroxides in aqueous media, which forms spherical homogeneous magnetic nanoparticles. The salt used, the pH value, the ferric-to-ferrous ion ratio, the reaction temperature, and the ionic strength of the medium decide the size and shape of the nanoparticles.
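A hedged worked example of the classical 2:1 Fe(III):Fe(II) stoichiometry used in this route (the batch size and reagent choice are assumptions, not details from this review): 2 Fe3+ + Fe2+ + 8 OH- -> Fe3O4 + 4 H2O, so the reagent masses for a target mass of magnetite follow directly from the molar masses.

M_FeCl3_6H2O = 270.30   # g/mol, FeCl3.6H2O
M_FeCl2_4H2O = 198.81   # g/mol, FeCl2.4H2O
M_Fe3O4 = 231.53        # g/mol, Fe3O4

target_g = 1.0                      # grams of Fe3O4 wanted (assumed)
n = target_g / M_Fe3O4              # mol of Fe3O4
print(f"FeCl3.6H2O: {2 * n * M_FeCl3_6H2O:.2f} g")   # 2 mol Fe(III) per mol Fe3O4
print(f"FeCl2.4H2O: {n * M_FeCl2_4H2O:.2f} g")       # 1 mol Fe(II) per mol Fe3O4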
Microemulsion
A microemulsion is an isotropic, thermodynamically stable, transparent dispersion of two immiscible liquids, such as oil and water, stabilized by a surfactant. Surfactants form a monolayer at the water-oil interface, with the hydrophobic tails in the oil phase and the hydrophilic heads in the aqueous phase. The advantages of the microemulsion technique are the simple equipment, the large variety of materials that can be synthesized, and the control over particle size and composition. Nanoparticles produced by this technique are small with high saturation magnetization, and the structure and type of surfactant decide their properties.
Polyol method
The polyol method is used for the synthesis of uniform-size nanoparticles that can be used in magnetic resonance imaging (MRI). In this method, fine metallic particles are synthesized by reducing dissolved metallic salts and precipitating the metal from a solution containing a polyol.
Thermal decomposition of organic precursors
Samples with good size control, good crystallinity, and narrow size distribution have been obtained by decomposing iron (Fe) precursors in the presence of hot organic surfactants. Thermal decomposition synthesizes nanoparticles with a high level of monodispersity.
Hydrothermal method
The hydrothermal method, also called the solvothermal method, can synthesize magnetic nanoparticles and ultrafine powders and is one of the most successful methods for growing crystals of different materials.
Chemical vapour deposition
Chemical vapour deposition can produce a wide range of materials and benefits from the large database of chemistries developed for this process. The particle size distribution of the nanoparticles can be controlled by controlling the mixing of cold gas with the hot gas carrying the evaporated material.
Spray pyrolysis
In spray pyrolysis, a solid is prepared by spraying a solution into reactors where the solvent evaporates from the aerosol droplets and the solute condenses within each droplet, after which the particles are dried and thermolyzed at high temperature. Most spray pyrolysis processes generate maghemite nanoparticles starting from an Fe3+ salt, with an organic compound acting as the reducing agent.
Laser pyrolysis
Laser pyrolysis uses laser energy for the synthesis of nanoparticles. Compared with heating gases in a furnace, this method allows highly localized heating and rapid cooling. A flowing mixture of gases is heated with a carbon dioxide laser, which initiates the chemical reaction.
Sonochemical reaction
Novel materials with unusual properties are synthesized by the sonochemical reaction, which uses ultrasound energy. Nanopowder prepared by this method is generally amorphous, porous, and agglomerated 12 (Fig 2).
Applications of magnetic nanoparticles
a. Magnetic nanoparticles are used as carriers in conjugation with tetrahedral antibodies and chemotherapeutic agents 13 .
b. Antibody-linked fluorescent MNPs are used for targeted imaging and treatment of GIT cancer 14 .
c. MNPs can be used in hyperthermia and magnetofection.
d. MNPs can replace fluorescent and optical labels in biosensors 15 .
e. For DNA absorption, MNPs coated with meso-2,3-dimercaptosuccinic acid, which carries carboxylic acid groups, are used 16 .
f. For Escherichia coli detection, nanoparticles with an iron oxide-gold core-shell are used 17 .
g. Human chorionic gonadotropin can be detected by an SPR sensor chip combining MNPs with antibodies.
h. The fluorescence signal can be increased by nanoparticles 18 .
i. MNPs can be used as vectors for gene transport 19 .
j. MNPs can be used in tumour thermotherapy, as they produce a thermal effect in a changing magnetic field 6 .
Microsphere
Magnetic microspheres are biodegradable protein or polymer particles of 1-100 micrometer size 20 . They are small enough to circulate through capillaries without causing embolic occlusion, yet can be entrapped in microvessels and dragged into adjacent tissue by a magnetic field 21,22 . Without a magnetic carrier, a major portion of the drug reaches the reticuloendothelial system (RES) organs, whereas with a magnetic carrier the whole dose reaches the targeted tissue (Fig 3) 23 . Magnetic microspheres can be used for the controlled release of drugs, antibodies, vaccines, hormones, etc.; Lupron Depot® and Nutropin Depot® are products based on polymeric microspheres 22 . The rate and amount of drug release can be controlled by altering the microsphere size, the drug content, the magnetite content, the hydration state, and the drug release characteristics of the carrier 23 .
Loaded magnetic microspheres are injected into a blood vessel using an 18- or 16-gauge needle, guided by the magnet, and within a very short period they gather at the targeted site, where they emit radiation to kill cancerous cells. There are two types of microsphere: therapeutic microspheres, used for the treatment of disease, and diagnostic microspheres, used for diagnosis; for example, microspheres used for imaging liver metastases can distinguish bowel loops and other abdominal structures 24
Various methods of preparation are:
a. Solvent evaporation
The solvent evaporation technique is used to prepare polymer-encapsulated microspheres. An auxiliary solution is prepared by adding the drug, magnetite, and polymer to a volatile organic solvent. The resulting solution is homogenized and stirred at 22-30°C, forming magnetic microspheres that are separated by centrifugation and stored at 4°C 25,26 .
b. Multiple emulsion method
In this method, a w/o/w emulsion is formulated. An aqueous protein solution containing the active ingredient is dispersed in the lipophilic phase; the polymer solution is generally the continuous phase that encapsulates the protein present in the dispersed aqueous phase. Before being added to an aqueous solution of polyvinyl alcohol, the primary emulsion is subjected to homogenization or sonication to form the multiple emulsion, and then the solvent is evaporated to form magnetic microspheres 27 .
c. Phase separation emulsion polymerization
An aqueous solution of polymer, drug, and magnetite is added to vegetable oil, and emulsification is done by stirring; heating at 100-150°C stabilizes the emulsion, and then, with continuous stirring, a cross-linking agent is added, forming magnetic microspheres that are separated by washing 25 .
d. Dispersion co-polymerization
This method involves the reaction of monomers at the interface between two immiscible liquid phases to form a polymer film that envelopes the dispersed phase. Two monomers are used: one dissolved in the continuous phase and the other dispersed in it. For example, dispersion co-polymerization of styrene and polyethylene oxide vinyl benzyl (PEO-VB) forms amphiphilic magnetic microspheres in the size range 5-100 micrometer 25 .
e. Hot melt microencapsulation
In hot melt microencapsulation, the polymer is first heated and then added to solid drug particles; this mixture is suspended in an immiscible solvent with continuous stirring, and heating is carried out at a temperature 5°C above the melting point of the polymer. The polymer particles solidify on cooling, forming microspheres that are then washed with petroleum ether 26 .
f. Microwave-assisted preparation of magnetic albumin microsphere
This method is faster than the traditional methods and produces comparatively small particles. It is mostly used for the preparation of magnetic protein microspheres 28 .
Factors that influence properties of the microsphere
Choice of solvent
The solvent should be chosen such that it dissolves the selected polymer, is sufficiently volatile, and is of low toxicity.
Antifoaming agent
Foaming is one of the major problems that can disturb the formation of microspheres; antifoaming agents such as dimethicone and Spans are used to avoid it.
Surfactant
Surfactants stabilize the emulsion by reducing surface tension and preventing coalescence and agglomeration. Methylcellulose, Tweens, Spans, sodium dodecyl sulphate, etc. are used as surfactants.
Applications of magnetic microspheres
a. Magnetic microspheres are used in enzyme immobilization, cell isolation, and protein purification 25,29 .
b. Drugs such as mitoxantrone, paclitaxel, and doxorubicin can be incorporated in microspheres and used in cancer treatment 30 .
c. For preclinical studies of liver and brain tumour treatment, magnetic microspheres labelled with Rhenium-188 and Yttrium-90 are used 31 .
d. Stem cell extraction and bone marrow purging are possible with magnetic microspheres 32 .
e. For cancer treatment by localized hyperthermia, magnetic microspheres with paclitaxel and cisplatin are used 33 .
f. Magnetic beads coated with streptavidin are used for the detection of bacteria 34 .
Magnetic Liposome
Liposomes are spherical vesicles containing at least one lipid bilayer composed of cholesterol and natural non-toxic phospholipids. Liposomes are advantageous because of their greater bioavailability, adjustable particle size, ability to incorporate both hydrophilic and hydrophobic drugs, and modifiable surface, which can help them pass biological barriers 35 . A magnetic liposome contains magnetite as an additional component, prepared by entrapping a ferrofluid within the liposome core, which allows the liposome to be guided to the targeted site with the help of a magnet. Thermosensitive liposomes release the drug after heating by electromagnetic radiation 1 . There are two types of magnetic liposome: one encapsulating metal-oxide ions in the aqueous layer, and the other containing the metal oxide enveloped in the lipid layer 36 .
Applications of magnetic liposomes:
1. Magnetic liposomes serve as cell- or tissue-specific drug delivery vehicles.
2. Magnetic liposomes have specific magnetic functions, acting as contrast agents, providing magnetic targeting, and generating heat.
Magnetic microbubble
Magnetic microbubbles respond to changes in an applied magnetic field, can be visualized by ultrasonography, and are used for magnetic drug delivery. In a recent formulation, a suspension of protamine-functionalized microbubbles is mixed with a suspension of heparinized nanoparticles 41 . The magnetic microbubbles are guided to the tumour and imaged by ultrasonography; focused ultrasound is then used to collapse the microbubbles and release the drug 37 .
Magnetic Microcapsule
Magnetic microcapsules used in vivo and in vitro consist of poly(allylamine hydrochloride) and poly(sodium 4-styrenesulfonate) and are developed by layer-by-layer (LbL) deposition 10 . This is a promising drug delivery system with remote navigation by the magnetic field. The mesoporous mushroom Agaricus bisporus is used to prepare microcapsules called iMushbots. iMushbots show higher drug retention at alkaline pH, i.e. in blood, and easy drug release in acidic media (cancerous cells) 38 .
Conclusion
Magnetic delivery is very useful in treating life-threatening diseases; easy drug targeting is its biggest advantage. Magnetic drug delivery is a recent technology that received attention in the 1990s. In the early 20th century Paul Ehrlich proposed the magic bullet concept, i.e. a drug that reaches the right site in the body at the right time at the right concentration, and magnetic drug delivery systems fulfil all these objectives. It is a challenging area for future research, and more studies, long-term toxicity testing, and characterization should be done for continuous improvement in the field.
"year": 2022,
"sha1": "23ca1ac9f1bfe50261b1be59b22fc6ce8b6ad752",
"oa_license": "CCBY",
"oa_url": "http://www.biotech-asia.org/pdf/vol19no1/BBRA_Vol_19_No_1_p_191-198.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5abbffb22b7be44bb6cf61cbf77675cf45c10692",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
237702875 | pes2o/s2orc | v3-fos-license | In Vitro Regeneration of The Endangered Cactus Turbincarpus Mombergeri Riha, A Hybrid of T. Laui x T. Pseudopectinatus
Turbinicarpus mombergeri is a cactus species formed by a hybridization process between Turbinicarpus laui and Turbinicarpus pseudopectinatus. Under natural conditions, it is very difficult for two species to be genetically compatible for hybridization and to produce flowers at the same time. Thus, T. mombergeri is a very interesting and rare species. Unfortunately, the current populations are decreasing and it is now considered critically endangered. The aim of this research was to develop a successful protocol for propagating T. mombergeri using in vitro culture techniques. Seed disinfection was performed with Plant Preservative Mixture, and 80% germination occurred at day 45 in Murashige-Skoog medium. The shoots were cut longitudinally, and the segments were transferred to media containing 2.22 or 4.44 µM benzyladenine to induce shooting. The generated shoots were highly hydrated and presented abundant callus. The hyperhydricity was controlled by reducing the salt concentration of the medium, by increasing calcium levels and by using polyethylene glycol. The reduction of callus was attained by adding tri-iodobenzoic acid. Vigorous and thick shoots were generated in medium containing urea, and rooting improved in the presence of 0.5 µM indoleacetic acid. Plantlets with normal morphology were obtained, and the survival rate of the plants in soil was 80%. The methodology developed represents an alternative for propagation of T. mombergeri under controlled conditions for commercial or conservation purposes.
Introduction
The Cactaceae family is native to the American continent and comprises about 2,000 species. The greatest diversity of cacti, however, is located in Mexico, with more than 600 species, of which 80% are endemic. Cacti are succulent plants well adapted to dry and desert-like conditions. Many of them possess globe-shaped stems, combining the highest possible volume for water storage with the lowest area for water loss by transpiration (Gibson and Nobel 1986). Cacti species are valued in the international market because of the beauty of their flowers and the characteristic morphology of their stems.
Unfortunately, the natural populations of cacti are decreasing because the devastation of their natural habitat and over-collection (Goettsch et al. 2015). According to the Convention on International Trade of Endangered Species (CITES 2015), 35 cacti species are included in Appendix I, among them, several species of the genus Turbinicarpus. Particularly, the native populations of Turbinicarpus mombergeri have suffered the negative effects of plant looting because they are unique and very uncommon in the wild (Sotomayor et al. 2004). This species is generated by the hybridization of Turbinicarpus laui (Fig. 1a) and Turbinicarpus pseudopectinatus (Fig. 1b).
Recently, Khan et al. (2020) investigated the genetic architecture of hybridization in four areas of eastern Brazil that contain Melocactus concinnus, M. ernestii, M. glaucescens, M. paucispinus, and M. zehntneri. They observed that genomic introgression among these species is very low, which confirms that Melocactus species maintain their genetic integrity, with selection favoring parental genotypes. Thus, the generation of T. mombergeri is a rare case, considering how difficult it is for two species to be genetically compatible to hybridize and to produce flowers at the same time.
T. mombergeri is a semi-globose cactus and possesses elliptical areoles with most of the spines in the lateral position (Fig. 1c). This species grows in calcareous gypsum rocky soil surrounded by thick xerophilous scrub. A single locality with three areas of occupancy of approximately 10,000 m2 is known.
The natural population of T. mombergeri is estimated at fewer than 250 adult individuals; therefore, it is considered critically endangered. In addition, T. mombergeri plants are often taken from their natural habitat, reaching high prices on the international market (Sotomayor et al. 2004).
An alternative for propagating and preserving rare and threatened cacti is the use of in vitro culture techniques.
By using this methodology, more than one hundred species have been propagated, mainly by organogenesis. The aim of this research was to develop an efficient protocol for regenerating T. mombergeri in vitro and thereby contribute to its conservation.
Materials And Methods
Disinfection and germination of seeds
The seeds of T. mombergeri were donated by the Instituto Nacional de Investigaciones Agrícolas y Pecuarias, in San Luis Potosí. The seeds (n = 20) were rinsed with water and commercial soap.
Induction of T. mombergeri shoots
Seedlings germinated from the seeds were segmented longitudinally, and the apical tip was removed. The segments were cultivated in MS medium containing 2.22 µM (B2) or 4.44 µM (B4) benzyladenine (BA) and 1% activated charcoal (AC), solidified with 8 g L-1 agar (Phyto Technology, KS, USA). The medium was adjusted to pH 6.7 to obtain a final pH of 5.7 after sterilization. The percentage of new shoots was evaluated at 30, 60 and 90 days. The presence of callus and hyperhydricity was recorded as low (less than 10% of the tissue surface with callus or hyperhydricity), medium (between 10 and 30%) or high (more than 50%).
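The molar concentrations used here convert to the familiar mass units as follows (the molar masses are standard values; the conversion itself is generic, not specific to this protocol):

def uM_to_mg_per_L(conc_uM, molar_mass):
    return conc_uM * molar_mass / 1000.0   # umol/L * g/mol -> mg/L

BA = 225.25    # g/mol, 6-benzyladenine
IAA = 175.18   # g/mol, indole-3-acetic acid

print(f"2.22 uM BA  = {uM_to_mg_per_L(2.22, BA):.2f} mg/L")   # ~0.50 mg/L
print(f"4.44 uM BA  = {uM_to_mg_per_L(4.44, BA):.2f} mg/L")   # ~1.00 mg/L
print(f"5.71 uM IAA = {uM_to_mg_per_L(5.71, IAA):.2f} mg/L")  # ~1.00 mg/L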
Rooting of shoots and acclimation
To promote root formation, the compact shoots were transferred to ½ WPM-2Ca-P (WPC) alone or supplemented with 5.71 µM IAA (WPC-1) or 0.5 mg L-1 urea (WPC-2). The percentage of rooting and the length of the roots were recorded at 60 and 90 days. Other shoots were maintained in the WCPT-2 medium for 90 days, transferred to the WCPT-1 medium for 60 days and then maintained in the WCP medium for 120 days (WCPT-3).
The roots of regenerated plants were washed carefully with tap water to eliminate traces of culture medium and were treated with Raizone Plus (Fax S.A de C.V, México). The plants were transferred to pots (6 x 7 cm) containing a sterilized mixture of commercial soil and sand (1:1) and covered with plastic bags for 4 weeks to promote progressive acclimation. The bags were gradually perforated to reduce humidity, and after 2 months the plants were uncovered and transferred to the greenhouse. Plant survival was recorded after 6 months under ex vitro conditions.
Statistical analysis
A completely randomized design was used, and significant differences between mean values were evaluated by ANOVA with the Tukey test at a 95% significance level, using the GraphPad Instat 3 program (GraphPad Software Inc., Version 3.10).
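A minimal sketch of the analysis described (not the authors' actual script, and the rooting percentages are invented): one-way ANOVA followed by Tukey's HSD at alpha = 0.05.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wcpt  = np.array([20.0, 23.0, 22.0, 21.0])   # hypothetical replicate values
wcpt1 = np.array([27.0, 30.0, 28.0, 29.0])
wcpt2 = np.array([15.0, 14.0, 17.0, 16.0])

f_stat, p = stats.f_oneway(wcpt, wcpt1, wcpt2)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate([wcpt, wcpt1, wcpt2])
groups = ["WCPT"] * 4 + ["WCPT-1"] * 4 + ["WCPT-2"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))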
Shoot induction
To initiate the in vitro culture of endangered species, the use of seeds is the preferred method because it avoids destroying the mother plants and preserves genetic diversity. In this work, seed disinfection with PPM proved to be efficient, and no contamination was observed (data not shown). This compound is a broad-spectrum biocide with no adverse effects on in vitro seed germination, callus proliferation or callus regeneration.
Although germination is usually low in seeds with a hard coat (Rojas-Aréchiga and Vazquez-Yanez, 2000), as in T. mombergeri, we obtained 80% germination at 45 days (Fig. 2a); longer periods did not improve the response. The germination percentage obtained in this study was higher than that reported for T. laui under in vitro conditions (29%). After seed germination, well-defined epicotyls and roots were observed at 30 days, reaching 4 mm and 5.8 mm, respectively, at 90 days. The presence of spines was evident after 14 days and increased in proportion to culture time (Fig. 2b). Since T. mombergeri is a little-known species, there is no information about its germination rate or growth parameters in the wild for comparison.
The epicotyls of T. mombergeri were cut longitudinally, and the segments were transferred to a medium with BA. Because of the scarcity of material (14 germinated seeds), only the B2 and B4 media were tested. These BA concentrations were selected because, in previous studies, they successfully induced shoot formation in T. laui (Santos-Díaz et al. 2003b); it has also been reported that BA was efficient in propagating other Turbinicarpus species (Pérez-Molphe et al. 2015). The data showed that 50% and 7% of the explants cultivated on B2 and B4 media, respectively, regenerated one shoot at 90 days; the shoots were highly hydrated and presented abundant callus formation (Table 1, Fig. 3a). This response was lower than that reported by Dávila-Figueroa et al. (2005), who obtained between 7.8 and 19.7 shoots per explant during the propagation of several Turbinicarpus species. It has been described that heterosis in hybrids can affect the regenerative capacity. For example, the ability to generate in vitro shoots was higher in a tomato parental line than in its hybrids, and this difference was attributed to heterosis and maternal effects (Ohki et al. 1978). Additional genetic studies must be done to determine whether this phenomenon is also present in the hybrid T. mombergeri.
The shoots were transferred to B2 to increase shoot number, and after a second subculture an average of 2.8 shoots per explant was obtained, still hydrated and with abundant callus. Hyperhydricity has been described during micropropagation of many cacti species, such as Mammillaria gracillis, M. pectinifera, Escobaria minima and Pelecyphora aselliformis, among others (Giusti et al. 2002; Poljuha et al. 2003). This effect has often been considered a physiological response to the simultaneous stress factors of in vitro culture, and it negatively impacts micropropagation efficiency and the survival of plants under ex vitro conditions (Debergh et al. 1992). Biochemical characteristics of hyperhydric tissues include reduced dry weight; lower lignin, cellulose and calcium content; and a low Ca2+/uronic acid ratio. Therefore, to reduce the hyperhydricity of T. mombergeri shoots, the effects of culture media (MS, ½ MS, ¼ MS, ½ WPM), an osmotic agent (1% PEG) and a doubled calcium concentration (2Ca) were tested. The reduction in salt concentration in ½ MS medium generated 21% compact shoots at 90 days. This percentage improved in ¼ MS medium or in ½ WPM medium containing the 2Ca concentration and PEG, generating 80 to 90% compact shoots or shoots with a very low degree of hyperhydricity (Table 2).
The beneficial effects of calcium could be attributed to strengthening of the cell walls, providing rigidity by reversibly cross-linking the pectic chains. Its association with the plasma membrane also helps to maintain membrane stability by bridging phosphate and carboxylate groups (White and Broadley 2003). Calcium is also important for the formation of vegetative buds and the development of flowers and roots in tobacco pith explants (Capitani and Altamura 2004). Furthermore, it is an essential element for cactus nutrition, representing 85% of dry weight in some species (Gallaher 1975). As T. mombergeri grows in calcareous soil, high levels of Ca might be required for good shoot development.
The reduction in salt concentration also seems to influence the compaction of T. mombergeri shoots, since the ½ MS, ¼ MS and ½ WPM media generated a higher number of compact shoots than full-strength MS medium. Better results were obtained in ½ WPM medium than in ½ MS medium. The major differences in macronutrients between these media lie in the ammonium and nitrate ion concentrations, as well as the total ion concentration. Full-strength MS is high in ammonium (20.6 mM) and nitrate (39.4 mM) ions, while WPM contains lower concentrations of both ammonium (5 mM) and nitrate (9.7 mM) ions. It has been reported that the NH4:NO3 ratio affects the level of hyperhydricity in several species, such as Aloe polyphylla (Ivannova and Van Staden 2008) and date palm (El-Dawayati and Zayed 2017). Thus, a reduction in the NH4:NO3 ratio could also contribute to reducing the hyperhydricity of T. mombergeri shoots.
On the other hand, 100% shooting was observed in ½ WPM-2Ca-P medium (Table 2), generating two shoots per explant. Although compact shoots were obtained, the amount of callus was still very high, as shown in Fig. 3b.
Several approaches have been used to reduce callus formation, including cytokinin elimination and the use of auxin transport inhibitors such as TIBA. This compound enhanced somatic embryogenesis in groundnut and shoot formation in Morus alba (Venkatesh et al. 2009; Bhau and Wakhlu 2001) and improved Rosa hybrida micropropagation (Singh and Syamal 2000). Thus, we cultivated the T. mombergeri shoots in ½ WPM-2Ca-P supplemented with 0.5, 1 or 2 mg L-1 TIBA. Callogenesis was reduced in the presence of TIBA in proportion to the concentration (Table 3). This result suggests that T. mombergeri shoots synthesize high levels of endogenous auxins that are responsible for callus generation. Figure 3c shows the aspect of T. mombergeri shoots without callus at 90 days of culture.
Root formation and transfer to soil
The compact shoots (2 to 3 cm high) were transferred to ½ WPM-2Ca-P medium with 1 mg L-1 TIBA (named WCPT), alone or in combination with 5.7 µM IAA (WCPT-1) or 0.5 mg L-1 urea (WCPT-2), to induce rooting (Table 4). After 90 days in the WCPT medium, 21.7% of the explants developed roots approximately 3.8 mm long. In the WCPT-1 medium, the percentage of rooting increased slightly and longer roots (4.6 mm) were generated at 90 days. A reduction in compact shoots, however, was observed over time, probably because the auxin in the medium induced incipient callus formation.
The shoots cultivated in the WCPT-2 medium generated the lowest percentage of rooting; the root length was similar to that obtained in the WCPT-1 medium, but the shoots doubled their diameter at 90 days (Fig. 3d).
Taking these results into account, an additional experiment was performed (WCPT-3). The shoots were maintained in the WCPT-2 medium for 90 days to generate wide and thick shoots. The plant material was then transferred to the WCPT-1 medium for 60 days, to induce a vigorous radical system, and was finally maintained in the WCP medium for 120 days (Table 4). Using this strategy, callus formation was avoided completely, and at the end of the experiment 96% of the shoots were rooted, with well-defined roots averaging 13 mm in length. Figure 3e shows the aspect of the rooted shoots after 1 year in culture. These results show that T. mombergeri requires a long period to develop strong roots. In the wild, most Turbinicarpus species exhibit a very thick primary root, which represents 80% of the plant body and acts as an anchor and, more importantly, as water storage for dry periods; root growth is therefore a time-consuming event.
The beneficial effect of urea on the growth and rooting of T. mombergeri shoots is attributed to a higher availability and better absorption of organic nitrogen. It is well known that nitrogen is required for chlorophyll synthesis and amino acid metabolism, which are essential for plant growth and development. Several urea transporters have been identified across different cellular membranes. For example, in Arabidopsis, a symporter that cotransports urea with protons at high affinity has been described, and in the tonoplast, various tonoplast intrinsic proteins (TIPs), a subfamily of aquaporins, transport urea in a channel-like manner. These transporters seem to optimize nitrogen uptake and compartmentation depending on the nitrogen forms available in the medium (Kojima et al. 2006). Further studies must be done to identify the putative urea transporters in Turbinicarpus species.
The T. mombergeri plants were transferred to soil, and 85% survived after 1 year. At this point, the plants showed the characteristic spine pattern observed in mature plants (Fig. 3f).
In summary, this work shows that reducing the salt concentration of the medium, raising the calcium concentration and adding PEG reduced shoot hyperhydricity, that the addition of TIBA decreased callogenesis, and that the presence of urea promoted the development of thick shoots. The protocol developed allowed the successful micropropagation of the critically endangered cactus T. mombergeri, contributing to its conservation.
Conflicts of interest/Competing interests
The authors declare that there is no conflict of interest.
Availability of data and material (data transparency)
Data and material are available at the Faculty of Chemistry, UASLP.
Code availability (software application or custom code)
The software used was Microsoft Word.
Ethics approval
No animals or persons were used in this work.
Consent to participate MLSD, JRA and MSSD give their consent to participate in this paper.
Consent for publication MLSD, JRA and MSSD give their consent for the publication of this paper.
"year": 2021,
"sha1": "44141624c37da47752b06e0c991a8a624ec1aa4b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-742548/v1.pdf?c=1637263293000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "999bff1f8efb2eb330fc1a239effd9c7927a1587",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Soil Profile Alteration in a Brown Forest Soil under High-Input Tea Cultivation
Received 7 November 2005. Accepted 4 April 2006. Corresponding author: S. S. Abe (shinabe@phanes.muses.tottori-u.ac.jp, fax +81-857-31-5367). Abbreviations: Av-P, available P; Et-B, HCl-extractable B; Et-Cu, HCl-extractable Cu; Et-Fe, HCl-extractable Fe; Et-Mn, HCl-extractable Mn; Et-Mo, HCl-extractable Mo; Et-Zn, HCl-extractable Zn; Ex-Al, exchangeable Al; Ex-Ca, exchangeable Ca; Ex-K, exchangeable K; Ex-Mg, exchangeable Mg; NF-0, native forest; TC, total C; TF-19, 19-yr tea field; TF-34, 34-yr tea field; TN, total N.
Soils in the tea (Camellia sinensis L.) gardens of Japan have distinctive properties, as indicated by their strongly acidic reaction, high exchangeable acidity and high active-Al contents (Kawai and Morita, 1958a, b). Such uniqueness is primarily due to the long-term high-input farming systems (Kato et al., 2001). To enhance the quality and quantity of tea leaves, farmers in Japan usually apply nitrogen fertilizers at a rate of more than 1.0 Mg N ha−1 yr−1, which results in severe soil acidification in the tea fields (Tachibana et al., 1995; Matsumoto et al., 2002). Heavy fertilizer application also causes chemical pollution in the agroecosystems and neighboring local environments through the leaching or erosion of applied nutrients (Kato et al., 2001). However, the high-input farming practice has continued in spite of the environmental risks, since tea plants prefer an acidic soil and are a source of dietary Al, which is toxic to most other plants (Yokota et al., 2005).
Lack of understanding of the process and magnitude of environmental deterioration under tea cultivation has made it difficult to explore alternative management practices. We therefore investigated the changes in the physicochemical properties of soil profiles in a tea field using a chronosequence of cultivated soil series, which can provide useful information about alternative fertilization and soil management practices towards sustainable tea farming in Japan.
Materials and Methods
The study site was at the Education and Research Center for Biological Resources of Shimane University (N 35º30′, E 133º06′) in Japan. Mean annual precipitation was about 1800 mm and mean daily temperature was 14.6ºC (19.1/10.7ºC max./min. temp.) in the last 30 years. Geological components consisted of tertiary sedimentary rocks with some volcanic ash materials. The soil was classified into brown forest soils under the United Classification System of Japan and Ultisols according to the U.S. Soil Taxonomy. Soil texture was silty clay and the clay content gradually increased with depth (Hashi, 1999). Mineral constituents in the clay fraction consisted of kaolinite accompanied by illite, chlorite and smectite, in addition to certain amounts of quartz (Abe et al., unpublished).
Soil samples were collected from 19- and 34-year tea fields (TF-19 and TF-34, respectively) in May 1998 and a neighboring native forest (NF-0) in 1997. Camellia sinensis L. cv. Yabukita, Asatsuyu and Yaeho were planted in 3000 m² plots of TF-19 and TF-34. TF-19 and TF-34 received 0.7 t N ha−1 yr−1, 0.6 t P2O5 ha−1 yr−1, 0.3 t K2O ha−1 yr−1 and 0.1 t MgO ha−1 yr−1 in addition to dolomitic lime (1.0 t ha−1 yr−1), oilcake (3.0 t ha−1 yr−1), fish flour (1.9 t ha−1 yr−1) and bone meal (1.0 t ha−1 yr−1) during 1995-1997. The vegetation in NF-0 was a broad-leafed semi-deciduous forest comprising Lonicera japonica, Ilex integra, etc. Three soil pits were dug between hedgerows along the toposequence of each tea plot (average slope of 15.9% in TF-19 and 32.7% in TF-34), and a representative soil profile was made in NF-0 on the gently undulating hilltop for the field survey and soil sampling. The samples obtained were air-dried, gently ground and passed through a 2 mm mesh sieve for the physicochemical analysis.
Soil pH was measured in deionized water and 1.0 M KCl, at a soil:solution ratio of 1:2.5, with a glass electrode. The amounts of total C (TC) and N (TN) were determined by the dry combustion method using an NC analyzer (Sumigraph NC-80, Sumika Chem. Anal. Serv. Ltd.). The amount of available P (Av-P) was determined by means of the Bray No. 2 method followed by spectrophotometric measurement with molybdate. Exchangeable Ca, Mg and K (Ex-Ca, Ex-Mg and Ex-K, respectively) were leached with 1.0 M NH4OAc at pH 7 and the amounts were determined by inductively coupled plasma spectroscopy (ICPS) (ICP Mass 2010, Shimadzu Co.). Exchangeable Al (Ex-Al) was obtained by subtracting exchangeable H from exchangeable acidity using IITA's method (IITA, 1979). Selected micronutrients, i.e., Fe, Mn, Cu, Zn, Mo and B (Et-Fe, Et-Mn, Et-Cu, Et-Zn, Et-Mo and Et-B, respectively), were extracted with 0.1 M HCl as described by Viets and Boawn (1965) and subsequently examined on the ICPS. The data obtained were statistically analyzed by ANOVA using StatView (SAS Inst. Inc.).
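The plot-wise comparisons reported below were run in StatView; as a minimal open-tooling sketch of the same one-way ANOVA, the snippet below uses SciPy with invented surface-layer measurements (three pits per plot), since the raw data are not reproduced here.

```python
# Minimal sketch of the one-way ANOVA described above, using open tooling
# (the authors used StatView); the measurements here are hypothetical.
from scipy import stats

# Hypothetical exchangeable Ca (cmolc kg-1) in surface layers, three pits/plot
nf0 = [0.8, 1.1, 0.9]
tf19 = [6.2, 5.9, 6.5]
tf34 = [4.8, 5.1, 4.6]

f_stat, p_value = stats.f_oneway(nf0, tf19, tf34)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p suggests plot means differ
```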
Results and Discussion
Fig. 1 shows the changes in the physicochemical properties of the soil profiles investigated, and Table 1 gives the correlation matrix between the components examined. Soil pH measured in H2O ranged from 4.47 to 4.72 in the profile of NF-0, whereas TF-19 and TF-34 showed lower pH values (3.60-4.07) throughout the profile. As shown in many reports (Kawai and Morita, 1958a; Tachibana et al., 1995; Matsumoto et al., 2002), soil acidification occurred throughout the profiles under tea cultivation. The pH values in the 0-10 and 10-30 cm layers were lower than those in the remaining layers (deeper than 30 cm). The magnitude of acidification was more prominent in the topsoils to which fertilizer was applied. The pH of the soils extracted with 1.0 M KCl was substantially lower than that with H2O. On the other hand, there was no significant difference in pH values between TF-19 and TF-34 in either extraction (H2O and KCl). This suggests that the soil reaction reached equilibrium within the 19 years of tea cultivation at the study site.
The TC contents in the surface layers (0-10 cm) increased significantly with the lapse of time under cultivation. Total C was considered to exist entirely in organic forms, because the result of the HCl test, which detects carbonates in soils, was negative. The mean accumulation rate of organic C in the surface layers during cultivation was calculated as 1.22 kg C m−3 soil yr−1. High litter production by tea plants and the application of organic amendments could be responsible for the rapid accumulation of organic matter. The beneficial effect of organic matter on soil structure is well known (Brady and Weil, 2001). However, there were no relevant differences in bulk density or three-phase distribution of the surface layers among NF-0, TF-19 and TF-34 (data not shown). Foot compaction between hedgerows would diminish the positive influence of organic matter accumulation on soil structure.
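As a worked check of the quoted rate, the arithmetic behind an accumulation estimate of this kind is sketched below; the TC concentrations and bulk density are hypothetical values chosen only so the example lands near 1.22 kg C m−3 yr−1, not the authors' measurements.

```python
# Hedged sketch of how an organic C accumulation rate can be derived; the
# inputs below are hypothetical, chosen only to illustrate the arithmetic.
def c_accumulation_rate(tc_start_gkg, tc_end_gkg, bulk_density_mg_m3, years):
    """Mean accumulation rate of organic C (kg C m-3 soil yr-1).

    g C per kg soil times bulk density in Mg m-3 gives kg C per m3 soil
    directly, because the g->kg and kg->Mg factors of 1000 cancel.
    """
    delta_kg_m3 = (tc_end_gkg - tc_start_gkg) * bulk_density_mg_m3
    return delta_kg_m3 / years

print(c_accumulation_rate(tc_start_gkg=35.0, tc_end_gkg=76.5,
                          bulk_density_mg_m3=1.0, years=34))  # ~1.22
```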
The TN content was well associated with the TC content (R=0.99) (Table 1). Nitrate nitrogen (NO3-N) accounted for a few percent of TN in the surface layers of TF-19 and TF-34 (Hashi, 1999). In general, the distribution of ammonium nitrogen (NH4-N) is negligible in tea fields (Kihou and Yuita, 1991; Matsumoto et al., 2002), since applied N in the form of NH4-N is susceptible to nitrification by microbes (Hayatsu, 1993). Most of the TN in the surface layers was accumulated in organic forms in the tea fields, as suggested by Kihou and Yuita (1991). On the other hand, the ratio of NO3-N to TN generally increased with depth, reflecting the leaching of applied N (Hashi, 1999).
Accumulation of Av-P was noticed only in the surface layers of TF-19, while a considerable increase in Av-P was recognized in the 10-30 and 30-50 cm layers as well as in the surface layers of TF-34. Downward mobility of P through the profile is usually low, especially in acid soils, due to strong fixation of inorganic P by the active forms of Al and Fe under acidic conditions in the topsoils to which the fertilizer was applied (Sanchez and Uehara, 1980). The accumulation of Av-P in the subsoils of TF-34 implies that the addition of fertilizer P has exceeded the P retention capacity of the surface layers. The amount of Av-P was well associated with that of Et-Fe (R=0.77), which suggests a significant role of Fe in P fixation. Phosphorus accumulated in the surface layers of TF-19 and TF-34 might be susceptible to runoff and erosion because of the steep landscape at the study site. Exchangeable base status was very low in NF-0, which would be due to leaching under the humid climate. The loss of NO3-N in TF-19 and TF-34 suggested above might have been accompanied by the loss of counter bases. The high susceptibility of the bases to leaching was indicated by the significant decrease in exchangeable Na contents of TF-19 and TF-34 in comparison with that of NF-0 (data not shown). However, the amounts of Ex-Ca, Ex-Mg and Ex-K generally increased under cultivation. A similar trend was observed for the distribution of Ex-Ca and Ex-Mg, as indicated by their correlation coefficient (R=0.84) (Table 1). The contents of Ex-Ca and Ex-Mg in the surface layers differed in the order TF-19 > TF-34 > NF-0. The contents of Ex-Ca were higher in TF-19 and TF-34 than in NF-0 throughout the profile, while the Ex-Mg content in the subsurface layers (at depths below 10 cm) was not significantly different between NF-0 and TF-19. However, the content of Ex-Mg was lower in TF-19 and TF-34 than in NF-0 at the depth of 70-90 cm. The contents of Ex-K increased with the lapse of time under cultivation throughout the profile. These findings suggest that exchangeable bases are generally increased by fertilizer application in spite of soil acidification.
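The pairwise associations quoted in this section (e.g., Av-P with Et-Fe, Ex-Ca with Ex-Mg) come from the Table 1 correlation matrix; a minimal sketch of how such a Pearson matrix is computed is given below, with an entirely hypothetical soil data frame (columns named after the paper's variables).

```python
# Sketch of a Pearson correlation matrix of the kind reported in Table 1;
# the soil measurements below are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "TC":    [12.1, 45.3, 76.5, 8.9, 30.2],
    "TN":    [1.0, 3.8, 6.4, 0.7, 2.6],
    "Av_P":  [5.2, 120.4, 310.8, 3.1, 88.0],
    "Et_Fe": [40.2, 180.5, 260.3, 35.7, 150.1],
})

print(df.corr(method="pearson").round(2))  # e.g. TC-TN correlation near 0.99
```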
The contents of Ex-Al significantly increased by acidification under tea cultivation, which might have enhanced Al activity. As suggested above, the soil reaction may have reached equilibrium during the 19-yr cultivation, whereas Ex-Al seemed to still be increasing after the 19-yr cultivation. Dong et al. (1999) reported that both the available Al in soils and the uptake of Al by tea leaves increased with decreasing soil pH. Therefore, it is necessary to keep the soil reaction at an optimal level (the recommended range of soil pH is between 5.0 and 5.5 for tea farming in Japan) so as to control the Al content of tea leaves, since the tea product is considered to be a potentially important source of dietary Al.
The distribution of Et-Fe was relatively erratic in the profiles (Fig. 1). This suggests that the Et-Fe contents were not well correlated with the period under cultivation in TF-19 and TF-34. On the other hand, Et-Fe was found to be highly correlated with TC (R=0.79) and TN (R=0.80) in addition to Av-P (R=0.77) (Table 1). Iron shows highly complex reactions with organic matter and forms stable compounds with P under acidic conditions. The content of Et-Mn in NF-0 was highest in the surface layers and decreased with depth, reflecting the soil profile development process under natural weathering. There was only a small amount of Et-Mn throughout the profile in TF-19 and TF-34. This suggests that Et-Mn, like the exchangeable bases, is susceptible to leaching under strongly acidic conditions (Goto et al., 1994). The contents of Et-Cu markedly increased in the surface layers, but not significantly in the subsurface layers, during tea cultivation. Copper preferentially complexes with organic matter (Yoshida and Nakao, 1971), as indicated by the high correlation coefficients with TC (R=0.83) and TN (R=0.82) (Table 1). By contrast, there was no significant difference in the Et-Zn contents of the surface layers among the three plots. However, TF-34 had a significantly larger amount of Et-Zn in the subsurface layers (10-30, 30-50 and 50-70 cm) than NF-0 and TF-19. The Et-Mo content tended to increase with the lapse of time under tea cultivation, although there was no significant difference in its contents in the 0-10, 50-70 and 70-90 cm layers between TF-19 and TF-34. The increases in the contents of Et-Cu, Et-Zn and Et-Mo might have originated from organic amendments or agro-chemicals such as P fertilizers, which usually include heavy metal impurities. The Et-B content was greater in TF-19 and TF-34 than in NF-0 throughout the profile (Fig. 1) and was highly correlated with pH extracted with H2O (R=−0.84) and KCl (R=−0.81) (Table 1). This suggests that boron extractability with 0.1 N HCl was driven by the soil reaction. There was no significant difference in Et-B distribution between TF-19 and TF-34 because of the similarity in soil reaction.
The findings of this study revealed that acidification and chemical contamination of soils under tea cultivation were caused by heavy application of fertilizers. The severe acidic reaction and excess amount of active Al in the soils prevent root elongation and impair its physiological function, which would eventually reduce the quality and quantity of the tea leaves (Tachibana et al., 1995, 1996). Heavy fertilization also induces chemical pollution in local environments through leaching, runoff and/or erosion of applied nutrients, e.g., N, P and bases. Soil contamination with heavy metals such as Cu, Zn and Mo is another aspect of environmental degradation in the tea fields. Soil pH adjustment may be the key to improving fertilizer efficiency and reducing its application rate. Therefore, it is necessary to develop a simple soil testing system to allow farmers to easily check their soil conditions, which could be helpful for the appropriate use of fertilizers and lime.
"year": 2006,
"sha1": "ca698641dc394d5c7419749efb57beec73588436",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1626/pps.9.457",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "e99401b9f2a8a119223be992ef0bcf86fd0b78c1",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Strengths and Difficulties Questionnaire: internal validity and reliability for New Zealand preschoolers
Objectives This observational study examines the internal construct validity, internal consistency and cross-informant reliability of the Strengths and Difficulties Questionnaire (SDQ) in a New Zealand preschool population across four ethnicity strata (New Zealand European, Māori, Pasifika, Asian). Design Rasch analysis was employed to examine internal validity on a subsample of 1000 children. Internal consistency (n=29 075) and cross-informant reliability (n=17 006) were examined using correlations, intraclass correlation coefficients and Cronbach’s alpha on the sample available for such analyses. Setting and participants Data were used from a national SDQ database provided by the funder, pertaining to New Zealand domiciled children aged 4 and 5 and scored by their parents and teachers. Results The five subscales do not fit the Rasch model (as indicated by the overall fit statistics), contain items that are biased (differential item functioning (DIF)) by key variables, suffer from a floor and ceiling effect and have unacceptable internal consistency. After dealing with DIF, the Total Difficulty scale does fit the Rasch model and has good internal consistency. Parent/teacher inter-rater reliability was unacceptably low for all subscales. Conclusion The five SDQ subscales are not valid and not suitable for use in their own right in New Zealand. We have provided a conversion table for the Total Difficulty scale, which takes account of bias by ethnic group. Clinicians should use this conversion table in order to reconcile DIF by culture in final scores. It is advisable to use both parents and teachers’ feedback when considering children’s needs for referral of further assessment. Future work should examine whether validity is impacted by different language versions used in the same country.
Introduction
Strengths and limitations of this study
► A key strength of this study is the inclusion of all 4-year-old and 5-year-old children in New Zealand for whom a Strengths and Difficulties Questionnaire assessment was available in 2011, resulting in our ability to assess the validity of the tool at the population level and with sufficient power to make sound conclusions.
► A strength of the study included robust data quality checks and the exclusion of 39% of cases for which we had concerns about their quality (it being incomplete or containing multiple inconsistencies).
► A limitation was our inability to assess differential item functioning by other key variables that may affect validity, for example, first language or country of birth, as such data were not available.
► Future work should examine whether validity is impacted by different language versions used (in the same country).

Educational achievement and problems in primary and secondary school aged children can arise as a result of behavioural and emotional problems when the child is of preschool age. [1][2][3][4][5] Consequently, screening to identify children with or at risk of behavioural problems at a preschool age is an increasingly used preventative strategy, aiming to enhance the success of support programmes and early intervention. 6 Such screening is best performed using standardised methods, and for behavioural assessment, this means the use of a questionnaire-based measure. The Strengths and Difficulties Questionnaire for parents (SDQ-P) and for teachers (SDQ-T) is a tool used worldwide for this purpose to screen preschool children's psychosocial attributes (positive and negative behaviours). 7-10 It consists of 25 items, making up five subscales: Emotional Symptoms, Conduct Problems, Hyperactivity, Peer Problems and Prosocial Behaviour. 7 8 Before using a measure such as the SDQ, establishing validity and reliability is key for optimum decision-making. At present, there are two dominant approaches to the development and testing of measures: Classical Test Theory (CTT) and Modern Test Theory (also known as item response theory). 11 In CTT, it is assumed that the observed scores on items are the sum of the true score (which we cannot directly measure) and measurement error. However, neither the true score nor the measurement error can be determined and the approach is therefore flawed. 12 In addition, the best conclusion that can be made following satisfactory tests of validity and reliability using CTT is that an outcome measure is an ordinal scale. Yet, many statistical tests that examine the validity of scales assume that the data arising are of interval nature. Indeed, in the preschool population, the SDQ has only been tested using parametric, CTT approaches, as demonstrated in our recent systematic review 13 to which we return below. By contrast, Modern Test Theory approaches, such as Rasch analysis, are underpinned by mathematical models that specify the conditions under which equal interval measurements can be estimated from outcome measurement data. [14][15][16] These approaches are therefore more robust.
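As a concrete anchor for the contrast drawn above, the standard textbook formalizations of the CTT score decomposition and the dichotomous Rasch model are given below; these are well-known forms, not equations reproduced from this article.

```latex
% CTT: observed score = true score + error; reliability as a variance ratio
X = T + E, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}

% Dichotomous Rasch model: person ability \theta_n, item difficulty b_i (logits)
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}
```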
The structural validity of the SDQ in preschoolers has been extensively researched using CTT, drawing on factor analysis (eg, by Klein et al, Tobia et al and Mieloo et al [17][18][19]), Cronbach's alphas (α), 13 correlation coefficients 13 20 and Weighted Least Squares in older children. 21 Our systematic review found acceptable to good evidence for the 5-factor SDQ structure in preschoolers when confirmatory factor analysis (CFA) had been used. 13 A different approach to examining structural validity, using Modern Test Theory, can be achieved by examining whether each of the subscales is unidimensional and fits the Rasch model (ie, examining internal construct validity). 15 Like CFA, Rasch analysis is a confirmatory approach to examining whether items belong to the subscales under investigation. However, there are known limitations to using factor analysis on ordinal scales, including its parametric basis and the emergence of 'difficulty factors', which may spuriously indicate multidimensionality. 22 In addition, factor analysis does not allow detailed investigation of item function in regard to targeting, differential item functioning (DIF) and local dependency between items, whereas Rasch analysis includes such assessments. 23 We identified one study which had employed Rasch analysis on SDQ data that had been self-completed by 12-18-year-olds in Sweden. 24 This study showed that none of the SDQ scales was psychometrically robust, with misfitting items in all five subscales and poor internal consistency. However, that study did not examine whether the scale was invariant across different subgroups.
Internal consistency of the SDQ-P subscales has been reported in many studies and synthesised in a systematic review. 13 The sample size-weighted average Cronbach's α for the five subscales was below the threshold of 0.70 (implying inadequate internal consistency for shorter, established scales) and for the Difficulty scale α was 0.79 (acceptable for group comparisons but not for individual use) (Streiner and Norman, p. 91). 25 Inter-rater reliability of SDQ subscales between two parents and between two teachers has previously been found to be acceptable when correlation coefficients were used (between 0.42 and 0.64 for parents and between 0.59 and 0.81 for teachers). 20 Other studies have examined scores between different types of informants (eg, parent and teacher). The systematic review showed that the sample size-weighted average correlation coefficients generated from these studies were weak to moderate (between 0.25 and 0.45). 13 The validity and reliability of the SDQ have not previously been examined in New Zealand, a country with a sizeable indigenous population (Māori, 15.4%) and immigrant population (25.2% born overseas). 26 New Zealand is a multicultural society, and this is reflected in values, ways of living and languages spoken. It cannot be assumed that measures capturing psychological constructs will have cultural equivalence. 27 28 Indeed, a New Zealand qualitative study has shown that parents from Māori, Pacific Island, Asian and new immigrant groups questioned the cultural validity of the SDQ. 29 Cultural equivalence therefore needs further investigation.
In summary, CTT approaches to examining the validity of the SDQ are limited, evidence suggests cross-informant reliability is weak and there is no evidence of cultural equivalence for the New Zealand population. Therefore, we aimed to use Modern Test Theory, and specifically Rasch analysis, to examine the internal construct validity and cultural equivalence of the SDQ in a New Zealand preschool population across different ethnicity strata and to examine reliability between parents and teachers (cross-informant reliability). We hypothesised that the SDQ subscales and the Difficulty scale would (1) have cross-informant reliability (with consistency in scores by parents and teachers); (2) fit the Rasch model (demonstrating unidimensionality and internal construct validity) and (3) have cultural equivalence across ethnic strata (demonstrated by an absence of DIF).
Methods
Study design and sample
This observational study used SDQ data gathered during the New Zealand Before School Check (B4SC), which takes place when the child is aged 4 (or exceptionally aged 5). 9 The B4SC is carried out by registered nurses based in primary care and involves the assessment of the child's general health, hearing, oral health, vision, growth as well as developmental and behavioural problems. The latter is evaluated using the Australian SDQ version for 2-4-year-olds, completed by the parent. If the child is in preschool, the nurse also requests their teacher to complete the SDQ for the child. Clear instructions for the administration of the SDQ are provided within the B4SC handbook. In New Zealand, there is no other SDQ data collection point during childhood.
Data sources/quality, missing data and bias: Permission to use the full, deidentified 2011 national B4SC SDQ dataset for preschoolers (n=51 251) from the New Zealand Ministry of Health was provided by the B4SC Governance Board. Data quality checks on SDQ data resulted in the deletion of 20 024 cases (out of n=51 251, 39%) for the following reasons:
1. Individual item data from the parent questionnaire were missing completely (n=19 197) or partially (n=1), since (1) we would not have been able to carry out a quality check of the subscale scores and (2) we would not be able to use these data for the Rasch analysis; thus, 19 198 records were removed from the analysis set.
2. District Health Boards (DHB) for which we had fewer than 15% of data on individual items, since the quality of their data is in doubt: although a total of 12 720 records came from these DHBs, this extra step only entailed the removal of a further 375 records from the analysis set after step 1.
3. Children's ages were recorded as younger than 4 or older than 5 when the SDQ was completed (we suspect some of these ages may have been entered incorrectly); however, this step only entailed the removal of a further 451 records from the analysis set after steps 1 and 2.
4. Cases with all zero scores: these were deemed potentially erroneous as the Prosocial subscale is scored in the opposite direction from the other subscales; although 1038 cases fitted this profile, none had complete parental item data and so no further record was removed on the basis of this criterion after steps 1, 2 and 3.
Study size: In total, 29 075 cases remained in the parents' dataset; 17 006 remained for the parent-teacher cross-informant reliability analysis. Rasch analysis uses fit statistics, but these are not suited to such large sample sizes. Fit to the Rasch model is considered acceptable when the observed data fit the predetermined Rasch model, 15 30 traditionally examined with fit statistics (eg, the item-trait interaction χ²). A non-significant χ² indicates fit to the Rasch model. Power increases with large samples, which inflates the χ² and results in negligibly small differences appearing as a statistically significant misfit between the data and the model. 31 32 Therefore, our Rasch analysis was carried out on a smaller sample (n=1000), to allow examination of convergence to the Rasch model. The sample was created by randomly sampling equal numbers of cases from the total parent sample for four main ethnic groups (250/ethnic group): New Zealand European (NZE), Māori, Asian and Pasifika (a code sketch of this subsampling follows the Instruments subsection below). This is well above the recommended sample size for studies using Rasch analysis. For example, it has been suggested that to have 99% confidence that the estimated item difficulty is within ±½ logit of its stable value on the interval metric, the minimum sample size range is 108-243 (best to poor targeting). 33 34
Instruments
The SDQ consists of 25 items, each with three response options: not true, somewhat true and certainly true. The four SDQ subscales reflecting problematic behaviours or emotions (Emotional Symptoms, Conduct Problems, Hyperactivity, Peer Problems) contain 15 positively worded items and 5 negatively worded items. 7 8 Positively worded items are reverse scored (in New Zealand this is done on data entry); thus, higher subscale scores denote greater problems. Scores from these four subscales are also summed to give an overall Difficulty score ranging from 0 to 40.
The five items making up the Prosocial Behaviour subscale are positively worded and higher scores denote better social behaviour.
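Returning to the subsampling described under 'Study size' above, the stratified draw of 250 cases per ethnic group can be sketched as follows; the file name and column names are hypothetical, as the actual dataset layout is not described in the text.

```python
# Hedged sketch of the stratified subsampling for the Rasch analysis
# (250 cases per ethnic group, n=1000 total); file and column names are
# hypothetical placeholders.
import pandas as pd

parents = pd.read_csv("b4sc_sdq_parents.csv")  # hypothetical source file

rasch_sample = (
    parents[parents["ethnicity"].isin(["NZE", "Maori", "Asian", "Pasifika"])]
    .groupby("ethnicity")
    .sample(n=250, random_state=42)  # fixed seed for reproducibility
)
assert len(rasch_sample) == 1000
```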
Data analysis
Cross-informant reliability (between parents and teachers) was assessed for those cases for which both parent and teacher SDQ data were available (n=17 006). The intraclass correlation coefficient (ICC) is the preferred statistical technique and was used. 25 35 However, as many studies of the SDQ have used correlations, 36 we will also present those.
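To make the distinction between a plain correlation and the ICC concrete, a minimal sketch is given below; it assumes the pingouin package for the ICC, and the paired parent/teacher scores are invented for illustration.

```python
# Sketch of the cross-informant reliability analysis: Pearson correlation
# plus ICC between parent and teacher Total Difficulty scores; the data and
# column names are hypothetical.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

wide = pd.DataFrame({
    "child": [1, 2, 3, 4, 5],
    "parent": [8, 14, 3, 20, 11],
    "teacher": [5, 10, 6, 15, 7],
})

r, p = pearsonr(wide["parent"], wide["teacher"])
print(f"Pearson r = {r:.2f}")

# The ICC expects long format: one row per (child, rater) pair
long = wide.melt(id_vars="child", var_name="rater", value_name="score")
icc = pg.intraclass_corr(data=long, targets="child", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC"]])
```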
Each SDQ subscale and the Difficulty scale were fitted to the Rasch model to examine fit, using RUMM2030 software. 37 Fit was considered acceptable if there was a non-substantial deviation of individual items and respondents from the Rasch model (individual item and person fit residuals should be within the range of ±2.5, the average fit residual statistics should be close to a mean of 0 and SD of 1, the item χ² should be non-significant). In addition, we used the root mean square error of approximation (RMSEA) to examine fit, with RMSEA<0.02 suggesting data fit the Rasch model (box 1). 32 Log-transformed item scores generated from the response choices should reflect the increasing or decreasing latent trait to be measured (threshold ordering). 30 When a given level of problems is not confirmed by the expected response option to an item, disordered thresholds are observed. Disordering is only considered statistically significant if the 95% CI of the threshold locations do not overlap. When significant disordering is observed, response categories can be combined.
An assumption of the Rasch model is that the answers to one item should not be dependent on the responses to another item, conditional on the trait being measured. This local independence is examined by exploring the correlations between items' residuals, which should not be more than 0.20 above the average residual correlation. 38 If locally dependent items are observed, they can be combined into a testlet, a bundle of items that share a common stimulus. 39
The Rasch model expects that each item is invariant (unbiased) across key groups (eg, ethnicity or gender), 40 41 examined statistically with an analysis of variance and visually by examining the item characteristic curves. Variance (DIF) can be uniform; the bias is present consistently across the trait. For example, uniform DIF by ethnic group implies that item difficulty is different for individual ethnic groups across the trait even though their underlying level of problems is the same. DIF can also be non-uniform; the bias is not consistent across the trait. DIF analysis is affected by large sample sizes, with negligible DIF showing as statistically significant; hence, inspection of item characteristic curves is also important. When uniform DIF is observed, two strategies can be employed. First, DIF items (if present in >1 item) can be combined into a testlet to examine if DIF is cancelled out at the test level; second, the item can be split by the variable for which DIF is observed. In our analysis, we considered the final solution to be the one with the best improvements in fit statistics.
Box 1 Calculation of root mean square error of approximation (RMSEA)
In Rasch analysis, RMSEA is calculated from the item-trait interaction chi-square χ² (obtained from the analysis within the Rasch software), its degrees of freedom df, and the sample size N. 32 Notice that the RMSEA has an expected value of 0 when the data fit the model. Overfit of the data to the model, χ²/df<1, is ignored. For a given χ², RMSEA decreases as sample size (N) increases.
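The formula itself appears to have been lost from Box 1 in extraction. A plausible reconstruction, consistent with the properties stated in the box (expected value of 0 at exact fit, overfit ignored, decreasing in N for a given χ²), is the standard chi-square-based form below; this is our reconstruction, not necessarily the exact expression the authors printed.

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^2 - df,\; 0\right)}{df\,(N-1)}}
```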
Another key assumption of the Rasch model is that a scale must be unidimensional. This is examined by creating two subsets of items, identified by a principal component analysis of the item residuals, with those loading negatively forming one set and those positively loading the second set. 42 An independent t-test is used to compare estimates derived from the two subtests for each respondent. When fewer than 5% of the t-tests are significant (or the 95% CI of t-tests includes 5%), unidimensionality is supported. 42 43 Targeting of the subscales to the population was examined with person-item-threshold maps.
Internal consistency was examined with Cronbach's α and Person Separation Index (PSI) statistics. PSI is an indicator of the number of statistically different strata (groups) that the test can identify in the sample. 44 Interpretation of the PSI is similar to Cronbach's α, with values ≥0.70 suitable for group comparisons and ≥0.85 for individual clinical use. However, Cronbach's α can only be calculated when there are no missing data and is not considered robust with skewed data. 45 Therefore, we present PSI and Cronbach's α in summary tables as well as the number of groups between which the subscale is able to discriminate. 46 Finally, for polytomous scales, two Rasch models can be used. The Rating Scale version assumes that the distance between thresholds is equal across items. 14 The Unrestricted (Partial Credit) model does not make this assumption. 47 A log-likelihood test examines whether results from these two models are significantly different, and if so, the Partial Credit model should be used. This test was significant (p<0.001) for all subscales and therefore the Partial Credit model was used.
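Of the two internal-consistency statistics discussed above, the PSI is produced within the Rasch software (RUMM2030), whereas Cronbach's α can be computed directly from an item-score matrix; a minimal sketch with hypothetical five-item data follows.

```python
# Minimal sketch of Cronbach's alpha for a k-item subscale, computed from a
# respondents-by-items score matrix; the five-item data below are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items (no missing)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = np.array([[0, 1, 0, 2, 1],
                   [2, 2, 1, 2, 2],
                   [0, 0, 0, 1, 0],
                   [1, 2, 1, 2, 1],
                   [0, 1, 1, 1, 0]])
print(round(cronbach_alpha(scores), 2))
```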
Patient and public involvement
End users of our research include families, preschool teachers, service providers and the Ministry of Health.
The research aims and questions were part of a tender prepared by the Ministry of Health, to which we responded. Thus, we did not have the ability to include end users in the development of study questions. The analysis presented here did not require participant recruitment or data collection and end users were therefore not consulted about the study design. Researchers in New Zealand have a responsibility to ensure their research is of value and culturally responsive to Māori. Therefore, guidance for the study was sought from the University's Mātauranga Māori committee, whose members are drawn from a wide range of Māori communities. The findings from the part of the study reported here were presented to the Ministry of Health.
Results
The child gender split was balanced, with 49% female and 51% male in the full parent sample as well as the cross-comparison sample; 99.6% were aged 4 at the time of the B4SC (0.4% of children had recently turned 5). Child ethnicity in the parent sample was 57% NZE, 23% Māori, 12% Pasifika and 8% Asian; this distribution was similar in the cross-comparison sample (63% NZE, 16% Māori, 7% Pasifika and 7% Asian). As noted above, there were no missing data in the selected samples.
Cross-informant reliability (n=17 006)
Cross-informant reliability between parents and teachers was generally poor, as measured by correlations (all <0.5, mean 0.28) and ICCs (all <0.6, mean 0.13). Cross-informant reliability was best for the Hyperactivity subscale and worst for the Prosocial subscale, and better for NZE children and worst for Pasifika children (table 1).
Internal validity and cross-cultural equivalence
Table 2 displays results from the Rasch analysis.
Emotional Symptoms subscale
All items in this subscale had ordered thresholds, items were locally independent and the subscale was unidimensional. Person fit was adequate, with a mean person fit residual reasonably close to 0 and the SD below 1.4 (table 2: analysis 1). However, overall fit to the Rasch model was unsatisfactory (RMSEA>0.02). PSI was below 0 and Cronbach's α 0.15. All item fit residuals were within the acceptable range of −2.5 to 2.5; however, four out of five item χ² values were statistically significant, indicating misfit.
There was statistically significant uniform DIF by ethnicity in items 16 and 24, which was confirmed by visual inspection of the item characteristic curves (figure 1). Items 16 and 24 were combined into a testlet. This resulted in poorer person fit and similar RMSEA values (0.072). We therefore split these items by ethnic group instead, creating unique items for NZE, Māori, Asian and Pasifika peoples, resulting in 11 items for the subscale. This step improved overall fit to the Rasch model; however, the RMSEA was still greater than the acceptable value of 0.02 and internal consistency remained unacceptably low (table 2: analysis 2).
After items were split, all item fit residuals were within range, although two still had statistically significant χ² values (item 24NZE and item 8). Table 3 shows that the easiest item to endorse is item 16 and the hardest to endorse is item 13. The split item locations show that for children with the same level of Emotional Problems, item 16 is more readily endorsed when they are Māori and less readily endorsed when they are Pasifika (difference of 0.42 logits). Item 24 is endorsed more readily by parents of Asian than NZE children (difference of 0.49 logits). Figure 2 displays the targeting of the subscale to the population, clearly demonstrating the large number of extreme cases.
Conduct Problems subscale
Conduct Problems item thresholds were ordered, items were locally independent and person fit and unidimensionality were acceptable. However, overall fit to the model was unsatisfactory (RMSEA>0.02, table 2: analysis 3). Internal consistency was poor (PSI 0.10, α 0.65) with the subscale being able to discriminate between three strata.
Item fit residuals were within acceptable range though two had significant χ² (items 5 and 18). Statistically significant DIF by ethnicity was present for item 12 and by gender for item 7. These two items were split by ethnicity and gender, respectively (table 2: analysis 4), resulting in satisfactory fit residuals, one item with a significant χ², significant improvement in RMSEA (0.03) but poor internal consistency (PSI=0.11, splitting items leads to missing data and α cannot be calculated).
The easiest item to endorse was item 5 and the hardest item 12 (table 3). The split item locations show that for children with the same level of Conduct Problems, item 12 is more readily endorsed when they are Pasifika and less readily endorsed when they are NZE (difference of 1.22 logits). Item 7 is endorsed more readily by parents of boys than girls (difference of 0.32 logits). Targeting showed a floor effect (figure 2).
Hyperactivity subscale
Ordered thresholds, local independence, person fit and unidimensionality were observed for the Hyperactivity subscale; however, overall fit to the model and internal consistency were unsatisfactory (RMSEA>0.02; PSI 0.30, α 0.48; the subscale discriminates between three strata, table 2: analysis 5). Item fit residuals were out of range for item 21 and item 25 had a significant χ². Uniform DIF was statistically significant by ethnicity in two items (15 and 21). These items were therefore split by ethnicity. This improved fit to the Rasch model (table 2: analysis 6) and displayed better fit than when these two items were combined into a testlet. Item fit residuals were within the acceptable range of −2.5/+2.5; only one item had a significant item χ² statistic (table 3), and RMSEA was close to 0.02. However, internal consistency remained poor (PSI=0.31). The easiest item to endorse was item 15 (for Asian children) and the hardest item 10. The split item locations show that, for children with the same level of hyperactivity problems, item 15 is more readily endorsed when they are Asian and less readily endorsed for the other ethnic groups (figure 2).
Prosocial subscale
The subscale met the requirements for threshold ordering, local independence, person fit and unidimensionality. Overall fit to the Rasch model and internal consistency were unsatisfactory (RMSEA>0.02; PSI negative values, α 0.29; the subscale was able to discriminate between two strata, table 2: analysis 9). Item fit residuals were within the −2.5/+2.5 range, though two had significant item χ² statistics. There was no DIF. Item 17 was the easiest to endorse; item 4 was the hardest to endorse. A ceiling effect was observed in the person-item-threshold map (figure 2).
Difficulty scale
Two items had disordered thresholds; however, this was not statistically significant and item response categories did not need to be combined. Some local dependency was present in two item pairs. Unidimensionality was observed (table 2: analysis 10). Five item fit residuals were out of the acceptable range of −2.5/+2.5 and four items showed uniform DIF by ethnicity (items 12, 16, 21 and 23). To examine whether DIF was present at the test level, these items were combined into a testlet. This resulted in an absence of DIF; however, one item pair remained locally dependent (items 2 and 10). A second testlet was created to deal with this local dependency. The resulting scale was unidimensional, with locally independent items (table 2: analysis 11). The RMSEA was within range, suggesting overall fit to the Rasch model. Internal consistency was good (PSI 0.71, α 0.77; the scale was able to discriminate between six distinct strata). The fit residual for one item was slightly out of range (item 15, −2.777); however, given the negative value of this residual, this indicates redundancy rather than misfit and the item was therefore retained. The easiest item to endorse was item 15, the hardest item 14. The person-item threshold map showed a normal distribution, although located to the left of the item locations on the latent trait. A conversion table was produced, which can be used to convert the raw ordinal score to an interval scale (table 4).
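A brief illustration of how a clinician-facing conversion table of this kind is applied follows; every value in the lookup is hypothetical, since the actual entries of table 4 are not reproduced in this text.

```python
# Illustrative sketch of applying a raw-score-to-interval conversion table
# such as table 4; all lookup values below are hypothetical placeholders.
RAW_TO_INTERVAL = {0: 0.0, 1: 2.1, 2: 3.4, 3: 4.3, 4: 5.0}  # hypothetical

def to_interval(raw_total: int) -> float:
    """Convert a raw Total Difficulty score to its interval-scale estimate."""
    return RAW_TO_INTERVAL[raw_total]

print(to_interval(3))
```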
Discussion
This study has shown that the SDQ item response categories work well; however, the five subscales diverge significantly from the Rasch model and four SDQ subscales include items that are biased by key variables, with ethnicity making the greatest contribution. This raises critical questions about cultural equivalence. The five subscales suffer from floor and ceiling effects and their internal consistency statistics are well below the acceptable range. By contrast, the Total Difficulty scale, which combines the four subscales capturing children's problems, is unidimensional, fits the Rasch model (after dealing with DIF and local dependency) and has internal consistency sufficient to distinguish between six groups of children. The study has also shown that parents and teachers score children in their care differently. Thus, all three study hypotheses are rejected. This section will discuss our findings in terms of fit to the Rasch model, internal consistency, cultural equivalence and cross-informant reliability.
Fit to the Rasch model
The Total Difficulty scale did fit the Rasch model, after dealing with four DIF items and two locally dependent items. This scale has good internal consistency and is able to discriminate between six groups of children on the latent trait. We observed that the population distribution, while following a normal pattern, was to the left of the item locations on the latent trait. Thus, the precision of person estimates at the lower end of the scale will not be as good as for those at the higher end of the scale. However, the SDQ is used for screening and arguably precise measurement at the lower end is not needed, since all one needs to establish is that the child does not need to be referred for further assessment or intervention. As we achieved fit to the Rasch model, we were able to provide a conversion table which can be used by clinicians to convert the raw ordinal score to a more accurate interval level and which takes account of DIF.
Internal consistency
The five subscales are relatively short, which affects internal consistency and the subscales' ability to make fine distinctions between groups of people on the underlying trait. 25 In addition, there was significant divergence between the PSI and Cronbach's α statistics, with PSI being much smaller than alpha. This divergence can be explained by the way these statistics are calculated. The calculation of Cronbach's α assumes all SEs for individuals are the same, making it not a very robust statistic for skewed data. 45 This assumption results in relatively high values even in the presence of extreme scores, and the Cronbach's α values are therefore meaningless for SDQ data. This issue has not been raised in the SDQ literature; indeed, Cronbach's α values are widely reported as satisfactory. 48 In Rasch analysis, the SE for every individual is estimated and the calculation of the PSI statistic takes these into account. Since SEs are largest for people with extreme scores, PSI will be smaller than Cronbach's α, as observed in our skewed data. However, the purpose of the SDQ is to identify those children who would benefit from further assessment or intervention. Thus, the fact that we observed a floor and ceiling effect is not necessarily problematic.
Cultural equivalence
This study examined invariance by ethnicity at the item level and found a lack of cultural equivalence. DIF (especially by ethnicity) was found for all four subscales measuring problems, suggesting there are a number of questions to which parents respond differently despite overall scoring the same amount of problems on the trait being measured. The only other Rasch analysis study we were able to locate (conducted on data from children aged 12 to 18) did not include a DIF analysis and thus we cannot compare our findings against theirs. 24 Lack of measurement invariance of the subscales has also been shown by others (although on older children than in our sample) when using a CFA approach. 50 51 Richter et al found varying factor loadings and thresholds between ethnic Norwegians and minority ethnic groups of adolescents and concluded that the total difficulty score is preferable. 49 Similarly, Ortuño-Sierra et al demonstrated that measurement invariance was only partial, with 11 of the 25 items being invariant across different European samples. 50 By contrast, others have shown measurement invariance between British Indian and British white children using multigroup confirmatory factor analyses and demonstrated evidence of acceptable fit across ethnicity, although again their population was older (5-16 years) than the sample considered here. 51 If measurement variance (DIF) is ignored, the child's difficulties can be overestimated or underestimated since the difficulty of the item varies by ethnic group, potentially leading to inaccurate identification of cases. This is important, given caseness has been shown to vary for different ethnic groups within the same country and between countries. [52][53][54] Our study is unable to assess why such DIF occurs, since the study drew on secondary data. However, we can pose some possible factors that may have affected measurement invariance, as discussed below.
Our recent qualitative study suggests there is variation in the way the SDQ is administered: some parents complete the tool by themselves and others receive support from nurses, possibly impacting on the way questions are interpreted. 29 In addition, New Zealand preschool parents from Māori, Pacific Island, Asian and new immigrant groups questioned the cultural validity of the SDQ. 29 Respondents in an Australian qualitative study exploring the SDQ in Aboriginal community-controlled health services reported that the use of a questionnaire as opposed to a general conversation or interview was deemed culturally inappropriate and that inter-relationships with peers were considered of less importance than relationships with family and participants. 55 There are 85 different language versions available from the Youth in Mind website (http://www.sdqinfo.org/), though not one in Te Reo Māori. Translations and adaptations are not permitted without the involvement of that study team, which provides confidence in the robustness of translations. However, for our study, we do not know whether respondents were offered the SDQ in the language of their choice, as such data are not collected as part of the B4SC. The literature includes six studies that examined and demonstrated some issues with SDQ translations. 13 Using a language version that is not understood by respondents will affect validity, 56 which may have occurred here.
It is possible that poor literacy impacts on answering the SDQ, as found by others. 57 58 In New Zealand, there are proportionally many people with poorer than average literacy skills. 59 In addition, 18.6% of the New Zealand population report speaking two or more languages, the majority being born overseas (60.4%); many among these will have English as a second language. 60 These aspects have particular relevance for Māori whānau (extended families) in New Zealand, where it is estimated that 20% of Māori children and youth have Conduct Problems. 61 Therefore, it is important that screening of Māori children during the preschool years is accurate, ensuring that Māori whānau both receive the support they need and at the same time are not pathologised by false positive findings. The 2013 New Zealand Census found that 21% of the almost 700 000 Māori population could hold a conversation about everyday things in Te Reo Māori, which has been a national official language since 1987. 62 Yet, there is no Māori version of the SDQ, or a New Zealand version incorporating commonly used Māori words.
Cross-informant reliability
Cross-informant reliability was examined with ICCs, which were well below the acceptable cut-off value of 0.6 (the mean in our study was 0.126). However, some argue that correlation coefficients can be used in the assessment of cross-informant reliability of the SDQ since parents and teachers make SDQ ratings based on different sources of information. 7 48 Our systematic literature review found that weighted averages of coefficients between different informants ranged from 0.24 to 0.45, 13 similar to findings by others (range 0.26-0.47). 48 In our study, the mean correlation coefficient was 0.28, meaning only 8% of the variance can be explained by scores from different informants. This implies the importance of taking into account the views of both parents and teachers when making a decision for onward referral, a practice that is not commonplace in New Zealand. 63 A key strength of this study is the inclusion of all preschool children in New Zealand for whom an SDQ assessment was available in 2011, resulting in our ability to assess the validity of the tool at the population level, with sufficient power to make sound conclusions and the ability to generalise to the wider New Zealand preschool population. Another strength was robust data quality checks and the exclusion of 39% of cases for which we had some concerns about quality (being incomplete or containing multiple inconsistencies). From our steering group meetings, we gathered that there were a few reasons underlying these quality issues. In some DHBs, staff enter only the total scores, as opposed to item-level data. This practice leads to potential summing errors of total scores and these could not be checked or indeed analysed (hence we excluded these cases). Second, some DHBs told us they set the default values of answers as zero rather than blank. Consequently, when there were missing data (eg, if a teacher-completed SDQ was not available), the software would have summed these and arrived at total scores of 0. Given that the Prosocial scale is scored in the opposite direction of the others, zero scores on all subscales would be highly inconsistent and therefore shed doubt on data quality (and hence these were also excluded). An additional limitation was our inability to assess DIF by other key variables that may affect validity, for example, first language or country of birth, as such data were not available.
In conclusion, the Total Difficulty scale is internally valid and has acceptable internal consistency. Clinicians should use the conversion table as it accounts for bias by ethnic group. The five subscales are not valid and not suitable for use in their own right in New Zealand. Since consistency of scores between parents and teachers was poor, it is advisable to use both parents' and teachers' feedback when considering children's needs for referral for further assessment. Future work should examine whether validity is affected by different language versions used (in the same country).
Ethics approval New Zealand Health and Disability Ethics Committee (Northern A, NTY/12/04/028/AM05) and the Auckland University of Technology's Ethics Committee (12/163).
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement Quantitative data from the study can be obtained from the author, subject to the funder's permission.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
"year": 2018,
"sha1": "44c6cb451ced1c34e3b5a32be5496b7b2c1f510a",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/8/4/e021551.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "43ff7149b5fea3646172099ab998b48f812710dc",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evacuation of residents in a natural disaster during the COVID-19 era
T. Sawano*, N. Ito, A. Ozaki, Y. Nishikawa, S. Nonaka, Y. Kobashi, A. Higuchi and M. Tsubokura
From the Research Center for Community Health, Minamisoma Municipal General Hospital, 54-6, 2 Choume, Takami-cho, Haramachi-ku, Minamisoma, Fukushima 975-0033, Japan; Department of Radiation Health Management, Fukushima Medical University School of Medicine, 1 Banchi, Hikarigaoka, Fukushima, Fukushima 960-1247, Japan; Department of Surgery, Jyoban Hospital of Tokiwa Foundation, 57 Banchi, Jyobankamiyunaga-Yamachi, Iwaki, Fukushima 972-8322, Japan; Department of Breast Surgery, Jyoban Hospital of Tokiwa Foundation, 57 Banchi, Jyobankamiyunaga-Yamachi, Iwaki, Fukushima 972-8322, Japan; Department of Internal Medicine, Soma Central Hospital, 5-18, 3 Choume, Okinouchi, Soma, Fukushima 9760016, Japan; and Medical Governance Research Institute, 12-13, 2 Choume, Takanawa, Minato-ku, Tokyo 1080074, Japan
At 11:07 p.m. on 13 February 2021, an earthquake with a magnitude of 7.3 struck Fukushima, Japan; it was considered to be an aftershock of the Great East Japan Earthquake (GEJE) that occurred in March 2011. 1 A tsunami of up to 20 cm was observed as a result of this earthquake. Numerous houses, expressways and railroads were damaged, mainly in Miyagi and Fukushima prefectures, and as of 15 February, 152 residents had been injured; fortunately, there were no fatalities and the damage was limited.
This was the first major earthquake since the outbreak of the novel coronavirus disease (COVID-19) in Wuhan, China, in December 2019 and the subsequent SARS-CoV-2 outbreak in Japan. Evacuation shelters were opened to affected residents in the hardest-hit municipalities. In Soma City, a northern coastal municipality in Fukushima Prefecture that registered 6 upper on the Japanese seismic intensity scale in this quake, 2 evacuation shelters were opened 40 min after the earthquake hit, and 87 people, including many elderly, had evacuated by 2:30 a.m. To prevent the spread of COVID-19, in addition to hand disinfection and body temperature checks, two buildings on the same site were prepared for the zoning of people with fever. In the gymnasium, which served as the evacuation shelter, open-roofed tents, each containing a single household, were set up at intervals of approximately two meters (Figure 1). The response was swift and adequate, possibly because the damage was limited and the earthquake hit a municipality that had experienced the GEJE and the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident. Even so, this event highlighted the importance of preparing for the evacuation of residents in the COVID-19 era.
The importance of controlling communicable diseases during natural disasters was recognized even before the COVID-19 pandemic. 2,3 For example, in Japan, hospital admission rates and the estimated morbidity of pneumonia increased significantly immediately after the Great Hanshin-Awaji Earthquake in 1995. 4 It has been suggested that influenza virus, norovirus and tuberculosis infections may have occurred in evacuation shelters after the GEJE. 5,6 In light of these cases, the need to train experts in infectious disease control during disaster evacuation was suggested even before the COVID-19 pandemic in Japan, one of the most disaster-prone areas worldwide. The evacuation response to natural disasters during the COVID-19 pandemic requires even more attention. For example, evacuations associated with hurricanes during the COVID-19 pandemic may accelerate the spread of infection, emphasizing the need to carefully consider residents' evacuation destinations. 7 Recent reports have also suggested that disasters may exacerbate infections, especially among the poor. 8 Considering the importance of countermeasures against infectious diseases during past disasters, organizations such as the World Health Organization, the Cabinet Office of the Government of Japan and the Japan Medical Association issued strategies for evacuation shelter use during the COVID-19 pandemic, calling for the attention of the public and municipalities. 9 These strategies recommend opening as many evacuation shelters as possible, including hotels and public facilities, limiting the number of people in each shelter and dispersing evacuees. In public facilities (e.g. school gymnasiums), it is also recommended to maintain social distance by assigning one area to each family and allowing more space between areas. Particularly in areas where the COVID-19 pandemic is ongoing, it is crucial to plan for residents who have contracted COVID-19 and are receiving treatment at home.
Notably, evacuation of the vulnerable, such as the elderly and the disabled, requires special attention. Such vulnerable populations are likely to be affected more severely the larger the scale of the disaster. Vulnerable residents who should evacuate must not be deterred from doing so for fear of being infected with COVID-19. The experience of the FDNPP accident showed that evacuation may impose a heavy physical and mental burden on the vulnerable. 10 A major challenge is that these populations are also highly vulnerable to COVID-19. Thus, significant care must be taken to ensure that infection control and the minimization of the health impacts of evacuation can be implemented safely at the same time. During the COVID-19 pandemic, protecting the health of vulnerable populations requires further consideration.
Conflict of interest: None declared.
"year": 2021,
"sha1": "bb0bb7ae4bb90f62d3b1f0ee39aa732ce5d9103d",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/qjmed/article-pdf/114/7/445/41098324/hcab044.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9103ddf9a6e9f64149b58191de00ffa7537b61bc",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Artemisinin-based combination therapy in pregnant women in Zambia: efficacy, safety and risk of recurrent malaria
In Zambia, malaria is one of the leading causes of morbidity and mortality, especially among under-five children and pregnant women. For the latter, the World Health Organization recommends the use of artemisinin-based combination therapy (ACT) in the second and third trimester of pregnancy. In a context of limited information on ACT, the safety and efficacy of three combinations, namely artemether–lumefantrine (AL), mefloquine–artesunate (MQAS) and dihydroartemisinin–piperaquine (DHAPQ), were assessed in pregnant women with malaria. The trial was carried out between July 2010 and August 2013 in Nchelenge district, Luapula Province, an area of high transmission, as part of a multi-centre trial. Women in the second or third trimester of pregnancy and with malaria were recruited and randomized to one of the three study arms. Women were actively followed up for 63 days, and then at delivery and 1 year post-delivery. Nine hundred pregnant women were included, 300 per arm. PCR-adjusted treatment failure was 4.7% (12/258) (95% CI 2.7–8.0) for AL, 1.3% (3/235) (95% CI 0.4–3.7) for MQAS and 0.8% (2/236) (95% CI 0.2–3.0) for DHAPQ, with a significant risk difference between AL and DHAPQ (p = 0.01) and between AL and MQAS (p = 0.03). Re-infections during follow-up were more frequent in the AL (HR: 4.71; 95% CI 3.10–7.2; p < 0.01) and MQAS (HR: 1.59; 95% CI 1.02–2.46; p = 0.04) arms than in the DHAPQ arm. PCR-adjusted treatment failure was significantly associated with age under 20 years [Hazard Ratio (HR) 5.35 (95% CI 1.07–26.73; p = 0.04)] and higher malaria parasite density [3.23 (95% CI 1.03–10.10; p = 0.04)]; women under 20 years also had a significantly higher risk of re-infection [1.78 (95% CI 1.26–2.52; p < 0.01)]. The three treatments were generally well tolerated. Dizziness, nausea, vomiting, headache and asthenia as adverse events (AEs) were more common with MQAS than with AL or DHAPQ (p < 0.001). Birth outcomes were not significantly different between treatment arms. As new infections can be prevented by a long-acting partner drug to the artemisinins, DHAPQ should be preferred in places such as Nchelenge district where transmission is intense, while in areas of low transmission intensity AL or MQAS may be used.
Background
Malaria is a poverty-related disease and a major public health problem in many sub-Saharan African countries where over 90% of the cases worldwide are found. Pregnant women and children are at higher risk of malaria infection and of developing serious complications related to the disease. Malaria in pregnancy is associated with higher risk of maternal anaemia, low birth weight, spontaneous abortion, stillbirths and maternal mortality [1][2][3].
There are few treatments with known safety and efficacy for malaria in pregnancy. Some anti-malarials known to be efficacious, e.g. quinine, are not well tolerated, resulting in poor compliance and a higher risk of treatment failure [4]. For other treatments, there are insufficient data, as pregnant women are systematically excluded from treatment efficacy studies. Therefore, pregnant women lack proven effective and safe anti-malarial therapies [5]. In such a context of limited information, and weighing risks and benefits, the World Health Organization (WHO) allows the use of artemisinin-based combination therapy (ACT) during the second and third trimester of pregnancy [1].
To confirm this expert opinion, we assessed the safety and efficacy of three artemisinin-based combinations, namely mefloquine–artesunate (MQAS), dihydroartemisinin–piperaquine (DHAPQ) and artemether–lumefantrine (AL), in pregnant women in the second or third trimester with a confirmed Plasmodium falciparum malaria infection. This study was part of a multi-centre trial carried out also in Burkina Faso, Ghana and Malawi. Each site tested three ACT medicines so that each country dataset could be analysed separately [6] and provide detailed site-specific data. This paper reports results collected at the Zambian site, Nchelenge, Luapula Province. Because an anti-malarial drug's efficacy depends not only on the parasite's susceptibility to the drug and on its blood concentration but also on the host's immunity, which may be affected by factors such as pregnancy itself, age, parasite density and malaria transmission intensity, the impact of these factors on treatment outcome was assessed [7,8]. The results of this study provide national policy makers with information for a wider and alternative choice of treatments to be used during pregnancy.
Methods
The trial was conducted between June 2010 and August 2013 in Nchelenge district, Luapula Province, Zambia, one of the provinces where malaria prevalence is higher than the national average (32.1% vs 14.9% in 2012) [9]. Nchelenge district is located in the northern part of the province on the swampy shores of Lake Mweru, borders the Democratic Republic of Congo (DRC) and has an estimated population of 178,000 inhabitants, mostly peasant farmers and/or fishermen. The district has three seasons: a cool dry winter, a hot dry season and a rainy season. Malaria transmission is perennial because of the presence of Anopheles funestus during the dry season and Anopheles gambiae in the wet season [10]. In 2012-2013, the entomological inoculation rate (EIR) was estimated at 70 infective bites/person/year [11], and annual malaria incidence at more than 700/1000 person-years in the general population and more than 1900/1000 person-years among under-five children [12]. The study protocol of this trial has been described in detail elsewhere [13]. Briefly, pregnant women aged at least 15 years, in the second or third trimester, with Hb ≥7 g/dL, HIV negative, with a P. falciparum mono-infection of any density and irrespective of symptoms (excluding illness at the time of screening that required hospitalization, such as severe malaria) were recruited into the trial and randomized, using a randomization list of 300 participants per arm, to one of the following treatments: artemether-lumefantrine (AL), mefloquine-artesunate (MQAS) and dihydroartemisinin-piperaquine (DHAPQ). Sealed envelopes labelled with the patient's unique code and containing the treatment allocation were provided according to the randomization list. A woman was defined as symptomatic if any of the following was present: fever (temperature >37.5 °C) at baseline with parasitaemia of any density; a parasite count >2000/µL, regardless of symptoms; or at least three of the following symptoms, with parasitaemia of any density: fever in the past 24 h, weakness/fatigue, muscle and/or joint aches, headache, convulsion. Gestational age was estimated by symphysio-fundal height and then confirmed by obstetric ultrasound, including a fetal viability assessment [14,15]. A blood sample of about 5 mL was collected before treatment for the assessment of haematological and biochemistry parameters. All study drugs were given on days 0, 1 and 2 under direct observation and according to the manufacturers' recommendations (Eurartesim® from Sigma-Tau Industrie Farmaceutiche Riunite S.p.A., 40 mg of dihydroartemisinin and 320 mg of piperaquine phosphate per tablet, 3 tablets once per day over 3 days; mefloquine-artesunate from Far-Manguinhos, Ministério da Saúde-Fundação Oswaldo Cruz, 100 mg artesunate and 220 mg mefloquine per tablet, 3 tablets once per day over 3 days; Coartem® from Novartis Pharma AG, 20 mg artemether and 120 mg lumefantrine per tablet, 4 tablets twice per day over 3 days). After completing the 3-day treatment, patients were asked to return to the clinic for follow-up visits on days 3 and 7 and then once every week until day 63. At each visit, a medical history and current clinical signs and symptoms were collected, including information on any adverse events (AEs), together with a blood sample for malaria smears and dried blood spots (DBS) for later genotyping, for full blood counts (days 7, 14, 28 and 63 only) and for total bilirubin, alanine aminotransferase (ALAT) and creatinine (days 7 and 14 only).
Rescue treatment (quinine) for recurrent infections was given according to national guidelines [16]. (In Zambia, AL is used for the treatment of uncomplicated malaria in the second and third trimester of pregnancy.) At the end of the active follow-up period, women were asked to continue attending the antenatal clinic monthly, or whenever they felt unwell, until delivery. Recurrent malaria episodes after day 63 were treated with quinine.
Giemsa-stained thick and thin blood films were read independently by two readers, with a third reader in case of significant discrepancy. Parasite density was estimated by counting the number of asexual parasites per 200 white blood cells (WBCs), assuming a WBC count of 8000/µL. Total bilirubin, ALAT and creatinine were measured using a Flexor Junior biochemistry analyzer. Full blood counts were obtained using a Sysmex XT-2000i haematology analyzer. Haemoglobin (Hb) was measured using HemoCue (Ängelholm, Sweden). For polymerase chain reaction (PCR) analysis, DBS were prepared on filter paper (Whatman 3MM) and subsequently transported to the Institute of Tropical Medicine (ITM), Antwerp, Belgium, where centralized genotyping (GluRP, MSP2 and MSP1) was conducted [17]. Samples that failed to produce a result were classified as indeterminate.
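The density calculation follows directly from the stated assumption of 8000 WBCs/µL; a minimal R sketch (function name ours) is:

```r
# Thick-film density: asexual parasites counted against 200 WBCs,
# assuming 8000 WBCs/uL, i.e. density = parasites_counted * 8000 / 200.
parasite_density <- function(parasites_per_200_wbc,
                             assumed_wbc_per_ul = 8000,
                             wbc_counted = 200) {
  parasites_per_200_wbc * assumed_wbc_per_ul / wbc_counted
}

parasite_density(50)   # 50 parasites per 200 WBCs -> 2000 parasites/uL
```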
Consent was obtained in all cases from the study participants and, for those between 15 and 17 years old, also from a legal representative. The study was approved by the Institutional Review Board of the ITM and the Ethics Committee of the Antwerp University Hospital. In addition, the study was approved by the Tropical Diseases Research Centre (TDRC) Ethics Review Committee, the Zambia Medicines Regulatory Authority and the Zambia Ministry of Health. The trial was registered at clinicaltrials.gov (NCT00852423).
The primary endpoints were the PCR-adjusted cure rates at day 63 and the safety outcomes as described elsewhere [13]. AEs and serious AEs (SAEs) were recorded and monitored regularly throughout the study by an independent Data and Safety Monitoring Board (DSMB). Secondary endpoints were PCR-unadjusted cure rates at day 63, PCR adjusted and unadjusted time to treatment failure, asexual parasite clearance [18], gametocytaemia (prevalence and density) and Hb changes during follow up.
The study was designed to show that all 3 treatments had similar (PCR-adjusted) cure rates (within 5% difference), with 95% power for each of the 3 pair-wise comparisons and 80% power for the combined hypothesis that all treatments were therapeutically equivalent [13].
Data were captured in an electronic clinical record form (e-CRF) developed in MACRO (InferMed). A statistical analysis plan was pre-specified before the database lock. For the primary outcome, three analysis populations were used: (1) per-protocol (PP), (2) intention-to-treat (ITT) excluding losses to follow-up (LTFU)/withdrawals and missing/indeterminate PCR results, and (3) ITT with multiple imputation of LTFU/withdrawals and missing/indeterminate PCR results. The PP analysis was considered the primary analysis approach. Major protocol violators, defined prior to analysis, were excluded from the PP analysis.
PCR-adjusted treatment failure rates between pair-wise treatment groups were compared using a Chi-square test. The 95% exact confidence intervals for the difference in failure rates were determined. If the difference in true (PCR-adjusted) failure rates was less than 5%, treatments were considered therapeutically equivalent. Briefly, the risk difference was computed for the following pairs: AL and DHAPQ; AL and MQAS; and MQAS and DHAPQ. The 95% confidence interval for a proportion was calculated using the Wilson score method. Baseline variables to be included in the Cox regression model to compute the adjusted hazards of re-infection (new infection) and recrudescence were selected using the log-rank test for equality across strata. A covariate was included if its p value was 0.25 or less, with the exception of study treatment dosage. The starting covariates were treatment, symptomatic malaria, parasite density, maternal age, gravidity, anaemia, study treatment dosage, gestational age, and haematological and biochemical parameters. Covariates in the multivariable model that were not statistically significant (p > 0.05) were dropped, except gravidity and gestational age (the latter dropped for new infection), which the literature identifies as important variables to retain in the final model. The proportional hazards assumptions for the Cox regression model were evaluated using a graphical approach [19].
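The Wilson score interval can be written out from the standard formula; the following R sketch (function name ours) reproduces the AL arm's reported interval:

```r
# Wilson score 95% CI for a proportion; prop.test(x, n, correct = FALSE)
# in base R yields the same interval.
wilson_ci <- function(x, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  p <- x / n
  centre <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
  half   <- z / (1 + z^2 / n) * sqrt(p * (1 - p) / n + z^2 / (4 * n^2))
  c(lower = centre - half, upper = centre + half)
}

# e.g. the AL arm: 12 PCR-adjusted failures out of 258 women
round(100 * wilson_ci(12, 258), 1)   # roughly 2.7 to 8.0, as reported
```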
Haematological and biochemistry profiles by day of follow-up were assessed using box plots at each time point. Differences in these parameters between treatment arms at each day of follow-up were assessed using the Kruskal-Wallis test.
Firth logistic regression, which accounts for separation and 'empty cells' in the model, was used to assess the impact of placental malaria (categorized as placental malaria or no infection) on birth outcomes (stillbirth, miscarriage, premature live delivery, intrauterine fetal death and term live birth). A "stillbirth" was defined as a baby born dead after 24 weeks' gestation; a baby born dead before or during the 24th week of gestation was considered a "miscarriage". "Preterm live born" was defined as a delivery before 37 weeks of gestation based on echography, calculated as the date of delivery minus the date of echography (in weeks) plus the gestational age determined through echography, or alternatively based on the Ballard score, which determines gestational age as the sum of neuromuscular and physical scores [20]. A neonate with a score of 30 or lower was labelled "preterm" using this method. Logistic regression was used to assess the impact of placental malaria (categorized as placental malaria or no infection) on birth weight, and also to assess risk factors for malaria. Placental malaria was classified as acute infection, chronic infection, past infection or no infection, and analysed as a binary outcome: placental malaria or no infection. For safety, all individuals having received at least one treatment dose were included and analysed in terms of proportions, with a Chi-square test for differences. Delivery-related AEs, caesarean sections or reasons for caesarean sections and pregnancy outcomes were not included in the AE report; pregnancy-related SAEs were also excluded.
Results
A total of 1722 pregnant women were screened for malaria infection, regardless of symptoms. Of these, 900 met the inclusion criteria and were randomized to one of the three study arms: 300 to AL, 300 to MQAS and 300 to DHAPQ. The ITT analysis included all 900 pregnant women. The PP analysis included 729 women, i.e. 258 in the AL, 235 in the MQAS and 236 in the DHAPQ arm (Fig. 1). The main reasons for exclusion from the PP analysis were loss to follow-up and withdrawal. The baseline characteristics (age, gravidity, parasite density, Hb, symptoms) of the excluded patients were similar to those included in the PP analysis.
In the PP analysis, the graphs for the global proportional hazards (PH) assumption tests for treatment failure, adjusted for several variables (treatment, anaemia, gestational age, gravidity, parasite density, maternal age and malaria symptoms at baseline), were roughly parallel and met the PH assumptions. The day 63 PCR-adjusted treatment failure rate was 4.7% (12/258) (95% CI 2.7-8.0) for AL, 1.3% (3/235) (95% CI 0.4-3.7) for MQAS and 0.8% (2/236) (95% CI 0.2-3.0) for DHAPQ (Table 2), with a significant risk difference between AL and DHAPQ (p = 0.01) and between AL and MQAS (p = 0.03). Figure 2, which shows the time to PCR-adjusted and unadjusted treatment failure, confirms this difference. Figure 3 presents the risk differences computed for the pairwise comparisons of PCR-adjusted and unadjusted treatment success rates at day 63. AL showed somewhat higher (about 3%) PCR-adjusted treatment failure. Therapeutic equivalence could be shown for MQAS and DHAPQ but not for AL as compared with the other two treatments. The ITT analysis gave similar results (Table 2). When considering recrudescence, i.e. treatment failure due to the reappearance of the same strain as identified by PCR analysis, its hazard was higher in patients treated with AL than in those treated with DHAPQ (HR: 10.47; 95% CI 2.18-50.19; p < 0.01), although the estimates were unstable, probably due to the low number of events. The hazard was not significantly different between the MQAS and the DHAPQ arm (HR: 1.56; 95% CI 0.26-9.38; p = 0.63) (Table 3). The hazard of treatment failure was higher in women under 20 years than in older women (HR: 5.07; 95% CI 1.01-25.43; p = 0.05). Higher parasite density at baseline was associated with a higher hazard of PCR-adjusted treatment failure (HR: 3.35; 95% CI 1.07-10.45; p = 0.04) (Table 3).
Placental malaria infection (acute and chronic) was similar between the treatment arms (p = 0.47). The study drugs were generally safe, with a total of seven maternal SAEs. A woman treated with MQAS died 41 days after treatment, probably because of meningitis. There were three SAEs in the DHAPQ arm [low haemoglobin, measles and a sickle cell mother in haemolytic (vaso-occlusive) crisis]. An additional SAE in the MQAS arm, severe vomiting, was considered related to the study treatment; the patient recovered completely. The other two SAEs in the MQAS arm were an asthmatic attack and pneumonia; all patients recovered.
There were 21 stillbirths: 8 (2.7%) in the AL, 3 (1.0%) in the MQAS and 10 (3.3%) in the DHAPQ arm, and three miscarriages (two in the MQAS and one in the DHAPQ arm, none in the AL arm). The preterm delivery rate was 4.3% in the AL, 2.0% in the MQAS and 3.3% in the DHAPQ arm. There were 15 congenital malformations (3 cleft lip and palate, one club foot, one ear tag, 6 polydactyly, one syndactyly, one umbilical hernia, one depression of the parietal bone, one tongue tie): 4 (1.3%) in each of the DHAPQ and MQAS arms and 7 (2.3%) in the AL arm, with no significant difference between the arms (p = 0.54).
Discussion
With recrudescence rates ranging from 0.8% to 4.7%, the three artemisinin-based combinations used for the treatment of uncomplicated malaria in the second and third trimester of pregnancy were efficacious in an area of high endemicity, Nchelenge district, Zambia. Therapeutic equivalence could be shown for MQAS and DHAPQ but not for AL as compared with the other two treatments. In Nchelenge there were significantly more treatment failures in the AL arm than in the other two arms, though AL efficacy was still above the 90% cure threshold recommended by WHO for adopting new anti-malarial treatments as policy [21]. In Uganda, in an area with malaria transmission as high as that in Nchelenge, AL administered to pregnant women was also extremely efficacious, with even fewer treatment failures (0.7%) than in this trial [22]. In Zambia, ACT has been shown to have excellent cure rates among children and adults [23,24]. The efficacy of these combinations, largely determined by the drug partnering the artemisinin derivative (mefloquine, lumefantrine and piperaquine for the treatments tested in this study), usually exceeds 95% [25]. However, there have been reports pointing to the effect of physiological changes during pregnancy, e.g. increased volume of distribution and reduced gut motility, possibly altering drug disposition and metabolism and thus leading to incorrect dosing [26][27][28]. This does not seem to apply to the results observed in Nchelenge, as treatment efficacy was very high, possibly owing to the underlying anti-malarial immunity in the Nchelenge district population, including pregnant women, resulting from the intense malaria transmission and high exposure to infection. The importance of pre-existing immunity for the therapeutic response is also supported by the association between treatment failure (both new infections and recrudescences) and young age [29]. Transmission intensity may not influence the risk difference between treatments but may influence individual failure rates.
Pregnant women have an increased susceptibility to malaria, and this susceptibility is greatest in the first pregnancy (primigravidae) [30]. The decreasing prevalence and intensity of infection in successive pregnancies mirrors the acquisition of antibody immunity to the variant surface antigens expressed on the parasitized red blood cells infecting the placenta. Antibody titres against VSA-PAM are associated with clinical outcomes [31,32], and opsonizing antibodies that allow phagocytic clearance of infected erythrocytes are associated with a better treatment outcome in pregnant women [33]. Results from Nchelenge and other studies suggest that antibodies to VSA-PAM might have important roles in determining both pregnancy outcomes and the effectiveness of anti-malarial drugs in pregnancy. Other factors, such as cellular immunity, cytokines and hormonal changes, might also influence outcomes in pregnancy [29] and affect treatment outcome. In Nchelenge, pregnant women treated with AL had a higher risk of new infection than those given the other two treatments. This is probably due to the shorter post-treatment prophylaxis offered by lumefantrine, which is eliminated more rapidly [34] than piperaquine [35]. When the artemisinin component is rapidly eliminated, a new infection encounters only the partner drug, and this may explain the association between the risk of new infection and the treatment given. It also indirectly confirms that the genotyping-based distinction between recrudescence and new infection is reasonably reliable. Considering that Nchelenge women who experienced a new infection during follow-up had a higher risk of acute or chronic placental malaria, both conditions associated with the delivery of low-birth-weight babies, a longer post-treatment prophylaxis would be extremely important in this area of intense malaria transmission. Therefore, DHAPQ could be preferentially chosen for such conditions, while AL could be used where transmission is low.
Recrudescence may easily occur in the context of emergence or spread of parasite resistance to a given anti-malarial, when the partially efficacious drug may fail to clear the resistant strain or simply select for mutant parasites. In Zambia, artemisinin resistance has not yet been reported. Recrudescence can also be caused by parasites surviving the effect of a shorter-acting ACT [6], in this case AL. Low study drug dosage may play an important role in recrudescence in the AL group, as the point estimate suggests that low dosage roughly doubles the independent risk of recrudescence; however, the study was not powered to confirm a clear association. Besides parasite sensitivity to the drug and the drug's concentration in the blood, host immunity and parasite density at presentation contribute to a positive treatment outcome. Immunity can be affected by different factors, including age, body temperature, pregnancy and parity [29,36]. The Nchelenge study has shown that younger age and high malaria parasite density at baseline are associated with recrudescence, but could not demonstrate a significant association between treatment failure and parity.
The three artemisinin-based combinations tested are generally safe in the second and third trimester of pregnancy in Zambia. Patients on MQAS had higher rates of treatment-related AEs. Dizziness was the most common, followed by vomiting and weakness. Dizziness related to MQ treatment has also been reported in other studies [39]. Regarding pregnancy outcomes, there was no significant difference between treatments in stillbirths, miscarriages, congenital malformations or prematurity, a finding similar to those of other studies on AL [22,40,41], mefloquine [5] and DHAPQ [42].
This trial was done in an area where the majority of the population practise farming and fishing for their livelihood and migrate to farming areas for considerable periods [43], possibly explaining the relatively high number of losses to follow-up and withdrawals. Nevertheless, considering that the post-treatment follow-up lasted until day 63 and that pregnant women are a group particularly difficult to follow, the sample size had been estimated assuming a dropout rate of 20%, while the actual figure was 16%. Such a dropout rate is unlikely to have had a major influence on the trial's results, as the patients excluded and those included did not differ significantly in their baseline characteristics.
Conclusions
The study has shown that both AL and DHAPQ were well tolerated in second- and third-trimester pregnant women, with low treatment failure. MQAS was less well tolerated than the other two treatments, though it had similarly low treatment failure. DHAPQ was well tolerated, had low treatment failure and offered a longer post-treatment prophylaxis. As new infections can be prevented by a long-acting partner drug to the artemisinins, DHAPQ should be preferred where transmission is intense, as in Nchelenge, while in areas of low transmission intensity AL or MQAS may be used.
"year": 2017,
"sha1": "82832b7efb3b919248465d4dfea946dc951598bf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12936-017-1851-7",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "82832b7efb3b919248465d4dfea946dc951598bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Geovisual Analytics Approach to Exploring Public Political Discourse on Twitter
Online Survey on the design, functionality and applicability of SPoTvis to other contexts: SPoTvis is a geovisual analytics tool that allows users to explore spatial patterns in Twitter conversation surrounding the 2013 U.S. Government shutdown. A short video demonstrating SPoTvis can be found here: http://www.personal.psu.edu/bws180/SPoTvis_Video/SPoTvis.mov. SPoTvis can be found here: http://www.personal.psu.edu/bws180/ProjectEnv/. The survey consists of general follow-up questions, as well as questions on the graphic design (the overall appearance of the views and their layout), interface functionality (in terms of both ease of use and ability to obtain information) and applicability of the tool to other contexts. We greatly value your insights. Feel free to contact me at any time with any questions.
Introduction and Political Context
Geovisual analytics has roots in visual analytics, a decade-old interdisciplinary field that focuses on developing, applying and understanding the application of methods and tools that provide visual-computational support for analytical reasoning [1]. Geovisual analytics integrates approaches from cartography and GIScience more broadly with those of visual analytics; it has been described as focusing "…on visual interfaces to analytical/computational methods that support reasoning with/about geo-information - to enable insights about something for which place matters" [2]. Maps are central to geovisual analytics methods, and the prototypical geovisual analytics application uses multiple, dynamically linked views to enable users to explore geographical variation in phenomena of interest. This approach is adopted here in the development of a geovisual analytics tool, the spatial patterns of Tweets visualization (SPoTvis) application. SPoTvis is designed to support the analysis of public political discourse in which difference of opinion is likely and that difference is grounded in place. Here, we present the approach, implementation and capabilities of SPoTvis through a case study analysis of a divisive political situation in the United States, with a focus on events in fall 2013 and their reflection in Twitter discourse at the time.
In 2010, the U.S. Congress passed the Patient Protection and Affordable Care Act (commonly known as the ACA or "Obamacare"), significantly reforming the country's health insurance regulations and marketplace. The votes occurred mostly along party lines, with Democrats generally favoring the bill and Republicans opposing it. Debate over funding for the act continued between the law's passage and its implementation. On 1 October 2013, the U.S. federal government shut down due to disagreement about whether ACA funding would be included in its general fiscal appropriations for 2014. On the same day, the HealthCare.gov online insurance marketplace mandated by the ACA made an ill-fated debut, with long wait times and crashes plaguing the site and impeding signups. The shutdown was eventually ended by congressional compromise 16 days later, while the website was gradually reformed over a period of several months.
In the days prior to, during and after the shutdown, the Twitter conversation surrounding the events seemed just as divided as the politicians' discourse. The hashtags #shutdown and #Obamacare were often seen trending during this period, with acrimonious tweets directed at the instigators of the shutdown, as well as the proponents of the ACA; however, it was difficult to tell where these veins of messages were originating and whether they were always coming from the same people. As we scanned the online conversation during this period, two main research goals emerged. First, we wanted to determine if other subthemes could be identified beyond the general hashtags of #shutdown and #Obamacare. Was there any conversation from furloughed government workers about household financial struggles? Were people talking about closures of national parks and monuments? Were people mentioning troubles signing up for insurance on HealthCare.gov? We hoped to get beyond the noise to find the most poignant subjects of concern for U.S. residents.
Second, we wanted to uncover spatial patterns in the usage of these subthemes. Did the patterns of support and opposition to the shutdown follow known geographic regions of Republican and Democratic party dominance? Did the public discourse in a congressional district match the political position of the district's congressional representative? What subthemes were of most concern in different areas of the country? What regions were talking about the most similar things?
The remainder of this paper describes the development, application and assessment of SPoTvis as a web-based geovisual analytics tool that helps users to uncover some answers to the above questions. The tool offers a term polarity plot coupled with a pair of interactive maps that allow users to compare Twitter subthemes between any two states or congressional districts (Figure 1). The themes are weighted and placed on the term polarity plot's horizontal axis, which represents a continuum of interest between the two districts. Terms near the left and right edges of the axis tend to only appear in one of the two political units, while terms near the center of the axis experience more balanced usage patterns between the two units. Demographic attributes and indicators of partisanship provide additional contextual information that enriches users' understandings of how the conversation follows political leanings. Finally, an option to "see similar districts" helps to determine whether regional patterns exist in the discourse. As users explore the map and exercise these functions, they can increasingly make sense of the way varying opinions and values appear in a social media conversation across space. In the following sections, we first justify the value in exploring Twitter data using visual analytics applications. We then outline our data collection process and present findings from a statistical analysis on the relationships among places, politics and discourse on Twitter in the context of the 2013 U.S. Government shutdown. Next, we discuss the rationale behind the design of SPoTvis, how the tool was implemented and considerations for its performance and usability. We provide brief results on spatio-political patterns found using SPoTvis. We then evaluate SPoTvis' ability to enable insight discovery, as well as the tool's design, functionality and applicability to other contexts, using a two-part user study. Lastly, we conclude with a summary of the findings and exciting future directions for SPoTvis development.
Justification for Visual Analytics and Twitter Use
Journalists and scientists are using social media data increasingly to uncover newsworthy stories or to better understand what people talk about and how they share information. For example, researchers have shown that the public can generate and spread information during natural disasters and other emergencies using social media outlets, such as Twitter [4]. It is also possible to gauge disease activity [5] and public sentiment on climate change [6]. While Twitter provides an endless stream of conversations for potential study, making sense of and drawing conclusions from such large data sources is challenging. Incorporating the geographic component of tweets, or where people tweet from, further complicates this process. As noted above, visual analytics provides tools and frameworks for addressing these challenges [1,7].
In the virtual Twitterverse, political conversation follows a unique structure and happens among a special group of Twitter users that is not representative of all users. Smith et al. [8] mapped thousands of Twitter topic networks and identified six different kinds of network crowds based on distinct social structures. The authors found that heated political subjects often result in polarized crowds that ignore one another and share different resources. Party affiliation often divides the crowd, and although both crowds are focused on the same topic, little engagement happens between them. Diakopoulos et al. [9] explored how political discord tweeted during broadcast events can be used for journalistic inquiry in their tool VoxCivitas. The authors collected tweets during President Obama's 2009 speech at the Copenhagen 15 meeting and extracted keywords and sentiment from them. VoxCivitas allows users to search and filter by prominent keywords tweeted during the speech. The tool displays sentiment on a timeline to gauge whether the Twitter reaction at that point in the speech was positive, negative, controversial or neutral. While SPoTvis does not calculate or report an explicit measure of sentiment, it does display a unique signature of conversation topics exhibited by any two political units, an approach that could complement standard sentiment analysis. SPoTvis also extends beyond the scope of VoxCivitas by considering the spatial component of political tweets, including direct comparison between places, as well as the ability to find both similar and dissimilar places based on term usage.
Other recent research directly related to SPoTvis has used keyword analysis and dynamic visualization approaches to extract political topics and sentiment from Twitter data. Xu et al. [10] used Twitter data from the 2012 U.S. presidential election, as well as the Occupy Wall Street movement, to demonstrate the agenda-setting role of elite "opinion leaders" in social media conversation. The authors assessed public competition of topics associated with the two political events using topic modeling and integrated storyline-ThemeRiver [11] visualizations. Tsou et al. [12] also used the 2012 presidential election as a case study, but explored how keywords related to political candidates change in response to major events. The authors clearly documented how online conversations about political candidates shift after a major event, such as a hurricane, and how the discussions relate to political candidates. The authors also mapped the proportions of tweets about Barack Obama vs. Mitt Romney before and after such events; they found that the geographic discussion about the candidates does not simply mirror the voting map.
While research on using and mapping social media is undoubtedly growing, leveraging Twitter data remains challenging. Simply displaying tweets on a map leaves much to be desired in terms of cartographic design and the ability to identify patterns [7]. However, visualizing social media topics is important because online conversations are manifested in complex and unpredictable ways. We have yet to find any analysis or tool that compares two spatial units based on their Twitter content, a place in the literature that SPoTvis fills. The combination of interactive maps and visual elements implemented in SPoTvis offers a path toward sensemaking and meaningful visualization and analysis based on large geo-social data.
Data Collection and Analysis
The data for this application consists of spatial data representing political boundaries in the United States (at state and congressional district levels), demographic and political data that provide cultural context and tweets about the government shutdown and the ACA.
Spatial Data and Demographic Attributes
Spatial data in this project consisted of U.S. state boundaries, as well as district boundaries for the U.S. House of Representatives. The U.S. has a bicameral legislature in which members of the Senate are elected on a statewide level and members of the House of Representatives are elected based on a geographic district that can potentially be smaller (but not larger) than a state boundary. The use of the two scales of spatial aggregation allows users to compare the political climate for both senators and house representatives. It also allows users to detect both broad regional differences and more local variation in conversation themes.
To help users contextualize the patterns in tweet content, we collected demographic attributes for states and house districts that may provide insight into how people view the shutdown and the Affordable Care Act. The median household income and percentage of unemployed persons (age 16 and over) give a general sense of the economic situation and political concerns in a given district or state. The percent without health insurance may lend insight into how people feel about the Affordable Care Act. Finally, the percentage of the employed population that works for the federal government gives an idea of what proportion of the state or district was potentially furloughed during the shutdown. Users can then make inferences about whether the financial and emotional toll imposed by the furloughs affects the attitudes reflected by the tweets.
The latest available Cook Partisan Voter Index (PVI) for each unit was also recorded to help users understand the political leanings of states and districts. This index takes into account the unit's voting behavior in recent presidential elections and assigns the unit a score based on how "Republican" or "Democrat" the unit is compared with the other units in the United States. For example, the state of Nevada has a Cook PVI of D+2, meaning that it leans slightly Democratic, while the state of Wyoming's Cook PVI of R+22 means that the state leans heavily Republican. Used frequently by think tanks, pundits and journalists, the Cook PVI aids users' understanding of the general political persuasions of people generating tweets in each state and district. Although the segment of the population producing geolocated tweets may not exhibit the same Cook PVI as the entire voting population of the unit, the Cook PVI provides a relative measure of partisanship that is still informative for analysis.
Tweet Collection
The tweets used in this research were collected continuously during a period of eight weeks between 1 September and 27 October 2013. This period includes much of the congressional ACA funding discussion, the shutdown itself and some of the shutdown aftermath. During this entire period, the ACA was a major topic of focus in both news media and social media. For further details of the tweet collection, see Bodnar and Salathé [13].
To be included in SPoTvis, a tweet needed to contain exact geographic coordinates (in other words, the originator of the message explicitly opted in to the use of location services). A tweet also needed to contain one of the following keywords: #shutdown, shutdown, #ACA, ACA, Affordable Care Act, #Obamacare, Obamacare or healthcare.gov. Terms such as "website", "health" and "insurance" were not considered, because they were deemed too general and might introduce themes unrelated to our topics of interest. This dataset resulted in approximately 70,000 tweets within the boundaries of the contiguous United States that could be used for analysis.
Statistical Analysis of Tweets
Tweets were spatially joined to the vector boundary files mentioned above in order to perform analysis at both the state and house district levels. Of all states, California had the most tweets, with 7169, and Vermont had the fewest, with 90. Because a small percentage of Twitter users enable location services, we acknowledge that our collection represents only a small sample of the tweets generated during the study period. Nevertheless, the histogram in Figure 2 shows that most states had somewhere between 200 and 1600 tweets. The number of tweets per house district ranged from 42 to 807, with the District of Columbia as an outlier with 4005. The histogram in Figure 3 shows that most districts yielded a sample of 100 to 200 tweets.
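A minimal R sketch of such a point-in-polygon join is shown below, using the modern `sf` package (the original analysis would have used the tools of its time); `tweets`, `lon`, `lat` and `STATE_NAME` are illustrative names, not those of the study's data files.

```r
library(sf)

# tweets: data frame with longitude/latitude columns and tweet text;
# states: an sf polygon layer of U.S. state boundaries
tweet_pts <- st_as_sf(tweets, coords = c("lon", "lat"), crs = 4326)
tweet_pts <- st_join(tweet_pts, states["STATE_NAME"])  # point-in-polygon join

# Tweet counts per state, e.g. California most (7169), Vermont fewest (90)
sort(table(tweet_pts$STATE_NAME), decreasing = TRUE)
```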
The textual content of tweets was analyzed using the R statistical software, version 2.15.1, with the "tm" package. We imported the text of the tweets in the corpus data format. The preprocessing of the raw tweets included converting text to lower case, removing all English stop words, punctuation and URLs, and stripping extra whitespace. Porter's stemming algorithm was then applied to convert inflected words to their root forms.
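A minimal sketch of this pipeline in R, written against current tm conventions (with SnowballC for Porter stemming), might look like the following; `tweet_text` is an assumed character vector of raw tweets.

```r
library(tm)
library(SnowballC)

corpus <- Corpus(VectorSource(tweet_text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, content_transformer(function(x) gsub("http\\S+", "", x)))  # strip URLs
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, stemDocument)               # Porter's algorithm

dtm <- DocumentTermMatrix(corpus)  # basis for the raw word counts below
```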
A raw word count was first performed on the tweets to determine the most frequent terms in the dataset. To maintain consistency when building the term polarity plots for the units (described later in this paper), we manually selected 50 of the most common keywords that were meaningful in the context of our research, such as "obama", "gop" (a nickname for the Republican party), "congress" and "furlough". The broad term "shutdown" (present in most of the tweets) and obscure words, such as "can" and "still", were omitted from this keyword list in favor of terms that were more descriptive of subthemes in the conversation, such as "school", "worker", "tax" and the names of politicians. The popularity of a term was measured by the number of tweets containing it. Table 1 shows the top 10 keywords at the national level. For a given unit, in other words, a state or a congressional district, we can obtain a popularity measure for each keyword. Thus, we can construct a vector for the unit of interest:

P(unit_i) = (F_1, F_2, …, F_N)′

where F_k = C_k/n_i, C_k being the total number of tweets containing the k-th keyword and n_i the total number of tweets in unit i. Figure 4 shows the popularity of 10 keywords for the second and third districts of Alabama (coded as District 102 and 103, respectively). Note that the biggest difference occurs in the words "gop" and "obama". People in the second district tweet frequently about "obama", but very little about "gop". In this particular comparison of districts, no other keywords exhibit such a large variance. People in the third district use the terms "obama" and "gop" with a frequency that is much closer to being equal. A unique feature of SPoTvis is that it allows users to explore the set of districts that exhibit similar keywords in their tweets. By the previous construction, we have for each unit an N × 1 vector as the keyword popularity measure, where N is the number of keywords under consideration. Thus, the similarity between two districts can be measured by their distance in this N(=50)-dimensional keyword space. While there are many similarity/distance measures in the existing literature, we chose the commonly used Euclidean distance. For P(unit_i) and P(unit_j) defined above, their Euclidean distance is defined as:

d(unit_i, unit_j) = sqrt( Σ_{k=1}^{N} (F_k^(i) − F_k^(j))² )
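The following R sketch illustrates this computation; `keyword_counts` (a units-by-keywords matrix of counts C_k, one row per state or district) and `n_tweets` (total tweets per unit) are assumed names, as is the district label used for lookup.

```r
# F_k = C_k / n_i, applied row-wise: divide each unit's counts by its total
P <- sweep(keyword_counts, 1, n_tweets, "/")

# Pairwise Euclidean distances between all units in 50-D keyword space
D <- as.matrix(dist(P, method = "euclidean"))

# The 10 most similar units to a unit of interest, e.g. "District 103"
neighbours <- sort(D["District 103", ])[2:11]  # entry [1] is the unit itself (0)
```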
Examining Links among Places, Politics and Tweets
We examined the above similarity measures to explore whether units with similar keyword usage patterns exhibit any detectable patterns in partisan leanings. We hypothesized that a unit's Cook PVI should be positively correlated with the Cook PVIs of its similar units (here, we mean "similar" in terms of keyword usage patterns), but negatively correlated with those of its dissimilar units. We chose U.S. states as the units of interest and calculated for each the average Cook PVI of its 10 most similar states (Y_1i) and also the average Cook PVI of its 10 most dissimilar states (Y_2i). We constructed the following two regression models:

Y_1i = α_1 + β_1 X_i + ε_1i
Y_2i = α_2 + β_2 X_i + ε_2i

where X_i is the Cook PVI of state i. The corresponding hypotheses are H_1: β_1 > 0 and H_2: β_2 < 0. The estimation results are shown in Table 2. Both of the coefficients are statistically significant, and their signs are as we expected. Note that the magnitudes of these two coefficients, as well as the R-squared values, are not very large. This is reasonable, because many other factors could influence the Cook PVI of the similar states. Shown in Figure 5 is the scatter plot of the data with fitted regression lines. The blue and red represent the data of similar and dissimilar states, respectively. From these results, we can conclude that states with similar patterns of keyword usage in their conversations tend to have similar partisan leanings. This result further reinforces the polarized crowd structure found by Smith et al. [8] in virtual (rather than spatial) network space.
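In R, these two models reduce to a pair of ordinary least-squares fits; `states` is an assumed data frame with one row per state (x = own Cook PVI, y1/y2 = mean Cook PVI of its 10 most similar/dissimilar states).

```r
m1 <- lm(y1 ~ x, data = states)  # H1 expects a positive slope (beta1 > 0)
m2 <- lm(y2 ~ x, data = states)  # H2 expects a negative slope (beta2 < 0)

summary(m1)$coefficients  # slope estimate, standard error and p-value
summary(m2)$coefficients
```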
SPoTvis Design, Functionality and Use
In this section, we discuss the rationale behind the design of SPoTvis. We explain how the tool was implemented and the considerations made for achieving the best performance and usability. Lastly, we present findings based on visual data exploration using SPoTvis.
Design Rationale
Central to the design of SPoTvis is comparison: "What shutdown-related keywords were being tweeted in my congressional district, as compared to, say, neighboring districts? What about all districts held by a certain political party? Additionally, can I compare my district with statewide trends?" These were the types of questions we strove to visually and interactively address. Additionally, we wanted to supplement this keyword and spatial comparison with demographic information, such as unemployment, income and health insurance enrollment, aggregated to the same levels of comparison.
These goals required a duality approach to interface design: one that forces users to input two parameters in order to explore a comparison between two entities, two groups of entities, or one entity and a group. Our aim was to capture the analytical and insight-seeking interests of social scientists and news journalists, but also the social visualization interests of society at large. The primary components of the application include a term polarity plot, two spatial map views and a simple graph panel for demographic statistics (refer to Figure 1).
The core component of the interface is the term polarity plot. The plot combines a relatively new text visualization technique, the word cloud, with proportional symbols, a more traditional method for visualizing the magnitude of quantitative data. One of the earliest word cloud implementations, a collective mental map of landmarks in Paris, dates back to 1976 [14]. The first usage of proportional symbols dates back far earlier, to William Playfair's 1801 statistical graphics that scaled the areas of countries relative to the areas of circles [15].
Independently, these techniques for data visualization remain quite prominent in today's interactive web environment. An early usage report on Many Eyes [16], a public website developed by IBM that allows users to freely upload their data and create interactive visualizations, revealed proportional symbol plots and word clouds to be the most and third most used graphics, respectively [17]. Moreover, users of Wordle [18], a web-based tool for visualizing text, create a new wordle about every ten seconds [19]. Wordles advance the standard word cloud through innovative text placement algorithms and attention to aesthetics, but lack interactivity and semantic word placement.
In visual analytics, word cloud text visualizations have been used to track content evolution in documents through time [20] and to complement more advanced text visualizations, such as TextFlow, in depicting how topics evolve in large text collections over time [21]. Word cloud visualizations have been combined with parallel coordinate plots to reveal regional and linguistic differences between U.S. Circuit Court decisions [22]. However, few interactive visual analytics tools interlink the word cloud and proportional symbol techniques, with a notable exception by Mike Bostock et al. [23] of The New York Times. In an online article, the authors visualized the comparison between words used by Democrats and Republicans at the 2012 National Convention. Font and bubble size conveyed cumulative word use frequency, and bubble color relayed the proportion of use by political party.
Our term polarity plot extends the work discussed above by integrating the spatial component of text data through bubble placement and dynamically linked map views. The term polarity plot visually encodes keyword frequency for two compared units in a single interactive graphic. For any comparison, the 50 designated shutdown-related keywords appear atop of, and are inherently linked to, dynamic bubbles. Font size and bubble area are scaled linearly in proportion to the total number of times the word was tweeted within the two compared units.
Horizontal bubble positions are derived based on the relative occurrence of a keyword between the two spatially-driven comparisons and are placed on a diverging continuum. For example, a bubble within the center of the plot indicates similar use between the two spatial units being compared, whereas a bubble plotted at the far left or right end of the plot indicates a word predominantly occurring within only one of the compared units. Politically neutral terms, such as "congress", often appear toward the center, whereas other, more politically charged terms are pulled toward the edges of the plot. An example from the initial map view is "harryreidsshutdown" (a derogatory reference to the Democratic Senate Majority Leader, Harry Reid), which appears far to the side of the Republican-leaning states. It is important to note that bubble placement in the term polarity plot represents similarity in the usage of individual terms between two spatial comparisons, whereas the "show similar/different" functionality returns the spatial entities that are most similar or different in patterns of term usage across all 50 keywords.
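The paper does not give the exact positioning formula; the R sketch below shows one plausible normalization consistent with this description, in which a keyword's horizontal position diverges from 0 (balanced use between the two compared units) to -1 or +1 (use exclusive to one unit). The function name and inputs are ours, not SPoTvis' actual implementation.

```r
polarity_x <- function(f_left, f_right) {
  (f_right - f_left) / (f_left + f_right)  # result lies in [-1, 1]
}

polarity_x(0.02, 0.02)  #  0    -> plotted at the center
polarity_x(0.00, 0.03)  # +1    -> plotted at the right edge
polarity_x(0.05, 0.01)  # -0.67 -> pulled toward the left unit
```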
Bubble placement in the vertical direction is based on a Gaussian curve to more intuitively fill the plotting area, but portrays no spatial or semantic context. Users can shuffle the vertical placement of bubbles for better clarity in the event that bubbles and text become cluttered and illegible. For further clarity, or to focus on a particular subset of keywords, users can remove uninteresting bubbles from the plot by clicking on them. These demoted bubbles are reduced in size and drop vertically to the bottom of the plot. Demoting the largest bubble in the plot re-scales the remaining bubbles to foster more visually prominent word comparisons. Re-clicking demoted bubbles reintegrates their size and placement within the plot. To focus on one term, users can hover over the bubble of interest. This highlights the bubble and slightly expands the font size of the word, while also decreasing the visual emphasis of non-highlighted bubbles through reducing their bubble and font opacities.
To visually reinforce the importance of bubble position in relation to keyword frequency comparisons, a diverging color scheme suggested by Harrower and Brewer [24] was implemented. This color scheme further serves to visually link data views (other than partisanship) through the map and statistical graph components of the tool.
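Because D3's linear scales interpolate colors, such a diverging ramp keyed to the polarity value can be expressed in a few lines. The purple and green endpoints below merely echo the two comparison-view colors described later; the exact ColorBrewer colors the authors chose are not stated, so these hex values are assumptions.

```javascript
// Diverging color keyed to the same polarity value used for x-placement.
// Endpoint hex values are placeholders, not the published palette.
const color = d3.scaleLinear()
    .domain([-1, 0, 1])
    .range(["#762a83", "#f7f7f7", "#1b7837"]); // purple - neutral - green
```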
SPoTvis' two map views provide a spatial context to the keyword comparisons and allow users to explore the patterns exhibited by different states and congressional districts. To manage the quantity and complexity of information displayed in the map views, we used a multiscale visualization approach to information design. Data and data representation are abstracted at different zoom levels in accordance with Shneiderman's information seeking mantra: "Overview first, zoom and filter, then details-on-demand" [25]. For example, users first click a state, which updates the term polarity plot to a keyword comparison at the state level, zooming the user to the geography of the state. At this zoom level, congressional districts become visible, and the user can choose to click one, again updating the plot. Alternatively, users can filter using drop-down menus, which contain all possible state and congressional district aggregates, as well as a unique option to select all Democratic- or Republican-leaning states (Figure 6). Additionally, the map views provide options to display the ten most similar or dissimilar states or congressional districts to an entity of interest. Similarities are depicted in the color linked to the left or right comparison views, purple or green, respectively. To alleviate any confusion, dissimilarities are shown in orange, a color intentionally not linked to any other aspect of the interface. If a user clicks to reveal similar or dissimilar geographic entities, links to these areas are displayed. Because some of the smaller congressional districts are hard to discern at the national scale, users can hover over the ten links to flash the centroids of the similar or dissimilar areas and click the links to zoom to these areas. The map views also allow users to toggle on and off Cook Partisan Voting Index (PVI) value maps at both the state and congressional district levels.
Finally, bar charts display the percentage of federal government workers, the percentage of unemployed individuals, the percentage of individuals without health insurance and the median household income for any two political units being compared. The charts are aligned horizontally to portray the relative balance between values and to preserve screen real estate. Cook PVI values are also displayed in this area to coincide with the color-coding of their representation in the map views.
Implementation
SPoTvis was written as a web application in JavaScript, HTML and CSS to make the tool available to any interested, politically-aware Internet user. D3 (Data-Driven Documents) was the primary JavaScript library used for development. D3 binds document object model (DOM) elements to raw data [26]. By doing so, the library operates directly within the DOM rather than wrapping it in a cumbersome abstraction layer. Furthermore, D3 simplifies element creation, update and removal, which allows for an efficient way to create complex and dynamic visualizations.
Besides handling DOM elements, D3 has many convenience features that streamline development. In the term polarity plot, a force layout is imposed on the bubbles. This physical simulation uses pseudo-gravity and charge to pull bubbles toward their respective normalized x-values. Conflicts between overlapping bubbles are resolved organically, and this process determines the final bubble positions.
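SPoTvis predates the modular d3-force API (it would have used D3 v3's `d3.layout.force`); the sketch below expresses the same idea in the current API. The node fields (`targetX`, `targetY`, `r`) and the force strengths are assumptions, and `bubbles`/`labels` stand in for previously created selections.

```javascript
// Pull each bubble toward its polarity-derived x-position while a collision
// force resolves overlaps; a weak y force keeps bubbles near their Gaussian band.
const sim = d3.forceSimulation(nodes)
    .force("x", d3.forceX(d => d.targetX).strength(0.2))
    .force("y", d3.forceY(d => d.targetY).strength(0.05))
    .force("collide", d3.forceCollide(d => d.r + 1))
    .on("tick", () => {
      bubbles.attr("cx", d => d.x).attr("cy", d => d.y);
      labels.attr("x", d => d.x).attr("y", d => d.y);
    });
```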
TopoJSON, a topology-preserving extension to GeoJSON, was used in combination with D3 to produce the map views. TopoJSON was chosen because it uses inferred topology to significantly reduce the number of vertices downloaded to the client, thereby shortening the load time. Furthermore, by maintaining topology, we reduce the likelihood of visibly misrepresenting complicated areal units, such as congressional districts.
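A minimal rendering sketch using the topojson client library and D3's geographic path generator is shown below; the file name, object key, element selector and the `updatePolarityPlot` handler are assumptions for illustration.

```javascript
// Decode the topology back into GeoJSON features and draw one path per state.
d3.json("us-topo.json").then(topology => {                 // assumed file name
  const states = topojson.feature(topology, topology.objects.states); // assumed key
  const path = d3.geoPath(d3.geoAlbersUsa());              // projection suited to the U.S.

  d3.select("svg#map").selectAll("path")                   // assumed selector
    .data(states.features)
    .join("path")
      .attr("d", path)
      .on("click", (event, d) => updatePolarityPlot(d));   // assumed handler
});
```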
Performance and Usability
To keep performance smooth for this prototype tool, all keyword frequencies, similar districts and demographic attributes are pre-calculated and stored in a single JavaScript Object Notation (JSON) file for each state and congressional district. The geometries of the states and districts are sent to the client at the time the application loads. These boundaries are generalized as much as feasible in order to reduce the amount of data transferred. Apart from the initial application load and the occasional retrieval of the state and district JSON files, the application does not need to make any other queries to the server, and no calculations are performed on the fly.
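This load-once, fetch-on-demand pattern might look like the small cache below; the URL layout, unit identifiers and payload fields are assumptions.

```javascript
// Fetch each unit's pre-computed JSON at most once; nothing is computed server-side.
const cache = new Map();

async function loadUnit(id) {                          // e.g. "TX" or "TX-31" (assumed ids)
  if (!cache.has(id)) {
    cache.set(id, await d3.json(`data/${id}.json`));   // assumed URL layout
  }
  return cache.get(id); // keyword counts, similar units and demographics
}
```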
When the term polarity plot undergoes an update, the bubbles will sometimes get "stuck" as they attempt to switch positions on the plot. The bubble color always reflects the exact position that the bubble should occupy on the x-axis, thereby indicating whether the bubble is out of place. The user can drag and drop a stuck bubble to the correct side of the graph, remove irrelevant bubbles to create more space or click the "shuffle" button to assign all of the bubbles a new position on the y-axis, thereby opening some space for movement.
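The drag-to-unstick interaction follows the standard d3-drag pattern for force simulations: pin the node's fixed position (`fx`/`fy`) during the gesture and release it afterward so the layout re-settles. A sketch, reusing the `sim` and `bubbles` names assumed above:

```javascript
// While dragging, fx/fy pin the node; releasing them lets the simulation
// take over again and settle the bubble into valid space.
bubbles.call(d3.drag()
    .on("start", (event, d) => { sim.alphaTarget(0.3).restart(); d.fx = d.x; d.fy = d.y; })
    .on("drag",  (event, d) => { d.fx = event.x; d.fy = event.y; })
    .on("end",   (event, d) => { sim.alphaTarget(0); d.fx = null; d.fy = null; }));
```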
Data Exploration
Using SPoTvis to explore various states and congressional districts reveals some patterns that indicate how an area's Twitter conversation may be influenced by local current events and political leanings. Comparing just about any combination of states or districts reveals a blame game at play. Units with a highly Republican Cook PVI tend to use the term "obama" more often, and units with a highly Democratic Cook PVI tend to use the term "gop" more often. This pattern is readily visible when comparing polarized states, such as Texas vs. Massachusetts (Figure 7a), but can even be seen when comparing more moderate states, such as Nevada vs. North Carolina, or districts within a state, such as Washington 7 (containing the liberal city of Seattle) with Washington 5 (containing the more conservative city of Spokane and the surrounding area). The latter comparison is shown in Figure 7b. Local issues also make their way into the term polarity plot. These patterns are more easily seen if the user rescales the plot by removing the dominant terms "obama" and "gop". Figure 8 shows the detail of the term polarity plot comparing Maryland (left) with New Jersey (right). These states are geographically close and have a similar Cook PVI; however, the sentiment of the much larger percentage of federal workers in Maryland is captured by the terms that indicate impact, with "work", "washington", "furlough", "employee", "workers", "check" and "families" more dominant on the left side of the plot (along with some specific attention to the GOP Speaker of the House, Boehner). In New Jersey, the focus is less on outcomes and more on blame, with more instances than in Maryland of the terms "tea", "gopshutdown", "republican", "obamacare", "blame" and "shutdownthegop", with "school" as the only somewhat prominent term not linked to blame. Issues at the district level are more difficult to identify with confidence. Some districts contain a low enough number of tweets that a few enthusiastic individuals can skew the results of the plot by tweeting prolifically using certain keywords. This produces unexpected results, such as the conservative Texas 31 district tweeting "shutdownthegop" in much larger proportion than the liberal Texas 35 district in Austin and San Antonio.
The inconsistent mix of tweet frequencies in districts causes some districts to almost always appear when the user clicks the button to show "different" word use. Florida's 12th district is one of these. A close examination of the data shows the keyword "cruz" (in reference to Republican Senator Ted Cruz) appearing in 335 of the 471 tweets, likely the work of a small number of individuals. This effect could be mitigated by filtering out (or including only a sample from) users whose contributions exceed a particular percentage of the total tweets from the district.
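Such a mitigation could be a short pre-processing step; in this sketch, the `MAX_SHARE` threshold and the tweet record fields (`userId`) are illustrative assumptions.

```javascript
// Drop tweets from users who contributed more than MAX_SHARE of a district's total.
const MAX_SHARE = 0.05; // assumed threshold

function filterProlific(tweets) {
  const perUser = d3.rollup(tweets, v => v.length, t => t.userId); // tweets per user
  return tweets.filter(t => perUser.get(t.userId) / tweets.length <= MAX_SHARE);
}
```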
The rankings of similar keyword patterns provide an interesting spatial indication of which units emphasized similar themes of conversation. Sometimes, these patterns match other spatial trends throughout the U.S. For example, clicking the "similar" word use button for the relatively religious western state of Utah yields a result that includes states in and along the "Bible Belt" in the southern U.S. All 10 of Utah's most similar states have Republican Cook PVIs, a result that might be expected given Utah's heavily Republican Cook PVI of R+22. In contrast, the similar states for Massachusetts (whose Cook PVI is Democratic-leaning at D+11) are 80% Democratic, although they are spread out across the country (Figure 9). Some patterns are more difficult to explain, such as the moderately Republican-leaning state of Georgia and the Democratic-leaning state of New York showing up in each other's list of 10 most similar states.
In summary, explainable patterns are abundantly visible in SPoTvis, but they are not readily predictable for any pair of units. Adding more tweets to the analysis would smooth out some of the most anomalous keyword usage patterns, especially in the congressional districts.
SPoTvis User Evaluation
"The purpose of visualization is insight".
-Stuart Card, Jock Mackinlay and Ben Shneiderman [27]

In this section, we explore the meaning of insight and report on various approaches to collecting insights in the context of a two-part user study of SPoTvis. In the first part of the study, participants were asked to choose roles for themselves (e.g., politician, political scientist, journalist, etc.). Based on those roles, participants were tasked with using SPoTvis to explore the Twitter data surrounding the government shutdown and to work through analytical tasks relevant to the roles chosen. Participants were asked to document the insights they discovered and the approaches they took in obtaining those insights from using the tool. The second part of the study was an online survey designed to evaluate user experiences, interface design, functionality and future applications of SPoTvis to other contexts.
We first discuss the ways in which SPoTvis enabled users to answer and explore various questions and the processes they took to arrive at answers or to generate new hypotheses. We then evaluate the effectiveness of SPoTvis based on its ability to provide users with the necessary mechanisms for insight discovery. We conclude with promising directions for the future development and application of SPoTvis.
Contextualizing Insight
The goals of insight are discovery, decision-making and explanation. The usefulness of information visualization can be evaluated in terms of the extent to which such cognitive activities are achieved [27]. Formally defining insight, however, is problematic, because it takes many forms and is open to varying interpretations across and within the disciplines that aim to measure it. Insight derives from the complexity of a dataset's entirety. It accumulates and is generative. It is inexact, uncertain and qualitative in nature. It is unexpected, unpredictable and creative. The meaning of insight is relevant and embedded in the data domain [28]. Insight can be experienced in a spontaneous, indescribable and unrepeatable moment or characterized as a unit of discovered knowledge [29].
Because of the ambiguous meaning of insight, measuring a visualization's ability to achieve its characteristics is challenging. Controlled experiments on benchmark tasks are a primary method for evaluating visualizations, but they force users to discover shallow, researcher-defined insights in short, definitive amounts of time [28]. Users are often asked to answer simple questions that can easily be used to measure the accuracy of the user's discovery of the predefined insight. This approach to measuring insight limits the unexpected, deep, qualitative and relevant insights the researcher might more readily obtain by allowing users to explore the visualization on their own and discover their own insights. The trade-off, of course, is that unconstrained exploration is likely to vary dramatically across individuals, and generalizations are difficult or impossible to derive from such unstructured activities. In the research presented here, we adopt a middle ground, using a semi-constrained exploration activity designed to provide opportunities for a wide range of insights on the part of participants, while providing us with a framework for synthesizing results.
Study Design
The purpose of visual analytics is to enable insight discovery through interactive visual interfaces [1]. The primary goal of conducting a user study on SPoTvis was to assess its ability to enable individuals with relevant expertise to discover their own spatio-political insights from the Twitter data surrounding the government shutdown. An additional aim of the study was to provide us with input on the design, functionality and future applications of SPoTvis. Thus, we designed our study in a way that captured the creative, deep and qualitative insights of users, as well as recorded straightforward answers to questions that assessed users' understandings of the tool's design and components. The study had two stages, outlined below.
The first stage asked participants to interact with SPoTvis and explore its Twitter data on their own, at their own pace, within one week. Participants were emailed a web link to SPoTvis, a link to an online demonstration video and a basic guide describing the tool and how to use it. Participants were first asked to create roles for themselves (e.g., politician, political scientist, journalist, etc.). Based on the selected roles, participants were tasked with writing short essays documenting interesting findings or patterns that they uncovered in the Twitter data, as well as the approaches they took to achieve those insights.
The second part of the assessment was an online survey. Participants were asked specific questions about their experiences using SPoTvis, SPoTvis' graphic design (the overall appearance of views and layout), interface functionality (in terms of both ease of use and ability to obtain information) and the applicability of the tool to other contexts.
Ten participants engaged in the study; nine completed both parts, while one participant chose to only take the survey. All participants had taken one or more courses in cartography, and the majority of participants actively do research in cartography, GIS and visualization. Three participants were graduate students recruited from the Department of Geography at the Pennsylvania State University. One participant was recruited from a Massive Open Online Course (MOOC) on mapmaking. Six participants were European academics recruited from the Department of Geoinformation Processing at the University of Twente in the Netherlands. The following sections present the results from the two-part user assessment of SPoTvis.
SPoTvis Assessment/Part 1
The first part of the SPoTvis user study resulted in nine reports detailing tasks, approaches, types of interactions and obtained insights from the varying roles or analytical perspectives chosen by the participants. In the subsections below, we summarize the report characteristics and the methods applied to analyze them, then present findings.
Report Analysis
Report length ranged from 147 to 511 words, with a mean word count of 375. The participant with the shortest report provided tables and screen shots to illustrate her findings and interactions with SPoTvis. Three participants assumed the roles of some type of journalist, two chose to be political scientists, one a state politician, one a human scientist and one a graphics enthusiast, and one did not specify a particular role (while these are adopted roles rather than characterizations of participants' actual expertise, for clarity of reporting below, we refer to the participants by their roles). Tasks ranged from as vague as "exploration" to specific problems, such as, "How population composition (mainly Latin-American) affects the perception of the 2013 shutdown." Reports were analyzed primarily by the lead author. A priori codes were designed to extract participants' tasks, types of interactions and achieved insights. Thus, we were particularly interested in understanding and synthesizing the workflows of SPoTvis' users, starting with their problem definitions and tracing the interactions that allowed them to arrive at solutions. Additionally, participants' reactions towards SPoTvis' design and functionality were flagged and related to the effectiveness of the tool in enabling insight discovery.
During the coding process, "analytical approach" emerged as a broad, organizational theme within which participants' workflows fit neatly. Four participants took data-driven approaches to analysis (i.e., comparisons were made based on term usage, political or demographic questions); four participants took spatially-driven approaches (i.e., comparisons were made based on places of interest or spatial questions); and one participant took a tool-driven, evaluative approach to outline the (dis)advantages of SPoTvis for a specific context. In the following results subsection, findings from the reports are organized based on these three approaches.
Report Results
Participants who took data-driven approaches to analysis, in comparison to other participants, had more specific tasks they wanted to accomplish, engaged in more types of interaction and arrived at more specific insights (Table 3). Two of these participants chose to be journalists, one chose to be a human scientist and one a political scientist. These participants all had refined analytical objectives that extended beyond data exploration. The political scientist, for example, wanted to understand the relationship between ethnicity and perceptions of the government shutdown trending on Twitter. To investigate this relationship, the participant first referenced auxiliary ethnicity maps of the U.S. to better understand the spatial distribution of Hispanics and Latin Americans across the country. With this knowledge, the participant then selected New Mexico, the state with the highest Hispanic/Latino population composition, as the reference state and compared term usage between New Mexico and all other states. The participant concluded that, "while a slight pattern in the similarity of terms can be seen in the states with highest ratio of Hispanic population, similar results appear in some of the states with the lowest Hispanic population. In general, the level of perception recoded by this dataset seems to reflect more the political preferences". The political scientist's workflow illustrates the insight-enabling process supported by SPoTvis and exemplifies the rich, qualitative and accumulative insights obtainable by users.
The data-driven group also leveraged SPoTvis' more advanced functionality to support their sophisticated analytical reasoning. For example, the journalist and political scientist used the term demotion functionality to analyze relationships between political leanings and very specific subsets of terms. The Chinese journalist analyzed differences between term use and political party across scale using the "show similar" functionality and inferred that Democratic-leaning states (as an aggregate) tended to discuss (and potentially blame) the GOP, while Republican-leaning states (as an aggregate) focused more on Harry Reid in their discourse. At the individual district level, the Chinese journalist found that term usage varied regardless of party affiliation. Overall, SPoTvis' functionality empowered the data-driven users to explore complex questions deeply.
The participants who took a spatially-driven approach to using SPoTvis assumed the roles of political scientist, university news journalist, graphics enthusiast and unspecified. As a group, these participants had both exploratory and more specific tasks they wanted to accomplish (Table 4). They framed their comparisons using known geographical boundaries, places of interest and spatial adjacency. The political scientist and news journalist sought to answer questions grounded in specific places of interest, comparing term usage, political leanings and demographics within defined spatial bounds. The graphics enthusiast and unspecified analysts pursued an unstructured exploration, documenting more-or-less random and disconnected findings.
In contrast with the data-driven analysts, these four participants often did not use SPoTvis' more advanced functionality, such as term demotion or "show similar/different" spatial entities; nor did these users comment as frequently on potential links between term usage/political leanings and demographic variables, such as percent unemployed, percent federal worker, etc. However, although they used SPoTvis in a more restricted way, the spatially-driven users did generate some valuable insights. In comparison to the data-driven analysts, these analysts clearly had broader exploratory tasks and arrived at insights in a qualitatively different way. What these findings reveal is that SPoTvis effectively supported different approaches to analytical reasoning across varying roles and vastly different task definitions.
The last participant interpreted the first part of the SPoTvis user study in a slightly different way, choosing to evaluate SPoTvis based on the theoretical needs of a state politician. Rather than taking an analytical approach to explore the data and arrive at insights, the participant commented on the advantages and disadvantages of SPoTvis in meeting the needs of a state politician. The participant found that SPoTvis was able to quickly guide the politician in finding important topics within and across places and intuitively linked demographic variables to prominent topics. The politician, however, felt limited by a fixed (rather than user-specified) set of terms and overwhelmed by the use of color in showing multiple attributes. We consider these (and other) evaluative responses more thoroughly in the following section, which reports on the SPoTvis user experience, the effectiveness of the tool's design and functionality and future applications of the tool to other contexts.
| Role | Task | Approach | Interactions | Insights |
| Journalist | | Data-driven | Demote words to focus on specific subsets; compare all Democratic-leaning states/districts with all conservative-leaning states/districts using drop-down menus; click on map views to compare the spatiality of within/between political leanings with demographics | Terms "obama" and "gop" were more popular in Republican- and Democratic-leaning states, respectively; terms "gopshutdown" or "shutdownthegop" were more often used in Democratic districts; terms "work", "house", "school" and "pay" correlated with districts having high levels of unemployment, uninsured individuals and low median household income; in many states, the higher the Cook PVI, the better the welfare and income |
| Human Scientist | Explore the concerns of the population beyond the noise created by certain political terms | Data-driven | | |
| Political Scientist | Explore the relation of ethnic origin (Hispanic) and the perception of the 2013 shutdown | Data-driven | Select NM as the reference state, because of its high Hispanic/Latino population composition; compare term usage between the reference state and all other states | The pattern in the shutdown perception and ethnic origin is not clear; a slight pattern in the similarity of terms occurs in states with the highest ratio of Hispanic population, but similar results also appear in some of the states with the lowest Hispanic population; the level of perception recorded by this dataset seems to reflect more the political preferences of places |
| University News Journalist | | Spatially-driven | Compare the spread of keywords for each university district-adjacent district pairing | Cook PVI is largely Republican for all districts in the study, yet there are large differences in term usage, potentially influenced by Democratic-leaning individuals associated with the university; terms populating nearby districts included "furlough", "worker", "employee", "cost", "boehner" and "money", while the university district used the terms "cancel", "food", "washington", "service", "barackobama" and "gop" more often |
| Graphics Enthusiast | Exploration | Spatially-driven | Compare the spread of keywords between two states based on adjacency; compare the spread of keywords between districts within a state; compare the spread of keywords between a district and its respective state | OR in comparison with ID shows a clearer blame game than FL in comparison with GA; polarized use of words between the conservative MN 7 district and the Democratic MN 8; Democratic TX 28 more focused on "congress", as compared to the overall conservative leaning of TX, which was more focused on "gop" and "obama" |
| Not Specified | Exploration | Spatially-driven | Compare the spread of keywords between two states based on adjacency; compare the spread of keywords between two states based on political leaning | WA and OR were seemingly interested in very different aspects of the shutdown; OR was more focused on the shutdown itself (GOP and members of the House), while WA expressed more opinion about potential reactions due to the shutdown (like work, money, debt and the military); Twitter conversation gravitated around Obama and Obamacare in conservative-leaning states, while Democratic-leaning states conversed more about the GOP and Republicans |
SPoTvis Assessment/Part 2
The second part of the SPoTvis user study was an online survey consisting of Likert scale, multiple choice and short answer questions designed to collect feedback on the SPoTvis user experience, the overall appearance of SPoTvis' views and layout, interface functionality and the applicability of the tool to other contexts (Appendix I). Ten participants completed the survey, nine of whom also completed the first part of the study. Six participants reported having spent over an hour using SPoTvis; one spent 30-45 min; and three spent 15-30 min. All participants recorded having watched the demonstration video prior to completing the survey.
User Experience
The first part of the survey aimed to assess users' reactions to and feelings toward SPoTvis (Figure 10). Three users found their initial experiences using SPoTvis easy and fluid. Four participants found their initial experiences neither confusing nor particularly intuitive, while three users initially felt the tool was more confusing than easy to use. Nine participants felt positively towards SPoTvis while they interacted with it, while one participant felt neutrally towards the tool. Moreover, most users commented on the interface being easy to use once accustomed to its functionality.
Design Evaluation
Questions in the SPoTvis design portion of the survey aimed to evaluate the appearance of SPoTvis' views and layout. The first questions asked users to rate the effectiveness of the tool's design, how aesthetically pleasing they found the design to be and how effective the use of color was in the design (Figure 11). While most participants reacted positively toward the overall design and aesthetics, responses varied on the effectiveness of color in the design. More specific reactions to color use in SPoTvis are discussed below.
Following the design rating questions were short answer questions that first assessed users' understandings of design choices, then allowed users to comment on aspects of the tool they found to be most innovative and most confusing. Participants were also encouraged to provide design suggestions in this part of the survey. When asked what the colors of bubbles represented in the term polarity plot, six respondents correctly linked the shades of purple and green to their respective map views, depicted below the plot. However, two respondents confused bubble color with political leanings, rather than spatial comparisons. The remaining two participants commented on the use of color more broadly instead of directly answering the question of what color represents in the plot. Many users felt that SPoTvis overused color to represent too many attributes. SPoTvis uses color to encode left/right comparisons, political leanings, similarities, differences and combinations of these via differing fill, stroke and on-hover highlight colors. This explains some of the confusion participants had in understanding what the color of bubbles represented in the term polarity plot, particularly in the context of using SPoTvis for relatively short periods of time.
When asked what bubble/font size represented in the term polarity plot, seven respondents completely understood that the visual variable of size encoded the frequency of term usage between two spatial comparisons. The other three respondents did not necessarily misunderstand what bubble/font size conveyed, but rather provided responses on how sizes should have varied more (or less) or simply stated that "size worked".
Aspects of the design participants found most innovative included: the term polarity plot, term polarity plot coordination with map views, integration of bubble placement with spatial comparisons, the left/right aspect for comparison, multiscale visualization and bubble movement. One participant commented, "I think including the plot reacting to the interaction with the maps is something innovative in the geovisualization field". The participant found the separated approach to information design effective in reducing cognitive overload, thus making the data more interpretable. Overall, reactions focused on the clean design of the term polarity plot and comparison views, the core and most unique components of SPoTvis. More subtle suggestions participants provided for refining the design of SPoTvis included: labeling the term polarity plot's x-axis and adding map labels to provide a better sense of place.
Functionality Evaluation
The SPoTvis functionality section of the survey evaluated how easy SPoTvis was to use, as well as its ability to provide users with insights. Eight participants found SPoTvis' interaction options obvious and useful, while two participants felt neutrally towards the tool's overall functionality (Figure 12). Because comparison was central to the design of SPoTvis, we were particularly interested in using this section of the survey to understand how users made comparisons. In response to a question about how spatial comparisons were made (when presented with three multiple choice options), six participants reported that they used both drop-down menus and map clicking (on states/districts) to make comparisons; two participants used only drop-down menus; and two participants used only map clicking. With respect to map use and interaction, users had mixed reactions to spatial navigation using the map views. Six participants were able to easily navigate in the map views, while four participants assigned a neutral/intermediate rating to SPoTvis navigation. One participant specifically commented on how map navigation supported insight gathering: "It is good you classified the zoom levels in three stages. The user is never out of the scope of the map, and the feature under study is always highlighted. And it is easy to go up and down between levels". Two participants suggested adding "pan/zoom" functionality to enhance wayfinding within the map views. Eight of the ten participants found the Cook PVI base map layer very useful for linking political leanings to spatial comparisons. These participants chose to almost never toggle the layer off. The other two participants found it useful to occasionally switch the layer on and off during exploration.
To evaluate functionality in the term polarity plot, we asked users to describe in short sentences what bubble and term movement/placement represented. Eight participants correctly inferred that movement and placement of bubbles/terms were a function of the combined, relative use of the terms between two spatial comparisons. The other two participants misinterpreted movement and placement in the plot as being related to political party or tweet composition, rather than to spatial units. When asked how SPoTvis' coordinated functionality supported insight gathering, participants commented on how intuitive it was to make comparisons using the term polarity plot together with the map views. One participant, for example, stated: "The speed of clicking in the maps and getting the new plot ready in (a) few seconds is quite good, because you never get tired of exploring the plot." For this user, interaction inspired additional exploration. Another user commented, "There was enough flexibility in the interactions to support experimentation, which resulted in new insights". Participants further found the term demotion functionality useful for analyzing very specific term subsets of interest. Overall, the results indicate that SPoTvis' interaction options engaged users and fostered data exploration.
When asked how interaction detoured/inhibited insight gathering, participants felt limited by the ability to only compare two entities or two groups of entities at a time. Participants further wanted a "history of interactions" to more rigorously collect and maintain acquired insights. The "similar/different" functionality was critiqued on the basis that users wanted a better explanation of what the similarity measure took into account, as well as more complex similarity criteria. For example, one participant commented: "I was expecting, say, to find states or districts similar to others in terms of the four parameters (demographic variables) between both maps". Currently, SPoTvis derives similarity only on the basis of how spatial units used key terms. Extending the SPoTvis capability to explore similarity in multiple ways and using multiple criteria together is a useful suggestion that we plan to explore in future versions of SPoTvis.
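The published description does not name the similarity measure beyond term usage; purely as an assumption, one plausible realization is cosine similarity over the 50-dimensional keyword-frequency vectors, sketched below.

```javascript
// Cosine similarity between two units' keyword-frequency vectors (length 50).
// One plausible choice of measure, not necessarily the one SPoTvis used.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1); // guard against all-zero vectors
}
```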
Future Applications and Summary
In the final part of the survey, we asked participants to envision ways in which SPoTvis might be used in someone else's work. Participants provided very interesting and highly diverse suggestions that ranged from very broad in scope (e.g., "any exploratory analysis") to specific applications in crisis management, economics, environmental studies, journalism, law, political science, social science and technical science. One participant imagined SPoTvis "being useful to do comparative sightings between places in wildlife scenarios, protest landscapes and mapping movement patterns." Another participant envisioned SPoTvis "as a generic near real-time tool for human pattern analysis". This participant went on to say that SPoTvis "could help to find patterns tied to (the) geographic domain, probably never represented before as a combination of volunteered geographic information (VGI) and maps, that can be used as guidelines or triggers for other ideas". One participant responded to the question from a geovisual analytics tool development perspective, commenting on the benefit of integrating the D3 visualization framework into more traditional geographic information systems to advance spatial data exploration and reasoning. Lastly, one participant felt that visualization teachers might want to demonstrate SPoTvis to their students to illustrate innovative design in the geovisualization field. Clearly, there exist many exciting opportunities to extend SPoTvis to other research areas, as well as to integrate the tool's novel functionality into more comprehensive spatial analysis software.
The SPoTvis user experience was generally positive and enjoyable. Users appreciated the aspirations, fluidity and novelty of SPoTvis' design and functionality. In some instances, SPoTvis' options for interaction clearly inspired data exploration. The term polarity plot and left/right comparison views were considered by participants to be valuable contributions to support the analysis of perspectives. Users further provided useful suggestions for improving the usability of SPoTvis in future versions, with an emphasis on how color is used and on improvements for the "similar/different" functionality. Moving forward, we plan to explore the use of texture, as suggested by one participant, to alleviate the cognitive overload associated with encoding too many attributes by color. For example, we could represent similar and different spatial units on the map using varying densities of areal patterns. We also plan to develop a more sophisticated measure of similarity, one that is more intuitive and takes into account not just similarities in shared term usage, but also in political leanings and demographic variables. The careful addition of axis and place labels will also be considered to more directly inform users about the variables' meanings and to provide a better sense of place in the map views.
A key strength of the design and functionality of SPoTvis is that it is tailored to a special purpose, bounded in time, space, observations and context. Our longer-term ambitions for the future development of geovisual analytics applications will focus on extending SPoTvis' functionality to be more flexible and scalable. We envision a SPoTvis-inspired application that consumes user-specified data of any applicable types in real time from streaming feeds (not just Twitter, but photo tags, RSS news feeds and others), allowing analysts in any number of disciplines to spatially compare trending or most relevant terms, topics or images in the term polarity plot. The core views and interactions unique to SPoTvis would remain, but analysts would have control over selecting an appropriate number of entities to assess in the term polarity plot and the ability to aggregate their data across meaningful areal units.
Conclusions
Although the above trends and results are interesting, do they tell us anything new? Spatial patterns of political opinions and voting behavior in the U.S. are well documented, but always changing. Our analysis and tool demonstrate that people's conversations on social media often reflect these known trends and are worth monitoring to detect new or shifting patterns. It seems feasible to suggest that applications like SPoTvis may be employed more often in the future to act as real-time forecasters of political winds.
We identified 50 of the most common words and topics in tweets generated in fall 2013 related to the U.S. Government shutdown and ACA implementation. We observed that these keywords touched on political figures, the labor force and domestic life. Furthermore, we found that areas with similar political persuasions tended to tweet about a similar set of topics.
At the same time, it is important to note what people are not talking about. For example, various news stories reported the closure of national parks and museums due to the shutdown, but these topics did not appear in the list of most common words in the tweets we examined. News stories about children being locked out of the zoo may provide dramatic imagery for the media, but we do not find evidence that people are consistently discussing these topics nationwide to the degree that they are discussing certain other subjects (like impacts on their jobs or the politics of the shutdown).
The results shown in SPoTvis represent aggregations that may not appear when using other scales or areal units. At the individual level, many people's political opinions and tweets do not strictly follow party lines, just as at the state level, not every voter chooses the same party. There are plenty of Democrats in Texas or Republicans in Massachusetts whose local influences are underrepresented when examined at the state or national scale. This was a reason we wanted to include congressional districts on the map despite the relatively low number of tweets. Furthermore, party preferences along urban-rural or ethnic divides are not well represented by state lines, while congressional districts may intentionally be drawn to follow these boundaries.
Our research might be strengthened by an increased number of tweets and a wider variety of tweet contributors. It is important in any analysis that uses social media as primary data to consider the biases in these data related to both population bias (e.g., who has access to the Internet and who is most inclined to use these media) and the sampling bias of the algorithms and application programming interfaces through which researchers are allowed to sample the data [30]. We suspect that the strength of the relationship between a unit's Cook PVI and that of its most similar and dissimilar units would be increased if there were more tweets in the dataset, but that there would still be bias due to the difference between the population that the PVI measures and that reflected on Twitter. The addition of some "smart filtering" to detect the influence of overly dominant tweeters in the dataset could also help make the results more meaningful.
An increased number of tweets in the dataset would allow for temporal analysis, examining how themes in conversation changed as the shutdown was approached, endured and resolved. The fluidity of movement offered by the bubbles in the term polarity plot could create an ideal environment for showing how topics shifted in prominence and place of origin during consecutive time periods. The temporal information is readily available in the timestamps of the existing tweet data; however, there are probably not enough tweets in the dataset examined here to create meaningful temporal representations at the congressional district level, although state-level aggregates per day might have high enough frequencies to yield usable data.
Acknowledging the above limitations, we assert that SPoTvis successfully addresses many of the challenges set forward by Elwood [31] for dealing with heterogeneous, qualitative and dynamic spatial data. For example, SPoTvis supported deep, qualitative insight gathering by nine analysts who adopted highly diverse roles seeking to address distinctly different tasks. SPoTvis' design and functionality allowed users to take both data-driven and spatially-driven approaches to exploring, generating new hypotheses from and making inferences about the discourse posted on Twitter surrounding the 2013 U.S. Government shutdown. The interaction between the map and the term polarity plot to compare any two spatial units is a unique visual analytics technique that could be adapted to a variety of datasets and scales, such as RSS news feeds from across the world. Ultimately, SPoTvis succeeds at mining a set of personal expressions via tweets and demonstrating how the themes therein are connected to known patterns of political persuasion in the United States.
Figure 3. Distribution of tweets by congressional district.
Figure 5. Scatterplot with fixed regression lines comparing a state's Cook PVI with the average Cook PVIs of its 10 most similar states (blue) and 10 most dissimilar states (red).
Figure 6. Initial view of SPoTvis comparing Democratic-leaning states with Republican-leaning states.
Figure 7. A blame game at play. (a) SPoTvis comparison of Texas and Massachusetts; (b) comparison of Washington 7 and Washington 5.
Figure 8. A detailed view of the term polarity plot comparing Maryland (left) with New Jersey (right) exposes the concerns of the federal workforce in Maryland.
Figure 9. The set of states with similar keyword use sometimes follows political and cultural lines. Similar states to Massachusetts are shown on the left. Similar states to Utah are shown on the right.
Figure 10. Initial experiences (left) and feelings toward SPoTvis (right). Each graph depicts the frequency of response on a five-step Likert scale with the end points labeled using the terms shown below the x-axis.
Figure 11. Ratings on SPoTvis' design (left), aesthetics (middle) and use of color (right). Each graph depicts the frequency of response on a five-step Likert scale with the end points labeled using the terms shown below the x-axis.
Figure 12. Ratings on SPoTvis' functionality (left), ease of spatial navigation (middle) and Cook PVI toggle frequency (right). Each graph depicts the frequency of response on a five-step Likert scale with the end points labeled using the terms shown below the x-axis.
Table 1. Ten most frequent terms in the Twitter dataset.
Table 3. Data-driven approaches to insight discovery.
Table 4. Spatially-driven approaches to insight discovery. | 2016-01-25T19:18:26.375Z | 2015-03-05T00:00:00.000 | {
"year": 2015,
"sha1": "339405545c22628df26179fc55eae31b5ef3e42b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/4/1/337/pdf?version=1425557097",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "339405545c22628df26179fc55eae31b5ef3e42b",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science",
"Geography"
]
} |
54049501 | pes2o/s2orc | v3-fos-license | Isovector and flavor-diagonal charges of the nucleon
We present an update on the status of the calculations of isovector and flavor-diagonal charges of the nucleon. The calculations of the isovector charges are being done using ten $2+1+1$-flavor HISQ ensembles generated by the MILC collaboration covering the range of lattice spacings $a \approx$ 0.12, 0.09, 0.06 fm and pion masses $M_\pi \approx$ 310, 220, 130 MeV. Excited-states contamination is controlled by using four-state fits to two-point correlators and three-states fits to the three-point correlators. The calculations of the disconnected diagrams needed to estimate flavor-diagonal charges are being done on a subset of six ensembles using the stocastic method. Final results are obtained using a simultaneous fit in $M_\pi^2$, the lattice spacing $a$ and the finite volume parameter $M_\pi L$ keeping only the leading order corrections.
Introduction
This talk presents an update on results given in Refs. [1][2][3] on isovector and flavor diagonal charges of the nucleon using our clover-on-HISQ lattice approach. A summary of the 2 + 1 + 1-flavor HISQ ensembles generated by the MILC collaboration [4], and the number of measurements made on them in the ongoing clover-on-HISQ study is given in Table 1. The improvements made since the results reported in Refs. [1][2][3] are • cost-effective increase in statistics using the truncated solver method and the coherent source sequential propagator technique. • The correction for possible bias in the truncated solver method is now made on all ensembles.
• Addition of a second physical mass ensemble at weaker coupling, a06m135.
• Excited-state contamination (ESC) is controlled using 4-states in the analysis of the 2-point correlation functions and 3-states for the 3-point functions. • Fits to 2-and 3-point functions are done using the full covariance matrix in the mimimization of χ 2 .
• A simultaneous fit in a, M π and M π L is used to extract physical results in the limits a → 0, M π = 135 MeV and M π L → ∞ from lattice data obtained at different values of a, M π and M π L.
Associated results for the isovector form factors, G A (Q 2 ),G P (Q 2 ), G E (q 2 ) and G M (q 2 ), on these ensembles were presented by Yong-Chull Jang at this conference [5]. [4] and used in our clover-on-HISQ study. The a06m310 * and the a06m220 * ensembles represent a second analysis with larger source smearings, σ = 12 and 11, respectively, as described in Ref. [3].
Controlling excited-state contamination
Our goal is to extract the matrix elements of various bilinear quark operators between ground state nucleons. The lattice operator χ(x) = abc q a 1 T (x)Cγ 5 (1±γ 4 ) 2 q b 2 (x) q c 1 (x) used to create and annihilate the nucleon state couples to the nucleon, all its excitations and multiparticle states with the same quantum numbers. The correlation functions, therefore, get contributions from all these intermediate states. This ESC can be evaluated and controlled using fits including as many states as the data allow in the spectral decomposition of the two-and three-point functions. In our study we use: 1 shows data from the a09m220 ensemble and highlights a number of features in the data and control over ESC using the simultaneous fit in t and τ: (i) with increased statistical precision (HP → AMA), the convergence w.r.t. τ is demonstrated to be monotonic in all three charges, g A,S ,T . Previous HP estimates for both g S ,T were affected by a lack thereof. In fact, we now require this monotonic behavior when evaluating the statistical reliablity of data. (ii) Increasing the source smearing size σ = 5.5 → 7.0 reduced ESC in g A,S , but marginally increases it in g T . (iii) The fits including τ = 16 data (right panels) confirm the results of the fits without it (middle panels), indicating convergence.
The renormalized values of the isovector charges, using the renormalization factors given in Ref. [3], are summarized in Table 2. The table also reproduces the CalLat results for g u−d A from Ref. [6] on the five ensembles analyzed by both Collaborations. We compare these results in Sec. 4.
Simultaneous fit in a, M π and M π L
Having calculated renormalized charges at various values of a, M π and M π L, we perform a simultaneous fit to obtain results in the limit a → 0, M π = 135 MeV and M π L → ∞. When fitting data given in Table 2 from the 10 HISQ ensembles, we include only the lowest order correction terms [3]: Eq.
(3) corrects a mistake made in Ref. [3] for the analysis of the isovector g u−d S . The leading chiral term is proportional to M 2 π for the isovector case, and proportional to M π for the flavor diagonal cases. Fig. 2 shows that with reduced errors due to higher statistics data from 4 ensembles (a12m220S , a12m220, a09m220 and a09m310) and the addition of the second physical-mass ensemble a06m135, the behavior versus a, M π and M π L in the simultaneous fits is visibly clearer compared to the "9-point" fits presented in Ref. [3]. There is no significant evidence for finite volume corrections in any of the three charges for M π L > 3.5. There is some dependence of g u−d S on M 2 π . The most evident trends are the positive slope versus a in g u−d A and the negative slope versus a in g u−d S . Based on these fits shown in Fig. 2 and made using Eq. (3), our final estimates for the isovector charges, in the MS scheme at 2 GeV, are: and g u−d T , with higher statistics on the a09m220 ensemble. The left panels give the results based on 8000 HP measurements reported in Ref. [3]. The middle and right panels show new results with 123,392 AMA measurements. While the results from all three fits are consistent, the reliability of the fits, especially for g u−d S , is greatly improved when (i) the monotonic convergence in τ is manifest, and (ii) the fits and the values without (middle panels) and with (right panels) the t sep = 16 data overlap.
Given the improved data and the fits in Fig. 2, the continued 2.5σ deviation of g u−d A from the experimental value indicates that we are underestimating our errors. The largest change from results presented in Ref. [3] is the 1σ increase in the estimate of g u−d S . Most of this increase is due to correcting the form of the leading chiral term, i.e., M π → M 2 π , in Eq. (3). The major source of error in g u−d T is now from the renormalization factor due to the poor convergence of the perturbative matching between the MS and RI-sMOM schemes. Table 2 on the 5 common ensembles are consistent. Our conclusion is that the majority of the difference comes from the final extrapolation in a. While we find a positive slope controlled by the data on the three a = 0.06 fm ensembles, CalLat finds a negative slope anchored by the data on the coarser lattices. So the question is whether the differences in the two methods are manifest only at weaker couplng or are there systematic effects being missed in one or both calculations?
The two sets of calculations are being done on the same 2+1+1-flavor HISQ ensembles, but there are notable differences. These include: (i) Möbius domain wall versus clover for the valence quark action; (ii) gradient flow smearing with t g f /a = 1 versus one HYP smearing to smooth the lattices; a12m310 a12m220L a12m220 a12m220S a09m310 a09m220 a09m130 a06m310 a06m310r 12 a06m220 a06m220r 11 a06m135 extrap. a extrap. (iii) different construction of the sequential propagator. CalLat inserts a zero-momentum projected axial current in all timeslices on the lattice simultaneously. This gives a summed contribution from all timeslices between and on the source and sink points plus all timeslices outside. CalLat thus uses a 2-state fit to g A = C 3 (τ + 1)/C 2 (τ + 1) − C 3 (τ)/C 2 (τ) to extract the charge where C 3 are 3-point functions with the insertion on all timeslices; (iv) CalLat report a much better statistical signal with fewer measurements. The better statistical precision of the CalLat results for a given number of measurements is easy to understand: the CalLat fits to extract g u−d A are based on a range of τ values that is shifted by 6-8 timeslices to smaller τ compared to our fit range. Since the errors in the data increase by a factor of two for every increase in τ by two lattice units, they gain a factor of up to 2 4 . Choosing values of τ within the range we have simulated, our estimates for the quantity they calculate, g A = C 3 (τ + 1)/C 2 (τ + 1) − C 3 (τ)/C 2 (τ), have similar errors. Note, also, that the CPU cost of the CalLat calculation is, ensemble by ensemble, higher because they simulate domain wall fermions and did not use the multigrid algorithm for propagator inversion.
The question, therefore, reduces to why their data can be fit starting at much smaller values of τ? The correction due to ESC in their smeared-smeared data is less than 10% even at τ ∼ 3 on the five common ensembles. The necessary condition to achieve this in our approach is reducing the overlap of the nucleon interpolating operator with the excited/multiparticle states to essentially zero. Since the source smearing used by the two collaborations is similar and the neutron interpolating operator is the same, the difference "must" come from the use of the gradient flow to smear the lattices. Further investigations are needed to confirm this interpretation (similar source smearing on gradient flow smoothed lattices produces sources with much smaller overlap with excited states) since one does not, a priori, expect the gradient flow smoothed lattices to change the overlap with the excited states, but only to reduce ultraviolet fluctuations.
Disconnected Contributions
We have calculated the disconnected contributions of light quarks on 5 ensembles a12m310, a12m220, a09m310, a09m220 and a06m310. For the strange quark we added the physical mass ensemble a09m130 and increased the statistics. The stocastic method used is the same as described in Ref. [1]. The chiral-continuum plots for these data are shown in Fig. 3. The renormalization is carried out using the same factors as for isovector currents. While this has been shown to be a good approximation for g A and g T [7], the same is not true for g S . So the data for g S in Fig. 3 is shown only for completeness. Our estimates for the axial and tensor charges, after a simultaneous chiral-continuum extrapolation are: g l T = 0.0042(79) g s T = 0.0043(34) .
Our new result g s T = 0.0043(34) is an improvement over the previously published value g s T = 0.008(9) [1]. The result for g l T is also still consistent with zero. Based on the current data, it is reasonable to assume that the magnitude of both after extrapolation is 0.01. Therefore, to get a precise value will require higher precision data on more ensembles to improve the chiral-continuum extrapolation in M 2 π and a. Given that we can bound their magnitude to be 0.01, we will continue to neglect the disconnected compared to the connected contribution to g u T and g d T as discussed below. These flavor diagonal tensor charges give the contribution of each quark's electric dipole moment (qEDM) to the neutron EDM as discussed in Refs. [1,2]. They are also probed in the measurements of transversity in deep inelastic scattering: the tensor charges are the integral over the longitudinal momentum fraction of the experimentally measured quark transversity distributions [1,8].
Results for the connected parts of the flavor diagonal charges, using the same renormalization factor as for the isovector currents, are Estimates for all three charges are consistent with those given in Ref. [1], and there is no significant reduction in the errors, which are still dominated by the final simultaneous chiral-continuum extrapolation.
The flavor diagonal axial charges are compared with the experimental values [9]; there is a 2-3$\sigma$ difference between the lattice and experimental results for both $g_A^u$ and $g_A^d$. The analogous results for the neutron are given by the $u \leftrightarrow d$ interchange. From these axial charges, one gets the contribution of the quarks to the spin of the proton, $\Delta\Sigma_q/2 = (g_A^u + g_A^d + g_A^s)/2 = 0.11(5)$.
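The quark spin decomposition itself is a one-line combination; the sketch below spells it out with naive quadrature error propagation (correlations between the charges are ignored). The input values are placeholders of roughly the quoted size, standing in for the tabulated flavor diagonal charges that are not reproduced in this text.

```python
import math

def quark_spin(gu, dgu, gd, dgd, gs, dgs):
    """Delta Sigma / 2 = (g_A^u + g_A^d + g_A^s) / 2, errors added in quadrature."""
    total = 0.5 * (gu + gd + gs)
    err = 0.5 * math.sqrt(dgu ** 2 + dgd ** 2 + dgs ** 2)
    return total, err

print(quark_spin(0.78, 0.03, -0.48, 0.03, -0.05, 0.03))  # placeholder inputs
```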
Summary
This talk presents the current status of our results for isovector and flavor diagonal charges of the nucleons using 10 of the 2+1+1-flavor HISQ ensembles generated by the MILC collaboration [4]. The increase in statistics and the addition of a second physical mass ensemble have improved the fits, both to control excited state contamination and for the final chiral-continuum-finite volume extrapolation. Our estimate $g_A^{u-d} = 1.20(3)$ is 2.5$\sigma$ below the experimental value. We find deviations of similar size for the flavor diagonal charges $g_A^u$ and $g_A^d$.
Results for the tensor charges are stable and the error in them is now dominated by the uncertainty in the renormalization factor. We have corrected an error in the form of the leading chiral correction used in the final simultaneous fit to the data for $g_S^{u-d}$, $M_\pi \to M_\pi^2$. As a result, the estimate $g_S^{u-d} = 1.08(11)$ is about 1$\sigma$ larger than the value reported in Ref. [1]. Our immediate goal is to double the statistics on the second physical mass ensemble a06m135 and finalize the analysis for publication. | 2018-01-09T20:27:47.000Z | 2018-01-09T00:00:00.000 | {
"year": 2018,
"sha1": "5da38e6dc3540a2c39dbe857e8b164af2f1f1278",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2018/10/epjconf_lattice2018_06029.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "c9f9208136bdf1dd792910344a5b4dde494b28b7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251607403 | pes2o/s2orc | v3-fos-license | Efficacy of Off-Label Anti-Amoebic Agents to Suppress Trophozoite Formation of Acanthamoeba spp. on Non-Nutrient Agar Escherichia Coli Plates
Acanthamoeba keratitis (AK) is a dangerous infectious disease, which is associated with a high risk of blindness for the infected patient, and for which no standard therapy exists thus far. Patients suffering from AK are thus treated, out of necessity, with an off-label therapy, using drugs designed and indicated for other diseases/purposes. Here, we tested the capability of the off-label anti-amoebic drugs chlorhexidine (CH; 0.1%), dibromopropamidine diisethionate (DD; 0.1%), hexamidine diisethionate (HD; 0.1%), miltefosine (MF; 0.0065%), natamycin (NM; 5%), polyhexamethylene biguanide (PHMB; 0.02%), povidone iodine (PVPI; 1%), and propamidine isethionate (PD; 0.1%) to suppress trophozoite formation of Acanthamoeba castellanii and Acanthamoeba hatchetti cysts on non-nutrient agar Escherichia coli plates. Of the eight off-label anti-amoebic drugs tested, only PVPI allowed for a complete suppression of trophozoite formation by drug-challenged cysts for all four Acanthamoeba isolates in all five biological replicates. Drugs such as NM, PD, and PHMB repeatedly suppressed trophozoite formation with some, but not all, tested Acanthamoeba isolates, while other drugs such as CH, DD, and MF failed to exert a relevant effect on the excystation capacities of the tested Acanthamoeba isolates in most, if not all, of our repetitions. Our findings suggest that pre-testing of the AK isolate with the non-nutrient agar E. coli plate assay against the anti-amoebic drug intended for treatment should be performed to confirm that the selected drug is cysticidal for the Acanthamoeba isolate.
Introduction
Acanthamoeba spp. are microscopic, single-celled living organisms commonly found in the environment in soil, dust, and various water sources (i.e., fresh, brackish, and sea water) that can cause severe illness such as Acanthamoeba keratitis (AK), granulomatous amoebic encephalitis (GAE), or disseminated infection [1,2]. The life cycle of Acanthamoeba spp. consists of two stages: the actively dividing trophozoite stage, and the cyst, a permanent stage allowing the parasite to better survive hostile conditions. During the trophozoite phase, Acanthamoebae divide mitotically and feed on organic matter, bacteria, and other microbes. Under unfavorable conditions, such as a lack of food, extreme temperatures, or a high or low pH value, the trophozoite transforms into a double-walled cyst form, a process called "encystation". If the conditions become more favorable again, the cyst may convert back into a trophozoite, a process called "excystation" [1,3].
Despite the fact that AK may lead to a complete loss of vision, there is still no specific drug on the market to treat this disease [4]. For this reason, ophthalmologists are forced to use other compounds as off-label anti-amoebic drugs, such as the disinfectant/antiseptic diamidines propamidine isethionate (PD; Brolene), hexamidine diisethionate (HD; Hexacyl), and dibromopropamidine diisethionate (DD; Golden Eye). Other off-label anti-amoebic drugs are the disinfectant/antiseptic biguanides polyhexamethylene biguanide (PHMB; Lavasept) or chlorhexidine (CH; Curasept) (reviewed in [4]). In addition, the antibiotic neomycin, the disinfectant/antiseptic povidone iodine (PVPI), and the anti-leishmanial drug hexadecylphosphocholine (MF; miltefosine) have been described as effective against Acanthamoeba spp. isolates. Some authors have also suggested the use of antifungals such as miconazole, clotrimazole, voriconazole, or natamycin (NM) in AK [4].
In the case of bacterial keratitis, following a corneal smear and culture, the antibiogram helps in finding the appropriate antibiotic medication to treat the keratitis. In contrast, there is neither a standardized treatment regime nor a standardized procedure for AK to test the in vitro susceptibility of the causative agent against the potential off-label drugs, and there is no generally accepted method to define Acanthamoeba susceptibility to potential anti-amoebic agents. However, it would be highly valuable for clinicians to define the anti-amoebic susceptibility of the given clinical isolate as soon as possible, so that an aggressive (eye drops hourly), but effective, topical treatment can be introduced at an early stage.
Narasimhan et al. [5] and Kowalski and colleagues [6] described non-nutrient agar Escherichia coli plate assays to observe Acanthamoeba growth properties under in vitro conditions. In a recent study, we reported that the survival rates of drug-treated trophozoites/cysts determined by enzymatic-and dye-based viability assays might differ substantially from those yielded with the non-nutrient agar E. coli plate assay and proposed the latter test system as the gold standard for studying the treatment efficacy of drugs against Acanthamoeba spp. isolates [7]. In that study, we also observed that the off-label anti-amoebic drugs PHMB, PD, NM, and PVPI were more effective against the A. castellanii strain 1BU than the off-label anti-amoebic drugs CH, HD, DD, and MF under in vitro conditions; however, this was without providing a quantitative evaluation of the observed findings made with the non-nutrient agar E. coli plate assay [7]. In the study presented here, we aimed to quantify the effectiveness of the off-label anti-amoebic agents PHMB, CH, HD, PD, DD, NM, MF, and PVPI in killing cysts of the A. castellanii strains 1BU (sequence type T4), 3ST (T4), and 9GU (T4), and the A. hatchetti strain 11DS (T6), under in vitro conditions. Additionally, we aimed to establish a method which could offer reliable testing of the anti-amoebic susceptibility of Acanthamoeba spp. isolates.
Medium and Non-Nutrient Agar Preparation
Peptone-yeast-glucose medium (PYG), Neff's constant-pH encystment medium, and Page's amoeba saline (PAS) were prepared as described in [8]. For the non-nutrient agar, 15 g of agar (Sigma-Aldrich) was mixed with 100 mL PAS and 900 mL distilled water and autoclaved at 121 °C for 15 min.
Acanthamoeba Isolates
The A. castellanii strains 1BU and 3ST (both sequence type T4, isolated from corneal specimens of keratitis patients [9]), as well as 9GU (sequence type T4, isolated from the contact lens of a non-AK patient [9]), and the A. hatchetti strain 11DS (sequence type T6, isolated from a contact lens case of a keratitis patient [9]) were obtained from the Division for Mycotic and Parasitic Agents and Mycobacteria of the Robert Koch Institute, Berlin, Germany.
Acanthamoeba Cultures
Culturing of 1BU, 3ST, 9GU, or 11DS trophozoites was conducted as described in [8]. Briefly, trophozoites were grown in tissue culture flasks containing 5 mL of PYG broth medium at 30 °C, in an airtight container. Encystment was induced by replacing PYG broth with Neff's constant-pH encystment medium after trophozoites had reached confluence. After one week of cultivation in Neff's constant-pH encystment medium at 30 °C, cysts were scraped off and collected by centrifugation for 10 min at 800× g. The sedimented cysts were washed once with 5 mL PAS and subsequently resuspended in PAS to a concentration of 3.30 × 10⁶ cysts/mL. This cyst solution was stored at 4 °C until use.
Off-Label Anti-Amoebic Agents and Their Preparation
For the experiments carried out here, the off-label anti-amoebic agents and concentrations listed in Table 1 were used. CH, DD, HD, and MF were obtained as powders, PHMB as a 20% solution, and PVPI as a 7.5% solution. PD was available in a concentration of 0.1% as Brolene® eye drops, and NM in a concentration of 5% as Natamet® eye drops. Agents were dissolved or diluted in PBS for treatment of cysts to final concentrations corresponding to the clinically used concentrations [2,10]. Cysts treated with PBS or Lysoform (20%, Rossmann GmbH, Berlin, Germany) served as controls.
Non-Nutrient Agar Escherichia coli Plate Assay
The non-nutrient agar Escherichia coli plate assay was carried out as described in [8]. Briefly, cells of E. coli strain IM08B [11] were grown overnight on sheep blood agar plates (BD, Heidelberg, Germany) at 35 °C. Freshly grown IM08B colonies were picked with a cotton swab and suspended in PAS to a turbidity equivalent to a McFarland standard of 4.5. A 100 µL aliquot of this suspension was spread out onto the surface of a non-nutrient agar plate.
In the second step, 2 × 10⁴ cysts were mixed with 100 µL PBS in the absence and presence of one of the putative anti-amoebic agents (Table 1) and were incubated for 2 h at 30 °C and 650 rpm to prevent sedimentation. Then, 10 µL of the drug-containing cell suspensions was transferred into fresh tubes and supplemented with 990 µL of PBS (thus diluting the drugs by 100-fold), and the cysts were carefully suspended by pipetting the cell suspension gently up and down. Next, 10 µL of the cell suspensions (~20 cysts) was pipetted onto the middle of the non-nutrient agar E. coli plates that were marked on the backside with a 6 × 6 mm² square and crosslines (Figure 1), and the suspension was allowed to desiccate for about 10 min. In the next step, bright-field images of the inoculation regions were taken to determine the exact numbers of cysts that were placed onto the agar plates.
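As a sanity check on the cell-number bookkeeping in this protocol, the short Python sketch below recomputes the dilution arithmetic; the numbers are taken directly from the text above and the variable names are ours.

```python
# 2 x 10^4 cysts incubated in 100 uL of drug/PBS mix
cysts = 2e4
conc_per_ul = cysts / 100.0                 # 200 cysts/uL

# 10 uL transferred into 990 uL PBS -> 100-fold dilution
dilution = (10.0 + 990.0) / 10.0
diluted_conc = conc_per_ul / dilution       # 2 cysts/uL = 2000 cysts/mL

spotted = diluted_conc * 10.0               # 10 uL spotted per plate
print(diluted_conc * 1000, "cysts/mL;", spotted, "cysts per plate")  # ~20
```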
Acanthamoeba-inoculated plates were incubated upside up for 24 h at 30 °C. Thereafter, plates were sealed with parafilm (Pechiney, Menasha, WI, USA) and were incubated upside down for up to 3 weeks at 30 °C.
Bright-field images were taken with a Leica DMI4000 B microscope every week for up to 3 weeks with a 10-fold objective and 8-fold magnification changer, which amounts to a total of 80-fold magnification. Twenty-five images were taken from the center (to cover the complete cyst solution spotting area) and eight images from the periphery of each plate at each time point (Figure 1). These experiments (with each off-label anti-amoebic agent) were repeated five times on different days with freshly prepared drugs. All captured images were evaluated manually. Cysts and trophozoites on each picture were counted by the operator. Dark circular structures were identified as cysts, whereas the lighter oval to polygonal structures were identified as trophozoites.
Statistical Analysis
Statistical analysis was performed using GraphPad Prism, version 7.02. Data were analyzed with a Kruskal-Wallis test followed by Dunn's post hoc test to compare the number of trophozoites in different treatment groups in relation to the PBS control at the analyzed time points. P values < 0.05 were considered statistically significant.
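A minimal sketch of this analysis in Python is shown below, assuming the third-party package scikit-posthocs for Dunn's test; the trophozoite counts are made up for illustration and do not correspond to the study data.

```python
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # third-party package "scikit-posthocs"

df = pd.DataFrame({
    "treatment": ["PBS"] * 5 + ["PVPI"] * 5 + ["CH"] * 5,
    "trophozoites": [40, 35, 50, 42, 38, 0, 0, 0, 0, 0, 33, 41, 29, 37, 45],
})

groups = [g["trophozoites"].values for _, g in df.groupby("treatment")]
H, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

# Dunn's post hoc test; read off the column of p values against the PBS control
posthoc = sp.posthoc_dunn(df, val_col="trophozoites",
                          group_col="treatment", p_adjust="bonferroni")
print(posthoc["PBS"])
```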
Results
Cysts are the more robust life stage of Acanthamoeba spp. and are considered less susceptible to drug treatment. In order to prevent a recurrent AK infection, the drug of choice also needs to target the cyst stage of the amoeba [4]. Hence, we tested the efficacy of eight putative anti-amoebic agents (0.02% PHMB, 0.02% CH, 0.1% HD, 0.1% PD, 0.1% DD, 5% NM, 0.0065% MF, 1% PVPI) against the cyst stages of four different Acanthamoeba isolates in a single treatment approach (Figure 2).
Figure 2. Impact of the drug treatment on the excystment and growth behavior of A. castellanii 1BU, 3ST, and 9GU and A. hatchetti 11DS cysts on non-nutrient agar E. coli plates. Cysts were incubated in PBS medium either in the absence of drugs (control) or with the drugs indicated; afterwards, they were diluted in medium to a cell density of 2000 cysts/mL. Then, 10 µL of the solution (~20 cysts) was pipetted onto the centers of fresh non-nutrient agar E. coli plates. Cyst-inoculated plates were subsequently incubated at 30 °C for up to 3 weeks, and images of the central and peripheral regions after 3 weeks of incubation are shown. Images are representative of five independent biological experiments. The disinfectant lysoform served as a killing control. Dark circular structures were identified as cysts, whereas transparent amorphous structures were identified as trophozoites. CH, chlorhexidine; DD, dibromopropamidine diisethionate; HD, hexamidine diisethionate; MF, miltefosine; NM, natamycin; PHMB, polyhexamethylene biguanide; PVPI, povidone iodine; PD, propamidine isethionate; C, central region; P, peripheral region. Scale bar, 100 µm.
In our assay, cysts were co-incubated with the drug for 2 h, and defined numbers of drug-treated cysts were subsequently spotted onto the center of E. coli-coated non-nutrient agar plates (about 15-20 cysts per plate). Monitoring the central and peripheral regions of the cyst-inoculated plates for up to 3 weeks by light microscopy allowed us to determine whether drug-treated cysts retained their ability to form trophozoites in an excystmentstimulating environment (i.e., on an E. coli lawn formed on the surface of a non-nutrient agar plate). Representative images of cysts and trophozoites of the 1BU, 3ST, 9GU, and 11DS isolates on the non-nutrient agar E. coli plate at the center and periphery after 3 weeks are displayed in Figure 2 (higher-resolution versions of these images can be found in Supplementary Materials).
On all plates, remnants of the original cysts spotted in the central region of the plate remained observable in the same location during the entire follow-up. This location did not change, irrespective of whether an excystation took place or not. However, with the microscopic setup used in this study, discrimination between remnants of cysts releasing a trophozoite and cysts that did not excyst was not possible. Except for the disinfectant Lysoform, which was used as a killing control in this study, only one of the eight drugs, 1% PVPI, allowed for complete suppression of trophozoite formation for all four Acanthamoeba test strains in all five repetitions (Figure 3). Other drugs such as 5% NM and 0.02% PHMB effectively suppressed outgrowth of cysts in three out of the four tested Acanthamoeba isolates, while 0.1% PD and 0.1% HD repeatedly prevented trophozoite formation for only two and one isolate, respectively. The anti-amoebic drug candidates 0.02% CH, 0.1% DD, and 0.0065% MF, on the other hand, failed to suppress the excystation of drug-challenged cysts on the E. coli lawn in all cases (Figure 3). Notably, the anti-amoebic drug candidates that allowed the suppression of trophozoite formation with at least two out of the four Acanthamoeba test strains failed to be effective with different isolates. While 5% NM treatment of cysts still allowed for trophozoite formation by drug-treated A. hatchetti 11DS cysts in four out of five repetitions, excystation for all three A. castellanii test strains (i.e., 1BU, 3ST, 9GU) was effectively prevented by this drug. The diamidine PD at a concentration of 0.1% suppressed trophozoite formation of A. castellanii isolate 1BU and of A. hatchetti strain 11DS, but failed to prevent trophozoite formation of drug-challenged A. castellanii 3ST and 9GU cysts in one and three out of five repetitions, respectively. The biguanide PHMB, at a concentration of 0.02%, suppressed trophozoite formation of drug-challenged A. castellanii isolates 1BU and 9GU, and of A. hatchetti strain 11DS, but failed to do so for drug-challenged A. castellanii 3ST cysts in one out of five repetitions. Comparing the number of plates featuring trophozoites anywhere on the plate at a given time revealed a consistency in trophozoite-negative/positive plates in 48 out of 50 experiments. Only in two experiments were trophozoites detected after three weeks of incubation, but not after one week, suggesting that an observation period of one week is already sufficient to predict whether the anti-amoebic drug candidate chosen for treatment is cysticidal for the Acanthamoeba isolate intended to be killed.
Discussion
AK is an infectious disease, which is associated with a high risk of blindness for the infected patient, and thus far, no standard therapy exists. Patients suffering from AK are usually treated with an off-label therapy with drugs designed and indicated for other diseases, for example, the disinfectant PD, which is used in ophthalmology as a treatment option for minor bacterial or fungal infections of the eye or eyelid [12]. Another example is MF, which was originally developed as a cancer medication but is currently used as a treatment option against cutaneous leishmaniasis in humans and animals [13].
Several clinical reports indicate that the drugs PD and MF have also succeeded in treating individual AK cases [12]. However, information on whether these drugs work on different Acanthamoeba spp. isolates is usually missing [9]. In order to fill this gap, this study examined the in vitro activity of several drugs suggested as off-label therapy options for AK against four different Acanthamoeba spp. isolates of sequence types T4 and T6. In the process, the focus was on cysts, the dormant stage of this amoeba, which are usually less susceptible to drug treatment than the actively dividing trophozoites [8,14]. We treated the cyst suspension with each drug for 2 h, then diluted the cyst and drug suspension by 100-fold, and placed an aliquot of the diluted cyst/drug mixture onto the center of a non-nutrient agar E. coli plate, thereby transferring lower concentrations of the drug onto the test plate. This method differs from the in vitro drug treatment procedure reported in other studies [5,6], in which the drug was completely withdrawn from the cysts after the incubation step and before the drug-challenged cysts were seeded in/onto the growth medium. However, we found it important to observe the cyst behavior under the conditions described here, as in vivo, the anti-amoebic drugs are also continuously in contact with the pathogen while being diluted in human tear fluid and the deepithelialized corneal tissue over time. We found that, under our test conditions, only one of the suggested AK drug candidates, the disinfectant/antiseptic PVPI, effectively prevented the excystment and formation of trophozoites for a period of at least 3 weeks (suggesting a cysticidal efficacy of 100%) on all four Acanthamoeba spp. isolates. Most of the AK drug candidates, on the other hand, worked effectively on cysts of some, but not all, Acanthamoeba spp. isolates tested in this study. However, the disinfectants/antiseptics CH and DD and the anti-leishmanial drug MF failed to exert a relevant killing capacity against cysts of all four tested Acanthamoeba spp. isolates in our in vitro test system (i.e., pre-treating cysts with the drug for 2 h, and culturing the drug-treated cysts on non-nutrient agar E. coli plates for up to 3 weeks). Our findings for CH and MF are in contrast to several in vitro and in vivo studies reporting promising cysticidal activities of these compounds against Acanthamoeba spp. isolates [5,10,15-20]. Redd and colleagues [16] recently reported minimum cysticidal concentrations (MCCs) for CH between 3.1 µg/mL and 25 µg/mL when testing the cysticidal activity of the disinfectant on nine human AK isolates, which are about 8- to 65-fold lower than the CH concentration used in our assays (0.02% = 200 µg/mL). Narasimhan and colleagues [5] reported MCCs for CH in the range of 1.6 µg/mL to 100 µg/mL for 19 AK isolates when the compound was co-cultured with the cysts for 48 h. A promising in vivo activity of CH was reported by Kosrirukvongs and colleagues [18] on AK isolates when the drug was applied hourly for one month followed by four times a day for up to nine months. The anti-leishmanial drug MF was reported to display a significant cysticidal activity on Acanthamoeba spp. in vitro, although MF failed to completely prevent trophozoite formation in that study, even when the drug was co-cultured with the cysts for 72 h [19].
A more recent study identified MCCs for MF in the range of 0.001% to 0.0013% for five AK isolates and reported a promising in vivo activity of topical MF as monotherapy in the treatment of AK [20]. Satisfactory in vivo activity of MF against AK was also reported by Thulasi and colleagues [17] in a clinical case study, in which the drug was applied perorally. The discrepancy between our observations and the findings listed above might be, at least in part, explained by the fact that in our in vitro test system, cysts were in contact with the drug for two hours only, whereas in the other in vitro studies and in the in vivo setting, the residence time of the drug on cysts was much longer. Support for this hypothesis is given by earlier findings made by Sunada and colleagues [10], who reported a 100% cysticidal effect for 0.02% chlorhexidine gluconate on AK isolates if the cysts were exposed to the drug for 24 h, while trophozoite formation was observed in all repetitions when the cysts were co-cultured with the drug for one hour only.
We also observed that some of the off-label AK drug candidates were highly effective (no trophozoites formed within 3 weeks post-treatment) in some of our biological replications but failed to kill all cysts in the other replications for reasons which are still unknown. The latter observation indicates that false positive or false negative results might occur if the in vitro drug efficacy is tested only once with the non-nutrient agar E. coli plate assay. However, in the vast majority of our repetitions (48/50), the efficacy of the drug against the Acanthamoeba spp. isolate was already reliably assessable after one week post-treatment. Thus, we propose that the start of an empirical treatment of AK should be accompanied by an in vitro-based drug efficacy study on cysts, which should be carried out in at least two independent experiments. Furthermore, an observation period of one week is usually sufficient to assess the treatment efficacy. This testing procedure of the in vitro susceptibility of the AK isolate with the non-nutrient agar E. coli plate assay against the intended anti-amoebic drug is likely to offer valuable information at an early stage of the treatment regime in terms of whether the drug is likely to be active on the Acanthamoeba isolate in vivo.
Study limitations: in our study, only a small number of Acanthamoeba isolates were tested with only one drug concentration by co-culturing cysts and the drug for 2 h. Thus, we cannot exclude that some of the anti-amoebic drug candidates such as 0.02% CH, 0.1% DD, and 0.0065% MF, which failed to be cysticidal in our study, might be highly effective against other AK isolates, while 1% PVPI, which was cysticidal for all four Acanthamoeba isolates tested here, might fail to exert a cysticidal effect on other AK isolates. Similarly, higher concentrations of the drugs and/or longer co-incubation intervals might render drugs more effective. As earlier observations also indicated that a good in vitro activity of a certain anti-amoebic drug candidate might not necessarily correlate with the clinical outcomes of AK [14], we also cannot exclude that pre-testing of the in vitro susceptibility of the AK isolate with the non-nutrient agar E. coli plate assay against the anti-amoebic drug intended for treatment might indicate a susceptibility of the AK isolate that is not seen in vivo.
Conclusions
Our findings strongly suggest that an empirical treatment of an AK patient with any of the AK drug candidates studied here should be accompanied by an in vitro activity testing of the AK isolate against the chosen drug. This test can determine whether the off-label drug is also highly effective against the cyst stage of this isolate. Such a procedure could reduce the risk of recurrent AK infections, since theoretically, even one surviving cyst could induce a new episode of AK in the absence of treatment. Monitoring trophozoite formation of drug-challenged Acanthamoeba spp. cysts on the non-nutrient agar E. coli plate for 1 week, in experiments repeated at least twice, is sufficient to reliably inform us about the efficacy of the tested drugs to prevent excystation. In our opinion, this is an important measure to estimate the overall susceptibility of the given clinical isolate to the anti-amoebic drugs considered for treatment.
Data Availability Statement:
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request. | 2022-08-17T15:11:40.223Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "bf1416ad25bef7b328f20cfe46ee6c32ebd746f3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/8/1642/pdf?version=1660390747",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "624dfa76df76ebb7ba9f104dead1c2ecab4b6200",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12673242 | pes2o/s2orc | v3-fos-license | Detection of oral streptococci in dental biofilm from caries-active and caries-free children
This work correlated the presence of oral streptococci in dental biofilm with clinical indexes of caries and oral hygiene in caries-active and caries-free children. The presence of S. mutans and/or S. sobrinus in the dental biofilm does not indicate a direct risk for developing dental caries.
It is well established that microorganisms have an important role in caries etiology (13). The oral streptococci, especially Streptococcus mutans and Streptococcus sobrinus have often been associated with dental caries in humans (18). Other microorganisms, such as Streptococcus mitis and Streptococcus salivarius have also been linked with the disease or absence of it. Their interplay within the dental biofilm is an important feature for the establishment and maintenance of the oral microflora and the development of a cariogenic dental plaque (16).
The detection and identification of oral streptococci in the dental biofilm is considered to be an important step for the understanding of dental caries. Several methods have been proposed to identify and differentiate oral streptococci: biochemical tests (24), immunologic and genetic methods with DNA probes (4), microbiologic methods (21), polymerase chain reaction (PCR) (8) and variations of this technique, such as real-time PCR (26) and nested PCR (22). The PCR method is faster, more sensitive and specific than the current microbiologic methods (9).
However, for clinical studies, the best predictor of dental caries is the past caries experience. This information is limited since the caries risk can be already established (2).
The aim of the present study was to correlate the presence of oral streptococci in the dental biofilm of children with different caries patterns, using specific primers to identify S. mutans, S. sobrinus and S. salivarius using the PCR technique.
Ten children were invited to participate in this study according to the following inclusion criteria: age ranging from 6-8 yr; attending the same school and under the same dietary pattern; presenting erupted permanent molars and different patterns of dental caries activity (DMFT/dmft>0 or DMFT/dmft=0). Children taking any medication during the study were excluded. The protocol was approved by the Ethical Board of the Health Science Center at the Federal University of Paraíba (protocol 231/04), and informed consent was obtained from the children's parents.
The clinical indexes obtained were: gingival bleeding index (GBI); simplified oral hygiene index (OHI-S) (6), and dental caries (DMFT and dmft) (25). The following salivary tests were also carried out: salivary flow rate, buffer capacity and oral streptococci counting (OSC).
The dental biofilm samples were obtained from the buccal surface of the first (deciduous or permanent) molar on the left side of the lower jaw, using sterile dentine spoons. The material was dispersed in sterile tubes containing brain heart infusion (BHI; Difco). After biofilm collection, the children were instructed to chew paraffin wax for 5 min, and then to spit into a sterile tube. The stimulated saliva and biofilm samples were kept on ice, transported immediately to the laboratory and examined within 2 h of collection.
The buffer capacity was obtained adding 1 ml of saliva to 3 ml of HCl (0.005%), and the final pH was measured.
The oral streptococci count was obtained after serially diluting the saliva samples with sterilized saline, inoculating on MS agar plates, and incubating at 37 °C for 2 days in a 5% CO₂-enriched atmosphere. The microorganisms from saliva and dental biofilm samples were routinely cultured in brain heart infusion broth (BHI) (Difco, USA), mitis salivarius agar (MSA) (Difco, USA) and mitis salivarius agar supplemented with 440 mmol l⁻¹ sucrose, 39 mmol l⁻¹ potassium tellurite and 0.2 units ml⁻¹ of bacitracin (MSB).
Saliva samples were vortexed for 30 s and serially diluted (1:10, 1:100 and 1:1000) in isotonic saline solution. The samples were then inoculated on MSB agar plates, in duplicate, and incubated at 37 °C for 2 d in a 5% CO₂-enriched atmosphere. Before the examination, the plates were left at room temperature for 24 h. To avoid bias, all plates were processed and examined by the same investigator. Colonies of oral streptococci were identified morphologically and counted. The results are expressed as CFU ml⁻¹. The dental biofilm samples dispersed in BHI were vortexed for 30 s, plated in duplicate on agar plates of MS and MSB and incubated at 37 °C for 2 d in a 5% CO₂-enriched atmosphere. Aliquots of oral streptococci from the dental biofilm samples were stored in tubes containing Skim Milk medium (Difco, USA).
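For illustration, the CFU calculation implied by this counting procedure is sketched below; the colony counts are invented and the plated volume is an assumption, since it is not stated in the text.

```python
def cfu_per_ml(colony_counts, dilution, plated_volume_ml=0.1):
    """Mean colony count of duplicate plates, scaled by dilution and plated volume."""
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution / plated_volume_ml

# e.g. duplicate plates of the 1:1000 dilution showing 52 and 48 colonies
print(cfu_per_ml([52, 48], dilution=1000))  # 5.0e5 CFU/ml
```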
The microorganisms (100 μl) stocked in Skim Milk were grown in 5 ml of BHI at 37 °C for 24 h in a 5% CO₂-enriched atmosphere. DNA extraction from the microorganisms of the dental biofilm samples was carried out according to Buikema et al. (3), modified. The samples were kept at −20 °C.
PCR was conducted using specific primers for the glucosyltransferase (GTF) enzyme to detect S. mutans, S. sobrinus and S. salivarius, following a modified protocol of Hoshino et al. (8). The PCR products were analyzed by 2.0% agarose gel electrophoresis after staining with ethidium bromide. The gels were photodocumented using ImageMaster (Amersham Pharmacia Biotech, USA) for subsequent analysis.
The product size in each species was in accordance with the expected size. S. sobrinus was not detected in any sample. S. salivarius was identified in 80% of dental biofilm samples cultivated in MSA and 55.5% in MSB. S. mutans was identified in 60% of the dental biofilm samples cultivated in MSA and 78% in MSB (Table 1). There was no clear relationship between caries experience and caries activity on one hand and oral streptococci counts (OSC) on the other. Some studies found a relationship between high OSC (above 10⁶ CFU/ml) and high caries indexes (5,15). On the other hand, Loesche and Straffon (12) demonstrated that caries can occur in the absence of S. mutans, and Matee et al. (14) observed that the level of oral streptococci in the saliva of children cannot predict future caries.
No correlations were observed between the salivary tests (including OSC) and high or low caries activity. These findings are in accordance with Sundin et al. (23), who also found a weak or non-existent relationship between these variables. However, Ravald and Birkhed (20) demonstrated that individuals with a low salivary flow rate have a higher predisposition to cariogenic activity.
In the present work, MSB medium selected the microorganisms of the mutans group (S. mutans and S. sobrinus) and S. salivarius of the salivarius group. These findings are in accordance with Yoo et al. (27) who concluded that MSB medium is not specific for selecting streptococci of the mutans group, suggesting that a new selective medium is required for reliable isolation of mutans streptococci.
In this work, S. mutans and S. salivarius were identified in the dental biofilm of patients with HCA and LCA (Table 1). The association of S. mutans and S. sobrinus with carious lesions has been observed previously. The differentiation between S. mutans and S. sobrinus is important due to their different behavior in the initial colonization phase and different virulence mechanisms (11). Considering the cariogenicity of S. salivarius, few studies have related this bacterium to dental caries (1).
S. sobrinus is frequently present at a lower level than S. mutans (17) due to its inability to carry N-acetylglucosamine. This is an energy-requiring process which depletes the intracellular levels of phosphoenolpyruvate (7) and reduces the energy inside the microorganism that can be used for other purposes. MSB medium can also inhibit the growth of S. sobrinus. Pereira et al. (19) demonstrated that S. mutans has ability to inhibit plaque formation by S. sobrinus and recolonize surfaces. Individuals with increased numbers of mutans streptococci and lactobacilli were associated with increasing prevalence of caries (10).
Few studies have evaluated the prevalence of S. salivarius in dental biofilm. In general, this microorganism is identified on the tongue. In this study, this microorganism was frequently identified in cultivated samples of dental biofilm in MSA and MSB, as well as in patients with HCA and LCA. Similar findings were observed by Hoshino et al. (8), who identified this microorganism in 9 of the 10 salivary samples analyzed in caries-free patients, and patients with high caries indexes (DMFT and dmft). S. salivarius showed a high prevalence in dental biofilm samples in this study, and it may cooperate with S. mutans to form a cariogenic dental plaque. This must be investigated further in future studies.
In spite of the low number of patients in our study, it could be expected that if a direct and strong correlation between OSC and dental caries existed, this relationship would have been apparent between these caries risk groups. The unclear relation between specific microorganisms and clinical caries parameters supports the hypothesis that other factors must also be operating for caries development (e.g., lack of oral hygiene and/or a sugar-rich diet). Nevertheless, the multifactorial etiology of dental caries does not invalidate the use of PCR for microorganism detection, since this technique can be a reliable tool for bacterial identification in the complex biofilm environment. Finally, it can be concluded that the presence of S. mutans and/or S. sobrinus in the dental biofilm does not indicate a direct risk for the development of dental caries.
ACKNOWLEDGMENTS
We are grateful for the technical assistance of Amely Branquinho Martins and Teresa Cristina S. L. Grisi. We are also thankful to Itácio Padilha and Marcela Lins. This work was supported by CAPES and CNPq. | 2017-06-27T08:51:27.282Z | 2008-10-01T00:00:00.000 | {
"year": 2008,
"sha1": "8d2465868f0bcc0e6258ac22b6ca7d50090f6759",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/bjm/v39n4/arq09.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d2465868f0bcc0e6258ac22b6ca7d50090f6759",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55270490 | pes2o/s2orc | v3-fos-license | Optimum Fermentation Condition of Soybean Curd Residue and Rice Bran by Preussia aemulans using Solid-State Fermentation Method
An environmental method for using soybean curd residue (SCR) and rice bran (RB) was developed in this study. SCR and RB were utilized as growth media for Preussia aemulans, a new fungus isolated from a Cordyceps sinensis fruiting body. According to an orthogonal test and Duncan's multiple range test, the optimum fermentation conditions of fermented SCR and RB for producing polysaccharide, adenosine and ergosterol were determined. Under the optimum fermentation condition of SCR, the polysaccharide, adenosine and ergosterol contents reached 39.18 ± 1.06 mg/g dry matter, 127.94 ± 1.82 mg/100 g dry matter and 37.53 ± 0.11 mg/100 g dry matter, respectively. Under the optimum fermentation condition of RB, the contents of polysaccharide, adenosine and ergosterol were also enhanced 3-fold, 10-fold and 10-fold, respectively. Therefore, the fermented SCR and RB could be utilized as nutritious functional foods or food additives in the future. Keywords: Cordyceps sinensis, Preussia aemulans, soybean curd residue, rice bran, solid-state fermentation
Introduction
In recent years, due to serious economic and environmental concerns, the utilization of food by-products is expected to increase and become more efficient. Thus, the "reduce, reuse, and recycle" (3R) approach to by-products is becoming more and more important in food industries (Wang & Nishino, 2008).
Soybean curd residue (SCR) is produced by the tofu industry in China and Japan. It was once consumed as a traditional food, but modernization and urbanization of the lifestyle have reduced its status to that of a mere industrial waste, which is now mainly incinerated like other industrial wastes (Ohno, Ano, & Shoda, 1996; O'Toole, 1999). The main disadvantage of SCR is natural spoilage when storage is not under refrigeration. In Japan, 0.7 million tons of SCR are disposed of annually, mostly by incineration, which has caused severe environmental pollution (Mizumoto, Hirai, & Shoda, 2006). In fact, SCR is rich in carbohydrate, protein and many other nutrients, suggesting that it is a potential source of low-cost medium for the growth of mycelia. Many researchers have investigated the possibility of bioconversion of the residues by submerged and solid-state cultivation (Yokoi, Maki, Hirose, & Hayashi, 2002; Shi, Yang, Li, Wang, & Zhang, 2011).
To reuse SCR and RB, reduce waste and recycle organic material, and given the nutritional profiles of SCR and RB, they could be considered as growth media for fungi. In this study, SCR and RB were used as culture media for Preussia aemulans (P. aemulans), which was isolated from a Cordyceps sinensis fruiting body. The objective of this research was to find the optimum fermentation conditions that maximize the quantities of polysaccharide, ergosterol and adenosine using SCR and RB, respectively. Fermented SCR and RB were evaluated as potential functional animal feeds that could substitute for antibiotics added to feed and improve food safety.
Isolation and Cultivation of P. aemulans
The fruiting body of Cordyceps sinensis was purchased from Qinghai, China, and the isolated P. aemulans mycelium (SIID11759-01) was identified by Techno Suruga Laboratory Co., Ltd., Japan. The stroma of the Cordyceps sinensis fruiting body was sterilized with ethanol three times, air-dried, cut into small segments and transferred to a slant tube to incubate for 7 days at room temperature. White mycelium appeared on the surface during slant fermentation. Then, the mycelium was transferred to an agar medium, which contained (per liter): 20 g of sucrose, 10 g of peptone, 20 g of agar powder, 1.5 g of MgSO₄, and 3 g of KH₂PO₄. After 7 days of culture, when white mycelium appeared on the surface of the medium, the mycelium was transferred into the liquid medium, which contained (per liter): 20 g of sucrose, 10 g of peptone, 4 g of potato powder, 1.5 g of MgSO₄, and 3 g of KH₂PO₄. The Cordyceps sinensis mycelium was incubated in a 200 mL flask with 100 mL of PDA liquid medium, and the mixture was cultured under stationary conditions for 7 days. After the stationary culture, the P. aemulans mycelium was inoculated onto SCR and RB following the orthogonal test design.
Orthogonal Test Design
SCR was obtained from the Inamoto tofu factory in Tsukuba, Japan. The carbon-to-nitrogen ratio, moisture content, and pH value of SCR were 10.8, 80% and 5.5, respectively. Based on these initial conditions, the fermentation conditions were designed to investigate the optimum conditions for the yield of polysaccharide, ergosterol and adenosine by solid-state fermentation. The carbon source, nitrogen source (3% w/w), adding dosage of carbon source and the fermentation time were regarded as correlated factors of the culture condition. The optimum fermentation condition was obtained by an orthogonal layout L₉(3⁴) in a 200 mL flask with 20 g of SCR. The levels of the factors are shown in Table 1. The inoculum size of P. aemulans mycelium (liquid medium) was 20% (v/w). After fermentation, the fermented SCR-mycelium mixture was dried and ground into powder for further experiments.
RB was collected from an automatic rice-polishing machine in Tsukuba, Japan. The carbon-to-nitrogen ratio and moisture content of RB were 12 and 10%, respectively. Based on these initial conditions, the fermentation conditions were designed to investigate the optimum conditions for the yield of polysaccharide, ergosterol and adenosine by solid-state fermentation. The carbon source, nitrogen source (3% w/w), adding dosage of carbon and nitrogen sources, moisture content and the fermentation time were regarded as correlated factors of the culture condition. The optimum fermentation condition was obtained by an orthogonal layout L₁₆(4⁵) in a 500 mL flask with 20 g of RB at various moisture contents. The levels of the factors are shown in Table 3. The inoculum size of P. aemulans mycelium (liquid medium) was 20% (v/w). After fermentation, the fermented RB-mycelium mixture was dried and ground into powder for further experiments. Mean values are means of three determinations with standard deviation (±).
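The range ("R value") analysis referred to in the Results can be illustrated with the canonical Taguchi L₉(3⁴) array used for the SCR experiment; the yields below are placeholder numbers, not the measured values in Table 1.

```python
import numpy as np

# Canonical L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
yields = np.array([21.0, 31.4, 25.2, 28.9, 24.1, 19.7, 26.5, 30.2, 22.8])

for factor in range(L9.shape[1]):
    level_means = [yields[L9[:, factor] == lvl].mean() for lvl in (1, 2, 3)]
    R = max(level_means) - min(level_means)       # range over level means
    best = int(np.argmax(level_means)) + 1        # level with the highest mean
    print(f"factor {factor + 1}: R = {R:.2f}, best level = {best}")
```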
Determination of Polysaccharide Content
The fermented SCR and RB dried powder was extracted with boiling water for two hours.The water-soluble polysaccharide was precipitated by adding eight volumes of 99.5% ethanol and stored at 4℃ overnight.The precipitated polysaccharide was collected by centrifuging at 7000 rpm for 30 min.Then the precipitate was dissolved in 10 mL of distilled water.The total polysaccharide was determined by the phenol-sulfuric acid method with some modifications (Li, Ding & Ding, 2007).The color reaction was initiated by mixing 1 mL of the polysaccharide solution with 0.5 mL of 5% phenol solution and 2.5 mL of concentrated sulfuric acid, and the reaction mixture was incubated in a boiling water bath for 15 min.After cooling it to room temperature, the optical density (OD) of the mixture was determined at 490 nm and the polysaccharide content was calculated with D-glucose as the standard.The results were expressed as milligram of glucose equivalent per gram of the fermented SCR and RB.
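A sketch of the quantification step follows: fit a linear glucose standard curve (OD at 490 nm versus concentration) and convert a sample absorbance into mg glucose equivalents per g of dry material. The standard-curve readings, sample absorbance and dry mass are illustrative assumptions; only the 10 mL extract volume comes from the text.

```python
import numpy as np

# Hypothetical glucose standard curve (mg/mL vs. OD490)
std_conc = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
std_od   = np.array([0.00, 0.15, 0.31, 0.44, 0.60, 0.74])
slope, intercept = np.polyfit(std_conc, std_od, 1)

sample_od, extract_volume_ml, dry_mass_g = 0.52, 10.0, 0.2
conc = (sample_od - intercept) / slope            # mg/mL in the extract
print(conc * extract_volume_ml / dry_mass_g, "mg glucose equiv. per g dry matter")
```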
Determination of Ergosterol Content
The fermented SCR and RB dried powders were extracted with a mixture of methanol and dichloromethane in the ratio of 75/25 (v/v), with a solid-to-liquid ratio of 1/10 (w/v), using an ultrasonic-assisted extraction method for 1 h (50 W) at ambient temperature. Then, the supernatant was collected and passed through a filter (0.45 μm) for HPLC determination. The samples were analyzed by HPLC (JASCO International Co., Ltd.) with a reverse-phase Capcell-Pak C₁₈ column (4.6 mm I.D. × 150 mm, particle size of 5 μm; Nacalai Tesque, Inc., Japan) at a flow rate of 1.0 mL/min; the column temperature was set at 30°C and the UV detection was operated at 254 nm. The mobile phase was methanol (99.5%), and the concentration of ergosterol was calculated by comparing peak areas with appropriate standards.
Determination of Adenosine Content
The fermented SCR and RB dried powders were extracted with deionized water (1/10 w/v) using an ultrasonic-assisted extraction method for 1 h (50 W) at ambient temperature. Then, the supernatant was collected and passed through a filter (0.45 μm) for HPLC determination. The samples were analyzed by HPLC (JASCO International Co., Ltd.) with a reverse-phase Capcell-Pak C₁₈ column (4.6 mm I.D. × 150 mm, particle size of 5 μm; Nacalai Tesque, Inc., Japan) at a flow rate of 1.0 mL/min; the column temperature was set at 30°C and the UV detection was operated at 260 nm. The mobile phase was a mixture of acetonitrile and water (5:95, v/v). The concentration of adenosine was calculated by comparing peak areas with appropriate standards.
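Both HPLC assays reduce to a one-point external-standard calculation, sketched below; the peak areas and standard concentration are invented for illustration.

```python
def hplc_conc(area_sample, area_standard, conc_standard):
    """Concentration scales linearly with peak area relative to a known standard."""
    return area_sample * conc_standard / area_standard

# e.g. an adenosine standard at 10 ug/mL giving a peak area of 1.5e6 counts
c = hplc_conc(area_sample=4.2e5, area_standard=1.5e6, conc_standard=10.0)
print(c, "ug/mL in the filtered extract")
```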
Statistical Analysis
Experimental results are expressed as means ± standard deviation (SD) of triplicate determinations. The data were analyzed by one-way analysis of variance (ANOVA). Significant differences were determined by Student's t-test or independent-samples t-test at P = 0.05.
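As a minimal sketch of these statistics, the snippet below runs a one-way ANOVA across three made-up triplicate yield groups, followed by a pairwise t-test; the numbers are illustrative only.

```python
from scipy import stats

group_a = [39.1, 40.2, 38.3]  # e.g. polysaccharide yield, condition A
group_b = [31.5, 30.8, 32.0]
group_c = [25.2, 26.1, 24.8]

F, p_anova = stats.f_oneway(group_a, group_b, group_c)
t, p_pair = stats.ttest_ind(group_a, group_b)
print(f"ANOVA: F = {F:.2f}, p = {p_anova:.4f}; A vs B: p = {p_pair:.4f}")
```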
Orthogonal Test Results of Fermented SCR
The yields of polysaccharide, ergosterol and adenosine from fermented SCR are shown in Table 1; the optimum fermentation conditions and the significance levels are shown in Table 2.

The highest mean yield of polysaccharide was 31.43 ± 0.37 mg/g dry matter. The optimum levels of the factors were sucrose as the carbon source, beef extract as the nitrogen source, a 15% adding dosage of carbon source, and 10 days of fermentation time. The R values of the various factors indicated that the nitrogen source had the greatest influence. The significance levels indicated that all of the factors were significantly related to the yield of polysaccharide.
Regarding ergosterol, the highest mean yield was 35.65 ± 2.76 mg/100 g dry matter. The optimum levels of the factors were glucose, beef extract, a 10% adding dosage of carbon source, and 15 days of fermentation time. The R values of the various factors indicated that the nitrogen source had the greatest influence.

The significance levels showed that the ergosterol yield of the fermented SCR was significantly related to all of the factors.

The highest mean yield of adenosine was 117.96 ± 1.24 mg/100 g dry matter. The optimum levels of the factors were glucose, yeast extract, a 10% adding dosage of carbon source, and 15 days of fermentation time. The R values of the various factors indicated that the nitrogen source had the greatest influence. The significance levels revealed that the adenosine content of the fermented SCR was significantly related to all of the factors.
Further, in order to evaluate the fermented SCR, the solid-state fermentation was scaled up using a 500 mL flask with 50 g of SCR under the optimum conditions. The polysaccharide yield of the fermented SCR reached 43.49 ± 2.48 mg/g dry matter. Compared with the unfermented SCR (12.91 ± 0.39 mg/g dry matter), the polysaccharide content showed a 4-fold improvement during fermentation by P. aemulans under the optimum fermentation conditions for polysaccharide yield (OPCPS-SCR). The ergosterol yield of the fermented SCR reached 37.53 ± 1.34 mg/100 g dry matter. Compared with the unfermented SCR (3.13 ± 0.26 mg/100 g dry matter), the ergosterol content was enhanced about 10-fold during fermentation by P. aemulans under the optimum fermentation conditions for ergosterol yield (OFCER-SCR). According to previous reports, the ergosterol content of the fermented SCR was as high as that of cultured C. sinensis from Wanfong (38 mg/100 g dry matter) (Li, Yang & Tsim, 2006). The adenosine yield of the fermented SCR reached 148.32 ± 4.21 mg/100 g dry matter. Compared with the unfermented SCR (12.68 ± 1.36 mg/100 g dry matter), the adenosine content was increased by about 10-fold under the optimum fermentation conditions for adenosine yield (OFCAD-SCR). On the basis of previous reports, the adenosine content of the fermented SCR was 5-fold higher than that of natural C. sinensis (Tibet and Qinghai) (Li, Yang & Tsim, 2006).
The Optimum Fermentation Condition of Fermented SCR
According to the results of the orthogonal test, the optimum fermentation conditions for polysaccharide, ergosterol and adenosine were different. Therefore, it was necessary to discuss the integrated optimum fermentation condition.
Orthogonal Test Results of Fermented RB
The polysaccharide, ergosterol and adenosine yields of fermented RB are shown in Table 3; the optimum fermentation conditions and the significance levels are shown in Table 4.
The highest mean yield of polysaccharide in the orthogonal experiment was 70.02 ± 1.94 mg/g dry matter. The optimum levels of the factors were maltose as the carbon source, yeast extract as the nitrogen source, a 10% adding dosage of carbon source, 60% moisture content and 15 days of fermentation time. The R values of the various factors indicated that the nitrogen source had the greatest influence, and the significance levels indicated that all of the factors were significantly related to the yield of polysaccharide. The highest mean yield of ergosterol was 86.47 ± 1.76 mg/100 g dry matter (Table 3). Because the contents of several samples were extremely low, the range and variance analysis could not be used in this case. Therefore, the optimum levels of the factors were taken as sucrose, beef extract, a 5% adding dosage of carbon source, 90% moisture content and 15 days of fermentation time.
The highest mean yield of adenosine was 281.31 ± 2.12 mg/100 g dry matter. The optimum levels of the factors were glucose, yeast extract, a 10% adding dosage of carbon source, 90% moisture content and 15 days of fermentation time. The R values of the various factors indicated that the moisture content had the greatest influence, and the significance levels indicated that the adenosine content of the fermented RB was significantly related to all of the factors.
Further, in order to evaluate the fermented RB, the solid-state fermentation was scaled up using a 500 mL flask with 50 g of RB under the optimum conditions. The mean polysaccharide content of the fermented RB reached 71.16 ± 2.63 mg/g dry matter. Compared with the unfermented RB (19.80 ± 1.23 mg/g dry matter), the polysaccharide content was increased almost 4-fold during fermentation by P. aemulans under the optimum fermentation conditions for polysaccharide content (OPCPS-RB). The ergosterol content reached 88.04 ± 0.36 mg/100 g dry matter, enhanced about 10-fold during fermentation by P. aemulans under the optimum fermentation conditions for ergosterol content (OFCER-RB). The mean adenosine content of the fermented RB was also enhanced, to 282.25 ± 1.83 mg/100 g dry matter. Compared with the unfermented RB (30.13 ± 1.53 mg/100 g dry matter), the adenosine content was increased by about 10-fold during fermentation by P. aemulans under the optimum fermentation conditions for adenosine content (OFCAD-RB). According to the results of the orthogonal test, the optimum fermentation conditions for polysaccharide, ergosterol and adenosine were different. Therefore, it was necessary to discuss the integrated optimum fermentation condition. The polysaccharide contents of OFCPS-RB (71.16 ± 2.63 mg/g dry matter), OFCER-RB (55.40 ± 2.29 mg/g dry matter) and OFCAD-RB (52.16 ± 1.58 mg/g dry matter) are shown in Figure 2 a. According to Duncan's multiple range test, the polysaccharide content of OFCPS-RB was significantly higher than that of OFCER-RB and OFCAD-RB. The ergosterol contents of OPCPS-RB, OPCER-RB and OPCAD-RB were 14.79 ± 0.48, 88.04 ± 0.36 and 31.85 ± 0.87 mg/100 g dry matter, respectively (Figure 2 b). The adenosine contents of OPCER-RB and OPCAD-RB were 276.94 ± 1.96 and 282.25 ± 1.83 mg/100 g dry matter, respectively (Figure 2 c). As the results show, the significance levels of OPCER-RB (B, A, A) were higher than those of OPCPS-RB (A, C, B) and OPCAD-RB (C, B, A); thus, OPCER-RB was the optimum fermentation condition for producing polysaccharide, ergosterol and adenosine.
Conclusions
The optimum fermentation conditions of SCR were: glucose as the carbon source, beef extract as the nitrogen source, a 10% adding dosage of carbon source, and 15 days of fermentation time. Under these optimum fermentation conditions, the polysaccharide, ergosterol and adenosine contents were 39.18 ± 1.06 mg/g dry matter, 37.53 ± 0.11 mg/100 g dry matter and 127.94 ± 1.82 mg/100 g dry matter, respectively.
The optimum fermentation conditions of RB were: sucrose as the carbon source, beef extract as the nitrogen source, a 5% adding dosage of carbon source, 90% moisture content and 15 days of fermentation time. Under these optimum fermentation conditions, the polysaccharide, ergosterol and adenosine contents were 55.40 ± 2.29 mg/g dry matter, 88.04 ± 0.36 mg/100 g dry matter and 276.94 ± 1.96 mg/100 g dry matter, respectively.
The results indicated that the polysaccharide, ergosterol and adenosine contents of SCR and RB were improved by solid-state fermentation using P. aemulans. The effective utilization of such agricultural waste not only solves environmental problems, but also promotes the economic value of the agricultural products. The fermented SCR and RB are rich in physiologically active substances and low in cost, and could be explored as ecological feed or functional food materials in the future.
Table 3. L16(4^5) orthogonal layout and results of RB fermented by P. aemulans.
*Note: CS, carbon source; NS, nitrogen source; ADCS, adding dosage of carbon source; MC, moisture content; FT, fermentation time. Mean values are the means of three determinations with standard deviation (±). ND means not detected.
Reference compounds for characterizing cellular injury in high-content cellular morphology assays
Robust, generalizable approaches to identify compounds efficiently with undesirable mechanisms of action in complex cellular assays remain elusive. Such a process would be useful for hit triage during high-throughput screening and, ultimately, predictive toxicology during drug development. Here we generate cell painting and cellular health profiles for 218 prototypical cytotoxic and nuisance compounds in U-2 OS cells in a concentration-response format. A diversity of compounds that cause cellular damage produces bioactive cell painting morphologies, including cytoskeletal poisons, genotoxins, nonspecific electrophiles, and redox-active compounds. Further, we show that lower quality lysine acetyltransferase inhibitors and nonspecific electrophiles can be distinguished from more selective counterparts. We propose that the purposeful inclusion of cytotoxic and nuisance reference compounds such as those profiled in this resource will help with assay optimization and compound prioritization in complex cellular assays like cell painting.
Reviewer #1: Remarks to the Author:

• The authors begin the study by screening 218 "cytotoxins and prototypical nuisance compounds" but do not provide details on how these chemicals were initially selected. Please provide additional details on the selection criteria / process for these compounds, either in the manuscript text or as a supplement.
• In Figure 6 the authors propose the idea of a nuisance compound informer set for use in interpretation of HCS assay results. Are the authors proposing that the 218 prototypical cytotoxicants and nuisance compounds they initially screened and used to construct phenotypes 1-9 is adequate for this purpose, a good starting point, etc. Or, should practitioners attempt to build their own informer sets based on the biological question at hand or cell model of interest? Some briefly clarifying language in the discussion would be helpful here.
• When visualized on PCA and compared to other clusters, the "gross injury" cluster (Cluster 9) appears to be somewhat of a catchall for cytotoxic treatments that produce profound morphological effects that don't look like anything in Clusters 1-8. That said, Cluster 9 still appears to be a useful classifier for readily identifying cytotoxic treatments. The "rainbow charts" such as those found in Figure 1f are an interesting and visually appealing way to readily understand the phenotypes associated with different concentrations of a test chemical. However, they are also a bit deceptive. Showing a series of test concentrations (points) connected by a line overlaid on a gradient ranging from 1 to 9 visually implies that the trajectory of the cellular phenotype has to progress through each stage before reaching the Cluster 9 "gross injury" phenotype. I don't think that is the case. The cells would not necessarily pass through each of the different phenotypic clusters as a function of dose. The authors should provide some clarifying language to the manuscript text or figure legend to ensure these plots are interpreted properly.
Reviewer #2: Remarks to the Author: Summary In this manuscript, Dahlin et al. present their results towards identification of compounds causing cellular injury by the use of morphological profiling (Cell Painting). The authors analyse a historical cell painting dataset and find that cell painting activity is correlated with loss of cells. Profiling a subset of 218 compounds in dose-response and clustering analysis reveals a cluster of compounds causing gross injury. Two compound categories are then the focus: electrophiles and lysine acetyltransferase inhibitors (KATIs). While non-specific electrophiles and historical KATIs cause broad cellular perturbations, resulting in cell painting profiles correlating to the gross injury cluster, targeted electrophiles, at low concentration, and next-generation KATIs show less cell painting activity and do not correlate to gross injury. The hKATIs and ngKATIs are extremely rigorously characterised in cell-free assays, revealing that ngKATIs are superior probe compounds for KATs. Next, 254 compounds were evaluated for modulation of "cell health" by profiling apoptosis induction (caspase activation), loss of confluence and membrane disruption (CytoTox), and this was correlated to activity in cell painting. High correlations between the cell health and cell painting activities were found, especially for compounds in the "gross injury" cluster, indicating that cell painting can be used to determine interference with cell health. From this analysis the authors propose the generation of a compound collection, an informer set, of compounds that cause cellular injury, which can be used as control compounds in various phenotypic screens. By inference/correlation to the "cell injury compounds", compounds with unspecific activities can be down-prioritised as hits from HTS or HCS.
Assessment: Overall, the work performed is thorough and provides a valuable resource for chemical biology researchers using the cell painting assay and for research and industry groups using HTS or HCS. The work shows that cell painting is good at detecting the perturbations related to cell injuries while more subtle effects, such as epigenetic changes, are not well detected, providing a clear perspective on what can be expected from cell painting. We recommend publication of this manuscript, but the authors can consider some improvements: -The paper was hard to read and took significant effort to properly understand. There might be several different causes for this, but one is that the figures are extremely information-dense and, at least in the versions that we have received, have many features and fonts that simply cannot be distinguished. Furthermore, the text is very concise without much support for non-experts. The authors should very strongly consider the accessibility of the manuscript (Figures and text) to the more general reader. It is important that meaningful information can be readily extracted from all panels in the main figures. For instance, we had a hard time understanding the purpose of subpanel 1c, but this also applies to other sub-panels. The figure legends are likewise very brief on information. While it is a choice to leave the analysis of the presented data completely to the reader and just state the conclusion(s), providing a bit more detail on how these conclusions are reached from the data would help guide the reader through the information presented. Again, this would make the paper more readable and approachable by non-experts, ultimately increasing its impact.
-Another issue that we believe the authors could provide a more nuanced discussion of relates to the general use of cell painting as a tool for generation of mechanistic hypotheses. The presented data confirm what has been speculated / an accepted fact in the community, namely that many bioactive compounds do not afford active cell painting profiles at the concentrations at which they should modulate their target. The authors specifically mention this on P8 line 197-199. This is actually a major take-home message from the paper. Some additional comments or considerations from the authors along that direction could be favorably included in the discussion section. I.e. how large is, in their best view, the mechanistic space that CP, in its present form, can resolve in a meaningful way. Of course, as increasingly sophisticated analyses of image data, such as the fluorescence images that underlie CP, are being developed, this may further improve the information that can be extracted and thereby the different phenotypes that can also be distinguished by CP.
Specific questions and comments: -The following contains some more specific questions and comments that should be clarified and/or changed before publication: • Figure 1a, figure legend: "Active (Mahalanobis distance) CP compounds…", what is the threshold for Mahalanobis distance? 3 SD's from mean? Is this a typical way of determining if a compound is active? It seems this would change with the dataset as the distribution of Mahalanobis distances would depend on the number of active and inactive compounds.
• Page 5 line 107 and Figure 1b: How are the clusters determined / distinguished? Hierarchical clustering does not directly give a number of optimal clusters. Is this by manual inspection of the correlation matrix in supplementary figure 1 or some algorithm? This should be included in the main text or in the methods section. o Perhaps the clusters could be indicated in the correlation matrix in Fig S1 • Figure 1d: A non-specified cut-off is given for activity (Mahalanobis distance), perhaps 3 SD's as in figure 1a? As for figure 1a: Is it meaningful to have a cut-off for the activity measurement that depends on the subset of data analysed? o The same cut-off is used in figures 2a and 3a it seems. Could the authors say if this is a general guideline?
• Page 5 line 120-122: The correlations from the existing MLI dataset to clusters 1-9 do not seem to be shown in figure 1e, as this contains retested compounds.
• Page 5 line 125: "We found that… were called bioactive upon retesting …": What is the definition / threshold of "bioactive"? Is the mp-value by Hutz et al used or a Mahalanobis distance cut-off?
• Figure 1f: In rainbow plots: The dots have different sizes, does this mean anything? Seems like it could be correlated to CP activity score?
• Figure 2a: Is the legend for the activity score vs cell number and PCA plot the same? In the left panel, no description of point size is given, but colours are explained. Perhaps make shared legend between the two.
• Page 7, line 155: "..including lysine acetyltransferases (KATs), has recently …", there seems to be a word missing, like "inhibitors" or "binders" • The compounds 468-472 are presented as ngKATIs, but it is not clear what 469 and 472 target. Are they negative control compounds for 468 and 470+471 respectively? They seem inactive in the cell-free assays. o It was noticed that 468 is compound A-485 and 469 is a negative control compound called A-486 (from SGC). Perhaps this should be more clearly indicated in the manuscript • Figure 3c: ngKATIs are called inactive (p7 line 177-178), but 468 and 471 are assigned to cluster 5, "kinase inhibitors" at some concentrations, even though they are inactive by the Mahalanobis distance. Does this reflect a real biological effect or is it by random chance? Are there high correlations to kinase inhibitors that could indicate a weak (off-target) activity?
• Figure 4a,b,c: It is not clear what "overlap" in the rightmost plots is. There is no mention in the figure text or methods section as far as we can see.
• Figure 4d: A note: When printed on our printer, the light grey colour of the presumed "NA" cells (i.e. most 40 or 80 µM datapoints) is identical to the white colour of the mean value. Perhaps a different colour can be chosen for "NA". The meaning of the grey values should also be indicated in the figure text.
• A note: Besides use for sorting out nuisance compounds through their broad effects on cell health, which is the main focus of the current study, a related perspective of CP is its use to study the differential activity of compounds that display both antibacterial activity as well as unspecific effects on mammalian cells. The idea being to prioritise those compounds for further studies/development as potential antibiotics that display the smallest degree of perturbation of mammalian cell health. Nat. Chem. 2021, 13, 47-55. • Page 10 and line 236-237: "We found a strong correlation between compound treatments with strong CP signals…" How is a strong CP signal defined? Mahalanobis distance threshold or other measure?
• Figure 5b: It is very hard (impossible) to distinguish the activity scores by the tone of grey when the points are overlapping and are semi-transparent. Perhaps different colours or point shapes could be used, or the stratification could be removed.
• Page 10 line 237-241 and Figure 5: The reference to panels in the text seems to be off, e.g. the text refers to the entanglement in figure 5c, but it is in panel d. There is no reference to panel e or f in the text.
• The authors propose the use of a 'cell injury informer set'. It seems the list is in the excel supporting file (informer set column, denoted by "Y"), but this does not seem to be referenced in the manuscript. Page 10 line 255: "… we propose an informer set of control compounds to model cell injury …", the list should be referenced here?

Reviewer #3: Remarks to the Author: The authors have generated a novel resource for possible detection of non-specific bioactivity in high-throughput phenotypic and high-content assays. The Resource includes 218 "prototypical cytotoxic and nuisance compounds" that have been tested in the "Cell Painting" (CP) phenotypic assays. The authors suggest that these compounds "provide a blueprint for routinely detecting nuisance compounds in triage activities during HTS". Overall, this is an interesting study, and its results may be valuable for the entire field of phenotypic assays. The experiments are described in great detail and the data generated in this study are shared with the community, including a large collection (11 TB) of images in a web-accessible database, which is certainly a plus of this study. What is somewhat questionable is the breadth of the appeal and whether the reported observations and claims of the study are in sync. Specifically, the authors seem to suggest two major applications of their data: (i) the library of 218 "trouble-maker" compounds that can be used to test an assay's robustness, and (ii) the reference profiles of these compounds in the CP assay that can be used to detect if a new molecule is a nuisance compound. The big question is whether the reported data can be extrapolated to other assays or other compounds, and whether the signal to noise ratio reported in this study is high enough for the task. If the authors agree with the above summary of their chief messages, then in this reviewer's opinion, these messages are not outlined crisply or quantitatively; so, it is recommended to do so in the revised manuscript. Specifically, can the authors suggest specific assays (other than CP) where they expect this benchmark set of 218 compounds (also, how was this specific set selected?) to perform well? And if a new set of compounds is profiled in CP (or other assays), can the authors forecast the expected accuracy of determining if a compound can be classified as a nuisance compound based on its profile? There are several additional comments or questions as follows.
-Until recently, it was popular to predict nuisance compounds using structural alerts (e.g., PAINS). Have the authors attempted to use such predictors for the compound library they selected for testing? -Line 55: "The utility of certain compound classes, including lysine acetyltransferases (KATs), has recently been questioned…": KATs are not compound classes; the authors probably meant KATIs. -Line 181: "The ngKATIs occupied different PCA feature-space from most hKATIs, with the summary morphological fingerprints being essentially null for ngKATIs while the hKATIs mirrored cluster 9 (Figure 3b)." Figure 3B shows that hKATIs are more distributed in the PC space but a good fraction of them forms a cluster nearly overlapping with ngKATIs' cluster; so, the distinctiveness of these two classes based on this analysis is not very obvious. Can the authors comment on this observation. -Line 202: "Other ngKATIs likely behave similarly, given some shared chemical scaffolds and the lack of red-flag interference chemotypes38,39." Can the authors provide more chemical-structure-sensitive information, i.e., what shared scaffolds, and how prevalent "red-flag" phenotypes are in hKATIs vs ngKATIs as well as in other compounds in their dataset? Is there a correlation between chemotypes and nuisance behavior? -Line 208: "The CP activities and relative cell numbers of 254 profiled compounds…": Where does this number of compounds come from? Prior to this, the authors were describing a 218 compound dataset.
Reviewer #4: Remarks to the Author: In Dahlin et al., a generalizable cellular imaging approach and resource are provided for flagging nuisance compounds and prioritizing safer chemotypes from phenotypic discovery. The work is accompanied by publicly available cellular profiling images in U2OS. The authors studied 218 compounds in dose response by Cell Painting, a well-established technique from the same authors, and also compared the results to a related MLI dataset. Good correlation of the 2 datasets was found for many of the MLI compounds, but not all. I don't recall seeing any speculation around the compounds that were not correlated; this may be useful to add for some compounds, even if anecdotally for one or two. The authors note strong connection between cell death/depletion and the CP bioactivity score, as well as other key markers of cellular imaging in live profiling. Important observations were detailed around the improved next gen KATIs vs historical compounds. Similarly, non-specific electrophiles fared worse than targeted covalent drugs, including glutathione alterations, although notably in some cases the compound target wasn't present in U2OS (KRAS G12C, BTK). This is important because, for example, ibrutinib is noted to have drug-induced liver injury in some patients, and such a generalized approach would miss more tissue-specific tox effects of drugs. It may be especially of interest for drug R&D to develop CP protocols in hepatocytes, cardiomyocytes, and other cell types more connected to safety profiling downstream of lead characterization. 24 hr and 48 hr timepoint comparisons showed 24 hr time points may often be used for efficiency's sake. Figures 5c/d/e/f: please check that the corresponding text in the manuscript corresponds with these panels appropriately and that all panels are referenced.
Finally, the key output of this resource paper for me is the list of suggested compounds covering various tox mechanisms. While Fig 6 gives an exemplar approach, I assume this is provided in Supplemental XLS File 1. However, it wasn't clear to me if the authors intended Column F "Informer Set" as their recommended set for anyone intending to follow this up. There is also Column D "Cellular injury" which is nice too. But I would really like it stated somewhere what the recommended set is, given that's the crux of the paper.

Dear Dr. Eldridge,

We thank you for the review decision for our manuscript NCOMMS-22-30998-T. We have enclosed a revised version of our manuscript entitled, "Reference compounds for characterizing cellular injury in high-content cellular morphology assays" for consideration for publication in Nature Communications.
To assist with reviewing, we have also included a "tracked changes" version of our manuscript.
The following are our responses to reviewer feedback:
REVIEWER #1
In this manuscript, a collection of prototypical cytotoxic and nuisance compounds, along with a number of historical and next generation lysine acetyltransferase inhibitors (hKATi, ngKATi) were profiled with the Cell Painting assay, a target agnostic, imaging-based bioactivity screening approach. The biological activity observed in the Cell Painting assay was then compared to a variety of cell health and reactivity assays of various types. The authors observed that phenotypic profiles from the CP assay can distinguish compounds with distinct mechanisms of action and that a particular phenotypic cluster in the CP assay is strongly correlated with biological activity in other cell health assays. The end result of this work is a framework for establishing a nuisance compound informer set for HCS assays that would be useful in a variety of proposed applications (e.g. hit picking, HTS triage, mechanistic interpretation of unknown chemicals). The manuscript is well written, technically sound and presents results that will be of interest to the research community, in particular the pharmaceutical and toxicological research communities. Below are some minor suggestions that would improve the overall quality of the work.
Author response: We thank the Reviewer for this overall feedback. We have addressed these minor suggestions as noted below.
At several points in the manuscript, the authors use the phrase "chemical matter". To this reviewer's knowledge, this phrase is not commonly used in the scientific literature. What is meant by "chemical matter"? Is this different from just "chemicals" in some meaningful way?
Author response: This phrase is more common in the medicinal chemistry and drug discovery literature, but there is no reason not to use "chemicals" to avoid confusion. Accordingly, we have changed the phrase "chemical matter" to "chemicals" or "compounds" in our revised manuscript. We thank the Reviewer for this suggestion.
The authors begin the study by screening 218 "cytotoxins and prototypical nuisance compounds" but do not provide details on how these chemicals were initially selected. Please provide additional details on the selection criteria / process for these compounds, either in the manuscript text or as supplement.
Author response: We agree with the Reviewer that these are important details for the reader. Therefore, we have included additional details summarizing our compound selection process as a Supplementary Note in our revised manuscript ("Supplementary Note 1. Summary of compound selection process").
In Figure 6 the authors propose the idea of a nuisance compound informer set for use in interpretation of HCS assay results. Are the authors proposing that the 218 prototypical cytotoxicants and nuisance compounds they initially screened and used to construct phenotypes 1-9 is adequate for this purpose, a good starting point, etc. Or, should practitioners attempt to build their own informer sets based on the biological question at hand or cell model of interest? Some briefly clarifying language in the discussion would be helpful here.
Author response: We have included a more explicit reference to the formulation of the proposed cellular injury informer set in our revised manuscript. The text now states: "Based on our data and cumulative experience with HTS, we propose an informer set of control compounds to model cell injury phenotypes in HCS and other phenotypic assays including mechanism-based and nonspecific modes of gross cellular injury (Figure 6; Supplementary Data 1, column "Proposed informer set")." We agree with the Reviewer that additional clarifying language would be useful for readers. To further clarify these points, we have included the following text in the discussion section of our revised manuscript: "This proposed set should serve as a useful starting point for practitioners, and could be subject to future improvements as additional evidence is generated by the scientific community. Although practitioners could build their own custom sets, using a common set (or even subset) of reference cellular injury compounds may benefit the scientific community as a whole and enable data harmonization. It is likely that modifications to the set may be needed for specific assays and model systems, such as the addition of compounds with other cytotoxic MoAs, or the removal of redundant compounds if throughput or cost is a concern."

When visualized on PCA and compared to other clusters, the "gross injury" cluster (Cluster 9) appears to be somewhat of a catchall for cytotoxic treatments that produce profound morphological effects that don't look like anything in Clusters 1-8. That said, Cluster 9 still appears to be a useful classifier for readily identifying cytotoxic treatments. The "rainbow charts" such as those found in Figure 1f are an interesting and visually appealing way to readily understand the phenotypes associated with different concentrations of a test chemical. However, they are also a bit deceptive. Showing a series of test concentrations (points) connected by a line overlaid on a gradient ranging from 1 to 9 visually implies that the trajectory of the cellular phenotype has to progress through each stage before reaching the Cluster 9 "gross injury" phenotype. I don't think that is the case. The cells would not necessarily pass through each of the different phenotypic clusters as a function of dose. The authors should provide some clarifying language to the manuscript text or figure legend to ensure these plots are interpreted properly.
Author response: This is an excellent point, and we thank the Reviewer for this feedback. Our original intention for connecting the dots was to provide readers with a visual guide to track phenotypic progression, but we agree that this (along with the numbering of clusters) could imply to some readers that the trajectory of the cellular phenotype has to progress through each cluster before reaching the Cluster 9 "gross injury" phenotype.
We have included the following text in the Figure 1 legend of our revised manuscript: "Select CP profiles of cellular injury compounds; rainbow plots denote assigned cluster at each compound concentration; arrow indicates compound concentration of representative image. For rainbow plots, note that phenotypic trajectories do not have to progress through each cluster before reaching the cluster 9 "gross injury" phenotype (dotted lines)." Furthermore, we have changed the lines connecting the points in these rainbow plots (Figures 1-3) to dotted lines. We hope that this will further indicate that the trajectory is assumed.
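For illustration, here is a rough matplotlib sketch of such a rainbow plot, with dotted connectors marking the trajectory as assumed rather than observed; all concentrations and cluster assignments below are invented:

```python
# A rough sketch of a "rainbow plot" like those discussed above: each tested
# concentration is a point placed at its assigned cluster (1-9), joined by
# dotted lines to show that the trajectory between clusters is assumed
# rather than observed. All values below are invented for illustration.
import matplotlib.pyplot as plt

doses_uM = [0.3, 1, 3, 10, 30, 80]   # hypothetical test concentrations
clusters = [1, 1, 3, 5, 9, 9]        # hypothetical cluster assignments

fig, ax = plt.subplots(figsize=(5, 2.5))
x = list(range(len(doses_uM)))
ax.plot(x, clusters, linestyle=":", color="gray", zorder=1)   # assumed path
ax.scatter(x, clusters, c=clusters, cmap="rainbow", vmin=1, vmax=9, zorder=2)
ax.set_xticks(x)
ax.set_xticklabels([str(d) for d in doses_uM])
ax.set_xlabel("concentration (µM)")
ax.set_yticks(range(1, 10))
ax.set_ylabel("assigned cluster")
fig.tight_layout()
plt.show()
```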
Line 355: The ATCC acronym is misspelled.
Author response: We have corrected the misspelled acronym in our revised manuscript.
REVIEWER #2
Summary In this manuscript, Dahlin et al. present their results towards identification of compounds causing cellular injury by the use of morphological profiling (Cell Painting). The authors analyse a historical cell painting dataset and find that cell painting activity is correlated with loss of cells. Profiling a subset of 218 compounds in dose-response and clustering analysis reveals a cluster of compounds causing gross injury. Two compound categories are then the focus: electrophiles and lysine acetyltransferase inhibitors (KATIs). While non-specific electrophiles and historical KATIs cause broad cellular perturbations, resulting in cell painting profiles correlating to the gross injury cluster, targeted electrophiles, at low concentration, and next-generation KATIs show less cell painting activity and do not correlate to gross injury. The hKATIs and ngKATIs are extremely rigorously characterised in cell-free assays, revealing that ngKATIs are superior probe compounds for KATs. Next, 254 compounds were evaluated for modulation of "cell health" by profiling apoptosis induction (caspase activation), loss of confluence and membrane disruption (CytoTox), and this was correlated to activity in cell painting. High correlations between the cell health and cell painting activities were found, especially for compounds in the "gross injury" cluster, indicating that cell painting can be used to determine interference with cell health. From this analysis the authors propose the generation of a compound collection, an informer set, of compounds that cause cellular injury, which can be used as control compounds in various phenotypic screens. By inference/correlation to the "cell injury compounds", compounds with unspecific activities can be down-prioritised as hits from HTS or HCS.
Author response: We thank the Reviewer for this overall feedback and believe this is an accurate summary of the manuscript.
Assessment:
Overall, the work performed is thorough and provides a valuable resource for chemical biology researchers using the cell painting assay and for research and industry groups using HTS or HCS. The work shows that cell painting is good at detecting the perturbations related to cell injuries while more subtle effects, such as epigenetic changes, are not well detected, providing a clear perspective on what can be expected from cell painting. We recommend publication of this manuscript, but the authors can consider some improvements:

Author response: We thank the Reviewer for this overall feedback. We have addressed these minor suggestions as noted below.
The paper was hard to read and took significant effort to properly understand. There might be several different causes for this, but one is that the figures are extremely information-dense and, at least in the versions that we have received, have many features and fonts that simply cannot be distinguished. Furthermore, the text is very concise without much support for non-experts. The authors should very strongly consider the accessibility of the manuscript (Figures and text) to the more general reader. It is important that meaningful information can be readily extracted from all panels in the main figures. For instance, we had a hard time understanding the purpose of subpanel 1c, but this also applies to other sub-panels. The figure legends are likewise very brief on information. While it is a choice to leave the analysis of the presented data completely to the reader and just state the conclusion(s), providing a bit more detail on how these conclusions are reached from the data would help guide the reader through the information presented. Again, this would make the paper more readable and approachable by non-experts, ultimately increasing its impact.
Author response: We thank the Reviewer for this valuable feedback. We have addressed the readability issues by several means: (1) we have included high-resolution versions of our figures in our revised submission in case the poor readability was due to a technical issue, (2) we have significantly expanded the figure captions, and (3) we have included additional text throughout the revised results section to show readers how certain conclusions are reached based on the data. We believe these changes, along with the other revisions such as expanded commentary in the discussion section, should improve the readability of our revised manuscript without adding excessive length.
Another issue that we believe the authors could provide a more nuanced discussion of relates to the general use of cell painting as a tool for generation of mechanistic hypotheses. The presented data confirm what has been speculated / an accepted fact in the community, namely that many bioactive compounds do not afford active cell painting profiles at the concentrations at which they should modulate their target. The authors specifically mention this on P8 line 197-199. This is actually a major take-home message from the paper. Some additional comments or considerations from the authors along that direction could be favorably included in the discussion section. I.e. how large is, in their best view, the mechanistic space that CP, in its present form, can resolve in a meaningful way. Of course, as increasingly sophisticated analyses of image data, such as the fluorescence images that underlie CP, are being developed, this may further improve the information that can be extracted and thereby the different phenotypes that can also be distinguished by CP.
Author response: We thank the Reviewer for raising this issue. We have included the following additional text in the discussion section of our revised manuscript: "An important observation from testing the historical and next-generation KAT inhibitors is that some high-quality bioactive compounds do not lead to active CP profiles at concentrations at which they modulate their target. This suggests there are limits to the mechanistic space that can be captured by CP. In some cases, bioactive compounds may simply not produce detectible morphological changes in cells, whether at all or within the usual 24- to 48-h compound treatment windows of the conventional CP assay. Therefore, it should not be assumed that a bioactive compound will produce a CP phenotype. More sophisticated image analyses and/or alternative experimental protocols may improve the number and type of phenotypes that can be reliably detected with CP and similar morphological assays. The large number of phenotypes caused by cellular injury compounds (cluster 9) suggests a significant portion of CP morphologies may be affected by compound-mediated cellular injury. For any given morphological assay, this could be estimated by testing a diverse selection of bioactive probes and cellular injury control compounds."

Figure 1a, figure legend: "Active (Mahalanobis distance) CP compounds…", what is the threshold for Mahalanobis distance? 3 SD's from mean? Is this a typical way of determining if a compound is active? It seems this would change with the dataset as the distribution of Mahalanobis distances would depend on the number of active and inactive compounds.

Author response: The threshold for "active" is 3 SD from the mean, as denoted in Figure 1A by the dotted line and the coloring of the "active" box. The Mahalanobis distances are calculated with respect to the vehicle-treated well profiles, i.e. DMSO-treated wells. The goal of this metric is to provide a measure of how different a compound's effect is on cell morphology compared to the effect of DMSO. It is prudent to make this comparison for each assay plate as well as for each cell-painting experiment to account for any plate effects or day-to-day variability. In our analysis of this historical data, the definition of "active" depends on the behavior of the DMSO-treated wells, not the number of active/inactive compounds.
Hutz et al. suggest that both mp-value and Mahalanobis distance are reasonable metrics to consider, and that in fact mp-value is not usually optimal for compound prioritization. This prioritization effort benefits from inspection of not only whether a treatment is distinct from vehicle control, but also how large the difference is. In that same vein we chose a cutoff of the activity score to flag compounds as "active."
Relevant text from Hutz et al: In some cases, such as in the HCS data set tested here, nearly all of the assayed treatments may have statistically significant mp-values. The mp-value itself is designed to merely say whether two treatments are different; its value should not be used to further prioritize this subset of significant treatments. However, as mentioned above, the Mahalanobis distance calculated during the mp-value calculation process can be used for a secondary prioritization, as it indicates the magnitude of the difference between the treatments.
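To make the metric concrete, here is a hedged Python sketch of the activity-score calculation as described in this exchange (not the authors' actual pipeline): PCA retaining enough components to explain at least 90% of the variance, Mahalanobis distances relative to the DMSO wells, and an "active" call at 3 SD above the mean DMSO-well distance. All names and data are illustrative.

```python
# A hedged sketch of the activity score described above. Variable names and
# fitting details are assumptions; the published pipeline may differ.
import numpy as np
from sklearn.decomposition import PCA

def activity_scores(profiles, is_dmso):
    """profiles: (n_wells, n_features); is_dmso: boolean mask of vehicle wells."""
    pcs = PCA(n_components=0.90).fit_transform(profiles)  # >=90% variance kept
    dmso = pcs[is_dmso]
    mu = dmso.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(dmso, rowvar=False))
    diff = pcs - mu
    dist = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    cutoff = dist[is_dmso].mean() + 3 * dist[is_dmso].std()  # the 3 SD rule
    return dist, dist > cutoff

# Toy example with random numbers standing in for real morphological profiles:
rng = np.random.default_rng(0)
profiles = rng.normal(size=(384, 50))
is_dmso = np.zeros(384, dtype=bool)
is_dmso[:32] = True                  # e.g., 32 vehicle wells per plate
scores, active = activity_scores(profiles, is_dmso)
```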
Page 5 line 107 and Figure 1b: How are the clusters determined / distinguished? Hierarchical clustering does not directly give a number of optimal clusters. Is this by manual inspection of the correlation matrix in supplementary figure 1 or some algorithm? This should be included in the main text or in the methods section. Perhaps the clusters could be indicated in the correlation matrix in Fig S1.

Author response: The choice of nine clusters was based on several competing factors. Too few clusters would not be specific enough for certain MoAs, while an excessive number of clusters may lead to overfitting. Given our past experiences with cell painting, we also examined the clustering of microtubule poisons, which produce a characteristic and highly reproducible phenotype/cluster. Additionally, since all of the images and associated data are publicly available, readers can experiment with alternative numbers of clusters. Note that the profiles clustered in Figure S1 are not the same as those used in the rest of the paper. To avoid confusion, we have now updated the corresponding figure caption to state: "Compounds (1-171, 455-501) were sorted by unsupervised hierarchical clustering using the average profile for each compound across all treatment concentrations." For the rest of the paper, the doses are considered (and clustered) separately. The main reason was to provide a more visually tractable heatmap for inspection (there would be six times as many rows and columns if not for the aggregation).
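To make this clustering step concrete, here is a minimal sketch (not the authors' actual code) of hierarchical clustering on correlation distance with a fixed number of clusters, plus the correlation of a single profile to each cluster's median profile, as in heatmaps like the Figure 1e inset. Function names, parameters, and data are illustrative.

```python
# A minimal sketch of hierarchical clustering of morphological profiles and
# of correlating any one profile to the k cluster medians. All assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_profiles(profiles, k=9):
    """profiles: (n_treatments, n_features). Returns labels in 1..k."""
    tree = linkage(pdist(profiles, metric="correlation"), method="average")
    return fcluster(tree, t=k, criterion="maxclust")

def correlate_to_clusters(profile, profiles, labels, k=9):
    """Pearson r between one profile and each cluster's median profile."""
    return np.array([
        np.corrcoef(profile, np.median(profiles[labels == c], axis=0))[0, 1]
        for c in range(1, k + 1)
    ])

rng = np.random.default_rng(1)
X = rng.normal(size=(218, 40))    # stand-in for 218 compound profiles
labels = cluster_profiles(X)
r_to_clusters = correlate_to_clusters(X[0], X, labels)
```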
Figure 1d: A non-specified cut-off is given for activity (Mahalanobis distance), perhaps 3 SD's as in figure 1a? As for figure 1a: Is it meaningful to have a cut-off for the activity measurement that depends on the subset of data analysed? The same cut-off is used in figures 2a and 3a it seems. Could the authors say if this is a general guideline?

Author response: For Figure 1d, the threshold for "active" is 3 SD from the mean, using the Mahalanobis distances calculated with respect to the vehicle-treated well profiles, i.e. DMSO-treated wells. The goal of this metric is to provide a measure of how different a compound's effect is on cell morphology compared to the effect of DMSO.
To make this more explicit, we have revised the figure caption. Our revised manuscript now states: "Clusters of cell injury compounds correlate with cell number. The cut-off is 3 SD from the mean of DMSO-treated wells using the CP activity (Mahalanobis distances)." We have also revised the figure panels (Figures 1d, 2a, 3a) to note the cutoff is 3 SD.
Page 5 line 120-122: The correlations from the existing MLI dataset to clusters 1-9 do not seem to be shown in figure 1e, as this contains retested compounds.

Author response: The correlations of the existing MLI dataset are in Figure 1e. This is noted in the figure caption: "Inset: heatmap and dendrogram shows pairwise correlation coefficients between each MLI CP compound profile and each of the 9 clusters (red arrowhead, enrichment of cluster 9)".
Page 5 line 125: "We found that… were called bioactive upon retesting …": What is the definition / threshold of "bioactive"? Is the mp-value by Hutz et al used or a Mahalanobis distance cut-off?

Page 7, line 155: "..including lysine acetyltransferases (KATs), has recently …", there seems to be a word missing, like "inhibitors" or "binders".
Author response: We have corrected this text in our revised manuscript. It now states "The utility of certain compound classes, including lysine acetyltransferase (KAT) inhibitors, has recently been questioned."
The compounds 468-472 are presented as ngKATIs, but it is not clear what 469 and 472 target. Are they negative control compounds for 468 and 470+471 respectively? They seem inactive in the cell-free assays. It was noticed that 468 is compound A-485 and 469 is the negative control compound called A-486 (from SGC). Perhaps this should be more clearly indicated in the manuscript.

Author response: We have more clearly indicated that 469 and 472 are negative control analogs in our revised manuscript. In the main text, it now states: "However, highly potent and specific "next-generation" KAT inhibitors (ngKATIs) have now been reported, including the KAT3 inhibitor A-485 (468) and its negative control analog A-486 (469)."

Figure 3c: ngKATIs are called inactive (p7 line 177-178), but 468 and 471 are assigned to cluster 5, "kinase inhibitors" at some concentrations, even though they are inactive by the Mahalanobis distance. Does this reflect a real biological effect or is it by random chance? Are there high correlations to kinase inhibitors that could indicate a weak (off-target) activity?
Author response: After examining the correlation of each compound at each concentration to cluster 5 (the kinase cluster), we speculate that this is due to random chance, though we cannot rule out some weak off-target activity.
This situation illustrates the importance of performing follow-up experiments. In this specific case, one could perform cellular kinome profiling to investigate whether this cluster assignment is in fact due to some off-target kinase activity.

Figure 4a,b,c: It is not clear what "overlap" in the rightmost plots is. There is no mention in the figure text or methods section as far as we can see.

Author response: In Figure 4, "overlap" refers to the number of objects (cells) that are positive for caspase 3/7 activation (green fluorescence) and loss of membrane integrity (red fluorescence). Like confluence, caspase 3/7 activation, and cell viability (membrane integrity), these values were then calculated as an AUC to account for testing in concentration-response format. We have now included a description of the overlap readout in the figure legend and methods sections of our revised manuscript. In the accompanying supplemental data, we have also quantified the overlap in terms of area.
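One plausible reading of this AUC summarization, as a sketch with placeholder numbers (the authors' exact integration may differ):

```python
# Sketch of collapsing a concentration-response readout (e.g., the
# caspase/CytoTox "overlap" count) into one AUC per compound by trapezoidal
# integration over log10 concentration. All numbers are placeholders.
import numpy as np

def auc_over_log_conc(conc_uM, readout):
    """Trapezoidal AUC of a readout versus log10(concentration)."""
    x = np.log10(np.asarray(conc_uM, dtype=float))
    y = np.asarray(readout, dtype=float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

conc = [0.3, 1, 3, 10, 30, 80]          # µM, illustrative doses
overlap = [5, 8, 20, 110, 260, 310]     # hypothetical double-positive objects
print(auc_over_log_conc(conc, overlap))
```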
A note: Besides use for sorting out nuisance compounds through their broad effects on cell health, which is the main focus of the current study, a related perspective of CP is its use to study the differential activity of compounds that display both antibacterial activity as well as unspecific effects on mammalian cells. The idea being to prioritise those compounds for further studies/development as potential antibiotics that display the smallest degree of perturbation of mammalian cell health. Nat. Chem. 2021, 13, 47-55.
Author response: We thank the Reviewer for raising this additional perspective regarding cell painting. This approach could likely benefit from purposefully including nuisance/cytotoxic compounds to characterize cellular injury phenotypes that one would want to avoid during antibiotic discovery. We have included a reference to this work in our revised manuscript, since it helps to further articulate the potential scope of our work.
Our revised text now states: "Groups have developed customized assays to detect nephrotoxicity, pulmonotoxicity, antibiotic toxicity in mammalian cells, and other toxicities using specialized cell models and stains for each."
Page 10 and line 236-237: "We found a strong correlation between compound treatments with strong CP signals…" How is a strong CP signal defined? Mahalanobis distance threshold or other measure?
Author response: We have revised this text to indicate our cutoff for strong CP signal. Our revised manuscript now states: "There was strong correlation between replicate compound treatments at each of 24 and 48 h, and high correlation between the pairwise 24- and 48-h compound treatments (Figure 5a). There was also a strong correlation between compound treatments with strong CP signals (i.e., CP activity/Mahalanobis distance > 10), which could be attributed to the higher signal-to-noise of their CP profiles (Figure 5b)." Furthermore, we have revised the coloring of Figure 5b to make the link between high CP activity and strong correlation qualitatively more apparent (see next response).
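For illustration, a sketch of such a replicate-agreement check, with random placeholder profiles and a stand-in activity score; the "> 10" stratification mirrors the quoted text:

```python
# Sketch of comparing paired 24-h and 48-h profiles per compound by Pearson
# correlation, reported separately for strong-signal treatments. All data
# below are invented stand-ins for real morphological profiles.
import numpy as np

def profile_correlation(p24, p48):
    """Pearson r between two morphological profiles of equal length."""
    return np.corrcoef(p24, p48)[0, 1]

rng = np.random.default_rng(3)
profiles_24h = rng.normal(size=(254, 40))
profiles_48h = profiles_24h + rng.normal(scale=0.3, size=(254, 40))
activity = rng.uniform(0, 30, size=254)          # stand-in CP activity score
r = np.array([profile_correlation(a, b)
              for a, b in zip(profiles_24h, profiles_48h)])
print("median r (all treatments):", np.median(r))
print("median r (CP activity > 10):", np.median(r[activity > 10]))
```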
Figure 5b: It is very hard (impossible) to distinguish the activity scores by the tone of grey when the points are overlapping and are semi-transparent. Perhaps different colours or point shapes could be used, or the stratification could be removed.
Author response: We thank the Reviewer for pointing this out. Our revised Figure 5B now indicates the activity score by color gradient.
Page 10 line 237-241 and Figure 5: The reference to panels in the text seems to be off, e.g. the text refers to the entanglement in figure 5c, but it is in panel d. There is no reference to panel e or f in the text.
Author response: We thank the Reviewer for alerting us to these errors. We have corrected the figure references in our revised manuscript.
The authors propose the use of a 'cell injury informer set'. It seems the list is in the excel supporting file (informer set column, denoted by "Y"), but this does not seem to be referenced in the manuscript. Page 10 line 255: "… we propose an informer set of control compounds to model cell injury …", the list should be referenced here?
Author response: We have included a reference to the formulation of the proposed cellular injury informer set in our revised manuscript. The text now states: "Based on our data and cumulative experience with HTS, we propose an informer set of control compounds to model cell injury phenotypes in HCS and other phenotypic assays including mechanism-based and nonspecific modes of gross cellular injury (Figure 6; Supplementary Data 1, column "Proposed informer set")." We have also modified the column header in the revised supporting file to better indicate the compounds we propose as part of a cellular injury informer set ("Proposed informer set").
Furthermore, we have modified our description of the Supplementary materials to now state: "Supplementary Data 1: Key compound descriptors (categories, SMILES, purity, annotations) for study compounds and proposed cellular injury informer set (XLSX)."
Page 18, line 424-436: Is the Mahalanobis distance calculated before feature reduction?
Author response: Yes, the Mahalanobis distances are calculated before feature reduction, although it is preceded by dimensionality reduction via PCA and taking the first principal components capable of explaining >= 90% of the variance (outlined in Methods section).
Apart from the Mahalanobis-distance calculations, additional feature reduction was required to avoid the over-representation of highly similar cell-painting features, in particular for downstream analyses that dealt with compound-compound correlations, like the hierarchical clustering. To clarify this feature reduction after calculating the Mahalanobis distances, we have amended our Methods section to state: "The R cytominer package was then used to reduce the number of redundant features by removing those which were highly correlated."

Supplementary note 1: Rule-Of-Five compliance references Fig S3. Perhaps this should be Fig S4?

Author response: Correct, and we have corrected this figure reference in our revised manuscript.
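As a rough Python analogue of the correlation-based redundancy filtering described in this exchange (the study itself used the R cytominer package; this greedy pairwise filter only approximates its behavior, and the 0.9 threshold is illustrative):

```python
# Hedged re-creation of dropping one feature from every highly correlated
# pair. Threshold and data are assumptions, not the published settings.
import numpy as np
import pandas as pd

def drop_correlated_features(df, threshold=0.9):
    """Drop one feature from every pair with |Pearson r| > threshold."""
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

rng = np.random.default_rng(2)
base = rng.normal(size=100)
feats = pd.DataFrame({
    "f1": base,
    "f2": base * 0.98 + rng.normal(scale=0.05, size=100),  # redundant with f1
    "f3": rng.normal(size=100),
})
reduced = drop_correlated_features(feats)
print(list(reduced.columns))  # 'f2' removed as highly correlated with 'f1'
```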
REVIEWER #3
The authors have generated a novel resource for possible detection of non-specific bioactivity in high-throughput phenotypic and high-content assays. The Resource includes 218 "prototypical cytotoxic and nuisance compounds" that have been tested in the "Cell Painting" (CP) phenotypic assays. The authors suggest that these compounds "provide a blueprint for routinely detecting nuisance compounds in triage activities during HTS". Overall, this is an interesting study, and its results may be valuable for the entire field of phenotypic assays. The experiments are described in great detail and the data generated in this study are shared with the community, including a large collection (11 TB) of images in a web-accessible database, which is certainly a plus of this study. What is somewhat questionable is the breadth of the appeal and whether the reported observations and claims of the study are in sync. Specifically, the authors seem to suggest two major applications of their data: (i) the library of 218 "trouble-maker" compounds that can be used to test an assay's robustness, and (ii) the reference profiles of these compounds in the CP assay that can be used to detect if a new molecule is a nuisance compound. The big question is whether the reported data can be extrapolated to other assays or other compounds, and whether the signal to noise ratio reported in this study is high enough for the task. If the authors agree with the above summary of their chief messages, then in this reviewer's opinion, these messages are not outlined crisply or quantitatively; so, it is recommended to do so in the revised manuscript. Specifically, can the authors suggest specific assays (other than CP) where they expect this benchmark set of 218 compounds (also, how was this specific set selected?) to perform well? And if a new set of compounds is profiled in CP (or other assays), can the authors forecast the expected accuracy of determining if a compound can be classified as a nuisance compound based on its profile?
Author response: We thank the Reviewer for this valuable feedback. To clarify, we actually suggest a subset of the 218 profiled compounds to be used as a reference set for phenotypic assays. As addressed in other reviewer comments, we have made this more explicit in our revised manuscript. We have also specified our selection process for the compound set as a supplementary note.
Our data suggest that cellular injury compounds should produce sufficiently high signal:noise to make them robust choices for cell painting reference compounds. Furthermore, our supplementary data show that these phenotypes are reproducible in independent experiments. Based on our experiences with phenotypic screening and gene expression profiling (including L1000), we hypothesize that this set should be applicable to a wide variety of screening and profiling assays, and not exclusive to just cell painting. This is supported by the observations that cell injury compounds produce strong CP phenotypes and strong L1000 signatures, and the targets/pathways/mechanisms for cell injury are inherent to most biological systems.
The question about forecasting accuracy and whether a compound can be predicted as a nuisance is currently unsettled. As we have previously described, cellular nuisance behavior is highly context dependent and is more difficult to neatly classify than biochemical nuisance behavior (PMID 33592188).
We have included the following additional text in the discussion section of our revised manuscript to address the points regarding the applicability to other assays, and our predictions regarding expected accuracy: • "Although we only profiled one cell line, this approach is likely generalizable to other biological systems and profiling assays" • "In one study of compounds profiled with both the L1000 transcriptome profiling assay and CP, cytotoxic compounds produced robust signatures in both techniques46. This further suggests that the proposed approach can be applied to other assays and cell types." • "Given the complexities of cellular nuisance compounds and their dependence on context, it is difficult at this point to quantify the sensitivity and accuracy of such an informer set in predicting whether an active compound is acting by a nuisance mechanism. The use of such a standardized set by the chemical biology and drug discovery communities should help to address this important question."

Until recently, it was popular to predict nuisance compounds using structural alerts (e.g., PAINS).
Have the authors attempted to use such predictors for the compound library they selected for testing?
Author response: We did not explicitly use structural alerts (such as PAINS) for this compound library. Since our overall goal was to characterize the effect of cytotoxic compounds on the cell painting readout, we wanted to include a broad class of chemotypes and cytotoxic mechanisms. PAINS, for example, were derived from cell-free AlphaScreen assays, and in our collective experience are generally enriched in nonspecific electrophiles. In our proposed informer set, nonspecific electrophiles are accounted for by several of the historical KAT inhibitors.
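For readers who do wish to run structural-alert screening of the kind discussed here, a minimal RDKit sketch using its built-in PAINS filter catalog; the SMILES below is an arbitrary quinone example, not a study compound:

```python
# Minimal PAINS structural-alert check with RDKit's FilterCatalog.
# The input molecule is an arbitrary illustration, not from the paper.
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)  # PAINS A/B/C
catalog = FilterCatalog(params)

mol = Chem.MolFromSmiles("O=C1C=CC(=O)C=C1")  # p-benzoquinone, a quinone
if mol is not None and catalog.HasMatch(mol):
    entry = catalog.GetFirstMatch(mol)
    print("PAINS alert:", entry.GetDescription())
else:
    print("no PAINS alert")
```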
Line 55: "The utility of certain compound classes, including lysine acetyltransferases (KATs), has recently been questioned…": KATs are not compound classes; the authors probably meant KATIs.
Author response: We have corrected this text in our revised manuscript. It now states "The utility of certain compound classes, including lysine acetyltransferase (KAT) inhibitors, has recently been questioned."
Line 181: "The ngKATIs occupied different PCA feature-space from most hKATIs, with the summary morphological fingerprints being essentially null for ngKATIs while the hKATIs mirrored cluster 9 (Figure 3b)." Figure 3B shows that hKATIs are more distributed in the PC space but a good fraction of them forms a cluster nearly overlapping with ngKATIs' cluster; so, the distinctiveness of these two classes based on this analysis is not very obvious. Can the authors comment on this observation.
Author response: This observation is mostly a reflection of the concentration response. The hKATIs that overlap the ngKATI cluster in this PCA plot correspond to the hKATIs tested at lower compound concentrations (below the concentrations where they appear to be cell-active). By contrast, the hKATI points that do not overlap with the ngKATI cluster in this PCA plot (and occupy cluster 9) are at the relatively higher concentrations where they decrease histone acetylation.
To clarify this point, our revised manuscript now states: "The ngKATIs occupied different PCA feature-space from most hKATIs, with the latter occupying cluster 9 (cell injury) when tested at higher concentrations that coincide with their reported cellular KAT inhibition activities (Figure 3a)."

Line 202: "Other ngKATIs likely behave similarly, given some shared chemical scaffolds and the lack of red-flag interference chemotypes38,39." Can the authors provide more chemical-structure-sensitive information, i.e., what shared scaffolds, and how prevalent "red-flag" phenotypes are in hKATIs vs ngKATIs as well as in other compounds in their dataset? Is there a correlation between chemotypes and nuisance behavior?
Author response: We have provided additional detail on these interference chemotypes in our revised manuscript. The main text now states: "Other recently reported ngKATIs likely behave similarly. For example, the KAT7 inhibitor WM-3835 contains the same acylsulfonohydrazide scaffold as 470-472.39 Neither WM-3835 nor CPI-1612 (a KAT3 inhibitor) contains red-flag interference chemotypes found in many hKATIs (e.g., quinones, polyphenols).32,40." We have also included a reference to the key paper describing the problematic chemotypes in the historical KAT inhibitors (ref 32).
Additional work is needed to more firmly correlate chemotypes with nuisance behavior in cellular assays, but in general we have observed highly electrophilic species like quinones leading to gross cytotoxicity. We have therefore added the following sentence in the discussion to speculate about this trend, and the need for additional studies to make more specific claims about chemotypes: "Future efforts could focus on the association between specific chemotypes and CP profiles, as well as phenotypic profiles in general. In this work, potent electrophiles (quinones, benzothiophene 1,1-dioxides, unstable succinimides, maleimides, etc.) produced strong CP profiles associated with cellular injury, whereas relatively weaker electrophiles (acrylamides) occasionally produced similar profiles at higher micromolar concentrations. Given our previous experiences determining structure-interference relationships with problematic chemotypes in biochemical assays2, the generalization of chemotypes with specific CP profiles would benefit from testing a variety of analogs with and without the suspected problematic structural feature."

Line 208: "The CP activities and relative cell numbers of 254 profiled compounds…": Where does this number of compounds come from? Prior to this, the authors were describing a 218 compound dataset.
Author response: The 254 compounds comprise the 218-compound dataset plus additional KAT inhibitors and targeted/nonspecific electrophiles. This also takes into account sample availability and assay throughput factors.
We have changed the text in our revised manuscript to indicate this: "The CP activities and relative cell numbers of 254 profiled compounds (218 cellular injury compounds plus KATIs and electrophiles, based on sample availability and assay throughput) were correlated with culture confluence (phase contrast), caspase-3/7 activation (GFP channel, fluorogenic caspase 3/7 substrate), and cell viability (RFP channel, CytoTox dye which marks compromised membrane integrity) by live-cell imaging (Figure 4a)."
REVIEWER #4
In Dahlin et al., a generalizable cellular imaging approach and resource are provided for flagging nuisance compounds and prioritizing safer chemotypes from phenotypic discovery. The work is accompanied by publicly available cellular profiling images in U2OS. The authors studied 218 compounds in dose response by Cell Painting, a well-established technique from the same authors, and also compared the results to a related MLI dataset. Good correlation of the 2 datasets was found for many of the MLI compounds, but not all. I don't recall seeing any speculation around the compounds that were not correlated; this may be useful to add for some compounds, even if anecdotally for one or two. The authors note strong connection between cell death/depletion and the CP bioactivity score, as well as other key markers of cellular imaging in live profiling.
Important observations were detailed around the improved next gen KATIs vs historical compounds. Similarly, non-specific electrophiles fared worse than targeted covalent drugs, including glutathione alterations, although notably in some cases the compound target wasn't present in U2OS (KRAS G12C, BTK). This is important because, for example, ibrutinib is noted to have drug-induced liver injury in some patients, and such a generalized approach would miss more tissue specific tox effects of drugs. It may be especially of interest for drug R&D to develop CP protocols in hepatocytes, cardiomyocytes, and other cell types more connected to safety profiling downstream of lead characterization. 24 hr and 48 hr timepoint comparisons showed 24 hr time points may often be used for efficiency's sake.
Author response: We thank the Reviewer for this overall feedback and believe this is an accurate summary of the manuscript.
Regarding the MLI compounds whose "historical" data correlated with our "newly generated" cell injury phenotype but were not bioactive upon re-testing: we did not speculate on this because it is likely multifactorial. We were unable to discern any clear SAR or chemotype explanation in the compounds that were tested. Another explanation could be changes in the compound samples themselves, but again there was also no clear connection with chemical vendor or other sample information (sample age, QC, etc.). We have now included the following brief explanation of this in our revised manuscript: "There was no clear trend in terms of chemical structure (chemotypes) or sample information (vendor, QC, age) for those MLI-HC compounds that were not bioactive upon re-testing." We also note that the chemical structure and QC information is provided in our supplemental dataset for all the MLI compounds tested in our study, and it should therefore be possible for readers to investigate in more detail if desired.
In Fig 1c:
Author response: We have increased the font size describing the columns in Figure 1C so that it is more readable.
Figures 5c/d/e/f: please check that the corresponding text in the manuscript corresponds with these panels appropriately and that all panels are referenced.
Author response: We thank the Reviewer for alerting us to these errors. We have corrected these figure references and added the appropriate corresponding text in our revised manuscript.
Finally, the key output of this resource paper for me is the list of suggested compounds covering various tox mechanisms. While Fig 6 gives an exemplar approach, I assume this is provided in Supplemental XLS File 1. However, it wasn't clear to me if the authors intended Column F "Informer Set" as their recommended set for anyone intending to follow this up. There is also Column D "Cellular injury" which is nice too. But I would really like it stated somewhere what the recommended set is, given that's the crux of the paper.
Author response: We have included a reference to the formulation of the proposed cellular injury informer set in our revised manuscript. The text now states: "Based on our data and cumulative experience with HTS, we propose an informer set of control compounds to model cell injury phenotypes in HCS and other phenotypic assays including mechanism-based and nonspecific modes of gross cellular injury (Figure 6; Supplementary Data 1, column "Proposed informer set")." We have also modified the column header in the revised supporting file to better indicate the compounds we propose as part of a cellular injury informer set ("Proposed informer set").
EDITORIAL FEEDBACK
Please complete or update the following checklists to verify compliance with our research ethics and data reporting standards. Address all points on the checklist, revising your manuscript in response to the points if needed. The forms must be downloaded and completed in Adobe Reader rather than opened in a web browser. Each form must be uploaded as a Related Manuscript file at the time of resubmission.
Editorial policy checklist: https://www.nature.com/documents/nr-editorial-policy-checklist.pdf
Reporting summary: https://www.nature.com/documents/nr-reporting-summary.pdf
Author response: We have completed the Editorial policy checklist and have included it in our revised submission. We have also included an updated Reporting summary in our revised submission.
Your paper uses custom code/software. Please complete the following code and software submission checklist and make your code available for reviewer assessment, if you have not already done so. The code/software can be provided in a zip file with a readme.txt file or other instructions for installing and running the software. If appropriate, also provide example data and expected output. If you have any issues with the file upload, please let me know. https://www.nature.com/documents/nr-software-policy.pdf
Author response: Our manuscript relies on open-source (CellProfiler, R) and commercially available software (GraphPad Prism, Adobe Illustrator). It does not utilize custom code. Therefore, we believe this checklist is not applicable.
All Nature Communications manuscripts must include a "Data Availability" section after the Methods section but before the References. If any of the data can only be shared on request or are subject to restrictions, please specify the reasons and explain how, when, and by whom the data can be accessed.
Author response: Our original manuscript already contained a Data Availability section. In our revised manuscript, we have moved this section after the Methods section but before the References, as instructed.
Please also include a "Code Availability" section after the "Data Availability" section. If the code can only be shared on request, please specify the reasons.
Author response: Our manuscript does not utilize custom code. Therefore, we believe this section is not applicable.
All novel microarray, DNA sequencing, RNA-seq or proteomic datasets must be deposited in a publicly accessible database, and accession codes and associated hyperlinks must be provided in the "Data Availability" section.
Author response: Our manuscript does not contain novel microarray, DNA sequencing, RNA-seq, or proteomic datasets. Therefore, we believe this section is not applicable.
We strongly encourage you to deposit all new data associated with the paper in a persistent repository where they can be freely and enduringly accessed. We recommend submitting the data to discipline-specific and community-recognized repositories.
Author response: We agree with the importance of persistent data repositories, and have already deposited the relevant data in such repositories (FigShare, Image Data Resource) where they are publicly available without restriction.
As noted in our Data Availability section: "The following data are deposited at Figshare (10.6084/m9.figshare.20293992) and are available without restriction: (1) CP extracted features, (2) processed live-cell imaging data, (3) processed intracellular glutathione data, and (4) ALARM NMR spectra and UPLC-MS chromatograms for KAT inhibitors. The multi-terabyte collection of CP images, metadata, and associated CellProfiler object-level files are deposited at the Image Data Resource (idr.openmicroscopy.org, accession number idr0133)."
To maximise the reproducibility of research data, we strongly encourage you to provide a file containing the raw data underlying the following types of display items:
- Any reported means/averages in box plots, bar charts, and tables
- Dot plots/scatter plots, especially when there are overlapping points
- Line graphs
The data should be provided in a single Excel file with data for each figure/table in a separate sheet, or in multiple labelled files within a zipped folder. Name this file or folder 'Source Data', and include a brief description in your cover letter. The "Data Availability" section should also include the statement "Source data are provided with this paper."
Author response: We agree with the importance of reproducibility in research. That is why in our original and revised submission materials and associated links (Figshare, Image Data Resource), we have essentially included all of the relevant raw and processed data used for all the manuscript display items. In addition, as noted in our data availability statement, all data is available from the corresponding authors without restriction.
We also mandate the presentation of uncropped versions of any gels or blots, labelled with the relevant panel and identifying information such as the antibody used.
Author response: We have labelled the uncropped versions of our western blots in Supplementary Figure 6A to now specify the antibody used. We have also indicated the links between the cropped and uncropped blots by color coding the borders in our revised Supplementary Figure 6. Any additional uncropped gels are, of course, available upon request.
Please replace your bar graphs with plots that feature information about the distribution of the underlying data. All data points should be shown for plots with a sample size less than 10. For larger sample sizes, please consider box-and-whisker or violin plots as alternatives. Measures of centrality, dispersion and/or error bars should be plotted and described in the figure legend.
Nature Communications is committed to improving transparency in authorship. As part of our efforts in this direction, we are now requesting that all authors identified as 'corresponding author' create and link their Open Researcher and Contributor Identifier (ORCID) with their account on the Manuscript Tracking System prior to acceptance. ORCID helps the scientific community achieve unambiguous attribution of all scholarly contributions. You can create and link your ORCID from the home page of the Manuscript Tracking System by clicking on 'Modify my Springer Nature account' and following these instructions. Please also inform all co-authors that they can add their ORCIDs to their accounts and that they must do so prior to acceptance.
Author response: Both corresponding authors (JLD, BKW) have linked our ORCIDs to our Springer Nature accounts. We have also informed all co-authors about the need to add their ORCIDs to their accounts prior to acceptance.
"year": 2023,
"sha1": "d03869385cde2d06f429b678b8dbc3a1b070d170",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "378c6385b2e5c8f6926e774d6c15cc0ec3084e12",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology"
]
} |
Identification of amyloid beta in small extracellular vesicles via Raman spectroscopy
One of the hallmarks of Alzheimer's disease (AD) pathogenesis is believed to be the production and deposition of amyloid-beta (Aβ) peptide into extracellular plaques. Existing research indicates that extracellular vesicles (EVs) can carry Aβ associated with AD. However, characterization of the EV-associated Aβ and its conformational variants has yet to be realized. Raman spectroscopy is a label-free and non-destructive method that is able to assess the biochemical composition of EVs. This study reports for the first time the Raman spectroscopic fingerprint of the Aβ present in the molecular cargo of small extracellular vesicles (sEVs). Raman spectra were measured from sEVs isolated from an Alzheimer's disease cell culture model, where secretion of Aβ is regulated by a tetracycline promoter, and from midbrain organoids. The averaged spectra of each sEV group showed considerable variation as a reflection of the biochemical content of sEVs. Spectral analysis identified more intense Raman peaks at 1650 cm⁻¹ and 2930 cm⁻¹ attributable to the Aβ peptide incorporated in sEVs produced by the Alzheimer's cell culture model. Subsequent analysis of the spectra by principal component analysis differentiated the sEVs of the Alzheimer's disease cell culture model from the control groups of sEVs. Moreover, the results indicate that Aβ associated with secreted sEVs has an α-helical secondary structure and the size of a monomer or small oligomer. Furthermore, by analyzing the lipid content of sEVs we identified altered fatty acid chain lengths in sEVs that carry Aβ that may affect the fluidity of the EV membrane. Overall, our findings provide evidence supporting the use of Raman spectroscopy for the identification and characterization of sEVs associated with potential biomarkers of neurological disorders such as toxic proteins.
Introduction
Alzheimer's Disease is the most common form of dementia and has an overwhelming impact on patients' lives and their families. The formation of Aβ senile plaques and tau tangles is the hallmark of AD. Aβ is a 36-43 amino acid peptide that is derived from proteolysis of amyloid precursor protein (APP). Understanding the role of Aβ in the molecular pathways that lead to pathological changes in the brain of patients with AD is a longstanding goal in the AD research field. While the mechanisms of age-related accumulation of Aβ in the AD patients' brain remain unclear, it has been hypothesized that alterations in the metabolism of APP could be related to AD progression. The non-amyloidogenic pathway, which prevents the formation of the toxic Aβ forms, proceeds from the proteolysis of APP on the cell surface by α-secretase followed by γ-secretase. On the other hand, the amyloidogenic pathway includes cleavage of APP by β-secretase generating a 99 amino acid C-terminal fragment that is then cut by γ-secretase, leading to generation of the neurotoxic Aβ40 and Aβ42 peptides. 1,2 The Aβ42 peptide is shown to be more hydrophobic and prone to form fibrils compared to the Aβ40 peptide and is found to be highly prevalent in senile plaques. 3 Moreover, several studies showed that intracellular Aβ42 can be located in multivesicular bodies of neurons and further enveloped into small extracellular vesicles (exosomes). 4,5 Exosomes are nanometer-sized small extracellular vesicles (sEVs) derived from the endocytic pathway and released from the cells upon fusion of cytosolic multivesicular bodies with the plasma membrane. Exosomes have been detected in different fluids of the human body including serum, plasma, saliva, breast milk, amniotic fluid, semen, and urine. 6 Their molecular cargo reflects the state of the releasing cells and contains membrane proteins, endosome-associated proteins, cytosolic proteins, lipids, and nucleic acids. Functions of sEVs in normal physiology and in a variety of pathological processes are under extensive study. They are known to facilitate intercellular communication between neighboring cells or distant cells and to play a role in cardiovascular diseases, 7 cancer, 8 metabolic 9 and neurological disorders, 10,11 and autoimmune diseases. 12,13 Due to the lack of explicit consensus in the field of extracellular vesicles on the appropriate nomenclature, and to adhere to the MISEV 2018 guidelines, 14 we chose to use the term "small extracellular vesicles" or "sEVs" for the purpose of this study to refer to EVs in the approximate size range of 50-200 nm. In broader contexts we used the collective term "extracellular vesicles" or "EVs".
With relevance to neurodegenerative diseases, it has been proposed that the generation and progression of many neurodegenerative disorders are associated with exosome-mediated transport of misfolded proteins [15][16][17] and specific RNA species in exosomes. [18][19][20] Furthermore, recent clinical studies showed elevated levels of AD-associated proteins, tau and Aβ, in exosomes isolated from plasma, serum, and cerebrospinal fluid (CSF) of AD patients. [21][22][23][24][25] These findings stimulate further exploration of sEVs as potential biomarkers of neurodegenerative diseases. Extracellular vesicles are characterized by a wide variety of methods. Morphological features of EVs are described by nanoparticle tracking analysis (NTA), 26 electron microscopy (EM), 27 and atomic force microscopy techniques (AFM). 28 Their molecular cargo is characterized mainly by flow cytometry, western blot, immunoprecipitation, and immunohistochemistry methods 29 as well as by mass spectrometry and quantitative polymerase chain reaction. In addition, there are several emerging techniques that complement traditional methods for EV characterization by their ability to reveal new information about the EVs' molecular cargo or to characterize the composition of individual EVs. These methods include fluorescence-based techniques, 30,31 atomic force microscopy, 32 surface plasmon resonance (SPR), 33 Raman spectroscopy, 34,35 and electrochemical sensing methods. 36 Among these novel approaches, Raman spectroscopy enables sensitive label-free detection and analysis of EV protein content.
Raman spectroscopy is an optical method where a laser beam is used to irradiate a sample, resulting in inelastic scattering of photons. The difference in energy of these photons corresponds to the chemical bonds that are present in the sample. 37 Due to its label-free and non-destructive nature with high chemical specificity, Raman spectroscopy has great application potential in the characterization of extracellular vesicles. Several studies have been published in the past decade using Raman spectroscopy as a tool to analyze the biochemical content of EVs. The pioneering work reporting the first Raman spectrum of sEVs was published in 2009. 38 Later studies demonstrated the use of Raman spectroscopy for characterization of single extracellular vesicles, 39 as well as clusters of EVs trapped in the laser focus. 40 In addition, recent studies have indicated that Raman spectroscopy can be used for tissue characterization by analyzing the spectral signature of cancer EVs for prostate cancer diagnosis, 41,42 as well as tissue-specific EVs derived from mesenchymal stromal cells 43 and peripheral blood mononuclear cells. 44 Furthermore, urinary EVs from diabetic patients and hyperglycemic endothelial cells 45 have been successfully characterized by Raman spectroscopy. Immune-capture based single EV Raman spectroscopy 46 has also been reported as a promising approach.
In the research field of neurological disorders, Raman spectroscopy has been used to investigate structural features and changes of toxic proteins such as Aβ, 47-49 α-synuclein, 50,51 and tau 52 by analyzing the amide bands in the protein spectrum, which are particularly sensitive to the protein's conformational state and environment. Moreover, differences in the Raman fingerprint of blood samples of patients compared to healthy controls have been reported for a variety of neurological conditions such as AD, 53 Parkinson's disease (PD), 54,55 dementia with Lewy bodies, 56 and Huntington disease. 57 Recent reports have demonstrated the ability of Raman spectroscopy to accurately distinguish PD 58 and Amyotrophic Lateral Sclerosis (ALS) 59 patients from healthy control groups based on their EV profile. Our group previously demonstrated the application of laser tweezers Raman spectroscopy for exosome heterogeneity analysis 39 and surface-enhanced Raman spectroscopy for biochemical analysis of EVs. [60][61][62][63][64] However, to our knowledge, specific Raman studies indicating Aβ association within sEVs have not been reported. Here we report for the first time the use of Raman spectroscopy for the identification and characterization of Aβ associated with sEVs, as well as the structural and dynamical effects of Aβ on the membrane of sEVs.
Materials and methods
2.1 Aβ1-42 pure protein preparation
Aβ1-42 protein samples were prepared by resuspension of the stock Aβ1-42 protein (stock number: A9810, Sigma-Aldrich, USA) in DMSO to a final concentration of 10⁻⁶ M and vortexed prior to usage.
2.2 Cell culture models
In this work, we used sEVs derived from the MC65 cell culture model and midbrain organoids, as described next.
2.2.1 MC65 AD cell culture model. We used MC65 cells derived from the human neuroblastoma SK-N-MC cell line with conditional expression of a transfected APP-derived construct, consisting of the carboxyl-terminal 99 residues of APP (APP-C99), under negative regulation of a tetracycline (TC) sensitive promoter. 65 Upon withdrawal of TC from the cell culture media, the cells express C99, which is then converted to Aβ by cleavage with the intramembrane proteases γ-secretase and β-secretase. Aβ remains inside the cell and forms aggregates within 3-4 h after removal of TC, with complete apoptotic death of cells in 72 h.
MC65 cells were cultured in a 75 ml flask in Dulbecco's Modified Eagle medium supplemented with 4.5 mg ml⁻¹ glucose, 0.1 mg ml⁻¹ tetracycline, 50 IU ml⁻¹ penicillin, and 50 μg ml⁻¹ streptomycin. In order to prevent the addition of nonspecific FBS EVs, we cultured the cells with an EV-depleted FBS (Life Technologies®). This ensures that the resulting sEVs in the cell culture medium supernatant only originate from the plated cells. The MC65 cells were cultured in the presence of TC for 24 h and the growth media was then collected, MC65(TC+), for further isolation of sEVs. Expression of APP-C99 in MC65 cells was induced by removing TC from the cell culture medium, and the cells were cultured for another 16 h. At this point, the cell culture media, MC65(TC−), was harvested and centrifuged at 2000g for 30 min to remove any cells and debris.
2.2.2 Midbrain organoids 3D cell culture. The midbrain organoids were developed in the Early Drug Discovery Unit at McGill University. 66 Briefly, peripheral blood mononuclear cells (PBMCs) were isolated from the blood of healthy individuals and reprogrammed into an induced pluripotent stem cell line (iPSCs). The use of iPSCs and stem cells in this research is approved by the McGill University Health Centre Research Ethics Board (DURCAN_IPSC/2019-5374). The iPSC line used for midbrain organoid generation was AIW002-02, a healthy male control line reprogrammed from PBMCs and obtained from the MNI's Open biorepository (C-BIG). After the formation of embryoid bodies (EBs), they were patterned into neuronal midbrains by inductive signals. To promote tissue growth, EBs were embedded in a Matrigel scaffold and cultured in a six-well plate on an orbital shaker. Cell culture media for sEV isolation was collected after 120 days of maturation of the MBOs. The media was collected after a 7 day period, before the weekly media change.
2.3 Isolation of sEVs from cell culture media
sEVs were isolated by differential ultracentrifugation with two rounds of spinning. First, we employed low-speed centrifugation of the sEV-containing media to remove cell portions, cell debris, apoptotic bodies or large biopolymers, and microvesicles. For this, 34 ml of the cell culture media from MC65 cells and midbrain organoids were centrifuged at 300g for 10 min, followed by a 2000g centrifugation for 10 min and a final step centrifugation at 10 000g for 30 min. All low-speed centrifugations (300-10 000g) were performed using a Beckman Coulter Microfuge 20R centrifuge with an FA361.5 Biosafe rotor. The second round is a high-speed centrifugation with the following steps: 120 000g for 90 min, the collected supernatant was discarded, and the pellet was dispersed in ultrapure water and centrifuged one more time at 120 000g for 90 min to pellet the sEVs. UC was performed using a Beckman Optima TLX Ultracentrifuge with an SW 28 swinging bucket rotor. The resulting pellets were finally resuspended in up to 100 μl of ultrapure water and stored at −80 °C until use. The samples were aliquoted (50 μl) to reduce freeze-thaw cycling, which may otherwise damage the sEVs. In this way, only one freeze-thaw cycle is used, which has been shown previously to not have a significant effect on the integrity of sEVs. 67,68 Moreover, dispersion and aliquoting of the resulting pellet allows characterization of the same isolated sEV sample by complementary characterization methods to meet MISEV guidelines (e.g., electron microscopy, SP-IRIS, NTA, etc.).
2.4 sEVs characterization
2.4.1 Nanoparticle tracking analysis. Nanoparticle Tracking Analysis (NTA) was carried out using a NanoSight model LM10 (Malvern Panalytical Ltd, UK), equipped with a blue (405 nm) laser and a sCMOS camera. The isolated sEVs were thawed to room temperature and diluted 500-fold in filtered ultrapure water. Filtered ultrapure water (~2 ml) was also used to thoroughly flush the NTA tubing to confirm the background to be free of any nanoparticle contamination prior to the next sample addition. Next, 1 ml of each diluted sample was loaded into a single-use syringe and the syringe was placed in an automated syringe pump (Harvard Bioscience, MA, USA) for injection. Three consecutive 30 s videos of each sample in flow conditions, with at least 130 particles per frame during each run, were recorded at camera level 12. The data was analyzed using NanoSight NTA 3.1 software with the detection threshold set to 5 and screen gain 10 to track a statistically relevant number of particles, concurrently minimizing distorting background artefacts.
2.4.2 Transmission electron microscopy. sEVs were deposited on glow-discharged carbon film-coated copper TEM grids and incubated at room temperature for 5 min. Next, 8 μl of filtered 1% uranyl acetate (UA) solution was dropped on the surface of the TEM grids and incubated for 1 min for staining. Afterwards, excess UA was removed by contacting the filter paper with the edge of the TEM grids. The grids were then dried at room temperature for 30 min. Transmission electron microscopy was performed using a FEI Tecnai G2F20 transmission microscope operating at 80 kV.
2.4.3 SP-IRIS.
Tetraspanin kits, as well as the buffer and blocking solutions, were purchased and used as-is from NanoView Biosciences. The following detection antibodies were used: anti-CD9 AF488, anti-CD63 CF647, and anti-CD81 CF555. sEVs were diluted in Solution A at 10×, 100×, or 1000×, and 35 μl of each dilution was incubated on a chip for 6 h at room temperature in a 24-well plate. 1 ml of Solution A was added to each well and the plate was shaken at 500 rpm for 3 min. 750 μl of the solution was removed from each well and replaced with 750 μl of Solution A, then shaken at 500 rpm for 3 min. This step was repeated twice more for a total of 4 shaking steps. During these steps, a blocking mixture was prepared, combining 1 : 1 Solution A and blocking solution. Antibodies were diluted 1 : 600 in the blocking mixture. After the final mix, 750 μl of the solution was taken out of each well and 250 μl of antibody mixture was added. Chips were then incubated at room temperature for 1 h. After incubation, 500 μl of Solution A was added to each well. 750 μl of the solution was then immediately taken out and replaced by 750 μl of new Solution A. This was shaken at 500 rpm for 3 min followed by removing 750 μl of solution from each well. 750 μl of Solution B was then added to each well and the plate was shaken at 500 rpm for 3 min followed by removing 750 μl of solution. This was repeated 3 times. 750 μl of MilliQ water was then added to each well and shaken at 500 rpm for 3 min, for a total of 5 shaking steps after antibody incubation. Each chip was washed in two successive dishes of MilliQ water, taking care to avoid drying of the chip between dishes. In the final dish, the chip was tipped at a 45-degree angle and slowly pulled out of the water. The chips were then dried on absorbent paper and added to the chuck. Chips were scanned by SP-IRIS in all three fluorescent channels. Data were analyzed with fluorescence cut-offs of 600, 400, and 400 arbitrary units for the blue, green, and red channels, which were chosen by limiting the number of particles on the negative control MIgG spot to less than 10 for all chips.
2.5 Raman spectroscopy setup and data acquisition
A WITec confocal Raman microscopy system (WITec Alpha300R) with a 633 nm HeNe laser (maximum power of 5 mW at the sample), coupled into a microscope equipped with a 50× objective (NA 0.8, WD 0.58 mm, theoretical laser focal spot diameter ~1 μm), a spectrometer (UHTS400 NIR, 400 mm focal length, with a 300 grooves per mm grating corresponding to a spectral resolution of <6.8 rel. cm⁻¹ per pixel at 633 nm), and a CCD camera was used for these experiments. The acquisition time for sEV characterization was 60 s. The spectra were collected after air-drying 5 μl of isolated sEV solutions on a glass cover slip, from multiple points within the droplet fingerprint, focusing on small aggregates of sEVs and on the rim area of the droplet. This approach allows size-based separation of sEVs from possible contaminants such as large EVs or protein aggregates, via convection currents that drive smaller particles to the outside of the ring. This is not possible if the spectra are measured from pellets where the EVs are clumped together, making the separation of larger aggregates (including protein aggregates) from the actual sEVs more challenging. Moreover, measuring Raman spectra of sEVs in liquid pellets presents difficulties due to their intrinsic Brownian motion, which will cause particles to move in and out of the laser beam. In addition, the momentum of the photons in the laser beam may push particles out of the focal region and, if not controlled properly, may make the measurements less accurate.
2.6 Data pre-processing and statistical analysis
The statistical analysis and data processing were performed using the WITec Project Five built-in software (ImageLab) and OriginPro (OriginLab, Northampton, MA). Prior to analysis, the quality of the Raman spectra was assessed, and data pre-processing was performed in order to minimize insignificant variability. Pre-processing of the data included correction of the baseline by subtraction of the spectral background from glass, cosmic rays, and other background deviations. Next, in order to enhance the spectral quality, we reduced the noise by applying Savitzky-Golay smoothing, and then the data were normalized. Principal component analysis (PCA) was performed using the OriginPro PCA for spectroscopy app. PCA was performed on the ranges of 900-1800 cm⁻¹, 1540-1800 cm⁻¹, and 2800-3100 cm⁻¹. The variance-covariance matrix was utilized for further analysis, and the reduction of the initially complex data was achieved by PCA. Next, to build the PCA score plot we used the first two principal components (PCs). The optimal number of PCs to describe the major features of the spectra was chosen based on the size of the corresponding eigenvalues in the PCA scree plot. The eigenvalues of the PCs after the first two were significantly smaller, suggesting that the remaining PCs may not have much interpretive value and add relatively little to the information already retained by the first two PCs. Peak deconvolution was achieved by using the OriginPro built-in Multiple Peak Fit tool. The peak positions were chosen based on existing literature and further deconvolved using a Voigt peak shape function.
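For orientation, the sketch below approximates this pre-processing and PCA pipeline in Python. It is a minimal illustration, not the workflow actually used (WITec Project Five and OriginPro): the polynomial baseline, smoothing window, normalization choice, and the function names (preprocess, pca_scores) are our assumptions.

```python
# Minimal sketch of a Raman pre-processing + PCA pipeline (illustrative;
# the study used WITec Project Five and OriginPro, not this code).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def preprocess(spectra, wavenumbers, lo=900.0, hi=1800.0):
    """spectra: (n_samples, n_points) raw intensities; wavenumbers: (n_points,)."""
    # 1. Crude baseline correction: subtract a low-order polynomial fit per
    #    spectrum (stand-in for the glass-background subtraction described above).
    baseline = np.array([np.polyval(np.polyfit(wavenumbers, s, 3), wavenumbers)
                         for s in spectra])
    corrected = spectra - baseline
    # 2. Savitzky-Golay smoothing to reduce noise (window/order are guesses).
    smoothed = savgol_filter(corrected, window_length=11, polyorder=3, axis=1)
    # 3. Restrict to the analysis window, e.g. the fingerprint region.
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    region = smoothed[:, mask]
    # 4. Normalize each spectrum (vector norm; min-max is another common choice).
    return region / np.linalg.norm(region, axis=1, keepdims=True)

def pca_scores(processed, n_components=2):
    # sklearn's PCA mean-centers the data, i.e. it works from the
    # variance-covariance structure, matching the description above.
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(processed)        # points for the score plot
    return scores, pca.explained_variance_ratio_, pca.components_  # loadings
```

The scree-plot criterion described above corresponds to inspecting explained_variance_ratio_ and keeping only the leading components.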
Results
The workflow of the study is represented in Fig. 1, which describes schematically the steps followed to isolate, characterize, and analyze the sEVs. Specifically, in this study we employed three different sEV groups, isolated from two types of cell cultures: the 2D MC65 neuroblastoma cell line and 3D midbrain organoids. The MC65 cell line is an in vitro AD model that provides a neuronal source of sEVs containing Aβ. We believe that it is important to fully investigate and understand the signatures of sEV-associated Aβ in simulated conditions before examining human samples. The study of an in vitro model of AD allows the investigation of possible roles the Aβ protein has in neurons, [69][70][71] and subsequently in neuronal sEVs, and may provide valuable insights into the pathogenesis of AD. Future work building on this data will apply Raman-based detection of AD in clinical settings. sEVs isolated from 3D midbrain organoids serve as an additional negative control in this study and represent healthy brain neurons. As described in the Methods section, sEV isolation was achieved by first centrifuging the cell culture media several times at low speed to remove the remaining cell fragments, debris, and microvesicles, followed by two cycles of high-speed centrifugation. We expect that, in accordance with previous reports, the remaining pellet contains the small sEVs of interest. We will further denote the sEVs isolated from untreated and tetracycline-treated MC65 cells as TC− sEVs and TC+ sEVs, respectively. The sEVs isolated from organoid culture media are labeled as osEVs. The sEVs were characterized by established methodologies such as NTA and TEM and were further studied by Raman spectroscopy to reveal their biochemical content. Subsequently, the recorded spectra were analyzed by PCA to identify the Aβ content of each sEV group.
sEVs characterization by NTA and TEM
First, we characterized the size and concentration of the isolated sEVs via NTA. Fig. 2A shows the size distribution plots for all analyzed sEV groups. The mean concentration of TC− sEVs, as measured by NTA, 6.5 × 10⁹ EVs ml⁻¹, was higher than the mean concentration of the TC+ and osEVs samples, which was 4.2 × 10⁹ EVs ml⁻¹ and 4.5 × 10⁷ EVs ml⁻¹, respectively. Additionally, the mean particle size as measured by NTA was 157.3 ± 3.8 nm, 164.1 ± 11.2 nm, and 293.5 ± 2.7 nm for TC− sEVs, TC+ sEVs, and osEVs, respectively. One can see that the mean particle size of TC− sEVs was comparable with the one recorded for TC+ sEVs. On the other hand, we observed a slightly larger particle size for osEVs. TEM images, presented in Fig. 2B, confirm this result, showing an increased size for organoid sEVs. Moreover, TEM images revealed the sEVs' cup-shaped morphology, which is a typical experimental artefact related to deflation of the EV structure during sample preparation.
To confirm that sEVs were enriched during ultracentrifugation, expression of the sEV-associated tetraspanins CD9, CD63, and CD81 was tested by immuno-capture and immuno-fluorescence, using the SP-IRIS method implemented in the ExoView R100 instrument. This equipment utilizes a micropatterned chip with an array of spatially distinct antibody spots. During incubation, sEVs are captured by these antibodies and subsequently labeled with fluorescent detection antibodies. By directly imaging these antibody arrays, up to four co-expressed surface proteins (capture antibody and three fluorescent detection channels) can be detected on a single sEV.
For both sEV populations, all three tetraspanins were expressed, with both capture and fluorescence detection of each tetraspanin. Furthermore, the tetraspanin profile of each sEV population was very similar, with most CD9-positive sEVs detected on the CD81 capture spot, most CD63-positive sEVs detected on the CD63 capture spot, and similar amounts of CD81-positive sEVs captured on each spot. These results show that the co-expression of these tetraspanins is highly consistent between these sEV populations.
In addition, we note that the resuspension of sEVs in ultrapure water did not notably change the characteristics of the analyzed sEVs. Their size, morphology, and surface protein expression (Fig. 2) are comparable to those reported for sEVs resuspended in PBS or commercially available EV resuspension buffers that maintain osmotic pressure. We believe that the ability of EVs to withstand the isotonic solution pressure can be explained by the higher rigidity of the EV lipid bilayer, which is enriched with cholesterol, sphingomyelin, and gangliosides compared to the membranes of their cells of origin. 72,73 Moreover, we experimentally determined that the composition of the resuspension buffer did not majorly impact the physical or chemical nature of sEVs (data not shown). To do this, we isolated sEVs by differential centrifugation from cell culture media and resuspended them using either 0.1% filtered PBS or ultrapure water, both as the final buffer as well as during intermediate steps of processing. Then, we characterized the sEVs by NTA, resistive pulse sensing (RPS), and SP-IRIS methods. The results of the concentration and size distribution analysis did not show a major difference between the two groups of sEVs. We found that the sEVs resuspended in water had a similar concentration (8.8 × 10¹¹ particles per ml) compared to sEVs resuspended in PBS (2.4 × 10¹¹ particles per ml), indicating a similar yield of particles. Finally, we determined that both sEV groups had similar CD9, CD63, and CD81 tetraspanin profiles, which further suggests that the chemical nature of sEVs remains generally similar regardless of the choice of resuspension buffer.
Raman spectroscopy analysis of sEVs isolated from MC65 (TC−/+) cells and midbrain organoid cell culture media
The MC65 AD cell culture model used in this study overexpresses the 99-amino acid carboxyl-terminal fragment (β-CTF) of APP under tetracycline promoter regulation. This model is designed to mimic the pathological pathway of APP that leads to amyloidogenesis. This pathway involves cleavage of mature APP by β- and γ-secretases, where β-secretase cleavage generates the amino terminus of Aβ and the membrane-associated β-CTF. Further, β-CTF is cleaved by γ-secretase, resulting in the release of Aβ40 or Aβ42 peptides and APP intracellular domains (AICDs). 65 As β-CTF undergoes endocytosis, it can be trafficked to endosomal compartments such as multivesicular bodies (MVBs) and possibly enveloped in sEVs or exosomes. 74 It has been shown previously that APP CTFs are overabundant in the cerebrospinal fluid (CSF) of AD patients and have been suggested as potential diagnostic biomarkers of AD. 75 The control samples of sEVs are isolated from the same cell culture model in the presence of tetracycline (TC+) and from midbrain organoids (osEVs). The midbrain organoids were developed from PBMCs of healthy individuals and were used in this study because they are biochemically and biophysically more similar to tissues due to their ability to mimic cell-matrix and cell-cell interactions. Therefore, they are representative of healthy brain neurons.
For the Raman spectroscopy analysis, the isolated sEVs were resuspended in ultrapure water and placed on a clean glass microscope slide to allow air-drying. Spectra were mostly recorded from the small aggregates of sEVs and from the edge of the dried sample, where sEVs accumulate preferentially due to the "coffee-ring effect". This effect is observed upon the evaporation of water from droplet samples that contain small-sized particles. Explicitly, in a sample with a heterogeneous particle size distribution, the smallest particles flow radially toward the contact line during the drying process. The angle between the surface of the drying EV sample and the microscope slide decreases progressively during water evaporation, which limits the size of the particles that can approach the edge of the droplet. Therefore, after drying, the particles will be separated based on their size due to convective currents inside the droplet, as reported by Jeong et al. 76 As the droplet dries, the smaller (lighter) particles such as sEVs are deposited and concentrated at the outer edge of the dried sample, and the bigger (heavier) particles, such as large EVs or protein aggregates that could be co-isolated during differential ultracentrifugation, are concentrated closer to the center region. By positioning the laser spot in the ring and the area adjacent to the ring, we ensure that we measure particle sizes in the typical sEV range according to the MISEV 2018 nomenclature and as measured here by NTA and TEM (Fig. 2). This is particularly important for the acquisition of reliable Raman spectra. In our case, we were able to record high-quality spectra with 633 nm continuous-wave laser excitation at a relatively low power of a few mW and acquisition times on the order of one minute.
Next, we analyzed the collected Raman spectra from TC− sEVs (n = 11), TC+ sEVs (n = 10), and osEVs (n = 7) samples and compared them with spectra recorded from pure Aβ42 protein (n = 10). The Raman spectra of the "fingerprint region" 900-1800 cm⁻¹ from all sEV groups and Aβ42 pure protein represent a complex set of peaks with shared features among all sEV samples and some variations (Fig. 3A). The Raman peak assignments are given in Table 1. All sEV groups shared the same peak positions at 1123 cm⁻¹ and 1290 cm⁻¹, assigned to C-N vibration and the amide III α-helix protein structure, respectively. Peaks at 1436 cm⁻¹ and 1453 cm⁻¹ are assigned to the lipid content, specifically to the CH₂ and CH₃ deformation in lipids and triglycerides. Additionally, the Aβ42 pure protein spectra presented two distinct peaks at 1000 cm⁻¹ and 1600 cm⁻¹, assigned to the breathing of the benzene ring and the C=C vibration corresponding to phenylalanine, respectively. These peaks can also be observed in the TC− sEV spectra, suggesting that these sEVs could potentially carry the Aβ protein. The amide I region was also located at similar positions for TC− sEVs, osEVs, and Aβ42 pure protein, covering the area 1650-1668 cm⁻¹. In the osEV spectra these peaks can be observed, while they are missing in the TC+ sEV spectra.
Then, we performed PCA of the collected data, and the results are shown in Fig. 3B. The first two principal components represented 58.0% and 8.9% of the total variance, respectively. It is important to note that these scores may be influenced by both spectrum intensity and spectrum shape. 77,78 The samples are spread along the PC1 axis, with TC− sEVs located on the negative side, while TC+ sEVs and osEVs are distributed loosely on the positive side of the axis. The Aβ42 pure protein spectra form an elongated cluster between PC1 and PC2. By this, it is clear that the different sample groups can be distinguished from each other based on their Raman spectra, which also serves as a valuable starting point for further analyses.
Raman spectroscopy can effectively determine the secondary structure of proteins. 81 The peaks centered at 1667-1668 cm⁻¹, assigned to C=O and a small contribution of C-N stretch, correspond to the β-sheet protein conformation. The peaks located in the 1650-1660 cm⁻¹ region, arising from the coupling of C-N stretching vibration and N-H bending vibrations, correspond to an α-helix structure. Therefore, our attention was further focused on the amide I region, which is mostly affected by the secondary structure of the proteins. The α-helix-rich structure might originate from Aβ peptide bonded to a plasma membrane. In an attempt to identify specific peaks within the amide I region, we performed peak deconvolution analysis. Fig. 4 depicts the deconvolution of the amide I region in the TC− sEV and Aβ42 pure protein spectra. For this, the most intense peaks at 1600 cm⁻¹ and 1650 cm⁻¹ in the spectra were centered, fixed, and fitted until high values of R² were obtained. The recorded Raman spectra are shown as solid lines and the deconvolved peaks are marked as dashed lines. The amide I region deconvolution clearly identifies the presence of a peak at 1650 cm⁻¹ in the spectra of TC− sEVs (Fig. 4A) that corresponds to an α-helical conformation of the protein. The peak at 1663 cm⁻¹ in the spectra of Aβ42 pure protein is assigned to an α-helical conformation with a potential contribution from "disordered structures" (Fig. 4B). On the other hand, the amide I region of TC+ sEVs and osEVs is too weak to provide a reliable fit and to obtain information. In addition, the presence of only the α-helical structure of the proteins in the spectra of TC− sEVs confirms that the collected spectra represent the proteins within sEVs and not insoluble protein or peptide deposits, which typically adopt an enriched β-sheet conformation. 87 The deconvolution of the Aβ42 pure protein spectra identified strong peaks centered at 1600 cm⁻¹ and 1663 cm⁻¹. The broad peaks in the amide I region of the TC− sEV spectra indicated the presence of mainly the monomeric form or small oligomers of Aβ. A previous NMR study characterized Aβ associated with a phospholipid bilayer-mimicking environment as a monomeric amphipathic α-helix conformer. 88 Next, the Raman spectral region between 1540-1800 cm⁻¹, which includes the amide I region, was analyzed by PCA. Fig. 4C depicts the score plot of the first two principal components, which cumulatively represent 48.1% of the total variance. In order to highlight different cluster regions, shaded ellipse areas are shown in the plot. One can observe that TC− sEVs and Aβ42 pure protein are closely clustered on the positive side of the PC1 axis. In contrast, the TC+ sEV and osEV spectra are dispersed along the PC1-PC2 plane. Next, the analysis of PC loadings showed the contribution of the individual wavenumbers to PC1 and PC2 (Fig. 4D). While the PC1 loading resembles the spectra of Aβ42 pure protein, the biochemical meaning of the second PC is more difficult to interpret. Taken together, these data revealed that the major secondary structure of the proteins, and potentially of Aβ, within the analyzed sEVs is typical of the α-helix form of proteins.
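As a rough illustration of the two-peak deconvolution described here, the sketch below fits the amide I window with two Voigt components whose centers are held at 1600 and 1650 cm⁻¹ (1663 cm⁻¹ would be substituted for the pure protein). The initial guesses, bounds, and function names are our assumptions, not the OriginPro settings used in the study.

```python
# Hypothetical two-component Voigt fit of the amide I region (1540-1800 cm^-1).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def two_voigt(x, a1, s1, g1, a2, s2, g2, c1=1600.0, c2=1650.0):
    # Two Voigt peaks with centers fixed at c1 and c2; curve_fit only
    # optimizes the six parameters supplied via p0, so the defaults hold.
    return (a1 * voigt_profile(x - c1, s1, g1)
            + a2 * voigt_profile(x - c2, s2, g2))

def fit_amide_I(wavenumbers, intensity):
    m = (wavenumbers >= 1540) & (wavenumbers <= 1800)
    x, y = wavenumbers[m], intensity[m]
    p0 = [y.max(), 8.0, 8.0, y.max(), 8.0, 8.0]   # assumed starting values
    popt, _ = curve_fit(two_voigt, x, y, p0=p0,
                        bounds=(1e-3, np.inf), maxfev=20000)
    return popt  # amplitudes and Gaussian/Lorentzian widths of the two peaks
```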
Additionally, Raman spectra in the "high-wavenumber region" can provide valuable information about the biochemical composition of the sEVs. Fig. 5 compares the experimentally recorded and deconvolved Raman bands that were obtained under the same experimental conditions as for the amide I region. Two major peaks were present within all sEV spectra in this region, at 2845 cm⁻¹ and 2878 cm⁻¹. These peaks are characteristic vibrational features of lipids and correspond to symmetrical and asymmetrical CH₂ vibrations, respectively. The analysis of Aβ42 pure protein did not show the presence of the 2845 cm⁻¹ peak and only showed a weak intensity of the 2878 cm⁻¹ peak. Specifically, these peaks can be attributed to the presence of long acyl chain lipids such as fatty acids and ceramides. In addition, there was a small contribution of cholesterol to these peaks. On the other hand, the characteristic peak of the proteins is located at 2930 cm⁻¹. Fig. 5A and B depict stronger intensities at 2930 cm⁻¹ in the deconvolved peaks of the TC− sEVs and Aβ42 pure protein compared to TC+ sEVs (Fig. 5C) and osEVs (Fig. 5D).
It has been shown previously that the ratio of Raman intensities at 2930 cm⁻¹ and 2845 cm⁻¹ (I2930/I2845) reflects the ratio of the protein and lipid content. 89,90 Table 2 shows the calculated Raman intensity ratio for all analyzed sEV groups. The intensity values are calculated from the area under the curve for each peak.
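A band-area ratio of this kind can be computed as sketched below; note that in the study the areas come from the deconvolved (Voigt-fitted) peaks, so the fixed integration windows here are a simplifying assumption. The same helper also covers the I2845/I2878 and I2845/I2930 lipid ratios discussed later.

```python
# Sketch: intensity ratio as the ratio of band areas in fixed windows
# (+/-15 cm^-1 half-widths are illustrative, not the paper's procedure).
import numpy as np

def band_area(wavenumbers, intensity, center, half_width=15.0):
    m = np.abs(wavenumbers - center) <= half_width
    return np.trapz(intensity[m], wavenumbers[m])  # np.trapezoid on NumPy >= 2.0

def intensity_ratio(wavenumbers, intensity, num=2930.0, den=2845.0):
    # Defaults give the protein-to-lipid ratio I2930/I2845.
    return (band_area(wavenumbers, intensity, num)
            / band_area(wavenumbers, intensity, den))
```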
One can see that TC− sEVs had a higher I2930/I2845 intensity ratio compared to the TC+ sEVs and osEVs, indicating a higher concentration of proteins within TC− sEVs. These results indicate that the Aβ protein could be present in the TC− sEVs and could be at higher concentrations than in TC+ sEVs and osEVs. To complement these findings, we performed PCA of the peaks in the "high-wavenumber region" between 2800-3100 cm⁻¹. Fig. 5E represents the score plot in the PC1-PC2 plane, where the first PC was responsible for 82.4% and PC2 for 8.0% of the total variance. It can be clearly seen that the TC− sEV and Aβ42 pure protein spectra were clustered on the negative side of the PC1 axis, while the TC+ sEVs and osEVs were clustered on the positive side of the PC1 axis. Fig. 5F shows the loading plots of PC1 and PC2 and the average spectra of the sEV analytes and the Aβ42 pure protein control. The loading spectrum for PC1 had several peaks on both the positive and negative sides, where the most significant wavenumbers are 2930 cm⁻¹, 2845 cm⁻¹, and 2878 cm⁻¹, and it resembled the spectra of Aβ42 pure protein and the sEV groups. In contrast, the chemical meaning of the second principal component was not clear from the shape of the loading. Finally, PCA was able to successfully cluster similar spectra and segregate different ones.
Next, we evaluated the effect of Aβ on the sEV lipid membrane composition and structure. For this, we used the ratio of Raman peaks at 2845 cm⁻¹ (CH₂ sym.) to 2878 cm⁻¹ (CH₂ asym.), which describes an estimated lipid fluidity or degree of unsaturation 91,92 (Table 3). The higher the I2845/I2878 ratio is, the more unsaturated lipids are present, and the higher is the fluidity of the EV membrane. The results show that all three groups of sEVs had the same degree of saturation. Furthermore, in order to analyze the structure of the lipids, we calculated the ratio of Raman peak intensities at 2845 cm⁻¹ (C-H stretch of CH₂) to 2930 cm⁻¹ (C-H stretch of CH₃), which has been shown to correlate with the number of C atoms in the fatty acid chain. 92,93 We observed a slight change in the lipid structure, with a higher prevalence of unsaturated lipids with a longer chain in the TC+ sEVs and osEVs, and a prevalence of lipids with a shorter chain length in TC− sEVs. This observation, together with previously published reports, indicates an effect of Aβ association on EV membrane fluidity through a change in the structure of the EV membrane lipids. 94
Fig. 4 Analysis of the Raman spectra of the amide I region 1540-1800 cm⁻¹. (A) Deconvolution of the amide I region of the averaged spectra of TC− sEVs and (B) Aβ42 pure protein indicated the presence of two peaks at 1600 cm⁻¹ (labeled as peak 1) and 1650 cm⁻¹ in the TC− sEV spectra and 1663 cm⁻¹ for the Aβ42 pure protein spectra (labeled as peak 2). Deconvolved spectra are shown as dotted lines. (C) The score plot of the first two principal components for each sEV group. Colors represent each sEV group as shown in the legend. Colored regions are to provide visual aids. (D) Comparison of the PC1 and PC2 loadings with the Aβ42 pure protein spectrum and with the average spectra of each sEV group. Dotted lines represent the zero-axes of the PCA loadings. Shaded areas represent ±1 standard deviation. Spectra are offset for clarity.
Discussion
Alzheimer's disease is a neurodegenerative disease that remains challenging to diagnose in its early stages. This prognostic uncertainty of existing diagnostic methods, in combination with the high costs and invasiveness of current diagnostic procedures, further emphasizes the importance of developing sensitive and accurate alternative tests for early AD diagnosis. The overall goal of the study was to explore the use of sEVs as carriers of toxic proteins. We used Raman spectroscopy to characterize sEVs associated with the Aβ protein as potential biomarkers for AD diagnosis. First, we demonstrated a clearly different biochemical profile of Aβ-associated sEVs compared to the control sEV groups. In particular, intense peaks at 1650 cm⁻¹ and 2930 cm⁻¹ and their similarities with the spectra of pure Aβ protein indicate the presence of the Aβ protein in TC− sEVs. On the contrary, less intense or lacking bands at these positions in TC+ sEVs and osEVs confirm the hypothesis that these peaks are associated with the Aβ protein. The observed differences in the PCA results in the amide I and "high-wavenumber" regions of the spectra can be explained by additional contributions from other proteins in the sEV cargo in the amide I region, as well as a lower overall signal-to-noise ratio in this region compared to the "high-wavenumber region".
In order to evaluate a priori the ability of Raman spectroscopy to detect Aβ in our sEVs, we performed an estimate of the number of Aβ molecules in our laser spot. First, we calculated the number of Aβ molecules per sEV based on published data. 22 Fiandaca et al. 22 reported the Aβ42 concentration (pg ml⁻¹) in total exosome solution and the number of exosomes per ml. To determine the mass of Aβ42 per sEV, we divided the Aβ concentration (expressed in pg ml⁻¹) by the number of exosomes per ml. Next, we converted the mass of Aβ to the number of molecules per sEV by first converting the mass of Aβ to moles using the molar mass and further converting to the number of molecules using Avogadro's number. We applied this procedure to the values reported in the aforementioned study. The reported concentration of Aβ42 is 18.5 pg ml⁻¹ in exosomes (2.78 × 10⁹ particles per ml) isolated from the plasma of AD patients (n = 3) and 0.83 pg ml⁻¹ in exosomes (3.49 × 10⁹ particles per ml) extracted from age-matched healthy individuals (n = 3). We used the values of the exosomal Aβ42 protein concentration extracted from AD patients' plasma to estimate the concentration of the protein in our TC− sEVs. The concentration of Aβ42 protein obtained from the analysis of healthy controls was used to calculate the protein concentration in TC+ sEVs and osEVs. Beginning with the number of Aβ molecules per sEV in the TC−/+ sEV and osEV solutions, we calculated approximately 885 Aβ molecules per sEV, 31.5 molecules per sEV, and 30 molecules per sEV, respectively. The calculated values indicate a higher load of Aβ in TC− sEVs. Next, knowing the number of Aβ molecules per sEV, we can calculate the expected number of Aβ molecules in our laser spot by assuming that the laser spot is a cylinder with 0.5 μm radius and 2 μm height and that the sample is composed of concentrated sEVs filling the laser beam. Then, we calculated the estimated volume of one sEV of each group based on the mean size of the sEVs analyzed by NTA. Subsequently, we calculated the number of sEVs of each group in the laser spot described above. The estimated number of Aβ molecules in the laser beam spot is 6.8 × 10⁵ Aβ molecules for TC− sEVs, 2.1 × 10² Aβ molecules for TC+ sEVs, and 1.7 × 10² Aβ molecules for osEVs. These estimates are supported by the differences in Raman intensities, where a linear relationship to the number of molecules in the analyte is expected. It is important to note that the 2930 cm⁻¹ peak corresponds to the overall protein concentration within the analyzed sEVs. However, the main difference between TC− MC65 cells and TC+ MC65 cells is the presence of tetracycline and the overexpression of Aβ in TC− MC65 cells. This indicates that the isolated sEVs will have mainly the same molecular composition, and the major variability is the presence of Aβ in TC− sEVs as detected by Raman spectroscopy. In addition, it is important to mention that PCA can be used for further semi-quantitative analysis of Aβ in future studies of disease diagnosis using human clinical samples.
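The chain of unit conversions described above can be summarized compactly; the symbols below are ours, with C_Aβ the reported exosomal Aβ42 mass concentration, C_sEV the particle concentration, M_Aβ the molar mass of Aβ42 (≈4.5 kDa), and N_A Avogadro's number:

$$N_{\mathrm{A\beta/sEV}} = \frac{C_{\mathrm{A\beta}}}{C_{\mathrm{sEV}}} \cdot \frac{N_A}{M_{\mathrm{A\beta}}}, \qquad N_{\mathrm{spot}} \approx N_{\mathrm{A\beta/sEV}} \cdot \frac{V_{\mathrm{spot}}}{V_{\mathrm{sEV}}}, \qquad V_{\mathrm{spot}} = \pi r^{2} h, \quad V_{\mathrm{sEV}} = \tfrac{4}{3}\pi R^{3},$$

where r = 0.5 μm and h = 2 μm define the assumed cylindrical focal volume, R is half the NTA mean particle size of the respective group, and the second relation assumes sEVs completely filling the focal volume (packing efficiency neglected).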
Next, the deconvolution of the amide I region of TC− sEVs showed that the Aβ associated with sEVs is in an α-helical conformational form and of the size of a monomer or a small oligomer. These findings may shed light on a potential mechanism of propagation of neurodegeneration by sEVs carrying toxic oligomers. There is no consensus in the field regarding the structure of the toxic oligomers. The process of transformation of the monomers into toxic oligomers has been shown to be structure dependent. Specifically, it has been noted that toxic oligomers, as well as Aβ fibrils, have a β-sheet-enriched secondary structure that provides a high-adherence site for further fibrillation. [95][96][97] Conversely, a number of studies showed that early oligomers of Aβ and α-synuclein have an α-helical secondary structure and are prompted by helix-helix interactions. 51,98 This knowledge and our results further suggest that sEVs may be involved in the spread of toxic oligomers within the neurons of the CNS.
In addition, we observed differences in the lipid structures of the sEVs. Lipids with longer fatty acid chains are prevalent in the control sEV groups, TC+ sEVs and osEVs. On the other hand, TC− sEVs have shorter fatty acid chain lengths. Since the main difference between TC− sEVs and TC+ sEVs is the presence of the Aβ protein, we suggest that the association of the Aβ protein with the plasma membrane alters plasma membrane fluidity. The plasma membrane fluidity depends on several factors, such as the degree of fatty acid saturation, the length of the fatty acid tail, cholesterol content, and temperature. Specifically, the lengths of the fatty acid tails affect the membrane rigidity by creating intermolecular interactions between phospholipid tails. In the case of TC− sEVs we observe a two-fold reduction of the chain length and, as a result, a potential increase in membrane fluidity. However, the cause of this phenomenon remains to be explored. One possible explanation for the increased EV membrane fluidity is the formation of transmembrane oligomeric pore structures that are proposed to occur with the peptide's interaction with the EV plasma membrane. In addition, the length of the fatty acid chain shortens with an increase in temperature. However, this parameter should not affect our results since the sEVs from all three groups were analyzed under the same experimental conditions.
Overall, our results confirm that the Aβ protein is present in sEVs and can be detected via Raman spectroscopy. Moreover, our study uncovered a role of the Aβ protein in plasma membrane fluidity, paving the way for other studies on this topic. Future studies using clinical samples of AD patients will be necessary to demonstrate the potential of sEVs for early AD diagnosis. Further studies of the sEVs derived from AD patients and healthy controls via Raman spectroscopy may identify spectral biomarkers that correlate with the development of AD. The analysis of the molecular conformation of sEV-associated Aβ protein is particularly important in understanding the role of sEVs in the propagation of neurodegeneration, as has been previously proposed in the literature. Potential pathologies underlying AD other than misfolded proteins and their conformers can also be explored via Raman spectroscopy in sEVs from clinical samples. For instance, the metal ion content of EVs could be compared, as it has been shown to correlate with the aggregation of the Aβ protein and the deposition of plaques. Moreover, another area of great interest is exploring lipidomic changes that may contribute to disease development and may potentially be detected in the EVs' molecular content via Raman spectroscopy. The main drawback of the technique that limits its translation to the clinic is the relatively low intensity of the Raman signal. Nonetheless, this limitation can potentially be addressed by technologies aimed at enhancing Raman signals, such as plasmonic nanomaterials in surface-enhanced Raman spectroscopy or coherent Raman techniques.
Conflicts of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. | 2021-08-03T00:06:01.200Z | 2021-06-07T00:00:00.000 | {
"year": 2021,
"sha1": "75805b26cb067e1165bd491871532ccf19ff2e5c",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/na/d1na00330e",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "71b2a671cd08bdcc3120533bd569925ca94d3338",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
259398668 | pes2o/s2orc | v3-fos-license | The Influence of Regular Physical Activity on the Functional Parameters of the Youthful Organism
To find out the effect of systematic training in adaptive football on the motor abilities of students with congenital hearing loss. 24 nineteen-year-old male students with congenital hearing loss were observed. They were divided into a main group of 12 students who started training in adaptive football and a control group of 12 young men who had physical activity only in the process of university physical education. Standard functional tests were used and conventional performance standards were recorded. The obtained data were processed with Student's t-test and correlation analysis. Systematic training in adaptive football among young men with hearing loss increased their coordination properties, motor characteristics, and body stability in space. Those who trained in adaptive football increased their strength capabilities and endurance. Systematic training in adaptive football significantly increased the accuracy of sports and everyday movements and stimulated the internal organs. These results were achieved by enhancing the basic functional and metabolic parameters of the trainees' organisms. Training in adaptive football increases the motor capabilities and the effectiveness of neural control over the muscles in young men with congenital hearing loss studying at university.
Introduction
Modern science recognizes that regular feasible muscular loads improve the morphological and functional status of the body of any person, whether healthy or with signs of pathology (Skripleva et al., 2018; Vorobyeva et al., 2018a). The healing effect occurs only if the work of the muscular apparatus is enhanced due to the stimulation of biosynthetic and nervous processes (Kachenkova et al., 2020; Zavalishina, 2020b). This result is of particular importance for modern society in the course of systematic health improvement of student youth with somatic pathology (Komarov et al., 2019). An increase in their general physical activity during adaptive sports training leads to an improvement in the functioning of their internal organs, especially the heart and lungs (Pavlov & Kuznetsova, 1998; Kotova et al., 2017). Therefore, it is necessary to continue the search for effective approaches to improving the health of young people, especially students, by increasing their physical activity (Apanasenko & Popova, 2000; Zavalishina et al., 2020a). The planned implementation of this approach can lead to the physical strengthening of young people with somatic pathology and their involvement in the labor process (Mal et al., 2018).
It is clear that systematic physical activity, especially of the lower extremities, significantly increases the general adaptive characteristics of a person suffering from pathology (Zavalishina et al., 2021a; Zavalishina et al., 2022). In this regard, the results of adaptive football training with young men suffering from congenital health disorders, and especially hearing loss, are very interesting for modern coaches (Vorobyeva et al., 2018b). This category of beginning athletes is characterized by low physical development, low working capacity, and rather weak socialization. It was found that muscle activity during football training, including adaptive training, can increase the overall physical capabilities of this category of trainees (Bespalov et al., 2018). For increasing the effectiveness of training in adaptive football, it is very important to assess changes in the physical development of young men who have impairments of the auditory analyzer and have started training in adaptive football.
Purpose: to find out the effect of systematic training in adaptive football on the motor abilities of students with congenital hearing loss.
Materials and Methods
To perform the work, 24 nineteen-year-old male students with confirmed congenital hearing loss of I-II degree were taken under supervision. Two groups of surveyed students were formed among them. The main group included 12 students who started regular training in adaptive football. The control group consisted of 12 students who experienced muscle loads only at university classes devoted to physical culture. The indicators were assessed initially and after six months of observation.
In the course of the study, standard functional tests were used, and the results obtained were assessed against several control standards in the observed students. The following indicators were recorded: the duration of a run over a distance of 30 m, the duration of a run over a distance of 60 m, the length of a standing long jump, the duration of a 4×9 m shuttle run, the number of rope jumps over a period of 25 s, the distance that could be run in 6 minutes, the number of pull-ups performed, and the number of torso flexions from a lying position in 1 minute.
Statistical processing in this study was performed using software from StatSoft, Inc. (USA), by calculating the values of Student's t-test (t) and Pearson's correlation coefficient.
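As a rough illustration of this step, a minimal sketch (assuming Python with SciPy; the score arrays are hypothetical placeholders, not the study's data) might look as follows:

```python
# Minimal sketch of the statistical processing described above.
from scipy import stats

# Hypothetical 30 m run times (s) for the two groups of 12 students each.
main_group = [5.7, 5.8, 5.8, 5.9, 5.9, 5.9, 6.0, 6.0, 6.1, 6.1, 6.1, 6.2]
control_group = [6.0, 6.0, 6.1, 6.1, 6.1, 6.2, 6.2, 6.2, 6.3, 6.3, 6.4, 6.2]

# Student's t-test for the difference between the two groups.
t_stat, p_value = stats.ttest_ind(main_group, control_group)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Pearson correlation between two motor tests (e.g., shuttle run vs. 30 m run).
shuttle_run = [10.1, 10.1, 10.2, 10.2, 10.3, 10.3, 10.4, 10.4, 10.5, 10.5, 10.6, 10.6]
r, p_r = stats.pearsonr(shuttle_run, main_group)
print(f"r = {r:.3f}, p = {p_r:.3f}")
```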
Results and Discussion
When performing this study, the dynamics of the indicators were traced among the young male students (Table 1). The initial state of the speed-strength parameters, recorded in the thirty-meter run test (6.1±0.09 s), in the sixty-meter run test (10.5±0.49 s), and during the registration of the jump length (1.55±0.29 m), was low. When included in the study, the endurance of the examined was low. This was indicated by the small distance that the examined could run within six minutes. The low initial physical abilities were also evidenced by the small number of pull-ups on the crossbar that they were capable of. The results of their participation in the shuttle run and the small number of rope jumps they made indicated the small coordination capabilities of those observed at the outset. At the first examination, all the observed had a rapid onset of fatigue, accompanied by numerous errors in motor actions, a decrease in attention, and inhibition of sports movements. Evaluating the results obtained, it was clear that initially all male students with hearing loss had poor physical development. At the end of the observation, there were no significant changes in the recorded characteristics in the control group. The surveyed students who formed the main group, at the end of the adaptive football lessons, showed an increase in their physical capabilities (Table 1). This was evidenced by the dynamics of their physical capabilities (decrease in the time spent running several short distances; increase in the distance of the jump), an increase in the level of strength (increase in the number of pull-ups and body lifts from a horizontal position), optimization of coordination (fast shuttle run, a large number of rope jumps in 25 seconds), and stimulation of endurance (greater distance covered by running in 6 minutes).
After six months of regular training in the main group, the manifestations of fatigue during physical activity weakened. This was judged by the dynamics of subjective sensations and a decrease in pulse rate under load at the end of the study. In the control group, this indicator was unchanged during the entire observation period. During the training in adaptive football among the boys of the main group, the most difficult movements to master were those with a high speed of implementation and a rapid change in the vector of movement, jogging from a standstill with a quick stop, and a combination of running and dribbling with a transition to walking with a changing direction of movement.
Acceleration of mastering the skills of rational leg movements occurred in the course of frequent repetition of motor actions by the trainees. The boys included in the main group, after 6 months of training in the football section, increased their locomotor stability, reduced the number of irrational movements during active locomotion, and developed the capacity for sustained deep breathing.
A very important aspect in the development of motor actions in football is the correlation, found at the end of the work, between the duration of the shuttle run and the time of the thirty-meter run (r=0.681; p<0.056). Improvement in jumping results using the standard rope correlated among the observed football players by the end of the study with the distance of the standing long jump (r=0.517; p<0.052). The revealed acceleration of running at different distances was also correlated with the distance of a standing jump without a run-up (r=0.610; p<0.51).
Physical activity has long been considered a strong stimulant for all body tissues (Shilenok, 1997; Zavalishina, 2021). By increasing the work of the striated muscles of the lower extremities and torso during football training, the body activates metabolism, hemodynamics in all organs, and protein synthesis processes (Karpov et al., 2021a). In regularly working muscles, capillaries open to allow blood cells to pass through (Fayzullina et al., 2020). With an increase in muscle activity, skeletal muscles receive more oxygen and more substances of plastic and energy significance (Dorontsev et al., 2022). Under these conditions, skeletal muscles significantly activate the synthesis of various proteins and generate more adenosine triphosphate (Vorobyeva et al., 2020). This increases their size and enhances their strength characteristics (Karpov et al., 2020).
It is known that feasible physical activity, which is not excessive for the body, enhances all processes in it (Zavalishina, 2020c). This has previously been found in young and mature human bodies without obvious pathology (Zavalishina et al., 2018). There were no significant gender differences in body responses to adequate physical activity (Vorobyeva et al., 2018c).
Of particular interest have always been studies on the effect of physical activity on a sick organism, including those with impaired functions of analyzers (Zavalishina et al., 2021b). In these studies, regular, moderately dosed muscle activity was considered an effective approach to health improvement or a component of ongoing treatment (Tkacheva & Zavalishina, 2019; Mikhaylova et al., 2021). However, with existing somatic pathology, it is not possible in all cases to improve the patient's condition by using physical training alone. Physical training can rightly be considered a significant element of general strengthening procedures, often as part of different complexes aimed at eliminating various somatic pathologies (Karpov et al., 2021b; Zavalishina et al., 2021c).
Increasing muscle activity is recognized as having particular health-improving potential when a single analyzer malfunctions (Zavalishina et al., 2021d). At the same time, the effect of physical training on the level of physical development in hearing loss remains incompletely clear. The effect of regular football training in the presence of hearing loss has not yet been fully elucidated. Its health-improving potential for hearing loss in adolescence has not been definitively established. These studies are seriously needed because hearing loss can impair socialization, lower the quality of life, and significantly reduce a person's ability to work.
Recently, disruptions in the work of analyzers have become increasingly common among young people, which prevents them from fully realizing their innate labor potential and often contributes to disability. For this reason, a further search for types of physical stimulation of a young organism with hearing loss is very relevant today. Regular football training can be considered an effective way of influencing the body which can help in this case. Its potential for hearing loss is still undervalued, having been tested in only a few studies.
Previously used for many disorders of the body, physical activity has always had a strictly therapeutic orientation. Most often these were different types of athletics. They showed a therapeutic effect on a sick human body, but their ability to increase the physical fitness of the deaf was unclear. At the same time, one could reasonably expect that an increase in physical activity stimulates an increase in the volume of skeletal muscles.
In our study, applying physical activity did not make it possible to influence the severity of hearing loss. In addition, the preventive possibilities of increasing muscle activity in young people in terms of reducing the risk of developing hearing pathology and aggravating existing hearing loss also remain highly controversial. At the same time, the possibility of physical stimulation of hearing-impaired youths with the help of football training was unequivocally clear. Given these circumstances, the study, on the one hand, closed existing gaps in scientific knowledge and, on the other hand, confirmed already known information.
The results of the observation testified to the serious health-improving effects of systematic football training in adolescents with hearing loss. The results obtained in the study suggest that regular football can normalize heart parameters and increase physical capabilities.
The information found in the course of this study unequivocally indicates a serious health-improving potential of playing football in adolescents with hearing loss. The significantly more functionally beneficial changes found at the end of the observation in the group of trainees were determined by more pronounced stimulation of the muscles of the legs and trunk in the young men of this group during their football activities.
Considering the obtained results, it becomes clear that frequent football loads should be considered an effective means of general somatic strengthening of young men, not only healthy ones but also those with severe hearing loss.
The authors believe that, despite some difficulty in coming into contact with these young people, it is quite possible to involve them in regular football training in an organized way. Moreover, such regular training enhances the functioning of their muscular, circulatory, and respiratory systems. In this regard, it is clear that significant muscle activity during regular football training increases the overall adaptive capacity of vital organs. An essential mechanism for increasing the physical capabilities of the body of young men, and of its cardiovascular system in particular, is the growth of regular physical activity (Zavalishina & Makhov, 2019; Karpov et al., 2019a). This creates conditions for increasing the body's capabilities concerning the oxygen supply of all its structures (Zavalishina et al., 2019). At the same time, under conditions of increased general physical activity due to football training, in young men with hearing loss the degree of myocardial development increases with biologically favorable inhibition of hemostasis, which ensures optimal hemocirculation in tissues (Evgrafov & Kuznetsov, 2010).
The increase in locomotor stability in people included in the main group after six months of training is associated with successful adaptation of the trainees' entire muscular system to physical stress (Karpov et al., 2019b). The increase in the stability of the subjects while running along a changing trajectory during six months of training in adaptive football was also provided by the increased fitness of their vestibular system developed in the course of sports activities (Zavalishina et al., 2021e; Zavalishina et al., 2021f).
Taking into account the information known from the scientific literature and the data of our research, it is legitimate to assume that in the case of regular physical activity during football training, the development of skeletal muscles increases and the mobility of the main joints increases (Mal et al., 2021). Systematic football training leads to an increase in biosynthetic processes in various cells, optimizing the state of all parts of the body (Skoryatina et al., 2017). Physical activity during football loads balances the processes of excitation and inhibition in the cerebral cortex, the subcortex, and the autonomic nervous system, strengthening the body and preventing the appearance of many types of pathology (Mal et al., 2019; Makurina et al., 2022).
Conclusion
The possibility of significant health improvement under conditions of dosed muscle load is well known. Football training can be considered very effective in terms of recovery and stimulation of physical fitness among trainees. At the end of six months of adaptive football training, the students with congenital hearing loss significantly increased their strength capabilities, increased speed parameters, improved coordination of movements, and increased overall endurance. The presence of physical activity only during physical education classes at the place of study was not accompanied by changes in the recorded indicators. Regular adaptive football classes effectively stimulate movement and increase the overall physical performance of young men with congenital hearing loss during their university studies.
Table 1. Dynamics of motor abilities | 2023-07-10T23:52:20.193Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "59fbb76abd77442dd2585b9860a9e0a61ad13b72",
"oa_license": "CCBY",
"oa_url": "https://jbiochemtech.com/storage/files/article/4cd16239-ac1a-4141-aa46-04dc818e0933-PT4D81U1yGMrY1Iz/jbio-vol-14-no-2-2023-18-23-1053.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4dbb175865d1b196f7f0d7ec029f028c2fc8df3e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
258865710 | pes2o/s2orc | v3-fos-license | Enabling Large Language Models to Generate Text with Citations
Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We develop automatic metrics along three dimensions -- fluency, correctness, and citation quality -- and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement -- For example, on the ELI5 dataset, even the best models lack complete citation support 50% of the time. Our analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources.
Introduction
Large language models (LLMs; Brown et al., 2020; OpenAI, 2023) have gained increasing popularity as a tool for information seeking. While they generate engaging and coherent responses, their outputs are prone to hallucination and often contain factually incorrect information (Ji et al., 2023). This makes it harder for users to trust and verify LLM-generated outputs without any supporting evidence.
In this work, we study a new generation paradigm for LLMs, in which we require LLMs to provide citations to one or a few text passages for any statement they generate (Figure 1). Incorporating citations brings several benefits: (1) users can easily verify LLMs' claims with the provided citations; (2) LLMs can generate text that faithfully follows cited passages, which has the promise to improve correctness and alleviate hallucination.
Multiple commercial systems have adopted this paradigm: Bing Chat and perplexity.ai respond to user questions in natural language with references to Web pages. Nakano et al. (2021); Menick et al. (2022) share a similar motivation, but they mainly experiment with commercial search engines and closed-source models, making their results difficult to evaluate. Retrieval-augmented LMs (Borgeaud et al., 2022; Izacard et al., 2022) incorporate retrieved passages during both training and inference, but do not guarantee faithfulness to retrieved passages or explicitly provide citations. Additionally, previous studies mostly rely on human evaluation (Nakano et al., 2021; Menick et al., 2022; Liu et al., 2023), which is expensive and difficult to reproduce. We argue that the absence of automated evaluation hinders the advances of such systems.

[Table 1: The three datasets used in our ALCE benchmark. These datasets cover a wide range of question types, and the corresponding corpora span from Wikipedia to a Web-scale document collection.]
We present ALCE, the first reproducible benchmark for automatically evaluating LLMs' generations with citations. ALCE assumes a natural-language question and a retrieval corpus, and requires building end-to-end systems to retrieve relevant passages from the corpus, generate a response to the question, and cite corresponding supporting passages. We compile three datasets that cover different types of questions and corpora, ASQA (Stelmakh et al., 2022), QAMPARI (Rubin et al., 2022), and ELI5 (Fan et al., 2019), as shown in Table 1. Different from previous benchmarks (Lee et al., 2019; Bohnet et al., 2022), ALCE evaluates long-text generation, focusing on automatically evaluating citation quality, and allows citing multiple passages for individual statements.
We design automatic evaluation methods in three dimensions: fluency, correctness, and citation quality. Specifically, we use MAUVE (Pillutla et al., 2021) to measure fluency, propose tailored correctness metrics for each dataset, and adopt a natural language inference (NLI) model (Honovich et al., 2022) to measure citation quality. We showcase how the three dimensions together contribute to a robust evaluation, preventing systems from exploiting shortcuts. Additionally, we conduct human evaluation and demonstrate a strong correlation with our automatic metrics.
We experiment on multiple systems with state-of-the-art LLMs and retrievers and also propose novel prompting strategies to synthesize retrieved text into text generation. Although all systems are capable of providing fluent and coherent responses, there remains substantial room for improvement in terms of correctness and citation quality: for example, on the ELI5 dataset, around 50% of the generations of our ChatGPT and GPT-4 baselines are not fully supported by the cited passages. Additionally, we find that (1) a closed-book model (generating answers without accessing any retrieved documents) with post-hoc citing achieves good correctness but much worse citation quality; (2) although interactive retrieval approaches (Yao et al., 2023; Schick et al., 2023) offer more flexibility in when/what to retrieve, they do not improve the performance on this challenging benchmark; (3) summarizing the retrieved passages in a shorter text improves correctness but not citation quality; (4) reranking multiple generations boosts citation quality measured by human evaluation; (5) incorporating more retrieved passages in context does not help ChatGPT but improves GPT-4 performance.
Our extensive analyses highlight three major challenges of building LLMs to generate text with citations: (1) the retrieval quality is crucial to the final performance and has substantial room for improvement; (2) LLMs' limited context window restricts the number of passages they can incorporate; (3) current LLMs struggle to synthesize multiple documents in context without being distracted by irrelevant ones, although better instruction tuning brings significant improvement. These challenges pose promising research directions for developing better systems integrating retrieval and LLMs.
Task Setup and Datasets
Our task is formalized as follows: Given a query q and a corpus of text passages D, the system is required to return an output S, which consists of n statements s_1, ..., s_n, and each statement s_i cites a list of passages C_i = {c_{i,1}, c_{i,2}, ...}, where c_{i,j} ∈ D. In this work, we segment LLMs' output into statements by sentence boundaries. While LLMs may include sentences that do not require a citation, such as "I'm happy to help", we observe that almost all sentences that LLMs output provide valuable information and require citations, similar to findings in Liu et al. (2023). In this work, citations are enclosed by box brackets such as [1][2].
We divide the corpus D into 100-word passages following previous works on open-domain question answering (Karpukhin et al., 2020; Petroni et al., 2021; Piktus et al., 2021), in contrast to commercial systems like Bing Chat, which cite entire Web pages. We take 100-word passages because they are easier for humans to verify and allow for more retrieved passages to fit in LLMs' limited context.
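As an illustration, the segmentation into 100-word passages could be sketched as follows (plain whitespace tokenization is a simplifying assumption; the actual preprocessing may differ):

```python
# Split a document into consecutive 100-word passages.
def segment_into_passages(text: str, words_per_passage: int = 100) -> list[str]:
    words = text.split()  # whitespace tokenization is an assumption here
    return [
        " ".join(words[i:i + words_per_passage])
        for i in range(0, len(words), words_per_passage)
    ]
```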
We choose QA datasets so that (1) they contain factual questions, in which references are important; (2) questions require long-text answers that cover multiple aspects; (3) answering the questions requires synthesizing multiple sources. We select three datasets (Table 1) and introduce them below. See §B for additional statistics.
ASQA (Stelmakh et al., 2022) is a long-form factoid dataset. As shown in Figure 1, each question is an ambiguous question from AmbigQA (Min et al., 2020) that requires multiple short answers to cover different aspects, and the dataset provides a long-form answer that covers all short answers. Since most questions can be answered by Wikipedia, we use the 2018-12-20 Wikipedia snapshot as D.
QAMPARI (Rubin et al., 2022) is a factoid QA dataset constructed from Wikipedia, where the answer is a list of entities that are drawn from different passages. Same as ASQA, we use the 2018-12-20 Wikipedia as the corpus.
ELI5 (Fan et al., 2019) is a long-form QA dataset built on the Reddit forum "Explain Like I'm Five". Most ELI5 questions are how/why/what questions that require long answers and multiple passages as evidence. Due to the diverse topics discussed in the questions, we use Sphere (Piktus et al., 2021), a filtered version of Common Crawl, as the corpus. The ELI5 dataset is widely used in related work due to its challenging nature (Nakano et al., 2021; Menick et al., 2022; Liu et al., 2023).
We randomly select 1,000 examples from the development set of each dataset for ALCE. Our benchmark primarily assesses the citation capabilities of existing LLMs and does not provide training data, as there are no available examples that provide supervision for citations in these datasets.
Automatic Evaluation
Our benchmark measures the following three dimensions of system responses:
• Fluency: whether the model's generated text is fluent and coherent.
• Correctness: whether the answer is accurate and covers all aspects of interest.
• Citation quality: whether the answer is well supported by the cited passages and no irrelevant passages are cited.
In the following, we present automatic metrics for each dimension and discuss why the combination of the three metrics provides a robust evaluation.
Fluency
We use MAUVE (Pillutla et al., 2021) to evaluate the fluency of the output (§C). We deploy MAUVE for ASQA and ELI5 and omit it for QAMPARI, as QAMPARI only requires a list of short answers as the response and LLMs consistently adhere to the format in our experiments. As MAUVE is sensitive to output length and text style, and most LLMs are capable of producing fluent text, we mainly employ it as a sanity check as long as the MAUVE scores are high enough.
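For reference, a minimal sketch of this check with the open-source mauve-text package might look as follows (the exact arguments are an assumption; truncation to 100 words mirrors the setup described in the appendix):

```python
# Sketch: fluency sanity check with MAUVE (Pillutla et al., 2021).
# Assumes `pip install mauve-text`; the texts below are placeholders.
import mauve

def truncate(text: str, max_words: int = 100) -> str:
    return " ".join(text.split()[:max_words])

human_answers = ["The US declared independence in 1776 ..."]
model_outputs = ["The United States broke away from England in 1776 ..."]

out = mauve.compute_mauve(
    p_text=[truncate(t) for t in human_answers],
    q_text=[truncate(t) for t in model_outputs],
)
print(out.mauve)  # higher is better
```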
Correctness
Our objective is to measure the informativeness and utility of the generation to the question. Liu et al. (2023) propose to directly evaluate perceived utility by humans, a process difficult to automate. Therefore, we use correctness, whether the response is accurate compared to a ground truth answer, as a proxy. Evaluating the correctness of long-form generation is a challenging task (Krishna et al., 2021), and we describe our strategy for each dataset below. Figure 2 illustrates the metrics and we include additional implementation details in §C.
For ASQA, we follow Stelmakh et al. (2022) and calculate the recall of correct short answers by checking whether the short answers (provided by the dataset) are exact substrings of the generation (exact match recall; EM recall).
For QAMPARI, we follow Rubin et al. (2022) and calculate the precision and recall of the model prediction by checking the exact match to the gold answer list. We add one additional adjustment: considering that users often want to know only a few example answers to the question, our evaluation considers recall to be 100% if the prediction includes at least 5 correct answers (recall-5).

Unlike ASQA and QAMPARI, the ELI5 dataset does not provide short entity answers. Fan et al. (2019) use ROUGE for evaluation, which does not reflect correctness well (Krishna et al., 2021; §A). Inspired by works in summarization evaluation (Zhang and Bansal, 2021; Kamoi et al., 2023; Wang et al., 2020), we use InstructGPT (text-davinci-003; Ouyang et al., 2022) to generate three "sub-claims". Then we use TRUE (Honovich et al., 2022), a T5-11B (Raffel et al., 2020) model fine-tuned on a collection of natural language inference (NLI) datasets, to check whether the model output entails the sub-claims (claim recall). TRUE targets factual correctness and has been used by previous works in a similar context (Bohnet et al., 2022; Gao et al., 2023). We demonstrate that claim recall provides a more accurate measure of correctness than existing metrics (more details in §A).
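As a rough illustration, the exact-match metrics for ASQA and QAMPARI could be sketched as follows (the normalization shown is a simplified SQuAD-style routine, and the recall-5 denominator is one plausible reading; the benchmark's actual implementation may differ in details):

```python
import re
import string

def normalize(text: str) -> str:
    # Simplified SQuAD-style normalization: lowercase, drop punctuation
    # and articles, collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def em_recall(output: str, short_answers: list[str]) -> float:
    # ASQA: fraction of gold short answers appearing as substrings of the output.
    out = normalize(output)
    return sum(normalize(a) in out for a in short_answers) / len(short_answers)

def recall_5(predicted: list[str], gold: list[str]) -> float:
    # QAMPARI: recall is treated as 100% once 5 correct answers are found.
    gold_set = {normalize(g) for g in gold}
    hits = sum(normalize(p) in gold_set for p in predicted)
    return min(hits / min(5, len(gold)), 1.0)
```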
Citation Quality
We evaluate citation qualities using two metrics: (1) citation recall, which determines if the output is entirely supported by cited passages, and (2) citation precision, which identifies any irrelevant citations. Although we prioritize citation recall as it entails a well-supported and truthful answer, enhancing precision is crucial for better user satisfaction, reducing the need for human review of extraneous passages. Figure 3 provides an illustrated example.

[Figure 3: An illustrated example of citation recall and precision. A statement's recall is 1 if the concatenation of all cited passages fully supports it (e.g., citation recall = 2/3 when two of three statements are supported); a citation is "irrelevant" if it alone does not support the claim and removing it does not affect whether the remaining citations jointly support the claim; if recall = 0, then precision = 0. An NLI model is used to verify whether a statement is supported by its citations.]
We use the NLI model TRUE (Honovich et al., 2022) again to automatically examine whether the cited passages entail the model generation. We conduct human evaluation (§6) to demonstrate the strong human correlation of our metric.
Citation recall. We calculate the citation recall of each statement (0 or 1) and average over all statements in the model response. For each statement s_i, its citation recall is 1 if and only if there is at least one citation (C_i ≠ ∅) and ϕ(concat(C_i), s_i) = 1, where ϕ(premise, hypothesis) is the NLI model that outputs 1 if the premise entails the hypothesis, and 0 otherwise; concat(C_i) concatenates all passages in C_i together (details in §C). The NLI evaluation is in accordance with the attributable to identified sources (AIS) framework (Rashkin et al., 2023).

Citation precision. Our citation precision evaluation detects citations that are irrelevant, but it does not require citing a minimal set. We follow this design because human writing often cites redundant sources to enhance credibility; human readers may also appreciate multiple citations, especially when it pertains to critical claims such as medical advice.
We calculate the citation precision for each citation (0 or 1) and average over all citations in the response. We first define when a citation is "irrelevant". Intuitively, a citation c_{i,j} is "irrelevant" if (a) c_{i,j} itself cannot support s_i and (b) removing c_{i,j} does not affect the rest of the citations to support s_i. A citation c_{i,j} has a precision of 1 if s_i has recall = 1 and c_{i,j} is not "irrelevant". For example (Figure 3), when s_3 cites three references [2][4][5] and recall = 1, [2] is "irrelevant" if ϕ([2], s_3) = 0 and ϕ(concat([4][5]), s_3) = 1. For condition (b) to work, we set recall = 1 as a prerequisite for precision = 1. Note that this algorithm overlooks the scenario when one citation partially supports the statement. We discuss the details in §E.
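A minimal sketch of these two metrics, given an entailment predictor ϕ, might look as follows (the function `phi` stands in for the TRUE NLI model; any binary premise/hypothesis entailment classifier with this interface would fit):

```python
# Sketch of ALCE-style citation recall and precision for one response.
# `phi(premise, hypothesis) -> 0 or 1` stands in for the TRUE NLI model
# (assumed here to be a T5-based entailment checkpoint).

def citation_recall(statements, citations, phi) -> float:
    # citations[i] is the list of passage texts cited by statements[i].
    scores = []
    for s, cits in zip(statements, citations):
        supported = bool(cits) and phi(" ".join(cits), s) == 1
        scores.append(1.0 if supported else 0.0)
    return sum(scores) / len(scores)

def citation_precision(statements, citations, phi) -> float:
    per_citation = []
    for s, cits in zip(statements, citations):
        recall_ok = bool(cits) and phi(" ".join(cits), s) == 1
        for j, c in enumerate(cits):
            rest = cits[:j] + cits[j + 1:]
            # "Irrelevant": c alone fails to support s, and the remaining
            # citations still jointly support s.
            irrelevant = (
                phi(c, s) == 0 and bool(rest) and phi(" ".join(rest), s) == 1
            )
            per_citation.append(1.0 if recall_ok and not irrelevant else 0.0)
    return sum(per_citation) / len(per_citation) if per_citation else 0.0
```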
ALCE is Robust to Shortcut Cases
We showcase how the ALCE evaluation is robust to two possible shortcuts in §D: (1) using the top-1 retrieved passage as the response and citing itself, and (2) using the first two sentences of the top-1 passage. Both cases have almost-perfect citation scores, but (1) has low fluency due to its unnaturally long length compared to human answers, and (2) has low correctness due to low coverage.
Modeling
In this section, we discuss three major modeling components for an ALCE system: retrieval, synthesis, and post-editing.
Retrieval
We explore simple, off-the-shelf retrievers. We use dense retrievers for Wikipedia, including GTR (Ni et al., 2022) and DPR (Karpukhin et al., 2020); we use BM25 for Sphere. For each question, we retrieve the top-100 passages.
Synthesis
We focus on how to prompt an LLM to interact with the retriever, and synthesize and cite the evidence (without fine-tuning internal parameters). One noteworthy challenge is that existing LLMs all have limited context windows and thus can only fit a handful of passages.
VANILLA. We simply provide the model with the top-k passages and instruct the model to cite accordingly (Table 2). We also use in-context learning (Brown et al., 2020) and prepend two demonstrations. The complete instruction is in Table 23.

SUMM/SNIPPET. With a 4K context window, we can at most safely fit k = 5 passages. As shown in Figure 4, the top-5 retrieved passages can only cover 56.8% of the answers in ASQA.
To tackle this limitation, we propose to provide summaries or snippets of passages instead of the full text (summaries are abstractive but snippets are spans from passages). We acquire summaries and snippets by prompting ChatGPT with instructions (prompts in Tables 25 and 26). Then we replace all passages with summaries/snippets. Summaries or snippets significantly reduce the passage length, allowing for more passages to fit in: for ASQA, they reduce passage length by 6× on average.
Though SUMM/SNIPPET allows for more retrieved passages, they are lossy compressions. To alleviate this problem, we propose INTERACT, an interactive prompting scheme that allows the model to check the full text of certain passages. At each step, the model can execute one of three actions: (1) "Check: Document [1][2]" to check the full text of the corresponding documents; (2) "Output:" to output a statement of the answer; (3) "End." to end the generation. §C provides more details.
INLINESEARCH. The above methods all display retrieval results at the beginning. In INLINESEARCH, we allow LLMs to call "search" during the generation process (Yao et al., 2023; Press et al., 2022; Jiang et al., 2023). At each step, the model can execute one of three actions: "Search: {query}" to search among the top-100 passages by using GTR; the "Output" and "End" actions are the same as INTERACT. For each "Search" action, we display the best retrieved passage in the context. The passage is removed after one action to save context space. Table 3 shows an example.
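A rough sketch of the control loop behind such an interactive scheme might look as follows (`llm_generate`, which emits one action per call, and `gtr_search` are hypothetical placeholders; for brevity the sketch keeps retrieved passages in context, whereas the actual setup removes them after one action):

```python
# Sketch of an INLINESEARCH-style action loop. Both helper functions
# (`llm_generate`, `gtr_search`) are hypothetical stand-ins.
def inline_search_answer(question, llm_generate, gtr_search, max_steps=10):
    context = f"Question: {question}\n"
    statements = []
    for _ in range(max_steps):
        action = llm_generate(context).strip()
        if action.startswith("Search:"):
            query = action[len("Search:"):].strip()
            passage = gtr_search(query)  # best passage among the top-100
            context += f"{action}\nPassage: {passage}\n"
        elif action.startswith("Output:"):
            statements.append(action[len("Output:"):].strip())
            context += action + "\n"
        else:  # "End." or anything unparseable stops the loop
            break
    return " ".join(statements)
```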
CLOSEDBOOK. We also add a simple closed-book baseline, where the model is only prompted with the instruction and the question, without any retrieved passages provided. Consequently, this variant does not cite any evidence.
Post-editing
In this section we discuss two strategies for refining the output to further improve its quality.
RERANK. We randomly sample n_sample = 4 responses for each question and select the best response using the automatic citation recall score. As the selection criterion is citation recall, we expect RERANK to improve the citation quality.
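In sketch form (with a hypothetical `generate` sampler and the automatic citation-recall scorer described above):

```python
# Sketch of RERANK: sample several candidate responses and keep the one
# with the highest automatic citation recall. `generate` is hypothetical.
def rerank(question, generate, score_citation_recall, n_sample=4):
    candidates = [generate(question) for _ in range(n_sample)]
    return max(candidates, key=score_citation_recall)
```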
POSTCITE. For each statement, we find the best matching passage among the top-100 retrieved passages using GTR and cite it. We combine this with CLOSEDBOOK in our experiments.
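A minimal sketch of post-hoc citing with a GTR dual encoder might look like this (the sentence-transformers checkpoint name is an assumption for illustration; any dense retriever with the same interface would work):

```python
# Sketch of POSTCITE: cite the retrieved passage most similar to each statement.
# The checkpoint name below is an assumption, not necessarily the one used.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/gtr-t5-base")

def post_cite(statements, passages):
    p_emb = encoder.encode(passages, convert_to_tensor=True)
    cited = []
    for s in statements:
        s_emb = encoder.encode(s, convert_to_tensor=True)
        best = int(util.dot_score(s_emb, p_emb).argmax())
        cited.append(f"{s} [{best + 1}]")  # 1-indexed citation marker
    return " ".join(cited)
```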
Main Results
We present the main results on the three datasets in Tables 4, 5, and 6, respectively (full results in §G.6).
We first note that all models achieve good fluency scores (except some models on ELI5, mainly due to their longer generations). We summarize the main takeaways from the experiments below.
VANILLA achieves strong performance.

GPT-4 brings limited improvement but is better at using long context. We evaluate GPT-4 with VANILLA and different numbers of passages (more results in §G.6). GPT-4 brings consistent (but limited) improvement on correctness, but often at a cost of citation quality. GPT-4 can also incorporate more passages due to its longer context window, which boosts both correctness and citation quality. On the contrary, including more passages with ChatGPT-16K does not improve the results (Table 7), suggesting that processing more passages is non-trivial and that GPT-4 is better at synthesizing information from its long context than ChatGPT.
Comparison of Different LLMs
Table 7 compares different LLMs on ASQA using VANILLA (more results in §G.6). Notably, instruction-tuned models (Vicuna-13B and LLaMA-2-Chat) outperform the original LLaMA models in correctness and considerably enhance the citation quality. We observe that while the original LLaMA models are able to copy facts from the context, they struggle with accurately citing the sources or simply do not cite. Notably, the best open-source model, LLaMA-2-70B-Chat, achieves a correctness score comparable to the OpenAI models, but still lags behind in citation quality.
Retrieval Analysis
The retrieval results play a crucial role in the correctness and the citation quality. Figure 4 presents the retrieval recall@k on the different datasets.

[Figure 4: Retrieval recall@k on ASQA (EM recall), QAMPARI (recall-5), and ELI5 (claim recall). Retrieval recall serves as an upper bound for model performance, and we compare it with two models' correctness results in the figure (dashed lines): "Vanilla (5-psg)" is ChatGPT VANILLA with top-5 passages in context; "Oracle" is the same model except that it uses 5 gold passages (§G.1), whose recall matches recall@100 on all three datasets.]

Figure 4 also shows the correctness performance of two models: (1) ChatGPT VANILLA with top-5 passages (our primary baseline); (2) an oracle version of the same model employing 5 gold passages (§G.1; the 5 gold passages match the retrieval recall@100). Notably, both models' correctness lags behind the corresponding retrieval recall (except for ELI5 top-5). The discrepancy suggests that despite the presence of accurate answers in context, LLMs struggle to utilize them in their outputs.
We compare the impact of different retrievers and different numbers of passages on LLMs. Figure 4 (right) shows that GTR outperforms DPR in both correctness and citation quality, emphasizing the importance of deploying better retrievers. Contrary to the retrieval recall trend in Figure 4, more passages in context do not yield substantial improvement for ChatGPT. Specifically, correctness plateaus at the top-1 passage and citation quality plateaus at top-3. GPT-4 (Table 7) exhibits an increasing trend with more passages, but the improvement is not proportional to the retrieval performance. This indicates the limited ability of LLMs to utilize multiple passages within context.
Other Ablations
We provide additional ablations in §G. In summary, we find that (1) using comprehensive instructions enhances the citation quality of instruction-tuned models (§G.2); (2) including at least one demonstration improves the performance (§G.3); (3) fine-tuned models (FiD; Izacard and Grave, 2021) with POSTCITE lag behind LLMs in both correctness and citation quality and fail to generalize (§G.4).
Human Evaluation
To verify that our automatic evaluation correlates with human judgement, we conduct human evaluation on selected models and request workers to judge model generations on three dimensions similar to Liu et al. (2023): (1) utility: a 1-to-5 score indicating whether the generation helps answer the question; (2) citation recall: the annotator is given a sentence and all passages that the sentence cited, and is asked to judge whether the passages fully support the sentence; (3) citation precision: given a sentence and one of its citations, the annotator is asked to judge whether the citation "fully supports", "partially supports", or "does not support" the sentence. Each citation gets a precision score of 1 if the output sentence has a citation recall of 1 and this citation at least "partially supports" it. See Appendix F for more details.
Model outputs score high utility. The utility scores do not differ significantly between models, ranging 3.7-3.9 for ASQA and 3.5-3.6 for ELI5. Upon inspection, all tested models are mostly able to output fluent answers that are related to the question, despite differences in factual correctness.
Our automatic evaluation of citation quality strongly correlates with human judgements. As shown in Table 8 (ASQA) and Table 9 (ELI5), the relative rankings induced by human and our automatic metrics are consistent. The absolute citation scores from human and ALCE are very close except for RERANK (which uses the automated citation recall for reranking). This suggests that an improvement on ALCE citation metrics translates to improvement on human preferences. Furthermore, the Cohen's kappa coefficient between human and ALCE suggests substantial agreement for citation recall (0.698) and moderate agreement for citation precision (0.525). We also show in §G.5 that our automatic evaluation achieves high accuracy when treating human annotations as gold labels (85.1% for citation recall and 77.6% for citation precision).

Related Work

Generating text with citations is closely related to attribution. Rashkin et al. (2023) define the "attributable to identified sources" (AIS) score to measure how faithful a generated text is to its sources. Bohnet et al. (2022) apply AIS scores on a single-document short-answer QA dataset. Honovich et al. (2022); Yue et al. (2023) study automatic evaluations for the AIS score. A concurrent work (Liu et al., 2023) conducts human evaluation on commercial generative search engines to examine their citation qualities. Scientific citation text generation (Funkquist et al., 2022) is a related task to ALCE where the model is provided the papers-to-cite and context and is required to recover the citing text. It is different from ALCE as all citations are provided and the model only needs to perform the summarization.

Retrieval-augmented LMs. Many studies have explored augmenting LMs with externally retrieved information. Guu et al. (2020); Borgeaud et al. (2022); Izacard et al. (2022) pre-train language models with retrieved passages, while Khandelwal et al. (2020); Zhong et al. (2022) augment LLMs' output by interpolating it with a kNN module; though none of them explicitly provide citations to the retrieved sources. Other works prompt or fine-tune LLMs to "retrieve on-the-fly" (Parisi et al., 2022; Schick et al., 2023; Shuster et al., 2022; Jiang et al., 2023; Yao et al., 2023; Press et al., 2022), which offers flexibility of when and what to search. Gao et al. (2023); He et al. (2022) propose to first generate text without accessing external documents and then retrieve relevant documents and revise the generation to be consistent.
Among previous explorations, Nakano et al. (2021); Menick et al. (2022) are the closest to our setting, where LLMs are trained to answer questions while providing citations. However, they do not explore retrieval strategies and simply use commercial search engines, which are not reproducible, and their models and training data are closed-source. To the best of our knowledge, we are the first to implement end-to-end systems that retrieve, synthesize, and cite documents with LLMs.
Conclusion
We propose ALCE, the first automatic benchmark for evaluating LLM generations with citations. We deploy automatic metrics to measure fluency, correctness, and citation quality, and verify their efficacy via human evaluation. We explore a variety of strategies for incorporating citations in LLMs and demonstrate that current systems have considerable room for improvement on ALCE.
Our experiments highlight a number of promising research directions, including (1) enhancing retrieval and refining retrieval integrations in LLMs, (2) developing long-context LLMs, and (3) advancing LLMs' ability to synthesize multiple sources. What is even more intriguing is that these research proposals extend beyond the ALCE setup (for example, long-context LLMs have numerous exciting applications), and ALCE can serve as a valuable testbed for their development.
Limitations
Our evaluation still has room for improvement: (1) MAUVE is found to be sensitive to output length and may provide unstable results; (2) for ELI5's correctness evaluation, the automatically generated claims may not cover all possible answers due to the open-ended nature of the questions; (3) our citation quality evaluation is limited by the accuracy of the NLI model; for citation precision, the NLI model cannot detect the case of "partial support" and thus leads to a lower citation precision score than the human evaluation.
Although we believe our curated datasets closely resemble the distribution of real-world user questions, we acknowledge that they do not cover more challenging scenarios, such as multi-hop reasoning, math reasoning, and code completion.
In our experiments, we focus on prompting LLMs without updating their model weights. Training a model directly to incorporate citations remains challenging due to the lack of supervised data. However, we observe that certain human-instruction datasets contain examples similar to our task setup. We leave the exploration of training LLMs to generate citations for future work.
A Generating Claims for ELI5
We elect not to use ROUGE-L as our main correctness metric, since it does not account for the different ways of expressing the same answer and it can be easily gamed (Krishna et al., 2021). We further illustrate this issue in Table 10. A system can easily achieve a high ROUGE-L score by retrieving and returning the top passage from a BM25 index. However, the claims evaluation metric does not reward this approach, since the output often lacks different aspects of the answers. Instead, we leverage the original answers to generate sub-claims and use them as an estimate of the different aspects of the answers that we expect the model to cover. This approach is inspired by works in summarization evaluation and claim verification (Zhang and Bansal, 2021; Kamoi et al., 2023; Wang et al., 2020).
Specifically, we use text-davinci-003 to generate the sub-claims. We first manually annotate three question and answer pairs from the original ELI5 training set with 3 sub-claims each. Then, we prompt text-davinci-003 with these pairs as demonstrations. The full prompt with an example is shown in Table 22.
InstructGPT generates coherent and faithful sub-claims. To ensure that the generated sub-claims are of good quality, we manually inspect a random sample of 40 answers and their generated sub-claims (totaling 120 sub-claims). For each sub-claim, we assign a score of 1 if it is relevant to the question and faithful to the facts presented in the ground truth, and 0 otherwise. We found that 112 out of the 120 (93.33%) sub-claims received a score of 1, meaning that our generated sub-claims are of high quality and faithful to the ground truth. Furthermore, the average length of the generated sub-claims is 14 words, and they are typically just one sentence long. This is aligned with the intent behind the metric: to capture short factual claims made by the original answer.
NLI model accurately predicts the entailment of sub-claims. We further analyze our sub-claim evaluation metric by checking the error rate of the final prediction of the NLI model. To this end, we first manually annotate the entailment scores between 40 outputs and their sub-claims (a total of 120 pairs; these are the same questions from the previous analysis). We then use the NLI model to obtain the entailment scores for the outputs and sub-claims. Using the human annotations as the ground truth labels, we found that the NLI model achieved an accuracy of 80.0%.
B Dataset Statistics
For ASQA, human answers have an average length of 65 words. For QAMPARI, each question has on average 13 answers. For ELI5, human answers have an average length of 131 words.
C Implementation Details

MAUVE.
When running MAUVE, we concatenate the question and the model output (or human answer) with a space. We truncate both the references and the model generations to 100 words, as we found MAUVE results are unstable beyond this length for ELI5 (this is because ELI5 has a lot of extremely long human answers).
Exact match for ASQA and QAMPARI. Both ASQA and QAMPARI provide aliases for their short answers. We normalize the response and the short answers similarly to Rajpurkar et al. (2016) and report the score with the best-matching aliases. For ASQA, Stelmakh et al. (2022) also propose a QA-based evaluation, which we found to be not as stable, and thus we do not report it in our paper.
Output truncation. Before evaluation, we truncate model output by new lines, as non-instruction-tuned models may generate more content after new lines that is irrelevant.
INTERACT. Empirically, we found that models tend to execute too many consecutive "check" actions, so we force the model to always "output" after each "check". We limit the maximum number of passages to check to 3 to avoid exceeding the length limit. The full passages are removed from the context after one action to save context space. Table 27 provides an example of INTERACT.
Main experiments. For all experiments except ChatGPT RERANK, we run each model three times with different seeds, and each time we sample two demonstrations from a pool of four. We report the averaged scores for all experiments in the main paper and report the standard deviations in Appendix G.6.
Decoding methods. Based on preliminary experiments, we choose the following decoding methods: for ChatGPT and GPT-4, we use sampling with temperature 0.5; for all open-source models, we use nucleus sampling (Holtzman et al., 2020).

D Shortcut Cases

Table 11 demonstrates the experiments showing that ALCE is robust to shortcut cases. Using the top-1 passage, or the first two sentences of the top-1 passage, induces almost perfect citation quality, but fluency and correctness are dramatically lower.
E Citation Recall Discussion
Our citation precision evaluation cannot detect a citation that partially supports the statement and hence will falsely penalize it. Consider a statement s_3 and its citations [2][4][5]: if [2] only partially supports s_3 while [4][5] jointly fully support it, [2] will be counted as "irrelevant" while it should not be penalized. Liu et al. (2023) conduct human evaluation on citation precision in a different way: for each citation, they ask annotators to judge whether the citation (1) fully supports, (2) partially supports, or (3) does not support s_i. One citation c_{i,j} is precise if (a) c_{i,j} fully supports s_i or (b) C_i fully supports s_i, c_{i,j} partially supports s_i, and no c ∈ C_i alone fully supports s_i. This evaluation solves the corner case we mentioned in the main paper (one citation partially supports the claim but is identified as "irrelevant"). However, it is challenging to conduct such an evaluation automatically, as there is no existing model that can judge whether a citation "partially" supports a claim. We also explored prompting ChatGPT to conduct such a task, which yields poor results. We defer it to future work to collect supervised data to train a better ϕ that can detect "partial support".
F Human Evaluation
We employ Surge AI (https://www.surgehq.ai/) for our human evaluation. The average pay to workers is 20 USD per hour. We randomly sample 100 examples from ASQA and ELI5 and annotate outputs of selected models: ChatGPT VANILLA, ChatGPT RERANK, and Vicuna-13B VANILLA.
F.1 Utility
To check if the model output is useful to downstream users, we measure the utility of the response S. We first show the query q and model response S to the worker and ask them to rate their agreement with the statement "The response is a helpful and informative answer to the query" on a Likert scale of 1-5, corresponding to Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree.
F.2 Citation Recall
The annotators are shown the question q, the statement s_i, and all of its citations C_i, and they rate whether the joint set of citations fully supports the statement (recall = 1) or does not support all the claims (recall = 0). We calculate the overall recall score for the generation by taking an average of all the statements' recall scores.
F.3 Citation Precision
We show the question q and a pair of a statement s_i and one of its citations c_{i,j} ∈ C_i to the annotator. We ask the annotator if the citation fully supports, partially supports, or does not support the factual claims in s_i. Citation c_{i,j} has a citation precision of 1 if s_i has a recall of 1 and c_{i,j} fully or partially supports s_i. Finally, we take an average of the precision scores of all citations in the response S to obtain the citation precision score.
G.1 Retrieval Analysis
Oracle. Since the original datasets do not contain gold passages at the same granularity level as our setting (100-word passages), we approximate gold passages by running the following algorithm on the top-100 retrieved passages. We first calculate the recall score of each passage. Then, we sort the passages by their recall score and take the top 5 passages as our initial oracle set. Finally, we iterate through all passages that were not initially in the oracle set and try to replace the passages in the oracle set in a greedy fashion: we calculate the change in the recall score of the oracle set for every possible replacement and proceed with the replacement that results in the largest recall improvement. The set of 5 oracle passages was able to match the recall scores of the top-100 retrieved passages.
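In sketch form, the greedy replacement procedure described above might look like this (`recall_of_set`, a scorer returning the recall of a passage set against the gold answers, is a hypothetical stand-in):

```python
# Sketch of the greedy oracle-passage selection described above.
def select_oracle(passages, recall_of_set, k=5):
    # Seed the oracle set with the k individually highest-recall passages.
    ranked = sorted(passages, key=lambda p: recall_of_set([p]), reverse=True)
    oracle, rest = ranked[:k], ranked[k:]
    for candidate in rest:
        base = recall_of_set(oracle)
        best_gain, best_idx = 0.0, None
        for i in range(k):
            trial = oracle[:i] + [candidate] + oracle[i + 1:]
            gain = recall_of_set(trial) - base
            if gain > best_gain:
                best_gain, best_idx = gain, i
        if best_idx is not None:
            oracle[best_idx] = candidate
    return oracle
```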
Detailed retrieval results. We show detailed retrieval results in Tables 12, 13, and 14.
G.2 Effect of Instructions
Table 15 shows the results of using the full instruction (Table 23) and a short version of the instruction (Table 24). We see that the full version induces stronger correctness and citation recall, while the two instructions lead to similar citation precision.
G.3 Effect of Demonstrations
Table 16 shows results on the effect of different numbers of demonstrations. We see that the number of demonstrations does not affect ChatGPT's correctness, but using at least one demonstration ensures high citation recall. For the original LLaMA model, Table 16 shows the trend that more demonstrations lead to better performance.
G.4 Fine-tuned Models
To better understand the differences between fine-tuned models and prompted large language models, we train the state-of-the-art question answering model Fusion-in-Decoder (FiD; Izacard and Grave, 2021) and evaluate it in conjunction with POSTCITE. Due to the lack of training data with citation annotations, we first train a T5-base FiD model for 5 epochs on the ASQA training set with a batch size of 64 and a learning rate of 1e-4. During evaluation, we use POSTCITE to add citations to the output. We also use k = 5 passages during both training and evaluation of the FiD model.
Then, we evaluate this model on both ASQA (in-domain) and ELI5 (out-of-domain); the results can be found in Tables 17 and 18. Note that this is not a direct comparison, as ALCE assumes only evaluation data is available and uses only few-shot data for prompting. As the results show, the FiD baseline still significantly lags behind prompting ChatGPT in both correctness and citation quality (even though it is trained on 4000+ examples). When tested on another dataset (ELI5), FiD performs even worse, showing that it is challenging to solve the problem by fine-tuning a small pre-trained model.
G.5 More Human Evaluation
We evaluate the accuracy of our automatic metrics by treating the human annotations as gold labels.
For citation recall, ALCE achieves an accuracy of 85.1%; for citation precision, ALCE has an accuracy of 77.6%. Regarding detecting insufficient citations, ALCE has a recall of 82.3% and a precision of 84.2%; regarding detecting "irrelevant" citations, ALCE has a recall of 75.6% and a precision of 66.1%. ALCE is effective in detecting "irrelevant" citations, but due to the limitation of the NLI model (it cannot detect "partial support"), it has a relatively high false positive rate.
G.6 Main Results
We show full results of our experiments, along with standard deviations, in Tables 19, 20, and 21. We repeat all experiments with three different random seeds. However, for ChatGPT RERANK, we use only one seeded run, since each run repeats the generation step four times and more experiments would incur significant costs.
Original question: How do we hear differences in sound besides volume and pitch? Passage: Pitch refers to the frequency of the soundwave, and volume refers to the amplitude of the soundwave. Besides volume and pitch, we can also tell the difference between sounds based on the tone of the sound. For example, we can differentiate the sound of different instruments based on the tone of the sounds. Claim 1: Volume of sound is the amplitude of the soundwave. Claim 2: Pitch is the frequency of the soundwave. Claim 3: We can use the tone of the sounds to differentiate the sound of different instruments.
Original question: How are we able to discern whether a sound is coming from in front of us or behind us? Passage: There are multiple explanations for why we can localize sounds. One explanation is that sounds travelling to the corresponding side of one's ear will be slightly louder. Another explanation is that there is a slight difference in the hitting time at one's left and right ear based on the sound's direction. However, these explanations mean that when a sound is exactly in front of someone or exactly behind someone, he or she cannot tell the difference. Claim 1: We can localize sounds by recognizing that the sound travelling to the corresponding side of one's ear will be slightly louder. Claim 2: We can also localize sounds by recognizing the difference in hitting time at one's left and right ear based on the sound's direction. Claim 3: We cannot tell the difference between a sound that is exactly in front of us or exactly behind us.
Table 22: Prompt used to generate the sub-claims for ELI5 questions. Blue text is model generation. Brown text is the ELI5 example that we want to generate sub-claims for. We construct the prompt by manually writing the sub-claims for three questions from the training set.
Instruction: Write an accurate, engaging, and concise answer for the given question using only the provided search results (some of which might be irrelevant) and cite them properly. Use an unbiased and journalistic tone. Always cite for any factual claim. When citing several search results, use [1][2][3]. Cite at least one document and at most three documents in each sentence.
If multiple documents support the sentence, only cite a minimum sufficient subset of the documents.
Table 23: Instruction for VANILLA.
Instruction: Write a high-quality answer for the given question using only the provided search results and cite them properly using [1][2][3].

Figure 1: The task setup of ALCE. Given a question, the system generates text while providing citing passages from a large retrieval corpus. Each statement may contain multiple citations (e.g., [1][2]).

Generating text with citations is closely related to attribution. Rashkin et al. (2023) define the "attributable to identified sources" (AIS) score to measure how faithful a generated text is to its sources. Bohnet et al. (2022) apply AIS scores on a single-document short-answer QA dataset. Honovich et al. (2022) and Yue et al. (2023) study automatic evaluations for the AIS score. A concurrent work (Liu et al., 2023) conducts a human evaluation on commercial generative search engines to examine their citation quality. Scientific citation text generation (Funkquist et al., 2022) is a task related to ALCE in which the model is provided with the papers-to-cite and the context and is required to recover the citing text. It differs from ALCE in that all citations are provided and the model only needs to perform the summarization. Retrieval-augmented LMs. Many studies have explored augmenting LMs with externally retrieved information. Guu et al. (2020), Borgeaud et al. (2022) and Izacard et al. (2022) pre-train language models with retrieved passages, while Khandelwal et al. (2020) and Zhong et al. (…)
When did the US break away from England? A: The US declared independence on July 2, 1776 [1][2] ... The Treaty of Paris was later signed on September 3, 1783 [3].
Table 2: An example of our VANILLA method. Different colors represent prompt, model generation, and <actions>. We also provide two in-context demonstrations before the test example.
Table 3: An example of INLINESEARCH.
Table 4: Experiments on ASQA. For CLOSEDBOOK, we use POSTCITE to get citations. k-psg: putting the top-k passages from the retrieval results into the context. Chat-13B and Chat-70B refer to LLaMA-2-Chat.
Table 5: Experiments on QAMPARI. "Rec.-5": we set the recall to 100% if the prediction includes at least 5 correct answers.
…current LLMs are not proficient in interactive usage. Retrieving text on the fly does not improve performance. All datasets show that VANILLA outperforms INLINESEARCH on citation quality (and on correctness for ASQA and ELI5). By manually examining the examples, we find that it is challenging to ask detailed questions without seeing any passages. To improve INLINESEARCH, one may need to provide more context about the questions in advance or encourage the model to call retrievers with more detailed and diverse queries.
Table 6: Experiments on ELI5. We use claim recall for the correctness evaluation. Chat-13B and Chat-70B refer to LLaMA-2-Chat.
Table 8: Human citation quality evaluation vs. ALCE citation quality evaluation on ASQA.
Table 9: Human citation quality evaluation vs. ALCE citation quality evaluation on ELI5.
Table 15: Effect of different instructions on ASQA.
Table 17: Comparison of Fusion-in-Decoder with ChatGPT on ASQA. Both models use top-5 GTR passages.
Table 18: Comparison of Fusion-in-Decoder with ChatGPT on ELI5. Both models use top-5 GTR passages.
Table 21: ELI5 full results.

…the original question and passage, and generate 3 additional claims that are supported by the passage and answer the question. Original question: What's the difference between Shia vs. Sunni Islam? Passage: The main difference between Shia and Sunni Muslim is related to ideological heritage and issues of leadership. This difference is first formed after the death of the Prophet Muhammad in 632 A.D. The ideological practice of the Sunni branch strictly follows Prophet Muhammad and his teachings, while the Shia branch follows Prophet Muhammad's son-in-law Ali. Nowadays, Sunni and Shia are the major branches of Islam. Claim 1: The major branches of Islam are Sunni and Shia. Claim 2: Prophet Muhammad died in 632 A.D. Claim 3: The ideological practice of the Sunni branch strictly follows Prophet Muhammad and his teachings.
Table 24: Short instruction for VANILLA: Write an accurate, engaging, and concise answer for ...

Document [1] (Title: How to Treat and Prevent Food Poisoning - MsPrepper): just a typical gastro upset. Salmonella is most commonly caused by eating undercooked or raw foods like eggs or meat. You know how your mom always warned you not to eat raw cookie dough? This is why. Most people do eat cookie dough and they are fine, but salmonella is a risk. If you do contract salmonella, you could start to feel bad within in a couple of hours after eating contaminated food, and sometimes it could take a day or two. Common symptoms are nausea and vomiting, loose stools (sometimes bloody), flu like symptoms, and stomach cramps. To treat

Document [2] (Title: FDA Issues Warning About Eating Raw Cookie Dough, But Not For Salmonella Risks): FDA Issues Warning About Eating Raw Cookie Dough, But Not For Salmonella Risks Used to licking the spoon or placating yourself with full-on chunks of raw cookie dough? The Food and Drug Administration issued a warning on Tuesday that strongly advises against continuing the habit. The agency asserted that consuming raw batter of any kind, whether for bread, cookies or pizza, could make a person sick. While you may have been warned in the past against eating raw dough due to the risk of contracting salmonella from raw eggs, the FDA is citing raw flour as the culprit for a

Document [3] (Title: It's Probably OK to Eat Raw Cookie Dough - As Long As You're Smart About It - The Crux - Very Top Secret Information): First, when most people think about health risks and cookie dough, they think about raw egg. Eggs can be contaminated with salmonella bacteria, and food safety recommendations encourage people to cook eggs until the white and yolk are firm in order to kill any bacteria. However, anyone making cookies can do things to reduce this risk by using pasteurized egg products. When my kids and I make cookie dough, we never use regular eggs. Instead, we use shell eggs that have been pasteurized to kill any harmful bacteria without actually cooking the egg itself. (A great public health innovation, if

Document [4] (Title: How Dangerous Is It to Eat Raw Cookie Dough? | Men's Health): Can Eating Raw Cookie Dough Really Make You Sick? Scientists reveal the truth about this supposedly dangerous delicacy By Katherine Dempsey There are few things more tempting in life than eyeing a bowl of cookie dough and deciding whether or not to stick your finger in for a scoop. It's a bit like playing Russian roulette. You could get lucky and enjoy the delicious dough without conseqence, but there's always the risk of getting serously sick with a food-borne illness. That's because multiple ingredients within the dough could be contaminated by pathogens such as Salmonella and E. coli, says Soohyoun

Document [5] (Title: How Dangerous Is It to Eat Raw Cookie Dough? | Men's Health): is usually pasteurized, so it's not likely that the egg would make you sick. However, other ingredients in the dough could potentially harbor pathogens. Experts say that a prime suspect in a 2009 E. coli outbreak linked to prepackaged cookie dough was actually the flour. Nuts and chocolate have also been linked to Salmonella outbreaks. Bottom line: You're better off skipping raw cookie dough. At least you'll have killer cookies as a consolation. The article How Bad Is It To Eat Raw Cookie Dough? originally ran on Prevention.com The 9 Foods Most Likely to Make You Sick Mmm, Just Don't ...
Question: Why is it bad to eat cookie dough for risk of salmonella but things like Cookie Dough Bites are ok? Answer: Raw cookie dough is not recommended to be eaten due to the risk of salmonella [1][2]. Eating raw flour is also a risk for food poisoning [2]. The egg in cookie dough is usually pasteurized, so it's not likely that the egg would make you sick, but other ingredients in the dough could potentially harbor pathogens [4][5]. However, prepackaged cookie dough like Cookie Dough Bites is safe to eat because the dough is made with pasteurized egg products and heat-treated flour [2][3].

1. Cookie Dough Bites are safe to eat since they do not contain eggs. 2. Salmonella is a bacteria found in eggs. 3. Eating raw cookie dough carries the risk of salmonella infection.

Instruction | 2023-05-25T01:16:19.909Z | 2023-05-24T00:00:00.000 | {
"year": 2023,
"sha1": "cdad12ee9f932e0c73a9e18604700fea7bc033ad",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.emnlp-main.398.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "406b370ae9757be8e23b774cf0b37f987e7987e9",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
4410414 | pes2o/s2orc | v3-fos-license | Identification of a New Marine Bacterial Strain SD8 and Optimization of Its Culture Conditions for Producing Alkaline Protease
While much attention has been given to marine microorganisms for the production of enzymes, which in general are relatively more stable and active compared to those from plants and animals, studies on alkaline protease production from marine microorganisms have been very limited. In the present study, the alkaline protease producing marine bacterial strain SD8, isolated from sea muds in the Geziwo Qinhuangdao sea area of China, was characterized and its optimal culture conditions were investigated. Strain SD8 was initially classified as belonging to the genus Pseudomonas by morphological, physiological and biochemical characterization, and then through its 16S rDNA sequence it was identified as likely Pseudomonas hibiscicola. In addition, the culture media, carbon sources and culture conditions of strain SD8 were optimized for maximum production of alkaline protease. Optimum enzyme production (236 U/mL, with bacterial mass at 0.75 mg dry weight/mL of fermentation broth) was obtained when the isolate, at a 3% inoculum size, was grown in LB medium at 20 mL medium per 100 mL Erlenmeyer flask for 48 h at 30°C with an initial pH of 7.5. This is the first report of Pseudomonas hibiscicola secreting alkaline protease, and the data on its optimal culture conditions for alkaline protease production lay a foundation for future exploration of the potential use of strain SD8 for alkaline protease production.
Introduction
Over the years, researchers around the world have been interested in producing biological products, particularly enzymes, owing to their wide range of physiological, analytical and industrial applications. Among all biological resources for enzyme production, microorganisms are especially important because of their extensive biochemical diversity, amenability to mass culture and ease of genetic manipulation. Microorganisms are now known to play a key role in the production of both extracellular and intracellular enzymes at commercial scale, and more than 3000 different microbial extracellular enzymes have been reported [1].
Among all the enzymes, proteases occupy an important place as they were the first to be produced in bulk; they now constitute about two-thirds of the total enzymes used today [2], and proteases are the main enzymes produced from microbial sources. They are used in a wide range of applications, including the food, meat and leather processing industries as well as pharmaceutical industries. In particular, microbial alkaline proteases have dominated the worldwide enzyme market, accounting for a 67% share of the detergent industry [3,4].
A wide range of microorganisms was found to produce alkaline protease, including bacteria, molds, yeasts and mammalian tissues [5,6]. However, bacteria are preferred as they grow rapidly, need less space, can be easily maintained and are accessible for genetic manipulation. Species of Bacillus, Pseudomonas, Halomonas, Arthrobacter and Serratia are the important protease-producing bacteria. Among all bacterial species, bacilli play an important role in the production of alkaline protease owing to their chemoorganotrophic characteristics and their ability to secrete high levels of alkaline protease. In particular, more and more attention has been given to marine microorganisms from a wide range of habitats, as enzymes derived from them are relatively more stable and active than those derived from plants or animals [7,8] and would have more advantages than traditional enzymes [9]. While alkaline (serine) proteases are active over broad ranges of temperature (35-80°C) and pH (7-12) [10], alkaline proteases produced by marine bacteria have significant activity and stability at high pH and temperature [11,12].
Production of extracellular proteases by microorganisms is known to be largely influenced by the presence of easily metabolizable sugars (such as glucose) and medium components [13]. In addition, several other factors such as aeration, inoculum density, pH, temperature and incubation time can also affect the amount of protease produced [14][15][16]. However, studies on alkaline protease production from marine microorganisms have been very limited [17]. In our recent study, one type of microorganism producing alkaline protease was isolated from sea muds of the Qinhuangdao sea area in China [18]. In the current study, we carried out further morphological, physiological and biochemical characterization, as well as 16S rDNA sequence analysis, of this isolate SD8. In addition, we optimized its culture parameters for enhanced production of a stable alkaline protease, which was previously found to be stable in organic solvents and sodium dodecyl sulfate (SDS) [18].
Materials and Methods
This in vitro study did not involve humans, human data or animals, and thus there were no ethics or consent requirements for this study. As small samples of sea muds did not damage marine environment and wildlife and did not involve endangered or protected species, specific permission was not required for this work.
Reagents and fermentation media
Casein used for the protease assay was bought from Sigma (St. Louis, MO, USA). The other chemicals used in the study were of analytical grade and commercially available in China. All experiments were carried out independently in triplicate and repeated twice.
Morphological, physiological, and biochemical characterization of the SD8 isolate
The bacterial strain SD8 used in the present study, which produces an alkaline protease that is stable in SDS and organic solvents, was isolated from sea muds of the sea area of Qinhuangdao, China [18]. It had been compared and tentatively identified as Pseudomonas hibiscicola [19] according to Bergey's Manual of Determinative Bacteriology [20]. Morphological examination was carried out either on nutrient agar or in nutrient broth plus aged sea water, followed by Gram staining. Physiological and biochemical tests were carried out as described previously [21].
Further characterization was done on the basis of 16S rDNA sequencing as follows. The total genomic DNA of strain SD8 was separated and purified using the method described by Redburn and Pate [22]. The 16S rDNA of the isolate was amplified using the universal primers P1 (5'-AGAGTTTGATCATCCTGGCTCAG-3') and P2 (5'-ACGGCTACCTTGTTACGACTT-3') [23]. The amplification was done by initial denaturation at 94°C for 3 min, followed by 35 cycles of 94°C for 30 s, 51°C for 30 s and 72°C for 3 min, with a final extension at 72°C for 10 min. The PCR products were sequenced by Beijing Sun Biotech Co. Ltd (Beijing, China).
Sequence alignments for strain SD8 were achieved with NCBI's BLAST program. All 16S rDNA sequences were aligned using the multiple sequence alignment program CLUSTAL-W (Dublin, Ireland). Phylogenetic and molecular evolutionary analyses were processed with the molecular evolutionary genetics analysis software MEGA 5.05 (Tempe, Arizona, USA).
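For readers who want to reproduce the BLAST step programmatically, a minimal Biopython sketch is shown below; the FASTA file name is hypothetical, and the original analysis used the NCBI web tools together with CLUSTAL-W and MEGA 5.05 rather than this script.

```python
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# The SD8 16S rDNA sequence is deposited in GenBank under KM668099.
record = SeqIO.read("sd8_16s_rdna.fasta", "fasta")        # hypothetical file
handle = NCBIWWW.qblast("blastn", "nt", str(record.seq))  # remote BLAST vs nt
top_hit = NCBIXML.read(handle).alignments[0]
hsp = top_hit.hsps[0]
print(top_hit.title, hsp.identities / hsp.align_length)   # identity of best hit
```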
Experimental culture conditions on protease production
The current study investigated optimal culture conditions for alkaline protease production from the SD8 isolate. To measure the effect of carbon sources on enzyme production, different carbon sources (sucrose, soluble starch, maltose, lactose, glycerin and glucose) were examined in the enzyme production media [24]. To examine the time kinetics of enzyme secretion, the strain SD8 was inoculated in protease-producing LB medium and incubated at 37°C under shaking conditions (150 rpm); culture samples were withdrawn aseptically every 6 h and enzyme activity was monitored as described below. The growth curve of strain SD8 was also investigated in LB medium at 37°C under shaking conditions (150 rpm), with optical density (OD600) determined every 6 h. To investigate the influence of pH on protease production, the isolate was cultivated in LB medium at varying pH values (5.5-10.5, in increments of 1.0), and protease activity was quantified after incubation for 48 h at 37°C under shaking at 150 rpm. To observe the effect of temperature on protease production, cultures were grown at 30°C and 37°C. In addition, to investigate the effect of dissolved oxygen levels on alkaline protease production, 10, 15, 20, 25 and 30 mL of culture liquid were added to 100 mL Erlenmeyer flasks, and alkaline protease activities were quantified after incubation under the optimal conditions identified above. Finally, to investigate the effect of inoculum size on alkaline protease secretion, inoculum sizes of 1%, 3%, 5% and 7% were transferred to the culture media, and alkaline protease activities were measured after incubation under the other optimal conditions identified above. The bacterial mass under the optimum fermentation conditions was weighed on an electronic balance after drying for 5 h at 80°C.
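The design above is a one-factor-at-a-time optimization: each factor is varied while the others are held at the best values found so far. A schematic Python sketch (with `assay_activity` standing in for running the fermentation and enzyme assay) could be:

```python
# Factor levels taken from the Methods above.
factors = {
    "carbon_source": ["none", "sucrose", "soluble_starch", "maltose",
                      "lactose", "glycerin", "glucose"],
    "initial_pH": [5.5, 6.5, 7.5, 8.5, 9.5, 10.5],
    "temperature_C": [30, 37],
    "medium_mL_per_100mL_flask": [10, 15, 20, 25, 30],
    "inoculum_pct": [1, 3, 5, 7],
}

def optimize(assay_activity):
    """assay_activity(settings_dict) -> measured activity in U/mL (stand-in)."""
    best = {}
    for factor, levels in factors.items():
        # Vary one factor, holding the previously fixed factors at their best.
        best[factor] = max(levels,
                           key=lambda v: assay_activity({**best, factor: v}))
    return best
```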
Protease activity assay
A modification of the method of Kunitz [25] was used to assay protease activity, using casein as the substrate and L-tyrosine as the standard. The 0.6 mL reaction mixture consisted of 150 μL of 1% casein in 200 mM glycine-NaOH buffer (pH 10.0) and 150 μL of culture supernatant. The reaction was started by adding the culture supernatant at 40°C. After incubation for 15 min, the reaction was stopped by adding 300 μL of 0.4 M trichloroacetic acid. The reaction liquid was kept on ice for another 10 min, then centrifuged at 10,000 rpm for 10 min at 4°C. Then 0.3 mL of supernatant was mixed with 1.5 mL of 0.4 M Na2CO3 solution and 0.3 mL of Folin-phenol reagent and incubated at 40°C for 20 min. The concentration of L-tyrosine from digested casein was determined by monitoring the increase in absorbance at 680 nm. Culture medium without SD8 bacteria added was used as the blank control for the absorbance reading. The calibration curve was constructed using L-tyrosine as a standard. One unit of protease activity was defined as the amount of enzyme that releases 1 μg/mL of L-tyrosine equivalent per min [18].
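The unit calculation reduces to reading the tyrosine concentration off the standard curve and normalising by reaction time. A minimal sketch follows; the calibration points are hypothetical, and any protocol-specific dilution corrections (not spelled out above) are folded into one argument.

```python
import numpy as np

# Hypothetical L-tyrosine standard curve: A680 vs concentration (ug/mL).
std_conc = np.array([0.0, 10.0, 20.0, 40.0, 60.0])
std_a680 = np.array([0.00, 0.11, 0.22, 0.45, 0.66])
slope, intercept = np.polyfit(std_conc, std_a680, 1)  # linear calibration fit

def protease_units_per_ml(a680, minutes=15.0, dilution=1.0):
    """One unit releases 1 ug/mL of L-tyrosine equivalent per minute.
    `dilution` collects the assay volume corrections, which depend on the
    exact protocol and are assumptions here."""
    tyrosine_ug_per_ml = (a680 - intercept) / slope
    return tyrosine_ug_per_ml / minutes * dilution
```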
Statistics
All data were expressed as means ± SEM. One-way ANOVA, using SPSS 13.0 software, was used to conduct statistical comparisons of differences among the groups, and a value of P<0.05 was considered statistically significant.
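The same comparison can be reproduced outside SPSS, for example with SciPy's one-way ANOVA (the readings below are hypothetical):

```python
from scipy import stats

# Hypothetical triplicate activity readings (U/mL) for three pH groups.
ph_6_5 = [150, 154, 149]
ph_7_5 = [183, 186, 185]
ph_8_5 = [178, 180, 179]

f_stat, p_value = stats.f_oneway(ph_6_5, ph_7_5, ph_8_5)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 -> significant difference
```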
Identification of strain SD8 that produces alkaline protease
The strain SD8 was characterized as a Gram-negative, motile, rod-shaped bacterial strain. Its morphological and biochemical characteristics are listed in Table 1.

Table 1. Morphological and biochemical characteristics of strain SD8. Colony morphology: round, yellow, raised, with tidy margins and a smooth, mucoid surface.

Based on these characteristics, the strain SD8 was identified as probably belonging to the genus Pseudomonas.
To carry out 16S rDNA sequence analysis, the genome of strain SD8 was used for PCR amplification of 16S rDNA. Agarose gel electrophoresis of the PCR product is shown in Fig 2. The PCR product of strain SD8 corresponded to the 1500 bp band of the DNA marker, indicating that the amplification was successful. A 1440 bp DNA sequence was obtained by 16S rDNA sequencing (Beijing Sanpo Polygala Biological Technology LTD, Beijing, China). The gene sequence was uploaded to the GenBank database (accession number KM668099).
Following the sequence determination of the PCR product and 16S rDNA analysis, the strain SD8 was phylogenetically characterized and compared with its closest relatives using a BLAST (NCBI) search. In silico analysis of the 16S rDNA sequence of isolate SD8 showed 99% homology with Pseudomonas hibiscicola (Fig 3). The strain SD8 falls within the cluster comprising members of Pseudomonas hibiscicola with a reliability of 79%. On this evidence, the isolate may belong to Pseudomonas hibiscicola.
Effect of fermentation medium on alkaline protease production
Four media were selected to observe the effect of medium on protease production by strain SD8. Among the four media, the LB medium was found to produce the alkaline protease with the highest activity (up to 176 U/mL) (Fig 4). The enzyme activities yielded from the starch (P<0.05), beef extract-peptone (P<0.01) and glucose (P<0.01) media were significantly lower than that from the LB medium. Hence, the LB medium was selected for the subsequent experiments.
Previously, different carbon sources were found to have different influences on extracellular enzyme production [26]. In the current study, to investigate the influence of carbon sources on the production of alkaline protease, the effects of adding 5% carbon from different sources to the LB medium were examined (Fig 5). Compared to LB medium alone (blank control, without additional carbon added), addition of lactose did not affect alkaline protease activity, whereas addition of maltose or glucose (P<0.05), or of any of the three other carbon sources (sucrose, soluble starch and glycerin) (P<0.01), significantly decreased the enzyme activity. Alkaline protease activity was lowest when sucrose was added (reduced by about 50%). Therefore, LB medium without an added carbon source was selected for the subsequent experiments.
Effect of culture conditions on alkaline protease production
The current study investigated the effect of incubation time on alkaline protease production by strain SD8. Protease activity was found to increase rapidly after incubation for 24 h (Fig 6), and it was highest (180 U/mL) at an incubation time of 48 h, after which it declined slowly. Compared to 48 h, all other incubation times produced significantly lower yields of alkaline protease. Therefore, the optimal fermentation time for alkaline protease production by strain SD8 was 48 h. The growth curve of strain SD8 is also shown in Fig 6. The amount of strain SD8 (as assessed by OD600 measurement) increased quickly after 18 h and reached 1.77 at 42 h, almost the same as the maximum (1.83) at 48 h; after that, the amount of strain SD8 declined slowly. The current study also examined the influence of culture medium pH and temperature and found a gradual increase in protease production by strain SD8 with increasing pH, with the optimum at pH 7.5 (185 U/mL) (Fig 7). At pH 8.5, the protease activity was 179 U/mL, lower than that at pH 7.5 (P<0.05). In cultures at either lower pH (5.5 or 6.5) or higher pH (9.5 or 10.5), the protease activity was significantly lower than that at pH 7.5 (P<0.01). When the incubation temperatures were 37 and 30°C, the alkaline protease activities of the cultures were 185 and 216 U/mL, respectively. Thus, 30°C was chosen for the subsequent work.
Different dissolved oxygen levels in the incubation liquid of the bioreactor could be obtained by varying the medium quantity in the Erlenmeyer flask, which could influence alkaline protease production. As shown in Fig 8, the alkaline protease activity was highest (196 U/mL) with 20 mL of culture liquid under the optimal conditions established above. When the culture medium was 15 or 25 mL, the respective alkaline protease activity was significantly lower than that of 20 mL (P<0.05). When the medium volume was even lower or higher (10 or 30 mL), the respective alkaline protease activity was significantly and substantially lower than that of 20 mL (P<0.01).

Fig 7. Effects of initial pH on the production of alkaline protease from strain SD8. The single factor investigated was selected at optimal conditions. *P<0.05 and **P<0.01 compared to pH 7.5. All data are given as means ± SEM (n = 3). doi:10.1371/journal.pone.0146067.g007

Fig 8. Effects of medium quantity on the production of alkaline protease from strain SD8. The single factor investigated was selected at optimal conditions. *P<0.05 and **P<0.01 compared to 20 mL/100 mL Erlenmeyer flask. All data are given as means ± SEM (n = 3). doi:10.1371/journal.pone.0146067.g008

The organism density (as affected by the inoculum size) could also affect alkaline protease production in strain SD8. The enzyme activity was highest (236 U/mL) with a 3% inoculum size. When the inoculum size was lower (1%) or higher (5 or 7%), the alkaline protease activity was significantly lower than with the 3% inoculum size (P<0.01, Fig 9). Thus, a 3% inoculum size was chosen as the optimal condition.
The bacterial mass was also determined under optimum fermentation conditions and was found to be about 0.75 mg dry weight/mL of fermentation broth at optimum protease activity.
Discussion
More attention has been given to marine microorganisms for enzyme production as enzymes derived from them have been found, in general, to be relatively more stable and active compared to those from plants and animals. While alkaline proteases produced by marine bacteria are known to have significant activity and stability at high pH and temperatures, studies on alkaline protease production from marine microorganisms have been very limited. In this study, one marine bacterial strain, SD8, isolated from sea muds of the Qinhuangdao sea area in China and producing alkaline protease at a relatively low yield, was identified as likely Pseudomonas hibiscicola following morphological, physiological and biochemical characterization as well as 16S rDNA sequence analysis. In addition, the current study identified the LB medium (among four different media) as the optimum culture medium for alkaline protease production by strain SD8. Furthermore, the current study determined the optimal culture conditions for alkaline protease production (no additional carbon source, 48 h fermentation time, initial pH 7.5, 30°C, 20 mL culture liquid per 100 mL Erlenmeyer flask and 3% inoculum size). When cultured at the optimized parameters, alkaline protease production was enhanced to 236 U/mL.
The strain SD8 was identified as Pseudomonas hibiscicola based on data from its morphology, physiology and biochemistry assays as well as 16S rDNA sequence analyses. While there were some reports that the genus Pseudomonas could produce alkaline protease, including Pseudomonas aeruginosa [27], Pseudomonas fluorescens [28] and Pseudomonas putida [29], Pseudomonas hibiscicola had not until now been found to secrete alkaline protease. The finding that strain SD8 can produce alkaline protease and belongs to Pseudomonas hibiscicola has thus enriched our understanding of the characteristics of Pseudomonas hibiscicola.

Fig 9. Effects of inoculum size on the production of alkaline protease from strain SD8. The single factor investigated was selected at optimal conditions. **P<0.01 compared to 3% inoculum size. All data are given as means ± SEM (n = 3).
Since the types of medium and carbon sources can influence the production of alkaline protease [24], the current study carried out optimization of medium and carbon sources. It was found that the production of alkaline protease by strain SD8 differed substantially with different types of medium and carbon sources, with LB medium without additional carbon added being found to be optimal. Kumar et al. [5] reported that lactose was the best carbon source for protease production by Marinobacter sp. GA CAS9, and Pant et al. [6] reported that galactose was the best carbon source for protease production by Bacillus subtilis. These and our current studies show that carbon sources affect protease production and that the best carbon source differs between bacterial strains.
In addition, since the culture conditions can also influence the production of alkaline protease by microorganisms [16], the current study also investigated the optimal culture conditions of strain SD8 for the production of alkaline protease. The results indicated that culture time, initial pH, temperature, medium quantity and inoculum size can all influence the production of alkaline protease. The effect of initial pH on the production of alkaline protease was similar to that reported previously [30]. Our data on the optimal culture conditions for alkaline protease production by strain SD8 lay a foundation for further exploring the potential use of strain SD8 for alkaline protease production.
The activity of alkaline protease produced by strain SD8 was found to be lower than that produced by Marinobacter sp. GA CAS9 or by Bacillus subtilis [5,6,31]. While the average activity of Marinobacter sp. was in the range of 400-1000 U/mL [5] and that of Bacillus subtilis was between 576 and 842 U/mL [31] at 24-48 hours, that of strain SD8 (or P. hibiscicola) was 236 U/mL at 48 hours. However, the alkaline protease produced by SD8 has recently been found to be stable in alkaline and SDS solutions and in organic solvents [18]. These characteristics of the alkaline protease produced by SD8 will be important for its potential applications in industries that require a stable protease, as such extreme conditions often exist in commercial processes.
Conclusions
In this study, marine bacterial strain SD8 was initially classified as belonging to the genus Pseudomonas by morphological, physiological and biochemical characterization, and then identified as likely Pseudomonas hibiscicola through 16S rDNA sequencing. In addition, in attempts to optimize its culture conditions, this study investigated media, carbon sources and culture conditions for maximum production of alkaline protease. Optimum enzyme production (236 U/mL, with bacterial mass at 0.75 mg dry weight/mL of fermentation broth) was obtained when the isolate, at a 3% inoculum size, was grown in LB medium at 20 mL medium per 100 mL Erlenmeyer flask for 48 h at 30°C with an initial pH of 7.5. This is the first report of Pseudomonas hibiscicola secreting alkaline protease. The finding that strain SD8 produces alkaline protease and belongs to Pseudomonas hibiscicola enriches our understanding of the characteristics of Pseudomonas hibiscicola, and our data on its optimal culture conditions for alkaline protease production lay a foundation for future exploration of the potential use of strain SD8 for alkaline protease production. Further studies will be required to investigate the production, activity and stability of the alkaline protease produced by strain SD8 and to conduct comparative studies with proteases produced by other species or bacterial strains. | 2016-05-12T22:15:10.714Z | 2015-12-30T00:00:00.000 | {
"year": 2015,
"sha1": "f54b68edb0cd9562a35f070b8abb5249a458c65b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0146067&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f54b68edb0cd9562a35f070b8abb5249a458c65b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
255838856 | pes2o/s2orc | v3-fos-license | Bevacizumab and Combination Chemotherapy in rectal cancer Until Surgery (BACCHUS): a phase II, multicentre, open-label, randomised study of neoadjuvant chemotherapy alone in patients with high-risk cancer of the rectum
In locally advanced rectal cancer (LARC) preoperative chemoradiation (CRT) is the standard of care, but the risk of local recurrence is low with good quality total mesorectal excision (TME), although many patients still develop metastatic disease. Current challenges in treating rectal cancer include the development of effective organ-preserving approaches and the prevention of subsequent metastatic disease. Neoadjuvant systemic chemotherapy (NACT) alone may reduce local and systemic recurrences, and may be more effective than postoperative treatments, which often have poor compliance. Investigation of intensified NACT is warranted to improve outcomes for patients with LARC. The objective is to evaluate the feasibility and efficacy of a four-drug regimen containing bevacizumab prior to surgical resection. This is a multi-centre, randomized phase II trial. Eligible patients must have histologically confirmed LARC with the distal part of the tumour 4-12 cm from the anal verge, no metastases, and poor prognostic features on pelvic MRI. Sixty patients will be randomly assigned in a 1:1 ratio to receive folinic acid + fluorouracil + oxaliplatin (FOLFOX) + bevacizumab (BVZ) or FOLFOX + irinotecan (FOLFOXIRI) + BVZ, given in 2-weekly cycles for up to 6 cycles prior to TME. Patients stop treatment if they fail to respond after 3 cycles (response being defined as a ≥ 30 % decrease in standardised uptake value (SUV) compared to the baseline PET/CT). The primary endpoint is pathological complete response rate. Secondary endpoints include objective response rate, MRI tumour regression grade, involved circumferential resection margin rate, T and N stage downstaging, progression-free survival, disease-free survival, overall survival, local control, 1-year colostomy rate, acute toxicity and compliance with chemotherapy. In LARC, a neoadjuvant chemotherapy regimen, if feasible, effective and tolerable, would be suitable for testing as the novel arm against the current standards of short course preoperative radiotherapy (SCPRT) and/or fluorouracil (5FU)-based CRT in a future randomised phase III trial. Clinical trial identifier BACCHUS: NCT01650428
Recent improvements in the quality of surgery, preoperative magnetic resonance imaging (MRI) and pathological reporting now call into question the approach of treating all patients clinically staged as T3 with radiotherapy or chemoradiation to prevent local recurrence. Rather, factors that portend distant recurrence should be considered, including tumour location, the sub-classification of T3, nodal status and the presence of extramural invasion. For carefully selected patients, low rates of local recurrence can be achieved if good quality TME is performed, even when patients receive no radiotherapy [12][13][14]. Metastatic disease, in contrast, is now the predominant cause of recurrence and death. It appears a commonly held belief that any systemic chemotherapy treatment is likely to be more effective if administered before and not after radical surgery.
For colon cancer, systemic treatment is given postoperatively based on histopathology of the surgical specimen. The concept of neoadjuvant chemotherapy (NACT) is being examined in primary colon cancer in the FOXTROT trial (ISRCTN 87163246), with promising early results [15].
For rectal cancer, staging MRI can identify patients at risk of local and/or systemic relapse preoperatively. In particular, extramural vascular invasion (EMVI) is easily identified on preoperative MRI and predicts systemic failure, with good concordance between MRI-diagnosed EMVI and eventual pathological confirmation [16].
National Comprehensive Cancer Network (NCCN) CRC Guidelines recommend a 6-month postoperative course of adjuvant chemotherapy for patients with stage II/III rectal cancer following chemoradiation [17], although this is not evidence-based [18], and recent data suggest there is no benefit from adjuvant 5FU apart from reducing local recurrence [19]. Potential explanations include the difficulty of delivering systemic chemotherapy following CRT and surgery [4][5][6][20]. Consensus recommendations suggest that decisions regarding adjuvant chemotherapy in LARC should be dictated by the initial preoperative clinical stage [21,22]. Neoadjuvant chemotherapy (NACT) has therefore been recommended as a priority for future research, to decrease the high metastasis rate [23].
Previous studies suggest tolerability and compliance with chemotherapy in the neoadjuvant setting should be high [24][25][26]. In the Grupo Cáncer de Recto 3 study [27] NACT was delivered at full systemic doses to 94 % of patients. In the GEMCAD 0801 study, a 15 % pathological complete response (pCR) was achieved with capecitabine + oxaliplatin (XELOX) plus BVZ [28] without any radiotherapy.
The BACCHUS study examines whether intensive NACT can achieve a pCR rate in primary rectal cancer sufficient to warrant further investigation. Chemotherapy triplet schedules demonstrate high response rates [29]. The OLIVIA phase II study randomised 80 patients with unresectable colorectal cancer liver-only metastases [30], comparing FOLFOX plus BVZ with or without irinotecan, and reported response rates of 61.5 and 80.5 % respectively with acceptable toxicity. These are the experimental arms in BACCHUS, which will allow evaluation of the potential benefit of BVZ in combination with modern, effective doublet and triplet chemotherapy regimens while omitting radiotherapy in LARC.
Study design
The BACCHUS trial is an investigator initiated, multicentre, open-label, prospective, randomized phase II study. All participants have to provide written informed consent, signed and personally dated, before inclusion in the trial. The trial EudraCT number is 2010-022754-17, and is registered on ClinicalTrials.gov (BACCHUS: NCT01650428).
Trial organisation
The sponsor is University College London. Central coordination is financed by Cancer Research UK (CR UK) and carried out by the CR UK & University College London Cancer Trials Centre (UCL CTC). An independent data monitoring committee (IDMC) will monitor the conduct and safety of the trial. Participating sites are required to report all serious adverse events (SAE) as defined by the protocol to UCL CTC in line with applicable regulations.
Ethics and informed consent
The final protocol was approved by the Riverside Research Ethics Committee (ref: 12/LO/1158). Appropriate approval from the respective local ethics committee is required to join this trial. This study has been approved by the ethics committees of the following hospitals or universities: Barnet and Chase Farm Hospital, Blackpool Teaching Hospitals, East and North Herts Hospitals, NHS Greater Glasgow and Clyde Hospitals, Heatherwood and Wexham Park Hospitals, Hillingdon Hospitals, Imperial College Healthcare NHS Trust, North Middlesex University Hospital, The Royal Marsden Hospital, University College Hospital, London. This study is conducted in accordance with the most recent version of the Declaration of Helsinki and according to GCP. Written informed consent, signed and personally dated, is obtained from each patient before inclusion in the trial.
Population
Patients with histologically confirmed adenocarcinoma of the rectum require specific tumour and patient criteria for inclusion. A staging MRI is mandated. The lists of inclusion and exclusion criteria are presented in Tables 1 and 2.
Trial entry has been restricted to patients in whom MRI suggests the primary tumour or lymph nodes do not extend to ≤1 mm from, or breach, the circumferential resection margin (CRM), since even with preoperative chemoradiation up to 30 % of these patients would have a positive CRM (≤1 mm) after TME. Eligibility is also confined to patients with MRI-estimated penetration of the muscularis propria >1 mm and/or patients with cN2 predicted by MRI and extramural vascular invasion (EMVI), but T3 tumours must have a predicted ≥2 mm margin from the mesorectal fascia.
These criteria are likely to define a group of patients making up about 40 % of rectal cancers overall:

• MRI-evaluated locally advanced tumour with the following:
• T3 tumours extending (≥4 mm) beyond the muscularis propria, N0-N2
• Or tumours involving or threatening the peritoneal surface, or presence of macroscopic extramural venous invasion (V2 disease)
• AND, for tumours below the peritoneal reflection, the primary tumour or involved lymph node (on MRI) must be >1 mm from the mesorectal fascia

Such patients have a 50 % 5-year survival [32] and a local recurrence rate of 6-10 % with surgery alone. Trial entry has also been restricted to patients younger than 70 years with distal rectal tumours, 4-12 cm from the anal verge. Accurate clinical staging with MRI is more difficult in the low rectum; at lower than 4 cm there is a 5-15 % risk of involved lateral pelvic lymph nodes, which are not resected at TME. A lack of evidence to suggest a benefit from oxaliplatin-containing adjuvant chemotherapy for stage II colorectal cancer [34][35][36], and insufficient data to support a benefit from adjuvant chemotherapy in stage III colorectal cancer in patients over 70 years, informed the decision to exclude patients over 70 years from this trial [33].
Study objectives and endpoints
The primary objective of the BACCHUS study is to evaluate the efficacy of FOLFOXIRI + BVZ and FOLFOX + BVZ in terms of their ability to produce pCR. Secondary objectives include evaluation of the safety and tolerability of the two regimens and the feasibility of delivering them, as well as assessment of additional measures of efficacy such as progression-free and overall survival.
The primary endpoint is pathological complete response (pCR) at surgery; secondary endpoints include ORR, CRM negative (R0) resection rate, T and N stage downstaging, PFS, DFS, OS, local control, 1 year colostomy rate, adverse events, compliance with chemotherapy treatment, tumour regression grade (TRG), and tumour cell density (TCD).
Survival curves for DFS and OS will be plotted. The cumulative incidence of local recurrence will be computed, accounting for death as a competing risk. Differences in survival will be tested with the log-rank test. Hazard ratios and 95 % confidence intervals (CI) will be computed using Cox regression. A table will present the completion rate of the neoadjuvant treatment, pCR frequency, and patients with an R0 resection, with 90 and 95 % CI. Frequencies and percentages for toxicity will be presented according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. All proportions will be presented with 95 % CI.
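A sketch of this analysis plan in Python, using the `lifelines` package with a hypothetical per-patient dataset, might look like the following (the competing-risk cumulative incidence step is omitted for brevity):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical analysis dataset: time in months, 1 = event observed.
df = pd.DataFrame({
    "time":  [12, 30, 24, 36, 18, 40],
    "event": [1, 0, 1, 0, 1, 0],
    "arm":   [0, 0, 0, 1, 1, 1],  # 0 = FOLFOX+BVZ, 1 = FOLFOXIRI+BVZ
})

km = KaplanMeierFitter()
for arm, grp in df.groupby("arm"):
    km.fit(grp["time"], grp["event"], label=f"arm {arm}")  # DFS/OS curves

a, b = df[df.arm == 0], df[df.arm == 1]
lr = logrank_test(a["time"], b["time"], a["event"], b["event"])
cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(lr.p_value, cox.hazard_ratios_)  # log-rank P; Cox HRs (CIs in cox.summary)
```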
Randomisation and stratification
Patient randomisation will be performed centrally at the UCL CTC. Eligible patients are randomly assigned to one of the two treatment arms in a 1:1 ratio and stratified according to treating centre, gender and presence or absence of EMVI; an illustrative allocation scheme is sketched below. Treatment processes and schedules for the BACCHUS trial are summarised in Figs. 1 and 2.
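For illustration only, a stratified permuted-block scheme consistent with this description could be sketched as follows; the real allocation is performed centrally at UCL CTC, so names and block size here are assumptions.

```python
import random

def make_allocator(block_size=4, seed=2012):
    """1:1 permuted-block randomisation within each stratum.
    Stratum key = (centre, gender, EMVI status)."""
    rng = random.Random(seed)
    pending = {}  # stratum -> remaining assignments in the current block

    def assign(stratum):
        block = pending.setdefault(stratum, [])
        if not block:
            # Refill with a shuffled block containing equal numbers per arm.
            block.extend(["FOLFOX+BVZ", "FOLFOXIRI+BVZ"] * (block_size // 2))
            rng.shuffle(block)
        return block.pop()

    return assign

allocate = make_allocator()
print(allocate(("site01", "F", "EMVI+")))  # e.g. 'FOLFOXIRI+BVZ'
```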
Neoadjuvant Chemotherapy
In both arms, chemotherapy is delivered with bevacizumab. In total, 6 cycles of chemotherapy are prescribed preoperatively every 2 weeks (bevacizumab omitted during cycle 6). Adverse events are monitored from informed consent to 3 months after surgery and dose modification can be made according to specified protocol guidelines.
Assessments/follow-up

Response and resectability evaluation

Clinical response has not been shown to be a robust surrogate endpoint to predict outcome. However, patients will undergo response evaluation with MRI of the pelvis prior to cycle 4 and at the end of all treatment (prior to surgery) according to the Response Evaluation Criteria in Solid Tumours (RECIST 1.1), and MRI-based TRG assessment is additionally required [37]. An additional response evaluation according to standardised uptake value (SUV) changes on PET/CT is mandated prior to cycle 4. Patients who do not respond will come off all trial treatment (allowing the investigator to proceed to whatever treatment is felt most appropriate, i.e. surgery or SCPRT/CRT followed by surgery).
Tolerability of treatment is evaluated at each visit, including physical examination, vital signs, WHO performance status, clinical laboratory profile, and adverse events graded according to NCI-CTCAE v4.03.
Surgery and histopathology
Surgery should be performed 8-12 weeks after termination of chemotherapy, and a minimum of 8 weeks after the final dose of bevacizumab. Surgical dissection according to TME principles should not differ between the two trial groups and can be performed open or laparoscopically. Surgery may include anterior resection, abdominoperineal resection or a low Hartmann's procedure.
Pathological evaluation of resected specimens will be according to guidelines included in the study protocol. The 5th edition of TNM will be used. In addition, the circumferential resection margin (CRM) will be assessed, and a margin of 1 mm or less considered positive. TRG will be presented as data categorised into five groups (TRG 0, TRG 1, TRG 2, TRG 3 and TRG 4) using the Dworak method. Also, the quality of the resected specimen will be evaluated, with separate scoring for the mesorectum and the anal canal. Formalin-fixed and paraffin-embedded (FFPE) tumour tissue obtained at baseline will be evaluated for KRAS and BRAF status, and plasma/buffy coat collected at baseline and before the 2nd, 3rd and 4th cycles (and also if the patient relapses) will be assessed for angiogenic markers (FFPE and serum) in the BACCHUS trial. Serum obtained at baseline, during preoperative treatment, postoperatively and at follow-up will be evaluated for circulating tumour DNA.
Adjuvant chemotherapy and follow-up
Patients can be treated with postoperative chemotherapy according to the local protocol of each participating centre. Patients will be followed up every 6 months for up to 42 months after randomisation, to document progression, recurrence and survival. Postoperative investigations/surveillance are performed according to local practice.
Statistical considerations and sample size estimation
The primary endpoint for this trial is the pCR of the TME specimen. The proportion of patients in each arm who achieve a pCR will be presented, along with a 95 % CI. Within each group the achieved pCR rate will be compared to the historical rate achieved by radiotherapy alone (5 %). In the United Kingdom patients without a threat to the circumferential resection margin are likely to be treated with short course preoperative RT, and not chemoradiation. The study is powered on the assumption that a proportion of patients will have a pCR. It is well recognised that patients who have a complete clinical response (cCR) both on imaging and clinical examination will from time to time refuse surgery. For the purpose of this study, patients who have a sustained cCR at 12 months will be considered the same as a patient with a complete pathological response. Patients with a transient clinical response where subsequent relapse is observed within this 12 month period, will not.
Based on pCR with similar regimens prior to liver resection, and primary tumours responding better than metastases, we anticipate a pCR rate of 15-20 %. Compared to the 5 % pCR rate historically seen for radiotherapy alone, with a type I error α = 0.05 and a power (1 - β) of 0.8, 27 patients are required for the FOLFOX arm. The same number of patients is required for the FOLFOXIRI arm. Assuming 10 % of patients will be non-evaluable, 30 patients will be recruited to each arm (i.e. a total of 60 patients). NACT will be considered worth exploring further in a randomised phase III trial if at least 4/27 pCRs are observed. If more than 27 patients are assessed for pCR, the first 27 randomised patients per arm will be assessed. The study is not powered for a direct comparison between the two arms.
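The operating characteristics of this single-stage rule can be checked directly with the binomial distribution; under the stated assumptions (historical rate p0 = 5 %, target rate p1 = 20 %, n = 27, success threshold of 4 pCRs) the type I error and power come out close to the design values:

```python
from scipy.stats import binom

n, p0, p1, threshold = 27, 0.05, 0.20, 4       # per-arm design stated above
alpha = 1 - binom.cdf(threshold - 1, n, p0)    # P(>= 4 pCRs | true rate 5%)
power = 1 - binom.cdf(threshold - 1, n, p1)    # P(>= 4 pCRs | true rate 20%)
print(f"alpha ~ {alpha:.3f}, power ~ {power:.3f}")  # ~0.044 and ~0.82
```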
Quality assurance/safety
Monitoring will be conducted centrally at UCL CTC and on-site monitoring will be scheduled if there is any evidence of non-compliance at site. An independent data safety monitoring committee (IDMC) meeting will be held periodically to review interim analysis, or as necessary to address any issues.
Translational research
Analyses of both tumour tissue and plasma with tissue microarray, proteomics and genomics may generate increased knowledge of prognosis and prediction of response to chemotherapy in the BACCHUS trial. Hence, a schedule for collection of plasma and of fresh tissue for freezing, at different stages of treatment in each arm, is defined in the study protocol.
Tumour tissue and blood samples will be stored for future research. At surgical resection blocks of tumour and normal mucosa will be also be collected. In addition, H&E stained slides from the diagnostic biopsy and resection samples will be collected to undertake Tumour Cell Density and Tumour regression Grading.
Plasma and peripheral blood leucocyte (PBL) samples will be collected at baseline and at the following time points during treatment: baseline (prior to starting treatment cycle 1), prior to starting treatment cycle 2, prior to starting treatment cycle 3, prior to starting treatment cycle 4 (preferably at radiological response assessment), and if the patient relapses.
Conventional size-based radiological criteria using RECIST may not be the optimal method of assessing response to chemotherapy, especially with a regimen integrating bevacizumab [37,38]. Hence, imaging biomarkers will also be explored in terms of MRI-based TRG and MRI diffusion-weighted imaging [39]. Exploratory SPECT imaging using Tc99m-maraciclitide as the tracer in a subset of patients at baseline and post treatment will provide information regarding changes in angiogenesis with treatment.
Discussion
Why have we chosen to use bevacizumab? Solid tumours are characterised by changes in structural architecture, which form a barrier to the uptake and penetration of cytotoxic drugs [40] and engender hypoxia. Bevacizumab, a recombinant humanized monoclonal antibody against vascular endothelial growth factor (VEGF), increases response rates in metastatic colorectal cancer when combined with fluoropyrimidine-based regimens. When given neoadjuvantly, a VEGF inhibitor may act to prevent vessel formation and thus the establishment of distant micrometastases.
The BACCHUS study explores the use of bevacizumab in LARC with only potential loco-regional spread, which to some extent should limit evolutionary diversity in the tumour and hopefully enhance response. All three adjuvant trials testing the role of bevacizumab (QUASAR, AVANT and C-08) excluded rectal cancer because of the confounding issue of radiotherapy [42,43], yet a recent retrospective analysis of a cohort of 667 consecutive patients with metastatic colorectal cancer showed that patients treated with capecitabine, oxaliplatin and bevacizumab in whom the primary tumour originated in the rectum and/or sigmoid colon had better outcomes than patients with right-sided primary tumours [44]. Tumours in the distal colon and rectum also have higher expression of VEGF-A (a hypothetical target of bevacizumab) than those in the proximal colon [45]. If there is an interaction between the location of the primary tumour and the effectiveness of antiangiogenic agents, future studies should stratify for the precise location of the primary tumour.
There is a consistently reported problem with delivery of, and compliance with chemotherapy following preoperative SCPRT or CRT and surgery. The EORTC 22921 trial showed compliance to postoperative adjuvant chemotherapy was very poor at 42.9 %. At least 25 % of patients in whom chemotherapy might be considered may not be sufficiently fit for treatment or decline [5,6,19]. The Chronicle trial highlighted this difficulty [46].
Neoadjuvant chemotherapy for locally advanced rectal cancer
In locally advanced rectal cancer, the NSABP-R03 study employed a weekly schedule of 5FU and folinic acid for six weeks prior to definitive preoperative chemoradiation. A response rate of 44 % was achieved in the first 39 patients who completed all 6 cycles [7,47]. Only 2 patients (5 %) progressed on this regimen. In a phase II study using neoadjuvant capecitabine and oxaliplatin, the clinical response rate was 88 % and no patient progressed radiologically [25]. Hence, anxieties that patients will progress on neoadjuvant chemotherapy appear unfounded.
The culture is now changing slowly away from the routine or blanket use of radiotherapy. The GEMCAD 0801 study achieved a 15 % pCR with XELOX + BVZ [28] in a population very similar to that intended to be recruited into BACCHUS, and without any radiotherapy. The Tribe study [31] showed a high clinical response rate in both arms, viz. 53 % for FOLFOX + BVZ versus 65 % for FOLFOXIRI + BVZ, with Grade 3 diarrhoea manageable at 9 and 19 % respectively.
Induction bevacizumab and FOLFOXIRI has been shown to be a feasible regimen with acceptable toxicity (mainly neutropenia) in a multicentre study [49]. The ongoing Italian TRUST study aims to treat 43 patients with LARC using FOLFOXIRI + BVZ followed by capecitabine-based chemoradiation with bevacizumab. To date 23 patients have been randomised, with a pCR of 38 % and only 7 % surgical morbidity [52].
Our results should be better than the Tribe study, since previous adjuvant chemotherapy impacted negatively on response in the FOLFOXIRI + BVZ arm. In BACCHUS, because the chemotherapy is neoadjuvant, patients will be chemotherapy-naive. Since patients do not have metastatic disease, response rates for both arms should be even higher, probably in the region of 90 %, since in the EXPERT-C study XELOX and XELOX plus cetuximab provided clinical response rates of 64 and 54 % respectively overall, and 71 % versus 51 % for patients expressing wild-type KRAS [49].
Limitations
The design of the BACCHUS trial has been criticised for its upper age limit, which was set for safety reasons and because patients over 70 years with stage II rectal cancer do not appear to benefit from adjuvant chemotherapy, particularly with oxaliplatin [33,38]. Despite patients with rectal cancer across Europe having a median age at presentation of 71 years, an upper age limit of 70 years is mandated in BACCHUS because of these safety and futility concerns. The median age in most chemotherapy metastatic trials is 65 years, and the median age in most chemoradiation studies is 63 years [4][5][6][7][50].
BACCHUS focuses on the efficacy and feasibility of preoperative FOLFOXIRI + BVZ. The randomised design was chosen (albeit inevitably limited by the small number of patients) to compare efficacy in terms of pathological complete response and acute toxicity, in order to demonstrate the feasibility of avoiding radiation in this group of patients.
Neoadjuvant chemotherapy without chemoradiation
Neoadjuvant chemotherapy may achieve better access to malignant cells while the tumour has an intact blood supply, and offers better compliance with treatment [27], unlike an adjuvant approach, which has failed to show any overall survival benefit in rectal cancer. Given neoadjuvantly, systemic doses of chemotherapy can be delivered at an earlier stage of disease, rather than after the delay of up to 18 weeks associated with standard CRT plus surgery. Two studies from the Memorial Sloan-Kettering Cancer Center (MSKCC) support the feasibility of neoadjuvant chemotherapy alone in rectal cancer [51,52]. The second, a feasibility study in patients with clinical stage II-III rectal cancer (but not T4 tumours), used FOLFOX + BVZ with the R0 resection rate as the primary outcome, and reported a pCR in 8/29 patients (27 %) [52]. BACCHUS is a corroborative feasibility study, but assesses more intensive chemotherapy in one arm. Based on the MSKCC results, a large multicentre phase II/III study is currently accruing patients. In this CALGB PROSPECT/Alliance N1048 trial, patients are randomised to either 5FU-based chemoradiotherapy, surgery and adjuvant FOLFOX chemotherapy, or the novel selective arm of 6 cycles of neoadjuvant FOLFOX chemotherapy and surgery alone.
The primary endpoints of the Phase III components are time to local recurrence and disease-free survival.
Two small Japanese NACT studies have also demonstrated the feasibility of this NACT approach and have included bevacizumab [53,54]. There is a suggestion of increased surgical morbidity, but the rectal tumours were situated lower (on average 4.7 cm from the anal verge) than those we hope to include in the BACCHUS study, and surgery was performed earlier than specified in BACCHUS (3-8 versus 8-12 weeks). A higher dose of bevacizumab (7.5 mg/kg) was also administered in these studies, in contrast to BACCHUS, where the dose is 5 mg/kg.
Finally, the OLIVIA trial [30] used FOLFOXIRI and bevacizumab neoadjuvantly in patients with mCRC deemed resectable, who were offered surgery 5-7 weeks after their last bevacizumab dose and 3-5 weeks after their last chemotherapy cycle, a surgical timing similar to the 8-12 weeks mandated in BACCHUS.
The BACCHUS trial will therefore evaluate the efficacy of an intensive versus a standard first-line chemotherapy combination, both with bevacizumab, in patients with locally advanced/high-risk rectal cancer, to examine local control and long-term disease outcomes. Treatment duration is limited to a maximum of 3 months of FOLFOXIRI + BVZ versus FOLFOX + BVZ.
pCR was chosen as the primary endpoint to confirm non-inferiority of comparative efficacy against the standard chemoradiation option for these patients, as this will then allow more confident treatment decisions to exclude radiotherapy for such patients in the future. Histopathological response is considered a useful endpoint after chemotherapy for metastatic colorectal cancer (mCRC), representing a marker of sensitivity to preoperative treatments and a prognostic factor associated with longer survival [55,56]. Although we hope in time to show that a neoadjuvant approach may influence overall survival, perhaps via biological/microenvironmental mechanisms surrounding micrometastases while a primary remains in situ, in contrast to adjuvant therapy, in BACCHUS we are testing the feasibility of bevacizumab in a neoadjuvant setting, where bleeding and perforation could prejudice the performance and quality of surgery. If the phase II passes tests of efficacy, safety and feasibility, we plan to develop a phase III study. Potential designs include 3 months of neoadjuvant chemotherapy prior to surgery, followed by the option of a further 3 months of postoperative chemotherapy, randomised against initial surgery followed by 6 months of postoperative adjuvant chemotherapy according to histology, or alternatively against the current standard of SCPRT or chemoradiation.
Conclusions
The BACCHUS trial will give further information about the feasibility, safety, tolerability and benefit of neoadjuvant FOLFOX or FOLFOXIRI + BVZ in this distinct disease setting of locally advanced but clearly resectable rectal cancer. | 2023-01-16T14:11:20.740Z | 2015-10-23T00:00:00.000 | {
"year": 2015,
"sha1": "f91eaa9158dbd325e73862130a88950be5c015f1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12885-015-1764-1",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "f91eaa9158dbd325e73862130a88950be5c015f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
246814410 | pes2o/s2orc | v3-fos-license | Higher mortality of hospitalized haematologic patients with COVID-19 compared to non-haematologic is driven by thrombotic complications and development of ARDS: An age-matched cohorts study
Background and Objectives The characteristics of COVID-19 in haematologic patients compared to non-haematologic patients have seldom been analyzed. Our aim was to analyze whether there are differences in the clinical characteristics and outcome of haematologic patients with COVID-19 as compared to non-haematologic patients. Patients and methods Retrospective cohort study in 2 University hospitals of patients admitted with laboratory-confirmed COVID-19 included in the SEMICOVID19 database. The cohort with underlying haematologic disease was compared to a cohort of age- and date-of-COVID-19-matched controls without haematologic disease (1:2). Results 71 cases and 142 controls were included from March-May 2020. Twenty (28.1%) had received recent chemotherapy. Twelve (16.9%) were stem cell transplant (SCT) recipients. Eleven (15.5%) were neutropenic concurrently with COVID-19 diagnosis. Haematologic patients more often presented ARDS (58.5 vs 20.7%, p = 0.0001), thrombotic complications (15.7 vs 2.1%, p = 0.002), DIC (5.7 vs 0.0%, p = 0.011) and heart failure (14.3 vs 4.9%, p = 0.029), and more often required ICU admission (15.5 vs 2.8%, p = 0.001), MV (14.1% vs 2.1%, p = 0.001), steroid (64.8 vs 33.1%, p = 0.0001), tocilizumab (33.8 vs 8.5%, p = 0.0001) or anakinra treatment (9.9% vs 0%, p = 0.0001). In-hospital mortality was significantly higher (38.0% vs 18.3%, p = 0.002). Conclusions Our results suggest that COVID-19 has worse outcomes in haematologic patients than in non-haematologic patients, independently of age, and that the development of ARDS and thrombotic complications drives the higher in-hospital mortality.
Introduction
Haematologic patients present a high risk for infection, due to immune-compromise secondary to underlying disease and subsequent therapy. Viral infections such as RSV or Influenza that are considered mild in immunocompetent hosts can become life-threatening in certain haematologic patients (Kmeid et al., 2016;Sheshadri et al., 2019).
The characteristics of SARS-CoV-2 infection and COVID-19 in haematologic patients are not yet well known. In the context of COVID-19 pandemic, the underlying haematologic disease could influence the inflammatory response and viral clearance, and modify manifestations and outcome of the disease (Chamilos et al., 2021).
Studies published so far suggest that haematologic patients with COVID-19 present a higher mortality compared with general population data (García-Suárez et al., 2020). However, the characteristics of COVID-19 in haematologic patients as directly compared to non-haematologic patients have seldom been analyzed. There is a lack of information regarding differences in clinical presentation, the incidence of different complications, and the management of patients with haematologic malignancy and COVID-19 compared to non-haematologic cases. The scarce previously published series that compare to the general population (Passamonti et al., 2020) present mainly population-based data and lack detailed case-level information.
We present a cohort of haematologic patients with COVID-19, and compare them to non-haematologic patients with COVID-19.
Setting and study design
We performed a retrospective cohort study in 2 University hospitals in Madrid, Spain, of admitted patients with SARS-CoV-2 laboratoryconfirmed pneumonia included in the SEMICOVID19 Registry (compiled by the Spanish Society of Internal Medicine) from March to May 2020. Both centres are tertiary teaching hospitals, with reference Haematology Departments, that possess stem cell transplantation units and treat complex Haematology patients.
The SEMICOVID19 is an ongoing, nationwide multicentre anonymized online database of consecutive adult patients admitted with SARS-CoV-2 laboratory-confirmed pneumonia from 131 different Spanish hospitals. Inclusion criteria for the registry were age ≥ 18 years and first hospital discharge with a confirmed diagnosis of COVID-19; exclusion criteria were subsequent admissions of the same patient and denial or withdrawal of informed consent, as described elsewhere (Casas-Rojo et al., 2020). Patients were cared for according to local protocols and clinical judgment of their attending physician.
For the present study, only the 2 hospitals that had included all their hospitalized haematologic patients with COVID-19 in the Registry database were selected. A retrospective cohort study was designed to compare the differences between patients with and without underlying haematologic disease. All patients with underlying haematological disease were selected, and two controls without haematologic disease were selected for each haematologic patient, matched by age and date of COVID-19. To ensure a standard process of choosing controls, an algorithm was used to select, among the possible controls of the same age, those diagnosed at the nearest date to the case's COVID-19 diagnosis.
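As a concrete illustration of this matching step, the following sketch implements 1:2 nearest-date, same-age control selection in Python. The field names (`age`, `covid_date`) and the data are hypothetical; this is an illustration of the procedure, not the registry's actual implementation.

```python
from datetime import date

def select_controls(case, pool, n_controls=2):
    """Pick n_controls same-age patients from the non-haematologic pool
    whose COVID-19 diagnosis date lies closest to the case's; selected
    controls are removed from the pool so they cannot be reused."""
    same_age = [p for p in pool if p["age"] == case["age"]]
    # Sort candidates by distance (in days) between diagnosis dates.
    same_age.sort(key=lambda p: abs((p["covid_date"] - case["covid_date"]).days))
    chosen = same_age[:n_controls]
    for p in chosen:
        pool.remove(p)
    return chosen

# Hypothetical example: one haematologic case and three candidate controls.
case = {"id": "H01", "age": 67, "covid_date": date(2020, 4, 2)}
pool = [
    {"id": "C01", "age": 67, "covid_date": date(2020, 3, 30)},
    {"id": "C02", "age": 67, "covid_date": date(2020, 5, 10)},
    {"id": "C03", "age": 67, "covid_date": date(2020, 4, 3)},
]
print([p["id"] for p in select_controls(case, pool)])  # ['C03', 'C01']
```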
Data collection
The SEMICOVID Registry includes epidemiological, clinical, laboratory and radiologic data extracted from electronic medical records. For more comprehensive information on the registry, see previously published works (Casas-Rojo et al., 2020).
A complementary standardized form was completed for haematologic patients, which included specific data about the haematologic disease: underlying haematologic disease, ECOG performance status, disease status, therapy, and stem cell transplantation.
Definitions
We considered as SARS-CoV-2-infected patients those with microbiological confirmation by reverse transcription polymerase chain reaction (RT-PCR) testing of a respiratory sample. All patients admitted with symptomatic COVID-19 infection were included, with or without pneumonia.
We included in the "Haematologic disease cohort" patients (or "Haematologic patients") admitted with SARS-CoV-2 infection who had an underlying active haematological malignancy, or who were stem cell transplantation (SCT) recipients (as treatment for haematological malignancy). Patients were considered as having active onco-haematologic disease when they were under treatment (chemotherapy or targeted therapy) or were still immunocompromised due to their underlying haematological condition or its treatment. The "Non-haematologic disease cohort" (or "Non-haematologic patients") included patients admitted with SARS-CoV-2 infection and without onco-haematologic disease or SCT. Patients with active solid tumours were excluded from both cohorts and will be analysed separately.
Disease status at the time of SARS-CoV-2 detection was defined according to each specific disease's revised criteria for leukemia, myeloproliferative neoplasm, multiple myeloma and lymphoma (Döhner et al., 2017;Cheson et al., 2014;Kumar et al., 2016).
Performance status at the diagnosis of COVID-19 was graded according to the Eastern Cooperative Oncology Group (ECOG) (Oken et al., 1982).
The main outcome variable was in-hospital mortality.
Statistical analysis
Quantitative variables were expressed as means and standard deviations (SD) and/or medians and interquartile ranges, and qualitative variables as frequencies and percentages.
To compare differences between haematologic and nonhaematologic cohorts, the Mann-Whitney U test, χ2 test, Fisher's exact test or Student t test were used where appropriate.
To explore risk factors associated with in-hospital death among haematologic patients, univariable and multivariable logistic regression models were used. Variables with p < 0.05 in the univariable analyses were entered into the multivariable model.
All statistical analyses were performed using SPSS system (version 26.0 for Windows, SPSS Inc., Chicago, IL, USA). The statistical significance level was set at a two-sided p value of < 0.05. An odds ratio (OR) was reported along with 95% confidence interval (CI).
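To make the two-step model-building strategy explicit, the sketch below screens candidate predictors univariably and then fits the multivariable model. It uses synthetic data and hypothetical variable names; the study itself used SPSS, so this illustrates the procedure, not the original analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration data: binary predictors and a binary outcome.
rng = np.random.default_rng(0)
n = 200
candidates = ["ards", "recent_chemo", "neutropenia", "uncontrolled_disease"]
df = pd.DataFrame({v: rng.integers(0, 2, n) for v in candidates})
logit = -1.5 + 2.5 * df["ards"] + 0.8 * df["neutropenia"]
df["death"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

def univariable_p(var):
    """p value of a single-predictor logistic model for in-hospital death."""
    X = sm.add_constant(df[[var]].astype(float))
    return sm.Logit(df["death"], X).fit(disp=0).pvalues[var]

# Step 1: univariable screen at p < 0.05.
selected = [v for v in candidates if univariable_p(v) < 0.05]

# Step 2: multivariable model; report ORs with 95% CIs.
X = sm.add_constant(df[selected].astype(float))
fit = sm.Logit(df["death"], X).fit(disp=0)
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "CI 2.5%": np.exp(ci[0]),
                    "CI 97.5%": np.exp(ci[1]),
                    "p": fit.pvalues}).drop(index="const"))
```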
Results
From March to May 2020, 5592 patients with COVID-19 were admitted to the 2 hospitals. Among them, 71 (1.3%) cases had an underlying haematologic disease. One-hundred and forty-two patients with COVID-19 but without haematologic disease admitted to the hospital during the study period were selected as the control cohort.
Characteristics of underlying haematologic disease
Characteristics of haematologic diseases are summarized in Table 1 and Fig. 1. The most common was NHL, followed by MM, CLL and MDS. In 14.1% the haematologic disease was at an initial stage, whereas in 16.9% it was refractory or relapsed. Performance status measured by ECOG scale was > 1 in 12.9 %.
In-hospital mortality was significantly higher among haematologic cases versus controls (38.0% vs 18.3%, p = 0.002). However, there were no significant differences in in-hospital mortality in patients who developed ARDS, or required ICU admission or ventilation, according to the presence of haematologic disease (Table 3).
Risk factors for in-hospital mortality among haematologic patients
Univariable analysis of factors associated with in-hospital mortality are displayed in Table 4.
When considering only haematologic patients, only the development of ARDS (96.3 vs 36.4%, p = 0.0001; OR 37.635, 95% CI 4.583-309.060, p = 0.001) was independently associated with a higher probability of in-hospital death in the multivariable analysis. Other factors, such as recent chemotherapy, neutropenia, targeted therapies, or uncontrolled haematologic disease, were not predictors of in-hospital mortality in the multivariable analysis.
In-hospital mortality in recipients of SCT was similar to that of non-recipients (33.3% vs 39%, p = 0.999); however, there were no pre-engraftment cases of SCT.
Administration of G-CSF in haematologic patients was not associated with development of ARDS (p = 0.417).
Discussion
Our results show that, despite immunosuppression, haematologic patients with COVID-19 present significantly more respiratory and thrombotic complications as compared to non-haematologic patients, and a higher in-hospital mortality.
Some of the cancer-associated factors that have been advocated to contribute to worse outcomes of SARS-CoV-2 infection (lymphopenia and lymphocyte dysfunction, hypercoagulability, immuno-metabolic deregulation related to myeloid cell dysfunction) converge in patients with haematological malignancy. On the other hand, chemotherapy-induced neutropenia and monocytopenia might attenuate the hyperinflammatory response to the virus, whereas neutrophil recovery, treatment with G-CSF, or immunotherapy could enhance it (Chamilos et al., 2021).
Several series attribute the increased mortality of haematologic patients to their higher age (García-Suárez et al., 2020). Age is an important prognostic factor in COVID-19 (Moreno-Torres et al., 2021). However, in our age-matched cohort, we still observed a significant difference in mortality between the haematologic and non-haematologic cohorts. Nevertheless, patients with ARDS and patients requiring ICU had a similar mortality, regardless of the presence of underlying haematologic disease. In the present series, both the development of ARDS and thrombotic complications were more frequent in the haematologic cohort and could account for its increased mortality.
In COVID-19 patients, ARDS is driven by the inflammatory response to SARS-CoV-2, rather than by direct viral damage (Osuchowski et al., 2021). However, despite immunosuppression, haematologic patients in this series presented ARDS more often than non-haematologic. Haematologic patients with pneumonia are at risk of developing ARDS during neutropenia recovery (Rhee et al., 2009;Malek et al., 2021). It is a matter of controversy whether G-CSF could exacerbate the effect of neutrophil recovery contributing to ARDS (Rhee et al., 2009;Mignard et al., 2019). G-CSF upregulates the production of cytokines that increase alveolar permeability and neutrophil influx, and may enhance secretion of pro-inflammatory cytokines by alveolar macrophages (Rhee et al., 2009). In the present series, G-CSF was not a risk factor for ARDS, although it was administered to only a minority of patients.
Nevertheless, endothelitis and coagulopathy leading to in-situ thrombosis are increasingly gaining consideration in the pathogenesis of respiratory failure in COVID-19 (Bonaventura et al., 2021). Microthrombosis seems to be involved in the physiopathology of acute respiratory distress syndrome (ARDS) (Bonaventura et al., 2021; O'Donnell et al., 2021). The development of a pro-thrombotic state is an important feature of COVID-19. In the present series, thrombotic events were strikingly more frequent in haematologic patients. Cancer is a well-known risk factor for thrombosis and, in particular, patients with active onco-haematologic conditions are known to be at higher risk for thromboembolism (Kekre and Connors, 2019). The baseline predisposition for thrombotic events seems to place haematologic patients more at risk of developing COVID-19 complications, both at the macro- and at the microvascular level.
In immune-compromised patients there is a trend to longer persistence of viral shedding (Taramasso et al., 2021) that could contribute to greater direct damage and mortality. Persistence of positive PCR could not be adequately evaluated in this retrospective series, as only 35% of cases had at least one control test after the diagnosis of COVID-19. Among those, there was a non-significant trend to longer persistence of SARS-CoV-2 positivity.
Several studies report an inferior mortality of stem cell transplantation recipients as compared to other haematologic patients (Piñana et al., 2020). In published series, median time from SCT was, in general, long, and patients had had enough time to recover before presenting with COVID-19. In addition, therapies typically used for graft versus host disease could mitigate the inflammatory response (Saraceni et al., 2021). On the contrary, the outcome of recently transplanted patients who suffer SARS-CoV-2 infection during the pre-engraftment period is not well known, and cases of ARDS at the moment of neutrophil recovery have been described (Malek et al., 2021). In our series, median time from transplantation to COVID-19 was close to 2 years and no pre-engraftment cases were detected. Nevertheless, we were not able to find differences in in-hospital mortality as compared to non-transplant recipients.
Patients receiving small-molecule kinase inhibitors (such as JAK or BTK inhibitors) might be protected from hyper-inflammation, and it has been speculated that discontinuation of such therapies in cancer patients with COVID-19 could unleash the hyper-inflammatory response to SARS-CoV-2, in addition to adversely affecting the outcome of the underlying malignancy (Wijaya et al., 2021; Stack et al., 2021). In our series, all patients who were under ibrutinib or other BTK inhibitors maintained them throughout the COVID-19 episode. There were no significant differences in outcome according to BTK inhibitor treatment, though the sample is too small to draw any conclusion.
The major strength of the present study is the direct comparison of a cohort of haematologic patients with COVID-19 with a non-haematologic cohort matched by age and date of COVID-19 diagnosis. This makes it possible to avoid bias secondary to the different age range of patients with haematologic disease, and bias secondary to presentation at different moments of the COVID-19 learning curve at the beginning of the pandemic. The patients included were only those admitted to two reference hospitals in Madrid, which ensures homogeneity in management and therapeutic options, and increases the probability that the observed differences are attributable to differences in the response to SARS-CoV-2 in haematologic patients.
Limitations of our study include the small sample size and the heterogeneous haematologic population, which prevents us from drawing any conclusions about specific types of haematologic disease or specific haematologic therapies in relation to COVID-19 outcome. During the study period, at the beginning of the pandemic, patients who received steroids, remdesivir, and tocilizumab did so in the setting of clinical trials, or off-label as compassionate use. In the case of steroids and tocilizumab, their use was significantly lower than in patients without haematological malignancy, probably for fear of increasing immunocompromise without the certainty of a beneficial effect. Only 1 case had access to remdesivir. In this respect, the results may not be applicable to the current management of COVID-19. Patients included belong only to the first COVID-19 wave, when antibody detection during recovery was not systematically addressed, and consequently it could not be analyzed. Our series includes only hospitalized patients with COVID-19, and the results cannot be generalized to a wider population of non-admitted haematologic patients.
Our results suggest that COVID-19 has worse outcomes in haematologic patients than in non-haematologic patients, independently of age, and that the development of ARDS and thrombotic complications drives the higher in-hospital mortality. Immune-compromise does not prevent inflammatory complications but may, in addition, impede viral elimination. Maximal stress on preventive measures in haematologic patients is warranted (Malek et al., 2021), and, if patients are unfortunately infected, close surveillance is needed, with antiviral, anti-inflammatory, and anticoagulant treatment before decompensation, as well as prompt consideration of intensive care management in those who deteriorate (Giesen et al., 2021).
Funding There was no funding granted for this article.
Availability of data and materials The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Ethics
This study was carried out in accordance with the Declaration of Helsinki and was approved by the Institutional Research Ethics Committee of Málaga on March 27, 2020 (Ethics Committee code: SEMI-COVID-19 27-03-20), as per the guidelines of the Spanish Agency of Medicines and Medical Products.
Consent for publication Only patients who had previously given consent for their medical records to be used for medical research were included in this registry. Data confidentiality and patient anonymity were maintained at all times, in accordance with Spanish regulations on observational studies.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-02-15T14:16:41.022Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "19f3ba56d37eedfe08c3b36e29c891ae9ca68235",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.clinpr.2022.100137",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be7502d48bf5c715ee92f5823f3ed36996d37ec9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4951972 | pes2o/s2orc | v3-fos-license | Behavior of Human Bone Marrow-Derived Mesenchymal Stem Cells on Various Titanium-Based Coatings
The chemical composition and texture of titanium coatings can influence the growth characteristics of the adhered cells. An enhanced proliferation of human mesenchymal stem cells (hMSCs) would be beneficial. The present study aimed to investigate whether titanium deposited under different atmospheres would affect the cell growth properties, cellular morphology, and expression of surface markers of hMSCs. Titanium-based coatings were deposited on silicon wafers under oxygen, nitrogen, or argon atmospheres by ultra-short pulsed laser deposition using two different gas pressures, followed by heating at 400 °C for 2 h. The characteristics of the coated surfaces were determined via contact angle, zeta potential, and scanning electron microscopy (SEM) techniques. Human MSCs were cultivated on the differently coated silicon wafers for 48 h. Subsequently, the cell proliferation rates were analyzed with an MTT assay. The phenotype of the hMSCs was checked via immunocytochemical stainings of the MSC-associated markers CD73, CD90, and CD105, and the adhesion, spreading, and morphology of the hMSCs on the coated materials via SEM. The cell proliferation rates of the hMSCs were similar on all coated silicon wafers. The hMSCs retained the MSC phenotype, expressing MSC-associated markers and a fibroblast-like morphology with cellular projections. Furthermore, no significant differences could be found in the size of the cells when cultured on the various coated surfaces. In conclusion, despite certain differences in the contact angles and the zeta potentials of the various titanium-based coatings, no single coating markedly improved the growth characteristics of hMSCs.
Introduction
The concept of tissue engineering has made great advances in the field of regenerative medicine, with the idea of using biomaterials and cells to construct new tissues to replace damaged ones in the body. The properties of human mesenchymal stem cells (hMSCs) give them potential for use in regenerative medicine as delivery vehicles for cell-based therapies [1]. They have been considered an alternative cell source to differentiated autologous chondrocytes due to their self-renewal and multipotent capacity to differentiate into different types of cells, including chondrocytes [2-6]. However, the quantity of hMSCs in bone marrow declines dramatically with age. It has been estimated that there is approximately one MSC per 10,000 cells in a newborn child's bone marrow, whereas a 50-year-old adult has one MSC per 400,000 cells, and an 80-year-old person has one MSC per 2 million cells [7]. Accordingly, it has been shown that the number of colony-forming units harvested per aspirate significantly decreased with age in women [8]. Importantly, the number of hMSCs needed in clinical experiments can be up to 24 million [9]. Thus, it can take a rather long time to obtain enough cells in monolayer expansion culture for the needs of clinical operations. Therefore, enhanced rates of MSC growth in expansion cultures in vitro, under conditions which maintain the MSC phenotype and differentiation capacity, would be beneficial for the purposes of tissue engineering.
Silicon has been widely used in biomedical applications [10-14]. It has been shown that coating silicon with bioactive molecules enhanced hMSC proliferation and retained the phenotype of MSCs, as well as that of other types of cells [15-20]. Silicon substrates have been shown to be a feasible novel cell culture material with good biocompatibility for myoblast cell adhesion and proliferation [19], and modified silicon has been widely used as a scaffold in cell-based therapy, tissue engineering, or both [15,21-23]. Silica nanoparticles have been shown to increase human adipose tissue-derived stem cell proliferation through ERK1/2 activation [24]. Various coatings on silicon wafers have also enhanced the proliferation of hMSCs compared with other types of cells, such as chondrocytes or osteoblasts [16].
Surface properties, such as topography, chemistry, stiffness, roughness, wettability, and energy, have been noted to influence the cell growth and differentiation capacity of stem cells [25,26]. Titanium (Ti) is often used to create artificial joints, pins, and other implants for orthopedic operations, since it does not irritate the human body and has been shown to be biocompatible. Titanium dioxide (TiO2)-coated CoCrMo has improved the osteogenic differentiation and adhesion of hMSCs [27], and our previous study showed that TiO2 coating on cell culture dishes promoted hMSC proliferation without a loss in their chondrogenic differentiation capacity [28]. Cathodic arc plasma-treated Ti has been shown to enhance bone marrow MSC functions [29].
Oxygen and nitrogen are the main gases in air. The surface coatings on materials are usually exposed to an atmospheric oxygen environment during the coating processes, or at least during application in cell cultures or as an implant. Ti in particular is very reactive and forms at least a thin oxide layer on the surface. Argon is chemically very inactive and has been used to provide an inert atmosphere during deposition. It has been shown that argon protection could effectively reduce the air contaminants on acid-etched Ti implant surfaces and maintain the surface hydrophilicity and biological activity of implants [30,31], enhancing early bone formation on Ti surfaces [30,32]. Nitrogen is a chemically neutral gas and does not change the biological properties of the samples. Therefore, in the present study, Ti-based coatings were deposited on silicon or glass in oxygen, nitrogen, or argon atmospheres under two different gas pressures to study how the basic surface properties are affected. The hMSCs were then cultivated on these various coatings to investigate whether the different coatings, with their different surface properties, would affect the proliferation, adhesion, and differentiation of the hMSCs. The main goal was to investigate whether it would be possible to find a coating that would provide optimal proliferation of hMSCs without a loss in their cellular characteristics. An ultra-short pulsed laser deposition technique was used as a new technology which allows the production of well-controlled surface textures in different gas atmospheres.
Sample Preparation
High purity (100) silicon (Si-Mat, Landsberg am Lech, Germany) and glass microscope slides (Thermo Scientific, Menzel, Braunschweig, Germany) of a size of 76 mm × 26 mm × 0.8 mm were used as substrates. Thin films were deposited using the ultra-short pulsed laser deposition (USPLD) technique. A total of six different sets were deposited, and for each set there were coated samples of both silicon and glass. First, the samples were loaded into a vacuum chamber. In vacuum, the sample surfaces were gently cleaned using HiQ Argon (AGA, Espoo, Finland) ion sputtering (SAM-7KV, Minsk, Belarus) before film deposition. For USPLD, we used a Tangerine fs fiber laser (Amplitude Systèmes, Pessac, France). The pulse length was 0.3 ps, with a pulse repetition rate of 2 MHz. Thin films were deposited in oxygen, nitrogen, and argon atmospheres using two different gas pressures, 2 × 10⁻⁴ mbar or 2 × 10⁻³ mbar. As a target, we used high purity titanium. After the depositions, the samples were heated at 400 °C for 2 h in the same atmosphere as during the deposition. Subsequently, the samples were briefly ultrasonicated in an ethanol-acetone solution (50:50 by volume).
Contact Angle Measurements
The sessile drop method was used to determine the contact angles of the different surfaces. A custom-made apparatus with a digital camera was used to take a photo of a 10-µL drop of deionized water on each surface. The contact angles were then measured with the GNU image manipulation program (GIMP, version 2.7.3, www.gimp.org). Mean values and standard deviations were then calculated.
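Although the angles here were measured manually in GIMP, the arithmetic behind such a reading is simple: under a spherical-cap assumption, the contact angle follows from the drop height and base width. A minimal sketch, with hypothetical readings (not the procedure actually used in this study):

```python
import math

def contact_angle_deg(height_mm: float, base_width_mm: float) -> float:
    """Spherical-cap estimate of the sessile-drop contact angle:
    theta = 2 * atan(2h / w), where h is the drop height and w the
    base (contact-line) width read off the photograph."""
    return math.degrees(2.0 * math.atan(2.0 * height_mm / base_width_mm))

# Hypothetical pixel-calibrated readings from a drop photograph.
print(f"{contact_angle_deg(1.9, 3.8):.1f} deg")  # 2*atan(1) = 90.0 deg
```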
Zeta Potential Measurements
The zeta potentials were measured using the electrokinetic analyzer (SurPass, Anton Paar GmbH, Graz, Austria) with the adjustable gap cell. We measured the zeta potentials from the coated and heat-treated silicon samples at a pH of about 7.0, according to the principles of the measurement previously described [28]. The electrokinetic analyzer's pH meter was used to monitor pH.
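For reference, streaming-potential analyzers of this kind conventionally evaluate the zeta potential via the Helmholtz-Smoluchowski relation (whether the instrument used exactly this variant is an assumption here):

$$\zeta=\frac{\mathrm{d}U_{\mathrm{str}}}{\mathrm{d}\Delta p}\cdot\frac{\eta}{\varepsilon_r\varepsilon_0}\cdot\kappa_B,$$

where dU_str/dΔp is the slope of the streaming potential versus the applied pressure difference, η is the viscosity and ε_r ε_0 the permittivity of the electrolyte, and κ_B is its bulk conductivity.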
Cultivation of Human Mesenchymal Stem Cells (hMSCs)
Human MSCs were isolated from bone marrow materials with permission from the North-Savo Health Care District Ethical Committee (license no. 62/2010), as described previously [28]. Human MSCs were cultured in the MSC culture medium in an incubator at 37 °C with 20% O2 tension and 5% CO2. When the cells reached 90%-95% confluency in the monolayer culture, they were harvested with TrypLE, and 20,000 hMSCs were seeded in the MSC culture medium onto the coated silicon wafers or glass pieces, which had been coated in oxygen, nitrogen, or argon atmospheres from the Ti source. After the cells had been cultivated on the various coated materials for 48 h at 37 °C, the samples were collected for MTT and immunocytochemical assays, and for scanning electron microscopic analyses.
Characterization of Human Mesenchymal Stem Cells
To ensure the mesenchymal characterization of the hMSCs from different donors used in this study, the expression of MSC-associated markers of CD73 (1:200), CD90 (1:200), and CD105 (1:200) were examined with an immunocytochemical assay as described in our previous study [33]. The functional characterization of hMSCs was performed using chondrogenic, osteogenic, and adipogenic differentiation assays of hMSCs as previously described [28,33].
The chondrogenic differentiation of hMSCs was performed in a pellet culture with 500,000 cells in the chondrogenic medium for 4 weeks. The osteogenic or adipogenic differentiation was carried out in a monolayer culture with 100,000 cells in the osteogenic or adipogenic medium for 4 weeks, respectively. The medium was changed three times per week during the culture period in all differentiation experiments. At the end of the differentiation periods, the cell pellets from chondrogenic differentiation were examined with histological stainings: toluidine blue staining for proteoglycans (PGs) and immunohistochemistry for type II collagen [33,34]. The differentiated cells from the osteogenesis and adipogenesis assays were visualized with stainings for alkaline phosphatase (ALP) activity and with Oil Red O (ORO), respectively [33,34].
Metabolic Activity Measurement
The metabolic activities of the hMSCs cultured on various coated silicon samples were analyzed with an MTT colorimetric assay. After 48-h cultivation, the cells attached to the coated silicon wafers were carefully transferred to a new 24-well plate after washing with phosphate buffered saline (PBS), then 2 mL of a 0.5-mg/mL MTT reagent 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide was added, and the cells were incubated at 37 • C for 3 h. Finally, the MTT formazan salt was dissolved in 1 mL of dimethylsulphoxide/ethanol (1:1, v/v), and the absorbances were measured at 595 nm with a 96-well plate reader. Three replicates for every sample were used in the measurement, and the experiment was repeated three times with three different donor cells.
Immunocytochemical Analyses
Immunocytochemical assays on silicon wafers were not successful. Therefore, the characteristic surface antigens of hMSCs were immunostained in cells adhered to the various glass coatings, manufactured under the same conditions as the coatings of the silicon wafers. After 48-h cultivation, the cells attached to the coated glass samples were carefully transferred to a new 24-well plate after being washed with PBS; then, the cells were fixed with 4% paraformaldehyde. The fixed cells were further incubated with anti-CD73 (1:200), CD90 (1:200), CD105 (1:200), and CD45 (1:200) antibodies overnight at 4 °C. On the next day, the cells were incubated with a secondary antibody (FITC-labeled goat anti-mouse, 1:200) for 1 h at room temperature in darkness. Finally, the cells were photographed with a fluorescence microscope after incubation with 1 µg/mL of 4′,6-diamidino-2-phenylindole for 15 min at 37 °C [28,33]. The experiment was individually repeated three times with three different donor cells.
A 48-h culture time was chosen, since the surfaces were then rather confluent, and longer times were considered to have adverse effects on the cells. This time was thought to be long enough to observe whether loss in the hMSC-specific surface markers would appear.
Scanning Electron Microscopic Analysis of Cell-Free Coated Materials and Cells
Scanning electron microscopic imagings (SEM) and energy-dispersive X-ray spectroscopy (EDS) analysis of the coated surfaces were carried out using a Hitachi S-4800 FE-SEM (Hitachi Science System Ltd., Ibaraki, Japan) equipped with an EDS detector at an accelerating voltage of 5-10 kV.
After 48-h cell cultivation, the cells attached to the coated silicon samples were carefully transferred to a new 24-well plate after washings with PBS; then, the cells were fixed with 2.5% glutaraldehyde in a 0.1-mol/L sodium cacodylate buffer (pH 7.4) for 2 h at room temperature. The samples were further dehydrated with a series of gradually increasing concentrations of ethanol and hexamethyldisilazane. Finally, the samples were covered with gold by sputtering (AGAR auto sputter coater, Agar Scientific, Stansted, UK) for 20 min and monitored with the FE-SEM. The experiment was repeated three times with three different donor cells.
Statistical Analysis
A one-way ANOVA (IBM SPSS Statistics 21, New York, NY, USA) followed by the Bonferroni post-hoc test was used to check the statistical significance of the differences in cell proliferation between the different coated surfaces. The Kruskal-Wallis test was used to examine statistically significant differences in contact angle and zeta potential between the different coatings under the same gas pressure, or between the different gas pressures for the same coating. A significance level of p < 0.05 was considered statistically significant, and p < 0.01 was considered highly statistically significant.
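The same test sequence can be reproduced with standard tools; the sketch below, using hypothetical absorbance triplicates (the original analysis was performed in SPSS), shows a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons and the Kruskal-Wallis test.

```python
import numpy as np
from scipy import stats

# Hypothetical MTT absorbances (three replicates) for three coatings.
groups = {
    "O2_low": np.array([0.41, 0.44, 0.43]),
    "N2_low": np.array([0.40, 0.45, 0.42]),
    "Ar_low": np.array([0.39, 0.43, 0.44]),
}

# One-way ANOVA across coatings ...
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.3f}, p = {p:.3f}")

# ... followed by Bonferroni-corrected pairwise t tests.
names = list(groups)
m = len(names) * (len(names) - 1) // 2   # number of pairwise comparisons
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p_ij = stats.ttest_ind(groups[names[i]], groups[names[j]])
        print(names[i], "vs", names[j], "p_adj =", min(1.0, p_ij * m))

# Non-parametric alternative used for the surface measurements.
h, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p_kw:.3f}")
```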
Surface Characterization
The contact angle is used to determine the wettability of a solid surface: the larger the contact angle (>90°), the more hydrophobic the solid surface. In this study, the contact angles of the coated silicon samples were 90° or higher (Table 1). Significantly increased contact angles of coated silicon were achieved with a nitrogen atmosphere at the lower pressure, and with the presence of argon gas during the deposition (Table 1). The gas pressure had no significant effect on the contact angles of the coated silicon wafers under the oxygen and argon atmospheres at the two different gas pressures (Table 1).

Table 1. The contact angles of the titanium-based coatings deposited on silicon. Oxygen, nitrogen, and argon gases were used during the depositions under gas pressures of 2 × 10⁻⁴ mbar and 2 × 10⁻³ mbar. Statistical significances at p value < 0.05: a, nitrogen-silicon at higher pressure vs. nitrogen-silicon at lower pressure; b, nitrogen-silicon at higher pressure vs. argon-silicon at higher pressure.
The zeta potential, an electrical surface property, depends on the properties of the material surface and of the liquid on it. The higher the zeta potential, the stronger the aggregative stability, while a lower zeta potential means faster coagulation. In this study, the argon atmosphere at both gas pressures resulted in high negative zeta potential values, as did coating under the lower-pressure nitrogen atmosphere (Table 2), while zeta potentials were lowest in samples coated under the oxygen atmosphere (Table 2). The differences in pH values were small during the measurement (Table 2).

Table 2. Zeta potential (ZP) measurement of the coated silicon wafers. Oxygen, nitrogen, and argon gases were used during the depositions under gas pressures of 2 × 10⁻⁴ mbar and 2 × 10⁻³ mbar.

Deposition under different conditions affected the surface roughness of the coatings on the silicon wafers. Surfaces deposited under the higher pressure appeared to be slightly rougher than those deposited under the lower pressure (Figure 1). The size of the particles in the silicon material deposited in oxygen plasma under the higher pressure appeared more uniform (Figure 1).
Characterization of the Used hMSCs
The hMSCs used in this study were characterized by immunocytochemical stainings of the MSC-associated markers CD73, CD90, and CD105. All three donor hMSCs used in this study expressed the surface markers CD73, CD90, and CD105 (Figure 2A), but did not express the leukocyte marker CD45 (Figure 2A). The functional characterization of hMSCs included chondrogenic, osteogenic, and adipogenic differentiation assays. After 4-week chondrogenic differentiation, the cell pellets were stained for PGs and type II collagen (Figure 2B). The osteogenically differentiated cells in the monolayer culture expressed alkaline phosphatase (ALP) activity (Figure 2C), and the adipogenic differentiation produced cells with a high degree of Oil Red O-stained fatty droplets (ORO) (Figure 2C).
The hMSC Morphology and Adhesion on Various Coated Silicon Samples
The scanning electron microscopic analysis showed that the hMSCs displayed a fibroblast-like morphology when cultivated on all of the various coated surfaces (Figure 3). The morphology of the hMSCs on the various coated surfaces was similar to that of cells cultured as a monolayer on standard polystyrene cell culture plates (Figure 3). The hMSCs grown on silicon wafers coated under a nitrogen atmosphere at the higher pressure appeared somewhat smaller in size than the cells grown on the other coatings (M2, Figure 3). Therefore, image analysis of the cellular morphology was performed; the computation of the shape descriptors is sketched below. The data on cellular area and perimeter gave some support to the assumption that the hMSCs cultured on the nitrogen-coated surface under the higher pressure were smallest in size. However, the differences were not statistically significant (Table 3). The circularity of the cells (a value of 1 representing a circular shape) remained almost constant on all coatings, varying in a range between 0.32 and 0.34 (Table 3). This indicates that the cells were mainly spindle-shaped. Solidity describes in geometrical terms the stiffness and deformability of an object: the higher the solidity, the lower the cell deformability. In the present study, the solidity values of the hMSCs cultured on the various coated silicon samples were between 0.63 and 0.65, and no statistically significant differences could be noticed between the various coated silicon samples (Table 3).

Table 3. The cell size and shape-associated parameters (mean ± S.D.) of the human mesenchymal stem cells cultured on various coated silicon samples (n = 75). Oxygen, nitrogen, and argon gases were used during the depositions under gas pressures of 2 × 10⁻³ mbar (higher pressure) and 2 × 10⁻⁴ mbar (lower pressure).
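The two shape descriptors quoted in Table 3 follow their usual geometric definitions; the sketch below, with hypothetical measurements, shows how they are computed from the measured cell area, perimeter, and convex-hull area.

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """4*pi*A / P**2: equals 1 for a circle, tends to 0 for elongated cells."""
    return 4.0 * math.pi * area / perimeter ** 2

def solidity(area: float, convex_area: float) -> float:
    """A / A_convex: lower values indicate more concave, deformable outlines."""
    return area / convex_area

# Hypothetical spindle-shaped cell: area 1500 um^2, perimeter 240 um,
# convex-hull area 2300 um^2.
print(round(circularity(1500.0, 240.0), 2))  # ~0.33, in the range of Table 3
print(round(solidity(1500.0, 2300.0), 2))    # ~0.65
```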
The hMSC Proliferation on Various Coated Silicon Samples
The MTT assay was used to analyze the cell proliferation when the hMSCs were cultivated on various coated silicon samples. The present results from three separate experiments showed no obvious differences in the cell number between any coated silicon samples after cultivation for 48 h (Figure 4).
The Expression of the hMSC-Associated Markers after hMSCs Were Cultivated on Various Coated Silicon Samples
The International Society for Cellular Therapy has stated that hMSCs must express CD73, CD90, and CD105 and lack expression of CD45, CD34, CD14, or CD19 as the minimal criteria for defining multipotent MSCs [4]. In the present study, the hMSCs from all three donors expressed CD73, CD90, and CD105, but did not express CD45 (Figure 5). However, no noticeable differences could be seen in the expression of CD73, CD90, and CD105 when the cells were cultured on the various coated samples (Figure 5).
Discussion
The hMSCs have gained wide interest in cell-based tissue engineering of bone and articular cartilage, due to their multipotent capacity to differentiate into osteoblasts and chondrocytes [28,35]. However, the large number of chondrogenic hMSCs needed in clinical application has also been noted [9]. Our previous study showed that the proliferation rate of the hMSCs significantly increased when they were cultured on TiO2-coated cell culture dishes, without loss of their capacity for chondrogenic differentiation [28]. Titanium and its alloys have been widely used as implant materials in orthopedic applications because of their desirable biocompatibility and bioactivity. Moreover, silicon possesses considerable potential in biochemical applications [19,36,37]. Therefore, in the present study, we investigated whether Ti-based coatings deposited in an oxygen, nitrogen, or argon atmosphere on silicon would be beneficial for the proliferation of the hMSCs. High-speed ions of the laser plasma plume effectively ionize gas atoms. Plasma treatments have been widely used in manufacturing surface modifications that promote cell adhesion and proliferation [38].
In this study, the coatings with Ti were deposited under three different gas atmospheres, with the idea of investigating whether some coating conditions would yield surfaces optimal for hMSC proliferation while maintaining the expression of surface markers typical of hMSCs. The different coating conditions yielded surfaces with different contact angles, zeta potentials, and roughness. The hMSCs cultured on the various Ti-based coatings on silicon wafers showed protrusions and firm adhesion on the coated surfaces. Coating with Ti under the lower nitrogen pressure produced the highest contact angle with relatively smooth surfaces. The cells appeared to be slightly smaller and had a relatively round shape on Ti-based coatings on silicon deposited under the higher nitrogen pressure. The hydrophilicity of the materials obviously facilitates cell adhesion and spreading [16,39]. It has previously been shown that a reduction of 80% in cell adhesion was apparent when the contact angle was increased from 57° to 122° [40]. A hydrophilic polyurethane matrix promoted chondrogenesis of MSCs [41]. Superhydrophilic vertically aligned carbon nanotubes have permitted the adhesion and maintenance of human chondrocytes [42]. However, the results from our present study indicated that the proliferation of the hMSCs was not significantly different during the 48-h cultivation on the various Ti-based coatings on silicon.
It has been shown that surface roughness affects cell growth, adhesion, spreading, and cell functions [43-55]. Even though it has been shown that cell adhesion or proliferation could be enhanced when cells were cultured on rougher surfaces [43,46,49,51,55], surface roughness could also reduce cell adhesion, proliferation, or both [47,53,55,56]. Additionally, oxidized Ti samples with rougher surfaces improved the cell adhesion and osteogenic differentiation of hMSCs [43]; however, no positive effects on cell proliferation were observed [43]. The proliferation and differentiation of cells derived from human mandibular bone was enhanced by the surface roughness of the Ti implant [44]. Our present results show that the variations in surface properties produced by the different sample production conditions did not remarkably change the investigated cellular properties, with the exception of a minor difference in cell adhesion on the Ti-based coating deposited in nitrogen under the higher pressure compared with the others. This further confirms our previous findings that surface roughness did not significantly affect hMSC proliferation [16,26].
Gaseous plasma has been shown to improve biocompatibility by changing the chemical composition and modifying the surface charge and roughness [57,58]. A variety of different plasmas, such as oxygen, nitrogen, or argon, can be applied for surface modification. Oxygen is the most commonly used plasma treatment of surfaces, improving wettability and allowing the biocompatibility to be controlled. It has been shown that cell proliferation increased by 30% when HMEC-1 cells were cultured on oxygen plasma-treated polymers for 48 h [59]. Oxygen plasma-treated samples could enhance not only cell adhesion and proliferation, but also protein adhesion [38]. Nitrogen plasma treatment was more effective than argon and oxygen treatments in the modification of cyclic olefin copolymer microfluidic devices [60]. However, exposure of argon plasma-treated surfaces to air leads to the incorporation of oxygen or nitrogen species [61-63]. Our present study showed that the hMSCs retained the hMSC phenotype when cultured on Ti-based materials coated in oxygen, argon, or nitrogen, which further confirms our previous study performed with a TiO2-coated cell culture dish [28].
In conclusion, ultra-short pulsed laser deposition was used as a new technology for depositing Ti-based surface coatings under various atmospheres. The present results indicated that Ti-based coatings deposited on silicon wafers under oxygen, nitrogen, or argon atmospheres produced surfaces which differed in their surface characteristics but, somewhat surprisingly, all appeared to be suitable for hMSC cultivation and the maintenance of their phenotype. Thus, none of the coatings was superior in providing enhanced proliferation of hMSCs.
"year": 2016,
"sha1": "600fbaa155aa65f72a90432cadc63caf8067f37a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/9/10/827/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc8a2b53e4f7effdb602b0145e2351c334a0c30f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
225235696 | pes2o/s2orc | v3-fos-license | Silicon waveguides with graphene: coupling of waveguide mode to surface plasmons
Silicon waveguides with graphene layers have been recently intensively studied for their potential as fast and low-power electro-optic modulators with small footprints. In this paper we show that in the optical wavelength range of 1.55 μm, surface plasmons supported by the graphene layer with the chemical potential exceeding ∼0.5 eV can couple with the guided mode of the silicon waveguide and affect its propagation. On the other hand, this effect might be possibly utilized in technical applications like a very low-power amplitude modulation, temperature sensing, etc.
Introduction
Two-dimensional (2D) materials, with graphene as their most well-known representative, have been recently successfully implemented into various guided-wave photonic devices, especially modulators, due to their ability to efficiently modify the phase and/or amplitude of propagating guided modes [1-20]. Strong dependence of the surface conductivity of graphene on the chemical potential (or Fermi level energy), controlled by either doping or applied voltage, makes it possible to modify the complex effective refractive index of an optical waveguide with a graphene sheet overlay [21]. For the optical communication wavelength range of 1550 nm, a graphene layer with the chemical potential µ_c below about 0.5 eV introduces a very strong optical attenuation, while for µ_c > 0.5 eV the attenuation is low while the real part of the effective refractive index is changed. In principle, a graphene layer can thus be utilized for both amplitude and phase electro-optic modulation.
We have recently compared various approaches to numerical modelling of light propagation in a silicon waveguide with a graphene overlay [22], and we revealed quite irregular fluctuations of both the attenuation and the phase of the guided mode as the chemical potential varies above approximately 0.5 eV. At first, this effect appeared to be a numerical artifact of the simulation method used; however, it was reproduced in other, completely independent simulation approaches, and it was also reported in [23] (see figure 1). Since we were interested in the mechanism behind this effect, we decided to analyze it in more detail. In this communication, we show that this effect is due to the coupling of surface plasmon modes supported by the graphene stripe ('ribbon plasmons' [24]), even at the telecommunication optical wavelength band around 1550 nm, with the mode of the silicon waveguide. To demonstrate this, we calculate the complex propagation constant of the (quasi-)TE mode of the silicon waveguide loaded with the graphene stripe using a strongly simplified coupled mode theory (CMT) and compare it with a full-wave numerical simulation using the commercial software package COMSOL Multiphysics [25]. We show that despite the rather crude simplifications used in our implementation of the CMT, the agreement of both results is convincing.
A simplified structure of the Si rib waveguide modulator with a graphene layer used for the comparison, inspired by the design described in [7], is shown in figure 1.
The geometrical parameters of the waveguide structure analyzed here are close to those used in practical devices: the total silicon layer thickness is h = 220 nm, the rib waveguide width is w = 450 nm, and the residual silicon thickness after the shallow etch is d = 50 nm. The graphene layer is deposited only on the top of the rib waveguide, separated from silicon by a thin SiO2 layer, t = 10 nm. The superstrate is air. The wavelength of the optical wave propagating in the waveguide is 1550 nm.
The paper is organized as follows: in the next section, we review the properties of surface plasmons at the vacuum optical wavelength of 1550 nm supported by a graphene layer, and present an approximate solution of plasmonic modes propagating along the graphene stripe. Then we describe a simplified coupled-mode theory for the coupling of the multitude of graphene plasmonic modes with the mode of the silicon waveguide and confirm the qualitative CMT results with a 'rigorous' full-wave numerical electromagnetic solution obtained with COMSOL Multiphysics [25]. In the final section, we discuss the effect of coupling of the surface plasmons with the waveguide mode on the physical properties of the waveguide structure that may be useful in design and operation of silicon photonic devices with graphene layers.
Surface plasmons on graphene
Although the properties of surface plasmons supported by a graphene layer have already been analyzed in detail [24, 26-28], most publications concentrate on the mid-infrared spectral region. We thus first review the properties of surface plasmons at the telecom wavelength band of 1550 nm propagating along a graphene sheet sandwiched between two dielectric media, in this case SiO2 and air. The optical properties of a graphene monolayer are determined by its complex surface conductivity σ_s, which can be described with an approximate expression comprising intraband and interband contributions [1,27] (note that we use the convention exp(−iωt) for time-harmonic quantities):

σ_s(ω) = [i e² k_B T / (π ℏ² (ω + i/τ))] [µ_c/(k_B T) + 2 ln(exp(−µ_c/(k_B T)) + 1)] + [i e²/(4π ℏ)] ln[(2|µ_c| − ℏ(ω + i/τ)) / (2|µ_c| + ℏ(ω + i/τ))].   (1)

Here, ω, µ_c, τ, T, k_B and ℏ are the circular frequency of light, the chemical potential of the graphene layer, the time constant corresponding to the graphene relaxation time, the absolute temperature, the Boltzmann constant and the reduced Planck constant, respectively. We used the following values in our simulations: ω = 1.216 × 10^15 s⁻¹ (corresponding to the optical free-space wavelength of 1550 nm), τ = 0.2 ps, and T = 300 K. The dependences of the real and imaginary parts of the surface conductivity on the chemical potential, calculated from (1) for λ = 1550 nm, are shown in figure 2. Note that in the range of µ_c > 0.5 eV, the positive imaginary part of the surface conductivity strongly prevails. This is a condition allowing propagation of a surface plasmon at the interfaces of a graphene layer, considered as an infinitely thin layer with a finite surface conductivity σ_s, sandwiched between two dielectrics.
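As a quick numerical check of how (1) behaves across the chemical potentials considered here, the following minimal Python sketch evaluates the surface conductivity at λ = 1550 nm. It assumes the standard intraband (Drude) plus interband approximation written out in (1); the function and variable names are ours, and the printed values are illustrative, not data from the paper.

```python
import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge, C
kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def sigma_s(omega, mu_c_eV, tau=0.2e-12, T=300.0):
    """Approximate complex surface conductivity of graphene, eq. (1),
    intraband (Drude) + interband terms, convention exp(-i*omega*t)."""
    mu = mu_c_eV * e                      # chemical potential, J
    w = omega + 1j / tau                  # complex frequency
    intra = (1j * e**2 * kB * T / (np.pi * hbar**2 * w)) * (
        mu / (kB * T) + 2.0 * np.log(np.exp(-mu / (kB * T)) + 1.0))
    inter = (1j * e**2 / (4.0 * np.pi * hbar)) * np.log(
        (2.0 * abs(mu) - hbar * w) / (2.0 * abs(mu) + hbar * w))
    return intra + inter

omega = 1.216e15  # rad/s, lambda = 1550 nm
for mu in (0.3, 0.5, 1.0, 1.6):
    s = sigma_s(omega, mu)
    print(f"mu_c = {mu:.1f} eV: sigma_s = {s.real:.3e} {s.imag:+.3e}j S")
```

For µ_c well above 0.5 eV the imaginary part dominates, which is the plasmon-supporting regime discussed in the text.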
Surface plasmon on an infinite graphene sheet
For further considerations, we need to know the surface plasmon propagation constant and field distribution as functions of the chemical potential of the graphene layer with the parameters given above. Let us first consider propagation of a (TM polarized) surface plasmon in the z direction on a planar structure unlimited in the ±x direction (the coordinate axes are considered as in figure 1). Such a wave has a single magnetic field intensity component H_x and two electric field intensity components E_y and E_z. Their field distributions in the dielectric media are

H_x,1(y, z) = H_1 exp(−k_0 p_1 y) exp(iβ_sp z) for y > 0 (air),
H_x,2(y, z) = H_2 exp(k_0 p_2 y) exp(iβ_sp z) for y < 0 (SiO2),   (2)

where N_sp = β_sp/k_0 is the (complex) effective refractive index of the surface plasmon, also called a modal index [29], β_sp is the propagation constant, p_1 = (N_sp² − ε_air)^1/2 and p_2 = (N_sp² − ε_SiO2)^1/2 are the (normalized complex) transverse decay constants into air and the SiO2 substrate, respectively, and k_0 = ω/c = 2π/λ is the vacuum wavenumber. Next, it follows from Maxwell equations that

E_y,j = −[N_sp/(c ε_0 ε_j)] H_x,j (j = 1, 2),   E_z,1 = [i p_1/(c ε_0 ε_air)] H_x,1,   E_z,2 = −[i p_2/(c ε_0 ε_SiO2)] H_x,2.   (3)

The field continuity conditions at the graphene layer read

E_z,1(0) = E_z,2(0),   H_x,2(0) − H_x,1(0) = σ_s E_z(0).   (4)

The dispersion equation for the surface plasmon is then obtained from (3) and (4) in the form

ε_air/p_1 + ε_SiO2/p_2 = −iσ_s/(c ε_0).   (5)

Realizing that p_2 = (p_1² + ε_air − ε_SiO2)^1/2, this equation can be cast into a fourth-degree polynomial in the variable p_1. However, not all roots of this polynomial also satisfy the original dispersion equation (5). Moreover, according to (2), the existence of the surface plasmon as a physically realizable wave confined to the graphene layer and decaying in the direction of propagation requires that the real parts of both p_1 and p_2 and the imaginary part of the effective refractive index N_sp = (p_1² + ε_air)^1/2 are positive. At the wavelength of 1550 nm, just one surface plasmon wave is supported in our structure in the range of the chemical potential µ_c considered in figures 3 and 4. These figures show the real and imaginary parts of the effective refractive index of the surface plasmon, its propagation length L_sp = 1/[2k_0 Im{N_sp}], and its penetration depths into air and SiO2, d_air = 1/(k_0 Re{p_1}) and d_SiO2 = 1/(k_0 Re{p_2}), respectively. Note that in the range of µ_c > 1 eV, the propagation length typically reaches a fraction of a micrometer, and the penetration depths into both dielectric media are practically the same, of the order of a few nanometers, due to the very large effective index of the plasmon mode, p_1 ≈ p_2 ≈ N_sp. From this approximation and (3), it also follows that the electric field intensity components are practically equal in magnitude, although mismatched in phase, E_z ≈ −iE_y,1 ≈ iE_y,2, and the magnetic field components are scaled with respect to the permittivities of the surrounding media, H_x,1/ε_air ≈ −H_x,2/ε_SiO2. The very strong vertical confinement of the surface plasmon justifies the fact that the proximity of silicon was neglected in this analysis. Its influence will be taken into account later in the CMT approach.
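A compact way to obtain N_sp numerically is to solve (5) by root finding in the normalized variable p_1. The sketch below builds on the sigma_s function from the previous snippet; the assumed permittivities (ε_air = 1, ε_SiO2 ≈ 1.444²) and the analytic initial guess are our illustrative assumptions, not values quoted in the paper.

```python
import numpy as np
from scipy.optimize import fsolve

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s
eps_air = 1.0
eps_sio2 = 1.444**2       # assumed SiO2 permittivity at 1550 nm

def solve_Nsp(sigma):
    """Solve the dispersion relation (5) for the plasmon effective index N_sp."""
    def residual(x):
        p1 = x[0] + 1j * x[1]
        p2 = np.sqrt(p1**2 + eps_air - eps_sio2)
        r = eps_air / p1 + eps_sio2 / p2 + 1j * sigma / (c * eps0)
        return [r.real, r.imag]
    # Analytic guess from p1 ~ p2 ~ N_sp: p1 = i (eps1 + eps2) c eps0 / sigma
    g = 1j * (eps_air + eps_sio2) * c * eps0 / sigma
    sol = fsolve(residual, [g.real, g.imag])
    p1 = sol[0] + 1j * sol[1]
    return np.sqrt(p1**2 + eps_air)

k0 = 1.216e15 / c
Nsp = solve_Nsp(sigma_s(1.216e15, 1.0))   # sigma_s from the previous sketch
print("N_sp =", Nsp, "; L_sp =", 1 / (2 * k0 * Nsp.imag), "m")
```

Consistent with the text, for µ_c around 1 eV this yields an effective index of the order of tens, i.e. nanometre-scale penetration depths and a sub-micron propagation length.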
Surface plasmons on a graphene stripe
The surface plasmon mode propagating in the z direction, described in the previous section, cannot couple with the guided mode of the silicon waveguide because of a huge (1-2 orders of magnitude) mismatch of their effective refractive indices (note that the effective refractive index of the quasi-TE mode of the silicon waveguide at λ = 1550 nm calculated with COMSOL is 2.3754). However, the graphene stripe on top of the silicon ridge waveguide in figure 1 supports a number of higher-order ('nanoribbon' [24, 30-32]) modes with smaller propagation constants, and some of them can match that of the mode of the silicon waveguide. We will use the approach of the effective-index method (EIM) [33] to approximately determine their propagation constants and field distributions. Similarly to a mode of a planar dielectric waveguide, the plasmonic modes of a graphene stripe result from the interference of two plasmons that propagate under some angle with respect to the z axis and reflect from the stripe edges. Following the idea of the EIM, the dispersion equation for such modes can be written in the form of the transverse resonance condition (in the x direction)

R² exp(2i k_0 q_m w) = 1,   (6)

where k_0 q_m is the transverse propagation constant of the mth mode, m is the mode number, and R is the (amplitude) reflection coefficient of the plasmon from the edge of the graphene stripe.
Reflection and scattering of a surface plasmon from discontinuities in the graphene plane, including the reflection from the edge of a graphene stripe, has already been studied in detail and reported in a number of recent papers [31, 34-39]. It has been found that the reflection at the stripe edge is close to total, |R| ≈ 1, while the phase of the reflection coefficient depends nontrivially on the detailed morphology of the graphene edge, on the inhomogeneity of the graphene conductivity due to the redistribution of charges, on the excitation of evanescent waves near the stripe edge, etc. To keep our analysis as simple as possible, we decided not to consider this anomalous phase shift. Numerical tests with various kinds of boundary conditions (perfectly electric or magnetic (PMC) walls and Fresnel reflection coefficients) finally led us to the application of the PMC approach. This choice allows for a very simple evaluation of the effective refractive indices and the electromagnetic field distributions of the graphene stripe modes, which are quite close to those obtained using more rigorous COMSOL simulations. Some lowest-order modes of the graphene stripe (including central and edge 'ribbon plasmons' [32]) are out of the scope of this approach. However, these modes cannot couple with the mode of a silicon waveguide due to the strong mismatch of their propagation constants.
By taking R² = 1, we obtain the solution of the dispersion equation (6) in the form

q_m = mπ/(k_0 w) = mλ/(2w).   (7)

The effective refractive indices of the stripe plasmon modes are then obtained from the relation

N_m = (N_sp² − q_m²)^1/2.   (8)

In our approximation, the q_m are real numbers. Since N_sp is complex, the effective refractive indices N_m are complex too. Consequently, there is no clear transition between propagating and evanescent plasmonic modes of the graphene stripe. However, since the imaginary part of N_sp is significantly smaller than its real part, the real parts of the high-order mode indices N_m reach low enough values for efficient coupling with the fundamental (quasi-)TE silicon waveguide mode. However, their imaginary parts are nonzero, which indicates that the coupling may introduce a rather significant loss. As an example, the real and imaginary parts of the effective refractive indices N_m of the graphene stripe are plotted in figure 5 for two values of the chemical potential, µ_c = 1.0 and 1.6 eV. A full-vector field distribution of the plasmon stripe modes is given by the superposition of two surface plasmons with equal amplitudes and with the wave vectors

k_1^(±) = k_0 (±q_m x_0 + i p_1 y_0 + N_m z_0) for y > 0,
k_2^(±) = k_0 (±q_m x_0 − i p_2 y_0 + N_m z_0) for y < 0,

where the sign ± relates to the direction of propagation in the (x, z) plane, and the subscripts 1, 2 relate to the regions y > 0 and y < 0, respectively, in accordance with (2). Note that all components of the electric and magnetic fields are nonzero, except for H_y. Basics of the CMT describing simultaneous coupling of the mode of the silicon waveguide with several plasmon modes of the graphene stripe are briefly described in the next section.
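The stripe-mode spectrum (7)-(8) is straightforward to tabulate. The short sketch below lists the modes whose real effective index comes close to the silicon waveguide mode index of 2.3754; the plasmon index N_sp fed in here is an illustrative placeholder of the order found above for µ_c around 1 eV, not a value taken from the paper.

```python
import numpy as np

lam = 1.55e-6            # vacuum wavelength, m
k0 = 2 * np.pi / lam
w = 450e-9               # stripe width equal to the rib width, m

def stripe_modes(Nsp, m_max=80):
    """Effective indices N_m of graphene stripe plasmon modes, eqs. (7)-(8)."""
    m = np.arange(1, m_max + 1)
    q = m * np.pi / (k0 * w)               # real transverse constants, eq. (7)
    return m, np.sqrt(Nsp**2 - q**2 + 0j)  # complex indices, eq. (8)

m, Nm = stripe_modes(85.0 + 5.0j)          # placeholder N_sp
near = np.abs(Nm.real - 2.3754) < 0.5      # nearly phase-matched modes
for mi, Ni in zip(m[near], Nm[near]):
    print(f"m = {mi:2d}: N_m = {Ni:.4f}")
```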
Coupling of waveguide mode with surface plasmons on graphene stripe
Simultaneous coupling of a mode of a silicon waveguide with several plasmonic modes of a graphene stripe on top of the silicon waveguide can be considered as mutual coupling among modes of several parallel waveguides. One of the waveguides is the silicon waveguide, while the other 'waveguides' correspond to individual plasmonic modes supported by the graphene stripe. We denote the field distribution of the eigenmode of the silicon waveguide without graphene as

{E_1, H_1}(x, y, z) = {e_1, h_1}(x, y) exp(iβ_1 z),   (9)

where β_1 is its propagation constant. Similarly, the field distributions of the plasmonic modes are

{E_m, H_m}(x, y, z) = {e_m, h_m}(x, y) exp(iβ_m z), m = 2, …, M + 1,   (10)

where M is the number of plasmonic modes taken into account. In this approach, the eigenmodes ('supermodes') of the complete waveguide system with the (generally complex and anisotropic) permittivity distribution ε̂(x, y) are constructed as linear superpositions of eigenmodes of individual waveguides with the corresponding permittivity distributions ε̂_m(x, y),

{E_s, H_s}(x, y, z) = Σ_m a_sm {e_m, h_m}(x, y) exp(iγ_s z),   (11)

where γ_s are the propagation constants and a_sm are the expansion coefficients of the 'supermodes'. Specifically, ε̂ is the complete permittivity distribution of the waveguide structure including the graphene layer as shown in figure 1, ε̂_1 is the permittivity distribution of the silicon waveguide in figure 1 without the graphene layer, and ε̂_m, m = 2, …, M + 1 are identical permittivity distributions containing the graphene stripe on the SiO2 pedestal of the width w, surrounded by air. Applying the principles of the complex CMT [40,41] (chapter 10), we arrive at the following generalized eigenvalue equation for the complex amplitudes a_sn and the propagation constants γ_s:

Σ_n (β_n A_mn + C_mn) a_sn = γ_s Σ_n A_mn a_sn,   (12)

where

A_mn = ∬_S (e_m × h_n + e_n × h_m) · z_0 dx dy,   C_mn = ωε_0 ∬_S (ε̂ − ε̂_n) e⁻_m · e_n dx dy.   (13)

Here, e⁻_m denotes the electric field distribution of the mth mode with inverted z-component, and S is the cross-section of the whole waveguide structure. Using similar arguments as in [22], we obtain

C_1n = iσ_s ∫_G (e⁻_1)_t · (e_n)_t dx,   (14)

where the integration path G runs across the graphene stripe and the subscript t denotes the field components tangential to the graphene layer. In our simplified implementation of the CMT, we further neglect the off-diagonal elements of the matrix A, ignore mutual coupling among the various graphene modes, i.e. we set C_mn = 0 for m, n > 1 and m ≠ n, and assume that C_n1 = C_1n, where C_1n is given by equation (14). Such a procedure significantly simplifies the calculation while keeping all aspects important for our analysis: the diagonal terms of the matrix C represent corrections to the propagation constants of the modes due to the modified waveguide structure and thus affect the phase mismatch among the interacting modes, while the off-diagonal elements of C describe the coupling of the silicon waveguide mode with the plasmonic modes of the graphene stripe. Note that if we retain only the first terms A_11 and C_11 in (12), i.e. if we neglect the coupling, we obtain for the propagation constant γ_1 an expression equivalent to that obtained with the perturbation method in [22], equation (10).
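To make the structure of (12) concrete, here is a minimal sketch of the simplified CMT eigenproblem cast as a generalized matrix eigenvalue problem. All numerical values (mode indices, coupling coefficients) are placeholders chosen for illustration; they are not taken from the paper.

```python
import numpy as np
from scipy.linalg import eig

lam = 1.55e-6
k0 = 2 * np.pi / lam

# Index 0: silicon waveguide mode; indices 1..M: stripe plasmon modes.
# Placeholder propagation constants and couplings, for illustration only.
beta = k0 * np.array([2.3754, 2.10 + 0.30j, 2.38 + 0.28j, 2.65 + 0.26j])
M = len(beta) - 1
c1n = k0 * 1e-2 * (1.0 + 0.5j) * np.ones(M)   # placeholder C_1n, eq. (14)

A = np.eye(M + 1, dtype=complex)              # off-diagonal A_mn neglected
C = np.zeros((M + 1, M + 1), dtype=complex)
C[0, 1:] = c1n                                # coupling Si mode <-> plasmons
C[1:, 0] = c1n                                # C_n1 = C_1n assumed

# Eq. (12) in matrix form: (A diag(beta) + C) a = gamma A a
gam, vecs = eig(A @ np.diag(beta) + C, A)

s = np.argmax(np.abs(vecs[0, :]))             # supermode dominated by Si mode
print("N_eff of the Si-like supermode:", gam[s] / k0)
```

The resonant behaviour discussed below corresponds to one of the plasmonic β_m sweeping through the silicon-mode value as µ_c is tuned, which perturbs both the real and imaginary parts of the Si-like eigenvalue.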
The variation of the real and imaginary parts of the effective refractive index of the (quasi-)TE mode of the silicon waveguide due to the presence of the graphene layer is shown in figure 6 as a function of its chemical potential. The results of three methods are presented there. The red line corresponds to the results of the perturbation method of [22], which does not take into account the coupling with the surface plasmon modes in the graphene stripe, the blue line represents the result of a numerical simulation with COMSOL, and the yellow line shows the CMT results. In the COMSOL simulations, the graphene layer was represented by a boundary condition with surface conductivity [22]. The results obtained using COMSOL and our approximate CMT method are in fairly good agreement and provide convincing physical arguments for the presence of coupling between the TE-polarized silicon waveguide mode and graphene surface plasmon modes.
The graphs in figure 6 show that the coupling of the silicon waveguide mode with the surface plasmon modes of the graphene stripe has a typical resonant character. When the chemical potential is gradually changed (by chemical doping or an applied external electric field), the propagation constant k_0 N_sp of the surface plasmon on the graphene sheet changes too, as shown in figure 3. Consequently, the propagation constants β_m of the plasmonic modes of the graphene stripe also change gradually. Only one of them at a time comes into resonance with the propagation constant of the silicon waveguide, and both the phase and amplitude of the waveguide mode are affected by the coupling. For µ_c > 1 eV, the peaks of the imaginary part of the effective refractive index of the silicon waveguide mode caused by coupling with the graphene stripe plasmon modes are of the order of 10⁻⁴, which corresponds to a rather strong attenuation of the silicon waveguide of the order of several dB/mm. Figure 7 shows the distribution of the horizontal electric field intensity component of the fundamental TE mode of the silicon rib waveguide coupled with a surface plasmon on the graphene stripe, calculated with COMSOL, for the resonance value of µ_c = 1.61 eV (see figure 6). The 'decoration' of the mode field with the field distribution of the surface plasmon is apparent. Note that, according to (13), only surface plasmon modes with the same symmetry can couple with the silicon waveguide mode.
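The quoted attenuation follows directly from the imaginary part of the effective index. The two-line check below converts Im{N_eff} = 10⁻⁴ (the order of magnitude of the resonance peaks read from figure 6) into a power attenuation in dB/mm:

```python
import numpy as np

lam = 1.55e-6
k0 = 2 * np.pi / lam
im_neff = 1e-4                       # order of the resonance peaks in figure 6

# Field ~ exp(-k0 Im{N} z) => power attenuation 20 log10(e) k0 Im{N} per metre
alpha = 20 / np.log(10) * k0 * im_neff
print(f"{alpha / 1e3:.1f} dB/mm")    # ~3.5 dB/mm, i.e. 'several dB/mm'
```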
Although the dominant electric field component of the plasmons is vertical (E_y), there is, curiously, negligible coupling between the graphene plasmons and the TM-polarized waveguide mode. The reason stems from the nature of the coupling mechanism: according to (13), only electric field components parallel to the graphene layer can contribute to the coupling, since there is no graphene conductivity in the vertical direction. When a graphene layer is also deposited on the side walls of the silicon waveguide (as was considered, e.g., in [22]), graphene stripe plasmon modes supported by the side walls can contribute to the coupling too. As a result, the (quasi-)TM waveguide mode is also affected, the graphene mode spectrum is more complicated, and so are the phase and attenuation dependences of the waveguide mode on the chemical potential of the graphene layer.
Conclusions
Propagation of surface plasmons on graphene sheets has been studied previously, mostly in the THz and infrared frequency ranges. On the other hand, graphene layers on silicon waveguides have recently been used very often in the design and construction of photonic devices for modulation, switching, etc. These devices typically operate within the telecommunication band around 1550 nm, where surface plasmon propagation on graphene layers has attracted much less attention. In this communication, we show that in the range of the chemical potential of graphene above 0.5 eV, surface plasmons supported by graphene stripes deposited on top of a silicon waveguide rather strongly affect its guiding properties due to the coupling of the surface plasmons with the mode propagating in the waveguide. This effect has been independently studied both by the approximate method based on the CMT and by 'rigorous' numerical simulations using COMSOL. Although the accuracy of the approximate method is not high, it offers a deep insight into the process and contributes to the understanding of the details of the coupling mechanism. The effect need not be considered harmful for the operation of silicon photonic devices; rather, it may be employed in the design of specific devices such as low-power modulators and sensors. Although our analysis was focused on the near-infrared telecom wavelength range, we are convinced that this effect takes place not only in the near- to mid-IR silicon transparency window, but also in the THz spectral range, where silicon waveguides are being used as well [18, 42-44].
"year": 2020,
"sha1": "ba6733d339fc7b2a54a6555af1d35f21a6266bed",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2040-8986/aba965",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "2b0149ee70672f651a55f76a0b9d4479487e0eaa",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Recovery of Industrial Wastes as Fillers in the Epoxy Thermosets for Building Application
Epoxy resins are currently used in many areas of construction, such as resistant coatings, anchors, fibre-reinforced polymer (FRP) composites, grouts, etc. This paper deals mainly with epoxy composites that can be applied during the rehabilitation of concrete structures. The influence of the filler type on epoxy thermoset composites was monitored, whilst three different types of epoxy resin were used in order to achieve a better representation and confirmation of the results. The fillers tested were mainly secondary raw materials, including a pre-treated hazardous waste (neutralisation sludge), representing various particle shapes and sizes, while their amount in the epoxy matrix was chosen with regard to optimal viscosity and workability. Physical and mechanical parameters, like compressive and flexural strengths, cohesion with concrete and thermal expansion of the epoxy composites containing various fillers, were determined, as was the chemical resistance against aggressive media. The microstructure of the epoxy composites with different filler types was monitored using scanning electron microscopy (SEM) supported by energy-dispersive X-ray spectroscopy (EDX). Computed tomography (CT) was also used for the evaluation of the cohesion of the epoxy composites with concrete and the dispersion of the filler in the epoxy matrix.
Epoxy resins are polymeric substances that are mainly formed by cross-linking when the epoxy group enters the reaction [7].
During the curing of epoxide resins, crosslinking between the epoxy molecules and reactive groups on each end of the curing agent occurs [8].
The principle of curing epoxy resins using polyamines is shown in Figure 1.

Figure 1. The polyaddition reaction of amines (the amine reacts with the epoxide oxygen and forms a hydroxyl group) [9].

The means of crosslinking fundamentally affects the properties of epoxy thermosets. Generally, epoxy resins have excellent cohesion with a substrate, low shrinkage and good permeability resistance to water, acids, alkalis and other corrosive substances [6].
The monitoring of the compressive strength of polymeric concrete and polymeric mortars based on epoxy resins has already been described in many previous scientific works. It was observed that the compressive strength depends mainly on the resin content [10]. Rebeiz et al. [11] proved that by adding 15% of fly ash into the resin, the compressive strength increases by up to 30%. The application of a filler significantly influences the mechanical, thermal and processing properties of epoxy composites [12]. Atzeni et al. [13] dealt with the substitution of a conventional quartz flour filler by fly ash in epoxy composites based on bisphenol A, and it was found that the mechanical properties of the epoxy composites did not differ significantly. Lin et al. [14] examined different approaches to the thermal conductivity of powder-filled epoxy resins depending on various particle shapes and sizes. A comparative study reported that a combination of fly ash and epoxy resin can provide a higher mechanical strength than an epoxy composite containing silica fume. The addition of a filler can improve a matrix's compressive and tensile properties, although its flexural strength may decrease [15]. The mechanical properties of epoxy polymer concrete are influenced by the matrix-to-aggregate ratio, and the tensile and flexural properties depend on the resin content in the epoxy concrete [16]. From an economical point of view, it is recommended that the minimum amount of resin is used in order to minimise the cost [17], and, therefore, it is important to optimise the mix proportions. Replacing primary raw materials with secondary ones will further improve the economics of epoxy composites. By using suitable by-products, it would even be possible to improve some properties of epoxy composites, not only the physical and mechanical parameters but also their long-term durability. However, the effect of the introduction of a filler on the properties of a polymer matrix is still not fully understood [18]. Mainly for these reasons, the dependences of the characteristic properties of epoxy composites, such as compressive and flexural strength, adhesion to concrete, but also chemical resistance, on the types of fillers used are monitored and evaluated in this paper.
For the selection of suitable fillers for epoxy composites, the maximum possible filler content in the polymer matrix and the particle size and shape index are important [19]. In the study by Jin et al. [20], experiments were carried out with filler ratios (nano-Al2O3 and nano-SiC particles) within a range of 5-15 wt.%, and the thermal properties were monitored, also focusing on the morphology and the use of mineral fillers in the epoxide matrix. The coefficient of linear thermal expansion is important for analysing the processes of structure formation and the behaviour of epoxy composites, with different amounts and types of fillers, under the influence of a thermal field [21].
It was found that the addition of waste glass powder in an amount of 7.4 to 35.9% has a positive effect on the pull-off strength of epoxy polymer mortars [22]. An irregular crumble shape of the filler significantly increases the strength of the epoxy composite compared to the spherical shape of glass powder [23]. It was proven that the tensile and compressive strengths of epoxy composites were improved with increasing fly ash content [24]. Environmental acceptance of waste foundry sands in polymer concrete requires reliable knowledge of the sand composition. The incorporation of waste foundry sands in epoxy composites can contribute to sustainable industrial growth and the production of high-quality polymer concrete [25,26]. Neutralisation sludge (NS) is a hazardous waste that cannot be used without its complete incorporation into another material, as there is a risk of hazardous pollutants being released into the environment [27]. Heavy metals in neutralisation sludge could have a positive effect on the properties of epoxy-based materials [28].
However, a detailed comparison of the influence of the filler particle shape on the behaviour of the polymeric compound, in terms of both the rheological properties and the properties of the resulting product, has not been summarised in any available literature. Following previous research on epoxy composites containing fine waste from the production of mineral wool board insulation [29], and on the use of secondary raw materials as fillers in epoxy polymer concrete [30], in which the temperature and chemical resistance of a repair composite [31] were also monitored, this paper studies the influence of the particle shape and size of fillers based on industrial waste on the properties of the epoxy composite.
By using waste products as fillers in epoxy composites, it is possible to save primary resources and achieve the required properties of epoxy composites in a much more environmentally friendly way. The use of pre-treated hazardous waste could, in particular, reduce the volume of hazardous waste in landfills, as tons of unused waste are landfilled worldwide.
Tested Formulations
The proportions between the resin (epoxy resin (A), hardener (B)) and the fillers are stated in Figure 2. The mix ratio by weight of resin to hardener was the same for the ER2 and ER3 binders. The tested materials (epoxy composites) were prepared by first mixing the filler into component A (epoxy resin), mixing slowly for 5 min, then adding component B (hardener) and mixing slowly for another 5 min so that no air was introduced into the mixture. Finally, samples for the individual tests were prepared. For the preparation of samples for compressive and flexural strength determination, the fresh mixture was poured into a silicone triple mould sprayed with a demoulding agent, and the mould was finally tapped lightly to remove excess air. The silicone mould was also used to prepare the samples for abrasion resistance. After 24 h, the samples were demoulded and conditioned at 23 °C and a relative humidity of 50% (laboratory conditions) before testing. To prepare samples for the determination of hardness and impact resistance, the fresh material was applied onto the surface of a cement-bonded particleboard.
Epoxy Resin
During the experimental verification stage, four types of epoxy resins were used, differing mainly in the type of hardener used (see Table 1). The manufacturer Lena Chemical, Ltd. (Sternberk, Czech Republic) supplied the epoxy resin and hardener for the ER1 binder, and IN-CHEMIE Technology, Ltd. (Olomouc, Czech Republic) supplied the epoxy resins and hardeners for the ER2 and ER3 binders. The properties of the epoxy binders are stated in Table 2. To verify the influence of the type of filler on the physical and mechanical parameters of epoxy composites, it is more appropriate to use a number of different types of epoxy resins in order to improve reproducibility and confirm the results. All epoxy resins used can be characterised by the Bfl-S1 classification for reaction to fire according to the standard EN 13501-1 [32] (Bfl: combustible flooring materials with a very limited contribution to fire; S1: quantity/speed of smoke emission during combustion absent or weak).
Fillers
Based on the differences in grain morphology, the fillers were chosen to cover the widest possible spectrum of shape properties. Another evaluation criterion for the selection of input raw materials was the maximum possible use of secondary raw materials and waste materials, from which a significant reduction of the ecological impacts of industrial production can be obtained. Quartz sand mixtures (Chejn, Ltd., Sušice, Czech Republic) were used as the reference filler. They are impurity-free and have an optimal round grain shape, and they are mainly used as a filler for polymer concrete (PC), resins, grouts, grouting and backfills. The sand fraction of 0-1.5 mm (ISG A1), which is commonly used for epoxy mortars, was used within the research.
Waste Glass from Solar Panels (WGS)
The solar (photovoltaic, PV) panels that were used in this work are manufactured by the QS Solar company (Nantong) in China. These panels are thin-layer modules with a base made of amorphous SiO2 and have not been polluted by other materials. Currently, it is estimated that the panels, whose service life is defined by a 20% reduction in efficiency, will remain at a sufficient quality level for 30-40 years following installation. However, in most cases, the main reason for the disposal of a panel is mechanical damage caused during transportation or installation. The biggest problem for lower-quality panels is delamination, whereby the 'sandwich' structure of a PV panel comes apart due to the effects of temperature and UV radiation. In order to recycle PV panels, the PV Cycle system was devised. This is a Europe-wide activity engaged in by the manufacturers and importers of photovoltaic (PV) panels based on voluntary responsibility for the product during its whole service life. One collection container is intended for crystal quartz panels, and the second container is intended for thin-layer panels, for which different recycling is used. Glass accounts for the largest part of the weight of crystal PV panels (60-70%), followed by the aluminium frame (approximately 20%), while for thin-layer panels the glass and aluminium parts account for at least 95%. The highest number of panels was installed in the Czech Republic during 2010, namely 160,000 tons of PV panels, whose service life is estimated to end in 2040 [33]. When using the supplied, already non-functioning panels, the aluminium frames had to be removed first. The top glass layer was exceedingly difficult to remove from the polymeric surface, which is why, following the removal of all metal parts, the panel had to be placed into the ball mill OM (tumbling drum)-20f (BRIO Hranice, Ltd., Hranice, Czech Republic) for approximately 20 min, where the glass became separated from its tenacious polymeric underlay. The individual parts were then ground using the ball mill down to the required size (fraction of 0-1.5 mm) for approximately ten minutes.
Waste from the Production of Mineral Insulation Boards (RGI)
The waste from the production of insulation boards made of mineral wool containing a high proportion of recycled glass (>80%) was selected as another suitable filler in the form of a secondary raw material for the rehabilitation compounds being developed. This is a dry by-product of the production process that falls off from below a pulper in front of a hardening chamber, so it does not contain any organic elements, and it is, therefore, possible to classify it as recycled glass without organics.
Fly Ash (FA)
Filter fly ash from the combustion of hard coal in a thermal power plant (Veolia, Plc., Třebovice, Czech Republic) was also selected as a filler. The FA was contaminated as a result of flue gas denitrification (DeNOx) using Selective Non-Catalytic Reduction (SNCR), in which a urea solution (CO(NH2)2) is injected into the boiler at high temperature.
Neutralisation Sludge (NS)
Neutralisation sludge is created as a by-product during the surface treatment of metal elements (ŽDB, Plc., Bohumín, Czech Republic). The objective of galvanic plating is to create a metal coating on predominantly metallic base materials. The protective, anticorrosive layer shields the products against the influences of the environment, thus extending their useful service life. During the galvanic plating process, a number of waste products are generated, namely sludges and filter cakes from the neutralisation station, that contain dangerous substances. According to the European Waste Catalogue (EWC), the selected hazardous waste (HW) is classified under code 19 02 05: Sludges from physical and chemical treatment containing hazardous substances. These types of sludge are characterised by several dangerous properties, such as HP5 (Specific Target Organ Toxicity/Aspiration Toxicity), HP14 (Ecotoxic) and HP15 (Waste capable of exhibiting a hazardous property listed in Annex III of the 2008/98/ES Directive, not directly displayed by the original waste). Prior to its use, this sludge had to be dried and ground by the vibratory disc mill RS 200 (Retsch GmbH, Haan, Germany) down to a granulometry comparable to the finer fillers (FA, RGI), so that it could be successfully and uniformly dispersed in the polymer matrix.
Waste Foundry Sand (WFS)
In the Czech Republic, the annual consumption of foundry sands is about 800,000 tons, of which less than 10% is recycled. Bentonite- or cement-bonded sands and water glass mixtures are practically environmentally friendly. Due to their variable nature, natural foundry sands are being increasingly replaced by synthetic sands, into which a specified amount of bonding admixture (bentonite on most occasions) is added [34]. The sustainable usage of WFS provides an economical and environmentally friendly solution compared to the high costs of disposal in landfills and the extraction of primary raw materials [35]. The WFS used had already been adjusted to a granulometry of less than one millimetre in the foundry, so it did not need to be pre-treated before further use.
Summary of Properties of Input Raw Materials
The fillers were characterised by the determination of chemical composition (Table 3), particle size distribution (Figure 3), density and specific surface area (Table 4) and particle shape (Figure 4). As can be seen from the size distribution curves (Figure 3), the fillers used consist of fine-grained particles with differing grain sizes. Three filler types (FA, RGI, NS) were selected with a particle size of less than 0.63 mm (fine-grained) and three filler types (REF, WFS, WGS) with a particle size of less than 1.6 mm (coarse-grained). These fractions were chosen to monitor the effect of filler particle size on the tested properties of epoxy composites.
To study the synergic influence of a filler and the epoxy resins, fillers with different particle shapes were used, ranging from spherical, through arched sharp-edged, to acicular, among others (the characteristic grain shapes of the individual fillers can be seen in Figure 4).
Compressive and Flexural Strength
Compressive and three-point flexural strengths were determined pursuant to EN 12808-3 [36] on specimens shaped like small beams with dimensions of 20 mm × 20 mm × 100 mm, seven days after specimen preparation. This standard allows samples of these dimensions to be used, and it is possible to subsequently determine the compressive strength using the fractions of the beams. There are chemically resistant epoxy-based grouts on the market with quartz sand as a filler, which are basically also epoxy composites, so it is possible to use this standard. The distance between the supports was 80 mm, and the loaded area during compressive testing was 400 mm². The RT 200/10-1 D servo testing press (ratioTEC Prüfsysteme GmbH, Langenenslingen, Germany) was used for the compressive and flexural strength determination. The specimens were stored in a laboratory environment during polymerisation and subsequently until the time of testing. Compressive strength was tested for each formulation on three specimens and flexural strength on fractions of these beams, i.e., six specimens.
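For reference, both strengths follow directly from the failure loads and this specimen geometry (80 mm span, 20 mm × 20 mm cross-section, 400 mm² loaded area). The helper below is a minimal sketch; the example failure loads are hypothetical, not measured values from this study.

```python
def flexural_strength(load_N, span_mm=80.0, b_mm=20.0, h_mm=20.0):
    """Three-point flexural strength in MPa: sigma = 3 F L / (2 b h^2)."""
    return 3.0 * load_N * span_mm / (2.0 * b_mm * h_mm**2)

def compressive_strength(load_N, area_mm2=400.0):
    """Compressive strength in MPa for the 400 mm^2 loaded area."""
    return load_N / area_mm2

# Hypothetical failure loads:
print(flexural_strength(4000.0))      # 4 kN  -> 60.0 MPa
print(compressive_strength(32000.0))  # 32 kN -> 80.0 MPa
```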
Cohesion with Concrete
The cohesion of the epoxy composites was determined according to the EN 1542 standard [37]. The thickness of the layer of epoxy mortar applied on the surface of the concrete was approximately 5 mm, and the pull-off testing was performed seven days after the application of the epoxy composite using the DYNA pull-off tester PROCEQ Z16 (Proceq SA, Zürich, Switzerland). Three repetitions of the cohesion-with-concrete test were performed for each formulation.
Dynamic Viscosity
The viscosity was determined using an MYR VR 3000 V1L rotational viscometer (MYR Viscotech, Ltd., El Vendrell, Spain). The temperature of the fresh mixtures was 20 °C at the start of the determination process, and a spindle of type R6 was used to measure the viscosity. The determination was performed immediately after the mixing of all epoxy composite components. Three repetitions of the dynamic viscosity determination were performed for each epoxy mixture.
Abrasion Resistance
The abrasion resistance was determined according to the EN 13892-3 standard [38] on three specimens of each type of epoxy composite with dimensions of 70 mm × 70 mm and a thickness of at least 30 mm, these being the same as the ones used in the hardness test. The test specimens were clamped in the N-1001 RT Böhme abrasion resistance tester (FORM + TEST Seidner & Co. GmbH, Riedlingen, Germany) on a grinding track onto which the abrasive (20 g of corundum) was poured. Each test specimen was tested for 16 cycles of 22 rotations each, with a sample load of 294 N. After each cycle, the sample was rotated by 90° and new abrasive was poured onto the grinding path. Abrasion resistance was expressed as the volume reduction after 16 cycles, in cm³ per 50 cm².
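In Böhme-type testing, the reported volume loss is typically derived from the measured mass loss and the material density, referred to 50 cm² of the ground face. The sketch below assumes this mass-loss route and the 70 mm × 70 mm (49 cm²) contact face used here; the example numbers are hypothetical.

```python
def volume_loss_cm3_per_50cm2(mass_loss_g, density_g_cm3, face_cm2=49.0):
    """Wear as volume loss per 50 cm^2 of the ground face (EN 13892-3 style)."""
    return (mass_loss_g / density_g_cm3) * (50.0 / face_cm2)

# Hypothetical reading: 9.8 g lost over 16 cycles, composite density 1.9 g/cm^3
print(f"{volume_loss_cm3_per_50cm2(9.8, 1.9):.2f} cm3 per 50 cm2")
```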
Impact Resistance
The impact resistance of the epoxy composites was determined according to the EN ISO 6272-1 standard [39] using a falling weight with a large-area striker. The epoxy mortars were first applied in a thickness of approximately 4 mm to a cement particle board and then tested for impact resistance after seven days. Two samples of each formulation were tested, and the test, with the same drop height of the weight, was performed at five spots at least 2 cm apart.
Hardness
The surface hardness of the epoxy composites was determined according to the standard EN ISO 868 [40]. A type D TQC hardness tester, LD0550 series, was used to determine the hardness of the epoxy resin (ER) based materials, which show high hardness. The hardness was determined on the samples used for determining the impact resistance, at five spots at least 2 cm apart.
Thermal Expansion
The coefficient of linear thermal expansion (α) was determined using the CLASIC 30/100/15DIL dilatometer (CLASIC CZ, Ltd., Řevnice, Czech Republic). The samples had the same dimensions as in the determination of flexural strength (20 mm × 20 mm × 100 mm). The measurement was executed within a temperature range of 20-60 °C for approximately 18 h. Three repetitions were performed with each type of epoxy composite.
The coefficient of thermal expansion (α) was determined according to the following equation:

α = ΔL / (L_0 · ΔT),

where α is the coefficient of linear thermal expansion in (K⁻¹), L_0 is the original length of the specimen in (mm), ΔT is the temperature change in (K), and ΔL is the change in the length of the specimen in (mm).
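A worked example of this formula for the 100 mm specimens and the 40 K temperature window used here; the elongation value is a hypothetical dilatometer reading, not a measured result.

```python
def linear_expansion_coefficient(dL_mm, L0_mm=100.0, dT_K=40.0):
    """alpha = dL / (L0 * dT), in 1/K; defaults match the 100 mm specimen
    and the 20-60 degC range used in this study."""
    return dL_mm / (L0_mm * dT_K)

# Hypothetical reading: 0.14 mm elongation over the 40 K sweep
print(f"alpha = {linear_expansion_coefficient(0.14):.1e} 1/K")  # 3.5e-05 1/K
```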
Effects of the Aggressive Environment
Firstly, the epoxy composites in a fresh state were applied in a thin layer to an acetone-cleaned and dried laboratory slide. The specimens were left to polymerise for seven days on a clean underlay at a temperature of 20 ± 2 °C. The samples were then immersed in lockable glass cuvettes with specific aggressive media which, in practice, can affect epoxy composites primarily intended for the rehabilitation of concrete structures (40% H2SO4, 40% NaOH, 10% CH3COOH, gasoline, 10% NaCl, 30% H2O2). The chemically aggressive environments acted on the samples for 28 days at a temperature of 23 ± 2 °C. After this period, the samples were taken out of the aggressive medium and dried, and the effects of the aggressive environment on the epoxy composites were then visually evaluated in accordance with Table 5. This method was applied mainly in order to determine in which chemically aggressive environments the materials can be used and in which there is considerable degradation of the material. The evaluation system for the effects of the aggressive environment was prepared according to the expected behaviour of the epoxy composites in the aggressive media. The effect of the filler on the chemical resistance of the epoxy materials was also monitored. Table 5. Evaluation system for the accelerated chemical resistance test, designed with regard to the expected behaviour of the epoxy composites in the aggressive media.
Indication of a Breach / Evaluation Criterion
7: The material shows no changes
6: Colour changes
5: Swelling + colour changes
4: Peeling the material off the slide
3: Peeling the material off the slide + swelling + colour changes
2: Peeling the material off the slide + softening
1: Complete decomposition of the material
Microstructure-Digital Microscope and SEM
Microstructure changes due to the use of different filler types were examined using a Keyence VHX950F digital optical microscope (Keyence Ltd., Osaka, Japan). The examination concentrated on looking for possible defects, air pores, microcracks, micro splits and micro blisters; the means of distribution, the quality of homogenisation and the eventual clustering of the filler within the cross-section of the specimen were also evaluated. For monitoring the inner structure and microstructure, a TESCAN MIRA3 XMU (TESCAN Ltd., Brno, Czech Republic) scanning electron microscope (SEM), which allows the examination of materials at magnifications of up to 1,000,000×, was used. Using the SEM, the samples of epoxy composites were examined at magnifications of up to 5000×. The acceleration voltage was 15 and 20 kV. Samples with a thickness of approximately 4 mm were prepared by sputtering with gold in a high vacuum.
FTIR
Using FTIR, it is possible to identify especially organic compounds, which have spectra within the wavenumber range of 400 to 4000 cm⁻¹. Within the 4000-2500 cm⁻¹ range, there are valence vibrations of hydrogen, and a free O-H bond absorbs at the highest wavenumbers, at approximately 3600 cm⁻¹. To determine the infrared spectrum, the Frontier PerkinElmer IR/NIR spectrometer with an attenuated total reflection (ATR) diamond crystal as an adaptor was used. A small amount of the sample (10 mg) with a particle size of <100 µm was homogenised with 300-400 mg of KBr, and a tablet was then pressed, which was then inserted into the FTIR spectrometer to determine its composition. Using FTIR, the spectra of the treated hazardous waste (NS) and fly ash (FA) used were also determined for comparison with the individual spectra.
CT Tomography
The computed tomograph (CT) Phoenix v|tome|x m 300 (Jess W Jackson & Assoc. Inc., Bristol, VA, USA) was used particularly to monitor the inner structure of the epoxy composites, the cohesion of the polymer mortar with the concrete underlay, and the distribution of the fillers in the epoxy matrix. This is a multi-purpose tomograph used for the analysis and 3D viewing of a wide spectrum of materials, operating at a voltage of 300 kV and a power of 500 W.
Compressive and Flexural Strength
From the obtained results of compressive (Figure 5) and flexural strength (Figure 6), it is obvious that the most significant reduction in strength occurs when neutralisation sludge (NS) is added (compared to the reference filler). This is primarily caused by the influence of the discretely distributed individual grains. The highest strength was reached in the ER1 and ER3 samples when using WFS as a filler. The reference filler, in terms of the compressive strength reached, works best with the ER2 resin; waste glass from QS Solar solar panels reaches similar compressive strengths for all monitored binders; epoxy composites containing the RGI filler reached their highest strength with the ER1 binder; NS behaves the same with all binding resins; the epoxy composite with WFS showed the highest strength when combined with ER1. The coarser-grained fillers form the skeleton of the epoxy composite, which is sufficiently strong, and, therefore, the samples containing WGS, WFS and REF showed the highest compressive strengths. According to the Technical Conditions for Rehabilitation of Concrete Structures III [41], repair materials for concrete with the R4 class static function must show a compressive strength of at least 45 MPa. This limit value was reached by all epoxy materials (Figure 5).

The highest flexural strength was reached in the samples using the recycled material from the production of glass insulation (RGI). This fact can be explained by the rod-like particle shape of the RGI fine filler particles. Fly ash worked best with the ER3 epoxy binder, with which almost the same flexural strength was achieved as in the case of the composite containing the RGI filler. Neutralisation sludge (NS) showed comparable strengths with all of the binders used. Waste foundry sand (WFS) reached its highest strength when combined with the ER2 binder.
Cohesion with Concrete
Failure in the concrete substrate layer occurred in all monitored samples, except for those containing NS. The highest cohesion with the concrete substrate was reached by the reference filler in combination with the ER2 binder, by fly ash with ER3, and by waste foundry sand with ER3. Comparing the individual resins: the WGS filler had the highest cohesion with the concrete substrate when combined with the ER3 epoxy resin. The recycled material from the production of glass insulation (RGI) reached its highest cohesion with the ER2 and ER3 binders. Fly ash reached its highest cohesion with the ER3 binder. For the samples using the NS filler, the value of cohesion with concrete was the lowest. In terms of the resin used, the highest cohesion with the concrete substrate with the waste foundry sand filler was reached by the ER3 binder. All examined samples met the requirements of the EN 1504-3 standard [42], which specifies a minimum cohesion with concrete of 2.0 MPa for R4 class repair materials with a static function (see Figure 7). Courard et al. [43] stated that increasing substrate roughness promotes epoxy mortar adhesion due to better mechanical interlocking for high-strength concrete substrates. The pull-out strength of epoxy adhesive systems that contained fillers based on micro silica improved by up to 20% [44].
Dynamic Viscosity
As is obvious from the results of the evaluation of dynamic viscosity (Figure 8), its values strongly depend on the type and particularly the amount of the filler, rather than on the type of binder used. The solid particles form a spatial network that greatly increases the resistance to flow. The lower the viscosity of the matrix (at higher temperatures), the greater its influence on the formed network; a low-viscosity fluid is able to destroy the formed network and, in this way, lower the viscosity [45]. The lowest values of dynamic viscosity were reached by the materials using the neutralisation sludge; the second-lowest dynamic viscosity was recorded in the samples with fly ash, followed by the recycled material from the production of glass insulation. The highest values were reached by the waste foundry sand (WFS), with slightly lower values in the samples with WGS and with the reference filler. The lowest values of the dynamic viscosity were reached by the mixture containing neutralisation sludge (NS), which was caused by the highest value of the specific surface area of this type of filler, since a greater amount of the viscous epoxy resin is used for coating the individual particles.
Abrasion Resistance
The highest abrasion resistance was observed in the samples using fly ash (FA) as a filler and ER1 as a binder; conversely, the lowest ability to resist abrasion, with the highest volume reduction, was observed in the composites containing waste foundry sand (WFS) as a filler, for all types of binders. Based on the comparison of the abrasion resistance results (Figure 9), it is possible to state that coarser fillers of regular, spherical, monoclinic to tetragonal shape (REF, WFS), but also of irregular shape (WGS), have a negative influence on the abrasion resistance of epoxy composites. This is caused by the easier abrasion of the grains by the corundum sand than in the composites with finer fillers (FA, RGI, NS), where the contact zone between the filler and the binder is greater in total volume and, particularly at the surface, the fillers are better coated by the polymeric matrix. The wear resistance of polymers is improved by the addition of fillers [46,47]. Yousif et al. [48] found that an epoxy composite exhibits high wear resistance when it is subjected to fine sand particles, followed by grain and finally coarse sand.
Abrasion Resistance
The highest abrasion resistance was observed in samples using fly ash (FA) as a filler and the ER1 as a binder, alternatively, the lowest ability to resist the abrasion with the highest volume reduction was observed in composites containing waste foundry sand (WFS) as a filler and in all types of binders. Based on the comparison of the abrasion resistance results (Figure 9) it is possible to state that coarser fillers of regular, spheric, monoclinic to tetragonal shape (REF, WFS), but also irregular shape (WGS), have a negative influence on the abrasion resistance of epoxy composites. This is caused by easier abrasion of the grains by corundum sand than in composites with finer fillers (FA, RGI, NS), where the contact zone between the filler and the binder is greater in total volume, and particularly on the surface, the fillers are better coated by the polymeric matrix. The wear resistance of polymers is improved by the addition of fillers [46,47]. Yousif et al. [48] found out that the epoxy composite experiences high wear resistance when it is subjected to fine sand particles followed by grain and finally coarse sand.
Impact Resistance
The highest impact resistance was recorded in the epoxy composites containing fly ash as the filler, with the highest values observed for the samples based on the ER1 and ER3 resins (see Figure 10). Generally, the materials containing finer fillers (FA, RGI) showed better impact resistance. Materials with coarser fillers were more brittle and, therefore, demonstrated a lower impact resistance.
Figure 10. Impact resistance of epoxy composites depending on the type of filler and epoxy binder used.
Hardness
Shore D hardness was tested on samples aged 28 days and the results are shown in Figure 11. The hardness values depend on the duration of the action of a foreign body, on its geometry and material properties, the load weight, the elastic properties of the tested materials and the temperature during the test. The hardness of polymers is subject to more complex regularities than that of metallic materials, as demonstrated by polymer properties such as relatively low elasticity and viscoelastic behaviour. Zhang et al. [31] reported that a low crosslink density decreases the hardness of polymer composites. Epoxy materials with smaller particles (d < 208 µm) have a homogeneous microstructure, and a volume fraction of particulate waste coarser than 300 µm can be used to obtain a useful increase in hardness; this produced a hardness gradient and a hardened surface five times harder than the bare resin matrix [33].
Figure 11. Surface hardness of epoxy composites depending on the type of filler and epoxy binder used.
It was verified that the type and amount of filler can slightly influence the material hardness. Furthermore, materials containing finer particles as a filler (FA, RGI) showed higher Shore D hardness values than the materials filled with WGS, NS and WFS, in which the particle size ranged up to 1.5 mm. The highest Shore D hardness was recorded for samples using the fly ash filler, whereas the lowest values were observed in samples using the reference filler and WGS. The type of resin used did not have any influence on the hardness of the composite surface.
Coefficient of Linear Thermal Expansion
The coefficient of linear thermal expansion (α) is a particularly important parameter of polymers used mainly in engineering applications. A low α value is often desirable for dimensional stability, and this can be achieved by the addition of a solid, fine graphite filler. It was observed that the α value of a hardened epoxy resin (ER) is 60 × 10−6 K−1 and that adding 2.5 wt% of graphite plates reduces it to 36-41 × 10−6 K−1, which is approximately 30-40% lower [49]. The main cause of the decrease in the α value, in this case, is considered to be the fine dispersion and rigidity of the graphite plates in the ER matrix, which can inhibit the expansion of the polymeric chains during the rise in temperature.
The crosslinking points mutually pull the molecular chains under micro-Brownian motion, thereby preventing the molecular chains from expanding with rising temperature. In the rubbery region (190-250 °C), the coefficient decreases as the crosslinking density of the cured resin increases, whereas in the glassy region (50-140 °C) it increases when an increase in the crosslinking density occurs [50]. From the results shown in Figure 12 it is obvious, as stated by Wong [51], that the coefficient of linear thermal expansion decreases with a decrease in the filler content (NS, RGI). The linear thermal expansion was higher with waste foundry sand, fly ash, silica sand and waste glass from solar panels, which all build a stronger skeleton than the finer fillers. The type of resin used had a negligible effect on the α value.
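The cited magnitudes are easy to verify. A minimal Python check of the values quoted from [49] is shown below; the element length and temperature rise are arbitrary illustrative choices, not values from this study:

```python
# Coefficients of linear thermal expansion quoted from [49]
alpha_neat = 60e-6             # hardened epoxy resin, K^-1
alpha_filled = (36e-6, 41e-6)  # with 2.5 wt% graphite plates, K^-1

# Relative reduction: reproduces the quoted "approximately 30-40% lower"
for a in alpha_filled:
    print(f"alpha = {a:.0e} K^-1 -> {1 - a / alpha_neat:.0%} lower")

# Free expansion dL = alpha * L * dT of a 1 m element heated by 50 K
L, dT = 1.0, 50.0  # m, K (arbitrary values for illustration)
for a in (alpha_neat, *alpha_filled):
    print(f"alpha = {a:.0e} K^-1 -> dL = {a * L * dT * 1e3:.2f} mm")
```

The same one-line formula also shows why a low α is desirable for dimensional stability: over a 50 K swing, the graphite-filled resin moves roughly 1 mm less per metre than the neat resin.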
Effects of the Aggressive Environment
Organic acids, such as HCOOH, CH3COOH and CH3CH2COOH, are quite weak acids and are therefore much less dissociated. These acids act mainly as solvents; their effect leads to the creation of surface blisters and the separation of segments of the macromolecular chains. It has also been shown that an increase in chemical resistance can be achieved by the incorporation of fillers that are able to react with a diffusing acid. Inert fillers or pigments (TiO2, graphite, soot, chromium oxides) present in the epoxy matrix increase the diffusion of an aggressive (corrosive) medium, since penetration at the pigment/binder interface occurs along the pigment particles [52].
Based on the evaluation of the accelerated chemical resistance test (Table 6), it is obvious from Figures 13-15 that no damage to the structure of the epoxy composites or their surface occurred due to exposure to NaOH, NaCl or distilled water. Among the methods using liquid-phase decomposition, supercritical or subcritical fluid decomposition and peracid decomposition have been widely investigated [53-57]. From the results of the test, it is obvious that the epoxy resins showed poor resistance to acetic acid solutions and hydrogen peroxide; oxidative degradation occurs because of exposure to H2O2 [58]. The degree of damage observed was similar for all types of resin used. Only the ER3-based epoxy composites achieved slightly better chemical resistance, particularly when exposed to CH3COOH. The higher chemical resistance of the ER3 resin is related to the fact that it contains formaldehyde and phenol in the A component, thanks to which it can be ranked among the Novolac resins. The added functionality of the phenolic resin increases the ability of the resin to crosslink, creating a stronger polymer network with high resistance. The high chemical and solvent resistance and temperature compatibility of epoxy phenolic resins are most useful in high-performance applications and in corrosion protection [59,60]. Samples with NS and the ER3 resin also exhibited high chemical resistance, and they can be used in a chemically aggressive environment to avoid a possible release of contaminants from the material to the environment, thus ensuring appropriate environmental protection.
Microstructure-Digital Microscope and SEM
From the optical digital microscope images (Figure 16), it is possible to see how the filler is incorporated into the polymeric matrix. All types of fillers used are perfectly coated with the epoxy resin. The filler particles are evenly distributed in the mixture, and the air pores showed a maximum diameter of 100 µm.
The samples were also microscopically examined (Figure 17) following exposure to various chemically aggressive environments. In comparison with the reference images, it is obvious that after exposure to a solution of sulphuric acid, no surface damage to the polymeric composite occurred. Acetic acid had a significant degrading influence on the samples; in Figure 17b,d, there are evident cracks and peeling of the epoxy thermoset layers from the surface.
From the photomicrographs obtained by SEM (Figure 18), it is obvious that the filler particles are perfectly coated with the polymer matrix in all epoxy materials. Similar results were recorded for all epoxy binders. No chemical bond between the epoxy matrix and the filler particles occurred in any binder (ER1, ER2 and ER3); the different parts of a filler are only physically bound together in the epoxy resin. No evident clusters of particles are present, and they do not occur in areas with an increased number of air pores either. In Figure 18d, cenospheres that commonly occur in high-temperature fly ash are clearly visible; they are evenly distributed in the sample and no clustering of these particles has occurred. From the image of the neutralization sludge sample (Figure 18e), it is obvious that the particles of this filler are perfectly incorporated into the epoxy matrix and, therefore, there is no danger of any release or leaching of pollutants into the environment. From the photomicrographs, it is not apparent that any chemical reactions between the binder and the filler took place, and no new structures were observed.
From Figure 19, the chemical composition of parts of the epoxy composite determined by the EDX analysis is evident: blue areas represent calcium and red areas represent iron. From the photomicrograph supported by the EDX evaluation, it is clear that contaminants from the particles of the neutralization sludge (NS) have not been released into the polymeric matrix, confirming their complete incorporation into the inner structure.
Figure 19. The EDX analysis of NS particles in the polymeric matrix-graphic illustration of the elements present.
FTIR
Upon the incorporation of the hazardous waste (NS) into the polymeric matrix (Figure 20), no new chemical bonds between the binder and the filler occurred. In the evaluation of the spectrum of the material with fly ash (Figure 21), aluminosilicate and silicate bands originating from the fly ash itself were found, while the bonding of the -OH group to the aluminous component of the fly ash occurred, observable at a wavenumber of 850 cm−1. In the sample containing the NS filler, carbonate (CaCO3) bands were observed, and at the start of the middle IR region, iron oxides were most likely identified, which were also observed in the chemical analysis of the sludge itself. In the organic component, only compounds typical of epoxides were observed, such as the oxirane ring (C-O-C), C-H bonds and other aromatic compounds, and these findings were observed for all ERs used. In all spectra of the epoxy composites, the hydroxyl (-OH) group was detected at a wavenumber of approximately 3400 cm−1.
CT Tomography
In the CT scans (Figures 22-25), the interface between the epoxy composite and the concrete underlay (a concrete curb with a damaged corner) can be clearly seen. From the images, it is obvious that a perfect connection of the epoxy mortar to the underlay concrete has occurred, while no separation layer has been formed. The edges are smooth without any evident defects; the only defects were caused by the provisional laboratory formwork, and the formwork used in practice is more suitable. Moreover, it is also evident from the images that a perfect distribution of the filler components has occurred. In Figure 22, a minor defect can be seen, caused by the lower viscosity of the mixture and the segregation of the binding component due to gravity and the differing densities of the two components; however, this visual defect has no impact on the resulting physical and mechanical parameters or the long-term durability. From all CT images, it is evident that the epoxy composites contained only a minimal number of open pores. In Figure 25, lighter grains of fly ash, which are evenly dispersed in the epoxy matrix, can be seen; the black areas are air pores that are evenly distributed throughout the composite. Even in this CT scan, the perfect cohesion of the repair material to the concrete underlay can be seen. The colour difference is caused by the different densities of the concrete (2400 kg/m3) and the epoxy composites (1500 kg/m3); fly ash has a density of approximately 2600 kg/m3 and, therefore, its grains can be clearly seen in the mixture. From the SEM analysis and the CT tomography, it is obvious that the finer and coarser filler particles are dispersed evenly throughout the composite.
Conclusions
It was determined that the type of filler used has a more significant influence on the resulting properties of the mixture than the type of resin itself. The fillers used for the experiment were mainly secondary raw materials, chosen to cover as many different particle shapes and sizes as possible. Regarding the obtained results, it can be stated that coarser filler grains with a more regular geometric shape (REF, WFS) have a positive influence especially on the compressive strength, thanks to the lower compression of the filler grains in the epoxy composite. Conversely, epoxy composites containing finer fillers (FA, RGI) show higher flexural strength, better abrasion resistance and better impact resistance. Epoxy mortars containing quartz sand as a filler were also tested in order to compare the results of the tested materials with available materials containing only primary raw materials. Compared with other commercial epoxy materials, even better mechanical properties were achieved with the epoxy composites containing waste materials as fillers. The highest flexural strength was recorded in samples containing the RGI filler, which was the most heterogeneous in terms of particle shape, and it can be assumed that the 'rod-shaped' particles of this filler had a positive influence on the load resistance. Based on the evaluation of the FTIR analysis, no new chemical bonds between the filler and binder particles were observed. The highest compressive strength was recorded in the ER1 and ER3 samples using the waste foundry sand filler. No damage to the polymer structure occurred due to exposure to NaOH, NaCl or distilled water. The best resistance against the chemically aggressive environment was observed in the ER3 Novolac epoxy resin, although composites based on ER1 and ER2 also showed outstanding chemical resistance; the filler type had no particular influence on the chemical resistance. The filler components were only physically bonded to the epoxy matrix. It was also proven that it is possible to incorporate hazardous waste (NS) into the epoxy matrix. The newly developed epoxy materials with a high content of by-products can be used in practice (building applications) due to their high mechanical parameters and chemical resistance, e.g., as polymer mortars, rehabilitation materials, polymer concretes, adhesives and grouts.
| 2021-07-03T06:16:56.576Z | 2021-06-23T00:00:00.000 | {
"year": 2021,
"sha1": "662eed979fa692842b7d1e7893269be3e0b562ec",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/13/3490/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "e36ebb67793574532c8556d6be653a2c04108c14",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222147425 | pes2o/s2orc | v3-fos-license | Milk consumption and childhood anthropometric failure in India: Analysis of a national survey
Abstract Dairy milk has been shown to contribute to child growth in many countries, but the relationship between milk intake and anthropometric outcomes among Indian children has not been studied. The objectives were to describe children aged 6-59 months who consume dairy milk in India and determine if dairy milk consumption was associated with lower odds of stunting, underweight and anthropometric failure among Indian children. This was a cross-sectional study based on the fourth Indian National Family Health Survey (NFHS-4), a national survey conducted between 2015 and 2016 by the Ministry of Health and Family Welfare. The primary exposure was the consumption of dairy milk within the past day or night. The primary outcomes were stunting (height-for-age z score < −2), underweight (weight-for-age z score < −2) and the composite index of anthropometric failure (CIAF), which combines weight-for-age, weight-for-height and height-for-age. Multivariable logistic regression models and coarsened exact matching (CEM) were used to determine the relationship between dairy milk and the odds ratio of each outcome. The setting was India, and the participants were children (N = 107,639) aged 6-59 months. Children who consumed dairy milk in the past day or night had an odds ratio of 0.95 for underweight (95% CI 0.92-0.98, P = .0005), 0.93 for stunting (95% CI 0.90-0.96, P < .0001) and 0.96 for CIAF (95% CI 0.93-0.99, P = .004), compared with children who did not consume dairy milk, after adjusting for relevant covariates. When CEM was used among a subset (n = 28,207), evidence for relationships between dairy milk and anthropometric outcomes was consistent but slightly weaker. Widespread, equitable access to dairy milk during childhood may be part of an effort to lower the risk of anthropometric failure among children in India.
Undernutrition leading to poor growth is not just a childhood problem; it extends into adulthood and to future generations such that parents who are stunted are more likely to have children with stunting (Corsi et al., 2016). On the other hand, effective nutritional interventions to reduce child stunting can also benefit subsequent offspring growth (Martorell & Zongrone, 2012).
India historically lacked an established dairy industry and access to safe, inexpensive milk (Ohlan, 2016). However, 30-50% of children in India consume dairy milk (Agrawal et al., 2019), which has been associated with taller height (de Beer, 2012) and lower risk of undernutrition (Basit, Nair, Chakraborthy, Darshan, & Kamath, 2012; Dror & Allen, 2011). The Infant and Young Child Feeding Practices guidelines recommend exclusive breastfeeding for the first 6 months of life and introducing dairy thereafter. Dairy milk is a nutrient-rich food well accepted by children, providing energy, protein, fat, vitamin B12 and calcium, and it can be fortified with vitamins A and D and other micronutrients vital to child growth and development (Dror & Allen, 2011). Though dairy milk consumption and child growth have been studied in developed countries (de Beer, 2012), the specific relationship between dairy milk and child stunting and underweight within the context of Indian dietary intakes is not well described (Shivakumar et al., 2019). As dairy milk has become more widely available in India in recent years (Gupta, 2015; National Institute of Nutrition, 2011), it may have unrealized potential to address undernutrition among Indian children.
In this research, we address the need to investigate the relationship between dairy milk consumption and child stunting and underweight in India using a large-scale nationally representative survey. The objectives of this study were to describe children aged 6-59 months who consume dairy milk in India and determine if dairy milk consumption was associated with lower odds of stunting, underweight and anthropometric failure among Indian children. It was hypothesised that higher dairy milk consumption during childhood in India was associated with lower odds of stunting, underweight or anthropometric failure.
| METHODS
We conducted a cross-sectional study using the fourth Indian National Family Health Survey (NFHS-4). The survey used a stratified two-stage sample, with primary sampling units (PSUs) defined as census enumeration blocks in urban areas and villages in rural areas, which typically contain 100-150 households each and were selected with a probability proportional to size within each stratum. Selected PSUs were visited by field teams who compiled lists of all residential households to serve as the sampling frame for the second survey stage. A fixed number of 22 households were then randomly selected within each PSU to be visited by survey teams (IIPS and ICF, 2017).
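To make the two-stage design concrete, a simplified sketch of the selection logic is given below. This is a toy illustration of probability-proportional-to-size (PPS) selection followed by a fixed household draw, not the systematic PPS procedure actually used in NFHS-4; a weighted without-replacement draw is only approximately proportional to size:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_sample(psu_sizes, n_psu, households_per_psu=22):
    """Stage 1: draw PSUs with probability ~ size; stage 2: 22 households each."""
    sizes = np.asarray(psu_sizes)  # households per PSU (typically 100-150 each)
    p = sizes / sizes.sum()
    chosen = rng.choice(sizes.size, size=n_psu, replace=False, p=p)
    return {int(psu): rng.choice(sizes[psu], size=households_per_psu, replace=False)
            for psu in chosen}     # household indices within each selected PSU
```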
Survey respondents provided informed oral consent prior to each interview. Questionnaires were administered orally by interviewers and responses recorded using electronic data capture and CAPI software to provide feedback and ensure the robustness of data quality.
Fieldwork was completed between January 20, 2015 and December 4, 2016. The survey response rate was nearly 98% at the household level and was 97% among eligible women.
| Eligibility criteria
The total study sample was composed of singleton children aged 6-59 months at the time of the NFHS-4 survey (N = 107,639). Cases with missing outcome or exposure data were excluded.
Key messages
• Dairy milk is known to provide essential nutrients for growth during childhood, can be a vehicle for micronutrient supplementation, and is becoming more widely available in India.
• In this study, we identified that children aged 6-59 months in India who consumed milk had lower odds of stunting, underweight and anthropometric failure after adjustment for relevant covariates.
• Efforts to improve children's nutritional status in India may include better access to safe, sterile dairy milk.
| Exposures
The primary exposure was child consumption of tinned, powdered or fresh milk in the day or night preceding the interview, measured as a dichotomous variable (yes/no). When powdered and tinned milk is prepared according to directions, the nutritional content is analogous to fresh milk for macronutrients and most micronutrients (Dietitians of Canada, 2020a; Dietitians of Canada, 2020b). Factors hypothesised to have a relationship with both the exposure and outcome included child age, household wealth (measured in quintiles; Rutstein & Johnson, 2004), maternal education (measured as none, primary, secondary or higher), maternal body mass index (BMI, measured as weight in kilograms divided by height in metres squared), birth weight in kilogrammes, birth size (used as a proxy when birth weight is not possible to measure) (Dharmalingam, Navaneetham, & Krishnakumar, 2010), time of breastfeeding initiation after birth (measured in hours), current breastfeeding (yes or no), fever or cough in past 2 weeks (yes or no), home air quality related to cooking fuels used (clean or solid), access to an improved sanitary facility and drinking water source (yes or no), safe disposal of stools (yes or no), child vaccination status (complete or incomplete), vitamin A supplementation in the past 6 months (yes or no), dietary diversity score and state of residence. Many of these factors describe living environments patterned by socio-economic status; they are related to adequate and safe food, food handling, storage and preparation procedures in the home, which can influence child growth and anthropometry.
Dietary diversity was calculated as a score from 0 to 7 points (World Health Organization, 2010). If a child consumed the following foods during the preceding day or night to the interview, 1 point was given for at least one consumption of dairy other than milk including yogurt and cheese; chicken, duck, other birds or liver, heart or organ meat or fish or shellfish or other meat; eggs; peas, beans, lentils or nuts; breads, noodles or grains or potatoes, cassava or tubers; pumpkin, carrot or squash or mango, papaya or vitamin A-containing fruit; and dark green leafy vegetables or other fruit. Dietary diversity scores were classified into quintiles for the child's age group (<12; 12-24; 24-36; 36-48; and >48 months), as diet varies during different stages of early childhood and to capture the high proportion of children with low dietary diversity in the sample. Quintiles were made within each age group and then summed and reported among the entire sample.
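A minimal sketch of this scoring and quintile assignment follows; the column names and the exact age binning are hypothetical stand-ins, not the variable names of the NFHS-4 data files:

```python
import pandas as pd

# One binary indicator per food group consumed in the preceding day or night
FOOD_GROUPS = [
    "dairy_other_than_milk", "flesh_foods", "eggs", "legumes_nuts",
    "grains_roots_tubers", "vitamin_a_fruits_veg", "other_fruits_veg",
]

def dietary_diversity_quintile(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["diet_score"] = df[FOOD_GROUPS].sum(axis=1)  # 0-7 points
    # Age bands as in the paper: <12, 12-24, 24-36, 36-48 and >48 months
    df["age_band"] = pd.cut(df["age_months"], bins=[6, 12, 24, 36, 48, 60], right=False)
    # Percentile rank of the score within each age band, cut into quintiles
    pct = df.groupby("age_band", observed=True)["diet_score"].rank(pct=True)
    df["diet_quintile"] = pd.cut(pct, bins=[0, .2, .4, .6, .8, 1], labels=[1, 2, 3, 4, 5])
    return df
```

Ranking within the age band before cutting avoids the failure of naive quantile cuts when many children share the same low score.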
| Outcomes
The primary outcomes were child stunting, underweight and composite index of anthropometric failure, measured by height-for-age (HAZ), weight-for-age (WAZ) and weight-for-height z scores, which were standardised according to the World Health Organisation (WHO) Growth Standards (World Health Organization, 1995). Stunting was defined as HAZ less than 2 standard deviations (SD) below the WHO Growth Standards median and underweight as WAZ less than 2 SD below the median (World Health Organization, 2010). The composite index for anthropometric failure (CIAF) combines WAZ, weight-for-height and HAZ to create a single measure of child anthropometry related to undernutrition. CIAF captures nuances in undernutrition that may be missed by individual measures of stunting, underweight and wasting (Nandy & Miranda, 2008). Weights and heights were measured by NFHS-4 trained staff members (two trained staff measured child length and height). For children less than 2 years of age, a SECA 417 Infantometer (SECA, Germany) was used to measure child length; for older children and adults, staff used a SECA 213 stadiometer to measure height. A SECA 874 U digital scale measured child and adult body weight (IIPS, 2014). Height was measured in metres and weight in kilogrammes. Implausible values were defined as 6 SD from the mean or more for HAZ and WAZ (Shi, Korsiak, & Roth, 2018).
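A minimal sketch of these outcome definitions, assuming a data frame with WHO-standardised z-score columns (the column names haz, waz and whz are hypothetical):

```python
import pandas as pd

def classify_anthropometry(df: pd.DataFrame) -> pd.DataFrame:
    """Flag stunting, underweight, wasting and CIAF from WHO z scores."""
    out = df.copy()
    # Implausible values (6 SD from the mean or more) are set to missing;
    # rows left missing would be excluded upstream, as in the paper.
    for z in ("haz", "waz", "whz"):
        out[z] = out[z].where(out[z].abs() < 6)
    out["stunted"] = out["haz"] < -2      # height-for-age z < -2
    out["underweight"] = out["waz"] < -2  # weight-for-age z < -2
    out["wasted"] = out["whz"] < -2       # weight-for-height z < -2
    # CIAF: anthropometric failure on any of the three indices
    out["ciaf"] = out[["stunted", "underweight", "wasted"]].any(axis=1)
    return out
```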
| Statistical analysis
Analyses consisted of (1) descriptive analyses of the distribution of covariates among children with stunting, underweight and CIAF; and (2) multivariable logistic regression to determine the odds ratios (ORs) for stunting, underweight and CIAF among children who consumed dairy milk compared with children who consumed no dairy milk.
Prevalence estimates of stunting, underweight and CIAF were calculated accounting for the survey design and sampling weights.
Descriptive analyses using frequencies and proportions were conducted to quantify the distribution of covariates among individuals with stunting and underweight.
Unadjusted logistic regression was used to assess the relationship between dairy milk consumption (binary exposure) and ORs for stunting, underweight and CIAF (binary outcomes). Separate models were used for each outcome. Adjusted multivariable models included covariates determined a priori (listed above) to assess for potential confounding. Linear regression was used to determine the relationship between dairy milk consumption and HAZ and WAZ z scores. Coarsened exact matching (CEM) was used, which has been demonstrated to limit bias and confounding, reduce model dependence and improve estimates of directionality within relationships (Iacus & Porro, 2011).
Using CEM, we created a cohort (n = 28,207) matched on age in months, diet score, state of residence, wealth quintile, maternal education, maternal BMI, birth weight, birth size and time of breastfeeding initiation after birth to gain an estimate of the directionality within the relationship between dairy milk consumption and child anthropometric outcomes.
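The core of CEM is simple enough to sketch: continuous covariates are coarsened into bins, observations fall into strata defined by the joint bin signature, and only strata containing both exposed and unexposed children are kept. The bin edges and column names below are illustrative assumptions, not the coarsening actually applied:

```python
import pandas as pd

def cem_match(df: pd.DataFrame, treat: str, coarsen: dict, exact=()) -> pd.DataFrame:
    """Keep only strata that contain both treated and untreated observations."""
    df = df.copy()
    for col, bins in coarsen.items():  # coarsen continuous covariates into bins
        df[f"{col}_bin"] = pd.cut(df[col], bins=bins)
    keys = [f"{c}_bin" for c in coarsen] + list(exact)
    # A stratum is usable only if both exposure levels appear within it
    keep = df.groupby(keys, observed=True)[treat].transform("nunique") == 2
    return df[keep]

# children: hypothetical data frame of survey records
matched = cem_match(
    children, "milk",
    coarsen={"age_months": [6, 12, 24, 36, 48, 60],
             "maternal_bmi": [10, 18.5, 25, 30, 50]},
    exact=["state", "wealth_quintile", "maternal_education"],
)
```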
Twenty states with the highest number of respondents were individually analysed for the relationship between dairy milk intake and odds of stunting, underweight and CIAF within each state. Additional stratified and interaction analyses were conducted according to highor low-milk consumption at the state level. We defined state level of consumption based on the median proportion across states (33%).
Models accounted for survey design characteristics and sampling weights using the survey package in R. Since NFHS-4 was a two-stage stratified cluster sample, weights were calculated for each stage and cluster to determine sampling probabilities for each (IIPS and ICF, 2017). For all statistical tests, an alpha level of 0.05 was used, and 95% confidence intervals were calculated. Multicollinearity was assessed using the variance inflation factor (VIF); all covariates remained under a VIF of 3.5 (O'Brien, 2007). All analyses were conducted using R version 3.5.1 (R Core Team, 2014).
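The models themselves were fitted with the R survey package; a rough Python analogue is sketched below, with the caveat that passing the sampling weights as frequency weights ignores the stratified-cluster design, so the standard errors understate the design effect. Variable names are hypothetical and the exposure is assumed to be coded 0/1:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# children: hypothetical data frame with outcome, exposure, covariates, weights
fit = smf.glm(
    "underweight ~ milk + age_months + C(wealth_quintile) + C(maternal_education)",
    data=children,
    family=sm.families.Binomial(),
    freq_weights=np.asarray(children["sample_weight"]),
).fit()
print(np.exp(fit.params["milk"]))  # adjusted odds ratio for milk consumption
```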
The proposed study used publicly available and anonymised data obtained from the Demographic and Health Surveys programme (IIPS and ICF, 2017). Permission to access the data via online registration through the DHS website was obtained. This analysis involved secondary use of an anonymous public-use health survey without access to identifiers. According to TCPS2, this research is considered exempt from REB review (Government of Canada, 2018).
| RESULTS
A total of 107,639 children aged 6-59 months were included in this analysis (Figure 1). Participant characteristics are shown in Tables 1 and 2. The mean age of children was 24.9 months, and 49% were male. Within the study sample, 40.5% of children had stunting, 35.0% had underweight and 56.2% had CIAF. At the time of the survey, 86.8% of children were currently breastfed. Children's dairy milk consumption appeared to be similarly distributed across wealth, dietary diversity and mother's education (Table S1).
Unadjusted logistic regression models showed that children who consumed dairy milk in the previous day or night had lower odds of underweight, stunting and CIAF. When adjusted for all covariates specified a priori (listed above), these relationships were maintained (Table 3). Children who consumed dairy milk had 0.95 the odds of underweight (95% CI 0.92-0.98, P = .0005), 0.93 the odds of stunting (95% CI 0.90-0.96, P < .0001) and 0.96 the odds of CIAF (95% CI 0.93-0.99, P = .004), compared with children who did not consume dairy milk after adjustment for all pre-specified covariates. Analyses using CEM revealed weaker evidence of a relationship between dairy milk consumption and all anthropometric outcomes among children (Table S3).
Children who consumed dairy milk and resided in states with below-median proportions of dairy milk consumption had lower odds of stunting, underweight and CIAF than those residing in states with above-median proportions (Table S2). There was evidence that children who consumed dairy milk had higher HAZ and WAZ scores, while adjusting for all pre-specified covariates (Table 4).
| DISCUSSION
In this large, population-based and nationally representative survey from India, we investigated the cross-sectional relationship between milk consumption and child stunting, underweight and anthropometric failure. We have several key findings. First, children aged 6-59 months who consumed dairy milk had slightly lower odds of stunting, underweight and anthropometric failure than those who did not. Second, this finding was relatively consistent across geographic regions, although the associations were somewhat stronger in states with lower proportions of children who consumed dairy milk.
Third, despite some attenuation in the relationship following matching with a reduced sample, direction and relative magnitude were maintained, which suggests adequate control of the known confounders. Despite differences in influential variables such as household wealth, region of residence, other dietary intake and maternal BMI, children who consumed dairy had more favourable growth than those who did not.
Results of the present study are consistent with other evidence showing potential for dairy milk consumption to support child growth worldwide (de Beer, 2012; Wiley, 2012). However, few other studies have evaluated the relationship between dairy milk consumption and child growth among young children in India. One analysis identified that Indian children younger than 2 years who consumed dairy milk had lower odds of stunting (Aguayo et al., 2016). A randomised controlled trial determined that consumption of vitamin- and mineral-fortified milk among children aged 1-3 years in India reduced the burden of diarrhoea and acute respiratory illness and increased child height and weight relative to unfortified milk, suggesting that fortified milk may be an effective and acceptable strategy to reduce child morbidity in addition to providing macronutrients for growth (Sazawal et al., 2007).
There are a number of biological mechanisms that could underlie the relationship between dairy consumption and child growth. Dairy intake increases circulating insulin-like growth factor-1 and is the only dietary source of whey and casein proteins, which are known to promote linear growth and may lower the incidence of stunting (Hoppe, Molgaard, & Michaelsen, 2006). Dairy milk is a nutrient-dense food, providing carbohydrates, protein, fat, vitamin B12 and calcium, and is a vehicle for vitamin A and D supplementation (Michaelsen, Nielsen, Roos, Friis, & Mølgaard, 2011). It also contains highly bioavailable zinc, magnesium, potassium and phosphorous, which are essential for child development and especially important for catch-up growth among children with anthropometric failure (Golden, 2009). It is probable that the combination of macro- and micronutrients provided by dairy contributes synergistically to child growth. We noted that the majority of children were currently breastfed. Dairy milk is richer in protein and micronutrients than breast milk, especially if the mother is undernourished. Although breast milk offers many physiological and immunological benefits, prolonged breastfeeding has been associated with lower maternal educational status and wealth, delayed introduction of complementary foods and lower weight gain and undernourishment among children in developing countries (Fawzi, Herrera, Nestel, el Amin, & Mohamed, 1998). Though dairy milk can be a complement to breast milk, it is possible that prolonged breastfeeding displaced dairy milk in some children's diets.
Higher dietary diversity (defined as the inclusion of milk, meat, eggs, lentils, starchy staples, vitamin A fruits, other fruits and other dairy in the diet; Ruel & Menon, 2002) during childhood has been associated with lower risk of stunting and underweight (Corsi et al., 2016). Though dairy milk is relatively inexpensive and has become more accessible in India in recent years, it is possible that children who have access to dairy also may have access to other energy-dense, growth-promoting foods. Our findings suggest this may have been the case among children living in states with lower milk consumption, who had lower odds of anthropometric failure if they did have access to milk. Dairy milk consumption among Indian children can vary by maternal education and household income, with children of uneducated mothers and those living in poor households consuming the least dairy products (Agrawal et al., 2019). However, India has steadily become the world's largest producer of dairy milk due to a rise in consumer demand and agricultural capacities (Gupta, 2015; National Institute of Nutrition, 2011). Dairy products fortified with vitamins A and D are now regulated in India (Food Safety and Standards Authority of India, 2019), which holds promise for improving child nutritional status provided they are safely handled, inexpensive and accessible to children. Though lactose intolerance affects a high proportion of South Asian adults, children are usually able to tolerate lactose well until adolescence or early adulthood (Heyman & Committee on N, 2006). Widespread, equitable access to dairy milk for children may be an important part of a dietary strategy to promote child growth in India. Fortified infant formula is also recommended for children when breast milk is unavailable (National Institute of Nutrition, 2011) and may play a similar role to dairy milk in promoting child growth; however, formula was not included in this analysis due to its distinct nutritional properties from dairy milk (Institute of Medicine).
Note (Table 3): Adjusted for age in months, diet score, wealth quintile, maternal education, maternal body mass index (BMI), birth weight, birth size, time of breastfeeding initiation after birth, current breastfeeding, fever or cough in past 2 weeks, home air quality related to cooking fuels used, access to an improved sanitary facility and drinking water source, unsafe disposal of stools, child vaccination status, vitamin A supplementation in the past 6 months and region of residence. The stunting model was adjusted for maternal height in place of BMI.
TABLE 4 The relationship between dairy milk consumption and height-for-age z score (HAZ) and weight-for-age z score (WAZ)

Outcome                  Exposure     Coefficient (95% CI)   P value
Height-for-age z score   Milk (yes)   0.07 (0.05-0.09)       <.0001
Weight-for-age z score   Milk (yes)   0.05 (0.03-0.06)       <.0001

Note: Adjusted for age in months, diet score, wealth quintile, maternal education, maternal BMI (for the WAZ model), maternal height (for the HAZ model), birth weight, birth size, time of breastfeeding initiation after birth, current breastfeeding, fever or cough in past 2 weeks, home air quality related to cooking fuels used, access to an improved sanitary facility and drinking water source, unsafe disposal of stools, child vaccination status, vitamin A supplementation in the past 6 months and region of residence. Covariate estimates not shown.
By using sampling weights, we were able to account for non-response and underrepresented groups in our sample (Package 'survey' [computer program], 2019). Use of CEM allowed for an estimate of directionality within the observed relationship by balancing covariate distributions between children who did and did not report consuming dairy milk (Iacus & Porro, 2011).
Limitations of this study include the cross-sectional design, which limited our ability to describe a causal relationship between the observed variables. Though our analysis accounted for variables related to diet and child growth such as maternal BMI, region and wealth, residual confounding is possible. We were not able to identify whether the milk consumed by children in this analysis was fortified with vitamins A and D or other nutrients; therefore, the mechanism at work remains unclear. Prospective cohort studies or randomised controlled trials are needed to further evaluate the effect of commercially available dairy milk on child health in India. Milk consumption was measured as a dichotomous variable at a single point in time, and volume was not quantified. The large sample size increases the likelihood of Type I error (rejecting a null hypothesis that is true) or of finding a difference that is statistically but not practically significant.
Anthropometric failure remains a persistent problem for children in India, such that over one-third of children in India are stunted or underweight (IIPS and ICF, 2017). Among children aged 6-59 months in India who participated in the nationally representative NFHS-4 survey, those who consumed dairy milk had lower odds of stunting, underweight and anthropometric failure than those who did not, after adjusting for relevant covariates. Given these results, dietary strategies in India may prioritise widespread and inexpensive access to sterile dairy milk to optimise child growth. | 2020-10-06T13:34:59.204Z | 2020-09-30T00:00:00.000 | {
"year": 2020,
"sha1": "d475f620ce31b637a34280a0543dd23aecc6b60d",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mcn.13090",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8ce3aafbd073bf16c0981d02e40c237e0ce2167",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210848595 | pes2o/s2orc | v3-fos-license | Interaction and Kinetics Study of the Co-Gasification of High-solid Anaerobic Digestate and Lignite
This study aims at investigating the interaction and kinetics behavior of the co-gasification of digestate and lignite. The co-pyrolysis performance of digestate and lignite blended by the dry process was better than that blended by the wet process, while the wet-blending process improved the performance in the co-gasification stage because of the larger pore diameter and pore volume. When the anaerobic digestion (AD) time was 40 days, the synergistic interaction between the digestate and the lignite was the most remarkable, based on the results of thermogravimetric analysis (TG) and the experiments in a lab-scale downdraft fixed bed gasifier. The kinetics study showed that increasing the AD time and adding digestate to the lignite decreased the activation energy of the co-gasification reaction.
Introduction
The exhaustion of fossil energy and environmental pollution are becoming barriers to sustainable social development. Bioenergy, with its low greenhouse gas emissions, meets the growing energy demand and plays a critical role in promoting renewable alternatives. Through anaerobic digestion (AD), methane can be generated under ambient conditions from various substrates, such as sewage sludge, food waste, forestry resources, livestock manure and agricultural waste [1,2]. During the past decades, the Chinese government has paid great attention to the development of the biogas industry. By the end of 2015, 41.93 million household biogas facilities and 110,975 biogas plants had been built in China, resulting in significant growth in digestate output, which is mostly used as soil fertilizer [3]. However, the digestate contains high contents of harmful substances such as heavy metals, pathogens, and trace herbicides and fungicides, which can have adverse effects on food safety and the ecological environment. With the rapid development of high-solid anaerobic digestion (HSAD), with a total solid (TS) content higher than 10%, the production rate of digestate has increased dramatically and there is an urgent demand to dispose of and reuse the digestate safely.
For lignocellulosic biomass, only the cellulose and hemicellulose fractions can be converted in the AD process, and the energy conversion ratio of lignocellulosic biomass is about 33-50% because of its rigid structure and the presence of non-biodegradable lignin [4]. Moreover, the lignin content in the digestate is relatively higher than that in the raw biomass feedstock, which makes it favorable for gasification to realize complete energy conversion, eliminate pathogens and immobilize heavy metals in the inorganic matrix [5-10]. The properties of digestate and the feasibility of its gasification have been studied by many researchers. Li et al. [11] found that more than 80 wt% of the digestate consisted of volatile matter, which can be used as gasification feedstock to produce syngas. However, digestate has a low energy density, which leads to a low heating value of the produced gas and low-quality products in pyrolysis and gasification. The bio-oil obtained from digestate pyrolysis needs to be further upgraded to overcome its high acidity, instability, low heating value, etc. [12]. Wang et al. [13,14] investigated the pyrolysis performance of corn straw fermentation residues and found that the phenol yield, especially the content of vinyl phenol, increased gradually with increasing temperature. Although gasification is indisputably considered a promising and effective technology to dispose of digestate, it still encounters many problems, such as low gasification efficiency and low-valued products.
Co-gasification is widely adopted for the disposal of biomass and the clean utilization of coal. It can not only inhibit the generation of SOx and NOx, but also reduce the emission of greenhouse gases [7,15-18]. Coal has the advantages of high energy density and high calorific value, while its combustion may lead to serious environmental pollution. Although the pyrolysis and gasification of coal have been studied for many years, there are still many challenges to overcome. For instance, hydrogen is needed as a gasification agent in coal gasification to produce CH4, but hydrogen is expensive [19]. Since digestate has a high H/C ratio, co-gasification of digestate and coal may enhance the overall energy efficiency if they are blended in an appropriate proportion and by an appropriate approach. Yao et al. [20] conducted the co-gasification of digestate and woody chips at different mass ratios and moisture contents. The results showed that when the mass ratio of digestate was 20 wt% and the moisture content was 30 wt%, the optimal energy efficiency reached 70.8%. However, the co-gasification of digestate and coal has seldom been reported.
To realize the highest reaction activity and the maximum energy recovery, the interaction and the kinetics of the co-gasification of digestate and lignite need to be investigated to find a reasonable AD time for co-gasification. In addition, the interaction mechanism should be investigated to further optimize the overall energy efficiency of the coupled AD and gasification process. Until now, no study has reported on the interaction and kinetics of the co-gasification of digestate and lignite.
To explore the interaction between digestate and lignite in the co-gasification process, it is necessary to mix the two feedstocks homogeneously. Several blending approaches, such as the use of ethanol, the incipient wetness impregnation method and physical methods, are used in the fields of electrode material preparation, catalytic pyrolysis and raw material mixing, respectively [21]. Wu et al. [22] investigated the effects of mixing methods on cellulose-hemicellulose interactions during pyrolysis, blending cellulose and hemicellulose with a hydraulic press machine under 20 MPa and comparing the result with native mixtures. Couhert et al. [23] compared intimate mixing using a mortar with simple mixing when blending cellulose, hemicellulose and lignin during pyrolysis. In addition, the pore structure of digestate biochar is quite different from that of coal char. The blending method may affect the interaction between the biochar and catalytic minerals, such as alkali and alkaline earth metals, as well as the morphology and covalent linkages between digestate and coal. Thus, the blending method plays a significant role in the performance of co-pyrolysis and co-gasification, and its effect was investigated in detail here. The effects of different digestion times on the interaction and kinetics of co-pyrolysis and co-gasification were also explored by TG. The co-gasification experiments of digestate and lignite were conducted in a lab-scale downdraft fixed bed gasifier. Moreover, kinetic models such as the three-dimensional diffusion and nucleation-and-growth models were employed using the Coats-Redfern method in order to identify the optimum mechanisms for the thermal conversion process, describe the reactive behavior and determine the kinetic parameters; a sketch of this fitting procedure is given below.
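For reference, the Coats-Redfern method linearizes the integral rate law as ln[g(α)/T²] = ln(AR/(βE)) − E/(RT), so that a straight-line fit of ln[g(α)/T²] against 1/T gives the activation energy E from the slope and the pre-exponential factor A from the intercept. A minimal Python sketch of this fit for two of the candidate models named above follows; the temperature and conversion arrays are placeholders for data read off a TG curve:

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

# Integral model functions g(alpha)
jander = lambda a: (1 - (1 - a) ** (1 / 3)) ** 2     # three-dimensional diffusion
avrami = lambda a, n=2: (-np.log(1 - a)) ** (1 / n)  # nucleation and growth

def coats_redfern(T, alpha, beta, g):
    """Fit ln[g(alpha)/T^2] = ln(A*R/(beta*E)) - E/(R*T).

    T: temperatures [K]; alpha: conversions (0-1) from the TG curve;
    beta: heating rate [K/min]; g: integral form of the assumed model.
    Returns the activation energy E [J/mol] and pre-exponential factor A [min^-1].
    """
    y = np.log(g(alpha) / T ** 2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    E = -slope * R_GAS
    A = beta * E / R_GAS * np.exp(intercept)
    return E, A
```

In practice, the model whose linearization gives the highest correlation coefficient over the relevant conversion range is taken as the governing mechanism.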
Effect of Blending Methods on Co-pyrolysis and Co-gasification
Digestate and lignite were blended at a ratio of 50:50% (wt/wt). Then, the effects of the blending methods on thermal conversion were investigated using TG. As shown in Figure 1, the reaction process can be subdivided into two stages: pyrolysis (stage 1, S1) and gasification (stage 2, S2). As the temperature increased from 200 °C to 650 °C, the decomposition and emission of volatiles took place in the first stage. In the second stage, gasification of the biochar took place at temperatures ranging from 700 °C to 950 °C. For the sample blended by the dry process, the values of the derivative (DTG) of the TG curve in stage 1 were slightly higher than those of the sample blended by the wet process, and the Tmax was lower, which indicates that the dry-blending process enhances co-pyrolysis more than the wet-blending process. On the other hand, the DTG values of the dry-process sample in stage 2 were lower than those of the sample blended by the wet process, indicating that the wet process promotes co-gasification more than the dry-blending process.
The surface area, pore volume, and average pore diameter of the biochar samples prepared by the different blending methods are shown in Table 1. Compared to the dry process, the specific surface area, pore volume, and average pore diameter of the wet-process samples increased by 14.69%, 19.23%, and 32.00%, respectively. Ping et al. [24,26] reported that micropores are the main contributors to the surface area, and the results confirmed that the amount of micropores in the wet-blended biochar was larger than in the dry-blended biochar. It was found that blending lignite and digestate by the wet process dramatically promotes pore formation in the biochar, accelerating the co-gasification of the mixture. Digestate and lignite are rich in volatile organic matter. Ethanol, used as an organic solvent, can break the bridging of organic matter and partially dissolve the samples [27]. Therefore, when blending the samples by the wet process, part of the organic matter was extracted from the inside of the biomass and enriched on the outside of the sample particles. This leaves spaces among the various components of the material, and the interaction links among them disappear. As a result, the catalytic ingredients in the wet-blended materials were not able to play a catalytic role, which led to the inhibition of the pyrolysis reaction in stage 1 compared with the dry process, whereas blending by the dry process brought the components into closer contact and strengthened the interactions among the different components in stage 1. Although blending by the wet process promoted the co-gasification performance, the improvement was not pronounced compared with the dry process. Therefore, the samples for the following experiments were mixed by the dry process.
TG Analysis of Digestate and Lignite
The experimental TG/DTG curves of lignite and digestate are shown in Figure 3a,b, respectively. It can be seen that the ash content of digestate was obviously higher than that of lignite, and the ash content of digestate increased with increasing AD time, in accordance with the proximate analysis results shown in Table 2. From the DTG curves, two reaction stages occurring in sequence can be observed. The decomposition of volatiles and emission of gaseous species took place in stage 1, with the temperature ranging from 200 °C to 400 °C. The pyrolysis temperature range of digestate was much lower than that of lignite, which ranged from 210 °C to 650 °C, meaning that digestate can be pyrolyzed more easily than lignite at lower temperatures. In the next stage, biochar gasification reactions took place with the temperature ranging from 650 °C to 950 °C. The gasification temperature range of digestate was higher than that of lignite, which ranged from 700 °C to 935 °C, indicating that lignite can be gasified earlier than digestate. Li et al. [11] investigated the effect of the mass ratio of grass to chicken manure on digestate TG. The results showed that the volatile matter in the digestate increased and the ash and fixed carbon contents decreased with increasing grass content. Because the anaerobic sludge brought inorganic non-flammable salts and sand into the mixture, the ash content of the digestate was higher than that of the grass. The pyrolysis and gasification performances (the maximum weight-loss rate, DTG_max, and the corresponding temperature, T_max) of lignite, digestate, and their mixtures were calculated from the TG results, as shown in Table 3. Among the single digestate samples, the highest and lowest DTG_max in stage 1 were 9.87%·min⁻¹ for AD0 and 7.22%·min⁻¹ for AD40, and in stage 2, 4.19%·min⁻¹ for AD0 and 2.62%·min⁻¹ for AD40, respectively. Clearly, the DTG_max of digestate decreased with increasing AD time in both the pyrolysis and gasification stages (even though the DTG_max of AD25 was slightly higher than that of AD10 in stage 2), indicating that the reactivity of digestate decreased as AD time continued, and the DTG_max in the pyrolysis stage was significantly higher than that in the gasification stage. For lignite, the trend was the opposite, and its DTG_max in the gasification stage was higher than that in the pyrolysis stage, similar to Xu's study [28], in which the DTG_max of lignite in the gasification stage was higher than that in the pyrolysis stage while the biomass showed the opposite trend. For the mixtures with a lignite-to-digestate mass ratio of 50:50% (wt/wt), Ln-AD0 and Ln-AD40 showed the highest DTG_max values of 5.22%·min⁻¹ and 4.26%·min⁻¹ in stages 1 and 2, respectively, whereas Ln-AD40 had the lowest DTG_max of 3.75%·min⁻¹ in stage 1 and Ln-AD25 had the lowest DTG_max of 4.11%·min⁻¹ in stage 2.
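As a concrete illustration of how DTG_max and T_max are extracted from TG data, the following sketch differentiates a synthetic TG curve recorded at the 15 °C·min⁻¹ heating rate used here and locates the maximum mass-loss rate within each stage; the TG curve and stage boundaries are illustrative assumptions, not the measured data.

```python
import numpy as np

def dtg_extrema(temperature, mass_pct, stage_bounds):
    """Compute the DTG curve (%/min) from TG data recorded at a constant
    heating rate, then report DTG_max and T_max within each stage."""
    beta = 15.0  # heating rate, deg C per min (as in the TG experiments)
    # dW/dt = (dW/dT) * (dT/dt): differentiate TG vs T, then scale by beta
    dtg = -np.gradient(mass_pct, temperature) * beta  # mass-loss rate, %/min
    results = {}
    for name, (t_lo, t_hi) in stage_bounds.items():
        mask = (temperature >= t_lo) & (temperature <= t_hi)
        i = np.argmax(dtg[mask])
        results[name] = (dtg[mask][i], temperature[mask][i])
    return results

# Synthetic TG curve standing in for a measured one (illustrative only).
T = np.linspace(25, 950, 2000)
W = 100 - 30 / (1 + np.exp(-(T - 330) / 25)) - 25 / (1 + np.exp(-(T - 800) / 30))
stages = {"S1 (pyrolysis)": (200, 650), "S2 (gasification)": (650, 950)}
for stage, (dtg_max, t_max) in dtg_extrema(T, W, stages).items():
    print(f"{stage}: DTG_max = {dtg_max:.2f} %/min at T_max = {t_max:.0f} C")
```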
Analysis of Interaction between Digestate and Lignite
Equations (2) and (3) were used to calculate the theoretical DTG curves and to identify whether interactions occurred in stage 1 and stage 2. Figure 4 compares the experimental and calculated TG and DTG curves of lignite and digestate at a mass ratio of 50:50% (wt/wt). It can be seen that the AD time had a strong influence on the co-pyrolysis and co-gasification performance. For stage 1, the experimental DTG values were lower than the theoretical calculations for all the mixtures, indicating that the addition of digestate inhibited the co-pyrolysis reaction. For stage 2, the experimental DTG values were higher than the theoretical calculations for AD10 and AD40, which indicated that the addition of digestate promoted the co-gasification reaction. For AD0 and AD25, the experimental DTG results were close to the theoretical calculations, so it was hard to judge whether the addition of AD0 or AD25 to lignite promoted the reaction in stage 2. To quantify the interaction between digestate and lignite in stage 1 and stage 2, two parameters, the root mean square (RMS) deviation and MR, were calculated according to Equations (4) and (5). The RMS of co-pyrolysis and co-gasification of lignite and digestate with different AD times is shown in Figure 5a. In stage 1, with increasing AD time, the RMS increased slightly and then gradually leveled off. In stage 2, the RMS first decreased and then increased with increasing AD time. The RMS values of Ln-AD0 and Ln-AD40 were higher than those of the other samples, indicating that the synergistic interaction between digestate and lignite was most remarkable for these mixtures. The MR of co-pyrolysis and co-gasification with different AD times is shown in Figure 5b. All the MR values for stage 1 were less than zero, indicating negative interactions between lignite and digestate, whereas the MR values for stage 2 were greater than zero, indicating positive interactions among all the mixtures during co-gasification. This agreed well with the trend of the experimental and calculated DTG curves. The interaction between lignite and the 40-d digestate was the most remarkable, because it had the most prominent RMS and MR values. Digestate mainly consisted of degraded corn straw, cow dung, and sludge. Corn straw and cow dung contain high levels of cellulose, hemicellulose, and lignin. The cellulose and lignin contents have a significant impact on co-pyrolysis; lignin can inhibit the pyrolysis of cellulose [29]. Therefore, the presence of cellulose and lignin may have a negative impact on the co-pyrolysis of lignite and digestate. Besides cellulose, hemicellulose, and lignin, the substrate also contained some protein and lipid, which are characterized by abundant fatty structures, long fatty chains, and low bond energies. During gasification, the protein and lipid break easily and form abundant free radicals and volatiles. The free radicals not only reacted with organic matter but also participated in the reactions of lignite, thus promoting the gasification reaction [30]. For an AD time of 0 d, the content of organic matter was the highest; therefore, the synergistic interaction in gasification was remarkable. With increasing AD time, more organic matter was hydrolyzed and consumed in AD. After being digested for 10 d, the synergistic interaction between lignite and digestate weakened relatively.
Moreover, as the AD reaction went on, more hydrolyzed biomass participated in AD and its structure was broken down, resulting in a more porous surface structure of the pyrolysis biochar, which was favorable to the gasification reaction and to the diffusion of gasification products. Therefore, the synergistic interaction in co-gasification was obviously enhanced when the mixture of AD40 and lignite was used as feedstock, and it would be expected to strengthen further as the AD time continues to increase. In addition, digestate and lignite were influenced by the catalytic effects of alkali and alkaline earth metals throughout the reaction process, which enhanced the thermal conversion performance. Alkali and alkaline earth metals in the ash, such as Ca and K, can significantly promote the thermal conversion reactions during pyrolysis and gasification. Edreis et al. [31,32] reported that mixtures of petroleum coke and biomass wastes had high reactivity because of the catalytic effects of the alkali metals in the mixture. Fe₂O₃ also played a catalytic role during the pyrolysis of sewage sludge, because it enhanced the evaporation of volatiles and promoted the cracking of the biochar [33].
Co-Gasification of Digestate and Lignite in a Lab-Scale Downdraft Fixed Bed Gasifier
The gasification experiments were conducted in a lab-scale downdraft fixed bed gasifier at 950 °C. Figure 6a shows the gas compositions and biochar yields of the single samples. The char yield of lignite was lower than those of all the digestate samples, reflecting the lower ash content of lignite, which was consistent with the TG experiments. The main gaseous products of digestate and lignite were CO, CO₂, CH₄, H₂, and CₙHₘ. The CO contents of AD10, AD25, and AD40 were lower than that of AD0, indicating that part of the organic matter was consumed during anaerobic digestion. However, the CO contents increased slightly as the anaerobic digestion time increased from 10 to 40 days. The reason may be that the surface of the biomass was hydrolyzed and its structure was broken down, resulting in a porous surface that favored gasification. Figure 6b presents the gasification results for the mixtures of digestate and lignite blended at a mass ratio of 50:50% (wt/wt). The calculated values of the gas compositions and char yields were based on the single samples according to Equation (2). The experimental CO contents were higher than the calculated values, and the experimental CO₂ contents and char yields were lower than the calculated values, i.e., the CO yield increased and the CO₂ consumption improved. This indicates that a synergistic interaction between digestate and lignite occurred in the co-gasification process. Among the four mixtures in Figure 6b, the experimental biochar yield of Ln-AD40 showed the greatest reduction relative to the calculated value, 11.89%. This means that the synergistic interaction of Ln-AD40 in the gasification process was the most remarkable, consistent with the TG interaction analysis in Section 2.3. According to Hu's study, part of the alkali metal K migrates from biomass to the coal char surface, while part of the alkaline earth metal Ca is transferred from coal to the biomass char surface during co-gasification, leading to a synergistic interaction between biomass and coal [34]. Metal migration between lignite and digestate during their co-gasification may likewise be the reason for the synergistic interaction observed here.
Kinetic Analysis
The kinetic parameters, including the activation energy E, the pre-exponential factor A, and the correlation coefficients R², for the different samples are shown in Tables 4 and 5. It can be seen that all correlation coefficients of each experimental sample in the two reaction stages were approximately 1, which showed that the corresponding reaction models fitted the experimental results well.
In stage 1, the activation energy of the lignite-digestate mixture at a mass ratio of 50:50% (wt/wt) was lower than that of digestate but higher than that of lignite. According to the calculation with model A_0.5, the activation energy of lignite in stage 2 was similar to that in Xu's study, in which the activation energy of lignite gasification was 183.90 kJ·mol⁻¹ [28]. A high activation energy means that the reaction needs a higher temperature or a longer reaction time [35]. The activation energy decreased with increasing AD time, and the addition of digestate to lignite significantly reduced the activation energy. As shown in Table 2, the ash content of digestate increased with AD time. The alkali and alkaline earth metals are not consumed in the AD process, so the digestate contains more of them as AD time increases, and they play a catalytic role in pyrolysis and gasification [31]. Hence, the catalytic effect became more and more obvious with increasing AD time, gradually reducing the activation energy.
Feedstock Materials
The coal samples were Xiaolongtan lignite (Ln). The digestate was produced from high-solids anaerobic digestion (HSAD) of corn straw, cattle manure, and sludge in lab-scale AD reactors. The corn straw was air-dried and crushed to less than 3 cm. Corn straw, sludge, cattle manure, and water were blended at a mass ratio of 1.13:3.65:6.39:1, and the total weight of the mixture was 7.00 kg. The AD conditions were as follows: total solids 30% and temperature 35 ± 1 °C. The mixtures were digested for 0, 10, 25, and 40 days, denoted as AD0, AD10, AD25, and AD40, respectively. The digestate samples were dried at 105 °C for 24 h, ground, and screened below 200 mesh. The ultimate and proximate analyses of digestate and lignite are shown in Table 2. The ash compositions of digestate and lignite are presented in Table 6.
Experimental Set-Up
The effects of two different blending methods (wet process and dry process) on the TG behavior of the mixtures were investigated at a digestate-to-lignite mass ratio of 50:50% (wt/wt). Mixing the digestate and lignite with ethanol as a dispersing medium was defined as the wet process; in the dry process, the samples were mixed in a mortar.
Secondly, the four kinds of digestate were blended with lignite at a ratio of 50:50% (wt/wt). TG experiments were carried out to investigate the interaction of lignite and digestate with different digestion times, blended by the optimal method. Afterwards, co-gasification experiments were carried out in the downdraft fixed bed gasifier to investigate whether co-gasification can improve the performance.
Finally, the reaction kinetics of co-pyrolysis and co-gasification were explored under different conditions. Each experiment was repeated three times.
The Wet Process and Dry Process
For the wet process, 1.00 g of digestate and 1.00 g of lignite were blended at a digestate-to-lignite ratio of 50:50% (wt/wt) in 50 mL of ethanol (≥ 99.7%, Beijing Chemical Works) in a 150 mL beaker. The mixture was stirred for 30 min at 350 r·min⁻¹ and then left to stand for 24 h. After the ethanol had volatilized, the beaker was placed in an oven at 105 °C for 24 h, and the samples were ground into powder. For the dry process, 1.00 g of digestate and 1.00 g of lignite were poured into a mortar for complete blending.
Preparation and Pore Structure Analysis of Pyrolysis Biochar
To investigate the influence of the blending methods on the co-pyrolysis and co-gasification performance, mixtures of digestate and lignite were prepared according to Section 3.3.1. Then, 2.00 g samples were pyrolyzed to produce biochar in a tubular furnace. The furnace was evacuated and purged with pure N₂ three times, N₂ was preloaded as carrier gas for 2 min, and the flow rate was then set at 100 mL·min⁻¹. The furnace temperature rose from room temperature to 950 °C at 15 °C·min⁻¹ and was held for 1 h. The pore structures of the pyrolysis biochars prepared by the two blending methods were characterized.
Nitrogen adsorption experiments (at 77 K) were conducted using a physical adsorption analyzer (Micromeritics ASAP 2020HD88). The surface area and average pore diameter of the biochar samples were measured using the Brunauer-Emmett-Teller (BET) method, and the pore volume was calculated from the t-plot method.
TG Experiments
Non-isothermal co-gasification experiments of digestate and lignite were carried out by TG analysis (Setaram Labsys Evo, Lyon, France). Pure CO₂ was introduced into the reactor as the gasifying agent, and the temperature was raised from room temperature to 950 °C at 15 °C·min⁻¹.
Reactivity Measurements
The pyrolysis and gasification reactivity was calculated as follows [36,37]:

R_m = DTG_max / T_max, (1)

where R_m is the reactivity (%·(min·°C)⁻¹), DTG_max is the maximum mass loss rate (%·min⁻¹), and T_max is the corresponding temperature (°C).
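A minimal worked example of Equation (1): the DTG_max values are those quoted above for AD0 and AD40 in stage 1, while the T_max values are hypothetical placeholders, since the measured values are reported only in Table 3.

```python
# Reactivity index R_m = DTG_max / T_max (Equation (1)); T_max values here
# are illustrative placeholders, not the measured data from Table 3.
samples = {"AD0, S1": (9.87, 320.0), "AD40, S1": (7.22, 335.0)}
for name, (dtg_max, t_max) in samples.items():
    r_m = dtg_max / t_max
    print(f"{name}: R_m = {r_m:.4f} %/(min*C)")
```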
Analysis of Interaction Between Digestate and Lignite
The theoretical TG/DTG values for the co-gasification of lignite and digestate with different AD times were calculated according to Equations (2) and (3) [35,38]. By comparing the theoretical and experimental TG/DTG results, it can be concluded whether there is a synergistic interaction during the co-pyrolysis and co-gasification of lignite and digestate:

w_cal = x_D·w_D + x_L·w_L, (2)

(dw/dt)_cal = x_D·(dw/dt)_D + x_L·(dw/dt)_L, (3)

where w is the weight loss (%), gas composition (vol%), or char yield (wt%), dw/dt is the weight-loss rate (%·min⁻¹), and x_D and x_L are the mass fractions of digestate and lignite, respectively. In order to quantitatively evaluate the interaction in co-pyrolysis and co-gasification, two parameters were used to characterize the reaction. One is the RMS, which judges whether there is an interaction between digestate and lignite but cannot indicate whether the interaction is positive or negative. The other parameter, MR, is defined as the ratio of the average deviation to the average calculated value. A positive MR indicates that the fractions of the mixture promote each other in the reaction; on the contrary, if MR is negative, the interaction is inhibitory [30,39].
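The sketch below implements the mixing rule of Equations (2)-(3) and the two interaction metrics as they are described verbally here; the exact normalization of the published Equations (4)-(5) may differ, so treat the formulas in the code as one plausible reading.

```python
import numpy as np

def calculated_dtg(dtg_digestate, dtg_lignite, x_d=0.5, x_l=0.5):
    """Theoretical (non-interacting) mixture DTG, Equation (3)."""
    return x_d * dtg_digestate + x_l * dtg_lignite

def interaction_metrics(dtg_exp, dtg_calc):
    """RMS measures the size of the experiment-vs-calculation deviation;
    MR (average deviation over average calculated value) gives its sign:
    MR > 0 means promotion, MR < 0 means inhibition."""
    diff = dtg_exp - dtg_calc
    rms = np.sqrt(np.mean(diff ** 2))
    mr = np.mean(diff) / np.mean(dtg_calc)
    return rms, mr

# Illustrative curves only: a positive offset mimics a synergistic stage 2.
dtg_d = np.array([1.0, 3.5, 4.2, 2.0, 0.8])
dtg_l = np.array([0.5, 1.2, 2.8, 3.9, 1.5])
dtg_mix_exp = calculated_dtg(dtg_d, dtg_l) + 0.15
rms, mr = interaction_metrics(dtg_mix_exp, calculated_dtg(dtg_d, dtg_l))
print(f"RMS = {rms:.3f}, MR = {mr:+.3f}")
```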
3.6. Co-gasification of Digestate and Lignite in a Lab-scale Gasifier
The gasification experiments of digestate and lignite were conducted in a lab-scale downdraft fixed bed gasifier, as shown in Figure 7. The internal diameter of the quartz tube is 35 mm, and the distributor plate is located in the middle of the quartz tube. A crucible placed on the distributor plate holds the digestate and lignite. For each experiment, the crucible was loaded with 2.0 g of feedstock in the downdraft fixed bed gasifier. CO₂ gas (99.99%) was used as the gasification agent at a flow rate of 60 mL·min⁻¹. The gasifier was heated from room temperature to 950 °C at a rate of 50 °C·min⁻¹ and held at that temperature for a certain time. A gas bag was used to collect the product gas, the composition of which was analyzed by gas chromatography.
Kinetics Study
The kinetics of co-pyrolysis and co-gasification were analyzed. The conversion rate is expressed as follows [41]:

dα/dT = (A/β)·exp(−E/(RT))·f(α), (6)

where α = (m₀ − m)/(m₀ − m_f) is the conversion ratio (%), T is the absolute temperature (K), A is the pre-exponential factor (min⁻¹), β is the constant heating rate (K·min⁻¹), R = 8.314 J·mol⁻¹·K⁻¹, f(α) is the reaction mechanism function, and m, m₀, and m_f are the instantaneous, initial, and final masses of the sample (g). Using the Coats-Redfern integral method, Equation (6) is integrated and fitted as follows [42,43]:

ln[g(α)/T²] = ln[AR/(βE)] − E/(RT), (7)

where g(α) is the integral form of the mechanism function [44]. Setting Y = ln{[1 − (1 − α)^(1−n)]/[(1 − n)T²]} for reaction order n ≠ 1, or Y = ln[−ln(1 − α)/T²] for n = 1, the fit takes the linear form Y = ax + b with x = 1/T, and the activation energy E and pre-exponential factor A of the reaction can be obtained from the slope (a = −E/R) and intercept (b = ln[AR/(βE)]). The reaction mechanism functions g(α) used for the calculations are shown in Table 7 [40]. The mechanism functions D₃ and D₄ correspond to three-dimensional diffusion models: D₃ is the Jander equation and D₄ the Ginstling-Brounshtein equation. The mechanism function A_0.5 is the Avrami-Erofeev equation with power exponent n = 0.5, corresponding to a random nucleation and nucleus growth model. Table 7. Typical kinetic model function expressions of g(α) and f(α) for solid-state reactions [40].
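As a sketch of the Coats-Redfern fit for the A_0.5 model, the code below linearizes Y = ln[g(α)/T²] against 1/T and recovers E and A from the slope and intercept; it assumes the common convention g(α) = [−ln(1 − α)]^(1/n), so that n = 0.5 gives g(α) = [−ln(1 − α)]², and uses synthetic conversion data rather than the measured curves.

```python
import numpy as np
from scipy.stats import linregress

R = 8.314      # gas constant, J/(mol K)
BETA = 15.0    # heating rate, K/min

def coats_redfern_A05(T, alpha):
    """Fit the Avrami-Erofeev A0.5 model, g(a) = [-ln(1-a)]^2, via the
    Coats-Redfern linearization Y = ln[g(a)/T^2] = ln(AR/(beta*E)) - E/(R*T)."""
    g = (-np.log(1.0 - alpha)) ** 2.0
    Y = np.log(g / T ** 2)
    fit = linregress(1.0 / T, Y)
    E = -fit.slope * R                        # activation energy, J/mol
    A = np.exp(fit.intercept) * BETA * E / R  # pre-exponential factor, 1/min
    return E, A, fit.rvalue ** 2

# Synthetic conversion data for illustration only (gasification stage, K).
T = np.linspace(973.0, 1173.0, 60)
alpha = np.clip((T - 973.0) / 210.0, 1e-4, 0.999)
E, A, r2 = coats_redfern_A05(T, alpha)
print(f"E = {E/1000:.1f} kJ/mol, A = {A:.3e} 1/min, R^2 = {r2:.4f}")
```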
Conclusions
The dry-blending process can improve the reactivity during co-pyrolysis, while the wet-blending process can promote co-gasification because of the increase in pore diameter and pore volume. The thermal conversion of the digestate, lignite, and their mixtures occurred in two reaction stages, pyrolysis and gasification. A synergistic interaction occurred in co-gasification, but not in co-pyrolysis. Based on the TG results and the co-gasification experiments in the downdraft fixed bed, the synergistic interaction was most remarkable when AD40 and lignite were mixed at a mass ratio of 50:50% (wt/wt). Three repeated experiments showed consistent results. From the kinetic study, the Avrami-Erofeev equation A_0.5, belonging to the random nucleation and nucleus growth model, was found to be the most suitable for the whole co-gasification process. The activation energy of the mixture decreased sharply from 192.37 kJ·mol⁻¹ to 145.73 kJ·mol⁻¹ with increasing AD time. Co-gasification was found to be a promising way to recover energy from digestate waste and lignite.
"year": 2020,
"sha1": "264dc4f5cd9d0c64d019562d80022b26f863f2c2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/25/3/459/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "808ecb8d563d19d61a03971f2649aa03a85e94bf",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Discrete Time-Frequency Signal Analysis and Processing Techniques for Non-Stationary Signals
This paper presents the methodology, properties, and processing of time-frequency techniques for non-stationary signals, which are frequently used in the biomedical, communication, and image processing fields. Two classes of time-frequency analysis techniques are chosen for this study: the short-time Fourier transform (STFT) from linear time-frequency analysis and the Wigner-Ville distribution (WVD) from quadratic time-frequency analysis. Algorithms for both techniques are developed and applied to non-stationary signals for spectrum analysis. The results of this study reveal that the WVD and its classes are the most suitable for time-frequency analysis.
Introduction
In nature, most signals are non-stationary, time-varying signals. Classical and modern methods are widely used to process stationary signals, transforming them from the time domain to the frequency domain and vice versa. Stationary signals do not change their statistical properties over the length of the analysis time. Many signals of biological origin vary in a random manner; these are called non-stationary signals, and their properties change over the length of the analysis time. The basic idea of time-frequency analysis is to design a joint function that can describe the characteristics of signals on a time-frequency plane. Time-frequency transforms map a one-dimensional function of time x(t) into a two-dimensional function of time and frequency x(t, f) [1].
In order to process such non-stationary signals, time-frequency analysis and processing methods are required. Generally, these fall into one of two categories of time-frequency distributions (TFDs): the linear TFDs and the quadratic TFDs (QTFDs). TFDs give useful information about frequency changes over time. A signal component can be considered as a continuity of energy in time without abrupt changes in frequency [2].
Non-stationary signals may be mono-component or multi-component.
Linear TFDs, such as the short-time Fourier transform (STFT), are often used as a first-choice tool in time-frequency analysis due to their simplicity of use and well-established algorithms and analysis techniques [3]. In order to obtain enhanced time-frequency resolution, QTFDs have been introduced. QTFDs are non-linear methods in which the Wigner-Ville distribution (WVD) is the primary distribution, from which many classes, called Cohen's class TFDs, have been derived for various non-stationary signal-processing applications. Consequently, studies on time-frequency representations (TFRs) have been applied to analyze, modify, and synthesize non-stationary or time-varying signals. In this paper, two types of time-frequency representation techniques are considered, the linear and the quadratic time-frequency distributions, and their principal properties are investigated. The realization of these distributions on hardware and software platforms requires a discrete version. As a result, algorithms were developed for the discrete-time STFT and WVD techniques and were tested on non-stationary signals for joint time-frequency analysis.
Short-Time Fourier Transformation
The STFT is a linear time-frequency representation based on the straightforward approach of slicing the waveform of interest into a number of short segments and performing the analysis on each segment using the standard Fourier transform. A window function is applied to segment the data, which effectively isolates the segment from the overall signal; since the segment within the window is assumed to be stationary, this provides time localization. The Fourier transform is then applied to the windowed data, and the spectrum or spectrogram can be calculated from the estimated Fourier coefficients.
The STFT of the signal x(t) is given by [4]

STFT(τ, f) = ∫ x(t) w(t − τ) e^(−j2πft) dt, (1)

where w(t) is a window function and τ is the variable that slides the window across the signal x(t). The discrete version of the STFT of the signal x(n) is given by

STFT(m, k) = Σ_n x(n) w(n − m) e^(−j2πkn/N). (2)

Upon selection of the discrete STFT, the next step is to select an appropriate window and its size such that the two closest sinusoids can be distinguished using Equation (3). However, non-stationary signals may involve a large number of sinusoids in close proximity. This results in a very small Δf, and consequently a large window is required. This makes the STFT very similar to the Fourier transform and hampers temporal resolution. In order to select an appropriate window size, a novel empirical model is proposed in [5] [6], which adaptively selects a window size and is given in terms of the sampling frequency f_s, with μ = 386.3 for
Wigner and Wigner-Ville Distributions
All quadratic time-frequency representations that satisfy time and frequency shift invariance belong to the general class of distributions introduced by Cohen, given by the following expression [7]:

C(t, f) = ∭ e^(j2πθ(u−t)) φ(θ, τ) x(u + τ/2) x*(u − τ/2) e^(−j2πfτ) du dτ dθ,

where φ(θ, τ) is the kernel function. Here, the range of all integrations is from −∞ to ∞.
The WDF uses a real-valued signal x(t), which has positive and negative frequency components; this introduces aliasing, i.e., cross-terms between positive and negative frequencies, in the time-frequency domain.
Wigner-Ville Distribution
A simple approach to avoid aliasing is to use an analytic signal before computing the WDF. Ville (1948) proposed the use of the analytic signal in time-frequency representations of a real signal. An analytic signal is a complex signal that contains both real and imaginary components. The advantage of using the analytic signal is that in the frequency domain the amplitudes of the negative frequency components are zero. The imaginary part is obtained by the Hilbert transform. The analytic signal may be expressed by [8] [9]

z(t) = x(t) + jH[x(t)],

where H[x(t)] is the Hilbert transform, generated by convolution of x(t) with the impulse response h(t) = 1/(πt) of a 90° phase shifter. Using the analytic signal z(t) in place of the real signal, the continuous-time WVD is obtained for a continuous-time signal as

W(t, f) = ∫ z(t + τ/2) z*(t − τ/2) e^(−j2πfτ) dτ,

and its discrete version is

W(n, k) = 2 Σ_m z(n + m) z*(n − m) e^(−j4πkm/N).
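A brief numerical check of the analytic-signal construction (a sketch in Python, although the paper's implementation is in MATLAB): after forming z(t) = x(t) + jH[x(t)], essentially no spectral energy remains at negative frequencies.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 50 * t)        # real test signal
z = hilbert(x)                        # analytic signal: x(t) + j*H[x(t)]

spectrum = np.fft.fft(z)
freqs = np.fft.fftfreq(len(z), 1.0 / fs)
neg_energy = np.sum(np.abs(spectrum[freqs < 0]) ** 2)
total = np.sum(np.abs(spectrum) ** 2)
print(f"fraction of energy at negative frequencies: {neg_energy / total:.2e}")
```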
STFT Algorithm Implementation
The fast Fourier transform (FFT) is applied in a straightforward manner using a separate function, [B,t,f], coded in MATLAB. Here, B is a complex matrix containing the magnitude and phase of the STFT frequency spectrum, with the rows encoding the time axis and the columns representing the frequency axis; t and f are optional output vectors that are helpful for plotting.
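A minimal Python analogue of such a [B,t,f] function is sketched below; the Hann window and hop size are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def stft(x, fs, win_len=128, hop=32):
    """Discrete STFT returning (B, t, f): B is a complex matrix with rows
    indexing time frames and columns indexing frequency bins."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    B = np.empty((n_frames, win_len // 2 + 1), dtype=complex)
    for m in range(n_frames):
        seg = x[m * hop : m * hop + win_len] * window
        B[m] = np.fft.rfft(seg)          # one-sided spectrum of each frame
    t = (np.arange(n_frames) * hop + win_len / 2) / fs
    f = np.fft.rfftfreq(win_len, 1.0 / fs)
    return B, t, f
```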
DWVD Algorithm Implementation
The following steps are involved in developing the algorithm: Step 1: Convert the real signal into an analytic signal using the Hilbert transform.
Step 2: Compute the WVD using a separate function whose input arguments are x and f_s.
Step 3: Compute the instantaneous autocorrelation using a loop to construct an array.
Step 4: Find the WVD magnitude using FFT.
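A compact sketch of Steps 1-4 follows (in Python rather than MATLAB; the lag indexing and frequency scaling follow one common DWVD convention and are assumptions, not the authors' exact code).

```python
import numpy as np
from scipy.signal import hilbert

def dwvd(x, fs):
    """Discrete Wigner-Ville distribution following Steps 1-4: analytic
    signal, instantaneous autocorrelation, then FFT over the lag axis."""
    z = hilbert(x)                       # Step 1: analytic signal
    n = len(z)
    half = n // 2
    acf = np.zeros((n, n), dtype=complex)
    for i in range(n):                   # Step 3: K(i, m) = z[i+m] * conj(z[i-m])
        max_lag = min(i, n - 1 - i, half - 1)
        for m in range(-max_lag, max_lag + 1):
            acf[i, m % n] = z[i + m] * np.conj(z[i - m])
    W = np.real(np.fft.fft(acf, axis=1)) # Step 4: FFT over the lag axis
    t = np.arange(n) / fs
    f = np.arange(n) * fs / (2.0 * n)    # kernel uses lag 2m, so f = k*fs/(2n)
    return W, t, f
```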
1) Two Sequential Sinusoids
The proposed STFT and WVD techniques were tested on different inputs, such as two sequential sinusoids of 10 Hz and 50 Hz. Each sinusoid is preceded and followed by a time gap of 0.5 sec. The simulated signals are shown in Figure 1.
The STFT magnitude spectrum and contour plot are shown in Figure 2. The lack of finite support in either time or frequency is evident from the appearance of energy slightly before and after 0.5 sec and of energy at frequencies other than 10 and 50 Hz, as shown in Figure 3. Further, when the window length is increased, the frequency resolution increases, but the time resolution decreases.
The DWVD magnitude spectrum and contour plot are shown in Figure 4; a cross term appears midway between the two auto terms, introduced by the instantaneous autocorrelation.
2) Chirp Signal
A sinusoid whose frequency increases over time is called a chirp signal.
This signal can be generated by multiplying the argument of a sine function by a linearly increasing term. A linearly increasing sine wave that varies between 10 and 200 Hz over a 1 sec period was generated, as shown in Figure 6.
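A small sketch of that construction (with an assumed 1 kHz sampling rate); note that the instantaneous frequency is the derivative of the phase, so the phase must be the integral of the desired 10-to-200 Hz sweep rather than a simple product.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f0, f1 = 10.0, 200.0
# Instantaneous frequency f(t) = f0 + (f1 - f0)*t; the phase is its integral.
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2)
chirp = np.sin(phase)
```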
The STFT magnitude spectrum and contour plots are shown in Figure 7, and the DWVD magnitude spectrum and contour plots are shown in Figure 9.
3) ECG Signal
The ECG signal is a non-linear, multi-component, non-stationary signal.
The discrete ECG waveform data were imported into MATLAB [15]. Table 1 shows the important parameters for designing the window function.
From the two STFT spectra of the arrhythmia ECG signal, the window width plays a predominant role: the 128-point window resolves the high-frequency components much better than the 32-point window, even though there is a compromise in time resolution, as shown in Figure 12 and Figure 13. In the DWVD magnitude spectrum shown in Figure 14, the actual signal components lie at short distances from one another, so that local oscillations take place, introducing cross terms between the auto terms. When these are dominant enough, the DWVD cannot provide good time and frequency resolution. From the above discussion, if the signals are multi-component, non-linear, non-stationary signals, the DWVD is not suitable for analyzing them unless the cross terms are eliminated.
Conclusion
In this work, two time-frequency analysis methods, the discrete STFT and WVD algorithms, were developed and their performance was compared for the purpose of defining the time-frequency resolution of non-stationary signals and its applications. The performance of these methods was tested on three different non-stationary signals, and their merits and demerits were investigated. The results of this study revealed that the time-frequency resolution of the STFT technique is inversely related to the window length: increasing the window length increases the frequency resolution, but at the cost of reduced frequency-tracking capability. Conversely, the WVD has several advantages over the STFT. By using an analytic signal, it reduces the cross terms and the required sampling frequency. The DWVD also maintains properties such as the marginals and invariance for non-stationary signals, and it produces a good spectrum of the time-frequency structure. In the DWVD, the kernel φ(θ, τ) = 1 introduces cross terms. These cross terms can be reduced by introducing windows, kernels, and adaptive filters, which will make the DWVD a more suitable and powerful tool for non-stationary signal analysis. Since the Wigner-Ville distribution preserves all the information, it can also be used for two-dimensional signal processing such as digital image processing. This work supports the need for time-frequency distributions when dealing with non-stationary signals.
"year": 2018,
"sha1": "bca0b3c4d5c90bf327279dc5a72748f9a6f8f665",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=87557",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bca0b3c4d5c90bf327279dc5a72748f9a6f8f665",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Chronic Hepatitis C Virus Infection, Why Not Treat Now?
Biologics are widely used in the treatment of rheumatoid arthritis (RA) these days and have shown excellent therapeutic effect. However, they tend to weaken the immunity and increase the susceptibility to infection. This becomes more problematic in patients with chronic infection of hepatitis B virus (HBV) and hepatitis C virus (HCV) because biologics can lead to the reactivation of these hepatitis viruses.
Biologics are widely used in the treatment of rheumatoid arthritis (RA) these days and have shown excellent therapeutic effect. However, they tend to weaken the immunity and increase the susceptibility to infection. This becomes more problematic in patients with chronic infection of hepatitis B virus (HBV) and hepatitis C virus (HCV) because biologics can lead to the reactivation of these hepatitis viruses. The reactivation of HBV can cause fatal outcomes in some patients and preemptive antiviral therapy is strongly recommended in all HBV-infected patients undergoing immunosuppressive therapy including biologics for RA. However, there are no specific guidelines about the management of chronic HCV infection in patients treated with immunosuppressive therapy.
In contrast to HBV, HCV reactivation usually follows a mild clinical course and rarely causes severe hepatitis or hepatic decompensation. Lee et al. [1] reported that enhanced HCV replication or increase in HCV RNA level was relatively common (27%) in HCV-infected patients treated with systemic chemotherapy or immunosuppressive therapy, but it did not lead to serious sequelae. Even patients with liver cirrhosis had relatively good liver function in spite of enhanced HCV replication. Torres et al. [2] reported that HCV reactivation occurred in 23% of HCV-infected patients receiving cancer treatment and most had an unremarkable clinical course with no liver failure or liver-related death.
The presence of HCV infection is not a contraindication to therapy with tumor necrosis factor-alpha (TNF-α) inhibitors. Although TNF-α inhibitors potentially increase HCV replication, only a few cases of drug withdrawal due to suspected recurrence of HCV-related liver disease have been reported so far [3]. This suggests that TNF-α inhibitors are safe in patients with HCV infection in the short term, although there are insufficient data to assess their long-term safety.
In this edition of the journal, Kwon et al. [4] reported the results of their retrospective study on the changes in transaminases and viral load associated with biologic therapy in 17 RA patients with HCV infection. Transaminases were increased in 4 patients (2 on adalimumab and 2 on tocilizumab). Two adalimumab-treated patients also showed increases in HCV RNA level. One patient stopped adalimumab, and the other received anti-viral therapy using interferon (IFN) and ribavirin, which was very effective. One tocilizumab-treated patient with an increased HCV RNA level received anti-viral therapy using ribavirin and sofosbuvir, and in the other tocilizumab-treated patient, the transaminases normalized within 2 months. The authors concluded that the use of biologics in HCV-infected patients can lead to changes in transaminases and viral load, so regular follow-up of liver function and viral RNA is necessary. HCV infection is confirmed by the simultaneous presence of anti-HCV antibody and HCV RNA. However, the HCV RNA level did not correlate with either the severity of liver disease or the risk of progression to cirrhosis or hepatocellular carcinoma (HCC) in previous studies [5,6], and it is controversial whether it is essential to follow up HCV RNA levels in HCV-infected patients treated with immunosuppressive agents [3]. As for treatment, HCV is very different from HBV in that HCV is a curable disease. Once antiviral treatment leads to a sustained virologic response (SVR), defined as undetectable HCV RNA 12 to 24 weeks after the end of therapy, reappearance of the HCV RNA is very rare [5]. IFN-based regimens have long been used for the treatment of HCV but are only moderately effective [7]. Furthermore, these regimens have significant adverse effects, which have made them difficult to use in patients being treated with immunosuppressive agents.
However, recently introduced direct-acting anti-virals (DAAs) (ledipasvir/sofosbuvir, sofosbuvir, daclatasvir, asunaprevir, ombitasvir/paritaprevir/ritonavir, dasabuvir, elbasvir/grazoprevir) have changed the approach to HCV hepatitis [7]. These oral regimens are much more tolerable than IFN-based therapies. They have shown SVR rates greater than 90% and are rapidly replacing IFN-based therapies. Because of the high tolerability and efficacy of DAAs, many patients on immunosuppressive therapy after organ transplantation have been simultaneously treated with DAAs for HCV infection. Furthermore, there is a report that DAAs can be used concomitantly with anti-neoplastic agents, and this therapeutic intervention may prevent delays in the administration of chemotherapy in HCV-infected cancer patients [8].
When it comes to the treatment of HCV-infected patients with RA, concomitant treatment with biologics and DAAs seems promising. When etanercept, a TNF-α inhibitor, was used with IFN and ribavirin for the treatment of HCV, the SVR rate was higher than in the control group treated with IFN and ribavirin alone (63% versus 32%) [9]. There has been concern about RA flares following IFN treatment, but DAAs can avoid this problem.
Cirrhosis caused by HCV is a leading indication for liver transplantation, and HCV is the most common cause of HCC in most industrialized countries [5]. When patients achieve an SVR to treatment, HCV does not recur in greater than 99% of cases, even in those who are immunosuppressed or receive chemotherapy, and HCC is less likely to develop in these patients. Considering the sequelae of chronic HCV infection and the effectiveness and tolerability of the recent DAAs, it appears that the time has come to think about simultaneously treating HCV through close cooperation with hepatologists, rather than merely following up liver function and HCV RNA levels in HCV-infected RA patients on biologic treatment.
ACKNOWLEDGMENTS
The author is grateful to Prof. Woo-Im Chang (Department of Internal Medicine, The Catholic University of Korea, St. Vincent's Hospital) for her critical advice for this manuscript.
"year": 2018,
"sha1": "840444ceef2e4d9e2e504bfc430f6d034dc83a15",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4078/jrd.2018.25.3.151",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "841f2753eddca9edb42e77b865ef7f3515c12654",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of Operative Management for Perforated Sigmoid Diverticulitis
Abstract Objectives: The present study was performed to examine operative cases of perforated diverticulitis and to consider the corresponding treatments. Methods: In the 10-year period from January 2007 to December 2016, 20 cases of perforated sigmoid diverticulitis were treated surgically in our hospital. We examined the background factors, physical findings, preoperative diagnoses, surgical findings, and postoperative courses. Results: Twenty patients with sigmoid colon diverticulitis, eleven males and nine females with a median age of 67.5 years (25 to 87 years), were included in the analysis. Preoperative complications included chronic kidney failure, including post-transplantation failure, in 4 cases (20%), among others. Surgery was performed using open methods, including 15 patients who underwent the Hartmann procedure and 5 patients who underwent colon resection or suture closure with stoma construction. Among the postoperative complications, stoma dropout, deep venous thrombosis with pelvic abscess formation, pneumonia, and wound dehiscence were detected in one case each. Postoperative polymyxin direct hemoperfusion (PMX-DHP) was effective in 2 cases (10%). No deaths occurred. Conclusions: For perforated sigmoid diverticulitis, we performed colon resection or suture closure with stoma construction by open methods.
Introduction
Accurate diagnosis and treatment of perforated sigmoid diverticulitis are important considering its frequency among serious operative emergencies [1][2][3] . The efficacy of laparoscopic lavage has recently been reported in Europe for the treatment of perforated diverticulitis [4][5][6][7] . However, compared to the conventional Hartmann procedure, some reports have revealed high rates of postoperative abscess formation among other severe complications, including mortality 8,9) . In this study, we analyzed operative cases of perforated sigmoid diverticulitis and examined their corresponding management.
Methods
In the 10-year period from January 2007 to December 2016, 20 cases of perforated sigmoid diverticulitis were treated surgically in our hospital (Tables 1 and 2). We examined the background factors, physical findings, preoperative diagnoses, surgical findings, and postoperative courses, and considered the treatments applied for perforated sigmoid diverticular disease. The severity of diverticulitis was graded using Hinchey's criteria 10). Patients with Hinchey class Ⅰ have small pericolic or mesenteric abscesses, whereas those with Hinchey class Ⅱ have larger abscesses. Hinchey class Ⅲ is present when a peridiverticular abscess has ruptured and caused purulent peritonitis.
Rupture into the free peritoneal cavity with fecal contamination signifies Hinchey class Ⅳ. Surgery for sigmoid diverticulitis in our hospital is indicated for Hinchey class Ⅲ or Ⅳ perforation, and for Hinchey class Ⅰ or Ⅱ conditions including refractory cases, recurrent cases, and cases with fistula formation. We reviewed the cases of Hinchey class Ⅲ or Ⅳ perforation. The perforation type was classified as free perforation or cover perforation according to the intraoperative findings at laparotomy. Free perforation applies to a perforated segment that is exposed in the free abdominal cavity, with discharge of digestive juice and feces. Cover perforation applies to a perforated segment that is covered by other organs, such as the intestinal canal or omentum. Depending on the condition of the perforation site, we selected colon resection, including the Hartmann procedure, or suture closure. Regarding postoperative abscesses, abdominopelvic CT was generally performed within the first week after the operation for evaluation. Polymyxin direct hemoperfusion (PMX-DHP) was applied in cases of septic shock after surgery.
Background factors / physical findings
Twenty patients with perforated sigmoid colon diverticulitis, eleven males and nine females with a median age of 67.5 years (25 to 87 years), were included in the analysis. Preoperative complications included 4 cases of chronic kidney failure (20%), including post-transplantation failure, and 2 cases of multiple myeloma (10%) (Table 3). Seven patients (35%) with long-term steroid use (including previous use) were included. Four patients (21%) had no complications. In addition, a sigmoid colon diverticulum was found to have developed in one patient after gastroscopy with barium.
Preoperative diagnosis
By the Hinchey classification, 12 cases (60%) were class Ⅲ and 8 cases (40%) were class Ⅳ. Free abdominal gas was observed in all cases. Twelve patients (60%) underwent surgery within 24 hours of symptom onset, four patients (20%) from 2 to 5 days after onset, and four patients (20%) from 6 to 10 days after onset. The median preoperative white blood cell count was 10,095/μl (910 to 21,230/μl), and the median CRP level was 11.72 mg/dl (0.12 to 57.2 mg/dl).
Operation
According to the intraoperative findings, 14 cases (70%) were diagnosed as free perforation and 6 cases (30%) as cover perforation. All 8 Hinchey class Ⅳ cases were diagnosed as free perforation by the intraoperative findings. Of the 12 cases diagnosed as Hinchey class Ⅲ, 6 were diagnosed as free perforation by the intraoperative findings and 6 as cover perforation. Surgeries were performed by open methods: 15 patients underwent the Hartmann procedure, 4 patients underwent colon resection or suture closure of the perforation site with stoma construction, and 1 patient underwent stoma construction at the perforation site (Table 4). The median operative time was 175.5 minutes (91 to 282 minutes), and the median bleeding volume was 116 ml (10 to 603 ml). All patients underwent intraperitoneal lavage with 10,000 ml to 20,000 ml of physiological saline during open surgery. Broad-spectrum antibiotics were administered perioperatively in all cases.
Postoperative course
Regarding postoperative complications, stoma dropout, deep venous thrombosis with pelvic abscess formation, pneumonia, and wound dehiscence were each detected in one patient (Table 5). Reoperation was required in 3 cases (15%). In one case, washout and drainage were initially performed because the perforation site could not be identified, but generalized peritonitis developed, and reoperation was performed under a diagnosis of perforated sigmoid colon diverticulitis. Stoma reconstruction for postoperative stoma dropout and split-skin grafting for wound dehiscence were performed in one patient each. Postoperative PMX-DHP was effective in 2 patients (10%). No deaths occurred. Fourteen patients were discharged postoperatively, with a median hospitalization period of 25 days (12 to 62 days). Six patients (32%) were postoperatively transferred to other departments or hospitals, with a median hospital stay of 19 days (2 to 89 days). The reasons for transfer to other departments were treatment of the original disease (2 cases) and treatment of postoperative pneumonia (1 case).
Discussion
In the 8 Hinchey class Ⅳ cases, the surgical findings revealed free perforation. Six of the 12 Hinchey class Ⅲ cases were cover perforations; the abscesses were localized by the omentum and small intestine, the ascitic fluid was serous, and no contamination was observed. Currently, the Hartmann procedure is generally performed for perforated colon diverticulitis, but for patients with localized abscesses and limited intraperitoneal contamination, laparoscopic lavage may also be indicated. Six cases diagnosed as Hinchey class Ⅲ before surgery were later determined to be Hinchey class Ⅰ or free perforation. The degree of intraperitoneal contamination must therefore be considered when evaluating recent reports of laparoscopic lavage. Depending on the size of the perforation and the presence of blood-flow disturbances, intestinal resection should be selected for cases with large perforations and impaired blood flow, even in cases without localized abscesses and intraperitoneal contamination. Regarding the preoperative status, the patients had chronic kidney failure, systemic diseases, and cancer; the status and treatment of these diseases should be considered in relation to the perforated diverticular disease 10). If the treatment of the perforated diverticular disease does not follow a good course, the situation can become more serious because of the underlying disease, so the operation should be selected according to the situation. Among the postoperative complications, only one persistent abscess was detected. Therefore, intraperitoneal lavage and resection of the perforated colon can be considered effective. The perforation associated with the persistent abscess was 2 cm in size, and a large fecal lump was found in the pouch of Douglas. Among the preoperative complications, many patients with long-term steroid use and systemic disease were observed. Oral corticosteroids are reportedly associated with an increased risk of diverticular perforation 11). As a postoperative complication, stoma dropout was observed in an obese patient with a body mass index (BMI) of 31 who used high-dose steroids for the treatment of granulomatosis with polyangiitis (GPA), suggesting an influence of the underlying disease. In some cases, the site of perforation could not be confirmed during surgery. In one case, we found free air in the upper abdomen, initially leading us to suspect upper gastrointestinal perforation. During surgery, the perforation site could not be identified even by intraperitoneal observation, and although extensive washout and drainage were performed, peritonitis recurred, requiring reoperation. Although this situation is relatively infrequent, the perforation site may not be identifiable, especially in cases of small diverticular perforations. In particular, when the entire small intestine is distended due to edema with generalized peritonitis, the visual field is poor, complicating identification of the perforation site. When surgery is indicated, these considerations must be noted, and as much information as possible should be obtained before surgery.
In recent years, several reports on the utility of laparoscopic lavage for perforated diverticulitis have been published in Europe [4][5][6]. However, the Ladies trial compared laparoscopic peritoneal lavage and sigmoidectomy for perforated diverticulitis with purulent peritonitis in 34 teaching hospitals in Belgium, Italy, and the Netherlands from 2010 to 2013 9). Major morbidity and mortality within 12 months were observed in 30 (67%) of the 45 patients in the laparoscopic peritoneal lavage group and in 25 (60%) of the 42 patients in the sigmoidectomy group (odds ratio 1.28, 95% CI 0.54-3.03, p=0.58). The researchers concluded that laparoscopic lavage was not superior to sigmoidectomy for the treatment of purulent perforated diverticulitis. Additionally, the SCANDIV randomized clinical trial did not support laparoscopic lavage 8). Currently, laparoscopic lavage for the treatment of Hinchey class Ⅲ perforated diverticulitis remains controversial.
There is a general consensus that Hinchey class Ⅳ cases should not be selected for laparoscopic peritoneal lavage 7,12). Although some Hinchey class Ⅲ cases had severe preoperative complications and poor general condition, surgery could be conducted relatively safely by performing resection and thorough lavage of the abdominal cavity. Intraperitoneal contamination and preoperative complications must be carefully considered in the context of laparoscopic washout and drainage for Hinchey class Ⅲ cases.
In conclusion, we performed colon resection or suture closure with stoma construction by open methods for 20 cases of perforated sigmoid diverticulitis at our hospital. Many patients had preoperative complications, but no deaths occurred postoperatively, and the postoperative course was relatively favorable. When considering laparoscopic lavage for Hinchey class Ⅲ cases, intraperitoneal contamination and preoperative complications should be taken into account.
Conflicts of interest: None.
"year": 2019,
"sha1": "da9d138b951fc9914e583e9826462fe67467e1db",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jjcs/44/2/44_161/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "53f7ddc096e0b147a1186b604b0dc6ff43acfc80",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Control of Noise in Chemical and Biochemical Information Processing
We review models and approaches for error-control in order to prevent the buildup of noise when gates for digital chemical and biomolecular computing based on (bio)chemical reaction processes are utilized to realize stable, scalable networks for information processing. Solvable rate-equation models illustrate several recently developed methodologies for gate-function optimization. We also survey future challenges and possible new research avenues.
This review is organized as follows. In Section 2, we describe general concepts for considering (bio)chemical binary gates. Gate design for decreasing noise amplification is addressed in Section 3. Section 4 addresses optimization of AND gates. Section 5 describes gate design as part of a network. Section 6 is devoted to summary and discussion of future challenges.
From (Bio)chemical Information Processing Gates to Networks
Processing of large quantities of information at high levels of complexity requires utilization of a paradigm of scalable networking of simple gates. Recent chemical and biochemical computing literature has usually implicitly assumed an approach similar to that used in Si-chip electronic devices [184,185] of designing fault-tolerant systems that can avoid buildup of noise without prohibitive use of resources. However, with biomolecules, one could perhaps also use design concepts borrowed from processes in living organisms. [186] Hybrid solutions can be expected, with bio-inspired elements supplementing the electronics designs. Other approaches include massive parallelism, [182] specifically with DNA. [187] Thus far, the vast majority of the recent enzyme-based biocomputing realizations and analyses reported, [10,12,[14][15][16][17]19,20,30] and similarly the rest of the biomolecular computing literature, have at best realized only small networks of gates, even though the aim has been to follow the digital approach based on analog gates and other elements connected in increasing-complexity networks. [184,185] Biomolecular computing is presently also far from the complexity of coupled biochemical reaction sets needed for mimicking processes in living organisms. Additionally, near-future applications of moderate-complexity biocomputing systems will likely be in novel sensor systems, [15] processing several input signals and yielding Yes/No digital outputs, corresponding to the "Sense/Respond" or "Sense/Diagnose/Treat" actions. Therefore, both the biochemical steps and the output transduction to electrodes/electronic computers for the "action" step, suggest the use of the binary Yes/No digitalization.
More importantly, the binary/digital information-processing paradigm offers an approach for controlling the level of noise buildup in complex networks. Chemical and biochemical systems operate at larger levels of noise than electronic computers. The input reactant and "gate machinery" chemical concentrations, e.g., of catalysts, typically fluctuate within at least a couple of percent of the range of values identified as the binary 0 and 1, and careful attention to the control of noise buildup is required for networks as small as 2-3 gates. [10,15,19,20] While we talk of digital information processing, the network elements are actually always analog. Figure 1 offers an illustration for the simplest "gate" function: the identity, meaning signal transmission, conversion, or transduction. A possible analog response is also shown. The input and output signals are actually not limited to the two values of the digital/binary 0 and 1 selected as appropriate for a specific application, nor to the range bounded by them. The analog signals can also be considered beyond the "digital" range, if physically allowed, as shown by the broken-line extensions. Chemical concentrations can only be non-negative, but the binary 0 does not have to always be at the physical zero.
A simple model is offered by an irreversible diatomic chemical reaction, described within the rate-equation approximation with the rate constant k: the species A, of initial concentration A(0) = A_0, pairwise combines to yield the product, C, of concentration C(t) at time t,

A + A → C,   (1)

dA/dt = -2kA^2,   (2)

with the solution

A(t) = A_0 / (1 + 2kA_0 t),   (3)

C(t) = [A_0 - A(t)]/2 = kA_0^2 t / (1 + 2kA_0 t).   (4)

We assume that the information-processing application identifies a reference value, A_max, of the input as the digital 1 and, in the simplest case, the physical zero as the digital 0 input. The product of the reaction constitutes the output signal used/measured at the "gate time" t_g. The binary values for the output are then set by the gate itself: 0 and 1 will be, respectively, C = 0 and C_max = k t_g A_max^2 / (1 + 2k t_g A_max). The logic-range variables, x and z, represent the input, A(0) = A_0, and the output, C(t_g), normalized to the "digital" range of values:

x = A_0 / A_max,   (5)

z = C(t_g) / C_max.   (6)

With these definitions, we have the gate-response function, shown in Figure 2,

z(x) = x^2 (1 + 2σ) / (1 + 2σx),   (7)

which depends on the single parameter combination

σ = k t_g A_max.   (8)

The digital 1 of the input, A_max, is generally determined by the specific application or by the other network elements to which the present gate is connected. However, we can to some extent vary the reaction rate constant, k, by altering the physical and chemical conditions of the system, within the range allowed by the operational environment. We can also possibly adjust the reaction time, t_g. This allows a certain degree of control of the response-function shape, which can be used for gate design and optimization, as elaborated on in the following sections.
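As an aside not present in the original analysis, a minimal Python sketch of Equation (7) illustrates this limited tunability numerically; the parameter values are arbitrary choices for the demonstration.

import numpy as np

def z_response(x, sigma):
    # normalized output of the A + A -> C gate at gate time t_g, Eq. (7)
    return x**2 * (1.0 + 2.0 * sigma) / (1.0 + 2.0 * sigma * x)

for sigma in (0.01, 0.1, 1.0, 10.0, 100.0):
    # slope of z(x) at the logic-1 input; the slope at x = 0 is always zero
    slope_at_1 = 2.0 * (1.0 + sigma) / (1.0 + 2.0 * sigma)
    print(f"sigma = {sigma:7.2f}   z(0.5) = {z_response(0.5, sigma):.3f}   "
          f"slope at x = 1: {slope_at_1:.3f}")

Even four orders of magnitude in sigma only move the slope at x = 1 between its limits of 2 (sigma -> 0) and 1 (sigma -> infinity), which quantifies the weak shape dependence discussed next.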
The considered chemical reaction generally yields concave shapes, shown in Figure 2.
Furthermore, as illustrated in the figure, the actual shape of the gate-response function cannot be varied significantly by just "tweaking" the parameters. Indeed, order-of-magnitude variations in the parameter values - which might not be practical in many applications - are needed to achieve a qualitatively different shape. This difficulty [10,12,19] is shared by most, but especially catalytic, (bio)chemical computing gates and can be traced to the fact that the "activities" of reagents and catalysts are effectively cancelled out to leading (linear) order when defining the rescaled "logic-range" variables; see Equations (5)-(6). Finally, we note that both variables in Figure 2 need not be limited to [0,1]; the curves are well defined, and shown, for x and z larger than 1 as well.
Noise Handling Considerations
Important topics of noise amplification and filtering will be addressed here using the example of the "identity" gate just introduced. Two-input/one-output AND gates will be addressed in the next section. The following reaction exemplifies a response more representative of typical chemical kinetics:

A + B → C,   (9)

with the rate constant K and initial conditions A(0) = A_0, B(0) = B_0, C(0) = 0. This reaction can be considered as a two-input process. However, here we regard A_0 as the input set by the environment in which the gate is used, whereas B is a controllable-supply "gate machinery" chemical whose initial concentration B_0 will for now be assumed small (so that it limits rather than drives the output).
The rate equations and their solution are summarized as

dA/dt = dB/dt = -KAB,   C(t) = A_0 - A(t) = B_0 - B(t),   (10)

C(t_g) = A_0 B_0 (e^{K A_0 t_g} - e^{K B_0 t_g}) / (A_0 e^{K A_0 t_g} - B_0 e^{K B_0 t_g}).   (11)

Equations (5)-(6) are then used to rescale the input and output in terms of the "logic" variables:

z(x) = x (e^{αx} - e^{b}) (α e^{α} - b e^{b}) / [(e^{α} - e^{b}) (α x e^{αx} - b e^{b})].   (12)

This expression depends on two dimensionless combinations,

α = K A_max t_g,   b = K B_0 t_g.   (13)

These parameters can be controlled, see Figure 3, by changing the physical and chemical conditions (which vary the rate constant K), the "gate machinery" chemical supply, B_0, and the reaction time, t_g.
One can prove [188] by algebraic considerations that the function in Equation (12) is always monotonically increasing convex; see Figure 3. Indeed, for catalytic (bio)chemical reactions and many other (bio)chemical processes, convex response curves -and surfaces, for more than one input -are generally expected. The product of the reaction -the output -is typically proportional to (linear in) the input-signal chemical concentration(s) for small inputs.
For large inputs, the output is usually limited, for example, by the reactivity of the available (bio)catalyst, or, in our case, the availability of the second reactant, B. Therefore, the output signal reaches saturation in the large-input limit.
There are many possible sources of error in gate functioning. The most obvious is noise in the inputs, which is actually quite large in chemical and biochemical environments. The gate function will transfer the resulting distribution of the input values into noise in the output. In addition, the binary 0 and 1 signal values need not be sharply defined. In applications, input/output signals in certain ranges of values may sometimes constitute 0 or 1. For example, a range of normal physiological concentrations can be 0, whereas an interval of pathophysiological values can be 1, and these ranges need not even be bounded. The gate function can also be noisy, and the distribution of its values can be displaced away from the desired digital values/ranges: in our notation, noise and fluctuations in the concentrations and physical parameters of the system can lead to a distribution of the values of z(x), for each x, rather than a sharply defined function such as in Equations (7) and (12). The mean values of this distribution need not pass precisely through the expected logic values at the "logic" inputs.
We will term "analog" the noise due to the spread of the output signal about the reference "digital" values (or ranges). In order to prevent buildup of noise as gates and other network elements are connected, we have to pass our signals through "filters" with a response close to that shown in Figure 4. Ideally we would like to have the sigmoidal property - small slopes/gradients at and around the digital values - in all or most of our gates. Filters can also be used as separate network elements. There is evidence that filtering for suppressing analog noise buildup is utilized by Nature. [189,190] Experimental attempts to realize a biochemical filter have only recently shown preliminary successes. [191] The inset in Figure 4 points out that filtering can push those values which are far from the correct digital result even closer to the wrong answer. Thus, the process of digitalization itself also introduces the "digital" type of noise: a small probability of a wrong binary value. Digital errors are not very probable and only become important to correct in large enough networks. Standard techniques based on redundancy are available [192,193] for digital error correction.
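To make the analog/digital noise distinction concrete, the following Monte Carlo sketch passes noisy logic signals through a steep sigmoid; the Hill-type response used here is a generic stand-in chosen for the illustration, not a model of the specific (bio)chemical filters of the cited experiments, and the 5% input spread is an assumed value.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid_filter(z, n=8, k=0.5):
    # steep Hill-type curve, rescaled so that F(0) = 0 and F(1) = 1
    f = lambda t: t**n / (t**n + k**n)
    return f(z) / f(1.0)

noise = 0.05  # assumed ~5% analog spread of the incoming signal
for logic in (0.0, 1.0):
    z_in = np.clip(rng.normal(logic, noise, 100_000), 0.0, None)
    z_out = sigmoid_filter(z_in)
    print(f"logic {logic}: spread {z_in.std():.3f} -> {z_out.std():.3f}")

Near both logic values the output spread shrinks sharply (analog noise suppression), whereas any rare input that strays past the sigmoid's midpoint would be pushed toward the wrong binary value, which is exactly the small-probability "digital" error described above.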
For enzyme-based gates studied by our group, analog error correction is important for the presently realized network sizes and has recently received significant attention. [10,12,14,15,17,19,20,30] It has been estimated [10,12] that up to on the order of 10 such gates can be connected in a network before digital error correction is warranted.
Experimental realizations of the sigmoidal behavior (Figure 4) have been an ongoing effort, [188] based on the idea [10,188,190] that an additional reactant, F, which depletes the product but can only consume (react fast with) a small quantity of it, will suppress the response at small inputs without voiding the saturation property at large inputs, thus yielding a sigmoidal response.
In connection with the system of the type defined in Equation (9), we can consider the added reaction C + F → …, with a fast rate constant and with … denoting inert chemicals. This added reaction, however, delays the saturation at large inputs. Another option is C + F → A + …, which, however, introduces a feedback loop - the effects of which have not been studied - by regenerating some of the input. A variant realized experimentally [188] was for a system more complicated than the single reaction in Equation (9). The rate equations for Equation (9) with the added product-depleting process are still solvable in closed form, in quadratures, because the solution steps lead to a single differential equation which, while nonlinear, is of the Bernoulli form. [194] However, the expressions obtained are sufficiently complicated that the closed-form results are unilluminating, and numerical evaluation is needed to make them illustrative; this is outside the scope of the present review.
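The qualitative mechanism can instead be checked by direct numerical integration; in the sketch below all rate constants, concentrations and the gate time are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

K, Kf, B0, F0, t_gate = 1.0, 50.0, 1.0, 0.15, 5.0  # assumed values

def rhs(t, s):
    A, B, C, F = s
    r1 = K * A * B    # A + B -> C
    r2 = Kf * C * F   # C + F -> inert: fast depletion of a little product
    return [-r1, -r1, r1 - r2, -r2]

def output(A0):
    sol = solve_ivp(rhs, (0.0, t_gate), [A0, B0, 0.0, F0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[2, -1]

C_ref = output(1.0)  # output for the logic-1 input, used as the digital 1
for x in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:4.2f}   z = {output(x) / C_ref:.3f}")

Small inputs now give a near-zero output, because F consumes the little product formed, while large inputs still saturate: the response acquires the sigmoidal, filter-like shape anticipated above.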
Therefore, efforts have been devoted to directly minimizing noise amplification for gates with convex response curves/surfaces of the "standard" (bio)chemical-reaction type. For single-input/single-output gates, such as the illustrated "identity" function (Figure 3), noise amplification (the increase in the spread of the noise distribution due to the gate function) is simply related to the larger of the two slopes at the binary points, and it can be minimized by having both slopes close to 1, i.e., a nearly straight-line response curve (see Figure 3).
However, a danger with this approach - also identified in designing filtering systems [188] - has been that the near-linear behavior is realized straightforwardly when the reaction is far from saturation. The latter regime corresponds to a weak output signal; therefore, while there is no noise amplification, another source of relative sensitivity to noise is introduced: the random "environmental" external noise becomes comparable to the spread between the binary 0 and 1 signal reference values.
One solution has been to "drive" the reaction by flooding the system with reagent(s) that increase the process rate. For example, for the reaction scheme considered in this section, Equations (9)-(11), taking the supply B_0 large (b much greater than 1 and α) drives the reaction of practically all of the input A to completion by the gate time, so that C(t_g) ≈ A_0 and the response approaches the straight line z(x) = x. This is perhaps not an interesting "curve" to consider, but it does the job of avoiding/minimizing noise amplification.
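A short numerical check of these statements, using the closed form of Equation (12) with finite-difference slopes (the parameter values are arbitrary illustrations):

import numpy as np

def z12(x, alpha, b):
    # identity-gate response of Eq. (12); requires b != alpha
    num = (x * (np.exp(alpha * x) - np.exp(b))
           * (alpha * np.exp(alpha) - b * np.exp(b)))
    den = ((np.exp(alpha) - np.exp(b))
           * (alpha * x * np.exp(alpha * x) - b * np.exp(b)))
    return num / den

h = 1e-6
for alpha, b in [(1.0, 0.2), (1.0, 1.5), (0.5, 20.0)]:
    s0 = z12(h, alpha, b) / h  # slope at x = 0, since z(0) = 0
    s1 = (z12(1 + h, alpha, b) - z12(1 - h, alpha, b)) / (2 * h)
    print(f"alpha = {alpha}, b = {b:5.1f}:  slope(0) = {s0:.3f}  "
          f"slope(1) = {s1:.3f}")

For the "flooded" case (b = 20, i.e., B_0 large) both slopes come out close to 1, confirming that z(x) approaches the straight line x.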
The situation with the two-input/one-output gate functions is more complicated and is discussed in the next section.
Optimization of the AND Gates
AND is the most studied gate in the (bio)chemical computing literature, and practically the only one explored in detail for the optimization of its response. Since the truth table for the AND gate is that the output 1 is obtained only when both inputs are 1, AND is a natural outcome as a product of a two-input chemical reaction. The AND gates themselves are not universal, but they become such if supplemented by NOT: NAND (not-AND) gates can be networked to realize an arbitrary binary function. Indeed, the NOT version of filtering [16] -the vertically flipped sigmoidal curve as compared to Figure 4 -would be particularly interesting to realize and widely incorporate in networked biochemical processes.
We consider a simple model for the AND gate in chemical computing: we now regard the reaction in Equation (9), A + B → C, as a two-input, one-output process. We introduce the "logic-range" variable for the input B, rescaled to the binary interval [0,1] and paralleling the definitions in Equations (5)-(6):

y = B_0 / B_max.   (14)

Here B_max is the reference value for the logic 1 of the B input. The quantity z, defined in Equation (6), is now a two-variable function, z(x, y), describing the AND-gate response-surface shape. The solution of the rate equations, given by Equation (11), is now recast in terms of the new set of "logic-range" variables to yield

z(x, y) = xy (e^{αx} - e^{βy}) (α e^{α} - β e^{β}) / [(e^{α} - e^{β}) (α x e^{αx} - β y e^{βy})],   (15)

where

α = K A_max t_g,   β = K B_max t_g.   (16)

These are similar to the (dimensionless) parameters in Equation (13); a typical response surface is shown in Figure 5. We can try to identify parameter values for which the largest of the four gradient magnitudes at the logic points,

|∇z|_{ij} = |∇z(x = i, y = j)|,   i, j = 0, 1,   (17)

is as small as possible (note that |∇z|_{00} is always zero for this particular model).
For this calculation, let us for now assume that both α and β can be adjusted independently. By numerical calculation we then find that, for moderate values of α and β, the minimum is obtained for α = β ≈ 0.4966 and is given by a largest gradient of ≈ 1.18. It turns out that gate functions of this type amplify analog noise even under optimal conditions: the noise amplification in the best-case scenario is about 18%. Studies [10,12,19] of enzyme-based AND gates, which have utilized more realistic (and thus more complicated and not exactly solvable) rate-equation models appropriate for biocatalytic reactions, found similar estimates. Experimental data were fitted and the results were numerically analyzed by using both the rate-equation approach and more phenomenological shape-fitting forms, the latter exemplified in the next section. If not optimized, smooth, convex gates corresponding to typical (bio)chemical reactions can have very large noise amplification, 300-500%. Reaching the optimal conditions is not always straightforward, primarily because the gate-function shape depends only weakly on the parameter values. Even under optimal conditions, at least about 20% noise amplification is to be expected.
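A numerical sketch can verify the quoted optimum; it scans the symmetric line α = β using the closed form of Equation (15), with β detuned infinitesimally to step off the removable 0/0 singularity at β = α. The grid and tolerances are illustrative choices.

import numpy as np

def z_and(x, y, a, b):
    # AND-gate response surface, Eq. (15); requires b != a
    num = (x * y * (np.exp(a * x) - np.exp(b * y))
           * (a * np.exp(a) - b * np.exp(b)))
    den = ((np.exp(a) - np.exp(b))
           * (a * x * np.exp(a * x) - b * y * np.exp(b * y)))
    return num / den

def grad_norm(x, y, a, b, h=1e-5):
    # finite-difference gradient magnitude; one-sided at the zero edges
    def d(f, t):
        return (f(t + h) - f(t)) / h if t == 0 else (f(t + h) - f(t - h)) / (2 * h)
    gx = d(lambda t: z_and(t, y, a, b), x)
    gy = d(lambda t: z_and(x, t, a, b), y)
    return float(np.hypot(gx, gy))

def max_gradient(a):
    b = a * (1.0 + 1e-6)  # tiny detuning off the singular line beta = alpha
    return max(grad_norm(x, y, a, b) for x, y in [(0, 1), (1, 0), (1, 1)])

alphas = np.linspace(0.3, 0.8, 501)
vals = [max_gradient(a) for a in alphas]
i = int(np.argmin(vals))
print(f"optimum: alpha = beta ~ {alphas[i]:.4f}, "
      f"largest gradient ~ {vals[i]:.4f}")

The scan lands near alpha = beta ~ 0.497 with a largest gradient of ~1.18: the three nonzero logic-point gradients become equal there, which balances, and thereby minimizes, the worst-case amplification.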
For fast enough reactions the maximal gradient can be smaller than ~1.2 and even decrease below 1, which would suggest noise suppression. However, as illustrated in Figure 6, the gate-function surface then develops sharp features, and the gradients can no longer be used as measures of noise amplification, because they remain close to the logic-point values only in tiny regions near these points, small compared to the noise spread of at least several percent typical of (bio)chemical signals. Generally, when the spread of the noise is larger than the x and y scales over which the gate function or its derivatives vary significantly, one can assume a certain shape of the input noise distribution, such as a product of approximately Gaussian distributions in x and y for inputs at each of the logic points, or half-Gaussian if the logic zero is exactly at the physical zero. Given a model for the gate-response function, one can then numerically calculate [10] the output-signal distribution for each of the inputs and thus estimate the noise amplification factor. [10,12,14] The "ridged" gate-response function (e.g., Figure 6) was first encountered [12] in a study of an enzymatic system which also realized a smooth-response counterpart when a different chemical was used as one of the inputs. [12] The reaction kinetics was more complicated than in the present model, but the finding confirmed the general expectations: the optimal conditions are obtained with a symmetrically (diagonally) positioned ridge, as in Figure 6, and the noise amplification factor, estimated by considering distributions, is then only slightly larger than 1. Thus, noise amplification is practically avoided. However, such gates do not have the noise-suppression (filter) property. Figure 7 presents a schematic of an AND-gate response sigmoidal in only one of the two inputs, which was recently explored and experimentally realized. [14] Many allosteric enzymes have such a "self-promoter" property with respect to one of their substrates (input chemical species). A key finding [14] has been that the single-sided sigmoidal shape can be tuned by parameter adjustment to have a noise amplification factor only slightly above 1, so that there is practically no noise amplification. However, a desirable two-sided sigmoidal response, also shown schematically in Figure 7, has not to our knowledge been realized at the level of a single AND gate in the chemical or biomolecular computing literature. Certain biochemical processes in nature, which are much more complex than our synthetic AND-gate systems, do realize [189] a two-sided sigmoidal response.
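The distribution-based estimate mentioned above can likewise be sketched by direct Monte Carlo propagation of input noise through the response surface; the 5% Gaussian input spread and the near-optimal parameters are assumed for the illustration.

import numpy as np

rng = np.random.default_rng(1)

def z_and(x, y, a, b):
    num = (x * y * (np.exp(a * x) - np.exp(b * y))
           * (a * np.exp(a) - b * np.exp(b)))
    den = ((np.exp(a) - np.exp(b))
           * (a * x * np.exp(a * x) - b * y * np.exp(b * y)))
    return num / den

a = 0.4966
b = a * (1.0 + 1e-6)  # stay off the removable singular line beta = alpha
sig_in = 0.05         # assumed analog spread of each input at logic (1,1)
x = rng.normal(1.0, sig_in, 200_000)
y = rng.normal(1.0, sig_in, 200_000)
z = z_and(x, y, a, b)
print(f"noise amplification at (1,1): {z.std() / sig_in:.2f}")

For this smooth near-optimal gate the distribution-based factor stays close to the ~1.2 gradient-based estimate; for a "ridged" surface such as that of Figure 6, only the distribution-based route remains meaningful.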
Networking of AND Gates
We have seen that optimization of (bio)chemical gates one at a time is not straightforward. In most cases a rather large variation of the controllable parameters is needed - physical and chemical conditions, reactant concentrations and, in some cases, the choice of (bio)chemical species - which may not be experimentally feasible. The actual detailed kinetic modeling of the reactions involved, especially for biomolecular systems, is in itself a challenging task: [10,12,14,15,17] the kinetics of most biomolecular processes, specifically those used for AND gates, is complex and not well studied. The quality of the experimental data for the gate-response function is limited due to the noise in the gate function itself, the limited lifetime of constant activity of the biocatalytic species, etc. Thus, multi-parameter complex reaction schemes are difficult to substantiate by data fitting in the gate-design context, which requires models to work for a large range of adjustable parameters.
An alternative approach involves optimization of the relative gate functioning in a network, whereby each gate is modeled within a very approximate, phenomenological curve/surface-fitting approach. These ideas have recently been tested [19] for coupled enzymatic reactions which include steps common in sensor development [213] for maltose and its sources. A modular network representation of the biocatalytic processes involved is possible in terms of three AND gates; see Figure 8. This "cartoon" representation is actually approximate, because it obscures some of the complexity of the constituent processes. [19] The approach involves first proposing a phenomenological fitting function for the gate-response surface in terms of as few parameters as possible, enough to capture the expected qualitative features of the shape. For a typical convex "identity" gate, the fitting function is conveniently written as

z(x) = sx / [1 + (s - 1)x].   (18)

This is a single-parameter, s, rational form that "looks" qualitatively correct, provided we assume s ≥ 1. Indeed, the curve is then convex and has slope s at (x, z(x)) = (0, 0), and 1/s at (x, z(x)) = (1, 1).
For each AND gate, we then use the two-parameter, s and u, product form

z(x, y) = [sx / (1 + (s - 1)x)] × [uy / (1 + (u - 1)y)].   (19)

The gradient values at the logic points are then

|∇z|_{00} = 0,   |∇z|_{01} = s,   |∇z|_{10} = u,   |∇z|_{11} = (1/s^2 + 1/u^2)^{1/2}.   (20)

Having introduced our approximate fitting functions, we now experimentally vary selected inputs in the network; see Figure 8. In the experiment, [19] each of the three inputs x_1, x_2, x_3 was separately varied between 0 (corresponding to the binary 0) and the reference value predefined as 1, while all the other inputs (including y_3) were at their reference 1 values. In fact, when the parameterization of Equation (19) is applied to all three gates in our network of Figure 8, we get a rational expression for z as a function of all four inputs (x_1, x_2, x_3 and y_3). Setting all of them but a single x input to 1, we get the parameterization for the measurement with that input varied; keeping only the varying argument of z(·) for simplicity,

z(x_1) = s_1 x_1 / [1 + (s_1 - 1) x_1],
z(x_2) = s_2 u_1 x_2 / [1 + (s_2 u_1 - 1) x_2],   (21)
z(x_3) = s_3 u_1 u_2 x_3 / [1 + (s_3 u_1 u_2 - 1) x_3].

Interestingly, each data set only depends on a single lumped parameter (s_1, s_2 u_1, or s_3 u_1 u_2).
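The fitting step can be sketched as follows; the data here are synthetic, generated from the single-parameter form of Equation (21) with invented "true" lumped values, since the actual measurements are those of Ref. [19].

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def f(t, p):
    # one-parameter rational response, cf. Eqs. (18) and (21)
    return p * t / (1.0 + (p - 1.0) * t)

t = np.linspace(0.0, 1.0, 11)
for name, p_true in [("s1", 3.0), ("s2*u1", 1.6), ("s3*u1*u2", 1.2)]:
    data = f(t, p_true) + rng.normal(0.0, 0.02, t.size)  # ~2% noise
    (p_fit,), _ = curve_fit(f, t, data, p0=[1.0])
    print(f"{name}: true {p_true:.2f}, fitted {p_fit:.2f}")

Comparing such fitted lumped values with their balanced-network targets is what indicated, in Ref. [19], which gate's biocatalytic activity to adjust.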
While we only get partial information on the gate functioning, we can attempt to "tweak" the relative gate activities in the network to improve the stability. If the proposed approximate description is semi-quantitatively accurate for a given gate, then the parameters s and u for that gate will be functions of adjustable quantities, such as the gate time, input concentrations of some of the chemicals, and reaction rates (which can in turn be controlled by the physical and chemical conditions). In addition, s and u can depend on other quantities which are not controllable.
Without detailed rate-equation kinetic modeling this parameter dependence is not known.
The initial sets of data [19] were collected with experimentally convenient, randomly selected values of the "gate machinery" and other parameters. Examination of the results [19] led to the semiquantitative conclusion that the deviations from the optimal values could largely be attributed to the gate closest to the output in Figure 8 (z = x_1 AND y_1): it was too "active" as compared to the other two gates (meaning its biocatalytic reaction was too fast). A new experiment was then devised [19] with the concentration of the enzyme catalyzing this gate's function reduced by an order of magnitude (approximately 11-fold). The new data collected for the modified network, when fitted, yielded s_1, s_2 u_1 and s_3 u_1 u_2 values significantly closer to the optimal ones. [19]
Conclusions and Challenges
We reviewed aspects of and approaches to gate optimization for control of analog noise amplification, which is important for connecting gates in small networks. For larger networks, digital error correction by redundancy will also have to be implemented, and various network elements will have to be devised for filtering, signal splitting, signal balancing, gate-to-gate connectivity, memory, interfacing with external input, output and control mechanisms, etc.
We used simple rate-equation models which allow exact solvability to illustrate and motivate the discussion. We thus avoided presenting experimental data and their numerical analysis, which can be found in the cited articles, while various chemical and biochemical gate examples are offered in other reviews in this Special Issue. Our presentation has been limited to AND gates and related systems. Indeed, all the recent studies of noise control in (bio)chemical computing - with one exception, an XOR gate [214] - have thus far been for AND gates and, furthermore, again with just one recent exception, [215] only for those with the binary 0 set at the physical zeros of the chemical concentrations. While these limitations are natural for chemical kinetics, they are definitely not typical for the applications envisaged, notably, multi-input biomedical sensing. [15] As new experiments on mapping out (bio)chemical gate functioning and network designs are reported, new features of noise and error control will be explored. Specifically, noise in the gate function itself - including the spread of its values and imprecise mean values, not exactly at the expected reference output 0 or 1, with deviations possibly also different for various inputs that should ideally yield the same logic output - will have to be considered and corrected, most likely by filtering. Indeed, we conclude by emphasizing that, while longer-term network design and scaling up will be crucial, the shorter-term challenges in (bio)chemical information processing have been to design and experimentally realize versatile and effective (bio)chemical filter processes and other non-binary network elements that can be concatenated with various binary logic gates.

Figure 4 (inset). The values in the tail of the distribution will be driven towards the wrong digital answer, 0 (shown by the unpaired arrow); this results in a small-probability "digital" error. Similarly, distributions peaked near 0, when it is the expected digital value (not shown), will also be sharpened, but the tail values will be driven to the wrong digital answer.

Figure 8. The three-gate network, [19] with varied inputs x_1, x_2, x_3 and a constant y input.
SELECTING FOR FOOD-FEED TRAITS IN desi AND kabuli GENOTYPES OF CHICKPEA (Cicer arietinum)
The study explored the genetic and environmental variability in chickpea for food-feed traits. Seventy-nine genotypes (17 early-maturing desi, 19 early-maturing kabuli and 43 late-maturing kabuli) were evaluated for food-feed traits in 7 trials laid out in a randomized complete block design at 3 locations in Ethiopia. All trials showed wide genotypic ranges in various traits related to grain yield, straw yield and straw quality. Analyses of variance showed significant (P<0.05) effects of genotype, location and their interaction on grain and straw yields, CP, IVOMD and NDF in all populations. Grain yield exhibited either positive or insignificant correlations with straw yield in all trials. The correlation between IVOMD and grain yield was insignificant in all trials. Grain yield correlated significantly (P<0.001) and positively with NDF in early-maturing kabuli; however, the correlation was moderate (r = 0.396). Grain yield correlated either weakly or insignificantly with CP and Ca in the trials. The correlation between P and grain yield was ignored, as the straw content of P was very small in all genotypes (<1.78 g/kg). Weak or absent correlations between grain yield and straw traits would enable chickpea breeders to manipulate grain yield and straw traits independently. This presents an opportunity to identify parental genotypes for improving grain yield and straw traits for individual locations.
Introduction
Chickpea is one of the important pulse crops in the world. It ranks second in area and third in production among the pulses worldwide (Bampidisa & Christodoulou, 2011). It is mainly grown in South Asia, which accounts for more than 75% of the world chickpea area. India is by far the largest chickpea-producing country. Other important chickpea-producing countries are Pakistan, Turkey, Mexico, Canada, Australia and Ethiopia. Chickpea is classified into desi chickpea and kabuli chickpea (Bampidisa & Christodoulou, 2011). Grains of desi chickpea are small in size, light to dark brown in color, smooth or wrinkled, and have a thick seed coat. Grains of kabuli chickpea are larger, whitish-cream colored and have a thin seed coat. The desi type is more prominent and accounts for close to 80% of global chickpea production. The grains are an important source of protein, minerals and vitamins for humans (Bampidisa & Christodoulou, 2011). Chickpea cultivation produces straw that is used as livestock feed. Generally, residues of pulses and cereals are important sources of feed for livestock raised by resource-poor smallholders in Southern Asia and sub-Saharan Africa. Chickpea straw contains an average of 65 g/kg of crude protein (CP), 694 g/kg of neutral detergent fiber (NDF), 516 g/kg of acid detergent fiber, 111 g/kg of acid detergent lignin and 7.7 MJ/kg of metabolizable energy (Bampidisa & Christodoulou, 2011). Moreover, growing chickpea improves soil fertility, increases the intensity of land use and provides households with cash (Kassie et al., 2009). Despite being a crop of temperate regions, advances in plant breeding from CGIAR (Consultative Group on International Agricultural Research) centers, namely ICARDA (International Center for Agricultural Research in the Dry Areas), which holds the world mandate for kabuli chickpea, and ICRISAT (International Crops Research Institute for the Semi-Arid Tropics), have enabled chickpea cultivation to gradually spread to the sub-tropical and tropical regions of Africa, North America and Oceania. Chickpea germplasm developed by ICARDA is distributed and utilized in all these regions. Studies to simultaneously boost grain yield and nutritive traits of grain legume crop residues have been reported in lentil (Alkhtib et al., 2017), faba bean (Alkhtib et al., 2016b) and cowpea (Samireddypalle et al., 2017; Adeyanju et al., 2012). Previous studies on chickpea have reported wide genetic variation in grain yield, the number of secondary branches per plant, the number of pods per plant, biomass yield (Malik et al., 2009) and plant height (Aslamshad et al., 2009), which points to exploitable genetic variation in straw quality and yield. Furthermore, studies have reported a positive and significant correlation between grain yield and the number of secondary branches per plant, plant height, number of pods per plant and biomass yield (Malik et al., 2009; Ali & Ahsan, 2012), which points to a possible positive correlation of grain yield with straw yield and nutritive quality. Therefore, the aim of this study was to determine varietal and environmental variation in straw traits in 79 genotypes across desi and kabuli types of chickpea and to evaluate their food-feed relationships. This is the preliminary stage in a series of steps to identify genotypes with food-feed traits.
Evaluation of genotypic variation in yield and quality parameters of straw, and of food-feed relations, would help chickpea breeders design appropriate approaches towards food-feed (dual-purpose) genotypes of chickpea to address needs for human food and livestock feed in mixed crop-livestock farming systems in developing countries.
Experimental layout and the chickpea cultivars
Seventy-nine (79) genotypes of chickpea were tested in 7 trials at 3 Ethiopian sites: Akaki (08°53'N, 38°49'E), Minjar (08°44'N, 38°58'E) and Chefe Donsa (08°57'N, 39°06'E), located in the Central Highlands at altitudes of 2200, 1810 and 2450 m a.s.l., with annual rainfall of 1025 mm, 867 mm and 843 mm, respectively (Table 1). The Ethiopian Institute of Agricultural Research (EIAR) developed the genotypes, bred for high grain yield, using germplasm selected from ICARDA breeding lines. Elite genotypes were drawn from the 2014 preliminary variety trials (PVT) and national variety trials (NVT) of the Ethiopian Chickpea Improvement Program, selected based on their high grain yield and other agronomic traits in potential environments (PE) and low-moisture-stress environments (LMS). Seventeen genotypes of early-maturing desi (D) were grown at 2 locations, Akaki and Chefe Donsa. Nineteen early-maturing kabuli (K) genotypes were evaluated at one location (Minjar). Twenty-five late-maturing kabuli genotypes were evaluated at Akaki and Chefe Donsa. The 7 trials are identified by their codes (Table 1), which indicate the varietal trials the genotypes were drawn from (PVT, NVT), the chickpea type (D, K), the type based on physiological maturity (PE: late maturing, LMS: early maturing) and the locations at which they were planted (AK, CD, MN). In all trials, a randomized complete block design (RCBD) was used with three or four replications (Table 1). Fields were blocked based on slope.
A unit plot measured 4 m × 0.8 m. Spacing between rows was 20 cm, while spacing between plants was 2 cm. Trials were hand planted in July 2015 using the recommended agronomic packages as optimized by EIAR for each site. At physiological maturity, plots were manually harvested from two 1.6 m^2 areas laid over the two middle rows of each plot. The biomass was air-dried in the field, after which grain was removed and weighed. Straw yield was calculated by subtracting grain yield from total biomass yield. Sub-samples of 500 g of representative straw were taken from each plot for chemical composition and digestibility analysis.
Straw quality analysis
After oven-drying at 100 °C for 24 h, straw samples were ground to pass through a 1 mm sieve mesh. The samples were analyzed using Near Infrared Reflectance Spectroscopy (NIRS) and conventional wet chemistry. The NIRS instrument, a Foss Forage Analyzer 5000 with the software package WinISI II in the 1108-2492 nm spectral range, was used to scan the straw samples, and a well-fitting lentil NIRS calibration equation was used for the prediction of dry matter (DM), nitrogen, neutral detergent fiber (NDF) and in vitro organic matter digestibility (IVOMD). The neutral detergent fiber analysis did not involve the use of heat-stable amylase, and the result was expressed exclusive of residual ash. Acid detergent fiber was expressed without residual ash. Lignin was determined by solubilisation of cellulose with sulphuric acid. In vitro organic matter digestibility was measured in rumen microbial inoculum using the in vitro gas production technique. The buffer solution was prepared according to the method described by Menke & Steingass (1988). Rumen fluid was collected prior to morning feeding, using a vacuum pump, from three ruminally cannulated cows fed a total mixed ration of grass hay (790 g/kg), wheat bran (203 g/kg), salt (3.2 g/kg) and a mineral and vitamin mixture (4.6 g/kg) on a DM basis. Use of the cows was assessed and approved by the Environmental and Occupational Health and Safety Unit of ILRI. The rumen fluid from the cows was composited (1:1, v/v), filtered through four layers of cheesecloth, and added to the buffer solution (1:2, v/v), which was maintained in a water bath at 39 °C under continuous flushing with CO2. The buffered rumen fluid (30 ml) was pipetted into 100 ml syringes containing 0.2 g of sample and immediately placed into a water bath at 39 °C. Gas production was recorded after 24 hours of incubation and used to calculate IVOMD according to the Menke et al. (1979) equation suitable for legume hays, a linear function of GP, CP and XA, where GP: 24 h net gas production (ml/200 mg); CP: crude protein (g/kg DM); XA: ash content (g/kg DM).
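As a hedged illustration only: the exact legume-hay coefficients of the Menke et al. (1979) equation used in the study are not reproduced in the text above, so the sketch below substitutes the widely cited general Menke & Steingass organic matter digestibility equation, with CP and ash entered as % of DM; it should not be taken as the equation actually applied here.

def ivomd_percent(gp_ml_per_200mg, cp_pct_dm, xa_pct_dm):
    # assumed general-feed coefficients (Menke & Steingass), stand-in only
    return (14.88 + 0.889 * gp_ml_per_200mg
            + 0.45 * cp_pct_dm + 0.0651 * xa_pct_dm)

# example: GP = 40 ml/200 mg; CP = 65 g/kg DM (6.5%); ash = 80 g/kg DM (8.0%)
print(f"IVOMD ~ {ivomd_percent(40.0, 6.5, 8.0):.1f} %")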
Ca and P were analyzed using atomic absorption spectroscopy (The Perkin-Elmer Corporation, 1996). Laboratory analyses were undertaken at the Animal Nutrition Laboratories of the International Livestock Research Institute (ILRI) in Addis Ababa, Ethiopia, and Patancheru, India.
Calculations and statistical analysis
A general linear model was used to test the effect of variety on grain yield, straw yield and the nutritive value parameters of straw. Each trial was analyzed separately according to the following model:

Yij = µ + Bi + Gj + Eij,

where Yij: grain/straw trait; µ: overall mean; Bi: effect of block i; Gj: effect of genotype j; Eij: random error. To evaluate the effect of location and the genotype-location interaction (G×L), data from all trials were combined and analyzed according to the following model:

Yijk = µ + Gi + Lj + GLij + B(L)kj + Eijk,

where Yijk: grain/straw trait; µ: overall mean; Gi: effect of genotype i; Lj: effect of location j; GLij: effect of the interaction between genotype and location; B(L)kj: effect of block k within location j; Eijk: random error.
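A sketch of how the two models could be fitted in Python with statsmodels; the data frame below is synthetic and the column names are hypothetical placeholders, since the study itself used SAS.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
# synthetic stand-in data: 10 genotypes x 2 locations x 3 blocks
rows = [(g, loc, blk,
         3.0 + 0.1 * g + 0.5 * (loc == "CD") + rng.normal(0.0, 0.3))
        for g in range(10) for loc in ("AK", "CD") for blk in range(3)]
df = pd.DataFrame(rows, columns=["genotype", "location", "block", "trait"])

# Model 1, per trial: Yij = mu + Bi + Gj + Eij
m1 = smf.ols("trait ~ C(block) + C(genotype)",
             data=df[df.location == "AK"]).fit()
print(anova_lm(m1, typ=2))

# Model 2, combined: Yijk = mu + Gi + Lj + GLij + B(L)kj + Eijk
m2 = smf.ols("trait ~ C(genotype) * C(location) + C(location):C(block)",
             data=df).fit()
print(anova_lm(m2, typ=2))
# Pearson correlations between grain and straw traits would follow
# analogously with DataFrame.corr() on the per-trial trait columns.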
Relationships between grain and straw traits were calculated separately for each trial using Pearson's correlation. Correlations among the nutritive value parameters of straw in each trial were identified using Pearson's correlation. All statistical procedures were carried out using Statistical Analysis System software (SAS, 2012).
Grain and straw yields
The effect of genotype on grain yield and straw yield was significant (P<0.001) in all trials (Table 2). The genotypic range of grain yield and straw yield for desi across locations was 2.34-4.70 t/ha and 2.10-5.66 t/ha, respectively. The genotypic range of grain yield and straw yield in late-maturing kabuli across locations was 1.04-4.00 t/ha and 1.49-8.74 t/ha, respectively, and 1.08-3.05 t/ha and 1.43-5.53 t/ha, respectively, for early-maturing kabuli. The effects of location and of G×L on grain yields and straw yields were significant (P<0.05) in all trials (Table 4).

Table 3 shows a highly significant (P<0.001) effect of genotype on straw nutritive traits. Considering the means of the desi trials, the magnitude of the range (g/kg) was 3.1 in CP, 12 in IVOMD, 3 in NDF, 0.9 in Ca and 0.047 in P. The magnitude of the genotypic range (g/kg) considering all locations of the desi population was 32 in CP, 52 in IVOMD, 65 in NDF, 6.2 in Ca and 0.48 in P. In the late-maturing kabuli trials, the magnitude of the range among the trial means (g/kg) was 2.2 for CP, 16 for IVOMD, 24 for NDF, 0.6 for Ca and 0.231 for P. Considering all locations, the magnitude of the genotypic range in late-maturing kabuli genotypes (g/kg) for CP, IVOMD, NDF, Ca and P was 31.6, 122, 81, 8.9 and 0.945, respectively. In early-maturing kabuli, the magnitude of the genotypic range (g/kg) for CP, IVOMD, NDF, Ca and P was 16.4, 58, 119, 4.3 and 1.08, respectively. The effects of location and of G×L were significant (P<0.05) in all trials for CP, IVOMD and NDF (Table 4). The genotype-location interaction was not significant for Ca in all trials, and not significant for P in desi.

Table 5 shows the correlations between nutritive value parameters of chickpea straw in each trial. In desi types and in early- and late-maturing kabuli types, NDF correlated strongly and negatively with IVOMD, while the other correlations were either moderate or weak.
The correlation between grain and straw traits
The correlations between grain yield and straw traits are presented in Table 6. Grain yield either correlated positively (P<0.05) with straw yield or did not correlate with it in all trials. The correlations between grain yield and the nutritive value traits of straw were insignificant in the desi trials. In early-maturing kabuli genotypes, grain yield correlated moderately and positively with NDF (r = 0.396; P<0.05) and Ca (r = 0.347; P<0.05). In late-maturing kabuli genotypes, the correlation between grain yield and CP was weak and positive in NVT-K-PE-AK (r = 0.28; P<0.05), moderate and positive in PVT-K-PE-AK (r = 0.41; P<0.05) and moderate but negative in NVT-K-PE-CD (r = -0.37; P<0.05). Genotypes in the trial NVT-K-PE-AK showed a weak and negative correlation between grain yield and Ca (r = -0.298; P<0.05). The correlation between grain yield and P was ignored, as the P content of straw was very low (<1.78 g/kg).
Grain and straw yields
High demand for crop residue biomass for livestock feeding in Ethiopia under mixed systems has been reported (Alkhtib et al., 2016a). Although the genotypes in the current study were bred for high grain production, a wide genotypic range in straw yield was found in both the desi and kabuli trials. In agreement with our results, wide genetic variation in grain and straw yields has been reported in several crops, including maize (Ertiro et al., 2013), pearl millet (Blümmel et al., 2010) and durum wheat (Tolera et al., 1999). Wide variability in straw yield can be exploited to improve the straw yield of chickpea. The current results showed that the effect of genotype on straw yield depends on the location. Such effects of genotype-environment interaction on crop residue yield were reported by Ertiro et al. (2013) in maize. This suggests that the effect of location should be considered during efforts targeting enhancement of the straw yield of both desi and kabuli chickpea.
Straw nutritive traits
Significant differences were observed among the various genotypes for straw nutritive value, which is in agreement with Kafilzadeh & Maleki (2012). Wide genetic variability in parameters of nutritive value of crop residues has been reported by Ertiro et al. (2013) in maize, Vadiveloo & Fadel (2009) in rice, and Singh & Shukla (2010) in groundnut. The crude protein content of feeds is very important to achieve optimum rumen activity and to ensure adequate dry matter intake. Risco & Melendez (2011) recommend a minimum of 70-80 g/kg and 100-110 g/kg of CP in the diet of non-lactating and lactating animals, respectively, to sustain rumen fermentation. The genotype with the highest content of straw CP in this study had a value of 67.5 g/kg, which does not ensure optimum activity of the rumen even for non-lactating ruminants. However, the crude protein content of crop residue can be improved through agronomic practices, particularly by applying a feasible level of nitrogen fertilization (Blümmel et al., 2007). Rejection of high grain-yielding varieties of maize by farmers has been reported because of the low dry matter intake of the varieties by livestock (Hellin et al., 2013). Dry matter intake of low-quality roughages is closely and negatively associated with the content of NDF (Horrocks & Vallentine, 1999). Wide genotypic variation in the NDF content of chickpea straw was found in this study. This suggests that the dry matter intake of chickpea straw could be improved by exploiting the natural variability in the straw content of NDF. However, dry matter intake is affected by other factors as well. Thus, it is imperative to test the palatability of the straws of chickpea genotypes developed with desired food-feed traits before release. The Ca content of chickpea straw in all genotypes except one in PVT-K-LMS-MN was either equal to or higher than the Ca content of green vetch, which was reported to be 12 g/kg (Heuzé et al., 2015). This implies the possibility of improving the Ca content of chickpea straw by selection. However, the P content of chickpea straw was considerably lower than that of vetch straws, which have been reported to contain 1.3 g/kg on average (Heuzé et al., 2015). Thus, no important increase in the P content of chickpea straw is expected to be achieved by selection. It is noteworthy that if chickpea straw constitutes a major portion of the diet of lactating cattle, malabsorption of Ca could be encountered unless the diet is supplemented with an adequate source of P. The results of this study showed that the contents of CP and NDF and the IVOMD were dependent on location. Therefore, recommendations of chickpea genotypes with desirable food-feed traits should be location-based. The insignificant effect of G×L on Ca content showed that the relative Ca content of chickpea genotypes is independent of location.
Correlations among nutritive value parameters
The correlations among nutritive value parameters in all trials were generally moderate or weak (except NDF with IVOMD).
The results of this study revealed that no single parameter consistently showed strong correlations with the other parameters. This means that no single parameter can represent the nutritive value of straw; rather, data on chemical composition and digestibility have to be collected for all nutritive value parameters during screening of genotypes for straw nutritive value. This result is contrary to Alkhtib et al. (2016b), who reported that NDF can represent the nutritive value of faba bean straw.
Correlation between the grain yield and straw traits
Correlations between grain yield and straw yield were inconsistent across genotypes and populations, which is in agreement with Ertiro et al. (2013) in maize. Straw yield correlated either positively or insignificantly with grain yield in the trials, which means that improving chickpea for straw yield will not decrease grain yield. Because the correlation between grain yield and straw yield was inconsistent across trials in all types of chickpea, grain yield cannot be used to predict straw yield in chickpea. Whenever straw yield is considered as a selection criterion by chickpea breeders, it has to be measured alongside grain yield. The correlations between grain yield and straw nutritive value parameters were either insignificant or weaker than 0.41 (r^2 < 0.16), meaning that no significant change in the grain yield of chickpea would be associated with improvement of straw nutritive value. This is in agreement with Alkhtib et al. (2016b), who reported neutral relationships between grain yield and straw nutritive value in faba bean.
Conclusion
The existence of wide ranges among genotypes for grain and straw yields and straw nutritive traits is promising for the selection of genotypes with superior food-feed traits. The weak relationship between grain and straw traits in most of the trials implies that both food and feed traits can be improved independently. However, currently, breeding programs do not consider straw traits as criteria either for varietal evaluation or for the release of new genotypes, and chickpea improvement efforts should therefore give attention to these traits. The data on straw nutritive value in the current study are based on in vitro evaluation; therefore, these results need to be confirmed with animal performance trials before final recommendations are given to farmers. Chickpea straw has a high content of Ca but a very low content of P. When chickpea straw is used as a basal diet for lactating livestock, feasible supplementation of chickpea straw with a source of P has to be applied to ensure optimum utilization of Ca. This study shows promise for the possibility of simultaneous improvement of grain yield and straw traits to address the high demand for dual-purpose, food-feed chickpea genotypes in mixed crop-livestock farming systems, using appropriate breeding approaches.
Perspectives on Social and Environmental Determinants of Oral Health
Most oral conditions have a multifactorial etiology; that is, they are modulated by biological, social, economic, cultural, and environmental factors. A consistent body of evidence has demonstrated the great burden of dental caries and periodontal disease in individuals from low socioeconomic strata. Oral health habits and access to care are influenced by the social determinants of health. Hence, the delivery of health promotion strategies at the population level has shown a great impact on reducing the prevalence of oral diseases. More recently, a growing discussion about the relationship between the environment, climate change, and oral health has emerged. Certainly, outlining plans to address oral health inequities is not an easy task: it will demand political will, comprehensive funding of health services, and initiatives to reduce inequalities. This paper sought to give a perspective on the role of social and physical environmental factors in oral health conditions while discussing how the manuscripts published in this Special Issue could increase our knowledge of the topic.
Introduction
The FDI World Dental Federation, one of the oldest dental associations in the world, stated a new concept of oral health as "multifaceted and includes the ability to speak, smile, smell, taste, touch, chew, swallow, and convey a range of emotions through facial expressions with confidence and without pain, discomfort, and disease of the craniofacial complex", and also pointed out the determinants of oral health. The driving determinants of oral health, which affect overall well-being, can be categorized into the individual's physiological function, psychosocial function, and disease and condition status [1]. Of the five domains of what the FDI has called "driving determinants", at least two were the main focus of this Special Issue: the social environment and the physical environment. Health behaviors and access to care can be determined by the social environment [2-5].
Empirical evidence has accumulated on the association between social determinants and a variety of oral health conditions [6-12], dental services utilization [13], and oral health behaviors [14,15]. On the other hand, few studies have focused on the relationship between the physical environment and oral health conditions. The association between challenges of the 21st century, such as water shortages and climate change, and oral health does not have the same body of scientific evidence as the association between social determinants and oral health.
This paper aimed to describe and discuss some current evidence on the role of social and physical environmental factors in oral health conditions, and to point out how the manuscripts published in this Special Issue could contribute to our knowledge of the topic. Studies on the social and environmental determinants of health were identified through searches conducted on PubMed, Embase, and Google Scholar. Publications by the United Nations, the World Health Organization, and the World Dental Federation were also searched to organize this descriptive review. Manual searches were carried out using the keywords "oral health", "determinants", "environment", "oral conditions", "oral health behavior", and "oral health services". We focused on published systematic reviews of these topics. The literature search was completed in October 2021.
Oral Conditions
Dental caries remains the most prevalent oral disease. The prevalence of untreated dental caries in permanent dentition was estimated at 34.1% in 2015, impacting the age-standardized disability-adjusted life years [8]. A complex net of biological, behavioral, and social factors determines the disease, and systematic reviews (SR) have shown that poverty is also an important factor to be taken into account. Lower socioeconomic position, described by educational level, income, or occupation, seems to increase caries experience in different age groups [6,7]. SRs carried out among children in Iran [16] and in the Middle East and North Africa [17] reached similar conclusions.
Severe chronic periodontitis affected 7.4% of the global population in 2015 [8]. Periodontal diseases are associated with tobacco consumption, comorbidities such as diabetes, and socioeconomic factors at both the individual and the population level. An SR approached the influence of a life-long individual-level socioeconomic position on adulthood periodontitis by evaluating seven longitudinal studies. It was found that, despite the limited number of papers and some methodological issues, a lower life-long socioeconomic position increased the risk of periodontitis in adulthood [18,19]. In this Special Issue, a cross-sectional study conducted with adults in London identified that periodontal disease was associated with individual and intersectional social characteristics, especially ethnicity and education [5]. Another cross-sectional study carried out among Indian adults also identified ethnicity as a relevant factor associated with periodontal disease [4]. Migration has been identified as impacting oral health. However, the way in which psychological and social factors affect migrants [20] and impact oral diseases needs to be better understood through both quantitative and qualitative research [20].
Head and neck cancer is among the most common types of cancer worldwide, with projections of about half a million new cases yearly [21][22][23]. Previous research demonstrated that low income, low educational attainment, low socioeconomic position, and social deprivation were positively associated with oral cancer [9,24]. In addition, a recent SR [25] reported that belonging to an ethnic minority group or being uninsured was related to either a delay in the diagnosis of oral cancer or a delay in starting treatment. It was estimated that people affected by malignancies in the orofacial region had a substantial decrease in their oral health-related quality of life (OHRQoL), which impacted their ability to cope with daily activities [26]. More epidemiological studies evaluating the social determinants of these conditions are needed.
Traumatic dental injuries (TDI) are a public health problem because of their high prevalence and also because a traumatized tooth may impact aesthetics, quality of life, and psychosocial behavior. It was estimated that over a billion people worldwide had TDI [27]. The global prevalence of TDI is 24.2% in primary dentition [28] and 15.2% in permanent dentition [27]. Until recently, evidence of an association between socioeconomic indicators and TDI was uncertain [29]. However, an overview of SR reported that some sociodemographic characteristics (younger age, male sex, and lower-income) were associated with a higher probability of being affected by TDI [10]. In the same study, the association between TDI and the educational level of caregivers remained unclear [10]. Further primary studies are required to fully understand how inequalities affect TDI.
Temporomandibular disorders (TMD) are the main cause of non-dental orofacial pain worldwide [12] and significantly affect people's quality of life [30]. The prevalence of TMD among children and adolescents was found to vary between 7.3 and 30.4% [31]. The proportion of adults and elderly with TMD reached 31% [32]. However, the actual prevalence of the condition is still under debate due to variations in diagnostic criteria [33]. Despite that, some SRs provide evidence that women [11,33] between 20 and 40 years of age [12] are more likely to develop TMD. The gender-related difference could be linked to biological, cultural, and social factors, but the pathways through which these factors predispose more women than men to TMD are not yet understood. Furthermore, the role of other sociodemographic indicators, such as ethnicity and socioeconomic status, in TMD prevalence is still controversial [22].
Broad knowledge of the social determinants of oral conditions could help decision-making by healthcare providers and the development of preventive programs by policymakers, and ultimately reduce oral health inequities [6,24,34,35].
Oral-Health-Related Behaviors
Despite improvements in prevention, oral diseases are still a significant population problem [14,36], associated with oral hygiene, tobacco use, diet, and stressors. Some psychosocial factors, such as 'self-efficacy', 'intention', 'social influences', 'coping planning', and 'action planning', have been associated with oral hygiene [15], and studies increasingly highlight the fact that positive health behavior is influenced by psychosocial factors.
People with a higher sense of coherence (SOC) are better at managing stressful situations and problems and at promoting better general health. SOC is a psychosocial determinant of people's health behavior and has been correlated with hygiene, dietary habits, and alcohol consumption [37]. An SR of nine articles aimed to analyze the empirical evidence on the association between oral health behaviors and SOC. The study identified that more favorable oral health behaviors were observed among those with higher SOC. This result suggested that SOC may be a determinant of oral-health-related behaviors, including frequency of toothbrushing, dental-care-seeking, and daily smoking habits. Mothers' SOC could influence the oral health preventive practices of their children [38]. Poursalehi et al. (2021) [36] performed an SR and meta-analysis to evaluate the effect of SOC on the oral health status of people in different age groups. The results showed that age, social support, education, working conditions, and living conditions in childhood could influence SOC, whereas gender did not show a significant effect. According to the authors, SOC appeared to be effective in predicting oral health behaviors.
Healthy habits such as daily toothbrushing, regular access to sources of fluoride, and moderate consumption of sugar are the most effective ways to prevent major oral diseases and to reduce health services costs [15]. According to Menegaz et al. (2018) [35], the strong social and behavioral character of oral diseases highlights the relevance of implementing educational interventions that encourage autonomy and change in health behaviors to promote prevention practices.
Scheerman et al. (2016) [15] carried out an SR and meta-analysis of 22 papers to identify the psychosocial determinants of oral hygiene behavior in people aged 9 to 19. Higher toothbrushing frequency among adolescents was associated with higher 'intention', 'social influences', 'self-efficacy', 'action planning', and 'coping planning', suggesting that these factors are likely to be psychosocial determinants of toothbrushing. In the same study, the psychosocial variables 'locus of control', 'sense of coherence', and 'self-esteem' were less likely to be associated with toothbrushing. The authors highlighted the importance of psychosocial factors as determinants of oral hygiene habits among preadolescents and adolescents. An SR developed by Calderon et al. (2014) [14] showed that ethnicity, race, and gender could affect the oral health behavior of adolescents. An SR developed by Menegaz et al. (2018) [35] concluded that educational interventions promote changes in oral-health-related behaviors (daily toothbrushing, regular contact with fluoride sources, and controlled consumption of sugar), prevent major oral diseases, and reduce costs for health services and society.
It is very important that effective programs improve oral-health-related behavioral habits in different age groups. Literature indicates the need to develop research on other factors that affect oral health behavior. Future intervention trials should consider a range of psychological factors that have not been fully studied, such as 'self-determination', 'anticipated repentance', 'action control', and 'self-identity'.
Dental Services Utilization
The access to and use of dental services are relevant topics to be studied in different populations. Ethnic minorities, immigrants and those from low socioeconomic groups show lower dental services utilization globally [39]. A higher income was consistently associated with children's use of dental services. Among adults, 50% of the observational studies included in an SR identified more education as a factor that increases dental service utilization [40]. In the older population, the rate of annual dental visits among those with a higher income and higher socioeconomic status is higher than among those with a lower income [41,42].
An SR showed that regular or preventive dental services utilization differs greatly across the globe. In countries with a higher human development index (HDI), more individuals utilize services. Utilization is also highly unequally distributed between different groups within countries. Individuals with less-supportive family structures, poor health literacy, or poor general and oral health, those who are edentulous or have severe tooth loss, and younger children show lower utilization. Neither utilization nor its differences between groups have changed significantly over time [13]. While the burden of oral diseases is heavier on more socially vulnerable populations, access to dental services is better for those with higher socioeconomic conditions [40]. There is a positive association between dental insurance and dental visits [43]. Dentally insured adults have more regular access to dental care than the uninsured. This inequity increases the gap between rich and poor. In our Special Issue, organizational and human resources factors were associated with access to dental prostheses in Brazil. Public dental offices with better organizational support and improved work incentives provided more dental prostheses to their patients. The reduction of inequalities in access to primary oral health care should be a goal set by policymakers [2].
Beyond identifying oral health determinants, it is urgent to overcome inequalities in order to promote health and reduce morbidity, including oral conditions. An SR [44] evaluated intervention programs developed to reduce inequality in dental caries prevalence among children. The studied interventions included health promotion/preventive initiatives, topical fluorides, and water fluoridation to reduce caries among children of different socioeconomic groups. Comparison groups included children with an alternative intervention or no intervention. The findings suggested that broader population interventions, such as water fluoridation, are more likely to reduce inequalities in children's caries than interventions targeted at specific populations.
Studies of interventions to reduce socioeconomic inequalities [45] in dental service utilization by adults are limited to those involving pregnant women and parents organizing care for their children. They have mostly targeted individual behavior rather than community or structural factors, and have involved participants at the lower end of the socioeconomic status spectrum. Evidence involving participants across the whole social gradient is limited.
There is a lack of research on interventions that aim to reduce socioeconomic inequalities in adult dental visits and on interventions that target community or structural causes of these inequalities. In our Special Issue, a trial assessed how an integrated oral healthcare intervention for pregnant women affected health outcomes. It revealed that, despite socioeconomic and behavioral health determinants, multi-professional health actions during prenatal care could contribute to positive pregnancy outcomes and oral health [3]. Health policies and service availability influence inequality in service provision, while individual, social, cultural, and economic determinants affect inequality in dental services utilization [42].
Health Determinants
Published in 1991 [46,47] and revised in 2021, the Dahlgren and Whitehead model has been widely used worldwide. It focuses on the determinants of health, rather than on the causes of disease, to enable people and non-medical professionals to act to improve health. The focus on the determinants of health allows the development of more comprehensive strategies in which actions can be planned and implemented without the risk of fragmentation induced by focusing on the aetiology of individual diseases. Specialized, uncoordinated actions to treat and prevent different diseases have very limited impact in reducing risk factors or determinants of health outside their immediate field when compared with comprehensive strategies addressing the determinants of health. The model describes a pathway to health inequalities through four main influences: differential power and resources; differential exposure; differential vulnerability; and differential consequences of being sick or healthy.
Many of the social determinants for good health, such as livelihoods, equality, access to health care, and social support structures, are being undermined by climate change. Climate-sensitive health risks are particularly felt by the most vulnerable: women, children, ethnic minorities, poor communities, migrants or displaced persons, elderly populations, and those with underlying health conditions [48]. The marked social gradient in health within and between countries, and its associated inequities, are caused by the unequal distribution of power, income, goods and services, access to health care, schools and education, conditions of work and leisure, housing, communities, and towns. This unequal distribution of health-damaging experiences is the result of the combination of poor social policies and programmes and unfair economic arrangements. The structural determinants and conditions of daily life constitute the social determinants of health and are responsible for a major part of health inequities [49].
Environmental Health Determinants
The environment is one of the determinants of health. Both the social environment (social support and social networks, social deprivation, income inequality, racial discrimination, social cohesion, and social capital) and the built environment (human-made or modified surroundings) have been systematically reviewed to understand their impact on health outcomes [50,51]. Dental caries, fluorosis, and their association with fluoridated water [52,53] are probably the most frequently studied environmentally determined oral health conditions. As one of the most prevalent chronic diseases, dental caries affects some 70% of children from disadvantaged families globally. The disease disproportionately affects ethnic minorities, people living in rural areas, and socially disadvantaged children. A systematic review has shown that fluoride use reduces the incidence of caries in lower socioeconomic areas. Children in higher socioeconomic positions may already have good oral-health-related behaviours, better access to dental services, and more access to fluoride toothpaste than children in lower socioeconomic positions. Thus, the provision of fluoridated water and the use of fluoride-containing products are more likely to be useful among those in lower socioeconomic positions [44].
Climate change impacts health in many ways, leading to death and illness from increasingly frequent extreme weather events such as heatwaves, storms, and floods. It disrupts food systems, increases zoonoses and food-, water-, and vector-borne diseases, and generates adverse mental health outcomes. A half-century of progress in development, global health, and poverty reduction is threatened by climate change, which widens health inequalities. It threatens universal health coverage by increasing the burden of disease and by reinforcing barriers to accessing health services. Some 12% of the world's population spend at least 10% of their household budget on health care. Health shocks and stresses caused by climate change push around 100 million people into poverty every year [48].
The effects of environmental changes on health will affect most populations in the coming decades. Their management will require inputs and coordination from all sectors of government and collaboration between countries, academic institutions, and disciplines. Local communities must be involved in monitoring, discussion, and advocacy, and require assistance with adaptation. A multidisciplinary approach to reducing the adverse health effects of environmental change requires three levels of action. First, policies must reduce carbon emissions and increase carbon biosequestration to eventually stabilize temperatures. Second, political and social action should be taken on the events that link climate change to disease. Third, appropriate public health systems should be put in place to deal with the adverse outcomes of climate change [54].
The multiple health impacts of climate change include an increase in infectious diseases, respiratory disorders, heat-related morbidity and mortality, undernutrition due to food insecurity, and increased sociopolitical tension and conflict [51]. These effects are frequently unequal and disproportionately impact populations, including those who have contributed least to the problem. Climate change interacts with existing social and economic inequalities and exacerbates gaps within and between countries [55]. A recent overview of SRs has summarized the direct and indirect interdependence of environmental and human well-being. Rises in temperature and humidity have been associated with infectious diseases, mortality, and adverse respiratory, cardiovascular, and neurological outcomes. Other associations, less frequently studied but consistent, linked climate impacts to increased use of healthcare services, adverse mental health outcomes, adverse nutritional outcomes, and adverse occupational health outcomes [51].
Peoples' Response and Sustainability
A recent SR described the evidence on the effects of climate change adaptation responses on health outcomes in low- and middle-income countries as disparate and limited [56]. Quantitative data were lacking, evaluation timelines were typically short, and no studies reported health outcomes over periods longer than 12 months. Papers addressing responses to extreme weather events were frequent, whereas responses to gradual climate change have scarcely been studied. The effect of climate change adaptation responses on infectious diseases, food security, and indicators of household access to drinking water, sanitation, and hygiene (WASH) has been studied; however, evidence that these adaptation responses improved WASH indicators and food security is limited [57]. It is unequivocal that climate change affects human health. However, it remains challenging to accurately estimate the scale and impact of many climate-sensitive health outcomes.
Sustainable development is defined as development that meets the needs of the present without compromising the ability of future generations to meet their own needs. Economic growth, social inclusion, and environmental protection are the three main pillars of sustainable development. The United Nations has set 17 Sustainable Development Goals for 2030 and monitors progress toward them. They are: to end poverty in all its forms everywhere; to end hunger, achieve food security and improved nutrition, and promote sustainable agriculture; to ensure healthy lives and promote well-being for all at all ages; to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all; to achieve gender equality and empower all women and girls; to ensure availability and sustainable management of water and sanitation for all; to ensure access to affordable, reliable, sustainable, and modern energy for all; to promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all; to build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation; to reduce inequality within and among countries; to make cities and human settlements inclusive, safe, resilient and sustainable; to ensure sustainable consumption and production patterns; to take urgent action to combat climate change and its impacts; to conserve and sustainably use the oceans, seas and marine resources for sustainable development; to protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss; to promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels; and to strengthen the means of implementation and revitalize the Global Partnership for Sustainable Development [58].
Final Comments
This paper reviewed the social and physical environmental determinants of oral health and critically discussed them to identify and explain their relationships. Further scientific and political action is needed to reduce inequalities and to promote health. In the 21st century, the impact of the physical environment on some health outcomes has already been identified. However, it should be emphasized that its impact on oral health outcomes is not yet fully understood. It is urgent that researchers and policymakers dedicate resources to the subject.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "198a260542406a7bbfd62f6db9b57d3197e72b02",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/24/13429/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5adc7593d74bfe8ecbda9c7fa0624435f941a525",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A transition from unimodal to multimodal activations in four sensory modalities in humans: an electrophysiological study
Background: To investigate the long-latency activities common to all sensory modalities, electroencephalographic responses to auditory (1000 Hz pure tone), tactile (electrical stimulation to the index finger), visual (simple figure of a star), and noxious (intra-epidermal electrical stimulation to the dorsum of the hand) stimuli were recorded from 27 scalp electrodes in 14 healthy volunteers. Results: Source modeling showed multimodal activations in the anterior part of the cingulate cortex (ACC) and hippocampal region (Hip). The activity in the ACC was biphasic. In all sensory modalities, the first component of ACC activity peaked 30–56 ms later than the peak of the major modality-specific activity, the second component of ACC activity peaked 117–145 ms later than the peak of the first component, and the activity in Hip peaked 43–77 ms later than the second component of ACC activity. Conclusion: The temporal sequence of activations through modality-specific and multimodal pathways was similar among all sensory modalities.
Background
In previous studies using magnetoencephalograms (MEGs) to monitor tactile [1], auditory [2], visual [3,4] and pain [5,6] systems, we found very similar mechanisms of sensory processing among these sensory modalities. In brief, several 'early' activities appear serially with a time delay of about 4 ms at each step followed by one or two 'late' activities. In general, the 'early' activity reverses polarity twice with an interval of 10 ms, which results in a characteristic triphasic waveform, while the 'late' activity is long-lasting without a polarity reversal at such a short interval [7]. For example, following tactile stimulation, 'early' activations are elicited in area 3b, area 1 and the posterior parietal cortex in this order with a delay of 3-4 ms between each step, and then a long-lasting 'late' activity is evoked in the secondary somatosensory area. We postulate that a basic role of the 'early' activity is to receive inputs from the thalamus or convergent inputs from the thalamus and/or adjacent cortical areas and to send this information to the next point quickly, while the long-lasting 'late' activity is involved in recognition of the stimuli [2].
In the present study, we sought to compare mechanisms of sensory processing at latencies later than the 'late' activity among these sensory modalities (vision, audition, touch and pain). At the outset, we expected there to be both unimodal and multimodal activities. Although there is a large and growing number of studies on multimodal interaction using electroencephalography (EEG) and MEG [8-13], as well as on multimodal activation and interaction using functional magnetic resonance imaging (fMRI) [14-17], it is unclear whether or not the timing of the transition from unimodal to multimodal cortical activations differs among modalities. The present study manipulated the interstimulus interval (ISI) at three levels to examine the transition from unimodal to multimodal activations. In general, evoked potentials around and after 100 ms, like the N1 and P3a/P3b, increase more sensitively with increasing ISI than earlier responses. If the difference in response amplitude between different ISIs shows the same scalp distribution among modalities, the difference in amplitude should originate from the same generator. In addition, a source analysis is helpful to estimate the location of the generator. In general, the manipulation of the ISI more strongly affects the P3a and P3b, and the non-specific activities of the N1, which are considered indices of orienting attention [18], compared with activities (the "late" activity) within 100 ms. Because the original N1 response in any modality largely includes modality-specific activities showing a different scalp distribution depending on the modality, a comparison between the scalp distributions of the original N1 might not provide a clear-cut result. Therefore, we chose a simple manipulation of the ISI to extract the non-specific activities more effectively, to obtain a clearer result, and to enable us to estimate the location of the activity more reliably and simply. Another reason for this choice is that the activities obtained by manipulating the ISI may be associated with orienting attention and the later processes reflected by the non-specific N1 and P3a/P3b.
We expected that the non-specific, possibly multimodal, activities obtained by manipulating the ISI would appear clearly at latencies later than 100 ms and show the same scalp distribution across modalities, whereas the "late" activities within 100 ms would show a different scalp distribution among modalities. The multimodal activations were expected to be in the anterior cingulate gyrus or hippocampus on the basis of a large number of previous studies performing source analyses [5,19-25] and intracranial recordings [26-29], whereas unimodal activities ("late" activities) were expected to be in areas specific to each modality. Of final and special interest was whether or not the timing of the transition from unimodal to multimodal activations is the same among modalities.
Methods
The experiment was performed on 14 (four females and ten males) healthy right-handed volunteers, aged 23-52 years (mean, 32 ± 8). The study was approved in advance by the Ethics Committee of the National Institute for Physiological Sciences, Okazaki, Japan, and written consent was obtained from all the subjects.
Procedures
There were three different interstimulus interval (ISI) conditions for each modality, 0.5-0.7 s, 1.8-2.2 s and 9-11 s (2 Hz, 0.5 Hz and 0.1 Hz conditions). Therefore, there were 12 conditions (4 modalities × 3 ISIs). In each condition, 56 stimuli were presented in four separate blocks (10-17 stimuli in each block). Subjects were instructed to count the number of stimuli silently, and asked to report it in each block. If the answer was incorrect for one block out of four, the accuracy rate for the condition was 75%. In each block, the first stimulus was not included in the recording. The order of the 12 conditions was randomized among subjects.
Stimuli
Auditory-evoked potentials (AEPs) were elicited with a 1000 Hz tone (50 ms plateau, 5 ms rise/fall) that was presented binaurally through headphones at 60 dB SPL. For somatosensory-evoked potentials (SEPs), a square wave pulse 0.5 ms in duration was delivered to the right index finger through ring electrodes with the anode and cathode at the first and second phalangeal space, respectively. The stimulus intensity was two times the sensory threshold (1.0 ± 0.4 mA). For visual-evoked potentials (VEPs), a figure of a star (white on a black background, 5.3 × 5.3° visual angle) was presented for 48 ms at the center of a screen 1.5 m in front of the subject. Noxious stimuli-evoked potentials (pain-related SEPs, pSEPs) were elicited by intra-epidermal electrical stimulation [30] using a concentric bipolar needle electrode [4] that could stimulate cutaneous A-delta fibers selectively. The electrical stimulus was a square wave pulse of 1.0 ms applied to the dorsum of the right hand between the first and second metacarpal bones. The stimulus intensity was two times the pain threshold (0.2 ± 0.1 mA). The mean visual analogue scale score (0-100) for the painful sensation was 31 ± 20.
Analysis
Conventional averaged waveforms of the 12 conditions were obtained for each subject. Then, two difference waveforms were obtained in each modality. That is, we obtained the difference waveform between the 2 Hz and 0.5 Hz conditions by subtraction of the waveform of the 2 Hz condition from the 0.5 Hz-waveform (Sub1), and the difference waveform between the 0.5 Hz and 0.1 Hz conditions by subtraction of the 0.5 Hz-waveform from the 0.1 Hz-waveform (Sub2). Therefore, the Sub1 waveform indicated the activity that was increased in the 0.5 Hz condition as compared with the 2 Hz condition, and the Sub2 waveform indicated the activity that was increased in the 0.1 Hz condition as compared with the 0.5 Hz condition. The grand-averaged waveforms of the three conditions (2 Hz, 0.5 Hz, 0.1 Hz), Sub1 and Sub2 across all subjects were obtained and used for the analysis of topography. The averaged waveforms for the 2 Hz condition, Sub1 and Sub2 were used for source modeling.
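In outline, this averaging-and-subtraction step amounts to the following minimal sketch (Python/NumPy; the array names and shapes are illustrative assumptions, not part of the original analysis pipeline):

```python
import numpy as np

def grand_average(epochs):
    """Average epoched data of shape (n_trials, n_channels, n_samples)
    across trials, yielding a (n_channels, n_samples) waveform."""
    return epochs.mean(axis=0)

def difference_waveforms(avg_2hz, avg_05hz, avg_01hz):
    """Sub1 = 0.5 Hz minus 2 Hz; Sub2 = 0.1 Hz minus 0.5 Hz.
    Sub1 isolates activity enhanced at the 0.5 Hz ISI relative to 2 Hz,
    Sub2 isolates activity enhanced at the 0.1 Hz ISI relative to 0.5 Hz."""
    sub1 = avg_05hz - avg_2hz
    sub2 = avg_01hz - avg_05hz
    return sub1, sub2
```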
In each waveform, the root mean square (RMS) across the 27 channels was calculated, and the field distribution was examined at several RMS peaks. The similarity of the field distribution at a certain latency point between different conditions or between different modalities was examined by determining the correlation coefficient, r [2]. In addition to the grand-averaged waveform, the similarity of the topography among modalities was also assessed using data of the 0.1 Hz condition of individual subjects.
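The RMS and topography-similarity measures can likewise be sketched as follows (again an illustrative reconstruction; Pearson correlation via np.corrcoef is assumed to correspond to the correlation coefficient r used here):

```python
import numpy as np

def rms_across_channels(avg):
    """Root mean square over the 27 channels at each time point;
    avg has shape (n_channels, n_samples)."""
    return np.sqrt((avg ** 2).mean(axis=0))

def topography_correlation(avg_a, idx_a, avg_b, idx_b):
    """Pearson r between two field distributions, i.e. the vectors of
    27 channel values taken at the given sample indices (RMS peaks)."""
    return np.corrcoef(avg_a[:, idx_a], avg_b[:, idx_b])[0, 1]

# Example usage with placeholder (27, n_samples) arrays aep_01hz, sep_01hz:
# peak_a = int(np.argmax(rms_across_channels(aep_01hz)))
# peak_b = int(np.argmax(rms_across_channels(sep_01hz)))
# r = topography_correlation(aep_01hz, peak_a, sep_01hz, peak_b)
```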
A multi-dipole analysis was performed to separate temporally overlapping multiple sources by using the brain electric source analysis (BESA) software package (NeuroScan, McLean, VA). Model adequacy was assessed by examining: 1) percent variance [31], 2) the F-ratio (the ratio of reduced chi-square values before and after adding a new source) [32] and 3) residual waveforms (that is, the difference between the recorded data and the model), as described previously [1]. Since 27 EEG channels were used in the present study, only four dipoles were allowed to be included in a model to calculate the F-ratio (the degrees of freedom of the chi-square of a model is 27-6N, N = number of dipoles). This is the main reason why we used the difference waveforms (Sub1 and Sub2) in this study. Our preliminary analysis showed that more than six dipoles were necessary to explain the 0.1 Hz waveform. The subtraction procedure could reduce the number of dipoles because of the presence of common source activities with similar source strength among conditions. Therefore, source modeling was applied to the 2 Hz, Sub1 and Sub2 waveforms, though the goal of the analysis was to clarify the temporal sequence of each source activity in the 0.1 Hz condition, where both early and late activities were expected to be present. It is well known that the middle- and long-latency components of evoked potentials are sensitive to the ISI (e.g. [33,34]). Only when the fourth and fifth dipoles contributed almost equally to explaining the recorded data (for example, two sources in the bilateral fusiform gyrus in the 2 Hz-VEPs) was a fifth dipole included in the model. BESA uses a spherical four-shell model (the brain, cerebrospinal fluid, bone and skin). The location of each cortical source was expressed in Talairach coordinates. To confirm the reliability of the results of BESA using grand-averaged waveforms, data of each subject obtained in the 0.1 Hz condition were also subjected to source modeling. The method was identical to that for the averaged waveform. It was sometimes difficult to analyze individual data using a multi-dipole method because of the low S/N ratio. This was the main reason that we used grand-averaged data in this study. However, the evoked responses in the 0.1 Hz condition were usually large enough for this analysis, at least for detecting the late activities that the present study targeted. Therefore, data of individual subjects for the 0.1 Hz condition were used for the BESA and topographical analyses, and the results were compared to those for the grand-averaged data.
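To make these adequacy criteria concrete, here is a small sketch of the stated definitions (an illustration only, not BESA's internal code; the function names and the unit noise variance are assumptions):

```python
import numpy as np

N_CHANNELS = 27  # scalp electrodes; an N-dipole model has 6N parameters

def percent_variance(recorded, modeled):
    """Goodness of fit (GOF): percentage of the variance of the recorded
    field explained by the model (100% = perfect fit)."""
    resid = recorded - modeled
    return 100.0 * (1.0 - (resid ** 2).sum() / (recorded ** 2).sum())

def reduced_chi_square(recorded, modeled, n_dipoles, noise_var=1.0):
    """Residual chi-square divided by its degrees of freedom, 27 - 6N;
    each dipole contributes 3 location and 3 moment parameters, which is
    why at most four dipoles leave positive degrees of freedom here."""
    dof = N_CHANNELS - 6 * n_dipoles
    return ((recorded - modeled) ** 2).sum() / noise_var / dof

def f_ratio(recorded, model_k, model_k_plus_1, k):
    """Ratio of reduced chi-squares before vs. after adding the (k+1)-th
    source; values well above 1 favor keeping the extra dipole."""
    return (reduced_chi_square(recorded, model_k, k)
            / reduced_chi_square(recorded, model_k_plus_1, k + 1))
```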
Psychophysical results
The counting task was very easy for the subjects. The mean correct answer rate was 98.2%. A two-way analysis of variance showed that the ISI (F = 3.42, P = 0.04) but not modality (F = 1.11, P = 0.35) was a significant factor modulating the correct answer rate. Bonferroni/Dunn's post hoc test indicated that the correct answer rate was significantly (P < 0.05) higher for the 0.1 Hz condition (99.4%) than the 2 Hz condition (96.4%).
Waveform and topography
AEPs
In the waveforms of the 2 Hz and 0.5 Hz conditions, there were two RMS peaks at around 82 and 190 ms (Fig. 1). At both peaks, the field distribution was highly correlated between the two conditions (r = 0.98 and 0.99, respectively). However, the distribution was slightly different at the P7/8 and P9/10 electrodes. In the 2 Hz condition, the activity at the first RMS peak (82 ms) was negative at almost all the electrodes, but a positive activity was clearly detected at the P7/8 and P9/10 electrodes in the 0.5 Hz condition, indicating the presence of additional bilateral sources in the 0.5 Hz condition at this latency point. In confirmation of this, the Sub1 waveform showed a positivity at these electrodes at around 80 ms. In the 0.1 Hz condition, an additional negativity at around 115 ms and positivity at around 300 ms emerged. In the large positive deflection at 150-400 ms, there were two RMS peaks at 255 and 288 ms. At 255 ms, the positivity was maximal at Cz but the positive peak shifted slightly posteriorly (maximal at Pz followed by P3/4) at 288 ms, indicating that at least two distinct source activities were responsible for shaping the large positive deflection. The Sub2 waveform confirmed that there were additional large negative/positive sequential components in the 0.1 Hz condition as compared with the 0.5 Hz condition.
[Figure 1. Grand-averaged waveforms and topographies of auditory- and somatosensory-evoked potentials. Superimposed waveforms recorded from 27 channels, obtained in the 2, 0.5 and 0.1 Hz conditions, and by the subtraction of the 2 Hz-waveform from the 0.5 Hz-waveform (Subtraction 1) and the subtraction of the 0.5 Hz-waveform from the 0.1 Hz-waveform (Subtraction 2). Isocontour maps at several peaks of the root mean square (indicated by arrows) are shown on the right.]
SEPs
In the 2 Hz-SEPs, a weak positivity at around 225 ms and weak activities at earlier latencies were evoked (Fig. 1). The positivity with the maximal amplitude at Cz was enhanced in the 0.5 Hz condition. In the 0.5 Hz condition, an additional negativity (maximal at T7) and concomitant positivity (Fz and Pz) at around 86 ms appeared. In the 0.1 Hz condition, a large negativity (140 ms) and positivity (200-350 ms) appeared in addition. Like the waveforms of the 0.1 Hz-AEPs, the large positivity in this condition had a maximal amplitude at Cz at around 268 ms, but the location of the positive peak shifted more posteriorly with an increase in latency. The field distribution pattern at the first RMS peak of the large positivity (268 ms) was correlated with that in the 0.1 Hz-AEPs (255 ms, r = 0.97 for the grand-averaged waveform and 0.9 ± 0.09 for the individual data), and that of the second RMS peak of the 0.1 Hz-SEPs (299 ms) was also highly correlated with that in the 0.1 Hz-AEPs (288 ms, r = 0.99 and 0.91 ± 0.07). Likewise, both the negativity (142 ms) and positivity (295 ms) of the Sub2-SEP waveform were significantly correlated with those of the Sub2-AEP waveform (125 and 267 ms) (r = 0.91 and 0.97, respectively), suggesting that similar cortical activities contributed to shape the waveform of the 0.1 Hz condition for AEPs and SEPs.
VEPs
Visual stimuli at 2 and 0.5 Hz evoked similar positive/negative/positive sequential components (Fig. 2). Two additional positive components peaking at 274 and 355 ms appeared in the 0.1 Hz condition. The field distribution pattern was very similar among the three ISI conditions for the first positivity peaking at around 90 ms (r = 0.92-0.98), the negativity peaking at around 140 ms (0.98-0.99) and the second positivity peaking at around 200 ms (0.97-0.98). The Sub1 waveform showed a component peaking at 128 ms, with the maximal negativity at O1 and O2 and a positivity at Cz. Similar to the AEPs and SEPs, an additional negativity (168 ms) and two sequential positive components (304 and 360 ms) appeared in the 0.1 Hz condition as compared with the 0.5 Hz condition.
pSEPs
In the 2 Hz-pSEPs, no clear component was evoked. In the 0.5 Hz condition, a positivity peaking at 298 ms with a maximal amplitude at Cz was evoked (Fig. 2). In the 0.1 Hz condition, a large positivity peaking at Cz additionally appeared as in the other modalities. The field distribution pattern at the peak RMS of the positivity (354 ms) was similar to those of the 0.1 Hz-AEPs (288 ms, r = 0.96 for the grand-averaged waveform and 0.92 ± 0.05 for individual waveforms), 0.1 Hz-SEPs (299 ms, r = 0.98 and 0.93 ± 0.03) and 0.1 Hz-VEPs (355 ms, r = 0.91 and 0.81 ± 0.13). There was also an additional component at around 150 ms in the 0.1 Hz-pSEP as compared with the 0.5 Hz condition. At the peak RMS (146 ms), the negativity was maximal at T7 (and T8 in the ipsilateral hemisphere) and the positivity was maximal at the midline electrodes (Fz and Pz). The field distribution pattern at this latency was very similar to that at 86 ms of the 0.1 Hz-SEPs (r = 0.97 and 0.88 ± 0.07). Such a high correlation of the complicated distribution pattern between different modalities was noteworthy. The Sub2 waveform consisted of a large negativity and positivity with similar field distribution patterns in other sensory modalities (r = 0.61-0.91 for the negativity, r = 0.93-0.97 for the positivity). The relatively low correlation coefficient for the negativity was due to the concomitant existence of modality-specific activity at the latency of the negativity that was enhanced in the 0.1 Hz condition, especially in VEPs.
Procedures of source modeling
We sought the source solution responsible for the prominent components of the potentials whose topography showed high correlation among different conditions or different modalities, since such correlations suggest similar generators across conditions or modalities. We repeated the source estimation around the peak latency of the potential component to select a robust solution with the highest goodness of fit (GOF), or the highest improvement in GOF. Once the best first source was determined, we tried to find the second source around the peak latency of the potentials that remained unexplained by the first source. Usually, the peak of the residual waveform was similar to that of the original waveform where the topography showed high correlation among conditions or modalities.
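In outline, the stepwise procedure just described can be sketched as a greedy loop (pseudocode-style Python; fit_dipole_at is a placeholder for the single-dipole fit actually performed in BESA, and the stopping thresholds are illustrative assumptions):

```python
import numpy as np

def gof(recorded, modeled):
    """Percent variance explained (goodness of fit)."""
    resid = recorded - modeled
    return 100.0 * (1.0 - (resid ** 2).sum() / (recorded ** 2).sum())

def sequential_source_fit(data, fit_dipole_at, max_sources=5, gof_target=95.0):
    """Greedy outline: repeatedly fit one dipole near the largest RMS peak
    of the residual and keep it only while the fit keeps improving.
    data is a (n_channels, n_samples) waveform; fit_dipole_at(residual, t)
    must return the modeled waveform of the best single dipole around t."""
    model = np.zeros_like(data)
    kept_latencies = []
    for _ in range(max_sources):
        residual = data - model
        # next candidate latency: largest RMS peak of what is unexplained
        t = int(np.argmax(np.sqrt((residual ** 2).mean(axis=0))))
        candidate = model + fit_dipole_at(residual, t)
        # keep the new source only if it clearly improves the local fit
        if gof(data[:, t], candidate[:, t]) <= gof(data[:, t], model[:, t]):
            break
        model = candidate
        kept_latencies.append(t)
        if gof(data[:, t], model[:, t]) >= gof_target:
            break
    return kept_latencies, model
```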
For the waveform of the 2 Hz-AEPs (Fig. 3), we started the analysis at the first peak of RMS (82 ms). The best single source was estimated to be located in the left supratemporal plane (-55, -29, 12 in Talairach coordinates), probably corresponding to the planum temporale (PT) according to previous studies (for review, see [35]). Since the first source left substantial activity unexplained at this latency point (residual variance = 25%), we tried to find a second source at this latency. The best second source was estimated to be in the right PT (41, -27, 15), and the GOF was increased from 75 to 98% (F = 5.9, P < 0.0002) by the addition of the second source. This two-source model successfully explained the data around 82 ms, but left some activities at around 200 ms unexplained. The best source to explain the residual activity was estimated to be in the middle part of the cingulate gyrus (1, -23, 44). By adding this source, the GOF was increased from 44 to 98% (P < 0.0001). This three-dipole model provided a mean GOF value of 87% (0-500 ms), and no additional dipoles significantly improved the fit. By using similar procedures, four sources in the bilateral PT and bilateral superior temporal gyrus (STG) were estimated for the Sub1 waveform, indicating that these four source activities were stronger in the 0.5 Hz condition than in the 2 Hz condition or additionally appeared in the 0.5 Hz condition. In the Sub2 waveform, bilateral STG sources were responsible for the early activity at around 90 ms. After fitting these two sources, however, large parts of the main negative/positive components were left unexplained. To explain the residual activity, the best source was estimated to be in the posterior part of the anterior cingulate cortex (ACC). This source markedly improved the fit (e.g., the GOF at 125 ms increased from 21 to 93%). However, residual activity was still clear at around 300 ms. To explain this activity, we had to add two more sources to the model, since no single source significantly improved the fit but residual activity was evident. The best sources were estimated to be located in the medial part of the temporal lobe, in the parahippocampal gyrus of both hemispheres. After the addition of these two sources, the GOF at 300 ms increased from 64 to 98% (P < 0.02).
[Figure: Grand-averaged waveforms and topographies of visual- and noxious stimuli-evoked potentials.]
Multimodal activations and modality-specific activations
Similar procedures were applied to SEPs, VEPs and pSEPs. Figures 3 and 4 show the results of source modeling. The Talairach coordinates of each cortical source are shown in Table 1. In SEPs, VEPs and pSEPs, sources in the ACC and Hip contributed to the Sub2 waveform, like in the AEPs. That is, the ACC was the main source of activity responsible for the large negative/positive vertex potentials and the bilateral sources in Hip were also responsible for the later part of the positivity, which was consistent with similar scalp topographies of evoked potentials among modalities. The locations of the ACC and Hip sources were similar among sensory modalities (Fig. 5B).
Several modality-specific activations were estimated to occur in each sensory modality. Among them, sources in the STG (AEPs), left opercular region (OP, SEPs), bilateral middle occipital gyrus (MOG, VEPs) and left OP (pSEPs) appeared to be sensitive to the ISI, that is, they were more strongly activated in longer ISI conditions.
Time course of each source activity in the 0.1 Hz condition
Since all sources detected in the analysis described above were considered to be active in the 0.1 Hz condition, they were applied to the waveform of the 0.1 Hz condition to determine the actual time course of each cortical source activity. Figure 5A shows the source strength as a function of time of the main modality-specific source activity and multimodal activity for the waveform of the 0.1 Hz condition. The peak latency of each activity is shown in Table 1. The ACC activity was biphasic and the difference in latency between the first and second peaks was similar among modalities (117-145 ms). The activity in Hip always peaked later than the second peak of the ACC activity, and the delay (48-77 ms) was similar among modalities. In addition, the temporal sequence between the main modality-specific activation and ACC activation was similar among modalities, that is, the first component of ACC activity peaked later than the modality-specific activation by 30-56 ms. These results suggested that there were similar time courses of the multimodal activations in the ACC and Hip as well as a similar timing of the flow from the modality-specific area to the multimodal circuit among all the sensory modalities.
Results of source modeling for the 0.1 Hz waveform of individual subjects are shown in Fig. 5D. The time course of each cortical activity and the sequential pattern of activation through the sensory-specific areas, ACC and Hip were very similar to those for the grand-averaged data ( Table 1). In Fig. 5D, only the main modality-specific activity and multimodal activity are shown. In addition to these sources, a significant dipole was estimated to be located in right OP (n = 5) and left S1 (n = 2) for SEP, right OP (n = 6) for pSEP, and V1 (n = 5) and the fusiform gyrus (n = 3) for VEP.
Discussion
In our previous studies using MEG to monitor auditory [2], tactile [1], pain [5,6] and visual [3,7] systems, we showed that there were similar sequential activations through 'early' and 'late' sensory cortical areas among these sensory modalities. The results of the present study demonstrated similar time courses of activation through modality-specific areas and multimodal areas. Since the main modality-specific activations in the present study correspond in nature to the 'late' activity, these findings suggest a common temporal sequence of activations: modality-specific 'early' activity, then modality-specific 'late' activity, then the ACC, and then the Hip (Fig. 5C).
Methodological considerations
In the present study, we used a three-step multiple source analysis to find cortical sources responsible for evoked potentials in the 0.1 Hz condition: 2 Hz, Sub1 (0.5 Hz-2 Hz) and Sub2 (0.1 Hz-0.5 Hz). To study the timing of sequential activations among several cortical areas, it is apparent that results would be more convincing if the subtraction procedures were not necessary. However, the present results showed that this method made it easy to find several major sources of activities responsible for evoked potentials of each ISI condition. For example, the very similar Sub2 waveforms and their topographies across all modalities clearly showed the usefulness of this method. On the other hand, however, there is a possibility that we missed minor contributors during our three-step analysis, especially weak activities that contributed to all three ISI conditions equally.
[Figure caption: Time course of each source activity for visual- and noxious stimuli-evoked potentials.]
Multimodal activation in the ACC and hippocampal region
The present results demonstrated that the main common components of the evoked potentials were the negative/positive vertex potentials that arise mainly from the ACC. It is well known that noxious stimuli evoke negative/positive vertex potentials with a scalp distribution similar to those in the present study. Source modeling of scalp potentials evoked by laser stimuli [21-23] showed that the ACC is the main generator of the vertex potentials, which was confirmed later in many studies, including intracranial recordings by Lenz et al. (1998) [36]. Several studies have demonstrated that at least some of the vertex potentials reflect sensory non-specific events [37-39]. Very similar biphasic potentials were also evoked in the ACC in response to auditory and visual stimuli in an intracranial recording study [24]. As for the negative/positive vertex potentials following tactile stimulation, the field distribution of the negative/positive potentials of the 0.1 Hz-SEPs was highly correlated with those in other modalities (e.g. r = 0.96 between SEP-N140 and AEP-N115; r = 0.99 between SEP-P299 and AEP-P288), indicating that the main generator of these potentials is not in the modality-specific area. An intracranial recording study by Allison et al. (1992) [40] supported this view by showing that the scalp negative/positive vertex potentials are not generated in the sensorimotor cortex. There have been a few studies whose results were consistent with the present findings that the main generator of the negative/positive vertex potentials in response to tactile stimuli is the ACC. An SEP study by Waberski et al. (2002) [25] reported the possible contribution of the ACC to N140, and an MEG study by Inui et al. (2003a) [5] reported ACC activity following tactile stimulation with a peak at 128-150 ms that was coexistent with the peak (134 ms on average) of the simultaneously recorded scalp (Cz) negativity.
The cingulate cortex is an anatomically and functionally heterogeneous area, and is considered to serve cognitive, emotional, motor, nociceptive and visuospatial functions [41-44].
[Table 1 note: The location of each cortical source is expressed in Talairach coordinates. The peak latency of each cortical activity for the grand-averaged data (left) and for the individual data (right, mean ± SD) is shown.]
According to the traditional dichotomy of the cingulate cortex (anterior and posterior), the activation in the present study (BA 24/32) is located in the posterior part of the ACC. Functionally, this area of the cingulate cortex coincides with the cognitive subdivision of the ACC, which is activated by numerous cognitive/attentional tasks (for review, see Bush et al. 2000 [43]). The posterior part of the ACC is also part of diffuse cortical networks sensitive to stimulus salience [17], stimulus changes [16] and oddball paradigms [20,45-47], and is thought to be the major structure in the anterior attentional system proposed by Posner and Petersen (1990) [48] and a key site in Mesulam's (1990) [49] interconnected network for directed attention. Therefore, the similar location and time course of the ACC activity among all sensory modalities in this study suggest that the ACC activity is related to modality non-specific functions. Since only a very simple counting task was used in the present study and the ACC activation was robustly obtained in the long ISI condition, the ACC activation may be related to involuntary shifts of attention to the stimulus presented against the 'silent' background [50-54]. In a classical review analyzing the component structure of the auditory N1, Näätänen and Picton (1987) [18] described the N1 as consisting of modality-specific and non-specific components. One of the non-specific N1 components was predicted to be generated in the frontal lobe or a deeper structure and to be associated with attentional triggering mechanisms.
Activation in the Hip has usually been studied under oddball paradigms, as described below, and reports describing Hip activity in response to simple sensory stimuli are rare. There are several EEG and MEG studies showing Hip activity following noxious stimulation [5,23,55,56] and following tactile stimulation [5]. In these studies, the Hip activity appeared later than the modality-specific and ACC activities, which is in agreement with the present results. The important roles of the hippocampus are regarded as memory functions, such as memory storage, retrieval and consolidation [57-59], and attention [60]. As described below, the Hip activity seems to be one of the major sources generating the scalp-recorded P300 (P3b), both of which were measured in this study. A series of classical psychophysical studies by Donchin and colleagues suggested that the P3b reflects the updating of context in the working memory store accompanied by the allocation of attentional resources [61]. These findings imply that the Hip activity we observed is associated with memory or attentional functions. At the least, it may be related to a more voluntary process at a later stage than the modality non-specific process reflected by the ACC activity.
Possible involvement of the ACC and Hip activities in generating the P300
In ERPs recorded under oddball paradigms, in which an infrequent target stimulus and a frequent non-target stimulus are presented in a random order, a large positive component peaking at 300 ms or more after the stimulus (P300) is elicited in response to the target stimulus [62]. The P300 component is considered to reflect fundamental cognitive processes (for reviews, see [61,63-65]). While task-relevant deviant stimuli elicit the parietocentral P300 or P3b, task-irrelevant salient stimuli inserted among the repeated target and non-target stimuli under three-stimulus paradigms [66] elicit an earlier positive deflection, the frontocentral P3a or novelty P300 [50,51,67]. The P3a has been interpreted as a neural correlate of the orienting response [64].
Both the temporal sequence and the topography of the two distinct positive components in the present study therefore resemble those of the P3a and P3b. Although the present study did not employ discrimination tasks such as those in the oddball paradigm, subjects had to count the stimuli presented against a silent background at a random ISI, and thus the 'odd' or 'infrequent' aspect of the stimulus discrimination was maintained to a certain degree and the P300 component might have been elicited. Supporting this notion, Polich et al. (1994) [68] compared the P300 elicited with auditory stimuli using a typical oddball paradigm with that elicited from a single stimulus procedure and concluded that the single-stimulus task produces the P300 in the same fashion as those elicited with the oddball paradigm. A source modeling study using ERPs by Tarkka and Stokic (1998) [69] produced similar findings.
The notion that the biphasic ACC activity in the present study might have contributed to both the negativity (the fronto-central part of the processing negativity, or the non-specific N1 component [18,70]) and the subsequent P3a in oddball paradigms is consistent with a recent source modeling study of scalp potentials [20], in which the main contributor to the N2/P3a components was the ACC. A source modeling study by Dien et al. (2003) [19] also found that the P3a had a source in the ACC. The hippocampal region being one of the neural origins of the P300 is consistent with previous studies using scalp potentials [69,71,72], MEG [73] and intracranial recordings [26-29].
The finding in the intracranial recording studies that the peak of the focal activity recorded from the medial temporal lobe occurred 35-100 ms later than the positive peak recorded from the scalp [28,29,74] is noteworthy, indicating that the activity in this area is mainly responsible for the late part of the P300.
Temporal sequence of modality-specific and multimodal activations
Several modality-specific source activities were identified in each sensory modality. In general, these activities were less sensitive to the ISI than those from the ACC and Hip. However, some of these sources were activated more strongly in longer ISI conditions. These sources included STG (auditory), OP (tactile and pain) and MOG (visual). Although the significance of the sensitivity of these sources to the ISI was not clear in this study, they appear to be involved in attentional/cognitive aspects of sensory processing to a certain degree rather than the projecting function. These activities show several common features: 1) they peaked 30-56 ms earlier than the ACC source activity; 2) they lasted about 100 ms; and 3) their response latency was too long for the early (primary) activity. According to our previous findings in MEG studies, these activities correspond to the 'late' activity that follows several 'early' activities with a characteristic triphasic time course. Therefore, the present results together with our previous findings suggest a common temporal sequence of activation across all the sensory modalities: 1) the 'early' sensory cortex, 2) the 'late' sensory cortex, 3) the ACC, and then 4) the hippocampal region, each step of which roughly corresponds to 1) the quick projection of information to the next, 2) receiving, perception and integration of sensory information that has been refined at earlier stages, 3) involuntary shift of attention to perceived stimuli, and 4) voluntary aspect of cognition, memory and execution.
Comparison with other studies on multimodal interaction
There is growing evidence for multimodal activation and interaction in humans (for reviews, see [15,75]). Multimodal convergence has been reported in animal studies in the posterior parietal cortex, temporo-parietal junction and cingulate cortex [76-80]. It has also been reported that cortical areas that were considered exclusively modality-specific respond to stimuli from different sensory modalities [80-84]. Most interestingly, neuronal activity in the primary auditory cortex can be modulated by non-auditory influences even at this primary cortical level [85-88]. Such multimodal interactions are expected to be the neural correlates of the multisensory behavioral interactions observed in humans. In behavioral studies, accelerated reaction times and perceptual illusions have been observed as a result of multisensory interaction, such as the McGurk illusion [89-91], the hearing hand illusion [92,93], and the parchment-skin illusion [94]. MEG and EEG studies have successfully demonstrated neural correlates of such multisensory interactions in humans [8-12,95-103]. The present study was planned to reveal the basic time course of unimodal and multimodal activations within a different framework from these previous studies. First, we asked whether a similar cortical area is activated in response to sensory inputs coming individually from different modalities. Therefore, stimuli were presented individually in each modality, rather than simultaneously from different modalities as is often done in studies of multisensory interaction. In this regard, Downar et al. (2000) reported findings similar to the present study using fMRI [16,17], though the technique used and the stimulus environment differed from the present ones. We then analyzed the difference in timing between unimodal and multimodal activations, and found a similar time course from unimodal to multimodal activations across the different modalities, including vision, audition, touch and pain. Considering that the activities tested contribute to the N1 and P3a/P3b, this unimodal-multimodal transition implies that orienting attention and context updating are represented by electrophysiological correlates with a similar delay among the modalities tested.
Conclusion
The present study revealed the temporal sequence of activations through modality-specific and multimodal pathways among all sensory modalities including vision, audition, touch and pain. The timing of the transition from unimodal to multimodal activations was similar among all the modalities. Taken together with our previous studies investigating early cortical activities, these findings indicate a similar temporal sequence of activation among all sensory modalities.
"year": 2008,
"sha1": "ab34c11bff7069e8ac1078056a9623f1949f7600",
"oa_license": "CCBY",
"oa_url": "https://bmcneurosci.biomedcentral.com/track/pdf/10.1186/1471-2202-9-116",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2fff92a3f592a019b2bcbad564abbfc8aef3c253",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Business models for e-journals: reconciling library and publisher requirements?
The JISC commissioned Rightscom to carry out a series of interviews with librarians and publishers in order to understand the strengths and weaknesses of current business models for e-journals. Rightscom developed new business models and created dynamic working models for a selected number of them. Librarians and publishers agreed in such areas as the need for more funding to cover the increased output of research, the need for predictability and not restricting usage, but disagreed over the retention of print and the need for flexibility.
Background
It has been clear for some time that there are significant concerns in university libraries about current models for the purchase of electronic journals and the ability of libraries to meet the needs of their user communities in the provision of journals. Last summer the Journals Working Group of the JISC commissioned Rightscom to undertake an analytical study to help them to understand the underlying causes of those concerns in detail, to analyse the current business models themselves and to assess the potential for new business models.
Rightscom carried out a series of interviews with librarians and publishers in order to understand the strengths and weaknesses of the current models as viewed by the customers and the providers, and to gauge the reaction to possible alternatives. Some of the library interviewees were directors of library services and others had specific responsibilities for e-journals, providing a mix of strategic and budgetary perspectives as well as direct experience of negotiating deals, managing resources in accordance with licence conditions, and service issues. The publisher interviewees were selected from commercial, scholarly and open access organizations with a range of subject interests, and with representation from North America and Europe.
The view from the libraries
The fundamental concern for libraries is obviously to facilitate the widest access to the most appropriate resources for their user community, both for research and for teaching and learning, within budgetary and staffing/space constraints. E-journals (and other e-resources such as databases) are very popular with users, especially when they can be accessed in locations away from the library, unconstrained by its opening hours. As e-journals become more widely available, usage is increasing everywhere. In general, a move to e-only is supported in principle, but with significant provisos concerning cost (as VAT is imposed), and secure perpetual access to archives. The perceived importance of having their own archives varies considerably between libraries.
In terms of overall funding, it appears that many higher education institution (HEI) managements have been willing to make extra money available to facilitate greater online access to journals, but there is a widespread view that this phase is coming to an end. Recent favourable currency movements have also eased the pain of journal price rises.
HELEN HENDERSON, Information Power Ltd
Bundling and the big deal - it all depends where you're coming from
It is widely acknowledged that publisher bundling and big deals have greatly increased the number of journal titles available, but the way this works at the moment is seen to have some significant drawbacks. Views about the big deal vary considerably and, while each institution is different, there are some discernible patterns, broadly according to type of HEI. Large Russell Group institutions with major science and medicine departments find that many of the bundled titles are not necessarily the right ones for their communities and there is a long tail of titles which is not used; the share of the deals in the total budget is squeezing out other purchases and some subject areas are losing out; and they can face heavy cancellation penalties, making it difficult to adjust the collections as research interests change.
The more social science- and business studies-focused Russell Group institutions seem happier with the status quo. They seem to have fewer problems with overall budgets. However, they are still concerned about future price rises.
Post-92 universities used to have small print collections and therefore are more enthusiastic about the big deal, as it has delivered a major increase in the resources available to their users at a very reasonable cost. But overall cost and continued affordability are issues for them as well.
Libraries in institutions that have to focus on a narrower set of research priorities in response to the concentration of research funding need to be able to adjust journal collections in a more focused way as well. This is an issue that will loom larger for more institutions in future. Their needs could be summed up as 'unrestricted access to a restricted set of materials; restricted access to the rest'.
Institutions with a high proportion of distance or part-time students share many concerns in common with HEIs generally, but licensing terms are probably more important. Though they often have highly-rated research, these libraries felt most strongly about student provision.
FE institutions make more use of Eduserv Chest deals for databases than of JISC e-journal deals, though some of the databases will include journal titles. FEs have needs which are often very specific and can frequently be more in the area of business-to-business than journal publishing.
Reaction to new models
Libraries dislike restricting access to users and reject models that involve budgetary unpredictability and might involve them policing usage or even cutting it off if budgets ran out. They tend to oppose anything that would require researchers or students to pay individually for usage. However, it is partly a question of the level of charge and how peripheral the resource is. Pay-per-view models were not dismissed out of hand, and some libraries believed it would be beneficial if some of the unpredictability and risk could be minimized. More flexibility in journal deals is a key issue for almost everyone, as circumstances for HEIs are changing more rapidly than in the past. There was general enthusiasm for a model that would allow libraries to tailor the bundles of titles they receive more closely to the interests of their community and to make some substitutions during the life of a deal.
There appears to be managerial enthusiasm for institutional repositories, partly to manage Research Assessment Exercise (RAE) submissions and showcase the institution's research output. While libraries are supportive of open access in principle, there is a good deal of scepticism about whether academic researchers are willing or able (in view of their focus on the RAE) to change their publishing patterns.
The view from the publishers
For most of the publishers interviewed, continuing their business at current or better levels of profitability (in the case of society publishers, this implies the level of contribution generated by publishing) was essential. Few could envisage continuing to have the support of their investors, shareholders or societies if they failed to deliver profits or surpluses that were close to or better than they currently were. Several of the publishers interviewed believed strongly that the solution to current problems lies in more funding for library acquisitions. They pointed out that the increasing volume of research was generating more and more papers for publication, and that therefore growth in journals was inevitable. Several also expressed frustration at the current situation, and believed that they could be doing much more for researchers if institutional budgets were to be expanded.
Publishers also stressed the level of investment that has gone into e-journals, and one explicitly identified this as the reason for a one-off price rise that would not need to be repeated.
Publishers feel that their costs are hard to reduce. Each paper needs refereeing, and although referees are not in general paid, the administrative costs are directly related to the number of papers received. In some cases, these are handled directly by the publisher; in other cases, they pay expenses to editors to support the cost of additional administrative staff at the editor's institution.
A key point of difference between publishers and libraries was in attribution of the demand to continue with printed journals. Many libraries viewed the publishers as wedded to print-based pricing models; many of the publishers interviewed claimed that they would like to abandon print in the near future but that libraries were resistant to such a change. VAT has to be charged on electronic publications, and this has an impact on library decisions.
Publishers would like predictability as much as libraries. They realize that pay-per-view models are difficult for everyone. All publishers were concerned about models that would encourage libraries to constrain usage: this is another shared viewpoint between libraries and publishers. In the case of the publishers, as well as being concerned about the impact on research itself, they were also concerned that the constraints of such models would lead to journals not being used effectively and thus to a perception that they represented poor value. Most publishers wanted to see some sort of relationship between usage and price.
Many of the publishers interviewed claimed to have neutral views on the question of open access: if an author-pays model could be made to work, they would be happy with it, provided it generated revenues and profits equivalent to their current model.
Publishers are also generally accepting of consortial models, although none felt that they did particularly well financially out of them. Most of the publishers interviewed felt that existing deals such as NESLi2 suffered from the fact that they are opt-in models and take-up was not always as high as hoped. This made it difficult for them to offer more substantial discounts. Many of the publishers interviewed therefore suggested that a UK-wide deal covering all institutions was a good way forward. It would secure access for all researchers, and secure a revenue stream that would allow publishers to develop their services. A small number of publishers, though, had definite objections to a national licence model. They felt that it would be very hard to determine centrally how resources were allocated and which titles might be included.
Building new models
A 'blue-sky' approach was deliberately taken for development of the initial group of models, with no consideration for the practicalities of implementation. The models that were developed for the project were derived from this initial group and are described below.
For each of these models a series of spreadsheets was prepared which allowed the models to be tested. A table allows variables to be entered, such as usage, base price, per-article prices, institution size and accelerators. In this way it was possible to see the effects of the different variables on the initial price and ongoing costs. These models are intended for internal JISC use in evaluating publisher offers and proposing alternatives.
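The core of such a model can be sketched in a few lines; the sketch below is purely illustrative (the variable names and figures are hypothetical and are not taken from the actual JISC/Rightscom spreadsheets):

```python
# Hypothetical pricing-model harness; all parameters and values are illustrative.

def annual_cost(base_price, per_article_price, expected_downloads,
                size_factor=1.0, accelerator=1.0):
    """Ongoing annual cost of a deal as a function of the model's variables."""
    usage_charge = per_article_price * expected_downloads
    return (base_price + usage_charge) * size_factor * accelerator

# Compare a flat deal against a usage-linked deal for the same usage profile.
for label, base, per_article in [("flat deal", 12000.0, 0.0),
                                 ("usage-linked", 4000.0, 2.5)]:
    print(label, annual_cost(base, per_article, expected_downloads=3000))
```

Varying the inputs in this way makes the sensitivity of each model to usage and institution size immediately visible, which is what the spreadsheets were built to expose.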
National licence
The national licence is similar to the original Pilot Site Licence Initiative model and assumes a single national payment to publishers for limited access to all their content. It allows a basic need for universal access to be met, but allows institutions the discretion to add such additional services as they feel are worth paying for, such as archives and print.
PPV converting to subscription
In this model, the institution may have subscriptions to some titles from a publisher, but uses pay-per-view (PPV) to access other titles on an ad hoc basis. Usage is based on a per-download cost with a threshold at which sufficient usage has been made to convert to a subscription. The publisher would be able to set this at a premium above the standard subscription.
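The conversion logic can be made concrete with a short sketch; the prices, the per-download charge and the 20 % premium below are invented for illustration, not a published tariff:

```python
def title_charge(downloads, ppv_price, subscription_price, premium=1.2):
    """Charge for one non-subscribed title: pay per download until cumulative
    PPV spend reaches the premium-weighted subscription price, at which point
    the title effectively converts to a subscription."""
    conversion_cap = subscription_price * premium   # publisher-set premium
    return min(downloads * ppv_price, conversion_cap)

# 40 downloads at 5.00 each against a 150.00 subscription with a 20 % premium:
print(title_charge(40, 5.0, 150.0))   # 200.00 uncapped, so charged 180.00
```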
Pay-per-view pre-purchase
For smaller institutions, or those with very varied interests (see Cranfield's ILL experiment 1) which as a result will never reach a PPV threshold for some journals, it might be possible to buy blocks of discounted PDF downloads. This model is already offered by some publishers.
Core + peripheral
The publisher offers a set of 'collections' which include all their titles in a specific discipline. They then provide access to non-subscribed material (the rest of their titles) on a discounted pay-per-view basis. Another option is for a library-selected set of 'core titles' to replace the publisher selection.
Open access -author pays
This is an open access model based on payments by the author on publication. Optionally, the institution may also pay a subscription: in this case, payment by any author at that institution will be discounted.
Open access -hybrid model
The author can choose whether to pay for publication and make the article immediately open access, or not pay and have the article available only to subscribers.
These models were presented at a series of workshops at the Annual Conference of the UKSG in April 2005, which produced valuable feedback, particularly about which models would be preferred for trialling, and which would be easiest to trial - not necessarily the same ones!
Feedback from the workshops
Some of the key points made by workshop delegates are summarized here, under the relevant models.
National licence
Some delegates felt that a national licence could be politically problematic, in terms of top-slicing funding to pay for a package that not all academics would view as valuable. A national licence deal could also reduce choice. Though this was not the most popular option, it was seen as the second easiest model to trial.
PPV converting to subscription
This was much preferred to the unbridled pre-paid PPV model, and was the second most popular model overall. This has to be seen in the context of other models, though - it is an adjunct, not a total solution. There are also issues about the amount of premium applied to the subscription price by the publisher.
Pay-per-view pre-purchase
Although COUNTER is allowing usage metrics to be collected from different sources and consolidated, delegates considered that there is still little information on what individual researchers are buying, and these PPV costs are lost in the system. Some delegates believed that the Elsevier TULIP experiment some years ago showed that PPV might not work in a large general environment. Corporate and niche libraries, however, could take advantage of it, as in many cases they are transferring costs to the end-user.
PPV was generally seen by delegates as risky, complicated and expensive. Both of the PPV models were seen as relatively difficult to trial.
Core + peripheral
This model, along with some other ones, was considered by some delegates to be potentially too complex for agents' systems to handle. The administration costs for libraries could also be high for this type of model.
The issue of what constitutes the core is a crucial one, and the general view expressed was that it would have to be selected by the library, not the publisher.
On the positive side, it was felt that the core + peripheral model would allow the library to manage its relationships with departments better. It was the preferred option, and also seen as the easiest to trial, perhaps because it is a modification of the current bundled deals.
Trials for the first four models do not change the fundamental way in which journal content is paid for (i.e. by the subscribing organization or individual) but reflect a different way of assessing the value and basis of the charges. However, the two open access models present a different problem. The models are radically different, and publishers may not be prepared to experiment with existing subscription-based titles.
There are risks involved in introducing new models:
■ New models, especially forms of pay-per-view, may cause changes in behaviour by users, libraries and publishers that cannot be entirely predicted.
■ Certain key titles will always be able to command premium prices: these may be hard to integrate into wider-ranging deals.
■ For publishers, some new models (not just open access) represent changes in cashflow that will in general affect their businesses unfavourably and may eventually lead to the model becoming unworkable.
What new models can do -and what they can't
It should be obvious that the current situation presents a fundamental challenge: libraries clearly feel a strong need to reduce or at least contain costs; publishers feel an equally strong need to maintain current revenues. There is no business model that can solve that issue. What modelling can do is present the opportunity to make more informed decisions that potentially offer libraries ways to get better value for money out of their journal acquisition budgets. At the same time, the models should help publishers in understanding where their resources might best be used to help sustain their operations through delivering services structured in a way that meets the needs of their customers.
We believe that all the models stand a chance of success. Implementation, however, will be critical. | 2018-12-17T23:02:20.859Z | 2005-07-12T00:00:00.000 | {
"year": 2005,
"sha1": "bbb6eb42e27a6b6dc5dd1cfb02bcafea9fc709b7",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-up-j-s-files/journals/1/articles/854/submission/proof/854-1-854-1-10-20150210.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bbb6eb42e27a6b6dc5dd1cfb02bcafea9fc709b7",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17815025 | pes2o/s2orc | v3-fos-license | Chemical fate and settling of mineral dust in surface seawater after atmospheric deposition observed from dust seeding experiments in large mesocosms
We report here the elemental composition of sinking particles in sediment traps and in the water column following four artificial dust seeding experiments (each representing a flux of 10 g m−2). Dry and wet dust deposition events were simulated during two large-mesocosm field campaigns that took place in the coastal waters of Corsica (NW Mediterranean Sea), representative of oligotrophic conditions. The dust additions were carried out with fresh
Introduction
Dust transported in the atmosphere from the desert areas is known to be a major contributor to oceanic sedimentation in certain regions, notably in the Mediterranean (Loÿe-Pilot et al., 1986; Bergametti et al., 1989). In this region, the dust inputs are usually related to strong deposition pulses of mineral dust from the Sahara (Guerzoni et al., 1999). Thus, the fluctuations of past atmospheric dust fluxes can be used to reconstruct the atmospheric response to climate oscillations throughout the Mediterranean region (e.g., Moreno et al., 2002; Frigola et al., 2007). Moreover, atmospheric dust deposition constitutes the major source of nutrients (N, P, Si, Fe and trace metals) in the Mediterranean surface water (Krom et al., 2004; Bonnet and Guieu, 2006; Pulido-Villena et al., 2010). Dust deposition can also be an efficient mechanism to remove dissolved nutrients from ocean surface waters, notably by adsorption onto sinking particles. Thus, dust deposition plays an important role in biogeochemical elemental cycling by acting as both a source and a sink for dissolved nutrients in the Mediterranean surface seawater. Finally, dust can also affect carbon export in the marine environment via a ballast effect on POC export (Ternon et al., 2010), by increasing the sinking velocity of organic particles (Ploug et al., 2008). In consequence, a quantification of dust deposition is essential for assessing the past and present role of dust in the Mediterranean Sea.
Atmospheric dust inputs to the Mediterranean Sea are indirectly assessed from accumulation rates in sediments and from sediment traps in the water column. Sediment traps in the Mediterranean are also used to quantify and characterize the atmospheric flux of elements from the surface to the deep sea (Goutx et al., 2000; Heimbürger et al., 2014). These marine-based methods are used to validate modeled atmospheric dust fluxes, assuming a conservative dust transfer through the water column. However, the atmospheric and oceanic fluxes estimated from simultaneous measurements of dust fluxes by marine sediment traps and by atmospheric deposition methods are not linearly related (Bory et al., 2002; Neuer et al., 2004). The export of dust to the bottom of the sediment traps is linked to the ballast effect of organic matter produced by biological activity, and hence an efficient downward export of the dust particles to the sediment traps demands biological activity (Bory et al., 2002; Fischer et al., 2009; Ternon et al., 2010). High dust deposition triggers a large increase in particle concentrations, enabling aggregation processes and hence inducing a differential settling rate of dust (Lee et al., 2009). Moreover, dust particles can be horizontally advected or redistributed in the water column before reaching the sediment trap. A number of physical and biological mechanisms control oceanic dust fluxes that, so far, are difficult to discriminate and parameterize.
Practically, the estimation of dust export is made by the determination of lithogenic material in sediment traps or records. Generally, the lithogenic component is estimated from the Al content in sediments. This method assumes that all Al recovered in sediment traps is associated with lithogenic material and that lithogenic material is mainly from dust. By using the average Al content in dust, the dust inputs corresponding to the lithogenic fraction found in the trap can be calculated (e.g., Bory et al., 2002). The interelemental ratios with Al, like the Si / Al, Ti / Al or Zr / Al ratios, have also been extensively used in the Mediterranean region as proxies for dust input (e.g., Moreno et al., 2002; Frigola et al., 2007). On the basis that the Ba / Al ratio in dust is known and likely constant, Ba / Al is considered a proxy for productivity and used to estimate POC export (Paytan and Kastner, 1996; Mahiques et al., 2009). However, this proxy implies that the major source of elemental Ba to sediments is marine barite, and that there is no significant contribution of Ba from other sources, or that components of Ba in excess other than barite are related to C export in a predictable way. In a similar manner, the ratio Ba / Ti is also used as a proxy for productivity (Averyt and Paytan, 2004). Such approaches assume that the Al, Ba, Ti, Si or Zr content of dust remains constant during the settling of dust particles. However, the composition of sinking particles could deviate from the initial ratio during sinking, since more labile elements are released more quickly than refractory ones. For example, Ba in dust is known to be more soluble than Al (Desboeufs et al., 2001).
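As a minimal sketch of this bookkeeping (the flux value is invented; the ~4 % Al mass fraction matches the content of the dust analog discussed later in this paper):

```python
def dust_flux_from_al(al_flux_g_m2, al_mass_fraction=0.0412):
    """Infer a dust flux from a measured particulate-Al flux, assuming all
    trapped Al is lithogenic and the Al content of dust is constant."""
    return al_flux_g_m2 / al_mass_fraction

print(dust_flux_from_al(0.41))   # ~10 g m-2 of dust for 0.41 g m-2 of Al
```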
The project DUNE (a DUst experiment in a low Nutrient low chlorophyll Ecosystem) aimed at better understanding the effect of dust deposition on the surface water biogeochemistry of the Mediterranean Sea (Guieu et al., 2014a). The approach applied in this project was to perform dust addition experiments in large trace-metal-clean mesocosms. The original design of these mesocosms represented a unique opportunity to study the fate of dust after deposition, from the surface down to sediment traps. In particular, the use of mesocosms limits the problem of hydrodynamical artifacts, i.e., lateral advection or losses of particles by currents, as observed for in situ sediment traps. In consequence, the DUNE experiments allowed assessing, in controlled conditions, the impact of dissolution, adsorption and chemical or biological processes associated with the settling of particles in the surface water column in the case of high dust deposition events in an oligotrophic environment. The set of DUNE experiments simulated either wet (DUNE-P), dry (DUNE-Q) or a succession of two wet deposition fluxes (DUNE-R) of 10 g m−2 of Saharan dust. Here, we present the total mass and the elemental composition (POC, Al, Ba, Ca, Co, Cu, Fe, K, Li, Mg, Mn, Mo, N, Nd, P, S, Sr and Ti) of material collected in the traps and in the water column during the four dust addition experiments. The suite of elements was chosen to include nutrients (N, P, Si, Fe and trace metals: Mn, Cu, Co, Mo), elements used as proxies of marine productivity (POC, Ca, Ba), and elements used as proxies of dust input (Al, Ca, Ti, Nd). We examine the chemical composition of the added dust during its sinking in order (1) to study the relevance of various proxies of terrigeneous or productivity fluxes and (2) to investigate the link between dust and POC fluxes as a function of the mode of deposition. Dust and POC fluxes during DUNE-R presented here were also discussed in a companion paper (Bressac et al., 2014) in which a deconvolution of the different processes involved in POC export was proposed. Coupling metabolic rates in the water column and export fluxes, the POC flux directly linked to new production by autotrophs stimulated by the dust deposition was found to represent 50 % of the flux, while the other 50 % was attributed to the "lithogenic carbon pump", a process due to the aggregation between organic material and dust. Their conclusions are discussed in this paper by comparing the dust fluxes between the four DUNE experiments.
Dust seeding and sediment traps sampling
The two series of mesocosm seeding experiments were undertaken at the beginning of the summers of 2008 and 2010 (Table 1). For these experiments, six (DUNE-P and Q) or seven (DUNE-R) mesocosms were deployed in the Bay of Elbo (Scandola Marine preservation area; 8.554° E, 42.374° N) during typical oligotrophic conditions (Guieu et al., 2014a). Three or four mesocosms (D1, D2, D3 and Dopt, hereafter referred to as "Dust-Meso") were seeded with 41.5 g of dust (corresponding to a deposition flux of 10 g m−2) using a trace-metal-clean spray. The time of the dust addition was the start of the experiment (t0). For the experiments DUNE-P and DUNE-R, the seeding simulated a wet deposition event by spraying diluted cloud-processed dust (see Sect. 2.2) in 4 L of ultrapure water. In the case of experiment R, two successive seedings in the same mesocosms were carried out, at time t0 and then at 7 days, i.e., 164 h after the first seeding (first and second seedings, hereafter referred to as DUNE-R1 and DUNE-R2, respectively). For the experiment DUNE-Q, the seeding mimicked a dry deposition event by spraying fresh dust dispersed in local seawater. In each experiment, three other mesocosms (C1, C2 and C3, hereafter referred to as "Control-Meso") were kept unseeded for reference. Mesocosms were covered in order to avoid possible additional inputs from natural dust events. The sediment traps screwed to the base of the mesocosms at 15 m depth were recovered and replaced by divers every 48 h for DUNE-P and Q and every 24 h for DUNE-R.
Dust characteristics
The fine fraction (< 20 µm in diameter) of a dry-sieved alluvial soil sample collected in a dust source area in southern Tunisia (33.452° N, 9.335° E) was used to seed the mesocosms. In order to obtain a sufficient quantity of the same material, we used the fine fraction of soil as an analog of Saharan aerosol particles for the seeding (Desboeufs et al., 1999). Two campaigns of soil sampling were made in March 2007 and March 2009, corresponding to the sieved soils Dust07 and Dust09, respectively. The comparison of the physico-chemical properties of both soil samples (chemical and mineralogical composition and size distribution) indicates good consistency (e.g., for chemical composition in Table 2), both samples being characterized by a large proportion of quartz (40 %) and calcite (30 %), and different clay minerals (25 %) such as illite, kaolinite or palygorskite.
An ageing of the fine fraction of the soils was carried out by mimicking cloud processing, with the same procedure for Dust07 and Dust09. The fine fraction of soil which underwent the cloud-processing protocol is noted EC-Dust, for evapocondensed dust, and the fresh soil is noted NEC-Dust. Table 1 presents the type of soil used for the three experiments. The effect of the simulated cloud processing on the formation of sulfate and nitrate at the surface of the dust was checked by electron microscope observations. These showed an enrichment of nitrogen and sulfur via the neoformation of evaporite minerals like gypsum (Fig. 1), consistent with observations on the ageing of dust during atmospheric transport (e.g., Buseck and Posfai, 1999). The enrichment in sulfur and nitrogen was also observed in the elemental composition of EC-Dust07 and EC-Dust09 by X-ray fluorescence spectrometry analysis (Table 2). This enrichment is associated with a decrease in the carbon content of the EC-Dust due to the reactions between calcite (CaCO3) and inorganic acids to form the evaporite minerals, such as gypsum (CaSO4) or calcium nitrate (Ca(NO3)2), which release CO2.
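For reference, the neutralization reactions implied here can be written in simplified form (hydration states, e.g., gypsum as CaSO4·2H2O, are omitted):

CaCO3 + H2SO4 → CaSO4 + H2O + CO2
CaCO3 + 2 HNO3 → Ca(NO3)2 + H2O + CO2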
Chemical characterization of sediment trap samples
After recovery, the sample bottles of the sediment traps were poisoned at 5 % with a solution of buffered formaldehyde to prevent microbial degradation and grazing by swimmers, and were stored at 4 °C in the dark. The samples collected in the sediment traps were treated following the standard protocol developed at the national service "Cellule Piège" of the French INSU-CNRS (http://www.obs-vlfr.fr/LOV/Pieges). Swimmers were removed by hand-picking under a binocular microscope. The sample was rinsed three times with 50 mL of ultrapure (MilliQ) water in order to remove salt and was then freeze-dried. Mass fluxes were measured by weighing the freeze-dried samples. The accuracy of the weighing was 1 % over the whole data series. Total concentrations of carbon and nitrogen were measured in duplicate with a Perkin Elmer 2400 series II elemental analyzer (CHN) on aliquots of the desiccated samples (3-4 mg). The acid digestion described by Ternon et al. (2010) was applied, and elemental concentrations were measured on the digested samples after dilution (1/100) by Ametek ICP-AES for Al, Ca, Co, Cu, Fe, K, Li, Mg, Mn, Mo, Nd, P, S, Sr and Ti. The recovery for all the elements was higher than 96 % in CRM samples, indicating a good reliability of the digestion method. The accuracy of the ICP-AES analyses was checked using SLRS-4 and SLRS-5 (river water standard materials from NRC) as CRMs, and the detection limits were determined (Heimbürger et al., 2013). Reagent blanks were included as controls for possible contamination during the analytical process.
Particulate concentration in the water column
In order to follow both the settling of the added mineral particles through the mesocosms and the change of their chemical composition, particulate concentrations of Al, Ca, Co, Cu, Fe, K, Li, Mg, Mn, Mo, Nd, P, S, Sr and Ti were measured in the water column during DUNE-R, and only of Al and Fe during DUNE-P and DUNE-Q. The vertical profiles of particulate Al and Fe for experiment P are already described in Wagener et al. (2010). The protocol used for the sampling and treatment of DUNE-P is described in Wagener et al. (2010). In brief, particulate samples were collected on cellulose acetate filters by filtering one litre of seawater. For DUNE-P and DUNE-Q, the samples were collected from 0, 5 and 10 m depths at 6, 24, 46 and 70 h after seeding. For DUNE-R, the samples were collected from six depths in the mesocosms (0, 2.5, 5, 7.5, 10 and 12.5 m) up to 164 h after seeding. After filtration, the filters were dried under a laminar flow bench and kept at room temperature until analysis. One half of the collected filters were HNO3 / HF acid-digested, then diluted in 10 mL of 0.1 M HNO3 after complete evaporation. The obtained solutions were analyzed at LOV (Villefranche-sur-Mer) for Al and Fe with a Jobin Yvon (JY 138 "Ultrace") ICP-AES for DUNE-P and DUNE-Q (see Wagener et al., 2010). For DUNE-R, the digestion solutions were analysed at LISA (Créteil) by Ametek ICP-AES with the protocol used for the chemical characterization of the sediments. The blank level was under the detection limits for the majority of elements, except for Al, Fe, P and Ti, with typical blank levels around 1 µg L−1 for Al and Fe and 500 ng L−1 for P and Ti. For Co, Cu, Li, Mo and Nd, the measured concentrations were mainly under the detection limits of this method (around 100 ng L−1). As the filters were not rinsed to remove salts, the data on Ca, K, Mg, S and Sr were highly affected by salts contained in the seawater and are not discussed here.
Total and elemental mass in sediment traps
The average total and elemental masses in the sediment traps for the four seeding experiments are presented in Table 3 for C, N, Al, Ca and S for the Control-Meso and Dust-Meso, and in the Supplement for Ba, Co, Cu, Fe, K, Li, Mg, Mn, Mo, Nd, P, Sr and Ti. The total and elemental masses were always higher in the Dust-Meso than in the Control-Meso, except for N in DUNE-Q.
For all experiments, the cumulative mass in the Dust-Meso sediment traps increased linearly and significantly with time over the first 90 h, and then remained constant until the end of the experiments (Fig. 2a). The maximum mass collected was reached between 24 h and 72 h for experiments P and R1, whereas for experiment R2 this maximum was obtained for the first trap samples, i.e., in the first 24 h (Table 3). The cumulative mass at the end of experiment Q was an order of magnitude lower than in the other experiments. In terms of chemical composition, C was the preponderant element in the sediment traps of the Control-Meso, whereas Ca, C and then Al were predominant in the Dust-Meso sediment traps. The dominance of dust in the sediment traps of the Dust-Meso was supported by the visible presence of dust in the samples after collection (Fig. 2b). The highest elemental concentrations of C, N, Al, Ca and S were observed for the experiments DUNE-P and DUNE-R1 and the lowest for DUNE-Q, in accordance with the variability of the cumulative mass (Table 3).
For most elements, the elemental mass concentrations were linearly correlated with the total mass, with a correlation coefficient higher than 0.98, except for N, which presented a much larger dispersion (Al and N are shown in Fig. 3). The linearity of this relationship implies that the sediment composition was quasi-constant from 24 h after seeding up to the end of the experiment (168 or 172 h after seeding); in other words, the chemical composition of the sinking particles collected in the sediment traps did not evolve after the first 24 h following dust seeding. Linear regression between total mass and elemental mass enables one to estimate the percentage of a given element in the collected sediments. For instance, Al was 4.82 ± 0.12 % of the total mass during DUNE-P (Fig. 3), a value significantly higher than the initial Al contribution to total mass in the seeded dust (4.12 ± 0.39 %, Table 1). Such higher elemental concentrations in the sediment traps compared to the initial concentrations in the dust proxy were also observed for all the other studied elements except Ca, S and N. For these three elements, the mass fractions were significantly lower in the sediment traps than in the added dust.
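The regression behind this estimate is a simple least-squares fit; the following sketch uses made-up trap masses (not the DUNE data) purely to show the computation:

```python
import numpy as np

# Hypothetical paired trap measurements (mg per sample), not the DUNE data.
total_mass = np.array([120.0, 260.0, 410.0, 560.0, 700.0])
al_mass = np.array([5.9, 12.4, 19.9, 27.1, 33.6])

slope, intercept = np.polyfit(total_mass, al_mass, 1)  # fit Al = slope*M + b
r = np.corrcoef(total_mass, al_mass)[0, 1]
print(f"Al ~ {100 * slope:.2f} % of total mass (r = {r:.3f})")
```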
In order to compare the composition of the added mineral dust with the particles collected in the sediment traps, we normalized the elemental (X) concentration ratio X / Al in the sediment traps to the X / Al ratio in dust (Table 4). We used Al since dissolved Al in the water column during DUNE-R was shown to be negligible, with fractional solubility ranging from 0.74 to 0.84 % (Wuttig et al., 2013). Doing this, we identified enrichment or depletion of element X independently of total mass variations. No significant change of the Ba, Fe, Ti, Nd, Mo and Li contents was observed between the added dust and the collected particles in the Dust-Meso sediment traps. In contrast, for C, Co, Cu and K, a systematic enrichment was found in all experiments. Conversely, an important depletion of Ca and S was observed. The behavior of N and P contrasted between experiments: during DUNE-Q, N and P were highly enriched in the sediment traps compared to the added dust; N was depleted in the sediment traps compared to the added dust during DUNE-P and DUNE-R; no significant enrichment or depletion in P was observed during DUNE-P and DUNE-R.
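This normalization is an enrichment-factor calculation, which can be sketched as follows (the mass fractions below are illustrative only):

```python
def enrichment_factor(x_trap, al_trap, x_dust, al_dust):
    """(X/Al) in trap material normalized to (X/Al) in the seeded dust:
    values near 1 indicate conservative behaviour, > 1 enrichment of X,
    < 1 depletion of X relative to the initial dust."""
    return (x_trap / al_trap) / (x_dust / al_dust)

# Illustrative mass fractions; a soluble element such as Ca would yield a
# value well below 1 after partial dissolution in seawater.
print(enrichment_factor(x_trap=0.08, al_trap=0.048, x_dust=0.15, al_dust=0.041))
```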
Regarding the depletion of Ca, N and S, these are the major constituents of evaporite minerals such as gypsum (CaSO4) or calcium nitrate (Ca(NO3)2), which were formed by cloud processing in the EC-Dust. These minerals are known to be water soluble (Sullivan et al., 2007). Moreover, dissolution experiments performed in the laboratory on EC-Dust07 showed that 100 % of the N associated with the neoformation of calcium nitrate was dissolved as nitrate in seawater (Ridame et al., 2013). Therefore, the depletion of Ca, S and N was due to the dissolution of sulfate- and nitrate-containing particles into seawater after seeding. The depletion was observed for all samples, whatever the time after seeding (not shown), showing that this dissolution took place during the first 24 h.
Time series of elemental particulate concentrations in the water column
The profiles of particulate aluminum (pAl) in the water column are presented for DUNE-P, -Q, -R1 and -R2 in Fig. 4. Analogous profiles for Ba, Fe, Mn, P and Ti are given in the Supplement. The particulate concentrations in the Control-Meso were always lower than the ones found in the Dust-Meso at the same depth. It is obvious from comparing the profiles in the Dust-Meso and Control-Meso that the lithogenic particles correspond mainly to added dust: particulate Al was thus used as a tracer of this dust. The highest pAl concentrations were observed in the first 5 m of the mesocosms during the first 24 h for all the experiments (Fig. 4). For DUNE-R, higher pAl concentrations were found below 10 m after 48 h. A large part of the pAl stock in DUNE-R2 remained at the surface until 72 h, whereas this stock was homogeneously distributed over the whole mesocosm during DUNE-R1. This is consistent with the difference in the masses collected in the sediment traps between DUNE-R1 and DUNE-R2; 164 h after the seeding, pAl concentrations were always higher in the Dust-Meso compared to the Control-Meso (not shown). No measurement of pAl was made at −2.5, −7.5 and −12.5 m for DUNE-P and DUNE-Q, limiting the conclusions on the location of the added dust for those experiments. The particulate concentrations of Ba, Fe, Mn, P and Ti followed the patterns of Al, probably indicating their lithogenic origin.
Mass budget in the sediment traps
A mass budget of dust, integrated from the surface down to the sediment traps, was calculated from the masses in the sediment traps and the pAl concentrations in the water column. In order to estimate the fraction of dust in the total mass in the sediment traps, the dust mass lost through the dissolution of evaporite minerals needs to be quantified. First, we used the Al content in the initially added dust to estimate a theoretical dust mass in the sediment traps (Table 5), which corresponds to the total dust mass in the absence of dissolution. Then, for DUNE-P and DUNE-R, we assessed the mass of seeded dust lost after dissolution of CaSO4 and Ca(NO3)2 from the depleted part of Ca, N and S in the sediment traps (Table 5). For DUNE-Q, we considered only a potential dissolution of CaCO3, the major Ca-containing material in NEC-Dust07. So, for DUNE-Q, we used the Ca content in the sediment trap material to estimate the dissolved part, considering that all the depletion of Ca is associated with the dissolution of calcium carbonate (Table 5). These estimations also show that in DUNE-Q only 70 % of the total mass collected in the traps was dust, whereas for the other experiments it accounted for more than 93 % (Table 5). The dissolution of dust constituted a mass loss of around 7 g (i.e., around 17 % of the initial dust mass) in each experiment (Table 6). That implies that only about 34 g of the 41.5 g actually seeded remained in particulate form in the mesocosms. A large part of the introduced dust was not recovered in the sediment traps even after accounting for the dissolution (Fig. 5). We further used the estimated mass percentage of Al in the sediment traps from Fig. 3 (top) in order to also consider the dissolution of dust particles during their settling. Doing this, 7 days after seeding, only 52, 11, 57 and 41 % by mass of the lithogenic particles initially added were recovered in the sediment traps in DUNE-P, -Q, -R1 and -R2, respectively (Fig. 5). For DUNE-P and DUNE-R, the temporal change of dust settling was very homogeneous up to 72 h. After 72 h, the settling of dust particles in DUNE-R2 was significantly lower in comparison to DUNE-P and DUNE-R1. The low recovery of dust mass in the sediment traps even 7 days after seeding suggests that more than 45 % of the dust particles (Fig. 5) had sinking velocities below 2.1 m d−1, whereas the recovery after one day indicates that less than 15 % of the dust presented sinking velocities higher than 14.7 m d−1 for DUNE-P and DUNE-R. This is consistent with the results from Bressac et al. (2012) showing that the highest settling velocities of Saharan dust particles could reach 24 to 87 m d−1 during DUNE-R. For DUNE-Q, 89 % of particles had sinking velocities below 2.1 m d−1, and 1 % of particles had sinking velocities higher than 14.7 m d−1 (Fig. 5). This means that the large majority of the deposited dust remained in the surface water layer even 7 days after the seeding. Using pAl concentrations in the water column, the remaining masses of dust still in suspension in the mesocosms were estimated to be 12.8, 1.1, 10.8 and 12.9 g, respectively, for DUNE-P after 5 days, DUNE-Q after 3 days, and DUNE-R1 and DUNE-R2 after 6 days (these times corresponding to the last sampling in the water column) (Table 6).
When correcting these numbers by the mass fraction that dissolved from the dust, the recoveries were 96, 25, 99 and 82 % of the initial dust mass (Table 6). This mass budget shows that about half of the dust was found in the sediment traps for DUNE-Q and DUNE-R2, and two-thirds for DUNE-P and DUNE-R1. A critical source of uncertainty in this calculation is the integration of pAl within the water column to estimate the mass of dust in suspension. As previously noted, no measurement of pAl was available below 10 m for DUNE-P and DUNE-Q, and potentially high concentrations of pAl could have been missed, underestimating the final estimated mass of dust. Thus, it is probable that this low depth resolution was insufficient in the case of the Q experiment, which presented the largest mass fraction in suspension at the end of the experiment. In consequence, this could explain, at least in part, the low rate of recovery for this experiment. Although the low depth resolution could increase the uncertainty on the estimated mass, it is important to note here that the lowest recovery was obtained for the DUNE-Q experiment, mimicking dry deposition (see discussion Sect. 4.2).
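The two steps of this budget, depth-integrating a pAl profile into a suspended dust mass and computing a dissolution-corrected recovery, can be sketched as follows. The 41.5 g seeding, the ~4 % Al content and the ~17 % dissolved fraction come from the text; the profile values, the implied mesocosm cross-section and the trap mass are assumptions for illustration:

```python
import numpy as np

SEEDED_G = 41.5      # dust added per mesocosm (g)
AL_IN_DUST = 0.0412  # Al mass fraction of the seeded dust
AREA_M2 = 4.15       # cross-section implied by 41.5 g at 10 g m-2 (assumption)

def suspended_dust_g(depths_m, pal_ug_per_l):
    """Trapezoidal depth integration of a particulate-Al profile, converted
    to a suspended dust mass via the Al content of the dust."""
    al_ug_per_m2 = np.trapz(pal_ug_per_l, depths_m) * 1000.0  # ug/L*m -> ug/m2
    return al_ug_per_m2 * AREA_M2 / AL_IN_DUST * 1e-6         # ug -> g of dust

def recovery(trap_dust_g, susp_dust_g, dissolved_fraction=0.17):
    """Share of the seeded dust accounted for once the mass lost to
    evaporite/calcite dissolution in the first 24 h is credited back."""
    return (trap_dust_g + susp_dust_g) / (SEEDED_G * (1.0 - dissolved_fraction))

depths = np.array([0.0, 2.5, 5.0, 7.5, 10.0, 12.5])  # DUNE-R sampling depths
pal = np.array([14.0, 12.0, 10.0, 9.0, 9.0, 8.0])    # illustrative ug Al L-1
susp = suspended_dust_g(depths, pal)                 # ~12.8 g in this example
print(round(susp, 1), round(recovery(20.2, susp), 2))
```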
Estimation of fluxes associated with dust deposition
Settling particles consist of four major components: biogenic opal (opal), biogenic carbonate (bCaCO3), lithogenic particles, and organic matter (POC). In the Dust-Meso, the lithogenic particles corresponded essentially to added dust. In consequence, the fraction of dust was calculated from the estimated mass percentage of Al in the sediment traps from Fig. 3, as for the mass budget. The fraction corresponding to biogenic opal was determined from the measurement of biogenic Si, obtained from sequential leaching following Mosseri et al. (2005). In contrast to open-ocean sediment trap studies, we have seen that a large part of the Ca measured in the traps was from added dust, and the total mass of Ca is the sum of Ca as Ca(NO3)2, CaCO3 and CaSO4 present in the dust, plus the bCaCO3. Sulfur concentrations in the traps indicate that the gypsum produced by dust treatment was not completely dissolved. The undissolved mass of CaSO4 was estimated from the mass of particulate S. The total carbonate mass was assessed from the total mass of Ca minus the mass of Ca related to gypsum. The biogenic carbonate was finally estimated from the total carbonate mass minus the estimated CaCO3 issued from the dust (estimated from the Ca to total mass scatter plot, as for Al in Fig. 3). The organic matter was estimated as 2.4 times the organic carbon (Klaas and Archer, 2002), which was derived from the total carbon mass minus the carbonate fraction of carbon. The masses in the Control-Meso were typically at least one order of magnitude lower than the masses obtained in the Dust-Meso. The exported material in the Control-Meso was dominated by the POC fraction (30-50 %) and by the lithogenic fraction (20-30 %), regardless of the experiment. Inversely, the lithogenic fraction was the main component of the mass in the Dust-Meso, representing between 66 and 96 % of the total mass, the lowest percentages being measured 6 days after seeding (not shown). POC represented up to 14 % of the total mass. The highest POC contribution was obtained for DUNE-P and the lowest for DUNE-Q. The total mass, POC, dust, opal and bCaCO3 fluxes have been estimated from the calculated mass fractions in the collected material (Table 7). All fluxes were significantly lower in DUNE-Q compared to the other experiments, in agreement with the low dust recovery found in the sediment traps of this seeding.
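A sketch of this partitioning follows (masses in mg per sample). The molar masses are standard, and the Al fraction and the 2.4 organic-matter conversion factor come from the text, while the sample values and the dust-derived CaCO3 share (a placeholder standing in for the Ca-to-total-mass regression) are assumptions:

```python
M_CA, M_S, M_CACO3, M_C = 40.08, 32.06, 100.09, 12.01   # g mol-1

def partition(al, opal, ca, s, c_total,
              dust_al_frac=0.0482, dust_caco3_frac=0.30):
    """Split one trap sample (mg) into dust, opal, biogenic CaCO3 and
    organic matter; dust_caco3_frac is a placeholder parameter."""
    dust = al / dust_al_frac                         # lithogenic mass via Al
    ca_gypsum = s * (M_CA / M_S)                     # Ca in undissolved CaSO4
    carbonate = (ca - ca_gypsum) * (M_CACO3 / M_CA)  # total CaCO3 mass
    b_caco3 = carbonate - dust * dust_caco3_frac     # biogenic carbonate
    poc = c_total - carbonate * (M_C / M_CACO3)      # organic carbon
    return {"dust": dust, "opal": opal, "bCaCO3": b_caco3, "OM": 2.4 * poc}

print(partition(al=24.1, opal=5.0, ca=80.0, s=4.0, c_total=35.0))
```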
Elemental composition of sediment records: a good proxy for atmospheric inputs?
Chemical elements (except N) and total mass are well correlated in the material collected in the sediment traps. Such linearity means that the processes likely involved in a modification of dust composition (such as dissolution, adsorption, precipitation and aggregation) occurred in the water column within the first 24 h after the seeding. This confirms that the study of these processes demands high-temporal-resolution monitoring of dissolved concentrations, as observed by Wuttig et al. (2013). This is in particular the case for the dissolution of evaporite minerals, releasing Ca, S and N. According to our data, the mass content of those elements is altered after dust deposition. Such dissolution also implies a decrease of dust mass during settling (Table 5), modifying the mass percentage of elements in the material collected in the sediment traps (e.g., Al in Fig. 3). In such very controlled conditions, we observed that the estimation of dust mass from Al is on average 30 % larger than the actual added mass of dust. As mentioned before, Al is often used for estimating the dust mass in sediment traps (e.g., Bory et al., 2002). Our results show that the dissolution of evaporite minerals formed during atmospheric dust transport due to cloud processing could generate an overestimation of the total dust mass estimated in this way. The typical comparison between atmospheric and marine dust fluxes estimated from Al contents shows that estimated atmospheric deposition fluxes are 2-3 times lower than oceanic sediment trap fluxes (Bory et al., 2002; Ternon et al., 2010). This suggests that the uncertainties on the estimation of lithogenic fluxes from the Al content in sediment traps could explain part of this discrepancy, at least in areas where dust ageing is observed. Practically, the interelemental ratios found in sediments are usually used as proxies of terrigenous input (Ti / Al, Fe / Ca and Ti / Ca) (Mahiques et al., 2009; Govin et al., 2012) and of productivity (Ba / Al and Ba / Ti) (Paytan and Kastner, 1996; Mahiques et al., 2009). Recent studies have shown that elemental ratios including Ca, such as Fe / Ca or Ti / Ca, are too sensitive to dilution effects by biological components to allow reliable imprints of terrigeneous inputs (Govin et al., 2012). Our data support this conclusion by showing that the high dissolution of Ca in dust triggers an increase in the Fe / Ca and Ti / Ca ratios in particles, making it difficult to use these ratios to estimate accurate lithogenic fluxes. On the contrary, our results show the stability of the Al, Fe and Ti content of dust during its sinking in the water column, confirming the reliable use of their interelemental ratios as dust proxies. Ba in excess, i.e., the fraction of total Ba not associated with lithogenic material (marine barite), is used to estimate the C export flux (e.g., Paytan and Kastner, 1996). During DUNE, the ratio Ba / Al was stable in the material collected in the sediment traps, meaning that the Ba in the sediment traps corresponds mainly to the Ba from dust. Indeed, the biological origin of this element was masked by the high dust mass found in the sediment traps, preventing the calculation of Ba in excess. In consequence, the use of the Ba / Al or Ba / Ti ratio as a productivity proxy is likely not recommended in the case of large dust events such as the one simulated during DUNE.
On the contrary, the systematic enrichment observed for Co, whatever the experiment (Table 4), confirms a supplementary biological source of this element. For example, Co is known to substitute for Zn in the enzyme carbonic anhydrase in some phytoplankton species (e.g., Sunda and Huntsman, 1995). An increase of chlorophyll concentrations was observed after dust seeding for DUNE-P and DUNE-R, but not during DUNE-Q (although a strong increase of N2 fixation by diazotrophs was observed; Ridame et al., 2014). Co / Al was consistent with these results, since it was higher when the autotroph response was stronger, and vice versa. We propose that the ratio Co / Al could be a good productivity proxy even in the case of large dust inputs. However, it should be tested in "real" (deeper) sediment trap material records.
Link between dust deposition state and POC fluxes
The total and lithogenic fluxes obtained for DUNE-P and DUNE-R are given in Table 6. Bressac et al. (2014) showed a high degree of covariance between POC and lithogenic fluxes in the Dust-Meso for DUNE-R. They explained this link through the ballast effect of the added dust on the organic matter present in the mesocosms. This conclusion was supported by the optical measurements during DUNE-R, showing that the high sinking velocities of the Saharan dust pool (24 to 87 m d−1) correspond to the formation of organic-mineral aggregates within the upper few meters of the water column after seeding (Bressac et al., 2012). We observed that the positive covariance between lithogenic and POC fluxes also existed for DUNE-P and DUNE-Q (not shown), indicating a link between dust and POC export in all the experiments. However, our results show that the pattern of particle sinking was not equivalent for all the experiments (see Sect. 3.3 and Fig. 5). A much slower settling was observed for DUNE-Q, simulating dry deposition of dust. The fastest settling was observed for DUNE-P and DUNE-R1, simulating wet deposition of dust.
In order to explain the difference in dust settling in relation to POC, we estimated the mass ratios of lithogenic matter (i.e., dust in the Dust-Meso) to organic carbon in the sediment traps (Litho / POC in Table 7), i.e., dust fluxes normalized to the POC fluxes in the collected material. For all the experiments simulating wet deposition, the mean ratios obtained were very consistent, at around 30. This value is in the range of values found in the case of "real" wet dust deposition events onto surface seawater with high organic matter concentrations (Ternon et al., 2010). The lowest ratio (13) was obtained for DUNE-Q, corresponding to a dry deposition.
This value is consistent with the ratio observed by Ternon et al. (2010) in Mediterranean summer conditions, with a strong stratification of the water column and low Chl a concentrations. To explain these different ratios, it is important to consider the difference in seeding protocols simulating wet or dry deposition (Table 1), but also the physical and biogeochemical conditions in the four experiments (Guieu et al., 2014a). During experiment P, stratification of the water column inside the mesocosms was not marked, whereas stratification was observed during the whole DUNE-Q experiment and toward the ends of both R-seeding periods. However, the dust settling pattern was very similar for DUNE-P and DUNE-R1, meaning that the stratification effect is probably low. Moreover, initial biogeochemical conditions were typically oligotrophic for all the experiments, with very low Chl a concentrations in the range 0.07-0.11 µg L−1. The chlorophyll concentration was at least doubled for DUNE-P and DUNE-R, proving a fertilizing effect of dust on phytoplankton. Inversely, no Chl a increase was observed after seeding in the Q experiment. Our results show that the highest POC and lithogenic fluxes occurred when an increase of chlorophyll concentrations was observed. On the contrary, the observations for DUNE-Q, simulating dry dust deposition, showed a slower dust settling and a low POC export related to an ineffective fertilizing effect on the autotroph community. In the case of the two successive wet deposition simulations (DUNE-R1 and R2), dust export was less efficient in the second seeding even if the chlorophyll increase was equivalent. However, the initial Chl a concentrations were higher for the second seeding, meaning that fresh organic matter produced after the first seeding had not totally disappeared. This observed primary-productivity dependence of lithogenic fluxes in our controlled oligotrophic conditions shows that a high dust export after a dust deposition event needs both a fertilizing effect to produce new organic matter and mineral ballast. This conclusion supports the work of Ternon et al. (2010), who suggested that the high lithogenic fluxes associated with dust deposition likely occur only when there is a simultaneous presence of organic matter and lithogenic material. This organic matter could be freshly produced due to a fertilizing effect of the deposited dust, or it could be older organic matter. The high covariance observed between lithogenic and POC fluxes is similar for all the experiments simulating wet deposition, suggesting that the measured lithogenic / POC flux ratio of around 30 (Table 7) could be used as a reference to estimate the POC export triggered by a wet dust deposition event.
Recently, Bressac and Guieu (2013) defined the "lithogenic carbon pump" to describe the relation between lithogenic ballasting and POC export, independently of the biological contribution to POC export stimulated by the dust deposition. They suggested that the age and quantity of organic matter could also be essential for estimating the efficiency of the "lithogenic carbon pump". From this concept, Bressac et al. (2014) calculated that this lithogenic carbon pump represented 50 ± 8 and 42 ± 3 % of the total POC fluxes during DUNE-R1 and DUNE-R2, respectively. They propose that the relative decrease in lithogenic ballasting after the second seeding was due to the scavenging of a large quantity of organic matter from the water column following the first seeding. Comparing these conclusions with our observations of the POC fluxes during DUNE-P and DUNE-Q suggests that the "lithogenic carbon pump" was inefficient for DUNE-Q, since the POC fluxes in the Dust-Meso were similar to those in the Control-Meso. This implies that the initial organic matter probably presented an insufficient concentration or an inappropriate quality (e.g., stickiness) to induce lithogenic ballasting in this experiment. However, it is difficult to estimate the effect of the strong stratification during this experiment on the low POC fluxes. On the contrary, the new production induced during DUNE-P and DUNE-R provided sufficient fresh organic matter to activate the lithogenic carbon pump, while the water column was not strongly stratified. Our results suggest that the "lithogenic carbon pump" mechanism after a dust deposition is more efficient when new production (and thus production of fresh DOM) is induced by the deposition and the stratification is not too marked.
Conclusions
Elemental particulate composition in the water column and sediment traps constitutes a useful dataset for assessing the fate of mineral dust particles deposited at the ocean surface. From controlled artificial seeding experiments in large mesocosms, we have shown that dust predominated in the particulate phase exported at the base of the mesocosms (15 m depth) and that dust particles were still in suspension in the enclosed seawater body (52 m3) 164 h after the seeding. Lithogenic and POC fluxes were consistent with fluxes measured directly in sediment traps at 200 m depth in the water column following a strong desert dust deposition event (NW Mediterranean Sea; Ternon et al., 2010). This confirms that the data obtained from our experimental mesocosm approach captured the mechanisms of export following a natural dust deposition event.
About 15 % of the initial dust mass introduced was dissolved in the water column in the first 24 h after seeding. This loss was due to the rapid dissolution of calcite for DUNE-Q, and of the new minerals, such as gypsum or calcium nitrate, formed by artificial cloud processing of the seeded dust in DUNE-P and DUNE-R. In spite of these dissolutions, the interelemental ratio Ti / Al of the seeded dust remained constant during the dust settling, confirming that this ratio is a good proxy for marine lithogenic fluxes. We showed that the relatively high Ba content in dust prevents the use of Ba / Al as a productivity proxy in cases of high dust deposition such as those mimicked during DUNE. Instead, we identified that the ratio Co / Al was linked to marine productivity and could be a good candidate as a productivity proxy.
The mass budget in the sediment traps and in the mesocosms revealed differences in dust settling between the different seeding experiments. The highest mass recoveries were measured in DUNE-P and DUNE-R (Fig. 5) and were associated with the highest POC fluxes. This corresponded to the seeding experiments carried out with EC-Dust, i.e., "aged" dust, simulating wet deposition, when a significant Chl a increase after seeding was observed and the stratification was not marked. Inversely, experiment Q, simulating a dry deposition event of NEC-Dust, i.e., "fresh" dust, presented the lowest recovery of dust mass in the sediment traps, with around 89 % of the dust remaining in the water column after 6 days (Fig. 5). This low dust recovery in the sediment trap was concomitant with (1) a low Chl a increase during this experiment, (2) a low POC export and (3) a strong stratification. We hypothesize that because dry deposition of fresh dust failed to strongly fertilize the autotroph communities in the mesocosms, it was, in our oligotrophic and stratified conditions, unable to induce the dissolved organic matter necessary to trigger high POC fluxes through a ballast effect. On the contrary, wet deposition of aged dust was very efficient at triggering high POC fluxes following the new production induced by the new nutrients from the dust. The lithogenic fluxes in this case were typically 30-fold higher than the POC fluxes. The different dust deposition modes simulated during DUNE highlighted a series of processes that modulate the export of lithogenic material and POC after a dust deposition. These processes include the fertilizing effect of dust on the autotroph community, the ballast effect between lithogenic particles and dissolved organic matter, and the intensity of the stratification. Further studies should focus on the link between the intensity of the POC export and the type of deposition (dry or wet), since our data do not enable us to conclude whether this is a critical parameter.
The Supplement related to this article is available online at doi:10.5194/bg-11-5581-2014-supplement. | 2015-03-27T18:11:09.000Z | 2014-10-13T00:00:00.000 | {
"year": 2014,
"sha1": "302581e7384e5281dbc6eb8bcaff6e6f39cb9ce8",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/11/5581/2014/bg-11-5581-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7af746afdef69978f79cdd3397d4e62ba1575848",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
9028282 | pes2o/s2orc | v3-fos-license | 16S–23S rRNA Gene Intergenic Spacer Region Variability Helps Resolve Closely Related Sphingomonads
Sphingomonads comprise a physiologically versatile group, many of which appear to be adapted to oligotrophic environments, but several also have features in their genomes indicative of host associations. In this study, the extent of variability of the 16S–23S rDNA intergenic spacer (ITS) sequences of 14 ATCC reference sphingomonad strains and 23 isolates recovered from drinking water was investigated through PCR amplification and sequencing. Sequence analysis of the 16S–23S rRNA gene ITS region revealed that the ITS sizes for all studied isolates varied between 415 and 849 bp, while their G+C content was 42.2–57.9 mol%. Five distinct ITS types were identified: ITSnone (without tRNA genes), ITSAla(TGC), ITSAla(TGC)+Ile(GAT), ITSIle(GAT)+Ala(TGC), and ITSIle(GAT)+Pseudo. All of the identified tRNAAla(TGC) molecules consisted of 73 bases, and all of the tRNAIle(GAT) molecules consisted of 74 bases. We also detected striking variability in the size of the ITS region among the various examined isolates, with the highest variability detected within the ITS-2. The importance of this study lies in its being the first comparison of 16S–23S rDNA ITS sequence similarities and tRNA genes from sphingomonads. Collectively, the data obtained in this study revealed the heterogeneity and extent of variability within the ITS region, compared to the 16S rRNA gene, among closely related isolates. Sequence and length polymorphisms within the ITS region, along with the ITS types (tRNA-containing or lacking, and the type of tRNA) and ITS-2 size and sequence similarities, allowed us to overcome the limitation we previously encountered in resolving closely related isolates based on the 16S rRNA gene sequence.
INTRODUCTION
Sphingomonads are Gram-negative, chemoheterotrophic, non-spore-forming, strictly aerobic, straight rods, characterized by an outer membrane containing glycosphingolipids as cell envelope components but lacking lipopolysaccharide (Yabuuchi et al., 1990; White et al., 1996). Colonies are yellow-pigmented or whitish brown (Takeuchi et al., 1993). Sphingomonads are found in diverse natural environments, playing an important role in nutrient cycling, especially in oligotrophic environments (Aylward et al., 2013). Some have been detected in plant- and animal-associated environments, have been connected to a rise in health-care-associated infections (Aylward et al., 2013; Narciso-da-Rocha et al., 2014), and were recently linked to peritoneal dialysis-associated peritonitis (Mohan and Railey, 2015). Sphingomonads are able to survive the chlorination of tap water and have the ability to co-aggregate and form biofilms. Large numbers of phenotypically and phylogenetically similar strains belonging to this group have been isolated. As a result, Takeuchi et al. (2001) examined the complete 16S rRNA gene sequences, fatty acid profiles and polyamine patterns of several strains of the genus Sphingomonas and related genera. Based on the phylogenetic analyses of the 16S rRNA gene sequences and on some chemotaxonomic and phenotypic differences, the genus Sphingomonas was divided into four clusters and three new genera were proposed. Today sphingomonads encompass eight genera: Novosphingobium, Sphingobium, Sphingomonas, Sphingopyxis, Sphingosinicella, Sphingomicrobium, Sphingorhabdus, and Parasphingopyxis (Stolz, 2013).
The 16S rRNA gene has been the most common housekeeping marker used to study bacterial phylogeny and taxonomy (Janda and Abbott, 2007). However, many investigators have encountered resolution problems at the genus and/or species level due to the high level of similarity in the 16S rRNA gene sequence (Goncalves and Rosato, 2002; Janda and Abbott, 2007). This prompted the search for new phylogenetic markers such as the 16S–23S rDNA intergenic spacer (ITS). The genes coding for ribosomal RNAs in prokaryotes are arranged in an operon in the order 5′-16S-23S-5S-3′ and are separated by two spacer regions known as the ITS (Condon et al., 1995). The ITS is more variable than the adjacent 16S and 23S ribosomal genes, and may be a better target for efficient identification at the species level due to its variability within a genus (Garcia-Martinez et al., 1996; Khan et al., 2005). This variability is due partly to differences in the number and type of tRNA sequences found within the spacer (Fredrickson et al., 1995).
Organisms isolated from a drinking water distribution network and water storage tanks in Lebanon were previously shown to be mainly Gram-negative, pigmented α-Proteobacteria belonging to the family Sphingomonadaceae (Tokajian et al., 2008). 16S rRNA gene sequencing, biochemical identification using the Biolog system, and restriction digestion of the amplified ITS region did not yield reproducible results or enough variability to properly cluster and/or identify those isolates. In the present report, the ITS sequences of 14 ATCC reference sphingomonad strains and 23 isolates representing those previously recovered from drinking water (Tokajian et al., 2008) were determined. These data were used to assess the extent of variability of the ITS sequences, and to examine the potential of using this genetic marker to differentiate and delineate systematic relationships between isolates that usually do not fit within recognized biochemical profiles, do not generate acceptable identifications with commercial systems, have too few sequences deposited in nucleotide databases, and share a high level of similarity in the 16S rRNA gene sequence.
Bacterial Strains
The study was conducted using all forms and derivatives of yellow-pigmented colonies isolated from an intermittent drinking water distribution network (Tokajian et al., 2005) and polyethylene and cast iron household storage tanks in Lebanon over a period of 2 years (Tokajian and Hashwa, 2004a,b). One hundred and twenty-nine Gram-negative rods with whitish to yellow-pigmented colonies were isolated and purified on R2A agar (Oxoid; Reasoner and Geldreich, 1985). The isolates were grouped into biotypes representing the various colony colors and morphologies obtained upon growth on R2A for 48 h at 28 °C (Figure 1). Twenty-three isolates representing the different biotypes were chosen and designated as SLAU-(1-3) (GenBank accession numbers: GQ907155/56/91), SLAU-(6.1-6.2) (GenBank accession numbers: GQ907158/7), SLAU-
Reference Strains
The
DNA Extraction
DNA extraction was done using InstaGene matrix solution (BIO-RAD, München, Germany). For samples with low DNA concentration and/or quality, the extraction was repeated using the QIAamp DNA Mini Kit (Qiagen, Hilden, Germany), all according to the manufacturers' instructions. Lysates were then stored at −20 °C until further processing.
Sphingomonad-Specific 16S rDNA-Based PCR Assay
The PCR mixture contained 2 μl DNA (200 ng/μl), 1 U of AmpliTaq Gold (Applied Biosystems, USA), 0.5 μM of the forward and reverse primers (Table 1), 0.2 mM of each deoxynucleoside triphosphate (dNTP), 2.5 mM MgCl2 and 1× PCR buffer in a final volume of 50 μl (Leys et al., 2004). The expected PCR amplicon was around 352 bp long and was visualized by ethidium bromide staining on a 1.5% agarose gel using 1× TAE buffer. 16S rRNA gene amplification was used as a positive PCR control to ensure the integrity of the DNA (Tokajian et al., 2008).

FIGURE 1 | Molecular phylogenetic analysis by the Maximum Likelihood method. The evolutionary history was inferred by using the Maximum Likelihood method based on the Jukes-Cantor model. The bootstrap consensus tree inferred from 1000 replicates is taken to represent the evolutionary history of the taxa analyzed. The percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches. Initial tree(s) for the heuristic search were obtained by applying the Neighbor-Joining method to a matrix of pairwise distances estimated using the Maximum Composite Likelihood (MCL) approach. The analysis involved 37 nucleotide sequences, of which 14 are references. All positions containing gaps and missing data were eliminated. There were a total of 86 positions in the final dataset. Evolutionary analyses were conducted in MEGA6.
ITS DNA Amplification
For amplification of the 16S-23S ITS region, PCR was performed in a total volume of 20 μl using the primer 16S-1511f, targeting the end of the 16S rDNA, with the reverse primer 23S-23r, targeting the beginning of the 23S rDNA (http://www.ridom.de/rdna/), or 1492f, targeting the end of the 16S rDNA, with 115r, targeting the 23S rDNA (Table 1) (Garcia-Martinez et al., 1999). PCR reactions contained 2 μl DNA (50 ng/μl), 200 μM dNTPs, 0.4 pmol of each primer, 1× PCR Buffer II (Applied Biosystems), 2.5 mM MgCl2, and 0.1 U of AmpliTaq Gold DNA polymerase (Applied Biosystems). The amplified products were then visualized by ethidium bromide staining on a 1.5% agarose gel using 1× TAE buffer, with reference PCR products used as positive controls. The PCR products were purified using ExoSAP-IT (USB Corp., Cleveland, OH, USA).
DNA Sequencing Reaction
The amplicons were sequenced using the ABI Prism BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems). Two sequencing reactions were performed for each sample. The sequencing reaction consisted of the BigDye premix, 0.2 pmol of either the forward or reverse primer, and the cleaned PCR product in a total volume of 10 μl. The same primers used in the PCR were used for sequencing. All sequencing reactions were performed with 25 cycles of 96 °C for 10 s, 50 °C for 5 s, and 60 °C for 4 min.
Sequence Analysis and Phylogenetic Tree
Sequences obtained were analyzed in CLC Main Workbench v5.5 and deposited in GenBank under the accession numbers indicated above. Sequences were aligned using the Clustal Omega multiple sequence alignment program (Sievers et al., 2011) with default parameters. Phylogeny was inferred using the Maximum Likelihood method based on the Jukes-Cantor evolutionary model, with the consensus tree inferred from 1,000 bootstrap replicates. The initial tree(s) for the heuristic search were obtained by applying the Neighbor-Joining method to a matrix of pairwise distances estimated using the Maximum Composite Likelihood (MCL) approach. All positions containing gaps and missing data were eliminated, and the total data set was composed of 86 positions. Tree building and visualization were done using the MEGA6 program (Tamura et al., 2013) (Figure 1).
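The distance-based starting-tree step of such a pipeline is easy to reproduce in outline. The following minimal sketch uses Biopython and assumes a pre-aligned FASTA file named its_alignment.fasta (a hypothetical name); it builds a Neighbor-Joining tree from identity distances and reports per-sequence G+C content. The Maximum Likelihood refinement performed in MEGA6 is not reproduced here.

```python
# Minimal sketch: identity-distance matrix and a Neighbor-Joining starting tree.
# Assumes a pre-aligned FASTA file "its_alignment.fasta" (hypothetical name).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("its_alignment.fasta", "fasta")

# Pairwise distances: fraction of non-identical aligned positions.
dm = DistanceCalculator("identity").get_distance(alignment)

# Neighbor-Joining tree from the distance matrix (cf. the initial-tree step).
nj_tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(nj_tree)

# G+C mol% of each sequence, computed directly from the residues.
for rec in alignment:
    s = str(rec.seq).upper().replace("-", "")
    gc = 100 * (s.count("G") + s.count("C")) / len(s)
    print(f"{rec.id}: {len(s)} nt, {gc:.1f} mol% G+C")
```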
Structure, ITS Sequences and Phylogenetic Analysis
Sphingomonads comprise a physiologically versatile group, many of which appear to be adapted to oligotrophic environments, while several also have genomic features indicative of host associations (Aylward et al., 2013). Currently, little sequence data is available on the ITS region of sphingomonads. The ITS region has been increasingly used to differentiate bacterial species or strains which cannot be easily differentiated using the 16S rRNA gene (Man et al., 2010). This is the first comparison of 16S-23S rDNA ITS sequence similarities and tRNA genes from sphingomonads, in which we analyzed the phylogenetic relationships based on ITS sequencing for a number of chosen sphingomonad ATCC reference strains along with representative sphingomonads recovered from drinking water in Lebanon (Tokajian et al., 2008). Previously, Takeuchi et al. (2001) separated sphingomonads into four clusters (Cluster I: Sphingomonas, Cluster II: Sphingobium, Cluster III: Novosphingobium, Cluster IV: Sphingopyxis) and considered each of the four clusters a monophyletic and distinct phylogenetic group based on the 16S rDNA sequences. Our results, however, revealed that discrepancies exist, especially within Cluster I (Sphingomonas sp.). S. parapaucimobilis and S. paucimobilis formed distinct lines of descent (Figure 1). Since the ITS has a non-coding function, it is subject to low selective pressure, leading to extensive sequence mutation and insertion/deletion phenomena and making the ITS region more variable than the 16S rDNA (Tyrrell et al., 1997). The ITS sizes of S. parapaucimobilis and S. paucimobilis were 729 and 793 bp, respectively. Similarly, S. suberfaciens (849 bp) clustered separately and had a larger ITS compared to S. mali, S. pruni, and S. asaccharolytica (536-656 bp) (Figures 1 and 2).
Five distinct ITS types were identified: ITSnone (without tRNA genes), ITSAla (with the tRNAAla gene), ITSAla(TGC)+Ile(GAT) (with tRNAAla and tRNAIle genes), ITSIle(GAT)+Ala(TGC) (with tRNAIle and tRNAAla genes) and ITSIle(GAT)+Pseudo (with tRNAIle and pseudo-tRNA genes) (Figure 2A). All of the identified tRNAAla(TGC) molecules consisted of 73 bases, and all of the tRNAIle(GAT) molecules consisted of 74 bases. ITSnone is rarely found in Gram-negative bacteria and was previously detected only in Klebsiella sp. (Wang et al., 2008) and some Gram-positive bacteria including Staphylococcus aureus, Listeria monocytogenes, and Bacillus cereus (Boyer et al., 2001). We also detected striking variability in the size of the ITS region among the various examined isolates (Figure 2B). Even within isolates showing the same pattern, the sizes of the individual ITS regions were often different. The most common pattern was ITSAla+Ile, which was detected in all of the studied ATCC reference strains except S. parapaucimobilis. The size range of this group was 440-849 bp, and in all except N. subterraneum, the tRNAAla is just downstream of the 16S rRNA gene and the tRNAIle is just upstream of the 23S rRNA gene (Figure 2B). This is contrary to what has been previously reported, the common arrangement being ITSIle+Ala (Boyer et al., 2001). However, Xylella fastidiosa and Campylobacter sp. were also among the isolates reported to have the ITSAla+Ile arrangement (Simpson et al., 2000; Man et al., 2010). These tRNAs divide the ITS sequence into three parts: ITS-1, ITS-2, and ITS-3. Positions and structures of the tRNAs and the start and end of the ITS-2 regions were determined using the tRNAscan-SE Search Server (Lowe and Eddy, 1997) (Figure 2B). The highest variability was detected within ITS-2, where the percent sequence conservation ranged from 6 to 96% (mean 22.5% ± 20.4%). This remarkable variation was also previously observed within the ITS-2 of Xanthomonas species (Goncalves and Rosato, 2002). Differences in the size of ITS-2 were also detected, dividing the isolates into different groups, the shortest having 16-19 nt and the longest 122-138 nt (Figure 2B). These groups were polyphyletic and did not correlate with the distribution of the isolates in the phylogenetic tree (Figures 1 and 2B), except for those having an ITS-2 size of 60-61 nt, which included Blastomonas natatoria as the only ATCC reference strain. However, in this group there was a remarkable consensus within the whole ITS and ITS-2 regions, with minor differences: a 10 bp deletion in the ITS region detected in B. natatoria at position 672-681 and only three insertion/deletion instances within ITS-2 (Figure 3A). This was in perfect harmony with the fact that B. natatoria not only represents a different line of descent from all other sphingomonads, but also differs in having photosynthetic and phytopathogenic traits (Takeuchi et al., 2001). Additionally, and in line with our previous observation, the highest ITS-2 sequence conservation within Cluster I (Sphingomonas sp.) was detected in S. mali, S. pruni, and S. asaccharolytica (Figure 3B), which had exactly the same size and almost identical sequences except for one base substitution in S. mali and S. pruni. Moreover, N. stygium and N. rosa had the same ITS-2 size and sequence, which differed slightly from that of N. subterraneum.
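Once the tRNA coordinates are known (e.g., from tRNAscan-SE output), partitioning a spacer into ITS-1, ITS-2 and ITS-3 is a matter of slicing. The sketch below illustrates this; the sequence and coordinates are entirely hypothetical.

```python
# Slicing an ITS sequence into ITS-1 / ITS-2 / ITS-3 around two tRNA genes.
# All coordinates and the sequence below are hypothetical placeholders.

def split_its(its_seq, trna_coords):
    """trna_coords: two (start, end) pairs, 1-based inclusive, in 5'->3' order."""
    (s1, e1), (s2, e2) = sorted(trna_coords)
    its1 = its_seq[:s1 - 1]      # upstream of the first tRNA gene
    its2 = its_seq[e1:s2 - 1]    # between the two tRNA genes
    its3 = its_seq[e2:]          # downstream of the second tRNA gene
    return its1, its2, its3

its = "ACGT" * 150               # placeholder 600-nt spacer
parts = split_its(its, [(61, 133), (195, 268)])
for name, part in zip(("ITS-1", "ITS-2", "ITS-3"), parts):
    print(name, len(part), "nt")
```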
These findings revealed the heterogeneity and extent of variability within the ITS region, as compared to the 16S rRNA gene, even within closely related isolates.

FIGURE 2 | Table showing the length, G/C content, tRNA position, pigmentation, cluster based on the phylogenetic tree, and ITS-2 size of the isolates and reference strains included in this study. The position of each of the tRNAs is indicated in brackets. The dotted square indicates isolates having exactly identical ITS sequences. The dotted brackets indicate isolates having identical or similar ITS-2 sequences.

On the other hand, four of the sequenced isolates, including the reference strain S. sanguinis, exhibited longer ITS-2 sequences, 122-138 bp, which is suggestive of a common origin (Figure 2B). A longer stem was observed, which provided more stability (Figure 4). For the ones with shorter sequences (16-19 bp), the formation of a secondary hairpin structure could be predicted, which could represent a putative target for RNase III during the processing of tRNAAla and tRNAIle (Figure 4). The longer observed sequences are due to the addition or deletion of three long stretches of nucleotides at different positions (positions 23-43, 43-94, and 95-137) with sizes of 20-51 nt. Finally, it is noteworthy that all the reference strains had a conserved nucleotide block of TGGT at the end of ITS-2 (except in S. parapaucimobilis, where it was TACG, and in N. subterraneum, where it was TTGG), and some consensus sequences, such as CCAACCAT, at the beginning.
Sequencing the ITS region and examining the variability within ITS-2 helped in overcoming the limitations we previously encountered using 16S rRNA gene sequences; whether with the ATCC reference strains or the unknown isolates, we were able to better understand the phylogeny of those isolates. The availability of only few 16S rRNA gene sequences deposited in nucleotide databases, and the similarities that existed among the 16S rRNA sequences, led to poor discriminatory power (Tokajian et al., 2008). Although a definitive identification was not attained for some of the sequenced isolates using the ITS sequencing approach, a clear improvement in resolution was observed. Moreover, this study revealed discrepancies in Cluster I (Sphingomonas sp.), which call for careful reconsideration.
The G+C Content of ITS Sequences
The total G+C content of sphingomonad genomes is 62-68% (Takeuchi et al., 2001), while in the ITS sequences the range was 42.2-57.9 mol%. This is in line with what was previously observed in Xanthomonas species, Salmonella typhimurium and Escherichia coli, and confirms that the selective pressure is not identical in the coding and non-coding regions (Syvanen, 1994; Goncalves and Rosato, 2002).
CONCLUSION
Although 16S rRNA gene sequencing has been widely used for typing bacterial isolates, it was shown previously that this region does not provide enough information to discern between closely related bacterial strains at the sub-generic level, especially for diverging species (Fox et al., 1992). Sequence and length polymorphisms of ITS regions have been increasingly used as tools for the identification of bacterial species and/or subspecies, since the ITS region is hypervariable compared to the more conserved 16S rDNA (Garcia-Martinez et al., 1999); hence, sequencing of the 16S-23S ITS region provides more information for identification at the species and subspecies levels (Gurtler and Stanisich, 1996; Ernst et al., 2003; Xu and Cote, 2003). Collectively, the data obtained in this study show that sequence and length polymorphisms within the ITS region, along with the ITS types (tRNA-containing or lacking, and the type of tRNA) and ITS-2 size and sequence similarities, can all be used as a potentially powerful tool to study the phylogeny of such isolates and to delineate systematic relationships. Additionally, based on ITS sequencing, some of the unidentifiable drinking water isolates, which were phenotypically similar to sphingomonads and were not identified using 16S rRNA gene sequencing (Tokajian et al., 2008), had ITS sequences with sufficient variation to overcome the limitation of resolving closely related isolates based on the 16S rRNA gene sequence. Moreover, ITS sequence informatics could help clinical settings in resolving the problem of identifying organisms that are rarely associated with human infections and that usually do not fit within recognized biochemical profiles.
It is noteworthy that high-throughput genome sequencing of a number of sphingomonads revealed the presence of a large number of species-specific genes, with few genomic features that can reliably be used to differentiate between the genera. These observed discrepancies were attributed to the presence of selfish genetic elements playing a significant role in shaping genome evolution, along with megaplasmids, transposons, plasmids, and chromosomal rearrangements (Aylward et al., 2013). Due to selective pressures, few genomic features within sphingomonads can reliably distinguish between the different genera, with organisms belonging to the different clusters exhibiting high genomic plasticity (Aylward et al., 2013; Narciso-da-Rocha et al., 2014). Finally, in light of the findings of this study, and as 16S rRNA gene sequencing alone cannot be used as a reliable genetic marker for sphingomonads, high-throughput genome sequencing of isolates representing the four clusters proposed by Takeuchi et al. (2001) is highly recommended. | 2016-05-12T22:15:10.714Z | 2016-02-11T00:00:00.000 | {
"year": 2016,
"sha1": "ba91039310611f47c56fcee3266514cc7b085989",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2016.00149/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba91039310611f47c56fcee3266514cc7b085989",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
213591106 | pes2o/s2orc | v3-fos-license | Incremental Symmetry Breaking Constraints for Graph Search Problems
This paper introduces incremental symmetry breaking constraints for graph search problems which are complete and compact. We show that these constraints can be computed incrementally: a symmetry breaking constraint for order n graphs can be extended to one for order n + 1 graphs. Moreover, these constraints induce a special property on their canonical solutions: an order n canonical graph contains a canonical subgraph on its first k vertices for every 1 ≤ k ≤ n. This facilitates a "generate and extend" paradigm for parallel graph search problem solving: to solve a graph search problem ϕ on order n graphs, first generate the canonical graphs of some order k < n. Then, compute canonical solutions for ϕ by extending, in parallel, each canonical order k graph together with suitable symmetry breaking constraints. The contribution is that the proposed symmetry breaking constraints enable us to extend the order k canonical graphs to order n canonical solutions. We demonstrate our approach through its application on two hard graph search problems.
Introduction
Graph search problems deal with the existence and enumeration of simple graphs with certain properties which are invariant under isomorphism. One of the most famous graph search problems is the search for Ramsey (s, t; n) graphs, which seeks order n graphs with no clique of size s and no independent set of size t [21]. The set of Ramsey (4, 5; 24) graphs was determined only recently [2]. Such problems are often highly challenging due to the large number of symmetries in the graph representation and the enormous search space. For graph search problems, any isomorphic graph, obtained by permuting the vertices of a (non-)solution, is also a (non-)solution, which is symmetric to it.
Ultimately, symmetry breaking is about restricting the search to a reduced space which considers a single graph from each isomorphism class. If symmetries are eliminated, the size of the search space is significantly reduced, and it can be explored more efficiently because paths that lead to symmetric (non-)solutions are avoided.
Symmetry breaking in constraint programming and satisfiability solving is often achieved by introducing symmetry breaking constraints [25,6,23] which are satisfied by at least one member of each isomorphism class. A symmetry breaking constraint is called complete if it is satisfied by exactly one member from each class, and partial otherwise. Ideally, a symmetry breaking constraint should be compact in size, and complete. This enables solvers to avoid symmetries without imposing an overhead due to the size of the constraint.
Computing compact and complete symmetry breaking constraints is, most often, intractable [6]. For graph search problems, it is unknown if there exists a complete symmetry breaking constraint that is polynomial in the size of the graph. The well-known lex-leader approach [22] selects the smallest member of each class, with respect to a lexicographic ordering, as a canonical representative. Testing if a given graph is a lex-leader canonical representative is known to be co-NP complete [18]. Hence, it is unlikely that there exists a polynomial-size symmetry breaking constraint that identifies lex-leader canonical representatives.
In theory, a complete lex-leader symmetry breaking constraint should impose one lexicographic order constraint for every symmetry. As an example, for order 10 graphs this translates to 10! = 3,628,800 constraints. In practice, many of these constraints are redundant. Itzhakov and Codish [14] compute a complete symmetry breaking constraint for order 10 graph search problems consisting of only 7,853 lexicographic order constraints instead of all 10! constraints. Codish et al. [4] show that a further reduction is made when expressing the symmetry breaking constraint using the implications derived from the AND-decomposition of the lexicographic order constraints [9]. In their approach, symmetry breaking constraints are more compact and faster to compute. They compute, for the first time, a complete and compact symmetry breaking constraint for order 11 graph search problems.
This paper introduces incremental symmetry breaking constraints for graph search problems which are complete, compact and have two special properties. First, the symmetry breaking constraint for graphs of order n can be extended to one for graphs of order n + 1. Second, if an order n graph satisfies the symmetry breaking constraint, then so does its subgraph on the first k ≤ n vertices.
The first property implies that symmetry breaking constraints can be computed incrementally. The second property facilitates a "generate and extend" paradigm for parallel graph search problem solving. In this approach, to solve a graph search problem ϕ on order n graphs, we first generate the canonical graphs of some order k < n. We then compute canonical solutions for ϕ by extending, in parallel, each canonical graph of order k, applying a corresponding symmetry breaking constraint for order n graphs. The crucial point is that the symmetry breaking constraints we introduce are consistent with these order k canonical subgraphs. We show that this generate and extend paradigm can be effectively applied for order n ≤ 12 graph search problems.
We demonstrate the application of incremental symmetry breaking constraints on two hard graph search problems: enumeration of "totally magic" [7,10] and "word-representable" [16,1] graphs. For both of these, state-of-the-art solutions apply a generate and test approach where each graph is tested for the corresponding property. Solving the instances for order 11 graphs involves huge resources and thousands of CPU days. Moreover, this approach cannot be applied for larger graphs. We apply a generate and extend approach with complete symmetry breaking constraints to provide solutions which are significantly more efficient.
The computations described in this paper are performed using the finite-domain constraint compiler BEE [20], which compiles constraints to a CNF formula and solves it applying an underlying SAT solver. We use Glucose 4.0 [3] as the underlying SAT solver. All computations were performed on a cluster of servers, each with 56 Intel Xeon E5-2620 cores and 256GB of RAM memory, clocked at 2 GHz. Each SAT instance is run on a single thread. All running times reported are CPU times and specified in an appropriate unit: (s) seconds, (h) hours, (d) days or (y) years.
The rest of this paper is structured as follows. Section 2 presents preliminary definitions and notation. Section 3 describes an incremental approach to compute column-wise complete symmetry breaking constraints for graphs. Section 4 introduces a generate and extend paradigm for graph search problems. Section 5 demonstrates the advantage of our approach in the context of two hard graph search problems. Finally, Section 6 concludes.
Preliminaries
Lexicographic Order Constraints: The lexicographic order constraint between two vectors x̄ = x_1, . . ., x_n and ȳ = y_1, . . ., y_n, each consisting of n finite domain variables, is denoted x̄ ≤lex ȳ. The AND-decomposition of a lexicographic order constraint [9] can be expressed as follows:

x̄ ≤lex ȳ  ⟺  ⋀_{k=1}^{n} imp_k(x̄, ȳ)    (1)

where each of the conjuncts imp_k(x̄, ȳ) is called a k-length lex-implication and is defined by:

imp_k(x̄, ȳ)  =  ( ⋀_{1 ≤ i < k} x_i = y_i ) → x_k ≤ y_k

Permutations: We denote by S_n the group of all permutations on {1, . . ., n}. We represent a permutation π ∈ S_n as an array of size n where the number 1 ≤ i ≤ n is mapped to π(i). For example, the permutation [2, 3, 1] ∈ S_3 maps as follows: {1 → 2, 2 → 3, 3 → 1}.
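As a sanity check of this decomposition, the following sketch (ours, not from the paper) verifies exhaustively, for all short Boolean vectors, that the conjunction of the k-length lex-implications coincides with the lexicographic order.

```python
# Verify the AND-decomposition of <=lex on all Boolean vectors of length 4.
from itertools import product

def leq_lex(x, y):
    return tuple(x) <= tuple(y)   # Python compares tuples lexicographically

def imp_k(x, y, k):
    """k-length lex-implication: (x1=y1 ^ ... ^ x_{k-1}=y_{k-1}) -> x_k <= y_k."""
    return not all(x[i] == y[i] for i in range(k - 1)) or x[k - 1] <= y[k - 1]

n = 4
for x, y in product(product((0, 1), repeat=n), repeat=2):
    assert leq_lex(x, y) == all(imp_k(x, y, k) for k in range(1, n + 1))
print("AND-decomposition verified for all Boolean vectors of length", n)
```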
Graphs and Graph Orderings:
The set of simple graphs on n vertices is denoted G_n. The vertex set of a graph G = (V, E) of order n is assumed to be V = {1, . . ., n}, and in abuse of notation its adjacency matrix representation is also denoted G. We denote by R(G) and by C(G) the strings obtained by respectively concatenating the rows and columns of the upper triangular part of the adjacency matrix of G. We denote by G^(k), for k ≤ n, the induced subgraph of G on the vertex set {1, . . ., k}. This is the upper left k × k corner of the adjacency matrix of G. An unknown graph of order n is represented as an n × n adjacency matrix of Boolean variables which is symmetric and has values false (denoted by 0) on the diagonal. All of the notations for given graphs, such as C(G), R(G) and G^(k), hold also for unknown graphs. For simplicity, unknown graphs are also called graphs. For (possibly unknown) graphs G, H of the same order and X ∈ {R, C}, we denote the lexicographic order and the k-length lex-implication constraints with respect to X by G ≤_X H := X(G) ≤lex X(H) and imp_k^X(G, H) := imp_k(X(G), X(H)). Permutations act on graphs in the natural way: viewing G ∈ G_n as an adjacency matrix and given a permutation π ∈ S_n, π(G) is the adjacency matrix obtained by mapping each element G_{i,j} to G_{π(i),π(j)}. Given an isomorphism class of graphs, a classic way to define the canonical representative of the class is to take the smallest graph with respect to some order. In this paper we consider two specific order relations and define canonical representatives in the following way.

Definition 1 (LEXLEADER). We say that G ∈ G_n is row-wise canonical if the following constraint holds for X = R, and column-wise canonical if it holds for X = C:

LEXLEADER_X(G)  =  ⋀_{π ∈ S_n} X(G) ≤lex X(π(G)).
The following property of column-wise canonical graphs is stated in [17].

Theorem 1. If G ∈ G_n is column-wise canonical then, for every 1 ≤ k ≤ n, the subgraph G^(k) is column-wise canonical.
The following example demonstrates that Theorem 1 does not hold for row-wise canonical graphs.
Example 2. The following graph G (on the left) is row-wise canonical, while its subgraph G^(6) (shown in bold) is not; the graph on the right is the row-wise canonical isomorph of G^(6). (The adjacency matrices are omitted here.)
Graph Search Problems: Graph search problems concern the search for a graph that satisfies certain graph properties which are invariant under isomorphism. If G is a solution to a graph search problem, then so is any graph isomorphic to G. More formally, an order n graph search problem is a predicate, ϕ(G), on an unknown order n graph G, which is closed under isomorphism. A solution to ϕ(G) is a satisfying assignment for the variables of G.
Symmetry Breaking:
We focus in this paper on two particular types of complete symmetry breaking predicates, row-wise and column-wise, that are satisfied exactly by the row-wise and column-wise canonical graphs, respectively. Consider the two predicates LEXLEADER_X(G) for X ∈ {R, C} from Definition 1. When G is an unknown graph, expressed in terms of Boolean variables, Definition 1 can be viewed as specifying a conjunction of lexicographic order constraints over these variables. Each of these two conjunctions specifies a predicate that is true exactly when its argument graph is respectively row-wise or column-wise canonical. Hence, these predicates are complete symmetry breaking constraints. We can view the constraints in LEXLEADER_X(G) either as a set of lexicographic order constraints, or as a set of their corresponding lex-implications as specified by Equation 1. We denote these sets, respectively, by LEXLEADER_X^lex(G) and by LEXLEADER_X^imp(G). For any symmetry breaking predicate ψ defined as a conjunction of lexicographic order constraints (or lex-implications), we define the size of ψ to be the number of lex-implications in the AND-decomposition of these constraints.
Computing Compact Symmetry Breaking Constraints: For an unknown graph G, for X ∈ {R, C}, and for Y ∈ {lex, imp}, the set of constraints LEXLEADER_X^Y(G) (over the variables in G) is of size exponential in the size of G. However, many of these constraints are redundant (implied by the others). One can compute an equivalent set of irredundant constraints which is more concise. From here on, we call a set of irredundant constraints compact. Luks [18] proves a result from which it follows that unless P = NP, there is no polynomial-time algorithm that computes a compact lex-leader symmetry breaking constraint for graph search problems. Nevertheless, we aim to compute compact lex-leader symmetry breaking constraints for small graph search problems. Algorithm 1 computes a compact complete symmetry breaking constraint for G given X and Y. The symmetry breaking constraint is computed iteratively by adding a constraint c from LEXLEADER_X^Y(G) as long as it is not implied by the constraints selected so far. In the implementation of Algorithm 1, the condition in the while-loop applies a SAT solver to identify a constraint c which is not implied by the current set of constraints ψ. Possibly, when a new constraint is added, a constraint already present becomes redundant. Therefore, the algorithm applies an operation, Reduce(ψ), to remove redundant constraints. For each constraint, the test of redundancy is performed using a SAT solver. This algorithm generalizes the ones presented in [14] and in [4].
Example 3. Let G be the unknown order 5 graph detailed in Example 1. Algorithm 1 (with X = R, Y = lex) computes a compact row-wise symmetry breaking constraint consisting of 7 lexicographic order constraints (after some simplifications), instead of the 120 constraints in LEXLEADER_R^lex(G).
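The greedy structure of Algorithm 1 is easy to prototype. The sketch below is our own illustration for tiny orders: it replaces the SAT-based implication tests with exhaustive enumeration over all order-n graphs, which is feasible only for very small n (here n = 4, row-wise ordering).

```python
# Brute-force sketch of the greedy scheme of Algorithm 1 (row-wise, tiny n).
# Implication tests enumerate all 2^(n(n-1)/2) graphs instead of calling a SAT solver.
from itertools import combinations, permutations, product

n = 4
edges = list(combinations(range(n), 2))        # upper-triangular cells (i < j)

def row_string(g, perm):
    """R(pi(G)): rows of the upper triangle of the permuted adjacency matrix."""
    return tuple(g[tuple(sorted((perm[i], perm[j])))] for (i, j) in edges)

def constraint(perm):
    """The lex-leader constraint R(G) <=lex R(pi(G)) as a predicate on g."""
    ident = tuple(range(n))
    return lambda g: row_string(g, ident) <= row_string(g, perm)

all_graphs = [dict(zip(edges, bits)) for bits in product((0, 1), repeat=len(edges))]

def implies(preds, c):
    """Does the conjunction of preds imply c (checked over all graphs)?"""
    return all(c(g) for g in all_graphs if all(p(g) for p in preds))

def reduce_(preds):
    """The Reduce step: drop constraints implied by the remaining ones."""
    out, i = list(preds), 0
    while i < len(out):
        if implies(out[:i] + out[i + 1:], out[i]):
            out.pop(i)
        else:
            i += 1
    return out

compact = []
for perm in permutations(range(n)):
    c = constraint(perm)
    if not implies(compact, c):
        compact = reduce_(compact + [c])

print(len(compact), "irredundant constraints out of", len(list(permutations(range(n)))))
```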
Incremental Computation of Symmetry Breaking Constraints
In this section we describe an incremental computation of compact column-wise symmetry breaking constraints for graphs. Our goal is to be able to extend a given compact column-wise symmetry breaking constraint ψ for graphs of order k with a set of constraints ∆ such that ψ′ = ψ ∧ ∆ is a compact column-wise symmetry breaking constraint for graphs of order k + 1. We show that we can achieve this goal when focusing on column-wise symmetry breaking predicates. This ability to extend compact column-wise symmetry breaking constraints facilitates an incremental approach to computing them.
Theorem 2. Let G be an order k unknown graph and let ψ be a compact column-wise symmetry breaking constraint for G^(k−1). Then, there exists ∆ ⊆ LEXLEADER_C^imp(G) such that ψ ∧ ∆ is a compact column-wise symmetry breaking constraint for G.
Proof. Let G and ψ be as in the premise of the statement, and let ∆* = { c ∈ LEXLEADER_C^imp(G) | ψ does not imply c }. By construction, ψ ∧ ∆* is a column-wise symmetry breaking constraint for G. We show that any redundant implication in ψ ∧ ∆* must be from ∆*. Hence, we can obtain ∆ ⊆ ∆* for which ψ ∧ ∆ is a compact column-wise symmetry breaking constraint for G. Let c ∈ ψ be redundant in ψ ∧ ∆*. Since ψ is compact, ψ − {c} is not a column-wise symmetry breaking constraint for G^(k−1). Hence, there exists an order k − 1 graph H which is not column-wise canonical and for which (ψ − {c})(H) is true. By choice of c, (ψ − {c}) ∧ ∆* is a column-wise symmetry breaking constraint for G and hence, by Theorem 1, also a column-wise symmetry breaking constraint for G^(k−1). Hence, ((ψ − {c}) ∧ ∆*)(H) is false, which implies that ∆*(H) is false. So, ∆* contains an implication x which is false under H, and hence depends only on variables from G^(k−1). But x ∈ ∆* implies that ψ does not imply x. This means that ψ fails to imply a lex-implication which depends only on variables from G^(k−1). The existence of such an implication contradicts the assumption that ψ is a column-wise symmetry breaking constraint for G^(k−1).

Table 1. Computation of compact column-wise symmetry breaking constraints for order 3 ≤ n ≤ 11 graphs using the direct and incremental approaches.
Algorithm 2 applies an incremental approach to compute a compact set of constraints equivalent to LEXLEADER_C^imp(G). The input is an unknown graph G; the output is a compact column-wise symmetry breaking constraint expressed in terms of the variables in G. When execution enters the outer for-loop at line 5 with a value 1 ≤ k ≤ n, we have already computed a compact column-wise symmetry breaking constraint ψ for order k − 1 graphs. At this stage, the goal of the k-th iteration of the for-loop is to compute a set of constraints ∆ such that ψ ∧ ∆ is a column-wise symmetry breaking constraint for order k graphs.

Algorithm 2 Incremental computation of compact column-wise symmetry breaking constraints
 1: procedure SYMBREAKINCREMENTAL(G)
 2:   Input: unknown order n graph G
 3:   Output: compact column-wise symmetry breaking constraint for G
 4:   ψ ← {}
 5:   for k := 1 to n do
 6:     ∆ ← {}
 7:     while ψ ∧ ∆ is not column-wise for k, witnessed by some c ∈ LEXLEADER_C^imp(G^(k)) do
 8:       ∆ ← ∆ ∪ {c}
 9:     ψ ← Reduce(ψ ∪ ∆)
10:   return ψ

The while-loop at lines 7-8 computes a set ∆ corresponding to ∆* in the proof of Theorem 2. The condition in the while-loop at line 7 asks if there exists a constraint c which is a witness to the fact that ψ ∧ ∆ is not yet column-wise for k. Such a witness (if one exists) is to be found in LEXLEADER_C^imp(G^(k)). In the implementation, to determine if ψ ∧ ∆ is not yet column-wise for k, we seek a value i, an order k graph H, and a permutation π ∈ S_k such that (ψ ∧ ∆)(H) holds while the i-length lex-implication imp_i^C(H, π(H)) is violated; the corresponding constraint c is then added to ∆. Otherwise, ψ ∧ ∆ is column-wise for k and we exit the while-loop. The key step in the implementation is to encode the search for such a witness as a SAT instance. By Theorem 2 (and its constructive proof), all redundant constraints removed at line 9 are removed from ∆.
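To make the incremental scheme concrete, here is a small brute-force prototype of ours (not the paper's SAT-based implementation): at each stage k it adds column-wise constraints over G^(k) that are not yet implied by those collected so far, with exhaustive enumeration standing in for the SAT witness search; the Reduce step is omitted for brevity.

```python
# Brute-force prototype of the incremental scheme (column-wise, tiny n).
# Stage k adds constraints over G^(k) not implied by those already collected.
from itertools import combinations, permutations, product

def cells(k):
    """Upper-triangular cells (i < j) of an order-k adjacency matrix."""
    return list(combinations(range(k), 2))

def col_string(g, k, perm):
    """C(pi(G^(k))): upper-triangle entries concatenated column by column."""
    order = sorted(cells(k), key=lambda e: (e[1], e[0]))   # column-major
    return tuple(g[tuple(sorted((perm[i], perm[j])))] for (i, j) in order)

def graphs(k):
    cs = cells(k)
    return [dict(zip(cs, bits)) for bits in product((0, 1), repeat=len(cs))]

def implies(preds, c, k):
    """Exhaustive stand-in for the SAT-based witness search."""
    return all(c(g) for g in graphs(k) if all(p(g) for p in preds))

def make_constraint(k, perm):
    ident = tuple(range(k))
    return lambda g: col_string(g, k, ident) <= col_string(g, k, perm)

n, psi = 4, []
for k in range(1, n + 1):
    delta = []
    for perm in permutations(range(k)):
        c = make_constraint(k, perm)
        if not implies(psi + delta, c, k):   # constraints for G^(k-1) still apply
            delta.append(c)
    psi += delta                             # Reduce step omitted in this sketch
    print(f"k={k}: added {len(delta)}, total {len(psi)}")
```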
Table 1 details the time to compute compact column-wise symmetry breaking constraints, expressed in terms of lex-implications, for order n graphs. The table compares the direct computation (using Algorithm 1) and the incremental computation (using Algorithm 2). All of the symmetry breaking constraints computed (direct and incremental) were verified by checking that the constraint for order n graphs renders the exact set of column-wise canonical order n graphs as solutions.
The column labeled LEXLEADER_C^imp details the size of the LEXLEADER_C^imp constraint. The two columns labeled "direct" detail the computation (size and time) of the column-wise symmetry breaking constraint by application of Algorithm 1. The four columns labeled "incremental" detail the computation by application of Algorithm 2. The columns ∆-size and ∆-time detail the size and computation time to extend the symmetry breaking constraint from the previous row. The columns size and time detail the aggregated size and time. The column labeled speedup details the ratio between the direct and aggregated incremental computation times.
The table clearly indicates that compact column-wise symmetry breaking constraints are much smaller than their LEXLEADER_C^imp(G) counterparts, which are logically equivalent. This is in line with the results of previous works [14,4]. The table also indicates that the incremental computation is more efficient (up to 2.2 times faster) and that the sizes of the constraints (direct and incremental) are similar.

A Generate and Extend Paradigm

This section introduces a "generate and extend" paradigm for graph search problems which derives from the incremental properties of column-wise symmetry breaking constraints. To solve an order n graph search problem ϕ(G), one can first generate all order k < n canonical graphs (for a suitable value of k). Then, one can pose, for each order k canonical graph G′, the question: does there exist a graph G ∈ G_n which extends G′ such that ϕ(G) holds? Basically this means fixing the variables of the subgraph G^(k) to the values of G′ before applying a constraint (or SAT) solver on ϕ(G). The graph search problem of extending G′ to a solution of ϕ(G) is denoted ϕ(G/G′).
For example, there are 1044 order 7 canonical graphs. To solve an order n graph search problem ϕ(G), one can seek solutions, in parallel, for the problems ϕ(G/G′) extending each G′ among these 1044 graphs. The question is: how to break symmetries when solving ϕ(G/G′)? There are two issues: (1) given a graph G′ ∈ G_k, how to break symmetries and obtain only non-isomorphic solutions of ϕ(G/G′); and (2) given a pair of graphs G′, G′′ ∈ G_k, how to ensure that solutions of ϕ(G/G′) are not isomorphic to those of ϕ(G/G′′). The beauty of column-wise symmetry breaking constraints is that they address both issues.
If G′ of order k is column-wise canonical, then G′ is consistent with a column-wise symmetry breaking constraint ψ of order n (k < n). Therefore ψ can be applied when solving ϕ(G/G′), and all solutions are column-wise canonical. Given column-wise canonical graphs G′, G′′ ∈ G_k, the solutions for ϕ(G/G′) and ϕ(G/G′′) are column-wise canonical, and hence, by definition, they cannot be isomorphic.
When solving graph search problems of the form ϕ(G/G′) for a given column-wise canonical G′ ∈ G_k, we can apply the column-wise symmetry breaking constraints (direct or incremental) described in Table 1. Alternatively, we can compute a specialized column-wise symmetry breaking constraint for each G′. These are considerably smaller and facilitate the computation of compact column-wise symmetry breaking constraints for order 12 graph search problems. This is done by application of Algorithm 2 after fixing the values corresponding to G′ in the unknown graph G.
Table 2 details the computation of compact column-wise symmetry breaking constraints for graph search problems of the type ϕ(G/G′), where G′ is one of the 1044 order 7 column-wise canonical graphs and G is of order 7 < n ≤ 12. The three columns labeled "specialized" detail the computation of specialized column-wise symmetry breaking constraints for all order 7 column-wise canonical graphs. We detail the total computation time (for all 1044 cases) and the average and maximal size of the individual symmetry breaking constraints. The two columns labeled "simplified" detail the size (average and maximal) of the symmetry breaking constraints obtained from the symmetry breaking constraints described in Table 1 by removing implications which become true due to G′.
Specialized symmetry breaking constraints are considerably smaller than the simplifications of their general counterparts described in Table 1. For example, when n = 11 the general symmetry breaking constraint consists of 289,698 implications, while the average (maximal) size of the specialized symmetry breaking constraints is only 873 (15,433). This means that each of the 1044 instances involves much smaller symmetry breaking constraints. The row for n = 12 details 1044 specialized symmetry breaking constraints for order 12 graphs. These cannot be computed by simplifying a general symmetry breaking constraint. This is the first time that a complete and compact lex-leader symmetry breaking constraint for graphs with 12 vertices has been computed, albeit distributed over 1044 cases.
The symmetry breaking constraints described in Table 2 apply when extending order 7 canonical graphs to order n canonical solutions. It follows from Theorem 1 that these same constraints can also be applied when extending order k > 7 canonical graphs to canonical solutions of order n. In our experiments (in Section 5), when extending a canonical order k > 7 graph G′ to an order n solution, we apply the symmetry breaking constraint computed for G′^(7).
Two Applications of Generate and Extend
In this section we apply a generate and extend approach to solve two hard graph search problems: enumeration of "totally magic" [7,10] and "word-representable" [16,1] graphs. For both of these problems, state-of-the-art solutions apply a generate and test approach where each non-isomorphic order n graph is tested for the corresponding property. Each such test is non-trivial. Determining if a given graph is word-representable is NP-complete [13]. For totally magic graphs, the complexity is unknown, yet state-of-the-art methods are exponential. Solving the instances for n = 11 involves huge resources and thousands of CPU days. Moreover, this approach cannot be applied for larger graphs. We apply the proposed generate and extend paradigm with column-wise symmetry breaking constraints and demonstrate that this approach is significantly more efficient.
Totally Magic Graphs: The n vertex graph search problem ϕ_tm(n)(G) is the search for a totally magic graph G with n vertices [7,10]. A graph G = (V, E), with |V| = n and |E| = m, is totally magic if there exist a one-to-one labeling λ : V ∪ E → {1, . . ., n + m} and two integer values h, k such that: (vertex magic constraint) the sum of the labels of each node and its incident edges is h; and (edge magic constraint) the sum of the labels of each edge and its endpoints is k.
Figure 1 depicts a totally magic graph with 9 vertices. The sum of the labels of each node and its incident edges is 25. The sum of the labels of each edge and its endpoints is 26.

Fig. 1. An order 9 totally magic graph.
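The definition translates directly into a brute-force check, sketched below (our illustration; feasible only for very small n + m, since it tries all (n + m)! labelings). The triangle tested at the end is one of the known totally magic graphs.

```python
# Brute-force check of the totally magic property for a tiny graph.
from itertools import permutations

def is_totally_magic(vertices, edges):
    n, m = len(vertices), len(edges)
    items = list(vertices) + list(edges)
    for lab in permutations(range(1, n + m + 1)):   # all one-to-one labelings
        L = dict(zip(items, lab))
        # vertex magic: label of v plus labels of its incident edges
        vsums = {L[v] + sum(L[e] for e in edges if v in e) for v in vertices}
        # edge magic: label of e plus labels of its two endpoints
        esums = {L[e] + L[e[0]] + L[e[1]] for e in edges}
        if len(vsums) == 1 and len(esums) == 1:     # constants h and k exist
            return True
    return False

# K3 (a single triangle) is among the known totally magic graphs.
print(is_totally_magic([0, 1, 2], [(0, 1), (0, 2), (1, 2)]))
```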
A relaxation of ϕ_tm(n) weakens the definition to consider the vertex and edge magic constraints with arithmetic modulo p, and also specifies the number of edges, m, in the solutions. We denote the relaxed problem by ϕ^p_tm(n,m). Any graph which is totally magic is also totally magic modulo p [7,15]. So, we can test the solutions of the relaxed problem to identify the totally magic graphs.
Totally magic graphs are extremely rare. There are only 6 such graphs with 11 or fewer vertices. The only known totally magic graphs with more than 11 vertices are composed of an odd number of triangles, or of an even number of triangles together with a path of length 2. It is unknown whether there exist other totally magic graphs with more than 11 vertices.
In previous work, Jäger et al. [15] enumerate all totally magic graphs with n ≤ 11 vertices. Their approach is based on an enumeration of all non-isomorphic graphs with n vertices and testing each graph. These tests are based on, among other criteria, the elimination of graphs which are not totally magic modulo p ≤ 7. Jäger et al. [15] report a total of 13,595 CPU days to show that there do not exist any order 11 totally magic graphs.
We apply a constraint-based approach where, for each instance, we consider the corresponding constraint model that expresses the totally magic constraints in conjunction with suitable symmetry breaking constraints.
We first applied a direct approach to solve instances of the form ϕ_tm(n)(G) and ϕ^4_tm(n,m)(G). For both types of instances we found solutions only when n < 9 (with a 48-hour time-out). We then applied a generate and extend approach focusing on the relaxed form ϕ^4_tm(n,m)(G/G′), which enabled us to enumerate all totally magic graphs of order n < 12.
Table 3 details a two-step computation of totally magic order n graphs. In the first step we apply a generate and extend approach to compute all solutions of ϕ^4_tm(n,m) (for all possible values of m). In the second step we test each solution of the relaxed problem to check if it is totally magic.
Each order n instance of the relaxed problem corresponds to a pair (G′, m), where G′ is one of the order 7 column-wise canonical graphs and m is the number of edges in the solution. We impose a 24 hour timeout on each instance. An instance (G′, m) which timed out is further refined to a set of instances of the form (G′′, m), where G′′ is an order 8 column-wise canonical graph which extends G′. Such time-outs were encountered only for the case n = 11 (in 164 of the 36,540 instances).
For the first step, Table 3 details (left side) the total number of ϕ^4_tm(n,m) solutions and the aggregated computation time (including the cost of the 24 hour time-outs). For the second step, Table 3 details (right side) the computation time for the tests on the solutions from the first step. To this end, we apply a series of tests similar to those described in [15].
Table 3 indicates, for n = 10 and 11, total computation times of 7.21 and 976.36 days, in contrast to the 21.70 and 13,595 days reported in [15]. One can view the first step (generate and extend) as a filter for the second step. For example, instead of testing all 1,018,997,864 order 11 graphs for the totally magic property, as in [15], we only have to test 91,397,498, which is less than 9% of them.
Word-Representable Graphs: A simple graph G = (V, E) is called word-representable if there exists a word w ∈ V* containing each letter of the alphabet V such that for every i, j ∈ V, i and j alternate in w if and only if (i, j) ∈ E [16]. In this case we say that G is represented by w. For example, the graph depicted in Figure 2 is word-representable. Akgün et al. [1] compute the number of connected non-word-representable graphs of order n ≤ 11. They adopt a generate and test approach, testing each non-isomorphic connected graph of the corresponding order. The test is performed using the constraint solver Minion [12]. To this end, they specify a constraint model based on the equivalence of word-representable graphs and so-called semi-transitive graphs [16]. They report 1100 CPU days of computation time to accomplish this task for order 11 graphs.
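The alternation condition is straightforward to test directly. The following sketch (ours) checks whether a word represents a given graph; the 4-cycle and its representing word are a small example that the code itself verifies.

```python
# Does a word represent a graph? Letters x, y alternate in w iff the
# subsequence of w over {x, y} has no two equal consecutive letters.

def alternate(w, x, y):
    sub = [c for c in w if c in (x, y)]
    return all(a != b for a, b in zip(sub, sub[1:]))

def represents(w, vertices, edges):
    E = {frozenset(e) for e in edges}
    return all(alternate(w, x, y) == (frozenset((x, y)) in E)
               for i, x in enumerate(vertices) for y in vertices[i + 1:])

# The 4-cycle 1-2-3-4-1: one can check that "13243142" represents it.
cycle4 = [("1", "2"), ("2", "3"), ("3", "4"), ("1", "4")]
print(represents("13243142", "1234", cycle4))   # True
```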
We denote by ϕ_wr(n) the graph search problem of finding connected order n word-representable graphs. We adopt the same constraint model used by Akgün et al. [1], together with constraints to ensure connectivity, and with column-wise symmetry breaking constraints.
We first applied a direct constraint-based approach to solve instances of the form ϕ_wr(n)(G). This approach works well to find solutions for n < 10. We then apply the generate and extend paradigm to enumerate solutions of ϕ_wr(n)(G/G′). In this way we succeed in enumerating all connected word-representable graphs of order n ≤ 12. This is the first time that a solution for n = 12 is reported.
Table 4 details the generate and extend approach and compares its computation times with those of the generate and test approach described by Akgün et al. [1]. To comply with their results, we present the corresponding numbers of connected and of connected non-word-representable graphs. The first three columns detail the order, n, and the numbers of connected graphs and connected non-word-representable graphs. For the generate and extend paradigm, with 9 ≤ n ≤ 12, we extend each order k column-wise canonical graph to the set of its extensions which are canonical connected word-representable graphs. The total computation times are detailed in the table. For the generate and test paradigm we specify (rightmost column) the computation times detailed in [1]. The generate and extend computations are orders of magnitude faster than the corresponding generate and test computations. We note that for order 11 graphs, the results reported in [1] are in error: (1) the correct number of order 11 connected graphs is as specified in Table 4 (see sequence A001349 in [24]); and (2) the correct number of connected non-word-representable graphs is as specified in Table 4 (we found 2124 more connected word-representable graphs and verified that they are connected, word-representable, and all non-isomorphic). For n = 12 the generate and extend approach was applied using Clasp 3.1.3 [11]. The generate and test approach is not able to handle this case.
Conclusion
This paper introduces incremental symmetry breaking constraints for graph search problems. We start from the notion of column-wise canonicity introduced in [17], where the authors show that the subgraph of an order n canonical graph on the first k ≤ n vertices is also canonical. We build on this property in two ways. First we show that column-wise symmetry breaking constraints can be computed incrementally. Then, we introduce a generate and extend paradigm where canonical solutions of an order n graph search problem can be obtained using column-wise symmetry breaking constraints by extending canonical graphs of order k < n. We compute, for the first time, a complete and compact lex-leader symmetry breaking constraint for order 12 graphs. We demonstrate the superiority of our generate and extend approach through two hard graph search problems and provide the previously unknown number of order 12 non-word-representable graphs.
There is a large body of work on methods for the generation of non-isomorphic combinatorial objects [19,5]. Still, there remain many open graph search problems which involve surprisingly small graphs. The results obtained in this paper suggest that a constraint-based approach combined with strong symmetry breaking methods will lead to breakthroughs for many of these problems.
Many graph search problems are hereditary: order n solutions can be obtained by extending order k < n solutions.The generate and extend approach can take advantage of this property by extending order k canonical solutions instead of all order k canonical graphs.
The techniques presented in this paper can be adapted for other combinatorial objects, such as matrix search problems [8].
The symmetry breaking constraints described in this paper can be obtained by request from the authors.The symmetry breaking constraints are "solver independent".They can be applied in conjunction with any constraint solver to restrict the search to canonical solutions of a given graph search problem.
Two graphs G, H ∈ G_n are called isomorphic if there exists a permutation π ∈ S_n such that G = π(H).
Table 2. Computation of compact column-wise symmetry breaking constraints for order 8 ≤ n ≤ 12 graphs extending order 7 column-wise canonical graphs.
Table 3. A two-step computation of totally magic graphs for 9 ≤ n ≤ 11 vertices.
Table 4. Numbers of connected non-word-representable graphs computed using a generate and extend (g & e) approach, and a generate and test (g & t) approach. | 2020-03-19T20:04:43.911Z | 2020-04-03T00:00:00.000 | {
"year": 2020,
"sha1": "19bb6d05824f864052b584f642a7ab2731a77429",
"oa_license": null,
"oa_url": "https://doi.org/10.1609/aaai.v34i02.5513",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f473c3fccbf5b46145cb2641a3449e95ff72f548",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
232258151 | pes2o/s2orc | v3-fos-license | Euclidean and chemical distances in ellipses percolation
The ellipses model is a continuum percolation process in which ellipses with random orientation and eccentricity are placed in the plane according to a Poisson point process. A parameter $\alpha$ controls the tail of the major axes' length distribution, and we focus on the regime $\alpha \in (1,2)$, for which there exists a unique infinite cluster of ellipses and this cluster fulfills the so-called highway property. We prove that the distance within this infinite cluster behaves asymptotically like the (unrestricted) Euclidean distance in the plane. We also show that the chemical distance between points $x$ and $y$ behaves roughly as $c \log\log |x-y|$.
Introduction
In this paper we study both the chemical and Euclidean distances in the ellipses model introduced in [16]. It is a Boolean percolation in the plane with defects given by random ellipses centered at the points of a Poisson point process with intensity u > 0. Given the positions of the centers, the eccentricities and orientations of the ellipses are independent. The minor axes always have length one and they make uniformly distributed angles with the horizontal direction. The lengths of the major axes are drawn independently from a heavy-tailed distribution ρ supported on [1, ∞) that satisfies ρ[r, ∞) = c r^{-α} for r ≥ 1. Therefore, while the parameter u controls the number of ellipses appearing in the picture, the parameter α controls how eccentric they are.
In [16], phase transition and connectivity properties of the ellipses model were studied as functions of these two parameters. Here we will focus on α ∈ (1, 2), the regime in which, for any choice of u > 0, there exists a unique infinite cluster of ellipses that, in addition, satisfies what we refer to as the highway property. Roughly, it means that, after scaling, the probability of connecting two regions using a single ellipse becomes close to one.
Let D(x, y) denote the minimum length of a polygonal path from x to y which lies entirely inside the set covered by the ellipses. We call D(·, ·) the Euclidean distance restricted to the set of ellipses, or sometimes the internal distance. Also, for any two points x and y in the infinite cluster of ellipses, denote by D̄(x, y) the chemical distance between them, i.e. the minimum number of ellipses that a continuous path from x to y contained entirely inside the cluster of ellipses has to intersect. The Euclidean distance in the plane, sometimes called the unrestricted Euclidean distance, is denoted by |x − y|.
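The chemical distance is simply the graph distance in the intersection graph of the defects, and can be computed by breadth-first search. The sketch below (our illustration, not from the paper) does this with unit disks instead of ellipses, since only the geometric intersection test would differ.

```python
# Chemical distance as graph distance in the intersection graph of the defects.
# For simplicity the defects here are unit disks rather than ellipses.
from collections import deque

def chemical_distance(centers, r, x, y):
    """Min number of defects a path from x to y must cross, or None."""
    def covers(c, p):
        return (c[0] - p[0])**2 + (c[1] - p[1])**2 <= r * r
    def meet(a, b):
        return (a[0] - b[0])**2 + (a[1] - b[1])**2 <= (2 * r)**2
    start = [i for i, c in enumerate(centers) if covers(c, x)]
    goal = {i for i, c in enumerate(centers) if covers(c, y)}
    dist = {i: 1 for i in start}     # a path already crosses its starting defect
    q = deque(start)
    while q:
        i = q.popleft()
        if i in goal:
            return dist[i]
        for j in range(len(centers)):
            if j not in dist and meet(centers[i], centers[j]):
                dist[j] = dist[i] + 1
                q.append(j)
    return None                      # x and y are not connected

cs = [(0, 0), (1.5, 0), (3, 0), (4.5, 0)]
print(chemical_distance(cs, 1.0, (0, 0), (4.5, 0)))   # -> 4
```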
Understanding the geometric properties of infinite clusters is a problem of major interest in percolation theory. Models for which the chemical distance has been studied include Bernoulli percolation and first-passage percolation [1,11,12]; random interlacements [6]; random walk loop soup [7]; and the Gaussian free field [8,9]. General conditions for a percolation model on Z^d to have a unique infinite cluster in which Euclidean and chemical distances are comparable are provided in [9].
Theorems 1 and 2 show that the ellipses model does not fit the conditions of [9]. This is due to the presence of long ellipses. A similar behavior can be observed in the Poisson cylinders model [18] and in long-range percolation [2,15], as we discuss next.
Comparing with long-range models
The ellipses model is closely related to two other percolation models that allow for arbitrarily long connections: the Poisson cylinders model on R^d and long-range percolation on Z^d. In principle one could try to leverage these relations in order to obtain estimates for the distances in the ellipses model, and indeed some of our results are obtained this way.
We emphasize that the highway property is shared by these three models, with the immediate adaptation that the connection of far-away regions is accomplished using a single cylinder or a single open edge in the Poisson cylinders and long-range models, respectively. The highway property is the main tool to ensure that, in all three models, the distance inside the infinite cluster is asymptotically equivalent to the unrestricted Euclidean distance.
However, the behavior of the chemical distance differs completely in each of these models. Before elaborating on these differences, we give a quick introduction to the Poisson cylinder model and long-range percolation.
Poisson Cylinders. The Poisson cylinders model consists of a random collection of bi-infinite cylinders of radius one whose axes are given by a Poisson point process on the space of all the lines (i.e. affine one-dimensional subspaces) in R^d with d ≥ 3, see [18] for details. Distances within clusters of cylinders were studied in [5] and [14]. In [5] the authors prove that almost surely any two cylinders are linked by a sequence composed of at most d − 2 other intersecting cylinders, implying that the chemical distance is bounded. For the Euclidean distance, on the other hand, in [14] the authors prove a shape theorem showing that if x, y ∈ R^d are points in the infinite cluster then the internal distance between x and y is asymptotically equivalent to |x − y|. One straightforward connection between Poisson cylinders and the ellipses model is to study the intersection of the random cylinders with any given 2-dimensional plane. As shown in [19] this intersection is a collection of ellipses whose law is an instance of the ellipses model with α = 2 when d = 3 and with α > 2 in higher dimensions. Thus, this natural coupling between the ellipses model and the Poisson cylinder model is not helpful to draw conclusions when α ranges in (1, 2).
Long-range percolation. Fix β, s > 0 and consider the bond percolation model, known as long-range percolation, in which for each x ≠ y ∈ Z^d an open edge connects x and y with probability p_{xy} = 1 − exp(−β|x − y|^{−s}). Different expressions for p_{xy} may be considered, but it is usually assumed that it decays roughly as β|x − y|^{−s+o(1)} for some positive β and s.
Let us now explain how long-range percolation and the ellipses model relate to each other. Notice that both models have one parameter that controls the density (u and β, respectively) and another that controls the distribution of long connections (α and s, respectively). Essentially, a discretization of the ellipses model leads to a long-range percolation with parameters satisfying the relations s = α + 2 and β = cu, for a constant c > 0. (3) The coupling is given as follows. Take B := [−1/2, 1/2)^2 and for x ∈ R^2 write B_x := x + B. For a realization of the ellipses model with parameters u and α, associate with every ellipse the two extremities of its major axis. Now embed Z^2 in R^2 in the natural way and define two sites x ≠ y ∈ Z^2 to be ξ-connected, writing x ∼_ξ y, if there is an ellipse whose major axis has one extremity in B_x and the other in B_y. Inserting open edges between pairs of ξ-connected sites leads to a long-range percolation model whose parameters s and β satisfy (3), as we show in Section 2.
We will exploit this coupling to translate results about long-range percolation into results about the ellipses model. However, there are some key points that must be dealt with when comparing connectivity in these models using the coupling described above.
A first issue is that, in some situations, connectivity is favored in the ellipses model. In fact, when two long ellipses cross each other it may occur that the resulting open edges in the long-range model belong to different components. This suggests that connectivity properties in these two models may differ. Indeed, in [16, Theorem 1.2] it is shown that in the ellipses model with α ∈ (1, 2) the covered set percolates for any intensity u > 0. The corresponding long-range percolation (with s ∈ (3, 4) by (3)) does not percolate for sufficiently small β, since when Σ_{z∈Z^2} p_{0z} < 1 the open cluster of the origin is dominated by a subcritical Galton-Watson tree.
A second issue affects connectivity in the opposite direction. Notice that having x ∼_ξ y ∼_ξ z does not ensure that, in the underlying ellipses model, the corresponding ellipses overlap, since it may occur that two ellipses intersect the box B_y without touching each other, see Figure 1. We now present the results about long-range percolation that we use. We refer the reader to [2, Section 1.3] for a summary on the chemical distance for different regimes of s. We will be mainly interested in the case d = 2 and s ∈ (3, 4), which corresponds to the ellipses model with α ∈ (1, 2). Results for any d ≥ 2 and s ∈ (d, 2d) are discussed in the papers [2, 3, 4].
Our estimate for the Euclidean distance in Theorem 1 builds on a construction from [2] that relies on the above mentioned highway property. Let us exemplify this property for the long-range model with d ≥ 2 and s ∈ (d, 2d). Denote |x − y| = N and, for fixed γ ∈ (s/2d, 1), let B := [−N^γ, N^γ]^d. Then, the probability of the event {B_x ↔_1 B_y} that there is an open edge connecting a site in B_x = x + B to another site in B_y = y + B can be estimated using P(B_x ↔_1 B_y) ≥ 1 − exp(−cβN^{2dγ−s}) (4) as N → ∞. The estimate in (4) is at the core of the hierarchical construction from [2] which leads to the main result therein: the chemical distance between two points x, y on the infinite cluster behaves asymptotically as (log |x − y|)^{Δ+o(1)}, where Δ := log 2 / log(2d/s). This same argument shows that D(x, y) ∼ |x − y|, although this is not mentioned in [2]. We present (a simplified version of) their hierarchical construction in Section 3, and use it as a fundamental tool for controlling the Euclidean distance traversed by a path in the ellipses model.
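To make the exponent in (4) transparent, here is a sketch of the standard computation (our own reconstruction; constants are not tracked, and we assume, as in the text, that each box contains of order N^{dγ} sites and that edges open independently with probability of order βN^{−s}):

```latex
% Sketch of the highway estimate (4). Boxes B_x, B_y of side ~ N^\gamma,
% at mutual distance of order N, contain of order N^{d\gamma} sites each,
% so there are of order N^{2d\gamma} candidate pairs (a,b). Each pair is
% linked independently with probability at least c\beta N^{-s}. Hence
\mathbb{P}\left(B_x \leftrightarrow_1 B_y\right)
  \ge 1 - \left(1 - c\beta N^{-s}\right)^{c' N^{2d\gamma}}
  \ge 1 - \exp\left(-c'' \beta N^{2d\gamma - s}\right),
% which tends to 1 as N \to \infty exactly when \gamma > s/(2d).
```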
As we have seen above, in all three models the Euclidean distance restricted to the covered set and the unrestricted distance are asymptotically the same. The chemical distance can be seen as an alternative measure of connectedness for these models and through this lens they behave very differently, presenting different orders of magnitude. For Poisson cylinders the chemical distance is bounded by a constant, for the ellipses model it grows as log log |x − y|, and for the long-range model it grows as (log |x − y|)^Δ. We will see that this discrepancy between the ellipses model and long-range percolation may be explained as a consequence of the first issue above.
Idea of proofs
We need to obtain lower and upper bounds on the distance between points x and y that belong to the same cluster of ellipses.
Our estimates for the lower bounds are simpler to obtain. The lower bound for the internal distance appearing in (1) is simply the unrestricted distance |x − y| in the plane and, although obvious, we do not have any improvement for it. The lower bound for the chemical distance appearing in (2) follows from an elementary induction argument. One could try to improve the bounds using the BK inequality like in the lower bound in [3], but our argument seems to provide the correct order of magnitude in a simpler way.
The proofs for the upper bounds appearing in both Theorems 1 and 2 follow similar strategies. The first step is to show that, with high probability, there exists a set of few overlapping ellipses that allows us to traverse from a local region containing x to another local region containing y without deviating too much. The second step consists of connecting locally the points x and y to this structure. This is the content of a deterministic construction in Lemma 7.
Let us discuss some details of each proof. The proof of Theorem 1 is based on a coupling of the ellipses model and a site-bond long-range percolation model on a renormalized lattice. The probability of an edge being open will be given by the coupling with long-range percolation described above. Only a subset of the underlying Poisson point process defining the ellipses model is used for this coupling. The remaining (independent) part is used for defining a site percolation model on a lattice of renormalized sites that correspond to boxes in the original lattice. Roughly, a site is considered open (or good) if the corresponding box is good, meaning that the cluster of ellipses near this box is sufficiently well-connected. This definition is based on an idea from [1].
On the event that x ↔ y in the ellipses model, the bond percolation part and the site percolation part are then combined to create a short path connecting x to y. This is done in two steps: Hierarchical construction. This is essentially the construction from [2] based on the highway property (4). When |x−y| = N is large, with very high probability, there is an open edge connecting small neighborhoods around x and y. This idea can be iterated to build what we call a hierarchy, see Definition 1 and Figure 2. In words, a hierarchy is a collection of long edges (or highways) that essentially connects x to y, leaving only some gaps that are much shorter than the highways.
Gluing procedure. Given that we have found a hierarchy, the original problem is then replaced by the problem of building connections across the remaining gaps. For that, we use the site percolation part of our coupling. The definition of good boxes will ensure that neighboring good boxes have intersecting clusters of ellipses. Moreover, the renormalization scheme is performed so that the probability of a box being good is highly supercritical. Therefore, even when a gap that we want to cross has some bad boxes around it, we can still go around these bad boxes by paying a low price in terms of distance and probability. This is accomplished through a large deviation bound on the size of bad clusters, see Section 3.2.
After these two steps are completed, we have with high probability a path of ellipses connecting x and y whose length is well-controlled. This establishes the upper bound in Theorem 1.
The reader who is familiar with the hierarchical construction of [2] and the renormalization procedure of [1] may notice that, in our proof of Theorem 1, we define events that are much simpler than the ones appearing in the original constructions. This is possible due to the existence of long overlapping ellipses, a phenomenon with no counterpart in long-range or Bernoulli percolation.
The proof of the upper bound for the chemical distance in Theorem 2 does not rely on the same coupling with long-range percolation as in Theorem 1, since this coupling does not exploit the possibility of using long ellipses to its full potential. Instead, our argument involves choosing a rapidly increasing sequence of rectangles and studying the event that they are crossed in the hardest direction by a single ellipse. By a Borel-Cantelli argument, this construction provides 'enhanced highways' that cross large distances more efficiently.
Remarks on the notation. Throughout the paper we use c, C to denote generic positive constants that can change from line to line. Numbered constants c_0, c_1, c_2, ... are kept fixed. Also, our asymptotic notation uses f ≪ g when f/g → 0, f ∼ g when f/g → 1, and f = Θ(g) when cg ≤ f ≤ Cg.
Couplings, highways and hierarchies
In this section we collect some results from the literature that will be used in the proofs of Theorems 1 and 2.
Ellipses model. The ellipses model is defined via a Poisson point process (PPP) ξ on R^2 × [1, ∞) × [0, π), with intensity measure proportional to u r^{−(α+1)} dz dr dV. (5)
For each point (z, R, V) in the PPP, place an ellipse centered at z whose minor axis has length 1 and whose major axis has length R and forms an angle V with respect to the horizontal direction. The multiplicative parameter u > 0 controls the density of ellipses whereas the exponent α > 0 controls the tail of the major axes' distribution. We refer the reader to [16] for an account of the phase transition for percolation on the covered set with respect to the parameters u and α.
Define the event LR_1(l; k) that an ellipse crosses the box [0, l] × [0, kl] from left to right. The next lemma uncovers the range of parameters in which the ellipses model presents the highway property. In particular, when α ∈ (1, 2) and k is fixed, we have P(LR_1(l; k)) → 1 as l → ∞, showing that the highway property holds in this range of α.
A second useful estimate is a similar bound for the probability that there is an ellipse that traverses an annulus. Let B(l) denote the Euclidean ball of radius l centered at the origin in R^2 and denote its boundary by ∂B(l). For two disjoint regions A_1 and A_2, denote by Γ_12 the set of ellipses intersecting both of them; the bound we need is stated as (7). Proof. See [16, Lemma 6.1]. The estimate for μ(Γ_12) implies (7).
Coupling long-range with continuous model. There is a canonical coupling, mentioned in [15] and used in [4], between the long-range percolation model and a Poisson point process ξ on the set of pairs (x, y) ∈ R^d × R^d with intensity measure μ_{β,s} := β|x − y|^{−s} dx dy. (8) We may interpret each point (x, y) ∈ ξ as giving rise to a segment connecting x and y. This coupling is useful to make the renormalization scaling more transparent. In fact, for a > 0, if ξ′ := {(x/a, y/a); (x, y) ∈ ξ} then the intensities of ξ and ξ′ are related by μ_{ξ′} = μ_{a^{2d−s}β, s}. (9) This scaling property is behind the highway property in case s ∈ (d, 2d), since the intensity appearing on the right-hand side tends to infinity as a grows. Also notice that when s = 2d the model is scale-invariant and there is no hope that a similar property is satisfied in that case.
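As a sanity check on (9), the intensity of the rescaled process can be computed directly by a change of variables; the following sketch assumes the convention above that rescaling divides coordinates by a:

```latex
% Intensity of the rescaled process \xi' = \{(x/a, y/a) : (x,y) \in \xi\}.
% For a test set A \subset \mathbb{R}^d \times \mathbb{R}^d,
% substitute x = au, y = av (so dx\,dy = a^{2d}\,du\,dv):
\mu_{\xi'}(A)
  = \int \mathbf{1}_A\!\left(\tfrac{x}{a}, \tfrac{y}{a}\right)
      \beta\, |x-y|^{-s} \, dx \, dy
  = \int \mathbf{1}_A(u,v)\, \beta\, a^{-s} |u-v|^{-s}\, a^{2d} \, du \, dv
  = a^{2d-s} \, \mu_{\beta,s}(A).
% For s \in (d, 2d) the factor a^{2d-s} diverges as a \to \infty.
```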
For disjoint regions A_1 and A_2 we write A_1 ∼_ξ A_2, and say that A_1 and A_2 are ξ-connected, if some point (x, y) ∈ ξ has one extremity in A_1 and the other in A_2. We say that two sites x ≠ y ∈ Z^d are ξ-connected if B_x ∼_ξ B_y and denote this event by x ∼_ξ y.
Lemma 3 below yields estimates on the probability of connecting two distant boxes and shows that this coupling indeed produces a long-range percolation model.
Proof. We begin by noticing that for x ∈ B(l) and y ∈ B_z(l) the distance |x − y| is of order |z|, with constants depending only on l. Thus, when |z|_∞ > l we can write P(B(l) ∼_ξ B_z(l)) = 1 − exp(−μ_{β,s}(B(l) × B_z(l))) ≍ β|z|^{−s}, with implied constants depending on d, s and l. Also, when |z|_∞ = l one can verify that P(B(l) ∼_ξ B_z(l)) = 1. The fact that the boxes B(l) and B_z(l) must share at least a corner will imply the integral diverges for s ∈ (d, 2d).
Also, if we restrict our intensity measure to only allow for segments whose lengths are larger than some fixed value κ > 0, say μ_{β,s} := β|x − y|^{−s} 1_{{|x−y|>κ}} dx dy, we get a model in which nearest neighbors are no longer connected with probability 1, but that has the same behavior on long edges.
Change of variables and ellipses model. Now, let us restrict ourselves to the case d = 2. Here we use a change of variables to verify that the PPP's with intensity measures (8) and (5) may be viewed as reparametrizations of each other.
Instead of parametrizing a line segment in R^2 by specifying its endpoints x and y, we can use its middle point z = (z_1, z_2), its radius R and the angle it forms with a given direction, V. This change of variables is given by Ψ: (x, y) ↦ (z, R, V) with z = (x + y)/2 and R = |x − y|/2. It is straightforward to check that the Jacobian matrix J of the inverse map satisfies |det J| = 4R. The usual parametrization of ellipses percolation is based on the measure (5), proportional to u r^{−(α+1)} dz dr dV. Comparing these measures, we can relate the parameters β, s used in the endpoint parametrization with the u, α parametrization of the ellipses model, which leads to the relations in (3): s = α + 2 and β = cu. Using relation (3) we see that Lemma 3 can also be used to estimate connection probabilities in the ellipses model.
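For the reader's convenience, the change-of-variables computation can be spelled out as follows; this is our own reconstruction, and the multiplicative constants relating β and u are not tracked:

```latex
% Endpoints versus (center, radius, angle). With e(V) = (\cos V, \sin V),
%   x = z + R\,e(V), \qquad y = z - R\,e(V),
% and a direct computation gives |\det J| = 4R for the Jacobian of this
% inverse map (z, R, V) \mapsto (x, y). Pushing the measure (8) forward:
\beta\, |x-y|^{-s} \, dx \, dy
  = \beta\, (2R)^{-s} \cdot 4R \, dz \, dR \, dV
  = c\, \beta\, R^{1-s} \, dz \, dR \, dV .
% Matching with the ellipses intensity, proportional to
% u\, R^{-(\alpha+1)}\, dz\, dR\, dV, forces
1 - s = -(\alpha + 1) \iff s = \alpha + 2,
\qquad \beta = c\, u .
```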
Hierarchical construction. Consider long-range percolation on Z^d with parameters β and s. The highway property in (4) ensures that, for fixed γ ∈ (s/2d, 1) and |x − y| =: N large, there is an open edge connecting points in neighborhoods of size N^γ around x and y with high probability. This idea can be iterated to build what we call a hierarchy, see Definition 1 and Figure 2. When a hierarchy exists, the problem of finding a path from x to y can be replaced by finding paths between well-separated pairs of points that are, however, much closer than the original pair (x, y). This construction, introduced by [2], is reproduced below.
We use σ ∈ {0, 1}^k to encode the leaves of a binary tree of depth k, by considering that ∅ is the root vertex, and 0 and 1 denote the left and right children of ∅, respectively. We append digits to the right of a word σ ∈ {0, 1}^k in order to create longer words, e.g., σ1 ∈ {0, 1}^{k+1} is the word that encodes the right child of σ.
Definition 1 (Hierarchy). For n ≥ 1 and x, y ∈ Z^d we say that a collection H_n(x, y) := {z_σ ∈ Z^d : σ ∈ {0, 1}^k, 1 ≤ k ≤ n} is a hierarchy of depth n if 1. z_0 = x and z_1 = y; 2. z_{σ00} = z_{σ0} and z_{σ11} = z_{σ1} for all σ ∈ {0, 1}^k with k ≤ n − 2; 3. for all σ ∈ {0, 1}^k with k ≤ n − 2, the edge between z_{σ01} and z_{σ10} is open.
Note that the definition of a hierarchy does not take into account the distances between the points z σ .
It will be useful to think of hierarchies as being constructed successively. In view of the computation in (4), in the first step we may try to link a pair of sites z_{01} and z_{10} that belong to neighborhoods of size roughly N^γ around z_0 and z_1, respectively (recall that we are assuming γ ∈ (s/2d, 1) as in the paragraph above (4)). Having succeeded to do so in the first k steps, for each σ ∈ {0, 1}^k we try to link z_{σ01} and z_{σ10} belonging to neighborhoods of size roughly N^{γ^k} around z_{σ0} and z_{σ1} respectively. Ideally, when we reach depth n we will be left with 2^{n−1} gaps, which are pairs of sites of type (z_{σ00}, z_{σ01}) or (z_{σ10}, z_{σ11}), with the sites in each pair at a distance of order N^{γ^n}. The reader may consult Figure 2 for an illustration of this iterative procedure. Note that, by the discrete nature of the long-range model, the procedure cannot be iterated indefinitely.
Figure 2: Hierarchy H_n(x, y) provides a collection of highways connecting all pairs (z_{σ01}, z_{σ10}) with σ ∈ {0, 1}^{n−2}. To ensure x is connected to y it suffices to connect the remaining 2^{n−1} gaps, that are either of the form (z_{σ00}, z_{σ01}) or (z_{σ10}, z_{σ11}).
The above discussion motivates the definition of the event B_n(x, y) that there is a hierarchy H_n(x, y) of depth n satisfying, for all 0 ≤ k ≤ n − 2 and all σ ∈ {0, 1}^k, that ½N_k ≤ |z_{σ0} − z_{σ1}| ≤ N_k, where N_k := N^{γ^k}. (12) The following lemma is a simplified version of [2, Lemmas 4.2 and 4.3] and provides appropriate choices of parameters so that the above idealized picture is achieved with high probability.
Lemma 4 (Hierarchy). Fix ε > 0 and γ ∈ (s/2d, 1). For x, y ∈ Z^d and N := |x − y|, let n ∈ N be the greatest positive integer satisfying the constraint in (13). There are N′(ε, γ, d) and b = b(d) ∈ (0, 1) such that, if N ≥ N′, then any hierarchy H_n(x, y) satisfying (12) obeys the bound in (14). Moreover, there is a positive constant c = c(β, d, s) such that the failure probability P(B_n(x, y)^c) satisfies (15). Remark 1. Let Δ′ := log 2 / log(1/γ). The definition of n yields two-sided bounds on 2^n in terms of powers of log N, stated in (16). The lower bound follows from the definition of n and N_n, while the other inequality in (16), namely 2^n ≤ (log N)^{Δ′}, follows from n log(1/γ) ≤ log log N.
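The upper bound in (16) is a one-line computation, reproduced here for convenience:

```latex
% From n \log(1/\gamma) \le \log\log N and \Delta' = \log 2 / \log(1/\gamma):
n \le \frac{\log\log N}{\log(1/\gamma)}
\quad\Longrightarrow\quad
2^{n} = e^{n \log 2}
  \le \exp\!\left(\frac{\log 2}{\log(1/\gamma)} \, \log\log N\right)
  = (\log N)^{\Delta'} .
```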
To obtain (15) we partition B_n^c according to the first depth k at which we fail to find highways. For that value of k the event B_{k−1} occurs and there is a hierarchy H_{k−1} satisfying (12). Let us fix a gap in H_{k−1}, which is either of the form (z_{σ00}, z_{σ01}) or (z_{σ10}, z_{σ11}) with σ ∈ {0, 1}^{k−3}. For the corresponding pair of neighborhoods, none of the edges between these neighborhoods may be open. By (12) these neighborhoods are centered at sites whose distance belongs to [½N_{k−2}, N_{k−2}]. Moreover, each neighborhood has cN_{k−1}^d vertices. A straightforward adaptation of the argument leading to (4), applied at scale N_{k−2}, shows that the probability of not finding an open edge linking a fixed pair of neighborhoods is bounded above by exp(−cN_{k−2}^{2dγ−s}), where c = c(β, d, s) > 0. Since there are 2^{k−2} pairs of neighborhoods, summing over k we obtain (15); the last bound in (15) follows from (16).
Bounding the Euclidean distance
Using the hierarchical construction from Biskup [2] we obtain a collection of highways that provides the main contribution for finding open paths between two distant sites. However, the remaining gaps must still be connected in order to find an open path between the original two points that actually uses these highways. In [2] this is accomplished by requiring that the vertices z_σ in H_n(x, y) belong to sufficiently large but local clusters. For our model we adopt a similar but distinct strategy.
The idea is to adopt a hybrid approach, considering a renormalized lattice on which we define a site-bond percolation model. The bond percolation part will be coupled to a long-range percolation model. Independently of this bond percolation part, we define a site percolation that will be used to glue together these highways, using an idea from Antal and Pisztora [1].
Renormalization scheme
We begin by describing the renormalization procedure. Partition R^2 into a collection of boxes B_x := Kx + [−K/2, K/2)^2, x ∈ Z^2. The exact choice of K will depend on the parameters u, α of the model and is deferred till Lemma 5. Each box B_x is assigned an enlarged box B′_x and a core B″_x. We say that the box B_o is good if the event in (18) occurs, as well as the three similar events resulting from (18) by rotations by π/2, π and 3π/2 around the origin, see Figure 3. The events {B_x is good} are defined analogously; in words, a box B_x is good if it is enclosed by a well-positioned circuit of overlapping ellipses contained in its enlarged box, see Figure 3. We also say that a site x is good if its respective renormalized box, B_x, is good. If B_x is good, we denote by O_x a circuit of ellipses that realizes such event, chosen according to some predetermined rule.
It follows from our construction that ω_x and ω_{xy} are independent processes, since they are defined in terms of the PPP ξ restricted to disjoint regions of R^4. For the same reason, (ω_x) is an independent (Bernoulli) site percolation process on Z^2.
Our definition of a good box is close to the definition of good boxes used in [1]. Essentially, it ensures that in a cluster of good boxes one is able to move from one box to a neighboring one remaining inside the covered set. This holds not only when moving along the coordinate directions (to a box that shares a side) but also when moving diagonally (to a box that shares a single vertex). We briefly discuss this notion of connectivity now, introducing notation that is very similar to that of [1].
Figure 3: On the left, a good box together with its enlarged box; we also highlight its core B″_x. On the right, we emphasize that *-neighboring good boxes must have their respective outer circuits interlaced. Outer circuits and cores of a same box are shown in matching colors to help visualization.
Given a configuration ω ∈ {0, 1}^{Z^d}, we say that a site x ∈ Z^d is good if ω_x = 1. Otherwise x is said to be bad. Denote by C*_x the bad cluster (with respect to the *-neighboring relation) containing x. We use the convention that C*_x = ∅ if x is good. For a finite subset Λ ⊂ Z^d define its outer and inner boundaries by ∂_o Λ := {x ∉ Λ : x is a *-neighbor of some y ∈ Λ} and ∂_i Λ := {x ∈ Λ : x is a *-neighbor of some y ∉ Λ}, respectively. We use the convention that ∂_o C*_x = {x} when x is good. For Λ finite, its complementary set Λ^c contains a finite number of connected components Λ_1, ..., Λ_k. Exactly one of them, say Λ_1, is infinite; the other ones, if any, are called holes. When holes exist, we define Λ̄ := Λ ∪ Λ_2 ∪ ... ∪ Λ_k, which may be regarded as the result of filling all holes in Λ. We also define the external outer boundary and the external inner boundary of Λ respectively as ∂_e^o Λ := ∂_o Λ̄ and ∂_e^i Λ := ∂_i Λ̄. This is important because whenever we find a region composed of bad sites, we can go around that bad region using its exterior boundary of good sites.
On the renormalized model. Recall that the PPP ξ can be parametrized using either (s, β) or (u, α), by (3). Lemma 5 below collects the properties of the renormalized model that we will use. Its first property concerns the choice of parameters: P1. Given p ∈ (0, 1) and β_0 > 0, the scale K = K(u, α, p, β_0) can be chosen so that each box is good with probability at least p, while the ξ-connections between cores dominate a long-range percolation model with parameter β ≥ β_0.
P2. If W is a *-connected set of good sites then all the surrounding circuits O_x, x ∈ W, are contained in the same connected component of ellipses.
P3. If C*_x is finite, then ∂_e^o C*_x is a *-connected set of good sites.
Proof. Property P2 is a straightforward geometric consequence of the definition (see Figure 3) and Property P3 follows from (19).
We now prove Property P1. Denote by A_0 the event in (18) and by A_i, for 1 ≤ i ≤ 3, the three similar events resulting from (18) by rotations by π/2, π and 3π/2 around the origin, respectively. Since ξ is invariant with respect to translations and rotations, any A_i has probability P(A_i) ≥ 1 − exp(−cβK^4 max{|x − y|}^{−s}), where the maximum runs over all points x and y in the first and second boxes, respectively. The maximum is a constant multiple of K, so we can write P(A_i) ≥ 1 − exp(−cK^{4−s}) for some constant c = c(β, s) > 0, which tends to 1 as K grows since we are assuming s ∈ (3, 4). Then, the FKG inequality implies P(B_o is good) also tends to 1.
For the probability of an edge being open, we notice that the event B″_x ∼_ξ B″_y is obtained by scaling by K the analogous event for boxes of unit order. By (9) we can relate the probability of B″_x ∼_ξ B″_y with that of the event in (20) under a rescaled long-range model whose intensity can be made as high as we want by increasing K. This completes the proof of Property P1.
Gluing highways
Given two fixed sites x, y ∈ R 2 , Lemma 4 roughly states that, for the long-range model in the renormalized lattice, hierarchies exist with very high probability. On the event x ↔ y we want to use the highway structure entailed by one of these hierarchies in order to find a path that connects x to y efficiently.
For z ∈ R^2, let a(z) ∈ Z^2 be the unique site in Z^2 such that z ∈ Ka(z) + [−K/2, K/2)^2. The distance between the original points x and y and the distance between their respective counterparts a(x) and a(y) in the renormalized lattice can be compared as |a(x) − a(y)| = |x − y|/K + O(1). Here the L^2-norm could be replaced by any other norm on R^2. Lemma 4 implies that B_n(a(x), a(y)) has probability close to 1 (provided that N is large and n satisfies (13)). Conditional on B_n, we can find a collection of sites z_σ, σ ∈ {0, 1}^n, together with the endowed highway structure connecting some of them. If there is more than one choice, just pick one of them according to a predetermined rule.
We still have to ensure that all the remaining gaps, that is, all the 2^{n−1} edges of type (z_{σ00}, z_{σ01}) or (z_{σ10}, z_{σ11}) for σ ∈ {0, 1}^{n−2}, are connected with high probability. This can be done with the aid of an argument from [1].
Moreover, (14) guarantees that the highways of the form (z_{σ01}, z_{σ10}) with σ ∈ {0, 1}^k and 0 ≤ k ≤ n − 2 have length |z_{σ01} − z_{σ10}| = Θ(Ñ_k). From the point of view of our original ellipses model, a highway connecting z_{σ01} and z_{σ10} represents an actual ellipse E_σ that realizes the event B″_{z_{σ01}} ∼_ξ B″_{z_{σ10}}, and consequently the number of renormalized boxes intersected by E_σ lies in [c_2 Ñ_k, c_2^{−1} Ñ_k] for a positive constant c_2 that will remain fixed from now on. Also, the site percolation process (ω_x) is independent of the collection z_σ, σ ∈ {0, 1}^n.
For each gap (z_{σ0}, z_{σ1}), with σ ∈ {0, 1}^{n−1}, write m_σ := |z_{σ0} − z_{σ1}|_1 and fix a deterministic path (according to a predetermined rule) of m_σ + 1 neighboring sites z^{(σ)}_0 = z_{σ0}, z^{(σ)}_1, ..., z^{(σ)}_{m_σ} = z_{σ1} that realizes this distance. Recall that C*_z denotes the *-connected cluster of bad sites containing z and define C̄*(z) := C*_z ∪ ∂_o C*_z. We look at the random subset of R^2 composed of the boxes associated to the bad clusters of the sites along the path (z^{(σ)}_j), namely W_σ := ∪_j ∪_{z ∈ C̄*(z^{(σ)}_j)} B_z. Denoting by #W_σ the number of sites in the renormalized lattice that one needs to explore to find W_σ, we have (Lemma 6): for every a > 0 there exists c_3 = c_3(a, p) > 0 such that the tail bound in (26) holds. Proof. We have #C*_z = 1 when z is good. Otherwise, since each site of C*_z has at most 8 neighbors, #C̄*(z) ≤ 9#C*_z. Also, using an argument from [1] based on a previous construction in Fontes and Newman [10] (see the proof of Theorem 4), if (C̃*_z)_{z∈Z^2} is a collection of independent random subsets of Z^2 with each C̃*_z distributed as C*_o, then (C̃*_z) stochastically dominates (C*_z). Defining Y_z := #C̃*_z, we have that (Y_z, z ∈ Z^2) are i.i.d. random variables with the same distribution as #C*_o, and, writing Y_j := Y_{z^{(σ)}_j}, the quantity #W_σ is bounded by a constant multiple of Σ_j Y_j. Notice that C*_o is a *-cluster of bad sites in a Bernoulli site percolation of parameter p, and by Lemma 5 we can start the construction with p sufficiently close to 1 so that the probability of a site being bad, 1 − p, is subcritical, and then choose K(u, α, p, β_0) accordingly. Exponential decay of the cluster size (see e.g. [13, Theorem (6.75)]) yields ψ(p) > 0 such that h(p) := E[e^{ψ(p)Y_j}] < ∞. Hence, for any fixed σ ∈ {0, 1}^{n−1}, an application of Markov's inequality yields the single-gap estimate, and by (22) the bound extends along the whole path. The estimate in (26) then follows from a union bound and the bounds obtained in (16).
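The application of Markov's inequality mentioned above is the usual Chernoff-type bound; a sketch, with ψ = ψ(p) and h = h(p) as in the proof and a threshold t proportional to m_σ:

```latex
% Chernoff-type bound for the i.i.d. sum S := Y_0 + \dots + Y_{m_\sigma}:
\mathbb{P}(S > t)
  = \mathbb{P}\left(e^{\psi S} > e^{\psi t}\right)
  \le e^{-\psi t} \, \mathbb{E}\left[e^{\psi S}\right]
  = e^{-\psi t} \, h^{\, m_\sigma + 1} .
% Choosing t = a'(m_\sigma + 1) with \psi a' > \log h makes the right-hand
% side exponentially small in m_\sigma, uniformly over the gaps \sigma.
```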
Recall the definition of c_2 in (24) and take a = c_2/3 in Lemma 6. Define the event W_n on which #W_σ ≤ aÑ_{n−2} for every σ ∈ {0, 1}^{n−1}. We have P(B_n ∩ W_n^c) ≪ 1, meaning that with high probability every W_σ is too small to contain any highway. Fix some σ ∈ {0, 1}^{n−1}. By P2 and P3 in Lemma 5 we can use the external inner boundary ∂_e^i W_σ, a *-connected set of good boxes, to glue together the highways that arrive at z_{σ0} and z_{σ1}; the procedure is illustrated in Figure 4.
Suppose that we know that {x ↔ y} ∩ B_n(a(x), a(y)) ∩ W_n has occurred and fix a path P connecting x to y. Although P can be arbitrarily long, after the gluing process we can build a path P′ from x to y whose length is controlled. Let 0, 1 ∈ {0, 1}^{n−1} be the all-zeroes and all-ones words, respectively.
Definition 2.
Path P′ is defined as follows: 1. Follow P from x till it hits the first outer circuit O_z of a good box B_z, with z ∈ ∂_e^i W_0. 2. When P′ first gets to ∂_e^i W_σ with σ ≠ 1, use outer circuits to move towards the next highway. 3. When P′ arrives at a highway, move in a straight line till intersecting the next ∂_e^i W_σ. 4. When P′ gets to ∂_e^i W_1, use outer circuits to move to the last point of P that intersects a circuit O_z in ∂_e^i W_1 and then use P to move to y.
Figure 4: Region W_σ (light gray) explores bad boxes (dark gray) on a deterministic path of boxes (thick lines) and ∂_e^i W_σ is made of good boxes. When #W_σ is small, the highways arriving at B_{z_{σ0}} and B_{z_{σ1}} can be connected through ∂_e^i W_σ.
We have good estimates for the length of path P ′ when moving on highways or when using outer circuits of some W σ . However, some parts of P ′ could be wiggly (when following along P) and that could possibly add a considerable amount to the total length. The next lemma allows us to improve the estimate on the length of a path inside the covered set E when we move inside a bounded region.
Lemma 7 (Distance on small scales). Let W ⊂ R^2 be a bounded connected set and let x ∈ W. If x ↔ ∂W, then x is connected to ∂W by a path that intersects at most c·Leb(W) ellipses, (28) and consequently D(x, ∂W) ≤ c·Leb(W)·diam(W). (29) Proof. Denote by {e_i; 1 ≤ i ≤ m} the set of all ellipses that intersect W, which is almost surely finite since W is bounded. Since x ↔ ∂W there is some point y ∈ ∂W that can be reached from x by a path contained in E. Take a path P that connects x and y without self-intersections. For any fixed ellipse e used by P, if P ∩ e is not a straight line we can reduce the length of P by connecting its first and last visit to that ellipse directly. This modified path may intersect ∂W before reaching y, but in this case we simply replace y by the first point of ∂W that was reached. Thus, we can restrict ourselves to polygonal paths.
Let f: [0, 1] → R^2 be a continuous and injective parametrization of P with f(0) = x and f(1) = y, and define I_j = f^{−1}(e_j). By the properties of P, we know that each I_j is a closed interval and that [0, 1] = ∪_{j=1}^m I_j. We build a minimal set of ellipses that covers P by doing a greedy exploration. We can assume that x ∈ e_1 and define i_1 := 1. Then, inductively define i_{j+1} as the index of an ellipse that intersects e_{i_j} and with rightmost point of I_{i_{j+1}} closest to 1. Since we have a finite collection, the process ends on some index i_n; relabeling if necessary, we can consider i_j = j for 1 ≤ j ≤ n.
By construction we have that P ⊂ ∪_{j=1}^n e_j and each e_j only intersects e_{j−1} and e_{j+1}. Finally, by the same reasoning as in the beginning of the proof we can assume that e_n is the first ellipse to intersect ∂W. We can bound n by using the fact that {e_i; i odd, 1 ≤ i ≤ n − 1} and {e_i; i even, 1 ≤ i ≤ n − 1} are disjoint collections inside W. Since each e_i contains a ball of radius 1, we have that n ≤ c·Leb(W), and we proved (28). The bound in (29) follows by using that on each e_i the length of P is bounded by diam(W).
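The counting at the end of the proof is a volume (packing) argument; here is a sketch, under the simplifying assumption that the disjoint unit balls provided by the ellipses can be packed into a bounded enlargement of W:

```latex
% Packing argument. The odd-indexed ellipses e_1, e_3, ... are pairwise
% disjoint, and so are the even-indexed ones; each meets W and contains a
% ball of radius 1. Writing W^{+} for a bounded enlargement of W that
% contains all such balls, each parity class packs disjoint unit balls:
\frac{n-1}{2} \cdot \pi \le \mathrm{Leb}\left(W^{+}\right)
\quad\Longrightarrow\quad
n \le c \, \mathrm{Leb}\left(W^{+}\right),
% which is the content of (28), up to the value of the constant c.
```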
Proof of Theorem 1
We now have all the ingredients to bound the Euclidean distance between distant points inside a same cluster of ellipses.
Proof of Theorem 1. We can assume δ ∈ (0, 1 − (2+α)/4) and define γ := (2+α)/4 + δ. We analyze the probability of D(x, y) being large by decomposing this event with respect to B_n(a(x), a(y)) ∩ W_n; it suffices to control the length of a suitable path on the event {x ↔ y} ∩ B_n ∩ W_n, since both P(B_n^c) and P(B_n ∩ W_n^c) tend to zero with N by Lemmas 4 and 6, respectively. On the event {x ↔ y} ∩ B_n ∩ W_n there is a path P between x and y and we use P to build a path P′ as in Definition 2.
Using Lemma 7 we replace the parts of P′ that use P in steps 1. and 4. by a path satisfying the bound in (29). Actually, we do not lose much by applying (29) at every W_σ, since on the event W_n every region W_σ is small. The length of P′ can then be estimated by the sum of the contributions of the regions W_σ, each bounded via (29), plus the sum of the lengths of the highways, where the index in the second sum runs over all highways. Since there are 2^{n−1} gaps, the bounds in (16) imply that the first sum is of lower order. For the second sum, notice that in the long-range model, for each 0 ≤ k ≤ n − 2 there are 2^k highways of size about Ñ_k, so the second sum is dominated by its k = 0 term, which is of order N. Since γ = (2+α)/4 + δ and δ can be taken arbitrarily small, we conclude the proof of (1).
Bounding chemical distance
Now we turn to investigating the chemical distance. The same construction as in the proof of Theorem 1 also provides an upper bound for the chemical distance between x and y, since it implies a bound of polylogarithmic order in |x − y| (see (30)). However, we can actually achieve a better bound. In fact, although our collection of highways provides a structure of long ellipses that links far away points efficiently in terms of their Euclidean distance, the optimal strategy to minimize the chemical distance might differ. An improvement of the bound in (30), stated in (31), is given in the next result. Proof. The main construction for this bound is a way of moving faster than through highways, see Figure 5. This construction has no counterpart in the discrete long-range model, since it leverages the property that two ellipses that cross in their middle section are connected.
We consider a sequence (l_n; n ≥ 0) of increasing lengths which is defined recursively by l_n = l_{n−1}^{2/α} (log l_{n−1})^{−1}. The value of l_0 is fixed later. Consider also an associated collection of boxes B_n and define the event A_n in which the box B_n is crossed in its longest direction by one ellipse. By Lemma 1 and our choice of the sequence (l_n) we obtain an explicit bound on Σ_n P(A_n^c). Now, we check that this series converges by estimating the growth rate of the sequence (l_n). Notice that l_n ≤ l_{n−1}^{2/α}, so that log l_n ≤ (2/α)^n log l_0; computing the sums, one obtains as n → ∞ that for some large l_0 = l_0(α) the coefficient in curly brackets can be estimated from below by 2 + o(1), which implies that the series indeed converges. This means we can make P(∩_{n≥n_0} A_n) arbitrarily close to one by taking n_0 sufficiently large. Notice that on this event we move faster than when using highways, since we can get from B_{n_0} to distance l_n using only n − n_0 ellipses.
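To see the double-exponential growth quantitatively (it is ultimately what produces the log log bound), one can iterate the recursion while ignoring the logarithmic correction; a sketch:

```latex
% Iterating l_n = l_{n-1}^{2/\alpha} (\log l_{n-1})^{-1} and dropping the
% logarithmic correction gives, upon taking logarithms,
\log l_n \le \left(\tfrac{2}{\alpha}\right)^{n} \log l_0 ,
% and a lower bound of comparable order holds for large l_0, so l_n grows
% doubly exponentially (recall 2/\alpha > 1). Reaching distance R from
% B_{n_0} therefore takes only
n \asymp \frac{\log\log R}{\log(2/\alpha)}
% crossings, i.e. O(\log\log R) ellipses, in line with (31).
```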
Besides faster highways, we also build a useful collection of circuits. Let U_n^0 be the event in which the box [−2l_n, 2l_n] × [l_n, 2l_n] is crossed in its longest direction by one ellipse and let U_n^j be the analogous events obtained by rotating this box counterclockwise by j·π/2, j = 1, 2, 3. Defining the events C_n := ∩_{j=0}^{3} U_n^j, we have by Lemma 1 that Σ_n P(C_n^c) < ∞. Moreover, on C_n we have a circuit made of four ellipses, whose supporting lines form a convex quadrilateral Q_n that surrounds [−l_n, l_n]^2 but stays inside [−2l_n, 2l_n]^2. Now we are ready to prove (31). Without loss of generality, we can assume y is the origin. Fix any ε > 0 and choose n_0 sufficiently large such that P(∩_{n≥n_0} A_n) ≥ 1 − ε and P(∩_{n≥n_0} C_n) ≥ 1 − ε.
Let us also define A_n(x) and C_n(x) as the events analogous to A_n and C_n but considering that x is the origin. Thus, if we define the event V := ∩_{n≥n_0} (A_n ∩ C_n ∩ A_n(x) ∩ C_n(x)), we can write P(V) ≥ 1 − 4ε. On the event {o ↔ x} ∩ V we have some path P of ellipses connecting o to x. For |x| > 2l_{n_0} the path P intersects the quadrilaterals Q_{n_0} and Q_{n_0}(x).
Finally, notice that the event ∩_{n≥n_0} A_n ensures that Q_{n_0} is connected to Q_{n_1} by a path P̄ of at most n_1 ellipses. We can also find a path P̄(x) of at most n_1 ellipses connecting Q_{n_0}(x) to Q_{n_1}, when we consider the event ∩_{n≥n_0} A_n(x). Thus, we can bound the chemical distance of o and x by the number of ellipses in the following path P′: (i) Move from o to Q_{n_0} using the minimal number of ellipses and then follow the circuit Q_{n_0} till meeting P̄ ∩ Q_{n_0}.
(ii) Move from P̄ ∩ Q_{n_0} to P̄ ∩ Q_{n_1} and then follow the circuit Q_{n_1} till you meet P̄(x) ∩ Q_{n_1}. Move from P̄(x) ∩ Q_{n_1} to P̄(x) ∩ Q_{n_0}(x).
(iii) Follow the circuit Q_{n_0}(x) till you meet a point of Q_{n_0}(x) connected to x by a path inside Q_{n_0}(x) that uses a minimal number of ellipses.
Lower bound for chemical distance
The argument from [3], due originally to Trapman [17], cannot provide a lower bound for the chemical distance here, since we already have an upper bound for $\mathcal{D}(x, y)$ of order log log |x − y|. It is possible to employ a similar strategy based on the BK inequality [20, 21], but here we are able to use a more elementary approach.
Figure 5: Construction of the short path P′, depicted by a zigzag line. On the right, we show the event ∩_{n≥n_0} A_n, in which we have an 'improved highway'. On the left, the improved highways P̄ and P̄(x) are connected to quadrilaterals Q_{n_0}, Q_{n_0}(x) and Q_{n_1}(x) to form P′.
For 0 < l_1 < l_2, we make a slight abuse of notation by denoting the chemical distance between the sets B(l_1) and ∂B(l_2) by $\mathcal{D}(l_1, l_2)$. Instead of working with $\mathcal{D}(o, x)$ directly, we investigate the quantity $\mathcal{D}(1, |x|)$. | 2021-03-18T01:16:22.507Z | 2021-03-17T00:00:00.000 | {
"year": 2021,
"sha1": "c424418a521be7f11e2034c6e0185dd1e09fa8d5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c424418a521be7f11e2034c6e0185dd1e09fa8d5",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
10060872 | pes2o/s2orc | v3-fos-license | The Level of Participation during the Development of a Mobile Application for Home-based Healthcare Data in a Developing Context: an Actor-network Theory Perspective
The context of this study is home-based healthcare in a South African resource-restricted community. The research case involved the design and development of a mobile care data application, created to assist community caregivers in their professional activities. However, the development principles of a suitable mobile application for feature phones (limited functionality) in this context are not fully established. A participatory design approach was employed using a design science research in information systems strategy. Data was collected during the co-design sessions with the active participation of the caregivers to design and develop a suitable mobile application to capture, process and report care data. The activities of the caregivers in practice and the design and development activities were observed. It was observed, however, that the level of participation of all stakeholders differed significantly during the process. It was especially observed that the designer and end-users were less involved in the actual development of the prototype. These differences may have an influence on the end product/result. Actor-network theory (ANT) was used to offer a new perspective on the development processes, concepts and structures. ANT has not been used extensively in development studies and may provide a mechanism to describe how the human and non-human actors form relations as they participate in these processes through translation moments. ANT also considers the 'black-boxed' aspect of IT artefacts during development as a single node of the network that may need to be opened up. Considering the alignment of such networks, the coordination, devices and passages during the four translation moments provide valuable insights into the design and development of technology products. This paper will consider these elements in more depth. The socioeconomic factors of the developing context influenced the complex socio-technical development of the mobile application. The role of technology artefacts in assisting with the development of new IT artefacts is more complex in a developing context, since there are not enough mobile artefacts that could be used as examples to guide the developers. This aspect, as well as the lower digital literacy of the end-users, influences their level of participation during the design and development phases. There seems to be a large gap between mobile development in the global North versus that in the global South.
INTRODUCTION
The adoption of a participatory approach for the development of a mobile application in a resource-restricted setting is challenging. The respective users in this environment are mostly exposed to feature phones with limited functionality [1]. Moreover, there were relatively few advanced mobile applications available that could serve as examples of how to develop interventions in such cases. This paper discusses the design and development of a practical application for use in a South African home-based healthcare setting. This application was envisioned to assist caregivers with capturing, reporting and sharing of patient data. The investigators made use of a participatory design approach in the initiation and creation of the application prototype.
The approach followed by this research study was to introduce technology only if there is a need for it and then only with the active participation of the end-users. The perspective is one of social embeddedness, which assumes the construction of new techno-organisational structures within a given local social context [3]. The focus of this paper is not on the potential benefit of the technology solution but rather to reflect on the level of participation during the design and development phases. In development, and especially with information communication technology (ICT), the participatory methods should be framed by discourses on the social embeddedness of ICT, with a focus on the importance of local factors in technology appropriation rather than just on ensuring product quality and relevance [4]. After reviewing the literature of co-designing with communities, David, Sabiescu & Cantoni [4] identified five themes relevant for co-design with development, namely stakeholders, context, ownership, social learning and sustainability. Of note is social learning as a process of knowledge advancement through exchanges as the different stakeholders interact during the co-design [4]. When there is a large design-reality gap, the proposed ICT system will most probably fail since it will not function in practice as anticipated [5]. Heeks suggests [5] that the participation of the local users; an appropriate technology mix based on the local context; alignment to local development goals; and consideration of project risks during the design should result in more successful ISD4D projects.
Caregivers participated satisfactorily within the initial product design phase. However, they were mostly passive during the development phase of the application. These observations of the extent of participation by the different actors introduced an intellectual puzzle: what was different between the design and development phases? The author therefore considers 'design' and 'development' as the two primary phases of the application prototype. Other project phases (e.g., an initial needs analysis and the final testing phase) are considered elsewhere as discrete cycles [6] [7]. The approach of Diaz Andrade and Urquhart [8] was selected for analysing the design and development phases, i.e., the translation process. Based on this approach, the objective of this paper is to investigate whether the actants' interests were sufficiently considered throughout the design and development phases of the mobile care data application. Secondly, the author considers whether the modus operandi of the project was appropriate in order to establish the networks of participation in a developing context.
Actor-network theory (ANT) was considered as a possible analytical lens to establish some insights into the formation of stable networks aligned around mutual interests. It was decided to separate the design and development phases to focus on the differences between these two phases, since the level of participation changed after the first phase. The symmetry aspect of ANT provides a possibility to follow both human and non-human actors, referred to as 'actants', as equal participants. ANT is a descriptive lens, telling the stories of 'how' relations between actants assemble or not [9].
Actor-network theory (ANT) has been used in information systems research but not much in development studies [2]. Heeks suggests [2] that ANT can provide new insights by describing processes in detail to study the emergence of actor-network structures, as well as by allowing non-humans an active materiality to expose the role they play in development. This will help to understand agency, process and relations among development actors better. It is this aspect of ANT that is used in this study to understand the role of the non-humans, namely the IT artefacts, during the design and development of the mobile application, in terms of their level of participation and their responses during the translation moments.
This study draws from social and artificial sciences where, in the case of social sciences, knowledge creation is subjective and about human behaviour. The caregivers will use their knowledge and common sense during their care activities by giving meaning to them whilst responding to the context of their environment, in this case typical of a developing context. The concept of artificial sciences, introduced by Simon [10], is knowledge about how things could be ('utility') through a design process, as opposed to natural sciences where knowledge is about how things are ('truth'). The moment the designed artefacts are being used, the focus changes to social sciences where the behaviour of people using the artefacts is considered. Examples of IT artefacts during design are models, constructs, methods and instantiations [11].
The theoretical contribution of this study is based on a framework proposed by Kuechler and Vaishnavi [12] for studies in design science research in information systems (DSRIS), adapted in this study to account for the developing context. They based their framework on Gregor's [13] taxonomy of information systems theory. Furthermore, Gregor and several colleagues have published on the theoretical contribution of design science research in information systems, based on the initial work of March and Smith [14] and Hevner et al. [11]. The general activities of the DSRIS framework are: the construction of the artefact; the gathering of data on the functional performance of the artefact, or the evaluation; and reflections on the construction process. This study's focus is only on the construction activity, which is the design and development of a mobile application for care data. During the construction of the IT artefact, prescriptive knowledge is generated based on two constructs, namely the problem and solution constructs in both the instance and abstract domains [15]. Iivari [16] suggests that prescriptive knowledge is its own form of knowledge that is not reducible to descriptive knowledge. Descriptive knowledge is composed of observations and measurements classified into accessible forms [17]. Prescriptive knowledge of the design is presented as design principles of both form and function. During the evaluation and reflection activities, descriptive knowledge is generated through what Gregor and Baskerville [18] refer to as the following research activities for descriptive theorizing: study of the artefact in use and test of knowledge of the artefact in use. The reflective questions they suggest for extracting theory for the different design components will be used to suggest the theoretical contribution of this paper. Although design science research has been used in information systems before [11], publications specifically about the theoretical aspects of this approach have appeared only more recently.
Mobile health (mHealth)
mHealth is a component of eHealth and, to date, no standardised definition of mHealth has been established [19]. The Global Observatory for eHealth (GOe) defines mHealth as medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants (PDAs), and other wireless devices. In terms of healthcare, there is a need for a healthcare system that is usable anytime, anyplace and by anyone authorised [20] [21]. mHealth enables the connecting of different communities to exchange data and experience using mobile technologies. It also supports the shift from treating acute and chronic diseases to disease prevention and wellness promotion. South Africa has a large number of lay health workers, and most of the population also have mobile phones or access to mobile phones; these phones can therefore be used to improve service delivery of community healthcare services. This is known as mobile health for community-based services, or mHealth4CBS [22]. Leon and Schneider [22] identified a few challenges for mHealth4CBS; the two relevant to this study are the poor documentation of mobile applications and best practices, as well as the challenge of identifying and using affordable open-source options.
Drivers for mHealth applications are socioeconomic rather than technical [23] [24]. A single-solution focus on mHealth should be replaced by viewing mHealth as an extension and integrator of underlying health information systems that support, e.g., the point of care for health workers [25]. Mecheal and Searle [25] further suggest that mHealth applications should be interoperable and integrated with provider systems, linking the most remote health worker with the most appropriate sources of information when and where needed. The individual care data at the home and facility level can then be aggregated to serve as a basis for health information.
Even though the drivers for mHealth applications are socio-economic rather than technical, the application is still regarded as an IT artefact. The artefact is an embodied structure, where its structural properties include the pattern, rules and resources inscribed during the design and development process, resulting in a relatively immutable output [26] [27]. This may suggest that technologies embody specific stable structures [28] or are regarded as black-boxed. However, Orlikowski [28] argues that such assumptions of technological stability, completeness and predictability with a predefined anticipated use are not true in practice, where people modify technologies to fit their use. Software developers use existing technologies to develop new technologies as applications, e.g., database management systems, web services, etc. These technologies, with their anticipated in-use inscriptions, may then be used differently in practice: the 'appropriation' of an artefact is the combination of embodiment in-design and enactment in-use in a specific situation, or at least to what extent the technology allows it to be used differently [27] [28]. An mHealth application is the result of the design and development process based on the identified anticipated use, with the use of other technology artefacts.
Participatory design
Participatory design (PD) is about both the process of design, with the active participation of all participants, and research. The outcomes of design include artefacts, systems, services, and the like. The outcome of research is knowledge [29]. PD allows for participants' interpretations to be taken into account by envisioning, shaping and transcending the activities until all agree with the outcomes. The participants in PD are equal in a network aligned around a mutual interest to create new designs and knowledge. Mosavel, Simon, Van Stade and Buchbinder [30] argue that the input and involvement of community stakeholders are essential for successful research. Community-based participatory research (CBPR) seeks, in addition to knowledge creation, action and change as its primary goals. Winning the trust of respective communities is integral to the co-design philosophy. Health interventions, therefore, need to address the multiple anxieties and lived reality of that community [30] [31]. The participation process for this study started with forming a trust relationship with the home-based healthcare service provider, and the participatory design sessions were conducted in their work space to cater for their lived reality.
Hussain, Sanders and Steinert [32] identify the following differentiating circumstances when designing with the participation of marginalised people: human; social, cultural and religious; financial and timeframe; and organisational. When considering social development during PD, attention needs to be given to the participatory process [33]. Byrne and Sahay's [33] findings for the PD of a community-based health information system indicate that it is necessary to go beyond end-user participation to also consider the persons affected by the delivery. In the case of healthcare services, the patients will be the indirect beneficiaries of any interventions that will assist caregivers in their care services. They also suggest that a multilevel and multisectoral approach should be adopted and that reflective practices to develop capacity should be enhanced. Bailur [31] concludes that community participation in developing contexts is more complex than has been reported in the literature.
The level of participation is not always the same during all the phases of a project (extent); it may include all the users or representatives of users, and the content may include technical aspects, social aspects or both [34]. Maail [34] further suggests that user participation should correspond to the conditional factors of the context of the system development; such correspondence, rather than a high degree of participation, should be regarded as the optimal level of participation. In this study it was practical for the caregivers to participate corresponding to the conditional factors of the mHealth application design and development.
It is important to understand the work processes on a clinical level before developing IT solutions for the complex cooperative and interdisciplinary work associated with home-care services [35]. A participatory approach allows for active collaboration between the healthcare professionals and developers to obtain a common understanding of the work processes in practice. Hochheiser and Lasar [36] caution against a focus purely on the design of a user interface, which will result in a lack of consideration of the social, political, ethical and societal implications of computer systems.
Mobile development is faced with the following challenges: to create user interfaces accessible to differently-abled users; to handle the complexity of developing applications across multiple mobile platforms; to consider context-aware applications; and to deal with the uncertainty of specifying requirements [37]. Most mobile applications are still developed by small teams who rarely use any formal development processes [38]. Developers do limited organised tracking of their development efforts, and the existing body of knowledge is mostly pragmatic, consisting of guidelines and code examples [38]. This could pose a problem for novice developers who need to consult existing practices and examples, and this is even more problematic if the existing limited development guidelines and code are not suitable for a developing context. It is important to consider the context (hardware, input, capability, platform, conventions of each platform and environment) and the implementation where the designs and code are delivered to support the user experiences, in this case the caregivers during their service provision [39]. At execution time, performance needs to be considered. An information system's development and implementation should be regarded as a complex sociotechnical process, even more so in a developing context [40].
All of the above aspects are considered in this paper in an attempt to understand the perceived difficulties observed during the design and development of the mobile application.
Actor-network theory
Actor-network theory (ANT) evolved from the work by Callon and Latour at the Ecole des Mines in Paris during the 1980s [41] [42]. Important contributions have also been made by Law [9] [43]. ANT proposes a theory that does not privilege either humans (actors) or non-human actors (actants) over the other and denies that purely technical or purely social relations are possible. In other recent work on ANT, Heeks and Stanforth [44] use it to provide details about the process of technological change by 'opening' the black box of such a change. De Albuquerque, Cukierman, Marques and Marques [45] consider how technology moves from the global North to the global South, where such technologies are often black-boxed based on their use in the global North. The issue then is to consider how one can distinguish between questions to do with the materials of development and those to do with the strategy of development. This study considers an mHealth application, built with technologies typically developed in the global North, to be developed and used in a developing context in the global South. The network of actors is considered with a specific focus on the translation as they enter (or resist entering) the network.
Translation has its origin in the social studies of science and deals with how statements become facts, which only happens when other people accept and/or use them [46]. The creation of facts is a collective process: translations are the result of recording the viewpoints of the different participating actors of the network [46]. This process necessarily entails interactions and negotiations between actors before any kind of agreement can be reached about common definitions and meanings [47]. Successful translation occurs when all the 'voices' speak in unison, i.e., when all agree to the same aligned interest. By studying the translation process, it is possible to determine to what extent the different actors are identified and consulted. Translations may have implications for the role and relationships of the actors within the network when the impact on the organisation and/or stakeholders is considered as the actors react to changes [46].
Problematisation is the first translation moment and can be regarded as 'how to become indispensable'. The focal actor, the actor from whose vantage point the process is conducted, establishes an interest that is primarily its own, but could be useful to other actors. The second translation moment is to build an interest ('interessement'), which refers to how the allies are locked into place in order to form a network and to strengthen new actors' links with the network. The third translation moment is enrolment, which refers to how other actors decide to become part of the network. This means that other actors are convinced that they can benefit by joining the network, which happens when individual actors align their own interests with that of the focal actor. They may do this willingly or may be cajoled into joining the network. The final moment of translation is when actors who have previously been enrolled become spokespersons in their own right for the focal actor's interest. While it is also possible that during this stage some actors may leave the network if they feel that their interests can no longer be sufficiently aligned with the interest of the network, the ultimate goal of institutionalisation becomes more achievable.
It is possible that a translation moment may fail.
From a research perspective, this is also important because explanations and insights into how and why it failed could lead to a deeper understanding of the interaction processes involved. ANT allows us to model mistranslation as a possible intentional betrayal, and the reasons for this may be relevant for the research to identify important obstacles.
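As an illustration only (ANT is a qualitative lens and the paper does not formalise it), the four translation moments and the possibility of failure can be caricatured as a small state model; all names and states here are invented:

```java
// Toy formalisation of the translation moments, for illustration only.
public final class TranslationModel {
    enum Moment { PROBLEMATISATION, INTERESSEMENT, ENROLMENT, MOBILISATION }

    static final class Actant {
        final String name;
        Moment reached = null;       // furthest moment completed; null = not yet engaged
        boolean leftNetwork = false; // a failed translation or 'betrayal'

        Actant(String name) { this.name = name; }

        // An actant advances one moment at a time; a failed translation
        // marks it as having left the network (e.g. end-users dropping out).
        void translate(Moment next, boolean succeeded) {
            if (succeeded) { reached = next; } else { leftNetwork = true; }
        }
    }
}
```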
The context of home-based healthcare
Home-based healthcare in South Africa addresses the overwhelming need for services related to the high incidence of HIV/AIDS, TB and other poverty-related conditions experienced in resource-restricted communities [48] [49] [50] [51]. The public healthcare service is responsible for healthcare service provision to the majority of the population. Formal healthcare facilities, however, cannot meet the demand for these services [1]. Home-based healthcare addresses this demand and is mostly offered by non-government organisations. Home-based healthcare institutions rely on external donations since patients in most cases cannot afford to pay for delivered services (ibid.). Home-based healthcare services are provided by informal caregivers with basic training. In some cases, they are supported by professional nurses. The recording of patient data is still paper-based and very time-consuming [1]. Caregivers are generally semi-literate and many of their patients are illiterate. There is limited electricity in these communities but most people have a mobile phone or have access to one. These are mostly feature phones with limited functionality [7].
Research methodology
The research strategy for this study is design science research with a participatory approach. The research started with an ethnographic study of home-based healthcare services in a developing context [1]. During this stage a relationship was built with the NGO providing the care services, and only when this study was completed was the design process started. Data for the design was collected during the problematisation and ideation phases with the use of design probes. Other data collection methods were observations and open-ended interviews. Several service design methods were used during the different participatory co-design sessions. For the purpose of this paper, the co-design stage with the low-fidelity (lo-fi) prototype is considered as the design concept for designing the mobile user interface and the navigation between the different screens [6]. A lo-fi prototype, in this case a paper mock-up of the typical mobile phones used by caregivers, was used to facilitate discussion of user interface concepts and design alternatives [52].
The data was analysed by extracting the design principles and reflecting on the design sessions and methods used. The qualitative data was coded and categorised to identify themes that were then interpreted. In addition, the author, who supervised the students with their postgraduate studies for this case as well as the practical project of designing and developing the mHealth application, did a meta-analysis at an 'etic' level of the different activities [53].
The practical design and development was done by a team of intern students. The team consisted of an anthropologist, a designer and an IT analyst, all three master's students. The rest of the team comprised IT students. All the members of the team were novices since they did not have any work experience. The IT interns were not only novice developers but also came from a developing context background.
The participatory design phase
A community in the Western Cape of South Africa was selected for this case. The research team built a relationship with a home-based care NGO (hospice) in the area. This proved challenging; the staff were sceptical because many groups had promised to improve their situation without those expectations being met. The team explained that it was there to work towards possible solutions that could improve their work conditions. The participatory design process was explained to them to convince them of their active role in the co-design process. The main objective of the design phase was to identify a real need and to then co-design a possible solution with the active participation of the caregivers.
Using a lo-fi prototype saves valuable time and the costs of programming fully electronic prototypes, and it is a useful mechanism to present abstract concepts, as a mock-up of possible ideas, to users with low digital literacy. The caregivers were able to design their own dialogue and suggest navigation options. Figure 1 shows an example of a lo-fi prototype with the suggested dialogue across the sequence of five screens.
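To indicate how such a co-designed flow can later be handed to developers, the sketch below records a five-screen navigation as plain data. The actual screen titles and dialogue of Figure 1 are not reproduced here, so the names are purely illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Records a linear five-screen flow as data that developers can consume.
// Screen names are invented for illustration; they are not the Figure 1 screens.
public final class ScreenFlow {
    static final Map<String, String> NEXT = new LinkedHashMap<String, String>();
    static {
        NEXT.put("PatientList",   "PatientDetail");
        NEXT.put("PatientDetail", "CareTasks");
        NEXT.put("CareTasks",     "Observations");
        NEXT.put("Observations",  "Confirm");
        NEXT.put("Confirm",       "PatientList"); // return to the list after saving
    }

    // Returns the screen that follows the current one, as a navigation stub might.
    public static String next(String current) {
        return NEXT.get(current);
    }
}
```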
The development phase
The development was done by IT interns from the local university who were mostly third- and fourth-year students working in an incubation hub and paid a monthly stipend. This hub was used as an interactive design space, where top students from different disciplines, levels, and cultural frameworks worked together on developing mobile applications for real-life problems. Funding was obtained to develop a practical application. The developers converted the designs created during the design phase into code for the proposed mobile application to capture, report and share patient care data. The interns had no prior experience and were novices in the project. Since the developers were from a developing context, they were familiar with the socio-economic challenges experienced by the caregivers. The developers were involved from the beginning of the design process as active participants.
USING ANT AS AN ANALYTICAL LENS
The term actant is used for both human actors and technology actants when discussing their role in the actor-network. Where necessary a distinction will be made. The following actants are considered for the design phase: caregiver; designer; developer; current paper forms (care record, care daily report, care monthly report); the lo-fi prototype as the design concept; care tasks; and work tasks (designing the mobile interface; and recalling care data recording). The focal actor role was mostly performed by the designer, who explained the different methods and facilitated the co-design sessions. The developer could speak the language used by most caregivers (Xhosa) and was therefore able to act as a translator when necessary. This helped to overcome the language barrier [32].
The main actants for the development phase were: developers; designer; caregivers as the end-users; the development platform; database; designs; specifications; handheld devices; source code; documentation; prototypes; development and testing tasks; and work tasks (designing technical components, interfaces, etc.; coding; testing; and documenting). Mobile technology includes handsets, computers, servers, software and bandwidth connectivity, and different systems or protocols for communicating signals that could include GPS, GPRS, USSD, and Bluetooth [22].
Problematisation
The designer explained the need to design the mobile user interface to the caregivers participating on behalf of the others and became the gatekeeper of the co-design process. The aligned interest was to design the user interface together, and the lo-fi prototype was introduced as a useful actant to help them with the co-design process. The designer also convinced the developers that the proposed co-design method would result in a better design for the mobile user interface. The current paper forms used for recording the care details of the patients were also introduced as actants for the co-design process. The care tasks and the work tasks of designing the mobile user interface and recalling how care data was recorded also became actants required for the co-design process.
During the development phase the designer acted as the focal actor, introducing the design to the developers as part of aligning the interest to develop the mobile prototype. The question posed to the developers was how to develop the mobile prototype, based on the design, to simplify the capturing and reporting of patient data by caregivers. The participation of the developers during the co-design phase meant that their interests were already aligned to the problem. The technology actants were required for converting, through a series of translations, the design into the mobile prototype. It was already clear at this stage that the mobile prototype would consist of many parts supporting the user interface design, as could be seen from the number of technology actants required. These different parts of the prototype, e.g., user interface, database, backend code, and the like, are all separate technology actants interacting with each other as well. The developers became the designer's main allies since they had participated in the co-design phase and were therefore familiar with the design created as an outcome of the design phase.
Interessement
The designer planned the co-design sessions carefully and had a few meetings with the relevant stakeholders to convince them of the importance of the co-design session. After the co-design sessions, the designer reflected on the design process and the designs created in order to plan the next sessions. There were no actants who were not interested in the proposed session, probably because the decision for the mobile application was taken by the facility management and the co-design sessions were arranged during regular training sessions. No specific incentives were used, and it was observed that the caregivers enjoyed the participation process since it provided them with recognition for the important work they are doing. The other actants from the development team and the design concepts were properly introduced and relationships formed through the interaction sessions. The facility manager and care coordinator became valuable allies since they supported the project and even participated in the co-design process from time to time. The use of the lo-fi prototype allowed current care data recording practices to be challenged and new possibilities to be introduced and considered that were easy for the caregivers to relate to their potential use.
The designer not only put together a convincing case for the developers, but the outcome of the co-design already created an expectation among the end-users (the caregivers) of how they could benefit from the proposed mobile care data application. Although the technology actants were identified as crucial for the development of the mobile prototype, the purposes for which they were originally designed were not completely aligned with how they would be used in this case. The manners in which they were to be used were inscribed in them at their original design, and since mobile development was new, these actants' purposes were not well aligned to mobile development. It became increasingly difficult for the designer to interact with the technology actants since he was not familiar with the technical aspects of development. At this point the end-users (the caregivers) were also unable to participate because they were lost in the technical aspects of the process. The developers took over the role of focal actors, with the designer and end-users 'leaving' the network. This aspect is important and will be discussed further on.
Enrolment
It was not necessary to persuade and convince the actants to participate in the co-design sessions since they related well to each other, and the lo-fi prototype made it possible for them to interact with an abstract concept of interface design in a concrete manner, allowing them to make suggestions. The developers also related well to the method, which was new to them, and interacted with the caregivers without dominating the co-design session. The roles of all the participants were clear, and the lo-fi prototype made the bargaining and compromising process easier to manage.
The enrolment of the actants in the development network was problematic. The identified technology actants were necessary for the development of the mobile prototype, but because they were designed with specific inscriptions for specific uses that were difficult to adapt to mobile development in this context, they, in a way, resisted enrolment. These technology components were typically developed as IT artefacts to assist with development but did not seem to be appropriate for a developing context. This resulted in a continuous process of negotiations between the developers and technology actants that could be seen as processes of translations, for the developers to learn how to use these actants and for the actants to respond in ways that made it possible for them to be used for developing the mobile prototype. In addition to the essential technology actants, the developers also continuously experimented with other technologies, thereby replacing some of the technology actants with others. These replacements resulted in the need for new enrolment strategies, i.e., the new technology actant had to be considered for the problem and 'made' interested before being enrolled into the network. These negotiations seemed to exclude the designer and end-users from the network because they did not seem to understand the 'technology' language used by the developers interacting with the technology actants. The developers continuously had to learn how to use the different technologies and were unable to translate that to the designer and end-users.
Mobilisation
The design network became stable after all the actants progressed through the translation moments, and the designer became the spokesperson for the network. He was able to do this since he understood the purpose of the method and was able to present the user interface design as the outcome of the co-design sessions. He was able to translate the perceived care data recording activities on paper to the anticipated activities in the mobile application. The caregivers who did not participate in the co-design sessions were well represented because their care activities were standardised by the facility. They also mostly had the same background and training.
It took a long time for the development network to become stable, and most of the negotiations between the developers and technology actants were invisible in terms of the progress during the development phase. The emulator software specifically was a technology actant that resisted its use, since the emulation that worked on the computer did not work in the same way on the handheld mobile devices. Even code that worked on one device, e.g., Nokia, did not work the same way on another device, e.g., Samsung. Writing code for feature mobile phones was extremely difficult because there were no other examples of how this could be done. It seemed that smartphones quickly became the preferred mobile devices in the global North, and that may be the reason for the lack of examples for feature phones. Testing was a challenge due to connectivity issues, differences in mobile phones (there were no standards), transferring data to the backend system, etc. The continuous replacement of some of the technology actants in order to experiment with them resulted in the network remaining unstable, with difficulties for the actants to remain enrolled. This resulted in a situation where it took a very long time for the network to reach a mobilised stage. There was no specific spokesperson for the development network since different interns worked on different parts of the development, until a project coordinator was formally appointed. This person then became the spokesperson for the network. The end-users and designer, who were no longer enrolled in the development network, had to be re-enrolled to test the usability of the mobile prototype. It was a new negotiation process that again required an alignment of interest. An example was when the developers demonstrated the prototype and it did not work because of connectivity, version and server issues, resulting in the end-users being perplexed when the developers used a 'technocratic' approach: 'It does work, you just press here and then...'. It seems as if the end-users never re-enrolled in the development network, and this problem now has to be addressed during the deployment phase with a specific strategy to enrol the end-users again.
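A common defensive tactic for this kind of fragmentation, assuming a Java ME target as above, is a runtime guard keyed on the platform identification string; the property name is defined by the MIDP specification, but the vendor prefix checked and the workaround itself are hypothetical examples, not the interns' actual code:

```java
public final class DeviceQuirks {
    // "microedition.platform" is a standard MIDP system property; the vendor
    // prefix below and the fallback behaviour are invented for illustration.
    public static boolean needsSimpleInput() {
        String platform = System.getProperty("microedition.platform");
        return platform != null && platform.startsWith("Nokia");
    }
}
```

A caller would then branch on `DeviceQuirks.needsSimpleInput()` to select, for example, a plainer input mode on handsets known to misbehave.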
Alignment
According to Diaz Andrade and Urquhart [8], alignment is the extent to which actants agree to a translation. Through these translations the actants move towards an agreed aligned interest. The stability of the network depends on how well the interests are aligned to the interest that describes the purpose of the network. It can also be regarded as present when the network grows, with continuously more actants being enrolled than actants leaving the network.
In the case of the design network it was clear that the translations were supported by the use of the lo-fi prototype; this actant played an important role in assisting with the translation process between the end-users and designer, which resulted in the creation of another actant, the design of the proposed mobile prototype. The outcome of the design network not only resulted in a stable, well-aligned network but also in the enrolment of an actant that was created based on the successful translations of the other actants, namely the design for the mobile interface. One can therefore conclude that the co-design process was successful and that design concepts are useful actants that can assist with the translation of abstract concepts, such as converting the recording of patient data on a paper care record to recording it electronically with a mobile application. Another finding is that a design was created as a good representation of the anticipated use as a result of the successful alignment of the actants of the design network. The embodiment in-design is therefore close to the possible enactment in-practice.
The alignment of the development network was problematic, with the technology actants making translations more difficult. The alignment between the developers and technology actants required many translations, and some mistranslations when the conversion of the design into the prototype simply did not work. The technology actants are themselves the results of the creation of these artefacts based on the uses that are inscribed into them; this too was the result of a process of translations, and thus alignment, but for another purpose. Using these actants for mobile development required them to be used differently from their original intended use, and that resulted in the need for many translations between the developers and technology actants to align them to the conversion of the design into the mobile prototype. The processes, interests, identities, values, etc., inscribed in the technology components used for developing the mobile prototype were not suitable for a developing context. The reason why the end-users and designer were excluded from the alignment process of the development network was that the translations required by the development process were simply too technical for them. Again, the mobile prototype is a new actant that is the result of the alignment of the other actants, and how good it is will depend on how well the prototype represents the design of the mobile interface as the representation of the anticipated use.
It is possible that a technology actant can influence the outcome of the development of the mobile prototype if, for example, the backend technical design cannot support the navigation required by the user interface design. The design of the mobile interface only represents the part of the interface visible to the end-users, as that is the only part that concerns them directly. In reality this design has to be supported by additional technical designs of the other parts of the mobile prototype, e.g., database design, activity diagrams, workflow designs, etc. These technical designs are introduced during the development phase as the outcome, i.e., new actants of the network, of the interactions between the developers and the technology actants required to do these designs. These technology parts of the mobile prototype are essential for the mobile application to work, and all contributed to the formation of the development network as they were enrolled.
The translation of the design into the mobile prototype is complex and requires many translations between the developers, technology actants and designer. The new actants created through the alignment process are also technical and may not always be a good representation of the design.
Coordination
Coordination is the degree to which the interpretive flexibility is restricted by rules or conventions [8]. In the case of the design network, the interpretive flexibility was influenced by the end-users' lack of technology knowledge rather than by rules and conventions of the design process. Participatory design implies that the design process is started with no preconceived idea of the solution, allowing the end-users to determine the pace and direction of the process, with the designer more in the role of a facilitator suggesting possibilities with the use of design concepts. The use of the lo-fi prototype made the interpretation easier because the caregivers could relate to it without feeling intimidated by technology. This allowed them to experiment with different options for the navigation, the displayed text and the entry of the care-patient data. The designer was given the opportunity to learn the 'language' of the caregivers in their own environment and to observe their work practices; this provided him with a better understanding of the possible solutions for their data capturing problems.
The co-design process allowed for interpretive flexibility by continuously responding to the environment in which the process took place. Although the design process was not influenced by rules and conventions, the manner in which the caregivers capture the patient data and the type of data recorded and reported are restricted by the rules and conventions of their work practices, which in turn have to comply with the rules and conventions of the healthcare professional practices as well as with the legal requirements of the authorities to which the home-based healthcare service provider reports.
In the case of the development network, the interpretive flexibility was influenced and mostly restricted by how the rules and conventions were inscribed into the technology actants for their specific uses, which could not easily be adapted for mobile development. The definitions of patient data, data transmission protocols, coding and other standards, etc., have to comply with the rules and conventions of the agreed standards of software development. These rules and conventions not only have to be considered but are inscribed into the different technical designs and the mobile prototype, i.e., the new technology actants will have them inscribed into them. There were also not yet any standards for mobile development that could be used by the developers.
Devices and passages
The main activity of home-based healthcare is to provide care services to patients at their homes. A sub-activity is the recording, reporting and sharing of care patient data, which is still paper-based in most communities. The care patient data represents the details of the patient's diagnosis, observations and care activities and can be regarded as the substantive device for aligning a network around home-based healthcare. During the design phase, the representation of the care data, i.e., the design of the interface to the data, can be regarded as the substantive device to align the network around the interest of facilitating easier data recording and processing. The means supporting the design is the design concept, in this case the lo-fi prototype, and the procedural device to facilitate coordination and communication around the design is the co-design activities to design the navigation of the identified data elements according to the work practices associated with the care data recording of the patient.
During the development phase the actants are enrolled into the network around an interest to convert the design of the care data interface into a mobile care data prototype. The substantive device in this case is the representations of the caregivers' anticipated care data processes, and the means for this are the different technology actants that support the coding of the mobile prototype. The procedural devices are the interactions of the developers with the technical tools and with each other to develop the code for the prototype.
Applying ANT
ANT provides the mechanism to follow the actants of the design and development phases to establish to what extent they participated in the respective networks. This provided insights into the design and development processes, and specifically into how the technology actants made participation easier in the case of the lo-fi prototype in the design network and more difficult in the case of the development network. This could be attributed to the fact that mobile development had not yet matured, as indicated in the literature, and that the learning process made participation with the technology actants more difficult; one can assume that knowing how to use the tools (technology actants) should make the development easier. The social learning during the design phase applied to all the participants: the designer and developers learnt more about the caregivers' data capturing practices, and the caregivers learnt more about the possible technology solution. Social learning during the development phase was mostly applicable to the developers, who learnt more about the use of the technology actants, but the learning could not be shared with the designer or end-users. The challenge here seems to be the difficulty in communicating the technology actants' roles to the actors without a technological background. The developers involved in ICT solutions for developing contexts may not have the necessary technical knowledge or access to materials that could guide them and therefore may struggle to use the technology components in the way their use is already inscribed. It is then possible that the design-reality gap may increase when the technology and environmental constraints influence the translation from the design to the ICT solution.
Another reason for the challenges experienced during the development phase is that the inscriptions of the technology artefacts used for the development may not support the way they should be used in mobile development. The participation approach worked well during the design phase, and all the actants actively participated in the process. The technology actants had to be adapted to be used in the particular context that was constrained by socio-economic factors. The lack of involvement of the caregivers, as the representatives of the community considered, not only affected the development process negatively but also influenced the research outcomes, even though the lived reality of the community was inscribed in the design outcome [30]. It may be necessary to consider the involvement of participants without the necessary technical knowledge to see how they can participate during the development stage. It is possible that the level of participation during the development may be lower due to the nature of the development process, and in this case it may be necessary to determine the optimal level of participation of all participants to proceed with the process without the end-users becoming totally uninvolved.
In both the design and development networks, new actants were created as the result of the participation of the other actants, and it can be concluded that the level of participation will determine the quality of these new actants (the design and the mobile prototype). Having this view can assist with dealing with the context when similar solutions are considered for other contexts: the focus is then on the participatory process, where the designs and prototypes are actants of new networks in which the outcome could be different.
Responding specifically to the use of ANT in development studies, it was possible to describe the design and development processes in detail to show how the actor-network structures emerged [2]. It was also possible to expose the roles that the IT artefacts play in development. The IT artefacts that could not be adapted to the developing context as easily as the design probe was resulted in several difficulties experienced during the development of the mHealth application. Contributing to this problem were the inexperienced developers, who could not easily learn how to use the IT artefacts in development. In a developing context there will often be less experienced developers who do not have the luxury of being sent for expensive training by their companies or who are not able to work in teams with more experienced developers. The black-box nature of the IT artefacts used during development influences the manner in which they enrol in the network, since the purpose for which they were designed may not be well aligned to that of developing a mobile application in a developing context.
It was also interesting to note that the spokesperson of the network during the mobilisation translation moments changed from the designer to a developer, and it will be interesting to see whether the spokesperson of the eventual solution will be a community representative.
Studying the level of participation, and especially the 'failed' translation moments, should provide more insights on aspects that need attention, as can be seen from the alignment problems of the development network. Table 1 presents the key findings on the alignment, coordination, and devices and passages of the design and development networks, with possible implications.
THEORETICAL CONTRIBUTION
This study considered the following cognate disciplines: information systems, development studies and design. It used a DSRIS framework to allow for theory development. The DSRIS framework does not provide for development studies, and the suggestions of Gregor et al. [13] [15] [18] also do not consider development studies specifically. There is therefore a need to develop a framework that also provides for development studies, maybe a DSRIS4D? The use of ANT as a suitable lens for development studies, as suggested by Heeks [2], is an attempt to also include the insights gained from this analysis.
The framework proposed by Gregor et al. [15] is now used to extract design theory from this study for the following design theory components: 'purpose and scope', 'principles of form', and 'principles of function'.
The problems the researchers originally perceived were that care services were hampered by the manual data recording, that many mistakes were made when completing the paper forms, and that this data recording process was very time-consuming. The problem was not identified by the caregivers, since they work in a resource-restricted setting that does not allow for 'nice-to-have' solutions. Their current paper-based system works and supports the care services. In a developing context it may often be the case that people accept a situation because there are no resources to improve it. The proposed mobile application is not a novel solution, but developing a solution that is appropriate for their situation provided utility value: a mobile application could be useful and has the potential to improve the quality of care services in developing contexts and to result in better-quality data for decision making. The design concept was the mobile care data application (CDA), and the focus of the caregivers' participation was on the user interface. The concepts came from a developing context, and it was possible to reach a proof-of-concept stage. The evaluation in practice was outside the scope of this paper. There are potential problems, identified in the ethnography study, that could influence the actual implementation of the mobile application: connectivity issues, the cost of the mobile application, being a potential target of crime (e.g., criminals hurting caregivers to steal their mobile phones), etc.
Table 1: An ANT analysis of the participatory design and development phases of a mobile care data application

Alignment
• Design network: A design concept allows for the assembling of good relations between the design actants, resulting in a strong alignment between all the actants. A participatory approach provides a situation in which all the actants are able to actively participate in the design process.
• Development network: The technology actants influenced the level of participation, with the developers having complex relations around the translation of the design into the mobile prototype and with the designer and end-users becoming inactive. Alignment is difficult, with different levels of complex translations required.
• Implication: The degree of participation influenced how the design and development of the care data processing were done. During the design phase the degree of agreement between the actants was high, whereas during the development phase it was low.

Coordination
• Design network: The rules and conventions of home-based care services are embedded in the work practices of the caregivers, which were represented in the design of the mobile data application. The co-design process allowed for flexible interpretations of the possibilities of the mobile application, with the designer more in the role of a facilitator.
• Development network: The interpretive flexibility was restricted by the rules and conventions inscribed into the technology actants used to develop the solution, as well as by the accepted standards of software development. There seems to be an insufficient understanding of the rules and conventions of development and of how these are inscribed in the technology actants.
• Implication: The rules and conventions of the work practices in a particular context are well represented when a participatory process is used to provide active participation of the end-users, who are knowledgeable about their rules and conventions.

Substantive devices
• Design network: The representation of the care data as part of the proposed interface.
• Development network: The representations of the caregivers' anticipated care data processes in the form of technical designs and code.
• Implication: Identifying the substantive devices of the design and development networks provided a better understanding of the key purposes of these networks.

Material devices
• Design network: The design concept, namely the lo-fi prototype.
• Development network: The technology actants supporting the conversion of the care data processes into the mobile prototype.
• Implication: Consideration of the material devices provided insights into the interplay between the material and procedural devices.

Procedural devices
• Design network: The co-design activities.
• Development network: The interaction of the developers with each other and with the technology actants and the designs.
• Implication: Insights into how the design and development proceeded and into the underlying assumptions were obtained by considering how the actants interacted.

The principles of form are the material properties built into the artefact to enable it to achieve its purpose.
The principles of form observed are: a menu design based on the users' preferences for the data-capturing sequence; minimal, easy-to-navigate data elements on the screen; and the use of familiar terminology. Drop-down lists are used rather than text entry; where possible, short-cut keys are used, e.g., '#3. Care plan'; the directional buttons (up, down, left and right) are used for selecting options; and text entry is kept minimal. The contextual conditions observed to enable the emergence of the desired affordances are the types of mobile devices used by the caregivers, typical of a developing context, e.g., phone features. The data collected are the care observations typical of a developing context, e.g., 'Has the patient taken the medicine?', 'Is there enough food?', 'Are the children looked after?', etc. The caregivers, as the user group, perceive the functional affordances of the artefact that represent their care activities. The active participation of the caregivers provides the justificatory knowledge that the material properties should achieve the artefact's goals.
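A minimal sketch of how these principles of form could translate into interface code, assuming as before a Java ME target; the screen title and question labels are illustrative, not the project's actual screens:

```java
import javax.microedition.lcdui.Choice;
import javax.microedition.lcdui.ChoiceGroup;
import javax.microedition.lcdui.Form;

// Sketch of a visit screen embodying the observed principles of form:
// few elements per screen, familiar wording, drop-down choices instead of
// free text, and selection via the directional keys.
public final class VisitFormFactory {
    public static Form carePlanForm() {
        Form form = new Form("3. Care plan"); // numbered title echoing the short-cut style
        form.append(new ChoiceGroup("Has the patient taken the medicine?",
                Choice.EXCLUSIVE, new String[] {"Yes", "No"}, null));
        form.append(new ChoiceGroup("Is there enough food?",
                Choice.EXCLUSIVE, new String[] {"Yes", "No"}, null));
        return form; // no TextField: text entry is kept minimal
    }
}
```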
The principles of function are difficult to present since the artefact was never implemented for use in practice. During the testing phase it was difficult for the users to see how the artefact could be used in practice because there were technical problems, e.g., the mobile phone could not connect to the server, and there were version problems. It was clear that the gap between the lo-fi prototype and the actual mobile application was too big, since the users were not participating during the development stage. The necessary action to bring about the desired outcomes is to find a way to make the development activities more visible to the users without exposing them unnecessarily to the technical aspects of development.
As noted above, the IT artefacts could not be adapted to the developing context as easily as the design probe was, which resulted in several of the difficulties experienced during the development of the mHealth application, compounded by the inexperienced developers. The black-box nature of the IT artefacts used during development makes it difficult to 'open' them to adapt components for more appropriate use. It therefore seems necessary to develop an IT 'toolkit' with IT artefacts that are more suitable for the developing context, or at least a repository of such tools with sufficient guidelines for using them.
Reflecting on this research, the following were observed: it is important to build a relationship with the user group or representing organisation before starting with the design process. There are far too many empty promises made to communities in need, and too many solutions with limited utility value 'given' to them. Once the relationship is strong, the users must become active co-designers of their own solutions, with the designer/developer more in a facilitating role. It is important to first take the design of the solution to a point where it is a good representation of what can be expected when it is completed, before starting with the development of the IT artefact. Developers should be involved in the co-design sessions to obtain a good understanding of the required solution but also to learn more about the users' mental processes. Specific IT artefacts that are suitable for a developing context should be developed to be used as development tools during the development process. These development tools should be more flexible, easy to use and shareable by less experienced developers.
CONCLUSION
Actor-network theory provided a sufficient mechanism to establish how actants participated during the design and development phases of a prototype for a mobile care data application, and the stories of the relations between the actants, and the level to which these relations were formed, were part of this descriptive study. Design science research is a suitable research strategy, and participatory design a suitable approach, for developing information systems of this kind. This approach provides for consideration of the context by giving the community representatives a voice. Even though the evaluation and reflection activities of DSRIS were not done, there are already sufficient insights into the design and development activities to contribute towards the development of mobile applications as part of an information system in a developing context.
Mobile development has many challenges, and recounting the stories of how mobile applications are developed, and especially the role of the technology actants and how all the actants interact with each other, can provide valuable insights. This is especially valuable in a developing context where there are limited resources. When the designers and developers are immersed in the developing context they obtain a better understanding of the local context, and therefore the perspective of social embeddedness seems to be appropriate for developing contexts. The dynamic of role allocation for participation also seems to be an important aspect of the formation and sustaining of sociotechnical networks. The use of suitable design probes increases social learning and communication, but this seems to be more complex for the use of technology components during development. It seems that the design-reality gap increases during the development phase.
The research question posed for this paper can be answered as follows: the interests of the actants were sufficiently aligned during the design phase but less so during the development phase. The modus operandi of establishing the network using participatory design was appropriate for the design phase but did not work so well during the development phase, when both the designer and end-users became inactive due to the technical nature of participation. Further research is required to obtain a better understanding of the reasons for the level of participation during the development phase, how to deal with technology actants, and how to determine the optimum versus a high level of participation. Furthermore, investigating the role of the coordination of rules and conventions during development will provide more insights into the difficulties experienced during the development of software in general, and mobile applications in particular, in a developing context.
Further research is also required to focus more on the interactions between the actants for mobile development, the role allocation during participatory design and development, the factors leading to an increase in the design-reality gap, the translations during the iterations of development, and the inscriptions of the technology components.
How much do pregnant women know about the importance of oral health in pregnancy? Questionnaire-based survey
Background Although pregnancy is a physiological process, it causes hormonal changes that can also affect the oral cavity. Pregnancy increases the risk of gum inflammation and dental caries, which could affect the health of the developing baby. Proper oral health is crucial for both mother and baby and is related to the mother's awareness of this connection. The aim of this study was the self-assessment of women's oral health and oral health literacy, as well as mothers' awareness of the connection between oral health and pregnancy. Material and methods An anonymous questionnaire was prepared and filled in by 200 mothers aged 19 to 44 years who gave birth in the gynecological clinic. The questionnaire included demographic questions and questions concerning oral health before and during pregnancy and after childbirth. Results Only 20% of the investigated women underwent an oral examination before the pregnancy, and a further 38.5% underwent one intentionally when the pregnancy had been confirmed. As many as 24% of women pointed out a lack of awareness of the importance of proper oral hygiene during pregnancy. 41.5% of the investigated women declared complaints concerning teeth or gums during the pregnancy, and 30.5% underwent dental treatment; 68% brushed their teeth properly, twice a day; 32% of women observed a deterioration of their oral health state during the pregnancy. The knowledge of the importance of oral health during pregnancy presented by the majority of mothers was relatively proper, which was strongly connected with higher education status and living in big cities. A significant correlation between higher birth weight and more frequent daily tooth brushing was observed. Both a higher frequency of problems concerning the oral cavity and dental treatment during pregnancy were significantly related to the younger age of mothers. Conclusions Women's knowledge of the influence of oral health on the management of pregnancy and the development of the fetus is still insufficient. Gynecologists should ask pregnant women whether they have had a dental examination and provide wider education about the importance of oral health in pregnancy.
Introduction
Although pregnancy is a physiological process, it causes hormonal changes that affect also the oral cavity. The presence and frequency of different oral problems of gums and teeth, mostly gingivitis, dental erosion, halitosis and pregnancy epulis have been described and are well known. In many clinical studies and meta-analyses Page 2 of 11 Radwan-Oczko et al. BMC Pregnancy and Childbirth (2023) 23:348 the main association between the signs of periodontal disease and adverse pregnancy outcomes like preterm birth, low birth weight, preeclampsia, gestational diabetes [1], vulvovaginitis, premature rupture membranes has been presented [2][3][4][5][6].
The most frequent signs of gingival inflammation are related to increased levels of estrogen, which disrupts the proliferation and differentiation of cells and the keratinization of the epithelium, and increased levels of progesterone, which change vessel permeability and microcirculation in the gingiva. Furthermore, in combination with oral pathological flora, increased hormone levels alter and decrease the immune response [7]. This leads to gum swelling and spontaneous or provoked gingival bleeding [8]. Although plaque levels are reported to remain unchanged during pregnancy, the gingival inflammation of pregnant women is significantly increased, peaking in the third trimester and dropping only at 3 months postpartum [7]. Finally, untreated gingival inflammation, which can be reversible, leads to periodontitis, with loss of periodontal attachment and bone and the formation of periodontal pockets in the development of periodontal diseases [9,10]. Bacteremia, which indirectly triggers the hepatic acute phase response, enhances the production of cytokines, prostaglandins (PGE2), and interleukins (IL-6, IL-8) [11].
Special care of the oral cavity in women during pregnancy should be considered when cravings for sweet food appear [12], influencing a change in the dental plaque formation pattern [13]. A proper healthy diet during pregnancy has a positive influence on reducing gingival and periodontal inflammation [12,14]. As a sugar-rich diet has an influence on the bacterial load, its direct effect is dental caries, a common and costly disease in pregnant women [15]. Findings of the researchers from Pelotas show the far-reaching effects of dental caries in this group of patients, suggesting that even depression is mediated by self-perception of oral health [16]. The authors show that the presence of depressive signals and symptoms was higher in pregnant women with dental caries experience, diverse severity of untreated dental caries, tooth loss, and filled teeth [16].
Different other factors have been discussed as important to the state of the oral cavity during pregnancy and in the reproductive age. One of these is the vitamin D level in the patient's serum, which is considered to influence the composition of saliva, balance caries activity, and stimulate the production of antimicrobial peptides, such as defensins and cathelicidin [17]. With reference to serum level changes in the reproductive age, treatment with exogenous vitamin D has been related to better outcomes of insulin, LDL-cholesterol and anti-Mullerian hormone levels in infertile women with polycystic ovary syndrome awaiting in vitro fertilization [18], and there are also reports suggesting that a population approach aiming to eliminate the prevalence of vitamin D serum levels lower than 30 nmol/L in women of reproductive age, additionally facilitating the reaching of 50 nmol/L serum levels, could be a reasonable and safe goal [19]. In relation to this, vitamin D deficiency has been associated with the possible development of diverse complications among mothers [20] and pregnant women, e.g., pregnancy-related transient osteoporosis of the hip (PR-TOH) occurring in the third trimester [21].
Granuloma gravidarum, which commonly appears during pregnancy, can be caused by an increased progesterone level in response to such irritants as bacteria, calculus, sharp elements of broken teeth or food impaction. It is usually present in the jaw in the first trimester, grows fast and retreats after childbirth. It can cause local bleeding while eating and during toothbrushing [22]. It was also demonstrated that pregnant women are at higher risk of enamel erosion leading to hypersensitivity because of the dissolving properties of gastric acid affecting the teeth during vomiting in the first trimester and acid reflux at the later stages [23]. Therefore, the maintenance of good oral health during the entire period of pregnancy is absolutely essential for the general health of both mothers and their babies [5,24,25].
Many studies showed that healthier behaviors of future mothers depend on socioeconomic factors such as age, place of living, education level and number of children [26][27][28]. Assessing these factors, along with the self-assessment of women's oral health and oral health literacy as well as awareness of the relationship between oral health state and pregnancy, was the aim of this study.
Materials and method
The study was performed by trained medical personnel who disseminated an anonymous questionnaire, prepared and provided in paper version to be filled in by women who gave birth in the gynecological clinic. The questionnaire included 5 general demographic items and 11 questions concerning oral health. The mothers provided answers without any help from the dentists, in order to collect their real knowledge of and awareness of their oral health during the pregnancy, without any suggestions. The study was approved by the Ethics Committee of Wrocław Medical University, No. KB-900/2012.
Statistical analysis
For each continuous variable the mean (X), median (M), standard deviation (SD), range (min, max), and lower and upper quartiles (25Q, 75Q) were calculated. Statistical significance between means for different groups was calculated with the use of a one-way analysis of variance (ANOVA), or alternatively using the non-parametric U Mann-Whitney test (for two groups) or Kruskal-Wallis test (for more than two groups) when the variances in the groups were not homogeneous (homogeneity of variance was determined by Bartlett's test). Statistical significance between frequencies was calculated with the use of the chi-square test χ2df with Yates's correction, with the corresponding degrees of freedom df (df = (m-1)*(n-1), where m is the number of rows and n the number of columns). A p value of less than 0.05 was required to reject the null hypothesis. Statistical analysis was performed using the EPIINFO Ver. 7.2.3.1 software package.
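For readers wishing to reproduce comparable tests outside EPIINFO, the sketch below shows a U Mann-Whitney test, a one-way ANOVA and a chi-square test using Apache Commons Math 3, with invented data. Note that, unlike EPIINFO, Commons Math applies no Yates continuity correction, and it provides neither the Kruskal-Wallis nor Bartlett's test, so this is an approximation of the reported procedure rather than a replica:

```java
import java.util.Arrays;
import org.apache.commons.math3.stat.inference.ChiSquareTest;
import org.apache.commons.math3.stat.inference.MannWhitneyUTest;
import org.apache.commons.math3.stat.inference.OneWayAnova;

public final class SurveyStatsSketch {
    public static void main(String[] args) {
        // Invented birth weights (g) for two tooth-brushing-frequency groups.
        double[] onceDaily  = {2900, 3050, 2980, 2860, 3120};
        double[] twiceDaily = {3310, 3420, 3270, 3390, 3450};

        // Non-parametric two-group comparison (U Mann-Whitney), returns the p-value.
        double pU = new MannWhitneyUTest().mannWhitneyUTest(onceDaily, twiceDaily);

        // One-way ANOVA across the groups.
        double pAnova = new OneWayAnova().anovaPValue(Arrays.asList(onceDaily, twiceDaily));

        // Chi-square on a 2x2 contingency table; no Yates correction is applied here.
        long[][] counts = {{40, 71}, {77, 12}};
        double pChi = new ChiSquareTest().chiSquareTest(counts);

        System.out.println("U test p=" + pU + ", ANOVA p=" + pAnova + ", chi2 p=" + pChi);
    }
}
```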
Results
Finally, 200 questionnaires were collected from Caucasian women aged 31.9 ± 5.3 years on average. Some questionnaires were not fully completed, which explains why the number of answers varies between questions.
Only 170 mothers gave information about the length of pregnancy, which was on average 38.9 ± 2.1 weeks, and only 172 mothers reported the baby's birth weight, which was 3335.7 ± 508.2 g on average. The majority of women, 61.5% (with 1.5% of answers lacking), were from big cities. When education was considered, the majority of mothers had higher education (55.5%), and only 4.5% of respondents had primary education. Natural parturition was declared by 45% of mothers and 48.5% of them had a caesarean section; however, 13 mothers did not answer this question. Nausea during pregnancy was indicated by 40 percent of women, as many as 58.5% did not have this condition, and 1.5% of respondents did not answer this question. The data acquired from these general questions are presented in Table 1. The investigated oral health-related parameters are presented in Table 2.
Table 1: General characteristics of the study group (excerpt recovered from the flattened table)
Parturition: natural 45%; caesarean section 48.5%; lack of answer 6.5%
Place of living: in the countryside 13.5%; in a small/medium town 23.5%; in a big town 61.5%; lack of answer 1.5%
Educational status: primary and vocational education 12%; secondary education 32%; higher education 55.5%; lack of answer 0.5%

The first question related to oral health in pregnancy was about the dental examination as important in pregnant women. When planning and preparing for the pregnancy, only 20% of the investigated women underwent such an examination, and 38.5% of them had it done just after their pregnancy was confirmed. Statistically positive correlations between this examination and the higher education of the investigated women (chi-square test = 36.1, p ≤ 0.001) and living in a big city (chi-square test = 13.7, p ≤ 0.033) were observed. On the other hand, women who lived in the countryside underwent a dental examination statistically less frequently. As many as 41.5% of responders did not have the initial examination: 19.5% of women did not consider it necessary since they did not have any dental or oral problems, and 22% did not have time or money for the oral cavity examination. When any problems or changes with their teeth or gums during the pregnancy were taken into consideration, the majority of women (57%) did not notice them; 1.5% of the whole investigated group did not answer. Women were asked for a self-assessment of the level of their oral health before pregnancy. In this investigated group, 30% of them described it as very good and 51.5% as good, and these states were indicated statistically more often by women with higher education. The next 17.5% of respondents felt discomfort from calculus and the presence of small caries defects, mainly in the group of women with primary and vocational education (chi-square test = 14, p ≤ 0.024). Furthermore, statistical differences concerning the assessment of the women's oral health state and the length of the pregnancy were observed: a longer pregnancy was correlated with a worse self-assessment of oral health before pregnancy. During the pregnancy these self-assessments of the oral cavity changed, and after the childbirth 20.5% of women described their oral health state as very good and 47% as good; these states were reported by women with higher education in 72.5% and 53.19% of cases respectively. Moreover, 25% of the subjects described feelings of calculus and caries presence, 5.5% indicated their oral health status as bad, and 4 mothers (2%) did not give an opinion, mainly in the group of women with primary and vocational education; however, there was no strong statistical significance (chi-square test = 14.1, p ≤ 0.077).
Only 5% of women underwent orthodontic treatment during the whole or part of the pregnancy, and they were statistically younger (Fig. 1); 12.5% had removed their orthodontic braces before the pregnancy.
Significantly more women who stopped their orthodontic treatment before pregnancy had higher education (62.5%; chi-square test = 15.7, p ≤ 0.003), and there were no statistical differences in educational status concerning the lack of orthodontic treatment. The presence of nausea during pregnancy was not statistically related to the use of orthodontic appliances (chi-square test = 1.97, p ≤ 0.374).
Regarding information about the importance of good oral hygiene during pregnancy, only 16.5% of the investigated women knew about it before the pregnancy, 59.5% of respondents received this knowledge during their pregnancy, and 24% of them remained unaware of it until the end of pregnancy.
Furthermore, concerning daily oral hygiene, 68% of respondents brushed their teeth twice a day, 21.5% three times daily, 6% only once a day, and 4.5% as often as four times a day. On the one hand, women with higher educational status declared statistically more frequent toothbrushing (87.5% of them); on the other hand, women with primary and vocational education mainly declared brushing their teeth twice a day (70.83% of them) (chi-square test = 20.2, p ≤ 0.001). A statistically lower birth weight was observed for newborns whose mothers declared brushing their teeth only once a day, and a higher birth weight for children whose mothers declared brushing four times a day (Fig. 2). Moreover, 37% of women indicated gum bleeding, and this parameter was correlated with nausea during pregnancy (chi-square test = 10.5, p ≤ 0.001). Localized gingival overgrowth during pregnancy, present in 14.5% of women, correlated with younger age (Fig. 3) and was significantly more often declared by women who had pregnancy nausea (chi-square test = 3.94, p ≤ 0.047) and by women who gave birth by caesarean section (chi-square test = 8.68, p ≤ 0.013).
Problems or complaints concerning teeth or gums were statistically more often described by younger women (Fig. 4) and by women experiencing nausea during pregnancy (chi-square test = 3.81, p ≤ 0.05). Signs of dental hypersensitivity were confirmed by 24.5% of women, and 1% of all women did not answer this question. As many as 30.5% of women had dental treatment during pregnancy, significantly more often the younger ones (Fig. 5); however, 4 women (2%) did not answer this question. In the whole group of respondents, 5% had a dental extraction, statistically more often women living in the countryside (chi-square test = 6.30, p ≤ 0.043). Deterioration of the oral health state after pregnancy was reported by 32% of mothers (Table 2).
Discussion
Many assessments of pregnant women's oral health and of their knowledge of oral health in relation to pregnancy have been performed in populations from many countries. This topic seems very interesting and essential, since the data show an association between oral care and oral health on the one hand and general health, the health of the unborn child, and pregnancy outcomes on the other [3][4][5][6]. Worldwide, general health, dental, gynecological, and obstetric organizations and workgroups are involved in highlighting and discussing the importance of making pregnant women aware of the significance of their oral health [3,24]. Yet this knowledge and awareness, of both the women and the knowledge providers, still seem insufficient, as presented in the published findings [22,24,26,27].
Although this research presents and analyses only significant correlations between the investigated parameters, the obtained results are generally similar to others described in this field. Many studies have shown that healthier behaviors of future mothers depend on socioeconomic factors such as age, place of living, education level, and number of children [26,27,29,30]. The average age of the investigated group of women was 31.9 years, and 45% of the women gave birth naturally. Only 16.5% of mothers knew about the positive relationship between appropriate oral health and the correct course of pregnancy before the pregnancy, and as many as 59.5% got this information during the pregnancy; surprisingly, however, 24% of women stated they were still not aware of these influences. In the work of Hom et al. [27], the authors found a logical association between oral health literacy and oral health knowledge. The level of health literacy influences the seeking of information about health, procedures, and behaviors important for the maintenance of good health, and this enhances health knowledge. This phenomenon was also present in our study. It should be pointed out that these 24% of women were not interested in oral health literacy, so their knowledge of complications related to the oral health state was very low and underestimated.
Dental examination before or right after pregnancy confirmation was carried out in 58.5% of the women, who had higher educational status and lived in a big city. The study of Llena et al. [2] also confirmed the observation that better knowledge of oral health is related to the above determinants. As many as 81.5% of women described their oral cavity status before pregnancy as very good or good, and this group of women also had higher education. Moreover, 24% of mothers reported a lack of awareness of the importance of a proper state of the oral cavity during pregnancy and, what is worth underlining, 19.5% of them considered this examination not necessary at all. Generally, health professionals must be aware of the necessity of sharing broad knowledge of the importance of oral health with pregnant women. Using online questionnaires compiled with the help of both dentists and obstetricians, Suri et al. [31] evaluated the knowledge of obstetricians about the association of periodontitis with preterm birth and birth weight. The authors noticed that more than 70% of respondents, who were quite young (the average age was 34.8 years, and 89% of whom were women), had proper knowledge of this issue. However, in the same group only 40% of respondents recommended a dental examination and only 47% advised women to take care of their oral health during pregnancy. Consequently, oral health literacy among pregnant women is still not sufficient, which has been shown not only in this study. Even though the majority of obstetricians and gynecologists have proper and current knowledge of the importance of oral health during pregnancy, they do not pass this information on to their patients. More significantly, they also do not require their patients to provide confirmation of a dental examination during pregnancy. Such an examination, as part of the assessment of general health in pregnancy, is not only recommended but should be required in early pregnancy at the latest, or be an integral and obligatory part of pregnancy care. The findings of Ghaffari et al. [22] are very important, as they showed educational intervention to be effective in changing the awareness of pregnant women with regard to their oral hygiene and oral health behavior.
In this study, discomfort from calculus and/or small caries defects during pregnancy was reported by 18.5% of women, mainly those with primary and vocational education. General oral complaints concerning teeth or gums during pregnancy were reported by 41.5% of women, including gingival bleeding and a feeling of gingival overgrowth. It seems interesting that oral problems were significantly more often present in younger women, and a worse self-assessed oral health was more often related to a longer pregnancy. The association between periodontitis and preterm birth is still not clear, and the data are inconsistent. In our study we did not find any correlation between a worse self-assessed oral health state and preterm birth. In the systematic review and recent meta-analysis carried out by Manrique-Corredor et al., covering 10,215 women from America, Europe, Asia, and Africa, the authors found a positive correlation between these parameters in 60% of the 20 evaluated studies [32]. As many as 32% of mothers stated a deterioration of the state of their oral cavity during pregnancy.
Nowadays, orthodontic treatment is very popular, especially among young women, sometimes also for esthetic reasons [33,34]. In our investigation, 12.5% of women had removed their orthodontic braces before the pregnancy, either because treatment had ended or because of the pregnancy. Only 5% of women, who were significantly younger, were under this treatment during the whole or part of the pregnancy period.
Brushing teeth twice a day is considered sufficient to maintain proper oral hygiene. In this survey, it was clearly visible that mothers with higher educational status declared toothbrushing at least twice a day. On the one hand, 6% of mothers declared brushing their teeth only once a day, and there was a significant correlation between this behavior and a lower birth weight. On the other hand, an association between a higher birth weight of newborn babies and toothbrushing four times a day was observed. Our results confirm other findings that show the influence of daily toothbrushing on oral hygiene and on gingival inflammation, which is associated with birth weight [35]. In the study of Gil [36], the dental plaque level, evaluated only supragingivally, was positively correlated with periodontal parameters such as bleeding on probing, periodontal pocket depth, and clinical attachment level. Moreover, the frequency of toothbrushing was negatively correlated with periodontal pocket depth and clinical attachment level. Furthermore, bleeding on probing and periodontal pocket depth were positively correlated with the inflammatory marker CRP, which confirms that periodontal inflammation during pregnancy is a factor of general importance.
As many as 37% of women complained about gingival bleeding, and this parameter was positively correlated with nausea. Another complaint concerned a feeling of gingival enlargement (gingival edema or epulis); it occurred in 14.5% of women, who were significantly younger, more often had a caesarean section, and also experienced nausea. It is well known that plaque-induced gingivitis is more often diagnosed in pregnant women because of the elevated levels of gestational hormones, which, although transitional, influence the gingival tissue response and immunological alterations. Therefore, gingival pockets, edema, slight inflammatory overgrowth, or pregnancy epulis are additionally results of hormone-related gingival inflammation rather than of periodontal disease. The additional factor of nausea can explain the presence of the described signs of gingival inflammation because of difficulties with effective toothbrushing [24]. Some discrepancies are seen in the currently published meta-analyses assessing the relationship between periodontal disease and adverse pregnancy outcomes [3,4,6]. In the work of Figuero et al. [3], pregnant women showed a higher level of gingival inflammation compared to a control group of non-pregnant women, but without correlation with salivary progesterone and estradiol levels. The authors also did not find any changes in IL-1β and PGE2 levels. These outcomes indicate no direct relationship between the level of gingivitis and the investigated parameters. However, another study showed that periodontal inflammation is not limited to the oral cavity: periodic bacteremia and the release of endotoxins from periodontopathogens can change the immune system response through the production of proinflammatory cytokines, particularly in women who show a greater response to proinflammatory factors. Gil et al. [36] found a positive correlation between the CRP level and periodontal parameters such as pocket depth and bleeding on probing. Equivocal evidence concerning the positive influence of periodontal treatment on adverse pregnancy outcomes has also been described [5]. On the basis of a meta-analysis of 11 trials, the authors [37] concluded that initial treatment of periodontal disease cannot be considered an efficient way of decreasing the incidence of preterm birth. The cited authors underline that this treatment is not the most important factor in protection against adverse pregnancy outcomes.
Surprisingly, the 30.5% of women who underwent dental treatment during the pregnancy were significantly younger. The questionnaire did not specify what kind of treatment was performed. Furthermore, the five women who had a tooth extraction were significantly more often from the countryside. Some dentists may be unwilling not only to treat pregnant women, but also to carry out an oral examination, because of liability concerns. On the other hand, the liability resulting from a lack of treatment or consultation of pregnant patients may be higher and unpredictable. Physicians, obstetricians, and dentists should always spread information about the necessity and safety of dental examination, particularly among young women. A proper approach to clear communication and education about proper oral health and its connection with the general health of pregnant women and fetuses is of great importance. Additionally, very simple but important advice concerns proper toothbrushing, the use of mouth rinses, flossing, and other recommendations dedicated to individual stages. Only this attitude, with a collaborative relationship between the medical doctor and the pregnant woman, seems able to improve both oral health literacy and oral health knowledge, which is of the utmost importance to everybody. We should note a possible limitation of this study: the assessment of oral cavity parameters relied on self-reporting only. Even though this method is generally accepted, a clinical assessment of the above parameters would be more explicit.
It is worth underlining that the self-assessment avoided bias that could occur during direct contact with the patients, and the collected information covered the oral health state from three periods: before, during, and after the pregnancy.
Conclusions
The relation between a longer duration of pregnancy and a worse self-assessed oral health before pregnancy has been shown; this particularly concerned women with lower educational status. A correlation between daily toothbrushing and the birth weight of newborns was found. Health-related behaviors and the lifestyle of future mothers depend on socio-economic factors. Doctors should identify groups of women at increased risk (women with lower economic status, living in the countryside) and provide better education and medical care [38]. Women's knowledge of the impact of oral health on the development of the pregnancy and the fetus is still insufficient. In addition to educational activities aimed at increasing women's knowledge of the impact of the oral health state on the development of pregnancy, gynecologists should inquire whether pregnant women have had the appropriate examination. The self-assessment of oral health by pregnant women may be the first step in accelerating their health-promoting activities (Fig. 6).
"year": 2023,
"sha1": "3ea3e50c231106f93bb636d612726fc8c856fb0b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "3ea3e50c231106f93bb636d612726fc8c856fb0b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Arsenic and Cadmium in Food-chain in Bangladesh—An Exploratory Study
Arsenic contamination of tubewell water is a major public-health problem in Bangladesh. In recent years, the use of shallow and deep tubewell water for irrigation and the use of excess amounts of cheap fertilizers and pesticides containing cadmium have posed a serious threat of contamination of food with arsenic and cadmium. In an exploratory study, arsenic and cadmium were measured in foods from Matlab, a rural area in Bangladesh that is extensively affected by arsenic and whose economy is agriculture-based. Raw and cooked food samples were collected from village homes (households, n=13) and analyzed to quantify concentrations of arsenic and cadmium using atomic absorption spectrophotometry. Washing rice with water before cooking reduced the concentration of arsenic in raw rice by 13-15%. Rice, when cooked with the excess water discarded, showed a significant decrease in arsenic concentration compared to that cooked without discarding the water (p<0.001). In contrast, the concentration of cadmium did not decrease in cooked rice after discarding water. Cooked rice with discarded water had a significantly lower concentration of arsenic compared to raw rice (p=0.002). Raw rice had a higher concentration of arsenic compared to raw vegetables (p<0.001); however, no such difference was found for cadmium. Compared to raw vegetables (e.g. arum), the concentration of arsenic increased significantly (p=0.024) when cooked with arsenic-contaminated water. Thus, the practice of discarding excess water while cooking rice reduces the concentration of arsenic, but not of cadmium, in cooked rice. However, water is generally not discarded when cooking vegetables, to avoid the loss of micronutrients, and consequently arsenic is retained. The results suggest that arsenic and cadmium have entered the food-chain of Bangladesh, and that cooking practices influence the concentration of arsenic, but not of cadmium, in cooked food.
INTRODUCTION
Arsenic contamination of groundwater is a major public-health concern in Bangladesh and elsewhere (1)(2)(3). Chronic toxicity of arsenic in humans from arsenic-contaminated drinking-water occurs in 61 of 64 districts in Bangladesh, affecting millions of people (2). The maximum permissible level of arsenic in drinking-water recommended by the World Health Organization (WHO) is 10 µg/L and, in Bangladesh, it has been adjusted to 50 µg/L by the local authorities (3). According to the Joint Food and Agriculture Organization/WHO Expert Committee on Food Additives (JECFA), the previous provisional tolerable weekly intake (PTWI) for inorganic arsenic was 15 μg/kg body-weight (equivalent to 2.1 μg/kg body-weight per day or ca. 130 µg per day for a subject of 60 kg body-weight). However, since the previous PTWI is no longer appropriate, the Committee has withdrawn it.
In its 72nd meeting, held in Rome on 16-25 February 2010 (summary and conclusions issued on 16 March 2010), the JECFA determined a benchmark dose lower confidence limit for a 0.5% increased incidence of lung cancer (BMDL0.5) of 3.0 μg/kg body-weight per day for inorganic arsenic, i.e., 2-7 μg/kg body-weight per day based on the range of estimated total dietary exposure (4).
Where shallow groundwater is contaminated, it is likely that arsenic is present in bioavailable forms in soil and in irrigation water (5). During the 1960s-1980s, shallow tubewells were installed in Bangladesh to provide 'safe water' to prevent morbidity and mortality due to gastrointestinal diseases caused by contaminated surface water. Water from these tubewells was not tested for arsenic or other toxic element contamination.
Results of studies in countries where the population has had long-term exposure to arsenic in groundwater indicate that 1 in 10 people who drink water containing 500 µg/L of arsenic may ultimately die of arsenic-induced lung, bladder, or skin cancer or of cardiovascular disease (1,3). A recent survey showed that arsenic-related diseases resulted in 9,136 deaths per year and 174,174 disability-adjusted life-years (DALYs) among people exposed to arsenic concentrations above 50 μg/L, which constituted about 0.3% of the total burden of disease in Bangladesh (6).
In Bangladesh and elsewhere, exposure to arsenic may involve a number of pathways: (a) ingestion of contaminated drinking-water and food and (b) inhalation of metal-containing dust. After a massive safe-water campaign by the United Nations Children's Fund and other donor organizations, the installation of irrigation tubewells, both deep and shallow, largely began during the 1980s (7). With the increased use of groundwater through irrigation pumps during the 1990s, arsenic contamination of water at different depths below the soil surface was observed (1,7).
Later, in the 2000s, the use of water from both shallow and deep tubewells for irrigation of agricultural lands began, particularly during the dry season (8). In all likelihood, the use of arsenic-contaminated water for irrigation may have contributed substantially to the spread of arsenic from the topsoil to crops and to food (9,10). Simultaneously, the use of huge amounts of chemical fertilizers and pesticides also increased dramatically, as high-yielding crop varieties are less resistant to pests. We hypothesized that the excess use of fertilizers and pesticides might lead to the accumulation of toxic elements, such as cadmium, in soil that may eventually reach the food-chain. Thus, in this pilot study, we aimed at assessing the concentrations of arsenic and cadmium in rice and vegetables collected from rural Matlab, both in raw and cooked form.
Sampling site
As part of a study published earlier (11), both raw and cooked vegetables and a few traditionally-cooked rice samples were collected from two villages. As described elsewhere (12), the tubewells of those villages showed high concentration of arsenic (81-96% yielded water with more than 50 μg As/L), and the vast majority of farmers used contaminated groundwater for irrigation.
Sample-collection procedure
A structured questionnaire was employed for each of the above households in the villages for collecting information on the type and quantity of water and foods ingested, and how the foods were being cooked for their consumption. A sample-collection form was used for documenting the address of the household head, type of food, and way of processing (11,12).
Food items were selected based on usual daily dietary habit and those available around courtyard. About 250 g of rice (Oryza sativa) and the edible part of each vegetable, such as amaranth (Amaranthus viridis), bitter gourd (Momordica charantia), arum (Colocasia esculenta) stem, potato (Solanum tuberosum), spinach (Basella alba), green banana (Musa spp.), and eggplant (Solanum melongena), were collected from various households (n=13) in polyethylene bags. Parts of rice samples were collected as raw and parts as cooked in accordance with the cooking customs of the population, which was to discard excess water after cooking. Cooked vegetable was also collected. Both tubewell water and surface water were used during cooking. The samples were transferred to the laboratory and washed with deionized water to remove remains of soil. Both raw and cooked samples were frozen at -20 °C until analysis (13).
Calibration standards for all the solutions were prepared from the 1,000 µg/L working standard. For arsenic determination, 1 mL of 5M HCl and 1 mL of 20% KI (w/v) were added to 10 mL of each solution (sample, recovery, blank, standard), heated in a water bath at 80 °C for 30 minutes to reduce As(V) to As(III), and cooled to room temperature for analysis of arsenic. The detection limit of arsenic for food samples was found to be 0.3 µg/kg in solution. Hydride vapour generation-atomic absorption spectrophotometry (HVG-AAS) showed excellent correlation coefficients, between 0.9996 and 0.9998, over the concentration range of 2 µg/L to 15 µg/L of arsenic. Necessary dilutions were made for samples containing high concentrations of arsenic. To validate the precision and accuracy of the method, recovery and duplicate food samples were analyzed (recovery = 97%, CV% = ±5). For the determination of cadmium, graphite furnace-atomic absorption spectrophotometry (GF-AAS) was used, following the method described earlier (14)(15)(16). Standard reference materials SRM 1643e (trace elements in water) and SRM 1568a (rice flour) from NIST (USA) were used for checking the precision and accuracy of the analysis, which were found to be excellent with CV% = ±5.
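For illustration, the linearity check quoted above (correlation coefficients of 0.9996-0.9998 over 2-15 µg/L) amounts to fitting a straight calibration line to the standards and back-calculating unknowns from it. The sketch below does this with numpy; the absorbance values are invented, not instrument readings from this study.

```python
# Hypothetical illustration of the HVG-AAS calibration check: fit a straight
# line to standards over 2-15 ug/L and report r, the correlation coefficient
# quoted in the text. Absorbance values are invented.
import numpy as np

conc = np.array([2.0, 5.0, 8.0, 10.0, 15.0])                # standards, ug As/L
absorbance = np.array([0.021, 0.052, 0.083, 0.104, 0.155])  # instrument response

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.4f} * C + {intercept:.4f}, r = {r:.4f}")

def as_concentration(a_sample: float, dilution: float = 1.0) -> float:
    """Back-calculate arsenic concentration (ug/L) from sample absorbance,
    applying a dilution factor for samples above the calibrated range."""
    return (a_sample - intercept) / slope * dilution

print(f"sample: {as_concentration(0.066):.2f} ug/L")
```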
Methods of cooking
To evaluate the effect of cooking on the arsenic concentration of rice, we performed in the laboratory two conventional Bangladeshi rice-cooking processes to determine the extent of arsenic removal. The third process was carried out at the household level in our study area.
Process A: An excessive volume of water was used for cooking rice, and the excess water was discarded, which is the tradition of the study households. Before cooking, the raw rice is washed with water until the water is clear and no longer turbid. The water used for washing and cooking the arsenic-contaminated rice was arsenic-free. The rice-washing water, the discarded water, and the cooked rice were analyzed for arsenic; in the case of cadmium, raw rice and cooked rice were analyzed.
Process B: An optimum volume of water was used for cooking rice so that no water was needed to be discarded. The rice-washed water and cooked rice were analyzed for arsenic.
Process C: Water generally is not discarded when cooking vegetables as it is perceived that discarding excess water would lead to loss of micronutrients. Cooked vegetables were collected from households as mentioned above and analyzed for arsenic and cadmium.
Before analysis, all the samples were treated as described earlier (15).
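The arithmetic behind comparing these processes is simply the fraction of arsenic lost between raw and cooked rice. A small helper is sketched below with made-up concentrations; the measured values appear in Table 1 and the Results.

```python
# Simple bookkeeping for comparing the cooking processes: the percentage of
# arsenic removed relative to raw rice. Concentrations here are invented.
def removal_percent(raw_ug_kg: float, cooked_ug_kg: float) -> float:
    """Percent of arsenic lost between raw and cooked rice."""
    return 100.0 * (raw_ug_kg - cooked_ug_kg) / raw_ug_kg

raw = 200.0        # ug As/kg in washed raw rice (hypothetical)
cooked_a = 66.0    # process A: cooked in excess water, water discarded
cooked_b = 170.0   # process B: optimum water volume, nothing discarded

print(f"process A removed {removal_percent(raw, cooked_a):.0f}% of arsenic")
print(f"process B removed {removal_percent(raw, cooked_b):.0f}% of arsenic")
```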
Statistical analysis
Statistical analyses were done using the SigmaStat software (version 3.1) (Systat Software, Inc.). Descriptive frequencies of the study variables were analyzed to assess validity of data, distributions, and assumptions of normality and equal variance. Concentrations of arsenic and cadmium were expressed as mean±standard deviation in raw and cooked food items. Between-groups comparison was done by Student's t-test, and for within-groups comparison, paired t-test was done. The overall significance level of these tests was set at <0.05.
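A sketch of these comparisons using scipy instead of SigmaStat is shown below; the paired test mirrors the within-group comparison (e.g., raw versus cooked arsenic in the same samples) and the independent test the between-groups comparison. All values are invented for illustration.

```python
# Sketch of the between- and within-group comparisons described above using
# scipy rather than SigmaStat; all numbers are invented placeholders.
from scipy import stats

raw_rice    = [210, 195, 230, 205, 188, 220]   # ug As/kg in raw samples...
cooked_rice = [ 95,  80, 110,  90,  75, 100]   # ...and the same samples cooked

# Within-group (paired) comparison: raw vs cooked arsenic in the same samples.
t_paired, p_paired = stats.ttest_rel(raw_rice, cooked_rice)

# Between-groups comparison, e.g. raw rice vs raw vegetables.
raw_veg = [60, 45, 80, 55, 70]
t_ind, p_ind = stats.ttest_ind(raw_rice, raw_veg)

print(f"paired t = {t_paired:.2f}, p = {p_paired:.4f}")
print(f"independent t = {t_ind:.2f}, p = {p_ind:.4f}")
```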
Cooking processes
Cooking processes A and B were compared using arsenic-free water in the laboratory. Washing rice until the water became clear reduced the concentration of arsenic in raw rice by 13-15%. The conventional cooking process A removed 54% of arsenic in the discarded water (Table 1 and Fig.).
Arsenic concentration in raw and cooked vegetables
The ranges of arsenic concentrations (µg/kg) detected in raw leafy vegetables (amaranth, spinach, and arum) and non-leafy vegetables (potato, bitter gourd, banana, and eggplant) are given in Table 2.
Among the cooked vegetables, the highest concentration of arsenic was found in amaranth (309 µg/kg). Since arsenic-contaminated water was used for cooking vegetables, the concentration of arsenic in cooked vegetables increased and was significantly higher than that in raw vegetables (p=0.024). The mean concentration of arsenic in raw rice was significantly higher than that in raw vegetables (p<0.001).
Cadmium in foodstuff
Assessment of cadmium in rice and vegetables (Table 3) showed that the mean cadmium concentration levels were similar in the raw rice (33.1 µg/kg) and raw vegetables (27 µg/kg) (p=0.6). Rice cooked by process A (discarding excess water) did not reduce the concentration of cadmium in the cooked rice (Fig.).
DISCUSSION
The results showed that the practice of discarding excess water while cooking rice reduced the concentration of arsenic but not of cadmium in the cooked rice. Again, not discarding water when cooking vegetables to avoid loss of micronutrients retained arsenic in cooked vegetables.
In the present study, we found high concentrations of arsenic in rice grains collected from Matlab, suggesting that rice is being contaminated with arsenic. This might be due to the contaminated water from deep and shallow tubewells used for irrigation. The cooking process also affects the concentration of arsenic in rice. In our study, cooking process A removed 67% of the arsenic from the cooked rice through washing with arsenic-free water and discarding the excess cooking water. In process B, we found that 85% of the total arsenic was retained in the cooked rice, thereby significantly increasing the risk of arsenic ingestion both through contaminated rice grains and through the cooking process. Cooking process A, used widely in rural Matlab, is a good practice for reducing arsenic contamination in cooked rice, provided arsenic-free water is used. Earlier studies have also shown that cooking rice following the traditional method of the subcontinent, i.e., process A, eliminated up to 54-57% of the total arsenic from cooked rice (17,18).
Arsenic-contaminated rice may be considered a catastrophic situation in South-East Asia, where the concentration of arsenic in underground water is high and rice is the staple food. It could also be an emerging threat both locally and globally, since rice is exported from these countries to many western regions/countries, including Europe and the United States (18).
The cooking process C is commonly practised in rural areas in which water is not discarded while cooking vegetables to avoid loss of micronutrients. This cooking process increases the concentration of arsenic in cooked vegetables and, thus, water sources being used for cooking vegetables have a great impact on the concentration of arsenic in food.
We found that raw rice contained a higher concentration of arsenic than raw vegetables. This might be due to two reasons. First, the vegetables collected from households in Matlab were grown in the vicinity of the homes (home-gardening), and pond- or river-water containing low levels of arsenic was used for irrigating them, unlike rice grown in large fields where arsenic-contaminated underground water is used for irrigation. Second, paddy has an enhanced capacity to accumulate arsenic compared to other cereal crops, which contain lower arsenic concentrations (19). Plants are known to have differential absorption and translocation of arsenicals to their various parts, e.g. roots, stems, and leaves (20,21), which may explain the higher levels of arsenic in leafy vegetables than in non-leafy vegetables. Metal transporter genes that translocate arsenic into and out of the plants may also play a role in the accumulation of arsenic in different parts of vegetables (22). Moreover, the uptake of arsenic and other toxic elements by plants from soil varies from region to region; some types of soil have a capacity for very strong bonding while others do not (23). The use of water from both shallow and deep tubewells for irrigation of agricultural lands, particularly during the dry period (November-March) for the production of high-yielding varieties of rice, augments the accumulation of excessive levels of arsenic (8).
Flooding changes the chemistry of soil dramatically and makes arsenic more soluble; thus, recurrent flooding in Bangladesh redistributes and mobilizes arsenic in the surface-layer of soil from highly-exposed zones to low-exposure zones, especially via drainage from contaminated sources (24,25). Rice grown in flooded fields picks up more arsenic than rice grown in unflooded fields (24). Ponds may also be contaminated with arsenic where arsenic-contaminated tubewells are installed near the ponds. The inflow of drainage from tubewells was found to be the major cause of arsenic contamination in pond-water (25).
Cadmium is a pollutant with strong negative effects on human and animal health. Chronic exposure to cadmium is associated with kidney damage, bone damage, cancer, low birthweight, spontaneous abortion, and many other ailments (26,27). We found that both rice and vegetables contained, on average, 30 µg/kg of cadmium. The maximum level of cadmium permitted in rice by the Japanese Government and by the Codex Alimentarius is 400 µg/kg (28,29), and for leafy vegetables the limit recommended by the Codex Alimentarius is 200 µg/kg (29), which indicates that the concentration of cadmium in Bangladeshi foods is still within the safe limit. However, since rice is a staple food in Bangladesh and is consumed twice a day in large quantities (the average adult consumption is 1,500 g of cooked rice per day) (30), it could result in accumulation of cadmium in the body within a short period. Iron-deficient individuals have a higher uptake of cadmium than those with balanced iron reserves (31). Thus, menstruating women and children with low iron stores have higher resorption of cadmium because of the increased expression of DCT-1, a metal ion transporter (32). The half-life of cadmium is 10 years; it accumulates in the body and is not metabolized and detoxified as arsenic is (27). Thus, chronic exposure to cadmium in staple food, especially among rice-eating populations, would have serious public-health consequences. In our study, both rice and vegetables contained cadmium. The presence of cadmium in rice and vegetables in Matlab could be due to the use of chemical fertilizers, e.g. triple superphosphate, and pesticides (composed of cadmium or its derivatives) for the cultivation of paddy and vegetables. Toxic metal deposition in urban and rural areas is a problem because of industrial waste, sewage sludge, chemical fertilizers, and pesticides. As a consequence, grazing animals are exposed to pollutants deposited on pastures, and humans are additionally exposed to cadmium-contaminated animal products, especially meat and milk (33)(34)(35).
Mitigation of arsenic in the food-chain can be achieved by using different cooking methods and by changing the social behaviours of communities through raising public awareness. Since cooking practices do not affect the concentration of cadmium in food, efforts should be focused on changing the types of fertilizers and pesticides used, which may lower the concentration of cadmium in food, e.g. by using rice varieties that require less pesticide and fertilizer.
Limitations
The limitations of the study are: (a) the food samples (rice and vegetables) collected were not well-representative of all types of food consumed in other areas of Bangladesh, and (b) fish and pulses, which are also consumed in the daily diet, were not included in the study.
Conclusions
This exploratory study has shown that arsenic and cadmium have entered the food-chain of Bangladesh and identified rice and vegetables as major sources of arsenic and cadmium. Both arsenic and cadmium are easily available in the environment, and the exposure via food, water, and other occupational sources can contribute to a spectrum of diseases (36). Since exposure to toxic metals has become an increasingly-recognized cause of morbidity globally (26,27,37), it is necessary to identify the additional toxicants, e.g. lead, manganese, chromium, and cobalt, that may enter into the food-chain from agricultural and industrial activities.
ACKNOWLEDGEMENTS
This study was funded by ICDDR,B. ICDDR,B gratefully acknowledges the donors which provide unrestricted support to the Centre's research. The authors thank Dr. G.B. Nair for his support and are very grateful to Dr. Rubhana Raqib for her constructive suggestions, critique, and ideas in preparing the manuscript at different stages.
The first two authors contributed equally to constructing the manuscript, and the other authors contributed by planning, editing, and reviewing and by adding relevant information to make it more precise.
"year": 2010,
"sha1": "6d644bcd7fa59d53f22c0b4ba1424e0ed1035b67",
"oa_license": "CCBY",
"oa_url": "https://www.banglajol.info/index.php/JHPN/article/download/6606/5075",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d644bcd7fa59d53f22c0b4ba1424e0ed1035b67",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
To reperfuse or not to reperfuse: a case report of Wellens' syndrome with suspected COVID-19 infection
Wellens' syndrome is known to be associated with left anterior descending artery occlusion, which can lead to an extensive anterior wall myocardial infarction. Thus, emergency cardiac catheterization is needed. However, during the coronavirus disease 2019 (COVID-19) pandemic, it is recommended that hemodynamically stable acute coronary syndrome patients with COVID-19 infection be treated conservatively in an isolated hospital ward. We report an 85-year-old patient with chief complaints of typical, squeezing chest pain in the past 4 h. The patient had had a high fever, dyspnea, sore throat, and fatigue for 3 days. He had previously come into contact with COVID-19-positive relatives. The patient was hemodynamically stable, and pulmonary auscultation revealed coarse rales throughout both lungs. Electrocardiography (ECG) evaluation during the pain episode showed non-specific ST-T changes in leads V2-V5. After sublingual nitrate was administered, ECG evaluation during the pain-free period revealed a biphasic T wave inversion in leads V2 and V3. Laboratory workup showed elevated cardiac markers and leucopenia with neutrophilia and lymphopenia. A rapid immunochromatographic test and the initial severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) reverse transcription-polymerase chain reaction (RT-PCR) evaluation from a nasopharyngeal swab showed negative results. However, radiographic evaluations suggested the diagnosis of COVID-19 infection. While waiting for the second RT-PCR evaluation, the patient was diagnosed with Wellens' syndrome with suspected COVID-19 infection. The patient was treated conservatively according to national guidelines and scheduled for elective cardiac catheterization. On the third day, the patient felt better and insisted on being discharged home. Ten days after discharge, the patient died of myocardial infarction. Emergency cardiac catheterization should be done for patients with Wellens' syndrome, regardless of COVID-19 infection status.
Background
Wellens' syndrome was first reported in 1982 and is known to be associated with left anterior descending (LAD) artery occlusion. If left untreated, it can lead to an extensive anterior wall myocardial infarction [1]. A later study with an 18-month follow-up period showed that the mortality rate was remarkably high in patients treated conservatively compared to patients treated with cardiac catheterization (26.67% vs. 0.88%) [2]. Thus, emergency cardiac catheterization is warranted in patients presenting with this syndrome.
However, during the coronavirus disease 2019 (COVID-19) pandemic, more caution is exercised for all invasive procedures, including cardiac catheterization. In patients with COVID-19 infection, the balance of staff exposure and patient benefit should be weighed carefully. Based on the patient's risk, conservative therapy may be sufficient for non-ST-segment elevation myocardial infarction (NSTEMI) patients with COVID-19 [3]. The Indonesian Heart Association has also published a national practical clinical guideline for NSTEMI patients with COVID-19, recommending that patients with stable hemodynamics be treated conservatively in an isolated hospital ward [4]. In this report, we present a patient with Wellens' syndrome with suspected COVID-19 infection based on clinical symptoms and radiographic findings. The patient was treated conservatively according to the national guideline for NSTEMI with COVID-19 infection.
Case presentation
An 85-year-old man came to the emergency room with the chief complaint of typical, squeezing chest pain in the past 4 h. The patient also experienced diaphoresis and nausea following chest pain. In the past 3 days, the patient had a high fever, dyspnea, sore throat, and fatigue. Past medical history of type 2 diabetes mellitus or hypertension was denied. He had a history of contact with one of his relatives who tested positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) based on reverse transcription-polymerase chain reaction (RT-PCR) evaluation.
Vital signs on admission were as follows: blood pressure 130/90 mmHg, respiratory rate 26 breaths/min, heart rate 104 beats/min, right axillary temperature 39 °C, and oxygen saturation 94% on room air, rising to 99% with a simple mask delivering 6 L/min of oxygen. Pulmonary auscultation revealed coarse rales throughout both lungs. Other physical examinations were within normal limits. Twelve-lead electrocardiography (ECG) performed when the patient was in pain showed non-specific ST-T changes in leads V2-V5 (Fig. 1a). After the patient received sublingual nitrate, the chest pain subsided, and the ECG evaluation showed a biphasic T wave inversion and a minimally elevated ST-segment in leads V2 and V3 (Fig. 1b). Before the patient was transferred to the hospital ward, the ECG evaluation in the pain-free period revealed deeply inverted T waves in leads V2-V4 (Fig. 1c).
Since the diagnosis of COVID-19 infection could not be ruled out until the RT-PCR assay was repeated, the patient was diagnosed with Wellens' syndrome with suspected COVID-19 infection. Because the patient was categorized as high-risk NSTEMI (high GRACE score but with stable hemodynamics), with a high neutrophil-to-lymphocyte ratio and suspected COVID-19 infection, he was treated conservatively in the intensive care unit (ICU) isolation ward while waiting for early elective cardiac catheterization. The patient received dual antiplatelet therapy (DAPT) of aspirin (80 mg once daily) and clopidogrel (75 mg once daily), fondaparinux (2.5 mg once daily), atorvastatin (80 mg once daily), bisoprolol (2.5 mg once daily), an isosorbide dinitrate pump (1 mg per hour), paracetamol (500 mg thrice daily), and methisoprinol (500 mg thrice daily).
On the third day, the patient's oxygen saturation was 98% without oxygen supplementation, and the ECG evaluation reverted to a biphasic T wave in leads V2 and V3 (Fig. 4). The CK-MB level was still above the normal limit (19 ng/mL). The patient insisted on being discharged and refused to be referred for early elective cardiac catheterization because he already felt better. The patient and his family signed the consent form to be discharged home despite the high chance of myocardial infarction in the near future. The patient was also aware that the diagnosis of COVID-19 infection could not yet be ruled out, because the second RT-PCR from a nasopharyngeal swab had not been performed; thus, he and his family had to self-quarantine at home for 14 days.
On the fourth day, the patient was discharged and received aspirin (80 mg once daily) and clopidogrel (75 mg once daily) as his take-home medication. Two weeks later, in a follow-up session via telephone, one of the family members informed us that the patient had died 10 days after being discharged from our hospital due to cardiac arrest secondary to a new-onset ST-elevation myocardial infarction. Due to the limited facilities at the other hospital, the patient did not undergo coronary angiography.
Discussion
Our patient fulfilled the WHO suspect case criteria for COVID-19, as follows: a patient with acute respiratory illness (fever and at least one sign/symptom of respiratory disease, e.g., cough, shortness of breath) AND a history of travel to or residence in a location reporting community transmission of COVID-19 disease, or having been in contact with a confirmed or probable COVID-19 case during the 14 days prior to symptom onset; or a patient with severe acute respiratory illness (fever and at least one sign/symptom of respiratory disease, e.g., cough, shortness of breath, AND requiring hospitalization) AND the absence of an alternative diagnosis that thoroughly explains the clinical presentation [5]. This was also supported by the laboratory abnormalities found in our patient (lymphopenia, leucopenia, thrombocytopenia, elevated creatinine, AST and ALT, and hypoxemia on blood gas analysis) and by pneumonia on chest X-ray and CT findings (bilateral, peripheral, patchy opacities on chest X-ray; bilateral ground-glass opacities, crazy paving, and multifocal consolidation on chest CT), which together suggest a high probability of COVID-19 infection. It has been suggested that chest CT findings usually peak around 9-13 days [6,7]. The WHO criteria for confirmed COVID-19 are based on the detection of unique sequences of SARS-CoV-2 viral RNA by nucleic acid amplification tests, such as real-time RT-PCR, and require at least two positive results [8]. For initial diagnostic testing, the Centers for Disease Control and Prevention (CDC) recommends collecting and testing an upper respiratory specimen, with a nasopharyngeal swab as the preferred specimen choice [9]. However, multiple negative tests are required to exclude a diagnosis of COVID-19.

Fig. 2: Chest X-ray showed the preceding consolidation persisting with new consolidative changes in the left apical-middle-lower zone and the right lower peripheral region.
Fig. 3: Chest computed tomography scan showed diffuse pneumonia in both lungs with multifocal ground-glass opacities and crazy paving patterns.
The CDC also states that negative SARS-CoV-2 results from RT-PCR do not preclude COVID-19 infection and should not be used as the sole basis for patient treatment decisions, especially when not supported by the clinical observations, patient history, and epidemiological information [9]. This is because RT-PCR has a sensitivity as low as 6-70% for initial diagnosis despite its high specificity [10]. In our case, although the initial SARS-CoV-2 RT-PCR showed a negative result, the chest CT scan showed a typical manifestation of COVID-19. This might be explained by the finding from a previous study that the sensitivity of the initial chest CT scan is greater than that of the initial RT-PCR assay (98% vs 71%, p < 0.001) [11].
To establish the diagnosis of Wellens' syndrome, it is suggested that several criteria be fulfilled, which include: (1) deep symmetrically inverted T waves or biphasic T waves in leads V2 and V3, (2) an isoelectric or minimally elevated (< 1 mm) ST-segment, (3) absence of precordial Q waves, (4) a history of angina, (5) the pattern being present during a pain-free period, and (6) a normal or mildly elevated creatine phosphokinase (less than two times the upper limit of normal) [12]. In our case, the patient fulfilled all criteria for Wellens' syndrome except the cardiac marker. However, since the cardiac marker is known to be frequently abnormal in patients with COVID-19 [13], we argued that the cardiac marker criterion could be exempted in this situation.
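For readers who find a checklist easier to follow, the six criteria can be encoded as below; the field names and the helper are ours for illustration and carry no clinical authority.

```python
# Illustrative encoding of the six published criteria; the data structure and
# helper are our own sketch, not from the cited source.
from dataclasses import dataclass

@dataclass
class WellensFindings:
    t_wave_pattern_v2_v3: bool       # deep symmetric inversion or biphasic T
    st_elevation_mm: float           # must be isoelectric or < 1 mm
    precordial_q_waves: bool
    history_of_angina: bool
    pattern_in_pain_free_period: bool
    ck_ratio_to_upper_limit: float   # normal or mildly elevated (< 2x)

def meets_wellens_criteria(f: WellensFindings) -> bool:
    return (f.t_wave_pattern_v2_v3
            and f.st_elevation_mm < 1.0
            and not f.precordial_q_waves
            and f.history_of_angina
            and f.pattern_in_pain_free_period
            and f.ck_ratio_to_upper_limit < 2.0)

# This patient: all criteria met except the cardiac marker, the criterion the
# authors argue may be exempted under COVID-19 (the 2.4 ratio is illustrative).
patient = WellensFindings(True, 0.5, False, True, True, 2.4)
print(meets_wellens_criteria(patient))   # False on the strict checklist
```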
There have been some reports regarding the association between COVID-19 infection and cardiovascular complications, including myocardial injury, myocarditis, deep vein thrombosis (DVT), and pulmonary embolism (PE) [14]. Our case might be related to COVID-19 infection-induced myocardial injury, infarction, or inflammation due to the systemic inflammatory response, marked by an elevated CK-MB level. However, it is unlikely that our patient had DVT, because there were no supporting clinical findings such as warmth or pain in the extremity or asymmetrical swelling [15]. PE could also be ruled out because there was no filling defect in the pulmonary artery on the chest CT evaluation [16].
It could be argued that this type of case is usually diagnosed as high-risk anterior NSTEMI. However, we would like to stress the use of the Wellens' syndrome nomenclature to underline the high probability of total or near-total LAD occlusion, which is not commonly found in high-risk NSTEMI patients. Patients with Wellens' syndrome will develop extensive anterior wall infarction if aggressive intervention is not undertaken, despite the relief of symptoms with medical management; half of the patients will develop the infarction within 1 week of admission [1]. Thus, in a normal situation, our patient should have undergone emergency cardiac catheterization. Moreover, our patient had a GRACE score of 159. The European Society of Cardiology (ESC) and the American College of Cardiology/American Heart Association recommend that an invasive strategy be performed in less than 24 h in patients with high-risk NSTEMI (GRACE score more than 140) [17,18]. However, in patients with suspected COVID-19 infection, the management algorithm is different. The national guideline published by the Indonesian Heart Association recommends conservative treatment in an isolated hospital ward if the patient has stable hemodynamics, to reduce the transmission risk of COVID-19, especially when a special standardized facility is not available [4]. This recommendation is in line with the Chinese Society of Cardiology guideline, which recommends that patients with high-risk NSTEMI be hospitalized and treated conservatively in a designated hospital [19]. The American College of Cardiology suggests that in patients with stable NSTEMI, conservative therapy may be sufficient on the basis of patient risk [3]. In contrast, the guideline published by the ESC recommends that patients with high-risk NSTEMI still be treated with an early invasive strategy in less than 24 h after admission in a COVID-19-designated hospital [20]. According to the Egyptian Society of Cardiology guidelines, patients with high-risk NSTEMI should undergo early catheterization in less than 24 h; however, this is only possible if the hospital is not overwhelmed and all precautions to prevent the dissemination of infection and protect the medical staff are adopted. Nevertheless, if the prevalence of COVID-19 increases and overburdens health system resources, patients with high-risk NSTEMI should be hospitalized and treated conservatively in isolation wards or the ICU of a non-designated hospital [21].
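The timing rules quoted from these guidelines reduce to a small decision table. The following hedged sketch encodes them as a reading aid only, not as clinical decision software; the function and its flags are our own encoding.

```python
# Reading aid only: a toy encoding of the triage rules cited in the text.
def nstemi_strategy(grace_score: int, stable_hemodynamics: bool,
                    covid_suspected: bool) -> str:
    if not stable_hemodynamics:
        return "emergency invasive strategy"
    if grace_score > 140:  # high risk per the cited ESC / ACC-AHA threshold
        # The Indonesian (and Chinese) guidance cited here diverges from ESC
        # when COVID-19 is suspected and no designated facility is available.
        return ("conservative management in isolation ward"
                if covid_suspected else "early invasive strategy (<24 h)")
    return "conservative / ischemia-guided strategy"

print(nstemi_strategy(159, True, True))    # this patient's situation
print(nstemi_strategy(159, True, False))   # the same score outside a pandemic
```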
According to the national guideline, DAPT (clopidogrel or ticagrelor and aspirin) and a high-dose statin should be given as conservative treatment during hospitalization [4]. We opted to treat our patient with clopidogrel because of the moderate bleeding risk (CRUSADE score 37). A recent meta-analysis showed that ticagrelor was associated with a higher risk of major bleeding compared to clopidogrel in East Asian patients with acute coronary syndrome [22]. In addition to the recommended treatments, the patient also received fondaparinux. After hospitalization, the patient was given DAPT as take-home medication. It is recommended that DAPT be given for 1 year after discharge [17].
The limitations of this report were the absence of coronary angiography and echocardiography evaluation. Therefore, the diagnosis could not be confirmed, and differential diagnoses such as PE and myocarditis could not be totally excluded. Coronary angiography was not performed because it was not recommended by the Indonesian Heart Association. Echocardiography was not performed because there was no published guideline on performing echocardiography in patients with suspected COVID-19 infection, and there was also a shortage of standardized personal protective equipment at the time of this case.
Conclusion
In cases of acute coronary syndrome in the COVID-19 pandemic situation, where the risk of infectious spread is very high, risk stratification is essential to determine the treatment strategy. Following the national guideline in this situation, conservative management of high-risk NSTEMI is preferred in the acute phase, with favorable outcomes in the acute phase. However, in the case of Wellens' syndrome, where significant LAD occlusion is suspected, urgent early cardiac catheterization should be done, regardless of the COVID-19 infection status. Recognition of the ECG pattern of Wellens' syndrome is also crucial, because Wellens' syndrome has a poor prognosis despite the lack of symptoms in early stable conditions.
"year": 2020,
"sha1": "308092ddb8c21e72bdba1e2897435de0b4d3a9f4",
"oa_license": "CCBY",
"oa_url": "https://tehj.springeropen.com/track/pdf/10.1186/s43044-020-00094-w",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "308092ddb8c21e72bdba1e2897435de0b4d3a9f4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Anti-phase Variation of Hydrology and In-Phase Carbon Accumulations in Two Wetlands in Southern and Northern China Since the Last Deglaciation
To examine the spatial patterns of hydrological variations in the southern and northern East Asia Monsoonal (EAM) region on millennial time scales, as well as to investigate the relationship between hydrological changes and carbon accumulation in these regions with contrasting environmental backgrounds, we performed facies-based hydrological reconstructions in two wetlands: Midiwan wetland (37°39′N, 108°37′E) and Dahu wetland (24°45′N, 115°2′E), located in a semi-arid loess-desert transitional zone and in humid southern China, respectively. Our reconstructions revealed an anti-phase pattern of precipitation in these two wetlands on a millennial time scale. However, owing to their different responses to the contrasting hydrological conditions, the carbon accumulations at these two sites showed an in-phase pattern on a millennial time scale. Our results imply that the carbon accumulations at these two sites are mainly controlled by local hydrological conditions. The wetlands in both southern and northern China were found to be expanding during the interval from 6 to 4 cal. ka BP (ka = kilo annum), as inferred from higher total organic carbon (TOC) content. During the Mystery Interval (MI, from 17.5 to 14.5 cal. ka BP), however, both the hydrological conditions and the carbon accumulations at these two sites showed an in-phase pattern.
INTRODUCTION
Wetlands represent one of the most important terrestrial ecosystems, with their natural accumulation of organic matter closely related to hydrological processes (Billett et al., 2004; Holden, 2005). Although wetlands account for only 3% of the global terrestrial land area, they are regarded as one of the most important carbon reservoirs due to their high carbon density (Frolking and Crill, 1994; Blodau, 2002; Strack et al., 2006; Limpens et al., 2008; Yu et al., 2010; Leifeld et al., 2019). They are both a natural and an anthropogenic source of greenhouse gas (e.g., CH4) emissions to the atmosphere because of significant changes in decomposition processes under different climates, harvests, and disturbances such as fires (Zoltai et al., 1998; Page et al., 2002; Olson et al., 2013; Chimner et al., 2017; Rigney et al., 2018).
Carbon accumulation in a wetland is determined by the balance between photosynthetic uptake and decomposition loss, which is mainly controlled by regional climatic conditions, especially the hydrological process (Frolking et al., 2010; Rennermalm et al., 2010). Generally, carbon accumulation increases with an increase in soil moisture, which is influenced by the groundwater level or precipitation (Nijp et al., 2019; Lazcano et al., 2020). However, at some waterlogged sites, carbon accumulation decreases with increasing soil water because waterlogging inhibits the growth of plants (Figure 1). Here, we put forward a conceptual framework by hypothesizing two cases for the relationship between total organic carbon (TOC) accumulation and soil moisture conditions in wetlands with the typical hydrological conditions of types A and B (Figure 1): (1) the variations in TOC at two wetlands would be anti-phased if the variations in precipitation at the two sites are in phase, or (2) the variations in TOC would be in-phased if precipitation at the two sites is anti-phased. In the latter case, there may exist several periods during which the proportions of both wetlands increase. Therefore, the total carbon accumulation is ultimately determined by the spatial pattern of hydrology.

FIGURE 1 | Conceptual model showing the relationship between the water content of soil (WL) and soil carbon accumulation (CA). The upper plot shows WL (horizontal axis) against plant production (vertical axis), and the lower plot WL against carbon accumulation. A and B show the typical relationship between WL and CA under two contrasting hydrological conditions.
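A minimal numeric sketch of this framework is given below. It assumes, as an illustration only, a Gaussian-shaped response of carbon accumulation to soil water content, with a wet-side optimum for a semi-arid site (type A) and a drier optimum for a humid site (type B); the optima and curve widths are invented parameters, not values from this study.

```python
# Toy version of the conceptual model in Figure 1, under our assumption of
# a Gaussian response of carbon accumulation (CA) to soil water content (WL).
import numpy as np

def carbon_accumulation(wl: float, optimum: float, width: float = 0.25) -> float:
    """Relative CA as a Gaussian function of soil water content WL (0-1)."""
    return float(np.exp(-((wl - optimum) ** 2) / (2 * width ** 2)))

OPT_A, OPT_B = 0.8, 0.4   # type A (semi-arid) peaks wetter than type B (humid)

# Case 2 of the hypothesis: anti-phased precipitation. When A is wet while B
# is dry (and vice versa), the two CA values rise and fall together, i.e.
# TOC varies in phase even though rainfall does not.
for wl_a, wl_b in [(0.8, 0.4), (0.6, 0.6), (0.4, 0.8)]:
    print(f"WL(A)={wl_a:.1f}, WL(B)={wl_b:.1f} -> "
          f"CA(A)={carbon_accumulation(wl_a, OPT_A):.2f}, "
          f"CA(B)={carbon_accumulation(wl_b, OPT_B):.2f}")
```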
From a global perspective, there are spatial differences in both the amount of and the trends in precipitation across regions (Wang et al., 2012). Numerical modeling studies revealed that the variation in summer precipitation in northern and southern East Asia exhibits an anti-phase pattern on the orbital time scale due to the ENSO-like response to orbital forcing (Shi et al., 2012). Stable carbon isotope records of peat sequences from the eastern Tibetan Plateau and northeastern China also show an anti-phase pattern of monsoon precipitation on the millennial-centennial time scale (Hong et al., 2005, 2010, 2014). A recent reconstruction of lake levels at Lake Chenghai in southwest China likewise showed an out-of-phase variation of precipitation on the orbital time scale with respect to boreal summer insolation, which is regarded as the driver of Asian summer monsoon precipitation (Xu et al., 2020). For northern and southern East Asia, however, it is unclear whether the anti-phase spatial pattern of monsoonal precipitation exists on the millennial time scale. More importantly, the relationship between carbon accumulation and regional hydrological conditions in southern and northern East Asia remains unknown. More studies on archives containing information about hydrological processes and carbon cycling are needed to understand the history, variability, and dynamics of environmental change in these two regions.

FIGURE 1 | Conceptual model showing the relationship between the water content of soil (WL) and soil carbon accumulation (CA). The upper plot shows WL (horizontal axis) against plant production (vertical axis), and the lower plot shows WL against carbon accumulation. A and B mark the typical relationship between WL and CA under two contrasting hydrological conditions.
Here, we chose two well-dated peat sequences representing contrasting hydrological and temperature regimes to test our hypotheses. The objectives of this study were: (1) to examine the spatial pattern of changes in precipitation (i.e., the hydrological cycle) in southern and northern East Asia on a millennial time scale and (2) to investigate the relationship between hydrological changes and carbon accumulation in wetlands located in a semi-arid loess-desert transitional zone and in humid southern China.
STUDY SITES
The two wetlands used in this study are Midiwan (MDW, 37°39′N, 108°37′E), located in the loess-desert transitional zone in northern China, and Dahu (DH, 24°45′N, 115°2′E), located in the Nanling Mountain area in southern China (Figure 2). These wetlands lie in a semi-arid area and a humid area, respectively, where contrasting changes in precipitation yield different hydrological conditions and, consequently, determine TOC accumulation. As in scenario A of Figure 1, in the semi-arid region (i.e., MDW) higher soil moisture favors plant growth and carbon accumulation, whereas in the humid region (i.e., DH), as in scenario B of Figure 1, additional water hinders plant growth and carbon accumulation once the water content of soil (WL) exceeds the optimum values (Figure 1).
Midiwan (MDW) Wetland
The MDW wetland, at an altitude of 1400 m a.s.l., is located southwest of Yulin City in northern Shaanxi Province, on the southern margin of the Mu Us Desert. As shown in Table 1, a quadrat survey at Lake Hongjiannao near the MDW site shows that the modern plants at this site are mainly grasses from nine families. The climate is characterized by a semi-arid continental monsoon, with an annual precipitation of 395 mm and an annual mean temperature of 7.8°C (Figures 3A,B). Water is the most important limiting factor for plant growth at this site; the amount and timing of precipitation are critical. During humid periods, with the retreat of the desert, palaeosol or peat was deposited; during drier periods, with the advance of the desert, loess or eolian dust was deposited on the land surface (Porter and Zhou, 2006). In this loess-desert transitional zone, therefore, the alternating deposition of wetland sediments and wind-blown dust reflects the history and variability of summer monsoon activity (Porter and Zhou, 2006). Specifically, wetland deposits reflect stronger monsoon activity that brings more precipitation to this region, while eolian dust deposits reflect the retreat of the monsoon front, which causes drier conditions and desertification. Following the stratigraphic description of Zhou et al. (1996), the 13.8 m MDW peat sequence was divided into 13 depositional units (Table 2), reflecting millennial-scale changes in hydrological conditions in northern China.
Dahu (DH) Wetland
The DH swamp, covering an area of 0.8 km² at about 260 m a.s.l., has developed in a small, closed intermontane basin in the eastern Nanling Mountain region of southern China (Zhou et al., 2004; Zhong et al., 2010, 2011). The hydrological conditions of the swamp depend largely on precipitation, because no river discharges into it (Xue et al., 2009). In this area, the present-day annual average temperature is 17.8°C and the annual precipitation is ∼1600 mm, falling mainly from March to September (Figure 3C). The modern vegetation around this site is shrubbery with ferns and grasses (Table 3) (Zhong et al., 2010); ferns and grasses are the main contributors to the peat. Zheng et al. (2008) conducted systematic drilling along a track line from the northeast to the southwest, and the cross section along the drilling sites was reconstructed from stratigraphic correlation (Figure 4). In this region, where the WL is usually higher than W_opt2, an increase in WL would inhibit plant growth and carbon accumulation. During humid times, lacustrine mud or sand is deposited on the surface because the valley is covered with water; only during relatively drier periods, as the lake shrinks, do the plants that contribute to peat accumulation flourish. The stratigraphic sequence at this site is therefore composed of peat interbedded with lacustrine sediments (Zhou et al., 2004; Zheng et al., 2008; Xue et al., 2009). The lacustrine sand represents waterlogged conditions caused by excessive precipitation, while the peat layers represent a relatively drier environment under moderate precipitation. The thickest deposits occur at the center of the swamp, reaching back to 42 cal. ka BP (Zheng et al., 2008). In this study, a 3.5 m long core was taken at the center of the swamp and divided into 10 depositional units (Table 2; Zhou et al., 2004).
Hydrologic Grades of Different Facies
Based on our conceptual model, the production of plants, which controls the accumulation of organic carbon in the soil, is governed by the WL. As shown in Figure 1, when the WL is lower than the minimum value (W_min), plants cannot survive, and accordingly no organic carbon is deposited in the stratigraphic section. As the WL increases from W_min to W_opt1 (the lower optimum threshold for plant growth), plant production increases. For a WL between W_opt1 and W_opt2 (the upper optimum threshold for plant growth), production remains constant. For a WL above W_opt2, production decreases because waterlogging inhibits plant growth.
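A minimal sketch of this trapezoidal production curve follows; since Figure 1 specifies only the qualitative shape, the linear decline beyond W_opt2 and all threshold values here are illustrative assumptions.

```python
def plant_production(wl, w_min, w_opt1, w_opt2, p_max=1.0):
    """Trapezoidal production-vs-soil-water curve from the conceptual model.

    Below w_min nothing grows; production ramps up between w_min and
    w_opt1, holds at p_max between w_opt1 and w_opt2, then declines as
    waterlogging inhibits growth. The linear decline and the point where
    it reaches zero are assumptions, not values from the study.
    """
    if wl <= w_min:
        return 0.0
    if wl < w_opt1:
        return p_max * (wl - w_min) / (w_opt1 - w_min)
    if wl <= w_opt2:
        return p_max
    w_zero = 2 * w_opt2 - w_opt1     # assumed soil-water level of zero growth
    return max(0.0, p_max * (w_zero - wl) / (w_zero - w_opt2))

# Type-A (semi-arid) sites sit mostly left of w_opt1; type-B (humid)
# sites sit mostly right of w_opt2, which reverses the sign of the
# response of production (and hence carbon accumulation) to added water.
print(plant_production(0.3, 0.2, 0.5, 0.8))   # wetter helps: rising limb
print(plant_production(0.9, 0.2, 0.5, 0.8))   # wetter hurts: falling limb
```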
For the MDW site, we assigned a hydrologic grade of 0 to peat layers, 2 to lacustrine deposits, and -1 to -3 to eolian deposits to mark dry conditions (Figure 5). Based on the description of the MDW profile (Zhou et al., 1996), using this protocol and also consulting the results of pollen analyses and stable carbon isotope measurements, we reconstructed the hydrological conditions of the MDW area since 16 cal. ka BP (Table 2).
Similarly, for the DH site, we assigned a grade of 0 to peat layers and 1 to 3 to lacustrine deposits (Figure 5). Based on the description of each stratigraphic unit (Zhou et al., 2004), we reconstructed the hydrological grades in the DH core since 18 cal. ka BP (Table 2).
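For illustration, this grading protocol can be written as a simple lookup. The facies labels below and the exact grade given to each eolian or lacustrine subtype are hypothetical stand-ins, since the text specifies only the grade ranges.

```python
# Hypothetical lookup implementing the grading protocol above. The facies
# labels and per-subtype grades are illustrative stand-ins for the
# stratigraphic descriptions in Table 2.
GRADES_MDW = {"peat": 0, "lacustrine_silt": 2,
              "palaeosol": -1, "loess": -2, "eolian_sand": -3}
GRADES_DH = {"peat": 0, "lacustrine_mud": 1,
             "lacustrine_silt": 2, "lacustrine_sand": 3}

def grade_sequence(units, table):
    """Turn ordered (facies, mid_depth_cm) units into (depth, grade) pairs."""
    return [(depth, table[facies]) for facies, depth in units]

mdw_units = [("eolian_sand", 50), ("peat", 420), ("lacustrine_silt", 1000)]
print(grade_sequence(mdw_units, GRADES_MDW))  # [(50, -3), (420, 0), (1000, 2)]
```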
Chronological Framework
The chronological frameworks for the two peat sites were established by radiocarbon dating (Zhou et al., 1996, 2004). A total of 23 and 17 samples from the MDW and DH profiles, respectively, including fossil wood, charcoal, and peat, were collected for dating. Radiocarbon ages were calibrated using the CALIB software (Stuiver et al., 1998) to obtain calendar ages, and the chronological framework for each profile was established by linear regression between calibrated ages and depths. Details of the dating materials, methods, and chronological frameworks can be found in Zhou et al. (1996) for the MDW site and Zhou et al. (2004) for the DH site.
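A minimal sketch of such a linear age-depth model follows, with invented depths and calibrated ages in place of the published CALIB output.

```python
import numpy as np

# The depths and calibrated ages below are invented for illustration;
# the real calibrated ages come from the CALIB output for each profile.
depths_cm = np.array([100, 350, 600, 900, 1200])
cal_ages_ka = np.array([2.1, 5.0, 7.9, 11.2, 14.6])

slope, intercept = np.polyfit(depths_cm, cal_ages_ka, 1)  # age = a*depth + b

def depth_to_age(depth_cm):
    """Interpolated calendar age (cal. ka BP) at any depth in the core."""
    return slope * depth_cm + intercept

print(f"age at 500 cm = {depth_to_age(500):.1f} cal. ka BP")
```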
Total Organic Carbon (TOC)
The TOC was determined for the two studied sections (Zhou et al., 1996, 2004). At both the MDW and DH sites, the stratigraphic sequence is composed of interbedded peat and sand layers, indicating large differences in carbon accumulation at different times: organic carbon was deposited only during the periods when peat layers formed. For sites with such drastic facies changes, carbon accumulation is mainly determined by the organic carbon input into the layers; compared with this input, other factors, such as microbial activity and depositional conditions, are negligible. Therefore, combined with the reconstructed hydrological conditions, we used the TOC data to explore the relationships between hydrology and carbon accumulation at these two sites with contrasting climatic conditions.
Hydrological Variations in MDW and DH
There were five wetter periods and six drier periods at MDW during the last 16 cal. ka (Figure 6A). The highest WL at this site occurred from 14.5 to 13.5 cal. ka BP, as indicated by a layer of light grayish-green lacustrine silt and silty peat. During the Younger Dryas event, the deposits at the MDW site varied from silt to silty peat and then to eolian sand (1090-815 cm), indicating unstable hydrological conditions alternating between dry and humid (Zhou et al., 1996). The hydrological conditions at the MDW site were generally humid during the early Holocene (11.5-8.5 cal. ka BP), except for an anomalous dry event at ∼10 cal. ka BP. A prolonged dry period occurred from 8.5 to 6.5 cal. ka BP, as indicated by a set of grayish-yellow eolian deposits. The WL returned to a high level from 6.5 to 3.5 cal. ka BP, as indicated by a set of silty peat deposits (550-310 cm). We note that the deposits of this humid period contain more minerals than those of the early Holocene, suggesting that it occurred within a drying trend. A set of silt-with-mud bands then developed, mantled by a modern active dune at the top, suggesting a strengthened drying trend after 3.5 cal. ka BP.

The general trend of hydrological variation in the DH area since 18 cal. ka BP is cyclic: the WL decreased from 18 to 15 cal. ka BP, increased from 15 to 11 cal. ka BP, and then decreased from 11 to 3.5 cal. ka BP (Figure 6B). The WL fell from a peak at 18 cal. ka BP to a low at ∼15 cal. ka BP. A dry period occurred from 15.5 to 14.5 cal. ka BP, as inferred from a brown herbaceous-rich peat layer at 280-254 cm in the section. This dry period was followed by a humid period from 14.5 cal. ka BP to the beginning of the Holocene, with a hiatus corresponding to the Younger Dryas event. The WL in this area during the early Holocene was relatively high. A dry event occurred at ∼9 cal. ka BP and lasted for ∼1000 years (9.5-8.5 cal. ka BP). The WL began to decrease after 7 cal. ka BP and reached its lowest level in the period from 6 to 3.5 cal. ka BP, then increased slightly after 3.5 cal. ka BP.
Hydrology and Carbon Accumulation
The variations in carbon accumulation, indicated by the TOC proxy, and the hydrological conditions in the MDW area during the last 16 cal. ka are generally synchronous (Figure 6A): a higher WL was accompanied by higher carbon accumulation, except during a wetter interval of lower carbon accumulation from 14.5 to 14 cal. ka BP recorded by a layer of lacustrine silt. In contrast, hydrology and carbon accumulation in the DH area during the last 18 cal. ka show an asynchronous pattern (Figure 6B): a higher WL coincided with lower carbon accumulation.
The carbon accumulations at the two wetlands show a generally in-phase relationship on the millennial time scale (Figure 6). For example, during 10-8 cal. ka BP and 6-4 cal. ka BP carbon accumulation was high at both sites, while during 8-6 cal. ka BP it was low at both sites. Although their millennial-scale trends are generally synchronous, the highest rates of carbon accumulation occurred at different times: during the early Holocene in northern China but during the middle Holocene in southern China.
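One simple way to make "in-phase" quantitative is to place both TOC records on a common millennial grid and correlate them; the sketch below does this with invented placeholder values rather than the measured TOC data.

```python
import numpy as np

# Placeholder TOC values on a common 1-kyr grid (4-16 cal. ka BP); these
# are invented numbers, not the measured TOC of the MDW and DH cores.
bins_ka = np.arange(4, 17)
toc_mdw = np.array([3, 5, 6, 2, 1, 4, 7, 6, 3, 2, 4, 5, 3], dtype=float)
toc_dh = np.array([4, 6, 5, 3, 2, 5, 6, 7, 2, 3, 5, 4, 2], dtype=float)

r = np.corrcoef(toc_mdw, toc_dh)[0, 1]
print(f"millennial-scale TOC correlation: r = {r:.2f}")  # r > 0 => in-phase
```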
Comparison of Hydrological Reconstruction With Other Proxies
To validate our conceptual model, we compared the facies-based reconstructions of hydrological conditions with other proxy-based reconstructions. The comparison shows that our facies-based reconstructions at these two peatlands are consistent with other proxy-based reconstructions, implying that facies analysis is a sound approach to hydrological reconstruction for sections with drastic facies variation. Our result at the MDW site is generally consistent with pollen and stable carbon isotope analyses performed on the same section (Zhou et al., 1996): dry episodes generally correspond to intervals with low pollen concentrations and positively biased stable carbon isotope values. The proxy-based lake-level reconstruction from Lake Daihai (Sun et al., 2009), northeast of the MDW site, also shows hydrological fluctuations similar to those in the MDW area. The shift from the dry period (9-7 cal. ka BP) to the humid period (7-3.5 cal. ka BP) in the MDW area can be correlated with the lake-level change at Lake Daihai, implying that the MDW site may record hydrology at a regional scale. Archaeobiological evidence shows that rice agriculture in northwest China first emerged >5000 years ago and lasted for >1000 years (Li et al., 2007); this evidence for a relatively humid period from 5 to 4 cal. ka BP in northern China further supports the expansion of wetlands in the mid-Holocene. For the DH site, other proxies such as pollen concentration (Zhou et al., 2004), humification (Zhong et al., 2011), and biomarker records (Zhou et al., 2005; Zheng et al., 2009) agree well with our findings.
Hydrology and Carbon Accumulation During the Holocene
The reconstructed hydrological conditions and carbon accumulations at the two sites show that carbon accumulation is generally controlled by regional hydrological conditions. This result rejects the first case of our hypothesis, in which carbon accumulation is anti-phase, and supports the second case. We note that in several periods, such as 9.5-8.5 cal. ka BP and 6-4 cal. ka BP, carbon accumulation increased in both northern and southern East Asia (Figure 7), which matches well with changes in global peatland coverage (Yu et al., 2010).
The period from 6 to 4 cal. ka BP was a distinctive interval during which precipitation in northern and southern East Asia showed an anti-phase spatial pattern. The environment in northern East Asia in this period was relatively humid, as indicated by our MDW results and the proxy-based reconstruction from Lake Daihai (Sun et al., 2009), while southern East Asia during the same period was relatively dry, as indicated by our hydrological reconstruction and other reports (Zhou et al., 2004, 2005; Xiao et al., 2007; Zheng et al., 2009; Zhong et al., 2010, 2011). This spatial pattern of precipitation in East Asia favored the development of wetlands during the mid-Holocene. The total area of global peatlands expanded in this period (Yu et al., 2010), matching the peatland expansion in both the MDW and DH areas (Figure 7).
The results of this study fill a major knowledge gap regarding the seesaw pattern of hydrological change in the northern and southern East Asian monsoon (EAM) regions on the millennial time scale. As shown in Figure 8, the gridded linear trends in precipitation in China can be divided into three regions, and the MDW and DH sites are located in regions with contrasting precipitation trends. Integrating the seesaw pattern of the East Asian monsoon on the orbital time scale (Shi et al., 2012), the millennial time scale, and the inter-decadal scale, we propose that monsoon dynamics on different time scales may be similar. Modern instrumental records can thus provide an analog for understanding environmental processes in the past.
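A gridded trend map such as Figure 8 can be sketched, assuming simple least-squares trends per grid cell, roughly as follows; random placeholder data stand in for the instrumental records.

```python
import numpy as np

# Fit a least-squares slope to each grid cell's annual precipitation
# series; a random cube stands in for the instrumental data.
years = np.arange(1961, 2011)                                 # 50-year window
precip = np.random.default_rng(1).random((50, 8, 10)) * 1000  # (time, lat, lon), mm

x = years - years.mean()
anomaly = precip - precip.mean(axis=0)
trends = np.tensordot(x, anomaly, axes=(0, 0)) / (x @ x)      # mm per year, per cell
print(trends.shape)                                           # (8, 10)
```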
Spatial Pattern of Hydrology and Carbon Accumulation During the Mystery Interval (MI)
It is worth noting that the variations in precipitation in northern and southern East Asia were synchronous during 16-14 cal. ka BP (Figure 6), in contrast to the anti-phase pattern during the Holocene. This episode falls within the Mystery Interval (MI), a period of enigmatic climate features from 17.5 to 14.5 cal. ka BP: temperatures in Greenland were lower than those during the Last Glacial Maximum, as inferred from low δ18O values, yet mountain glaciers in eastern Greenland, northern Europe, and North America retreated during this period (Williams et al., 2012; Zhang et al., 2014). Our results in the DH area show that hydrological conditions during the MI varied from a high at 18 cal. ka BP to a low at ∼15 cal. ka BP and then returned to a higher level by 14 cal. ka BP, consistent with the two-phase pattern of Northern Hemisphere records (Broecker and Putnam, 2012). At the MDW site, the WL reached its peak at 14.5 cal. ka BP. The spatial pattern of hydrology inferred from the two sites during the MI is itself a mystery: it is in-phase, whereas the Holocene experienced an anti-phase pattern. The cause of this change in spatial pattern during the MI remains unclear.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. | 2020-06-05T13:03:08.303Z | 2020-06-05T00:00:00.000 | {
"year": 2020,
"sha1": "4afc82dbb896c836bf2bac9f367b8b28d3818047",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/feart.2020.00192/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "4afc82dbb896c836bf2bac9f367b8b28d3818047",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
233862624 | pes2o/s2orc | v3-fos-license | Remote Mentorship in Radiation Oncology: Lessons to Share
Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, Florida; Department of Radiation Oncology, Joint Radiation Oncology Clinic, David Grant Medical Center, Travis AFB, California; Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan; Department of Radiation Oncology, University of Toronto, Ontario, Canada; Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
Mentorship is invaluable, yet finding mentors has been noted as a challenge for women in radiation oncology given low representation in the field.1 In 2019, women comprised only 17.4% of department chairs and program directors and 30.7% of faculty.2 Digital, or remote, mentorship seems an ideal solution for connecting women mentors and mentees, especially given findings that over a quarter of female residents train in programs with 2 or fewer female faculty.3 In 2018, the Society for Women in Radiation Oncology (SWRO) founded a mentorship program to fill this unmet need, creating over 100 pairings. Participants were paired with members from the next training level up (ie, medical students with residents, etc) unless a specific request was made, and mentees were encouraged to make the initial introduction. We believe this to be the largest initiative of its sort in the field of radiation oncology to date. Given growing interest in using remote mentorship to encourage students to consider radiation oncology and to help trainees succeed, we write to share lessons from our early experience with this program.
In our program, mentees and mentors were paired based on preferred commonalities such as geographic region and disease site interest. Afterward, an institutional review board-exempt, anonymous survey (Supplementary Materials) was administered to 127 eligible program participants from June to July 2020. The questions related to the following domains: professional characteristics, ethnicity, and communication. Many of the questions used a 5-point Likert scale to record the level of agreement with a provided statement (ranging from "strongly disagree" to "strongly agree"); there were also open-ended questions, for which coding was developed once responses were collected. Ultimately, 27 members answered the survey (Fig 1), and fifty percent of participants had been in their pairing for less than 1 year. Despite the low response rate (22%), the open-ended questions garnered valuable information that may have immediate relevance as the field embraces remote mentorship in the current environment.
One commonality, noted by 23% of respondents, was a lack of compatibility with their pairing(s), which led to the dyad's demise. When asked whether they would like to continue with the same mentor/mentee pairing, one respondent answered, "Did not really develop a relationship with mentee." Another respondent wrote, "Surprisingly, I felt my mentee and I were so different that we did not have much chemistry nor was it a fruitful experience. . . I didn't expect this, so something to consider with future pairings [is] to have a couple points of commonality." Other responses relating to lack of compatibility can be found in a supplemental word cloud (Fig 2). Additional information gathered from our study can be found in Table 1.
Other studies have shown that effective mentorship can be established by assigning pairings with mutual personal interests.4,5 Pairings based only on similar clinical interests, between individuals without compatible personalities, can fail owing to the lack of interpersonal reward.6,7 In our program, 42.9% of respondents reported that they were happy and wanted to continue with their pairing, presumably reflecting commonalities that extend beyond their backgrounds or geographic locations. Most of our survey respondents suggested that race (17.9%) and geographic location (28.6%) did not affect their pairing success.
Personality is difficult to capture perfectly on paper; however, there are opportunities to establish better matching by asking questions in this vein. One approach that has proved fruitful is personality testing, such as the Myers-Briggs Type Indicator, to base pairings on compatible personality types.8,9 These results might help identify which individuals are likely to form the most cohesive pairings. Ideally, individuals would also be permitted to change their pairings annually, or earlier if compatibility is not found.
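As a sketch of how commonality-based matching of this kind might be automated, the following hypothetical scoring-and-greedy-assignment routine weighs shared region, disease site, and personality type; all field names and weights are invented for illustration and do not reflect SWRO's actual pairing process.

```python
from itertools import product

# Invented attribute fields and weights; SWRO's actual process differed.
WEIGHTS = {"region": 1, "disease_site": 2, "personality": 3}

def score(mentor, mentee):
    """Weighted count of shared attributes between a candidate pair."""
    return sum(w for key, w in WEIGHTS.items()
               if mentor.get(key) == mentee.get(key))

def greedy_pair(mentors, mentees):
    """Assign highest-scoring pairs first, each person used at most once."""
    pairs, used_mentors, used_mentees = [], set(), set()
    ranked = sorted(product(range(len(mentors)), range(len(mentees))),
                    key=lambda ij: -score(mentors[ij[0]], mentees[ij[1]]))
    for i, j in ranked:
        if i not in used_mentors and j not in used_mentees:
            pairs.append((i, j))
            used_mentors.add(i)
            used_mentees.add(j)
    return pairs

mentors = [{"region": "Midwest", "disease_site": "breast", "personality": "ENFJ"}]
mentees = [{"region": "Midwest", "disease_site": "CNS", "personality": "ENFJ"}]
print(greedy_pair(mentors, mentees))   # [(0, 0)]
```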
Digital mentorship offers a way to connect individuals across our field and provide the unique specificity needed for enduring effect. Given the increased quality and availability of telecommunication due to increased globalization10 and the events of 2020, remote communication is better now than ever before. The lessons from our experience with encouraging digital mentorship through the Society for Women in Radiation Oncology may have immediate implications for others considering similar efforts. We hope that sharing our observations will help others as we continue to seek ways to foster the future leaders of our field.

Sources of support: This work had no specific funding. Disclosures: C.S. and W.W. are the current cochairs of the Society of Women in Radiation Oncology (SWRO) mentorship committee. A.L. and L.P. are former chairs and serve as advisors to the SWRO organization. R.J. and J.C. serve as advisors to the SWRO organization. R.J. has stock options as compensation for her advisory board role in Equity Quotient, a company that evaluates culture in health care companies; she has received personal fees from Amgen and Vizient and grants for unrelated work from the National Institutes of Health, the Doris Duke Foundation, the Greenwall Foundation, the Komen Foundation, and Blue Cross Blue Shield of Michigan for the Michigan Radiation Oncology Quality Consortium. She has a contract to conduct an investigator-initiated study with Genentech. She has served as an expert witness for Sherinian and Hasso and Dressman Benzinger LaVelle. She is an uncompensated founding member of TIME'S UP Healthcare and a member of the board of directors of ASCO. C.S. received an honorarium for her participation as a panelist on Elekta's "Championing Women & Diversity in Radiation Oncology: A Panel Discussion." Research data are stored in a repository and will be shared upon request to the corresponding author. Supplementary material for this article can be found at 10. | 2021-05-07T00:04:30.623Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "995a1a33988ea9a0934684cd8dbac2737b91d8c1",
"oa_license": "CCBYNCND",
"oa_url": "http://www.advancesradonc.org/article/S2452109421000440/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "737f3aced6156a633c425eb6e90cb5ae24658d2f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |