Dataset fields: id, source, version, text, added, created, metadata.
id: 210901331
source: pes2o/s2orc
version: v3-fos-license
Exploring the Qualitative Factors Influencing the Perception of Employees on Performance Management Practices of Public Organizations in Ghana: The Case of Bolgatanga Polytechnic

The perception of employees of the human resource practices of organizations affects their performance. When employees perceive human resource practices as being effective, their performance is affected positively. This study therefore explored the qualitative factors influencing the perception of employees of the performance management practices of public institutions in Ghana. We explored the perception of selected employees of the Bolgatanga Polytechnic of the performance management practices of the Polytechnic. We found that employees had a negative perception of the performance management practices of the Polytechnic. This negative perception was grounded in the perceived non-involvement of employees in the performance management processes of the Polytechnic. The participants perceived that they were not given the needed attention and support to enable them to achieve the goals of the performance management system of the Polytechnic. We concluded that the performance management systems of public institutions in Ghana are ineffective and are fast becoming annual form-filling exercises rather than a means of managing the performance of employees of those institutions. We recommended that it was time the central Government and other relevant bodies gave the needed attention to the performance management of employees of public sector institutions in Ghana.

The literature suggests that the perception of employees of the human resource practices of organizations affects their performance. Some authors have stated that when employees perceive human resource practices as being distinctive, consistent and consensual, their performance is affected positively. According to them, distinctiveness refers to the visibility, understandability, legitimacy and relevance of human resource practices, whilst consistency refers to the instrumentality, validity and consistency of human resource messages. Consensus, on the other hand, refers to whether there is agreement among stakeholders of the practices. In the view of Sharma et al. (2017), employees perceive a performance management system to be effective when it is well planned and employees are involved in the processes of goal setting and development of the evaluation method. They also perceive a performance management system to be effective when an active role is played by the human resource section of the organization, by way of consistent policies and guidelines about the system, and when there is a strong communication system in which feedback is timely and employees are allowed to air their views about the system, thereby making the system transparent and effective (Sharma et al., 2017). A study by Khan et al. (2016) also revealed that procedural justice, participatory goal setting, effective feedback and performance-based pay affect employees' perception of performance management systems. The results of several studies also confirm the relationship between employees' perception and the effectiveness or otherwise of performance management systems (Levy et al., 2017; Fryer et al., 2009; Hackman, 2018; Abane et al., 2017; Latham et al., 2005; Kurubanga and Wenner, 2013; Ho, 2005; Latham et al., 2008). The perception of employees of the performance management systems of organizations is therefore very important in achieving the goals and objectives of those systems (Sharma et al., 2016).
According to them, a positive perception can be achieved from employees when they consider performance management strategies to be accurate and fair. According to them, the accuracy of a performance management system comprises four major elements: performance planning accuracy (PPA), feedback and coaching accuracy (FCA), performance rating accuracy (PRA) and outcome accuracy (OA). On the other hand, Sharma et al. (2016) stated that the fairness of a performance management system comprises four factors: procedural fairness (PRF), which refers to fairness of procedures; distributive fairness (DIF), which refers to fairness of the outcomes of performance reviews; interpersonal fairness (IPF), which refers to the fairness and quality of the interpersonal relationship between supervisors and subordinates; and informational fairness (INF), which refers to fairness of feedback and explanations of relevant issues relating to the performance management system. Earlier research by Dickinson (1993), Boice and Kleiner (1997) and Murphy and DeNisi (2008), as cited by Sharma et al. (2016), revealed that when employees consider the performance management strategies of an organization to be accurate and fair, those strategies are likely to be effective and could lead to the achievement of the goals of the organization.

Understanding Performance as a Concept

In order for organizations to be able to manage performance effectively, the meaning of the concept of performance must be well understood. This is because the concept of performance lends itself to many interpretations and assumptions. It means a lot of things to a lot of people, depending on their perception of it (Van Dooren et al., 2010). According to them, to be able to fairly measure performance and also give it some meaning, it must be contextualized (Van Dooren et al., 2010). They therefore adopted the four perspectives of performance proposed by Dubnick (2005) to give a fair explanation of the concept:

- P1: performance as production (the action itself, regardless of its quality or the quality of its results)
- P2: performance as competence or capacity (the quality of the action)
- P3: performance as good results (the quality of the results)
- P4: performance as sustainable results (quality actions converted into quality results)

Source: Dubnick, 2005

In contextualizing the concept of performance in line with Dubnick's four perspectives, Van Dooren et al. (2010) began by defining performance as intentional behaviour, which could be individual or organizational. According to them, this definition gives two main dimensions to the concept of performance. The first dimension is the action of an individual or organization, and the second dimension is the result of the action. They also suggest that, in order to give meaning to the concept, the quality of the action, which could be competent or incompetent, and the quality of the result, which could be good or bad, must also be considered. The two dimensions of performance, according to Van Dooren et al. (2010), then result in four perspectives of performance. The first perspective sees performance as P1, where performance is equated with output as such, for example the number of units produced or the presence of a security officer at his/her duty post. In this case, the quality of the units produced, or what the security guard does on duty, does not matter. Also, how the units were produced, or whether the security guard is competent enough to perform his/her duties, does not matter. According to them, measuring performance this way is problematic. The second perspective looks at performance as P2, where the quality of actions, or competence in terms of the capacity of an individual or organization, is equated with performance.
They state that measuring performance from the perspective of the capacity of an individual or organization, without considering other relevant factors such as the quantity or quality of their output, amounts to narrowing the meaning of performance. The third perspective is perceiving performance as P3, where performance is understood to mean the quality of the results produced. In this case, the quality of the actions, or how the results are produced, does not matter. What matters here is the results, not how they came about. In the view of Van Dooren et al. (2010), this is problematic because the results may not be sustainable without other relevant factors such as the quality or capacity of the individual or organization that produced them. What then is performance? According to Van Dooren et al. (2010), performance is better understood to mean the ability of an individual or an organization to produce quality results through quality actions (P4). That is, performance is achieved when an individual or organization is able to convert their capacity or competence into quality results. According to them, performance is sustainable only when capacity can be converted into quality results. This means that when quality results are not the outcome of quality actions, performance cannot be sustained. According to Elger (2006), however, to perform is to produce valued results. He states that performance is a complex series of actions that integrates skills and knowledge to produce valuable results. He states further that performance is a journey and not a destination, and a point in that journey represents a level of performance. According to him, a level of performance characterizes the effectiveness or quality of a performance. He states, for instance, that a person performing at level 3 in the performance journey is more effective than a person performing at level 2. According to him, performance is largely dependent on one's skills, knowledge, experience and work environment, among other factors. He therefore suggests that performance can be improved as one gains more of the relevant factors that determine performance, and that performance can be enhanced by organizations if they put the factors that induce performance in place. According to Sonnentag et al. (2006), performance is generally understood by many authors to consist of two main parts: the action or behaviour, and the outcome of the action or behaviour. According to them, the action (behaviour) refers to what one does as an employee, such as teaching children how to speak French or assembling engines. They state that not all actions of the employee constitute performance, only those that can be linked to the achievement of organizational goals. Campbell et al. (1993) therefore define performance as what an organization hires one to do, and do well. According to Campbell et al. (1993), actions that cannot be brought to bear on the achievement of the goals of an organization cannot be classified by the organization as performance. Employees' performance in an organization, or in any context, can therefore be measured or evaluated by taking into account the actions or behaviours that are relevant to the organization (cf. Ilgen & Schneider, 1991; Motowidlo, Borman, & Schmit, 1997). Sonnentag et al.
(2006) state, on the other hand, that the outcome aspect of performance is the consequence or result of the action of an employee, such as the number of children who can speak French from the French class or the number of engines assembled. They state, however, that the outcome of the actions of employees may depend on a number of factors. They state, for instance, that a teacher may teach French very well and yet some children may not be able to speak French due to some deficiencies. For this reason, authors debate whether the action aspect or the outcome aspect should be labelled as performance. Sonnentag et al. (2006) incline towards the suggestion that the action aspect should be labelled as performance. Performance is also seen to be a dynamic process (Sonnentag et al., 2006). According to them, individual performance changes over time and is a reflection of learning processes and other long- and short-term factors. They drew on the work of Avolio, Waldman, and McDaniel (1990), McDaniel, Schmidt, and Hunter (1988) and Quinones, Ford, and Teachout (1995) to explain that performance increases as a result of learning as well as time spent on a specific job, though not in a uniform pattern among individuals. According to Sonnentag et al. (2006), performance in the changing world is not dependent on or influenced by specific factors but by several factors, some of which are unpredictable. According to them, this makes the conceptualization of performance difficult. In their view, factors such as the importance of continuous learning, the relevance of proactivity, the increasing use of teams, globalization and its effects on individual performance, as well as technology, which is fast becoming an integral part of measuring the performance of employees, make the concept of performance dynamic. Performance, therefore, can only be managed, not imposed. One of the major tasks of organizations, therefore, is to find effective ways of managing performance in order to remain relevant in this fast-changing world.

Performance Management

Societal demand for a high-performing public sector became prominent in the 1980s. Performance then received much attention and has remained on the agenda since (Van Dooren et al., 2010). Many governments in the western world, after receiving much pressure from citizens for welfare services, realized the need to cut expenditure on public servants while demanding improved performance from them (Van Dooren et al., 2010). Many governments and organizations started to initiate programmes and activities aimed at improving the performance of public servants in their various countries (Van Dooren et al., 2010). The quest for improved performance by the public sector then became the catchword and the ultimate desired result for governments (Hood, 1994). Research on how to manage performance received much attention from the 1980s (Van Dooren et al., 2010). Researchers in this period began to question the conclusions of earlier research on performance improvement. Earlier research, from the 1920s, had concentrated on how to improve performance measurement and not so much on how to manage performance (DeNisi and Pritchard, 2006). During this time, the focus was on how to improve the system of measuring and appraising performance and not on how to improve performance itself (DeNisi and Pritchard, 2006).
The assumption then was that if a valid, reliable and accurate instrument for measuring and appraising the performance of employees could be found, their performance could be improved (Lee, 1985). In the 1970s and 1980s, researchers began to raise questions about such conclusions (DeNisi and Pritchard, 2006; Sanger, 2008). Many writers then decided to shift focus from performance appraisal as a means of improving performance to performance management. According to Armstrong (2009), "performance management is a means of getting better results from the organization, teams and individuals by understanding and managing performance within the agreed framework of planned goals and competency requirements". Armstrong and Baron (2004) also define performance management as "a strategic and integrated approach in delivering sustained success to organizations by improving performance of people by developing the capabilities of teams and individuals". Heinrich (2002) defines performance management as "the process of defining goals, selecting strategies to achieve those goals, allocating decision rights, and measuring and rewarding performance". According to DeNisi and Pritchard (2006), performance appraisal is "a discrete, formal organizationally sanctioned event, usually not occurring more frequently than once or twice a year, which has clearly stated performance dimensions and/or criteria that are used in the evaluations process". According to them, "it is an evaluation process, in that quantitative scores are often assigned based on the judged level of the employee's job performance on the dimension or criteria used, and the scores are shared with the employee being evaluated". They state, on the other hand, that performance management "is a broad set of activities aimed at improving employees' performance". They state further that, though performance appraisals provide input for the performance management process, the focus of performance management is how to motivate employees to improve performance. Therefore, while the ultimate goal of performance management is the improvement of the performance of employees, that of performance appraisal is to provide information for the performance management process. Performance management therefore involves measuring the performance of individual employees and teams, taking the necessary actions to correct shortfalls, and aligning that performance with the goals of the organization (Aguinis, 2013). Heathfield (2018) defines performance management as the process of creating a work environment or setting in which people are enabled to perform to the best of their abilities. She suggests that performance management is not an annual appraisal meeting, nor preparing for that appraisal meeting, nor a self-evaluation system. According to her, it is neither a form nor a measuring tool; although many organizations may use tools and forms to track goals and improvements, these are not the process of performance management. The performance of employees can therefore be managed in a systematic process involving four major stages: planning the process, providing feedback and coaching to employees, reviewing or appraising the performance of employees, and providing the outcomes of the performance reviews or appraisals (Bernthal, 1996). For performance management to succeed, employees must believe in the process and the approach, and there must be transparent implementation and commitment from managers (Goh, 2012).
The perception of employees of performance management processes is therefore important for the success of those processes (Sharma et al., 2016). The concentration of performance management research in the Ghanaian context has been on central government agencies and state-owned enterprises since it became a topical issue among researchers. Aboachie-Mensah and Seidu (2012) stated that literature and empirical evidence on performance appraisals as a performance management tool, and on the perception of employees of the same in the educational sector, is even scarcer. Performance management could be an effective tool for managing the performance of employees in the civil and public service in Ghana but for chronic challenges such as inadequate resources and the lack of involvement of staff in the process (Abane et al., 2017). One of the critical factors that was to determine the success of the Civil Service Reform Programme (CSRP), commissioned in 1987 by the Government led by the Provisional National Defence Council (PNDC), was the motivation and involvement of staff of the Service in the implementation of the Programme (Aryee, 2001). However, several years after the implementation of the CSRP (1987-1993), the Civil Service still lags behind in the involvement of staff in the formulation and implementation of performance management systems (Bawole et al., 2013). In the early 1990s, the Government, led by the National Democratic Congress (NDC), instituted the National Institutional Reform Programme (NIRP) with the aim of making employees of the Civil and Public Service proactive, efficient, effective, innovative, cost-conscious and attractive, in order to deal with the growing demand for an efficient public service and other daunting challenges emerging at the time (Ohemeng et al., 2018). To achieve this, the Civil Service Performance Improvement Programme (CSPIP) was introduced with the main aim of changing the performance management strategy from one based solely on annual confidential reports, which mainly measured the personal attributes of employees, to one that focused on the job performance of employees. Employees' performance was to be measured based on concrete achievements of government programmes and policies as contained in budget statements and other sources (Ohemeng et al., 2018). According to them, however, due to subsequent changes of government and the perceived non-involvement of employees in the processes, the reform programmes could not be implemented. Most of the research carried out on the institution and implementation of performance management systems in civil and public service institutions in Ghana has so far revealed several challenges, the main one being the non-involvement of employees. Bawole et al. (2013) found that the performance appraisal system in the civil service in Ghana is taken out of the larger context of performance management and therefore does not yield the needed results. They found, for instance, that appraisal standards were not set with the involvement of staff, because the appraisal forms are designed at the head offices and sold at the Assembly Press to staff who want to be assessed for promotion purposes.
Some studies conducted in tertiary institutions in Ghana on the perception of employees of their performance management systems revealed that, though there were prospects for the performance management systems of those institutions, perceived challenges such as appraisal errors, lack of coaching and ineffective performance criteria were likely to negatively affect the implementation of those systems (Aboachie-Mensah and Seidu, 2012; Hackman, 2018).

Methodology and Data Collection Techniques

This is a qualitative study that relied on the results of semi-structured interviews of purposefully selected Senior Members and Senior Staff of the Bolgatanga Polytechnic to draw conclusions on the topic. Senior Members are staff of the Polytechnic with a minimum qualification of a Master's degree, while Senior Staff are staff with a minimum qualification of an undergraduate degree, a Higher National Diploma (HND), an Ordinary Diploma from the universities, or equivalent qualifications. The Senior Members and Senior Staff were of particular interest because they were considered to have a better understanding of the performance management system of the Polytechnic than the Junior Staff, the majority of whom do not have any formal education. This is in line with suggestions by several authors, including Creswell (2009), who said the purpose of qualitative research is to purposefully select participants that will best help the researcher to understand the research problem. Semi-structured interview questions were used to interview nine (9) staff on a one-on-one, face-to-face basis, out of the one hundred and fifty-seven (157) Senior Members and Senior Staff of the Polytechnic. These nine staff were made up of three (3) females and six (6) males who were confirmed staff and had been appraised at least twice before. All nine (9) staff were orally informed in advance about the topic being studied and the purpose of the interview. They were asked to choose a place and time convenient to them for the interview. They were also informed that the interview was going to be audiotaped and transcribed solely for the purpose of the study. They all consented to take part in the interview, and all took part at places and times convenient to them. The interviews took place within a ten-day period. The semi-structured, open-ended interview method was used to allow for flexibility in the interview. Flexibility in interviews allows the interviewer to follow up interesting points and important leads raised by interviewees and allows some inconsistencies to be cleared up (Ryan et al., 2009). Open-ended questions also allow the interviewees to tell their own story instead of strictly responding to a series of questions (Ryan et al., 2009). Creswell (2009) and Yin (2011) also suggested that in qualitative interviewing, unstructured and open-ended questions should be used, since qualitative interviewing is intended to seek the views and opinions of interviewees. The one-on-one, face-to-face interview was used because it is a valuable method of obtaining in-depth information from participants on their perceptions, experiences and understanding of a particular subject matter (Ryan et al., 2009). It also allows the interviewer to use non-verbal cues such as facial expression, body language and eye contact to enhance the understanding of what is being said by the interviewee (Ryan et al., 2009).
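Since the interviews were analysed under the accuracy and fairness factors of Sharma et al. (2016), elaborated in the next paragraphs, a minimal sketch of how such a qualitative coding scheme could be represented and tallied is given below. This is purely illustrative and not from the study: the theme and sub-theme labels come from the text, while the excerpt tags are hypothetical.

```python
# Illustrative coding scheme for the two themes and eight sub-themes
# described in the text; the excerpt tags below are hypothetical.
from collections import Counter

coding_scheme = {
    "accuracy": {
        "PPA": "performance planning accuracy",
        "FCA": "feedback and coaching accuracy",
        "PRA": "performance rating accuracy",
        "OA": "outcome accuracy",
    },
    "fairness": {
        "PRF": "procedural fairness",
        "DIF": "distributive fairness",
        "IPF": "interpersonal fairness",
        "INF": "informational fairness",
    },
}

# Each transcript excerpt is tagged with a (theme, sub-theme) code;
# counting the tags shows where negative perceptions cluster.
excerpt_codes = [
    ("accuracy", "FCA"),   # e.g. "no one follows up"
    ("accuracy", "OA"),    # e.g. "no reward scheme"
    ("fairness", "INF"),   # e.g. "nobody explains the system"
    ("accuracy", "FCA"),
]
for (theme, sub), n in Counter(excerpt_codes).items():
    print(f"{coding_scheme[theme][sub]}: {n} excerpt(s)")
```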
The interview was conducted on two thematic areas: the accuracy and fairness of performance management systems. According to Sharma et al. (2016), a positive perception of a performance management system can be achieved from employees when they consider the system to be accurate and fair. According to them, the accuracy of a performance management system comprises four major elements: performance planning accuracy (PPA), feedback and coaching accuracy (FCA), performance rating accuracy (PRA) and outcome accuracy (OA). They explained that performance planning accuracy (PPA) refers to the degree to which employees perceive that the planning phase of the performance management system ensures that the goals set for employees are relevant to their job functions and are aligned with the goals of the organization. They also explained that feedback and coaching accuracy (FCA) refers to the degree to which employees perceive that this phase ensures the alignment of the employee's delivered performance with the planned performance, through regular feedback and continuous coaching throughout the year. They further explained that performance rating accuracy (PRA) refers to the degree to which the employee perceives that the annual performance review or appraisal phase measures the alignment of the employee's annual performance with the planned performance, through an assessment of the employee's performance against planned goals. They added that outcome accuracy (OA) refers to the degree to which the employee perceives that the outcome phase ensures that performance-based rating, compensation, rewards and/or recognition are clearly tied to the employee's annual performance review. Furthermore, Sharma et al. (2016) stated that the fairness of a performance management system comprises four factors: procedural fairness (PRF), which refers to fairness of procedures; distributive fairness (DIF), which refers to fairness of the outcomes of performance reviews; interpersonal fairness (IPF), which refers to the fairness and quality of the interpersonal relationship between supervisors and subordinates; and informational fairness (INF), which refers to fairness of feedback and explanations of relevant issues relating to the performance management system. The perception of participants of the accuracy and fairness of the performance management system of the Polytechnic was therefore explored using the above factors of accuracy and fairness, proposed by several writers and given prominence by Sharma et al. (2016). Under accuracy, four (4) sub-themes were explored: performance planning accuracy, feedback and coaching accuracy, performance rating accuracy and outcome accuracy. The perception of staff of the fairness of the system was also explored using four (4) sub-themes: procedural fairness, distributive fairness, interpersonal fairness and informational fairness. Conclusions were drawn based on the perception of participants of the accuracy and fairness of the system.

Findings and Discussion of Employees' Perception of the Accuracy and Fairness of the Performance Management System of Bolgatanga Polytechnic

Under planning accuracy, it was found that employees were not familiar with the mission and vision of the Polytechnic, or with its major goals, to enable them to focus their efforts on achieving them. It was also found that they were not guided by their superiors to set goals in line with the goals of the Polytechnic and to work towards achieving them.
A participant expressed the following concern in response to a question on their involvement in managing their performance: "Nobody tells us what we are supposed to do or achieve. We are always there and they just bring us forms to fill and also ask us to write our duties on the form, and we have been writing our duties for them every year. We don't discuss any plans, we just fill the forms and that is all." It was further found that on the appraisal form of the Polytechnic, employees are asked to set targets for themselves, but the participants said they were not always sure whether the targets set by subordinates were in line with the objectives of the Polytechnic. It was also revealed that there were no follow-ups or coaching of employees to ensure that the objectives they set for themselves were achieved. The following responses of some participants make it quite explicit that follow-ups and coaching are not done, and that there are no discussions on how employees could achieve their goals within the year: "It is difficult because, you know, in the appraisal form you are asked to set targets for the next year but they don't follow up to know whether they are achieved. But if they follow up it will help us focus and meet the targets, but no one follows up." Another respondent said: "No, my superior has never discussed with me about something like that; he only briefed us after academic board meetings on how the department can help achieve certain targets." The respondent continued: "I don't have a problem with the appraisal itself but the feedback and the monitoring, that is not effective; there is no feedback to know whether the person has the basic skills, whether those who are supposed to monitor have the system to do so." This response was further corroborated by another respondent, who said: "We only approach ourselves on certain things. Apart from new appointees that they give them orientation, nobody coaches you as to what to do." The above responses suggested a perceived lack of accuracy in terms of feedback and coaching, and also a lack of procedural and informational fairness. Another relevant finding from the interviews was that participants strongly held the perception that their performance was not accurately measured. According to them, the standards used in measuring them do not accurately reflect their job descriptions. A participant stated: "The standard they use in assessing us, to me, is not right. You see, I teach, but nobody asks me how I am teaching in the appraisal form, so how do you assess that one? So I think there are a lot of issues to be addressed with the way they appraise us, especially we those who are teaching. Definitely something must be done, I think so." The participants were also emphatic that there was no reward scheme linked to their performance. According to them, their performance was not driven by any extrinsic motivation but by their own conscience. A participant said: "Rewarding any employee for good work done is what we have not yet done. Not necessarily giving money, but there are so many ways of doing it. Even a letter to you telling you that you have done enough is OK." This suggested a perceived lack of accuracy in terms of the outcomes of the performance management system in the institution, which had the potential to affect performance negatively. Overall, participants perceived the performance management system of the Bolgatanga Polytechnic to be lacking accuracy and fairness under the various sub-themes explored.
They perceived that they were not involved in planning the performance management system of the Polytechnic, and therefore the system is seen as an imposition rather than a means of helping them contribute to the growth of the institution. The issue that received the most concern from participants was the fact that there were no follow-ups to ensure that the targets set were achieved. This, according to them, makes the system seem like an annual form-filling exercise and not an appraisal of their performance. For instance, a respondent said: "Nobody tells you anything about appraisal; you only hear of it when it is time to fill the form. You don't hear about it until it is time to do it again." It was clearly revealed that the performance management system of the Polytechnic was perceived not to be fair because, according to participants, they do not actually understand the real rationale of the system and what it is actually intended to achieve. They stated that nobody explains to them the relationship between the system and the goals of the Polytechnic, or whether it is actually in line with those goals. They said their immediate supervisors also do not communicate with them on what to do to be able to achieve their set goals under the system. A participant expressed the following concern: "You see, we don't even understand the whole thing, and it seems like our Heads of Department are also the same, because nobody has ever explained to me anything about this appraisal. I think it is time they do that, I hope you agree with me. Otherwise, for instance, I don't know how this appraisal can help me to do my work." A participant who had previously been a Head of Department confirmed what the above participant said about supervisors' lack of understanding of the system: "You know, we don't understand the system, so we follow what is in the form. And some part of it asks them to set their targets, so they do that, but we don't know what to actually do with those targets the staff set for themselves. I actually think we need some explanation about this appraisal form." The above responses suggested a perception of a lack of accuracy and fairness in the performance management system of the Polytechnic. This implies that employees are perceived not to be involved in the planning of the system. It is also a revelation that employees are not coached or given feedback during the year to enable them to make the needed adjustments in order to achieve their set targets. It further implies that there is no regular and formal communication between supervisors and their subordinates on issues relating to the appraisal system in the Polytechnic. The findings also suggest that there is no formal reward system linked to the performance of employees. As noted earlier, performance can be managed in a systematic process involving four major stages: planning the process, providing feedback and coaching to employees, reviewing or appraising the performance of employees, and providing the outcomes of the performance reviews or appraisals (Bernthal, 1996). However, the evidence from the Bolgatanga Polytechnic, based on the perception of participants in this study, is a departure from this process. Participants in this study perceived the system to be lacking accuracy and fairness. This implies that employees in the Polytechnic have a negative perception of the performance management system of the Polytechnic, as suggested by Sharma et al. (2016). This further suggests that the performance management system of the Polytechnic is not effective and therefore cannot achieve the purpose for which it was instituted.
This situation is most likely to affect the achievement of the goals of the Polytechnic. The Bolgatanga Polytechnic, and for that matter public institutions in Ghana, must therefore ensure that accuracy and fairness are built into their performance management systems, and the implementation of those systems must be done in line with the underlying principles of accuracy and fairness. The importance of knowing and managing employees' perceptions as they relate to performance management practices in public institutions cannot be overemphasized. Various public sector reforms are perceived to have failed due to the lack of involvement of staff in the reform processes and the poor management of public sector employees' perceptions of such reforms. In order to help address the challenges of the performance management systems of public institutions, the perception of staff of such systems must always be explored before instituting and implementing them. As this study found, staff at all levels must be part and parcel of the conception, institution and implementation of performance management systems. Such performance management systems should also have in-built mechanisms that ensure that employees are given coaching and feedback at reasonable intervals about their performance, and that clearly link performance standards to the job functions and objectives of public institutions. Such systems must also make provision for rewards strictly based on the performance of employees. Supervisors in public institutions should also be regularly trained on how to appraise their subordinates so that the purpose of institutional appraisal systems can be achieved. Supervisors should not only be trained on how to rate their subordinates, but also on how to establish cordial relations with them, communicate with them regularly, and motivate, coach and provide them with feedback during the year. Including these duties in the job descriptions of supervisors would help build a culture of managing performance, instead of making it an annual event where staff are merely appraised without basis. The above recommendations may not succeed unless managers of public institutions adopt a holistic approach to performance management as a concept, as proposed by Bawole et al. (2013). This suggests a new orientation and culture in the management of staff performance in public institutions. This new orientation and culture is so important because, at the heart of managing employees of public institutions, is how to manage their performance. We therefore suggest the involvement of the relevant ministries and agencies in the country to ensure that the management of the performance of employees of public institutions is given the needed attention and funding. This is because performance management is the base upon which any intervention and programme of government can succeed. So long as civil and public employees remain the only vehicle through which government policies and programmes are implemented, their performance must be well managed. On the basis of the above, we suggest that a further, and preferably larger, study on the qualitative factors influencing the perception of public sector staff of the performance management practices of public institutions be undertaken by researchers.
This would make it imperative for policy makers to pay more attention to the issue of employees' perception of performance management systems in the public sector, so that the needed interventions can be formulated and implemented to improve the performance of employees in the public sector.
added: 2019-10-10T09:24:24.784Z
created: 2019-01-01T00:00:00.000
metadata: { "year": 2019, "sha1": "a171136d5f9356666633036a8dd92c6ee9bcb5f4", "oa_license": "CCBY", "oa_url": "https://www.iiste.org/Journals/index.php/JRDM/article/download/49582/51227", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9e3d5a6bc42d2c83e96f6627a2bdc8ef1654a1cc", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }

id: 247999415
source: pes2o/s2orc
version: v3-fos-license
Early initiation of rivaroxaban after reperfusion therapy for stroke patients with nonvalvular atrial fibrillation

Background: The optimal timing of initiating oral anticoagulants after reperfusion therapy for ischemic stroke is unknown. Factors related to early initiation of rivaroxaban, and differences in clinical outcomes, among stroke patients with nonvalvular atrial fibrillation (NVAF) who underwent reperfusion therapy were investigated. Methods: From data on 1,333 NVAF patients with ischemic stroke or transient ischemic attack (TIA) in a prospective multicenter study, patients who started rivaroxaban after intravenous thrombolysis and/or mechanical thrombectomy were included. The clinical outcomes included the composite of ischemic events (recurrent ischemic stroke, TIA, or systemic embolism) and major bleeding at 3 months. Results: Among the 424 patients, the median time from index stroke to starting rivaroxaban was 3.2 days. On multivariable logistic regression analysis, infarct size (odds ratio [OR], 0.99; 95% CI, 0.99-1.00) was inversely, and successful reperfusion (OR, 2.13; 95% CI, 1.24-3.72) positively, associated with initiation of rivaroxaban within 72 hours. 205 patients were assigned to the early group (< 72 hours) and 219 patients to the late group (≥ 72 hours). Multivariable Cox regression models showed comparable hazard ratios between the two groups at 3 months for ischemic events (hazard ratio [HR], 0.18; 95% CI, 0.03-1.32) and major bleeding (HR, 1.80; 95% CI, 0.24-13.54). Conclusions: Infarct size and the results of reperfusion therapy were associated with the timing of starting rivaroxaban. There were no significant differences in the rates of ischemic events and major bleeding between patients after reperfusion therapy who started rivaroxaban < 72 hours and ≥ 72 hours after the index stroke. Clinical trial registration: Unique identifier: NCT02129920; URL: https://www.clinicaltrials.gov.

Reporting checklist (section, item number and descriptors):

Introduction
- Background (2): Scientific background and explanation of rationale. Theories used in designing behavioral interventions.

Methods
- Participants (3): Eligibility criteria for participants, including criteria at different levels in the recruitment/sampling plan (e.g., cities, clinics, subjects). Method of recruitment (e.g., referral, self-selection), including the sampling method if a systematic sampling plan was implemented. Recruitment setting. Settings and locations where the data were collected.
- Interventions (4): Details of the interventions intended for each study condition and how and when they were actually administered, specifically including: unit of assignment (the unit being assigned to study condition, e.g., individual, group, community); method used to assign units to study conditions, including details of any restriction (e.g., blocking, stratification, minimization); inclusion of aspects employed to help minimize potential bias induced due to non-randomization (e.g., matching); whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to study condition assignment, and if so, a statement regarding how the blinding was accomplished and how it was assessed.
- Unit of Analysis (10): Description of the smallest unit that is being analyzed to assess intervention effects (e.g., individual, group, or community). If the unit of analysis differs from the unit of assignment, the analytical method used to account for this (e.g., adjusting the standard error estimates by the design effect or using multilevel analysis).
- Statistical Methods (11): Statistical methods used to compare study groups for primary outcome(s), including complex methods for correlated data. Statistical methods used for additional analyses, such as subgroup analyses and adjusted analysis. Methods for imputing missing data, if used. Statistical software or programs used.
- Participant flow (12): Flow of participants through each stage of the study: enrollment, assignment, allocation and intervention exposure, follow-up, analysis (a diagram is strongly recommended). Enrollment: the numbers of participants screened for eligibility, found to be eligible or not eligible, declined to be enrolled, and enrolled in the study. Assignment: the numbers of participants assigned to a study condition. Allocation and intervention exposure: the number of participants assigned to each study condition and the number of participants who received each intervention. Follow-up: the number of participants who completed the follow-up or did not complete the follow-up (i.e., lost to follow-up), by study condition. Analysis: the number of participants included in or excluded from the main analysis, by study condition. Description of protocol deviations from the study as planned, along with reasons.
- Recruitment (13): Dates defining the periods of recruitment and follow-up.
- Baseline Data (14): Baseline demographic and clinical characteristics of participants in each study condition. Baseline characteristics for each study condition relevant to specific disease prevention research. Baseline comparisons of those lost to follow-up and those retained, overall and by study condition. Comparison between the study population at baseline and the target population of interest.
- Baseline equivalence (15): Data on study group equivalence at baseline and statistical methods used to control for baseline differences.
- Numbers analyzed (16): Number of participants (denominator) included in each analysis for each study condition, particularly when the denominators change for different outcomes; statement of the results in absolute numbers when feasible. Indication of whether the analysis strategy was "intention to treat" or, if not, description of how non-compliers were treated in the analyses.
- Outcomes and estimation (17): For each primary and secondary outcome, a summary of results for each study condition, and the estimated effect size and a confidence interval to indicate the precision. Inclusion of null and negative findings. Inclusion of results from testing pre-specified causal pathways through which the intervention was intended to operate, if any.
- Ancillary analyses (18): Summary of other analyses performed, including subgroup or restricted analyses, indicating which are pre-specified or exploratory.
- Adverse events (19): Summary of all important adverse events or unintended effects in each study condition (including summary measures, effect size estimates, and confidence intervals).
- Interpretation (20): Interpretation of the results, taking into account study hypotheses, sources of potential bias, imprecision of measures, multiplicative analyses, and other limitations or weaknesses of the study. Discussion of results taking into account the mechanism by which the intervention was intended to work (causal pathways) or alternative mechanisms or explanations. Discussion of the success of and barriers to implementing the intervention, and fidelity of implementation. Discussion of research, programmatic, or policy implications.
- Generalizability (21): Generalizability (external validity) of the trial findings, taking into account the study population, the characteristics of the intervention, length of follow-up, incentives, compliance rates, specific sites/settings involved in the study, and other contextual issues.
- Overall Evidence (22): General interpretation of the results in the context of current evidence and current theory.
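The Results summarized in the abstract above report odds ratios from a multivariable logistic regression for rivaroxaban initiation within 72 hours. As a rough illustration of that kind of analysis (not the study's actual code; the file name and column names below are hypothetical), a short Python sketch:

```python
# Sketch of a multivariable logistic regression for early (< 72 h)
# rivaroxaban initiation; all names here are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nvaf_reperfusion.csv")           # hypothetical dataset
df["early"] = (df["days_to_rivaroxaban"] < 3).astype(int)

# covariates mirroring the abstract: infarct size, successful reperfusion
X = sm.add_constant(df[["infarct_size", "successful_reperfusion"]])
fit = sm.Logit(df["early"], X).fit()

or_table = np.exp(fit.conf_int())                  # 95% CI on the OR scale
or_table["OR"] = np.exp(fit.params)
print(or_table)                                    # compare with reported ORs
```

The 3-month ischemic and bleeding outcomes would instead call for a time-to-event model such as Cox regression, for which a library like lifelines could be used in the same spirit.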
added: 2022-04-08T05:11:23.425Z
created: 2022-04-06T00:00:00.000
metadata: { "year": 2022, "sha1": "a2f197d3d8485d3035f8ed60776e82c975df3eaa", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0264760&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a2f197d3d8485d3035f8ed60776e82c975df3eaa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }

id: 225938192
source: pes2o/s2orc
version: v3-fos-license
Elaboration of Explanatory Factors of Accidents in Cameroon by Factorial Correspondence Analysis

The aim of this paper is to examine the causes of road accidents in Cameroon. The Douala-Yaounde highway was chosen as the case study. Available field data recorded from 2006 to 2011 enabled the analysis of each accident. The method used here is factorial correspondence analysis, which aims to capture, in a small number of dimensions, most of the initial information, focusing not on absolute values but on the correspondences between the variables, that is to say, the relative values. From this analysis, it appears that, of the 906 accidents recorded during this period, the top five causes account for nearly 83% of the information provided by the set of variables on the occurrence of road accidents. These causes are: driver inattention, lack of control, over-speeding, improper overtaking and tire puncture. These results call for their use in the construction of road safety policies through training, sensitization and adequate enforcement, as well as administrative reforms and a research policy in road safety.

Introduction

Today, road accidents remain a blight both for the public authorities of all countries around the world and for international organizations. In Africa specifically, road accidents are the second leading cause of death after malaria: Africa accounts for about 27% of the 1.35 million road deaths worldwide, with just over 2% of the world's vehicles [1]. This situation can be explained by a combination of several factors, the weight or importance of which varies from country to country. In Cameroon, a middle-income country and therefore prone to road accidents [1], road accidents remain the second leading cause of death after malaria, although the mortality rate declined sharply from 28.1% in 2007 to 20.6% in 2015 [2]. Although road accidents occur throughout the country, they are concentrated around three main roads, owing to the importance of the cities linked by them. As a result, road accidents have a very high socio-economic cost. In human terms, there have been just over 1000 deaths and over 6000 injuries. In economic terms, the losses suffered by Cameroon due to road accidents represent nearly 100 billion CFA francs per year, equivalent to 1% of the GDP of that period [3]. In the light of the above, it is becoming essential to conduct a study to identify the explanatory factors for road accidents, in order to draw up a policy which, if it does not eliminate road accidents, will reduce them or provide a framework for their occurrence.

Literature Review

Road accident studies have shown that four main causal dimensions contribute to the occurrence of an accident: driver behaviour, the environment, the vehicle and pedestrian behaviour. Each dimension includes the causes that are linked to it [4]. Several studies have been carried out on the causal analysis of accidents: some based on the examination of reports or on the variation of behaviour according to causal explanations and beliefs, and others based on quasi-experiments analysing the variation of causal explanations and attitudes towards preventive measures according to the situational relevance, personal relevance and severity of the accident. These studies have given disparate results depending on the position of each analyst [4].
Causal explanations for accidents thus vary from one source to another, depending on the analytical techniques used or the location. In addition, several methods are commonly used to study the causality of road accidents. Many of them are based on the collection of road accident data. The principle here is to group the accidents according to their profile for a good understanding of how they occur [5]. This method usually leads to a subjective analysis, which is why we turn to Factor Correspondence Analysis. According to Grangé et al. [6], studies carried out in this framework are based on the analysis of contingency tables, which makes it possible to study the links between two qualitative variables. Here we also have the possibility of reducing the dimension arising from the existence of correlation between the variables. In addition to descriptive statistical analysis, whose interest is recognised (quantification of the situations studied) but which also has some limitations (difficulty in crossing multiple data and in interpretation), Factor Correspondence Analysis enables all the data (circumstances and characteristics of the accidents) to be cross-referenced and taken into account, and highlights their dependence or independence (correlation strength and significance) [7] [8]. In view of these strengths, it became obvious to us to adopt this method in the present study, in order to limit the dispersion of results observed.

Methodology

In order to analyse the causal factors of road accidents, we use Factor Correspondence Analysis (FCA), which aims to gather most of the initial information in a reduced number of dimensions, focusing not on absolute values but on the correspondences between the variables [7]. This reduction is all the more useful when the number of initial variables is high. The notion of "reduction", common to all factor techniques, has the particularity of providing a common representation space for variables and individuals. The goal of the FCA is therefore to read the information contained in a multidimensional space by reducing the dimension of this space while retaining as much as possible of the information contained in the original space. To do this, the FCA uses the contingency or frequency table as the basis for its reasoning. This method makes it possible to compare the distances between the different response modalities of the selected variables on axes whose significance is determined by the variables that characterize them. The FCA is used to determine and prioritize all dependencies between the rows and columns of the table. The total variance explained allows us to appreciate the amount of information explained by a factorial axis; it defines the axes that best summarize the information obtained from the selected variables. In our study, given the heteroscedasticity of the variables, it seemed appropriate to conduct an analysis of the correlations of the variables using the correlation matrix, which contains the correlation coefficients calculated on the variables taken in pairs. To assess the relationships between the variables and the factor axes, we used the post-rotation component matrix based on the varimax method with Kaiser normalization. (A minimal code sketch of this kind of decomposition is given at the end of this subsection.)

Choice of Road Sample

The road sample is the Yaounde-Douala highway, which is 242 km long. This road was chosen for its heavy traffic (6000 vehicles/day), due to the economic and political importance of the towns it connects.
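As referenced above, here is a minimal Python sketch of the correspondence-analysis core: a singular value decomposition of the standardized residuals of the data table. This is illustrative only, under the assumption of a 0/1-coded accident-by-cause table like Table 1 below; the file name is hypothetical, and the paper's varimax-rotated component matrix is a separate, factor-analytic step not shown here.

```python
# Sketch of correspondence analysis on an accidents x causes 0/1 table;
# the CSV name and layout are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("accidents_2006_2011.csv")   # rows: accidents, cols: causes
N = df.to_numpy(dtype=float)

P = N / N.sum()                                # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)            # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
explained = sv**2 / (sv**2).sum()              # share of inertia per axis
print(explained[:5].sum())                     # cf. the ~83% for five axes

# principal coordinates of the causes (columns) on the retained axes
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
```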
The highway is divided into sections, which is why the results of each analysis can be described separately [10]. Sections 3 and 4 (with a total length of 114 km + 20 km) were selected as the sample, for their heavy traffic and especially for the availability of police reports among the existing data for those sections.

The Study Variables

From the accident reports, we identified several causes of accidents, which we grouped into variables for better exploitation of the data. The measurement of these variables and their designation in the simulation process are given in Table 1 below.

Table 1. Study variables and their codes. Each variable is coded 1 when the information is significant and 0 otherwise:
- Lack of control (causemm)
- Driver carelessness (causeimp)
- Bad parking (casemst)
- Brake failure (causup)
- Dangerous maneuvering (causemda)
- Mechanical failure (causem)
- Driver inattention (causeina)
- Wheel bursting (causecl)
- Excessive speed (causevit)
- Bad overtaking (causemde)

Collection and Classification of Accidents from Police Reports

The accidents were collected from police reports and classified according to these variables (Table 2 and Figure 2).

Treatment of Accident Reports

A grid of relevant data from police reports was implemented, adapted from the road traffic injury analysis bulletin used in France. A grid was filled in for every police report of the considered sections. Quality control consisted of cross-validation of the pre-filled grids. This process helped to ensure concordance between the information on the grids and that in the corresponding reports and messages. The encoded information was entered into a purpose-built database and analyzed using SPSS software (SPSS version 11.0).

Results and Discussion

From the correlation matrix (Table 3), it appears that there is no strong correlation between the variables in the analysis. Moreover, the matrix of components after rotation (Table 4) and the table of total variance explained identify the main factorial axes. The correspondence factorial analysis allowed us to build a typology of accidents, thus constituting profiles. Without describing the causes of accidents, these profiles highlight the multi-causality of accidents on Cameroonian roads. Finally, the axes retained are: driver inattention, lack of control, excessive speed, bad overtaking and wheel bursting. These axes represent the main causes of road accidents in our sample (see Figure 3 and Table 5). The graph (Figure 3) illustrates their positioning; the points furthest from the axes are those that have a high correlation with them. We thus proceeded to the elaboration of accident profiles for our line of study. These results reflect the national, and indeed international, trend in road accident causality studies and their categorisation. They are similar to those obtained by the psychologist Robert Ngueutsa [4] and the BPA [11]. In Ngueutsa's study on the explanation of road accidents according to causes and analysts in Cameroon, he notes that speeding, dangerous overtaking and wheel bursts are ranked among the most important causes of road accidents.
Furthermore, the study conducted by the BPA notes that lack of control and driver inattention are among the three predominant types of accidents on Swiss roads.

Conclusions

In this study, correspondence factor analysis was used to classify accidents according to their profile, using the reports drawn up by the police forces. In the period from 2006 to 2011, 906 accidents were recorded. The objective was not to analyse accidents according to how they occur but to categorise them while retaining as much information as possible from the set of variables. Although factor analysis does not describe the causes of accidents, it makes it possible to develop an overall policy which, if it does not eliminate accidents, would reduce them or control their occurrence in cases of force majeure. Such a policy provides guidance targeted at certain groups of accidents or individuals, ranging from awareness campaigns to enforcement, thereby reducing the number of accidents and fatalities on our roads. Accident data sources can be used to establish the presence or absence of a number of factors that can modify the risk of an accident. However, a description is never an explanation, and the decision-maker is interested in the role of the factor under consideration in the accident, not merely in whether it is present or absent. It is therefore necessary to draw on other elements to produce useful information and to move from describing to attempting to explain the mechanisms and to assess the risk factors. It is becoming necessary to apply to accidents the methods used in all scientific approaches based on the analysis of epidemiological data if we want to control the phenomenon of accidents in our environment, although the availability and quality of the reports remain an obstacle.
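For readers wishing to reproduce the analysis pipeline described above, the correlation-matrix and post-rotation component-matrix steps can also be sketched. The following Python sketch builds a hypothetical 0/1 grid using the cause codes of Table 1 (the data are random stand-ins, not the 906 recorded accidents), computes the correlation matrix, extracts principal-component loadings, and applies a varimax rotation (Kaiser's algorithm).

import numpy as np
import pandas as pd

def varimax(L, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a loading matrix L (variables x factors)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        B = L @ R
        U, s, Vt = np.linalg.svd(L.T @ (B**3 - B @ np.diag((B**2).sum(axis=0)) / p))
        R = U @ Vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return L @ R

# Hypothetical 0/1 accident grid: one row per police report, one column per
# cause code of Table 1 (random stand-ins, not the study's data).
codes = ["causemm", "causeimp", "causemst", "causup", "causemda",
         "causem", "causeina", "causecl", "causevit", "causemde"]
rng = np.random.default_rng(0)
grid = pd.DataFrame(rng.integers(0, 2, size=(906, len(codes))), columns=codes)

corr = grid.corr()                                   # correlation matrix
eigval, eigvec = np.linalg.eigh(corr.to_numpy())     # eigenvalues ascending
top = np.argsort(eigval)[::-1][:5]                   # keep five components
loadings = eigvec[:, top] * np.sqrt(eigval[top])     # unrotated loadings
rotated = varimax(loadings)                          # component matrix after rotation
print(pd.DataFrame(rotated.round(2), index=codes))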
Production and Marketing Constraints for Cumin Seed in Barmer District

Cumin is an important seed spice in India. Cumin seeds have an aromatic fragrance due to an alcohol, 'cuminol'. The seeds are largely used as condiments, being an essential ingredient in all mixed spices and in curry powder for flavouring vegetables, pickles, soups, etc. Cumin also has medicinal properties: it is carminative, stomachic and astringent, and is used in the treatment of diarrhoea. Cumin is largely exported in the form of seed, and some quantities are exported as seed oil, cumin powder and oleoresin; India is the biggest exporter of cumin seed, powder and oils to Japan, Korea, the USA, etc. As governments have become more alert to public health, the target is to produce chemical-free (organic) seeds and other products, and each importing country has set permissible residue limits that must be met before produce is accepted for import.

Introduction

Cumin (Cuminum cyminum) is an important low-volume, high-value seed spice grown in India. India is the largest producer and consumer of cumin seed in the world, while Gujarat leads in production and Rajasthan in acreage (Table 1). Cumin is grown on 104,828 ha in Barmer district, with an annual production of 28,410 tonnes and an average productivity of 348 kg/ha. About 90% of the total production is marketed in the Krishi Upaj Mandis of the adjoining state of Gujarat (Unjha, Deesa, Mehsana, etc.) instead of in Rajasthan. The yield of the cumin crop is adversely affected by the incidence of wilt and blight diseases and by aphid attack, while economic returns are drastically affected by marketing problems. Besides this, farmers have been practising traditional methods of cultivation for a long time, resulting in decreased productivity. In view of this, a study was conducted in three villages of Gudamalani tehsil of Barmer district in Rajasthan. Personal interviews, a questionnaire and a farm inventory were used to collect basic information and the production and marketing constraints from the selected farmers. The variables were scored according to a scale already developed and in use in extension research studies, and the data were analyzed and interpreted in terms of frequencies, percentages and score values. The farmers ranked constraints such as the non-declaration of minimum support prices, the unavailability of storage structures, the unavailability of loan facilities, the lack of a laboratory for testing seed quality parameters, and the lack of processing units
as the major constraints, ranking them 1, 2, and so on, respectively. Exporters must respect the permissible limits on chemicals in export material to avoid rejection of the material and to earn more foreign exchange. Cumin is the major Rabi crop of western Rajasthan (Jodhpur, Barmer, Jalore, Jaisalmer, Nagaur, Pali, etc.), which contributes around 95% of the total acreage and 91% of production (Table 1). More than 50% of the total production is marketed in the Krishi Upaj Mandis of Gujarat (Unjha, Deesa, Mehsana, etc.) instead of in local mandis.

Materials and Methods

How far productivity in the state could be enhanced through the adoption of improved technologies, particularly an adequate supply of improved seed (wilt-resistant varieties), the availability of non-persistent chemicals for soil and seed treatment, and IPM and ICM practices, was assessed (Table 2). In view of this, the study was conducted in three villages of Gudamalani tehsil of Barmer district in Rajasthan during the implementation of a project on IPM with ITC Limited. The villages were selected after comprehensive laboratory analysis of soils and of seed from the previous crop for residues of plant protection chemicals. The three villages were Dudasar, Mittiberi and Laxmanpura, with sample sizes of 26, 25 and 17, respectively (a total of 68 farmers cultivating 120 ha). Personal interviews, a questionnaire and a farm inventory were developed to collect basic information regarding the package of practices (Choudhary and Pagaria, 2012). The major method of irrigation was sprinkler irrigation, because the soils in these areas are sandy to sandy loam with high infiltration rates; in addition, the groundwater table is very deep and the water is brackish. To understand the farmers' knowledge of the latest technologies, their adoption level, their consultancy pattern and other possible reasons for non-adoption were considered as dependent variables. The variables were scored according to a scale developed and in use in extension research studies, and the data were analyzed and interpreted in terms of frequencies, percentages and score values.

Production Constraints in the Adoption of Improved Technology

Cumin is grown almost entirely under assured irrigation, where input supply is limited only by the availability of inputs or the economic condition of the growers. The sandy to sandy-loam soils with undulating topography force the farmers to follow the broadcasting method of sowing, followed by mixing with a cultivator, which results in uneven and poor germination and a non-uniform crop stand (Veerasamy et al., 2003). It similarly restricts the use of modern equipment for inter-cultivation. These practices increase the cost of cultivation, as they require a higher seed rate (15 kg of seed instead of 5 kg) and a larger number of costly labourers for field operations. Regarding production constraints, the growers ranked the lack of a suitable seed drill for (shallow) cumin sowing as the top and prime constraint (Table 3). The timely availability of improved, wilt-resistant seed varieties was another major constraint on the adoption of improved practices. Similarly, government policies, such as the lack of subsidies on inputs and plant protection measures, as well as poverty, also hinder the adoption of the improved packages. The increased wage rates for labourers and their engagement in MNREGA affected the timeliness of farm operations.

Marketing Constraints in the Adoption of Improved Technology

Cumin is cultivated in India on approximately 1 million ha, and no minimum support price (MSP) has been declared.
Once the government declares an MSP, a gradual yearly increase in selling prices is typically noticed. Cumin prices, however, have remained stable at around Rs. 100-120 per kg over the last decade, compared with the two- to three-fold increase in the prices of other crops (Table 4). The results of the present study reveal that the non-declaration of an MSP was the topmost constraint, as reported by a sizable number of farmers. Owing to the economic conditions of the farmers, the produce was sold in the market to local vendors just after harvesting, in order to pay wages, meet daily requirements and repay cooperative loans. In this regard, loans against stored produce (cumin), which buyers provide in Gujarat, are not provided in Rajasthan (the second-ranked constraint). The quality of cumin deteriorates day by day, and the shortage of large-scale storage facilities, along with the lack of processing units for grading, also hinders cumin production. The lack of facilities for determining moisture content likewise reduces the price realised for the product. There is also a great lack of adequate insurance or government relief against natural calamities in proportion to the area cultivated and crop conditions.

Table 1. Area, production and productivity of cumin in Rajasthan during 2012-13.

Regarding the behaviour of farmers in selling their produce in another state, it was concluded that the brokers of the Unjha and Deesa mandis purchase the produce without processing such as sieving and grading, fix prices on standing crops, and provide loans against produce if farmers do not sell immediately; farmers therefore sold their produce as soon as it was threshed, since the retained moisture content also adds to the return. From the above study it may be concluded that the adoption of improved technologies is easy, but the constraints on adoption resemble a hurdle race in which solving one constraint gives birth to another. Since the time of sowing waits for no one and is the topmost non-monetary input in crop production, farmers compromise in order to sow the crop on time, using uncertified or untreated seed. These practices make the crop more vulnerable to an increased incidence of wilt and blight; they also increase the cost of production on the one hand and make the produce unfit for the environment as well as for export on the other. Similar findings were reported by Jain and Pagaria (2011), Jain (2014) and Singh et al. (2011). Thus, supportive market and government policies could make Rajasthan the most productive state. It is suggested that effective communication methods such as SMS services, leaflets, technical bulletins, newspapers, radio talks, trainings, etc. be strengthened to ensure the timely availability of inputs and the management of weather aberrations.
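The scoring-and-ranking tabulation used throughout this study, for both production and marketing constraints, can be illustrated with a short sketch. The respondent scores below are invented; only the mechanics (score totals, percentages, ranks) mirror the method described above.

import pandas as pd

# Hypothetical scores (0-2 scale) given by five respondents to each constraint.
responses = pd.DataFrame({
    "No MSP declaration":     [2, 2, 1, 2, 2],
    "No storage structures":  [2, 1, 2, 1, 2],
    "No loan facilities":     [1, 2, 1, 2, 1],
    "No quality-testing lab": [1, 1, 1, 2, 0],
    "No processing units":    [0, 1, 1, 1, 1],
})

summary = pd.DataFrame({
    "total_score": responses.sum(),
    "percent": 100 * responses.sum() / (2 * len(responses)),  # 2 = max score
})
summary["rank"] = summary["total_score"].rank(ascending=False, method="min").astype(int)
print(summary.sort_values("rank"))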
Excited hadrons from improved interpolating fields

The calculation of quark propagators for Ginsparg-Wilson-type Dirac operators is costly and thus limited to a few different sources. We present a new approach for determining spatially optimized operators for lattice spectroscopy of excited hadrons. Jacobi-smeared quark sources with different widths are combined to construct hadron operators with different spatial wave functions. We study the Roper state and excited rho and pion mesons.

(Presented at LATTICE 2004 by C.B. Lang. The work was supported by Fonds zur Förderung der Wissenschaftlichen Forschung in Österreich (P16824-N08 and P16310-N08) and by DFG and BMBF.)

Optimized quark sources

The low-lying hadron spectrum shows a few features which are fingerprints of QCD. In the meson sector the pion occurs as an almost-Goldstone boson, with its squared mass vanishing proportionally to the quark mass, in contrast to all other mesons. The observed ordering of the lowest positive-parity excitation of the nucleon, the 1/2+ N(1440), and the lowest negative-parity excitation, the 1/2− N(1535), is 'unnatural': a physical picture based on linear confinement, Coulomb and color-magnetic terms always arranges the first radial excitation above the first orbital excitation, i.e. the excited states have alternating parities. Whereas ground state spectroscopy on the lattice is by now a well understood physical problem with impressive agreement with experiment, the lattice study of excited states is not so far advanced. In a lattice calculation the masses of excited states show up in the sub-leading exponentials of Euclidean two-point functions. A direct fit of a single correlator is cumbersome since the signal is strongly dominated by the ground state. Also with methods such as constrained fits [1] or the maximum entropy method [2] one still needs very high statistics for reliable results [3,4]. An alternative method is the variational method [5], where one diagonalizes a matrix containing all cross-correlations of a set of several operators with the correct quantum numbers. For a large enough and properly chosen set of basis operators, each eigenmode is then dominated by a different physical state. After normalization, the largest eigenvalue gives the correlator of the ground state, the second-largest eigenvalue corresponds to the first excited state, and so on. It is important to optimize the spatial properties of the interpolating operators. An example of this fact is the Roper state, where the variational method, based on nucleon operators that differ only in their diquark content but have the same spatial wave function, did not lead to success [6]. It can be argued that a node in the radial wave function is necessary to capture the Roper state or other radially excited hadrons reliably. Recently [7] we demonstrated that an elegant solution is to combine Jacobi-smeared quark sources with different widths to build the hadron operators and to compute the cross-correlations in the variational method. We find good effective mass plateaus for the first and partly the second radially excited states. The propagators are then fitted using standard techniques. Already in [8], Jacobi-smeared sources were combined with point sources and cross-correlations were studied in a similar spirit (see also [9]). The technique of Jacobi smearing is well known [8,10]. The smeared source lives in the timeslice t = 0 and is constructed by iterated multiplication of a smearing operator H with a point-like source.
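As a rough illustration, the iterated construction S = sum_{n=0}^{N} (kappa*H)^n applied to a point source can be sketched in Python with a free-field stand-in for H (unit gauge links; the true H, described in the next paragraph, carries the gauge links and colour indices). All parameter values and the narrow/wide labels below are illustrative, not the tuned values of the simulation.

import numpy as np

def hop(psi):
    """Free-field spatial hopping operator: sum of forward and backward
    nearest-neighbour shifts in x, y, z (periodic boundary conditions)."""
    out = np.zeros_like(psi)
    for axis in range(3):
        out += np.roll(psi, +1, axis=axis) + np.roll(psi, -1, axis=axis)
    return out

def jacobi_smear(point_src, kappa, n_steps):
    """S = sum_{n=0}^{N} (kappa * H)^n applied to a point source."""
    src = point_src.copy()
    term = point_src.copy()
    for _ in range(n_steps):
        term = kappa * hop(term)
        src += term
    return src

L = 12
src = np.zeros((L, L, L))
src[0, 0, 0] = 1.0                                   # point source at origin
narrow = jacobi_smear(src, kappa=0.21, n_steps=18)   # illustrative parameters
wide   = jacobi_smear(src, kappa=0.19, n_steps=41)
ground  = 0.6 * narrow + 0.4 * wide                  # nodeless combination
excited = 2.2 * narrow - 1.2 * wide                  # combination with a node
print(narrow[:4, 0, 0] / narrow[0, 0, 0])            # radial decay of the profile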
The operator H is the spatial hopping part of the Wilson term at timeslice 0; it is trivial in Dirac space and acts only on the color indices. This construction has two free parameters, the number of smearing steps N and the hopping parameter κ, which can be used to adjust the profile of the source. Here we work with two different sources, a narrow source n and a wide source w. Their parameters N and κ were chosen such that the profiles approximate Gaussian distributions with the indicated half-widths [7]. We remark that the two sources allow the system to build up radial wave functions with and without a node. The parameters were chosen such that simple linear combinations c_n n + c_w w of the narrow and wide profiles approximate the first and second radial wave functions of the spherical harmonic oscillator: the coefficients c_n ≈ 0.6, c_w ≈ 0.4 approximate a Gaussian with a half-width of σ/2 ≈ 0.33 fm, while c_n ≈ 2.2, c_w ≈ −1.2 approximate the corresponding excited wave function with one node. The final form of the wave function is determined through the variational method [5]. In this approach one computes a complete correlation matrix of operators O_i, i = 1, 2, …, R, that create from the vacuum the state which one wants to analyze. The eigenvalues λ^(k)(t) of the correlation matrix behave as

λ^(k)(t) ∝ e^(−t M_k) [1 + O(e^(−t ΔM_k))],

where ΔM_k is the distance of M_k to nearby energy levels. The hadron sources we use for the correlation matrix are constructed from the narrow and wide quark sources.

Excited nucleon signals

For our quenched calculation we use the chirally improved Dirac operator [11]. It is an approximation of a solution of the Ginsparg-Wilson equation, which governs chiral symmetry on the lattice. This operator is well tested in quenched ground state spectroscopy [12], where pion masses down to 250 MeV can be reached at a considerably smaller numerical cost than needed for exact Ginsparg-Wilson fermions. For ground states the chirally improved action shows very good scaling behavior. The gauge configurations were generated on a 12^3 × 24 lattice with the Lüscher-Weisz action [13]. The inverse gauge coupling is β = 7.9, giving rise to a lattice spacing of a = 0.148(2) fm as determined from the Sommer parameter [14]. The statistics of our ensemble is 100 configurations. We use 10 different quark masses m ranging from am = 0.02 to am = 0.20. Our analysis is based on the interpolator ε_abc (u_a C γ_5 d_b) u_c. Each of the three quarks can be smeared either narrow (n) or wide (w); this gives 8 possible combinations (nnn, nnw, etc.). From a subset of 4 of these operators (after projection to definite parity) we calculate the correlation matrix C(t), which we then use in the variational method. The exponential decay of three eigenvalues is clearly identified. We identify these signals with the nucleon, the Roper state and the next positive parity resonance N(1710). A detailed discussion of further checks on the correct identification of the Roper state can be found in [7] (see also [3] concerning the problem of nucleon-η′ ghost contributions).

Excited meson signals

As another test of our approach we discuss the π- and ρ-mesons and their radial excitations. For the ρ we use the interpolators ū(x) γ_i d(x) and ū(x) γ_4 γ_i d(x); for the pion, ū(x) γ_5 d(x) and ū(x) γ_4 γ_5 d(x). Again we use wide and narrow quark sources for both interpolating fields, corresponding to 3 operators each (the combinations nw and wn give identical correlators and one of them can be omitted).
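The variational method itself amounts to a generalized eigenvalue problem C(t) v = λ^(k)(t) C(t0) v. A minimal sketch, with a synthetic correlation matrix in place of measured lattice data (the masses and overlap factors below are invented):

import numpy as np
from scipy.linalg import eigh

def gevp_masses(C, t0=1):
    """Variational method: solve C(t) v = lam(t) C(t0) v for each t > t0 and
    extract effective masses m_eff(t) = log(lam(t)/lam(t+1)) per eigenvalue.

    C is a (T, R, R) array of cross-correlation matrices (R operators)."""
    T = C.shape[0]
    lam = np.array([np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
                    for t in range(t0 + 1, T)])
    return np.log(lam[:-1] / lam[1:])

# Synthetic 3-operator example with states at aM = 0.5, 0.9, 1.3:
T = 16
masses = np.array([0.5, 0.9, 1.3])
Z = np.array([[1.0, 0.4, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.6, 1.0]])            # made-up overlap factors
C = np.einsum('ik,jk,tk->tij',
              Z, Z, np.exp(-np.outer(np.arange(T), masses)))
print(gevp_masses(C)[3])                   # plateau near [0.5, 0.9, 1.3]

With exact exponentials the effective masses are flat from the start; with Monte Carlo data one instead looks for the plateaus discussed in the text.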
When diagonalizing the 3 × 3 matrix with either interpolator, we see a pronounced exponential decay only for the two larger (in magnitude) eigenvalues, λ^(1)(t) and λ^(2)(t). The smallest eigenvalue λ^(3)(t) does not show a clear effective mass plateau; this is an indication that it couples to an unphysical quenched ghost state [15,16,3]. The final results for the masses as a function of the quark mass are shown in Fig. 1. We find that the ground state meson masses approach their experimental values reasonably well. The excited state masses are considerably above their experimental values. There is, however, a plausible reason for this behavior. The sizes of hadrons which are not, or only weakly, affected by spontaneous chiral symmetry breaking can be estimated from the known string tension, which is approximately 1 GeV/fm. Hence the size of the excited mesons should be larger than that of the ground state, about 1.5 fm. Thus the size of our lattice (1.8 fm) is clearly not sufficient for a precise measurement of, e.g., the ρ(1450) mass. The finite size effect cannot be neglected for the excited state, since it apparently shifts the measured mass upwards compared to the experimental value. A crucial test of our method is to check whether the ground state is indeed built from a nodeless combination of our sources and whether the excited states do show nodes. This question can be addressed by analyzing the eigenvectors of the correlation matrix. This has been done in [7] and indeed confirms the expectation.
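The node check on the eigenvectors can be mimicked with Gaussian stand-ins for the narrow and wide profiles; the widths below are illustrative, not the tuned values of the simulation.

import numpy as np

r = np.linspace(0.0, 1.5, 300)                    # radius in fm
narrow = np.exp(-r**2 / (2 * 0.27**2))            # made-up half-widths
wide   = np.exp(-r**2 / (2 * 0.41**2))

def count_nodes(profile, tol=1e-12):
    """Number of sign changes of a radial wave function."""
    s = np.sign(profile[np.abs(profile) > tol])
    return int(np.sum(s[:-1] != s[1:]))

ground  = 0.6 * narrow + 0.4 * wide               # nodeless combination
excited = 2.2 * narrow - 1.2 * wide               # combination with one node
print(count_nodes(ground), count_nodes(excited))  # -> 0 1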
Chromosome Arm Locations of Barley Sucrose Transporter Gene in Transgenic Winter Wheat Lines

Three transgenic HOSUT lines of winter wheat, HOSUT12, HOSUT20, and HOSUT24, each harbor a single copy of the cDNA for the barley sucrose transporter gene HvSUT1 (SUT), which was fused to the barley endosperm-specific Hordein B1 promoter (HO; the HOSUT transgene). Previously, flow cytometry combined with PCR analysis demonstrated that the HOSUT transgene had been integrated into different wheat chromosomes: 7A, 5D, and 4A in HOSUT12, HOSUT20, and HOSUT24, respectively. In order to confirm the chromosomal location of the HOSUT transgene by a cytological approach using wheat aneuploid stocks, we crossed corresponding nullisomic-tetrasomic lines with the three HOSUT lines, namely nullisomic 7A-tetrasomic 7B with HOSUT12, nullisomic 5D-tetrasomic 5B with HOSUT20, and nullisomic 4A-tetrasomic 4B with HOSUT24. We examined the resulting chromosomal constitutions and the presence of the HOSUT transgene in the F2 progeny by means of chromosome banding and PCR. The chromosome banding patterns of the critical chromosomes in the original HOSUT lines showed no difference from those of the corresponding wild-type chromosomes. The presence or absence of the critical chromosomes corresponded completely to the presence or absence of the HOSUT transgene in the F2 plants. By investigating telocentric chromosomes that occurred in the F2 progeny and were derived from the respective critical HOSUT chromosomes, we found that the HOSUT transgene was integrated on the long arms of chromosomes 4A, 7A, and 5D in the three HOSUT lines. Thus, in this study we verified the chromosomal locations of the transgene, which had previously been determined by flow cytometry, and moreover revealed the chromosome-arm locations of the HOSUT transgene in the HOSUT lines.

INTRODUCTION

In transgenic wheat lines carrying the cDNA for the barley sucrose transporter gene HvSUT1 (SUT) fused to the Hordein B1 promoter (HO; the HOSUT transgene), HvSUT1 is overexpressed and the uptake of sucrose into grains is increased, because the Hordein B1 promoter is highly active in maturing cereal endosperm (Weichert et al., 2010). All three HOSUT lines significantly increased grain yield under semi-controlled conditions, together with higher protein yield and higher iron and zinc concentrations compared with the wild-type cultivar (Saalbach et al., 2014). Identification of transgene insertion sites in genomes has practical implications for crop breeding. In Arabidopsis and rice, direct methods such as thermal asymmetric interlaced PCR (TAIL-PCR) have been successfully used to determine genomic DNA sequences flanking T-DNA inserts containing marker genes (Liu et al., 1995; Liu and Chen, 2007). In wheat, however, it is still difficult to determine the positions of transgenes by TAIL-PCR because of its large and complex genome. Cápal et al. (2016) applied flow cytometry to identify the chromosomal locations of the transgene in the three HOSUT lines. They sorted the wheat chromosomes into individual chromosomes by flow cytometry, analyzed the flow-sorted chromosomes by PCR and fluorescence in situ hybridization (FISH), and found that each of the HOSUT lines had a single insertion site of the transgene on a separate chromosome: 4A, 7A, or 5D. Cápal et al. (2016) also performed whole genome amplification of single chromosomes that were flow-sorted from each of the HOSUT lines and confirmed the chromosomal locations of the HOSUT transgene in the HOSUT lines.
We were convinced that the chromosomal locations of the HOSUT transgene should be reconfirmed by independent approaches, because the substantial yield increase in the HOSUT lines might be of considerable future interest for wheat breeding, and because the flow cytometry approach of Cápal et al. (2016) had been used to localize a transgene to chromosomes in wheat for the first time. At first, we tried single-copy FISH with the HvSUT1 probe, without success. Since the main purpose of this study was to confirm the authenticity of the chromosomal locations of the integrated transgene that had been provisionally established by flow cytometry, we decided to conduct a conventional aneuploid analysis using aneuploid lines of wheat, because this analysis is the most reliable approach to identifying chromosomes carrying genes of interest in wheat. In addition, telocentric chromosomes harboring the HOSUT transgene were expected to arise in the progeny of hybrids between the wheat aneuploid and HOSUT lines. Such telocentric chromosomes, which are smaller than any intact wheat chromosomes and can easily be sorted by flow cytometry, would be useful in future research on the chromosomal and DNA organization around the integration sites of the transgene. In this study we employed the nullisomic-tetrasomic lines of common wheat for aneuploid analyses, reconfirmed the chromosomal locations of the HOSUT transgene in the three independent HOSUT lines, and identified the chromosome arms carrying the HOSUT transgene integrations.

Plant Material and Cytology

We used three transgenic lines of winter wheat (Triticum aestivum L., 2n = 42, genome constitution AABBDD) cv. Certo: HOSUT12, HOSUT20, and HOSUT24, which had been used in the flow cytometry study by Cápal et al. (2016). Cápal et al. (2016) reported that the HOSUT transgene is located on chromosome 7A in HOSUT12, on chromosome 5D in HOSUT20, and on chromosome 4A in HOSUT24. Therefore, we cross-fertilized three nullisomic-tetrasomic lines of common wheat cv. Chinese Spring with the three HOSUT lines, namely nullisomic 7A-tetrasomic 7B (N7AT7B) with HOSUT12, nullisomic 5D-tetrasomic 5B (N5DT5B) with HOSUT20, and nullisomic 4A-tetrasomic 4B (N4AT4B) with HOSUT24. In nullisomic-tetrasomic lines, one pair of homologous chromosomes is replaced by an extra pair of homoeologous chromosomes (Sears, 1954). The F1 hybrids were self-fertilized to obtain F2 progeny. Figure 1 illustrates the process of cross- and self-fertilization and the expected chromosome constitutions in the progeny. Root tips and leaves were taken from all individuals of the F2 progeny for cytological and PCR analyses, respectively. We conducted the karyotyping of the F1 and F2 progeny, as well as of cultivar Certo, by C-banding, following the protocols of Endo (2011). Individual chromosomes were identified based on the previously published banding karyotypes of common wheat (Endo and Gill, 1984; Gill et al., 1991).

PCR

We conducted PCR analysis to demonstrate the presence or absence of the HOSUT transgene and of the critical chromosomes, namely 4A, 5D, or 7A, in the F2 progeny. One primer set for the HOSUT gene and two primer sets for each of the chromosomes were chosen from the list of PCR primers reported by Cápal et al. (2016) (Table 1). DNA was extracted from young leaves using a DNeasy Plant Mini Kit (Qiagen, Tokyo, Japan). The PCR mixture (15 µL) contained 30 ng of genomic DNA, 1× Gflex PCR Buffer (TaKaRa, Japan), 0.5 µM primers and 0.375 U of Tks Gflex DNA Polymerase (TaKaRa, Japan). The primer information was according to Cápal et al.
(2016): chromosome-specific Owm markers were designed with Primer3 based on the chromosome sequences from the International Wheat Genome Sequencing Consortium (IWGSC), while preventing the primers from amplifying the sequences of the homoeologous chromosomes. PCR conditions were 94 °C for 1 min, followed by 30 cycles of 98 °C for 10 s, 60 °C for 15 s, and 68 °C for 30 s. PCR products were separated on 3% (w/v) agarose gels in TAE buffer.

FIGURE 2 | Chromosome constitutions of the common wheat cultivar Certo and of an F1 hybrid between N7AT7B and HOSUT12. Chromosomes 4A, 7A, and 5D of Certo are similar to those of Chinese Spring wheat in terms of the C-banding pattern, and there is no wheat chromosome 1B in Certo (A). Sequential C-banding-FISH/GISH shows that Certo is disomic for a 1BL.1RS translocation substituting for chromosome 1B; probes for FISH/GISH are rye total genomic DNA (pink arrows), pSc200 sequences (green arrows), and 18S.26S rDNA (green arrowheads at the secondary constrictions) (B). In the F1 hybrid there was only one chromosome 7A, from HOSUT12, and three doses of chromosome 7B (C). Bar = 10 µm.

RESULTS AND DISCUSSION

All chromosomes of Certo were identified based on their C-banding patterns (Figure 2A). The C-banding patterns of some of the Certo chromosomes were obviously different from those of Chinese Spring, the cultivar generally accepted as the standard for cytogenetic research in wheat (Gill et al., 1991). Cultivar Certo had a wheat-rye translocation substituting for chromosome 1B, probably a translocation between the long arm of chromosome 1B and the short arm of rye chromosome 1R (1BL.1RS), because the C-banding pattern of its long arm was similar to that of chromosome 1B and because FISH/GISH detected rye-specific pSc200 subtelomeric signals and rye genomic chromatin signals in its short arm (Figure 2B). Although the banding patterns of Certo chromosomes 2B, 3B, and 7B were different from those of the Chinese Spring chromosomes 2B, 3B, and 7B, they were identified by consulting the chromosome banding patterns of other wheat cultivars (Endo and Gill, 1984). The C-banding analysis confirmed that the F1 hybrids between the HOSUT lines and the nullisomic-tetrasomic lines were monosomic for the critical chromosome and trisomic for the respective homoeologous chromosome (Figure 2C). Chromosome constitutions were successfully identified in all F2 seedlings (Table 2). As far as C-banding patterns are concerned, there was no structural difference between the Certo and HOSUT homologous chromosomes 7A, 5D, and 4A (Figure 3).

F2 of N7AT7B × HOSUT12

Among 42 F2 plants from the cross between N7AT7B and HOSUT12, 32 had an intact chromosome 7A, in either the disomic condition (five plants) or the monosomic condition (27 plants). The remaining 10 plants had no chromosome 7A (Table 2 and Supplementary Figure S1). Subsequent PCR analysis demonstrated that all of the 32 plants with chromosome 7A had both 7A-specific markers (Owm186 and Owm190) and the HvSUT1 marker, and that the remaining 10 plants, with no chromosome 7A, had none of the three markers (Table 2 and Figure 4). This perfect concurrence of chromosome 7A and the HvSUT1 marker clearly showed that the HOSUT transgene was located on chromosome 7A.
One of the 10 F2 plants without the HvSUT1 marker was monotelosomic for the short arm of chromosome 7A (7AS) and trisomic for chromosome 7B (Supplementary Figure S2). In this plant, one of the two 7A-specific markers (Owm190) was not amplified by PCR (Figure 4). This result suggested that Owm186 and Owm190 are located on the short and long arms of chromosome 7A, respectively, and that the 7A long arm carries the HOSUT transgene.

FIGURE 3 | C-banding images (three for each) of chromosomes 7A, 4A, and 5D derived from Certo (upper row) and from the HOSUT lines (lower row). Note that there was no obvious structural difference between the normal and the transgene-carrying homologous chromosomes.

FIGURE 4 | PCR analysis of part of the F2 progeny from the cross between N7AT7B and HOSUT12. The HvSUT1 marker was not amplified in F2 plants 3, 7, and 9. Neither of the 7A-specific markers (Owm186 and Owm190) was amplified in F2 plants 3 and 9, while Owm186 was amplified in F2 plant 7, which was identified by C-banding to be monotelosomic 7AS (Supplementary Figure S2). This result suggested that the HvSUT1 marker was located on the long arm of chromosome 7A.

Assuming that the HOSUT transgene was located on one of the other chromosomes, and that the transmission rate of the HOSUT transgene to the F2 progeny was 75%, as expected from the Mendelian segregation ratio in the F2 progeny of a monohybrid cross, the probability that the transgene would be present in every one of the 32 plants carrying chromosome 7A is 0.0011 (chi-square test). Therefore, it could be statistically deduced that the transgene was located on no chromosome other than chromosome 7A.

F2 of N5DT5B × HOSUT20

Among 29 F2 plants from the cross between N5DT5B and HOSUT20, 17 were either monosomic (11 plants) or disomic (6 plants) for chromosome 5D, 11 were nullisomic for chromosome 5D, and one was monotelosomic for the long arm of chromosome 5D (5DL) (Table 2 and Supplementary Figures S3, S4). Subsequent PCR analysis showed that all the plants monosomic or disomic for chromosome 5D had the HvSUT1 marker as well as both 5D-specific markers (Owm180 and Owm184). On the other hand, the 11 nullisomic-5D plants had no HvSUT1 marker and neither of the 5D-specific markers (Figure 5 and Table 2). The monotelosomic-5DL plant had the HvSUT1 and Owm180 markers but not the Owm184 marker. This perfect association between the presence of chromosome 5D or the 5DL arm and the HvSUT1 marker suggested that the HOSUT transgene was located on the 5DL chromosome arm. At the same time, Owm180 and Owm184 were confirmed to be located on the long and short arms of chromosome 5D, respectively.

FIGURE 5 | PCR analysis of part of the F2 progeny from the cross between N5DT5B and HOSUT20. The HvSUT1 marker and the two 5D-specific markers (Owm180 and Owm184) were not amplified in F2 plants 2, 6, and 8. In F2 plant 4, which was identified by C-banding to be monotelosomic 5DL (Supplementary Figure S4), the HvSUT1 and Owm180 markers were amplified, but Owm184 was not. This result suggested that the HvSUT1 marker was located on the long arm of chromosome 5D.

By similar reasoning to that given above for the F2 of N7AT7B × HOSUT12 progeny, the probability that the HOSUT transgene would be present in every one of the 18 plants carrying chromosome 5D or 5DL is 0.0143 (chi-square test). Therefore, the null hypothesis that the transgene is located on a chromosome other than chromosome 5D can be rejected.
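The quoted probabilities are consistent with a one-degree-of-freedom chi-square goodness-of-fit test comparing the observed transgene counts among the chromosome-carrying F2 plants with the 75%:25% Mendelian expectation. Assuming this is the construction behind the reported values, a minimal sketch is as follows (the counts for the 4A cross are reported in the next subsection).

from scipy.stats import chisquare

# Observed (with transgene, without) among plants carrying the critical
# chromosome, vs. 75%/25% expectation under the null hypothesis that the
# transgene resides on some other chromosome.
for label, n in [("7A cross", 32), ("5D cross", 18), ("4A cross", 28)]:
    stat, p = chisquare([n, 0], f_exp=[0.75 * n, 0.25 * n])
    print(f"{label}: chi2 = {stat:.2f}, p = {p:.4f}")
# -> p close to the 0.0011, 0.0143, and 0.0026 reported in the text.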
F2 of N4AT4B × HOSUT24

Among 30 F2 plants from the cross between N4AT4B and HOSUT24, 25 were either monosomic (23 plants) or disomic (two plants) for chromosome 4A, and two were nullisomic for chromosome 4A (Table 2 and Supplementary Figure S5). Two of the remaining three plants were monotelosomic for the long arm of chromosome 4A (4AL) (Supplementary Figure S6), and one had a translocation between the 4AL arm and the short arm of chromosome 6B (6BS) (Supplementary Figure S7). Subsequent PCR analysis showed that all of the 25 plants carrying chromosome 4A and the three plants carrying the 4AL arm had the HvSUT1 marker and both 4A-specific markers. On the other hand, the two plants without chromosome 4A had none of the three markers (Figure 6 and Table 2). This perfect concurrence of chromosome 4A (or 4AL) and the HOSUT transgene suggested that the HOSUT transgene was located on the 4AL arm.

FIGURE 6 | PCR analysis of part of the F2 progeny from the cross between N4AT4B and HOSUT24. The HvSUT1 marker and the two 4A-specific markers (Owm121 and Owm167) were not amplified in F2 plants 7 and 9. All three markers were amplified in F2 plant 3, which was identified by C-banding to be monotelosomic 4AL (Supplementary Figure S6). This result suggested that the HvSUT1 marker was located on the long arm of chromosome 4A.

The low occurrence of nullisomics for chromosome 4A (6.7%), compared with the occurrence of nullisomics for chromosome 7A (23.8%) and for chromosome 5D (37.9%), was probably due to inadequate compensation for the loss of chromosome 4A by two doses of chromosome 4B in pollen. It is known that there are rearrangements among chromosomes 4A, 5A, and 7B of modern-day hexaploid bread wheat, and that chromosome 4A carries translocated segments from chromosomes 5A and 7B (Devos et al., 1995). This fact suggests that the loss of chromosome 4A was inadequately compensated for by the extra dose of chromosome 4B in this experiment. Again, making a calculation similar to the one performed for the F2 of N7AT7B × HOSUT12 progeny, the probability that the HOSUT transgene would be present in every one of the 28 plants carrying chromosome 4A or 4AL is 0.0026 (chi-square test). Therefore, the null hypothesis that the transgene is located on a chromosome other than chromosome 4A can be rejected. Taken together, the present study confirmed the chromosomal locations of the HOSUT transgene in the HOSUT lines suggested by Cápal et al. (2016). FISH is the fastest way to identify transgene insertion sites in chromosomes, when it works. Although several studies have reported successful FISH of single-copy genes, cDNAs or transgenes in wheat (e.g., Anand et al., 2003; Danilova et al., 2012, 2014), we failed to assign the HOSUT gene to chromosomes by FISH. Therefore, conventional aneuploid analysis is still the surest way of assigning specific genes or DNA sequences to specific chromosomes, although it is laborious. In this study, telocentric chromosomes 5DL and 4AL carrying the transgene appeared in the F2 progeny of crosses between the nullisomic-tetrasomic lines and the HOSUT lines. The size and morphology of telocentric chromosomes are good landmarks, which serve to identify them under a microscope and to isolate them from the chromosome complement by flow cytometry. After being established in telosomic lines, the 5DL and 4AL telocentric chromosomes can be flow-sorted onto microscope slides, which would make excellent, debris-free chromosome preparations for FISH analysis.
In addition, flow-sorted chromosomes can be extremely stretched, sometimes to more than seven times the length of chromosomes prepared by the squash method (Endo et al., 2014). In producing the draft sequence of a hexaploid wheat genome, flow-sorted telocentric chromosomes were used for DNA extraction in order to reduce the complexity of the polyploid genome (International Wheat Genome Sequencing Consortium [IWGSC], 2014). Likewise, the 5DL and 4AL telocentric chromosomes harboring the transgene in the respective HOSUT lines can be flow-sorted for sequencing, to analyze the DNA structure around the transgene insertion sites by various methods based on next generation sequencing, such as targeted locus amplification (Cain-Hom et al., 2017). This strategy would be more efficient than performing whole genome sequencing or targeted locus amplification with the whole genome of the HOSUT lines.

AUTHOR CONTRIBUTIONS

TE conceived the plan of this study and wrote the manuscript. WW and BB cross-fertilized the HOSUT lines with the nullisomic-tetrasomic lines. MM grew and self-fertilized the F1 lines. TE and ST performed the cytological observation and PCR analysis, respectively. WW and MM revised the manuscript.

FUNDING

This research was supported by the Ministry of Education, Culture, Sports, Science and Technology as part of the Joint Research Program implemented at the Institute of Plant Science and Resources, Okayama University, Japan. This work was also supported by the Research Institute for Food and Agriculture of Ryukoku University.
Nearest matrix polynomials with a specified elementary divisor

The problem of finding the distance from a given $n \times n$ matrix polynomial of degree $k$ to the set of matrix polynomials having the elementary divisor $(\lambda-\lambda_0)^j$, $j \geqslant r$, for a fixed scalar $\lambda_0$ and $2 \leqslant r \leqslant kn$ is considered. It is established that polynomials that are not regular are arbitrarily close to a regular matrix polynomial with the desired elementary divisor. For regular matrix polynomials the problem is shown to be equivalent to finding minimal structure-preserving perturbations such that a certain block Toeplitz matrix becomes suitably rank deficient. This is then used to characterize the distance via two different optimizations. The first one shows that if $\lambda_0$ is not already an eigenvalue of the matrix polynomial, then the problem is equivalent to computing a generalized notion of a structured singular value. The distance is computed from the second optimization via algorithms like BFGS and Matlab's globalsearch algorithm. Upper and lower bounds on the distance are also derived, and numerical experiments are performed to compare them with the computed values of the distance.

Introduction

Given an $n \times n$ matrix polynomial $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ of degree $k$, where $A_i$, $i = 0, \dots, k$, are $n \times n$ real or complex matrices, this paper investigates the distance from $P(\lambda)$ to a nearest matrix polynomial with an elementary divisor $(\lambda - \lambda_0)^j$, $j \geqslant r$, for a given $\lambda_0 \in \mathbb{C}$ and integer $r \geqslant 2$. Although the problem is considered only for finite values of $\lambda_0$, the analysis also covers the infinite case, which is equivalent to the reversal polynomial defined by $\operatorname{rev} P(\lambda) := \sum_{i=0}^{k} \lambda^i A_{k-i}$ having an elementary divisor $\lambda^j$, $j \geqslant r$. In particular, in such cases the distance under consideration is important from the point of view of control theory for the following reasons. If $P(\lambda) = \lambda A_1 - A_0$ is a regular matrix pencil, then there exist invertible matrices $E$ and $F$ such that
$$E P(\lambda) F = \begin{bmatrix} \lambda I_{p_1} - J_{p_1} & 0 \\ 0 & \lambda N_{p_2} - I_{p_2} \end{bmatrix},$$
where $J_{p_1}$ is a block diagonal matrix containing all the Jordan blocks associated with finite eigenvalues of $P(\lambda)$ and $N_{p_2}$ is a nilpotent matrix of size $p_2$ with nilpotent Jordan blocks on the diagonal. The block $N_{p_2}$ arises in the decomposition only if $P(\lambda)$ has an eigenvalue at $\infty$. The matrix pencil on the right hand side of the above decomposition is called the Weierstrass canonical form of the pencil $P(\lambda)$. If $\infty$ is an eigenvalue, then the smallest positive integer $\nu$ such that $N_{p_2}^{\nu} = 0$ is called the index of the pencil. If $\nu > 1$, then this is equivalent to the existence of a Jordan chain of length at least 2 at $\infty$ for $P(\lambda)$, or equivalently to an elementary divisor $\lambda^j$, $j \geqslant 2$, for $\operatorname{rev} P(\lambda)$. In such a case the associated differential algebraic equation $A_1 \dot{x}(t) = A_0 x(t) + B u(t)$ may not have a solution for certain choices of initial conditions unless the controller $u(t)$ is sufficiently smooth. In fact, the larger the length of a Jordan chain at $\infty$, the greater are the smoothness requirements on $u(t)$. In particular, for dynamical systems arising from matrix pencils as above to be stable or asymptotically stable, it is necessary that the matrix pencil has index at most one. Moreover, for the stability of such systems it is necessary that the purely imaginary eigenvalues of $P(\lambda)$ are not associated with Jordan chains of length 2 or more. For more details see [2,15,4] and references therein.
It is well known that arbitrarily small perturbations to matrix pencils with $\lambda_0$ as an eigenvalue of algebraic multiplicity $r$ can result in a matrix pencil having an elementary divisor $(\lambda - \lambda_0)^r$. In fact this result extends to all matrix polynomials, a proof of which is provided in Section 3. Due to this fact, the distance problem under consideration is also equivalent to finding the distance to a nearest matrix polynomial with an eigenvalue at $\lambda_0$ of algebraic multiplicity at least $r$. This problem has been considered in the literature in various forms. The distance to a nearest matrix with a prescribed multiple eigenvalue is considered in [12], and bounds on the distance are obtained under certain conditions. In [13] this work is extended to find the distance to a nearest matrix with a specified eigenvalue of algebraic multiplicity at least $r$ and a Jordan chain of length at most $k$, together with an upper bound on the distance to a nearest matrix with a specified eigenvalue of algebraic multiplicity at least $r$. The latter is obtained by constructing a perturbation of the given matrix which has the desired feature; however, the construction is possible only under certain conditions. The results are extended to matrix polynomials in [8], where a similar construction is made to find an upper bound on the distance to a nearest matrix polynomial with specified eigenvalues of desired multiplicities. The distance from an $n \times m$ matrix pencil $A + \lambda B$, $n \leqslant m$, to a nearest matrix pencil having specified eigenvalues such that the sum of their multiplicities is at least $r$ is considered in [10]. Under the assumptions that $\operatorname{rank} B \geqslant r$ and that only $A$ is perturbed, the distance is shown to be given by a certain singular value optimization under certain conditions. These ideas are extended in [7] to find the same distance from a square matrix polynomial that has no infinite eigenvalues. Under conditions similar to those in [10], a singular value optimization is shown to equal the distance when only the constant coefficient of the matrix polynomial is perturbed. A lower bound is found for the general case when all coefficient matrices are perturbed. The techniques are further extended to the same distance for more general nonlinear eigenvalue problems in [6]. The analysis of the distance problem in this paper has several key features. Firstly, the stated distance is considered for a square matrix polynomial that is either regular or singular, and perturbations are considered on all the coefficient matrices of the polynomial. Note that with the exception of [10], where a rectangular matrix pencil is considered, all other works in the literature assume the matrix pencil or polynomial to be regular; moreover, [10] considers perturbations only to the constant coefficient matrix of the pencil. In fact, by using elementary perturbation theoretic arguments it is shown in Section 3 that if the matrix polynomial $P(\lambda)$ is singular, then it is arbitrarily close to a regular matrix polynomial with an elementary divisor $(\lambda - \lambda_0)^j$, $j \geqslant r$. This makes it possible to assume in the rest of the paper that the matrix polynomial $P(\lambda)$ is regular. A necessary and sufficient condition is obtained for $P(\lambda)$ to have $\lambda_0$ as an eigenvalue of algebraic multiplicity at least $r$.
Due to this, it is possible to show that finding the stated distance is equivalent to finding a structure-preserving perturbation such that the nullity of a certain block Toeplitz matrix is at least $r$. This leads to a lower bound on the distance and allows for several characterizations of the distance in terms of optimization problems. Under the mild assumption that $\lambda_0$ is not an eigenvalue of $P(\lambda)$, for different choices of norms it is established that computing the distance from $P(\lambda)$ to a nearest matrix polynomial with an elementary divisor $(\lambda - \lambda_0)^j$, $j \geqslant r$, is equivalent to computing a generalized version of a structured singular value or $\mu$-value. It is well known that the $\mu$-value computation is an NP-hard problem [1]. Due to the form of the generalized $\mu$-value, these results are likely to throw light on the computational complexity of the distance problem. The characterization in terms of generalized $\mu$-values also yields a lower bound on the distance. Alternatively, the distance is characterized by another optimization problem, which is computed via BFGS and Matlab's globalsearch algorithm; this also results in an upper bound on the distance. A special case for which the solution of the distance problem has a closed form expression is also discussed. Finally, the values of the distance computed via BFGS and Matlab's globalsearch are compared with the upper and lower bounds.

Preliminaries

Standard notations are followed throughout the paper. The set of $n \times n$ complex matrices is denoted by $\mathbb{C}^{n \times n}$. The $i$-th singular value of a matrix $A$ is denoted by $\sigma_i(A)$, and the smallest singular value of $A$ by $\sigma_{\min}(A)$. Consider a matrix polynomial of degree $k$ of the form $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$, $A_i \in \mathbb{C}^{n \times n}$. There exist two $n \times n$ matrix polynomials $E(\lambda)$ and $F(\lambda)$ with nonzero determinants independent of $\lambda$ such that
$$E(\lambda)\, P(\lambda)\, F(\lambda) = D(\lambda) = \operatorname{diag}\big(d_1(\lambda), \dots, d_t(\lambda), 0, \dots, 0\big),$$
where each $d_i(\lambda)$ divides $d_{i+1}(\lambda)$. This is called the Smith form of $P(\lambda)$. Important concepts associated with $P(\lambda)$ may be defined via its Smith form. The nonzero diagonal elements $d_1(\lambda), \dots, d_t(\lambda)$ are called the invariant polynomials of $P(\lambda)$, and their number $t$ is the normal rank of $P(\lambda)$. The polynomial $P(\lambda)$ is said to be regular if its normal rank equals its size $n$; otherwise it is said to be a non-regular or singular matrix polynomial. Each invariant polynomial may be written as a product of linear factors,
$$d_i(\lambda) = \prod_{j=1}^{q_i} (\lambda - \lambda_{ij})^{c_{ij}},$$
where $\lambda_{i1}, \dots, \lambda_{iq_i}$ are distinct complex numbers and $c_{i1}, \dots, c_{iq_i}$ are positive integers. The factors $(\lambda - \lambda_{ij})^{c_{ij}}$ are called elementary divisors of $P(\lambda)$. Any $\lambda_0 \in \mathbb{C}$ is a finite eigenvalue of $P(\lambda)$ if $(\lambda - \lambda_0)^c$ is an elementary divisor of $P(\lambda)$ for some positive integer $c$. The algebraic multiplicity of $\lambda_0$ as an eigenvalue of $P(\lambda)$ is the sum of all the powers of the term $(\lambda - \lambda_0)$ over all the invariant polynomials, and the geometric multiplicity of $\lambda_0$ is the number of invariant polynomials which have $(\lambda - \lambda_0)^c$ as a factor. Clearly, if the matrix polynomial $P(\lambda)$ is regular, then the eigenvalues of $P(\lambda)$ are the roots of $\det(P(\lambda))$, with algebraic multiplicity equal to the multiplicity of the root. Having an elementary divisor $(\lambda - \lambda_0)^r$ is also equivalent to the existence of vectors $x_0, \dots, x_{r-1} \in \mathbb{C}^n$, $x_0 \neq 0$, satisfying the equations
$$\sum_{i=0}^{j} \frac{1}{i!}\, P^{(i)}(\lambda_0)\, x_{j-i} = 0, \qquad j = 0, \dots, r-1,$$
where $P^{(i)}(\lambda)$ denotes the $i$-th derivative of $P(\lambda)$ with respect to $\lambda$. The vectors $x_0, \dots, x_{r-1}$ are said to form a Jordan chain of length $r$ of $P(\lambda)$ corresponding to $\lambda_0$.
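These chain equations are straightforward to verify numerically. A minimal Python sketch, using a $2 \times 2$ pencil with a known elementary divisor as the test case (the helper names and the example are illustrative):

import numpy as np
from math import factorial

def poly_derivative_at(coeffs, lam, order):
    """order-th derivative of P(lambda) = sum_i lambda^i A_i, evaluated at lam."""
    out = np.zeros_like(coeffs[0], dtype=complex)
    for i in range(order, len(coeffs)):
        out = out + (factorial(i) // factorial(i - order)) * lam**(i - order) * coeffs[i]
    return out

def is_jordan_chain(coeffs, lam, chain, tol=1e-10):
    """Check x_0 != 0 and sum_{i=0}^{j} P^(i)(lam)/i! x_{j-i} = 0 for j = 0..r-1."""
    if np.linalg.norm(chain[0]) <= tol:
        return False
    for j in range(len(chain)):
        acc = sum(poly_derivative_at(coeffs, lam, i) @ chain[j - i] / factorial(i)
                  for i in range(j + 1))
        if np.linalg.norm(acc) > tol:
            return False
    return True

# Test case: P(lambda) = lambda*I - J, with J a 2x2 Jordan block at lambda0 = 2,
# so P has the elementary divisor (lambda - 2)^2.
lam0 = 2.0
J = np.array([[lam0, 1.0], [0.0, lam0]])
coeffs = [-J, np.eye(2)]              # [A0, A1] with P(lambda) = A0 + lambda*A1
e1, e2 = np.eye(2)
print(is_jordan_chain(coeffs, lam0, [e1, e2]))    # True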
Given any $n \times n$ matrix pencil $L(\lambda) = A - \lambda E$, there exist two $n \times n$ invertible matrices $P$ and $Q$ such that $P(A - \lambda E)Q$ is a block diagonal matrix pencil, the diagonal blocks being either $q \times q$ blocks of the form $\lambda I_q - J_q(\alpha)$ or $\lambda J_q(0) - I_q$ for some $\alpha \in \mathbb{C}$, or $q \times (q+1)$ blocks of the form $\lambda G_q - F_q$, or their transposes $\lambda G_q^T - F_q^T$. The blocks $\lambda I_q - J_q(\alpha)$, $\lambda J_q(0) - I_q$, $\lambda G_q - F_q$ and $\lambda G_q^T - F_q^T$ correspond to a finite eigenvalue $\alpha$, the infinite eigenvalue, right singular blocks and left singular blocks, respectively. This is called the Kronecker canonical form (KCF) of the pencil. For a matrix pencil, having an elementary divisor $(\lambda - \lambda_0)^r$ is equivalent to having a block $\lambda I_r - J_r(\lambda_0)$ in its KCF. Also, clearly the algebraic multiplicity of $\lambda_0$ as an eigenvalue of $L(\lambda)$ is the sum of the sizes of all the blocks corresponding to $\lambda_0$, and its geometric multiplicity is the number of such blocks.

The normwise distance of $P(\lambda)$ to the set of all matrix polynomials having an elementary divisor $(\lambda - \lambda_0)^j$ with $j \geqslant r$ will be considered with respect to the norms
$$\delta_s(P, \lambda_0, r) = \inf\big\{ |||\Delta P|||_s : P + \Delta P \text{ has an elementary divisor } (\lambda - \lambda_0)^j,\ j \geqslant r \big\}, \quad s = 2 \text{ or } F,$$
where $|||P|||_F := \|[A_0 \cdots A_k]\|_F$ and $|||P|||_2 := \|[A_0 \cdots A_k]\|_2$, $\|\cdot\|_F$ and $\|\cdot\|_2$ being the Frobenius and 2-norms on matrices, respectively. Also, the matrix polynomial $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$ is such that any of the coefficient matrices $\Delta A_i$ may be zero. Due to the importance of the case $\lambda_0 = 0$ in practical applications, and also because the results for this case involve expressions that are relatively simpler than those for the general case, in many instances the results for this special case are obtained first and then extended to other choices of $\lambda_0$. The following lemma will be useful for making these extensions.

Proof. The proof follows from the fact that, for each $i = 1, \dots, p+1$,

Polynomials for which the distance is zero

Given a matrix polynomial $P(\lambda)$, it is interesting to identify the cases when the distance under consideration is zero. One such situation is obviously the case that $k = 1$ and $\lambda_0$ is an eigenvalue of $P(\lambda)$ of multiplicity at least $r$. The main result of this section is a proof of the fact that this holds for all values of $k$ and also for singular matrix polynomials. The following theorem proves this for matrix pencils; it is later generalized to matrix polynomials. For the sake of completeness, the case that the distance is zero if $\lambda_0$ is an eigenvalue of the pencil of multiplicity at least $r$ is also included in the statement of the theorem. Also note that although the theorem is proved with respect to the norm $|||\cdot|||_F$, it clearly also holds for all other choices of norms.

Theorem 3.1. For a given $n \times n$ matrix pencil $L(\lambda)$ and a positive integer $r \leqslant n$, if (a) $L(\lambda)$ is regular and $\lambda_0$ is an eigenvalue of algebraic multiplicity at least $r$, or (b) $L(\lambda)$ is singular, then $L(\lambda)$ is arbitrarily close to a regular matrix pencil having an elementary divisor $(\lambda - \lambda_0)^j$, $j \geqslant r$.

Proof. (a) The proof of this part is obvious owing to the structure of the Kronecker canonical form of a pencil having $\lambda_0$ as an eigenvalue of algebraic multiplicity at least $r$.

(b) Let $L(\lambda)$ be a singular pencil and let $\epsilon > 0$ be arbitrarily chosen. Without loss of generality it may be assumed that $L(\lambda)$ is in Kronecker canonical form, i.e.,
$$L(\lambda) = \operatorname{diag}\big(R_f(\lambda),\, R_{\infty}(\lambda),\, S(\lambda)\big),$$
where $R_f(\lambda)$ and $R_{\infty}(\lambda)$ represent the regular parts of $L(\lambda)$ corresponding to finite and infinite eigenvalues, respectively, and $S(\lambda)$ represents the singular part. The idea of the proof is to construct a pencil $\Delta L(\lambda)$ such that $|||\Delta L|||_F < \epsilon$ and $L(\lambda) + \Delta L(\lambda)$ is a regular matrix pencil with $0$ as an eigenvalue of algebraic multiplicity $n$. Then by part (a), $L(\lambda) + \Delta L(\lambda)$ is arbitrarily close to having an elementary divisor $\lambda^n$, where clearly $n \geqslant r$. The arguments are then extended to the case that $\lambda_0 \neq 0$.
The block $R_f(\lambda)$ is bidiagonal with super-diagonal entries $0$ or $-1$. Construct an $n_1 \times n_1$ pencil $\Delta R_f(\lambda)$ such that all the sub-diagonal entries are $\tilde{\epsilon}\lambda$ and all other entries are $0$. The block $R_{\infty}(\lambda)$ is also bidiagonal, with super-diagonal entries $\lambda$ or $0$. Construct an $n_2 \times n_2$ pencil $\Delta R_{\infty}(\lambda)$ by replacing the super-diagonal entries $\lambda$ and $0$ of $R_{\infty}(\lambda)$ by $0$ and $\tilde{\epsilon}\lambda$, respectively, and setting all other entries to $0$. The singular part $S(\lambda)$ contains an equal number of right and left singular blocks. Without loss of generality, assume that the right and left singular blocks in $S(\lambda)$ appear alternately, so that $S(\lambda)$ can be considered block diagonal with square diagonal blocks formed by placing one right and one left singular block next to each other. Each such diagonal block of $S(\lambda)$ has exactly one row and one column independent of $\lambda$, and all other rows and columns have exactly one entry equal to $\lambda$. Assuming that there are $\mu$ blocks in $S(\lambda)$, suppose that the $i_1$th, …, $i_\mu$th rows and the $j_1$th, …, $j_\mu$th columns of $S(\lambda)$ are independent of $\lambda$. Construct an $n_3 \times n_3$ block diagonal pencil $\Delta S(\lambda)$ with square diagonal blocks of the same sizes as the diagonal blocks of $S(\lambda)$, such that the $(i_2, j_2)$th, …, $(i_\mu, j_\mu)$th entries are $\tilde{\epsilon}\lambda$ and all other entries are $0$. In those cases where not all three types of blocks $R_f(\lambda)$, $R_{\infty}(\lambda)$ and $S(\lambda)$ are present in $L(\lambda)$, the strategies for forming $\Delta L(\lambda)$ are as follows.

• If only the singular blocks occur in $L(\lambda)$, then we construct $\Delta L(\lambda)$ as a block diagonal matrix with blocks of the same sizes as those of $S(\lambda)$, such that the $(i_1, j_1)$th, …, $(i_\mu, j_\mu)$th entries are $\tilde{\epsilon}\lambda$ and all other entries are $0$.

In each case the above arguments may be extended to show that, by choosing $\tilde{\epsilon}$ small enough, $|||\Delta L|||_F < \epsilon$ and $L(\lambda) + \Delta L(\lambda)$ is a regular pencil whose determinant is a scalar multiple of $\lambda^n$. Now suppose $\lambda_0 \neq 0$. The pencil $L(\lambda)$ may be written in the form $L(\lambda) = (A - \lambda_0 E) - (\lambda - \lambda_0)E$. Therefore, for $\Delta L(\lambda) := \lambda_0 \Delta E - \lambda \Delta E$, $\lambda_0$ is an eigenvalue of $(L + \Delta L)(\lambda)$ of algebraic multiplicity $n \geqslant r$. The proof now follows from the fact that $\tilde{\epsilon}$ may be chosen small enough so that $|||\Delta L|||_F = \sqrt{|\lambda_0|^2 + 1}\, \|\Delta E\|_F < \epsilon$. Hence the proof.

The above result may be extended to matrix polynomials by considering the first companion linearization
$$C_P(\lambda) = \lambda \begin{bmatrix} A_k & & & \\ & I_n & & \\ & & \ddots & \\ & & & I_n \end{bmatrix} + \begin{bmatrix} A_{k-1} & A_{k-2} & \cdots & A_0 \\ -I_n & 0 & \cdots & 0 \\ & \ddots & & \vdots \\ 0 & & -I_n & 0 \end{bmatrix}.$$
It is an example of a block Kronecker linearization as introduced in [3], where it was shown that if $L(\lambda)$ is a block Kronecker linearization of $P(\lambda)$ and $\Delta L(\lambda)$ is a pencil of the same size as $L(\lambda)$ with $|||\Delta L|||_F < \epsilon$ for some sufficiently small $\epsilon > 0$, then $L(\lambda) + \Delta L(\lambda)$ is a strong linearization of $P(\lambda) + \Delta P(\lambda)$ such that $|||\Delta P|||_F < C\epsilon$ for some positive constant $C$. Due to this result, the following theorem is an immediate consequence of Theorem 3.1.

Theorem 3.2. For a given $n \times n$ matrix polynomial $P(\lambda)$ of degree $k$ and a positive integer $r \leqslant kn$, if (a) $P(\lambda)$ is regular and $\lambda_0$ is an eigenvalue of algebraic multiplicity greater than or equal to $r$, or (b) $P(\lambda)$ is singular, then $P(\lambda)$ is arbitrarily close to a regular matrix polynomial having an elementary divisor $(\lambda - \lambda_0)^j$ where $j \geqslant r$.

In fact, by using the arguments in the proof of Theorem 3.1, it is clear that any $n \times n$ singular matrix polynomial $P(\lambda)$ of degree $k$ is arbitrarily close to a regular matrix polynomial having an elementary divisor $(\lambda - \lambda_0)^{kn}$. In view of the above theorem, it is now possible to assume without loss of generality that the distances $\delta_s(P, \lambda_0, r)$ for $s = 2$ or $F$ are being computed for a regular matrix polynomial $P(\lambda)$ which does not have $\lambda_0$ as an eigenvalue of algebraic multiplicity $r$.
This assumption also removes the uncertainty that was earlier associated with the possibility that perturbations made to the matrix polynomial for the desired objectives could result in a singular matrix polynomial.

4 A characterization via block Toeplitz matrices

One of the aims of this work is to show that for appropriate choices of norms, computing the distance to a matrix polynomial with an elementary divisor (λ − λ_0)^j, j ≥ r, is equivalent to finding a structured singular value or generalized µ-value. The next result is an important step in this direction. Since the expression for the optimization is more aesthetic if r is replaced by r + 1, in the rest of the paper the distance is considered in the form δ_s(P, λ_0, r + 1), where s = 2 or F. The following definition will be frequently used. For any γ = [γ_1 γ_2 ⋯ γ_r] ∈ Γ given by (6.1) and α ∈ C, let T_γ(Q, α) be a function from the set of all n × n matrix polynomials Q(λ) = Σ_{i=0}^{k} λ^i A_i to the set of (r+1)n × (r+1)n matrices defined by (4.1).

Theorem 4.1. A scalar λ_0 ∈ C is an eigenvalue of an n × n matrix polynomial P(λ) of algebraic multiplicity at least r + 1 if and only if the rank of T_γ(P, λ_0) as defined by (4.1) is at most (r + 1)(n − 1).

Proof. Suppose first that λ_0 is an eigenvalue of P(λ) of algebraic multiplicity at least r + 1, with Jordan chains of lengths k_1, . . . , k_p corresponding to λ_0 satisfying k_1 + ⋯ + k_p ≥ r + 1. It may be assumed that the initial eigenvectors {x_11, x_21, . . . , x_p1} form a linearly independent set. Clearly the i-th Jordan chain contributes k_i vectors of length (r+1)n to the null space N(T_γ(P, λ_0)) of T_γ(P, λ_0) for i = 1, . . . , p. All these vectors are linearly independent as {x_11, x_21, . . . , x_p1} are linearly independent. Hence the nullity of T_γ(P, λ_0) is at least r + 1.

Conversely, suppose that rank(T_γ(P, λ_0)) ≤ (r + 1)(n − 1), so that the nullity of T_γ(P, λ_0) is at least r + 1. Let {x_1, x_2, . . . , x_{r+1}} be a linearly independent ordered list in N(T_γ(P, λ_0)), where x_j = [x_{1j}^T ⋯ x_{r+1,j}^T]^T with x_{ij} ∈ C^n. If x_{r+1,j} ≠ 0 for some j, then x_j yields a Jordan chain of P(λ) of length r + 1 corresponding to λ_0 and the proof follows. So assume without loss of generality that for each j = 1, . . . , r + 1 there exists t_j, 0 < t_j < r + 1, such that x_{ij} = 0 for all i = t_j + 1, . . . , r + 1. Let t = max_{1≤j≤r+1} t_j and p = t − min_{1≤j≤r+1} t_j. By reordering the list if necessary, it may be assumed that the first k_1 + ⋯ + k_s vectors of the list satisfy x_{ij} = 0 for all s = 1, . . . , p + 1 and i = t − s + 2, . . . , r + 1, so that k_1 + ⋯ + k_{p+1} = r + 1. Note that k_j may be zero for some or all j = 2, . . . , p + 1. Consider X = [x_1 x_2 ⋯ x_{r+1}]. It is possible that some of the vectors x_{t−s+1, 1+k_1+⋯+k_{s−1}}, . . . , x_{t−s+1, k_1+⋯+k_s} in the consecutive columns 1 + k_1 + ⋯ + k_{s−1} to k_1 + ⋯ + k_s of X can be made zero for each s = 1, . . . , p + 1 via elementary column operations on X that affect only those columns. The columns of the transformed X will also form a linearly independent list in N(T_γ(P, λ_0)). Assume without loss of generality that X has been formed after such transformations have already been made and that the submatrices [x_{t−s+1, 1+k_1+⋯+k_{s−1}} ⋯ x_{t−s+1, k_1+⋯+k_s}] of X have full column rank for each s = 1, . . . , p + 1. If the first vector of the list β_p does not belong to span(β), it is included in β. If it belongs to span(β), then it can be uniquely represented as a linear combination of the vectors of β and at least one of the scalar coefficients in the representation is nonzero. Replace one of the vectors from β whose associated coefficient in the linear combination is nonzero by the first vector of β_p.
Now consider the second vector of the list β_p. If it does not belong to the span of the updated β, then it is included in β. Otherwise it is a linear combination of the vectors of β with at least one of the scalar coefficients in the linear combination being nonzero. As β_p is a linearly independent list, a vector associated with such a nonzero scalar in the linear combination can be chosen from β_{p+1}. The set β is further updated by replacing this vector by the second vector from β_p. This process is continued for the rest of the vectors in β_p as well as those of β_{p−1}, . . . , β_1. The final β clearly forms a linearly independent list of eigenvectors of P(λ) corresponding to λ_0. Moreover the sum of the lengths of the Jordan chains associated with these eigenvectors is at least t·k_1 + Σ_{s=1}^{p} (t − s)(k_{s+1} − k_s). But

t·k_1 + Σ_{s=1}^{p} (t − s)(k_{s+1} − k_s) = k_1 + ⋯ + k_p + (t − p)k_{p+1} ≥ k_1 + ⋯ + k_{p+1} = r + 1,

since t − p = min_{1≤j≤r+1} t_j ≥ 1. Hence λ_0 is an eigenvalue of P(λ) of algebraic multiplicity at least r + 1 and the proof follows.

Remark 4.2. Theorem 4.1 is established in [12] for the particular case that P(λ) has an eigenvalue of multiplicity 2. Under the assumption that the leading coefficient matrix has full rank, another characterization of a matrix polynomial P(λ) having a specified eigenvalue of multiplicity r is obtained in [7] via a different block Toeplitz matrix that involves r(r + 1)/2 parameters.

In view of part (a) of Theorem 3.2, the following corollary of Theorem 4.1 is immediate.

Corollary 4.3. Given any n × n matrix polynomial P(λ), consider the collection S(P, λ_0) of all n × n matrix polynomials ΔP(λ) := Σ_{i=0}^{k} λ^i ΔA_i such that the block Toeplitz matrices T_γ(P + ΔP, λ_0) as defined in (4.1), with γ = [1 ⋯ 1], have rank at most (r + 1)(n − 1). For any choice of norm |||·|||, the distance to a nearest matrix polynomial with an elementary divisor (λ − λ_0)^j, j ≥ r + 1, is given by inf{|||ΔP||| : ΔP(λ) ∈ S(P, λ_0)}.

5 The distance as the reciprocal of a generalized µ-value

Corollary 4.3 implies that for any given choice of norm, finding the distance from P(λ) to a nearest matrix polynomial with an elementary divisor λ^{r+1} is equivalent to finding the smallest structure preserving perturbation to the block Toeplitz matrix T(P, 0) of (5.3) so that the rank of the perturbed matrix is at most (r + 1)(n − 1). This fact is used in this section to show that if λ_0 ∈ C is not already an eigenvalue of P(λ), then computing the distance from P(λ) to a nearest matrix polynomial with the desired elementary divisor (λ − λ_0)^j, j ≥ r + 1, with respect to the norms |||P|||_2 and |||P|||_F is the reciprocal of a generalized notion of a µ-value. To this end, the definitions of a perturbation class and a structured singular value, which is also referred to in the literature as a µ-value, are introduced. A perturbation class S is a nonempty closed subset of C^{p×q} such that if Δ ∈ S then tΔ ∈ S for 0 ≤ t ≤ 1.

Definition 5.1 (µ-value). [11, 5] Let S ⊂ C^{p×q} be a perturbation class and let ‖·‖ be a norm on C^{p×q}. The µ-value of M ∈ C^{q×p} with respect to S and ‖·‖ is

µ_{S,‖·‖}(M) := [ inf{ ‖Δ‖ : Δ ∈ S and I − ΔM is singular } ]^{−1},

with µ_{S,‖·‖}(M) := 0 if I − ΔM is nonsingular for all Δ ∈ S. The generalized µ-value is now defined as follows.

The following lemma provides a useful factorization of T(Q, λ_0). The proof of the lemma follows from direct multiplication of the stated factors and is therefore skipped.

Lemma 5.3. For a given positive integer r and an n × n matrix polynomial Q(λ) of degree k, T(Q, λ_0) admits a factorization involving matrices E_i, i = 1, . . . , r, r + 1, which are (r + 1) × (r + 1) matrices.

The next theorem is the main result of this section.

Theorem 5.4. Let P(λ) = Σ_{i=0}^{k} λ^i A_i be an n × n matrix polynomial of degree k, and 1 ≤ r < kn.
For ΔA_i ∈ C^{n×n}, i = 0, . . . , k, let S_1 be the perturbation class of all perturbations of the type I_{r+1} ⊗ [ΔA_0 ⋯ ΔA_k] and S_2 be the perturbation class of all perturbations of the type I_{r+1} ⊗ [ΔA_0 ⋯ ΔA_{min{r,k}}]. For any λ_0 ∈ C which is not an eigenvalue of P(λ), let T(P, λ_0) be defined by (5.3) and let E and M(λ_0; r) be as given in Lemma 5.3 and Lemma 2.1 respectively. Then δ_s(P, λ_0, r + 1), s = 2 or F, is the reciprocal of the generalized µ-value of the associated matrix, taken with respect to the perturbation class S_1 in one case and with respect to S_2 otherwise.

6 An alternative formulation of the distance as an optimization

An alternative formulation for the distance δ_s(P, λ_0, r + 1) is obtained in this section for s = 2 or F.

Theorem 6.1. Let P(λ) = Σ_{i=0}^{k} λ^i A_i be an n × n matrix polynomial of degree k. For a given integer r such that 0 < r < kn, consider the set

Γ := {[γ_1 ⋯ γ_r] : γ_i > 0, 1 ≤ i ≤ r},    (6.1)

and let C^{r,n}_{T,Γ} be the collection of all block Toeplitz-like matrices X built from a vector [x_0^T ⋯ x_r^T]^T ∈ C^{(r+1)n} with x_0 ≠ 0 and a weight vector [γ_1 ⋯ γ_r] ∈ Γ. Then for s = 2 or F,

δ_s(P, 0, r + 1) = inf_{X ∈ C^{r,n}_{T,Γ}} ‖[A_0 ⋯ A_r] X X†‖_s   (with A_i := 0 for i > k),

and in general,

δ_s(P, λ_0, r + 1) = inf_{X ∈ C^{r,n}_{T,Γ}} ‖H X (M(λ_0; r) X)†‖_s,   H := [P(λ_0) P′(λ_0) ⋯ (1/p!)P^{(p)}(λ_0)], p = min{r, k},

where M(λ_0; r) is as given in Lemma 2.1.

Proof. Setting ΔA_i = 0 for i = r + 1, . . . , k if r < k, (P + ΔP)(λ) has an elementary divisor λ^j, j ≥ r + 1. Therefore in this case,

δ_s(P, 0, r + 1) = inf_{X ∈ C^{r,n}_{T,Γ}} ‖[A_0 ⋯ A_r] X X†‖_s, for s = 2 or F.

When λ_0 ≠ 0, Lemma 2.1 implies that [P(λ_0) P′(λ_0) ⋯ (1/p!)P^{(p)}(λ_0)] = [A_0 ⋯ A_k] M(λ_0; r). Using this in (6.5), the minimum 2- or Frobenius-norm solution of the resulting equation is given by the corresponding pseudoinverse expression, thus proving that if r ≤ k then the general formula above holds for s = 2 or F. If r > k, then P(λ) + ΔP(λ) has an elementary divisor (λ − λ_0)^j where j ≥ r + 1 if and only if there exist vectors x_0, x_1, . . . , x_r ∈ R^n with x_0 ≠ 0 and r positive scalars γ_1, . . . , γ_r such that the corresponding Jordan chain relations hold. This set of equations can be written in matrix form and reduced as in the previous case, and therefore the proof follows by arguing as before.

Remark 6.2. The parameters γ_i can all be taken to be 1 in the optimization that computes δ_s(P, λ_0, r + 1) for s = 2 or F. As shall be seen in Section 8, this will also decrease the number of variables in the optimization. But these parameters play an important role when computing the upper bound for δ_s(P, λ_0, r + 1) from this characterization. However, there is no particular advantage in choosing them to be nonzero real or complex numbers when deriving the upper bound.

7 Lower bounds on the distance

The first lower bound on the distance δ_F(P, λ_0, r + 1) is derived from Theorem 4.1.

8 Upper bound on the distance

In this section an upper bound on the distance δ_F(P, λ_0, r + 1) that can be used in conjunction with the lower bound obtained in Theorem 7.1 is derived.

Theorem 8.1. Let P(λ) = Σ_{i=0}^{k} λ^i A_i be an n × n matrix polynomial of degree k and let r < kn be a positive integer. For Γ as given in (6.1), let γ := [γ_1, . . . , γ_r] ∈ Γ, let f(γ) := σ_{(r+1)n−r}(T_γ(P, λ_0)), and let v(γ) = [v_0^T ⋯ v_r^T]^T and u(γ) = [u_0^T ⋯ u_r^T]^T be corresponding right and left singular vectors with v_i, u_i ∈ C^n for i = 0, 1, . . . , r, dependent on γ. Also let Γ_0 ⊂ Γ be the collection of all γ ∈ Γ with the property that the vector v_0 formed by the first n entries of a right singular vector v(γ) associated with the singular value f(γ) of T_γ(P, λ_0) is nonzero. Then for s = 2 or F,

δ_s(P, λ_0, r + 1) ≤ inf_{γ ∈ Γ_0} f(γ) ‖U(γ)(M(λ_0; r)V(γ))†‖_s,

where M(λ_0; r) is as defined in Lemma 2.1, U(γ) = [u_0 ⋯ u_r], V(γ) = [v_0 ⋯ v_r], and the infimum is taken to be ∞ if Γ_0 = ∅.

Proof. From equations (6.3) and (6.4), in either case δ_s(P, λ_0, r + 1) ≤ f(γ) ‖U(γ)(M(λ_0; r)V(γ))†‖_s, and the proof follows by taking the infimum of the right-hand side of this inequality as γ varies over Γ_0.
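For illustration, the following hedged Python sketch evaluates the bound of Theorem 8.1 for γ = [1 ⋯ 1] and λ_0 = 0. The form of T_γ(P, λ_0) used here — a block lower triangular block Toeplitz matrix whose s-th block sub-diagonal carries the Taylor coefficient P^{(s)}(λ_0)/s! — and the treatment of M(0; r) as acting like the identity are assumptions consistent with the Jordan-chain argument of Theorem 4.1 and the λ_0 = 0 specialization of Theorem 6.1, not a verbatim transcription of (4.1) or (5.3); all function names are illustrative.

```python
import numpy as np
from math import comb

def T_toeplitz(A, lam0, r):
    """Assumed form of T_gamma(P, lam0) for gamma = [1 ... 1]: a block lower
    triangular block Toeplitz matrix whose s-th block sub-diagonal carries the
    Taylor coefficient P^(s)(lam0)/s!.  A is the list [A_0, ..., A_k]."""
    k, n = len(A) - 1, A[0].shape[0]
    taylor = [sum(comb(i, s) * lam0**(i - s) * A[i] for i in range(s, k + 1))
              if s <= k else np.zeros((n, n)) for s in range(r + 1)]
    T = np.zeros(((r + 1) * n, (r + 1) * n), dtype=complex)
    for i in range(r + 1):
        for j in range(i + 1):
            T[i*n:(i+1)*n, j*n:(j+1)*n] = taylor[i - j]
    return T

def upper_bound_theorem81(A, r, lam0=0.0):
    """Evaluate f(gamma) * ||U (M V)^+||_2 for gamma = [1 ... 1] at lam0 = 0,
    where M(0; r) is taken to act as the identity (an assumption)."""
    n = A[0].shape[0]
    W, s, Vh = np.linalg.svd(T_toeplitz(A, lam0, r))
    i = (r + 1) * n - r - 1               # 0-based index of sigma_{(r+1)n-r}
    f, u, v = s[i], W[:, i], Vh[i, :].conj()
    U = u.reshape(r + 1, n).T             # U(gamma) = [u_0 ... u_r]
    V = v.reshape(r + 1, n).T             # V(gamma) = [v_0 ... v_r]
    if np.linalg.norm(V[:, 0]) < 1e-12:   # v_0 = 0: gamma not in Gamma_0
        return np.inf
    return f * np.linalg.norm(U @ np.linalg.pinv(V), 2)
```

As a usage example, for a list A = [A_0, A_1] of random 2 × 2 coefficients and r = 1, upper_bound_theorem81(A, 1) returns a number that upper-bounds δ_2(P, 0, 2) under the stated assumptions.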
Remark 8.2. A matrix polynomial for which Γ_0 = ∅ has never been encountered in practice. Therefore it is conjectured that the upper bound in Theorem 8.1 is never ∞. In fact, numerical experiments show that in many cases this upper bound is very close to the computed value of the distance.

9 Some special cases

The quantities δ_s(P, 0, 2), s = 2, F, are a measure of the distance to a matrix polynomial nearest to P(λ) = Σ_{i=0}^{k} λ^i A_i having a defective eigenvalue at 0. In this case the problem is equivalent to finding a nearest matrix pencil to λA_1 + A_0 in the chosen norm that has 0 as a defective eigenvalue. Note that this distance is of significant practical interest, as when P(λ) is replaced by rev P(λ), it is the distance to a nearest matrix polynomial with a defective eigenvalue at ∞. This problem was considered in [9] for matrix pencils, where several results that apply only to this special case were obtained.

Firstly, the upper bound for the distances in Theorem 8.1 is given by inf_{γ>0} f(γ) ‖U(γ)(M(0; 1)V(γ))†‖_s, where v(γ) = [v_0^T v_1^T]^T and u(γ) = [u_0^T u_1^T]^T are the right and left singular vectors of

[ P(0)    0
  γP′(0)  P(0) ]

corresponding to its (2n − 1)th singular value f(γ). In this case γ can be allowed to vary over all positive real numbers, as the restriction v_0 ≠ 0 can be removed. To see this, assume that γ > 0 is such that the corresponding vector v_0 = 0. Then clearly u_0 = 0 and f(γ) ≠ 0, and for s = 2 or F a perturbation ΔP(λ) with |||ΔP|||_s ≤ f(γ) can still be constructed from the relations in the proof of Theorem 8.1. So unless (P + ΔP)(λ) is singular, 0 is a multiple eigenvalue of (P + ΔP)(λ). In either case the objective is achieved, as the polynomial (P + ΔP)(λ) is arbitrarily close to having an elementary divisor λ^j, j ≥ 2.

Secondly, a formula for the Frobenius norm distance to a nearest matrix polynomial with a defective eigenvalue at 0 may be found in the special case that 0 is already an eigenvalue of P(λ) (so that rank A_0 = n − 1) and the allowable perturbations to P(λ) have the property that their coefficient matrices have rank at most 1. The formula is given by the following theorem, the proof of which is identical to that of [9, Theorem 5.4].

Theorem 9.1. Let P(λ) = Σ_{i=0}^{k} λ^i A_i be an n × n matrix polynomial of degree k where rank A_0 = n − 1. Suppose A_0 = UΣV* is a singular value decomposition (SVD) of A_0 and a_{i,j} is the entry of U*A_1V in the i-th row and j-th column. Define X to be the n × n matrix obtained from diag(σ_1, . . . , σ_{n−1}, 0) by replacing its last row with [a_{n,1} ⋯ a_{n,n−1} a_{n,n}], and Y to be the matrix obtained from it by instead replacing the last column with [a_{1,n} ⋯ a_{n−1,n} a_{n,n}]^T, where σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_{n−1} > 0 are the nonzero singular values of A_0. Then the distance, with respect to the norm |||·|||_F, to the nearest matrix polynomial with a defective eigenvalue at 0 under the restriction that the coefficient matrices of the perturbing matrix polynomial have rank at most 1 is given by min{σ_min(X), σ_min(Y)}.

10 Numerical Experiments

This section presents numerical experiments conducted to illustrate the upper and lower bounds on the distances and their values computed via BFGS and MATLAB's globalsearch algorithm from the formulation in Theorem 6.1. Computing δ_F(P, λ_0, r + 1) from the optimization in Theorem 6.1 via BFGS requires the gradient of the objective function f(X) := ‖HX(M(λ_0; r)X)†‖_F, where X varies depending on whether r ≤ k or r > k, and H = [P(λ_0) ⋯ (1/p!)P^{(p)}(λ_0)], p = min{r, k}. By Lemma 2.1, H = [A_0 ⋯ A_k] M(λ_0; r), and therefore f(X) = ‖[A_0 ⋯ A_k] M(λ_0; r) X (M(λ_0; r)X)†‖_F. Only real matrix polynomials are considered in the experiments. Since M(λ_0; r) has full column rank, if for X = X_0 there exists a neighborhood S of X_0 such that rank X_0 = rank X for all X ∈ S, then f(X) is differentiable at X_0.
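As a rough illustration of how the optimization in Theorem 6.1 can be fed to a generic minimizer, the sketch below (Python/SciPy, with finite-difference gradients standing in for the analytic gradient developed next) computes δ_F(P, 0, r + 1) for r ≤ k with all γ_i = 1, as permitted by Remark 6.2; the multistart loop is a crude stand-in for MATLAB's globalsearch, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def delta_F0(A, r, n_starts=20, seed=0):
    """Sketch of delta_F(P, 0, r+1) = inf_X ||[A_0 ... A_r] X X^+||_F over
    block Toeplitz-like X built from x = [x_0; ...; x_r] with gamma_i = 1
    (assumes r <= k; A is the list of real coefficients [A_0, ..., A_k])."""
    n = A[0].shape[0]
    Arow = np.hstack(A[:r + 1])                    # [A_0 ... A_r]
    def build_X(x):
        xb = x.reshape(r + 1, n)
        X = np.zeros(((r + 1) * n, r + 1))
        for j in range(r + 1):                     # column j: blocks shifted
            for i in range(j, r + 1):              # down by j positions
                X[i * n:(i + 1) * n, j] = xb[i - j]
        return X
    def f(x):
        X = build_X(x)
        return np.linalg.norm(Arow @ X @ np.linalg.pinv(X), 'fro')
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_starts):                      # crude multistart
        res = minimize(f, rng.standard_normal((r + 1) * n), method='BFGS')
        best = min(best, res.fun)
    return best
```

The generic full-rank neighbourhoods mentioned above are what make the finite-difference gradients inside BFGS behave reasonably here, although a nonsmooth point can still stall a single run, which is one reason for the multistart.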
If we use any numerical scheme to find the infimum of f(X), then generically at every step there exists a neighborhood S of X where every element of S has full rank, and consequently the gradient of f(X) can be found at those points. Additionally, the matrix X involved in the objective function f(X) has block Toeplitz structure, which needs to be incorporated when finding the gradient of f(X). For simplicity, the gradient is initially computed for the function (f(X))² without taking the structure of X into consideration, with the changes due to the structure incorporated later. Therefore the function under consideration is g(X) := (f(X))² = ‖G(M(λ_0; r)X)(M(λ_0; r)X)†‖²_F, where G := [A_0 ⋯ A_k]. Considering g(X) as a real valued function of the entries of X, the gradient is obtained by expanding the right-hand side; the block Toeplitz structure of X is then accounted for by restricting ∇g(X)|_{X=X_0} to that pattern. Due to the difficulties in computing the gradient of the objective function, the optimization for δ_2(P, λ_0, r + 1) in Theorem 6.1 is performed only via MATLAB's globalsearch.m. Also, in each case the optimizations involved in the lower and upper bounds are computed via the globalsearch.m algorithm.

Table 10.1 records the lower and upper bounds together with the computed values of δ_F(P, 0, r) as r varies from 2 to 6, and Table 10.2 records the same for δ_F(P, 1, r). Likewise, Table 10.3 and Table 10.4 record the corresponding quantities for the distances δ_2(P, 0, r) and δ_2(P, 1, r) respectively, except that in these cases the distance is computed only via the globalsearch.m algorithm. In almost every case the lower bound from Theorem 7.3 is better than the lower bound from Theorem 7.1.

The perturbations ΔP(λ) constructed to find the upper bound in Theorem 8.1 may also be obtained by using nonzero singular values of T_γ(P, λ_0) other than f(γ) and a corresponding pair of left and right singular vectors. However, the resulting upper bound obtained by taking the infimum of |||ΔP|||_s, s = 2 or F, over all permissible γ does not seem to be an improvement over the one already obtained. For instance, in Example 10.1 the matrix T_γ(P, 0) corresponding to the distance δ_2(P, 0, 4) is of size 8 × 8, and the upper bound from Theorem 8.1 reported in Table 10.3 is constructed by using σ_5(T_γ(P, 0)) and its corresponding left and right singular vectors. If the same bound is constructed by considering the three smallest singular values σ_6(T_γ(P, 0)), σ_7(T_γ(P, 0)) and σ_8(T_γ(P, 0)) and corresponding left and right singular vectors, then the values are 1.55784600, 1.65319413 and 2.42096365 respectively. Similar observations have been made by considering the other singular values of T_γ(P, 0).

Conclusion

Given a square matrix polynomial P(λ), the problem of finding the distance to a nearest matrix polynomial with an elementary divisor of the form (λ − λ_0)^j, j ≥ r, for a given λ_0 ∈ C and r ≥ 2 has been considered. The distance is shown to be zero for singular matrix polynomials. The problem has been characterized in terms of different optimization problems. One of them shows that the solution is the reciprocal of a generalized notion of a µ-value. The other optimization is used to compute the distance via numerical software like BFGS and MATLAB's globalsearch. Upper and lower bounds have been derived from the characterizations, and numerical experiments comparing them with the computed values of the distance show that they are quite tight in many cases. Since µ-value computation is an NP-hard problem, it is conjectured that the solution of the given distance problem is also NP-hard.
The optimizations involved in the calculations are computationally quite expensive, but this is also the case with other optimizations proposed in the literature for computing similar distances. Also, due to the nature of the optimizations, it is not clear that the values of the bounds from Theorem 7.3 and Theorem 8.1 are globally optimal. However, in many cases they are very close to the computed values of the distance. This leaves open for future research the question of whether they may actually give the exact solution of the distance problem.
Evaluation of flat ridge rehabilitation using an intraoral custom-made distraction device at four weeks versus eight weeks and its impact on dental implant efficacy: A comparative study

Background: This study aimed to evaluate alveolar bone height enhancement using a custom-made distractor and to assess the ability of the augmented ridge to support dental implants.

Method: The left mandibular premolars of nine dogs were extracted, followed by alveoloplasty to simulate an atrophic ridge. The dogs were divided into three groups: groups I and II received distractors followed by dental implants, while group III received implants alone. Distractors remained in place for 4 weeks in group I and 8 weeks in group II for consolidation. Subsequently, the distractors were removed, and a titanium dental implant was immediately inserted during the same visit. In the third group, implants were placed in the corresponding area. The implant was left in position for 8 weeks, after which the left hemimandible underwent dual-energy X-ray absorptiometry and histological analysis, focusing on the region of interest (ROI) mesial and distal to the dental implant.

Results: Densitometric analysis revealed notable osseointegration between the regenerated bone and the adjacent dental implant. Notably, there were significant differences in osseointegration between groups I and II. Moreover, osseointegration levels were similar between groups II and III, where no distraction device was employed. Histological findings showed the formation of new bone in the distraction gap, with more advanced maturation noted in the 8-week group. It is worth noting that the integration between bone and implants in the third group surpassed that of the distraction groups.

Conclusion: Using the distraction device for only 4 weeks is sufficient to meet the criteria for implant placement. The small size of the distraction device reduces tissue reaction after surgery because it eliminates the necessity of complex surgeries that may require bone grafting. Density measurements and histological observations indicate that the distractor promotes the generation of enough bone for prosthetic rehabilitation with dental implants.

Introduction

Replacing teeth in flat alveolar ridges poses challenges in selecting an appropriate dental implant that meets both functional and aesthetic requirements (Bras et al., 1983). For effective mandibular rehabilitation, the height of the edentulous ridge above the mandibular canal must be at least 7 mm (Keller, 1995). Common etiological factors for a flat alveolar ridge include tooth loss, periodontal disease, tumor removal, and post-traumatic growth disorders (Chang et al., 2016). Distraction osteogenesis (DO) is a surgical approach that involves an osteotomy to generate a gap, utilizing the bone's regenerative capacity to elongate the alveolar ridge (Toledano et al., 2019; Vale et al., 2020). The distraction device is distinguished from other grafting techniques in that it enhances the size of the original bone (Ilizarov, 1988; Kobayashi et al., 2005; Nelson et al., 2006; Zapata et al., 2014). Codivilla was the first to use a distractor device on bones, in 1905 (Hosny, 2020), while McCarthy et al.
(1992) were the first to use this device for lengthening maxillofacial bones. The mechanism of DO originates from traction, which induces tension within the callus and prompts the formation of new bone aligned with the distraction vector (Lim et al., 2018; Pereira et al., 2007). The primary advantage of the distraction device over traditional surgical approaches is its ability to reduce wound dehiscence by maintaining the periosteal nutrition of the bony rim (Chiapasco et al., 2006; Gaggl et al., 2000).

The process of bone formation via distraction involves several biological phases: latency, distraction, and consolidation (Cho et al., 2007; Pereira et al., 2007). In the latency phase, the clot transforms into granulation tissue containing connective tissue cells and infiltrating capillaries, progressing to form a smooth callus within a few days. Interleukin-1 (IL-1) secreted by progenitor cells plays a critical role in the inflammatory response, while interleukin-6 (IL-6) stimulates mesenchymal stem cell proliferation (Ando et al., 2014; Yang et al., 2022). Numerous studies have reported increased expression of TGF-β1 during both the latency and distraction phases, reaching levels more than twice as high as those found in the normal mandible (Alzahrani et al., 2014; Weiss et al., 2002; Yang et al., 2022). Many studies have indicated that an appropriate latency period of 5-7 days significantly influences optimal healing (Jensen et al., 2002; Mofid et al., 2001).

The distraction phase involves exerting tensile forces on the gap tissue, with proliferation of fibroblast-like cells at the peripheries (Jazrawi et al., 1998). At the distraction gap, TGF-β1 secreted by stem cells stimulates osteoblast proliferation to fill the gap (Ozkan et al., 2007). During the distraction phase, the expression of several bone morphogenetic proteins (BMP-2, 4, 7) is upregulated, followed by a subsequent decrease during the consolidation stage (Cheung et al., 2006; Marukawa et al., 2006). Numerous studies suggest that a rate of 1 mm daily leads to sufficient bone formation in maxillofacial distraction osteotomies (Fu et al., 2021; Klein and Howaldt, 1996). In 1989, Ilizarov reported that a movement rate of 0.5 mm/day results in early consolidation, ultimately leading to the failure of the distraction plan (Ilizarov, 1989). Conversely, increasing the rate of movement to 2 mm/day may result in compromised bone formation and inadequate adjustment of the soft tissues (Natu et al., 2014). After cessation of the distraction phase, the regenerated bone undergoes maturation that typically lasts 3-4 weeks in children and 6-8 weeks in adults (Yen et al., 2020). Karp et al.
(1992) examined mandibular elongation over consecutive days and noted the presence of three distinct layers spanning from the center to the peripheries of the gap. The central area of the gap consists of fibrous tissue with fibroblast-like cells; the subsequent layer denotes the site of bone formation surrounded by osteoblast cells; and the third layer, the layer of remodeling, ultimately leads to the creation of mature bone (Karp et al., 1992). Dual-energy X-ray absorptiometry (DEXA) is a critical method for assessing both bone mineral content (BMC) and bone mineral density (BMD), which helps in making informed therapeutic decisions and evaluating treatment responses (Lorente-Ramos et al., 2012). The current study aims to evaluate the histological structure and densitometric profile of bone newly formed using a custom-made distraction device for a brief period, and its ability to receive dental implants, in order to reduce the time required for prosthodontics.

Grouping and scenario

University, Cairo, Egypt. The experiment began with the extraction of premolars, followed by alveoloplasty to flatten the ridge. The dogs were divided into three groups: Groups I and II underwent osteotomy and installation of distraction devices. After seven days, the distractor was rotated two full revolutions daily until a vertical height of 7 mm was achieved. Subsequently, the consolidation period was extended to 4 weeks for Group I and 8 weeks for Group II. Afterward, a third surgery was performed to remove the distractor and insert the dental implants. The dogs in Group III received only dental implants of the appropriate size. The osseointegration period following implant placement lasted 8 weeks for all groups.

Extraction and alveoloplasty

For the first surgery, the animals were anesthetized using sodium thiopental at 40 mg/kg (Pharm. Industry Co., Egypt) into the recurrent tarsal vein. The procedure involved extracting the lower left premolars, reducing the height of the alveolar bone, trimming the excess mucosa, and suturing it with a 4/0 chromic catgut suture. After consuming soft food for one week, the dogs resumed their regular diet and were left for 12 weeks for complete healing.

Osteotomy and distraction

The second surgery was performed to insert a titanium distraction device that consisted of two small plates interconnected by a threaded rod (Fig. 1). The lower plate of the distractor was secured to the base of the mandibular body, and the transport plate was fixed to the induced movable bone segment. Dogs in Groups I and II were anesthetized, and a semicrestal buccal flap was created. Careful subperiosteal dissection was performed to preserve the lingual mucosa. The distractor was customized according to the mandibular topography. A bone segment measuring 30 mm in length and 5 mm in height was designated for distraction, with holes drilled at appropriate positions for screw placement, followed by separation from the jaw. Both distractor pieces were fastened in place using eight titanium screws, and the surgical area was sutured with a 4/0 chromic catgut suture (Fig. 1). The animals were administered Zyleject (Amoun Pharm. Co., Cairo, Egypt) twice daily for 3 days, with amoxicillin 500 mg (Egyptian Pharmaceutical Co., 10th of Ramadan, Egypt) twice daily for 5 days. After 1 week, the distraction spring was rotated clockwise two full revolutions daily for seven days until the segment was extended coronally by 7 mm, and the designated consolidation period began.
Distraction removal and implant insertion

In the third surgical procedure, the distractor was removed from Groups I and II, and a dental implant of 15 × 4.9 mm (Tot II Dent, Alexandria, Egypt) was placed at the focal point. A dental implant of 11 × 4.9 mm was placed in Group III at a suitable site (Fig. 1). After 8 weeks, all dogs were euthanized by cardiac injection of pentobarbital. The entire lower jaw was removed and divided equally into two parts.

DEXA analysis

Hemimandibles containing implants were used to calculate BMC and BMD using a DEXA device (Norland Eclipse Norland® densitometer, USA). The ROI encompassed three points mesial and distal to the implant. The distal point was located 8 mm from the upper mandibular rim, whereas the mesial points were positioned 2 mm coronally and apically to the distal point. After selecting the targeted area with a pointer, the program recorded the volume (in cm³), BMC (in g), and BMD (in g/cm³).

Histologic evaluation

The specimens were fixed in 10 % neutral formalin for 10 days, then immersed in a mixture of 20 % sodium citrate and 5 % formic acid for 2 months for decalcification. Each decalcified sample was embedded in molten paraffin and allowed to solidify, after which the implant was carefully extracted. Serial sections were cut and stained with H&E and trichrome stain to analyze the newly formed bone.

Data analysis was performed using SPSS version 23 (IBM, USA). An ANOVA test was used to determine significant relationships (P < 0.05), and Tukey's HSD test was performed to identify significant relations among the groups.

Results

All surgeries went smoothly, and recovery was uneventful without complications. All implants were successfully positioned without rejection. Throughout the healing and osseointegration process, the mucous membrane remained free from inflammation.

Densitometric analysis

The measurement of BMC and BMD focused on the areas mesial and distal to the dental implant, represented by the stars (*) (Fig. 2). Statistical analysis was performed on the recorded data from all groups utilizing the Kolmogorov-Smirnov and Shapiro-Wilk tests to evaluate the normality of the data distribution (Chart 1). The results of the ANOVA test for BMC and BMD showed statistically significant differences between all groups, p = 0.000 and p = 0.014, respectively (Table 1). Multiple comparisons through the Tukey HSD test indicated significant differences among all groups for both BMC and BMD, except for BMD between Groups II and III (p = 0.164) (Table 2).

Histological examination

Histological examination of the ROI of Group I revealed diverse stages of bone formation in distinct regions. The initial layer comprises mature lamellar bone oriented toward the base of the mandible, characterized by numerous osteocytes arranged in a definite pattern and multiple reversal lines facing the mandibular bone. This is followed by a layer of bone undergoing maturation, which is succeeded by a final layer of immature woven bone, containing osteocyte cells, in contact with the implant threads. Examination with Masson's trichrome staining revealed that the bone adjacent to the mandible was of a mature type, as indicated by green staining, whereas the area of woven bone facing the implant displayed the red staining of immature ossification (Fig. 3).
Histological sections of Group II revealed the emergence of a fine sheet of woven bone directed toward the implant screws. This was succeeded by a broad, clearly delineated layer of mature lamellar bone with multiple bone marrow spaces. These spaces were demarcated from the mandibular base by reversal lines, indicative of ongoing bone remodeling activity. Masson's trichrome staining highlighted the presence of mature calcified bone, represented by green staining near the base of the jaw, followed by a mixture of green and red within the bone tissue, indicating a continuing process of bone maturation (Fig. 4).

Histological sections of Group III revealed an area of small cylindrical lamellar bone facing the implant threads containing osteocytes, demarcated from the next layer by a reversal line. The second layer was a layer of mature cancellous bone with bony trabeculae and areas of bone marrow; it continued homogeneously with the spongiosa of the body of the mandible. Trichrome staining revealed the appearance of fully matured calcified bone in the form of green staining throughout the thickness (Fig. 5).

Discussion

DO is a tissue engineering technique that employs tension mechanics to prompt the formation of bone tissue within the created gap. The present findings underscore the reliability of DO as a surgical approach to enhance the height of alveolar ridges during 4-week and 8-week consolidation periods, demonstrating its ability to create an optimal environment for placing dental implants within a short duration. Conventional distractors typically align the screw holes in a parallel manner in both the movable and fixed parts. However, in the current design, each part of the distraction device is fixed with four screws. In the part fixed to the base of the jaw, the orientation of the screw holes is perpendicular to that of the movable part. In addition, instead of a single line, the four screw holes are arranged in two parallel lines, with two holes in each. This modification is intended to enhance the geometric stability of the distractor, thus reducing the possibility of inappropriate buccal-lingual rotation.
In this study, a bone section 30 mm in length and 5 mm in height was cut, sufficient to achieve distractor stability with a successful blood supply. A thin titanium distractor was made and fixed in the appropriate position to ensure the integrity of the buccal mucoperiosteum. This integrity is critical for maintaining the mesenchymal stem cells needed for bone regeneration in the biological environment and for ensuring optimal outcomes. The distraction arm was rotated two full clockwise revolutions daily, lengthening the segment by 1 mm per day for 7 days to achieve 7 mm. The biological results were consistent with this daily distraction rate. Results from most investigators indicate that adequate amounts of calcified lamellar bone are achieved using daily stimulation with a single movement of 1 mm (Bell et al., 1997; Chiapasco et al., 2006; Gaggl et al., 2000; Ilizarov, 1989; Meyer et al., 2001). Other researchers have agreed that the target movement rate is 1 mm/day but argue that it should be divided into 0.5 mm twice daily (Block et al., 1996; Green et al., 2005). Furthermore, several authors performed one revolution of 1 mm daily for over 10 days to attain 10 mm; they observed mechanical instability, resulting in microvascular disruption (Bras et al., 1988; Green et al., 2005). In contrast, Ilizarov (1989) reported that a decrease in the movement rate to 0.5 mm/day leads to early consolidation and, thus, failure of the targeted distraction plan.

An ongoing controversy continues regarding the minimum duration of the consolidation period that is sufficient for generating mature bone tissue capable of supporting dental implants. According to the current results, a consolidation period of 4 weeks is suitable, as bone maturation progresses sufficiently to facilitate the placement of a fixed dental implant within a noninflammatory environment, supported by healthy overlying mucosa. This period is similar to the one recorded by Ransom et al. (2018), who reported 43 days (5 + 10 + 28 days) from the start of distractor installation until its removal. Other experiments have shown a wide range in the time required to close the gap, reaching 12 weeks, which appears to depend on the animal used and the bone tissue (Li et al., 1999; Richards et al., 2000). Histological and histochemical examination indicated that the newly formed bone in the distraction gap had reached the extent of maturation needed for osseointegration in both the 4-week and 8-week groups, with the latter displaying a higher degree of maturation, suggesting that the periods employed for DO in each group were sufficient to prepare the site for accommodating a dental implant.

Conclusion

DO is a successful surgical approach that leads to the formation of bone tissue within the flat ridge in a short period, making it suitable for accommodating dental implants without the need for bone grafting. Densitometric measurements and histological observations demonstrated that bone formation in both distracted groups resulted in sufficient bone tissue, enabling the successful rehabilitation of dental implant prosthetics with a satisfactory success rate.

Financial support

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Fig. 1. Installation of the distractor device with eight screws (a), cutting line (b), and insertion of the dental implant (c).

Fig. 2. Printout scanner photographs of densitometric analysis of implants of all groups; * represents the ROI.
Fig. 3. Photomicrographs of Group I showing basal bone with definite osteocyte cells (A), the line of separation between old and mature bone (B), an area of indefinite bone remodeling (C), scattered mature osseointegration with green staining (D), and red staining of fibrous tissue.

Fig. 4. Photomicrographs of Group II showing spongy bone with marrow spaces (A), mature lamellar bone facing implant threads (B), definite osteocyte cells (C), osseointegration with green staining (D), and a mix of green and red staining (E) at the area facing the implant (F).

Table 1. One-way analysis of variance (ANOVA) test of both BMC and BMD.

Table 2. Multiple comparisons between groups. * The mean difference is significant at the 0.05 level. a. Dunnett t-tests treat one group as a control and compare all other groups against it.
The role of convective overshooting clouds in tropical stratosphere–troposphere dynamical coupling

Abstract. This paper investigates the role of deep convection and overshooting convective clouds in stratosphere–troposphere dynamical coupling in the tropics during two large major stratospheric sudden warming events in January 2009 and January 2010. During both events, convective activity and precipitation increased in the equatorial Southern Hemisphere as a result of a strengthening of the Brewer–Dobson circulation induced by enhanced stratospheric planetary wave activity. Correlation coefficients between variables related to convective activity and the vertical velocity were calculated to identify the processes connecting stratospheric variability to the troposphere. Convective overshooting clouds showed a direct relationship to lower stratospheric upwelling at around 70–50 hPa. As the tropospheric circulation change lags behind that of the stratosphere, outgoing longwave radiation shows almost no simultaneous correlation with the stratospheric upwelling. This result suggests that the stratospheric circulation change first penetrates into the troposphere through the modulation of deep convective activity.

Introduction

Weather forecasting in tropical regions is challenging due to the unstable nature of the atmosphere there and its sensitivity to various extratropical disturbances. The impact of the extratropical circulation on the tropics, such as the lateral propagation of tropospheric Rossby waves, has been studied previously (e.g. Kiladis and Weickmann, 1992; Funatsu and Waugh, 2008). The influence from above (i.e. from the stratosphere) is generally neglected, but under certain circumstances, such as during a sudden stratospheric warming (SSW) event, stratospheric meridional circulation change can modify convective activity, as will be shown later.

Early satellite measurements showed that enhanced poleward eddy heat fluxes in the extratropical stratosphere induce tropical cooling through changes in the mean meridional circulation (Fritz and Soules, 1970; Plumb and Eluszkiewicz, 1999; Randel et al., 2002). It is generally believed that such changes in the stratosphere do not affect the troposphere, due to the difference in air density between the two. Indeed, tropical temperature change induced by the intraseasonal mean meridional circulation is apparent only in the layer around 70 hPa and above (Ueyama et al., 2013). However, this does not imply that the stratospheric meridional circulation has no impact on the atmosphere below the 70 hPa level. A possible impact of the stratospheric meridional circulation on cumulus heating has been suggested by Thuburn and Craig (2000) in a simplified general circulation model experiment. A stratospheric upwelling effect on tropical convection is also confirmed by a more realistic general circulation model forecast study (Kodera et al., 2011a). These models make use of cumulus parameterization to account for the effect of convection on the large-scale circulation; therefore, model sensitivity should depend on the parameterization used. A stratospheric effect on tropical convection is also found in non-hydrostatic models that treat the convection explicitly.
Although it is not yet fully understood how stability near the tropopause influences anvil cloud-top height, Chae and Sherwood (2010) showed with observational data and a regional non-hydrostatic model experiment that the variation of static stability near the tropopause due to a change in the stratospheric upwelling influences cloud height, even though the cloud height peaks only near 12 km (or 200 hPa). Using a global non-hydrostatic model simulation, Eguchi et al. (2015) also found that increased tropical upwelling due to an SSW event reduces the static stability in the upper tropical tropopause layer (TTL), which leads to an increase of deep convective activity in the troposphere.

The temperature response to stratospheric upwelling becomes unclear in the region below the tropopause because clouds form in response to the adiabatic cooling associated with upwelling. Stratospheric temperature decreases, but minimal temperature changes occur in the TTL, resulting in a decrease in static stability in the upper TTL (Li and Thompson, 2013). In regions where deep convective clouds are frequent, the stratospheric influence penetrates further down into the troposphere (Eguchi and Kodera, 2010; Kodera et al., 2011b). Once the distribution of convective clouds is modified, this effect can be amplified within the troposphere through a feedback involving water vapour transport (Eguchi and Kodera, 2007).

In a previous study, a composite analysis of the tropical tropospheric impact of SSW events was made for the winters from 1979 to 2001 (Kodera, 2006). Even though significant responses were found in the tropical troposphere, a problem with composite analysis is that, by averaging many different events to extract a common feature, detailed structures often become obscured. Therefore, case studies are made in the present paper on two exceptionally large events, focusing on the role of overshooting and deep convective clouds in stratosphere-troposphere dynamical coupling in the tropics. The two selected events, the largest SSWs of January 2009 and January 2010 (Harada et al., 2010; Ayarzagüena et al., 2011), have a large impact on tropical upwelling in the lower stratosphere, as will be shown later. These SSWs are not only large but also, unlike other SSWs, localized in time. The large and simple structure of the temporal variation of the forcing (eddy heat flux) and the response (stratospheric zonal wind) of the 2009 and 2010 SSWs permits us to investigate detailed features of the circulation change. It should also be noted that not all major SSW events necessarily have such large tropical impacts, as this depends on the latitude of the associated planetary wave breaking (Taguchi, 2011).

Data

Meteorological reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim (Dee et al., 2011) were used to analyse air temperature and winds, including vertical velocity. Cloud data in the TTL, from the Level 2 Cloud Layer Product (Version 3-01), were obtained by the Cloud-Aerosol LIdar with Orthogonal Polarization (CALIOP) aboard the CALIPSO satellite (Winker et al., 2007). Outgoing longwave radiation (OLR) data provided by NOAA (e.g. Arkin and Ardanuy, 1989) are widely used to analyse convective activity in the tropics.
In this study, in addition to the OLR data with a 2.5° × 2.5° lat-lon resolution, we used Microwave Humidity Sensor (MHS) channels 3 to 5 to detect deep convection (DC) and convective overshooting (COV), exploiting the scattering by icy particles in precipitating clouds cold enough to cause a depression in the brightness temperatures. MHS data are obtained from NOAA-18 and MetOp-A. The equatorial crossing time for these platforms is approximately 14:00 local time (LT) for NOAA-18 and 21:30 LT for MetOp-A. In the present work, the original data were regridded to a regular grid with a resolution of 0.25° lat × 0.25° lon. The figures show DC and COV occurrences resampled to a grid of 2.25° × 2.25° for plotting purposes. Although these high frequencies are generally not sensitive to cirrus and anvil cirrus clouds, they will probably have difficulty distinguishing some strong anvil clouds from deep convective clouds. Fortunately, however, such strong anvil clouds are generally tightly connected with deep convective cloud systems (Hong et al., 2008).

Results

An enhanced Brewer-Dobson (BD) circulation during a stratospheric warming event creates strong downwelling in the polar region and upwelling in the tropical stratosphere, and thus a warming and cooling tendency in these respective regions. Figure 1a and b show the evolution of the eddy heat flux at 100 hPa averaged over the extratropical Northern Hemisphere (NH; 45–75° N), and the latitude-time section of the zonal mean pressure-coordinate vertical velocity at 50 hPa from 1 January to 11 February (the left and right panels are for 2009 and 2010, respectively). In both years, stratospheric upwelling in the tropics at the 50 hPa level strengthens following the increase in wave activity at around 16 January 2009 and around 20 January 2010 (indicated by the solid vertical lines in the figure). In the tropics, an increase in COV is synchronous with the stratospheric upwelling (Fig. 1c). The convective activity represented by the OLR also increases in the Southern Hemisphere (SH), which can also be characterized as a southward shift of the active convective region (Fig. 1d). A delay in the response of the OLR in the SH is also noted. The difference in the characteristics of the temporal variation of COV and OLR relative to the vertical velocity at 50 hPa also becomes apparent in the vertical structure of the correlation coefficients discussed below.

To study the relationship between tropospheric convective activity and the vertical velocity at different pressure levels, correlation coefficients were calculated between variables representing convective activity (COV, DC, and OLR) and the pressure vertical velocity (ω) at each level (Fig. 2). These correlation coefficients are simply being used to identify the relation between dynamical variables in two rather short-duration events. Variables were first averaged over the tropics (25° S to 25° N), and then correlations were calculated for the 31-day period centred on the onset day (16 January for 2009 and 20 January for 2010). For convenience of comparison, the sign of the OLR was reversed (−OLR). In both winters, COV shows the highest correlation with ω in the lower stratosphere, around 70–50 hPa. DC is also correlated with the stratospheric upwelling, but less so. The OLR shows little relationship with the stratospheric circulation, although it is correlated with the vertical velocity in the upper troposphere.
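For readers who wish to reproduce this kind of diagnostic, the following minimal Python sketch computes such correlation profiles, assuming the daily tropical-mean (25° S to 25° N) time series for the 31-day window are already available as arrays; all variable and function names are illustrative.

```python
import numpy as np

def correlate_with_omega(omega, cov, dc, olr):
    """Correlation of COV, DC and -OLR with the pressure vertical velocity
    at each level over the 31-day window, as in Fig. 2 (illustrative).
    omega: array of shape (n_levels, 31); cov, dc, olr: arrays of length 31."""
    profiles = {}
    for name, series in (('COV', cov), ('DC', dc), ('-OLR', -olr)):
        profiles[name] = np.array([np.corrcoef(omega[k], series)[0, 1]
                                   for k in range(omega.shape[0])])
    return profiles
```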
Here, we check the physical consistency among the variables by comparing the correlation coefficients among them. It is reasonable to expect that the stratospheric vertical velocity should have the strongest relationship with the occurrence of COV (i.e. convection penetrating to the stratosphere) and the weakest relationship with OLR, which is sensitive to lower clouds as well as deep convection. Therefore, the following inequalities among the correlation coefficients, r, with the lower stratospheric pressure vertical velocity, ω, should be expected:

r_ω,COV ≥ r_ω,DC ≥ r_ω,−OLR,

where r_ω,COV, r_ω,DC, and r_ω,−OLR are the correlation coefficients between ω and COV, DC, and −OLR, respectively. This relationship is satisfied in the correlation analysis presented in Fig. 2. This result supports our working hypothesis that lower stratospheric vertical velocity variation is coupled with tropical convective activity. The present study can also be compared with a regression study of the BD circulation index by Li and Thompson (2013): an enhanced BD circulation increases cloud occurrence above the tropical tropopause, in association with a decrease of stratospheric temperature and of the static stability around the tropopause. The structure of the tropical temperature and stability change associated with COV is consistent with a variation associated with a strengthening of the BD circulation. The formation of clouds above the tropopause is also consistent with the correlation of COV with upwelling above 100 hPa.

Figure 3 depicts the development of downward coupling in the equatorial summer tropics, averaged between 20° S and the Equator. The temperature tendency (Fig. 3a) shows a rapid decrease in the stratosphere following the increase in the eddy heat flux in Fig. 1a, but no clear temperature signal is observed in the troposphere, which agrees with the results of a previous study (Ueyama et al., 2013). Figure 3b shows the altitude-time section of cloud frequency (optical thickness < 4) measured by CALIOP. Horizontal dashed lines indicate the approximate height corresponding to the 100 hPa pressure level (solid lines in Fig. 3a and c). Prior to the SSWs, thin clouds form near 16.6 km (or 100 hPa) around the cold-point tropopause. When the cooling events start, clouds form at all levels from the upper to the lower TTL, indicating a development of convective activity. The pressure vertical velocity is shown as a departure from the period mean normalized by the daily standard deviation at each level, to visualize the large range of variation (Fig. 3c). Although the vertical velocity varies in a similar manner to the temperature tendency in the stratosphere, an increase in the upwelling also occurs in the troposphere following the stratospheric change. This tropospheric upwelling is associated with an increase in surface precipitation (Fig. 3d). This result shows that the temperature tendency is a good proxy for vertical velocity in the stratosphere. However, dynamical cooling tends to be compensated by diabatic heating due to cloud formation below the tropopause, as illustrated in Fig. 3; consequently, the temperature tendency is no longer a good indicator of the vertical velocity below 70 hPa.

Figure 4 shows the evolution of the geographical distribution of OLR and COV before (i) and after (ii) the onset of the event. The influence of the El Niño-Southern Oscillation (ENSO) is evident in the OLR during period (i).
In January 2009, which was a cold phase of ENSO, a well-developed region of low OLR is located over the Maritime Continent, while in January 2010, a warm phase of ENSO, it is located over the western Pacific, following the change in the equatorial Pacific sea surface temperature (SST). The velocity potential at 925 hPa (contour lines) in period (i) indicates that this convective activity is maintained by a large-scale low-level convergence. After the onset of the stratospheric event, during period (ii), the low-OLR centre over the Maritime Continent or western Pacific is weakened, and multiple convectively active regions develop in the SH along 15° S. This active convective zone includes tropical cyclones and storms (names are indicated below the panel) over warm ocean sectors near Madagascar, north of Australia, and in the southwestern Pacific.

The occurrence of COV is high over the African and South American continents, but no particular enhancement is seen around the Maritime Continent-western Pacific region in period (i). This indicates the weaker dependency of COV on low-level convergence. Although the occurrence of COV increases after the onset in period (ii), no substantial change is seen in the spatial structure except that the COV distribution takes a more zonal form. The distribution of the regions with low OLR becomes increasingly similar to that of COV during period (ii). This indicates that COV-related deep convective activity becomes important after the onset of the stratospheric event.

Summary and discussion

The results of our analysis of changes in tropical circulation associated with the large SSWs of January 2009 and January 2010 can be summarized as follows. Enhanced stratospheric wave activity produced a cooling in the tropical stratosphere through a strengthening of the BD circulation. This influence penetrated downward into the troposphere through a change in cloud formation. Among the variables representing different convective activity, COV shows the highest correlation with the lower stratospheric vertical velocity. This result is reasonable because COV clouds penetrate above the tropopause and interact directly with the stratospheric circulation. The low correlation of the OLR with stratospheric upwelling originates from the fact that the tropospheric variation lags by about a week (Fig. 1).

The results obtained from the present two SSW events are consistent with the earlier results from an independent composite analysis of the NH winters for the period 1979 to 2001. Figure 5a shows the results of that composite analysis. Twelve SSW events, for which the maximum deceleration of the polar night jet (averaged over 50–70° N) at 10 hPa exceeds 2 m s⁻¹ day⁻¹ in smoothed data, were selected (see details in Kodera, 2006). The key day is defined as the day of the largest deceleration. Student t values corresponding to a 95 % significance level for one- and two-sided tests are 1.8 and 2.2, respectively. Following a deceleration of the polar night jet, a statistically significant increase in the upwelling occurs in the tropical stratosphere around day 2, and in the tropospheric equatorial SH around days 4 to 11. The two SSW events of the present study are juxtaposed below in Fig. 5b. The top panel shows the zonal-mean zonal wind tendency for winters 2009 and 2010, similar to the top panel of Fig. 5a.
The tropical vertical pressure velocity in the SH (20° S-Eq) is presented in a similar way to the composite analysis, by choosing the day of the maximum deceleration as the time origin. We can see that the upwelling in the tropical SH increases in the upper troposphere around day 4 to day 11, similarly to the composite mean (rectangles in Fig. 5). Therefore the relationship between the SSW and the enhancement of tropical convection that we have identified here in two particularly strong SSWs is consistent with that previously identified from the composite analysis.

To gain insight into a possible mechanism connecting the stratospheric and tropospheric variability, we also calculated correlations between the temperature or vertical temperature gradient (or static stability) at each level and COV or −OLR (Fig. 2, bottom). COV shows a stronger relationship around the tropopause with the vertical temperature gradient (Fig. 2e) than with temperature itself (Fig. 2d). This means that COV is sensitive to the stability around the tropopause region (100 hPa), while OLR is related to the static stability in the upper troposphere (Fig. 2f). This result indicates that COV increases due to a decrease of static stability around the tropopause induced by a cooling in the lower stratosphere associated with the SSW, consistent with the results of Kuang and Bretherton (2004) and Chae and Sherwood (2010). Our previous numerical experiment also shows that, when local cooling occurs near the tropopause, upwelling is enhanced, accompanied by a warming in the lower TTL and the upper troposphere (see Fig. 4 of Kodera et al., 2011a). A global non-hydrostatic model study (Eguchi et al., 2015) also confirmed the relationship suggested by the present results. Therefore, we consider that, although the cooling effect of stratospheric upwelling is confined to the stratosphere, its influence can penetrate further down through changes in COV and deep convective activity.

Changes were also noted in the spatial distribution of the convective activity following the stratospheric event (Fig. 4). When stratospheric upwelling was suppressed before the onset of the event (period i), convection tended to cluster around the equatorial Maritime Continent or western Pacific region, depending on the phase of ENSO. When the stratospheric upwelling increased (period ii), convection expanded over a wide range of longitudes in the tropical summer hemisphere. In other words, the tropical circulation changed from a more Walker-like (east-west) configuration to more of a Hadley (north-south) type.

The Madden-Julian Oscillation (MJO) (Madden and Julian, 1994) has a significant influence on tropical convective activity, and it has been reported that the occurrence of SSWs is related to the phase of the MJO (Garfinkel et al., 2012; Liu et al., 2014). One may therefore ask whether the present phenomenon is associated with the MJO. The features of the MJO in January 2009 and 2010 differed significantly, as can be seen in Fig. 6. A convective centre remained stationary over the Maritime Continent prior to the onset of the 2009 stratospheric event, after which an eastward propagation was initiated from the Indian Ocean. In contrast, an eastward-propagating convective centre became almost stationary over the western Pacific after the onset in January 2010.
In spite of the differences in the MJO in January 2009 and 2010, the circulation changes related to the stratospheric events showed similar features during both winters, suggesting that the present phenomenon is independent of the MJO. The certainty of the dynamical connections identified here is of course limited by the small number and the relatively short duration of the events. Further confidence will come from future modelling studies and observational studies of a larger set of events.
Coronary artery bypass surgery in a patient with Kartagener syndrome: a case report and literature review

Kartagener syndrome consists of congenital bronchiectasis, sinusitis, and, in half of the patients, total situs inversus. A patient diagnosed with Kartagener syndrome was referred to our department due to 3-vessel coronary disease. An off-pump coronary artery bypass operation was performed using both internal thoracic arteries and a saphenous vein graft. We performed a literature review for cases with Kartagener syndrome, coronary surgery and dextrocardia. Although a few cases of dextrocardia were found in the literature, no case of Kartagener syndrome was mentioned.

Introduction

In 1606 Hieronymus Fabricius described situs inversus, while in 1643 Marco Severino described dextrocardia [1]. Situs inversus is a rare congenital disorder with an incidence of 1:10,000, in which the major visceral organs are reversed from left to right in a mirror image of the normal condition [2]. Kartagener syndrome consists of congenital bronchiectasis, dextrocardia and sinusitis [2]. A patient with Kartagener's syndrome and three-vessel coronary disease was referred to our department for bypass surgery. We searched the literature on Kartagener's syndrome for references about the choice of conduits and the position of the surgeon in patients with a mirror-image appearance of the heart. Several cases of surgical coronary revascularization in patients with dextrocardia have been reported in the literature, but none was described as Kartagener's syndrome. We report a case of a patient with Kartagener's syndrome with total situs inversus, bronchiectasis, chronic respiratory disease and three-vessel coronary disease, treated in our institute with coronary surgery using both internal thoracic arteries. To the best of our knowledge this is the first report of coronary surgery in a patient with Kartagener syndrome.

Case Report and Review

A 56-year-old Caucasian male patient was admitted to our department for scheduled coronary artery bypass due to three-vessel coronary disease. The patient had already been diagnosed with Kartagener syndrome with total situs inversus and azoospermia (the patient had no children). A CT scan of the thorax showed bronchiectasis of the lungs and dextrocardia (Fig. 1). The coronary angiography was performed without particular difficulty and revealed a proximal stenosis of 90% in the left anterior descending artery (LAD), a proximal stenosis of 90% in the circumflex artery and a stenosis of 99% between the proximal and middle part of the right coronary artery. The ejection fraction was normal and the aortic valve was competent. Spirometry revealed a reduction of the Forced Expiratory Volume, with a FEV1 of 1.44 L (40.6% of the predicted value), and a reduction of the Forced Vital Capacity, with a FVC of 1.80 L (38.7% of the predicted value). Due to the patient's severe pulmonary disease, an off-pump operation was decided upon. The chest was entered through a median sternotomy, with the surgeon standing on the left side of the patient. The heart was an exact mirror image of a normally positioned heart and showed good contractility. Both internal mammary arteries (IMAs) and a saphenous vein graft (SVG) were harvested. The LAD was opened and grafted with the left internal mammary artery (LIMA). Then the first obtuse marginal branch of the circumflex artery was grafted with the right internal mammary artery (RIMA).
Finally, the posterior descending artery (PDA) was grafted with the saphenous vein graft. The proximal anastomosis of the vein graft was then performed on the ascending aorta. After haemostasis, the chest was closed in routine fashion. The patient was extubated six hours later and remained in the Intensive Care Unit for three days due to his respiratory disease and an increased volume of secretions. He was discharged from the hospital on the 10th postoperative day.

Discussion

Kartagener's syndrome is characterized by the triad of bronchiectasis, sinusitis and situs inversus, and is also associated with abnormalities of the cilia of the respiratory epithelium. Some male patients with Kartagener's syndrome also have sterility due to dyskinesia of the spermatozoa [2]. Total situs inversus is a rare condition which does not preclude long-term survival. Patients with dextrocardia and coronary disease may present for coronary bypass surgery. The mirror-image position of the heart and the great vessels does not pose a problem for carrying out a normal coronary artery bypass grafting operation, as can be seen in the literature. Saad et al. reviewed the literature for coronary surgery in patients with dextrocardia, focusing on the position of the surgeon [3]. We reviewed the literature in order to ascertain the conduit choice of each surgeon, especially concerning grafting of the left anterior descending artery (Table 1). Most of the authors preferred to graft the LAD with the right internal mammary artery, as the mirror-image appearance of the heart offers the convenience of using this arterial graft. Seedio et al. reported a series of two patients [4]. In one case they used the LIMA as a free graft to the LAD. Tabry et al. anastomosed the free LIMA to the RIMA and then grafted the LIMA to the first diagonal branch and the LAD [5]. Kuwata et al. harvested both internal mammary arteries and both radial arteries, skeletonized the LIMA and managed to use it in situ to graft the LAD [6]. Chakravarthy et al. reported two cases [7]. In the first case, they used the LIMA in situ to graft the LAD, whereas in the second case they used the RIMA. Yamashiro et al. used both IMAs and the radial artery, which was anastomosed to the LIMA and then to the second obtuse marginal branch (OM2) and the PDA in a sequential manner [8]. The RIMA was anastomosed to the LAD and the LIMA grafted the OM1 branch. In older reports (Grey and Cooley, Irvin, Yamaguchi, Astudillo, Nomoto) saphenous vein grafts were used exclusively [9-13]. In our case the use of the left internal mammary artery to graft the left anterior descending artery was feasible, as the stenosis of the vessel was proximal and the length of the arterial conduit imposed no technical difficulty. We preferred the use of the LIMA to the LAD, as the literature has strongly demonstrated the excellent results of this anastomosis [14]. The RIMA was skeletonized and used to graft the obtuse marginal branch of the circumflex artery. Finally, performing the operation "off-pump" did not constitute a problem in our case, as the patient was haemodynamically stable throughout the procedure, allowing us access to all coronary vessels without the need for conversion to an "on-pump" operation, as occurred in the case of Bonde and Campalani [15]. The use of cardiopulmonary bypass was omitted in our patient because of his poor respiratory function.
Conclusion

Situs inversus with a mirror-image heart is a rare condition which every cardiac surgeon might eventually have to deal with. The position of the surgeon depends mainly on the surgeon's preference. The use of the RIMA seems to be the easier way to graft the LAD, but when the lesion of the LAD is proximal, the LIMA can also be used. In patients with Kartagener's syndrome and severe respiratory disease, off-pump bypass grafting can be performed.
Generation of subharmonics in acoustic resonators containing bubbly liquids: A numerical study of the excitation threshold and hysteretic behavior

Highlights
• We study the generation of subharmonics in a bubbly liquid in acoustic resonators.
• We carry out numerical simulations of the nonlinear bubble-ultrasound interaction.
• We show that subharmonics are due to the nonlinearity and the configuration of the resonators.
• We show that subharmonics have an amplitude-threshold dependence.
• We show the hysteretic nature of subharmonic generation in bubbly liquids.

Introduction

Ultrasound is commonly used in many sectors, such as industry and medicine [1]. In particular, ultrasonography is one of the most widely used diagnostic techniques, mainly because of its non-invasive nature, low cost and wide availability. This method is based on the reception of the waves reflected by the interfaces between different media within the volume to be evaluated, and takes advantage of the different propagation speeds to create an image. When the difference between the propagation speeds of the media is not very pronounced, strategies must be used to increase this difference and obtain sharper images. The main technique is the use of contrast agents, introducing a liquid with gas microbubbles into the bloodstream [2]. The presence of gas makes the speeds very different and, therefore, the quality of the image is hugely enhanced. In addition, because of the presence of bubbles, the medium becomes highly nonlinear, causing other effects that are very interesting for diagnostics, such as the generation of new frequencies, harmonics and subharmonics [3,4]. The use of harmonics to obtain higher image quality, due to their better spatial resolution, is not well suited to controlling the process, because harmonics can be generated by both bubbles and tissues [5]. Subharmonics are more appropriate, because their existence is almost exclusively due to bubbles, which allows control of the location at which the user wants these new frequency components to appear. The use of subharmonics to generate images also reduces the processing and filtering of the signal obtained at the receiver [6,7]. Deep knowledge of the behavior and generation of subharmonics is therefore a key factor for their use in ultrasound diagnostics. An important characteristic of subharmonics is their abrupt appearance when control parameters are varied. There is a threshold beyond which they are suddenly generated, as has been studied for an uncoated bubble [8-12] and for a coated bubble [13-16]. All these works study the dynamics of a bubble excited by a linear continuous pressure source, but they do not consider the nonlinear retroaction of the bubble vibrations on the acoustic field. In this work we consider the simplified case of a homogeneous distribution of uncoated bubbles in a liquid contained in a one-dimensional rigid-walled resonator. Besides its theoretical interest for understanding the behavior of nonlinear ultrasound in bubbly liquids, the analysis proposed here might be helpful for diagnostic purposes, since when contrast agents are used, ultrasound can interact with structures of different dimensions, or be confined in a bubble layer or a bubble cloud, which can be resonant and lead to the generation of subharmonics. This study could also be useful in the sonochemistry framework to generate subharmonics in a resonant sonoreactor, since these lower-frequency acoustic waves suffer lower attenuation.
We study the interaction between the acoustic field, modeled by the wave equation accounting for the bubbles, and the bubble vibrations, modeled by a Taylor-expanded Rayleigh-Plesset equation, recalled in Section 2 and solved by means of an appropriate numerical model [17]. The rigid-wall condition is appropriate for generating subharmonics in a highly nonlinear bubbly liquid medium (see the paragraph right below Eq. (5)), as shown in [18]. The results obtained here indicate which type of resonator, in terms of geometrical aspects (length), is more convenient for generating subharmonics (Section 3.1), and show that their nature is clearly nonlinear (Sections 3.2.1 and 3.3.1). In Sections 3.2.2 and 3.3.2 the above-mentioned threshold is observed. In Section 3.4 the hysteretic nature of the bubbly liquid is demonstrated through the behavior of subharmonics when the pressure amplitude is either increased or decreased gradually. Section 4 gives the conclusions of this work.

Material and methods

We consider an ultrasonic field in a one-dimensional cavity of length L, filled with a bubbly liquid. We suppose a homogeneous distribution of spherical gas bubbles of the same size in the liquid. The initial bubble radius, R0g, is assumed small compared to the wavelength of the acoustic field, λ. We study the nonlinear interaction of acoustic waves and bubble vibrations, which is modeled by a system of partial differential equations [19-21]:

∂²p/∂x² − (1/c0l²) ∂²p/∂t² = −ρ0l Ng ∂²v/∂t²,   (1)

∂²v/∂t² + δω0g ∂v/∂t + ω0g² v + ηp = a v² + b (2v ∂²v/∂t² + (∂v/∂t)²),   (2)

where p(x, t) is the acoustic pressure and v(x, t) = V(x, t) − v0g is the bubble volume variation, x is the one-dimensional space coordinate, t is the time, Tl is the last instant of the study, v0g = (4/3)πR0g³ is the initial volume of the bubbles, and V(x, t) is the instantaneous volume of a bubble located at position x. In Eq. (1) (wave equation accounting for the bubbles), c0l and ρ0l are the sound speed and the density at the equilibrium state of the liquid, and Ng is the bubble density in the liquid. In Eq. (2) (Rayleigh-Plesset equation), δ = 4νl/(ω0g R0g²) is the viscous damping coefficient of the bubbly fluid, in which νl is the kinematic viscosity of the liquid; ω0g = 2πf0g = √(3γg p0g/(ρ0l R0g²)) is the isentropic resonance frequency of the bubbles, in which γg is the specific-heat ratio of the gas, p0g = ρ0g c0g²/γg is its atmospheric pressure, and ρ0g and c0g are the density and sound speed at the equilibrium state of the gas. The other parameters are η = 4πR0g/ρ0l, a = (γg + 1)ω0g²/(2v0g) and b = 1/(6v0g). Subscripts combining t and x denote partial derivatives. Eqs. (1) and (2) are complemented with initial conditions corresponding to a medium at rest,

p(x, 0) = 0, v(x, 0) = 0, ∂v/∂t (x, 0) = 0.   (3)

Moreover, the cavity is excited by a time-dependent pressure source s(t) of amplitude ps and frequency ω = 2πf located at x = 0,

p(0, t) = s(t) = ps sin(ωt),   (4)

and we assume a rigid-wall boundary condition at x = L,

∂p/∂x (L, t) = 0.   (5)

This model assumes that bubbles are the only source of attenuation, dispersion, and nonlinearity in the fluid, that they are monodisperse and oscillate in their first radial mode, and that surface tension is negligible. The translational motion of the bubbles relative to the liquid, under Bjerknes, buoyancy, viscous drag and added-mass forces, is not considered in this work [20,22]. This differential system, Eqs. (1)-(5), is solved using the numerical model developed in [17]. This tool is based on the finite-volume method in the space dimension and the finite-difference method in the time domain.
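For illustration, a simplified explicit time-marching scheme for Eqs. (1)-(5) is sketched below. It is not the finite-volume solver of [17]: the discretization, the wavelength estimate (taken in the pure liquid rather than in the bubbly medium), and the sinusoidal source are assumptions of the sketch, with parameter values taken from Section 3.

```python
import numpy as np

# Simplified explicit finite-difference sketch of Eqs. (1)-(5); not the solver
# of Ref. [17]. Reduce nt for a quick test: the full run covers 2000 periods.

c0l, rho0l, nul = 1500.0, 1000.0, 1.43e-6          # liquid (water)
c0g, rho0g, gam = 340.0, 1.29, 1.4                 # gas (air)
R0g, Ng = 2.5e-6, 5e11                             # bubble radius and density
v0g = 4.0 / 3.0 * np.pi * R0g**3
p0g = rho0g * c0g**2 / gam
w0g = np.sqrt(3.0 * gam * p0g / (rho0l * R0g**2))  # bubble resonance (rad/s)
delta = 4.0 * nul / (w0g * R0g**2)
eta = 4.0 * np.pi * R0g / rho0l
a = (gam + 1.0) * w0g**2 / (2.0 * v0g)
b = 1.0 / (6.0 * v0g)

f, ps = 300e3, 12e3                  # source frequency (Hz) and amplitude (Pa)
w = 2.0 * np.pi * f
L = (c0l / f) / 2.0                  # lambda/2 cavity (pure-liquid estimate)
nx = 100
dx = L / (nx - 1)
dt = 1.0 / (400.0 * f)               # 400 time points per period
nt = 2000 * 400

p, p_old = np.zeros(nx), np.zeros(nx)
v, v_old = np.zeros(nx), np.zeros(nx)

for n in range(1, nt + 1):
    v_t = (v - v_old) / dt
    # Eq. (2): solve for v_tt, which also appears inside the b-term
    v_tt = (a * v**2 + b * v_t**2 - delta * w0g * v_t - w0g**2 * v - eta * p) \
           / (1.0 - 2.0 * b * v)
    v_new = 2.0 * v - v_old + dt**2 * v_tt

    # Eq. (1): p_tt = c0l^2 * (p_xx + rho0l * Ng * v_tt)
    p_xx = np.zeros(nx)
    p_xx[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    p_new = 2.0 * p - p_old + (c0l * dt)**2 * (p_xx + rho0l * Ng * v_tt)

    p_new[0] = ps * np.sin(w * n * dt)             # driven end, Eq. (4)
    p_new[-1] = p_new[-2]                          # rigid wall p_x(L) = 0, Eq. (5)

    p_old, p, v_old, v = p, p_new, v, v_new
```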
In Section 3, 100 finite volumes per wavelength and 400 time points per period of f are used.

Results

The objective of this section is to study the generation of subharmonics from a single frequency by means of the model presented in Section 2. The following data for the bubbly liquid are set in the model: c0l = 1500 m s−1, ρ0l = 1000 kg m−3, and νl = 1.43 × 10−6 m² s−1 for the liquid (water), and c0g = 340 m s−1, ρ0g = 1.29 kg m−3, and γg = 1.4 for the gas (air). We use bubbles of radius R0g = 2.5 μm (resonance frequency f0g = 1.35 MHz), and the bubble density is Ng = 5 × 10¹¹ m−3. In the following, the final time of the simulations, Tl, is high enough to guarantee that the steady regime is reached in the resonator, i.e., Tl = 2000 T, where T is the period of the source.

In this section the source frequency is f = 300 kHz (f/f0g = 0.223) and the source amplitude is ps = 12 kPa. We use cavities of length L = 3λ/4 and L = 5λ/4 (Fig. 1) vs. L = λ/2 and L = λ (Fig. 2) to determine which type of resonator is the most convenient for generating low-frequency components in the configuration given by Eqs. (1)-(5). As can be seen, when the resonator length is L = (2n + 1)λ/4 (Fig. 1), new frequencies do not appear, although the maximum amplitude of the fundamental f is very high in both cases, 38.8 kPa and 23.3 kPa (323% and 194% of ps), respectively. However, when the length is L = nλ/2 (Fig. 2), low and high frequencies appear. When the resonator length is L = λ/2 (Fig. 2a), a low frequency f/2 (red line) with maximum amplitude 34.2 kPa (285% of ps) and a high frequency of large amplitude, 3f/2 (blue line), with maximum amplitude 8.08 kPa (68.2% of ps), are generated (among others that are less intense). When the resonator length is L = λ (Fig. 2b), two low frequencies appear, f/4 and 3f/4 (red and green lines, respectively), with maximum amplitudes 18.3 kPa and 11.8 kPa (125% and 98.4% of ps), respectively. A high frequency, 5f/4 (blue line), with maximum amplitude 5.98 kPa (49.8% of ps), is also generated. Therefore, the resonators used in the following sections will be chosen within the set L = nλ/2.

Nonlinear behavior

In this section the origin of the generation of the new low-frequency components is studied. Fig. 3 shows the dimensionless acoustic pressure waveform at the 3/4-length point of the cavity during the entire time of the study, Tl = 2000 T (top), obtained at a low source amplitude, ps = 10 Pa (Fig. 3a), at a high source amplitude, ps = 12 kPa (Fig. 3b), and at the same high source amplitude, ps = 12 kPa, but with the nonlinear contributions in the differential system canceled, i.e., a = b = 0 in Eq. (2) (Fig. 3c). As can be seen, when the amplitude is high and a, b are not null, the amplitude in the cavity increases considerably. The corresponding frequency decompositions are shown in Fig. 3 (bottom). At low source amplitude, ps = 10 Pa, there is only one component, the driving frequency f (Fig. 3a). At high amplitude, ps = 12 kPa, in addition to the source frequency, there are new frequency components (Fig. 3b). However, the cancellation of the nonlinear contributions in the differential system (a = b = 0 in Eq. (2)) prevents the new frequency components from being created, even at high source amplitude, ps = 12 kPa (Fig. 3c). These results clearly demonstrate that the generation of the new low-frequency components is a nonlinear effect.
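The frequency decompositions discussed above can be obtained from the steady-state portion of the simulated pressure signal; a small sketch is given below (the probe signal and the window choice are assumptions of the example).

```python
import numpy as np

# Sketch of the frequency decomposition at a probe point. `signal` should hold
# the steady-state tail of p at the probe, sampled at dt, over a window spanning
# an integer number of periods of the lowest component sought (to avoid leakage).

def component_amplitudes(signal, dt, f, components=(0.25, 0.5, 0.75, 1.0, 1.5)):
    """Spectral amplitude at each requested multiple of the driving frequency f."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n   # one-sided amplitudes
    freqs = np.fft.rfftfreq(n, dt)
    return {c: spectrum[np.argmin(np.abs(freqs - c * f))] for c in components}
```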
Study of subharmonic f/2 for several driving frequencies

In this section we study the generation of the low-frequency component at f/2 as a function of the source amplitude ps for several values of the driving frequency f ranging between 200 and 400 kHz. To this purpose, the source amplitude ps is raised from 1 kPa to 15 kPa, and we analyze whether the component f/2 appears or not by observing its maximum amplitude p^m_{f/2}. As can be seen in Fig. 4a for several frequencies, when the amplitude ps is low there is no f/2 component. However, above a threshold value p_th, the amplitude at f/2 increases suddenly and strongly, up to 200% or 300% of ps. Once this threshold is exceeded, the growth of the f/2 component, shown in Fig. 4b for f = 250 kHz (which is representative of the five source frequencies studied here), appears to be linear. This behavior (existence of an excitation threshold, linear increase beyond the threshold) is the same for all five source frequencies studied here. Moreover, the threshold value p_th increases with the driving frequency f, as evidenced in Fig. 5. This threshold amplitude roughly follows a linear law vs. frequency, with a slight quadratic deviation.

Nonlinear behavior

In this section the origin of the generation of the new low-frequency components is studied. Fig. 6 shows the dimensionless acoustic pressure waveform at the mid-point of the cavity during the entire time of the study, Tl = 4000 T (top graphs), obtained at a low source amplitude, ps = 10 Pa (Fig. 6a), at a high source amplitude, ps = 12 kPa (Fig. 6b), and at the same high source amplitude, ps = 12 kPa, but with the nonlinear contributions in the differential system canceled, i.e., a = b = 0 in Eq. (2) (Fig. 6c). As can be seen, when the amplitude is high and a, b are not null, the amplitude in the cavity increases considerably. The corresponding frequency decompositions are shown in Fig. 6 (bottom graphs). At low source amplitude, ps = 10 Pa, there is only one component, the source frequency f (Fig. 6a). At high amplitude, ps = 12 kPa, in addition to the driving frequency, there are new frequency components (Fig. 6b). However, the cancellation of the nonlinear contributions in the differential system (a = b = 0 in Eq. (2)) prevents the new frequency components from being created, even at high source amplitude, ps = 12 kPa (Fig. 6c). As in Section 3.2.1, these results clearly demonstrate the nonlinear character of the generation of the new low-frequency components in the resonator.

Study of subharmonics f/4 and 3f/4 for several driving frequencies

In this section we study the generation of the low-frequency components at f/4 and 3f/4 as a function of the source amplitude ps for several values of the driving frequency f ranging between 200 and 350 kHz. To this purpose, the source amplitude ps is raised from 1 kPa to 15 kPa, and we analyze whether the f/4 and 3f/4 components appear or not by observing their maximum amplitudes, p^m_{f/4} and p^m_{3f/4}, respectively. As can be seen in Fig. 7a for several frequencies, when the amplitude ps is low no components at f/4 and 3f/4 are observed. However, when the amplitude reaches a specific threshold value p_th, the amplitudes of the f/4 and 3f/4 subharmonics increase suddenly and abruptly, up to 100% or 200% of ps.
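The amplitude sweep used here to locate p_th can be sketched as follows, reusing the component_amplitudes helper above; run_resonator stands in for a full simulation, and the 5% emergence criterion is an illustrative choice, not the detection rule of the paper.

```python
# Hypothetical sweep for the excitation threshold of the f/2 component.
def find_threshold(run_resonator, dt, f, amplitudes_kpa=range(1, 16), rel_tol=0.05):
    for ps_kpa in amplitudes_kpa:
        signal = run_resonator(ps_kpa * 1e3)           # steady-state probe trace
        amp = component_amplitudes(signal, dt, f)[0.5]
        if amp > rel_tol * ps_kpa * 1e3:               # f/2 line emerges
            return ps_kpa * 1e3                        # threshold estimate (Pa)
    return None                                        # no subharmonic in range
```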
The respective behaviors of the f/4 and 3f/4 subharmonics above this threshold, shown in Fig. 7b (left and right diagrams, respectively) for f = 250 kHz (which is representative of the four source frequencies studied here), differ in two aspects (compare solid and dashed curves in Fig. 7a): 1) the growth of the f/4-component amplitude appears to be linear and frequency-dependent (different slopes vs. frequency f), whereas the growth of the 3f/4-component amplitude appears to follow a quadratic deviation from a linear behavior and is almost frequency-independent; 2) the jump amplitude of the f/4 component at the threshold is an increasing function of frequency f, whereas that of the 3f/4 component is only a slightly increasing function of frequency f. Moreover, it must be noted that the threshold amplitude is the same for both frequency components. This suggests that the nonlinear subharmonic generation mechanism is strongly conditioned by the geometry of the resonator. This behavior (existence of an excitation threshold, linear or slightly quadratic behavior beyond the threshold), observable for both subharmonics, is the same for the four source frequencies studied in this section. Moreover, as in Section 3.2.2, the threshold value p_th increases with the driving frequency f, as evidenced in Fig. 8 for both subharmonics. Again this threshold amplitude roughly follows a linear law with a slight quadratic deviation.

Hysteretic character of subharmonic generation

In this section we study the hysteretic character of the subharmonic generation by refining the analysis of Sections 3.2 and 3.3, in the two cases L = λ/2 and L = λ, respectively. To this end, the pressure field in the cavity is computed twice, by stepwise increasing and stepwise decreasing the source amplitude ps (using steps of 1 kPa), the initial conditions considered for each step being the final conditions of the preceding one, instead of the generic initial conditions of Eq. (3). We thus examine whether the direction in which the source amplitude is varied has an effect on the behavior of the subharmonics, and whether their appearance is hysteretic or not. Fig. 9 represents the maximum amplitude of the f/2 subharmonic (p^m_{f/2}) obtained at the driving frequency f = 300 kHz in the resonator of length L = λ/2 (Section 3.2) (i) when the source amplitude ps is increased starting from Eq. (3) (solid blue line), (ii) when the amplitude is increased stepwise (dotted red line), and (iii) when the amplitude is decreased stepwise (dashed green line). It can be seen that the threshold for the subharmonic disappearance in Case (iii) (p_thd = 3 kPa) is different from the appearance threshold in Case (i) (p_th = 10 kPa). However, above these thresholds the behavior of p^m_{f/2} remains the same in both cases. Case (ii) does not lead to the formation of the subharmonic, which would probably require higher amplitudes than can be handled here.

Resonator of length L = λ

Figs. 10 and 11 represent the maximum amplitude of the f/4 subharmonic (p^m_{f/4}) and the 3f/4 subharmonic (p^m_{3f/4}), respectively, for a driving frequency f = 300 kHz in the resonator of length L = λ (Section 3.3) (i) when the source amplitude ps is increased starting from Eq. (3) (solid blue line), (ii) when the amplitude is increased stepwise (dotted red line), and (iii) when the amplitude is decreased stepwise (dashed green line). It can be seen that the threshold for the disappearance of both subharmonics in Case (iii) (p_th = 3 kPa) and the thresholds for their appearance in Case (i) (p_th = 9 kPa) and in Case (ii) (p_th = 17 kPa) are different.
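The stepwise protocol of this section can be summarized by the following sketch; run_from_state is a hypothetical interface that restarts the simulation from a stored state (or from rest when state is None), matching the description above.

```python
def hysteresis_sweep(run_from_state, dt, f, p_max_kpa=15, component=0.5):
    """Step the source amplitude up, then down, carrying the final state of each
    run over as the initial condition of the next, and record the subharmonic."""
    steps = list(range(1, p_max_kpa + 1))
    state, up, down = None, [], []
    for ps_kpa in steps:                               # increasing branch
        signal, state = run_from_state(ps_kpa * 1e3, state)
        up.append(component_amplitudes(signal, dt, f)[component])
    for ps_kpa in reversed(steps):                     # decreasing branch
        signal, state = run_from_state(ps_kpa * 1e3, state)
        down.append(component_amplitudes(signal, dt, f)[component])
    return up, down    # differing appearance/disappearance thresholds
                       # between the two branches indicate hysteresis
```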
These three thresholds have the same values for both the f/4 and 3f/4 subharmonics. However, above these thresholds the amplitudes p^m_{f/4} and p^m_{3f/4} remain the same in the three cases. The comparison of Cases (i), (ii), and (iii) allows us to conclude that the generation of the f/4 and 3f/4 subharmonics is hysteretic. As seen before [23], a change in acoustic pressure amplitude modifies the characteristics of the bubbly medium (the average size of the bubbles, i.e., the void fraction in the liquid, is pressure-amplitude dependent) and the resonance of the cavity containing it. Following this result, the threshold effect of a subharmonic is most likely due to the sudden match between the subharmonic frequency and the cavity resonance. The hysteretic behavior of this subharmonic may rely on the fact that the modifications of those characteristics of the medium differ depending on whether the pressure amplitude is raised or lowered. Further investigations are needed for a more comprehensive exploration of these points; they are part of our ongoing work.

Conclusions

We have carried out numerical experiments to study the generation of subharmonics in a bubbly liquid from a single-frequency ultrasonic driving signal in a one-dimensional resonator. The numerical simulations rely on a nonlinear mathematical model that couples the bubble oscillations and the acoustic field. It has been shown via the model used here that the creation of subharmonics is a nonlinear effect, and that the resonators that best suit this purpose in the boundary configuration assumed in this paper are those whose length is a multiple of λ/2. An amplitude threshold for the creation of subharmonics has been observed. The hysteretic nature of subharmonic generation in bubbly liquids has also been shown through the behavior of the subharmonic components when different sequences of pressure amplitudes are applied at the source; this is the main point of this paper, since it has hardly been mentioned in the literature.
Unusual electronic and vibrational properties in the colossal thermopower material FeSb2

The iron antimonide FeSb2 possesses an extraordinarily high thermoelectric power factor at low temperature, making it a leading candidate for cryogenic thermoelectric cooling devices. However, the origin of this unusual behavior is controversial, having been variously attributed to electronic correlations as well as the phonon-drag effect. The optical properties of a material provide information on both the electronic and vibrational properties. The optical conductivity reveals an anisotropic response at room temperature; the low-frequency optical conductivity decreases rapidly with temperature, signalling a metal-insulator transition. One-dimensional semiconducting behavior is observed along the b axis at low temperature, in agreement with first-principles calculations. The infrared-active lattice vibrations are also symmetric and extremely narrow, indicating long phonon relaxation times and a lack of electron-phonon coupling. Surprisingly, there are more lattice modes along the a axis than are predicted from group theory; several of these modes undergo significant changes below about 100 K, hinting at a weak structural distortion or phase transition. While the extremely narrow phonon line shapes favor the phonon-drag effect, the one-dimensional behavior of this system at low temperature may also contribute to the extraordinarily high thermopower observed in this material.

FeSb2 crystallizes in an orthorhombic structure with two formula units per unit cell, as shown in Fig. 1(a). Despite this simple structure, there are two classes of FeSb2 crystals: those with a putative metal-insulator transition (MIT), in which the dc conductivity along the b axis first increases below room temperature, reaching a broad maximum at about 80-100 K, before decreasing dramatically as the temperature is further reduced (1), and a second class of materials without a MIT, in which the dc conductivity immediately begins to decrease as the temperature is lowered (1-4), as shown in Fig. 1(b) for the two types of crystals examined in this work. Both classes of materials have a high thermoelectric power factor at low temperature; however, it is extraordinarily high in the materials with a MIT (1). The thermoelectric efficiency is given by the dimensionless figure of merit ZT = σS²T/κ, where σ, S, T, and κ are the conductivity, Seebeck coefficient, temperature, and thermal conductivity, respectively; the thermoelectric power factor is simply S²σ. In FeSb2 the Seebeck coefficient may be as high as S ≈ −45 mV K−1 at low temperature, resulting in the highest power factor ever recorded (2). In general, there are two strategies for increasing ZT: reduce κ or increase the power factor S²σ. However, because the source of this large thermoelectric response is not entirely understood, with electronic correlations (1-11) as well as the phonon-drag effect (11-15) having been proposed, it is not clear which approach offers the best chance of success. The complex optical properties yield information about both the electronic and vibrational properties of a material, and can offer insights into the origin of this unusual behavior.
The real part of the optical conductivity is particularly useful, as it yields information about the gapping of the spectrum of excitations in systems with a MIT, and in the zero-frequency limit the dc conductivity is recovered, σ1(ω → 0) ≡ σdc, allowing comparisons to be made with transport data. Furthermore, the infrared-active transverse-optic modes at the center of the Brillouin zone may be observed in σ1(ω) as resonances superimposed upon an electronic background (or antiresonances if strong electron-phonon coupling is present). The optical properties of FeSb2 have been previously examined in the a-b planes (6) and along the c axis (7), revealing a semiconducting response at low temperature and evidence for electron-phonon coupling.

Results

Crystals of FeSb2 have been prepared by the usual methods (16,17). The reflectance of several single crystals, with and without a MIT, has been measured over a wide frequency range (3 meV to 4 eV) at a variety of temperatures for light polarized along the a, b, and c axes (18) (Supplementary Fig. S1). Only naturally occurring crystal faces have been examined, although after an initial measurement the c-axis face was polished to remove some surface irregularities. Polishing broadens the lattice mode(s), but does not otherwise affect the optical properties. After the optical measurements were completed, the samples were dismounted and the dc resistivity, ρdc, was measured using a standard four-probe technique (1) [the dc conductivity, σdc = 1/ρdc, is shown along the b axis in Fig. 1(b)]. While the reflectance is a tremendously useful quantity, it is a combination of the real and imaginary parts of the dielectric function, and as such it is not necessarily intuitive or easily understood. It is much simpler to examine the real part of the optical conductivity, determined from a Kramers-Kronig analysis of the reflectance (19), shown in the infrared region along the a, b, and c axes in Figs. 2(a), (b), and (c), respectively; the insets show the conductivity over a much wider frequency range. Interestingly, the temperature dependence of the reflectance for crystals with and without a MIT is identical in the infrared region (shown for light polarized along the b axis in Supplementary Fig. S2). Consequently, the low-frequency optical conductivity in Fig. 2 never shows the initial increase with decreasing temperature that is seen in the dc conductivity in samples with a MIT; instead, the low-frequency optical conductivity decreases with temperature along all three lattice directions, suggesting that no MIT is present. The apparent dichotomy between the temperature dependence of the dc resistivity and the optical conductivity in crystals with a MIT [Figs. 1(b) and S2(a)] indicates that the dc transport properties are being driven by an impurity band that is sufficiently narrow that its response falls below our lowest measured frequency. At room temperature, the real part of the optical conductivity may be described by a simple Drude model, with plasma frequency ωp² = 4πne²/m* and scattering rate 1/τ, where n and m* are the carrier concentration and effective mass, respectively, together with Fano-shaped Lorentz oscillators to describe possible electron-phonon coupling (20). The second term is a summation of oscillators with position ωj, width γj, strength Ωj, and (dimensionless) asymmetry parameter 1/qj², which describe the vibrations of the lattice or bound excitations (interband transitions); Z0 ≈ 377 Ω is the impedance of free space, yielding units for the conductivity of Ω−1 cm−1.
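A minimal numerical sketch of this Drude-plus-oscillators description is given below. Since the fitted asymmetries in this work turn out to be negligible (1/qj² ≈ 0.001, Table 1), the Fano asymmetry is dropped and symmetric Lorentzians are used; the parameter values in the usage example are placeholders, not the fitted values of this paper.

```python
import numpy as np

Z0 = 377.0  # impedance of free space (Ohm)

def sigma1(omega, wp, gamma_d, oscillators):
    """Real part of the optical conductivity, in Ohm^-1 cm^-1.

    All frequencies (omega, plasma frequency wp, Drude width gamma_d = 1/tau,
    and the oscillator parameters) are in cm^-1. oscillators is a list of
    (omega_j, gamma_j, Omega_j) symmetric Lorentz terms.
    """
    drude = wp**2 * gamma_d / (gamma_d**2 + omega**2)
    lorentz = sum(Oj**2 * gj * omega**2 / ((wj**2 - omega**2)**2 + (gj * omega)**2)
                  for wj, gj, Oj in oscillators)
    return (2.0 * np.pi / Z0) * (drude + lorentz)

# Placeholder evaluation along the b axis: mode positions are the 295 K values
# quoted later in the text; widths, strengths, and Drude terms are invented.
omega = np.linspace(20.0, 700.0, 2000)
s1 = sigma1(omega, wp=4000.0, gamma_d=300.0,
            oscillators=[(106.0, 3.0, 300.0), (231.0, 2.0, 250.0),
                         (269.0, 2.0, 200.0)])
```

Note that in the ω → 0 limit the Drude term reduces to σdc = 2πωp²τ/Z0, the expression used below.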
In the 1/q² → 0 limit a symmetric Lorentzian profile is recovered; however, as 1/q² increases the line shape becomes increasingly asymmetric. The real part of the optical conductivity along the a and c axes at 295 K is shown in Figs. 2(a) and (c).

[Figure 2 caption: (a) The temperature dependence of the real part of the optical conductivity for light polarized along the a axis, revealing several extremely sharp infrared-active lattice modes and the rapid suppression of the low-frequency conductivity with decreasing temperature. Inset: the conductivity shown over a wide energy range. (b) The temperature dependence of the optical conductivity for light polarized along the b axis. As the temperature is reduced, the low-frequency conductivity decreases dramatically, revealing a step-like feature at 600 cm−1; three narrow infrared-active lattice modes all lie below this energy. There is a clear transfer of spectral weight (area under the conductivity curve) from low to high frequency with decreasing temperature. The points on the conductivity axis correspond to the values for σdc measured along this direction in a sample without a MIT, normalized to the extrapolated value for σ1(ω → 0) at room temperature. Inset: the conductivity shown over a much larger energy range. (c) The temperature dependence of the optical conductivity along the c axis, which is similar in magnitude to the conductivity along the a axis; a single sharp lattice mode is observed in this polarization. Inset: the conductivity shown over a wide energy range.]

The anisotropy suggests that m* is slightly lower along the a axis, and that the larger value for σdc along the b axis is a consequence of a lower scattering rate (Supplementary Table 1). As Fig. 2 indicates, the Drude component begins to decrease rapidly in strength below room temperature along all three directions, with a commensurate loss of spectral weight (the area under the conductivity curve) that is transferred from low to high frequency (6). The Drude model may be used to track the temperature dependence of ωp and 1/τ down to about 75 K, below which the free-carrier response becomes too small to observe in our measurements. The Drude expression for the dc conductivity, σdc = 2πωp²τ/Z0, decreases rapidly as the temperature is lowered, suggesting that the transport may be described by an activation energy Ea using the Arrhenius equation, where Ea = Eg/2. Transport measurements typically identify two gaps in FeSb2: Eg ≈ 5 meV below about 20 K, and Eg ≈ 26-36 meV in the 50-100 K temperature range (1-4). The Arrhenius relation describes the temperature dependence of σdc along all three lattice directions quite well (see Supplementary Fig. S4), and yields values for the transport gap of Eg ≈ 20.6, 19.5, and 24.8 ± 2 meV along the a, b, and c axes, respectively, in good agreement with the high-temperature values for the transport gap.

Discussion

While the Drude model with Fano-shaped Lorentz oscillators is able to reproduce the temperature dependence of the optical conductivity along the a and c axes reasonably well, it fails to describe the sharp feature that develops along the b axis at low temperature. This step-like feature is the signature of a van Hove singularity in the density of states. The asymmetric profile in the real part of the low-temperature optical conductivity resembles the singular response observed in one-dimensional semiconductors, described by an amplitude σ0, the semiconducting optical gap 2Δ, and the sine-Gordon coupling constant β (21).
When this functional form is taken in linear combination with several Lorentzian oscillators, the optical conductivity is reproduced quite well with σ0 = 1730 Ω−1 cm−1, 2Δ = 614 cm−1, and β = 0.75, as shown in Fig. 3(b), clearly establishing the one-dimensional nature of the optical properties. The estimate for 2Δ along the b axis is considerably larger than Eg; however, it should be noted that the optical determination of 2Δ probes only direct transitions between bands, due to the low momentum transfer. If the material has a direct gap, then the optical and transport gaps should be similar; however, in indirect-gap semiconductors, phonon-assisted transitions typically result in Eg < 2Δ. The observation of one-dimensional behavior in this material is of particular importance, as it has been argued that lowered dimensionality may increase the value of the Seebeck coefficient (22-24). Electronic structure calculations can provide insight into the optical properties of a material. However, density functional theory (DFT) predicts a metallic rather than a semiconducting ground state (12), indicating that a more sophisticated approach is required. Consequently, first-principles calculations have been performed using a linearized quasiparticle self-consistent GW and dynamical mean field theory (LQSGW + DMFT) approach (25-27) (details are provided in the Supplementary Information). Figure 4 shows the low-energy quasiparticle band structure near the K point (0.26b* + 0.28c*), where the direct bandgap is a minimum. Here b* and c* are the reciprocal lattice vectors along the b and c axes. Around the K point, the calculation shows a direct bandgap of 80 meV, in good agreement with the semiconducting optical gap of 2Δ ≈ 76 meV. In addition, low-dimensional behavior is observed near the K point; along the a* direction, the quasiparticle bands for the conduction and valence electrons are almost flat, as illustrated by the quasiparticle band in Fig. 4(b). In contrast, the quasiparticle bands are dispersive along the b* and c* directions, shown in Figs. 4(c) and (d). The fact that DMFT is necessary to generate a low-dimensional quasiparticle spectral function consistent with the semiconducting ground state indicates that electronic correlations are an essential ingredient in understanding the anisotropic optical and transport properties of FeSb2. We now turn our attention to the equally interesting behavior of the infrared-active lattice modes. FeSb2 crystallizes in the orthorhombic Pnnm space group, where c is the short axis [Fig. 1(a)]. Of the irreducible vibrational representation, only the B1u, B2u, and B3u modes are infrared-active, along the c, b, and a axes, respectively (6). The temperature dependence of the real part of the optical conductivity has been projected onto the wave number versus temperature plane using the indicated color scales in Figs. 5(a), (b), and (c) for light polarized along the a, b, and c axes, respectively. The vibrations have been fit using oscillators with a Fano profile superimposed on an electronic background at 295 and 5 K (Supplementary Figs. S5, S6, and S7). The frequencies of the lattice modes at the center of the Brillouin zone and their atomic characters have also been calculated using first-principles techniques and are in good agreement with previous results (28,29) (details are provided in the Supplementary Information); the comparison between theory and experiment is shown in Table 1.
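Returning to the activated transport described above, the extraction of the transport gap from the Arrhenius relation can be sketched as follows; the fitting window is an assumption of the example (the text uses the high-temperature regime).

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant (eV/K)

def transport_gap_meV(T, sigma_dc):
    """Fit ln(sigma_dc) = ln(sigma_0) - E_a/(kB*T) and return E_g = 2*E_a (meV).

    T in K, sigma_dc in Ohm^-1 cm^-1; assumes a single activated channel over
    the chosen temperature window (e.g., roughly 50-100 K).
    """
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(sigma_dc)), 1)
    return -slope * kB * 2.0 * 1e3   # slope = -E_a/kB
```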
The behavior of the lattice modes is remarkable in several ways. Along the a, b, and c axes the vibrations have line widths that are up to an order of magnitude smaller than previously reported values (6,7); at low temperature all the modes are extremely sharp and several have line widths of less than 1 cm−1, a result that has also been observed in some Raman-active modes (30). The narrow line widths imply long phonon lifetimes (τj ∝ 1/γj) and mean-free paths, consistent with the suggestion of quasi-ballistic phonons (11,15), which affect S through the phonon-drag effect, where the phonon current drags the charge carriers, giving rise to an additional thermoelectric voltage (31-33). In addition, while several of the infrared-active vibrations were previously reported to have a slightly asymmetric profile at high temperature (6,7), in this work all the line shapes appear to be symmetric (1/qj² ≈ 0), indicating that electron-phonon coupling is either very weak or totally absent. The single B1u mode along the c axis, and the three B2u modes along the b axis, shown in Figs. 5(c) and (b), respectively, increase in frequency (harden) with decreasing temperature, and are in excellent agreement with the calculated values (Table 1). The behavior of the lattice modes along the a axis is shown in Fig. 5(a).

[Figure 5 caption: (a) The temperature dependence of the real part of the optical conductivity for light polarized along the a axis, projected onto the wave number versus temperature plane; the color scheme for the conductivity is shown above the plot. Only three B3u modes are predicted for this symmetry; however, there are four modes at 121, 191, 243 and 254 cm−1 at 295 K. Below about 100 K the 191 cm−1 mode disappears and is replaced by a new, very strong mode at 220 cm−1; all the modes are quite narrow at low temperature (Table 1). The change in the character of the lattice modes below 100 K hints at a weak structural distortion along this direction. (b) The optical conductivity for light polarized along the b axis, projected onto the wave number versus temperature plane. There are three strong B2u modes at 106, 231, and 269 cm−1 at 295 K that harden and narrow while increasing slightly in strength at low temperature. (c) The optical conductivity for light polarized along the c axis, projected onto the wave number versus temperature plane. There is one strong B1u mode at 191 cm−1 at 295 K that hardens with decreasing temperature, increasing slightly in strength and narrowing dramatically at low temperature.]

At room temperature the three modes observed at 121, 243, and 254 cm−1 are in good agreement with the calculated values for the B3u modes at 125, 252, and 260 cm−1, respectively; however, a fourth, reasonably strong mode at 191 cm−1 is also observed that is considerably broader than the other vibrations. As the temperature is reduced, the mode at 191 cm−1 actually decreases slightly in frequency, while the remaining modes harden. Below about 100 K, the mode at 191 cm−1 vanishes and a new, very strong mode appears at 220 cm−1, while at the same time the modes at 243 and 254 cm−1 both shift to slightly higher frequencies; the mode at 121 cm−1 shows no signs of any anomalous behavior [Fig. 5(a) and Fig. S5]. The fate of the 191 cm−1 mode is uncertain; however, it is unlikely that it has evolved into the 220 cm−1 mode, due to the large difference in oscillator strengths (Table 1).
It is also unlikely that this is a manifestation of the B1u mode, which has a comparable frequency, because that feature does not display the unusual temperature dependence of the mode observed along the a axis, nor is there any evidence of it along the b axis. The dramatic change in the nature of the lattice modes along the a axis, at precisely the temperature where the resistivity begins to increase dramatically, suggests there is a weak structural distortion or phase transition. To conclude, the temperature dependence of the optical and dc transport properties of single crystals of FeSb2, both with and without a MIT, have been examined over a wide temperature and spectral range, along all three lattice directions. While the temperature dependence of the optical properties is essentially identical in the two types of crystals, the dc transport properties are dramatically different. This dichotomy can be explained by the presence of a sample-dependent impurity band that lies below the low-frequency limit of the optical measurements. The optical conductivity in both types of crystals reveals an anisotropic response at room temperature, and singular behavior at low temperature along the b axis, demonstrating a one-dimensional semiconducting response with 2Δ ≈ 76 meV, in agreement with ab initio calculations. The lattice modes along the b and c axes have symmetric profiles which narrow and harden with decreasing temperature, and their positions are in good agreement with first-principles calculations. However, along the a axis there is an extra mode above 100 K; below this temperature the resistivity increases rapidly and the high-frequency vibrational modes undergo significant changes that hint at a weak structural distortion or transition. Transport studies along this direction may shed light on the nature of this peculiar behavior. Although electron-phonon coupling is apparently either very weak or totally absent in this material, the fact that DMFT is required to reproduce the semiconducting ground state and anisotropic response indicates that electronic correlations play an important role in the optical and transport properties. While the extremely narrow phonon line shapes support the phonon-drag explanation of the high thermoelectric power, electronic correlations and the low-dimensional behavior along the b axis may also enhance the Seebeck coefficient (22-24), making it likely that both contribute to the extremely high thermopower observed in FeSb2.

Methods

The temperature dependence of the absolute reflectance was measured at a near-normal angle of incidence using an in situ evaporation method (18) over a wide frequency range on Bruker IFS 113v and Vertex 80v spectrometers. In this study mirror-like as-grown faces of single crystals have been examined. After an initial measurement, the c-axis face was determined to have a minor surface irregularity, so it was polished and remeasured. Polishing broadens the lattice mode somewhat, but the electronic properties were not affected. The temperature dependence of the reflectance was measured up to ≈1.5 eV, while polarization studies were conducted up to at least 3 eV. The complex optical properties were determined from a Kramers-Kronig analysis of the reflectance (19). The Kramers-Kronig transform requires that the reflectance be determined at all frequencies; thus extrapolations must be supplied in the ω → 0, ∞ limits.
In the metallic state the low-frequency extrapolation follows the Hagen-Rubens form, R(ω) ∝ 1 − √ω, while in the semiconducting state the reflectance was continued smoothly from the lowest measured frequency point to R(ω → 0) ≈ 0.64 and 0.68 along the a and c axes, respectively, and ≈0.74 along the b axis. The reflectance is assumed to be constant above the highest measured frequency point up to 8 × 10⁴ cm−1, above which a free-electron-gas asymptotic reflectance extrapolation, R(ω) ∝ 1/ω⁴, is employed (34).

Table 1. The experimentally observed position (ωj), width (γj) and strength (Ωj) of the infrared-active lattice modes in FeSb2 along the a (B3u), b (B2u), and c (B1u) axes at 295 and 5 K, compared with the frequencies and atomic intensities calculated from first principles assuming a Pnnm (orthorhombic) space group; for all of the modes the asymmetry parameter 1/qj² ≲ 0.001 (symmetric profiles). The phonon lifetimes τj ∝ 1/γj. The uncertainties in the fitted position, width, and strength are estimated to be 1%, 5%, and 10%, respectively. All units are in cm−1, unless otherwise indicated.
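A schematic of the Kramers-Kronig phase recovery described in Methods is sketched below. The quadrature and grid handling are implementation choices, not the authors' exact numerical procedure; the extrapolated reflectance (Hagen-Rubens or a constant level at low frequency, constant and then 1/ω⁴ at high frequency) is assumed to have been appended to the measured data beforehand.

```python
import numpy as np

# Kramers-Kronig phase of the complex reflectivity from ln R:
#   theta(w0) = -(w0/pi) * P.V. integral dw [ln R(w) - ln R(w0)] / (w^2 - w0^2)
# w: ascending frequency grid (cm^-1) including the extrapolated regions,
# R: reflectance on that grid. O(N^2) loop; adequate for a sketch.

def kk_phase(w, R):
    lnR = np.log(R)
    theta = np.empty_like(w)
    for i, w0 in enumerate(w):
        integrand = (lnR - lnR[i]) / (w**2 - w0**2)
        integrand[i] = 0.0                 # principal value: skip singular point
        theta[i] = -(w0 / np.pi) * np.trapz(integrand, w)
    return theta

# From R and theta, r = sqrt(R)*exp(1j*theta) yields n and k, and hence sigma_1.
```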
Hemodynamics and Hemorrhagic Transformation After Endovascular Therapy for Ischemic Stroke

Hemorrhagic transformation remains a potentially catastrophic complication of reperfusion therapies for the treatment of large-vessel occlusion ischemic stroke. Observational studies have found an increased risk of hemorrhagic transformation in patients with elevated blood pressure as well as a high degree of blood pressure variability, suggesting a link between hemodynamics and hemorrhagic transformation. Current society-endorsed guidelines recommend maintaining blood pressure below a fixed threshold of 180/105 mmHg regardless of thrombolytic or endovascular intervention. However, given the high recanalization rates with mechanical thrombectomy, it is unclear if the same hemodynamic goals from the pre-thrombectomy era apply. Also, individual patient factors such as the degree of reperfusion, infarct size, and collateral status likely need to be considered. In this review, we will discuss current evidence linking hemodynamics to hemorrhagic transformation after mechanical thrombectomy. In addition, we will review the clinical relevance of cerebral autoregulation in stroke, highlighting recent studies that have harnessed autoregulatory physiology to define and trend individualized limits of autoregulation. This review will go on to emphasize the translatability of this approach to stroke management. Finally, we will discuss novel statistical approaches to post-thrombectomy hemodynamics, such as trajectory analysis.

INTRODUCTION

Hemorrhagic transformation (HT) is a feared complication of acute ischemic stroke and is independently associated with neurological deterioration and worse functional outcomes (1-4). Accurate prediction and triage of patients at risk for HT would be of tremendous value, and yet the underlying mechanisms and potential biomarkers of HT remain elusive. While animal and human studies have invoked pathomechanisms involving neuroinflammation, neurovascular unit impairment, blood-brain barrier disruption, and vascular remodeling, this clinically oriented review will focus on cerebral autoregulation and optimal blood pressure (BP) management following endovascular thrombectomy (EVT) for large-vessel occlusion (LVO) acute ischemic stroke (5,6). Mechanical thrombectomy preceded by intravenous thrombolytics has become the standard-of-care treatment in stroke patients with acute ischemia secondary to LVO (7). This shift occurred after 2015, a year that witnessed five randomized trials (MR CLEAN, ESCAPE, SWIFT PRIME, REVASCAT, and EXTEND IA) showing the efficacy of EVT over standard medical care (8-12). A subsequent meta-analysis (HERMES) included a total of 1,287 patients and demonstrated a significant reduction in 90-day disability compared to controls, though 90-day mortality did not differ between the two study populations (7). Two additional trials (DAWN, DEFUSE-3) were published in 2018. They provided evidence that thrombectomy can be offered up to 24 h after symptom onset in selected patients with a mismatch between infarct size and clinical deficit (13,14). In all seven of these major trials, the rates of symptomatic HT were key safety outcomes, reported as serious adverse events following treatment. In the first five studies, which looked at EVT in the early window (up to 12 h), symptomatic HT in the treatment group ranged from 0 to 7.7%.
Of note, in these five studies, most patients (>80%) in both intervention and control groups received intravenous thrombolysis in addition to EVT. In both extended-time-window trials, symptomatic hemorrhagic complications occurred in 6-7% of patients in the treatment group. In the DEFUSE-3 trial, the rates of symptomatic intracranial bleeding did not differ between the EVT and control groups (7 vs. 4%, respectively; P = 0.75) (13). Five patients with symptomatic HT in the EVT group died, compared with two in the control group. In the DAWN trial, the rates of symptomatic intracranial bleeding did not significantly differ between the EVT and control groups (6 vs. 3%, respectively; P = 0.50) (14). The HERMES pooled analysis of patient-level data concluded that the rates of symptomatic intracranial hemorrhage are not higher in patients receiving EVT than in patients receiving medical therapy alone (4.4 vs. 4.3%, respectively; risk difference 0.1%), suggesting that reperfusion alone may not be the primary driver of symptomatic HT (7). Observational studies have shown an increased risk of HT with sustained post-procedural hypertension and higher BP variability (15). Interestingly, mean systolic BP (SBP) was lower among patients with successful reperfusion, indicating a possible difference in the threshold for reperfusion injury depending on recanalization status. Furthermore, radiographic hemorrhagic infarction (HI) is common following EVT and has been associated with poor outcome, thereby calling into question the purported benign nature of HI (4). While these studies suggest a possible role of hemodynamics in the development of HT, they do not prove a causal relationship. Identification of patients at risk for HT (both radiographic and symptomatic) may allow for early preventative strategies such as BP control post-EVT.

BLOOD PRESSURE MANAGEMENT FOLLOWING THROMBECTOMY

Current American Heart Association guidelines recommend maintaining BP < 180/105 mmHg for all patients treated with intravenous thrombolysis or EVT, to promote perfusion to ischemic territories while mitigating the potential risks of intracranial hemorrhage. Still, the guidelines acknowledge a lack of prospective trials to substantiate this position, and the language of these consensus statements reflects this uncertain area of care: "In patients who undergo mechanical thrombectomy, it is reasonable to maintain the BP ≤ 180/105 mmHg during the first 24 h after the procedure. In patients who undergo mechanical thrombectomy with successful reperfusion, it might be reasonable to maintain BP at a level <180/105 mmHg." (16). Randomized controlled trials are unavailable, and the evidence in support of these recommendations is moderate to weak (class of recommendation IIa/IIb, level of evidence B-NR). Furthermore, trial protocols regarding post-procedural BP control in the studies that contributed to guideline development were vague, and BP management likely varied across sites. The vast majority of patients enrolled in the under-6-h randomized trials received intravenous thrombolytic therapy, and the trial protocols stipulated management according to local guidelines, with pressures generally under 180/105 mmHg for the first 24 h after the procedure. Only two trial protocols provided additional recommendations. The ESCAPE protocol states that a systolic BP ≥ 150 mmHg is probably useful in promoting and sustaining adequate collateral flow while the artery remains occluded (9).
The protocol further states that controlling pressure once reperfusion has been achieved, aiming for normal pressures, is a reasonable approach for individual patients. Second, the DAWN protocol endorses systolic pressures under 140 mmHg in the first 24 h for subjects who achieve successful reperfusion (17). As a result of the limited data, current management strategies are based on guidelines that favor a one-size-fits-all approach, neglecting the heterogeneity of stroke and differences in individual patient characteristics. The care of patients with stroke is, therefore, poorly individualized. Despite the efficacy of EVT, many patients with LVO stroke still suffer morbidity, mortality, and functional dependence in longitudinal studies (7, 18). Observational studies, including a recent meta-analysis, have shown higher rates of HT, worse outcomes, and increased mortality in patients with higher peak SBP values or greater hemodynamic variability in the first 24 h after EVT (15, 19-21). However, it remains unclear whether post-procedural hypertension is simply an epiphenomenon or whether it reflects a valid therapeutic target. In a recent multicenter study of 1,245 patients who achieved successful reperfusion after EVT, Anadani et al. divided patients into three groups based on the SBP goal in the first 24 h post-EVT. The investigators found that higher SBP targets were associated with higher odds of symptomatic intracranial hemorrhage, mortality, and hemicraniectomy (22). These results agree with earlier findings by Goyal et al., who published a single-center experience after the implementation of more aggressive BP control following successful EVT. Compared to patients treated with permissive hypertension (<180 mmHg), those treated with moderate (<160 mmHg) and intensive (<140 mmHg) BP control showed improved functional outcome and lower mortality at three months (19). Although we currently lack rigorous clinical evidence, these studies, as well as compelling conceptual reasons, suggest that BP optimization may represent a post-EVT neuroprotective strategy. Indeed, while a higher BP may be beneficial in patients with incomplete reperfusion by promoting perfusion to ischemic territories and the penumbra, it could lead to relative hyperperfusion, with resulting cerebral edema and hemorrhage, in patients with complete reperfusion. This phenomenon is well-described in chronic ischemia after carotid revascularization (via endarterectomy or stenting) but may also occur in acute stroke (23-25). For example, Hashimoto et al. reported cerebral hyperperfusion syndrome in a 77-year-old patient with acute internal carotid and middle cerebral artery occlusions. Due to the patient's neurologic deterioration, the authors suggest that it is essential to routinely monitor regional oxygen saturation with near-infrared spectroscopy, evaluate cerebral blood flow, and maintain antihypertensive therapy to prevent hyperperfusion after revascularization (25). It is also possible that this complication is more prevalent than the handful of published case reports might suggest. Following recanalization, lower BP targets may be warranted to decrease reperfusion injury and promote penumbral recovery. Nevertheless, optimal, personalized BP targets remain undefined. To complicate matters, individual patient factors such as the degree of reperfusion, infarct size, concomitant carotid revascularization, antithrombotic therapy, and hemodynamic status likely need to be considered.
Because of these factors, there is a high degree of practice variation in BP management following EVT (26). Recent studies have shown that real-time autoregulation monitoring can be used to identify a dynamic BP range in individual patients at which autoregulation is optimally functioning (27-31). Such an autoregulation-derived, personalized BP range may provide a favorable physiologic landscape for the acutely injured brain. Accordingly, the following section will review the use of cerebral autoregulation monitoring in patients with acute ischemic stroke, highlighting the hypothesis that exceeding a personalized upper limit of autoregulation predisposes patients to reperfusion injury and HT (27, 29).

CEREBRAL AUTOREGULATION AND BLOOD PRESSURE PERSONALIZATION

Cerebral autoregulation describes the intrinsic capacity of the cerebral vasculature to preserve stable blood flow in the face of systemic BP changes (or, more precisely, cerebral perfusion pressure changes) (32). Autoregulatory capacity in acute stroke is critical for the maintenance of stable blood flow to the ischemic penumbra and the avoidance of excessive hyperperfusion (33, 34). There is fairly widespread agreement that stroke is associated with impaired autoregulation, even in cases of minor stroke (33-35). This impairment may exist ipsilateral to the stroke site in a focal fashion, or globally throughout both hemispheres (34). Interestingly, Immink et al. reported dynamic autoregulatory disturbance ipsilateral to middle cerebral artery (MCA) territory strokes but bilateral disturbance in lacunar ischemic strokes (36). These results were bolstered in more recent analyses by Guo et al., showing that dynamic autoregulatory markers were impaired ipsilaterally in strokes of large-artery atherosclerosis but bilaterally in strokes of small-artery occlusion (37). Petersen et al. then examined autoregulation on a more longitudinal basis, reporting dynamic autoregulatory failure up to 1 week following acute LVO strokes in the MCA. More specifically, this investigation showed that the autoregulatory parameter phase was lower in the affected cerebral hemisphere compared to the contralateral hemisphere, indicating an impaired ability to buffer against BP fluctuations (38). Furthermore, in stroke patients with impaired autoregulation, recovery tends to be delayed for up to 3 months, underlining the clinical relevance of autoregulation in stroke research (35, 39). That said, only a handful of studies have looked at functional outcome prognostication with respect to autoregulatory physiology in stroke. For example, Reinhard et al. enrolled 45 patients within 48 h of LVO MCA strokes and showed that lower ipsilateral phase shifts were related to worse functional outcomes (40). In light of the prolonged enrollment timeframe, the authors conceded that autoregulatory impairment might reflect initial stroke severity rather than functioning as an independent contributing factor to outcome. To help resolve this question, Castro et al. measured autoregulation in 30 patients with LVO MCA ischemic stroke within 6 h of symptom onset (39). This report demonstrated that autoregulatory impairment operated as a statistically independent predictor of functional autonomy at the 90-day endpoint (odds ratio 14.0, 95% confidence interval 1.7-74.0; P = 0.013). In yet another study, these authors reported that final infarct volume is significantly lower in patients with preserved autoregulation in a similar acute window post-stroke (41).
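The dynamic autoregulation metrics cited throughout these studies (phase, gain, coherence) are typically derived from transfer function analysis between arterial BP and cerebral blood flow velocity. The sketch below illustrates the computation on synthetic signals; the sampling rate, frequency band, and signal shapes are illustrative assumptions rather than parameters from the cited studies, and real analyses use arterial-line and transcranial Doppler recordings.

```python
import numpy as np
from scipy.signal import csd, coherence

fs = 10.0                      # assumed sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)  # 5 min of synthetic data

# Synthetic arterial BP (ABP) and cerebral blood flow velocity (CBFV):
# a slow 0.05 Hz oscillation with a phase offset between the two signals, plus noise.
rng = np.random.default_rng(0)
abp = 90 + 5 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 1, t.size)
cbfv = 60 + 3 * np.sin(2 * np.pi * 0.05 * t + 0.7) + rng.normal(0, 1, t.size)

# Cross-spectrum and coherence via Welch's method.
f, Pxy = csd(abp, cbfv, fs=fs, nperseg=1024)
_, Cxy = coherence(abp, cbfv, fs=fs, nperseg=1024)

# Average phase (degrees) in the low-frequency band (0.02-0.07 Hz), where dynamic
# autoregulation is commonly assessed; lower phase suggests worse autoregulation.
# Note that the sign of the phase depends on the cross-spectrum convention used.
band = (f >= 0.02) & (f <= 0.07)
phase_deg = np.degrees(np.angle(Pxy[band])).mean()
print(f"LF phase = {phase_deg:.1f} deg, LF coherence = {Cxy[band].mean():.2f}")
```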
In a review summarizing these and related findings, Castro et al. conclude that early autoregulatory measures wield considerable import in the guidance of acute stroke management, secondary injury prevention, and outcome improvement (35). Autoregulatory physiology has thus been invoked as a biological avenue with possible deterrent and restorative benefits concerning HT and associated neurologic worsening. In an invasive neuromonitoring study, Dohmen et al. enrolled 15 patients with MCA ischemic strokes and calculated the cerebral perfusion pressure-oxygen reactivity index (COR) (42). They found that COR indices were higher (worse) in the eight patients with malignant courses (i.e., massive brain edema) compared to the seven patients with relatively benign courses. The study concludes that dysautoregulation appears to play an essential role in the development of cerebral edema. In a study mentioned above, Castro et al. calculated cerebrovascular resistance, coherence, gain, and phase in 46 patients within 24 h of MCA ischemic stroke (41). At admission, phase was lower (indicative of worse autoregulation) in patients with HT. Also, progression to edema was related to lower cerebrovascular resistance values and increased blood flow velocities at the initial presentation. These lower resistances, the authors submit, reflect paradoxical cerebral vasodilation, as cerebrovascular resistance equals the quotient of mean arterial pressure and mean flow velocity (CVR = MAP/MFV). Thus, they argue that breakthrough hyperperfusion and microvascular injury may underlie the development of malignant edema and HT. Cumulatively, there is substantial evidence for impaired autoregulation after stroke. It follows that an autoregulation-guided approach can be applied to the cerebrovascular hemodynamics of stroke pathophysiology. The Cambridge group has been refining this work over several decades, particularly in patients with traumatic brain injury (43). With this hypothesis in mind, a recent study harnessed autoregulation monitoring to identify and track personalized BP limits in 90 patients undergoing EVT for LVO ischemic stroke (27, 29). This cohort revealed that continuous estimations of optimal BP and autoregulatory limits are feasible in post-EVT care. The study further demonstrated that exceeding individualized autoregulatory thresholds was associated with HT and worse outcome (Figure 1). In more detail, every 10% increase in time spent above the upper limit of autoregulation was associated with a doubling in the odds of shifting toward a more unfavorable 3-month outcome. The study also observed a progressive increase in percent time above this upper limit with worsening grades of HT (11.4% of the time for no HT, 13.5% for hemorrhagic infarctions 1 and 2, and 20.9% for parenchymal hematomas 1 and 2; P = 0.03). Also, patients who developed symptomatic intracranial hemorrhage spent more time above the upper autoregulatory limit than patients without this complication (11.9% in patients without vs. 24.6% in patients with symptomatic hemorrhage; P = 0.1) (29). This relationship between deviation above the upper autoregulatory limit and outcome is supported by the construct that, above the upper autoregulatory limit, the cerebral vasculature functions as a pressure-passive system in which increases in cerebral blood flow are not counteracted by vasoconstriction (44). Such a system permits periods of hyperperfusion in the setting of an elevated systemic BP (33).
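The exposure metric in that study, the percent of monitored time spent above the upper limit of autoregulation (ULA), is straightforward to compute once MAP and a personalized ULA estimate are available. The sketch below uses synthetic minute-by-minute traces purely for illustration; in the study itself, the ULA was derived from continuous autoregulation monitoring rather than simulated.

```python
import numpy as np

# Synthetic minute-by-minute mean arterial pressure (MAP) over 24 h, and a
# hypothetical time-varying upper limit of autoregulation (ULA) for the same period.
rng = np.random.default_rng(1)
n = 24 * 60
map_trace = 100 + 10 * np.sin(np.linspace(0, 12 * np.pi, n)) + rng.normal(0, 5, n)
ula_trace = 105 + rng.normal(0, 2, n)

# Percent of valid monitoring time during which MAP exceeds the personalized ULA.
valid = ~np.isnan(map_trace) & ~np.isnan(ula_trace)
pct_above_ula = 100.0 * np.mean(map_trace[valid] > ula_trace[valid])
print(f"time above ULA: {pct_above_ula:.1f}% of monitored time")
```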
Furthermore, higher cerebral blood flow after reperfusion therapy (measured via arterial spin labeling magnetic resonance imaging) has been shown to increase the risk of HT (45). Several retrospective studies reported an association between sustained hypertension after EVT and HT (15, 46), although others did not unearth this relationship (19, 47). Divergence of autoregulatory capacity among different patients may be at least one explanation for these discordant results. An additional aim of this post-EVT monitoring study was to compare personalized, autoregulation-guided BP targets with two commonly used clinical approaches: (1) maintaining BP below a fixed, pre-determined value as recommended by current guidelines, and (2) stratifying BP thresholds based on reperfusion status (29). Ultimately, there was no association between time spent above any of the fixed SBP thresholds and HT or functional outcome, even after stratifying by reperfusion status. This supplementary analysis was particularly important because optimal BP ranges after EVT are likely influenced by numerous factors; stratifying by reperfusion status alone might not be sufficient. For instance, chronic hypertension and flow-limiting extracranial carotid disease may shift a person's autoregulatory curve toward higher pressures. Aggressively lowering BP after successful EVT in this scenario may result in cerebral hypoperfusion and infarct expansion (48, 49). In comparison, optimal BP ranges could shift toward lower pressures in patients without hypertension or preexisting large-vessel disease. Overall, then, these results argue for future research in prospective, multicenter, randomized trials.

[Figure 1 | (A) Relative hyperperfusion above the upper limit of autoregulation may predispose patients to hemorrhagic transformation and worse outcomes. (B) In contrast, patients who oscillate within their personalized limits of autoregulation may be protected from secondary brain injury after stroke. ULA, upper limit of autoregulation; MAPopt, optimum mean arterial pressure; MAP, mean arterial pressure; LLA, lower limit of autoregulation.]

Finally, another interesting avenue of investigation revolves around the question of restoring dysautoregulation by dynamically adjusting BP. In other words, by targeting an optimum BP within autoregulatory limits, intensivists may be able to shift patients to a more favorable position on the autoregulatory curve, but this hypothesis remains untested.

BLOOD PRESSURE TRAJECTORY ANALYSIS AFTER STROKE

In addition to autoregulation monitoring, researchers have in recent years applied innovative statistical tools to study BP data in the acute window post-stroke. For instance, in 2018, Kim et al. used trajectory modeling to examine longitudinal BP data from a prospective multicenter registry of 8,376 stroke patients (50). Such a characterization of post-stroke BP courses had hitherto been a missing element in the field. In their work, the authors applied the TRAJ procedure in SAS software to separate heterogeneous, longitudinal BP data into trajectory groups with similar patterns. This analysis identified the optimal number and shape of trajectories and then assigned patients to estimated trajectory groups. Five distinct BP trajectories were generated over the acute period following stroke. The risk of recurrent stroke, myocardial infarction, or death was greater in patients who fell into the acutely elevated or persistently high BP trajectory groups.
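Group-based trajectory modeling of this kind fits a latent-class mixture model over longitudinal curves. As a rough illustration of the underlying idea, the sketch below clusters synthetic SBP time courses with k-means; this is a crude stand-in for, not a reimplementation of, the TRAJ procedure used in these studies, and all data and parameters are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic hourly SBP curves for 200 patients over 24 h, drawn around three
# latent group means to mimic heterogeneous post-stroke BP courses.
rng = np.random.default_rng(42)
n_patients, n_hours = 200, 24
group_means = rng.choice([130.0, 150.0, 170.0], size=n_patients)
curves = group_means[:, None] + rng.normal(0, 8, (n_patients, n_hours))

# Cluster whole time courses; each row (one patient's 24-h curve) is a sample.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(curves)
for g in range(3):
    member_curves = curves[labels == g]
    print(f"group {g}: n = {len(member_curves)}, "
          f"mean SBP = {member_curves.mean():.0f} mmHg")
```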
In 2019, Li et al. published a post-hoc BP trajectory analysis of a large BP-lowering trial in 4,036 patients with stroke (51). Using similar statistical methods, the authors generated five BP trajectories over the seven days following stroke. Patients who sustained high BP over time had significantly higher mortality rates at 3-month and 2-year follow-up. Patients in the experimental arm of the original trial, who received BP-lowering interventions, were more likely to be found in lower BP trajectories than patients in the control arm, demonstrating that pharmacological intervention can affect a patient's BP trajectory and potentially their outcome. These two studies, then, reaffirm the association between elevated post-stroke BP and poor outcome. In recent work by Petersen et al., trajectory analysis was conducted on a prospective, multicenter, international cohort of 1,060 patients who underwent EVT for LVO ischemic stroke (52). Five unique post-EVT systolic BP trajectories were generated over 72 h (Figure 2). Compared to patients in the moderate trajectory (2), patients in the acutely elevated (4) and persistently high (5) trajectories had a significantly increased risk of unfavorable functional outcome after adjustment for several covariates (odds ratios 1.6 and 2.5, respectively). While the elevated BP in the high trajectory groups may reflect an acute, post-stroke hypertensive response, it may also reflect underlying, untreated hypertension: patients in higher trajectories had higher rates of hypertension and received more antihypertensive medication pre-admission. Additionally, elevated BP may reflect reperfusion status, as non-recanalized patients were more likely to be in higher trajectory groups. Overall, patients who maintained lower BP trajectories had better 90-day functional outcomes, but this trend was not observed for symptomatic HT. Patients in the acutely elevated (4) trajectory had the highest rate of symptomatic HT, even higher than patients in the persistently high trajectory (5). In contrast, patients in the moderate-to-high (3) trajectory, who had the highest rates of in-hospital antihypertensive treatment, had markedly lower rates of symptomatic HT than any other trajectory group. These findings raise questions about alternative mechanisms, such as cerebral edema, through which elevated BP may impact functional outcome. It is unknown whether lowering a patient's trajectory from persistently high (5) to acutely elevated (4) would improve outcomes, as this retrospective analysis of a prospective cohort was purely observational. However, these findings may help identify ideal candidates for future trials. This work, along with the previously described studies on autoregulation-based BP goals, is hypothesis-generating and aims to identify a subset of patients who may benefit most from post-stroke BP intervention. Additionally, this body of work demonstrates the impact of emerging analytical techniques on the understanding of post-stroke hemodynamics, the prevention of secondary injuries like HT, and more personalized BP management.

CONCLUSION

In the era of endovascular thrombectomy, hemorrhagic transformation remains a potentially devastating complication of acute ischemic stroke. Intracranial bleeds after thrombectomy likely occur as the result of a multifactorial process. Still, this clinical review of BP optimization shows that hemodynamic management represents a titratable, neuroprotective avenue in the care of critically ill patients.
Exceeding the upper limit of autoregulation may predispose patients to reperfusion injury; maintaining BP within autoregulatory limits may achieve favorable outcomes while avoiding hemorrhagic complications. Additionally, trajectory analysis has the potential to provide more tailored hemodynamic management in the post-thrombectomy intensive care setting.
Dietary Supplements for Male Infertility: A Critical Evaluation of Their Composition

Dietary supplements (DS) represent a possible approach to improving sperm parameters and male fertility. A wide range of DS containing different nutrients is now available. Although many authors have demonstrated benefits from some nutrients in the improvement of sperm parameters, their real effectiveness is still under debate. The aim of this study was to critically review the composition of DS using the Italian market as a sample. Active ingredients and their minimal effective daily dose (mED) on sperm parameters were identified through a literature search. Thereafter, we created a formula to classify the expected efficacy of each DS. Considering the active ingredients, their concentrations and the recommended daily dose, DS were scored into three classes of expected efficacy: higher, lower and none. Twenty-one DS were identified. Most of them had a large number of ingredients, frequently at doses below the mED or with undemonstrated efficacy. Zinc was the most common ingredient of DS (70% of products), followed by selenium, arginine, coenzyme Q and folic acid. By applying our scoring system, 9.5% of DS fell in the higher class, 71.4% in the lower class and 19.1% in the class with no expected efficacy. DS marketed in Italy for male infertility frequently include effective ingredients but also a large number of substances at insufficient doses or with no reported efficacy. Manufacturers and physicians should better consider the scientific evidence on effective ingredients and their doses before formulating and prescribing these products.

Introduction

Infertility is a pathological condition defined as the inability of a sexually active, non-contracepting couple to achieve pregnancy in one year [1]. Both male and female factors can lead to infertility. In particular, it has been reported that 29.3% of cases are due to a male factor, 37.1% to a female factor and 17.6% to both male and female factors, with the remaining percentage considered idiopathic [2]. It is estimated that around 10%-15% of all couples are affected by infertility, which thus represents a global concern in most developed countries [3]. Among the causes of male infertility, many recent studies have emphasized the role of genital tract inflammation, unhealthy lifestyles and malnutrition [4]. In this regard, excess weight and other conditions such as metabolic syndrome, alcohol abuse, cigarette smoking and exposure to environmental pollutants have been strongly related to a decrease in sperm quality and fertility. A major driving hypothesis is that these conditions, by inducing an elevation of reactive oxygen species (ROS) and reactive nitrogen species (RONS), alter the redox balance of both the steroidogenic and germ-line cell populations, leading to impairment of the hypothalamic-pituitary-testicular axis and a reduction in sperm quality [5]. A large number of recent studies have focused on the ability of many substances, generally termed nutraceuticals, to improve hormonal status and sperm parameters through different mechanisms [6]. Nutraceuticals are used as ingredients of dietary supplements (DS), widely marketed for the prevention or treatment of a wide range of pathological conditions. From a legislative point of view, the European Food Safety Authority (EFSA) specifies that DS are not intended for the treatment or prevention of disease in humans, but only to support specific physiological functions [7].
Currently, DS are widely prescribed to improve physiological aspects related to male fertility. Many DS are available on the market in various formulations, containing both nutrients and botanicals at different doses. Although many authors have demonstrated positive effects of some ingredients on semen parameters and fertility outcomes [8], many others have shown a lack of efficacy and even potentially harmful side effects [9]. In a recent position statement, the Italian Society of Andrology and Sexual Medicine (SIAMS) summarized the state of the art on each single ingredient currently used in the andrological field. The authors concluded that there was still limited scientific evidence on the possible role of any nutraceutical in andrology, and that the use of antioxidants could be suggested in patients with idiopathic infertility in the presence of documented abnormal sperm parameters only after a specific diagnostic workup. However, to date, no regulations or guidelines are available for the use of these products, generating confusion for both prescribers and patients [10]. Moreover, several factors make it difficult to empirically match the right ingredient to the right patient. In particular, it is difficult to identify the correct DS, since each product contains different ingredients at different doses. The purpose of this study was to critically evaluate the composition of DS employed in male infertility, using the Italian market as a sample.

Materials and Methods

In order to evaluate the potential efficacy of DS, a systematic literature review of substances used to improve sperm parameters was preliminarily performed. The literature search was conducted in the MEDLINE, Scopus, EMBASE and Cochrane Library registers up to 31 March 2020. Only randomized clinical trials (RCTs) and systematic reviews or meta-analyses of RCTs were considered eligible. With the aim of ruling out possible interactions between ingredients, only studies that used active substances alone or in combination with at most three other ingredients were considered. The key terms used for the search were: fertility or male reproduction or semen parameters and supplements or ingredients. Figure 1 displays the flow diagram of the selection of eligible papers. To establish the efficacy of each ingredient, we considered only those having at least one RCT, systematic review or meta-analysis of RCTs demonstrating a significant effect on any sperm parameter involved in male fertility. Significance was set at p-value < 0.05. When evaluating the findings of meta-analyses, we verified whether the statistical methods incorporated substantial heterogeneity (Higgins I² > 30%) into a random-effects model, as appropriate. Regarding the daily dose of each active ingredient with nutrient characteristics, we referred to the tolerable upper intake levels (UL) reported in the Dietary Reference Intakes (DRI) [11]. Based on the results of the available articles, we were able to identify, for each active ingredient, the minimal effective daily dose (mED) able to improve sperm parameters. To define the mED we used the lowest effective dose reported in RCTs, systematic reviews or meta-analyses of RCTs. Therefore, we classified the ingredients contained in each supplement, at the suggested daily dose, into three categories: reported efficacy with a dose achieving the mED (A), reported efficacy but with a dose below the mED (B) and unreported data of efficacy (C).
To classify DS, we created a formula taking into consideration the three classes of ingredients and their numbers:

corrected score = [(2A + B − C) / 2N] × (A + B/2),

where A, B and C are the numbers of ingredients in the respective classes and N = A + B + C is the total number of ingredients in the supplement. In particular, the formula was conceived based on the following sequential steps: (1) Each class of ingredients was given an arbitrary value: A = +2, B = +1 and C = −1; (2) These values were multiplied by the respective number of ingredients within each supplement (A, B and C, respectively), obtaining a total score given by the sum of each category (2A + B − C); (3) As the number of ingredients differed greatly between supplements, we standardized the above total score by dividing it by the maximum possible score for that supplement, obtained by assuming that each ingredient was of class A (= 2N); (4) In order to correct this value for the number of ingredients of categories A and B only, the relative score was multiplied by the sum of the high-efficacy ingredients plus half (as a proxy of their lower efficacy) the number of moderate-efficacy ingredients (A + B/2), finally obtaining a corrected score for each supplement; (5) Given that the distribution of the scores resulted in three main clusters, we classified DS into three categories reflecting the efficacy of the ingredients: higher expected efficacy (corrected score ≥ 4), lower expected efficacy (1 < corrected score < 4) and no expected efficacy (corrected score ≤ 1). We collected the names and formulations of the DS registered in Italy by referring to the register of the Italian Ministry of Health [12].

Results

The literature search on active ingredients allowed us to identify 41 studies (RCTs or meta-analyses) reporting their efficacy on sperm parameters (Figure 1). Through this analysis we found that 18 ingredients had reported efficacy.
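For concreteness, the scoring rule and class thresholds above can be expressed in a few lines of code. The following sketch is a direct transcription of the formula; the example ingredient counts are hypothetical and do not correspond to any specific product.

```python
def supplement_score(a: int, b: int, c: int) -> float:
    """Corrected efficacy score: ((2A + B - C) / 2N) * (A + B/2),
    where N = A + B + C is the total number of ingredients."""
    n = a + b + c
    return ((2 * a + b - c) / (2 * n)) * (a + b / 2)

def efficacy_class(score: float) -> str:
    """Class thresholds as defined in the text."""
    if score >= 4:
        return "higher expected efficacy"
    if score > 1:
        return "lower expected efficacy"
    return "no expected efficacy"

# Hypothetical products: (class-A, class-B, class-C ingredient counts).
for a, b, c in [(5, 1, 0), (2, 3, 2), (1, 2, 6)]:
    s = supplement_score(a, b, c)
    print(f"A={a}, B={b}, C={c}: score = {s:.2f} -> {efficacy_class(s)}")
```

Note how the score penalizes products that pad their formulations with class-C ingredients: in the third example, a single effective ingredient is outweighed by six ingredients without reported efficacy, and the product falls into the lowest class.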
The complete list of ingredients with clinical evidence of efficacy, the respective references, the evaluated sperm parameters and the employed daily doses are summarized in Table 1. In the right-hand column, the mED of each ingredient is reported. In some studies, marked with an asterisk, the employed dose exceeded the reported UL. In particular, all the studies involving zinc evaluated the effect of this ingredient at a dose exceeding the UL. For each active ingredient, the evidence of efficacy was supported by at least two RCTs or meta-analyses, except for astaxanthin, D-aspartic acid and L-citrulline, which had only one reference each. Ingredients without clinical evidence of improvement in sperm parameters (no RCTs or meta-analyses) are listed in Table 2. We found 21 DS marketed in Italy for male infertility. Their compositions and the daily doses of their active ingredients are summarized in Table 3. Moreover, for each supplement, the scores of expected efficacy and the symbols summarizing the efficacy of their ingredients are reported. A detailed analysis of this table raised the following considerations: (i) all supplements were mixtures of active ingredients; (ii) the number of ingredients in each supplement ranged from 2 up to 17, with a mean number higher than 7; (iii) 13 of the 21 supplements contained at least one ingredient without reported efficacy; (iv) 19 supplements had ingredients below the mED; (v) indeed, 1 supplement contained seven ingredients dosed below the mED; (vi) 1 supplement contained only active ingredients satisfying the mED; (vii) product number 9 contained a nutrient reaching the UL (zinc, 40 mg/day); (viii) zinc was the most used ingredient, followed by selenium, arginine, coenzyme Q, folic acid and carnitine. These substances were present in more than 50% of DS, whereas all the remaining ingredients were present in 10% or less of the products. The distribution of DS into the three classes of efficacy is reported in Figure 2. Two DS out of 21 (9.5%) were included in the higher expected efficacy group. The majority of the remaining products (71.4%) fell in the lower expected efficacy group, and four (19.1%) in the group with no expected efficacy.

Discussion

This critical review aimed to evaluate the formulation of supplements for male infertility using the Italian market as a sample.
In general, there is still little evidence from large, well-designed randomized and placebo-controlled trials supporting the efficacy of nutraceutical products in the field of male reproductive health [54,55]. Nevertheless, these products are commonly administered to infertile patients [8,56]. Since a medical prescription is not necessary to purchase dietary supplements, subjects seeking fertility have easy access to these products [10,57]. As a proof of concept, the Italian market for supplements generated 3.3 billion euros in 2019, an increase of 4.3% compared to 2018 [58]. Whilst the rational use of supplements may be potentially beneficial for the improvement of sperm parameters, we must stress that their uncontrolled use is potentially harmful to patients' health due to direct toxic effects and interactions with drugs or nutrients [59]. In this respect, we were surprised to find that all RCTs and meta-analyses on zinc for male infertility relied on doses exceeding the UL. Against this background, in the near future it would be desirable to define more thoughtful criteria for each supplement in use. Our analysis found that, besides the gaps in the literature, the market for food supplements is supported by poor scientific evidence. The majority of DS contained a huge number of ingredients, up to 17. The mixture of such a high number of ingredients may generate different issues, including a low concentration of each substance (i.e., necessitating two or more administrations to reach the daily effective dose), a large volume of pills and a high risk of interactions. What is more, we found that some ingredients included in many DS had no scientific evidence of efficacy (i.e., astragalus, vitamin D3, taurine and riboflavin).
The formulation of pills with a large number of ingredients, some of which confer uncertain benefits, suggests a gap in manufacturers' knowledge of the potential biologic targets. Moreover, it has been reported that some plant extracts present in many of these supplements are likely to interact with drug metabolism [60,61]. This aspect raises further concerns about the safety of these products. Very frequently, nutrients were present in DS at a dosage below the mED. This situation was more common among products with a high number of ingredients. The administration of any active substance at a dose below the mED appears scientifically unjustified because of the uncertainty of the therapeutic result. By contrast, when the number of ingredients was small, the dose often satisfied the mED. Another major aspect in the evaluation of supplements concerns safety. Some ingredients, particularly when administered at high doses, are not free from risks when used as dietary supplements. For example, folates can mask vitamin B12 deficiency, favoring the progression of neurological damage [62]. The combination of these two vitamins could have a synergistic effect in improving homocysteine metabolism and hence sperm quality. It should be noted that vitamin B12, when present, was rarely associated with folic acid [63,64]. Furthermore, zinc reduces intestinal copper absorption by interfering with its carrier [65]. With respect to this, we want to stress that one supplement on the market contained a dose of zinc reaching the UL. On a positive note, our analysis revealed that some active ingredients with reported efficacy are frequently present in the analyzed supplements. Previous studies demonstrated that some ingredients are particularly effective in specific patient conditions. Substances with antioxidant properties are indicated in inflammation of the male accessory glands, of both microbial and non-microbial origin. Several studies performed in asthenozoospermic infertile patients showed that the positive effect of selenium supplementation is dependent on the correct structure of the mitochondrial capsule [66,67]. Carnitine supplementation induced a significant increase in sperm motility in cases of asthenozoospermia with preserved mitochondrial function [68,69]. Due to the key role of zinc in the processes of DNA compaction, administration of this micronutrient was successful in improving sperm morphology and DNA integrity in patients with prostate abnormalities [70,71]. Based on the active ingredients reaching the mED, we created a grading scale of supplements distinguishing three classes of expected efficacy. Two products fell in the higher class, although these still contained some ineffective or underdosed ingredients. Most of the supplements fell in the lower group of expected efficacy; in this class, a large number of ineffective or underdosed ingredients were also present. For an adequate evaluation of these classes, we considered the number of effective ingredients as the most important criterion of efficacy. A relevant aspect was the use of ineffective or underdosed ingredients, which should be absent or as few as possible. Another parameter in evaluating a product was a lower total number of ingredients. We acknowledge that the application of a non-validated statistical method to calculate scores for each DS may represent a point of weakness in this study. Very recently, a validated formula to score supplements was suggested by Kuchakulla et al.
[72], based on Budoff's score, previously conceived by cardiologists to evaluate their procedures [73]. However, when applied to DS, this scoring system does not take into account the effective dose of the ingredients, a crucial point in the evaluation of their efficacy. For example, using this approach, a DS containing ingredients at ineffective or toxic doses would still be considered useful. As a point of strength, our scoring system relied on high-quality evidence coming from RCTs or systematic reviews and meta-analyses of RCTs, which represents a reliable approach to critically weighing the expected efficacy of dietary supplements. The same approach could be applied to evaluate products used in other clinical conditions. In conclusion, this study showed that most DS marketed in Italy for male infertility contain ingredients with reported efficacy in the improvement of sperm parameters. Nevertheless, a non-negligible number of DS are mixtures of substances with uncertain or unreported benefits, whose administration may be unhelpful or even harmful for infertile patients. On that basis, we believe manufacturers should carefully scrutinize the scientific evidence before deciding on each supplement's formulation. Accordingly, physicians should evaluate the composition of DS and the dose of each single constituent before considering their clinical use. Finally, the choice of DS should be tailored to the specific patient's fertility problem.

Author Contributions: A.G. and G.C.P. contributed to the conception/design of the research and the acquisition/analysis of the literature data; A.G., G.C.P. and F.F.-P. contributed equally and drafted the manuscript; A.D.N. conceived and performed the data analyses; L.D.T., A.V. and C.F. critically revised the paper for important intellectual content. All authors revised and approved the final manuscript and agreed to be fully accountable for ensuring the integrity and accuracy of the work. All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the analysis.

Funding: This research received no external funding.
Evaluation of Coughing and Nasal Discharge as Early Indicators for an Increased Risk to Develop Equine Recurrent Airway Obstruction (RAO)

Background: It is often assumed that horses with mild respiratory clinical signs, such as mucous nasal discharge and occasional coughing, have an increased risk of developing recurrent airway obstruction (RAO).

Hypothesis: Compared to horses without any clinical signs of respiratory disease, those with occasional coughing, mucous nasal discharge, or both have an increased risk of developing signs of RAO (frequent coughing, increased breathing effort, exercise intolerance, or a combination of these) as characterized by the Horse Owner Assessed Respiratory Signs Index (HOARSI 1-4).

Animals: Two half-sibling families descending from 2 RAO-affected stallions (n = 65 and n = 47) and an independent replication population of unrelated horses (n = 88).

Methods: In a retrospective cohort study, standardized information on the occurrence and frequency of coughing, mucous nasal discharge, poor performance, and abnormal breathing effort (these factors combined in the HOARSI), as well as management factors, was collected at intervals of 1.3-5 years.

Results: Compared to horses without clinical signs of respiratory disease (half-siblings 7%; unrelated horses 3%), those with mild respiratory signs developed clinical signs of RAO more frequently: half-siblings with mucous nasal discharge 35% (P < .001, OR: 7.0, sensitivity: 62%, specificity: 81%), with mucous nasal discharge and occasional coughing 43% (P < .001, OR: 9.9, sensitivity: 55%, specificity: 89%); unrelated horses with occasional coughing 25% (P = .006, OR = 9.7, sensitivity: 75%, specificity: 76%).

Conclusions and Clinical Importance: Occasional coughing and mucous nasal discharge might represent an increased risk of developing RAO.

Both lay people and veterinarians often assume that horses with mild respiratory signs such as occasional coughing, mucous nasal discharge, or both - clinical signs associated with inflammatory airway disease (IAD) 1 - have an increased risk of developing recurrent airway obstruction (RAO) when compared to horses without these signs. 2 To our knowledge, there is no published evidence for this assumption. The risk for a horse with IAD of developing RAO and the relationship between the 2 conditions are still unknown. 3,4 As a result of a workshop on respiratory disease, the American Association of Equine Practitioners concluded that determining whether IAD is a precursor to RAO is a "priority question". 5 IAD is a common, relatively mild respiratory disease affecting horses of all ages. 1,6 RAO, in contrast, affects middle-aged to older horses and provokes more severe respiratory signs. A strong genetic basis has been shown for RAO 7-9 and hay feeding is the most important environmental factor for its development 4,10,11 and exacerbation. 12,13 IAD and RAO can be differentiated based on clinical signs combined with ancillary diagnostics, such as cytologic evaluation of bronchoalveolar lavage fluid (BALF) and specialized pulmonary function testing. 1 Recently, several questionnaire-based scoring systems for lower airway disease in horses have been developed, ie, the risk screening questionnaire, 14 the HOARSI system, 11 and the visual analog scale. 15 The Horse Owner Assessed Respiratory Signs Index (HOARSI) shows good repeatability and has been validated using standardized comprehensive clinical examination.
11,16 It is also the most extensively used of these scores, especially for genetic and epidemiologic studies. 7-9,17-20 The HOARSI is based on owner-observed clinical signs. Comprehensive validation demonstrated that HOARSI 3 and 4, characterized by frequent coughing, increased breathing effort and exercise intolerance, reliably identify RAO-affected animals. 16 More mildly affected horses showing occasional coughing, mucous nasal discharge, or both are classified as HOARSI 2. HOARSI 1 indicates the absence of all of these clinical signs. Symptom-scoring systems are also used as screening tools for the diagnosis of human asthma, for epidemiologic investigations, and for clinical screening. 21,22 In equine medicine, there are only a few published studies on the value of such scoring systems for clinical screening of horses with lower airway disease, 14-16 and none that have investigated their value in prognostication. The purpose of this study was to evaluate whether scoring of owner-reported information can assist in formulating the prognosis of horses with mild respiratory clinical signs. Coughing and nasal discharge are the most frequently and reliably recognized clinical signs of lower airway disease. 11,15,23,24 If occasional coughing and nasal discharge indicate an increased risk of later developing RAO, these mild clinical signs should be given more importance in the context of decisions on prophylactic measures (ie, environmental changes) to avoid the development of RAO in susceptible animals. The specific aim of this study was therefore to investigate whether the risk of developing RAO is increased in horses with occasional coughing, mucous nasal discharge, or both compared to horses without any respiratory clinical signs.

Study Design

Owners of horses in 2 half-sibling families were interviewed twice with an assessment interval of 1.3-3.8 years (mean = 2.3 years, 95% CI = 2.2-2.4 years). Owners of the independent replication population of unrelated Warmblood horses were also interviewed twice, with a gap of 4.1-5 years (mean = 4.7 years, 95% CI = 4.7-4.7 years). Assessment intervals were significantly different between the 2 populations (P < .001). The following questions were investigated:

Horses and Classification

Two half-sibling families of direct descendants of 2 RAO-affected Warmblood stallions (F1, n = 65 and F2, n = 47) and an independent replication population of unrelated Warmblood horses (n = 88) were graded according to HOARSI 1-4, from healthy to severely affected, as described in detail, 11 and HOARSI 3 and 4 were classified as RAO-affected. 11,16 The replication population was a random sample from the register of the Swiss Equestrian Federation. 17 Feeding of hay during the whole study period was an inclusion criterion, as hay feeding is the most important environmental factor in the development of RAO. 4,10,11 Age, coat color (brown, other colors), time outdoors (0-1 hour, 1-multiple hours; 0-4 hours, 4-multiple hours), sex (stallions and geldings, mares), and clinical signs were recorded. In the half-sibling group (F1 and F2 combined), the overall age range was 6-15 years, while the overall age range of the replication group was 5-24 years. For both populations, 2 age groups with approximately equal numbers of horses were formed for statistical analysis: half-sibling families (F1 and F2 combined), younger group 6-9 years and older group 10-15 years of age; unrelated horses, younger group 5-12 years and older group 13-24 years of age.
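Read as a decision rule, the HOARSI grading described above maps owner-reported signs to grades roughly as in the toy sketch below. This is only an illustration of the logic described in the text: the validated index additionally distinguishes grades 3 and 4 by severity, which is not reproduced here.

```python
def hoarsi_grade(occasional_cough: bool, nasal_discharge: bool,
                 frequent_cough: bool, breathing_effort: bool,
                 exercise_intolerance: bool) -> str:
    """Toy HOARSI-style grading from owner-reported signs (see caveats above)."""
    if frequent_cough or breathing_effort or exercise_intolerance:
        return "HOARSI 3-4 (classified as RAO-affected)"
    if occasional_cough or nasal_discharge:
        return "HOARSI 2 (mild signs)"
    return "HOARSI 1 (no signs)"

# A horse with occasional coughing and mucous nasal discharge only:
print(hoarsi_grade(True, True, False, False, False))  # -> HOARSI 2 (mild signs)
```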
Clinical signs - ie, mucous nasal discharge (absent, present), coughing (none, occasional, regular, frequent), increased breathing effort (absent, present), and performance (poor, adequate, good, excellent) - needed to be persistent for at least 2 months. The questionnaire also documented specific information on management and environmental factors, as previously described in detail. 11

Statistical Analyses

All information was categorized numerically for analyses with NCSS 2007 (NCSS Statistical Software, www.ncss.com). When Chi-square and Fisher's exact tests showed significant results, specificities and sensitivities were calculated and univariable logistic regression analyses were performed to establish odds ratios (OR) with 95% confidence intervals (CI). T-tests or, if the data were not normally distributed, Mann-Whitney U and Wilcoxon rank-sum tests were used to investigate differences between the time intervals of the 2 assessments. The significance level was set at P ≤ .05.

Effects of Occasional Coughing and Mucous Nasal Discharge on the Risk of Developing RAO

Horses with occasional coughing in the first assessment did not develop RAO significantly more often (6 of 32 occasionally coughing horses, 19%) than healthy horses (5 of 71 healthy horses, 7%; P = .08). Time intervals between the assessments did not differ between horses with occasional coughing (mean = 2.3 years, 95% CI = 2.1-2.5 years) at the first assessment and healthy horses (mean = 2.3 years, 95% CI = 2.1-2.4 years; P = .81). Horses with mucous nasal discharge in the first assessment developed RAO significantly more frequently (8 of 23 horses with mucous nasal discharge, 35%) than horses without any respiratory signs (7%; P < .001, OR = 7.0, 95% CI = 2.0-24.6, Table 2). The sensitivity of mucous nasal discharge as a predictive sign for RAO was 62%, and the specificity was 81%. Time intervals did not differ (P = .42) between horses with mucous nasal discharge (mean = 2.4 years, 95% CI = 2.2-2.6 years) and horses without clinical signs (mean = 2.3 years, 95% CI = 2.1-2.4 years) at the time of the first assessment. When owners reported both coughing and nasal discharge at the first assessment, the respective horses developed RAO significantly more often (6 of 14 horses with occasional coughing and nasal discharge combined, 43%) than horses without reported respiratory signs (7%; P < .001, Table 2). The odds ratio was 9.9 (95% CI = 2.5-40.0). The sensitivity of nasal discharge and occasional coughing combined as a predictive sign for RAO was 55%, the specificity 89%. Time intervals did not differ between horses with occasional coughing and nasal discharge combined (mean = 2.3 years, 95% CI = 2.0-2.7 years) and healthy horses (mean = 2.3 years, 95% CI = 2.1-2.4 years) classified at the first assessment (P = .90).

Influence of Age, Sex, Coat Color, and Environmental Factors (Bedding, Time Spent Outdoors) on the Risk of Developing RAO

None of these investigated factors showed significant effects on the development of RAO (Table 1). There were no differences regarding the time intervals for any of these factors (results not shown), except for the factor time outdoors: the time interval differed by 3.3 months between the group spending 0-4 h outdoors and the group spending >4 h outdoors (P = .05).
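The odds ratios, confidence intervals, sensitivities and specificities reported in this section can be reproduced directly from the underlying 2 × 2 counts. The sketch below does this for mucous nasal discharge in the half-sibling families (8 of 23 horses with discharge vs. 5 of 71 healthy horses developed RAO); it uses Fisher's exact test and a Wald confidence interval as stand-ins for the paper's Chi-square and logistic-regression analyses, which here happen to yield the same OR and CI as reported.

```python
import numpy as np
from scipy.stats import fisher_exact

# 2x2 table from the half-sibling results above:
#                   developed RAO   no RAO
table = np.array([[8, 15],    # mucous nasal discharge at first assessment (n = 23)
                  [5, 66]])   # no respiratory signs at first assessment (n = 71)

odds_ratio, p_value = fisher_exact(table)

# Wald 95% CI for the odds ratio on the log scale.
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)

# Sensitivity/specificity of the sign as a predictor of later RAO.
sensitivity = table[0, 0] / table[:, 0].sum()   # 8 / 13  -> 62%
specificity = table[1, 1] / table[:, 1].sum()   # 66 / 81 -> 81%

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f}), P = {p_value:.4f}")
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```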
Unrelated Population

Effects of Occasional Coughing and Nasal Discharge on the Risk of Developing RAO

Horses that showed occasional coughing at the time of the first assessment developed RAO more frequently than horses without any respiratory signs (P = .006, Table 2). Six (25%) of the 24 horses with occasional coughing at the first assessment developed RAO, whereas only 2 (3%) of the 60 previously healthy horses developed RAO. The odds ratio was 9.7 (95% CI = 1.8-52.1). The sensitivity of occasional coughing as a predictive sign for RAO was 75%, and the specificity was 76%. The time intervals of the 2 assessments were significantly different (P = .009) between horses with occasional coughing (mean = 4.7 years, 95% CI = 4.6-4.7 years) and healthy horses (mean = 4.7 years, 95% CI = 4.7-4.8 years) at the time of the first assessment, but the difference in the mean time interval was small (0.8 months). Nasal discharge in the first assessment was not associated with an increased likelihood of signs related to RAO in the second assessment: 2 of 12 horses (17%) with nasal discharge developed RAO in the second assessment (P = .13). The time intervals between the 2 assessments differed significantly, by 1.1 months, between horses with nasal discharge (mean = 4.7 years, 95% CI = 4.6-4.7 years) and horses without respiratory clinical signs (mean = 4.7 years, 95% CI = 4.7-4.8 years; P = .003). The risk of developing RAO in horses with coughing and nasal discharge combined at the time of the first assessment did not differ compared to that in horses without respiratory clinical signs: only 1 of 8 horses with occasional coughing and nasal discharge combined (13%) developed RAO in the second assessment (P = .32). The intervals between the 2 assessments did not differ between these groups.

Influence of Age, Sex, Coat Color, and Environmental Factors (Bedding, Time Spent Outdoors) on the Risk of Developing RAO

None of these investigated factors showed significant effects on the development of RAO (Table 3). There were no significant differences regarding the time intervals for any of these factors (results not shown), except for the factor time outdoors (mean difference between the group spending 0-4 hours outdoors and the group spending >4 hours outdoors: 0.6 months, P = .02).

Discussion

This study shows that occasional coughing and mucous nasal discharge might be early indicators of an increased risk of developing RAO. It has been proposed that small airway disease (used as a synonym for IAD) is a precursor of chronic obstructive pulmonary disease (used as a synonym for RAO). 2 However, many owners are oblivious to the early signs of IAD, which might later progress to RAO with its potentially debilitating consequences. 2 To date, the assumption that horses with IAD are at increased risk of developing RAO remains speculative and has not been appropriately investigated. 3-5 To our knowledge, the present results provide the first scientific basis for this assumption. Specifically, our data demonstrate the predictive value of horse-owner-reported mild clinical signs when evaluating the risk of developing RAO. In the half-sibling group, horses with nasal discharge alone had a 7-fold, and those with nasal discharge and coughing combined an almost 10-fold, increased risk of developing RAO. Specificities for these clinical signs were good, but sensitivities were low. Across all groups most horses did not develop RAO. Even with occasional coughing and nasal discharge combined, the majority remained free of RAO within the subsequent years. In the unrelated horses, investigated as an independent replication population, horses with occasional coughing developed RAO more frequently compared to horses without respiratory signs. A considerable proportion (25%), but still a clear minority, of those with occasional coughing became RAO-affected within 4-5 years.
Overall, the specificities and sensitivities of occasional coughing for developing RAO in horses are comparable to the values of a recently developed composite asthma predictive score in children. 22 A markedly lower proportion of unrelated horses without any clinical signs developed RAO (3% within 4.1-5 years) than of the offspring of the RAO-affected stallions (7% within 1.3-3.8 years). This is likely because of genetic effects, the most important predisposing factor for the development of RAO besides hay feeding. 8,9,11,25,26 A family history of RAO is itself an important predictor of developing the disease, with risks estimated at 3- to 5-fold when one of the parents is affected. 11,26 In human asthma, a recent study also found that family history is an independent predictor for children developing persistent asthma within 4 years. 22 Genetic effects could also potentially explain why coughing was a strong predictor of risk for RAO in the unrelated group, while in the half-sibling group there was only a trend for cough to predict RAO. Alternatively, this could be a statistical power issue or a consequence of the shorter time interval between the first and second examinations in the half-siblings compared to the replication group. The available data do not allow us to test these hypotheses, however; these questions would have to be investigated in further studies. In both populations, the tested individual (signalment) and environmental factors had no influence on the development of RAO, which excluded important potential confounding effects. The most important of all environmental effects in the development of RAO, hay feeding, 10-13,27 was deliberately eliminated by defining continuous hay feeding throughout the study period as an inclusion criterion for all horses. It was somewhat surprising, however, that age group had no effect, because age has previously been shown to be an independent risk factor for the development of RAO. 11,26 All horses in the present study were at least 5 years old, and the results do not apply to younger animals. Our data indicate that occasional coughing, which was a predictor of RAO in both populations, is an important indicator of an increased risk of developing RAO. In contrast, the association of nasal discharge with the development of RAO, which had a marked effect in the half-sibling group only, could not be reproduced in the unrelated population. Previous studies have shown that, compared to coughing, nasal discharge is a less sensitive indicator of pulmonary disease in general 23 and of RAO in particular. 16,28,29 Classification bias is expected to be low with the owner-reported clinical signs used. The HOARSI, which includes owner-reported frequency of coughing and presence of nasal discharge, has proven to have a high reliability of classification. 8,16 In addition, relevant confounding effects associated with the time intervals could be excluded in both populations. Nonetheless, data collection on only 2 occasions constitutes a weakness of the present study. Based on our present data, it is impossible to determine whether the development of RAO was the result of gradual worsening or whether there was a more abrupt deterioration between the 2 assessments. In addition to more frequent assessments, future studies should also include clinical and ancillary examinations, particularly BALF cytology and pulmonary function testing, which would give a more precise and conclusive diagnosis of IAD 1 and RAO 4 than owner-based questionnaires alone.
Cytologic characterization would also allow investigation of which forms of IAD (neutrophilic, eosinophilic, or mast-cell type 1,30 ) are precursors of RAO. The neutrophilic form seems most likely to be responsible, because its presence is most often accompanied by coughing. 30 In conclusion, mild but persistent respiratory signs, particularly occasional coughing, can indicate an increased risk of developing RAO. Thus, when a horse presents with signs persisting for more than 2 months, further clinical and ancillary examinations should be considered, especially when a familial history of RAO is known. This will help the owner and clinician to decide on the need for prophylactic measures, such as environmental changes, with the goal of avoiding the development of RAO.
Network community detection via neural embeddings Recent advances in machine learning research have produced powerful neural graph embedding methods, which learn useful, low-dimensional vector representations of network data. These neural methods for graph embedding excel in graph machine learning tasks and are now widely adopted. However, how and why these methods work, particularly how network structure gets encoded in the embedding, remain largely unexplained. Here, we show that shallow neural graph embedding methods encode community structure as well as, or even better than, spectral embedding methods for both dense and sparse networks, with and without degree and community size heterogeneity. Our results provide the foundations for the design of novel effective community detection methods as well as theoretical studies that bridge network science and machine learning. Significance statement Graph embeddings map network data onto low-dimensional vector representations, which can be easily integrated into machine learning applications. We demonstrate that, for networks with planted communities, shallow linear neural networks for graph embedding (node2vec, DeepWalk, and LINE) capture the community structure down to the theoretical community detectability limit. Using benchmark networks with built-in communities, we show that neural embedding is a practical and robust approach to representing community structure, with comparable or even superior performance with respect to spectral embedding methods. Our results reveal that neural graph embedding can achieve the fundamental limit of community detectability without the need for deep layers and non-linear activation functions, laying the foundation for future research at the interface between network science and machine learning. Introduction Networks represent the structure of complex systems as sets of nodes connected by edges [1,2,3] and are ubiquitous across diverse domains, including social sciences [4,5], transportation [6,7], finance [8,9], science of science [10,11], neuroscience [12,13], and biology [14,15,16]. Networks are complex, high-dimensional, and discrete objects, making it highly non-trivial to obtain useful representations of their structure. For instance, recommendation systems for social networks typically require informative variables (or "features") that capture the most important structural characteristics. Often, these features are designed through trial and error, and may not be generalizable across networks.
Graph embeddings automatically identify useful structural features for network elements, most commonly for the nodes [17,18]. Each node is represented as a point in a compact and continuous vector space. Such a vector representation enables the direct application of powerful machine learning methods, capable of solving various tasks, such as visualization [19,20], clustering [21,22], and prediction [23,18,24]. This representation can facilitate the operationalization of abstract concepts using vectorial operations [25,26,20,27,28]. Graph embeddings have been studied in various contexts. For example, spectral embedding stems from the spectral analysis of networks [17,29]. A closely related formulation is matrix factorization [30,31]. Recent years have witnessed a substantial shift towards a new paradigm of graph embeddings based on neural networks [32,33,34,35,36,37,20,38,22,39,40], which have demonstrated remarkable effectiveness across many computational tasks [23,34,38,39,35,39,40]. Yet, due to the inherent black-box nature of neural networks, how and why these methods work is still largely unknown, and we lack a firm understanding of how particular network structures get encoded in embeddings. One of the fundamental and ubiquitous features of networks is community structure, i.e., the existence of cohesive groups of nodes, characterized by a density of within-group edges that is higher than the density of edges between them [41,42,43]. In practice, neural graph embedding methods are widely used to discover communities from networks [31,34,38,26]. The stochastic block model (SBM) is a basic generative model of networks with community structure [44,45] and is regularly used as a benchmark for community detection algorithms. Some clustering methods are able to correctly classify all nodes into communities in large and dense networks generated by the SBM, provided that the average degree increases as the number of nodes increases [46,47,48,21,49,50]. However, most networks of interest in applications are sparse [51,1], in that their average degree is usually much smaller than the network size. The task of community detection is particularly hard on very sparse networks. For instance, the performance of many spectral methods significantly worsens as the graph gets sparser [52,53], which has led to the development of remedies such as non-backtracking walks [52,54,53] and consensus clustering [55]. However, it remains unclear how neural graph embeddings perform on sparse networks, how much edge sparsity hampers their ability to detect communities, and how they fare against traditional clustering techniques, especially spectral methods.
Here, we prove that graph embedding methods based on a shallow neural network without non-linear activation, such as DeepWalk [38], LINE [39], and node2vec [34], can detect communities all the way down to the information-theoretical limit on graphs generated by the SBM [56]. Our results imply that two common components of deep learning, multiple "deep" layers and non-linear activation, are not necessary to achieve the optimal limit of community detectability. Numerical experiments reveal that these methods also perform remarkably well in the limit of sparse networks, getting close to the theoretically optimal performance curve of the belief propagation (BP) method [56] for networks generated by the SBM. In particular, node2vec [34] learns the community structure in more realistic networks with heterogeneous distributions of degree and community size substantially better than spectral embeddings, BP, and traditional clustering techniques. The excellent performance of node2vec is consistent across different levels of edge sparsity, community sizes, and degree heterogeneity. Our results might inform powerful community detection algorithms and improve our theoretical understanding of clustering via neural embeddings. We have made available the code to reproduce all the results at [57]. Detectability limit of communities We first consider the standard setting studied in papers concerning community detectability [53,52,58]. We focus on undirected and unweighted networks with community structure generated according to the planted partition model (PPM) [59], a special case of the SBM where nodes are divided into q equal-sized communities, and two nodes are connected with probability p_in if they are in the same community and with probability p_out if they are in different communities. We assume that the networks are sparse, i.e., p_in and p_out are inversely proportional to the number n of nodes. Therefore, the average degree ⟨k⟩ and the ratio of edge probabilities p_in/p_out do not depend on n.
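Spelled out, this scaling means (a restatement in the notation c_in = n p_in and c_out = n p_out adopted in the Supporting Information; no additional assumption is introduced):

\[
p_{\mathrm{in}} = \frac{c_{\mathrm{in}}}{n}, \qquad
p_{\mathrm{out}} = \frac{c_{\mathrm{out}}}{n}, \qquad
\langle k \rangle = \frac{c_{\mathrm{in}} + (q-1)\,c_{\mathrm{out}}}{q},
\]

with c_in and c_out held constant as n grows, so that ⟨k⟩ and p_in/p_out = c_in/c_out are indeed independent of n.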
We specify the edge probabilities via the mixing parameter µ = np_out/⟨k⟩. The mixing parameter indicates how blended communities are with each other. As µ → 0, communities are well separated and easily detectable. For larger values of µ, community detection becomes harder. For µ = 1, which corresponds to p_in = p_out, the network is an Erdős-Rényi random graph and, as such, has no community structure. We note that the mixing parameter µ is slightly different from the traditional mixing parameter µ_LFR used in the Lancichinetti-Fortunato-Radicchi (LFR) benchmark, which is defined as µ_LFR = (1 − 1/q) np_out/⟨k⟩. The difference between µ and µ_LFR is negligible for large q. Communities are present for all µ-values in the range [0, 1), because the edges are more densely distributed within communities than between them. For a given algorithm, communities are detectable if the partition found by the algorithm has greater similarity with the planted partition than the trivial division in which node labels are randomly shuffled. However, it has been shown that there is a regime µ* ≤ µ < 1 in which communities are not detectable by any algorithm [58,56]. This is because, due to fluctuations in the numbers of neighbors within and between the groups, the true communities are effectively indistinguishable from random subgraphs of the same size, with respect to the imbalance between the internal and the external degree of the nodes. The threshold µ* marks the information-theoretical detectability limit of communities in graphs generated by the PPM. Detectability limit of node2vec We determine the maximum mixing parameter µ*_A below which communities are detectable by an algorithm, which we refer to as the algorithmic detectability limit. We first give a high-level description of our derivation of the algorithmic detectability limit for node2vec. We note that our derivation can be directly applied to other neural graph embeddings such as DeepWalk [38] and LINE [39]. See the Methods section for the step-by-step derivations. Our analysis is based on the fact that node2vec generates its embedding by effectively factorizing a matrix when the number of dimensions is sufficiently large [30]. This insight enables us to study node2vec as a spectral method (see Methods). Spectral algorithms identify communities by computing the eigenvectors associated with the largest or smallest eigenvalues of a reference operator, such as the combinatorial and normalized Laplacian matrices. Each eigenvector corresponds to a community in a network, with the entries having similar values for the nodes in that community. Therefore, when using eigenvectors to represent the network in vector space, nodes in the same community are projected onto points in space lying close to each other, so that a data clustering algorithm can separate them [17].
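To make the setting concrete, the sketch below (illustrative, not the authors' code; it assumes networkx, scipy, and scikit-learn, and the parameter values are arbitrary) generates a PPM network at a given µ and computes the normalized-Laplacian spectral embedding just described:

```python
import networkx as nx
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

n, q, k_avg, mu = 2000, 2, 50, 0.3
p_out = mu * k_avg / n                    # from mu = n p_out / <k>
p_in = k_avg * (q - (q - 1) * mu) / n     # from <k> = (n p_in + (q-1) n p_out) / q

sizes = [n // q] * q
probs = [[p_in if r == s else p_out for s in range(q)] for r in range(q)]
G = nx.stochastic_block_model(sizes, probs, seed=1)

L = nx.normalized_laplacian_matrix(G).astype(float)
# The smallest eigenvalues of L carry the community signal; following the
# Supporting Information, shift the spectrum (2I - L) and take the largest.
shifted = 2 * sp.identity(L.shape[0], format="csr") - L
vals, vecs = eigsh(shifted, k=q)          # eigenvectors unchanged by the shift
labels = KMeans(n_clusters=q, n_init=10, random_state=0).fit_predict(vecs)
```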
The existence of such localized eigenvectors can be inferred by analyzing the spectrum of the reference operator using random matrix theory. For instance, this approach has been applied to determine the detectability limit of the normalized Laplacian matrix for networks generated by the PPM [60]. We find that, under some mild conditions, the spectrum of the node2vec matrix is equivalent to that of the normalized Laplacian matrix. Hence, the detectability limit of node2vec matches that of the spectral embedding with the normalized Laplacian matrix [60]: communities are detectable as long as

c_in − c_out > q √⟨k⟩,   (1)

where c_in = np_in and c_out = np_out. See Supporting Information Section 2 for the expression of the detectability limit in terms of the mixing parameter µ. This threshold exactly corresponds to the information-theoretical detectability limit µ* of the PPM [58,55]. In other words, node2vec has the ability to detect communities down to the information-theoretic limit in principle. However, like in the case of spectral modularity maximization [58], our analysis is only valid when the average degree is sufficiently large. Nevertheless, as we shall see, our numerical simulations show that node2vec performs well even if the average degree is small. Experiment setup As baselines, we use two spectral embedding methods whose detectability limit matches the information-theoretical one: spectral modularity maximization [58] and Laplacian EigenMap [61]. In addition, we use two other neural embeddings, DeepWalk [38] and LINE [39]. DeepWalk and LINE share the same architecture as node2vec but are trained with different objective functions [30,62]. Furthermore, we employ the spectral algorithm based on the leading eigenvectors of the non-backtracking matrix, which reaches the information-theoretical limit even in the sparse case for networks generated by the PPM [52]. For all embedding methods, we set the number of dimensions, C, to 64. Finally, we employ three community detection algorithms: Infomap [63], statistical inference of the microscopic degree-corrected SBM [44], and the BP algorithm [56]. Note that we set the initial parameters of the BP algorithm based on the ground-truth communities to yield the maximal performance. See Supporting Information Section 4 for the parameter choices of the models and the implementations we used. Community detection via graph embedding is a two-step process: • First, the network is embedded, which yields a projection of nodes onto points in a vector space. • Second, the points are divided into groups using a data clustering method (e.g., K-means clustering). Thus, the performance of community detection depends on both the quality of the embedding and the performance of the subsequent data clustering procedure. Since we focus on the ability of neural embedding methods to generate representations where clusters are detectable, we want to control the second step by using an ideal clustering method that can optimally find the clusters for a given representation. To do so, we use a K-means algorithm with fixed centroids (i.e., Voronoi clustering), whose positions are determined by the locations of the true communities in the embedding space, and clustering is performed by assigning each point/node to the centroid/cluster with the highest cosine similarity. See Supporting Information Section 6 for the results for the ordinary K-means algorithm. By using this algorithm, we can focus on the question of whether an embedding method can successfully encode community structure or not.
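A minimal sketch of this fixed-centroid ("Voronoi") clustering step (an assumed implementation consistent with the description above, not the authors' code):

```python
import numpy as np

def voronoi_labels(emb, planted):
    # emb: node-embedding matrix; planted: true community label per node.
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors: cosine = dot
    groups = np.unique(planted)
    centroids = np.array([emb[planted == c].mean(axis=0) for c in groups])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    return groups[np.argmax(emb @ centroids.T, axis=1)]     # nearest centroid by cosine
```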
We assess the performance by comparing the similarity between the planted partition of the network and the detected partition of the algorithm. We used the element-centric similarity [64], denoted by S, with an adjustment such that a random shuffling of the community memberships for the two partitions yields S = 0 on expectation (see Supporting Information Section 1). This way, for planted divisions into equal-sized communities, S = 0 represents the baseline performance of the trivial algorithm, while S > 0 indicates that communities are detectable by the given algorithm.

Figure 1: Performance of community detection methods for networks generated by the PPM as a function of the mixing parameter µ. We generated networks with n = 10^5 nodes, different edge sparsity (⟨k⟩ = 5 in A and D, ⟨k⟩ = 10 in B and E, ⟨k⟩ = 50 in C and F), and different numbers of communities (q = 2 for A-C and q = 50 for D-F). The dashed vertical line indicates the theoretical detectability limit µ* given by (1): communities are detectable (i.e., S > 0), in principle, below µ*. Spectral embedding methods detect communities up to the theoretical limit for dense networks (C and F), supporting the algorithmic limit derived by previous studies [58,60]. However, for sparse networks, they fall short even at low µ-values (A and D). node2vec outperforms spectral methods, with the performance curve close to that of the BP algorithm, which is supposed to be optimal. Note that even the BP algorithm falls short of the exact recovery of some easily-detectable communities in the case of q = 50 communities, even with the initial parameters set according to the ground-truth communities.

Simulations: Planted Partition Model We test the graph embedding and community detection algorithms on networks of n = 100,000 nodes generated by the PPM, with q ∈ {2, 50} communities of equal size and average degree ⟨k⟩ ∈ {5, 10, 50} (Fig. 1). Spectral methods find communities better than random guessing below the detectability limit µ*, i.e., S > 0 for µ < µ* and ⟨k⟩ = 50 (Figs. 1C and F). However, their performance is much worse when the average degree is small (⟨k⟩ = 5, Figs. 1A and D). For example, Laplacian EigenMap falls short below the detectability limit (µ < µ*), despite having the optimal detectability limit when the average degree is sufficiently large [65]. All techniques, including BP, which is supposed to be optimal for sparse networks, fail to exactly recover the clusters for sparse networks even if the value of µ is low (⟨k⟩ = 5, Figs. 1A and D). We find that misclassifications are inevitable for these highly sparse networks because some nodes end up being connected with other communities more densely than with their own community by random chance. The BP algorithm also fails for the networks with q = 50 communities, even for small µ values. This may be because BP employs a greedy optimization strategy that may converge to a suboptimal solution near the starting point. Notably, the poor performance of the BP algorithm is mainly observed in the networks with 50 communities (q = 50), where the prevalence of many local minima may exacerbate the limitations of the greedy optimization. On the other hand, node2vec is substantially better than the spectral methods, and its performance is the closest to that of the BP algorithm for sparse networks (Figs. 1A and D). node2vec consistently achieves a good performance across different numbers of communities and different network sparsity. Furthermore, node2vec performs well even if we reduce the embedding dimension C from 64 to 16, which is smaller than the number of communities in the cases where q = 50 (Supporting Information Section 5).
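For the average degrees used in these simulations, the positions of the dashed threshold lines can be made explicit (a worked evaluation, assuming the closed form µ* = 1 − 1/√⟨k⟩ derived in Supporting Information Section 2):

\[
\mu^*\big(\langle k\rangle = 5\big) = 1 - \tfrac{1}{\sqrt{5}} \approx 0.55, \qquad
\mu^*\big(\langle k\rangle = 10\big) \approx 0.68, \qquad
\mu^*\big(\langle k\rangle = 50\big) \approx 0.86 .
\]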
Simulations: LFR benchmark The PPM is a stylized model that lacks key characteristics of empirical community structure. We test the graph embedding methods using more realistic networks generated by the LFR model [66], which produces networks with heterogeneous degree and community size distributions, to assess the performance of the methods in a more practical context. Unlike the PPM, however, the theoretical detectability limit of communities in LFR networks is not known. We build the LFR networks by using the following parameter values: number of nodes n = 10,000, degree exponent τ_1 ∈ {2.1, 3}, average degree ⟨k⟩ ∈ {5, 10, 50}, maximum degree √(10n), community-size exponent τ_2 = 1, community size range [50, √(10n)]. In LFR networks, the BP algorithm and the non-backtracking embedding, which have an excellent performance on the PPM networks, at least in theory, underperform noticeably, suggesting that optimal methods for the standard PPM may not perform well in practice. On the other hand, node2vec consistently has the best performance, with a larger margin in sparser networks (Fig. 2). The performance of node2vec is also consistent across networks with different levels of degree heterogeneity. Even with the smaller embedding dimension C = 16, node2vec performs comparably well with Infomap, which is known to be very accurate on LFR networks [67] (Supporting Information Section 5).

Figure 2: Performance of community detection methods on the LFR benchmark networks, as a function of the mixing parameter µ. We generated networks with n = 10^4 nodes with different edge sparsity (⟨k⟩ = 5 in A and D, ⟨k⟩ = 10 in B and E, ⟨k⟩ = 50 in C and F). The degree exponent τ_1 = 2.1 in A, B, and C, and τ_1 = 3 in D, E, and F. node2vec consistently performs well across different sparsity regimes for most µ-values, with a larger margin for sparser networks. The BP algorithm, which is provably optimal for networks generated by the PPM, fails to identify some easily-detectable communities, even with the initial parameters set according to the ground-truth communities.

Discussion We investigated the ability of neural graph embeddings to encode communities by focusing on shallow linear graph neural networks, node2vec, DeepWalk, and LINE, and comparing them with traditional spectral approaches. We proved that, for not too sparse networks created by the PPM, node2vec is an optimal method to encode their community structure, in that the algorithmic detectability limit coincides with the information-theoretic limit. In particular, our experiments on the PPM and LFR benchmarks show that node2vec consistently excels on sparse networks with small and moderate average degree, with homogeneous and heterogeneous degrees and community sizes in the detectable regime, demonstrating its high robustness and potential in the analysis of empirical networks.
Our results provide an alternative perspective to the common design principles of neural networks widely accepted for text and image processing. In these applications, deep neural structures and non-linear activation are considered indispensable in order to achieve high performance. The neural network architecture is also critical for graph neural networks for the community detection task [68]. Our findings further demonstrate that a simple neural network with only one hidden layer and no non-linear activation can achieve the information-theoretical detectability limit of communities, with performance close to or superior to that of the best methods for community detection. DeepWalk [38] and LINE [39] are also optimal in terms of the detectability limit of communities (Supporting Information Section 2). However, node2vec surpasses both DeepWalk and LINE in numerical tests, owing to two key features. First, node2vec learns degree-agnostic embeddings, which are highly robust against degree heterogeneity [62]. By contrast, DeepWalk tends to learn node degree as the primary dimension in the embedding space [62]. Consequently, degree heterogeneity introduces considerable noise to the community structure in the DeepWalk embedding. Second, LINE is a specific instance of node2vec with window size T = 1 [30], and thus learns the dyadic relationships between nodes. As is the case for node2vec, LINE is resilient to degree heterogeneity, and it performed close to node2vec for some networks in our simulations. However, it did not perform as well as node2vec, and this discrepancy may be attributed to LINE's emphasis on learning stochastic and noisy dyadic relationships, as opposed to the indirect relationships that node2vec captures. Our results come with caveats. First, we focused on the best achievable clustering performance, by using Voronoi clustering with the centroids of the planted communities, because we wanted to control any factors coming from the data clustering step so that we could focus on the representation learning. However, we also fine-tuned other community detection methods, the SBM and the BP algorithm, using the information on the planted partition, such as the number of communities, their sizes, and edge probabilities. Thus, caution should be taken when interpreting the results: our analysis reports an upper bound on the performance, and the actual performance in practice will depend on the choice and configuration of the data clustering method. Indeed, a previous study [22] using the K-means algorithm demonstrated that node2vec did not perform as well as standard community detection methods even when its hyperparameters were fine-tuned. By contrast, we did not fine-tune the parameters of our embedding methods. Hence, we believe that the previous results [22] are primarily due to the limitations of the K-means clustering algorithm (when the initial position of the centroids is arbitrarily chosen), rather than to the embedding. Second, in our analytical derivations, we assumed that the average degree is sufficiently large, as is the case for the corresponding analysis of spectral modularity maximization [58]. Thus, the optimality may not hold if networks are substantially sparse. However, our simulations suggest that node2vec is resilient to network sparsity compared with traditional spectral embedding methods. Understanding the factors underlying such resilience is an interesting direction for future work.
Third, while we restricted ourselves to the community detection task, graph embeddings have been used for other tasks, including link prediction, node classification, and anomaly detection. Investigating the theoretical foundation behind the performance of neural embeddings in other tasks is a promising research direction. Even with these caveats, we believe that our study will provide the foundation for future studies that uncover the inner workings of neural embedding methods and bridge the study of artificial neural networks to network science. node2vec as spectral embedding node2vec learns the structure of a given network based on random walks. A random walk traverses a given network by following randomly chosen edges and generates a sequence of nodes x^(1), x^(2), . . . . The sequence is then fed into skip-gram word2vec [69], which learns how likely it is that a node j appears in the surrounding of another node i up to a certain time lag T (i.e., window length) through the conditional probability

P(x^(t+τ) = j | x^(t) = i) = exp(u_i^⊤ v_j) / Z,

where 1 ≤ τ ≤ T, and Z is a normalization constant. Each node i is associated with two vectors: vector u_i represents the embedding of node i; v_i represents node i as a context of other nodes. Because the normalization constant is computationally expensive, node2vec uses a heuristic training algorithm, i.e., negative sampling [69]. When trained with negative sampling, skip-gram word2vec is equivalent to a spectral embedding that factorizes the matrix R^n2v with elements [70,30]

R^n2v_ij = ln [ (1/T) Σ_{τ=1}^{T} P(x^(t+τ) = j | x^(t) = i) / P(x^(t) = j) ],   (2)

in the limit of C → n with T greater than or equal to the network diameter, where P(x^(t) = i) is the probability that the tth node in the given sequence is node i (see Supporting Information Section 3 for the step-by-step derivation). This interpretation of node2vec as a spectral embedding allows us to derive the algorithmic detectability limit from the spectrum of R^n2v. Deriving the spectrum of R^n2v in a closed form is challenging because R^n2v involves element-wise logarithms. We approximate the element-wise logarithm by a linear function by assuming that the window length T is sufficiently large. To demonstrate our argument, let us describe R^n2v_ij in the language of random walks. Given that the network is undirected and unweighted, the probability P(x^(t) = j) corresponds to the long-term probability of finding the random walker at node j. The probability P(x^(t+τ) = j | x^(t) = i) refers to the transition of a walker from node i to node j after τ steps. In the limit τ → ∞, the walker reaches the stationary state, and P(x^(t+τ) = j | x^(t) = i) approaches P(x^(t) = j). Thus, in the regime of a sufficiently large T, we take the Taylor expansion of the logarithm in (2) around 1 (ln x ≃ x − 1), which yields

R^n2v_ij ≃ (1/T) Σ_{τ=1}^{T} P(x^(t+τ) = j | x^(t) = i) / P(x^(t) = j) − 1.

In matrix form,

R̄^n2v = (2m/T) Σ_{τ=1}^{T} (D^{-1}A)^τ D^{-1} − 1_{n×n},   (4)

where A is the adjacency matrix, D is a diagonal matrix whose diagonal element D_ii is the degree k_i of node i, m is the number of edges in the network, and 1_{n×n} is the n × n all-one matrix. We used P(x^(t) = j) = k_j/2m and P(x^(t+τ) = j | x^(t) = i) = [(D^{-1}A)^τ]_ij, derived from the fact that P(x^(t) = j) is proportional to degree in undirected networks; D^{-1}A is the transition matrix, whose τth power represents the random walk transition probability after τ steps. The node2vec matrix R̄^n2v has a connection to the normalized Laplacian matrix, L, which is tightly related to the characteristics of random walks and network communities [71]. The normalized Laplacian matrix is defined by L = I − D^{-1/2} A D^{-1/2}, where I is the identity matrix.
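For concreteness, the matrix in (2) can be evaluated numerically on a small network; the following is an illustrative numpy sketch (not the authors' code), valid when T is at least the network diameter so that every entry of the averaged transition matrix is positive:

```python
import numpy as np

def node2vec_matrix(A, T=10):
    # R_ij = ln[ (1/T) * sum_{tau=1..T} P(x^(t+tau)=j | x^(t)=i) / (k_j / 2m) ]
    k = A.sum(axis=1).astype(float)          # degrees k_i
    m = A.sum() / 2.0                        # number of edges
    P = A / k[:, None]                       # transition matrix D^{-1} A
    S = np.zeros_like(P)
    Pt = np.eye(len(A))
    for _ in range(T):                       # accumulate (1/T) * sum of powers of P
        Pt = Pt @ P
        S += Pt / T
    return np.log(S * (2.0 * m) / k[None, :])

# Tiny usage example: a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
R = node2vec_matrix(A, T=10)
```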
By using an alternative expression of the transition probability, i.e., (D^{-1}A)^τ = D^{-1/2}(I − L)^τ D^{1/2}, we rewrite R̄^n2v as

R̄^n2v = 2m D^{-1/2} [ (1/T) Σ_{τ=1}^{T} (I − L)^τ − (D^{1/2}1_n/√(2m)) (D^{1/2}1_n/√(2m))^⊤ ] D^{-1/2},   (5)

where 1_n is a column vector of length n. We note that vector D^{1/2}1_n/√(2m) is a trivial eigenvector of L associated with the null eigenvalue, λ_1 = 0. Furthermore, (I − L)^τ changes the eigenvalues while keeping the eigenvectors intact. This means that R̄^n2v can be specified by using the spectrum of L, i.e.,

R̄^n2v = D^{-1/2} Γ ϕ(Λ) Γ^⊤ D^{-1/2},   (6)

where Γ ∈ R^{n×n} is the matrix of the eigenvectors of L, and ϕ is a graph kernel [18] that transforms the eigenvalues λ_i (i = 1, 2, . . . , n) of L by ϕ(λ_i) = (2m/T) Σ_{τ=1}^{T} (1 − λ_i)^τ for the non-trivial eigenvalues, with ϕ(λ_1) = 0. Equation (6) tells us that the eigenvectors U of R̄^n2v are equivalent to the eigenvectors Γ of the normalized Laplacian matrix, up to the linear transformation D^{-1/2}. Building on the correspondence between the normalized Laplacian L and the node2vec matrix R̄^n2v, we derive the algorithmic community detectability limit of node2vec. Following [58,65,60], we assume that the network consists of two communities generated by the PPM. Then, the non-trivial eigenvector of L encodes the communities and has the optimal detectability limit of communities, provided that the average degree is large (⟨k⟩ ≫ 1) [58,65,60]. This non-trivial eigenvector of L corresponds to the principal eigenvector of R̄^n2v. Specifically, the non-trivial eigenvector of L is associated with the smallest non-zero eigenvalue λ_2, which is λ_2 < 1 when each community is densely connected within itself and sparsely with other communities [17]. The eigenvalues are mirrored in the eigenvalues ϕ(λ_i) of R̄^n2v, and λ_2, the smallest non-zero eigenvalue, yields the maximum ϕ-value (Fig. 3). This correspondence of non-trivial eigenvectors between R̄^n2v and L suggests that communities detectable by L are also detectable by R̄^n2v and vice versa. Thus, spectral embedding with R̄^n2v has the same information-theoretic detectability limit as spectral methods relying on eigenvectors of L, for networks with sufficiently high degree. Detectability limit of DeepWalk We expand our argument to include DeepWalk [38]. Similar to node2vec, DeepWalk also trains word2vec but with a different objective function. Furthermore, DeepWalk is equivalent to a matrix factorization if the embedding dimension is sufficiently large and the window size T is greater than the network's diameter [30,62]. More specifically, DeepWalk generates an embedding by factorizing a matrix with entries [62]

R^DW_ij = ln [ (1/T) Σ_{τ=1}^{T} P(x^(t+τ) = j | x^(t) = i) ],   (9)

up to an additive constant, in the limit of C → n with T being greater than the network diameter. When the random walker is in the stationary state at time t and makes sufficiently many steps (τ ≫ 1), we have

lim_{τ→∞} P(x^(t+τ) = j | x^(t) = i) = P(x^(t) = j) = k_j/2m.   (10)

In particular, if the degree distribution is Poisson and the average degree is sufficiently large,

k_j/2m ≃ 1/n,   (11)

which is true for the PPM. By substituting (11) into (10), we obtain lim_{τ→∞} P(x^(t+τ) = j | x^(t) = i) ≃ 1/n. Armed with this result, let us derive the detectability limit of DeepWalk. Assuming that the window length T is large, we take the Taylor expansion of (9) around ϵ_ij := (1/T) Σ_{τ=1}^{T} P(x^(t+τ) = j | x^(t) = i) − 1/n = 0, which yields R^DW_ij ≃ n ϵ_ij − ln n. In matrix form,

R^DW ≃ (n/T) Σ_{τ=1}^{T} (D^{-1}A)^τ, up to an additive constant matrix.

Note that R^DW is similar to the node2vec matrix R̄^n2v ((4)). The right/left eigenvectors of R^DW are obtained from those of the normalized Laplacian by simple multiplications by the operators D^{1/2} and D^{-1/2}, respectively. Therefore, DeepWalk has the information-theoretical detectability limit as well. Detectability limit of LINE LINE [39] is a special version of node2vec with the window length being T = 1. The corresponding matrix factorized by LINE is given by [30]

R^LINE_ij = ln ( A_ij/(k_i k_j) + a_0 ) + ln 2m.

For LINE, although Ref.
[30] shows R^LINE_ij = ln(A_ij/(k_i k_j)) + ln 2m, we introduce a small positive value a_0 (a_0 > 0) to prevent the matrix elements from being infinite for A_ij = 0. To obtain the spectrum of R^LINE, we exploit the Taylor expansion ln(x + a_0) ≃ x/a_0 + ln a_0 around x = 0, where a_0 > 0. Specifically, assuming that the average degree is sufficiently large, we obtain

R^LINE_ij ≃ A_ij/(a_0 k_i k_j) + a_1,

or equivalently in matrix form

R^LINE ≃ (1/a_0) D^{-1} A D^{-1} + a_1 1_{n×n},   (17)

where a_1 := ln a_0 + ln(2m). Equation (17) is reminiscent of (5) for node2vec. Comparing Eqs. (17) and (5), it immediately follows that they share the same eigenvectors, and thus node2vec and LINE have the same detectability threshold.

Supporting Information: Network community detection via neural embeddings Sadamori Kojaku, Filippo Radicchi, Yong-Yeol Ahn, Santo Fortunato

1 Element-centric similarity We adjusted the original definition of the element-centric similarity in Ref. [1] such that the score for two random partitions is zero. In the following section, we define the element-centric similarity and its expected value for random partitions. Then, we define the adjusted element-centric similarity.

Element-centric similarity for partitions Element-centric similarity (ECS) quantifies the difference between two partitions of nodes. Let us represent a partition via the membership variables g = (g_1, . . . , g_n), where g_i denotes the group to which node i belongs and n is the number of nodes. In the following three steps, ECS computes the similarity of two partitions g and g′. First, ECS constructs the affinity graph for each partition. In the affinity graph for partition g, two nodes (i, j) are connected by an edge if they belong to the same community (i.e., g_i = g_j). Otherwise, i and j are not directly connected. Second, ECS computes the neighborhood of each node by using a random walk. The random walk has a probability α of restarting the walk from the starting node. Because a node is connected to all other nodes in the same group in the affinity graph, the transition probability p^g_ij from i to j is given by

p^g_ij = α δ_ij + (1 − α) δ_{g_i g_j} / n^g_{g_i},   (1)

where n^g_{g_i} is the number of nodes in group g_i in partition g, and δ_ij is the Kronecker delta. Third, ECS deems two partitions g and g′ as similar if the respective transition probabilities p^g_ij and p^{g′}_ij are similar, i.e.,

S(g, g′) = 1 − 1/(2(1 − α)n) Σ_i Σ_j | p^g_ij − p^{g′}_ij |.   (2)

By substituting Eq. (1) into Eq. (2), we obtain an expression, Eq. (3), for S(g, g′) in terms of n^{g,g′}_{c,c′}, the number of nodes that belong to group c in partition g and group c′ in partition g′; C_g and C_{g′} denote the number of groups in partition g and g′, respectively. We note that the restarting probability α is canceled and does not affect the similarity.

Element-centric similarity for random partitions We derived the element-centric similarity between a given partition g and random partitions ζ. We generate the random partition by shuffling the group membership. This randomization preserves the number of groups and the size of each group. In the random partition, a node belongs to a group c′ of size n^ζ_{c′} with probability n^ζ_{c′}/n. Thus, the expected number of nodes in group c in partition g that belong to group c′ in the random partition is given by

E[ n^{g,ζ}_{c,c′} ] = n^g_c n^ζ_{c′} / n.   (4)

By substituting Eq. (4) into Eq. (3), we obtain the expected similarity for random partitions, Eq. (5), which can be written in terms of z^g_c = n^g_c/n, the fraction of nodes in group c in partition g.

Normalized element-centric similarity We adjusted the element-centric similarity such that random partitions have a score of zero, i.e.,

S̃(g, g′) = ( S(g, g′) − E[S(g, ζ)] ) / ( 1 − E[S(g, ζ)] ).   (6)

2 Reparameterization of detectability limit In Ref.
[2], the detectability limit for the spectral embedding with A is described using c_in = np_in and c_out = np_out as

c_in − c_out > q √⟨k⟩.

First, we rewrite the inequality using ⟨k⟩, p_in, p_out as

np_in − np_out > q √⟨k⟩,   (9)

where we have exploited

⟨k⟩ = ( np_in + (q − 1) np_out ) / q.   (10)

By rearranging Eq. (10) into np_in = q⟨k⟩ − (q − 1)np_out and substituting it into Eq. (9), we obtain

µ < 1 − 1/√⟨k⟩,

where we remind that µ = np_out/⟨k⟩.

3 word2vec as a matrix factorization node2vec generates a sequence of nodes x^(1), x^(2), . . . by simulating random walks on the given network. This sequence then trains the skip-gram word2vec using negative sampling [3]. Negative sampling learns a correlational association between the center and context nodes in light of a random correlation. More specifically, consider the conditional probability P(x^(t+τ) = j | x^(t) = i) that node j appears after τ steps from the center node x^(t) = i. This probability is strongly correlated with the frequency of j in the entire sequence, P(x^(t+τ) = j), because a frequent node in the given sequence also frequently appears in the window. Negative sampling discounts this frequency effect by contrasting the context j with a random node j′ sampled from the given sequence. Operationally, one generates a list D of node pairs to train the word2vec model. List D is a union of two lists D_data and D_rand. D_data includes the node pairs (i, j) consisting of a center node i and a context node j that co-appear in the same window in the given sequence. Another list D_rand includes the node pairs (i, j′) consisting of a center node i sampled from the given sequence and a random node j′ sampled from a random distribution P_0(j′). We use a typical random distribution, i.e., we use the long-term probability P(x^(t) = j′) of random walks as P_0(j) [4]. Then, the skip-gram word2vec model estimates the probability that a given pair (i, j) comes from D_data by

P( (i, j) ∈ D_data | i, j ) = σ(u_i^⊤ v_j) := 1 / ( 1 + exp(−u_i^⊤ v_j) ),

where u_i and v_j are the column embedding vectors of center node i and context node j, respectively. The embedding vectors are determined by maximizing the log-likelihood

J = Σ_{(i,j) ∈ D_data} ln σ(u_i^⊤ v_j) + Σ_{(i,j) ∈ D_rand} ln ( 1 − σ(u_i^⊤ v_j) ).

The maximization of J can be translated into a matrix factorization problem [5]. One parametrizes the dot similarity u_i^⊤ v_j as a single variable R_ij and assumes that the elements R_ij are independent of each other. This assumption holds if the embedding dimension is sufficiently large [5,6]. By taking the derivative and solving ∂J/∂R_ij = 0, we obtain

exp(R_ij) = |D_data| P( (i, j) ∈ D_data ) / ( |D_rand| P( (i, j) ∈ D_rand ) ).

We assume that P( (i, j) ∈ D_data ), P( (i, j) ∈ D_rand ) > 0, which is true when the window size is larger than or equal to the diameter of the network. Rearranging the equation yields

R_ij = ln P( (i, j) ∈ D_data ) − ln P( (i, j) ∈ D_rand ) + ln |D_data| − ln |D_rand|.   (17)

Now, let us specify P( (i, j) ∈ D_data ) and P( (i, j) ∈ D_rand ). Recall that D_data is the node pairs sampled from a random-walk sequence generated from the given network. More specifically,

P( (i, j) ∈ D_data ) = Σ_{τ=−T, τ≠0}^{T} P_NS(τ) P( x^(t) = i, x^(t+τ) = j ),

where P_NS(τ) is the probability that the τth node is sampled as a context node and paired with the center node i. Because each context node is sampled with the same probability, P_NS(τ) = 1/2T, which gives

P( (i, j) ∈ D_data ) = (1/2T) Σ_{τ=−T, τ≠0}^{T} P( x^(t) = i, x^(t+τ) = j ).   (19)

Another list D_rand is created by two independent sampling processes, one process sampling a node i from the given sequence and the other process sampling another node j from the random distribution P_0(j). The former process is essentially the same as the latter because P_0(j) is proportional to the frequency of node j. Thus, we have

P( (i, j) ∈ D_rand ) = P( x^(t) = i ) P_0(j),

or equivalently,

P( (i, j) ∈ D_rand ) = P( x^(t) = i ) P( x^(t) = j ).   (21)

Altogether, by substituting Eqs. (19) and (21) into Eq. (17), we have

R_ij = ln [ (1/2T) Σ_{τ=−T, τ≠0}^{T} P( x^(t) = i, x^(t+τ) = j ) / ( P( x^(t) = i ) P( x^(t) = j ) ) ] + ln |D_data| − ln |D_rand|.

We can neglect the constant −ln |D_rand| + ln |D_data| because it does not change the non-trivial eigenvectors of R.
Thus, we obtain

R_ij = ln [ (1/2T) Σ_{τ=−T, τ≠0}^{T} P( x^(t) = i, x^(t+τ) = j ) / ( P( x^(t) = i ) P( x^(t) = j ) ) ].

Random walks in undirected networks are reversible, i.e., P(x^(t) = i, x^(t+τ) = j) = P(x^(t) = i, x^(t−τ) = j). Thus, we have

R_ij = ln [ (1/T) Σ_{τ=1}^{T} P( x^(t+τ) = j | x^(t) = i ) / P( x^(t) = j ) ],

i.e., Eq. (2) in the main text. By substituting R_ij = u_i^⊤ v_j, we obtain a matrix decomposition problem R = U^⊤ V, where U = (u_1, . . . , u_n) and V = (v_1, . . . , v_n) are the matrices whose columns are the embedding and context vectors. Because R is a symmetric matrix, we can find such a decomposition by the eigendecomposition, i.e.,

R = Γ Λ Γ^⊤,   (26)

where Γ is the matrix of eigenvectors, and Λ is a diagonal matrix with the corresponding eigenvalues in the diagonals. Equation (26) coincides with the optimal solution for the original objective J of word2vec, provided that the embedding dimension C is equal to the number of nodes. Even with a smaller C, Eq. (26) provides a good approximation [4,6].

node2vec, LINE, and DeepWalk For node2vec, we set the length of a single walk to 80, the number of walkers per node to 40, the length of the window to 10, and the number of epochs to train to 1, while not biasing the random walk (p = q = 1). We use the word2vec implemented in the gensim package [7] with the default parameters of version 4.3. For LINE, we increase the number of walks to 400 because LINE is trained with fewer iterations than node2vec. Similarly, we increase the number of walks for DeepWalk to 120. We set the other parameters to those used in node2vec.

Laplacian EigenMap We used the standard eigenvector solver, scipy.linalg.eigs, implemented in scipy [8] to compute the eigenvectors of the normalized Laplacian matrix. However, the eigenvector solver did not converge for some networks due to numerical instability. To improve numerical stability, we transformed the normalized Laplacian matrix as follows. The normalized Laplacian matrix is given by

L = I − D^{-1/2} A D^{-1/2},

where L is the normalized Laplacian matrix, I is the identity matrix, D is a diagonal matrix whose ith diagonal entry D_ii is the degree of node i, and A is the adjacency matrix. Laplacian EigenMap relies on the eigenvectors of L with the non-zero smallest eigenvalues. Some of these smallest eigenvalues can be nearly zero, which is the cause of the numerical instability. Thus, we shift the eigenvalues by

L̃ = 2I − L.

This transformation changes the eigenvalue λ_i of L to 2 − λ_i, while the eigenvectors remain unchanged. Because the eigenvalues of L are bounded in the range [0, 2], the smallest eigenvalues of L correspond to the largest eigenvalues of L̃, with a sufficient distance from zero. Thus, Laplacian EigenMap can be obtained by computing the eigenvectors associated with the largest eigenvalues of the shifted Laplacian matrix L̃.

Modularity embedding Modularity embedding relies on the eigenvectors of the modularity matrix given by

Q = A − k k^⊤ / 2m,

where k is the column vector of length n with element k_i indicating the degree of node i, and m is the number of edges in the network. Since the modularity matrix Q is a fully dense matrix, computing its eigenvectors is expensive. However, Q and the adjacency matrix A, a sparse matrix, have the same eigenvectors. Thus, we can compute the eigenvectors of Q through A. Let us demonstrate our argument by noting that, according to the Perron-Frobenius theorem, the principal eigenvector of A is in parallel with k. Thus, the transformation A − kk^⊤/2m only changes the largest eigenvalue while keeping the eigenvectors and secondary eigenvalues intact. We exploit this property to compute the modularity embedding. To find the C-dimensional embedding, we compute the C + 1 eigenvectors associated with the largest eigenvalues of A. Then, we discard the eigenvector associated with the largest eigenvalue. We used scipy.linalg.eigs implemented in the scipy package [8].
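The node2vec settings listed above translate into a short pipeline; the sketch below is illustrative (not the authors' implementation) and uses the karate-club graph as a stand-in for the benchmark networks:

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, walk_len=80, walks_per_node=40, seed=0):
    # Unbiased walks (p = q = 1): each step picks a uniformly random neighbor.
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in G.nodes():
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

G = nx.karate_club_graph()
model = Word2Vec(random_walks(G), vector_size=64, window=10,
                 sg=1, negative=5, epochs=1, min_count=0)
vec = model.wv["0"]   # embedding vector u_i of node 0
```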
Non-backtracking walk embedding Following [9], we computed the eigenvectors of the non-backtracking matrix. We used scipy.linalg.eigs implemented in the scipy package [8]. Because the solver did not converge for some networks, we relaxed the convergence criterion by setting tol=0.0001.

Flat SBM We used the degree-corrected stochastic block model without hierarchical partitioning implemented in the graph-tool package [10]. Since we focus on the basic clustering ability of the method, the number of communities is set to the number of true communities.

Belief propagation Belief propagation is an optimal method for sparse networks generated by the stochastic block model. We employed the code, sbm, provided by an author of the original paper [11,12]. We set all the parameters based on the true communities. More specifically, we set the number of communities to the true number of communities. We then set the "cab matrix", specified by the -c option of sbm, to the density of edges between and within groups, multiplied by the number of nodes. Lastly, we set the fractional group size, specified by the -P option of sbm, to the fraction of nodes in each true community. We made available our Python wrapper for sbm at [13].

Embedding with a smaller number of dimensions We tested graph embeddings with a smaller number of dimensions, i.e., C = 16 (Figs. 1 and 2). Despite the fact that C is lower than the number of true communities q in the examples where q = 50, we find qualitatively the same results as those for C = 64 reported in the main text. Specifically, for the stochastic block model, the community detection performance decreases compared to C = 64 dimensions, and node2vec outperforms other graph embedding methods for most values of the mixing parameter µ. The performance of node2vec stands out for q = 50, where the number of communities is larger than the number of dimensions (Figs. 1D-F). For the LFR model, the community detection performance decreases overall (Fig. 2). Nevertheless, node2vec is comparable to or outperforms Infomap.

Embedding with the K-means algorithm We located the cluster centroids used in the Voronoi clustering based on the true community membership. However, the true community membership is often unknown in practice. To investigate the actual performance of graph embedding, we employ the K-means algorithm with K set to the number of true communities. While the K-means algorithm performs worse than Voronoi clustering, the difference is small for the SBM (Fig. 3). On the other hand, for the LFR model, there is a noticeable degradation in clustering performance, especially for small µ values (Fig. 4). This performance degradation may be due to the heterogeneity in the size of clusters. Because the K-means algorithm has a tendency to identify balanced clusters [14], it fails to identify communities in the LFR model, where communities can have very different sizes.
Ethics of randomised controlled trials – not yet time to give up on equipoise In this commentary on Fries and Krishnan's argument that 'design bias' undermines the status of equipoise as the ethical justification for randomised controlled trials, it is argued that their argument is analogous to Bayesian arguments for the use of informative priors in trial design, but that this does not undermine the importance of equipoise. In particular, mismatches between the outcomes of interest to industrial sponsors of research and outcomes of interest to patients and clinicians ensure that in many cases industry-sponsored trials can fail to reflect the reasonable equipoise of working clinicians. Introduction James Fries and Eswar Krishnan have recently presented an interesting argument for the proposition that 'equipoise is a false and diverting principle' and propose an alternative test of the ethics of a randomised controlled trial, the 'positive expected value' test [1]. The concept of equipoise has been introduced into the medical literature on many occasions [2-7]. Both concepts are intended to give ethical justification to entering patients into randomised controlled trials. The problem that critics of such trials pose is that entering a patient into a trial seems to involve knowingly failing to offer the patient the treatment the doctor believes to be best for the patient, in the interests of scientific research and future patients. However, if there is genuine uncertainty as to which of the treatments being compared is superior, then randomised assignment can be justified [8]. There is considerable debate in the literature about how to give rigorous expression to what 'genuine uncertainty' requires. How much uncertainty? Whose uncertainty? When should we stop being 'uncertain' and start being 'certain'? The concepts of 'equipoise' defined in this literature are all attempts to give more precise expression to what is meant by 'uncertainty' here, and to give a sound basis to the ethical justification of randomisation in controlled trials. Design bias Fries and Krishnan argue that in the context of licensing trials of new drugs these debates are irrelevant and misleading. They argue that new drugs that reach industry-sponsored phase III trials are more likely to be effective than not, because they reach this stage of testing only if they have survived rigorous preclinical and clinical screening, and because the trial design decisions that are taken are those most likely to produce a positive result. They argue that this is demonstrated by the fact that all the trials they reviewed produced positive results in favour of the new drugs being tested [1]. On the basis of this, they argue that equipoise is being systematically violated. Their empirical argument is not strong methodologically, and they acknowledge that there are alternative explanations for their finding. In addition, their ex post finding that all the trials they reviewed gave positive results does not entail that ex ante the triallists were not substantially uncertain that they would gain a positive result. Nevertheless, their qualitative argument for the existence of 'design bias' is plausible. The question is: what follows from this? Design bias and the Bayesians One response is to argue that the Fries-Krishnan argument is nothing new, because in effect the Bayesians have been arguing something similar for years.
Some Bayesian philosophers argue that randomisation is unnecessary in the first place, and on that basis randomised controlled trials are unethical [9,10]. Most Bayesian statisticians and triallists, however, do accept that randomisation has its place in trial design [11,12]. What is required, they say, is that one starts with an 'informative prior' that fixes the rate at which people are initially randomised to different arms of the trial. On this account, something like equipoise or uncertainty remains the ethical justification for randomisation. The concepts of equipoise normally used are qualitative (one is either uncertain, or not, or the community at large is uncertain or not). Here the concept used is quantitative (one specifies a degree of belief in the proposition that the new drug is safe or effective, and a range of degrees of belief within which one counts as 'uncertain' about the truth or falsity of that proposition) [13]. As Fries and Krishnan argued, and as most Bayesians also accept, this approach to the decision to run a trial, and to design it in a particular way, involves subjective judgements about what is important and about what it is fair to offer patients. This then involves placing weight not only on what clinicians believe, and on what they think is important, but also on what patients believe and think important [5,14,15]. Problems with the Fries-Krishnan-Bayesian approach One response to the claim that phase III trials are systematically prone to design bias is the following. Suppose that any new drug in phase III trials is likely to work, at least to some extent. The primary purpose of such a trial cannot then be to determine whether or not the new drug is effective. Instead, it is to measure how effective it is, and, secondly, to identify any problems with using the drug in clinical practice (rare adverse events, the tolerability of known side-effects, adherence to treatment, quality-of-life issues). If this is the purpose of phase III trials, then this will mean that different types of design and different numbers of patients will be required in many cases than are now required for trials that aim at proving effectiveness alone. This may have the effect of undermining 'design bias'. If the origin of design bias is sponsors selecting the design that will put their new product in the best light, then this represents a constraint on the designs they are entitled to choose. Developing this admittedly speculative thought: even within the Bayesian approach there is considerable complexity. Designs that make full use of the ability to alter the assignment of patients to arms of the trial in the light of new information can be complex and difficult to analyse, and the choice of prior to reflect the different degrees of belief of sceptics or enthusiasts in the clinical and patient communities can be controversial [12,13,16]. Designing a trial that reflects the triallists' confidence in the new product, while allowing a fair test of that product, which produces results that can be understood by, and can hence persuade, the clinician who is neutral about the new product is harder than it looks. Fries and Krishnan might object that if design bias is endemic, then the clinician ought not to be neutral about new products; this is a very strong claim to make, however, and I will return to it. The next problem is that a design chosen to present the new drug in as favourable a light as possible may well not be the design that answers the question that is clinically relevant [17].
They may measure the 'wrong' outcomes or make the 'wrong' comparisons. Clinicians may be interested in the relative effectiveness of drug versus surgery for osteoarthritis of the knee, yet they are offered very little evidence on this type of question; patients may be more interested in mobility than in pain control, but mobility may not be used as an outcome measure [18,19]. Consider, therefore, the clinician who is not involved directly with the drug development but is interested in either participating in the trial, or (later on) in using the results of the trial to inform her practice. On the Fries-Krishnan view, she ought to have a prior degree of belief in favour of the new product's effectiveness. Other things being equal, she seems to be being asked to consider any new drug as an advance: otherwise why would the drug company put all its effort into developing it? Yet the reasonably experienced clinician will know that new drugs are not always advances on the existing pharmacopoeia, will not always give patients outcomes they prefer, and may sometimes be harmful or ineffective in practice. So how enthusiastic ought the clinician to be? The reasonable patient deserves to be informed by his clinician about new products and new trials, but also about the ins and outs of such products and such trials. In practice, these considerations would lead clinicians and patients towards something very like equipoise, save in those happy situations in which there is close concordance between the interests of patients, clinicians, triallists and sponsors. Conclusions Fries and Krishnan are certainly correct in arguing that the equipoise concept has serious problems. Yet it is not the case that it is dead in the water. For practical clinical purposes it remains the central test of the ethical justification for randomisation. They are also correct to stress the role of patient autonomy and patient preferences in the design and conduct of trials. What they establish is that equipoise is neither a necessary nor a sufficient condition for a trial to be justified. Some trials do not require equipoise, and not all trials with equipoise are ethically justified. For example, phase I and II trials are rarely based on equipoise, and some trials in chronic illness or in non-serious acute illness can be conducted with placebo control even when there is an effective standard therapy, provided that the patients consent and are really free to choose the alternatives [20]. Some trials of potentially life-saving treatments, to which there is no effective alternative, are arguably unethical if patients have no choice but to enter the trial [21]. Patient autonomy is surely very important. But the point of the equipoise principle is that doctors need to be able to assure themselves and their patients that the offer of randomisation is not suboptimal. The defect of the Fries-Krishnan claim (that trials can be ethical if there is positive expected benefit) is that this need not be maximal: doctors, on this theory, can knowingly and willingly do less than their best for their patients. The point of the equipoise theory was that it seeks to show how randomisation can be consistent with seeking to do one's best for one's patient. Although conceptual problems remain to be resolved with equipoise, the ethical costs of giving up on it as the default justification are high [4,5,7].
It may be that we will eventually find a better justification for trials than equipoise, but I am not convinced that 'design bias' is a sufficient reason to give up on equipoise just yet.

Competing interests

The author declares that he has no competing interests.
Large N limit of SO(N) gauge theory of fermions and bosons

In this paper we study the large N_c limit of SO(N_c) gauge theory coupled to a Majorana field and a real scalar field in 1+1 dimensions, extending ideas of Rajeev. We show that the phase space of the resulting classical theory of bilinears, which are the mesonic operators of this theory, is OSp_1(H|H)/U(H_+|H_+), where H|H refers to the underlying complex graded space of combined one-particle states of fermions and bosons and H_+|H_+ corresponds to the positive frequency subspace. In the beginning, to simplify our presentation, we discuss in detail the case with Majorana fermions only (the purely bosonic case is treated in our earlier work). In the Majorana fermion case the phase space is given by O_1(H)/U(H_+), where H refers to the complex one-particle states and H_+ to its positive frequency subspace. The meson spectrum in the linear approximation again obeys a variant of the 't Hooft equation. The linear approximation to the boson/fermion coupled case brings an additional bound state equation for mesons which consist of one fermion and one boson, again of the same form as the well-known 't Hooft equation.

Introduction

Gauge theories play a fundamental role in our description of nature. Nevertheless, our understanding of the confining phase of gauge theories is not so complete. In principle we should be able to calculate the hadronic spectrum starting from Quantum Chromodynamics (QCD), which is a gauge theory, yet this has not been possible up to now. It is believed that the hadrons are colorless excitations of the underlying gauge theory and we never see the constituent quarks as free particles. This suggests that in this case we should have an independent formulation of gauge theories in terms of color singlet operators of the original gauge theory. In general this is a very hard task. Gauge theories in 1+1 dimensions provide a great testing ground for many ideas about realistic theories. This is a great simplification: various difficult problems of higher dimensional theories will not be there, yet there are still interesting aspects of these theories which make them worth studying in depth. In [1] Rajeev constructed a theory of mesons in two dimensions in the limit where N_c, the number of colors in SU(N_c), goes to infinity, using only the color invariant variables (which correspond to the meson operators). The idea that QCD should simplify while keeping all its essential features in this limit goes back to 't Hooft [3,4], and the idea that this limit should be a kind of classical mechanics goes back to Migdal and Witten [5]. This is a very promising step in simplifying gauge theories, but the large-N_c theory is also quite complicated and it is not yet possible to understand it in four dimensions. Originally 't Hooft studied two dimensional QCD in the large-N_c limit to understand the meson spectrum and obtained his bound state equation in his seminal paper [4]. Soon after, scalar two dimensional QCD was worked out by Shei and Tsao in [6] following 't Hooft, and later by Tomaras using Hamiltonian methods in [7]. These works obtained the analog of the 't Hooft equation for this case. A natural extension of these would be to look at combined (fermionic) QCD and scalar QCD; this is done in a paper of Aoki [8], where it is shown that three types of mesons are possible and they all obey a certain type of 't Hooft equation (see also [9]).
Cavicchi [10], using a path integral approach with bilocal fields developed in [11], studied coupled fermions and bosons as well as some other models in two dimensions, and obtained some generalized versions of the 't Hooft equation. To understand gauge theories better, we study the problem of bosons and fermions coupled to SO(N_c) gauge fields in 1+1 dimensions. We will apply the methods developed by Rajeev to this toy model. We recommend his lectures for a more detailed exposition of the underlying ideas and various other directions [12]. In [1] it was shown that the phase space of two dimensional QCD is an infinite dimensional Grassmannian [13]. Using the same methods, the scalar version of QCD is worked out in [14]; the phase space of that theory comes out to be an infinite dimensional disc. Recently Konechny and the second author obtained the large-N_c phase space of bosons and fermions coupled to SU(N_c) gauge theory: a certain kind of super-Grassmannian [15]. The linearized equations agree with the ones found in [8]. The correct equations are nonlinear, and various approximation schemes are also discussed in [15]. There are some ideas in the literature which suggest that gauge theories in two dimensions all behave in a very similar way [16]; therefore it will be interesting to see how much of this holds for SO(N_c) gauge theory. The organization of our work is as follows. Since we did not want to go into the technical details of super-geometry immediately, we first study the purely fermionic case. The essential calculations are very similar to the ones in Rajeev's lectures [12], and for the geometry the basic ideas are already in [17,13]; we also recommend the article [18] for a good discussion. We show that one can formulate the large-N_c limit in terms of bilinears along the lines in [1]. We obtain a variant of the 't Hooft equation in the linear approximation. We explain the geometry of the phase space and show that it is a homogeneous manifold, O_1(H)/U(H_+) (see the explanations in section IV), and the symplectic form is the natural one. In the second part, we study the combined system of bosons and fermions; this part is very brief, and we state mostly the results. We obtain a super-Poisson structure of the bilinears in the large-N_c limit and the resulting Hamiltonian. The equations of motion in the linear approximation agree with the purely bosonic and purely fermionic ones, with an additional one for the mesons made up of one fermion and one boson. This is again a variant of the well-known 't Hooft equation. The discussion of the geometry of the resulting infinite dimensional supersymplectic space requires some new ideas. This part is technically complicated; we use essentially Berezin's ideas [19], but we do not claim that all the technicalities of the infinite dimensional case are understood. We show that the underlying phase space should be the super-homogeneous manifold OSp_1(H|H)/U(H_+|H_+), and the supersymplectic form is the natural one on this space. We plan to come back to the more mathematical aspects of this problem in a future publication.

The SO(N_c) Majorana Fermions in the Light-cone

Since the basic philosophy was explained in [1] we can be brief and only state our conventions and define our theory.
We will use the light cone coordinates [20] for an introduction to light-cone quantization, and [21] for a more comprehensive review), the action functional is where we have an SO(N c ) gauge theory for which the matter fields are in the fundamental representation and Tr denotes an invariant inner product in the Lie algebra. The Lie algebra condition for SO(N c ) implies that A T µ = −A µ . To compute the variations of the action we need the independent degress of freedom, we can expand A µ = A a µ T a where T a are the generators of the Lie algebra of SO(N c ), chosen such that TrT a T b = − 1 2 δ ab . Our conventions for the Majorana fermions are as follows: we choose the Majorana representation in which the fermions are real, i. e. Ψ † M = Ψ T M (transpose here also includes the color indices to simplify the notation). The gamma matrices now are given by, Note that γ 5 happens to be diagonal in 1 + 1 dimensions and we setΨ M = Ψ T M γ 0 . We now rewrite the action in the light-cone coordinates and eliminate all nondynamical degrees of freedom. We write Ψ M = ψ 1 ψ 2 , and use γ + = 1 . We further set A − = 0 and choose x + as the evolution variable, which we call "time". (from now on T only means tranpose in the color space). We note also that we have a real two component fermion, they are Grassmann valued obeying ψ 1 ψ 2 = −ψ 2 ψ 1 . We can check that the action is real if we use the following complex conjugation convention for spinors, (ψξ) * = ξ * ψ * . We see that ψ α 1 is non-dynamical, and hence can be eliminated using its equation of motion, Similarly we solve for the nondynamical A a + , and get A remark is in order here to clarify what we mean by 'real fermions' while the action and the constraint equation for A a + have explicit factors of i. The resolution of this seeming paradox is that it is the equations of motion which are actually real. To see this note first that the symplectic form has a factor of i in it, and the ψ α 1 constraint only has real operators. The A a + constraint is also real if we choose complex conjugation of fermions to be (ψξ) * = ξ * ψ * . This convention implies that the product of two real fermions is imaginary, and this is the reason for the extra factors of i in the action. The equation of motion for ψ α 2 reads, which is manifestly real. This shows that the "time" evolution preserves the real valuedness condition imposed on the fermions. If we insert the above constraints into the action we arrive at This defines our theory at the classical level with the redundant degrees of freedom eliminated. Since it is written entirely in terms of ψ 2 we will refer to this field as ψ from now on. The real fermions have a super-poisson bracket, which can be read off from the action, given by Since our real fermions are Grassmann valued we use a symplectic structure which is i times a real symmetric operator, and the Hamiltonian is actually i times an antisymmetric one, as we will see in more detail in the next section. There is an ambiguity in the quantization, we follow Rajeev's original approach [12,1], we will remove the nondynamical fields after quantizing the dynamical field, ψ α . Using the Dirac rule we get an anticommutator for ψ, (Note that for the orthogonal group, the distinction of upper and lower indices is irrelevant, since the metric tensor is unity). The reason for our convention of complex conjugation is to arrive at this more familiar form of the Clifford algebra. 
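The display with the explicit gamma matrices was lost in extraction. For the reader's orientation, a standard Majorana representation consistent with the properties stated above (purely imaginary matrices, so that the reality of the fermions is preserved, and a diagonal gamma-five) is sketched below; the paper's exact sign and normalization choices are our assumption.

```latex
% A common 1+1 dimensional Majorana representation (an assumption:
% the paper's original display was lost). All matrices are purely
% imaginary, and \gamma^5 = \gamma^0\gamma^1 is diagonal, as stated.
\gamma^0 = \sigma^2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},
\qquad
\gamma^1 = i\sigma^1 = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix},
\qquad
\gamma^5 = \gamma^0 \gamma^1 = \sigma^3
         = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
```

With such a representation in hand, we return to the choice of conjugation convention discussed next.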
We could have chosen a convention in which the product of real fermions is real, then we would arrive at a Clifford algebra with a factor of i as in [17], but this usual form is preferable (it leads to a positive inner product, or the hermitian conjugation is compatible with the inner product in the fermionic Fock space). Let us introduce the Fourier decomposition, which is done in a complex Hilbert space (to simplify notation we drop the subscript in p − , seth = 1 and [dp] = dp 2π ), (To be precise, in the above expansion, we should assume a cut-off ǫ 0 around zero momentum, to be taken to zero at the end of our calculations). We see that χ α (p) satisfies the basic anticommutator, where we write δ[p − q] = 2πδ(p − q). Real valuedness of the original field implies that χ α † (p) = χ α (−p). In standard physics notation the expansion would be written as [dp] 2 3/4 . As well known in the physics literature, to make the Hamiltonian bounded from below, we should choose a vacuum to be used to construct a fermion Fock space, and further impose a normal ordering prescription. This is done by simply requiring that χ α (p)|0 >= 0 for p > 0, and defining : This can be stated in one formula as, For most of our calculations we need only the bilinears and the above expressions. (For the Hamiltonian we actually need the normal ordering of product of four such operators, and it is defined as usual all the annihilation operators are to be taken to the right of creation operators, it will be briefly explained later on). We can reduce our Hamiltonian after this quantization process, and we get, Above we used (T a ) αβ (T a ) λσ = − 1 2 (δ ασ δ βλ − δ αλ δ βσ ) and the Green function The last normal orderings can be rearranged to act only on the color invariant combinations in the large-N c limit, we will discuss this in the next section. Next we introduce the algebra of color invariant bilinears and study the resulting system in the large-N c limit following [1,12]. Classical mechanics of color invariant operators We define color invariant bilinears as in [1,12,15] to be our dynamical variables and find the large-N c limit by postulating Poisson algebra of these bilinears and defining the phase space to be a manifold where these Poisson brackets make sense. Since the theory is superrenormalizable we expect this to be related to the Hilbert-Schmidt ideal condition which is well-known in the literature on the Fock spaces [13,22,23,24]. We will see these aspects in more detail in the next section when we talk about the geometry of the phase space. We define our basic dynamical variables, bilinears, which are color invariant combinations of the fermion operators. We find it useful to define a related operator,F (p, q) =R(−p, q), we will see that this is the correct variable for the geometry of the phase space. We assume that there are proper large-N c limits of our operators, then they become classical variables when they are restricted to color invariant sector of the full Fock space. Following [1], we postulate the following Poisson brackets(we choose the quantization parameter to be 1 Our dynamical system is not defined completely yet, since there is still a left over global color invariance, generated bŷ The commutators of these generators satisfy the Lie algebra of SO(N c ). 
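An aside on notation before the color-invariance constraint is imposed: the normal-ordering displays above were lost in extraction. The prescription they encode can be summarized as subtraction of the vacuum expectation value; a sketch, our reconstruction using the vacuum defined by chi^alpha(p)|0> = 0 for p > 0, is the following.

```latex
% Reconstruction (assumption): normal ordering as VEV subtraction,
% with \delta[p-q] = 2\pi\delta(p-q) as in the text.
:\chi^{\alpha\dagger}(p)\,\chi^{\beta}(q):
  \;=\; \chi^{\alpha\dagger}(p)\,\chi^{\beta}(q)
  \;-\; \langle 0|\,\chi^{\alpha\dagger}(p)\,\chi^{\beta}(q)\,|0\rangle,
\qquad
\langle 0|\,\chi^{\alpha\dagger}(p)\,\chi^{\beta}(q)\,|0\rangle
  \;=\; \delta^{\alpha\beta}\,\theta(-p)\,\delta[p-q].
```

We return now to the constraint generated by the global color charges.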
If we restrict ourselves to the color invariant states, we find a constraint equation satisfied in the large-N c limit, which can be best expressed in terms of F (p, q) = R(−p, q), We define ǫ(p, q) = −sgn(p)δ[p −q], then we can rewrite this constraint as a simple quadratic operator equation, (we interpret F, ǫ as integral kernels acting on L 2 space of initial data). In the next section we will analyze the geometric meaning of these constraints. The Hamiltonian and the above Poisson brackets determine the evolution of our classical system; the Poisson brackets are consistent with the constraint equation. The large-N c Hamiltonian is obtained by dividing the original Hamiltonian by N c and rewriting it in terms of our large-N c variables. After certain manipulations which are sketched below, we obtain the following Hamiltonian, where P and F P refer to the principal value and finite part prescriptions, respectively. In the following we will often write short for P and F P , but one should keep in mind that these regularization perscriptions are used to define the singular integrals. The main steps of the derivation of the above Hamiltonian are very similar to the one in [12], although there are some small differences. Here we supply the basic ingredients to help the reader: for simplicity in many places we write x, y instead of x − , y − , we define ǫ(z) = P ∞ −∞ sgn(p)e +ipz , note the sign of the exponent. We have the vacuum expectation value of our field product, An important formula for the reduction is given in [12]: We also have |x−y| = F P [dp] p 2 e ip(x−y) . We use a form of Wick's theorem for normal ordered products, Note that when we take the large-N c limit we can expand the full normal ordering in the leading order to get : ψ α (x)ψ α (y) :: ψ β (x)ψ β (y) :. In the above equality the fourth and fifth terms on the right are of smaller order in the large-N c limit as well as the last term in the equality. The sixth term is an infinite vacuum expectation value, but that is a constant term which will not contribute to the equations of motion hence we can drop it. As a result, Using the above formulae we get a finite renormalization of the mass term. Let us compute the equations of motion at the linear approximation. What we mean by this is to linearize the constraint as well as the equations of motion. The linearization of the constraint simply says that R(u, v) = 0 if u, v have different signs. We thus restrict ourselves to u, v > 0 and compute We also put P = u + v, x = u/P and make the ansatz R(u, v; x + ) = ζ R (x)e −iP + x + . For further details we refer to the previous works [1,12] where similar calculations are done in more detail with the same type of ansatz; this yields an eigenvalue equation, where µ 2 = 2P + P is the invariant mass of the excitation. By looking at the behaviour of this equation under x → 1 − x, and y → 1 − y, we see that we can choose our wave functions to be antisymmetric under y → 1 − y, thus ζ(1 − y) = −ζ(y). This gives us, This equation is one of our main results and it is a variant of the well-known 't Hooft equation. Apart from the numerical factors this equation is the same as the original one, and this result fits to the ideas in [16]. Its properties are well known, the most important one is that there are only bound state solutions. An interesting question is the existence of "baryon" like excitations. These should correspond to operators of the form but the meaning of these operators as N c → ∞ is not so obvious. 
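A reconstruction aside before returning to the baryon question: the eigenvalue equation itself was lost in extraction. As the text notes, apart from numerical factors it coincides with the original 't Hooft equation, whose standard form we record here for orientation; the precise coefficients for the SO(N_c) Majorana case are our assumption and should be checked against the original derivation.

```latex
% Standard 't Hooft form of the bound state equation (the couplings
% \alpha and \lambda are placeholders; the text guarantees only
% agreement with the original equation up to numerical factors):
\mu^2\,\zeta(x) \;=\;
  \alpha\left(\frac{1}{x} + \frac{1}{1-x}\right)\zeta(x)
  \;-\; \lambda\,\mathrm{P}\!\int_0^1 \frac{\zeta(y)}{(y-x)^2}\,dy,
\qquad \zeta(1-y) = -\zeta(y).
```

Equations of this type are easy to treat numerically. The following sketch (plain numpy; alpha and lam are placeholder values, not the paper's coefficients) discretizes the equation on a midpoint grid, handling the principal value by the usual subtraction P int zeta(y)/(y-x)^2 dy = int [zeta(y) - zeta(x)]/(y-x)^2 dy - zeta(x)[1/x + 1/(1-x)], and then restricts to the antisymmetric sector singled out above.

```python
import numpy as np

N = 400                        # grid points
h = 1.0 / N
x = (np.arange(N) + 0.5) * h   # midpoint grid on (0, 1)

alpha, lam = 1.0, 1.0          # placeholder couplings

# Build the discretized operator; the principal-value subtraction puts
# the finite part of the singular kernel on the diagonal.
H = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            w = h / (x[j] - x[i]) ** 2
            H[i, j] -= lam * w       # kernel term: -lam * zeta(y)/(y-x)^2
            H[i, i] += lam * w       # re-added subtraction term on the diagonal
    H[i, i] += (alpha + lam) * (1.0 / x[i] + 1.0 / (1.0 - x[i]))

# Restrict to the antisymmetric sector zeta(1-x) = -zeta(x) by an
# orthonormal basis of antisymmetric grid vectors.
M = N // 2
P = np.zeros((N, M))
for i in range(M):
    P[i, i] = 1.0 / np.sqrt(2.0)
    P[N - 1 - i, i] = -1.0 / np.sqrt(2.0)
Ha = P.T @ H @ P                     # H commutes with the reflection x -> 1-x

mu2 = np.linalg.eigvalsh(Ha)
print("lowest antisymmetric bound-state masses mu^2:", mu2[:3])
```

Returning now to the baryon-like operators introduced just above: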
Yet we can think about normalized states of this form when all the momenta are positive acting on the Fock vacuum, they should correspond such baryon like states. Perhaps our large-N c theory can detect their presence. Indeed, one can check that the operator, measures the number of such excitations. This operator can be given a meaning in our theory: in the large-N c limit therefore it is natural to expect that the operator, B = 1 2 ∞ 0 [dp]F (p, p) gives us this number and as we will see it is well-defined. In our classical limit we can ask if this number makes sense for our system, that is if it is a conserved quantity. The answer, not surprisingly, is no: the above baryon number is not conserved by our equations of motion. Thus there are really no baryons in this theory. Geometry of the Phase space To understand the geometry behind the classical system that we introduce in the previous section, we must take a look at the finite dimensional orthogonal group. Our approach will be similar to one in [2] where we discussed the bosonic version of this theory. The basic ideas of the quantization of free Weyl fermions and the underlying geometry is discussed in the paper of Bowick and Rajeev [17], but we would like to expand on it and there are some differences in our conventions. We recall that the real orthogonal group can be defined as the set of linear transformations which leave a quadratic form invariant. (here Q(u, v) = u T Qv represents this quadratic form, and superscript T denotes the ordinary transpose). In our case the quadratic form is diagonal, so it is the standard inner product u T v. We work with the complexification of the original real Hilbert space, and if our Hilbert space is even dimensional, in this complex space we can use a different quadratic form, simply by using an invertible transformation S, Q 2 = S T QS. Assume now that we have a complex structure J acting on our original real Hilbert space, that is, a real antisymmetric matrix with respect to this form, which is also orthogonal, implying J 2 = −1. If the quadratic form is the identity, we may think of such a matrix as J = 0 1 −1 0 in an appropriate basis of the real Hilbert space. Let us split our Hilbert space into two isomorphic pieces with respect to the above decomposition of the complex structure, W ⊕W , and complexify the real Hilbert space, naturally we have W ⊗ C ⊕W ⊗ C. Choose with respect to this decomposition, This is the transform which we can use to diagonalize our complex structure. Of course our original quadratic form now changes as we described above: we get, (In our problem we actually transform the inverse of this form, but one can see that as matrices these two forms are identical). The complex orthogonal group is the set of trans- In finite dimensions the quadratic form is Q(z, z) = z 1 z m+1 + z 2 z m+2 + ... + z m z 2m . We see then that the original real orthogonal group is embeded into the complex orthogonal group defined by this quadratic form as a set of matrices with now a, b satisfying, a Tā + b † b = 1 and a Tb = −b † a (where we decomposed the matrix in the obvious way). This explicitly shows that the complex structure, which is a real orthogonal matrix, becomes diagonal, J = −i1 0 0 i1 . In our physical example these diagonalizations will be accomplished by the Fourier transform. An immediate consequence of this way of looking at the real orthogonal group is that the real orthogonal group actually carries a copy of the unitary group in it, corresponding to the elements, a 0 0ā . 
The quadratic form implies a Tā = 1, as well as aa † = 1, this implies aa † = a † a = 1. It is the unitary group of H + , where H + refers to the subspace on which J acts as i. For our purposes we should extend these discussions to the infinite dimensional case. In the infinite dimensional one we should not use the full orthogonal group but the one with a convergence condition [17]. This condition is the well-known Hilbert-Schmidt condition in the quasi-free representations of canonical anticommutation algebra. We will comment further on the convergence conditions when we make contact with our system. We define the restricted orthogonal group on the complexified Hilbert space as follows, where I 2 is the ideal of Hilbert-Schmidt operators [25]. We can state the convergence condition more economically as [ǫ, g] ∈ I 2 , where ǫ = 1 0 0 −1 with respect to the above decomposition. This is basically the complex structure we had, except that a factor of i has been removed. The Lie algebra of this group can be found from an infinitesimal group element, with R T = −R and S † = S and ∆ represents an infinitesimal parameter. The reader can verify that u T Q + Qu = 0. We would like to define a classical phase space using this infinite dimensional orthogonal group. This will be our phase space for the large-N c theory, but for the moment let us define it as a mathematical system. We introduce a variable Φ, The orbit of ǫ under the restriced orthogonal group is parametrized by this operator. It is easy to see that the orbit is diffeomorphic to The operator Φ satisfies, where I 1 denotes the trace class operators in the appropriate space of operators( here Q −1 is identical to Q as a matrix, but transforms differently). The second condition really says that Φ is in the Lie algebra of this group (it is possible to think of this space as a real subset of the restricted Grassmanian, and there is an analogous construction of a line-bundle on this space, see [26]). The tangent space of this orbit is given by the infinitesimal action of the group at any point, and in fact it is a copy of the Lie algebra of this group at every point. The action of a vector field on the basic variable Φ becomes V u (Φ) = i[u(Φ), Φ], for a Lie algebra element u(Φ), which changes differentiably over the orbit. So a vector field at a point Φ = gǫg −1 comes from a Lie algebra element g −1 u(Φ)g. It is well-known [27] that such orbits in finite dimensions typically carry a symplectic structure. If we formally define a two form, following the methods in [1] we can check that it is closed and non-degenerate. The form evaluated at two vector fields V u , V v is given by which shows that it is well-defined, due to the Hilbert-Schmidt conditions, non-degenerate, homogeneous and Kähler. The group action on this phase space Φ → g −1 Φg is actually Hamiltonian, that is there are moment maps which generate this action, given by a conditional trace, F u = − 1 2 Tr ǫ (Φ − ǫ)u, with Tr ǫ (A) = 1 2 Tr(A + ǫAǫ). Just for completeness we record that if we decompose u, v as above. The last term represents a central part and cannot be removed in this classical theory. How does this tie up with our system? Recall that we had a symplectic form which was i times a quadratic form Q, and a Hamiltonian for the free theory which is the mass part, i times an antisymmetric form ω, the combination of the two provides a natural operator:ω = Q −1 ω is a type (1, 1) tensor hence a proper linear transformation. 
Its polar decomposition will have all the basic pieces we need. Of course we have also ω −1 Q, so which one we choose is determined by the equations of motion. If we look at this general system in the Hamiltonian formalism, the equations of motion will give us, Hence the operator Q −1 ω is the one we should use. We find the polar decomposition of this operator,ω = KJ, where K is positive symmetric and J T J = 1, orthogonal(we should be using the natural inner product defined by Q to define the transpose, and in the infinite dimensional case to define underlying real Hilbert space of initial data). Howeverω is antisymmetric with respect to our quadratic form, this means that J 2 = −1 and orthogonal, thus a complex structure ( the complex structure coming from the other choice differs from this by a minus sign). In our example we see that the quadratic form is 2 √ 2δ(x − − y − )δ αβ (thus all the calculations can be done with the usual matrix transpose), and the antisymmetric form is (we omit the identity in the color space). When we use a basis which diagonalizesω we get solutions which oscillate in time with a frequency given by the eigenvalues of K. In our example, if we decompose the field ψ α using a Fourier mode decomposition, we have w α (p, x + ) = w α (p, 0)e −i m 2 2|p| x + for p > 0 w α (p, x + ) = w α (p, 0)e +i m 2 2|p| x + for p < 0. (44) (Note that the above combinations on the exponents are relativistically invariant if we recall the mass-shell condition p + = m 2 2p − ). This suggests that the i subspace of J goes to creation operators, and −i subspace goes to the annihilation operators, it is better therefore to represent our Fourier coefficients as w α (p) = ξ α (p) and w α (−p) =ξ α (p) for p > 0. If we act with J on our field variables, We see now that this Fourier transform diagonalizes our complex structure. If we look at the inverse of the quadratic form it transforms as dx . This is the form of Q that we wanted to obtain. From the Fourier decomposition, creation and annihilation operators therefore are assigned according to sgn(p), ξ(p) → χ †α (p) andξ(p) → χ α (p). The ultimate reason for the choice of Fock vacuum is to make the Hamiltonian bounded from below, if we write our Hamiltonian in the Fourier space, Notice that sgn(p) appears in the Hamiltonian, which is basically the complex structure we have, and the normal ordering (according to our choice of creation and annihilation operators) now makes the Hamiltonian bounded from below: We could question the effect of the interactions since we have been describing everything in terms of the free part of the Hamiltonian. Here we see a clear advantage of our light-cone point of view, the complex sturcture we start with using the free Hamiltonian is independent of any of the parameters of the theory, thus the choice of quasi-free representation of the canonical anticommutation relations is not affected by the change of parameters due to interactions. In our case we explicitly keep the change of mass due to the interactions with the gauge fields, so we are not taking advantage of this property. In more general case this property may be helpful, in fact for the scalar theory it is essential. We thus conclude our discussion on the choice of Fock space and its relation to the natural complex structure in our system. 
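Both constructions above, the orbit Phi = g epsilon g^{-1} with Phi^2 = 1 and the extraction of a complex structure from the polar decomposition of Q^{-1} omega, can be checked in a finite-dimensional toy model. The sketch below (plain numpy, with the quadratic form taken to be the identity, a simplifying assumption of ours) verifies both properties numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# --- Orbit of epsilon under the (real) orthogonal group ---
eps = np.diag(np.concatenate([np.ones(n), -np.ones(n)]))
g, _ = np.linalg.qr(rng.standard_normal((2 * n, 2 * n)))  # random orthogonal g
Phi = g @ eps @ g.T                                       # Phi = g eps g^{-1}
print(np.allclose(Phi @ Phi, np.eye(2 * n)))              # Phi^2 = 1

# --- Complex structure from the polar decomposition of Q^{-1} omega ---
# With Q = identity, Q^{-1} omega is just a real antisymmetric matrix.
B = rng.standard_normal((2 * n, 2 * n))
A = B - B.T                                # generic antisymmetric matrix
w, V = np.linalg.eigh(-A @ A)              # -A^2 is symmetric positive definite
K = (V * np.sqrt(w)) @ V.T                 # K = (-A^2)^{1/2}, the positive part
J = A @ np.linalg.inv(K)                   # orthogonal part of the polar decomposition
print(np.allclose(J @ J, -np.eye(2 * n)))  # J^2 = -1: a complex structure
print(np.allclose(J.T @ J, np.eye(2 * n))) # J is orthogonal
```

Since K commutes with A here, the two orderings KJ and JK of the polar factors agree, which is why the choice between them is fixed by the equations of motion rather than by the decomposition itself.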
Next we show that Φ − ǫ really represents our basic bilinears: let us decompose the complexification of our one-particle Hilbert space as H + ⊕ H − according to −sgn(p), we can write a general bilinear as an operator acting on the one-particle space and decomposed according to this direct sum, one checks that with exactly S † = S and R T = −R. We also know that (F + ǫ) 2 = 1. But these are exactly the properties satisfied by Φ. Our physical system has a one-particle Hilbert space given by the initial data on the light-cone x + = 0, we complexify this space and use Fourier transform to put our operators into the desired form. Then H − corresponds to the negative frequency components in the physics language. The Poisson bracket relations can be meaningfully extended to the Hilbert-Schmidt type R, so we need the convergence conditions. The convergence conditions are also a natural consequence of the super-renormalizability of this system. The time evolution of the finite N c system should keep us in the same free Fock space, and in the large-N c limit this should be expressible as an operator like Φ. In fact the smeared out Poisson brackets are given by the Poisson bracket relations of the moment maps. Thus the symplectic structure we have on this homogeneous manifold is the one we have found for our bilinears. It is useful to look at the same issue from the point of view of generalized coherent states: assume that we have a Lie group which is representable on a Hilbert space by unitary operators through a highest weight vector. If we look at the orbit of this vector under the action of the group, this orbit has a natural symplectic structure, and all the vectors on the orbit correspond to the generalized coherent states [28,29,5]. In our case the group of Bogoluibov automorphisms, which do not act on the color part of our fermions are represented on the fermion Fock space by the color invariant bilinears. The highest weight vector is the vacuum and its orbit under this group therefore carries a natural symplectic structure. The corresponding group is the restricted orthogonal group O 1 (H) and the orbit is our phase space.(In fact physically we should be using the projective Fock space, since the phase does not change the physical content of a state. The bilinears provide a unitary representation of the central extensionÔ 1 (H) of the group O 1 (H), when we use the projective Fock space, the central part disappears and we decend to the restricted orthogonal group). The convergence conditions are now a result of the implementability of these automorphism in the Fock space, which is defined by our choice of the vacuum [23,22,13]. The large-N c limit allows us to restrict to the bilinears and the super-normalizability keeps us in the restricted class of implementable automorphisms. Thus taking the large-N c limit provides a classical limit in this sense. This shows that our large-N c limit has a well-defined classical phase space with a natural symplectic structure. This openes up various possibilities, such as studying large fluctuations of the field in this limit. There are various delicate questions, such as the domain of the Hamiltonian, existence of finite time evolution, completeness of the trajectories which we plan to come back in the future. Bosons and Fermions This is the begining of the second part of our paper. The second part has two themes again: the construction of the phase space via the large-N c limits of the bilinears and the geometry of the ensuing phase space. 
Since the bosonic theory is developed in [2] and the fermionic version is explained in detail in the previous sections the construction of the phase space and finding the Hamiltonian will be very brief. We recommend the reader to look at [2] and we use the results of the previous sections freely. The geometry part, which is in the next section, will require new methods and in some sense it is not as complete. It may be helpful if the reader also consults to [15] where the SU(N c ) version is discussed. We will develop these aspects as much as we can and in some cases we indicate what the idea should be. We start our first theme: we use the same conventions as in the previous sections and our previous paper. The action functional of the combined system of bosons and fermions can be written as, where we use the same conventions as in section IV for the Majorana fermions. The transpose refers to the color indices for the scalar field. Again the covariant derivative is D µ = ∂ µ +gA µ , where A µ has values in the Lie algebra of SO(N c ). We choose x + as time and set A − = 0 as our gauge fixing condition. Then the action in the light-cone formalism reads, The advantage of the light-cone formalism is again clear, we are already in the Hamiltonian picture. We can read off the Poisson brackets satisfied by the dynamical fields. We also see that ψ 1 is not dynamical, as well as A a + , therefore they can be eliminated through their equations of motion. The dynamical fermion field ψ α 2 will be called ψ α for simplicity as in the previous sections. We will assume that the field A a + is eliminated after the dynamical fields are quantized, this will give us the quantized Hamiltonian of the system, where The quantization process is defined for the Fermionic sector in section II and for bosons in the reference [2]. We expand fermions and bosons into Fourier modes in a complex space, with now χ α(p) † = χ α (−p) and a α † (p) = a α (−p). (We should again assume that there is an infinitesimal cut-off around the zero momentum to be taken to zero at a later stage). The Poisson bracket relations go to These are exactly the same as before, there is one more commutator now, As we will see in the next section the definition of the Fock vacuum brings new features-a larger symmetry algebra appears. We introduce the vacuum state |0 > s , characterized by χ α (p)|0 > s = 0, a α (p)|0 > s = 0 for p > 0, where we put a subscript s to emphasize that the vacuum is for the full algebra of the boson/fermion system. We repeat for the convenience of the reader the normal ordering rules of the bilinears (rewritten to fit to our needs), There is an obvious extension of the general definition of normal ordering to the product of more than two operators, which one needs for the reduction of the Hamiltonian: set all the annihilation operators to the right of creation operators in a recursive way. We first introduce our bilinears for the large-N c limit and work out their Poisson brackets. Then we express our Hamiltonian in the large-N c limit in terms of these bilinears. We can see that the basic color invariant observables are: note that we have no need for normal ordering in the last two operators since they consist of commuting operators. In the large-N c limitĈ andĈ are related,C = C † , and there are similar conditions on F, B (when we represent the resulting classical observables as integral kernels and think of them as now abstract operators). 
For our calculational purposes it is better to introduce the following variables as in section III and the reference [2], and also the variable,Ŝ These variables in the large-N c limit satisfy the following (super)Poisson brackets, We note that the last one is symmetric in the variables and the third and forth ones show that S behaves as a module of the algebras defined by the Poisson brackets of T, R, thus it carries a representation of these two algebras. This is the general form of a super-algebra structure. We will denote the full set of these brackects as a super-Possion bracket { , } s . The conversion of the normal ordered products of non-color invariant combinations appearing in the above Hamiltonian to the full normal ordering in the large-N c limit can be achieved as before resulting with the same changes in the masses m 2 F → m 2 F − g 2 /2π and ln(Λ U /Λ I ) denotes the renormalized mass of the boson. We skip the details of this reduction, since they are the extensions of the details in [12] and we have given some essential steps in section III. The resulting Hamiltonian of our system in the large-N c can be expressed as a free part and an interacting part: The interaction part is written as where the kernels are given by We have not completed the definition of our large-N c limit yet, there is a constraint. Recall that we still have a left over global color invariance, which is generated by the operator, Q αβ = [dp](: χ α † (p)χ β (p) : +sgn(p) : a α † (p)a β (p) :). (57) When we restrict our color invariant bilinears to the color invariant sector of the full Fock space, we find that where we define as in section III, ǫ(p, q) = −sgn(p)δ[p − q] (here the minus sign is crucial, in our previous works that was not important, but in the super case there is a prefered choice) and we also employ the product convention as before for example (F C)(p, s) = [dq]F (p, q)C(q, s). We warn the reader that above the two epsilons have the same matrix elements but they are acting on different spaces. The meaning of this constraint could best be understood if we introduce a super operator, The above constraint is simply given by It also satisfies a Lie algebra condition, it is better to write it in the following form: use a decomposition of our super-space into H + |H + ⊕ H − |H − , according to the sign of ǫ in even and odd parts respectively. Then we haveǫ = 1 0 0 −1 , and we introduce with respect to this decompositionω s = 0 −ǭ 1 0 , then: we invite the reader to verify this. There are also convergence conditions, which come from the super-renormalizability of this system again. The time evolution should leave this system in the same Fock space. Another way to see this is to think about the smeared out operators, and see that the central terms make sense only for the restricted set of operators, for which the off-diagonal blocks are in the Hilbert-Schmidt class. We can write down these convergence conditions in an economical way as where I 2 refers to the ideal of Hilbert-Schmidt operators in this super-space. We have proposed elsewhere [30] a method of introducing such operators in the super-context, and we assume this definition is used. Since these technical matters are not completely settled we are brief at this point, see also the next section on the geometry. This completes the construction of our large-N c limit: we postulate the above Hamiltonian, the super-Possion brackets with the constraint and this defines a classical system. 
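Before turning to the time evolution, one step that the linear approximation below relies on deserves to be spelled out: writing Phi = epsilon + phi and expanding the constraint to first order gives

```latex
(\epsilon + \varphi)^2 = 1
\;\Longrightarrow\;
\epsilon\varphi + \varphi\epsilon = O(\varphi^2),
```

so to linear order the fluctuation phi anticommutes with epsilon and is therefore purely off-diagonal with respect to the decomposition H_+|H_+ (+) H_-|H_-. This is the origin of the support conditions on R, B and S quoted below.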
The "time" evolution is given by the basic rule: for any observable O s of the theory where the Hamiltonian is in general an even function of our bilinears-which we should consider as the coordinates of this phase space. It is possible to carry out the analysis given in [15], but we will be content with describing only the linear approximation. We plan to report on these in a separate publication (they will appear in the PhD thesis of the first author). We start with the linearization of the constraint Φ 2 = 1, which gives us The first two are exactly the conditions we have found before, for mesons made up of only bosons in [2], the first one is in section III, and the last one is the new condition on our odd variable. In terms of S that means we have S(u, v) = 0 unless u, v > 0 or u, v < 0. If we assume u, v > 0 and evaluate the equations of motion in the linear approximation for S(u, v), ∂ + S(u, v; x + ) = {S(u, v; x + ), H} and furthermore we make the same type of ansatz as in [12,2,15] S(u, v; x + ) = ζ S (x)e −iP + x + , with P = u + v, x = u P , The other linearized equations are the same as before (see section III and [2]). There are baryonic states that we can measure by the operator in the large-N c limit this operator should go to B = 1 2 ∞ 0 [dp](F (p, p)+B(p, p)). The baryonic states for finite N c correspond to states of the form where p 1 ...p Nc > 0 and products of them acting on |0 > s . Not surprisingly the above baryon number is not a conserved quantity, so it does not have the physical importance as it has in the case of Dirac fermions where it is a conserved number, in fact a topological number(see [12] for the discussion of this in the large-N c limit and its extension in [15]). The Geometry of the Phase Space Let us define a super space H|H, where we use a splitting to even and odd according to the grading +, − (we are using a Z 2 graded real Hilbert space). We recall some of the conventions, following Berezin [19]: we work with the Grassmann envelop of this graded vector space (thus we acquire a Z grading). Its mathematical theory is delicate and we will comment on it later (some good examples of homogeneous super-symplectic manifolds are worked out in [31], this is a good reference to learn by examples). We decompose every linear transformation or tensor according to this grading, the standard matrix form of a linear transformation is where A, D are even and B, C are odd. This means that A = A B + A S , D = D B + D S where subscript B refers to the body that is the ordinary numbers, subsrcipt S refers to the soul, that is only the Grassmann part. B, C have no body they are purely Grassmann valued. We have the usual hermitian conjugation of such block matrices, but the transpose has to be carefully defined. We introduce a super-transpose, τ , where T denotes the ordinary matrix transpose. One can verify that this form satisfies (AB) τ = B τ A τ . It will be useful to record the following properties, Str(A τ ) = StrA, if we decompose our graded space into a direct sum, for example in our case into H + |H + ⊕H − |H − , the operators can also be decomposed into super-operators, say into a b c d , then Realness is related to an involution in the Grassmann algebra, ξ → ξ * and we assume that this involution obeys (ξ i ξ j ) * = (ξ j ) * (ξ i ) * and (aξ) * =āξ * , where a is a complex number and bar denotes the ordinary complex conjugation. The real Grassmann algebra is the part which is invariant under this involution. 
This means that there will be factors of i to make things invariant. This implies that the real graded Hilbert space is defined residing inside a complex graded Hilbert space. On the space of linear transformations there is a complex conjugation operator, according to Berezin conventions it should be given by the following: write a linear transformation in its standard form, then we note that A * * = A, and ( We have the set of real linear transformations, this set is invariant under the above conjugation, M * = M, it remains so under the product of super-matrices, thanks to (A 1 A 2 ) * = A 1 A 2 . The set of real linear operators thus is an algebra. Let us assume that the even part has a symplectic form ω and the odd part has a standard quadratic form 1. On the complexification of this space we introduce a super-symplectic form, Note that multiplying the last part with an i does not really change anything as far as only the even transformations are concerned but for the full case we need this factor. We look at the space of real transformations which will leave this form invariant, this is a super-group, and it is denoted by OSp(H|H). Its even part has body isomorphic to Sp(H) ⊕ O(H), the odd parts are modules over the Grassmann envelops of these groups. If we write down the group conditions for an element where a, d are real even, c real odd and b imaginary odd operators, Since we have the complex conjugation convention (ψξ) * = −ψ * ξ * , the complex conjugate of a product of odd operators become imaginary, this is why we have ic T c, then it becomes a real even element of the Grassmann envelop. Decompose our spaces according to the matrix representation of ω s , W ⊕W |W ⊕W . Let us assume that we also have a super-complex structure, which is a type (1, 1) tensor, Assume that we extend everything to the complexification of our original Hilbert space. The we can perform a transformation S that will put the above complex structure into diagonal form in this complexified space. To accomplish this it is better to represent it in a slightly different way, use a decomposition W |W ⊕W |W , then Now compute S −1 J s S and see that we getĴ s = iǫ = −i 0 0 i , which definesǫ in this decomposition. We use a decomposition according to the sign of i and the resulting graded Hilbert space becomes H + |H + ⊕ H − |H − . If we compute the transformation of ω s , it goes into S τ ω s S since it is a two form, and we get with respect to the above decomposition. Obviously our real group also transformed by the same rule as J s , so a typical group element becomes according to the above decomposition, Note that each of the blocks are super-operators with standard decompositions, and for each one we are using Berezin definition of the complex conjugate a b c d * = a * −b * c * d * . We have a full complex group which leaves invariant the above transformed version of the two form, this is the complex OSp group, and the real group now sits inside this complex group. Thus a complex transformation satisfies The reader may question the consistency of these equations. We should remember that (M τ ) τ =ǭMǭ, then we can see that they are consistent. There is an interesting subgroup, given by elements of the form g = A 0 0 A * and A satisfies recall that A * τ = A † and (A τ ) * =ǭA †ǭ , this is the same as before except that we express it in the subspace, so we should useǭ instead ofẼ, we get A † A = AA † = 1. 
Let us see what it means when we expand A = a β γ d , we wee that the body parts satisfy a † B a B = 1, d † B d B = 1, these are the ordinary unitary groups inside. Therefore we have shown that this group's even part has body U(H + ) ⊕U(H + ). This group is denoted by U(H + |H + ) and it is the super-unitary group of H + . Let us define the orbit ofǫ, this is really the complex structure if we remove the factors of i, under the real group OSp: It is immediate that Φ 2 = 1. We will now show that we also havê so it is an element of the Lie algebra of OSp. DefineÊ = ǭ 0 0ǭ , this is really ourẼ written in this splitting of the Hilbert space, and note (ω τ s ) −1 =ω s ,ω τ s =ω sÊ ,ω 2 s =Ê and ǫω s = −ω sǫ , then, where we usedÊg τ τÊ = g. Let us look at the stability subgroup ofǫ, that is given by operators of the form A 0 0 A * , and we have seen that this can be identified with the unitary operators on H + |H + , U(H + |H + ). Hence we conclude that our variable Φ is actually parametrizing the space OSp(H|H)/U(H + |H + ). What is the advantage of this parametrization? The above super manifold is actually a symplectic manifold with a super-symplectic structure most naturally written in terms of the variable Φ: This is formally defined, but we use the rules of super analysis to define our differential forms. Clearly it is closed, use where we used StrAB = StrBA. It is also clear that this form is homogeneous. Its nondegenaracy can be proved atǫ, and homogeneity proves it everywhere. Upto now we have really used a finite dimensional approach, but to identify the large-N c phase space of the previous section, we need to extend these notions to the infinite dimensional case. The extension is formally simple, we assume that we have super-Hilbert spaces, that is even and odd spaces each one are coming from a separable Hilbert space and we use a proper extension of the Grassmann envelop to this case(this is not so obvious and we assume our proposal in [30], this may not be the only possibility see [32,33]). In this infinite dimensional setting we introduce a Hilbert-Schmidt condition, the group that we use should be the restricted real OSp group, The variable Φ now satisfies some convergence conditions, indeed one can check that where each block refers to a super-operator in the appropriate class of operator ideal. These convergence conditions imply that the super-symplectic form Ω s we defined in the finite dimensional setting makes sense. The trace class conditions are important to write down moment maps, but we will ignore it for this work. Hence we have an infinite dimensional phase space, The reader can now see how this is related to our system, from the experience we have in the previous cases. In our problem we have a free action which has bosons and fermions, This action is written in the standard light-cone frame and one of the components of the Majorana field has been eliminated in favor of the other. The transpose refers to the color indices for the gauge group SO(N c ). As it stands this does not require the full content of the super-geometry, but as we have seen the interaction terms, given by the proper bilinears of field operators, makes the use of super geometry most convenient: when we reformulate our theory in terms of bilinears, we need the combinations which can only be expressed in terms of odd operators. We will now see that the Poisson algebra of these bilinears can only be formulated as a super-two form. 
Moreover a simple iterative solution of the constraint equation reveals that the bosonic operators should be given as an infinite series of products of odd operators, this is why we think it is most natural to use the full content of the Berezin's super-analysis. (We hope to come back to the more mathematical aspects of our system in a future publication). In our theory we have a super-symplectic form and a super-quadratic form which is in the standard representation given by where we have the identity I c in the color space. The relevant operator is automorphisms of the the quasi-free second quantization of this system when we think of it without the color part-the color part has been averaged out and reduced the system to this bilinears. We may give an argument using the super-coherent states [39,40], similar to the ordinary cases: there is a central extension of the automorphism group OSp 1 which is realized by these bilinears on the full Fock space. When we think about the projective Fock space this descends to the OSp 1 group. The orbit of the vacuum under this group gives us a classical phase space albeit a more general one, with a super symplectic form. The large-N c limit provides this reduction to the space of super-coherent states. This is a natural classical phase space and the large-N c limit corresponds to this classical limit. Before ending our discussions we would like to make a few comments of general nature. Let us write down a super-dynamical system in the Hamiltonian form We assume that the action is an element of the even part of the Grassmann algebra. If we want this to be real we demand (Ψ τ Q s Ψ) * = Ψ τẼ Q * s Ψ = Ψ τ Q s Ψ, that is Q s =ẼQ * s , we are again usingẼ = 1 0 0 −1 in the standard decomposition. For the first term it implies the sameẼω * s = ω. If we further note that it should be invariant under the transpose, we get for the first term using an integration by parts for the time derivative, Ψ τ ω s ∂ t Ψ = −Ψ τ ω τ sẼ ∂ t Ψ, which implies ω s = −ω τ sẼ and the second term requires Ψ τ Q s Ψ = Ψ τ Q τ sẼ Ψ, which means we should have Q s = Q τ sẼ . The equations of motion will give us, This suggests that we should further investigate operator ω −1 s Q s which is a type (1, 1) tensor, thus a true linear transformation. We note that ω −1 s Q s is real: (ω −1 s Q s ) * = (ω −1 s ) * Q * s = ω −1 sẼẼ Q s = ω −1 s Q s , by using the conjugation properties of ω s and Q. This operator is antisymmetric with respest to the form defined by Q s : as well as under ω s . It would be most natural if we could use a generalization of the polar decomposition for Q −1 s ω s , and write this operator as ω −1 s Q s = J s K s , where J t s J s = 1, and K s > 0, K t s = K s , with an appropriate transpose t and positivity is assumed to be given a meaning in this super-context. Then we could claim that the basis in which J s is diagonal, will tell us the separation of creation and annihilation operators in this full generality. This can be done in the simple case we looked at, when the operators involved only had body parts, and no Grassmann numbers. Unfortunately for the general case we do not have the proper mathematical machinery. If we could find a super-transformation S, such that S −1 ω −1 s Q s S is diagonal with each entry (±iλ k ) for a pure number λ k we could postulate the quantization by means of canonical commutation/anticommutation relations. To the best of our knowledge there is no such theorem. We think these questions deserve further investigations.
Left upper lung cancer with persistent left superior vena cava and left azygos vein: a case report

Background

With the popularization of thoracoscopic surgery, more and more macrovascular malformations have been reported. Understanding vascular malformations that have a relatively fixed anatomical site, and the range they drain, can help avoid severe complications during surgery. Persistent left superior vena cava (PLSVC) is a common thoracic vascular malformation and is often combined with other cardiovascular dysplasia. The patient with left upper lung cancer in this case had both PLSVC and a left azygos vein, together with non-metastatic enlargement of mediastinal lymph nodes, all of which influenced the decisions on surgery and treatment. We summarize our experience in this regard.

Case presentation

A 46-year-old male patient's CT revealed a space-occupying lesion in the superior lobe of the left lung. The chest CT showed that the patient had PLSVC and a left azygos vein, as well as multiple enlarged lymph nodes in the mediastinum. The patient underwent thoracoscopic left upper lobectomy and lymph node dissection. It was discovered that the left azygos vein had a concealed course, which complicated the lymph node dissection. The postoperative pathology showed squamous cell carcinoma of the left upper lung (pT2bN0M0, stage IIA) and no cancer metastasis in the lymph nodes. The patient recovered well after surgery.

Conclusions

PLSVC is not rare and is often combined with other vascular malformations. If PLSVC is discovered before surgery, we suggest completing enhanced chest CT and vascular reconstruction to identify any other cardiovascular malformations that may exist. The left azygos vein is a rare vascular malformation, but it has a relatively fixed anatomical site and always coexists with PLSVC; therefore, understanding the anatomy of the left azygos vein helps prevent accidental damage. Especially when operating above the left pulmonary artery trunk, attention must be paid to avoiding damage to the left azygos vein. In addition, for patients diagnosed with lung cancer before surgery, it is not reliable to judge whether there is metastasis merely from the size of the lymph nodes; instead, PET-CT or needle biopsy is recommended.

Background

Persistent left superior vena cava (PLSVC) is formed when the left superior vena cava fails to degenerate after birth as a result of abnormal embryonic development. The reported incidence of PLSVC is about 0.3%-0.5%, and some patients may have congenital heart disease or other vascular malformations [1]. PLSVC is one of the common variations of the thoracic venous system, while the left azygos vein has rarely been reported [2][3][4][5][6]. At present, there is no report on the combination of these vascular malformations together with radical surgery for left lung cancer. The patient in this case had a combination of the two vascular malformations and underwent left upper lobectomy and lymph node dissection. During the surgery, we found that these malformed vessels, especially the left azygos vein, had a concealed course, which could easily lead to accidental damage and thus massive hemorrhage. Therefore, we have summarized some experience regarding such vascular malformations.
Case presentation

A 46-year-old male patient was admitted to hospital because of chest pain for 2 weeks, and CT found a space-occupying lesion in the superior lobe of the left lung (5.7 × 5.4 × 5.3 cm) (Fig. 1). The lesion was diagnosed as non-small-cell lung carcinoma by preoperative percutaneous lung biopsy. The chest CT showed that the patient had PLSVC and a left azygos vein (Fig. 2), as well as multiple enlarged lymph nodes in the mediastinum. Echocardiography suggested PLSVC. Electrocardiogram and other examinations revealed no obvious abnormalities. The patient had a 30-year smoking history of 20 cigarettes per day, but no history of other diseases. In January 2019, the patient underwent thoracoscopic left upper lobectomy and lymph node dissection. During the surgery, it was found that the PLSVC descended from the top of the chest, abutted the left margin of the arcus aortae and the ascending aorta, and finally entered the pericardium. The left azygos vein entered the thoracic cavity via the aortic hiatus, ascended along the descending aorta, turned forward at the junction of the arcus aortae and the descending aorta, passed deep to the connective tissue, crossed the pulmonary artery trunk, and finally drained into the PLSVC (Fig. 3). There was no obvious malformation of the vein, artery or bronchus of the left upper lung. During the surgery, obvious enlargement of the lymph nodes of Groups 4, 5, 6, 7 and 10 could be seen, and the maximum diameter of a single lymph node was about 2.5 cm (Fig. 4). Postoperative pathology showed squamous cell carcinoma of the left upper lung (5 × 4 × 3 cm), reactive hyperplasia of the lymph nodes, and no invasion of the vessels, nerves, pleura or bronchial stump (pT2bN0M0, stage IIA). The patient received six cycles of chemotherapy after surgery. During a one-year follow-up, the patient recovered well without relapse.

Discussion

PLSVC is a relatively common thoracic vascular malformation and can mostly be diagnosed by chest CT, but there are also reports of PLSVC being mistaken for enlarged lymph nodes [7][8][9]. Most PLSVCs drain into the right atrium, while only a few drain into the left atrium. Patients whose PLSVC drains into the right atrium are usually asymptomatic and require no therapy if they have no other cardiovascular malformations. However, those whose PLSVC drains into the left atrium often show cyanosis of varying degrees, and some require surgical treatment. At present, there are few reports of left lung cancer surgery combined with PLSVC [10], and some PLSVC patients have other cardiovascular malformations at the same time [11]; therefore, extreme care is needed when performing left lung cancer surgery in such patients. In most cases, obvious cardiovascular malformations can be discovered by cardiac B-mode ultrasound and chest CT before surgery. If PLSVC is discovered on preoperative examination, we suggest further vascular reconstruction and careful reading of the images, so as to determine whether other vascular malformations are present and thus avoid damaging these vessels during surgery. The patient in this case had PLSVC and a left azygos vein at the same time. At present, there are very few reports on the left azygos vein, but these reports all show that the left azygos vein and PLSVC coexist and that the vein has a relatively fixed anatomical site.
In this patient, the left azygos vein had a small caliber and a concealed course; we did not notice the purple left azygos vein on the surface of the descending aorta before surgery, and it was only noticed during the operation. At the junction of the aortic arch and the descending aorta, the left azygos vein bent forward and drained into the PLSVC, crossing the left pulmonary artery trunk on the deep surface of the connective tissue. Because the left azygos vein lay deep and was not easy to observe, it could easily be damaged, resulting in massive hemorrhage; the surgeon might then mistakenly believe that the left pulmonary artery trunk or the aortic arch had been injured. Inexperienced surgeons may also mistake the vessel for the arterial ligament. Therefore, in patients with a left azygos vein, attention must be paid to this vessel when dissecting station 5 and 6 lymph nodes and when separating branches of the left upper lobe artery. In addition, the patient had multiple enlarged mediastinal lymph nodes, with a maximum single-node diameter of 2.5 cm. Based on experience, we strongly suspected metastatic lymph nodes; combined with the tumor size, the disease was considered very likely to be stage IIIB, in which case the patient might have had no surgical indication. We suggested that the patient undergo PET-CT to determine whether metastasis was present, but the patient declined PET-CT for financial reasons, so we performed surgical treatment directly. Fortunately, postoperative pathology showed no lymph node metastasis. We therefore believe that metastasis cannot be judged merely by lymph node size, as doing so might deprive patients of the opportunity for radical surgical cure.

Conclusion
In summary, the discovery of PLSVC before surgery generally suggests that other vascular variations may be present. We suggest performing enhanced chest CT with vascular reconstruction to determine whether other cardiovascular malformations exist. As for the left azygos vein, all reports to date show that it coexists with PLSVC and that its anatomical site is relatively fixed. Familiarity with the anatomy of the left azygos vein therefore helps prevent accidental injury to the vessel, and extreme care must be taken especially when operating above the left pulmonary artery trunk. In addition, for patients diagnosed with lung cancer before surgery, it is unreliable to judge lymph node metastasis merely by lymph node size; PET-CT or needle biopsy is recommended instead.
Overnight GARCH-Itô Volatility Models

Various parametric volatility models for financial data have been developed to incorporate high-frequency realized volatilities and better capture market dynamics. However, because high-frequency trading data are not available during the close-to-open period, volatility models often ignore volatility information over the close-to-open period and thus may suffer from loss of important information relevant to market dynamics. In this paper, to account for whole-day market dynamics, we propose an overnight volatility model based on Itô diffusions to accommodate two different instantaneous volatility processes for the open-to-close and close-to-open periods. We develop a weighted least squares method to estimate model parameters for the two periods and investigate its asymptotic properties. We conduct a simulation study to check the finite sample performance of the proposed model and method. Finally, we apply the proposed approaches to real trading data.

Introduction
Since Markowitz (1952) introduced modern portfolio theory, measuring risk has become important in financial applications. Volatility itself is often employed as a proxy for risk. Furthermore, there are several risk measurements, such as Value at Risk (VaR), expected shortfall, and market beta (Duffie and Pan, 1997; Rockafellar and Uryasev, 2000; Sharpe, 1964). These risk measurements take volatilities as an important ingredient in their formulations, and their performance depends heavily on the accuracy of volatility estimation. Generalized autoregressive conditional heteroskedasticity (GARCH) models are among the most successful volatility models for low-frequency data (Bollerslev, 1986; Engle, 1982). They employ squared daily log-returns as innovations in conditional expected volatilities and are able to capture low-frequency market dynamics, such as volatility clustering and heavy tails. At the high-frequency level, nonparametric approaches, such as Itô processes and realized volatility estimators, are often utilized to model and estimate volatilities. Examples include two-time scale realized volatility (TSRV) (Zhang et al., 2005), multi-scale realized volatility (MSRV) (Zhang, 2006), kernel realized volatility (KRV) (Barndorff-Nielsen et al., 2008), the quasi-maximum likelihood estimator (QMLE) (Aït-Sahalia et al., 2010; Xiu, 2010), pre-averaging realized volatility (PRV), and robust pre-averaging realized volatility (Fan and Kim, 2018). In practice, we often observe jumps in financial data, and the decomposition of daily variation into continuous and jump components can improve volatility estimation and aid with better explanation of volatility dynamics (Aït-Sahalia et al., 2012; Barndorff-Nielsen and Shephard, 2006; Corsi et al., 2010). For example, Fan and Wang (2007) and Zhang et al. (2016) employed the wavelet method to identify jumps in noisy high-frequency data. Mancini (2004) studied a threshold method for jump detection and presented the order of an optimal threshold, which leads to the jump-robust pre-averaging realized volatility (PRV) estimator; we call such estimators realized volatility. There have been several recent attempts to combine low-frequency GARCH and SV models with high-frequency realized volatility measures. See also Martens (2002); Todorova and Souček (2014); Tseng et al. (2012) for more information on the impact of overnight volatility.
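To make the threshold idea above concrete, the sketch below computes a truncated (jump-robust) realized volatility from one day of intraday log-prices, discarding increments whose magnitude exceeds a threshold of the form c · Δ^ϖ as in Mancini (2004); the constants c and ϖ are illustrative choices tuned to the scale of this toy example, not values prescribed by the paper.

import numpy as np

def truncated_realized_volatility(log_prices, c=0.1, varpi=0.49):
    """Sum of squared intraday returns, truncating large increments.

    Increments with |dX| > c * dt**varpi are treated as jumps and dropped,
    following the threshold idea of Mancini (2004); c must be tuned to the
    scale of the data.
    """
    dx = np.diff(log_prices)
    dt = 1.0 / len(dx)                      # normalize the trading day to [0, 1]
    kept = np.abs(dx) <= c * dt ** varpi
    return np.sum(dx[kept] ** 2)

# toy example: a diffusion path plus one jump of size 0.05
rng = np.random.default_rng(0)
m = 390                                     # one-minute returns in a 6.5h session
x = np.cumsum(0.01 * np.sqrt(1.0 / m) * rng.standard_normal(m + 1))
x[200:] += 0.05                             # inject a single jump
print(truncated_realized_volatility(x))     # close to the continuous variation 1e-4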
The studies just cited document an increasing interest in developing Itô process-based models that provide a rigorous mathematical formulation for using both open-to-close high-frequency data and close-to-open low-frequency data to analyze whole-day market dynamics. In this paper, we develop an instantaneous volatility model for a whole-day period. The whole day is broken down into two time periods, the open-to-close and close-to-open periods. During the open-to-close period, we observe high-frequency trading data, whereas during the close-to-open period, we observe only the low-frequency close and open prices. To reflect this structural difference, we develop two different instantaneous volatility processes for the open-to-close and close-to-open periods. For the open-to-close period, we use the current integrated volatility as an innovation to reflect the market dynamics immediately, which helps to adapt to rapid changes in the volatility process, as in the high-frequency volatility models (Corsi, 2009; Hansen et al., 2012; Shephard and Sheppard, 2010; Song et al., 2020). For the close-to-open period, we employ the current squared log-return as an innovation, which brings us back to the discrete-time GARCH model for the close-to-open period. The proposed structure implies that the conditional expected volatility for the whole-day period is a function of past open-to-close integrated volatilities and squared close-to-open log-returns. We call this volatility model the overnight GARCH-Itô (OGI) model. Moreover, to estimate its model parameters, we develop a quasi-likelihood estimation procedure. Specifically, for the open-to-close period, we employ realized volatilities as a proxy for the corresponding conditional expected volatilities, whereas for the close-to-open period, we adopt squared close-to-open log-returns as a proxy for the corresponding conditional expected volatilities. These proxies have heterogeneous variances that are related to their accuracy. To reflect this, we calculate their variances and assign different weights to each proxy. As a result, the proposed estimation method takes the form of weighted least squares. We also apply the overnight GARCH-Itô model to a VaR study. The rest of this paper is organized as follows. Section 2 introduces the overnight GARCH-Itô model and discusses its properties. Section 3 proposes weighted least squares estimation methods and investigates their asymptotic properties. Section 4 conducts a simulation study to check the finite sample performance of the proposed estimation methods. Section 5 applies the proposed overnight GARCH-Itô model and method to real trading data. The conclusion is presented in Section 6. We collect the proofs in the Supplement document.

Overnight GARCH-Itô models
In this section, we develop an Itô diffusion process to capture the whole-day market dynamics. To separate the parameters for the high-frequency period (open-to-close) and the low-frequency period (close-to-open), we use the subscript or superscript H and L, respectively. For the low-frequency GARCH volatility related parameters, we use the superscript g.

Definition 1.
We call the log-price X_t an overnight GARCH-Itô (OGI) process if it satisfies

dX_t = μ_t dt + σ_t(θ) dB_t + J_t dΛ_t,

where [t] denotes the integer part of t, except that [t] = t − 1 when t is an integer; λ is the time length of the trading period; Z^H_t = ∫_{[t]}^{t} dW_s and Z^L_t = ∫_{λ+[t]}^{t} dW_s; γ = γ_H γ_L; and θ = (ω_{H1}, ω_{H2}, ω_L, γ_H, γ_L, α_H, α_L, β_H, β_L, ν_H, ν_L) is the vector of model parameters. For the jump part, Λ_t is a Poisson process with constant intensity μ_J, and the jump sizes J_t are independent of the continuous diffusion processes. Furthermore, the jump size J_t is equal to zero during the overnight period. At the market open times t = n, where n is an integer, and similarly at the market close times, the instantaneous volatility coincides with the corresponding GARCH-type conditional volatility. Thus, the instantaneous volatility process is a quadratic interpolation of the GARCH volatility, with the open-to-close integrated volatility and the squared close-to-open log-return as the innovations. To account for the random fluctuations of the instantaneous volatilities, we introduce Z^H_t and Z^L_t with the scale parameters ν_H and ν_L. When considering only one of the open-to-close and close-to-open periods and ignoring the other period, the OGI model recovers the realized GARCH-Itô process (Hansen et al., 2012; Song et al., 2020) or the unified GARCH-Itô process. Thus, unlike the proposed OGI model, these models incorporate only one innovation term, the integrated volatility or the squared log-return, in their conditional volatility. Because our main interest lies in measuring the whole-day risk, to estimate the model parameters, we use nonparametric integrated volatility estimators (Barndorff-Nielsen et al., 2008; Zhang, 2006) and squared log-returns as proxies for the parametric conditional expected integrated volatility. Thus, it is important to investigate properties of the integrated volatility of the proposed OGI model. The following theorem shows the properties of integrated volatilities.

Theorem 1. For the OGI model, we have the following properties. (a) The integrated volatilities have the following structure: for 0 < α_H < 1, 0 < β_L < 1, and n ∈ N, the integrated volatility ∫_{n−1}^{n} σ^2_t(θ) dt decomposes into GARCH volatility terms plus martingale differences, where D_n, D^H_n, D^L_n are martingale differences and ω^g, γ, α^g, β^g, ω^g_H, α^g_H, β^g_H, ω^g_L, α^g_L, β^g_L are functions of θ. Their detailed forms are defined in the supplement document. (b) The conditional variances of the martingale difference terms are given in the supplement document. Then, recursively, we can obtain the multi-period prediction.

As we discussed above, we estimate the model parameters via the relationship between the conditional GARCH volatilities, h^H_n(θ), h^L_n(θ), and h_n(θ), and the corresponding integrated volatility or squared log-return. Thus, to study the low-frequency volatility dynamics, we only need Theorem 1(a). That is, we develop the rest of the paper under the model assumptions (2.2)-(2.4). In comparison with direct volatility modeling based on realized volatility, such as the HAR, HEAVY, and realized GARCH models (Andersen et al., 2003; Corsi, 2009; Hansen et al., 2012; Shephard and Sheppard, 2010), the unified GARCH-Itô model and the OGI model may be more difficult or even less practical for drawing statistical inferences from combined low- and high-frequency data. However, as in the unified GARCH-Itô case, the OGI approach establishes the existence of a diffusion process which satisfies the conditions (2.2)-(2.4) and fills the gap between low-frequency discrete time series volatility modeling and the high-frequency continuous-time diffusion process.
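As a reading aid (not from the paper), the following sketch shows the general shape of such a whole-day conditional-volatility recursion, with the open-to-close realized volatility and the squared overnight return as the two innovations; the exact coefficient structure of the OGI recursion is the one derived in Theorem 1 and its supplement, so the parameter names here are placeholders.

# Minimal sketch (not the paper's exact recursion): a whole-day conditional
# volatility driven by an open-to-close innovation (realized volatility) and
# a close-to-open innovation (squared overnight return).
def whole_day_garch_volatility(rv, r2_overnight, omega, gamma, alpha_h, alpha_l, h0):
    """rv and r2_overnight are equal-length sequences of past open-to-close
    realized volatilities and squared overnight returns; h0 is the start value."""
    h = h0
    for rv_d, r2_d in zip(rv, r2_overnight):
        h = omega + gamma * h + alpha_h * rv_d + alpha_l * r2_d
    return h  # conditional expected volatility for the next day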
Because the purpose of this paper is to develop diffusion processes that can account for the low-frequency market dynamics, the parameter of interest is the GARCH parameter θ^g = (ω^g_H, ω^g_L, γ, α^g_H, α^g_L, β^g_H, β^g_L). We note that, under the model assumption, we need the common γ condition for the open-to-close and close-to-open periods. Let m be the average number of high-frequency observations, that is, m = (1/n) Σ_{d=1}^{n} m_d. Due to market inefficiencies, such as the bid-ask spread, asymmetric information, and so on, the high-frequency data are masked by microstructure noise. To account for this, we assume that the observed log-prices during the open-to-close period have the following additive noise structure:

Y_{t_{d,i}} = X_{t_{d,i}} + ε_{d,i},

where X_t is the true log-price, ε_{d,i} is microstructure noise with mean zero and variance η_d, and the log-price and the microstructure noise are independent. The effect of μ_t is negligible for high-frequency realized volatility estimators, and the magnitude of daily returns is relatively small. Thus, for simplicity, we assume μ_t = 0 in Definition 1. We note that the theoretical results in Theorem 2 can be established in a similar way with non-zero μ_t under some piecewise constant condition on μ_t. In contrast, during the close-to-open period, we only observe the low-frequency observations, the open and close prices. In low-frequency time series modeling, we often assume that the true low-frequency prices are observed. In practice, microstructure noise may exist in the low-frequency observations, but its impact on the low-frequency modeling is relatively small. Thus, we also assume that the true low-frequency observations, the open and close prices X_d and X_{λ+d}, are observed at the open and close times t_{d+1,0} and t_{d+1,m_{d+1}}.

Remark 2. For the microstructure noise, we may need a stationarity condition to estimate the integrated volatility with the optimal convergence rate m^{−1/4} (Barndorff-Nielsen et al., 2008; Fan and Kim, 2018; Zhang, 2006). For example, we may impose an ARMA-type structure on the microstructure noise and assume some dependence between the price processes and the microstructure noise. However, in this paper, we directly adopt a well-performing nonparametric realized volatility estimator, which can be obtained under various structures of the microstructure noise without affecting the volatility modeling. Thus, we can allow such structures on the microstructure noise, as long as we can secure a well-performing realized volatility estimator.

GARCH parameters estimation
We first fix some notation. Let C denote positive generic constants whose values are independent of θ, n, and m and may change from occurrence to occurrence. In this section, we develop an estimation procedure for the GARCH parameters θ^g = (ω^g_H, ω^g_L, γ, α^g_H, α^g_L, β^g_H, β^g_L), which are the minimum parameters required to evaluate the GARCH volatilities defined in Theorem 1; the elements of θ^g are defined in the supplement document. We denote the true GARCH parameter by θ^g_0 = (ω^g_{H,0}, ω^g_{L,0}, γ_0, α^g_{H,0}, α^g_{L,0}, β^g_{H,0}, β^g_{L,0}). Theorem 1 indicates that integrated volatilities can be decomposed into the GARCH volatility terms h^H_n(θ^g_0) and h^L_n(θ^g_0) and the martingale difference terms D^H_n and D^L_n. This fact inspires us to use the integrated volatilities as proxies for the GARCH volatilities. Then, as the sample period goes to infinity, the martingale convergence theorem may provide consistency of the estimators.
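For concreteness, here is a toy generation of one trading day of noisy open-to-close observations under the additive-noise structure above; the constant volatility and the noise variance are illustrative stand-ins, not the paper's simulation design.

import numpy as np

# One day of observations Y = X + eps on a grid of m_d intervals; a constant
# volatility path stands in for the OGI dynamics.
rng = np.random.default_rng(42)
m_d, sigma, eta_d = 390, 0.01, 5e-4
true_x = np.cumsum(sigma * np.sqrt(1.0 / m_d) * rng.standard_normal(m_d + 1))
obs_y = true_x + np.sqrt(eta_d) * rng.standard_normal(m_d + 1)   # add noise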
Several nonparametric estimators of the integrated volatility are available (Corsi et al., 2010; Fan and Wang, 2007; Xiu, 2010; Zhang, 2006; Zhang et al., 2016), and we call these nonparametric estimators "realized volatility." Under mild conditions, we can show that realized volatility converges to the integrated volatility with the optimal convergence rate m^{−1/4} (Barndorff-Nielsen et al., 2008; Tao et al., 2013; Xiu, 2010; Zhang, 2006). In the numerical study, we employ the jump-robust pre-averaging realized volatility (PRV) estimator (Aït-Sahalia and Xiu, 2016). Theorem 1 also implies that the squared close-to-open return can be decomposed into the GARCH volatility and a martingale difference. That is, we have the following relationships:

∫_{n−1}^{λ+n−1} σ^2_t(θ_0) dt = h^H_n(θ^g_0) + D^H_n,    (X_n − X_{λ+n−1})^2 = h^L_n(θ^g_0) + D^{LL}_n,

where D^{LL}_n = D^L_n + 2 ∫_{λ+n−1}^{n} (X_t − X_{λ+n−1}) σ_t(θ_0) dB_t. We use the above relationships to estimate the GARCH parameter θ^g_0. The variances of the martingale differences D^H_n and D^{LL}_n indicate the accuracy of the GARCH volatility information coming from the proxies ∫_{n−1}^{λ+n−1} σ^2_t(θ_0) dt and (X_n − X_{λ+n−1})^2, so the proxy with the smaller variance is closer to the corresponding GARCH volatility. Thus, if we incorporate the variance information into the estimation procedure, we can expect to improve its performance. For example, we can standardize the proxies by the variances of the corresponding martingale differences; the resulting unit expectations help to assign a larger weight to the more accurate proxy. In the empirical study, we find that the variance of the integrated volatilities is smaller than that of the squared close-to-open returns. That is, the open-to-close proxy is more accurate, so we make more use of the information from the open-to-close period by assigning to it a larger weight. To compare the proxies and the GARCH volatilities, we employ weighted least squares estimation, in which each squared deviation between a proxy and its GARCH volatility is weighted by the inverse of the corresponding martingale-difference variance, and φ̂_H and φ̂_L are consistent estimators of the variances of the martingale differences D^H_n and D^{LL}_n, respectively. To evaluate this quasi-likelihood function, we first need to estimate the integrated volatility IV_i. It can be estimated by the realized volatility estimator, which is denoted by RV_i. Then we estimate the GARCH volatilities by the recursions (3.1) and (3.2). We note that the conditional expected volatilities for the open-to-close and close-to-open periods have the common γ, as in (3.1) and (3.2), which makes it possible to have the GARCH form for the whole-day conditional expected volatility. To evaluate the GARCH volatilities, we use RV_1 and the sample variance of the close-to-open log-returns as the initial values h^H_0(θ^g) and h^L_0(θ^g), respectively. The effect of the initial value has the negligible order n^{−1} (see Lemma 1 in Kim and Wang (2016)), so its choice does not significantly affect the parameter estimation. With these estimators, we define the quasi-likelihood function L̂_{n,m}(θ^g), and we obtain the estimator of the GARCH parameters θ^g_0 by maximizing the quasi-likelihood function. That is,

θ̂^g = arg max_{θ^g ∈ Θ^g} L̂_{n,m}(θ^g),

where Θ^g is the parameter space of θ^g. We call this estimator the weighted least squares estimator (WLSE). To obtain the variances of the martingale differences, φ_H and φ_L, we employ the QMLE method as follows. We define quasi-likelihood functions for the open-to-close and close-to-open periods, respectively, and find their maximizers, which are denoted by θ̂^g_H and θ̂^g_L. Using the residuals, we estimate the variances of the martingale differences. Similar to the proofs of Theorems 3 and 5 in Kim and Wang (2016), we can establish their consistency.
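The sketch below spells out one concrete version of this weighted least squares criterion. The one-lag recursions stand in for the paper's (3.1)-(3.2), whose exact coefficient structure is defined in the supplement, so treat this as an illustration of the inverse-variance-weighted, common-γ objective rather than the authors' exact likelihood.

import numpy as np
from scipy.optimize import minimize

def wls_objective(params, rv, r2, phi_h, phi_l):
    """Illustrative weighted least squares loss for the two proxies.

    rv:  open-to-close realized volatilities (proxy for h^H)
    r2:  squared close-to-open log-returns   (proxy for h^L)
    phi_h, phi_l: first-step variance estimates of the martingale differences.
    Maximizing the quasi-likelihood corresponds to minimizing this loss.
    """
    omega_h, omega_l, gamma, alpha_h, alpha_l = params
    h_h = rv[0]                 # initial values, as in the paper:
    h_l = np.mean(r2)           # RV_1 and the overnight-return sample variance
    loss = 0.0
    for i in range(1, len(rv)):
        h_h = omega_h + gamma * h_h + alpha_h * rv[i - 1]   # stand-in for (3.1)
        h_l = omega_l + gamma * h_l + alpha_l * r2[i - 1]   # stand-in for (3.2)
        loss += (rv[i] - h_h) ** 2 / phi_h + (r2[i] - h_l) ** 2 / phi_l
    return loss / len(rv)

# e.g. theta_hat = minimize(wls_objective, x0=[0.1, 0.1, 0.4, 0.3, 0.3],
#                           args=(rv, r2, phi_h, phi_l), method="Nelder-Mead").x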
Remark 3. There are other possible choices of the variance of the martingale differences. For example, we can use the conditional variances in Theorem 1(b) to evaluate the quasi-likelihood function (3.3). However, the conditional variance heavily depends on the underlying OGI process, which may cause some bias when the underlying model is misspecified. Thus, to make robust inferences, we use the unconditional variance instead of the conditional variance. Furthermore, the proposed procedure has a simpler structure, which may help to reduce estimation errors. We note that the proposed two-step weighted least squares estimation procedure works well as long as the first-step variance estimators are consistent. Thus, we can easily incorporate other variance estimators. According to our empirical analysis, the unconditional variance estimator provides more stable results than the conditional variance estimator. Thus, we use the unconditional variance and only report its related results. If we could estimate the conditional variance in a robust way, it might show better performance. However, obtaining this robustness is not straightforward, because we need to impose structure on the process to evaluate the conditional variance. We leave this for a future study.

To establish asymptotic properties for the proposed WLSE, we need the following technical assumptions.

Assumption 1. (1) The parameter space is compact: each component of θ^g lies between known positive constants ω_l, ω_u, γ_l, γ_u, α_l, α_u, β_l, β_u; here ‖·‖_2 denotes the matrix spectral norm, and A denotes the limiting matrix characterized in the supplement. (2) We have, for some positive constant C, a finite fourth-moment bound. The remaining conditions (3)-(5) are regularity conditions; see Remark 4 for (4) and (5).

Remark 4. Assumption 1(2) is the finite fourth-moment condition, which is the minimum requirement when handling a second-moment target parameter. Under some finite fourth-moment conditions, Assumption 1(4) is satisfied (Tao et al., 2013). However, when there is a jump part in the diffusion process, this condition may be violated. In this case, we need to employ some jump-robust realized volatility estimator (Aït-Sahalia and Xiu, 2016; Zhang et al., 2016) and derive a uniform convergence with respect to time d. Finally, Assumption 1(5) is required to derive the asymptotic normal distribution of the proposed WLSE.

The following theorem investigates the asymptotic behavior of the proposed WLSE θ̂^g.

Theorem 2. Under Assumption 1, the WLSE θ̂^g converges to θ^g_0 at the rate m^{−1/4} + n^{−1/2}.

Remark 5. Theorem 2 shows that the WLSE θ̂^g has the convergence rate m^{−1/4} + n^{−1/2}. The first term, m^{−1/4}, comes from estimating the integrated volatility; this is known as the optimal convergence rate for high-frequency data in the presence of microstructure noise. The second term, n^{−1/2}, is the usual convergence rate in the low-frequency data case. Under the stationarity assumption, we also derive the asymptotic normality.

Remark 6. To derive the asymptotic normality, we need the condition n m^{−1/2} → 0, which is too restrictive for a long sample period. If this condition is violated, the asymptotic normality may depend on m^{1/4}(RV_d − IV_d), the quantity related to the high-frequency estimation. If this term is a martingale difference, we may be able to relax the condition to n m^{−1} → 0; in this case, since m is usually huge, the condition is not restrictive.

One of our objectives in this paper is to predict future volatility. The best predictor given the currently available information F_n is the conditional expected volatility, that is, the GARCH volatility h_{n+1}(θ^g_0). With the model parameter estimator, we estimate the GARCH volatility ĥ_{n+1}(θ̂^g) by plugging the WLSE into the recursion of Theorem 1, where the GARCH parameters ω^g, α^g, and β^g are estimated using the plug-in method with the WLSE θ̂^g.
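Under a recursion of GARCH form, and assuming the innovation's conditional expectation equals the current conditional volatility (as holds for an in-model proxy), multi-period predictions iterate with persistence γ + α; a minimal sketch, with placeholder parameter names:

# Illustrative multi-period forecast: with E[innovation_n | F_n] = h_n,
# the k-step-ahead conditional volatility iterates as below.
def multi_period_forecast(h_n, omega, gamma, alpha, k):
    h = h_n
    for _ in range(k):
        h = omega + (gamma + alpha) * h
    return h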
The following corollary provides the consistency of the GARCH volatility estimator.

Hypothesis tests
In financial practice, we are interested in the GARCH parameters (ω^g, γ, α^g, β^g) and often make statistical inferences about them, such as hypothesis tests. In this section, we discuss how to conduct hypothesis tests for the GARCH parameters. We first derive the asymptotic distribution of the GARCH parameter estimators. Theorem 2 implies the asymptotic normality of θ̂^g, and the GARCH parameters are functions f(θ^g) of θ^g. Thus, using the delta method and Slutsky's theorem, we can show that, when the gradient ∂f(θ^g)/∂θ^g is nonzero, the studentized statistic built from f(θ̂^g), the plug-in gradient ∂f(θ^g)/∂θ^g evaluated at θ^g = θ̂^g, and consistent estimators Â and B̂ of A and B is asymptotically standard normal. To evaluate the asymptotic variances of the GARCH parameter estimators, we first need to estimate A and B. We use sample-analogue estimators, in which h^H_i(θ^g) and h^L_i(θ^g) are defined in (3.1) and (3.2), respectively. Under some stationarity condition, we can establish their consistency. Then, using the proposed Z-statistics T_{f,n} in (3.8), we can conduct the hypothesis tests based on the standard normal distribution.

A simulation study
We conducted simulations to check the finite sample performance of the proposed estimation methods. We generated the log-prices for n days with frequency 1/m_all for each day, on the grid t_{d,j} = d − 1 + j/m_all, d = 1, . . . , n, j = 0, . . . , m_all. We chose the trading-period length λ as 6.5/24, which corresponds to 6.5 trading hours. The true log-price follows the OGI model in Definition 1. The parameter setup is presented in the supplement document. To generate the jumps, we simply set the jump size as |J_t| = 0.05, and the signs of the jumps were randomly generated. Λ_t was generated using a Poisson distribution with mean 10 during the open-to-close period. For the open-to-close period, we generated noisy observations; the detailed setup can be found in the supplement document. To generate the true process, we chose m_all = 43,200, which equals the number of 2-second intervals in a one-day period. We varied n from 100 to 500 and m from 390 to 11,700, which correspond to 1-minute and 2-second observations in the open-to-close period, respectively. We repeated the whole procedure 500 times. To check the asymptotic normality of the GARCH parameters (ω^g, γ, α^g, β^g), we calculated the Z-statistics defined in Section 3.3. Figure 1 draws the standard normal quantile-quantile plots of the Z-statistics of ω^g, γ, α^g, and β^g for n = 500 and m = 390, 1170, 11700, and for the true volatility. Figure 1 shows that, as the realized volatility approaches the true integrated volatility, the Z-statistics approach the standard normal distribution. This result agrees with the theoretical conclusions in Section 3. Thus, based on the proposed Z-statistics, we can conduct hypothesis tests using the standard normal distribution. One of our main goals in this paper is to predict future volatility. We therefore examined the out-of-sample performance of estimating the one-day-ahead GARCH volatility h_{n+1}(θ_0). To estimate the future GARCH volatility, we employed the proposed conditional GARCH volatility estimator ĥ_{n+1}(θ̂), the realized GARCH volatility estimator (Hansen et al., 2012; Song et al., 2020), and several benchmarks fitted to the same data.
For example, the realized GARCH volatility has the GARCH form h_n(θ) = ω + γ h_{n−1}(θ) + α RV_{n−1}, and the discrete GARCH(1,1) has the GARCH form h_n(θ) = ω + γ h_{n−1}(θ) + α (X_{n−1} − X_{n−2})^2, with the squared daily log-return as the innovation. We then adopted the QMLE method with the Gaussian quasi-likelihood function to estimate their model parameters. We also estimate the GARCH parameters (ω^g, γ, α^g, β^g) using the QMLE method with RV + OV as the proxy; we call this the aggregated OGI (A-OGI) model. We note that this model can be considered as the realized GARCH model with an additional overnight innovation term. We measure the mean absolute errors over the one-day-ahead sample period and the 500 samples as

(1/500) Σ_{i=1}^{500} | h_{n+1}(θ_0) − var̂_{n+1,i} |,

where var̂_{n+1,i} is one of the above future volatility estimators for the ith sample path, given the available information at time n. We report the mean absolute errors for the OGI, S-OGI, A-OGI, realized GARCH, adjusted realized GARCH, GARCH, and sample variance with respect to the OGI, against varying numbers n of low-frequency observations and m of high-frequency observations, in Table 2. In Table 2, we find that the OGI models estimate the one-day-ahead GARCH volatility h_{n+1}(θ_0) well, whereas the other estimators cannot account for it well. This may be because, under the OGI model, the market dynamics are explained by both the open-to-close high-frequency volatility and the squared close-to-open log-returns, while the other models ignore one of these factors. Comparing estimation methods for the OGI models, the WLSE yields better performance than the others. One possible explanation is that the WLSE procedure gives more weight to the high-frequency observations, which helps reduce the estimation errors. From these results, we can conjecture that modeling an appropriate overnight process helps not only to account for market dynamics but also to improve the estimation accuracy.

In the empirical study, the trading period is 6.5 hours, that is, λ = 6.5/24. We used the log-prices and adopted the jump-robust PRV estimation procedure given in the supplement document to estimate the open-to-close integrated volatility. We chose the tuning parameter c_τ as 10 times the sample standard deviation of the pre-averaged prices m^{1/8} Ȳ(t_{d,k}). We fixed the in-sample period at 500 days and used a rolling window scheme to estimate the parameters. To check the relative importance of each OGI model component, we report the average proportion of jumps and the mean and standard deviation of the PRV, the squared overnight return, and the estimated GARCH volatility from the OGI model in Table 3. From Table 3, we find that the magnitude of the squared overnight returns is comparable to that of the PRV, and that the squared overnight returns have a greater standard deviation. This result leads us to conjecture that the overnight risk usually significantly affects the volatility dynamics. For comparison, we calculated the OGI, S-OGI, A-OGI, discrete GARCH(1,1), and adjusted realized GARCH volatilities defined in Section 4, and the GJR GARCH(1,1) (Glosten et al., 1993). To check the performance of ARFI-type models, we adopted the HAR-RV model (Corsi, 2009) and the log-HAR-RV model with bias correction (Demetrescu et al., 2020). For the log-HAR-RV model, we apply the HAR model to the logarithm of the realized volatility and multiply the forecast value by exp(σ̂^2/2), where σ̂^2 is a consistent estimator of the error variance of the HAR model on the log-realized volatility.
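A compact sketch of the bias-corrected log-HAR forecast just described follows; the daily/weekly/monthly (1, 5, 22 lag) HAR design is the standard one from Corsi (2009) and is assumed here rather than taken from the paper.

import numpy as np

def log_har_forecast(rv):
    """HAR regression on log(RV) with the exp(sigma^2/2) bias correction.

    Regressors: yesterday's log RV and its 5- and 22-day averages, fitted by
    OLS; the forecast is exponentiated and corrected for the log-normal bias.
    """
    y = np.log(rv)
    rows, resp = [], []
    for t in range(22, len(y)):
        rows.append([1.0, y[t - 1], y[t - 5:t].mean(), y[t - 22:t].mean()])
        resp.append(y[t])
    X, Y = np.array(rows), np.array(resp)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ beta) ** 2)          # error variance estimate
    x_next = np.array([1.0, y[-1], y[-5:].mean(), y[-22:].mean()])
    return np.exp(x_next @ beta) * np.exp(sigma2 / 2.0)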
Then, we magnified the HAR and log-HAR forecasts by multiplying by (1 + mean[OV/RV]) to match the scale of the whole-day variation; these are called "adjusted HAR" and "adjusted log-HAR," respectively. To check the leverage effect, we also considered variations of the OGI model in which the innovation terms are augmented with leverage terms, with c_H and c_L as additional parameters; to estimate the parameters, we adopted the QMLE method, and this variant is referred to as GJR-OGI. To measure the performance of the volatility estimators, we used the mean squared prediction error (MSPE) and QLIKE (Patton, 2011), where Vol_i is one of the OGI, S-OGI, A-OGI, GJR-OGI, GJR GARCH, discrete GARCH, adjusted realized GARCH, adjusted HAR, and adjusted log-HAR volatilities, and RV_i + (X_i − X_{λ+i−1})^2 serves as the nonparametric daily volatility estimator. We predicted the one-day-ahead conditional expected volatility using the in-sample period data. Also, to check the significance of the differences in performance, we conducted the Diebold and Mariano (DM) test (Diebold and Mariano, 1995) for the MSPE and QLIKE, comparing the OGI with the other models. Table 4 reports the average rank and the number of first ranks of the MSPEs and QLIKEs for the nine models over the five assets. Table 5 reports the full MSPE and QLIKE results, and Table 6 shows the p-values for the DM tests. From Tables 4 and 5, we find that incorporating the squared overnight returns as a new innovation helps explain the market dynamics. From Tables 4-6, we find that the OGI model shows the best performance overall. This may be because the weighted least squares estimation method helps improve the volatility prediction accuracy.

Table 4: Average rank of MSPEs and QLIKEs for the OGI, S-OGI, A-OGI, GJR-OGI, GJR, discrete GARCH, adjusted realized GARCH, adjusted HAR, and adjusted log-HAR. In parentheses, we report the number of first ranks among competitors.

The mean-variance utility function is defined as the expected return minus ξ/2 times the variance of the return, where ξ > 0 is the risk aversion coefficient and R_i is the daily log-return. The optimal allocation then has a closed form, which we truncated to make the investment feasible and prevent short-selling. Finally, the resulting returns R*_i = x̄*_i R_i were used to measure the economic performance. Specifically, with the mean R̄* and standard deviation S* of the returns R*_i, we calculated the Sharpe ratio SR* = R̄*/S* and the expected utility EU* = R̄* − (ξ/2) S*^2. Table 7 reports the Sharpe ratios and expected utilities of the nine models for ξ = 2.5, 5 over the five assets. As seen in Table 7, the models with open-to-open information show better performance than the other models. This may indicate that considering the overnight period helps obtain additional economic gains. When comparing the models with open-to-open information, the OGI-based models do not significantly outperform the GJR and GARCH models. This may be because the future return estimator often has huge errors in practice. From this result, we can conjecture that the overnight period is significant in terms of investment strategy.
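The evaluation metrics of this section can be sketched as follows. The QLIKE normalization shown is one common variant from Patton (2011), and the clipped mean-variance weight μ̂/(ξσ̂^2) is the textbook allocation; both are stand-ins for the paper's exact definitions, and the return forecast mu_hat is a placeholder input.

import numpy as np

def mspe(vol_proxy, vol_pred):
    """Mean squared prediction error against the nonparametric daily proxy."""
    vol_proxy, vol_pred = np.asarray(vol_proxy), np.asarray(vol_pred)
    return np.mean((vol_proxy - vol_pred) ** 2)

def qlike(vol_proxy, vol_pred):
    """A common QLIKE form (Patton, 2011): minimized when pred equals proxy."""
    ratio = np.asarray(vol_proxy) / np.asarray(vol_pred)
    return np.mean(ratio - np.log(ratio) - 1.0)

def economic_performance(mu_hat, vol_forecast, realized_returns, xi=2.5):
    """Clipped mean-variance allocation, then Sharpe ratio and expected utility
    of the resulting strategy returns R* = w * R."""
    w = np.clip(np.asarray(mu_hat) / (xi * np.asarray(vol_forecast)), 0.0, 1.0)
    r_star = w * np.asarray(realized_returns)
    mean, sd = r_star.mean(), r_star.std(ddof=1)
    return {"sharpe": mean / sd, "utility": mean - 0.5 * xi * sd ** 2}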
To check the volatility persistence of the nonparametric volatility, we study the regression residuals between the nonparametric volatility and the estimated conditional volatilities. Specifically, we fitted a linear model of the nonparametric daily volatility on the predicted volatility, where Vol_i is one of the predicted volatilities OGI, S-OGI, A-OGI, GJR-OGI, GJR, discrete GARCH, adjusted realized GARCH, adjusted HAR, and adjusted log-HAR. Then, we calculated the regression residuals ε̂_i for each model and checked their autocorrelations over lags L = 1, . . . , 30. Table 8 reports the average rank and the number of first ranks for the first and maximum absolute autocorrelations of the nine models. In the supplement document, we draw the autocorrelation function (ACF) of the regression residuals for each model and asset. From Table 8 and the ACF plots, we find that the OGI, S-OGI, A-OGI, and GJR-OGI models show better performance than the other estimators overall. That is, the OGI-based models can explain the market dynamics in the volatility time series.

Table 8: Average ranks, in order from smallest to largest, of the first and maximum absolute autocorrelations over lags L = 1, . . . , 30 for the OGI, S-OGI, A-OGI, GJR-OGI, GJR, discrete GARCH, adjusted realized GARCH, adjusted HAR, and adjusted log-HAR. In parentheses, we report the number of first ranks among competitors.

To backtest the estimated VaR, we conducted the likelihood ratio unconditional coverage (LRuc) test (Kupiec, 1995), the likelihood ratio conditional coverage (LRcc) test (Christoffersen, 1998), and the dynamic quantile (DQ) test with lag 4 (Engle and Manganelli, 2004). Table 9 reports the number of cases where the p-value is larger than 0.05 over the five assets and q_0 = 0.01, 0.02, 0.05, 0.1, 0.2, based on the LRuc, LRcc, and DQ tests. In the supplement document, we draw scatterplots of the p-values of the LRuc, LRcc, and DQ tests for the nine models with q_0 = 0.01, 0.02, 0.05, 0.1, 0.2. As seen in Table 9 and the scatterplots, the OGI model shows the best performance for all hypothesis tests. This result shows that the overnight risk is important for accounting for whole-day market dynamics, and the OGI process can account for market dynamics by utilizing the overnight risk information. In contrast, the other OGI-based models show relatively worse performance. This finding prompts us to speculate that estimating the open-to-close and close-to-open periods separately, with the weighted least squares estimation method under the common γ condition, may help improve estimation accuracy.

Conclusion
In this paper, we introduce a diffusion process which can explain the whole-day volatility dynamics. Specifically, the proposed OGI model can accommodate different dynamic structures for the open-to-close and close-to-open periods. To estimate it, we introduced the weighted QMLE procedure and showed its asymptotic properties. In the empirical study, we found clear benefits from incorporating the overnight information. The models with an overnight innovation term perform better than the other models in the prediction of daily volatility, the utility-based analysis, and the volatility persistence analysis. This suggests that incorporating the overnight information helps account for the dynamic structure of the daily total variation. On the other hand, the OGI model outperforms the other OGI-based models in the prediction of daily volatility, the analysis of volatility persistence, and the one-day-ahead VaR measurement. This reveals that the weighted least squares estimation method with the common γ condition helps obtain better estimation accuracy.
Tao, M., Wang, Y., and Chen, X. (2013). Fast convergence rates in estimating large volatility matrices using high-frequency financial data. Econometric Theory, 29(4).

Supplement. For the open-to-close period, we generated the noisy observations according to the additive noise structure of Section 3. We treated Y_{t_{d,j}}, j = 1, . . . , m − 1, as the high-frequency observations, and the open and close prices X_{t_{d,0}} and X_{t_{d,m}} as the observed log-prices. To estimate the integrated volatility for the open-to-close period, we employed the jump-adjusted pre-averaging realized volatility estimator (Aït-Sahalia and Xiu, 2016).

Proof sketches. (a) For 0 < α_H < 1 and n ∈ N, the decomposition of Theorem 1(a) holds, with the remainder a martingale difference. For the consistency of the WLSE, first consider |L̂_{n,m}(θ) − L_n(θ, θ_0)|. By Assumption 1(4), we obtain the bound (C.8), and a similar bound for the companion term. Then, since the IV_i's and X_i's are nondegenerate random variables, the limit criterion vanishes only at θ* = θ_0 a.s., which shows that the maximizer is unique. Finally, the result is a consequence of Theorem 1 in Xiu (2010). For the asymptotic normality, the mean value theorem and a Taylor expansion indicate that, for some θ* between θ_0 and θ̂, the estimation error is governed by the score ŝ_{n,m} and its derivative. By Assumption 1(1) and (5), h^H_n(θ_0) and h^L_n(θ_0) are stationary. Since (X_n − X_{λ+n−1})^2 and IV_n are functions of h^H_n(θ_0), h^L_n(θ_0), D^H_n, and D^{LL}_n, they are stationary as well. Then, similar to the proof of Proposition C1, we can show −∂ŝ_{n,m}(θ*) →_p −∂s_n(θ_0). Since the IV_i's and X_i's are nondegenerate, −∂s_n(θ_0) is almost surely positive definite, and by the ergodic theorem, −∂s_n(θ_0) →_p 2A.
Square-densities, and volume forms

Heron's formula from antiquity, for the area of a triangle, is used to relate the volume form and the infinitesimal square-volume of certain infinitesimal simplices in a Riemannian manifold.

Introduction
The Greek geometers (Heron et al.) discovered a remarkable formula, expressing the area of a triangle in terms of the lengths of the three sides. Here, length and area are seen as non-negative numbers, which involves, in modern terms, formation of absolute value and square root. To express the notions and results involved without these non-smooth constructions, one can express the Heron Theorem in terms of the squares of the quantities in question: if g(A, B) denotes the square of the length of the line segment given by A and B, the Heron formula says that the square of the area of the triangle ABC may be calculated by a simple algebraic formula out of the three numbers g(A, B), g(A, C), and g(B, C). Explicitly, the formula appears in (1) below. In modern terms, the formula is (except for a combinatorial constant -16^{-1}) the determinant of a certain symmetric 4 × 4 matrix constructed out of the three numbers; see (2) below. This determinant, called the Cayley-Menger determinant, generalizes to simplices of higher dimensions, so that e.g. the square of the volume of a tetrahedron (3-simplex) (ABCD) in space is given (except for a combinatorial constant) by the determinant of a certain 5 × 5 matrix constructed out of the six square lengths of the edges of the tetrahedron (by a formula already known in the Renaissance). The Heron formula has the advantage that it is symmetric w.r.t. permutations of the k + 1 vertices of a k-simplex. Also, it does not refer to the vector space or affine structure of the ambient space. We shall in particular consider the case where the space in which the k-simplex lives is a Euclidean space: an affine space E whose associated vector space V is provided with a positive definite inner product. Then the square lengths, square areas, square volumes, etc. of the simplices can also be calculated by another well known and simple expression: namely as (1/k!)^2 times the Gram determinant of a certain k × k matrix constructed from the simplex, by choosing one of its vertices as origin. The Gram determinant itself expresses the square volume of the parallelepipedum spanned by the k vectors in V that go from the origin to the remaining vertices. An important difference between the two formulae is the (k + 1)!-fold symmetry of the Heron formula, whereas the Gram formula is a priori only k!-fold symmetric, because of the special role of the chosen origin. The Gram method of calculating the square-volumes has the advantage that it is easy to describe algebraically; in particular, it is easy to describe what happens if one changes the metric. This is needed when dealing with Riemannian manifolds, where the metric tensor, in any given coordinate chart, changes from point to point. We begin in Section 1 by recalling the classical case of Euclidean spaces. In particular, we recall the comparison (standard, but nontrivial) between the Heron and Gram calculations. This Section is essentially a piece of standard linear algebra. In Section 2, we recall or introduce the notions of differential form and square density in the combinatorial versions from synthetic differential geometry (SDG). This leads to synthetic, or combinatorial, arguments based on "infinitesimal" simplices and their square volume.
In Section 3, we relate (in terms of SDG) the volume form of an n-dimensional Riemannian manifold to the volume of certain infinitesimal n-simplices. This Section contains the main theorem, where we, for a Riemannian manifold of dimension n, compare the square volume of n-simplices given, respectively, by the Heron formula and by the (valuewise) square of the volume form. Throughout, R denotes "the" number line, a commutative ring with suitable properties to be described when needed. In particular, the notion of positivity, and of when a quadratic form over R is positive definite, is recalled at the beginning of Section 4. I do not know whether positive definiteness plays a role in the algebraic arguments in the first three Sections, except that the use of the phrases "square length", . . . , "square volume", etc. is somewhat misleading in the indefinite case. It is useful to think of the quantities occurring as quantities whose physical dimension is some power of length (measured in meter m, say), so that length is measured in m, area in m^2, square area in m^4, etc. Tangent vectors are not used in the following; they would have physical dimension m·t^{-1} (velocity). The word square-density is used in any dimension. Square length, square area, and square volume are examples, but we do not claim that the square densities considered presently have such geometric significance. The theory developed here was also attempted in my [5]; I hope that the present account will be less ad hoc.

Heron's formula
The basic idea for the construction of a square k-volume function goes, for the case k = 2, back to Heron of Alexandria (perhaps even to Archimedes); they knew how to express the square of the area S of a triangle (whether located in Euclidean 2-space or in a higher dimensional Euclidean space) in terms of an expression involving only the lengths a, b, c of the three sides:

S^2 = t(t − a)(t − b)(t − c),   (1)

where t = (1/2)(a + b + c). Substituting for t, and multiplying out, one discovers (cf. [2] 1.53) that all terms involving an odd number of any of the variables a, b, c cancel, and we are left with an expression that only involves the squares a^2, b^2 and c^2 of the lengths of the sides:

S^2 = (1/16)(2a^2 b^2 + 2a^2 c^2 + 2b^2 c^2 − a^4 − b^4 − c^4).

The expression in the parenthesis here may be written in terms of the determinant of a 4 × 4 matrix (described in (2) below), which makes it possible to generalize from 2-simplices (= triangles) to k-simplices, in terms of determinants of certain (k + 2) × (k + 2) matrices, "Cayley-Menger matrices/determinants"; they again only involve the square lengths of the (k + 1)k/2 edges of the simplex. A k-simplex X in a space M is a (k + 1)-tuple of points (vertices) (x_0, x_1, . . . , x_k) in M. If g : M × M → R satisfies g(x, x) = 0 and g(x, y) = g(y, x) for all x and y (like a metric dist(x, y), or its square), one may construct a (k + 2) × (k + 2) matrix C(X) by the following recipe: first take the (k + 1) × (k + 1) matrix whose ij-th entry is g(x_i, x_j). It has 0s down the diagonal and is symmetric, by the two assumptions about g. Enlarge this matrix to a (k + 2) × (k + 2) matrix by bordering it with (0, 1, . . . , 1) on the top and on the left. The case k = 2 is depicted here (writing g(ij) for g(x_i, x_j) for brevity; note g(01) = g(10) etc., so that the matrix is symmetric):

( 0  1      1      1     )
( 1  0      g(01)  g(02) )     (2)
( 1  g(01)  0      g(12) )
( 1  g(02)  g(12)  0     )

(The indices of the rows and columns are most conveniently taken to be −1, 0, 1, 2.) This is the Cayley-Menger matrix C for the simplex, and its determinant is its Cayley-Menger determinant. Heron's formula then says that the value of this determinant is, modulo the "combinatorial" factor -16^{-1}, the square of the area of a triangle with vertices x_0, x_1, x_2, as expressed in terms of the squares g(x_i, x_j) of the distances between them. Similarly for (square-)volumes of higher dimensional simplices.
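As a quick sanity check (an addition, not part of the original text), the routine below builds the bordered matrix of (2) from given square distances and recovers the square area of the 3-4-5 right triangle from -16^{-1} times the determinant.

import numpy as np

def cayley_menger_det(sq_dist):
    """sq_dist: (k+1) x (k+1) symmetric matrix of square distances g(x_i, x_j),
    with zeros on the diagonal. Returns the Cayley-Menger determinant."""
    k1 = sq_dist.shape[0]                 # k + 1 vertices
    C = np.ones((k1 + 1, k1 + 1))
    C[0, 0] = 0.0                         # border with (0, 1, ..., 1)
    C[1:, 1:] = sq_dist
    return np.linalg.det(C)

# 3-4-5 right triangle: square side lengths 9, 16, 25; area 6, square area 36
g = np.array([[0.0, 9.0, 25.0],
              [9.0, 0.0, 16.0],
              [25.0, 16.0, 0.0]])
print(-cayley_menger_det(g) / 16.0)       # -> 36.0 (up to rounding)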
Note that no coordinates are used in the construction of this matrix/determinant. We shall in the following denote the square volume of a k-simplex X, as calculated by the Heron-Cayley-Menger formula, by Heron(X) (provided of course that we have some data giving us the "square distance" g(x_i, x_j) between its vertices).

Proposition 1.1 Heron(X) is invariant under the (k + 1)! permutations of the vertices of the simplex.

Proof. Interchanging the vertices x_i and x_j has the effect of first interchanging the ith and jth columns, and then interchanging the ith and jth rows, of the Cayley-Menger matrix. Each of these changes multiplies the determinant by a factor −1, so the determinant is unchanged.

Gram's formula
Given a k-simplex X = (x_0, x_1, . . . , x_k) in a Euclidean space E with associated vector space V with an inner product. If V = R^n with the standard inner product, we may form an n × k matrix Y with columns y_i = x_i − x_0 (i = 1, . . . , k), and consider the determinant of the k × k matrix Y^T · Y. The determinant itself is coordinate independent, i.e. it only depends on the inner product on V, not on coordinatizing V by R^n. This determinant likewise has a volume-theoretic significance: it gives the square of the volume of the parallelepipedum spanned by the k vectors y_1, . . . , y_k. The following Proposition is only included for a comparison with the issue of (k + 1)!-symmetry of the formulae.

Proposition 1.2 The Gram determinant for a k-simplex is invariant under the (k + 1)! symmetries of the k-simplex.

Proof. It suffices to prove this for the case where V = R^n with the standard inner product. Interchanging x_i and x_j, for i and j ≥ 1, implies an interchange of the corresponding columns in the Y-matrix, and the interchanged matrix comes about by multiplying Y on the right by the k × k matrix S obtained from the unit matrix by interchanging its ith and jth columns. This S has determinant −1. So S^T · Y^T · Y · S has the same determinant as Y^T · Y. Interchanging x_0 and x_j in the simplex corresponds, using y_i = x_i − x_0, to multiplying the Y-matrix on the right by the matrix S_j, obtained from the unit k × k matrix by replacing its jth row by the row (−1, −1, . . . , −1). This S_j likewise has determinant −1, so the Gram determinant is again unchanged.

Comparison formula
For a Euclidean space E, it makes sense to compare the values of the Heron and Gram formulas for the square volume of a k-simplex X = (x_0, x_1, . . . , x_k). Let C denote the (k + 2) × (k + 2) ((Heron-)Cayley-Menger) matrix formed by the square distances between the vertices, as described above, and let Y^T · Y be the Gram k × k matrix of the simplex, likewise described above. There is a known relation between their determinants:

det C = (−1)^{k+1} · 2^k · det(Y^T · Y).   (3)

For a proof, see reference [10]. Note that the left hand side in (3) does not make use of the algebraic structure of E and its associated vector space, but only of the (square-)distance function (arising from the inner product). This flexibility will be crucial when we consider Riemannian manifolds. We denote the square volume of a simplex X, as calculated in terms of the Cayley-Menger matrix C, by Heron(X), and the square volume of the corresponding parallelepipedum, as calculated by Gram's method, by Gram(X). But we shall later have occasion to consider different fixed (positive definite) inner products G on one and the same vector space V, in which case we may extend the notation and write Heron_G and Gram_G to specify which inner product we use. The comparison (3) may then be formulated:

Gram_G(X) = (k!)^2 · Heron_G(X).

(The factor (k!)^2 is just because the volume of the parallelepipedum is k! times as large as that of the simplex itself.)
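The identity (3) is easy to confirm numerically; the snippet below (again an addition) checks it on a random 3-simplex in R^5, reusing cayley_menger_det from the previous sketch.

import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 5                                   # a 3-simplex in R^5
x = rng.standard_normal((k + 1, n))           # rows are the vertices x_0..x_k

Y = (x[1:] - x[0]).T                          # n x k matrix with columns x_i - x_0
gram = np.linalg.det(Y.T @ Y)                 # Gram determinant

sq = np.array([[np.dot(a - b, a - b) for b in x] for a in x])
cm = cayley_menger_det(sq)                    # Cayley-Menger determinant

print(np.isclose(cm, (-1) ** (k + 1) * 2 ** k * gram))   # -> True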
Remark. In terms of the physical dimensions alluded to in the Introduction: the volume of a k-simplex has dimension m^k, so its square volume has dimension (m^k)^2; the entries g(x_i, x_j) in the Cayley-Menger matrix have physical dimension m^2, and in expanding its determinant, all surviving terms are products of k of these entries. (The entries 0 and 1 in the top line and left column of the matrix are "pure" quantities, i.e. of dimension m^0.) So the value of the determinant is of physical dimension (m^2)^k. The Heron formula is then meaningful in the sense that it equates quantities of dimension (m^2)^k and (m^k)^2. In particular, the comparison between the square volumes of a k-simplex, as calculated by Heron-Cayley-Menger and by Gram, which is a consequence of (3), is dimensionally meaningful; both have physical dimension m^{2k}.

Differential forms and square densities
As in [4], say, we consider the following kind of structure on an object M in a category E with finite inverse limits: namely, subobjects M_{(r)} ⊆ M × M, each of the M_{(r)}s being a reflexive and symmetric relation, with M_{(0)} being the equality relation. We have in mind the "rth neighbourhood of the diagonal" of an affine scheme, as considered in algebraic geometry, or the "prolongation spaces" of manifolds, as considered in e.g. [6]. Except for M_{(0)}, these relations are not transitive. We are actually only interested in the cases r = 0, 1, 2. We use the well known "synthetic" language to express constructions in categories E with finite limits in "elementwise" terms; in particular we consider, for a natural number k, the object of r-infinitesimal k-simplices in M, meaning the subobject of M × M × . . . × M (k + 1 times) consisting of (k + 1)-tuples (x_0, x_1, . . . , x_k) of elements of M with (x_i, x_j) ∈ M_{(r)} for all i, j = 0, 1, . . . , k; such a (k + 1)-tuple we shall call an r-infinitesimal k-simplex; the x_i s are the vertices of the simplex. Note that the question of whether a k-simplex is r-infinitesimal only depends on the "edges" (x_i, x_j) (face-1-simplices) of the simplex; equivalently, it depends only on the 1-skeleton of the simplex. We shall, as in [4], write x_i ∼_r x_j for (x_i, x_j) ∈ M_{(r)}. In the context of SDG, we have that x ∼_r y in R^n is equivalent to: for any (r + 1)-linear function φ : R^n × . . . × R^n → R, we have φ(y − x, y − x, . . . , y − x) = 0. For r = 1 and r = 2, we shall consider certain maps from the object of r-infinitesimal k-simplices to R, namely maps which have the property that they vanish if x_i = x_j for some i ≠ j. For r = 1, combinatorial differential k-forms ω have this property. (In the context of SDG, such maps are automatically alternating with respect to the (k + 1)! permutations of the x_i s; see [4], Theorem 3.1.5.) For r = 2, such maps have not been considered much, except for the case k = 1, where (pseudo-)Riemannian metrics g, in the combinatorial sense (recalled after Definition 2.3 below), are examples of such maps; for this case, we think of g(x_0, x_1) as the square of the distance between x_0 and x_1. The gs of interest are symmetric: g(x_0, x_1) = g(x_1, x_0). For manifolds M, we have

Proposition 2.1 Let g be an R-valued function on M_{(2)} which vanishes on the diagonal M_{(0)}. Then g is symmetric if and only if g(x, y) = 0 whenever x ∼_1 y.

Proof. It suffices to consider an R^n chart around x; we consider the degree ≤ 2 part of the Taylor expansion of g around x. Then g is given by

g(x, y) = C(x) + Ω(x; y − x) + (y − x)^T · G(x) · (y − x),

where Ω is linear in the argument after the semicolon, and G(x) is a symmetric n × n matrix. To say that g vanishes on the diagonal M_{(0)} (i.e. g(x, x) = 0 for all x) is equivalent to saying that C(x) = 0 for all x.
We now compare g(x, y) and g(y, x); we claim that, for x ∼_2 y,

(y − x)^T · G(x) · (y − x) = (y − x)^T · G(y) · (y − x).   (5)

Taylor expanding the G(y) on the right hand side from x gives that the difference between the two sides is (y − x)^T · dG(x; y − x) · (y − x), which is trilinear in y − x, and therefore vanishes, since x ∼_2 y. So we have that if Ω vanishes, then g is symmetric; vice versa, if g is symmetric, its restriction to M_{(1)} is likewise symmetric, and (being a differential 1-form) it is alternating, so the Ω-part vanishes, which in coordinate free terms says: g(x, y) = 0 for x ∼_1 y.

For the number line R, (x_0, x_1) ∈ R_{(2)} iff (x_0 − x_1)^3 = 0, and the map g given by g(x_0, x_1) := (x_0 − x_1)^2 is a map as described in the Proposition. In fact, it is the restriction of the standard "square-distance" function R × R → R. So we recall, respectively pose, the following definitions, corresponding to r = 1 and r = 2. Let M be a manifold.

Definition 2.2 A (combinatorial) differential k-form on M is an R-valued function ω on the set of 1-infinitesimal k-simplices in M which is alternating with respect to the (k + 1)! permutations of the vertices of the simplex. Hence it vanishes on simplices where two vertices are equal.

Definition 2.3 A k-square-density on M is an R-valued function on the set of 2-infinitesimal k-simplices in M which is symmetric with respect to the (k + 1)! permutations of the vertices of the simplex, and which vanishes on simplices where two vertices are equal.

Note that for k = 1, Proposition 2.1 gives that 1-square-densities (square lengths) g have the property that they vanish not just on M_{(0)} (the diagonal), but also on M_{(1)}: g(x, y) = 0 if x ∼_1 y. So the notion of 1-square-density agrees with that of a (combinatorial) "differential quadratic form", as considered in [4], Section 8.1. (Combinatorial) differential quadratic 1-forms we shall also call pseudo-Riemannian metrics. As a bridge between square densities and differential forms, we pose the following auxiliary notion.

Definition 2.4 An extended k-form on M is an R-valued function ω̄ on the set of 2-infinitesimal k-simplices in M which vanishes on simplices where two vertices are equal.

Such an extended k-form restricts to a function on 1-infinitesimal k-simplices (and the restriction may or may not be a differential k-form; note that we did not put conditions like "alternating" or "symmetric" on extended k-forms).

Proposition 2.5 If two extended k-forms ω̄ and ω̄′ restrict to the same differential k-form ω on 1-infinitesimal k-simplices, then ω̄^2 = (ω̄′)^2.

Proof. We have to prove that ω̄(x_0, x_1, . . . , x_k)^2 = ω̄′(x_0, x_1, . . . , x_k)^2 for any 2-infinitesimal k-simplex (x_0, x_1, . . . , x_k). It suffices to do this in a coordinate patch around x_0, which we may assume is 0 ∈ R^n, in which case ω̄ and ω̄′ are functions Ω and Ω′ : D_2(n) × . . . × D_2(n) → R (k factors in the product). By the basic axiom scheme of SDG, the ring A of functions D_2(n) → R is of the form A = A_0 ⊕ A_1 ⊕ A_2, with A_0 the constant functions R^n → R, A_1 the linear functions R^n → R, and A_2 the (homogeneous) quadratic functions R^n → R. This A is a graded ring (only non-zero in degrees 0, 1 and 2). The ideal of functions vanishing at 0 is A_1 ⊕ A_2 ⊆ A. So the ideal of functions (D_2(n))^k → R which vanish if any of their arguments is 0 is the k-fold (symmetric) tensor product

(A_1 ⊕ A_2) ⊗ . . . ⊗ (A_1 ⊕ A_2) ⊆ A^{⊗k}.   (6)

The ring A^{⊗k} is k-graded, with e.g. the part of multidegree (1, . . . , 1) consisting of the k-linear functions (R^n)^k → R. By assumption, both Ω and Ω′ belong to the ideal (6). The assumption that both Ω and Ω′ restrict to the same differential k-form ω implies that Ω and Ω′ agree in their component of multidegree (1, . . . , 1) (this component being the coordinate expression of ω). Thus Ω′ = Ω + θ with θ of multidegree ≥ (1, . . . , 1) and of total degree ≥ k + 1.
The required equation is, in these terms, that $(\Omega + \theta)^2 = \Omega^2$, and this is a simple "counting degrees" argument in the $k$-graded ring $A^{\otimes k}$:
$$(\Omega + \theta)^2 = \Omega^2 + 2\,\Omega \cdot \theta + \theta^2. \quad (7)$$
Here, $\theta^2$ has total degree $\geq 2(k + 1) \geq 2k + 1$, so it is 0, since $A^{\otimes k}$ is 0 in total degrees $> 2k$; and $\theta$ is a linear combination of terms of multidegree of the form $(1, 1, \ldots, 1 + p, \ldots, 1)$ for $p \geq 1$, so $\theta \cdot \Omega$ is a linear combination of terms of multidegree of total degree $\geq 2k + p \geq 2k + 1$. So the two last terms in (7) are 0, and this proves the Proposition.

$k$-square-densities from 1-square-densities $g$

We shall argue that for 2-infinitesimal simplices $(x_0, \ldots, x_k)$, the Cayley-Menger determinants define square-densities. We already argued above that these determinants are symmetric: the value does not change when interchanging $x_i$ and $x_j$. We have to argue for the vanishing condition required. If $x_i = x_j$, then $g(x_i, x_m) = g(x_j, x_m)$ for all $m$, and this implies that the $i$th and $j$th rows in the Cayley-Menger matrix are equal, which implies that the determinant is 0. We denote the $k$-square-density corresponding to a 1-square-density $g$ by $\mathrm{Heron}_g$ (when $k$ is understood from the context).

$k$-square-densities from differential $k$-forms

Essentially this is the process of squaring (in $R$) the values, so it is tempting to denote the square-density which we are aiming for by $\omega^2$. Precisely: we get a well defined $k$-square-density out of a differential $k$-form by a two step procedure: 1) extend the given $k$-form $\omega$ to a suitable function $\overline{\omega}$, to allow as inputs not just 1-infinitesimal $k$-simplices, but also 2-infinitesimal $k$-simplices; and then 2) square $\overline{\omega}$ valuewise. "Suitable" means that $\overline{\omega}$ is an extended form in the sense of Definition 2.4, i.e. that it vanishes on simplices where two vertices are equal. We shall prove that such an extension $\overline{\omega}$ is possible; it is not unique: it depends on choosing a coordinate chart. But we shall prove that uniqueness holds after squaring.

The question of existence of such $\overline{\omega}$ is local, so let us assume that the manifold $M$ is an open subset of $R^n$. Then the $k$-form $\omega$ is given by a function $\Omega : M \times (R^n)^k \to R$, where for each $x_0 \in M$, the function $\Omega(x_0; -, \ldots, -) : (R^n)^k \to R$ is $k$-linear and alternating in the $k$ arguments; these arguments are arbitrary vectors in $R^n$; in particular, they may be of the form $x_i - x_0$ for $x_i \sim_2 x_0$, so the restriction of $\Omega(x_0; x_1 - x_0, x_2 - x_0, \ldots, x_k - x_0)$ to the set of 2-infinitesimal $k$-simplices defines an extension $\overline{\omega}$ of $\omega$, so
$$\overline{\omega}(x_0, x_1, \ldots, x_k) := \Omega(x_0; x_1 - x_0, \ldots, x_k - x_0).$$
In this form, the fact that $\omega$ is alternating w.r.t. the $k!$ permutations of the $x_i$s ($i = 1, \ldots, k$) can be read off from the fact that $\Omega(x_0; \ldots)$ is alternating. It is also alternating w.r.t. permutations involving $x_0$, as long as the $x_i$s are $\sim_1 x_0$; this can be seen from an easy Taylor expansion argument, see the proof of Theorem 3.1.5 in [4]. Now if we use $\Omega$ to construct the extension of $\omega$ to $\overline{\omega}$, defined on 2-infinitesimal $k$-simplices, the constructed $\overline{\omega}$ will still be alternating w.r.t. permutations of the $x_i$s for $i > 0$, but the Taylor expansion argument mentioned fails for the interchange of, say, $x_0$ and $x_1$: we cannot conclude that $\overline{\omega}(x_0, x_1, \ldots, x_k) = -\overline{\omega}(x_1, x_0, \ldots, x_k)$. This failure gets repaired by valuewise squaring:

Proposition 2.6 For any 2-infinitesimal $k$-simplex $(x_0, x_1, \ldots, x_k)$, we have
$$\overline{\omega}(x_0, x_1, x_2, \ldots, x_k)^2 = \overline{\omega}(x_1, x_0, x_2, \ldots, x_k)^2.$$

Proof. We shall only do the case $k = 1$. (For the more general case, the further argument is essentially the same as in the proof of Proposition 1.2 above.)
First, we have by a Taylor expansion from $x_0$:
$$\overline{\omega}(x_1, x_0) = \Omega(x_1; x_0 - x_1) = -\Omega(x_0; x_1 - x_0) - d\Omega(x_0; x_1 - x_0; x_1 - x_0) + (\text{trilinear term}).$$
The trilinear term vanishes, because $x_1 \sim_2 x_0$. Now we square, and get
$$\overline{\omega}(x_1, x_0)^2 = \Omega(x_0; x_1 - x_0)^2 + 2\,\Omega(x_0; x_1 - x_0) \cdot d\Omega(x_0; x_1 - x_0; x_1 - x_0) + (\text{quadrilinear term}).$$
The quadrilinear term vanishes because $x_1 \sim_2 x_0$, but also the term $\Omega \cdot d\Omega$ vanishes, because it is trilinear in $x_1 - x_0$. So we get $\overline{\omega}(x_1, x_0)^2 = \Omega(x_0; x_1 - x_0)^2 = \overline{\omega}(x_0, x_1)^2$, as desired.

We conclude that a differential $k$-form $\omega$ can be extended to an $\overline{\omega}$ (whose inputs are 2-infinitesimal $k$-simplices), such that $\overline{\omega}^2$ is $(k+1)!$-symmetric. (The extension constructed also clearly has the property that it vanishes if $x_i = x_j$ for some $i \neq j$.) Hence $\overline{\omega}^2$ is a square density. From Proposition 2.5, we therefore conclude that if two extended $k$-forms extend the same differential $k$-form $\omega$, the two resulting square-densities agree. Because of the Proposition, there is a well-defined "squaring" process, leading from differential $k$-forms to $k$-square-densities on a manifold $M$: extend the form $\omega$, and square the result. It is natural to denote this square density by $\omega^2$, with the understanding that it means $\overline{\omega}^2$ for any extended form $\overline{\omega}$ extending $\omega$.

Variable metric tensor

We consider a manifold $M$ which is embedded as an open subset of $R^n$ (elements of $R^n$ we write as $n \times 1$ matrices). A 1-square-density $g$ on $M$ can in this case be given by a metric tensor, i.e. by a family of symmetric $n \times n$ matrices $G(x)$ (for $x \in M$), such that for $x \sim_2 y$,
$$g(x, y) = (y - x)^T \cdot G(x) \cdot (y - x)$$
(which equals $(y - x)^T \cdot G(y) \cdot (y - x)$ by (5)). We shall also use the notation $G(x;; x - y) := (x - y)^T \cdot G(x) \cdot (x - y)$. Thus $G(x;; -)$ is quadratic in the argument after the double semicolon. The letter $G$ is used for the "metric tensor", i.e. for the family of the matrices $G(x)$. So this $G$ suffices to describe a Heron-Cayley-Menger matrix for any 2-infinitesimal $k$-simplex in $M$. We write $\mathrm{Heron}_G(X)$ for the determinant of this matrix. This $\mathrm{Heron}_G$ defines in fact a $k$-square density on $M$, for any $k$: metric tensors define square densities.

We shall prove (Proposition 3.2) that for a 2-infinitesimal $k$-simplex $(x_0, x_1, \ldots, x_k)$, the $G(x_i)$s occurring in the Cayley-Menger determinant for this simplex may all be replaced by $G(x_0)$, so that, for a given 2-infinitesimal $k$-simplex, we can use the comparison with the Gram description, available for constant metric tensors.

The terms in the Cayley-Menger determinant for a $k$-simplex $X$ are linear combinations of $k$-fold products $g(x_i, x_j)$ with $i \neq j$; in particular the product
$$\pm\, g(x_0, x_1) \cdot g(x_1, x_2) \cdot \ldots \cdot g(x_{k-1}, x_k) \quad (10)$$
is a term. (The other terms in the determinant come about from similar $k$-chains of adjacent 1-simplices, by permutation of the indices.) In terms of variable Riemannian tensors $G(x)$ (with the $G(x)$ symmetric $n \times n$ matrices), the product (10) is (possibly modulo sign) the displayed expression in the following Lemma 3.1. It is useful first to introduce some ad hoc terminology. A finite sequence of points $x_0, x_1, \ldots, x_k$ in $M$ which are consecutive 2-neighbours, i.e. $x_i \sim_2 x_{i+1}$ for $i = 0, \ldots, k - 1$, we shall for simplicity call a path of length $k$. If $\mathbf{x}$ is a path of length $k$, we get a path of length $k - 1$ by omitting the first vertex of the path. Let us denote this truncated path by $|\mathbf{x}$. We are interested in such paths in $M \subseteq R^n$ when $M$ is equipped with a Riemannian metric $g$, given by variable symmetric $n \times n$ matrices $G(x)$. So $g(x, y) = (x - y)^T \cdot G(x) \cdot (x - y)$. Then for a path $\mathbf{x} = (x_0, \ldots, x_k)$, as above, we write $G(\mathbf{x})$ for the product $g(x_0, x_1) \cdot \ldots \cdot g(x_{k-1}, x_k)$, i.e.
in coordinates
$$G(\mathbf{x}) = \prod_{i=0}^{k-1} G(x_i;; x_{i+1} - x_i), \quad (11)$$
and we write $\overline{G}(\mathbf{x})$ for the similar product, but with all the $x_i$s appearing before the double semicolon replaced by the first vertex $x_0$ of the path,
$$\overline{G}(\mathbf{x}) = \prod_{i=0}^{k-1} G(x_0;; x_{i+1} - x_i).$$
Thus in $\overline{G}(|\mathbf{x})$, the constant matrix used is $G(x_1)$, because the first vertex of $|\mathbf{x}$ is $x_1$.

Lemma 3.1 For any path $\mathbf{x}$ of length $k$, we have $G(\mathbf{x}) = \overline{G}(\mathbf{x})$.

Proof. By induction on the length $k$ of the path. The assertion is clearly true for $k = 1$. Assume that it holds for $k - 1$. Then
$$G(\mathbf{x}) = G(x_0;; x_1 - x_0) \cdot G(|\mathbf{x}) = G(x_0;; x_1 - x_0) \cdot \overline{G}(|\mathbf{x}),$$
by the induction assumption, used for the path $|\mathbf{x}$. Now by definition of $\overline{G}(|\mathbf{x})$, the equation continues
$$= G(x_0;; x_1 - x_0) \cdot \prod_{i=1}^{k-1} G(x_1;; x_{i+1} - x_i).$$
Now we Taylor expand, for fixed $i$, the displayed factor $G(x_1;; x_{i+1} - x_i)$ from $x_0$:
$$G(x_1;; x_{i+1} - x_i) = G(x_0;; x_{i+1} - x_i) + dG(x_0; x_1 - x_0;; x_{i+1} - x_i) + Q,$$
with $Q$ quadratic in $x_1 - x_0$. The $dG$-term, as well as the quadratic term $Q$, get annihilated by being multiplied with $G(x_0;; x_1 - x_0)$: this factor is bilinear in $x_1 - x_0$, and the $dG$-term is linear in $x_1 - x_0$ (and $Q$ even more so). So altogether we have an expression (at least) trilinear in $x_0 - x_1$, and therefore it vanishes, since $x_0 \sim_2 x_1$. Therefore, in the product (11), each factor $G(x_i;; \ldots)$ may be replaced by $G(x_0;; \ldots)$, and then we have $\overline{G}(\mathbf{x})$. In particular, for a 2-infinitesimal $k$-simplex $X$ with first vertex $x_0$, all entries of the Heron-Cayley-Menger matrix may be computed using the constant matrix $G(x_0)$, so that $\mathrm{Heron}_G(X) = \mathrm{Heron}_{G(x_0)}(X)$; this is the content of the Propositions 3.2 and 3.3 invoked below.

An orientation form for an $n$-dimensional manifold $M$ is a differential $n$-form $\delta$, so that any differential $n$-form on $M$ can be written $f \cdot \delta$ for a unique $f : M \to R$. In the manifold $M = R^n$, determinant-formation is an orientation form. An orientation on $M$ is given by an orientation form, and $\delta_1$ and $\delta_2$ define the same orientation if $\delta_2 = f \cdot \delta_1$ for an $f : M \to P \subseteq R$. An $n$-form $\omega$ is positive if it is $f \cdot \delta$ for some $f : M \to P$. Recall from the last lines of Section 2 the notation $\omega^2$ for the square $k$-volume constructed out of a differential $k$-form $\omega$:

Theorem 4.1 Assume that $g$ is a Riemannian metric on an oriented $n$-dimensional manifold $M$. Then there exists on $M$ a unique positive differential $n$-form $\omega$ such that $\mathrm{Heron}_g$ and $\omega^2$ agree on all 2-infinitesimal $n$-simplices; it deserves the name volume form for $g$.

Proof. Since the data and assertions in the statement do not depend on the choice of a (positively oriented) coordinate chart, it suffices to prove the assertion in such. So assume that $M \subseteq R^n$ is an open subset (with orientation inherited from the canonical one, $\det$, on $R^n$), and $g$ is given in terms of the positive definite $n \times n$ matrices $G(x)$ (for $x \in M$). For the existence of a volume form: consider the extended $n$-form $\overline{\omega}$, given by the formula
$$\overline{\omega}(x_0, x_1, \ldots, x_n) := \frac{\sqrt{\det G(x_0)}}{n!} \cdot \det(x_1 - x_0, \ldots, x_n - x_0)$$
for any 2-infinitesimal $n$-simplex $X = (x_0, \ldots, x_n)$. Let $Y$ denote the $n \times n$ matrix with $x_i - x_0$ as its $i$th column. Then squaring the defining equality for $\overline{\omega}$ gives
$$\overline{\omega}(X)^2 = \frac{\det G(x_0)}{n!^2} \cdot \det(Y)^2 = \frac{1}{n!^2} \cdot \det(Y^T \cdot G(x_0) \cdot Y),$$
using the product rule for determinants and $\det(Y^T) = \det(Y)$. By definition of Gram, the equation continues
$$= \frac{1}{n!^2}\,\mathrm{Gram}_{G(x_0)}(X) = \mathrm{Heron}_{G(x_0)}(X) = \mathrm{Heron}_G(X),$$
using the Heron-Gram comparison Proposition 1.3 and Proposition 3.3. This proves the existence of the claimed differential $n$-form.
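For orientation, here is the classical $k = 2$ instance of the Cayley-Menger determinant (an illustrative check added here; the classical prefactor $(-1)^{k+1}/(2^k (k!)^2)$ may be absorbed differently into the sign and scaling conventions used above):

```latex
% Classical k = 2 Cayley-Menger instance (Heron's formula), with squared
% side lengths a2 = g(x_1,x_2), b2 = g(x_0,x_2), c2 = g(x_0,x_1):
\[
  V_2^{\,2} \;=\; \frac{(-1)^{2+1}}{2^{2}\,(2!)^{2}}
  \det\begin{pmatrix}
    0 & 1   & 1   & 1  \\
    1 & 0   & c_2 & b_2\\
    1 & c_2 & 0   & a_2\\
    1 & b_2 & a_2 & 0
  \end{pmatrix}
  \;=\; -\frac{1}{16}\det(\,\cdot\,).
\]
% Check on the 3-4-5 right triangle (a2 = 25, b2 = 16, c2 = 9): the
% determinant evaluates to -576, so V_2^2 = 36 and V_2 = 6, the expected area.
```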
2020-12-14T02:16:06.830Z
2020-12-11T00:00:00.000
{ "year": 2020, "sha1": "380d911e877cf1a82081fdd13e1187460ae5c092", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c402ed98c3c6f5b2e22b8376d3defec041784dd8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
115432822
pes2o/s2orc
v3-fos-license
Groundwater Quality Analysis for Human Consumption: A Case Study of Sukkur City, Pakistan

Drinking water quantity and quality is of the utmost importance. If the drinking water gets contaminated, it can result in severe health problems. For example, the continuous consumption of drinking water containing more than permissible amounts of fluoride can lead to bone deterioration and increased risk of bone fracture [1]. The present study was carried out to check the quality of the underground water of Sukkur city. The analyzed parameters were fluoride, sodium, magnesium, calcium, potassium, iron, arsenic, TDS, pH, conductivity, odor, color and taste. World Health Organization (WHO) standards were followed in the present study. Underground water samples were collected from 20 different populated locations of Sukkur city. Only arsenic, pH, iron and potassium were found to be within health-safe limits, while the rest of the parameters exceeded the permissible standards set out by WHO. The TDS, sodium, fluoride and magnesium were over the limits at some locations.

Keywords - groundwater; water quality; physiochemical analysis

I. INTRODUCTION

Safe drinking water is one of the core factors for a healthy life. Our two main water sources are underground and surface water. Only 3% of the underground water is fresh water, and approximately 1.5 billion people use this water for drinking purposes [2]. In Pakistan, the average consumption of water is 1 gallon per day for drinking and 188 gallons for other purposes [3]. It is estimated that 17% of the world population is drinking water which is unsafe to drink, 32% consume from safe sources and the remaining 51% from centralized pipe supply systems [4]. In Pakistan, the unsafe quality of drinking water results in 30% of all diseases, 40% of all deaths and the majority of infant deaths. Many waterborne diseases are a direct result of polluted water consumption, like diarrhea, malaria, intestinal worms, anemia, cholera etc. [5]. Many of the leading causes of groundwater contamination are industrial liquid waste, agrochemical disposal and untreated discharge of effluents [6]. There is unfortunately a lack of surveillance and monitoring programs to check the drinking water quality in Pakistan, a situation that gets worse considering the poor institutional and government arrangements, the insufficiency of well-equipped laboratories, non-compliance with WHO standards and the absence of a legal framework for drinking water quality problems [7]. The purpose of the present study is to analyze the underground water quality of Sukkur city, compare the obtained results with WHO standards and put forward the measures that need to be taken.

II. MATERIALS AND METHODS

A. Study Area and Sampling Locations

Sukkur city is situated on the west bank of the Indus River at latitudes 27°05' to 28°02'N and longitudes 68°47' to 69°43'E, at an altitude of 67 m, in Sindh Province. It is the 3rd largest city of Sindh and the 14th of Pakistan. In this city, 60-80% of drinking water is taken from surface water. Figure 1 shows the map of Sukkur city, on which the sample locations are circled in red with their sample numbers, while Table I shows the location names.

B. Water Sample Collection

Around 20 water samples were taken from different sites of Sukkur city, particularly places from where people collect their drinking water. Following the standard method of sample collection, sample bottles were sterilized and the samples were collected in clean polyethylene bottles [8].

C. Parameters Tested

In the present study, 12 water quality parameters were tested: pH, TDS, fluoride, sodium, magnesium, calcium, potassium, iron, arsenic, conductivity, odor, color and taste. All experimental work and tests were conducted according to standards at the Energy & Environment Engineering Department, QUEST Nawabshah [9]. Table II lists the equipment used for this study.

III.
RESULTS AND DISCUSSION

Table III shows the World Health Organization (WHO) standards for potable water suitable for human consumption. The study results are benchmarked against these standards.

A. pH

pH is the degree of acidity or basicity of an aqueous solution. The pH value ranges from 0 to 14, with 7 being neutral. A pH of less than 7 indicates acidity and greater than 7 indicates basicity [10]. The pH value recommended by the WHO for drinking water is from 6.5 to 8.5. Figure 2 shows the acquired pH values of the water samples at all locations. Location SPL13 has pH = 7.8, which is the highest. This may be caused by the presence of toxic metals, even at low concentrations, like copper and lead, that are usually responsible for making the water alkaline. The use of agrochemicals, i.e., mostly plant nutrients and fertilizers, in the locality is responsible for the high concentration of heavy metals, which is definitely a major health risk. Such contaminants can reach and be retained in soil layers and may even percolate to the groundwater aquifers, thus inducing a greater human health risk [19]. Moreover, the geological structure of the catchment and its buffering capacity also tend to influence the pH value of the water. The measured pH values of all the locations are within the WHO limits.

B. Total Dissolved Solids (TDS)

The measured values of TDS for the samples of all the locations are presented in Figure 3. The satisfactory value proposed by WHO is 1000 mg/L, while the measured values of TDS at 9 locations (SPL3, SPL7, SPL9, SPL10, SPL11, SPL14, SPL15, SPL17, SPL20) exceed the desired limit. The highest values, recorded at SPL14 and SPL15, are 6 times higher than the desired limit. Uncontrolled wastewater outflows, both from domestic and industrial domains, are the most probable reasons for the high TDS values in the region, as certain portions of such flows percolate to the aquifers, polluting the groundwater. TDS values exceeding the limits may affect the aesthetic water quality [11].

D. Fluoride

The measured fluoride values for all locations are presented in Figure 5. Fluoride higher than the desired limit was found at sites SPL9, SPL14, SPL17, SPL18 and SPL20. This may be because of the abundance of phosphorite rocks in those areas, as the fluoride in the water mostly comes from these rocks. The ground water at these particular areas is not suitable for drinking purposes. Dental and skeletal fluorosis are health hazards relevant to the consumption of water of higher fluoride concentration [13].

Fig. 5. Groundwater fluoride concentration results of various sampling locations of Sukkur city

E. Potassium

The measured results for potassium of all samples are under the permissible limit of the WHO guidelines and are presented in Figure 6. Such concentrations of potassium in ground water could not have adverse effects on human health [14].

Fig. 6. Potassium results at various sampling locations

F. Iron

The measured iron values for all locations are presented in Figure 7 and were found to be within the permissible limits of the WHO standards; thus the consumption of the underground water of Sukkur city is safe regarding this aspect [15].

Fig. 7. Iron results at various sampling locations of Sukkur city

G. Magnesium

The measured values of magnesium are presented in Figure 8. At three locations, i.e., sample numbers SPL14, SPL15 and SPL20, magnesium was found to be higher than the desired limit. The excessive intake of magnesium may cause vomiting and diarrhea. High doses of magnesium in water may cause nerve problems, muscle slackening and depression [16].

H.
Arsenic

The measured results of arsenic are presented in Figure 9 and were found to be within the permissible limits of the WHO guidelines [17]. Furthermore, no reports of lung, bladder and skin cancer were found in the sampling premises, confirming that the water is free from harmful As levels.

Fig. 8. Groundwater magnesium results at various sampling locations

Fig. 9. Groundwater arsenic results at various sampling locations

I. Calcium

The measured values of calcium for all locations are presented in Figure 10. The majority of the locations have values within the desired limit, but a few locations exceeded it, i.e., SPL2, SPL14, SPL15 and SPL20. Calcium determines water hardness; inadequate calcium intake may cause increased risks of nephrolithiasis (kidney stones), osteoporosis, hypertension, coronary artery disease, and obesity [18].

Fig. 10. Calcium results at various sampling locations of Sukkur city

J. Electrical Conductivity (EC)

The measured EC results are presented in Figure 11. It was observed that most of the EC values were beyond the permissible limit. Only five locations out of 20 showed satisfactory results. The exceeding values of EC indicate the sum of the cations (or anions), or in other terms, the total concentration of salts. High temperature may also be another possible reason for increased EC values, as the EC of solutions increases approximately 2 percent with each °C increase in temperature.

Fig. 11. EC of groundwater at various sampling locations of Sukkur city

IV. CONCLUSIONS

It was concluded that the overall quality of the underground water at most of the locations was quite satisfactory, since the results for many parameters at various locations were under the permissible limits. Arsenic, pH, iron and potassium were within limits throughout all the locations, while some parameters exceeded the limits at various locations. TDS was higher at locations SPL3, SPL7, SPL9, SPL10, SPL14, SPL15, SPL17 and SPL20. Sodium was higher at locations SPL10, SPL14, SPL15, SPL17, and SPL20. Fluoride was higher at locations SPL9, SPL14, SPL17, SPL18 and SPL20. Magnesium was higher at locations SPL14, SPL15 and SPL20. Calcium was higher at locations SPL2, SPL14, SPL15 and SPL20. EC was higher at all locations except SPL2, SPL6, SPL13 and SPL16. Based on the results of this study, it is recommended that locations SPL14 (New Goth Sukkur), SPL15 (Makka Goth Shikarpur Road) and SPL20 (Jaferia Society) be examined thoroughly and possible remedial measures be put into action.
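To make the benchmarking procedure concrete, here is a minimal Python sketch (not from the paper) of how measured values can be flagged against guideline limits. The sample values below are hypothetical placeholders; only the TDS limit (1000 mg/L) and the pH window (6.5-8.5) come from the text, while the fluoride limit of 1.5 mg/L is the commonly cited WHO guideline value and is an assumption here.

```python
# Sketch: flag groundwater samples against WHO guideline limits.
# TDS and pH limits are taken from the study text; the fluoride limit
# (1.5 mg/L) is the commonly cited WHO value (an assumption here).
WHO_LIMITS = {
    "tds_mg_l": (0, 1000),      # total dissolved solids, mg/L
    "ph": (6.5, 8.5),           # acceptable pH window
    "fluoride_mg_l": (0, 1.5),  # assumed WHO guideline value
}

# Hypothetical sample values, keyed by location code (placeholders only).
samples = {
    "SPL13": {"tds_mg_l": 850, "ph": 7.8, "fluoride_mg_l": 0.9},
    "SPL14": {"tds_mg_l": 6100, "ph": 7.2, "fluoride_mg_l": 2.1},
}

def exceedances(sample: dict) -> list:
    """Return the parameters of one sample that fall outside the limits."""
    flags = []
    for param, (lo, hi) in WHO_LIMITS.items():
        value = sample.get(param)
        if value is not None and not (lo <= value <= hi):
            flags.append(f"{param}={value} (limit {lo}-{hi})")
    return flags

for location, sample in samples.items():
    flags = exceedances(sample)
    status = "; ".join(flags) if flags else "within limits"
    print(f"{location}: {status}")
```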
2018-12-21T04:55:50.921Z
2018-02-20T00:00:00.000
{ "year": 2018, "sha1": "9a62345b78eabf9f2f93e4096b798238d7562dd3", "oa_license": "CCBY", "oa_url": "http://etasr.com/index.php/ETASR/article/download/1768/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9a62345b78eabf9f2f93e4096b798238d7562dd3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Environmental Science" ] }
89734061
pes2o/s2orc
v3-fos-license
Heterosis and Combining Ability Studies in Intraspecific Derivatives of Wal (Lablab purpureus (L.) Sweet)

Eight genotypes of Wal, viz., DPLW-46, DPLW-61, DPLW-10, DPLW-31, DPLW-15, DPLW-48, DPLW-51 and DPLW-29, and their crosses made in half diallel fashion were evaluated to estimate the combining ability and heterosis effects for yield and yield component characters. The proportion of δ²g/δ²s revealed a preponderance of non-additive gene action for the inheritance of all the characters. All the parents exhibited significant estimates of combining ability for one or more characters. The parents viz., DPLW-29, DPLW-15 and DPLW-51 were good combiners for most of the characters. Among the crosses, DPLW-46 x DPLW-29, DPLW-51 x DPLW-29, DPLW-46 x DPLW-61, DPLW-15 x DPLW-51 and DPLW-46 x DPLW-10 were identified as promising cross combinations and recorded the highest significant positive heterosis over mid- and better-parents for seed yield per plant. These heterotic cross combinations could be exploited to get superior segregants.

Wal (Lablab purpureus (L.) Sweet) is an important pulse crop cultivated in the Konkan region of Maharashtra, predominantly grown on residual moisture after the rice crop during the rabi season. It is popularly recognized as 'Wal' in Maharashtra. Though the crop has considerable diversity and improvement through selection is possible, the productivity of this crop is very low. The development of a new variety with high yield and early maturity is a prime objective of the breeder. The first step in a successful breeding programme is to select appropriate parents. Diallel analysis provides a systematic approach for the selection of appropriate parents and of crosses superior in terms of traits. Exploitation of heterosis is primarily dependent on screening and selection of the available germplasm that could produce better cross combinations. Breeding strategies based on selection of hybrids require the expected level of heterosis as well as the specific combining ability (sca). In breeding high yielding varieties of crop plants, breeders often face the problem of selecting parents and crosses. Combining ability analysis is one of the powerful tools available to estimate the combining ability effects, and it aids in selecting the desirable parents and crosses for the exploitation of heterosis. The ultimate objective of any crop improvement programme is to improve yield, which is a complex character dependent on a number of agro-morphological traits. The degree of heterosis depends on the degree to which the parental lines are related. With this background information, the present investigation was taken up to assess combining ability and heterosis in wal.
The experimental material consisted of eight genetically diverse genotypes of wal, viz., DPLW-46, DPLW-61, DPLW-10, DPLW-31, DPLW-15, DPLW-48, DPLW-51 and DPLW-29, which were crossed in half diallel fashion (excluding reciprocals) as suggested by Griffing (1956) in Method I, Model II. The resulting 28 F1's and 8 parents were grown in a randomized block design with three replications during rabi 2012. The experiment was conducted at the research farm of the Department of Agricultural Botany, College of Agriculture, Dapoli, Dist. Ratnagiri (M.S.). The seeds were sown at a distance of 60 x 45 cm between rows and plants. Observations were recorded on five randomly selected plants of each genotype per replication for six quantitative characters, viz., days to maturity, plant height (cm), number of pods per plant, pod length (cm), number of seeds per pod and seed yield per plant (g). The analysis of variance was computed as suggested by Panse and Sukhatme (1985). The combining ability analysis was carried out as per Kempthorne (1969), and the magnitude of heterosis was estimated in relation to the better and mid parent as per the standard method.

The analysis of variance for combining ability showed that the gca variance was highly significant for all the characters except number of seeds per pod, while the sca variance was highly significant for plant height, pod length, number of seeds per pod and seed yield per plant, and non-significant for days to maturity and number of pods per plant (Table 1). The ratio of δ²g/δ²s revealed a preponderance of non-additive gene action for the inheritance of all the characters. Gawali et al. (2011) have also reported similar non-additive gene action of high magnitude for various characters. The general combining ability (gca) effects of the parents are presented in Table 2. The results revealed that none of the parents was a good general combiner for all the characters. The parent DPLW-51 was found to be a good general combiner, with a gca effect of -0.72, for days to maturity. The parent DPLW-29 showed desirable general combining ability effects for the characters plant height, number of pods per plant and yield per plant. The parent DPLW-15 was a good general combiner for pod length. All the parents showed non-significant gca effects for number of seeds per pod in both directions. Sawant et al. (2006) and Viraj et al. (2006) have also reported such negative as well as positive gca effects exhibited by parents for one or more yield contributing characters.
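For readers less familiar with the diallel framework, the quantities above can be anchored as follows (standard textbook forms, stated here as a reading aid rather than quoted from the paper):

```latex
% Standard combining-ability decomposition of a cross mean (Griffing):
\[
  \bar{Y}_{ij} \;=\; \mu + g_i + g_j + s_{ij} + \bar{e}_{ij},
\]
% mu: general mean; g_i, g_j: gca effects of parents i and j;
% s_ij: sca effect of the cross; e-bar: mean error term.
% Half diallel without reciprocals, p = 8 parents:
\[
  \#\text{crosses} \;=\; \binom{p}{2} \;=\; \frac{8 \times 7}{2} \;=\; 28,
\]
% matching the 28 F1's evaluated alongside the 8 parents.
```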
The specific combining ability (sca) effects of the hybrids are presented in Table 3. The highest yielding cross, DPLW-46 x DPLW-29, recorded the significantly highest positive sca effect for seed yield per plant (9.96). The cross DPLW-46 x DPLW-61 showed significant sca effects for number of pods per plant and number of seeds per pod, but for the character plant height (-4.90) it showed a negative, non-significant sca effect. The cross DPLW-15 x DPLW-51 showed significant sca effects for plant height (-2.84) and number of pods per plant (10.99). The cross DPLW-46 x DPLW-48 was found to be the best combiner for days to maturity (-1.33), and the cross DPLW-10 x DPLW-48 the best combiner for pod length (0.46). Such varying specific combining ability effects exhibited by different crosses have also been reported by Jayarani and Manju (1996) and Jyothula and Guttala (2001). The present investigation revealed that, on the basis of the gca estimates, none of the parents was a good combiner for all the characters. The parents DPLW-29, DPLW-15 and DPLW-51 were good general combiners for most of the characters in Wal. The sca estimates revealed that no cross combination was good for all the characters. However, the crosses DPLW-46 x DPLW-29, DPLW-51 x DPLW-29, DPLW-46 x DPLW-61 and DPLW-15 x DPLW-51 exhibited high sca effects for important yield contributing characters.

In conventional breeding, considerable attention has been paid to increasing the yield potential by exploiting the heterosis from intervarietal hybrids of wal through the identification of potential cross combinations with respect to grain yield and its related traits. Heterosis over the mid and better parent for growth and yield characters is presented in Table 4. For days to maturity, negative heterosis was considered desirable; it showed moderate variation, from -1.88 per cent to 1.26 per cent over the mid parent and -1.57 per cent to 2.56 per cent over the better parent. Hybrid DPLW-31 x DPLW-29 showed the maximum heterotic performance in the negative direction over the mid parent (-1.88%) and better parent (-1.57%) for this character, and so proved to be desirable for selection. Vashi et al. (1999) reported a similar result for the character days to maturity in lablab bean. For the character plant height, heterosis ranged from -15.02 per cent to 9.85 per cent over the mid parent and -10.86 per cent to 28.08 per cent over the better parent. The heterosis was worked out for dwarfness. The heterotic effect for plant height was highly significant for the cross DPLW-61 x DPLW-51, which showed -15.02% over the mid parent, and for DPLW-10 x DPLW-51, which showed -10.86% over the better parent, followed by DPLW-61 x DPLW-51 (-10.47%) in the case of the better parent. For the character pod length, the heterotic values over the mid and better parent ranged from 0.91 per cent to 21.70 per cent and 0.38 per cent to 21.31 per cent respectively. Hybrid DPLW-10 x DPLW-48 showed the maximum positive heterosis over the mid parent (21.70%) and over the better parent (21.31%).

Relative heterosis and heterobeltiosis for number of pods per plant ranged from -4.55 to 35.56 per cent and -13.35 to 27.02 per cent respectively. The hybrid DPLW-46 x DPLW-29 showed the highest relative heterosis (35.56%). The hybrids having relatively higher heterosis for this character were found to perform well for yield. Similar findings were reported by Viraj et al. (2006) for pod length and Valu et al.
(2006) for number of pods per plant. For number of seeds per pod, the heterotic values of the hybrids ranged from 2.40 per cent to 21.10 per cent over the mid parent and -3.88 per cent to 20.08 per cent over the better parent. Hybrid DPLW-46 x DPLW-61 proved to be the best, showing the highest heterosis over the mid parent (21.10%) and the better parent (20.08%). To obtain higher seed yield, plants which produce a larger number of pods are desirable. For the character yield per plant, relative heterosis and heterobeltiosis ranged from -0.67 per cent to 50.76 per cent and -1.81 per cent to 41.54 per cent. Hybrid DPLW-46 x DPLW-29 showed the maximum relative heterosis (50.76%) and heterobeltiosis (41.54%), followed by hybrids DPLW-46 x DPLW-61 and DPLW-51 x DPLW-29. Similar findings were earlier reported by Bendale et al. (2005).

The hybrids DPLW-46 x DPLW-29 and DPLW-46 x DPLW-61 recorded significant heterosis for number of pods per plant, pod length, number of seeds per pod and seed yield per plant, followed by DPLW-51 x DPLW-29 for number of seeds per pod and seed yield per plant. These two cross combinations can be successfully utilized in exploiting hybrid vigour as well as in the development of superior populations in the lablab bean. Further, the segregating progenies of these cross combinations may provide the opportunity to select desirable individual plants having high grain yield along with a high intensity of expression of the yield contributing characters.
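For reference, the heterosis percentages reported above follow the standard definitions (stated here as a reading aid, not quoted from the paper):

```latex
% Standard heterosis expressions used to read the tables:
\[
  \text{Relative heterosis (mid-parent)} \;=\; \frac{\bar{F}_1 - \overline{MP}}{\overline{MP}} \times 100,
  \qquad \overline{MP} = \frac{\bar{P}_1 + \bar{P}_2}{2},
\]
\[
  \text{Heterobeltiosis (better parent)} \;=\; \frac{\bar{F}_1 - \overline{BP}}{\overline{BP}} \times 100.
\]
% BP-bar: mean of the better-performing parent. E.g., a relative heterosis
% of 50.76% for DPLW-46 x DPLW-29 means the F1 exceeded the mid-parent
% mean for seed yield per plant by about half.
```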
2019-04-02T13:09:11.766Z
2016-11-09T00:00:00.000
{ "year": 2016, "sha1": "6ba4da0648039b2ee1879f337eb425af9f269492", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5958/0975-928x.2016.00104.6", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "6ba4da0648039b2ee1879f337eb425af9f269492", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
195354424
pes2o/s2orc
v3-fos-license
Sustainable Green Pavement Using Bio-Based Polyurethane Binder in Tunnel

As a closed space, the functional requirements of the tunnel pavement are very different from ordinary pavements. In recent years, with the increase of requirements for tunnel pavement safety, comfort and environmental friendliness, asphalt pavement has become more and more widely used in long tunnels, due to its low noise, low dust, easy maintenance, and good comfort. However, conventional tunnel asphalt pavements cause significant safety and environmental concerns. The innovative polyurethane thin overlay (PTO) has been developed for the maintenance of existing roads and for constructing new roads. Based on the previous study, the concept of PTO may be a feasible and effective way to enrich the innovative functions of tunnel pavement. In this paper, the research aims to evaluate the functional properties of PTO, such as noise reduction, solar reflection and especially combustion properties. Conventional asphalt (Open-graded Friction Course (OGFC) and Stone Mastic Asphalt (SMA)) and concrete pavement materials were used as control materials. Compared with conventional tunnel pavement materials, significant improvements were observed in functional properties and environmental performance. Therefore, this innovative wearing layer can potentially provide pavements with new eco-friendly functions. This study provides a comprehensive analysis of these environmentally friendly materials, paving the way for a possible application in tunnels, as well as in some other fields, such as race tracks in stadiums.

Introduction

As a closed space, the functional requirements of the tunnel pavement are very different from ordinary pavements, e.g., a higher requirement on the reduction of the pavement noise, proper light reflection to ensure safety and save tunnel lighting energy, closure of the construction process, and better pollutant discharge and air purification. For a long time, cement concrete pavement has been widely used in tunnels, due to its long service life and better lighting effect. However, the long construction period, the high noise during operation, the large amount of dust and the fast deterioration rate of the skid resistance at the entrance and exit sections, which significantly influence tunnel safety and environmental protection, have become a limitation for the further application of cement concrete in tunnel construction. In recent years, with the increase of requirements for tunnel pavement safety, comfort and environmental friendliness, asphalt pavement has become more and more widely used in long tunnels. In previous studies, the PTO exhibited a Young's modulus of roughly 6.6 GPa. Furthermore, its dynamic stability was evaluated with cyclic compression tests; it presented a superior performance in the long-term durability of the material, which was more than ten times higher than that of conventional porous asphalt. The fatigue resistance was assessed by the indirect tensile strength test, in which the strong inherent tensile strength was also confirmed. Porous PTO can also facilitate stormwater infiltration and increase the driving safety and maneuverability of automobiles [18][19][20][21]. Apart from the mechanical properties, the hydraulic conductivity and the durability against clogging have also been verified to be much better than those of conventional porous pavement materials, due to the excellent void connectivity and pore structures [22].
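As a reading aid for the indirect tensile strength test mentioned above (the relation below is the standard one for the test, not quoted from the paper): for a cylindrical specimen loaded diametrically to failure,

```latex
% Standard indirect tensile strength relation for a cylindrical specimen:
\[
  ITS \;=\; \frac{2\,P_{\max}}{\pi \, D \, t},
\]
% P_max: peak load at failure; D: specimen diameter; t: specimen thickness.
```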
Based on the previous study, the concept of PTO may be a feasible and effective way to enrich the innovative functions of tunnel pavement. In this paper, the research aims to evaluate the functional properties of PTO. Apart from conventional functionalities, e.g., skid resistance and drainage, some properties, such as noise reduction, solar reflection, and especially combustion properties, have aroused great interest in the tunnels of metropolitan cities. Conventional asphalt (open graded friction course (OGFC) and stone mastic asphalt (SMA)) and concrete pavement materials were used as control materials. Compared with conventional tunnel pavement materials, significant improvements were observed in mechanical and functional properties, as well as in environmental performance. Therefore, this wearing layer can potentially provide pavements with new eco-friendly functions. This study provides a comprehensive analysis of these environmentally friendly materials, paving the way for a possible application in tunnels, as well as in some other fields, such as race tracks in stadiums.

Materials and Preparation of the PTO Specimens

To achieve high permeability, a void-rich thin overlay, such as OGFC, is currently the most feasible and effective way [23,24]. Although potentially beneficial, permeable pavement materials face some challenges, due to the high void content (fewer stone-to-stone contact regions) and the viscous nature of bitumen. In conclusion, the poor mechanical durability [25] and unfavorable clogging behavior [24,26] represent the main obstacles inhibiting a wide application of permeable pavements [27]. The latest research substituted conventional bitumen with bio-based polyurethane (PU) in order to create a sustainable permeable pavement. The bio-based polyurethane consists of various polymers that are synthesized by a poly-addition reaction of a di-isocyanate or a polymeric isocyanate with a polyol. Specifically, among the polyol components, traditional petroleum raw materials are replaced by organic oils. The synthesis is based on the connection of isocyanates and hydroxyl groups that leads to the formation of a urethane group [28]. Polyurethane elastomers consist of the polyol component and the isocyanate component. By modifying the components, a wide range of material properties, ranging from brittle to elastic, can be designed. Preliminary research conducted at RWTH Aachen University (Germany) suggests that PU-bound porous pavement structures exhibit high permeability, high strength, high resistance to permanent deformation and increased fatigue resistance [17,29]. The mechanical and morphological properties of the aggregate are essential for the functional and mechanical properties of conventional OGFC [30][31][32]. Various investigations have focused on using natural or recycled aggregate in pavements [33]. In producing the PTO mixture, the conventional natural aggregate within the particle size range of 2.0-5.6 mm was replaced, and the remaining 30% was filled with natural sand (0-0.2 mm). A 2-component polyurethane product was selected as the binder for the PU specimens. The basic components of PU are shown in Figure 1. Conventional OGFC, SMA and cement were selected as reference materials in this study. The conventional OGFC and SMA specimens were composed of crushed diabase aggregate, limestone powder, and a polymer modified bitumen binder. The mixtures were prepared by means of Marshall compaction (50 impacts per side).
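The porosity targeted by these mix designs is conventionally computed from the bulk and maximum specific gravities (standard mixture volumetrics, stated here as a reading aid rather than taken from the paper):

```latex
% Standard air-void (porosity) relation used in mixture design:
\[
  V_a \;=\; \left(1 - \frac{G_{mb}}{G_{mm}}\right) \times 100\%,
\]
% G_mb: bulk specific gravity of the compacted specimen;
% G_mm: theoretical maximum specific gravity of the loose mixture.
```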
The grain size distribution and detailed mixture design of the PU, OGFC and SMA reference specimens were based on the porosity and the maximum density process, and are given in Figure 2 and Tables 1-3 respectively. The preparation of the different PU specimens followed a similar procedure to hot-mix asphalt (OGFC and SMA). However, mixing polyurethane can be conducted at room temperature, because the polymerization reaction and the viscosity of polyurethane are not strongly affected by temperature. After the two components of the polyurethane are thoroughly mixed, the binder is added to the aggregate. The components are mixed for a few minutes to obtain a homogenous mixture in which all surfaces of the aggregate are coated with binder. After mixing, a pre-determined amount of the polyurethane-bound mixture is placed into a mold to obtain specimens of the desired bulk density. A heavy roller is used for compaction of the mixture. After approximately 24 h, the hardening process is completed, and the specimens can be removed from the mold. Another reference specimen had a grade of C40/10, which is conventionally used in PCC pavements in Germany. Table 4 shows the batching proportions for C40/10 used in this study. The design slump value for the PCC mixtures was 80 mm.

Acoustic Performance

Existing studies have shown that tire-road noise mainly arises from the inner resonance between the tire surface and pavement cavities (Pcavity), the air flow around the vehicle body (Vehicle) and tire vibrations (Pvibration) [34]. Pcavity, which determines the noise absorption, has been identified by a large body of research as the most important factor when evaluating the acoustic performance of the PU specimens [20,35]. In order to examine the sound absorption capacity of a pavement material, the impedance tube test according to DIN EN ISO 10534-2 was adopted, which has been widely applied for noise absorption evaluation in previous studies [36,37]. This test method uses an impedance tube, two microphones, an amplifier, and a recorder to evaluate the sound absorption coefficient (see Figure 3). Due to the shape of the tube, the sound waves propagate as flat waves inside the tube. A frequency sweep is generated and played back via the connected amplifier and loudspeakers. The generated sound frequencies are measured with the microphones installed at the tube.
The acoustic transfer functions of the two microphone signals are used to calculate the reflection factor, the absorption factor at normal incidence and the impedance ratio of the test material according to DIN EN ISO 10534-2 (https://www.perinorm.com/document.aspx). All test samples, including PU, OGFC, SMA, and cement, were produced in the same dimensions as normal pavement samples, with 100 mm diameter and 40 mm height, so that they correspond to the practical layer thickness. There were three parallel test pieces of each variant; all specimens were measured three times each.

Reflection Test

The reflection rate is an indicator of how reflective a pavement is of solar radiation. A higher reflection rate means that more radiant energy is reflected back into the air, and the pavement absorbs less energy. It has been reported that the reflection rate depends strongly on the pavement surface color, service condition, and texture features. In general, the reflection rate of pavement falls in the range of 0.10-0.30 [33], with brighter pavements having higher reflection rates. The reflection rate may not be a big problem within the tunnel section, but it is worth investigating for the application of the innovative material in the entrance and exit sections of the tunnel, which are not sealed like the tunnel section. Furthermore, these results can offer comprehensive information and may benefit readers who would like to apply the innovative material in other practical fields. In this research, a UV/VI/IR spectrophotometer (see Figure 4), a device commonly applied to detect substances based on their absorption spectrum, was used to test the reflection rate of the different samples. Samples were irradiated by light within the wavelength range of 400-2000 nm with a step length of 5 nm, followed by detection of the reflected energy. Then, the reflection rate at each wavelength was automatically calculated.
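As an illustration of how such a spectral sweep can be post-processed (a minimal sketch, not the authors' code; the reflectance values below are synthetic placeholders):

```python
import numpy as np

# Sketch: averaging spectral reflectance over the measured sweep
# (400-2000 nm in 5 nm steps, as in the test description). The
# reflectance spectrum below is a synthetic placeholder.
wavelengths = np.arange(400, 2001, 5)                     # nm
reflectance = 0.1 + 0.4 * (wavelengths - 400) / 1600.0    # placeholder spectrum

# Broadband value as a plain mean; a solar-weighted mean would instead
# weight each wavelength by the solar spectral irradiance.
broadband = reflectance.mean()

# Restrict to the near-infrared band (760-2500 nm overlaps 760-2000 here),
# which the text identifies as carrying most of the thermal effect.
nir = reflectance[wavelengths >= 760]
print(f"broadband reflectance {broadband:.2f}, NIR mean {nir.mean():.2f}")
```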
To further test the samples' heat reflection, a specially designed iodine tungsten lamp device (Figure 5), with a radiant energy of 820 W/m², was used to simulate solar radiation. All samples were surrounded by thermal insulation cotton on the bottom and the four sides. The temperatures were measured by a sensor placed 2.5 cm into the sample. The setup of this test was defined according to our previous research, and it can efficiently test and evaluate the samples' heat reflection in the laboratory. A calibration factor or shift factor which can convert the laboratory results to the field will be studied in the future.

Flammability Evaluation Method

The ignition of a vehicle is the main cause of a tunnel fire. However, for pavement engineers, how to reduce the harm caused by the ignition of the pavement materials is the main concern.
In particular, the smoke may be extremely harmful to the drivers trapped within the tunnel. Therefore, the combustion performance of the materials was evaluated in this study. The cone calorimeter was designed by Dr. Babrauskas of the National Institute of Standards and Technology (NIST) in 1982, based on the principle of oxygen consumption. It is a vital test instrument for evaluating materials' combustion performance. As shown in Figure 6, the cone calorimeter is mainly composed of a carrier, a combustion chamber, a ventilation system, a flue gas measuring system and a gas analyzer.
The peak noise absorption frequency is a function of the sample height and the pore structures. During the test, the specimens of PU, OGFC, SMA and concrete were kept in the same height of 4 mm. In this case, the difference can be mostly attributed by the porosity and pore structures. The absorption coefficients of PU samples surpass that of the other samples throughout the entire frequency domain. In comparison with the conventional OGFC, the PU exhibits higher coefficients of absorption across a wider range of frequencies, rendering it a material with far superior acoustic properties. The excellent noise reduction ability is mainly due to the large connective void content within the PU, which can expand the frequency range of absorption. Therefore, the PU material is the most suitable for the application in tunnel pavement, due to the relatively high noise frequency level. The main test parameters it can include the heat release rate (HRR), total heat release (THR), effective combustion heat (EHC), ignition time (TTI), smoke and toxicity parameters, and mass change parameters (MLR). These parameters can be used to evaluate the combustion performance or flame retardancy of materials. Compared with the traditional test method, the cone calorimeter can get more data in one experiment. In addition, the combustion test environment of the cone calorimeter is similar to the real combustion environment, and the test results have a good correlation with the real conflagration, which has a good reference value for the evaluation of the combustion performance of materials. In this test, it is required that the cross section of the tested specimen be 100 mm × 100 mm square, and the mass should not exceed 200 g. The experimental power of the cone calorimeter is 50 kW/m 2 , and the corresponding temperature is 780 • C. Before the test, the side and bottom of the sample were wrapped with aluminum foil, and then the measured sample will be forced to ignite under 50 kW/m 2 thermal radiation intensity. The data obtained during combustion will be collected by a computer. Results of the Acoustic Test The acoustic absorption coefficients of all specimens are shown in Figure 7. The figure indicates that the acoustic behavior of the four types of material is distinctively different. In general, the acoustic absorption properties of porous pavement material (both PU 8 and OGFC) are far higher than the conventional SMA and concrete specimens. In the relevant frequency range of 800-2500 Hz, PU specimens show a peak absorption coefficient of about 84% at 1400 Hz, whereas the OGFC specimens show a peak absorption coefficient of about 70% at 1000 Hz. The SMA and concrete only show a relatively lower peak noise absorption about 30% and 20% at 800 and 1080 Hz respectively. However, the noise frequency near pavement is usually in the range from 1200 Hz to 1600 Hz if the travel speed around 70 [34,36]. In which case, the PU material can have the highest noise absorption property among all materials. The peak noise absorption frequency is a function of the sample height and the pore structures. During the test, the specimens of PU, OGFC, SMA and concrete were kept in the same height of 4 mm. In this case, the difference can be mostly attributed by the porosity and pore structures. The absorption coefficients of PU samples surpass that of the other samples throughout the entire frequency domain. 
Results of Heat Reflection

Figure 8 presents the results of the heat reflection tests under light in the wavelength range of 400 to 2000 nm. It can be observed that the light reflection rate varied significantly among the different surfaces. Freshly manufactured asphalt, which is black, has the lowest light reflection over the whole range of wavelengths. In other words, the majority of the solar radiation will be absorbed by the asphalt, thus increasing the road surface temperature. Accordingly, the OGFC and SMA, which use bitumen as the binder material, exhibit low heat reflectance values, in most cases less than 0.1. The surface temperature of OGFC and SMA is also relatively high, reaching 74 °C after 6 h of solar radiation. In contrast, PU and granite have the highest light reflection rates in the wavelength range of 800 to 2000 nm. PU shows a continuously upward trend in heat reflection and eventually reaches a peak at 2000 nm, with a reflection rate of around 0.5. Infrared contributes most of the thermal effect of light, especially in practice. The infrared wavelength is in the range of 760-2500 nm, which falls into the highest reflection wavelength range of PU and granite. Hence PU and granite show lower heating rates in actual applications.

When exposed to the same simulated solar radiant energy of 820 W/m², the surface temperature of PU is lower than that of the normal asphalt surface during the testing period (see Figure 9). After about five hours of heating, the temperature of both samples becomes steady. The eventual temperature of PU was 20 °C lower than that of the normal asphalt surface after ten hours of heating. The result indicates that PU, which is brighter, provides a higher heat reflection rate in comparison with the black
In contrast, PU and granite have the highest light reflection rates in the wavelength range of 800 to 2000 nm. PU shows a continuous upward trend in heat reflection and eventually peaks at 2000 nm with a reflection rate of around 0.5. Infrared radiation contributes most of the thermal effect of light in practice; its wavelength range of 760-2500 nm falls within the range where PU and granite reflect most strongly, so PU and granite heat up more slowly in actual applications.

When exposed to the same simulated solar radiant energy of 820 W/m², the surface temperature of PU remains lower than that of the normal asphalt surface throughout the testing period (see Figure 9). After about five hours of heating, the temperatures of both samples become steady. After ten hours of heating, the eventual temperature of PU was 20 °C lower than that of the normal asphalt surface. The result indicates that the brighter PU provides a higher heat reflection rate than the black normal asphalt surface, which significantly reduces the pavement surface temperature. This innovative pavement surface treatment can therefore potentially lessen the urban heat island effect.
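As a rough illustration of why the infrared band dominates the thermal balance, the sketch below computes a band-averaged reflectance over the infrared portion of the measured range; the sample spectra are hypothetical placeholders, not the Figure 8 measurements.

```python
# Minimal sketch: average reflectance over the infrared portion of the
# measured range (760-2000 nm). Spectra are hypothetical placeholders.
def band_average(spectrum, lo=760.0, hi=2000.0):
    """Trapezoidal average of reflectance between wavelengths lo and hi (nm)."""
    pts = sorted((w, r) for w, r in spectrum if lo <= w <= hi)
    area = sum((w2 - w1) * (r1 + r2) / 2.0
               for (w1, r1), (w2, r2) in zip(pts, pts[1:]))
    return area / (pts[-1][0] - pts[0][0])

pu      = [(800, 0.20), (1200, 0.30), (1600, 0.42), (2000, 0.50)]
asphalt = [(800, 0.05), (1200, 0.06), (1600, 0.07), (2000, 0.08)]

print(f"PU      IR-band reflectance: {band_average(pu):.2f}")
print(f"asphalt IR-band reflectance: {band_average(asphalt):.2f}")
```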
Ignition Time (TTI)

TTI is an important parameter for evaluating the combustion properties of materials. It refers to the time, in seconds, from the start of heating of the material surface to sustained combustion at a preset incident heat flux, and it can be used to evaluate and compare the fire resistance of materials: the longer the TTI, the harder the material is to ignite under the specified experimental conditions. As shown in Figure 10, the TTI of OGFC is smaller than that of SMA. This is because the air void content of OGFC is larger than that of SMA, resulting in larger exposed asphalt and air contact areas, so it ignites more easily. The TTI of the porous polyurethane mixture is larger than that of OGFC and SMA, indicating that PU is more difficult to ignite than asphalt. Because the cement showed no obvious ignition, it is regarded as incapable of ignition and its TTI is not shown in the figure.

Heat Release Rate (HRR)

HRR refers to the heat released per unit area and unit time after the material is ignited under the preset radiation intensity, in kW/m². The maximum value of HRR is the peak heat release rate (pkHRR), which indicates the maximum degree of heat release during combustion. The greater the HRR and pkHRR, the greater the heat released by the burning material and the greater the fire hazard.
Figure 11 shows that the heat release rate curve of OGFC, with the highest pkHRR of 120.38 kW/m², is shifted toward early times, meaning the quickest complete combustion and the greatest fire risk. The HRR curve of SMA, with the second largest pkHRR, essentially encloses that of PU, indicating that SMA emits more heat per unit time and that its heat release lasts longer than that of PU. Comparing the curves of PU, SMA and OGFC shows that PU has better flame retardancy than asphalt. Finally, the HRR curve of cement fluctuates around 0, indicating that the cement does not ignite.

Total Heat Release (THR)

THR refers to the total heat released by a material from ignition to flame extinction at a preset incident heat flux, in MJ/m². Combining HRR with THR allows a better evaluation of the combustibility and flame retardancy of materials, and gives more objective and comprehensive guidance for fire research. Figure 12 shows that SMA has the largest THR of 24.34 MJ/m², because it has the largest asphalt content. The THR of OGFC, 16.73 MJ/m², is close to that of PU, 16.04 MJ/m², but the ignition time of OGFC is short and its heat release is concentrated and intense. Asphalt pavement is therefore more dangerous than polyurethane pavement in a fire. Finally, the total heat released by the cement is very small but still nonzero, because the cement was only seven days old and had not fully hydrated.
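Since THR is by definition the time integral of HRR, a short numerical sketch makes the relationship explicit; the HRR samples below are hypothetical, not the measured curves of Figure 11.

```python
# Minimal sketch: THR as the time integral of HRR (trapezoidal rule).
# The (time, HRR) samples are hypothetical, not the Figure 11 data.
def total_heat_release(times_s, hrr_kw_m2):
    """Integrate HRR [kW/m^2] over time [s] -> THR [MJ/m^2]."""
    thr_kj = sum((t2 - t1) * (q1 + q2) / 2.0
                 for t1, t2, q1, q2 in zip(times_s, times_s[1:],
                                           hrr_kw_m2, hrr_kw_m2[1:]))
    return thr_kj / 1000.0  # kJ/m^2 -> MJ/m^2

times = [0, 60, 120, 180, 240, 300]          # s
hrr   = [0.0, 90.0, 120.0, 70.0, 30.0, 0.0]  # kW/m^2

print(f"THR ≈ {total_heat_release(times, hrr):.2f} MJ/m^2")
```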
Specific Extinction Area (SEA) and Total Smoke Release (TSR)

SEA is a dynamic parameter characterizing the amount of smoke emitted at each moment of the combustion process; it reflects the amount of smoke produced per unit mass of volatilized material (unit: m²/kg). TSR reflects the total amount of smoke generated and released per unit area in the fire field (unit: m²/m²). These data correlate well with the smoke parameters of large-scale experiments. Figure 13 shows that SMA has the largest average smoke emission and total smoke emission, followed by OGFC, PU and cement. In fires, excessive smoke may lead to asphyxiation, hypoxia and death; asphalt pavement is therefore more harmful to the environment and the human body than polyurethane pavement when burning.

Fire Performance Index (FPI)

FPI combines the ignition time and the peak heat release rate: it is the ratio of TTI to pkHRR (unit: s·m²/kW). The larger the FPI value, the stronger the fire resistance. Figure 14 shows that the FPI of OGFC is smaller than that of SMA, and that the FPI of the asphalt mixtures is smaller than that of the polyurethane mixture, which confirms, as analyzed above, that polyurethane pavement has better flame retardancy than asphalt pavement. Because the pkHRR of cement is almost zero, its FPI value is extremely large and is not shown in the figure.
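The FPI computation is a one-liner; the sketch below applies it to illustrative values, where the TTI and SMA/PU pkHRR numbers are hypothetical and only the OGFC pkHRR of 120.38 kW/m² comes from the text.

```python
# Minimal sketch: FPI = TTI / pkHRR. TTI values are hypothetical;
# only the OGFC pkHRR (120.38 kW/m^2) is quoted from the text.
samples = {
    # name: (TTI in s, pkHRR in kW/m^2)
    "OGFC": (40.0, 120.38),
    "SMA":  (55.0, 110.0),
    "PU":   (90.0, 95.0),
}

for name, (tti, pkhrr) in samples.items():
    fpi = tti / pkhrr  # s*m^2/kW; larger means better fire resistance
    print(f"{name:4s} FPI = {fpi:.3f} s*m^2/kW")
```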
Summary and Conclusions

In this paper, the functional properties of PTO were evaluated and compared with those of conventional asphalt (OGFC and SMA) and concrete pavement materials with regard to noise reduction, solar reflection and combustion behavior. Significant improvements were observed in the functional properties as well as in the environmental performance. The main findings of this study are summarized as follows:

• Based on the evaluation by the acoustic tube test, both PU and OGFC show superior acoustic properties in comparison with the conventional SMA and concrete pavement materials. However, compared to OGFC, PU absorbs noise over a wider frequency range. Especially for the high-frequency noise that is more likely to occur in tunnels, PU exhibits its maximum noise absorption coefficient, while the other materials show almost no absorption in this range. In conclusion, PU is more efficient for noise reduction in tunnel pavements.

• The light reflection rate varies significantly among the different pavement surfaces. Based on the heat reflection tests, the OGFC and SMA, which use bitumen as the binder material, exhibit the lowest heat reflectance values. Concrete presented the highest reflection rate, followed by PU, in the wavelength range from 800 to 2000 nm. On the other hand, an increase in radiation time results in a significant increase in the surface temperature of both the PU and the asphalt material; however, compared to the asphalt, the temperature increase on the PU surface is almost 30% smaller.
This PU pavement surface can therefore potentially lessen the urban heat island effect, which is especially meaningful when PU is applied at the entrance and exit sections of a tunnel, where the pavement is not completely enclosed by the tunnel section.

• Comparison of the combustion test results shows that PU has better flame retardancy than asphalt (OGFC and SMA). In particular, the TTI of PU is larger than that of OGFC and SMA, indicating that PU is more difficult to ignite. Asphalt (OGFC and SMA), with a higher pkHRR than PU, burns faster and poses a greater fire risk. SMA has the largest THR, indicating that it releases the largest amount of heat. The THR of OGFC is close to that of PU, but the ignition time of OGFC is short, so its heat release is more concentrated and intense. SMA has the largest average and total smoke emissions, followed by OGFC, PU and cement. The FPI of asphalt is smaller than that of PU.

Overall, this study has demonstrated that this innovative wearing layer can potentially provide tunnel pavements with new eco-friendly functions compared with conventional asphalt materials. In further research, the PTO will be applied and analyzed in the construction of model tunnels, and the relative production and laydown direct costs of all the mixtures will be investigated. More comprehensive tests will be carried out on the mechanical and functional properties of the PTO materials; in particular, the long-term performance of PTO under repeated dynamic (traffic) loading needs to be investigated carefully through dynamic fatigue and fracture tests [38]. The possibility of applying PTO to race tracks in stadiums will also be investigated.
2019-06-26T13:04:02.468Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "f3f6a12a4e0fe46fad60de7ca8fda28b6c7778b4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/12/12/1990/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f3f6a12a4e0fe46fad60de7ca8fda28b6c7778b4", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
261627426
pes2o/s2orc
v3-fos-license
Presidentialism and Democracy in East and Southeast Asia, ed. Marco Bünte, Mark R. Thompson, Routledge 2023, pp. 173

The actual history of the constitutional democracies of the independent states of East and Southeast Asia begins during the Cold War, when this region also became an area of conflict between two warring blocs. Decolonization and democratization in this region began after the end of World War II and lasted practically to the first decade of the 21st century, although according to many, these processes cannot be considered complete yet. At that time, guided by historical experience, the religion professed by the majority of society, and economic ties with former colonizers or neighbouring countries, choices were made as to the political system adopted, the system of government, and the main principles of the state's functioning.
At present, it is very difficult to find a common denominator for legal and comparative research on the constitutional systems of the countries of the region discussed. It should be taken into account that when choosing a democratic system and a system of government, political elites are guided by certain factors. Research on this subject shows that two elements are particularly important: institutional experience and the cultural geography of a given country. Earlier democratic experiences, especially when they led to the introduction of authoritarian governments, act as constraints on the authors of a constitution, who, wanting to avoid the mistakes of their predecessors, eliminate previously adopted political solutions from their field of interest. At the same time, states that have experienced a military dictatorship are more likely to introduce a presidential system. The second factor determining the choice of the system of government is cultural geography and colonial experience. Most Latin American countries have adopted a presidential system, while Western European countries have a parliamentary system. In these regions, only a few countries have decided to introduce a semi-presidential system, although it is widespread in Eastern Europe. The influence of former colonizers is not without significance. Former British colonies strongly favour parliamentarianism, while most former French colonies in Africa adopted a semi-presidential system, and all the former Spanish colonies in South America introduced a presidential system. Of course, the time of introducing changes also matters for the choice of political system. During the so-called second wave of democratization, a parliamentary system was clearly preferred (it was introduced by 37 out of 55 countries undergoing political transformation in the years 1945-1973, that is 67.3%). In turn, the so-called third wave of democratization favoured a presidential system (in 1974-2006, 43 out of 92 countries introduced this system of government, that is 46.7%).¹

In the light of the above, the analysis of institutional solutions adopted in the constitutions of the countries of the region under discussion allows us to distinguish two prevailing types of government systems characteristic of this area, namely the parliamentary and presidential systems. The first of them is present in Malaysia, Singapore, Thailand, and formally in Vietnam, as well as in Cambodia and Laos (whose basic laws were modelled, among others, on the Vietnamese model). Indonesia, the Philippines, and Timor-Leste have adopted a presidential system of government. In the years 1945-2006 (i.e. in the period covering the so-called second and third waves of democratization), in the group of 26 Asian countries undergoing democratic transformation, 17 introduced a parliamentary system and 5 a presidential system (the remaining 4 countries adopted a mixed model).² The adoption of a presidential system in the Southeast Asian region can therefore be seen as an exception.

¹ Jung Jai Kwan, Ch. J. Deering, "Constitutional choices: Uncertainty and institutional design in democratising nations," International Political Science Review 2015, 36, pp. 60-77.

The presidential system has long been criticized, especially by political
scientists (Juan Linz³). This system is characterized by a rigorous division (separation) of the legislative and executive powers and a combination of the functions of the president and the head of government. Pursuant to these principles, the president (as an organ of executive power) has full executive power and is exempt from accountability before parliament. Critics argue that this is a system that leads to instability in power. Sharing the position put forward by Linz, they suggest that because the president and the legislature are elected separately, the legitimacy of their power is competitive, which causes conflicts. At the same time, a lack of mechanisms for resolving disputes between the president and the legislature leads to the politicization of the judicial system and the involvement of the justice system in conflict resolution. However, recent research conducted in this regard on the governments of Southeast Asian countries that have adopted the presidential system of government seems to contradict this thesis. As a consequence, the presidential system of government has its supporters and opponents, strengths and weaknesses. Nevertheless, its characteristics do not in themselves prevent the construction of a lasting presidential democracy. How the presidential system of government functions in practice depends not only on the formal institutional framework adopted, but also on such variable and non-obvious factors as the personality of political actors, the party system, and general cultural issues.

The book reviewed here is included in the "Routledge/City University of Hong Kong Southeast Asia Series" that has been published since 2004. Presidentialism and Democracy in East and Southeast Asia examines the impact of presidential systems on democracies by examining three distinct topics: the perilousness of competing legitimacies of the executive and legislative branches; issues of institutional design (particularly regarding semi-presidentialism); and the rise of executive aggrandizement. Despite often intense political conflict and temporary instability in East and Southeast Asia, presidential systems of various types - from relatively "pure" forms to semi-presidentialism and other hybrids - have largely been resilient. Although there are signs of growing authoritarianism in several cases, presidentialism, associated with both accommodation and conflict, has usually not driven this. The book's contributions to the debates on presidentialism - as the authors claim - will be of interest to students and scholars of comparative politics, and the book also offers detailed analysis of the presidency in these East and Southeast Asian cases.

The book examines presidential systems operating in South Korea, the Philippines, and Indonesia (pure cases of presidentialism), in Taiwan and Timor-Leste (semi-presidentialism), and in Myanmar (a hybrid system). The aim of the authors is to fill a gap in the existing literature comparing presidentialism, which until lately continued to exclude most East and Southeast Asian cases.

Bearing in mind the above, Erik Mobrand argues that formal institutions in South Korea account for neither the stability nor the instability of presidentialism. As regards the Philippines (the oldest and purest form of presidentialism in the region), Mark R. Thompson provides strong evidence for the dangers of that system, both in terms of the competing legitimacy claims of the executive and the legislature, and in connection with executive aggrandizement.
In the chapter on Indonesia, Dirk Tomsa argues that presidential politics there is, above all, a reflection of a complex regime configuration in which the president navigates between popular demands from the electorate, the interests of powerful veto actors who use democratic procedure only as an instrument to defend their predominantly material interests, and a constantly evolving but still inefficient set of political institutions that have largely failed to ensure accountability and transparency. Andreas Ufen, in turn, highlights the effect of presidentialism on the structure of political parties in Indonesia.

The chapter on Taiwan focuses on its evolution after 1949. Taiwan's semi-presidential form of government consists of a parliamentary system with a president mandated to fulfil the role of political adjudicator between the legislative and executive branches of government. As in South Korea, civil society in Taiwan has been crucial for successfully overcoming authoritarian legacies in order to build democratic accountability in a presidential system.

In an analysis of Timor-Leste, Rui Graça Feijó explains that the election of non-partisan presidents has contributed to stabilizing the country's young democracy. After 2017, this informal tradition was abandoned, and it is unclear how this will influence the political system.

Marco Bünte argues that Myanmar's 2008 constitution created a special form of hybrid presidentialism, which not only conditioned the transition from military to civilian rule but also provided the background for later military dissatisfaction, ultimately leading to the military coup of February 2021.

Consequently, the authors conclude that presidentialism has often been quite resilient in East and Southeast Asia, largely defying claims of its political perilousness based on research from other regions, particularly Latin America.

The publication reviewed here is undoubtedly a significant voice in the discussion on the advantages and disadvantages of the presidential system of government. It clearly shows that not only the provisions of the constitution but, above all, other factors determine how well an adopted model of government functions. On the basis of the findings made by the authors of this publication, it can be concluded that a discussion on how to ensure the efficient and, at the same time, law-abiding functioning of a presidential system of government is more desirable than considering - in the event of problems - a drastic change of regime to a parliamentary or semi-presidential one. Taking into account that the same system can function in different ways (e.g. under different presidents), reforms of the political system, especially radical ones, should be approached with caution. At the same time, a thorough analysis should be made not only of the provisions of the constitution but also of presidents' informal practices and leadership styles, assessing how the system will function in various configurations of such extra-legal factors.
2023-09-10T15:05:05.801Z
2023-08-31T00:00:00.000
{ "year": 2023, "sha1": "820dd3c6d953c790dcf065b0f68d63011aa63fec", "oa_license": "CCBY", "oa_url": "https://www.ejournals.eu/pliki/art/24043/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2e993f002016c6f6a03419519aebb1a118cda101", "s2fieldsofstudy": [ "Political Science", "History" ], "extfieldsofstudy": [] }
263748859
pes2o/s2orc
v3-fos-license
Composite Estimation of Permeability in Identified Hydrocarbon Reservoirs of Langbodo Field Niger Delta, Nigeria

This study identified the fluid types and boundaries present within selected reservoirs in the Langbodo field, using petrophysical parameters based on estimated rock properties such as porosity, permeability, irreducible water saturation, hydrocarbon saturation and bulk water volume. This was with a view to correcting the errors that reservoir heterogeneities introduce into the building of realistic reservoir models. The quality of the data obtained was checked, and the data were despiked to eliminate null values. The Petrel 2009 and OpendTect 4.6.0 exploration and production software packages were used for the qualitative interpretation of the data, such as lithology identification, delineation of potential reservoirs and determination of fluids and fluid contacts. Quantitative petrophysical parameters were estimated by inputting the data into Microsoft Excel 2015 and adopting appropriate mathematical relations, such as the Tixier, Timur, and Coates and Dumanoir models for the permeability (K). A realistic estimate of the permeability was obtained by comparing the average of the Tixier, Timur, and Coates and Dumanoir models with each of the individual models. The composite model obtained mirrors the behavior of the Timur permeability, which is higher than that of the Tixier and the Coates and Dumanoir models. Integration of Archie's equation and the neutron-density crossplot confirmed the presence of substantial hydrocarbon in the reservoirs, although producibility indicators revealed that the reservoirs may not be producible without enhanced oil recovery methods. This study established that the composite model is a better representation of K in the study area because it agrees with the Timur estimation model. I.
INTRODUCTION

The geometric progression of the world population has called for a higher demand for energy, of which hydrocarbons constitute a dominant percentage, especially among the non-renewable energy sources. Multinational hydrocarbon exploration companies may experience poor reservoir performance within a few years of production due to inadequate description of reservoir properties [1]. The success of any hydrocarbon exploration program depends on the building of reliable reservoir models [1]. Some of the most important parameters needed to characterize reservoir quality are permeability, porosity and shale volume. Their accurate prediction is the basis upon which one can actually identify whether a reservoir is producible. The description of reservoir characteristics and fluid flow performance can be anchored on the permeability, and this plays a very important role in designing exploration and development plans [2]. An accurate estimation of the permeability enhances oil and gas field development, and it is the basis for building geophysical models, accurately predicting oil and gas reserves and drawing up a reasonable development plan [3]. The importance of accurate estimation of the permeability cannot be overemphasized, as other petrophysical parameters depend on it directly or indirectly. Therefore, accurate estimation of the permeability in identified reservoirs is undoubtedly important to enhance the validity of other dependent variables. Unfortunately, diverse methods, with varying strengths and weaknesses, have been proposed by various researchers. The Tixier model of 1949, the Timur model of 1968 and the Coates and Dumanoir model of 1981 [4][5][6] were applied by several researchers [7][8][9][10], with little or no cognizance of the weaknesses inherent in such approaches. These models are based on correlations between permeability and other geophysical parameters such as porosity and irreducible water saturation. The estimation of permeability from the aforementioned models is often accompanied by varying strengths and weaknesses, which the unsuspecting explorationist may not watch out for, depending on the peculiarity of the geology of the area under consideration. Therefore, this research work is aimed at enhancing the computation of the permeability by integrating the strengths of the diverse methods for estimating the parameter, using suites of well logs from the "Langbodo field", Niger Delta. The findings will improve reservoir quality assessment and assist other producibility indicators in ranking the reservoirs for further development decisions.

II. STUDY AREA

The study area is located in the "Langbodo field", onshore Niger Delta, within the Shell Petroleum Development Company acreage. The field is bounded by latitudes 4° 46′ and 5° 57′ and longitudes 5° 37′ and 5° 64′. Figs.
1 and 2 are maps showing the study area and the well location points, respectively.

Fig. 1 An outlay of the "Langbodo field" showing well locations.
Fig. 2 Location map of the petroliferous Niger Delta, showing the important framework and Tertiary delta growth [11].

The subsurface geology of the study area is that of the Niger Delta Basin, and the wells drilled in the study area provided the wireline logs used for this study. The Niger Delta is an extensional rift basin located in the Niger Delta and the Gulf of Guinea on the continental margin close to the western coast of Nigeria, with access to Equatorial Guinea, Cameroon and São Tomé and Príncipe. The complexity of the basin is reflected in its highly productive hydrocarbon system. The Niger Delta Basin is one of the largest sub-aerial basins in Africa and is composed of several geologic formations that indicate how the basin initially formed and the large-scale tectonics of the area. Research shows that some other basins formed by similar geologic processes exist around the Niger Delta. Its formation can be traced to a failed rift junction formed during the separation of the South American plate and the African plate, as the South Atlantic started to open [10]. Rifting in this basin started in the late Jurassic and terminated in the mid-Cretaceous. Continued rifting led to the formation of several faults, many of which are thrust faults. Also at this time, the late Cretaceous deposits consisted largely of syn-rift sand and shale, showing that the shoreline regressed during this period. Concurrently, the basin had been undergoing extension, leading to high-angle normal faults and fault-block rotation. At the beginning of the Paleocene, there was a noticeable shoreline transgression, and during the Paleocene the Akata Formation was deposited. In the Eocene, the Agbada Formation followed the underlying Akata shale [12]. This formation loading caused the underlying Akata shale to be compressed and squeezed into shale diapirs. During the Oligocene, the Benin Formation was deposited; it is the shallowest part of the sequence, with formation ages ranging from earlier times to Recent [12].

III. MATERIALS AND METHODS

Enhanced computation of the permeability (K) of reservoirs involves a detailed qualitative and quantitative evaluation of other petrophysical parameters, such as lithology, porosity, and water and hydrocarbon saturation, among others. The analytic procedure is aimed at a better estimation of permeability (K), which will also assist in determining the quality of the reservoirs. Results obtained from at least three methods of estimating the reservoir permeability were used to arrive at a better K for the reservoirs. A systematic analysis suitable for this case was adopted for the estimation of the permeability of the reservoirs of interest penetrated by the "Langbodo" wells. The Schlumberger Petrel software was used to enhance interpretation, and Microsoft Excel® was used to estimate K from the various equations, adopting appropriate mathematical and statistical evaluations, as well as to compare the obtained results. Table I below summarizes the data set used for this research; Wells A, B and C penetrated total depths of 11,500, 11,620 and 12,035, respectively.

Table I: Available logs from the three wells used for the research.
IV. DETERMINATION OF RESERVOIR PERMEABILITY

Permeability is the ability of a rock to transmit fluids. It is determined by the size of the connecting passages (pore throats or capillaries) between pores and is a key parameter in the characterization of any hydrocarbon reservoir. It is measured in darcies or millidarcies. The Tixier (1949), Timur (1968) and Coates and Dumanoir (1981) equations were used to derive the permeability of each reservoir identified in the Langbodo field.

A. Permeability Estimation by Tixier Method

Tixier (1949) proposed that permeability (K) is a direct function of the cube of the reservoir porosity and an inverse function of the reservoir irreducible water saturation.

B. Permeability Estimation by Timur Method

The second permeability equation used for this comparative study was proposed by Timur (1968). It is based on an algorithm similar to that of Tixier (1949), although some variations exist between the two. The earlier estimated irreducible water saturation was used to compute the Timur permeability at 0.5 ft intervals along the boreholes; average values across each of the reservoirs were then taken.

C. Permeability Estimation by Coates and Dumanoir Method

Thirteen years after the emergence of the Timur relation, Coates and Dumanoir proposed another relation that expresses the reservoir permeability as a direct function of the square of the porosity and a fraction of the irreducible water saturation, where K = Coates and Dumanoir permeability, φ = porosity and Swi = irreducible water saturation.
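A compact sketch of the composite workflow may help. The coefficient forms below are commonly cited versions of the Tixier, Timur, and Coates-type correlations (porosity and Swi as fractions, K in mD); they are assumptions for illustration, since the paper's exact coefficients are not reproduced here.

```python
# Minimal sketch of the composite permeability workflow. The correlation
# forms below are commonly cited versions (phi and swi as fractions, K in
# mD) and are assumed for illustration, not taken from this paper.
def k_tixier(phi, swi):
    # Commonly cited Tixier form: K^(1/2) = 250 * phi^3 / swi
    return (250.0 * phi**3 / swi) ** 2

def k_timur(phi, swi):
    # Commonly cited Timur (1968) form: K = 8581 * phi^4.4 / swi^2
    return 8581.0 * phi**4.4 / swi**2

def k_coates(phi, swi):
    # Simplified Coates-type form: K^(1/2) = 70 * phi^2 * (1 - swi) / swi
    return (70.0 * phi**2 * (1.0 - swi) / swi) ** 2

def k_composite(phi, swi):
    """Composite estimate: arithmetic mean of the three correlations."""
    ks = (k_tixier(phi, swi), k_timur(phi, swi), k_coates(phi, swi))
    return sum(ks) / len(ks)

phi, swi = 0.28, 0.10  # hypothetical reservoir averages
print(f"Tixier    K = {k_tixier(phi, swi):10.1f} mD")
print(f"Timur     K = {k_timur(phi, swi):10.1f} mD")
print(f"Coates    K = {k_coates(phi, swi):10.1f} mD")
print(f"Composite K = {k_composite(phi, swi):10.1f} mD")
```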
V. DETERMINATION OF POROSITY

Porosity is the ratio of the total volume of pore space to the entire volume of the formation. It relates the amount of internal space in a given volume of rock to the total volume of rock in the reservoir; the amount of internal space, or voids, in a given volume of rock determines the amount of fluid it will hold:

φ (%) = (Vpore / Vtotal) × 100%   (4)

Porosity may be classified as primary (inter-granular) or secondary. Primary porosity has existed in the formation since the sediments were deposited, while secondary porosity results from the action of tectonic forces or formation water. Porosity may also be classified as effective or absolute. Effective porosity is the porosity available to free fluids, excluding unconnected pores and the pore space occupied by bound water and disseminated shale; absolute porosity is the total porosity regardless of whether or not the individual voids are connected. Porosity was estimated from the density logs in this study.

A. Density Log Derived Porosity

The density log is a porosity log that measures the electron density of a formation. It can assist the geologist in identifying minerals, detecting gas-bearing zones, determining hydrocarbon density, and evaluating shaly sand reservoirs and complex lithologies. Effective porosity was calculated over the evaluated interval using the equation

φ = (ρma − ρb) / (ρma − ρfl)

where ρma = matrix density (usually 2.66 g/cm³ for sandstone), ρb = the formation's bulk density (obtained from the density log at 0.6 ft intervals), ρfl = the formation's fluid density (1.5 g/cm³ for water and 0.8 g/cm³ for hydrocarbon) and ρsh = the density of the adjacent shale body.

B. Determination of Irreducible Water Saturation (Swi)

Irreducible water saturation is the water held in the pore spaces by capillary forces. When a zone is at irreducible water saturation (Swi), the water in the uninvaded zone (Sw) will not migrate because it is held against the grains by capillary pressure. For most reservoir rocks in the field, irreducible water saturation ranges from less than 10% to more than 50% [13]. Swi was estimated from the formation factor using the standard empirical relation

Swi = (F / 2000)^(1/2)

where F = formation factor. The formation factor was estimated from Archie's equation

F = a / φ^m

where φ = porosity, a = lithologic constant and m = cementation exponent.
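A short numerical sketch of the density-porosity and irreducible-water-saturation workflow just described follows; the log readings and Archie constants are hypothetical placeholders, and the Swi relation is the commonly used empirical form assumed above.

```python
import math

# Minimal sketch of the porosity -> formation factor -> Swi workflow.
# Log readings and Archie constants below are hypothetical placeholders.
RHO_MA = 2.66    # g/cm^3, sandstone matrix density (as in the text)
RHO_FL = 0.8     # g/cm^3, hydrocarbon fluid density (as in the text)
A, M = 1.0, 2.0  # Archie lithologic constant and cementation exponent (assumed)

def density_porosity(rho_b):
    """Density-log porosity: (rho_ma - rho_b) / (rho_ma - rho_fl)."""
    return (RHO_MA - rho_b) / (RHO_MA - RHO_FL)

def irreducible_sw(phi):
    """Swi from Archie's formation factor, using Swi = sqrt(F / 2000)."""
    f = A / phi**M  # Archie: F = a / phi^m
    return math.sqrt(f / 2000.0)

for rho_b in (2.20, 2.30, 2.40):  # hypothetical bulk densities, g/cm^3
    phi = density_porosity(rho_b)
    print(f"rho_b={rho_b:.2f}  phi={phi:.3f}  Swi={irreducible_sw(phi):.3f}")
```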
A. Quantitative Analysis of Reservoir Permeability (K)

The average permeabilities recorded across the fifteen mapped reservoirs are shown in Tables II-IV. Permeability obtained in Well A by the Tixier method ranged from 244,056.9 in Reservoir A to 2,885.8 in Reservoir E; similarly, the Timur equation and the Coates and Dumanoir equation yielded values ranging from 86,717.0 to 2,955.2 and from 3,529.7 to 640.5, respectively. In general, permeability in Well A decreases with depth, which may not be unconnected with closer grain packing owing to overburden loads. The Tixier method yielded the highest permeabilities across the reservoirs of Well A. This trend changes remarkably in Well B, where the Tixier method records values of less than 100, although these values are very close in range to the results obtained from the Coates and Dumanoir equation. The Tixier method also yielded anomalously low permeability values in the intervals penetrated by reservoirs A to E of Langbodo Well C. The mean permeability values across the reservoirs are tabulated alongside the triple set of permeability estimates, and Figs. 3-5 (a and b) compare the average permeability values with each of the values obtained from the triple set of equations across the reservoirs of Wells A, B and C, respectively.

The inconsistency noticed in the values obtained from the Tixier method is believed to be due to certain effects on the reservoirs; this may be a hydrocarbon effect or the presence of heterogeneities that masked the true permeability as presented by the method. [14], [15] and [16] have variously reported permeabilities for some Niger Delta reservoirs, and their values are far higher than those obtained by the Tixier method in Wells A and B. Since mean values are true representations of a dataset, the average permeability values are preferred to the individual methods; it is, however, noticed that close similarities exist between the average permeability values and the Timur permeability (Figs. 3-5).

In Well A (Fig. 3), the reservoir permeability obtained from RSV 1 dropped suddenly in RSV 2, increased slightly in RSV 3 and dropped again in RSV 4 and RSV 5. This result might be attributed to predominantly laminated shale deposits in reservoirs 2, 4 and 5 of Well A. The Timur estimation model, however, gave almost the same values as the composite model, which is a merger of the three other models; as reported earlier in this work, the discrepancies may result from the permeability being masked by heterogeneities such as shale/clay [16].

In Well B (Fig. 4), the composite model was found to behave almost identically to the Timur estimation model. From this research, we find higher estimated transmissivities and a high reservoir permeability distribution within the models; the result from the composite model is a better representation of the permeability. This is similar to the work of [10], in which, in the absence of core data from the study area, different models were merged with core data from a nearby field to develop a reliable reservoir geophysical model.

In Well C (Fig. 5), the three estimation models behaved in almost the same way for all five reservoirs, possibly as a result of similar deposits in each of them. It is evident from Fig. 5 (a and b) that the composite permeability model, obtained from the merger of the three other models employed in this work, mirrors the Timur permeability model; integrating the existing models therefore gives a better picture of the reservoirs in Well C. The differences in the behavior of the individual models may be a result of the permeability being masked by certain unnoticed heterogeneities [15].
B. Discussion of Other Petrophysical Parameters

Other petrophysical parameters were also evaluated; part of the results is shown in Tables II, III and IV, and the irreducible water saturation is tabulated in Table V. Some of the important parameters are discussed below:

Table II: Average porosity and permeability values obtained for the Well A reservoirs.
Table III: Average porosity and permeability values obtained for the Well B reservoirs.
Table IV: Average porosity and permeability values obtained for the Well C reservoirs.
Table V: Irreducible water saturation (Swi) in the Langbodo wells' reservoirs.

1) Porosity

The average porosity values (Table II) obtained for the reservoirs of Well A are 0.3160, 0.2770, 0.2710, 0.2610 and 0.2690 for RSV 1 to RSV 5, respectively, strongly suggesting moderate to good porosity in the reservoir sands. The effective porosity values, however, are noticeably lower, with values (in v/v) of 0.289, 0.252, 0.236, 0.236, 0.234 and 0.237 recorded. The differences between the density porosity and the effective porosity are interpreted as microporosity contributions from shale/clay. Similar trends are also observed in Well B (Table III) and Well C (Table IV).

2) Irreducible Water Saturation

The observed irreducible water saturation ranges from 0.09 to 1 for Well A, from 0.094 to 0.138 in Well B and from 0.081 to 0.087 in Well C (Table V). These values are high enough to ensure little to no water cut during production.

VII. CONCLUSION

Careful comparison of the Tixier, Timur, and Coates and Dumanoir equations with the mean (composite) permeability shows that the Timur relation closely agrees with the composite (mean) value, although some marked discrepancies were observed at some points. Petrophysically, the subsurface reservoirs in the Langbodo field hold reasonable hydrocarbon in their pore spaces, and the estimated producibility indicators are good enough to support secondary migration of this oil into the borehole, if developed. Based on the findings of this study, the use of a single method for estimating reservoir permeability (K) is strongly discouraged; it is recommended that composite permeability (K) equations be used.
2023-10-08T15:16:36.002Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "20942db11817297fa04b54793e53e4235c3d0f19", "oa_license": "CCBY", "oa_url": "https://doi.org/10.47514/phyaccess.2022.2.1.003", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5bf9d6a006c6626943863c3b64ec02147c786b82", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
211531921
pes2o/s2orc
v3-fos-license
The Role of Age on Effectiveness of Active Repositioning Therapy in Positional Skull Deformities

Purpose: Non-synostotic positional deformities are currently diagnosed in nearly half of newborns; however, no evidence-based guidelines are available for their management. The aim of this study is to assess the effect of active repositioning treatment in infants with positional skull deformities.

Method: Retrospective data from 158 infants treated with active repositioning as a conservative treatment for at least 2 months were analyzed in this study. Anthropometric 3D-scanner measurements of pre- and post-treatment diagonal difference, cranial vault asymmetry index and cranial ratio were evaluated for each patient. Infants were separated into 4 groups according to their morphologic deformation types - plagiocephaly, brachycephaly, scaphocephaly and combined (brachycephaly + plagiocephaly) - and into 2 groups according to age at onset of treatment.

Results: In the combined group, mean pre-treatment diagonal difference and cranial vault asymmetry index values decreased from 9.38 mm and 6.9% to 6.94 mm and 4.9%, respectively. In the plagiocephaly group, mean pre-treatment values changed from 10.32 mm and 7.5% to 7.83 mm and 5.5%, respectively, after treatment. All these changes were statistically significant. The effect of the timing of repositioning treatment on the different positional skull deformities was analyzed, and the outcome was found to be significantly improved when the active repositioning treatment was started before 4 months of age.

Conclusion: Improvement rates of the asymmetry decrease with age due to the decreasing rate of skull growth. Early diagnosis, especially before 4 months of age, more parental education, and close follow-up are important for babies with this condition, who may benefit from repositioning treatment alone.

The neonatal skull is soft and moldable in the natal and newborn periods due to the rapidly growing brain tissue. Skull deformities may be classified as pathologic (craniosynostosis, secondary to abnormal suture development) or deformational/positional (secondary to external forces acting upon the cranium). Craniosynostosis usually requires surgical intervention; with early diagnosis, however, positional skull deformities may be treated with active repositioning, physical therapy and helmet therapy in infants (1-3). The American Pediatric Academy (APA) started a campaign suggesting that babies should sleep in the supine position to decrease sudden infant death (4). Soon after acceptance of the supine sleeping campaign in almost all countries, a 50% decrease in sudden infant death syndrome was recorded (5). However, Argenta et al. in 1996 reported an up to 600% increase in the prevalence of cranial asymmetries, and a consensus has since emerged about the relation between deformational plagiocephaly and the supine sleeping position (6,7). Nowadays, skull deformities are diagnosed in 45% of infants, the most common diagnoses being plagiocephaly, brachycephaly and scaphocephaly; symptoms may initially be observed between the 4th and 8th weeks of life (8-11). Positional plagiocephaly can be recognized as unilateral parieto-occipital flattening with ipsilateral frontal bossing and an anterior shift of the ipsilateral ear, resulting in a parallelogram deformity of the head.
Central bi-occipital flattening with anterior-posterior shortening and medial-lateral widening of the head is characteristic of deformational brachycephaly, which is therefore also known as 'short head' syndrome (11). Scaphocephaly, or 'narrow head', is characterized by anterior-posterior elongation and bi-parietal shortening of the skull (12,13). Besides the cosmetic problems, it has been suggested that positional deformities may constitute a risk for temporo-mandibular joint problems, motor skill deficiencies, sleep apnea syndrome, visual field defects, ear infections, and difficulties with cognitive functions and academic achievement (14-18). The first 4 postnatal months are critical for the development of positional skull deformities (PSD), and the level of deformation peaks by the end of the 4th month (19). Therefore, in 2008 the American Pediatric Academy (APA) proposed that infants should be positioned face down, under surveillance, 2-3 times a day for 3-5 minutes during their awake times to prevent cranial asymmetry, and that this duration should be increased as the child grows older (20). Since the infant skull is easier to mold, early infancy is the most favorable time to prevent PSD. The aim of this study is to investigate the effect of early conservative treatment on the improvement of cranial asymmetry rates in PSD patients.

MATERIALS AND METHODS

A retrospective analysis of all infants admitted to our outpatient clinics due to skull shape deformities between 2014 and 2018 was performed. Infants who received positional treatment for at least 2 months were identified and included in the study. Parameters including gender, delivery method (vaginal delivery vs caesarean section), gestational age at birth (premature/mature), twin status, age at diagnosis, onset of treatment, treatment duration (days), and anthropometric measures of pre- and post-treatment diagonal difference (DD), cranial vault asymmetry index (CVAI) and cranial ratio (CR) were evaluated for each patient. Cranial parameter analyses were made with the SmartSoc and Omega Scanner 3D systems (Figure 1). The same instrument was used for all measurements of an individual infant throughout the study; either of these two systems was used for each patient, and they were never used together. The same technician performed the scanning and evaluated the cranial alignment for each infant.

Patients were divided morphologically into 4 groups: Group I (plagiocephaly), infants whose cephalic index was between 78 and 89 and whose CVAI was greater than 3.5%; Group II (brachycephaly), infants whose cephalic index was greater than 89 and whose CVAI was less than 3.5%; Group III (scaphocephaly), infants with a cephalic index of less than 78; and Group IV (combined: brachycephaly + plagiocephaly), infants whose cephalic index was greater than 89 and whose CVAI was greater than 3.5%. Infants were also distributed into 2 groups according to age at diagnosis: Group A, infants below 4 months of age, and Group B, infants aged 4 months and older. Statistical analyses were performed using IBM® SPSS® Statistics (version 21.0). Student's t-test or the Mann-Whitney U test was used to compare variables between cohorts, and p < 0.05 was accepted as statistically significant.

RESULTS

A total of 158 infants were included in the study. Demographics and basic evaluations regarding the perinatal and neonatal examinations are detailed in Table 1. In the combined group (Group IV), mean DD and CVAI decreased from 9.38 mm and 6.9% before treatment to 6.94 mm and 4.9% after treatment; in the plagiocephaly group (Group I), they decreased from 10.32 mm and 7.5% to 7.83 mm and 5.5%. Both these differences were statistically significant (P_DD = 0.0001 and P_CVAI = 0.0001) (Table 2).
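The morphological grouping above reduces to two thresholded measurements; the sketch below encodes the stated cutoffs (cephalic index bands at 78 and 89, CVAI cutoff of 3.5%), with hypothetical example values rather than patients from the study.

```python
# Minimal sketch of the morphological grouping described above, using the
# stated cutoffs (cephalic index bands at 78 and 89, CVAI cutoff of 3.5%).
def classify(cephalic_index, cvai_percent):
    """Return the deformation group for one infant's measurements."""
    if cephalic_index < 78:
        return "Group III: scaphocephaly"
    if cephalic_index <= 89:
        return ("Group I: plagiocephaly" if cvai_percent > 3.5
                else "unclassified")  # CI in the normal band, low CVAI
    # cephalic index > 89:
    return ("Group IV: combined (brachycephaly + plagiocephaly)"
            if cvai_percent > 3.5 else "Group II: brachycephaly")

# Hypothetical example measurements, not patients from the study.
for ci, cvai in [(85.0, 7.5), (92.0, 2.0), (75.0, 1.0), (93.0, 6.9)]:
    print(f"CI={ci:5.1f}, CVAI={cvai:4.1f}% -> {classify(ci, cvai)}")
```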
The mean pre-treatment and post-treatment values of Group II and Group III were evaluated, and no statistically significant differences were found (Table 2). In plagiocephaly deformation types, the difference between pre- and post-treatment CVAI and DD levels was statistically significant in both diagnostic-age groups, Group A and Group B. However, in combined deformation types, the regression of CVAI and DD levels was not statistically significant for Group B infants (Table 3). In combined deformation types, the absolute change in both DD and CVAI levels after treatment was significantly different in Group A compared to Group B infants. DISCUSSION Our study analyses the impact of conventional treatment (repositioning) on the management of skull deformities. We found that infants <4 months old at the onset of treatment responded better to treatment than infants ≥4 months old. This was shown by a statistically significantly better reduction in DD and CVAI values after treatment in the former group. In general, our results indicate that repositioning treatment efficacy is related to the age at onset of treatment, and the outcome is significantly improved when the treatment is started before 4 months of age. The main purpose of our study is to analyze the efficacy of early repositioning treatment using photogrammetric methods. In this study of 158 infants, we observed that, in the group of infants diagnosed and treated before 4 months of age, a mean pre-treatment DD of 10.4 mm and CVAI of 7.65% improved to a mean DD of 7.24 mm and CVAI of 5.2% after treatment. For the group of infants diagnosed and treated after the age of 4 months, a mean pre-treatment DD of 9.2 mm and CVAI of 6.68% improved to a mean DD of 7.58 mm and CVAI of 5.25% in the post-treatment period. Comparison of the improvements in DD and CVAI measurements in both groups revealed that the improvement was much more prominent in treatment Group A, which is similar to results previously reported in the literature. Shweikeh et al. reviewed 15 articles in the literature and investigated the efficacy of current skull deformity management guidelines. They concluded that parents should be informed as early as possible about positional skull deformity (PSD) and that education combined with close surveillance is central to the prevention and management of this disorder (21). Craniofacial measurements are quite important in the diagnosis and evaluation of these patients (22). Previous studies investigated various techniques and skull shape measurements for the diagnosis and follow-up of PSD; however, there is no consensus on a practical clinical method to measure the intensity and the change of deformity (23). Radiologic diagnostic techniques are barely helpful in these patients, and although plain radiographs and computerized tomography (CT) scans were performed for these patients in the past, they are not recommended as routine diagnostic tools for patient evaluation. CT scans are not preferred for long-term follow-up in infants and children, since the patient is exposed to a high radiation dose and sedation is required to immobilize the patient to obtain optimum images. However, CT may be preferred in the differential diagnosis between deformational disorders and craniosynostosis if there is suspicion after clinical evaluation (24). Nevertheless, 3D measurement devices provide non-invasive, effective, reliable, and low-cost evaluation of skull asymmetries.
Furthermore, this technique is compatible with the gold standard 3D CT technique in diagnosis and follow-up, and may even provide more detailed and accurate shape information (25). Neglected cranial deformations may lead to negative outcomes in a child's future life. Previous studies reported associations between skull deformities and abnormal language development, visual-perception deficits, and delayed intellectual and motor development skills (13,26,27). Therefore, children at school age often require supportive education and speech therapy, physical therapy, and occupational education. These patients are also prone to astigmatism. Thus, it is common for these children to wear prescription glasses, and they need to wear proper protective helmets for some sport activities such as snowboarding and bicycle riding (28). Miller et al. reported that infants with deformational plagiocephaly constitute a high-risk group for developmental difficulties at school age (29). Recently, a study using the Bayley Scales of Infant Development III on 6-month-old plagiocephaly infants showed that these babies are at high risk for delayed neurologic development (15). Steinbok et al. reported that 33% of infants with skull deformities needed educational support and 14% were placed in special needs classes (30). Thus, these patients need to be diagnosed early by neonatologists and general pediatricians, not only to prevent aesthetic deformations but to prevent psychomotor developmental retardation as well. CONCLUSION Recently, PSD prevalence has been on the rise. It is important that pediatricians are able to evaluate the severity of the problem and establish an early diagnosis in these cases. Improvement rates of the asymmetry decrease with age due to the decreasing skull enlargement rate. Early diagnosis and close follow-up are quite important so that infants with this condition may benefit from conservative management.
2020-02-06T09:17:11.935Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "b194d84f35dc6c644efbfd0b483d77523cb3709a", "oa_license": null, "oa_url": "https://dergipark.org.tr/tr/download/article-file/1460582", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "b194d84f35dc6c644efbfd0b483d77523cb3709a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235653145
pes2o/s2orc
v3-fos-license
Dynamic Spectrum Sharing for Future LTE-NR Networks 5G is the next mobile generation, already being deployed in some countries. It is expected to revolutionize our society, having extremely high target requirements. The use of spectrum is, therefore, tremendously important, as it is a limited and expensive resource. A solution for spectrum efficiency consists of the use of dynamic spectrum sharing, where an operator can share the spectrum between two different technologies. In this paper, we studied the concept of dynamic spectrum sharing between LTE and 5G New Radio. We presented a solution that allows operators to offer both LTE and New Radio services using the same frequency bands, although in an interleaved mode. We evaluated the performance, in terms of throughput, of a communication system using the dynamic spectrum sharing feature. The results obtained led to the conclusion that using dynamic spectrum sharing comes with a compromise of a maximum 25% loss in throughput. Nevertheless, the decrease is not that substantial, as the mobile network operator does not need to buy an additional 15 MHz of bandwidth, using the already existing bandwidth of LTE to offer 5G services, leading to cost reduction and an increase in spectrum efficiency. Introduction The next mobile generation following Long Term Evolution (LTE) that has started its deployment in some countries is the 5th Generation (5G). The main focus of LTE was to increase data transfer rates, while 5G is expected to revolutionize our society, focusing not only on delivering extreme mobile broadband, but also on the fields of critical machine communication and massive machine communication. New applications will emerge, and the target values and requirements proposed are extremely demanding [1]. The main challenges of 5G systems consist of increasing data transfer rates, reducing latency, and increasing capacity, spectrum efficiency, and network energy efficiency, which will be necessary for different application scenarios [2]. The current network architecture cannot sustain all the requirements and target values of 5G New Radio (NR). Therefore, the 3rd Generation Partnership Project (3GPP) released two variations of the new network architecture for 5G NR communication systems: Non-standalone (NSA) and Standalone (SA) [3]. The main difference between both types is that the NSA architecture is based on and depends on the LTE core network, while the SA architecture uses a novel next-generation core network, not depending on any LTE infrastructure. The Internet of Things (IoT), mobile internet, and Cognitive Radio (CR) stand as relevant driving forces for 5G development [4][5][6][7]. The IoT technology has the potential to connect almost everything to the internet, which will lead to massive growth in the number of devices that require network access. Particularly, in 2018, there were approximately 22 billion connected devices, which is the equivalent of around 2.9 devices/person. It is clear that the frequency spectrum is a scarce and limited resource that constitutes an important factor in mobile communication systems, as well as the related cost for the Mobile Network Operator (MNO). In this context, new spectrum explorations [11,12], higher energy efficiency [13], and dynamic spectrum usage [14][15][16][17][18] have become the new features of communication networks.
The topic of spectrum sharing in the bands of old communication systems started drawing the attention of researchers, as it is the safest and most economical solution [19]. The standardization procedure for the spectrum sharing principles was started in March 2017 by 3GPP. One of the solutions presented regarding spectrum allocation for 5G NR systems comprises the use of the existing frequency spectrum used by the already deployed mobile generations. Spectrum sharing is based on the flexibility of the physical layer and the fact that in the LTE network, all channels are assigned in the time-frequency domain. This way, the flexibility of the 5G NR radio interface can be used for reference signals, allowing dynamic configuration and minimizing collisions between NR and LTE during simultaneous data transmission. Consequently, there is the possibility of sharing a frequency domain within the same communication channel. A comprehensive overview of the different ways of spectrum sharing investigated in recent years is found in [14]. In addition, in [20,21], new schemes and algorithms for dynamic spectrum sharing between Global System for Mobile Communications (GSM) and LTE technologies were investigated. Regarding the IoT, spectrum sharing is a preferable approach to cope with the conflicts between massive IoT connections and limited spectrum resources, as discussed in [22][23][24][25]. It can also be used to solve vertical requirements and the competition in the acquisition of frequency bands between MNOs [26,27], as well as to improve spectrum utilization in Cognitive Radio (CR) and TV white space [28][29][30]. The leading mobile producers have shown massive interest in developing solutions for dynamic spectrum sharing. These are presented in Table 2. The Dynamic Spectrum Sharing (DSS) solution allows mobile network operators to offer LTE and NR services using the same frequency bands, although in an interleaved mode. This allows NR services without the need to acquire new and dedicated frequency spectrum, antennas, or radio frequency units. The solution is intended to assist operators in the short-term rollout deployment of 5G services through LTE already-in-use spectrum. It is not intended to provide substantial performance, as that would necessitate new dedicated spectrum for NR, but to provide coverage, reduce costs, and improve spectrum efficiency for the operator. Figure 1 presents the DSS technology with LTE and NR sharing the same frequency band in comparison to using two separate bands for each technology.
The deployment of DSS technology is divided into two phases: Phase 1, which is based only on the NSA architecture and accepts a sharing ratio between 20 and 60% with a fixed UL sharing ratio; and Phase 2, which introduces a dynamic UL sharing ratio and accepts both NSA and SA architectures. The main differences between both phases are presented in Table 3. The sharing ratio refers to the ratio of shared resources between both technologies. For example, a sharing ratio of 20% means that 5G NR occupies 20% of the available resources, while LTE occupies 80%. Another example is a sharing ratio of 60%, meaning that 5G NR uses 60% of the available resources while LTE uses only 40%. For downlink, the allocation of the subframes is based on Time Division Multiplexing (TDM). In one frame, regardless of the sharing ratio adopted, subframes 0, 5, and 9 are strictly dedicated to LTE transmission. Subframes 1, 2, 3, 4, 6, 7, and 8 can be used for both LTE and NR transmission, depending on the sharing ratio and the architecture mode adopted; see Figure 2 [32]. The downlink resource allocation, when considering the transmission of several frames, varies depending on the sharing ratio implemented [33]. Different patterns are depicted in Figure 3 for the NSA architecture mode. It can be observed that for every frame, subframes 0, 5, and 9 are always dedicated to LTE.
In addition, for the first frame only, slot 1 is represented with yellow (slot type B) and slot 2 with orange (slot type B*), and both are used for transmitting synchronization and CSI-RS signals, respectively. Additional synchronization signals are sent with a period of 20 ms in slot 1 of the remaining frames, represented by green (slot type B**). The remaining slots are used for LTE and NR transmission. For uplink, the allocation of the resources is based on Frequency Division Multiplexing (FDM). As depicted in Figure 4, the Physical Resource Blocks (PRB) available for LTE or NR transmission are represented by the color green and depend on the sharing ratio adopted and the carrier bandwidth. Furthermore, there are seven PRBs dedicated to NR UL transmission only, represented by the color yellow at the right outer edge of the frequency band. These PRBs depend on the positioning of the LTE PRACH PRBs. In Figure 4, these are located at the left outer edge of the frequency band. If the LTE PRACH PRBs were located at the right outer edge, the seven NR UL PRBs would then be positioned at the left outer edge (meaning the opposite side). The calculation of the maximum available NR/LTE sharing ratio depends on N, the number of PRBs that are necessary for LTE transmission, for instance, the LTE PRACH PRBs.
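The equation itself did not survive text extraction. Under the natural reading that the NR share is capped by whatever remains after reserving N PRBs for LTE-only use, a minimal Python sketch follows; the (total − N)/total form is an assumption, not a formula confirmed by the paper, while the 75-PRB figure is the standard LTE allocation for a 15 MHz carrier:

```python
TOTAL_PRBS_15MHZ = 75  # standard LTE PRB count for a 15 MHz carrier

def max_nr_sharing_ratio(n_lte_only_prbs: int,
                         total_prbs: int = TOTAL_PRBS_15MHZ) -> float:
    """Assumed form of the maximum NR sharing ratio: the fraction of PRBs
    left after reserving N PRBs (e.g., LTE PRACH) for LTE-only use."""
    return (total_prbs - n_lte_only_prbs) / total_prbs

# Example: reserving 6 PRBs for LTE PRACH caps the NR share at ~92%
print(f"{max_nr_sharing_ratio(6):.0%}")
```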
In this paper, we studied the concept of dynamic spectrum sharing between LTE and 5G NR technologies for the same mobile network operator. We assessed the performance of an LTE-NR communication system using the DSS feature, in terms of throughput, using different sharing ratios for both NSA and SA architectures and for both transmission directions (downlink and uplink). We performed a comparison of the performance while using different modulation schemes and numbers of layers. The remainder of the paper is organized as follows. Section 2 presents the sharing ratio calculation for downlink and uplink; Section 3 provides the equipment and methods used for the measurements; and Section 4 presents the results obtained and their analysis. Lastly, Section 5 delivers the conclusions of this paper. Sharing Ratio Calculation The sharing ratio between LTE and NR is defined and managed by a new system unit called the Common Resource Manager (CRM). It has the responsibility to compute the sharing ratio and update it according to traffic demands. In order to do so, the CRM continuously gathers information from both the LTE and NR sites. The CRM component is composed of 3 objects: CRM, situated in the base station; LTE CRM, situated in the LTE system unit; and NR CRM, situated in the NR system unit. Figure 5 presents the main responsibilities of the CRM. Initially, it starts by gathering information from the LTE and NR sites and, based on the information it receives, it defines the resources to be shared. It then allocates the shared resources to both technologies. According to the traffic conditions and demands at a specific moment, it evaluates the optimal sharing ratio to be selected and finally updates it for LTE and NR.
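The CRM's responsibilities described above amount to a control loop. A minimal sketch is given below; the class, method, and load-field names are invented for illustration, since the paper describes the loop only at the level of Figure 5:

```python
from dataclasses import dataclass

@dataclass
class CellLoad:
    """Illustrative container for the load indications the CRM gathers."""
    avg_lte_gbr_load: float  # average LTE DSS GBR load, 0..1
    nr_pdcch_load: float     # NR PDCCH load, 0..1

class CommonResourceManager:
    """Sketch of the CRM loop: gather loads, pick a sharing ratio, update."""

    def __init__(self, sharing_ratio: float = 0.2):
        self.sharing_ratio = sharing_ratio  # fraction of resources for NR

    def step(self, load: CellLoad) -> float:
        # 1. gather information from the LTE and NR sites (passed in here)
        # 2. evaluate the optimal sharing ratio for the current traffic
        self.sharing_ratio = self.evaluate(load)
        # 3. push the updated ratio to the LTE and NR schedulers (not shown)
        return self.sharing_ratio

    def evaluate(self, load: CellLoad) -> float:
        raise NotImplementedError  # decision rules sketched in the next listing
```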
For downlink, depending on the traffic demands at a specific moment, the CRM receives information concerning load indication and takes a decision on the DL sharing ratio that needs to be adopted. To calculate the DL load, the weighted load (based on PRB occupancy), the average LTE DSS Guaranteed Bit Rate (GBR) load, the NR DSS GBR load, and the NR PDCCH load need to be determined [35]. The algorithm for the DL sharing ratio calculation is presented in Figure 6. The first step consists of the verification, by the CRM entity, of the average LTE GBR load as well as the NR PDCCH load against a defined threshold, so that a decision can be taken regarding the resources to be assigned. If the average LTE GBR load is higher than 70% and the NR PDCCH load is lower than 70%, then the sharing ratio for NR will decrease. Else, if the average LTE GBR load is equal to or lower than 70% and the NR PDCCH load is higher than 70%, then the sharing ratio for NR will increase. Lastly, if both the average LTE GBR load and the NR PDCCH load are higher than 70%, then one of the two following conditions is applied: • If the LTE GBR resource delta (n; n − 1) > 0, the sharing ratio for NR will be reduced; • If the LTE GBR resource delta (n; n − 1) ≤ 0 and the NR PDCCH resource delta (n; n − 1) > 0, the sharing ratio for NR will increase. The second step of the algorithm is based on the load information received from step 1. The CRM then calculates the LTE weighted load and the NR weighted load, from which the LTE and NR total loads are determined. Finally, in step 3, the number of LTE and NR subframes is calculated, taking into account the LTE and NR total loads from step 2. The resulting number of subframes matches a specific sharing ratio value.
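Step 1 of the downlink algorithm just described is a small decision table. A hedged Python sketch follows; the threshold values and branch structure are taken from the text, while the function signature and the handling of cases the text does not list are assumptions:

```python
THRESHOLD = 0.70  # the 70% load threshold stated in the text

def dl_sharing_step1(avg_lte_gbr_load: float, nr_pdcch_load: float,
                     lte_gbr_delta: float, nr_pdcch_delta: float) -> int:
    """Direction for the NR sharing ratio: +1 increase, -1 decrease, 0 hold.
    Deltas are the resource changes between iterations n and n-1."""
    if avg_lte_gbr_load > THRESHOLD and nr_pdcch_load < THRESHOLD:
        return -1  # LTE is loaded, NR control channel is not
    if avg_lte_gbr_load <= THRESHOLD and nr_pdcch_load > THRESHOLD:
        return +1  # NR control channel is loaded
    if avg_lte_gbr_load > THRESHOLD and nr_pdcch_load > THRESHOLD:
        if lte_gbr_delta > 0:
            return -1
        if nr_pdcch_delta > 0:  # reached only when lte_gbr_delta <= 0
            return +1
    return 0  # cases not covered by the text: hold the current ratio
```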
For uplink, a similar procedure to that for downlink is followed. Depending on the traffic demands at a specific moment, the CRM receives information concerning load indication and takes a decision on the UL sharing ratio that needs to be adopted. To calculate the UL load, the weighted load (based on PRB occupancy) and the average LTE DSS GBR load need to be determined. Figure 7 presents the algorithm for the UL sharing ratio calculation. In step 1, the average LTE GBR load is verified by the CRM, so that a decision can be taken regarding the assignment of resources. If the average LTE GBR load is higher than 70%, then the sharing ratio for NR will be decreased. The second step of the algorithm is based on the load information received from step 1. The CRM then calculates the LTE weighted load and the NR weighted load, from which the LTE and NR total loads are determined. Finally, in step 3, the number of LTE and NR subframes is calculated, taking into account the LTE and NR total loads from step 2. The resulting number of subframes matches a specific sharing ratio value. Equipment and Methods This section presents the parameters adopted for our work, as well as the scenarios tested. We considered a MIMO system, composed of one base station for LTE, one for NR, and one mobile station consisting of a Qualcomm chipset prototype that is frequently used in commercial Samsung devices. We used both 64QAM and 256QAM modulation for the measurements. A bandwidth of 15 MHz was selected. The Absolute Radio-Frequency Channel Numbers (ARFCN) for NR were 175,800 for downlink and 166,800 for uplink. The NR-ARFCN is a code that refers to the carrier frequency to be used for both transmission directions of the radio channel and is defined in the 3GPP TS 38.104 Release 16 specification [36]. The NR-ARFCN can be converted to frequency, resulting in 175,800 = 879 MHz for downlink and 166,800 = 834 MHz for uplink. Frequency Division Duplex (FDD) was selected for all cases. We performed throughput measurements using physical and static equipment from the Nokia Networks R&D laboratory, considering a Signal-to-Interference-plus-Noise Ratio (SINR) higher than 25 dB and a Reference Signal Received Power (RSRP) higher than −70 dBm with Line of Sight (LoS) and without the presence of fading. These are standard values used at the laboratory for testing the performance of new technologies. They are considered very good radio conditions, and the reason for choosing them is to create almost ideal radio conditions in order to verify and confirm the aptness of DSS technology and its peak performance using physical measurements, as it is a technology under development and testing. We considered both NSA and SA architectures. For NSA, we measured using sharing ratio values between 20 and 70%. For SA, we measured using sharing ratio values between 30 and 70%. Table 4 below summarizes the scenario parameters adopted for the work.
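The NR-ARFCN-to-frequency mapping follows the global frequency raster of 3GPP TS 38.104 (F = F_offs + ΔF_global × (N − N_offs)); for carriers below 3000 MHz the raster is 5 kHz with zero offsets, which reproduces the two values above. A short sketch:

```python
def nr_arfcn_to_mhz(arfcn: int) -> float:
    """NR-ARFCN to carrier frequency in MHz, per the 3GPP TS 38.104
    global frequency raster: F = F_offs + dF_global * (N - N_offs)."""
    if arfcn < 600000:        # 0 - 3000 MHz range: 5 kHz raster, zero offsets
        return 0.005 * arfcn
    if arfcn < 2016667:       # 3000 - 24250 MHz range: 15 kHz raster
        return 3000.0 + 0.015 * (arfcn - 600000)
    return 24250.08 + 0.060 * (arfcn - 2016667)  # above 24250 MHz: 60 kHz

print(nr_arfcn_to_mhz(175800))  # 879.0 MHz, the downlink carrier used here
print(nr_arfcn_to_mhz(166800))  # 834.0 MHz, the uplink carrier used here
```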
Each network architecture type has different possible variations. The NSA architecture is based on the LTE core network and uses LTE-based interfaces. For this type of architecture, the gNodeB needs to support these interfaces and acts as a secondary node, while the eNodeB acts as a primary or master node. There are different options to deploy an NSA architecture: options 3, 3a, 3x, 4, 4a, 7, and 7a. The option used for our measurements is NSA option 3x, where the control plane is routed through the master eNodeB and the user plane is routed directly through the secondary gNodeB. The eNodeB also communicates directly with the gNodeB, and both communicate directly with the Evolved Packet Core (EPC). The SA architecture has 2 options: options 2 and 5. Option 2 is the one adopted for our work and, as can be seen in Figure 8b, consists of a Next Generation Core (NGC) and a gNodeB that communicates directly with it, without needing any support from LTE structures. For both network architectures studied, we used two radio modules, each with one attenuator attached, as the measurements were performed in a laboratory in close proximity to the mobile user. We used either 2 or 4 antennas, depending on the case studied.
Results and Discussion This section presents the results obtained from our measurements. We divide it into two subsections: downlink and uplink results. In each subsection, we present results for both NSA and SA architectures. We considered sharing ratios ranging from 20 to 70% for the NSA architecture and from 30 to 70% for the SA architecture, meaning that 5G NR occupies between 20 and 70%, and between 30 and 70%, of the available resources, while LTE occupies between 80 and 30%, and between 70 and 30%, for the NSA and SA architectures, respectively. Downlink Figure 9 presents the throughput results using the DSS feature for the first four cases of Table 4, comprising the NSA architecture. The differences between the cases consist of the modulation type, which is either 64QAM or 256QAM modulation, and the MIMO type, which is either 2 × 2 or 4 × 4 MIMO. Each case depicts five different curves. The green (NR only) and purple (LTE only) curves represent the throughput values achieved without the use of DSS technology. The yellow curve (DSS LTE + NR) is the most important one, as it presents the total throughput obtained with DSS. The remaining blue (DSS LTE) and red (DSS NR) curves represent the individual throughputs for each technology while using DSS. Notice that the DSS LTE + NR throughput equals the sum of the individual DSS LTE and DSS NR throughputs. It can be observed that, for all cases, when we increased the sharing ratio, the values for the DSS NR throughput also increased while the DSS LTE throughput values decreased. This was expected behavior, as the higher the sharing ratio, the more resources are available for NR transmission and the fewer for LTE transmission. It can also be observed that from a sharing ratio of approximately 57%, meaning that NR occupies 57% of the resources while LTE occupies 43%, the individual throughput for NR surpassed the LTE throughput. In addition, it is visible that the overall throughput using DSS was slightly lower than the LTE-only or NR-only throughputs. This is understandable, as the available frame resources are shared between both technologies. Comparing case 1 and case 2, where the difference is the modulation type, which increases from 64QAM to 256QAM modulation, it can be concluded that the major difference is in the maximum values for throughput. For case 1, depending on the sharing ratio adopted, between 90 and 100 Mbps were obtained, while for case 2, the values for throughput were approximately 120-135 Mbps. An increase of 35% was observed. A similar comparison can be conducted for cases 3 and 4. For case 3, the maximum DSS throughput values were 175-200 Mbps, while for case 4, the values varied between 240 and 260 Mbps, depending on the sharing ratio. For these cases, an increase of 37% in throughput was observed for a sharing ratio of 20%, while for a 70% sharing ratio, the throughput increase was 30% (see Table 5).
Regarding case 1 and case 3, where the difference consists of the number of transmitting and receiving antennas, 2 × 2 MIMO and 4 × 4 MIMO, respectively, an increase in all throughput values of approximately 50% could be observed, along with an increase in complexity. Figure 10 depicts the DL throughput results with the DSS feature for the first four cases from Table 4, using the SA architecture, in contrast to the results from Figure 9, which made use of the NSA architecture. The main difference between the two types of architecture is that NSA is an intermediary solution that is based on the LTE network, while the SA architecture does not depend in any way on the LTE network, using a Next-Generation Core (NGC) along with NR protocols. Moreover, the SA architecture leads to improved efficiency with less complexity.
Comparing case 1 and case 2, we can observe that the maximum DSS throughput values varied between 80 and 90 Mbps, and between 110 and 120 Mbps, respectively. The increase from one case to the other was approximately 36%. For cases 3 and 4, the DSS values varied between 160 and 180 Mbps, and between 200 and 240 Mbps, respectively. For these, an increase of approximately 29% was observed. Regarding case 2 and case 4, the increase in the DSS throughput was approximately 87% for both sharing ratios of 30% and 70%. In addition, it can be remarked that for all cases, the values for the NR-only throughput for the SA architecture were smaller than those of the NSA architecture, with a difference of around 15 Mbps for case 1, 20 Mbps for cases 2 and 3, and 40 Mbps for case 4. The reason for this is that in the SA architecture, the number of broadcast signals is higher than in the NSA architecture. An example is the presence of System Information Block (SIB) signals, as well as paging with information regarding the cell. Table 6 provides the percentage loss of throughput that occurs when using DSS instead of an NR-only system. It can be observed that there was a loss in throughput values between a minimum of 10% and a maximum of 26%, depending on the sharing ratio and case adopted. The existence of a decrease in throughput was expected, as with DSS, the available resources are shared with LTE, and hence, there are fewer resources available for NR compared to a system that is based only on NR.
However, from the results obtained for the NSA architecture, having a loss between 14 and 25% is not a considerable decrease, taking into account the fact that there is no need for new dedicated spectrum to be allocated for NR, as it shares the bandwidth with LTE technology. For case 1, the average loss was 19.8% for NSA and 17.2% for SA. For case 2, the loss was 17.5% for NSA and 15.6% for SA. For case 3, we had a 21.2% loss for NSA and 19% for SA. Lastly, for case 4, the loss was 20.6% for NSA and 19.2% for SA. We can observe that the decrease in the DSS throughput was higher with the increase in the sharing ratio. This is due to the fact that, as the sharing ratio increases (meaning that a higher number of the available resources are used for NR transmission and fewer for LTE), more synchronization signals, as well as overhead signals, are transmitted in the slots dedicated to NR. Figure 11 depicts the UL throughput results for case 5 from Table 4, using both NSA and SA architectures. Regarding the NSA architecture, it can be observed that the maximum throughput values for LTE only and NR only (not considering the DSS feature) were 40 and 22 Mbps, respectively. Moreover, if we compare the results of the DSS LTE + NR and NR-only throughputs from both architecture types, we can conclude that they were similar. Therefore, there is no difference between the SA and NSA architectures, as there are no additional channels that need to be transmitted and hence occupy extra resources.
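The percentage-loss figures in Tables 6 and 7 follow from comparing the total DSS throughput against an NR-only baseline. A small sketch of that computation is shown below; the loss definition is the natural one implied by the text, and the numbers in the example are illustrative, not taken from the paper's tables:

```python
def dss_loss_pct(nr_only_mbps: float, dss_total_mbps: float) -> float:
    """Throughput loss of DSS relative to a dedicated NR-only carrier, in %."""
    return (nr_only_mbps - dss_total_mbps) / nr_only_mbps * 100.0

# Illustrative values only: a 100 Mbps NR-only baseline and 80 Mbps with DSS
print(f"{dss_loss_pct(100.0, 80.0):.1f}%")  # -> 20.0%
```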
Table 7 presents the percentage loss of throughput when using the DSS technology. We can observe that for both architecture types, a maximum loss of 25% occurred for a sharing ratio of 70%, meaning that NR occupied 70% of the available resources while LTE occupied only 30%. Therefore, we can conclude that using the DSS technology comes with a compromise of a maximum 25% loss in throughput. However, the decrease observed is not that considerable, taking into account that, instead of needing an additional 15 MHz of bandwidth, the system shares the existing 15 MHz between both technologies. Subsequently, from the operator's point of view, DSS brings the advantage of cost reduction and spectrum efficiency, while being able to present the "5G icon game" and provide coverage strategies. Conclusions In this paper, we analyzed the impact and the advantages of using the DSS technology in an LTE-NR communication system. We proposed different schemes for resource allocation, according to the selected sharing ratio. The results obtained provided insight into the behavior of the system with DSS and showed that it is a technology that brings advantages from the operator's point of view, mainly regarding spectrum efficiency and cost reduction. We performed a comparison of the spectrum usage for LTE and NR with and without the DSS feature. In addition, we measured the throughput obtained for both LTE and NR, using the proposed allocation schemes for each sharing ratio. The results clearly demonstrated that even if a maximum loss of 25% in throughput is observed, there is a major advantage in using the DSS technology, as there is a cost reduction for the mobile operator alongside an optimization of spectrum usage: the MNO can re-use the already existing 15 MHz bandwidth of LTE and does not need to buy any new dedicated supplementary 15 MHz for 5G services. In conclusion, the deployment of DSS technology is useful, especially for the initial deployment of 5G NR, as the operator is able to build strategies while presenting the initial 5G picture to the consumer, through LTE already-in-use spectrum.
2021-06-28T05:09:45.566Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "0a61cc0813fe8fd534b4fd5117d809128f463916", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/21/12/4215/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0a61cc0813fe8fd534b4fd5117d809128f463916", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
135004381
pes2o/s2orc
v3-fos-license
Distribution of Mercury in Flint Creek Watershed: Implications for Mercury Bioaccumulation Water, soil/sediment, and 36 fish samples were collected at three major sites along Flint Creek in 2015-2016 and analyzed for total organic carbon (TOC), dissolved organic carbon (DOC), total mercury (tHg), and other water quality indicators. This study was a follow-up to a 2012 study that revealed elevated tHg levels in fish, resulting in a public health advisory. This study revealed tHg concentrations in water below the detection limit (0.0002 ppm), while tHg in soil/sediment ranged from <0.0133 ppm to 0.0682 ppm dry weight. No temporal trends existed, but tHg tended to increase with TOC. Mercury levels in sediment were below the threshold effects level suggested as a preliminary screening level by the National Oceanic and Atmospheric Administration (acute = 1.4, chronic = 0.77 ppb). In summary, tHg levels were low, posing little risk to drinking water. Soil/sediment levels were generally higher and could pose a risk to aquatic species. Introduction: Mercury (Hg) is a naturally occurring metal found primarily in cinnabar (mercury sulfide) that is released through the weathering of rock and (or) volcanic activity [11]. However, the main source of Hg in the environment is human activity, through coal-combustion electrical power generation and industrial waste disposal [11,14]; coal-burning power plants in Alabama are a probable local source. Environmental concentrations can be influenced by proximity to point sources such as sewage treatment plants and industrial discharges, and by geographic and physiographic factors that affect vulnerability to atmospheric deposition. Once Hg is released to the environment, it can be converted to the biologically toxic form methylmercury (MeHg) by microorganisms found in soil and in the aquatic environment [11]. Giuseppe et al. (2017) [7] illustrate the bioavailability and toxic effects of mercury and its compounds in both fish and humans. [Figure: Bioavailability and toxic effects of mercury and its compounds] MeHg is a potent neurotoxin that affects the central nervous system, causing neurological damage, mental retardation, blindness, deafness, kidney malfunction, and, in some cases, death [10]. In recent years, there has been increasing recognition that methylmercury affects fish and wildlife health, both in acutely polluted ecosystems and in ecosystems with modest methylmercury levels. A study by U.S. researcher Peter Frederick suggests methylmercury may increase male-male pairing and other behavioral disorders in birds. Methylation of Hg is of concern because MeHg is absorbed easily into the food chain [7]. MeHg readily crosses biological membranes and can accumulate to harmful concentrations in the exposed organism and biomagnify up the food chain [10]. This biomagnification can cause high levels of Hg in top predator fishes and have a detrimental effect on humans and fish-eating wildlife [8,10]. Materials and Methods Within the Flint Creek Watershed, a total of 3 major sites were sampled for a combination of thirty-six fish (i.e., catfish, largemouth bass, and bream), surface water, soil and bed sediments (Table 1, Fig.
2), and other factors thought to influence total mercury (tHg). The samples were collected from May 2015, during moderate- to high-flow conditions, through August 2016, during low-flow conditions. Sites were selected to represent watersheds dominated by agricultural, highly forested, and urban land uses throughout much of the study area and several tributaries draining into parts of the Tennessee River Basin. Samples were collected at times of low water flow for streams such as Flint Creek to allow for sampling of more undisturbed sediments. Besides water samples, fish from the watershed were caught and analyzed because fish are good bio-indicators; they are at the top of the aquatic food web and are consumed by humans, making them important for attempts to assess contamination. Composite surface water samples were collected randomly from the three predetermined sites along Flint Creek. The water samples were analyzed for total organic carbon (TOC), dissolved organic carbon (DOC), total mercury (tHg), and other water quality parameters. This study is a follow-up to a fish sampling study conducted in the same reach in September 2012-2013 by the Alabama Department of Environmental Management (ADEM) at site #3 (US31) [2]. Those results showed mercury levels in fish slightly elevated over regulatory standards and over levels in fish from disturbed and undisturbed streams nationwide, resulting in a public health advisory for largemouth bass. This sampling was conducted to assess the presence or absence of mercury in fish (12 each of largemouth bass, bream, and catfish) and in the water system. A total of thirty-six fish [three species; 12 each of largemouth bass (Micropterus salmoides), bream, and catfish] and 102 samples (36 for soil/sediment; 36 for surface water; and 30 samples for TOC/DOC) were collected directly into cleaned, pretested fluoropolymer bottles using sample handling techniques designed for collection of mercury at trace levels [17]. The samples were filtered through a 0.45-µm capsule filter prior to preservation. The samples were preserved by adding 5 ml of pretested 12M hydrochloric acid (HCl) solution. Furthermore, water quality information was also measured at each sampling point, including the following physicochemical parameters: water temperature and pH. These parameters were measured in the field because the mercury methylation process is driven by these additional constituents, and it is important to know their concentrations to fully assess the significance of mercury concentrations in soil/sediments and water. Surface water samples were analyzed for tHg concentration using cold-vapor atomic-fluorescence spectroscopy (CVAFS) [11] at the ETC Lab in Memphis, Tennessee. The method quantitation limit (MQL) for tHg concentrations in water was 0.0002 mg/L [12]. Soil/bed-sediment samples were analyzed for concentrations of tHg using the CVAFS technique at the ETC Lab.
These samples were predigested with nitric and sulfuric acids in a sealed Teflon pressure vessel at 125°C for a minimum of 2 hours. The cooled sample was then diluted with a 5-percent bromochloride solution and oxidized at 50°C for a minimum of 12 hours [9]. The MQL for tHg in both soil and bed sediments was 0.0133 mg/kg [12]. Fish-fillet samples were placed into acid-washed borosilicate glass jars and freeze-dried. The dry product was homogenized and digested in a sealed Teflon pressure vessel with microwave heating and the addition of nitric and hydrochloric acid, followed by hydrogen peroxide. Cold-vapor atomic absorption spectrophotometry with flow-injection sample introduction and stannous chloride reduction was used to determine the concentration of tHg in the sample. A full description of laboratory procedures can be found in Brumbaugh et al., 2001.

The analytical data were validated using standard quality control measures as required by the applied analytical method. Quality assurance, instrumentation maintenance, and calibration were performed in accordance with guidelines established by USEPA, NELAP (National Environmental Laboratory Accreditation Program), and USACE (U.S. Army Corps of Engineers).

Data Analysis: Prior to data analysis, a Pearson correlation was run on the environmental data to reduce multicollinearity within the data set (so that coefficient estimates would not change erratically). Of each correlated pair, the variable with fewer data points, or the one that was less normally distributed, was removed from the data set. The remaining variables were analyzed using statistical and graphical approaches. Fish-fillet data used for statistical (and graphical) analyses were limited to largemouth bass, bream, and catfish. Limiting certain analyses to the largest single-species data set avoided the problem of interspecies differences in metabolism and in Hg accumulation rates [8,3]. For sites having specimens ranging in size, the laboratory separated specimens into three separate batches prior to analysis. In these cases, only the sample with the larger mean length was retained for inclusion in this study. Hg concentrations in largemouth bass fillets were normalized to mean sample total length prior to statistical and graphical analyses, because total length is related to the age of fish, which has been shown in other studies [15] to influence Hg concentrations. The concentration of tHg in bed sediment was normalized to loss on ignition (LOI); loss on ignition represents the mass of moisture and volatile material present in a sample. A sketch of these screening and normalization steps follows below.

Distribution and Concentrations of Total Mercury

Fish tissue, water, and soil/bed-sediment samples were obtained from streams throughout Flint Creek. The US31 site was the northernmost site; Red Bank was the southernmost site, with Vaughn Bridge in the middle.

Patterns of Distribution

The distribution of Hg was somewhat variable throughout the watershed (Fig. 4 and Table 4). Fish-tissue concentrations that exceeded human health and wildlife criteria were found in the urbanized north and the forested south (Table 4). A north-south plot of tHg concentrations in largemouth bass (Fig. 4) indicates a pattern of higher length-normalized tHg concentrations in the northern section, an area of higher population density and industrialization, and in the southern section, an area of relatively low population density. The lowest concentrations tended to be at sites in the middle section of the study area (VB).
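The screening and normalization steps described in the Data Analysis section can be illustrated with a short sketch. This is a hypothetical illustration, not the study's code: the variable names, the |r| > 0.7 screening cutoff, and all data values are assumptions made purely for demonstration.

```python
import numpy as np
import pandas as pd

# Hypothetical environmental data; column names and values are assumed.
env = pd.DataFrame({
    "pH":   [6.1, 6.5, 7.0, 5.9, 6.8, 6.3],
    "TOC":  [4.95, 5.6, 7.91, 5.1, 6.9, 6.2],
    "DOC":  [4.71, 5.4, 7.67, 4.9, 6.7, 6.0],
    "temp": [18.0, 19.5, 22.0, 18.5, 21.0, 20.0],
})

# Multicollinearity screen: for each pair with |Pearson r| above an
# assumed cutoff of 0.7, drop one member of the pair. (The study kept
# the variable with more data points or the more normal distribution;
# here we simply drop the second member.)
corr = env.corr(method="pearson")
to_drop = set()
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > 0.7 and b not in to_drop:
            to_drop.add(b)
env_reduced = env.drop(columns=sorted(to_drop))
print("Dropped due to collinearity:", sorted(to_drop))

# Length normalization of largemouth-bass fillet tHg: one simple
# convention is to divide by total length and rescale by the mean length.
length_mm = np.array([310.0, 345.0, 365.0, 402.0])   # hypothetical
thg_ugg   = np.array([0.021, 0.034, 0.048, 0.062])   # hypothetical
thg_norm = thg_ugg / length_mm * length_mm.mean()
print("Length-normalized tHg:", np.round(thg_norm, 4))
```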
Concentrations: Surface water tHg concentrations were all below the detection limit (0.0002 ppm). Soil/bed-sediment tHg concentrations ranged from below the detection limit (0.0133 ppm) to 0.0682 ppm. The bed-sediment quality-control split samples showed better results than the surface water samples. The total organic carbon (TOC) samples had concentrations of 4.95 and 7.91 mg/L, and the dissolved organic carbon (DOC) samples had concentrations of 4.71 and 7.67 mg/L. Fish-tissue tHg concentrations ranged from 0.0125 to 0.0876 µg/g wet weight. The internal quality-control tests performed at the ETC Lab for fish tissue indicated good accuracy and precision except when sample concentrations were very low [3]. Method blanks were near or below the instrument detection limit for eight of the nine test blanks, resulting in a higher MQL of 0.0133 µg/g wet weight [3].

Concentrations of tHg were detected in 33 out of 36 soil and bed-sediment samples (Table 4); however, no detections were observed in the surface water, which may be a result of the detection limit. Additionally, sunlight can break down methylmercury to Hg(II) or Hg(0), which can leave the aquatic environment and reenter the atmosphere as a gas.

Factors Controlling Distribution of tHg

The larger concentrations of mercury in soil/sediments may be due to low pH (5.78-7.07, Table 7) and to high DOC (4.71-7.67 mg/L, Table 8) and TOC (4.95-7.91 mg/L, Table 6). DOC and pH are two factors that affect methylation because they have a strong effect on the ultimate fate of mercury in an ecosystem. Organic matter can stimulate microbial populations, reduce oxygen levels, and therefore increase biomethylation. Biomethylation increases in warmer temperatures, such as the summer conditions of this study (18-22°C, Table 3), when biological productivity is high, and decreases during the winter. In general, the form of mercury in the environment varies with the season.

Studies have also shown that for the same species of fish taken from the same region, increasing the acidity of the water (decreasing pH) and/or increasing the TOC/DOC content (Table 8) generally results in higher mercury levels in fish (bioaccumulation), an indicator of greater net methylation. Higher acidity and DOC levels in surface water enhance the mobility of mercury in the environment, thus making it more likely to enter the food chain.

Human-Health and Wildlife Criteria

Fish-fillet tHg data were compared to human-health and wildlife criteria to assess potential effects in the Flint Creek watershed. Concentrations of tHg in fish fillets did not exceed the USEPA human-health criterion of 0.3 µg/g wet weight [17] in any of the samples from the Flint Creek Watershed (Table 9).
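The criterion comparison just described amounts to a simple threshold check. The following minimal sketch assumes illustrative fillet concentrations spanning the reported range; only the 0.3 µg/g fish criterion and the chronic water value of 0.77 ppb are taken from the text.

```python
import numpy as np

# Hypothetical fillet tHg values in ug/g wet weight (the study's range
# was 0.0125-0.0876 ug/g, all below the criterion).
fillet_thg = np.array([0.0125, 0.031, 0.055, 0.0876])
EPA_FISH_CRITERION = 0.3  # ug/g wet weight, USEPA human-health criterion

n_exceed = int((fillet_thg > EPA_FISH_CRITERION).sum())
print(f"{n_exceed} of {fillet_thg.size} fillets exceed {EPA_FISH_CRITERION} ug/g")

# Water: every sample was a non-detect at 0.0002 ppm (0.2 ppb), which is
# itself below the chronic value cited in this study (0.77 ppb), so each
# result can be reported as "< criterion" without any substitution rule.
WATER_DL_PPB = 0.2
CHRONIC_PPB = 0.77
assert WATER_DL_PPB < CHRONIC_PPB
print(f"water non-detects (< {WATER_DL_PPB} ppb) are below {CHRONIC_PPB} ppb")
```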
Discussion

Though the study showed that tHg in fish at this watershed did not exceed human health criteria, a public health advisory is still in effect. It states that in Morgan County, people should not eat largemouth bass from Flint Creek downstream of the West Flint Creek confluence in the vicinity of U.S. 31, and that people should limit consumption to two meals per month of largemouth bass from Limestone County, at Wheeler Reservoir, and one mile upstream (US31) of the confluence with the Tennessee River [14]. (A meal consists of 6 ounces of cooked fish or 8 ounces of raw fish.) All three media (fish, water, and bed sediments) reflected an urbanization effect, but the relation for each was different. Because of the limited number of fish samples, additional sampling would be needed to draw more precise conclusions (Table 10).

In summary, mercury levels measured in water in Flint Creek were low and pose little risk of drinking water contamination, but mercury levels in soil/sediment were higher and could pose risk to aquatic species in the river reaches. Mercury levels may be slightly elevated where TOC and DOC are high as a result of deposition from waste incinerators, fossil fuel burning, and various industrial operations in the area, but sampling efforts have not detected any large concentrations of mercury. Thirty-six fish samples were used for the analysis, and the differences in tHg concentrations among the fish samples can be attributed to higher flows and to mercury associated with increased sediment and organic material in the water. The current fish advisories in Flint Creek are the result of testing in fall 2013; ADEM conducts testing on fish in the fall because chemicals are stored in fat, and fish have more fat in the fall [1].

Total mercury concentrations in water in Flint Creek and its side channels were below the detection limit (0.0002 ppm) during the sampling period. Therefore, mercury levels in water in Flint Creek were well below the state and federal standards for drinking water. Site #3 (US31) likely experienced high rates of methylation as a result of stagnant conditions, especially during the July sampling. Total mercury in soil/sediment across the sites ranged from < 0.0133 ppm at Red Bank to 0.0682 ppm dry weight, with the highest levels observed at site #3 (US31). No temporal trends existed, but mercury levels tended to increase with total organic carbon; a sketch of this association check follows below. Mercury levels in sediment were below the threshold effects level suggested as a preliminary screening level by the National Oceanic and Atmospheric Administration (NOAA) (acute = 1.4, chronic = 0.77 ppb).

Fig. 1. Bioavailability and toxic effects of mercury and its compounds. A: Oxidation in air and enzymatically in red blood cells and tissues; B: Biomethylation by sulfate-reducing bacteria. (Giuseppe et al., 2017)
Table 3. Relation between pH and temperature in Flint Creek in summer.
Table 6. Relationship between total organic carbon and total mercury in sediment.
Table 7. Water pH in Flint Creek during the winter season.
Table 8. Relationship between dissolved organic carbon and total mercury in sediment.
Table 10. Mercury standards and explanations for fish and drinking water (EPA standards and regulations).
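The reported tendency of sediment tHg to increase with TOC can be probed with a rank correlation, which is robust to how non-detects are handled. The sketch below is illustrative only: the data values and the MQL/2 substitution convention for non-detects are assumptions, not the study's documented procedure.

```python
import numpy as np
from scipy import stats

MQL = 0.0133  # ppm, sediment method quantitation limit from this study

# Hypothetical paired sediment observations; non-detects stored as NaN.
toc = np.array([4.95, 5.4, 6.1, 6.8, 7.3, 7.91])
thg = np.array([np.nan, 0.0150, 0.0212, 0.0389, 0.0455, 0.0682])

# One common (assumed) convention: substitute non-detects at MQL/2.
thg_filled = np.where(np.isnan(thg), MQL / 2, thg)

# A rank-based test is insensitive to the exact substitution value as
# long as non-detects remain the smallest observations.
rho, p = stats.spearmanr(toc, thg_filled)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```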
2018-10-08T04:27:56.809Z
2018-08-15T00:00:00.000
{ "year": 2018, "sha1": "da517b503da7b0dbd0de32f189235bbec7eaed6f", "oa_license": "CCBY", "oa_url": "https://www.preprints.org/manuscript/201808.0270/v1/download", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "da517b503da7b0dbd0de32f189235bbec7eaed6f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
13461152
pes2o/s2orc
v3-fos-license
Characterization of maximally random jammed sphere packings: Voronoi correlation functions

We characterize the structure of maximally random jammed (MRJ) sphere packings by computing the Minkowski functionals (volume, surface area, and integrated mean curvature) of their associated Voronoi cells. The probability distribution functions of these functionals of Voronoi cells in MRJ sphere packings are qualitatively similar to those of an equilibrium hard-sphere liquid and partly even to the uncorrelated Poisson point process, implying that such local statistics are relatively structurally insensitive. This is not surprising because the Minkowski functionals of a single Voronoi cell incorporate only local information and are insensitive to global structural information. To improve upon this, we introduce descriptors that incorporate nonlocal information via the correlation functions of the Minkowski functionals of two cells at a given distance as well as certain cell-cell probability density functions. We evaluate these higher-order functions for our MRJ packings as well as equilibrium hard spheres and the Poisson point process. It is shown that these Minkowski correlation and density functions contain visibly more information than the corresponding standard pair-correlation functions. We find strong anticorrelations in the Voronoi volumes for the hyperuniform MRJ packings, consistent with previous findings for other pair correlations [A. Donev et al., Phys. Rev. Lett. 95, 090604 (2005)], indicating that large-scale volume fluctuations are suppressed by accompanying large Voronoi cells with small cells, and vice versa.
In contrast to the aforementioned local Voronoi statistics, the correlation functions of the Voronoi cells qualitatively distinguish the structure of MRJ sphere packings (prototypical glasses) from that of not only the Poisson point process but also the correlated equilibrium hard-sphere liquids. Moreover, while we did not find any perfect icosahedra (the locally densest possible structure in which a central sphere contacts 12 neighbors) in the MRJ packings, a preliminary Voronoi topology analysis indicates the presence of strongly distorted icosahedra.

In order to characterize the properties of sphere packings, one may employ a geometric-structure approach in which configurations are considered independently of both their frequency of occurrence and the algorithm by which they are created [19]. For example, the simplest characteristic of a sphere packing is its packing fraction φ, i.e., the fraction of space occupied by the spheres. Other useful characteristics of the structure include its pair-correlation function [14,21-28], the pore-size distribution [4,29], and the structure factor [3,24,30-32]. It is also valuable to quantify the degree of ordering in a packing, especially for those that are jammed (defined more precisely below). To this end, a variety of scalar order metrics ψ have been suggested [19,33], in which ψ = 0 is defined as the most disordered state (i.e., the Poisson point process) and ψ = 1 is the most ordered state. Using the geometric-structure approach, one may therefore construct an "order map" in the φ-ψ plane [19], where the jammed packings form a subset in this map. The boundaries of the jammed region are optimal in some sense, including, for example, the densest packings (the face-centered-cubic crystal and its stacking variants with φ_max = π/√18 ≈ 0.74 [13]) and the least dense jammed packings (conjectured to be the "tunneled crystals" with φ_min = 2φ_max/3 [16]). Among the set of all isotropic and statistically homogeneous jammed sphere packings, the maximally random jammed (MRJ) state is that which minimizes some order metric ψ. This definition makes mathematically precise the familiar notion of random close packing (RCP) in that it can be unambiguously identified for a particular choice of order metric. A variety of sensible, positively correlated order metrics produce an MRJ state with the same packing fraction of about 0.64 [34,35], which agrees roughly with the commonly suggested packing fraction of RCP in three dimensions [19]. However, we stress that density alone is not sufficient to characterize a packing; in fact, packings with a large range of ψ may be observed at this packing fraction [27,34].

In order to study the MRJ state, a precise definition of jamming is needed. Therefore, rigorous hierarchical jamming categories have been defined [36,37]: A packing is locally jammed if no particle can move while the positions of the other particles are fixed; it is collectively jammed if no subset of particles can move without deforming the system boundary; and if, in addition, a deformation of the system boundary is not possible without increasing its volume, the packing is strictly jammed, i.e., it is stable against both uniform compression and shear deformations [38].
Strictly jammed MRJ sphere packings often contain a small fraction of rattlers (unjammed particles), but the remainder of the packing, i.e., the mechanically rigid backbone, is strictly jammed [39]. Determining the contact network of a packing is a subtle problem requiring high numerical fidelity. The Torquato-Jiao (TJ) sphere-packing algorithm [25] meets this challenge by efficiently producing highly disordered, strictly jammed packings with unsurpassed numerical fidelity, as well as ordered packings [25,40]. The algorithm achieves this by solving a sequence of linear programs that iteratively densify a collection of spheres in a deformable periodic cell subject to locally linearized nonoverlap constraints. The resulting packings are, by definition, strictly jammed, and they are with high probability exactly isostatic (meaning that they possess the minimum number of contacts required for jamming) [39,41]. The TJ packing protocol is intrinsically capable of producing MRJ states with very high fidelity [25,39]. The MRJ state can be regarded as a prototypical glass because it is maximally disordered (according to a variety of order metrics) while having infinite elastic moduli [19]. Atkinson et al. [39] recently carried out a detailed characterization of the rattler population in these MRJ sphere packings. They found a rattler fraction of 1.5% (substantially lower than for other packing protocols, such as the well-known Lubachevsky-Stillinger packing algorithm [42]) and found that the rattlers were highly spatially correlated, implying that they have a significant influence on the structure of the packing [29]. Moreover, as in previous studies [41], it was shown [39] that the backbone of the MRJ state is isostatic. We include rattlers in our analysis unless stated otherwise.

MRJ packings have been characterized using a variety of statistical descriptors, including the radial pair-correlation function g_2(r) (ρ² g_2 is the probability density for finding two sphere centers separated by a radial distance r, where ρ is the number density, i.e., the number of particles per unit volume) [25], the bond-orientational order metric Q_6 and the translational order metric T* [34,39], the cumulative pore-size distribution F(δ) [29], and the statistics of rattlers [39]. In addition, MRJ sphere packings exhibit disordered hyperuniformity [29], meaning that they are locally disordered but possess a hidden order on large length scales such that infinite-wavelength density fluctuations of MRJ packings vanish, i.e., the structure factor vanishes at the origin: lim_{k→0} S(k) = 0 [43,44]. Disordered hyperuniformity can be seen as an "inverted critical phenomenon" with a direct correlation function c(r) that is long ranged [43,45].

In this paper, we characterize the MRJ sphere packings generated in Ref. [39] using Voronoi statistics, including certain types of correlation functions. We compare these computations to corresponding calculations for both a Poisson distribution of points and equilibrium hard-sphere liquids. In the second paper of this series, we will investigate density fluctuations, the pore-size distribution, and two-point probability functions of MRJ packings.

Many studies of disordered sphere packings have been devoted to computing the volume distribution of the Voronoi cells [e.g., 15, 17, 20, 46-52]; see Fig. 1 for an MRJ sphere packing and its Voronoi diagram [53]. However, such statistics are incomplete in that they only quantify local structural information.
For example, with appropriately rescaled variables, we will show that the distributions of the Voronoi volumes, surface areas, and integrated mean curvatures for the MRJ sphere packings are qualitatively similar to the distributions for an equilibrium hard-sphere liquid and partly even for the spatially uncorrelated Poisson point process.

TABLE I. Mean ⟨W_µ⟩, standard deviation σ_{W_µ}, and correlation coefficients ρ_{µ,ν} of the Minkowski functionals W_µ of single Voronoi cells in the Poisson point process, in a system of hard spheres in equilibrium at a packing fraction φ = 0.48, and in the MRJ state. The unit of length is λ = 1/ρ^{1/3}, i.e., the number density ρ is set to unity.

To quantify nonlocal structural information, we formulate and compute correlation functions of the volumes of Voronoi cells at a given distance, as well as cell-cell probability density functions of finding a Voronoi cell of a given size at a given distance from a sphere whose Voronoi cell has another given size. Because the volume is only one of a large class of versatile shape measures, namely, the Minkowski functionals [54-57], we devise and compute the correlation functions of all of the Minkowski functionals [58]. Besides characterizing MRJ packings in this way, we also carry out analogous calculations for the Poisson point process and the equilibrium hard-sphere liquid for purposes of comparison. We show that these Minkowski correlation functions contain visibly more information than the corresponding standard pair-correlation functions, even in the case of the Poisson point process.

In Sec. II, we analyze the distributions of the Minkowski functionals of the single Voronoi cells for the Poisson point process, equilibrium hard-sphere liquid configurations, and MRJ packings. In Sec. III, we define the aforementioned two different types of correlation functions. In Sec. IV, we determine the volume-volume correlation function numerically for the three-dimensional Poisson point process, equilibrium hard-sphere systems, and the MRJ state; we also calculate the correlation functions for the surface area and the integrated mean curvature. For a further investigation of the nonlocal structure features, we calculate in Sec. V the cell-cell probability density functions mentioned above. In Sec. VI, we make concluding remarks.

II. MINKOWSKI FUNCTIONAL DISTRIBUTIONS OF A SINGLE VORONOI CELL

While there are many detailed studies of the volume distribution in disordered sphere packings [e.g., 15, 17, 20, 46-52], here we analyze in a logarithmic plot the volume distributions of true MRJ packings as described above and extend the analysis to all three (nontrivial) Minkowski functionals: the volume, the surface area, and the integrated mean curvature. They are robust and versatile shape descriptors that are widely used in statistical physics [55-57,59] and in pattern analysis [60-62]. We use voro++ [63,64] to construct the Voronoi diagrams of Poisson point patterns (about 1000 patterns, each with 2000 points), of equilibrium hard-sphere packings [3,4] at a packing fraction φ = 0.48, which is slightly below the freezing transition (100 packings, each with 10000 spheres), and of MRJ sphere packings produced by the TJ algorithm [25,39] (about 1000 packings, each with 2000 spheres). The program karambola [65,66] then computes the Minkowski functionals of each cell. We first determine the distributions of the three Minkowski functionals (W_0, volume; W_1, surface area; and W_2, integrated mean curvature) of the single Voronoi cells.
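Given per-cell values of W_0, W_1, and W_2 (e.g., parsed from karambola output), the kind of summary statistics reported in Table I reduce to sample means, standard deviations, and a correlation matrix. The sketch below uses synthetic inputs that are purely illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-cell Minkowski functionals in units of
# lambda = rho**(-1/3), so the mean volume <W0> is close to 1.
W0 = rng.gamma(22.0, 1 / 22.0, n)                     # volume
W1 = 5.3 * W0 ** (2 / 3) + rng.normal(0, 0.05, n)     # surface area
W2 = 8.4 * W0 ** (1 / 3) + rng.normal(0, 0.05, n)     # integrated mean curvature

W = np.vstack([W0, W1, W2])
print("means:", W.mean(axis=1))
print("stds: ", W.std(axis=1, ddof=1))
# np.corrcoef on the rows gives the full matrix of rho_{mu,nu}.
print("correlation matrix rho_{mu,nu}:\n", np.corrcoef(W))
```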
Table I provides the mean, the standard deviation, and the correlation coefficients

ρ_{µ,ν} = (⟨W_µ W_ν⟩ − ⟨W_µ⟩⟨W_ν⟩) / (σ_{W_µ} σ_{W_ν})

of the three different Minkowski functionals. As a unit of length, we use λ = 1/ρ^{1/3}, with ρ the number density; i.e., we compare the Poisson point process, the equilibrium hard-sphere liquid, and the MRJ state at the same number density ρ = 1 (the unit volume contains one particle on average) [67]. Because the number density is set to unity, the mean cell volume is also one. The average surface area and integrated mean curvature of a Voronoi cell in the MRJ state or in the equilibrium ensemble are slightly larger than those of a Poisson Voronoi cell because the latter is less regular, i.e., more aspherical. The Voronoi volume fluctuations and the standard deviations of the other Minkowski functionals are much stronger in the irregular Poisson point process than in the hard-sphere packings, where the MRJ state has significantly smaller Voronoi volume fluctuations than the equilibrium hard-sphere liquid. The Minkowski functionals of a single Voronoi cell, e.g., its volume and its surface area, are strongly correlated, i.e., the correlation coefficients ρ_{µ,ν} are at least 0.9. The numerical estimates for the Poisson Voronoi tessellation are in agreement with the analytic values and numerical estimates in Ref. [68] and references therein.

FIG. 2 (caption fragment). The lines in the plot of the volume distributions are γ distributions; generalized γ distributions are fitted to the distributions of the surface area and the integrated mean curvature.

The high fidelity of the MRJ sphere packings produced by the TJ algorithm allows one to study the relation between the number of contacts of a sphere and the Minkowski functionals of its Voronoi cell. As expected, small cells have a higher number of contacts on average because a high local packing fraction [69] implies that there are many close neighbors. In units of λ, the mean Voronoi volume of a rattler, i.e., an unjammed particle, is 1.04, and that of a particle with 11 contacts is 0.88. The mean surface area of the Voronoi cells varies from 5.50 for rattlers to 4.92 for backbone spheres with up to 11 contacts, and the average integrated mean curvature varies from 8.65 to 8.17, respectively. However, because of the small difference between near contacts and true contacts, the distributions of the Minkowski functionals for rattlers are only slightly shifted compared to the distributions of a typical cell. There are, for example, very small cells containing rattlers, which is consistent with previous findings [39]. Starr et al. [47] and, similarly, Aste et al. [15] showed that by shifting the volume distribution by its mean and rescaling with its standard deviation, the volume distributions of many different sphere packings collapse.
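This shift-and-rescale collapse is straightforward to reproduce; the following minimal sketch uses synthetic (assumed) volume samples standing in for the measured distributions.

```python
import numpy as np

def rescale(x):
    """Shift by the sample mean and rescale by the standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Two hypothetical Voronoi-volume samples with different widths; after
# rescaling, their histograms can be overlaid to test for a collapse.
rng = np.random.default_rng(1)
vols_a = rng.gamma(20.0, 1 / 20.0, 5000)
vols_b = rng.gamma(28.0, 1 / 28.0, 5000)

bins = np.linspace(-4, 4, 41)
h_a, _ = np.histogram(rescale(vols_a), bins=bins, density=True)
h_b, _ = np.histogram(rescale(vols_b), bins=bins, density=True)
print("max |difference| between rescaled densities:", np.abs(h_a - h_b).max())
```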
Figure 2 shows the rescaled distributions of the Minkowski functionals for the Poisson point process, the equilibrium hard-sphere liquid, and the MRJ packing. As expected, the volume distributions of the equilibrium hard-sphere packings and the MRJ packings are qualitatively very similar, while the distribution of the Poisson point process deviates. The same is true for the distributions of the mean curvatures. The distributions of the surface area for both the MRJ and the equilibrium hard-sphere packings are not only qualitatively similar to each other but also to the uncorrelated Poisson point process. So, besides the quantitative differences in the means and the standard deviations of the Minkowski functionals, the distributions of the Minkowski functionals of single Voronoi cells are qualitatively similar for the equilibrium hard-sphere liquid and the MRJ state, as well as even partially for the Poisson point process. The distribution of the Minkowski functionals of a single cell only incorporates local information and is rather insensitive to global structural features such as hyperuniformity of the MRJ state [29,45]. Figure 3 shows the joint probability distribution of the volume and the surface area of a single Voronoi cell in a Poisson point process, an equilibrium hard-sphere liquid, and an MRJ sphere packing. The joint probability distributions for the equilibrium hard-sphere liquid and the MRJ state are also relatively similar.

Both for the Poisson point process [70] and for many different numerical and experimental sphere packings [15], the volume distribution is well described by a γ distribution [71]. We also find, for the volume distributions of the Poisson point process and the equilibrium hard-sphere liquid, excellent agreement with γ distributions [72]. However, for the MRJ sphere packings there is a slight but statistically significant deviation for very small cells, for which the frequency of occurrence is too high. The surface-area and integrated-mean-curvature distributions are well approximated by generalized γ distributions [73], as was already found for the Poisson point process in Refs. [51,74]. However, the distributions for the MRJ sphere packings deviate slightly but statistically significantly from a generalized γ distribution for cells with small surface area or small integrated mean curvature, respectively [75].
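A γ-distribution description of this kind can be tested along the following lines. The sketch assumes synthetic volume data and a maximum-likelihood fit with the location fixed at zero; the small-cell probe at the 1% quantile is an illustrative choice, not the paper's statistical test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
vols = rng.gamma(25.0, 1 / 25.0, 10000)   # hypothetical Voronoi volumes

# Maximum-likelihood gamma fit with the location pinned at zero.
a, loc, scale = stats.gamma.fit(vols, floc=0)
print(f"shape a = {a:.2f}, scale = {scale:.4f}")

# A simple probe for a small-cell deviation (as reported for MRJ
# packings): compare empirical and fitted mass below a small cutoff.
cut = np.quantile(vols, 0.01)
emp = (vols < cut).mean()
fit = stats.gamma.cdf(cut, a, loc=0, scale=scale)
print(f"P(v < {cut:.3f}): empirical {emp:.4f} vs gamma fit {fit:.4f}")
```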
III. CORRELATION FUNCTIONS AND PROBABILITY DENSITY FUNCTIONS OF MINKOWSKI FUNCTIONALS

In order to quantify the global structure of the Voronoi diagram, correlation functions of the Minkowski functionals of cells at a distance r and cell-cell probability density functions are introduced and defined here.

A. Correlation Functions of Minkowski Functionals

We define the volume-volume correlation function C_00(r_1, r_2) of the Voronoi cells of an arbitrary point process as the correlation between the volumes of two Voronoi cells given that the corresponding centers are at the positions r_1 and r_2:

C_00(r_1, r_2) = (⟨v(r_1) v(r_2)⟩ − ⟨v(r_1)⟩⟨v(r_2)⟩) / (σ_{v(r_1|r_2)} σ_{v(r_2|r_1)}),   (1)

where ⟨·⟩ denotes the ensemble average given two points at r_1 and r_2, and σ_{v(r_i|r_j)} is the standard deviation of the volume v of the Voronoi cell at r_i given that there is another point at r_j. Note that because of this condition, both the mean and the standard deviation of a single Voronoi volume are functions of the positions r_1 and r_2: e.g., knowing that there is a point in close proximity, very large volumes are less likely and the mean volume decreases. For a statistically homogeneous and isotropic point process, the volume-volume correlation is simply a radial function, which we denote by C_00(r), where r = |r_2 − r_1|. The correlation function C_00(r) ∈ [−1, 1] measures the correlations, both positive and negative (anticorrelations), between Voronoi volumes of cells given that their centers are at a distance r.

The Voronoi tessellation assigns to each point the volume of its corresponding Voronoi cell. This is a special case of a marked point process where the constructed mark assigned to each point is determined by the positions of the points in the neighborhood. In this sense, the volume-volume correlation function can be seen as a special type of marked correlation function [56,68,76].

The volume-volume correlation function does not, in general, converge to perfect correlation for vanishing radial distance, lim_{r→0} C_00(r) < 1, because for all r > 0 the correlation function C_00(r) provides the correlation of the Voronoi volumes of two different cells with volumes v(0) and v(r). Because a cell is perfectly correlated with itself, i.e., C_00(0) = 1, the correlation function C_00(r) is discontinuous at the origin. If there is no long-range order, the correlation function tends to zero for infinite radial distance, lim_{r→∞} C_00(r) = 0.

The correlation functions of the other Minkowski functionals are defined analogously to Eq. (1), replacing the volume (µ = 0) by the surface area (µ = 1) or the integrated mean curvature (µ = 2):

C_µµ(r_1, r_2) = (⟨W_µ(r_1) W_µ(r_2)⟩ − ⟨W_µ(r_1)⟩⟨W_µ(r_2)⟩) / (σ_{W_µ(r_1|r_2)} σ_{W_µ(r_2|r_1)}),   (2)

with σ_{W_µ(r_i|r_j)} the standard deviation of the Minkowski functional W_µ of the Voronoi cell at r_i given that there is another point at r_j. For a statistically homogeneous and isotropic point process, the correlation function of the Minkowski functionals is again a radial function, which we denote by C_µµ(r). In general, C_µµ(r) will be discontinuous for r → 0, as noted above for the volume-volume correlation function. In the Appendix, we calculate the volume-volume correlation function analytically for the one-dimensional Poisson point process. In Sec. IV, we determine the correlation functions for the three-dimensional Poisson point process, the equilibrium hard-sphere liquid, and MRJ sphere packings.

A different type of correlation function, a pointwise Voronoi correlation function, assigns to arbitrary points the volumes of the Voronoi cells in which they lie [52]. Correlations between Voronoi volumes have also already been studied by finding a nonlinear scaling of the aggregate Voronoi volume fluctuations as a function of the sample size [49].

B. Cell-Cell Probability Density Functions of the Voronoi Volume

The volume-volume correlation function C_00(r_2, r_1) is defined conditionally on the fact that the centers of the two cells are at r_1 and r_2. The full two-point information about the Voronoi volumes is given by the cell-cell probability density function p(r_2, v, r_1, v*) of finding two points of the point process at two arbitrary positions r_2 and r_1 with associated Voronoi volumes v and v*, respectively. It quantifies, for example, how likely it is to find, near a point with a small Voronoi cell, another point with either a large or another small Voronoi cell. Integrating over the volumes yields the standard pair-correlation function:

∫ dv ∫ dv* p(r_2, v, r_1, v*) = ρ(r_2) ρ(r_1) g_2(r_2, r_1).   (3)

This relation clearly indicates that the Minkowski probability density function p(r_2, v, r_1, v*) contains more information than g_2(r_2, r_1). Moreover, the volume-volume correlation function C_00(r_2, r_1) from Sec. III A follows from calculating the moments ⟨v v*⟩, ⟨v⟩, and ⟨v*⟩ of p(r_2, v, r_1, v*)/[ρ(r_2) ρ(r_1) g_2(r_2, r_1)] and the corresponding standard deviations σ_v and σ_{v*}. For a statistically homogeneous and isotropic point process, the cell-cell probability density function is a radial function, i.e., it only depends on the radial distance r = |r_2 − r_1|: p(r, v, v*) is the probability density of finding two points with Voronoi volumes v and v* at a radial distance r.
If there is no long-range order, the cell-cell probability density function p(r, v, v*) converges for large radii r → ∞ to ρ² f(v) f(v*), with ρ the number density and f(v) the distribution of the Voronoi volume v of a single cell (see Sec. II). For a better visualization and comparison of different volumes, we divide the cell-cell probability density function by its long-range value; the cell-cell pair-correlation function is defined as

g_vv(r_2, v, r_1, v*) = p(r_2, v, r_1, v*) / [ρ(r_2) f(v) ρ(r_1) f(v*)]   (4)

and, for a homogeneous and isotropic system,

g_vv(r, v, v*) = p(r, v, v*) / [ρ² f(v) f(v*)].   (5)

If g_vv(r, v, v*) > 1, it is more likely to find a pair of Voronoi cells with volumes v and v* at a distance r than to find them at a large distance, i.e., uncorrelated. If g_vv(r, v, v*) < 1, the occurrence of a point of the point process with a Voronoi volume v at a distance r from another Voronoi center with a Voronoi volume v* is suppressed. Analogous cell-cell pair-correlation functions can be defined for the other Minkowski functionals. We analytically calculate the cell-cell probability density function for the one-dimensional Poisson point process in the Appendix. In Sec. V, we determine the cell-cell pair-correlation function for the three-dimensional Poisson point process, the equilibrium hard-sphere liquid, and MRJ sphere packings.

IV. CORRELATION FUNCTIONS OF MINKOWSKI FUNCTIONALS

In order to sample the correlation functions of the Minkowski functionals, the distances of all pairs of particles [77] are computed and assigned to a bin. For each radial distance, i.e., for each bin, the correlation coefficient of the Minkowski functionals of the two Voronoi cells is determined. Figures 4-6 compare the correlation functions of the Minkowski functionals for the Poisson point process, equilibrium hard-sphere liquids, and MRJ sphere packings. It is seen that these Minkowski correlation functions contain visibly more information than the corresponding standard pair-correlation functions, even in the case of the Poisson point process.

A. Poisson Point Process

It is evident that in the infinite-system limit, the pair-correlation function g_2(r) is a constant (unity) for the Poisson point process, i.e., the points are completely uncorrelated. Because a Voronoi cell is determined by the neighbors of its center, the volume will obviously be correlated; see Fig. 4. There are large Voronoi volume fluctuations for the Poisson point process. Very large cells lead to a strong correlation of the Voronoi volumes even for distances up to four times the mean nearest-neighbor distance. This is to be contrasted with the standard pair-correlation function g_2(r), which is trivially unity for all radial distances. Figure 4 compares the correlation functions C_µµ(r) for all Minkowski functionals µ = 0, 1, 2. All functionals have approximately the same correlation length. For r → 0, the surface areas are more strongly correlated than the volumes because at small radial distances the cells will most likely share a face. In the Appendix, we calculate C_00(r) analytically for the one-dimensional Poisson point process.
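The binned estimator described at the start of this section can be sketched as follows, assuming a periodic cubic box; the helper name, bin choices, and synthetic inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def minkowski_correlation(points, W, box, bins):
    """Estimate C(r): the Pearson correlation of a per-cell functional W
    for pairs of cells binned by center-center distance (periodic box)."""
    n = len(points)
    i, j = np.triu_indices(n, k=1)
    d = points[i] - points[j]
    d -= box * np.round(d / box)          # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    which = np.digitize(r, bins) - 1
    C = np.full(len(bins) - 1, np.nan)
    for b in range(len(bins) - 1):
        m = which == b
        if m.sum() > 1:
            x, y = W[i[m]], W[j[m]]
            # Symmetrize: each pair contributes in both orders.
            x, y = np.concatenate([x, y]), np.concatenate([y, x])
            C[b] = np.corrcoef(x, y)[0, 1]
    return C

rng = np.random.default_rng(3)
pts = rng.random((500, 3))                # hypothetical centers in a unit box
W0 = rng.gamma(20.0, 1 / 20.0, 500)       # hypothetical per-cell volumes
bins = np.linspace(0.0, 0.5, 26)
print(minkowski_correlation(pts, W0, 1.0, bins)[:5])
```

For uncorrelated marks, as in this synthetic input, the estimate fluctuates around zero in every bin; real Voronoi volumes would show the short-range correlations discussed above.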
B. Equilibrium Hard-Sphere Liquid

Figure 5 shows the pair-correlation function and the correlation functions of the Minkowski functionals for equilibrium hard-sphere liquid configurations at a packing fraction φ = 0.48. Because the hard spheres are impenetrable, the correlation functions of the Minkowski functionals are only defined for radial distances larger than or equal to the diameter D of a sphere; in this case, D = (6φ/π)^{1/3} λ ≈ 0.97 λ. There is a strong correlation of the Voronoi volumes of spheres that are in near contact because the Voronoi neighbors are correlated by construction of the Voronoi diagram. However, the maximum correlation is reached for noncontacting spheres at r ≈ 1.3 λ; a large cell has many neighbors, and a Voronoi neighbor whose sphere is not in contact will be, on average, larger than another neighbor cell with a contacting sphere. Between 1.8 λ and 2.4 λ, there is a double peak of anticorrelation, and, for larger radial distances, there is an oscillating anticorrelation and correlation similar to the pair-correlation function g_2, but nearly inverted. The correlation length of the Voronoi volumes in the hard-sphere liquid is larger than in the uncorrelated Poisson point process, where the correlation was only due to the large Voronoi volume fluctuations.

At the top of Fig. 5, the correlation functions of the other Minkowski functionals are compared. Similar to the Poisson case, the integrated mean curvature is more strongly correlated at contact, r = D, than the surface area, which, in turn, is more strongly correlated than the volume. There is no double anticorrelation peak in the integrated mean curvature. For large radii, the correlation functions are shifted against each other despite the strong correlation of the different functionals for a single Voronoi cell. The surface area and the integrated mean curvature are slightly less (anti)correlated.

C. MRJ Sphere Packings

The pair-correlation function g_2(r) and the correlation functions of the Minkowski functionals of the MRJ sphere packings are shown in Fig. 6. The diameter of the spheres in the MRJ sphere packings is D ≈ 1.07 λ. The most striking differences in the pair correlation of the jammed packings relative to the equilibrium packings are the two discontinuities at r = √3 D and r = 2D, the split second peak, which correspond to configurations of two edge-sharing equilateral and coplanar triangles (r = √3 D) or a linear chain of three particles (r = 2D), respectively [78]. There is also a significant (seemingly nonanalytic) feature of the volume-volume correlation function C_00(r) at r = √3 D: a dip in the anticorrelation. However, at r = 2D, the feature is statistically insignificant. At least two double anticorrelation peaks are clearly resolved. The most important qualitative difference in the volume-volume correlation function is the much stronger anticorrelation in the MRJ packings compared to the equilibrium packings. The correlation with the nearest neighbors is weaker, and the first anticorrelation double peak is more than twice as strong as for the equilibrium hard-sphere packings. The MRJ sphere packings are hyperuniform [29,45], i.e., large-scale density fluctuations are suppressed. Therefore, strong Voronoi volume anticorrelations are necessary such that Voronoi cells with a high local packing fraction are accompanied by cells with rather low packing fractions, and vice versa. Another difference between MRJ and equilibrium packings is a stronger shift of the correlation functions of the other Minkowski functionals. For the MRJ packings, there are radial distances, e.g., r = 2.51 λ, at which the integrated mean curvatures are anticorrelated [C_22(2.51 λ) < 0] but the volumes are correlated [C_00(2.51 λ) > 0], and vice versa.
So, in contrast to the local Voronoi analysis, the global Voronoi analysis of the MRJ packing reveals qualitative structural differences from the equilibrium hard-sphere liquid.

V. CELL-CELL PROBABILITY DENSITY FUNCTIONS

The sampling of the cell-cell probability density function p(r, v, v*) is very similar to that of the pair-correlation function g_2 (see, e.g., Ref. [4]); only an additional binning with respect to the Voronoi volumes is needed. Figures 7-9 show the cell-cell pair-correlation function g_vv = p(r, v, v*)/[ρ² f(v) f(v*)] for exemplary large or small cell volumes v, v* in the three-dimensional Poisson point process, in an ensemble of equilibrium hard spheres, and in the MRJ sphere packings. As examples of large or small cells, the volumes were chosen such that their probability density is equal to 1/3 of the maximum of the volume distribution; see Table I and Fig. 2.

FIGS. 7-9 (caption fragment). The curves are compared to the standard pair-correlation function g_2(r) (dashed black line), which is trivially unity for the uncorrelated Poisson point process. The radial distance r is normalized by λ = 1/ρ^{1/3}, where ρ is the number density.

For the Poisson point process, the small cells are strongly correlated at short distances because, by construction, there must be points at close distances, and the neighbor cells of a small Voronoi cell are more likely to be small as well. However, the probability of finding a point with either a corresponding large or a small cell at a short radial distance from the center of a large cell is strongly suppressed because it is unlikely for the center of a large cell to have close neighbors. At intermediate distances, two large cells are correlated, as expected, because of the Voronoi construction.

In the equilibrium hard-sphere liquid, the large cells at near contact are less correlated than the small cells. However, at slightly larger distances, where g_2 shows anticorrelation and the small cells are even more strongly anticorrelated, the large cells are positively correlated. For distances larger than twice the diameter, the cell-cell pair-correlation function for a large and a small cell is equal to the standard pair-correlation function within statistical significance. However, both cell-cell pair-correlation functions, for finding two small or two large cells at a large radial distance r, are shifted relative to the standard pair-correlation function. These features can also be found in the MRJ sphere packings. Moreover, the anticorrelations of two small cells are much stronger. The split second peak even separates into two stronger peaks with anticorrelation in between, where the standard pair-correlation function shows positive correlation. In contrast to this, the peak at r = √3 D completely vanishes for two large cells, g_vv(√3 D, 1.06, 1.06) = 1, and the peak at r = 2D is significantly weaker.
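The same binning idea extends to the cell-cell statistics. The following sketch estimates a class-resolved pair correlation between "small" and "large" cells, defined here by assumed volume quantiles rather than the fixed v, v* values used in Figs. 7-9; all inputs are synthetic and illustrative.

```python
import numpy as np

def g_small_large(points, vols, box, r_bins, lo_q=0.2, hi_q=0.8):
    """Pair correlation between 'small' (volume below the lo_q quantile)
    and 'large' (above the hi_q quantile) cells in a periodic cubic box,
    normalized so that uncorrelated classes give 1 (cf. g_vv)."""
    small = np.where(vols < np.quantile(vols, lo_q))[0]
    large = np.where(vols > np.quantile(vols, hi_q))[0]
    V = box ** 3
    d = points[small][:, None, :] - points[large][None, :, :]
    d -= box * np.round(d / box)          # minimum-image convention
    r = np.linalg.norm(d, axis=-1).ravel()
    counts, _ = np.histogram(r, bins=r_bins)
    shell = 4 / 3 * np.pi * (r_bins[1:] ** 3 - r_bins[:-1] ** 3)
    # Expected pair count per shell for uncorrelated classes.
    expected = len(small) * len(large) * shell / V
    return counts / expected

rng = np.random.default_rng(4)
pts = rng.random((800, 3))                # hypothetical centers
vols = rng.gamma(20.0, 1 / 20.0, 800)     # hypothetical cell volumes
print(g_small_large(pts, vols, 1.0, np.linspace(0.02, 0.4, 20))[:5])
```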
VI. CONCLUSIONS AND DISCUSSIONS

We have characterized the structure of MRJ sphere packings by computing the Minkowski functionals, i.e., the volume, the surface area, and the integrated mean curvature, of the associated Voronoi cells. The local analysis, i.e., the probability distribution of the Minkowski functionals of a single Voronoi cell, provides qualitatively similar results for the equilibrium hard-sphere liquid and the MRJ packings, and partly even for the uncorrelated Poisson point process. In order to study the global structure of the Voronoi cells, we have improved upon this analysis by introducing the correlation functions C_µµ(r) of the Minkowski functionals and the cell-cell probability density function p(r, v, v*). The correlation function C_µµ(r) measures the correlation of the Minkowski functionals W_µ of two Voronoi cells given that the corresponding centers are at a distance r. The cell-cell probability density function p(r, v, v*) also incorporates the probability that there are two particles at a distance r. For an easier interpretation and better visualization, we have defined the dimensionless cell-cell pair-correlation function g_vv(r, v, v*) = p(r, v, v*)/[ρ² f(v) f(v*)], where f(v) is the probability density of the Voronoi volume v. The generalization of the pair correlation to the cell-cell pair correlations provides powerful theoretical and computational tools to characterize the complex local geometries that arise in jammed disordered sphere packings.

Because the faces of a Voronoi cell are bisections between a point of the point process (whether a packing or not) and its neighbors, and, moreover, because Voronoi neighbors share a face and edges, the Minkowski functionals of neighboring Voronoi cells are correlated by construction. This leads to a large correlation length for the Voronoi cells in a Poisson point process because of large Voronoi volume fluctuations. In the equilibrium hard-sphere liquid and MRJ sphere packings, there are correlations and anticorrelations. In contrast to the qualitatively similar local Voronoi structure, the global Voronoi structure of the MRJ hard-sphere packings is qualitatively quite different from that of an equilibrium hard-sphere liquid. We find strong Voronoi volume anticorrelations, which is consistent with previous findings that MRJ sphere packings are hyperuniform [29,45], i.e., that large-scale density fluctuations are suppressed.

MRJ sphere packings are prototypical glasses in that they have no long-range order but are perfectly rigid, i.e., the elastic moduli are unbounded [19,37,79]. The global analysis introduced here shows the difference in the structure of the Voronoi cells of the MRJ state and those of a hard-sphere liquid, which further indicates that the structure of a glass is not that of a "frozen liquid" [45,79,80]. An already known distinct structural difference between the hyperuniform MRJ sphere packings and equilibrium hard-sphere liquids is that while in the equilibrium packing the total pair-correlation function h(r) = g_2(r) − 1 is exponentially damped, the total correlation function of the MRJ state has a negative algebraic power-law tail [29,45,79]. It is an interesting question whether the asymptotic behaviors of the correlation functions of the Minkowski functionals C_µµ(r) or the radial cell-cell correlation functions g_vv(r, v, v*) are different for the MRJ state and the hard-sphere liquid. However, a direct observation of the power-law tail has, so far, not been possible [29,45,79]; much larger systems are needed but are not available at the moment. Still, the global characteristics C_µµ(r) and g_vv(r, v, v*), introduced in the present paper, allow for an investigation of the underlying geometrical reasons for the negative algebraic tail in the total pair-correlation function: the suppressed clustering of regions with low and high local packing fractions [45].
Moreover, they also allow for a quantification of the global structure of other cellular structures, e.g., foams, where the centers of mass of the single cells can be used as centers of the cells instead of the Voronoi centers used here.

A frequently discussed question is whether or not there are local icosahedral configurations in jammed packings [14,20,21,31,81], i.e., a central sphere with 12 spheres in contact where the centers of the touching spheres form a regular icosahedron. The Voronoi cell of the central sphere in such an icosahedron is a regular dodecahedron, which has the maximum possible local packing fraction (≈ 0.76). There is growing evidence that there are no regular icosahedral arrangements in hard-sphere packings; e.g., see Refs. [14,78,82]. Indeed, we find in our MRJ sphere packings no regular and hardly any nearly regular dodecahedral Voronoi cells. All spheres out of more than two million have fewer than 12 contacts. There are local packing fractions up to 0.75, but only a fraction 4.2 × 10^{-5} of all cells have a local packing fraction greater than 0.74.

In a preliminary approach to look for possibly strongly distorted dodecahedral Voronoi cells in the MRJ sphere packings, we examined the topology of the Voronoi polyhedra, i.e., the number of faces and the corresponding types of polygons, following Refs. [81,83,84]. In a compact notation, the topology of a polyhedron is given by the so-called p vector (n_3 n_4 n_5 n_6), where n_3 is the number of triangles, n_4 of quadrilaterals, n_5 of pentagons, and n_6 of hexagons. The dodecahedron is formed by 12 pentagons, i.e., its topology is denoted by (0 0 12 0). Although these polyhedron characteristics are discontinuous and inadequately metric for definite conclusions [85], they can provide a first insight into whether there could be a significant number of distorted dodecahedra. In the MRJ sphere packings, 1.1% of all cells have the topology of a dodecahedron (0 0 12 0) [86]. The average local packing fraction of these distorted dodecahedra is 0.69 and is thus significantly greater than the total mean local packing fraction, which is 0.64. However, only 0.4% of the distorted dodecahedra have a local packing fraction greater than 0.74. The distorted dodecahedra also have a higher average number of contacts, ≈ 7, compared to the typical cell, ≈ 6, but, as stated above, there is not a single sphere with 12 contacts in this high-quality MRJ data. There are 25 other topologies in the Voronoi diagram of the MRJ sphere packings that occur more frequently than the dodecahedron. With 5.2% of all cells, the most likely topology is (0 3 6 5). However, by adding one or two faces, the dodecahedron can transform into the following polyhedra [81]: 1.1% of all cells in the MRJ sphere packings are (1 0 9 3), 3.1% are (0 1 10 2), and 4.4% are (0 2 8 4). The latter is the second most common type in the MRJ sphere packings. So, while we find no regular icosahedral configurations in the MRJ sphere packings, the preliminary topological analysis indicates that more detailed studies of probably strongly distorted icosahedra could be interesting. For example, in metallic glasses as well, significant numbers of distorted icosahedra have been found [87,88]. In the second paper of this series, we will further investigate the global structure of the MRJ sphere packings by looking at density fluctuations, the pore-size distribution, and the two-point correlation functions.
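Classifying a cell by its p vector reduces to counting faces by their number of vertices. A minimal sketch follows; the input format (a list of per-face vertex counts, e.g., as parsed from voro++ output) is an assumption for illustration.

```python
from collections import Counter

def p_vector(face_vertex_counts):
    """Return (n3, n4, n5, n6): the numbers of triangular, quadrilateral,
    pentagonal, and hexagonal faces of a single Voronoi cell, given the
    number of vertices of each of its faces."""
    c = Counter(face_vertex_counts)
    return tuple(c.get(k, 0) for k in (3, 4, 5, 6))

# A regular dodecahedron has 12 pentagonal faces:
print(p_vector([5] * 12))                               # -> (0, 0, 12, 0)
# A hypothetical distorted cell with two quads and two hexagons:
print(p_vector([4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 4]))   # -> (0, 2, 8, 2)
```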
APPENDIX

If r/2 − v* < v < r/2, then ...; if v > r/2, then ... . If v* > r/2, there will be at least one additional point y between x_1 and x_2 with probability rρ/(rρ + 1), and with probability 1/(rρ + 1) the points x_1 and x_2 are nearest neighbors. In the first case, the conditional probability distribution of the volume v is given by Eq. (A2), but the distance z is now uniformly distributed between 0 and r (A10).

Figure 10 depicts the volume-volume correlation function C_00(r) of the one-dimensional Poisson point process; both the analytic result and simulation data are shown. As discussed in Sec. IV A for the three-dimensional Poisson point process, the Voronoi neighbors are correlated by construction. Although very large Voronoi cells are rather unlikely, their next-neighbor correlation leads to a large correlation length in C_00(r). In contrast to the three-dimensional case, the Voronoi neighbors are uncorrelated if the distance between their centers vanishes, because in one dimension these Voronoi cells become independent: they depend only on either the nearest neighbor on the left- or on the right-hand side of x_1 = x_2, which are independent of each other. For large radii, the correlation vanishes exponentially, as expected, because there is no long-range order in the Voronoi diagram.
2015-01-03T19:11:05.000Z
2014-11-13T00:00:00.000
{ "year": 2015, "sha1": "c5de90053eefd66fd16b9d1fd4fa348247d0b15e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1501.00593", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c5de90053eefd66fd16b9d1fd4fa348247d0b15e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine", "Mathematics" ] }
248024116
pes2o/s2orc
v3-fos-license
Blockade of the CXCR3/CXCL10 axis ameliorates inflammation caused by immunoproteasome dysfunction

Immunoproteasomes regulate the degradation of ubiquitin-coupled proteins and generate peptides that are preferentially presented by MHC class I. Mutations in immunoproteasome subunits lead to immunoproteasome dysfunction, which causes proteasome-associated autoinflammatory syndromes (PRAAS) characterized by nodular erythema and partial lipodystrophy. It remains unclear, however, how immunoproteasome dysfunction leads to inflammatory symptoms. Here, we established mice harboring a mutation in Psmb8 (Psmb8-KI mice) and addressed this question. Psmb8-KI mice showed higher susceptibility to imiquimod-induced skin inflammation (IMS). Blockade of IL-6 or TNF-α partially suppressed IMS in both control and Psmb8-KI mice, but there was still more residual inflammation in the Psmb8-KI mice than in the control mice. DNA microarray analysis showed that treatment of J774 cells with proteasome inhibitors increased the expression of the Cxcl9 and Cxcl10 genes. Deficiency in Cxcr3, the gene encoding the receptor of CXCL9 and CXCL10, in control mice did not change IMS susceptibility, while deficiency in Cxcr3 in Psmb8-KI mice ameliorated IMS. Taken together, these findings demonstrate that this mutation in Psmb8 leads to hyperactivation of the CXCR3 pathway, which is responsible for the increased susceptibility of Psmb8-KI mice to IMS. These data suggest the CXCR3/CXCL10 axis as a new molecular target for treating PRAAS.

Introduction

Proteasomes degrade ubiquitin-coupled proteins in the cytoplasm and nucleus and are crucial for various types of cellular regulation (1-4). The 26S proteasome is composed of a 19S regulator and a 20S proteolytic core complex; the 19S regulator acts as a ubiquitin receptor with an ATPase ring that regulates protein unfolding. The 20S core complex consists of 4 stacked rings, each with 7 subunits. Immunoproteasomes were initially identified as IFN-γ-inducible proteasomes and are characterized by preferential cleavage of ubiquitinated proteins to generate potential T cell epitopes that bind to MHC class I (4-6). The three inducible β subunits of the immunoproteasome are low molecular mass polypeptide 2 (LMP2; β1i), multicatalytic endopeptidase complex-like 1 (MECL-1; β2i), and LMP7 (β5i). The corresponding constitutive subunits β1, β2, and β5 are replaced by β1i, β2i, and β5i. These exchanges from constitutive to immunoproteasome subunits change the cleavage specificity: in immunoproteasomes, caspase-like activity is strongly reduced, and chymotrypsin-like activity is enhanced. The preference for MHC class I binding is caused by the selective enhancement of chymotrypsin-like activity and a unique structural character, which together enhance the generation of peptides with C-terminal hydrophobic and basic amino acids; such peptides thus fit well in the groove of MHC class I (7). Various studies have demonstrated the roles of immunoproteasomes in cellular differentiation and disease progression by using proteasome inhibitors and genetically modified mice (2,4). Although mice genetically deficient in one of the catalytic subunits of immunoproteasomes, such as β1i, β2i, or β5i, have been used in many studies (8-10), mice deficient in a single gene, or even in all 3 genes in the absence of thymic proteasomes, did not show any inflammatory phenotype (11).
We and other groups identified PSMB8 with a missense mutation as the causative gene of Japanese autoinflammatory syndrome with lipodystrophy; Nakajo-Nishimura syndrome; chronic atypical neutrophilic dermatosis with lipodystrophy and elevated temperature syndrome; and joint contractures, muscular atrophy, microcytic anemia, and panniculitis-induced lipodystrophy syndrome, which are characterized by persistent inflammation in adipose tissue, progressive lipodystrophy, splenomegaly, and hypergammaglobulinemia without an immunodeficient phenotype (12-14). A subsequent study showed that mutations in other subunits of immunoproteasomes cause a similar syndrome (15). Collectively, these syndromes are now named proteasome-associated autoinflammatory syndromes (PRAAS). A mutation in an immunoproteasome subunit disturbs proteasome assembly, which results in dysfunctional proteasome activity (12). The identification of human patients with immunoproteasome dysfunction provided insight not only into the involvement of immunoproteasomes in human diseases, but also into the roles of immunoproteasomes in various cellular regulatory processes. Initially, activation of the p38 pathway was reported in patients with PRAAS (12,14). Subsequently, an interferon signature was reported, and treatment of patients with PRAAS with a JAK1/2 inhibitor was shown to at least partially ameliorate their inflammatory symptoms (16). However, it remains unclear how immunoproteasome dysfunction causes inflammation and which molecules are key to the inflammation in patients with PRAAS.

In the present study, we sought to identify key molecules that initiate or enhance inflammatory responses induced by immunoproteasome dysfunction. To do this, we established mice in which the Psmb8 gene carries a mutation found in patients with PRAAS. We found that the CXCR3 pathway is activated in an imiquimod-induced skin inflammation (IMS) model.
Blockade of the CXCR3 pathway ameliorated the inflammatory responses caused by immunoproteasome dysfunction, suggesting that the CXCR3 pathway could be a drug target in patients with PRAAS. Results Establishment of Psmb8-KI mice. To evaluate the mechanism of the inflammatory phenotypes of patients with PRAAS, mice that harbor the human mutation (Gly197Val) in Psmb8 were established (Figure 1A). We hereafter refer to this mouse strain as the Psmb8-knock-in (Psmb8-KI) mouse strain. The mutation in the Psmb8 locus in Psmb8-KI mice was confirmed by PCR (Figure 1B). The expression of each immunoproteasome subunit in total spleen cells was evaluated by Western blotting (Figure 2A). The expression levels of mature β5i were reduced, accompanied by insufficiently cleaved β5i, in Psmb8-KI mice compared with WT mice (Figure 2A). Insufficiently cleaved β1i and β2i subunits were also detected in Psmb8-KI mice. The β5 expression was much higher in Psmb8-KI mice than in control mice (Figure 2A). Spleen cell lysates generated from control and Psmb8-KI mice were separated by glycerol gradient centrifugation to determine the mode of immunoproteasome assembly, because patients with PRAAS show assembly defects in immunoproteasomes (12). Assembly intermediates containing immature β1i, β2i, and β5i were detected in cell lysates from Psmb8-KI mice, indicating that disturbed assembly of immunoproteasomes was present in these mice, similar to patients with PRAAS with a mutation in PSMB8 (Figure 2B). The proteasome activity of total spleen cells from control and Psmb8-KI mice was measured (Figure 2C). Trypsin-like activity was lower in Psmb8-KI mice, while chymotrypsin-like activity was increased in Psmb8-KI mice. Caspase-like activity was comparable between control and Psmb8-KI mice. The ubiquitin expression in Psmb8-KI mouse kidney, liver, and adipose tissues was equivalent to that in the corresponding control mouse tissues (Figure 2D). Unimpaired immune cell development in Psmb8-KI mice. The development of immune cells in the spleen was evaluated (Figure 3A). The frequencies of TCRβ+, TCRβ-NK1.1+, B220+, CD11b+Gr-1+, CD11c+MHC class II+, and CD11b+F4/80+ cells; the CD4/CD8 ratio in TCRβ+ cells; and the CD44/CD62L ratio in CD4+ or CD8+ cells were comparable between control and Psmb8-KI mice. The total cell number of spleen cells was comparable between control and Psmb8-KI mice (Figure 3B). The expression of H-2Kb was reduced (Figure 3, A and C) in Psmb8-KI mice, similar to the expression in Psmb8-deficient mice (9). The secretion of IFN-γ by CD4+ T cells from Psmb8-KI mice after stimulation with anti-CD3 and anti-CD28 antibodies was equivalent to that by control T cells (Figure 3D). Reduced adipose tissue weight in Psmb8-KI mice. Since patients with PRAAS develop partial lipodystrophy and Psmb8-deficient mice show reduced adipose tissue weight with high-fat diet feeding (17), we fed control and Psmb8-KI mice a normal diet and measured their body weight from 4 to 16 weeks of age (Figure 4A). The body weight of Psmb8-KI mice was much lower than that of control mice, especially at 15 or 16 weeks of age. Histological examinations of adipose tissue showed smaller adipocytes in Psmb8-KI mice (Figure 4B), and the fat ratio in the whole body tended to be lower in Psmb8-KI mice, as evaluated by computed tomography (Figure 4C).
Psmb8-deficient mice show slower weight gain than WT mice, with reduced adipose tissue volume and smaller mature adipocytes (17). The stromal vascular fraction (SVF) was smaller in Psmb8-KI mice (Figure 4D), and the differentiation toward adipocytes in the SVF fraction was lower in Psmb8-KI mice than in control mice, as evaluated by Oil Red O staining (Figure 4E); these changes were accompanied by decreased expression of Pparg and Adipoq (Figure 4E), which encode gene products required for adipocyte differentiation. Pdgfrb is expressed during the maturation process from the preadipocyte stage to mature adipocytes. Before and after the differentiation, its expression level was comparable between control and Psmb8-KI mice. Increased sensitivity to IMS in Psmb8-KI mice. As we did not detect any signs of spontaneous inflammation in Psmb8-KI mice even after 6 months of age (data not shown), we tested the sensitivity to imiquimod-induced skin inflammation (IMS). Imiquimod was painted on the ear skin, and ear thickness was measured (Figure 5A). Ear thickness in Psmb8-KI mice began to increase 2 days after the initial treatment, and increased ear thickness was found in Psmb8-KI mice from 2 to 10 days after the initial treatment. Histological studies performed 10 days after the initial treatment demonstrated a thicker ear and more infiltration of mononuclear cells in the skin in Psmb8-KI mice than in WT mice (Figure 5B). Since imiquimod is an agonist of TLR7, TLR7-associated inflammatory genes were evaluated by real-time PCR (Figure 5C). The expression of Il6, Ifng, and Tnfa in regions treated with imiquimod tended to be higher in Psmb8-KI mice 4 days after the initial treatment. We administered anti-IL-6, anti-TNF-α, or anti-IFN-γ antibodies or baricitinib (a JAK1/2 inhibitor) to mice administered imiquimod on the ear skin (Figure 5D). The anti-TNF-α and anti-IL-6 antibodies ameliorated ear thickening in both control mice and Psmb8-KI mice, but a difference in ear thickness was still detected between the control and Psmb8-KI mice treated with either antibody. In contrast, anti-IFN-γ treatment did not ameliorate IMS in either control mice or Psmb8-KI mice. Treatment with baricitinib most significantly reduced ear thickness in both control mice and Psmb8-KI mice, but a difference in ear thickness was still detected between the control and Psmb8-KI mice treated with baricitinib. These data indicate that blockade of IL-6 or TNF-α, or treatment with baricitinib, can suppress ear inflammation, while the magnitude of the suppressive effect is similar in control and Psmb8-KI mice, suggesting that other mediators are responsible for the higher sensitivity to inflammation in Psmb8-KI mice. Cxcl10 is highly expressed following proteasome inhibition or in Psmb8-KI mice. To identify the molecules that underlie the differential sensitivity to IMS between control and Psmb8-KI mice, we compared genes that were differentially expressed following treatment with proteasome inhibitors (Figure 6A). We treated J774 cells with 1 of 3 proteasome inhibitors (MG132, epoxomicin, or ONX0914) for 4 hours and tested gene expression patterns with a DNA microarray. The data were submitted to the NCBI Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/; accession no. GSE189308). Among immune-associated genes, the chemokines Cxcl9 and Cxcl10, which use the receptor CXCR3, exhibited higher expression following proteasome inhibitor treatment (Figure 6A).
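The relative expression values behind the real-time PCR results (Figures 5C and 6B) are typically derived from Ct values; since the quantification method is not spelled out in the extracted text, the following is a minimal sketch assuming the common 2^-ΔΔCt approach, with Gapdh as a reference gene and all Ct values purely illustrative.

```python
# Minimal sketch of relative expression via the 2^-ddCt method.
# Assumes Gapdh as the housekeeping gene; the study does not state
# which reference gene or quantification method was actually used.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. an untreated control sample."""
    d_ct_treated = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # same normalization in control
    dd_ct = d_ct_treated - d_ct_control          # compare treated vs. control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for Cxcl10 in imiquimod-treated vs. untreated skin
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_ctrl=27.3, ct_ref_ctrl=18.2)
print(f"Cxcl10 fold change: {fold:.1f}x")
```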
The higher expression of Cxcl9 and Cxcl10, as well as elevated expression of Cxcl11, was confirmed by real-time PCR, while the expression of their receptor, Cxcr3, was not increased (Figure 6B). Spleen cells from Psmb8-KI mice also highly expressed Cxcl9 and Cxcl10 compared with control cells (Figure 6C). These data suggest that the CXCR3 pathway is activated in Psmb8-KI mice. Deficiency in Cxcr3 ameliorated IMS in Psmb8-KI mice. We sought to analyze the contribution of CXCR3 to the higher sensitivity to IMS in Psmb8-KI mice. We first applied imiquimod to the ear skin of WT and Cxcr3-deficient mice (Figure 7A). The increase in ear thickness seen in the Cxcr3-deficient mice was equivalent to that observed in the control mice, suggesting that CXCR3 is not involved in the disease progression of IMS in control mice. We then applied imiquimod to the ear skin of Psmb8-KI mice deficient in the Cxcr3 or Cxcl10 gene (Psmb8-KI;CXCR3-KO or Psmb8-KI;CXCL10-KO) (Figure 7, A and B). Ear thickening was suppressed in the absence of Cxcr3 and tended to be inhibited in the absence of Cxcl10 on the Psmb8-KI background. These data demonstrate that the CXCR3 pathway is the key signaling pathway underlying the increased susceptibility to IMS in Psmb8-KI mice. Discussion Patients with PRAAS exhibit inflammatory signs in various organs (12,18,19). Although those inflammatory symptoms are, at least partially, improved by treatment with a JAK1/2 inhibitor (16,20), the molecular mechanisms by which these inflammatory responses are induced, and which molecules are key in the initiation or progression of inflammation in patients with PRAAS, are unknown. Here, we demonstrated that the CXCR3 pathway is hyperactivated in Psmb8-KI mice and that deficiency in the Cxcr3 gene in Psmb8-KI mice ameliorates IMS. These data suggest the CXCR3 pathway as a target for treating patients with PRAAS. Psmb8-KI mice on a C57BL/6 background did not exhibit any inflammatory signs, even after 6 months of age, in our specific pathogen-free (SPF) facility (data not shown). In contrast to the accumulation of ubiquitin-coupled proteins in the cells of patients with PRAAS (12), the total spleen cells of Psmb8-KI mice did not show increased accumulation of ubiquitin, although the patterns of chymotrypsin-like, trypsin-like, and caspase-like activities were distinct between control and Psmb8-KI mice. One possible reason why Psmb8-KI cells did not show accumulation of ubiquitin-coupled proteins is the overexpression of β5 in Psmb8-KI mice, as our previous experiments using PRAAS cells did not show overexpression of β5 (12). This overexpression of β5 might, at least partially, compensate for the immunoproteasome dysfunction caused by the mutation in β5i, allowing cells to degrade ubiquitin-coupled proteins in a manner almost equivalent to that of control cells. Another point that needs to be discussed is the distinct patterns of chymotrypsin-like, trypsin-like, and caspase-like activities between control and Psmb8-KI mice. Given the high chymotrypsin-like activity and low caspase-like activity of immunoproteasomes compared with those of constitutive proteasomes, the increased chymotrypsin-like activity and reduced trypsin-like activity in Psmb8-KI cells compared with control cells suggest that an assembly defect in immunoproteasomes dynamically changes the expression of each catalytic subunit per cell. The hyperexpression of β5 might also contribute to the different patterns of protease activity in Psmb8-KI mice.
Figure 4. (A and B) Body weight gain (female WT, closed circles; female Psmb8-KI, red triangles) and the size of adipocytes of WT and Psmb8-KI mice were evaluated at 15 weeks old. Scale bar: 100 μm. (C) The fat ratio in the total body at the age of 15 weeks was evaluated by CT. Data represent the mean ± SD of technical triplicates. n = 5. *P < 0.05 (2-tailed unpaired t test). The blue box indicates the region for the sagittal image. (D) The SVF numbers of the adipose tissues of WT and Psmb8-KI mice at 15 weeks old were counted. Data represent the mean ± SD (n = 4 in each group). *P < 0.05 (2-tailed unpaired t test). (E) SVF cells from WT and Psmb8-KI mice were allowed to differentiate into mature adipocytes. Oil Red O staining was performed, and the expression of Pdgfrb, Pparg, and Adipoq was measured by real-time PCR. Data represent the mean ± SD of technical triplicates. n = 3. *P < 0.05; **P < 0.01 (1-way ANOVA). The data in this figure are representative of 3 independent experiments.
CXCL9, CXCL10, and CXCL11 are known to be Th1 chemokines and to bind to CXCR3. These chemokines attract Th1 cells into inflamed tissues, where Th1 cells produce cytokines, which leads to increased Th1 chemokine levels in the inflamed tissues and amplification of the feedback loop. The levels of these chemokines are elevated in autoimmune and rheumatic diseases and in cancers (21)(22)(23)(24)(25). Several papers have reported that patients with PRAAS exhibit a typical type I IFN signature with increased expression of IFN-stimulated genes, including CXCL9 and CXCL10 (15,26,27). We here detected increased levels of Cxcl9 and Cxcl10 gene expression in J774 cells treated with proteasome inhibitors and in spleen cells from Psmb8-KI mice, suggesting that elevated levels of CXCL9 and CXCL10 hyperactivate CXCR3 in Psmb8-KI mice. Indeed, the suppression of inflammation in Psmb8-KI mice induced by deficiency in Cxcr3 gene expression reveals that activation of the CXCR3 pathway is one of the key events leading to the increased susceptibility to IMS in Psmb8-KI mice. What is the mechanism underlying the upregulation of CXCL9 and CXCL10 induced by a mutation in Psmb8? The J774 cells treated with proteasome inhibitors did not upregulate type I or type II IFNs (data not shown), suggesting that the upregulation of CXCL9 and CXCL10 induced by inhibiting proteasomes could be attributed to cell-intrinsic regulation of both genes. Previous studies using human cells also found that treatment of peripheral mononuclear cells and fibroblasts with proteasome inhibitors upregulated CXCL10 mRNA but not the mRNA of proinflammatory cytokines, including IL6 and IL1B (15). Therefore, although increased expression of CXCL10 in patients with PRAAS might occur in response to various inflammatory stimuli, increased expression of those chemokines could be regulated, at least partly, in a cell-intrinsic manner. Patients with PRAAS are characterized by hypergammaglobulinemia, and some patients showed low CD8+ T cell counts and low percentages of naive CD8+ T cells (20,28). Since immunoproteasomes are involved in generating peptides presented by MHC class I (7), patients with PRAAS might have a distinct repertoire of CD8+ T cell receptors compared with healthy people, which might contribute to inflammatory responses in patients with PRAAS. In addition, CXCR3 is highly expressed on mouse and human T cells and functions in the migration of T cells into inflammatory regions (21).
The suppression of inflammation in Psmb8-KI mice by blockade of CXCR3 in this study also suggests a contribution of CD8+ T cells to the inflammation in patients with PRAAS. A mutation in PSMB8 was originally identified in patients with PRAAS (12)(13)(14), and subsequent studies reported that mutations in other proteasome subunits also lead to PRAAS phenotypes (15). Recent studies have reported that a mutation in PSMB9 causes immunodeficiency phenotypes (29). Therefore, it would be important to evaluate the relationship between the altered function of each proteasome subunit and the associated mutation. Here, we used 3 proteasome inhibitors with distinct specificities: (a) MG132, an inhibitor of chymotrypsin-like activity; (b) epoxomicin, a 20S proteasome inhibitor; and (c) ONX0914, a β5/β5i inhibitor. The data show that treatment with any of the 3 inhibitors upregulated Cxcl9 and Cxcl10. In addition to revealing the mechanism by which altered proteasome function upregulated those chemokines, it would be interesting to analyze the function of each subunit of the proteasomes and chemokine expression in future studies. The CXCR3 pathway has been a drug target of interest in inflammatory disorders (30,31). Indeed, a clinical trial was performed to evaluate rheumatoid arthritis treatment with an anti-CXCL10 blocking antibody (31), which showed partial improvement of inflammatory responses. Since our present study showed the involvement of the CXCR3 pathway in increased susceptibility to inflammation, antagonists of CXCR3 or monoclonal antibodies against CXCL10 might be effective for treating patients with PRAAS. However, as we have shown only the increased susceptibility of Psmb8-KI mice to IMS, we still need to establish an animal model that spontaneously reproduces inflammatory responses similar to those of patients with PRAAS. In addition, in this study, we used Cxcl10-deficient mice on a C57BL/6 background, a strain that carries a frameshift mutation in the Cxcl11 gene (32). CXCL11 is a strong inducer of CXCR3 internalization (33), which may lead to reduced accessibility of CXCL9 and CXCL10 to CXCR3. Thus, we need to reanalyze the effect of CXCL10 on a different background in future studies. Psmb8-KI mice showed a reduced volume of adipose tissue and impaired differentiation of mature adipocytes, similar to Psmb8-deficient mice (17). The accumulation of ubiquitin-coupled proteins was comparable between control and Psmb8-KI mice, although the patterns of proteasome activity were altered in Psmb8-KI mice. Therefore, the impaired adipocyte differentiation in Psmb8-KI mice could likely be attributed to a change in the turnover of particular proteins, rather than only to cell stress or death caused by the accumulation of ubiquitin-coupled proteins. In conclusion, we identified the CXCR3 pathway as a drug target in PRAAS. Treatment with a JAK1/2 inhibitor has therapeutic efficacy in limiting inflammatory responses, although some patients still exhibit residual inflammation with this approach. Thus, potent CXCR3 inhibitors might have additive therapeutic potential when combined with a JAK1/2 inhibitor to treat patients with PRAAS. Finally, it should be noted that it remains unclear how CXCL9 and CXCL10 expression is increased and how the IFN signature is induced in patients with PRAAS. The elucidation of the steps involved in the initiation of inflammation in patients with PRAAS remains an important subject for future studies. Methods Proteasome activity assay. Splenocytes were assayed using the Proteasome-Glo Cell-Based Assay (Promega), according to the manufacturer's protocol. Histology.
Epididymal adipose tissues and ear skin samples were collected and fixed in a 10% formalin solution. Paraffin-embedded tissue samples were sectioned and stained with H&E. Adipocyte differentiation assay. Epididymal adipose tissues were cut into small pieces, followed by incubation in adipose isolation buffer (17) containing 1 mg/mL collagenase (Wako Pure Chemical Industries) for 1 hour at 37°C with gentle shaking. SVF cells were collected as a pellet by centrifugation at 500g and 4°C for 5 minutes. SVF cells were cultured in 10% FBS-DMEM supplemented with a penicillin/streptomycin solution (Thermo Fisher Scientific). Two days after reaching confluence, cells were incubated for 2 days in differentiation medium containing dexamethasone (2.5 μM), 3-isobutyl-1-methylxanthine (0.5 mM), and insulin (10 μg/mL) (AdipoInducer Reagent [for animal cells]; Takara Bio Inc.). The medium was then replaced with maintenance medium (insulin [10 μg/mL] in 10% FBS-DMEM supplemented with antibiotics). The maintenance medium was renewed every 2 or 3 days for 6 days of culture. Adipocyte differentiation was evaluated by Oil Red O staining. Oil Red O uptake into cells was quantified by extraction with isopropanol, and the absorbance of the eluate was measured at 492 nm. IMS model. Mice received a daily topical dose of 25 mg of commercially available 5% imiquimod cream (Beselna Cream, Mochida Pharmaceutical) on the right ear for 10 days, with the exception of day 5 or 6. For blockade experiments, a control antibody or an anti-IL-6 (MP5-20F3), anti-TNF-α (XT3.11), or anti-IFN-γ (XMG1.2) monoclonal antibody (BioXCell) was administered at 400 μg/dose via i.p. injection on 2 consecutive days, followed by no treatment on the third day, over the 10-day period. Mice received 6 total injections. Baricitinib (AdooQ BioScience) was administered daily by oral gavage at 10 mg/kg/dose for 10 days, with the exception of day 5. DNA microarray. The mouse monocyte-macrophage cell line J774 was cultured in 10% FBS-RPMI medium (Nacalai Tesque) supplemented with a penicillin/streptomycin solution (Thermo Fisher Scientific) in the presence of the proteasome inhibitor MG132 (LifeSensors), the 20S proteasome inhibitor epoxomicin (Peptide Institute), or the β5/β5i inhibitor ONX0914 (AdooQ BioScience) at 1 μM, or DMSO (Sigma-Aldrich) as a control solvent, for 4 hours. RNA was extracted from cultured J774 cells, with genomic DNA degraded, using the ReliaPrep RNA Cell Miniprep System (Promega). The quality of the isolated RNA was evaluated with an Agilent 2100 BioAnalyzer. Probe preparation and microarray analyses were performed on the Whole Human Genome Microarray 4x44K v2 (Agilent Technologies). The resulting data were normalized using GeneSpring (Agilent Technologies) software. The DNA microarray data were submitted to the NCBI GEO database (https://www.ncbi.nlm.nih.gov/geo/; accession no. GSE189308). Data transfer agreements. The primer sequences are available upon request. Statistics. For all experiments, significant intergroup differences were calculated using a 2-tailed unpaired t test or 1-way ANOVA. Differences were considered significant when P < 0.05. Study approval. All animal experiments were approved by the animal research committee of Tokushima University and performed in accordance with our institution's guidelines for animal care and use.
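The statistics paragraph above names a 2-tailed unpaired t test and 1-way ANOVA with P < 0.05 as the significance threshold. A minimal SciPy sketch of those two comparisons is given below; the ear-thickness values are placeholders, not data from the study.

```python
# Sketch of the statistical comparisons described above, using SciPy.
import numpy as np
from scipy import stats

control = np.array([0.28, 0.31, 0.30, 0.29, 0.32])   # hypothetical ear thickness (mm)
psmb8_ki = np.array([0.45, 0.41, 0.48, 0.44, 0.46])  # hypothetical Psmb8-KI values

# Two-tailed unpaired t test for a two-group comparison
t, p = stats.ttest_ind(control, psmb8_ki)
print(f"t = {t:.2f}, P = {p:.4f}, significant: {p < 0.05}")

# One-way ANOVA when more than two treatment groups are compared
treated = np.array([0.33, 0.36, 0.35, 0.34, 0.37])   # hypothetical third group
f, p_anova = stats.f_oneway(control, psmb8_ki, treated)
print(f"F = {f:.2f}, P = {p_anova:.4f}")
```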
Author contributions YS and KY designed the research; YS performed most of the experiments; HA, ST, HK, and KO analyzed the data; YS and KY wrote the paper; HA, ST, HK, and KO reviewed the paper; and KY supervised all research.
2022-04-09T06:17:37.672Z
2022-04-08T00:00:00.000
{ "year": 2022, "sha1": "f59ca4313fa62aae20d17e01d85cad923a747926", "oa_license": "CCBY", "oa_url": "http://insight.jci.org/articles/view/152681/files/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa99be11e6db4deb28bcab8e49b69884e72c9ea1", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
17409131
pes2o/s2orc
v3-fos-license
Relationship between cognitive impairment and nutritional assessment on functional status in Calabrian long-term-care Objective The interaction between dementia and nutritional state is very complex and not yet fully understood. The aim of the present study was to assess the interaction between cognitive impairment and nutritional state in a cohort of elderly residents, in relation to the functional condition of patients and their load of assistance, in long-term-care facilities of the National Association of Third Age Structures (ANASTE) Calabria. Methods One hundred seventy-four subjects (122 female and 52 male) were admitted to the long-term-care ANASTE Calabria study. All patients underwent multidimensional geriatric assessment. Nutritional state was assessed with the Mini Nutritional Assessment (MNA), whereas cognitive performance was evaluated by the Mini-Mental State Examination (MMSE). Functional state was assessed by the Barthel Index (BI) and Activities of Daily Living (ADL). The following nutritional biochemical parameters were also evaluated: albumin, cholesterol, iron, and hemoglobin. All patients were reassessed 180 days later. Results Severe cognitive impairment in MMSE performance was displayed in 49.7% of patients, while 39.8% showed a moderate deficit; 6.9% had a slight deficit; and 3.4% evidenced no cognitive impairment. On the MNA, 30% of patients exhibited an impairment of nutritional state; 56% were at risk of malnutrition; and 14% showed no nutritional problems. Malnutrition was present in 42% of patients with severe cognitive impairment, but only 4% of malnourished patients showed moderate cognitive deficit. The statistical analysis displayed a significant correlation between MNA and MMSE (P<0.001), and MMSE correlated with ADL (P<0.001) and BI (P<0.05). MNA correlated with BI (P<0.001) and albumin (P<0.001). The follow-up showed a strong correlation between cognitive deterioration and worsening of nutritional state (P<0.005), as well as with functional state (P<0.05) and mortality (P<0.01). Conclusion The present study clearly shows that malnutrition may play an important role in the progression of cognitive loss. Introduction The elderly are particularly vulnerable to nutritional deficits. Malnutrition in elderly patients has a large number of negative consequences on health: it can often influence the prognosis of different pathologies, reduce health-related quality of life, and increase morbidity/mortality and hospital admissions. 1,2 Malnutrition is common among residents in hospitals and nursing homes. In Europe, the prevalence of malnutrition at hospital admission ranges from 10% to 80%, with an average of 35%, and it tends to worsen in most cases during hospitalization, while, in long-term-care (LTC) settings and in nursing homes, the average prevalence is 30%. 3 The prevalence of malnutrition has been reported to be 3%-5% among free-living older adults 4,5 and 21.3% in home care patients. 6 The etiology is multifactorial and involves physiological aging 7 and socioeconomic and psychological factors, 8 as well as comorbid conditions typical of the elderly. 9,10 Dementia is itself a risk factor for malnutrition. A recent study shows that the nutritional state of institutionalized dementia patients is worse than that of non-institutionalized subjects of the same age with a normal cognitive state or mild cognitive impairment.
11 The relationship between weight loss and dementia is complex and not completely clear; in fact, weight loss can differ according to the type of dementia, the stage of the disease, and the living situation of the patients. Studies examining food intake in the dementia population report varying results regarding the extent of weight loss and the adequacy of diet and/or energy intake. 12,13 Grundman et al found a significant association between low body mass index and cerebral cortex atrophy in the areas involved in the control of eating behavior among patients with Alzheimer's disease (AD). 14 White et al explored the association between AD and weight changes, studying 362 subjects affected by AD and 317 control subjects for 2 years. 15 They found that almost twice as many patients with AD lost more than 5% of total body weight compared to controls, and that patients with more severe forms of AD are six to seven times more likely to suffer progressive loss of weight. 15 Poor nutritional status further undermines the functional state in dementia, promoting musculoskeletal damage (sarcopenia and osteopenia), causing immunosuppression, and, ultimately, decreasing respiratory and cardiac capacity. 16 Malnutrition becomes more evident in the later stages of the disease, when dysphagia or complications such as pressure ulcers, repeated infections, and immobility syndrome occur and further worsen the nutritional state of the patient. 17 About 78% of residents in nursing homes and extensive rehabilitation organizations associated with the National Association of Third Age Structures (ANASTE) Calabria (an Italian association of nursing homes and rehabilitation facilities for the care of the elderly) were found to suffer from cognitive deterioration of different etiologies and gravities, of whom 52% had severe dementia. Diagnostic reevaluation of these patients, according to the criteria of the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA), 18 showed that 56.14% were suffering from AD: in particular, 45.45% met the criteria for possible AD; 28.12% met the criteria for probable AD; and 37.5% met the criteria for probable-uncertain AD. 19 Few studies have focused on the relationship between patients' nutritional state and the severity of dementia, comorbidity, and functional state in institutionalized patients with dementia. 16 Patients suffering from malnutrition have an increased need for nursing care due to the increased incidence of complications and reduced quality of life. 20 The need for assistance may be calculated by an index of case mix and load of assistance, expressed in minutes of assistance per day by the multidisciplinary team in LTC. 21,22 The purpose of the present study was to investigate the relationship between cognitive deficit and nutritional state in a cohort of elderly residents in LTC, in relation to functional state and the load of assistance. Materials and methods Study setting A network of LTC facilities for the care of the frail elderly, consisting of nursing homes and extensive rehabilitation organizations, operates in Calabria, Italy. Access to these facilities is regulated according to the guidelines provided by the Calabria region (DGR 685/2002, DGR 695/2003, LR 29/2008, DGR 3137/1999). The present study was a 6-month observational, descriptive study carried out on residents across ten ANASTE Calabria nursing homes, starting in January 2010.
The customary care practices provided to all patients belonging to ANASTE Calabria LTC facilities were maintained throughout the study. At the moment of admission to LTC, the informed consent of the patients and/or their caregivers was acquired for daily care practices and the use of their personal data. All patients underwent multidimensional geriatric assessment. Baseline and follow-up data comprised a battery of validated indices chosen to establish an overview of health state. Subjects and measurements A sample of 174 residents, 122 female (70.1%) and 52 male (29.8%), was subjected to multidimensional and multidisciplinary evaluation. All patients underwent clinical, neuropsychological, and biological investigations. The diagnosis of dementia was investigated in an interview covering detailed personal and family history and was subsequently confirmed by the administration of psychometric tests. All patients fulfilled the criteria for dementia as described in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR). 23 Cognitive evaluation was conducted by a neuropsychologist who used Folstein et al's Mini-Mental State Examination (MMSE). 24 The patients were assessed as affected by severe, moderate, or slight cognitive impairment, based on MMSE scores (0-10, 10-20, and 20-23, respectively). Functional state was evaluated with the use of the Activities of Daily Living (ADL) 25 index and the Barthel Index (BI), 26 in each of which a lower score indicates a worse functional state. The affective state was scored using the Geriatric Depression Scale (GDS), in which a high score (>6) denotes a depressive state. 27 Comorbidity was examined according to the indices of severity and complex comorbidity of the Cumulative Illness Rating Scale (CIRS), in which higher scores indicate greater comorbidity. 28 Health care need was calculated in terms of minutes of assistance through evaluation by Resource Utilization Groups (RUG)-III. 22 Results At baseline, 49.7% of patients displayed severe cognitive impairment on the MMSE; 39.8% showed a moderate deficit; 6.9% had a slight deficit; and 3.4% showed no cognitive impairment (Figure 1; Table 2). Twenty-five patients (14%) had no nutritional problems as assessed with the MNA; 97 patients (56%) were at risk of malnutrition; and 52 patients (30%) had an impaired nutritional state (Figure 2; Table 2). Among patients with severe cognitive impairment, 36 (42%) presented with malnutrition; 48 were at risk of malnutrition; and nine (10%) showed no nutritional impairment. At follow-up, 142 patients, 47 male (33%) and 95 female (67%), were assessed. Seventy-eight patients (55%) showed severe cognitive impairment on the MMSE; 52 patients (36%) showed a moderate deficit; and ten (7%) had a slight deficit. On the MNA, 32 patients (22%) demonstrated no nutritional problems; 69 (48%) were at risk of malnutrition; and 41 (29%) had an impairment of nutritional state (Table 2). The linear regression analysis showed a statistically significant correlation between MNA and MMSE (r=0.39; P<0.001) (Figure 3; Table 2). There was no statistically significant correlation between cognitive impairment, nutritional state, comorbidity, and use of drugs, but there was a weak trend linking severe cognitive impairment, impaired nutritional state, and care need in minutes of assistance. Follow-up at 6 months confirmed a statistically significant correlation between MNA and MMSE (r=0.37; P<0.01) (Figure 3; Table 2).
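A minimal sketch of the correlation analyses reported above (linear regression of MNA against MMSE, plus the Spearman rank correlation used in the follow-up analysis) is given below; the scores are illustrative placeholders, not patient data.

```python
# Sketch of the correlation analyses described above, using SciPy.
import numpy as np
from scipy import stats

mmse = np.array([8, 12, 22, 5, 17, 25, 10, 14, 20, 7])    # hypothetical MMSE scores
mna = np.array([14, 18, 24, 11, 19, 26, 15, 17, 22, 12])  # hypothetical MNA scores

# Linear regression: rvalue corresponds to the reported Pearson r
slope, intercept, r, p, se = stats.linregress(mmse, mna)
print(f"linear regression: r = {r:.2f}, P = {p:.4f}")

# Spearman rank correlation, as in the follow-up analysis
rho, p_s = stats.spearmanr(mmse, mna)
print(f"Spearman: rho = {rho:.2f}, P = {p_s:.4f}")
```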
In the malnourished group, there was no correlation between MMSE and MNA (r=0.25; P>0.05), whereas there was a significant correlation in the groups with severe and moderate cognitive impairment (r=0.33; P<0.05). These results indicate that an improvement of nutritional condition is accompanied by an improvement of cognitive function. Analysis with Spearman's rank correlation coefficient and Student's t-test showed a strong correlation between levels of cognitive deterioration and worsening of nutritional state (P<0.05), functional state (P<0.05), and mortality (P<0.01). Discussion The prevalence of malnutrition and risk of malnutrition in institutionalized elderly patients with dementia is high and increases with the progression of the disease. The MNA, standardized for the elderly population, 30 is the preferred tool for nutritional assessment in LTC facilities associated with ANASTE Calabria, although, in the literature, authors report different opinions about its use in the institutionalized elderly with dementia. 32,33 The results of the present study confirm that patients affected by serious cognitive impairment are characterized by a poor nutritional state, a serious impairment of functional conditions, and increased mortality. This disorder does not appear to be associated with increased comorbidity and/or use of drugs. Previous studies have shown an inverse correlation between cognitive impairment and comorbidity in a large sample of patients admitted to nursing homes: 34,35 it was found that the prevalence of defined cardiovascular diseases, such as hypertension, decreased in relation to increased dementia severity, 36 although other authors disagree that comorbidity is inversely correlated with the degree of cognitive impairment. 37 Moreover, some studies have shown a relationship between nutritional state and comorbidities. 38 In the present study, such an association was not confirmed, although we found a trend towards a worsening of health conditions in malnourished subjects. Generally, malnourished subjects show greater functional impairment in ADL and higher care need. 39 Our study results are in line with the existing literature; in most studies, disability has been found to be associated with both anthropometric and biochemical parameters related to malnutrition. 40 Cereda et al 41 showed that poor functional status correlates with a low MNA score. Moreover, in the present study, no correlation was found between mood disorder and nutritional state, while, in another study, depression was considered a risk factor for malnutrition in the institutionalized elderly. 42 Since 52% of malnourished patients suffered from severe cognitive impairment in the present study, it was not possible to administer the GDS to them. The care need in these patients, calculated by RUG-III and expressed in minutes of assistance, shows a weak correlation with the prevalence of malnutrition, although it does not reach statistical significance. These data can be explained by considering that the admission of patients to nursing homes follows criteria of homogeneous resource utilization. 22 Patients were reevaluated after 6 months, and increased mortality was found in malnourished subjects when compared with patients who had a better nutritional status. It is well known that, in a geriatric population, loss of >4% of body weight is an independent factor of morbidity and mortality.
43 The incidence of medical problems associated with malnutrition in elderly people in nursing homes is 27%, whereas it is about 16% in patients who are well nourished, although mortality is three times higher in the former. 44 In elderly patients, the effectiveness of nutritional interventions is reduced and recovery from malnutrition is difficult to achieve. 45 In patients with dementia, a state of malnutrition must be prevented, or at least improved, by an early and appropriate intervention strategy. 46,47 The results of the present study show that it is necessary to establish appropriate nutritional interventions in patients with mild-to-moderate dementia; in fact, these patients have more opportunities for therapeutic response than patients suffering from severe dementia. In ANASTE Calabria LTC facilities, the multidisciplinary team is trained to carry out the assessment of nutritional state. This team is also trained to identify both cognitive impairment and worsening of functional state as risk factors for malnutrition. Conclusion Malnutrition plays an important role in the progression of cognitive decline; early recognition and treatment of malnutrition, or of the risk of malnutrition, are important preventive measures to increase the quality of care and quality of life of patients with dementia.
2016-05-12T22:15:10.714Z
2014-01-09T00:00:00.000
{ "year": 2014, "sha1": "13a98d6de5cd61b19889bafeb4e479e5a3cf64d2", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=18639", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "13a98d6de5cd61b19889bafeb4e479e5a3cf64d2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262101116
pes2o/s2orc
v3-fos-license
Experimental and numerical study on the withdrawal behaviour of lag screws on wood side-grain. In wood connections using lag screws as mechanical fasteners, the tension force transfer mechanism occurs through the interaction between the screw threads and the wood surface. It is necessary to understand the screw withdrawal behaviour in order to build a simple but representative interaction model instead of modeling the geometry of the screw threads. This study aims to evaluate, experimentally and numerically, the withdrawal failure modes, stiffnesses, and capacities of lag screws on wood side-grain. The experiment is conducted on the withdrawal of lag screws with different diameters and penetration lengths in Meranti wood cubes. The numerical analysis is carried out using the ABAQUS finite element program and compared with the experimental results. Both the experimental test and numerical analysis results show that the failure of all specimens occurred in the wood around the hole, not due to slip between the lag screw and the wood's surface, which validates the proposed interaction model between the screw and the wood surfaces. The wood's effective material stiffness and strength in the numerical models are obtained by matching the load-displacement curves with the experimental results. The effective Fy value ranges between 18.5 MPa and 24 MPa, while the effective E value ranges between 115 MPa and 680 MPa. Introduction Wood and engineered wood are well known to society as construction materials. Their use in structural applications involves connectors. The connection between one block of wood and another needs to be reviewed and designed as well as possible, because, generally, the failure of wood construction happens at the connections. Connectors can be mechanical fasteners such as nails, spikes, screws, bolts, lag screws, drift pins, staples, and metal connectors of various types [1]. For these mechanical connectors, there are two directions of force on the fastener, withdrawal and lateral, as well as the combination of those two directions. A connector with withdrawal behavior is loaded axially, that is, by a force acting in the same direction as the fastener axis. On the other hand, a connector with lateral behavior is loaded by a force acting perpendicular to the fastener axis. One of the most frequently used mechanical fasteners is the lag screw. With a lag screw, the force transfer mechanism in the connection occurs through the geometry of the screw thread and the wood's surface.
The design parameters of a lag screw as a connector in wood were obtained by experimental and numerical means. The design parameters are represented by the withdrawal load and displacement. The maximum direct withdrawal load of lag screws from the side grain of the wood may be computed as [2] W = 27.9G^(3/2)D^(3/4) (1) where W is the withdrawal load per unit length (N/mm), G is the specific gravity of the wood, and D is the shank diameter (mm). The withdrawal load value must be multiplied by the correction factors that affect the connector capacity. Those factors account for load duration, water density, temperature, and wood end-grain effects. As a result, the maximum direct withdrawal load of lag screws from the side grain of wood, after correction by these factors, may be computed as [2] P = W CD CM Ct Ceg pt (2) where P is the maximum withdrawal load (N), W is the withdrawal load per unit length (N/mm), CD is the duration factor, CM is the water density factor, Ct is the temperature factor, Ceg is the wood end-grain factor, and pt is the penetration length (mm) of the threaded part. Numerical modelling of the connector is needed to obtain the design parameters numerically. Geometrical modelling of the screw threads is inefficient because it requires a large number of review points on the threads. In that case, interaction modelling between the screw thread and the wood's surface, also known as an interface element, is needed. This research aims to numerically model the withdrawal behavior of the connector in a way that is representative of actual conditions and to evaluate the withdrawal load and displacement in experiments and numerical models. Experimental test The experimental test is done with a Universal Testing Machine (UTM). The wood used in the experiment is Red Meranti (Shorea spp.). Water content and specific gravity were tested on the timbers, with an average water content of 14.9% and an average specific gravity of 0.707. The timbers were then cut into test specimens with dimensions of 50 mm x 50 mm x 50 mm. The experiment uses a steel tool that holds the wood specimen against movement in the vertical direction and grips the head of the lag screw so that the lag screw can move at the test speed of the UTM. The experimental test schematic is shown in Fig. 1, with the UTM holding the steel tool at points (1) and (2). The experimental test was carried out with variations in the dimensions of the lag screw and the penetration length. Preliminary calculations were carried out according to the equations and formulas in SNI 7973:2013 [2]. The calculation assumes the value of the duration factor, CD, to be 0.8. The resistance factor, φz, is taken as 1 for the calculation (for design, φz = 0.65). The water content factor CM = 1 because the timbers are dry. The temperature factor Ct is assumed at the lower bound of 0.5. The calculations are done for three variations of diameter, with each diameter having two variations of penetration length. The estimated maximum withdrawal loads are shown in Table 1.
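A worked sketch of the preliminary capacity estimate behind Table 1, using Eqs. (1) and (2) with the factor values stated above, is given below. The per-unit-length coefficient 27.9 is the SI conversion of the NDS-style expression that SNI 7973:2013 adopts and should be verified against the code text itself; the example values follow the D10 screw in this study.

```python
# Worked sketch of the preliminary withdrawal capacity estimate.
# The coefficient 27.9 (N/mm, D in mm) is an assumed SI conversion of
# the NDS-style formula behind SNI 7973:2013; verify against the code.

def withdrawal_capacity(G, D_mm, pt_mm, CD=0.8, CM=1.0, Ct=0.5, Ceg=1.0,
                        phi_z=1.0):
    """Estimated maximum withdrawal load P (N) per Eqs. (1) and (2)."""
    W = 27.9 * G**1.5 * D_mm**0.75          # withdrawal load per unit length (N/mm)
    return phi_z * W * CD * CM * Ct * Ceg * pt_mm

# Red Meranti specimen: G = 0.707, D10 screw (9.53 mm), 28 mm penetration
print(f"P = {withdrawal_capacity(0.707, 9.53, 28):.0f} N")
```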
The test for determining the bending yield moment of the lag screws was carried out with the UTM, referring to ASTM F1575-03 [3]. The test was done on three samples of lag screws with a diameter of 9.53 mm (D10). The bending yield values of the lag screws were taken as the average of the three samples and are shown in Table 2. The bending yield values of lag screws with diameters of 7.94 mm (D8) and 6.35 mm (D6) are assumed to be the same as those of the 9.53 mm diameter lag screw (D10). Based on the test, the average bending yield strength of the lag screws used in the calculations and modeling is 899.71 MPa. The withdrawal test refers to ASTM D1037-06 [4]. The UTM applies a tensile force to the lag screw embedded in the wood specimen. Two LVDT (Linear Variable Differential Transformer) sensors are used to measure displacement. The LVDT sensor on channel 1 measures the displacement of the test specimen relative to the UTM, while the LVDT sensor on channel 2 measures the vertical displacement of the lag screw relative to the wood surface. The test speed is set at 1.0 mm/min. The test is discontinued when failure of the connection occurs, indicated by a drastic drop in the load curve. The UTM records the withdrawal load on the connection, while the LVDT sensors record the displacement that occurs during the test. Based on the test data, load-displacement curves were made for the six specimens. The withdrawal test can be seen in Fig. 2. Numerical modelling and analysis The finite element method is a numerical analysis technique for obtaining approximate solutions to a wide variety of engineering problems [5]. Finite elements are small parts of the actual structure. The basic concept of the finite element method is to discretize the structure, that is, to represent the entire structural system as a series of finite elements. The model of the withdrawal connection uses two kinds of materials: Red Meranti wood and the steel of the lag screw. Modeling is done by entering the parameters of the wood as an isotropic material. The parameters of the wood as an isotropic material use the elastic modulus, Poisson ratio, and compressive strength in the tangential direction (perpendicular to the grain), taken from secondary data from previous research [6]. The wood model's parameters are shown in Table 3, and the material stress-strain curve of the wood model is shown in Fig. 3; in the plastic region, the tangential compressive strength, Fc⊥ = 7.17 MPa, is taken as Fy. Wood plasticity is defined in ABAQUS by including very small values of stress and strain after defining the tangential compressive strength as Fy. The stress and strain values are defined as very small because the stress in the wood material will drop drastically after the connection fails. Connection behavior in the plastic region is not reviewed in the modeling. In the connection model, interlocking between the wood and the screw is assumed to be perfect interlocking. In the ABAQUS modeling, the interface between the wood and the screw is defined as a tie constraint, which attaches two separate surfaces so that there is no movement between the attached surfaces. The hole's surface in the wood is selected as the master, and the lag screw's surface is selected as the slave. Fig. 5 shows the parts in the model designated as master and slave.
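A minimal sketch of the wood material definition described above is given below: a bilinear, elastic-perfectly-plastic curve with the tangential compressive strength taken as Fy and a near-flat post-yield branch. The elastic modulus here is a placeholder, since the Table 3 value did not survive extraction; the tiny post-yield increments mirror the "very small values of stress and strain" entered in ABAQUS.

```python
# Sketch of the elastic-perfectly-plastic stress-strain data for the
# wood model. E is a placeholder (the Table 3 value is not recoverable);
# Fy follows the tangential compressive strength quoted in the text.
import numpy as np

E = 300.0        # elastic modulus (MPa), placeholder value
Fy = 7.17        # tangential compressive strength Fc_perp taken as Fy (MPa)

eps_yield = Fy / E                               # elastic limit strain
strain = np.array([0.0, eps_yield, eps_yield + 1e-4])
stress = np.array([0.0, Fy, Fy + 1e-3])          # near-flat plastic branch

# ABAQUS *Plastic input expects (yield stress, plastic strain) pairs:
plastic_table = [(Fy, 0.0), (Fy + 1e-3, 1e-4)]
print(plastic_table)
```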
The part of the wood held by the steel tool cannot deform, so the corresponding wood surface is given a boundary condition of zero displacement on all three axes, that is, it cannot move in any direction. The withdrawal load on the model is applied by imposing a boundary condition in the form of a maximum displacement of 2 mm in the direction of the global Z axis on the top surface of the lag screw. The boundary conditions and loading on the model are shown in Fig. 6. The geometry of the model is made according to the conditions in the laboratory, and then the model is meshed. Meshing is the step of dividing a structural system into finite elements. Meshing is done on the wood parts and the lag screw. The element shape in Mesh Controls is hex-dominated, where the discretization prioritizes hexahedral shapes. The mesh size in the wood is 5 mm throughout the wood and 1 mm around the hole. The mesh size of the lag screw section is 1 mm along the length of the lag screw. The meshing results of the model are shown in Fig. 7. The model is then run to obtain the load-displacement curve of the numerical model. Interface element validation After the experimental tests, the test specimens were cut in half. The test specimens after the withdrawal test can be seen in Fig. 8. It is seen that the wood interacting with the lag screw deforms and the wood grain is lifted. This proves that the failure of all specimens occurred in the wood around the hole, not due to slip between the lag screw and the wood's surface. Thus, the assumption of perfect interlocking between the wood and the screw in the numerical model is appropriate. Failure modes In the experimental test, connection failure occurred in the wood around the hole, not through material yielding in the steel. In the numerical analysis, the PEEQ (equivalent plastic strain) contours at the maximum withdrawal load can be seen in Fig. 9. Fig. 9 shows that deformation occurs in the wood at the maximum withdrawal load in all models. Hence, it can be concluded that failure occurs in the wood around the hole, not through material yielding in the lag screw. This indicates that the failure modes in the experimental tests and the numerical analysis are similar: failure in the wood. Load-displacement curve evaluation The load-displacement curves were obtained from the data in the UTM and the LVDT sensors in the experimental tests. The load-displacement curves obtained from the experimental tests and the numerical analysis are compared to evaluate whether the numerical model is representative of the actual situation. A comparison of the load-displacement curves between the experimental test and the numerical modeling was carried out, as well as of the connection capacity against the calculation according to SNI 7973:2013 [2]. Fig. 10 shows a comparison of the load-displacement curves of the specimens. From the load-displacement curve, the peak point of the curve can be taken as the connection capacity. The connection capacity is the maximum load the connection can hold before it fails. The connection capacity values for each configuration are shown in Table 4. The results show that the connection capacity in the experimental test is much greater than the connection capacity in the numerical analysis and the preliminary calculations based on SNI 7973:2013 [2]. This indicates that the equation from the code is sufficiently conservative.
Effective Fy and effective E The connection capacity value in all modeling configurations is much smaller than the connection capacity value in the experiment. The wood's effective material stiffness and strength in the numerical models are obtained by matching the load-displacement curves with the experimental results. The effective Fy and effective E are the values of the wood model parameters for which the connection capacity is close to the value produced by the experimental test. The effective Fy and E values for each configuration of the numerical model are shown in Table 5. The effective Fy value ranges between 18.5 MPa and 24 MPa, while the effective E value ranges between 115 MPa and 680 MPa. Conclusion Based on the analysis, it can be concluded that perfect interlocking can be used as the interaction model between the screw and wood surfaces, as the numerical and experimental results are in line. In both the experimental test and numerical analysis results, the failure of all specimens occurred in the wood around the hole, not due to slip between the lag screw and the wood's surface. The withdrawal equation based on the current code is sufficiently conservative, as the connection capacity in the experimental test is much greater than that in the numerical analysis and the preliminary calculations based on the code, SNI 7973:2013. The wood's effective material stiffness and strength in the numerical models are obtained by matching the load-displacement curves with the experimental results. The effective Fy value ranges between 18.5 MPa and 24 MPa, while the effective E value ranges between 115 MPa and 680 MPa. The experimental test facilities and software license were provided by the Department of Civil Engineering, Parahyangan Catholic University (UNPAR). Fig. 3. Stress-strain curve for the Red Meranti wood model. The steel material of the lag screw is defined based on the bending yield test carried out previously; the modulus of elasticity for steel is 200000 MPa and the Poisson's ratio is 0.3, with the lag screw parameters in the modeling taken from the true stress-strain curve in Fig. 4. Fig. 6. Boundary conditions and loading on the model. Calibrating the models with the effective Fy and E values produces a load-displacement curve for each model, as shown in Fig. 11. The load-displacement curves in the elastic region of the numerical models are sufficiently representative of the results of the experimental tests. Fig. 11. Load-displacement curves with calibrated E and Fy: (a) D6P10; (b) D6P20; (c) D8P15; (d) D8P26; (e) D10P15; (f) D10P28. Table 1. Estimated maximum withdrawal load. Table 3. Wood parameters in modeling. Table 5. Effective Fy and E values.
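The effective-parameter search described above can be expressed as a simple grid calibration: sweep candidate (E, Fy) pairs and keep the pair whose simulated capacity best matches the experimental one. In the sketch below, run_model() is a hypothetical stand-in for an ABAQUS job, stubbed here with a closed-form surrogate, and the target capacity is illustrative.

```python
# Sketch of the effective (E, Fy) calibration by grid search.
# run_model() is a hypothetical stand-in for a finite element run.
import itertools

def run_model(E, Fy):
    """Stub returning a connection capacity (N); illustrative surrogate only."""
    return 50.0 * Fy + 0.5 * E

def calibrate(P_exp, E_grid, Fy_grid):
    # Keep the (E, Fy) pair whose simulated capacity is closest to P_exp
    return min(itertools.product(E_grid, Fy_grid),
               key=lambda pair: abs(run_model(*pair) - P_exp))

E_grid = range(100, 700, 5)                     # MPa, spans the reported 115-680 MPa
Fy_grid = [18.5 + 0.5 * i for i in range(12)]   # MPa, spans 18.5-24 MPa
print(calibrate(P_exp=1200.0, E_grid=E_grid, Fy_grid=Fy_grid))
```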
2023-09-22T15:05:50.556Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "d2fee5b93a16e7895b4919aaca4262e680b2c131", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/66/e3sconf_iccim2023_05008.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "09a16a91f48ba0d901891a455085047cb5825770", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
226402963
pes2o/s2orc
v3-fos-license
Project Management and Agile Technology in Environmental Science and Sustainable Development in the “University – Employer – Region” System In this article, the authors present the outcomes of three years of their scientific work on the formulation of a research hypothesis, the formation of project research groups, and the presentation of the outcomes to groups of “customers”: employers and/or regions. An interdisciplinary approach was applied in this work, which allowed the integration of environmental, economic, and social research methods. The objectives of the study were to determine the range of relevant research topics in environmental science and sustainable development and to form cases; to carry out personal and collective work in the project groups; to form in students the skill of working with topic experts and documents; and to verify the work and present it to the customer. The youth modeling of international and national processes and events, the project laboratories, and the cases obtained in the course of the study are presented in this work. Introduction The relevance of the presented work lies in adapting the topic of education for sustainable development to various teaching forms and methods. An interdisciplinary approach, integrating the economic, social, and environmental agendas, was applied in the research within the project works [1,2]. The project approach to teaching is dictated by the active transition of universities to the ideology of social projecting and the active position of universities in fulfilling their “third mission”: social responsibility to the region and the entire country. For example, frontier universities in Russia (ten federal and some core universities) have switched to a system of project-based education, where the project is a separate academic discipline valued in “credits” awarded to the student, and the project is presented to a potential “customer” (the employer or the region) at the end of the research. Universities thus face the problem of finding potential “customers” that can correctly break down their large research projects into small research items (parts) and offer them to universities as small project tasks “worth” two to six credits, allowing each student to choose an interesting subject and a depth of immersion suited to their personal educational path. Such project-based work is often implemented using the eduScrum technology [3]. The process of higher education currently requires the introduction of Agile technologies and the eduScrum educational environment. With this purpose, it was proposed to introduce a project-based approach in various types of formal education as well as in informal approaches [4]. Students are immersed in the subject of the sustainable development goals (SDGs), conduct research on environmental issues while determining which objectives of which SDGs they correspond to, and explore the possibility of achieving the SDGs by various methods. Teachers and employers should be able to evaluate the (educational) product result and develop mechanisms for the most flexible work with it [9-16]. The elements of the Agile Manifesto are relevant, because the “customer” cannot always clearly state the idea of the study and the form in which the expected results should be presented.
Methods A methodological approach has been developed that is intended to be more or less universal and can be recommended to Russian universities as a rational form of project-based work on the topic of sustainable development, with a special emphasis on the environmental cluster of the SDGs, because global and regional environmental problems are considered the most significant for the transformation of society, especially in the face of the increasing instability amid the global pandemic. The methodology consists of a set of steps (Figure 1): small project teams work within formal education, while medium and large teams work using an informal approach. Results and discussion Twenty major projects have been implemented over three years, which allows the works to be presented in clusters that will help the participants in further work to formulate their own research and apply the methodology in their project fields (Table 1). The event was held at the invitation of the Government and the Governor of the Irkutsk region. Six working groups on the problem areas of the region participated in the event. Recommendations were developed by the youth government doubles for the current Government of the Irkutsk region. A mechanism was established for a dialogue on the SDGs between decision makers in the region and the youth. The jury was headed by the Director of the UN Information Center in Moscow. Arctic Council Youth model of the project office “Sustainable development of the Russian Arctic”. The event was held as part of the International Forum “Days of the Arctic in Moscow” on November 22, 2018. Six of the largest Moscow universities each selected a Russian region of the Arctic zone and presented their solutions. A total of nine teams participated (equal to the number of the Russian Arctic regions). The total number of players was 100 people, with more than 100 spectators. The task for the participants contained three blocks: to conduct a strategic analysis according to the Methodology of the Ministry of Economic Development of the Russian Federation; to formulate regional indicators for achieving the UN SDGs along five vectors (economic security; energy and transport security; sustainable environmental management and environmental security; food and agriculture security; and personnel training for the region); and to develop proposals for the implementation of roadmaps of the National Technology Initiative. The Youth Declaration on Sustainable Development of the Russian Arctic was signed in support of the 2030 Agenda for Sustainable Development. Form of organization: Project laboratory Regional refraction of the SDGs “The UN sustainable development goals: Volga dimension.” Youth strategy “Volga – 2030”. Three universities were involved. The goal of the project office was to identify the focus of complex issues related to environmental management in the regions and to develop a roadmap for overcoming them for each of the identified project regions in order to achieve the UN SDGs. Global energy transition Youth model of sustainable energy supply in the federal districts of the Russian Federation, “Sustainable energy security of the Russian Federation 2030”. The event was held with the assistance of the Organizing Committee of the Russian Energy Week and the BRICS Youth Energy Agency. The model involved more than 100 students from four universities.
Eight teams were created, each representing one of the selected federal districts of the Russian Federation; a full strategic analysis of the current situation was carried out; and the Strategy for the sustainable energy development of the federal districts of the Russian Federation and a roadmap for its achievement were developed. The Youth Declaration on Sustainable Energy Security of the Russian Federation was signed.

Agenda 2030: Youth team of the country. "The UN sustainable development goals: federal dimension". The event was held at the invitation of the Federal Agency "Russian Youth" and involved leaders of the youth governments of the Russian Federation from 76 regions. More than 160 people representing "doubles" of the regional governments were involved, each working on their own real federal district. The work of the working group of the youth governments from the eight federal districts of the Russian Federation was simulated to draw up a roadmap for achieving the SDGs in the federal districts. The Memorandum on the Promotion of the SDGs in the regions of the Russian Federation was signed.

Academic activities of universities: "Opportunities and prospects of science for achieving the UN sustainable development goals: thematic reflection of the SDGs". The event was held at the invitation of the Ministry of Education and Science of the Russian Federation. The student scientific societies of 11 federal universities were involved. More than 250 participants took part in the event: 11 teams of student scientific societies from Russian universities. The scientific achievements and projects of Russian universities contributing to Russia's achievement of the SDGs were identified. A mechanism for cooperation among the student scientific societies of Russian universities to achieve the SDGs at the national and regional levels was proposed. The Declaration on the Promotion of the SDGs through Scientific Activities in the System of Student Scientific Societies in the Russian Federation was signed.

Form of organization: Case study approach. Global climate agenda: World Café "Operation Adaptation: Climatic Vagaries and Sustainability of Cities". The event was held as part of the III Climate Forum of Cities (Moscow) in September 2019. The facilitation was based on the need to analyze climatic risks arising in cities located in various natural zones. The players were asked to analyze the potential natural and man-made risks for cities within each of the natural zones of the Russian cities; to study the set of problems in the economic complex of cities; to study the set of problems in the natural complex of cities; and to explore the best Russian and foreign practices for overcoming urban problems. The results were presented in the form of a presentation report from each team. The work of the teams was evaluated by 12 experts, who commented on the results of the facilitation and expressed their recommendations to the players of the World Café. Teams from five Russian universities (37 people) participated in the event.

Agenda SDG 15, environmental protection and implementation of the national project "Environmental Science": On-site research-to-practice school of the MGIMO of the MFA of the Russian Federation, "Environmental tourism for sustainable development of the Omsk region". The school was organized jointly by the MGIMO of the MFA of Russia, the Omsk branch of the Russian Geographical Society, and the Omsk Quantorium.
A total of 50 participants (students of higher educational institutions, vocational educational institutions, and high schools) took part in the event over seven days (two lecture days, two on-site days, two days of case studies, and one day for the project presentation). The work was performed in four thematic working groups. The Government of the Omsk region and the local branch of the Russian Geographical Society submitted a plan for the implementation of sustainable tourism eco-routes.

According to the methodology for arranging the works, the university either formulated a proposal to the participants in the event (to the Organizing Committee) or received an invitation from a potential customer to formulate a relevant topic and train teams for a major event. Case studies such as "Complementarity of the environmental agenda of the Russian regions with the UN SDGs" and "Implementation of the national project 'Environmental Science': the possibility of implementing the Sustainable Development Goals within the national agenda in Russia" were deeply integrated into the educational process.

Conclusion

The pedagogical technology has been developed in three formats to promote the SDG agenda in Russia: youth modeling, project laboratories, and case studies. In each case, the choice is made jointly by the event organizer (or the university) and the customer. The obtained result is presented in writing (as a review, a report, an analytical note, etc.), and a public presentation of the project is also provided. As a result, students get the opportunity to undergo practical training or internships (with remuneration) in the organizations that took part in the project activities. The Youth Declaration on Supporting the SDGs has also been signed as a result. A range of relevant research topics in environmental science and sustainable development has been identified, and a methodology for formulating game cases has been developed. The methodology for the implementation of personal and collective work in project teams has been developed, students' skills in working with relevant experts and documents have been formed, the works have been verified, and the process of presenting them to customers has been worked out.
Truncus Arteriosus - Modified Van Praagh's Type 3A and Anesthesia: A Case Report

One of the rare complex congenital anomalies is truncus arteriosus - modified Van Praagh's type 3A. Survival of a child with this type of truncus arteriosus beyond infancy without surgical treatment is unreported. Anesthesiologists anesthetize children with complex congenital heart disease during cardiac catheterization studies, and the final diagnosis of such children is often made only after the anesthesia and the cardiac catheterization study. We report a 12-year-old with truncus arteriosus with absent right pulmonary artery and main pulmonary artery, with multiple major aorto-pulmonary collateral arteries (MAPCAs) for the right lung, who is surviving without surgical treatment. A 12-year-old girl was brought by her parents to Meenakshi Hospital at Thanjavur (India) with complaints of shortness of breath during respiratory infections. The patient had been diagnosed with congenital heart disease at 6 years of age and was not on any treatment. There was no history of cyanotic spells. Her echocardiography revealed tetralogy of Fallot, situs solitus, levocardia, a large malaligned ventricular septal defect (VSD) with bidirectional shunt, VSD size 12 mm, pulmonary atresia, moderate tricuspid regurgitation (TR pressure gradient, 103 mmHg), a thickened aortic valve, grade II aortic regurgitation, right ventricular hypertrophy, intact interatrial septum, dilated right atrium/right ventricle, dilated coronary sinus, persistent left superior vena cava, good biventricular function (65%), multiple MAPCAs, no coarctation of the aorta, normal venoatrial and atrioventricular connections, normal pulmonary venous drainage, and no pericardial effusion. She underwent a cardiac catheterization study under anesthesia for further evaluation. Her final diagnosis was truncus arteriosus with absent right pulmonary artery and main pulmonary artery, with multiple MAPCAs for the right lung (truncus arteriosus - modified Van Praagh's type 3A). An anesthesiologist may encounter such patients during a cardiac catheterization study or emergency non-cardiac surgery, where an understanding of the complex anatomy (the aorta, left pulmonary artery, and coronary artery all arising from the common arterial trunk, the truncus arteriosus) and the physiology of their circulation helps ensure safe anesthesia. From our report, we conclude that intravenous ketamine along with regional analgesia is a safe option for sedating such patients for cardiac catheterization studies.

Background

Anesthesiologists anesthetize children with complex congenital heart disease during cardiac catheterization studies. The final diagnosis of such children is often made only after the anesthesia and the cardiac catheterization study. We report a 12-year-old with truncus arteriosus with absent right pulmonary artery and main pulmonary artery, with multiple major aorto-pulmonary collateral arteries (MAPCAs) for the right lung, who is surviving without surgical treatment. A good clinical history and assessment help in the safe management of such patients.

Case presentation

A 12-year-old girl was brought by her parents to Meenakshi Hospital at Thanjavur (India) with complaints of shortness of breath during respiratory infections. The patient had been diagnosed with congenital heart disease at 6 years of age and was not on any treatment. There was no history of cyanotic spells. Her height was 130 cm and weight was 22 kg. She had mild mental retardation and pandigital clubbing. Her heart rate was 88/min and blood pressure 90/60 mmHg.
SpO2 (oxygen saturation by pulse oximetry) was 78%. Under IV sedation with glycopyrrolate 0.1 mg, midazolam 0.4 mg, and ketamine 10 mg boluses given three times at 15-min intervals, plus local anesthetic infiltration with 4 ml of 1.5% lignocaine, she underwent the procedure. There was right ventricular hypertrophy with a large ventricular septal defect and an absent right ventricular outflow tract. An aortic root angiogram revealed the truncus arteriosus, the common arterial trunk. There was no coronary anomaly. A left pulmonary artery arose from the posterolateral wall of the common arterial trunk, distal to the common arterial trunk valve. The common arterial trunk continued as the ascending aorta and a right aortic arch with normal origin of the major arteries, continuing as the descending aorta (Fig. 1). The main pulmonary artery, right pulmonary artery, and ductus arteriosus were absent. The right lung was supplied by three MAPCAs (major aorto-pulmonary collateral arteries), one for each lobe. The MAPCA supplying the middle lobe had 75% ostial stenosis. She was finally diagnosed as having complex congenital heart disease: truncus arteriosus with absent right pulmonary artery and main pulmonary artery, with multiple MAPCAs for the right lung. The patient was advised surgical management at a higher center, but the parents deferred surgery. At follow-up after 1.5 years, the child had not undergone any surgical intervention and remained symptomatically unchanged.

The nomenclature for truncus arteriosus was reviewed for the purpose of establishing a unified reporting system by Jacobs (Jacobs, 2000). The patient discussed here comes under truncus arteriosus hierarchy level 3: truncus arteriosus with absence of one pulmonary artery (large aorta type with absence of one pulmonary artery). Retrospectively, the tricuspid "aortic" valve turned out to be the common arterial trunk valve. Truncus arteriosus babies are symptomatic in infancy, and mortality is high if they are not operated on. Rarely do they reach adolescence and adulthood without treatment. Patients reaching adolescence and adulthood without any treatment have been reported in other types of truncus arteriosus (Mittal et al., 2006; Abid et al., 2015). To our knowledge, on reviewing the literature, a patient with this type of truncus arteriosus (modified Van Praagh's type 3A with one absent pulmonary artery) reaching adolescence has not been reported. Our patient's survival throws light on the natural history of this rare complex congenital heart disease. Embryologically, these lesions are secondary to conotruncal anomalies and a left sixth arch anomaly. The condition is usually associated with DiGeorge syndrome, but genetic analysis of our patient was not done.

(Fig. 1 Cardiac catheterization study. a Antero-posterior view. b Lateral view. TA, truncus arteriosus, the common arterial trunk; Asc Ao, ascending aorta; LPA, left pulmonary artery; MAPCAs, major aorto-pulmonary collateral arteries; Co Ar, coronary artery. The aorta, left pulmonary artery, and coronary artery all arise from the common arterial trunk, the truncus arteriosus.)

Conclusions

An anesthesiologist may encounter such patients during a cardiac catheterization study or emergency non-cardiac surgery, where an understanding of the complex anatomy (the aorta, left pulmonary artery, and coronary artery all arising from the common arterial trunk, the truncus arteriosus) and the physiology of their circulation helps in providing safe anesthesia.
The absence of a history of cyanotic spells in a patient with cyanotic heart disease might give a clue to the diagnosis of a non-dynamic obstruction or pulmonary atresia. Safe anesthesia for such children presenting for non-cardiac surgery or procedures has not been described. From our report, we conclude that intravenous ketamine along with regional analgesia is a safe option for sedating such patients for cardiac catheterization studies. Written informed consent was obtained from the patient's father for publication of this case report and the accompanying images.
Ethical Guidelines for Artificial Intelligence in Healthcare from the Sustainable Development Perspective

The use of Artificial Intelligence (AI) in a variety of areas has encouraged an extensive global discourse on the underlying ethical principles and values. With the rapid AI development process and its near-instant global coverage, the question of applicable ethical principles and guidelines has become vital. AI promises to deliver substantial advantages in economic, social, and educational fields. Since AI is also increasingly applied in healthcare and medical education, ethical application issues are growing ever more important. The ethical and social issues raised by AI in healthcare overlap with those raised by personal data use, function automation, reliance on assistive medical technologies, and so-called 'telehealth'. Without well-grounded ethical guidelines, or even a regulatory framework, for AI in healthcare, several legal and ethical problems can arise at the implementation level. In order to facilitate further discussion about the ethical principles and responsibilities of the educational system in healthcare using AI, and to potentially arrive at a consensus concerning safe and desirable uses of AI in healthcare education, this paper evaluates self-imposed AI ethical guidelines, identifying the common principles and approaches as well as the drawbacks limiting the practical and legal application of internal policies. The main aim of the research is to encourage the integration of theoretical and policy studies on sustainability issues in the relationship between healthcare and technology, from the AI ethics perspective.

Introduction

AI's core application is to autonomously perform tasks that traditionally require human intelligence. According to many prominent thinkers of our time, AI will deeply impact society in a number of aspects sooner rather than later. The World Economic Forum in 2019 concluded that "as rapid advances in machine learning increase the scope and scale of AI's deployment across all aspects of daily life, and as the technology can learn and change on its own, multi-stakeholder collaboration is required to optimize accountability, transparency, privacy and impartiality to create trust." (AI report, 2020). The range of AI tools that can be used in the healthcare sector is growing extremely fast. Patients' electronic medical records are used for machine learning. The exponential growth of available medical data allows AI algorithms to rapidly improve in precision at detecting various anomalies and, over time, to become more accurate than human medical practitioners, especially in the fields of cardiology and oncology. It can be concluded that technological change, including AI, has set a course for significant transformations in the healthcare system, creating a new technological era in medicine. The integration of technology into healthcare makes it necessary for researchers and education providers to anticipate the possible ethical issues (Organisation for Economic, 2019). Importantly, in the majority of cases the guidelines stipulate that ethical issues must be evaluated when AI-based educational activities are planned (Ethics for AI, 2020). It must be observed that the sheer number of academic, commercial, and governmental bodies working on ethical AI principles in healthcare makes it difficult to track their actual impact on the decisions of AI developers and the AI-enabled tools produced.
Given the complete lack of enforcement mechanisms, deviations from the self-imposed guidelines are difficult to identify (OpenAI, 2018). The lack of empirical evaluation of the societal effects of AI tools' implementation makes it difficult to measure the effectiveness of such internal policies, not to mention the risks related to commercial rewards: it cannot be excluded that ethical guidelines may serve marketing purposes (Partnership on AI, 2018).

Methodology

The purpose of the research is to identify the main theoretical problems of ethical guidelines for artificial intelligence in healthcare from the sustainable development perspective. The methods of analysis and induction are used. The authors provide a legal analysis considering international regulations. To complete the research, the authors use general scientific methods such as synthesis, modelling, the comparative method, and the deductive method. The theoretical base of the study consists of contemporary international scientific works and articles by national and international authors, including scientists who have made a global contribution to the development of AI ethics. The object of the research is to encourage the integration of theoretical and policy studies on sustainability issues in the relationship between healthcare and technology, from the AI ethics perspective.

Research

3.1. Applications of AI in healthcare education

The definition of AI in relation to healthcare was first used in 1984. AI in healthcare was defined as a particular mechanism of several AI-based programs that can diagnose and recommend certain types of therapy to patients or doctors (Coiera, 1998). The development of technology, particularly machine learning, has created many new opportunities in the field of education, and in healthcare education in particular. In the not-so-distant future, medical students will benefit from AI-powered mixed reality and computer vision solutions that can provide an immersive environment to stimulate interest and understanding, while simulations will encourage student engagement and enhance learning in more intuitive and adaptive ways. As in any other area of study, medical students will benefit from technology instantly connecting professors across the globe to their classrooms or scientific discovery laboratories. University administrative processes will benefit from applying AI to the large amounts of data produced during research and teaching, as well as from performance monitoring tools for students and young medical practitioners that allow more responsive teaching planning and individualization of programs. A number of universities across the globe have already started implementing AI-enabled tools in their research and teaching processes. The Technical University of Berlin in Germany develops AI-powered chatbots using natural language applications to take over routine academic tasks such as grading assignments and answering questions from students. Carnegie Mellon University in the US develops AI-based cognitive tutors for its statistics course with the aim of minimizing student-instructor contact, thus achieving comparable learning results with more effective use of resources. Georgia State University in the US tracks individual student performance to predict evaluations and the need for interventions, allowing students to reach their full potential and preventing them from dropping out.
The University of Aberystwyth in Wales employs an AI technology that performs the scientific process based on its own judgement, from the formulation of hypotheses to the design of experiments, the performance of experiments, and data analysis, finishing with a decision on further research strategy (Chi-Tung Cheng, 2020). It should be mentioned that training programs for medical practitioners play an important role in AI and education from the healthcare development perspective. Science and technology are key values in training prospective medical practitioners. Medical education is in a process of transformation, and new central aspects are coming into play. Medical practitioners must deal with innovations, robotics, AI-based healthcare equipment, and so on. For that reason, theoretical and practical knowledge of AI-based technologies should be part of the educational system. Fast-growing innovation in AI affects medical practice, as well as the training mechanisms implemented for future medical practitioners' sustainable development. As AI technologies in the healthcare system develop very fast, the key issue is what kind of specific technical knowledge medical students should acquire. From that perspective, the new type of knowledge needed by medical students ranges from the technical use and understanding of AI-based technologies to data protection and data security issues. The authors agree with Liam G. McCoy and co-authors, who argue that medical practitioners need to understand AI logic and the basic technological specifics that impact clinical decision-making. Several important skills and kinds of professional knowledge should be provided during medical students' education. Medical students and medical professionals should, during the educational process, learn to understand and identify the situations in which the technology is the better option for a given clinical context. The interpretation and understanding of the information produced by AI-based technologies is another important issue. The results given by AI in particular healthcare situations must be analyzed with accuracy, and information on medical error and clinical inapplicability must be considered. Medical practitioners and medical students should be prepared and educated well enough to be able to explain the results produced by AI-based systems, so that patients and medical practitioners, observing good practice in communication and informed consent between the two parties, are ready for communication and discussion. If a medical practitioner uses AI-provided information, he or she must have the skills to clarify and explain to the patient all the necessary technical as well as medical information provided by the AI (McCoy, Nagaraj, Morgado et al., 2020). Based on the above, a medical practitioner can detect several problems from both technical and ethical points of view; AI can be characterized by technical or ethical non-explainability. Therefore, to ensure the best interests of patients and to provide correct, ethical, and professional services amid fast-growing technological progress in healthcare services, the educational system must be improved. AI-based technologies, and the related rights and ethical issues, will within a short period take one of the central roles in the future healthcare system, where AI will gain greater power as a data provider to medical practitioners and as a driver of AI-based big data processing (Law, Veinot, Campbell, Craig & Mylopoulos, 2019).
As practice shows, innovation-based knowledge combined with medical practitioners' practical knowledge can give better results both for science in the research field and for society (Prober & Khan, 2013). Healthcare professionals have one aim: to take care of patients. AI can help to achieve that aim, but it is important for medical practitioners and medical students to acquire skills based on technical requirements. Therefore, special technological medical schools and medical education technology centres can be created to bring the level of healthcare education more in line with today's situation. It must be mentioned that, in general, the question of AI in healthcare and patients' rights has not yet been resolved from either the ethical or the legal point of view. While the legal aspects of AI can be clarified and the liability issue solved at the national level according to national regulations, the ethical aspect is an international problem and challenge. As the documents show, the standards laying down information on AI use in healthcare are still in progress; at the educational level, however, these standards can be simulated and viewed in the context of existing clinical care standards, quality standards, malpractice rules, and so on. More and more AI-based technologies are entering the relationship between medical practitioners and patients. Patients actively use chatbots, healthcare applications, and the like, and medical practitioners can use these tools as well. But this raises the issue of ethical standards and of AI ethics in general (Paranjape, Schinkel, Nannan Panday, Car & Nanayakkara, 2019). The current question is how the existing educational system and its tasks can be transformed and updated to provide a more effective approach to AI in healthcare. New realities bring new challenges, and AI ethical issues play the central role among them. Medical education nowadays provides many frameworks that need to be updated. According to the High-Level Expert Group's AI Ethics Guidelines, there are several key points. The Expert Group provides the following vital AI ethical principles. AI must be lawful, meaning that AI must respect all applicable national and international laws, regulations, and legal documents. AI must be ethical, meaning that AI must respect ethical principles and values, including from the perspectives of different educational areas. If AI is applied at the medical education or medical training level, the principles of medical ethics must be considered and respected (Ethics guidelines for trustworthy AI, 2019). An important issue from the medical education perspective is the use of AI systems in healthcare processes. These processes must be secure: patients' data is vital, and sensitive information has to be protected at the highest level. Full respect for patients' data protection, and mechanisms ensuring the quality and integrity of the data, must be taken into account. Therefore, AI ethical guidelines for patients' privacy and data protection should be provided at the educational level (Ethics guidelines for trustworthy AI, 2019). Another important AI ethical principle from the medical education perspective is transparency. In the healthcare context, this principle means that the data and AI systems used in healthcare should be transparent: patients must know and understand the process behind AI-provided decisions. Moreover, this must be explained to the patients concerned in a clear and understandable manner. For that reason, medical practitioners must know the concept of the AI system as well as its limitations.
In general, all the AI ethical principles provided by the European Commission's High-Level Expert Group can be implemented at the healthcare level; a special, more detailed interpretation of particular principles is needed. Broadly, the AI ethical guidelines provided by scientists, the European Commission, and others meet the requirements and standards of medical ethics. Nevertheless, technical improvements and special explanations should be provided at the national and international levels to unify the existing policy planning documents.

Global discourse on ethical AI from the Sustainable Development perspective and liability issues

On April 25, 2018, the European Commission adopted the communication "Artificial Intelligence for Europe", which referred to the need not only to assess the impact of new digital technologies, including AI, on the current liability regime, but also to identify and explore possible gaps in the AI liability regime and their potential consequences (Commission Staff working document, 2019). The issue of AI liability is based on AI safety considerations, which are also highlighted in the European Commission's communication "Building Trust in Human-Centric Artificial Intelligence" (Building Trust in Human-Centric Artificial Intelligence, 2019). The communication states that AI systems must have integrated safety and security-by-design mechanisms. Based on the recommendations of the experts of the European Union and European Union policy in general, the responsibility for an AI system as an object should lie with the owner of the object. If an AI technology identifying specific gene mutations from images of tumour pathology, rather than by traditional genome sequencing, makes a mistake, the issue of liability would be considered primarily in light of the nature of the technology's technical error and the owner of the technology. However, it should be noted that the owner of an AI technology may obtain it from a manufacturer who will not always be competent in the technical nuances of AI. Changes to AI technology can also be made by service companies, and the services provided may affect the operation and results of the machine. Therefore, in cases where AI has caused harm to a patient by its decision, there is a debatable question regarding liability: whether it will be assumed by the manufacturer, the service provider, the owner, or the user (Neri, E., Coppola, F., Miele, V. et al., 2020). If the issue of AI liability in the context of treatment is viewed from the perspective of, for instance, product liability, then natural and legal persons have the right to compensation upon the occurrence of the specified conditions (discussed above). In fact, regulation is able to minimize the risk of harm to users and is intended to ensure compensation for harm caused by, for example, defective goods. Despite the existing liability regime and policy direction in general, there are a number of challenges regarding liability in the context of AI-assisted medical treatment. Firstly, the application of AI in medical treatment is a complicated process from a technological and practical point of view, as well as from an ethical point of view. The European Parliament's Committee on Legal Affairs, in its report on Civil Law Rules on Robotics, in paragraph 59(f), recommended considering the possibility of "creating a specific legal status for robots", which could also address ethical issues in healthcare. This also applies to AI robotic technologies used in healthcare.
The report states that the most complex cognitive robots could obtain a new legal status with certain responsibilities, rights, and obligations. Given the rapid development of AI, especially in the field of healthcare, it is clear that scientists, philosophers, futurists, and lawyers will return to the question of the legal status of AI in the near future, specifically in the context of liability and ethics. The legal issues of AI are closely related to ethics. AI ethics is currently associated with various types of "concerns", a predictable and explainable reaction to new technologies. The issues of concern to scientists, lawyers, doctors, other specialists, and society as a whole most often range from AI's ability to communicate kindly to confidence in the non-disclosure of information to third parties. The use of AI in healthcare is forming a new approach that could relieve pressure on healthcare professionals or potentially create competition. Despite the large number of studies and the development potential in the field of healthcare, ethical issues have raised new governance requirements. The most important aspects are the strengthening of ethical principles and the liability of interested parties in the ethical governance system. AI can already accurately diagnose skin cancer and compete with a certified dermatologist (Esteva, Kuprel, 2017). AI can do it faster and more efficiently because it draws on a set of training data that encodes years of health professionals' experience and case analysis. AI can be used in almost any field of medicine and has the potential to contribute to biomedical research, medical education, and healthcare delivery. In order to qualify a problem as an AI ethical problem, it is necessary to identify which actions are considered "right" and "wrong", which are "ethical actions" and "unethical actions", and to understand the concept of ethics in a medical context. In general, ethics is understood as a doctrine of morality, that is, careful and systematic reflection on decisions and behaviour of a moral nature, as well as their analysis in the past, present, and future (World Health Communication Associates, 2009). If ethics is considered in the context of medicine, it means the field of ethics that deals with issues of a moral nature in medical practice, including medical treatment. In turn, an explanation of AI ethics can be found in the European Commission's Ethics Guidelines for Trustworthy AI, which state that AI ethics is a sub-field of applied ethics that addresses the ethical issues raised by the development, deployment, and use of AI. Its main objective is to identify whether AI can improve the living standards of individuals or raise concerns in terms of quality of life or the necessary independence and freedom of people in a democratic society (Guidelines for Trustworthy AI, 2019). It can be concluded from the above that, in essence, both the understanding of ethics in medicine and the ethics of AI are aimed at solving similar issues, yet use different approaches to achieve their goals. An ethical problem, including an AI ethical problem, must be based on the precondition that an action taken in specific circumstances is not morally permissible.
An important aspect of AI ethics is societal values and moral and ethical considerations, which help to determine the specific value priorities of different interested parties in different multicultural contexts, to explain the rationale for a decision, and to guarantee its transparency. However, with regard to morality, there is a question of whether it is possible to grant moral status to AI, especially in medical treatment. Today, AI acts more as a performer of moral action; yet will AI ever be able to experience the moral dignity and, for example, the protection against harm that patients currently experience? It is a sound argument that most people are both performers of moral conduct and subjects of moral interest. But there are exceptions: for instance, infant patients or patients in a coma are only subjects of moral interest. When AI technologies capable of making decisions based on ethical behaviour are developed, the challenge will be whether AI can fall into the category of moral performers, which will also affect the issue of AI liability. Given the complexity of AI, one of the most important issues, especially from an ethical point of view, is to guarantee safety and protection to the patient, the fundamental principle underlying human rights; it is therefore necessary to trace and explore the logic of all actions taken by AI, or the reasons and causes for non-action. This is also important in the context of AI's ethical liability. A significant step in the context of ethical issues is the European Parliament's report with recommendations to the Commission on Civil Law Rules on Robotics (Civil Law Rules on Robotics, 2017). The document referred to focuses more on the ethics of robotics, but similar issues are identified in the field of AI ethics. Specifically, the Ethics Guidelines for Trustworthy AI of the European Commission's AI working group, whose primary aim is to promote trustworthy AI, should be highlighted in matters of AI ethics. Three necessary components should be noted, namely: 1. AI must be legal and comply with all applicable laws and regulations. 2. AI must be ethical, i.e., respect for ethical principles and values must be ensured. 3. AI must be technically and socially sustainable, as AI systems can cause unintentional harm even when the intentions are good (A definition of AI, 2019). The guidelines also set out the main requirements to be met for AI to act ethically and credibly: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability (A definition of AI, 2019). These requirements are necessary to ensure the ethical principles of AI, which are vital in the medical treatment process in the context of the protection of patients' rights. There are several views on the classification of ethical principles in AI, but the generally accepted ones are: 1) the principle of dignity, 2) the principle of privacy, 3) the principle of autonomy, 4) the principle of responsibility, 5) the principle of non-harm, 6) the principle of doing good, and 7) the principle of justice. The AI ethical principles in question are not exhaustive, but comparing them with the generally accepted principles of medical ethics, a clear similarity emerges. Four basic principles dominate medical ethics: the principle of personal autonomy, the principle of non-harm, the principle of doing good, and the principle of justice.
These principles are closely interlinked and complementary, and the ethical principles of AI must likewise be considered in context, in light of their complementary nature. The ethical principles of AI and of medical professionals are based on identical elements, which by their similarity ensure the protection of patients' rights and interests in the ethical context as well. Today, AI is not autonomous and operates under the supervision of medical professionals. It should be noted, however, that the ethics of AI in medical treatment is a relatively new direction, and it is possible that the content of AI ethics will be transformed. The unresolved state of AI ethical issues, which are intricately connected to AI liability, shows that AI cannot replace medical practitioners yet. From that point of view, in the near future medical practitioners will remain the only ones taking all key decisions in healthcare. But AI, as an important digital assistant, will increasingly enter medical practitioners' professional lives. The task of AI is to minimize the probability of medical error; the whole point of introducing AI into medicine is to help medical practitioners and relieve professionals of routine work. Nevertheless, AI in healthcare can be understood from two perspectives: AI as an assistant for patients, and AI for medical practitioners. If patients use AI for treatment procedures, for instance "Dr. Google" (Kłak, Gawińska, Samoliński, Raciborski, 2017), then this is a zone of "responsible self-treatment": the patients are responsible for the results, having agreed to the decisions made by the AI. AI for doctors, in contrast, means that the AI systems usually act as "medical decision support systems", not decision makers.

Conclusion and Implications

The future development of AI ethics in healthcare is a difficult and very important question from the perspective of society's well-being. Considering how convenient and affordable certain AI applications in healthcare have become, it is evident that patients will become more involved in healthcare processes themselves. It can be predicted that medical practitioners will play the role of providers in the treatment process and will collaborate with technology providers and patients in gathering and processing medical data. Medical practitioners will have to be able to explain treatment decisions that are based on AI recommendations. This will in turn require a significantly higher level of understanding by medical staff of the underlying technology, including AI applications, from technological, medical, and regulatory perspectives. Hence, an important role in the sustainable and ethical development of technology in healthcare will be played by the education system. Medical students will not only study the technology's application to healthcare services; medical education will itself use AI to enhance the study process. The implementation of AI solutions in a host of areas, from industry to medicine and from the justice system to education, has encouraged an extensive global discourse on the underlying ethical principles and values. With the rapid AI development process and its near-instant global coverage and deployment, the issue of the ethical principles underlying the algorithms has attracted public attention as vital.
AI delivers a significant increase in productivity in healthcare, with the rise of advanced solutions for using medical data for machine learning, healthcare function automation, assistive medical technologies, and more precise and remote medical diagnosis tools enabling the prevention of diseases. Technology developers in many cases are not bound by the regulatory framework applicable to institutional healthcare providers and their personnel, or to healthcare education providers. In order to narrow the regulatory gap and limit the risks, many AI developers have adopted internal, self-imposed AI ethics guidelines and policies. But such an approach has very clear drawbacks. Patients will most certainly experience difficulty accessing and understanding the mix of applicable government regulations and the private, self-imposed ethical principles of the myriad players involved in the provision of a modern, technologically advanced healthcare service, such as: (i) the developer of the technology that collects the medical samples and transforms the results into data for machine learning, (ii) the AI application developer, (iii) the system operator at the medical centre, and finally (iv) the medical practitioner administering drugs or treatment based on AI recommendations. Without well-grounded ethical guidelines, or even a regulatory framework, for AI in healthcare, several legal and ethical problems can arise at the implementation level. In order to facilitate further discussion about the ethical principles and responsibilities of the educational system in healthcare using AI, and to potentially arrive at a consensus concerning safe and desirable uses of AI in healthcare education, an evaluation of the effects of self-imposed AI ethics guidelines is needed, identifying the common principles and approaches as well as the drawbacks limiting the practical and legal application of such policies. To guarantee the implementation of ethical AI in medical education and in healthcare in general, providers will have to figure out how humans can most effectively perform research and study side by side with systems built by humans but constantly upgraded by the systems themselves. It is likely that AI-enabled technology will take an ever-greater place in medical education, hence the need for regular evaluation of ethical AI guidelines and internal policies, matched by sufficient governmental regulatory intervention in case self-imposed guidelines prove insufficient to guarantee basic human rights. Finally, AI ethics should be closely connected to the dominant principles of medical ethics. AI is not ready to be the main decision maker in healthcare, but it can be a good digital assistant for medical practitioners.
Overweight and obesity and associated factors in adults in a poor urban area of Northeastern Brazil

ABSTRACT: Introduction: The changes produced in the health/disease process, especially in the field of nutrition, corroborate the replacement of nutritional deficiencies by the emergence of excess weight (overweight/obesity). Objective: To analyze the prevalence of, and factors associated with, excess weight in adults living in a poor urban area of Recife, Northeastern Brazil. Methods: This is an analytical cross-sectional study with a sample of 644 adults aged 20 to 59 years. Possible associations between excess weight and demographic, socioeconomic, behavioral, and morbidity factors were analyzed using Poisson regression, with those at p < 0.05 considered statistically significant. Results: The prevalence of excess weight was 70.3%, being lower in the 20-29 age group, higher in the 30-39 age group, and stable in the older groups. In the multivariate regression model, age group, economic class, diabetes mellitus (DM), and systemic arterial hypertension (SAH) were directly associated with excess weight, while weekly bean consumption was inversely associated. The high prevalence of excess weight found presupposes that the poor communities to which these individuals belong are already included in the nutritional transition process under way in the country. Conclusion: The significant levels of overweight/obesity detected in the poor urban area studied impose the need to include this problem as a public health priority in these communities.

INTRODUCTION
The great changes that occurred in the health/disease process from the second half of the twentieth century onward have taken on a very peculiar configuration in the field of nutrition, typified by the overlap of global and specific nutritional deficiencies with the epidemic or pandemic emergence of overweight and obesity1.

Overweight and obesity are characterized by the accumulation of body fat exceeding the accepted standards of anthropometric normality to different degrees, and belong to the group of chronic non-communicable diseases (CNCDs)2. They act as important risk factors for the morbidity and mortality of adult populations, being associated with 63% of the global total of deaths caused by CNCDs; of that amount, 78% of the mortality occurs in middle- and low-income countries3.

The worldwide prevalence of overweight/obesity has shown a rapid and progressive increase in recent decades. Currently, 2.1 billion adults have this condition, which represents almost 30% of the world population. It should also be noted that, from 1980 to 2013, overweight increased by 27.5% among adults4.

METHODS
This is a descriptive and analytical cross-sectional study, based on data from the research "Health, nutrition and assistance services in a slum population in Recife: a baseline study", developed by the Instituto de Medicina Integral Professor Fernando Figueira (IMIP) in partnership with the Nutrition Department of the Universidade Federal de Pernambuco (UFPE) and the Recife City Hall. Data collection was home-based and took place between June and December 2014 in a poor urban area known as the Coelhos community, located in the Boa Vista neighborhood, in the municipality of Recife, capital of the State of Pernambuco.
To calculate the sample, a group of 3,816 adults aged 20 to 59 years was used as the reference population. As estimated by the Primary Care Information System (SIAB) in Recife, the prevalence of overweight and obesity among adults in this age group in the Recife Metropolitan Region of Pernambuco is 51.5%7. An estimation error of 4% and a confidence level of 95% were assumed, and 10% was added to compensate for possible losses, non-response, or questionnaires invalidated by inconsistencies, thus yielding an initial sample of 570 participants (a computational sketch of this calculation is given after this section). The final sample included a total of 644 adults. The number of observations assessed here constitutes a subset of the representative sample of people over 20 years old, calculated to represent the universe of adults in the Coelhos community.

Sampling was probabilistic, and the adults were selected by simple random drawing, without replacement. Pregnant women, individuals with congenital or acquired physical limitations that made anthropometric measurements impossible, and cases with visible edema or psychic disorders that hindered collaboration were excluded.

Weight was measured on a digital scale (Seca® 876, capacity of up to 250 kg, precision of 100 g), with individuals barefoot, wearing minimal clothing, and without any objects in their pockets, hands, or on their heads. For height measurement, a portable millimeter stadiometer (Alturexata LTDA., precision of 1 mm) was used, with the volunteers in an upright position, barefoot, with upper limbs hanging along the body. To ensure accuracy, measurements were taken in duplicate; when the difference between them exceeded 0.5 cm for height or 100 g for weight, two new, closer measurements were taken and recorded, and the average of these measures was used.

The independent variables were presented categorically, classified as:
• demographic: sex (male and female) and age group (20-29, 30-39, 40-49 and 50-59);
• socioeconomic: economic class, assessed based on the criteria of the Brazilian Association of Research Companies (ABEP)14, defined by a points system that considers the possession of goods and the education level of the head of the family (B1/B2, C1/C2 and D/E); schooling (illiterate/incomplete elementary school, complete elementary school/incomplete primary school, complete primary school/incomplete secondary school, and complete secondary school/complete or incomplete higher education); race/ethnicity, self-reported (white, black, brown, and others); occupation (not working, unemployed, sporadic work, social benefits, employed/self-employed); housing wall material (brick/masonry and others); number of rooms (> 4 and ≤ 4); basic sanitation: garbage disposal (public collection and others), waste disposal, and water supply (general network and others);
• behavioral: weekly food consumption of beans, vegetables, fruits, and soda or artificial juice (1-2 times a week or never, 3-4 times, and 5-7 times), consumption of meats with excess fat (no and yes), and physical activity (sufficiently active and insufficiently active);
• morbidities: diabetes and high blood pressure (no/yes).
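As a worked illustration of the sample-size computation described above, the sketch below reproduces the reported initial sample of 570, assuming the standard formula for estimating a single proportion with a finite-population correction; the paper does not state the exact formula used, so this is an editorial reconstruction rather than the authors' code.

```python
import math

def sample_size_proportion(p, e, N, z=1.96, loss=0.10):
    """Sample size to estimate a proportion p with absolute error e
    at 95% confidence (z = 1.96), corrected for a finite population
    of size N and inflated by a fraction `loss` for non-response."""
    n0 = (z ** 2) * p * (1 - p) / e ** 2  # infinite-population size (~600 here)
    n = n0 / (1 + (n0 - 1) / N)           # finite-population correction (~518 here)
    return round(n * (1 + loss))          # add 10% for losses

# Values reported in the methods: p = 51.5%, e = 4%, N = 3,816 adults
print(sample_size_proportion(p=0.515, e=0.04, N=3816))  # -> 570
```

Under these assumptions the computation lands on the reported 570; without the finite-population correction the same inputs would give roughly 660 before the 10% inflation.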
Food consumption was assessed using the weekly consumption questionnaire used by the Ministry of Health15. To determine the level of physical activity, the International Physical Activity Questionnaire (IPAQ) was used, in its short version16. This instrument measures the frequency and duration of moderate and vigorous physical activities and walks performed in the last week for at least 10 continuous minutes, including standardized exercises, sports, and occupational and recreational physical activities performed at home, in free time, as a means of transportation, and at leisure.

The criteria established by the IPAQ define four categories of physical activity level: very active, active, irregularly active, and sedentary. For analysis purposes, these were re-categorized (see the sketch after this section) into:
• "sufficiently active" (very active + active), applied to people who reported practicing vigorous activity three or more times a week for 20 minutes or more; or moderate activity or walking five or more times a week for at least 30 minutes; or any combination of activities (walking + moderate activity + vigorous activity) adding up to five or more times a week and 150 minutes or more per week;
• "insufficiently active" (irregularly active + sedentary), applied to people who did not meet the aforementioned criteria.

Regarding morbidities, the diagnosis of diabetes was made by biochemical examination, with individuals with blood glucose ≥ 126 mg/dL, or reporting the use of a hypoglycemic agent, considered "cases"17. Blood pressure was measured according to standardized procedures18. Two measurements were taken at different times (15-minute interval), and adults with systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg, or reporting the use of anti-hypertensive agents, were considered hypertensive.

Initially, descriptive analyses were performed in order to characterize the frequency distribution of the variables under study. Subsequently, bivariate analyses were performed using simple Poisson regression to identify possible associations between excess weight and the independent variables. The criterion for inclusion of a variable in the adjusted model was an association with excess weight in the crude analysis at p < 0.20. Variables with p < 0.05, obtained through multivariate Poisson regression with robust adjustment of the standard error, remained in the final model. The results were expressed as prevalence ratios (PR) with 95%CI. Statistical analyses were performed using SPSS, version 13.0 (SPSS Inc., Chicago) and Stata, version 13.0 (StataCorp., College Station, United States).

This study was approved by the IMIP Research Ethics Committee, protocol No. 4017-14, in accordance with the requirements of CNS Resolution No. 466/12. All respondents were informed about the voluntary nature of participation in the research and signed the informed consent form.

RESULTS
The final sample of the present study comprised a total of 644 adults; however, sample sizes differed for some variables, owing to missing responses from incomplete questionnaires and/or inconsistent data.
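Returning briefly to the methods, the IPAQ re-categorization described above amounts to a simple decision rule, rendered in code below. The thresholds come directly from the text; the function and parameter names are hypothetical, chosen only for this illustration.

```python
def ipaq_group(vig_days, vig_min, mod_days, mod_min, walk_days, walk_min):
    """Classify a respondent as 'sufficiently active' or 'insufficiently
    active' following the re-categorization described in the methods.
    Inputs are days/week and minutes/session for vigorous activity,
    moderate activity, and walking (names are ours, for illustration)."""
    # Vigorous activity >= 3 days/week for >= 20 min/session
    if vig_days >= 3 and vig_min >= 20:
        return "sufficiently active"
    # Moderate activity or walking >= 5 days/week for >= 30 min/session
    if (mod_days >= 5 and mod_min >= 30) or (walk_days >= 5 and walk_min >= 30):
        return "sufficiently active"
    # Any combination totalling >= 5 days/week and >= 150 min/week
    total_days = vig_days + mod_days + walk_days
    total_min = vig_days * vig_min + mod_days * mod_min + walk_days * walk_min
    if total_days >= 5 and total_min >= 150:
        return "sufficiently active"
    return "insufficiently active"

# Example: 3 days of 40-min moderate activity plus 2 days of 30-min walking
# (5 days, 180 min in total) satisfies the combined criterion.
print(ipaq_group(0, 0, 3, 40, 2, 30))  # -> sufficiently active
```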
The nutritional status of the studied population is shown in Figure 1. The prevalences of overweight and of obesity were similar, around 35% each, both exceeding the prevalence of eutrophy. Furthermore, 3.4% of the total had severe obesity, practically twice the frequency of cases of weight deficit. The joint frequency of overweight/obesity, representing excess weight, was 70.3%.

The variables sex, age group, economic class, race/ethnicity, occupation, number of rooms, water supply (Table 1), weekly consumption of beans, consumption of meat with excess fat, high blood pressure, and diabetes (Table 2) showed an association with excess weight at p < 0.20.

The PRs adjusted through multivariate Poisson regression analysis (illustrated in the sketch after this section) showed that categories C1/C2 and B1/B2 were associated with excess weight relative to the reference category (D/E), revealing that the higher the class, the higher the prevalence of overweight. The 30-39 and 40-49 year age groups also showed an association with the outcome relative to the reference category (20-29 years); that is, the prevalence of the problem increased with age group relative to the reference category. The weekly frequency of bean consumption (≤ 2 times/week or never) was associated with excess weight relative to the reference category (5-7 times/week): the lower the weekly frequency of bean consumption, the greater the prevalence of overweight. The reported morbidities diabetes and high blood pressure were also associated with the outcome: those who reported these morbidities had a higher frequency of excess weight than those who did not (reference category). These variables remained significantly associated with the outcome at p < 0.05 (Table 3).

DISCUSSION
The high prevalence of excess weight found presupposes that poor or low-income communities are already included in the nutritional transition process. This result would once have been unusual in an urban environment of marked poverty, since a few years ago the expectation would have been high frequencies of weight deficit9.

The most recent study that can be taken as a reference was carried out in 2009, in a sample of 3,214 adults from deprived urban areas of Maceió (AL)6. That research revealed that 41.2% of adults were overweight/obese, while our study found a prevalence of 70.3%, well above the value found for Brazil in 2013 (56.9%)5. At the international level, prevalence rates practically identical to those of the Coelhos community (around 73%) were found in studies of poor populations in the United States19 and Afghanistan20.
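The adjusted PRs reported above come from the Poisson model with robust standard errors specified in the methods. As an editorial illustration (the authors used SPSS and Stata, not Python), the sketch below shows how such prevalence ratios can be computed with statsmodels; the dataset, covariates, and effect sizes are simulated for demonstration and are not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data for illustration only: binary outcome (excess weight)
# and two hypothetical dummy covariates.
rng = np.random.default_rng(42)
n = 644
df = pd.DataFrame({
    "age_30_39": rng.integers(0, 2, n),  # hypothetical dummy: aged 30-39
    "diabetes": rng.integers(0, 2, n),   # hypothetical dummy: diabetes
})
risk = 0.55 + 0.10 * df["age_30_39"] + 0.12 * df["diabetes"]
df["excess_weight"] = rng.binomial(1, risk)

# Poisson regression on a binary outcome with a robust (sandwich)
# covariance yields prevalence ratios rather than odds ratios.
X = sm.add_constant(df[["age_30_39", "diabetes"]])
fit = sm.GLM(df["excess_weight"], X,
             family=sm.families.Poisson()).fit(cov_type="HC0")

pr = np.exp(fit.params)      # prevalence ratios: exponentiated coefficients
ci = np.exp(fit.conf_int())  # 95% confidence intervals on the PR scale
print(pd.concat([pr.rename("PR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

The robust covariance is the key design choice here: a Poisson model applied to binary data gives consistent PR estimates but overstated standard errors, and the sandwich estimator corrects them.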
The much higher prevalence obtained in this assessment admits three possible interpretations. As excess weight represents a rapidly progressive epidemic in Brazil, it is acceptable that a marked difference over five years may result from the very rapid pace of this problem's increase. The second version would accept that the situation of adults in the analyzed area may be very different from that found in Maceió, in a larger sample distributed over several slum areas. A third conjecture would be that favela populations start to reproduce, and even exceed, a generalized pattern for the whole country, as part of the epidemiological homogenization expressed in the most updated scenario of the Brazilian population's nutritional status. This may represent the most consistent interpretation, although it lacks the support of sequential, up-to-date and representative data on populations living in deprived urban areas. In the Coelhos community, there was a statistically significant association between overweight/obesity and age group, highlighting a higher prevalence among adults aged 30 to 39 years, and then stabilizing. In Brazil 4 , a study on the nutritional status of beneficiaries of the Bolsa Família Program 21 and one on low-income women in Rio de Janeiro 22 also found an association between the problem and age group, showing a higher prevalence of overweight among adults aged 40 years or over and 50-59 years, respectively. This finding may be related to the decrease in the level of physical activity 23 , as well as to changes in basal metabolism and hormones that occur with the aging process, which lead the body to store more fat 24 . Internationally, research carried out in the district of Kalutara, in Asia, found a higher prevalence of overweight in low-income adults from the age of 40, with a reduction at the age of 50 25 . It is likely that the differences in the life ecosystems specific to each low-income population may explain these mismatches in the results. There was no association between overweight and education. According to VIGITEL 26 , the frequency of overweight tends to decrease with increasing schooling; however, we did not observe this in this study, since the prevalence of overweight was similar across groups, regardless of the level of education. This is probably because the low socioeconomic condition and the social context in which the study population is inserted end up favoring the acquisition of cheaper, calorie-dense foods. In the higher classes of the sample, the highest prevalence of overweight was identified (B1/B2). However, this category represented only 5.8% of the sample. The size of the study sample may have influenced this result, since a higher prevalence of the problem was expected in the poorest classes. However, it should be considered that the population studied is a poor community and, therefore, a homogeneous population, so the expected pattern might have emerged with a larger sample, as in the studies conducted in Maceió 6 and in Ceará 27 , in which a higher prevalence of overweight was observed among adults with lower income. A higher frequency of the problem was found among the black race/ethnicity and among adults without an occupation, although no significant association was observed. It is important to emphasize that the risk of being overweight has shifted toward the most socially disadvantaged ethnicities, such as blacks and browns, and other groups close to the condition of poverty, such as rural families and lower income strata 28,29 .
In the present study, approximately 70% of the population consumed beans five or more times a week. According to the Family Budget Survey of 2008 and 2009, beans are still among the most consumed foods by the Brazilian population (≅ 70%) 30 . However, a low weekly consumption of beans (≤ 2 times) was shown to be associated with excess weight in this study, corroborating the study carried out in Belém (PA) on the consumption of beans in adults 31 , which may indicate a higher consumption of ultra-processed foods to the detriment of healthy foods such as beans. Furthermore, no associations were observed with other food consumption variables. There was no association between the level of physical activity and the prevalence of overweight. The coexistence of 70.2% of sufficiently active individuals and approximately 70% of overweight/obesity are apparently conflicting results; similar findings were also observed in Pernambuco 7 . The high prevalence of active adults can be explained by the fact that physical activity was measured by an instrument that considers activities performed in leisure, commuting, and domestic and occupational settings, as well as by the socioeconomic condition of the population, which causes active commuting, occupational tasks and domestic activities to be the predominant types of physical activity, to the detriment of physical activity performed during leisure time, which is more common in developed countries 32,33 . As expected, and in line with what other studies show, high blood pressure and diabetes were associated with overweight/obesity [34][35][36] . This trilogy of comorbidities (high blood pressure, type II diabetes and overweight/obesity) is commonly observed together. The cross-sectional design stands out as a limitation of the study: because exposures and the outcome are measured simultaneously, the before/after relationship that, by formal logic, must condition a causal relationship cannot be established, making it impossible to infer causality between the associated variables and the outcome. The fact that the study was carried out in a poor community is considered a strength, given that few studies are conducted in these communities and knowledge of the health situation of poor populations is important for planning interventions. CONCLUSION The 70% prevalence of overweight/obesity in a poor area of Recife is well above the results identified in other urban populations with similar characteristics and even much higher than in representative samples from Brazil. The high prevalence of the problem, standing at around 30% above the frequencies found in studies published in the country after 2003, implies that the population analyzed is included in the rapid process of nutritional transition that the country has experienced in the last 40 years. From the analytical point of view, of the 22 groups of variables investigated, age group, economic class, weekly bean consumption, diabetes and high blood pressure were the variables that made up the final model adjusted to define the risks associated with excess weight.
In conclusion, the high prevalence of overweight/obesity detected in the studied underprivileged urban area imposes the need to include this problem as a public health priority in these communities. Furthermore, it supports the recommendation to extend similar studies, including a qualitative approach, to other comparable communities spread across the national territory, in order to shed light on a still somewhat unknown situation and to signal priority interventions for them. Table 1. Prevalence and crude prevalence ratio (PR) of overweight in adults (20-59 years) in a poor urban area, according to demographic and socioeconomic variables. Recife, 2014. 95%CI: 95% confidence interval; MW: minimum wage; a sporadic/odd jobs/street work; b never worked and housewives; c has worked before, but had been unemployed for 30 days or more; d retired, pensioner, provisional benefits; *differences in sample values for some variables are due to the loss of observations: schooling (n = 633), race/ethnicity (n = 641), occupation (n = 642), water supply (n = 643). Table 2. Prevalence and crude prevalence ratio (PR) of overweight in adults (20-59 years) in a poor urban area, according to behavioral variables and morbidities. Recife, 2014. Table 3. Adjusted analysis of excess weight in adults (20-59 years) in a poor urban area. Recife, 2014.
2020-06-14T23:42:22.380Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "7b9eee2c6054f8d6b278d34713eb9860c9701233", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rbepid/v23/en_1980-5497-rbepid-23-e200036.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a80640fffcd27bf8955d906f85874a22c093df98", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
97573125
pes2o/s2orc
v3-fos-license
Spray formation with complex fluids Droplet formation through Faraday excitation has been tested in the low driving frequency limit. Kerosene was used to model liquid fuel, with the addition of PIB in different proportions. All fluids were characterized in detail. The mechanisms of ejection were investigated to identify the relative influence of viscosity and surface tension. It was also possible to characterize the type of instability leading to the drop emission process. Introduction The formation of spray from a liquid film on a vibrating surface is used by ultrasonic atomizers for applications ranging from humidification to metal-powder manufacturing. However, this subject of industrial interest requires detailed knowledge of the preconditions for the formation of drops, because it is necessary to control the size and conditions of the ejection. In Faraday's experiment, a layer of fluid in a container is excited vertically by periodic oscillation. On the flat surface, at the threshold frequency and applied acceleration, instability appears in the form of parametric waves. When these control parameters (acceleration and frequency) are increased, the initial alignment of these standing-wave patterns is lost due to the appearance of secondary instabilities. With further increased excitation, a sharp transition is observed to a state with spikes on the surface, with droplets being ejected from the tips [1]. The characteristics of the instability produced during the ejection process are analyzed here, taking into account the influence of the rheology and surface tension of the fluids used, in order to interpret the different behaviours of the system. We also test some models to check the influence of each of the variables that govern ejection. The chosen fluids are similar to those present in fuels, with different amounts of a viscoelastic component added to control the properties of the drops produced. Experimental Details The experimental system has already been described in previous publications [1]. The different components are illustrated in Fig. 1: the signal from a digital generator is amplified to excite an electromechanical transducer, which is rigidly connected to the sample cell with an aluminium bar and monitored by a speedometer with an oscilloscope. The system is completed by suitable illumination and in-line computer processing. Characteristics of selected fluids Fuel feed systems for certain types of engines are investigated for exhaust gas control (CO, NOx) and to improve fuel efficiency in thermodynamic terms, because a non-homogeneous mixture of compounds with different vapour pressures is required. When a viscoelastic polymer of high molecular weight is added, the physical form of the hydrocarbon is altered and the size distribution of fuel droplets is narrowed [2]. Interfacial tension also increases, promoting drop encapsulation and favoring simultaneous component ignition. The rheology of the chosen components (kerosene and polyisobutylene, PIB) is represented in Fig. 2. The rheology of the solutions employed shows "shear thinning" behaviour, preceded by a quasi-Newtonian region which corresponds to the area of interest. The viscosity also increases with PIB concentration. The relevant feature of this system lies in the interfacial tension behaviour, which grows with increasing amounts of added PIB (cf. Fig. 3). Threshold of ejection To identify the critical acceleration at which ejection starts, the acceleration was gradually increased up to the ejection of one or two droplets detected in a time span of ten seconds.
Goodridge [3] proposed an expression which combines density and interfacial tension to yield a critical acceleration

a_c ∝ (σ/ρ)^(1/3) ω^(4/3) (1)

where σ stands for the interfacial tension, ρ for the density and ω for the driving frequency. This scaling law was obtained from the dispersion relation for capillary-gravity waves and is valid for fluids of constant viscosity. The critical acceleration as a function of frequency is represented in Fig. 4. Goodridge's model (Eq. 1) is represented with dotted lines in Fig. 4, jointly with our experimental results. The best fit corresponds to a Newtonian fluid (constant viscosity), and the experimental a_c values increase and depart from this model with increasing PIB concentration. Dimensionless Parameters (Newtonian fluids) Goodridge [4] proposes dimensionless expressions for the acceleration and the frequency, built from the surface tension, density and viscosity, which define the boundary between the surface-tension-dominated and the viscosity-dominated regimes. Depending on the value of the dimensionless frequency ω*, it is possible to identify the dominant influence on ejection. Our results match the alignment of a Newtonian liquid such as water, validating the choice of dimensionless parameters as well as the importance of surface tension in the process. 6. Rayleigh-Plateau Instability (drop formation) and ejection mechanisms A liquid column of capillary dimensions may be formed on the surface of the fluid, where irregularities appear as the control parameters are increased [5]. These irregularities mediate the formation of necks which mark the onset of the Rayleigh-Plateau instability. The two possible mechanisms are illustrated in Fig. 6. The height of the capillary columns increases with PIB concentration (Fig. 6), illustrating the effect of the elastic properties of the solution, since greater polymer content enhances this property. To characterize the ejection problem [6], the dimensionless Ohnesorge number Oh is used:

Oh = μ / √(ρσL)

where μ is the viscosity, ρ the density, σ the interfacial tension and L a characteristic length. The column formation The type of dripping produced directly from the surface of the liquid, with or without column formation before ejection (jetting), is represented in Fig. 7 as the Weber number as a function of the Oh number. The type of ejection can be defined as simple dripping, complex dripping or jetting depending on the Weber and Ohnesorge [7,8] dimensionless numbers. As the We value increases, the behaviour of the fluid changes from simple dripping to complex dripping, and attains jetting when Oh is about 10^-1. In our case, the increase of the surface tension promotes the formation of a liquid column in which necks at a certain height induce the subsequent formation of droplets. The height of the capillary columns increases with larger PIB content (5%).
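To make the dimensional criteria used in this section concrete, the sketch below evaluates a Goodridge-type threshold of the form of Eq. (1) and the standard Ohnesorge and Weber numbers. Only the ω^(4/3) exponent and the jetting boundary near Oh ≈ 10^-1 are fixed by the text; the prefactor C, the fluid property values and the characteristic scales are illustrative assumptions, not fitted quantities from this experiment.

import numpy as np

def critical_acceleration(sigma, rho, f, C=1.0):
    # Goodridge-type threshold: a_c = C * (sigma/rho)**(1/3) * omega**(4/3).
    # The prefactor C is an assumed placeholder, not a fitted value.
    omega = 2.0 * np.pi * f
    return C * (sigma / rho) ** (1.0 / 3.0) * omega ** (4.0 / 3.0)

def ohnesorge(mu, rho, sigma, L):
    # Standard definition Oh = mu / sqrt(rho * sigma * L); L is a
    # characteristic length (e.g. the column diameter, assumed here).
    return mu / np.sqrt(rho * sigma * L)

def weber(rho, v, L, sigma):
    # Standard definition We = rho * v**2 * L / sigma; the characteristic
    # velocity v used in the paper's Fig. 7 is not specified, so it is an
    # assumption in this sketch.
    return rho * v ** 2 * L / sigma

# Typical order-of-magnitude properties for kerosene (assumed, not measured).
rho, sigma, mu = 800.0, 0.026, 1.6e-3    # kg/m^3, N/m, Pa*s
for f in (30.0, 60.0, 120.0):            # low driving frequencies, Hz
    a_c = critical_acceleration(sigma, rho, f)
    print(f"f = {f:5.1f} Hz  ->  a_c ~ {a_c:7.1f} m/s^2 (up to the prefactor C)")

print("Oh =", round(ohnesorge(mu, rho, sigma, L=1e-3), 3),
      "(jetting reported near Oh ~ 1e-1)")

Note that doubling the driving frequency raises the threshold by a factor of 2^(4/3) ≈ 2.52, which is the steep frequency dependence visible in Fig. 4.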
Instability: Convective or Absolute We attempt to establish whether the observed instability can be classified as convective or absolute [9]. Starting from the neck, a droplet is ejected which may be generated above the neck (convective instability) or both above and below the neck (absolute instability). Both cases are illustrated in Fig. 8, corresponding to kerosene + 5% PIB at 60 Hz and 19 g. We have also observed that as the frequency increases, the Reynolds number decreases and the ejection departs from the absolute instability. Conclusions Working at low frequencies, it is possible to investigate the ejection conditions for each type of fluid and explore the various possible responses. Our chosen solution increases its interfacial tension with PIB concentration. The critical acceleration (threshold) is proportional to the 4/3 power of the applied frequency, in accordance with Goodridge's model. With increasing We values, the behaviour of the fluid changes from simple dripping to complex dripping and finally reaches jetting. Both convective and absolute instabilities could be observed in this system.
2019-04-06T13:06:49.520Z
2011-05-01T00:00:00.000
{ "year": 2011, "sha1": "3bcbc0ce20fdc9e440d1b526ed881914c22c7749", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/296/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "92686f6505fc6f23283a711c9fe34abf979e974b", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
32997729
pes2o/s2orc
v3-fos-license
Recommender System for Journal Articles using Opinion Mining and Semantics To date, most Recommender Systems (RS) work has focused on a single domain, e.g., films, books or shopping. However, human preferences may span numerous domains, so usage behaviour on related items from different domains can be valuable for an RS when making recommendations. Academic articles, such as research papers, are the way the research community expresses ideas and thoughts, and there are many journals that publish such technical writing. The journal selection procedure should therefore consider user experience with journals in order to recommend the most relevant journal to users. In this work on a journal recommendation system, data about user experience targeting various aspects of journals has been gathered. In addition, a dataset of archive articles has been developed considering the user experience in this regard. Moreover, the user experience and the gathered archive data are analyzed using two different frameworks based on semantics in order to produce better consolidated recommendations. Before submission, we offer services on behalf of the research community that exploit user reviews and relevant data to suggest a suitable journal according to the needs of the author. Keywords—Recommendation system; journal recommendation system; user opinion; semantic similarity; text analysis I. INTRODUCTION As the world goes digital, a large volume of structured, semi-structured and unstructured data is being generated very fast. This data runs into terabytes, so it is referred to as Big Data. Big data approaches are used to handle datasets that are so big and complex that typical application software is not sufficient to exploit them fully. Because of the rapid increase in data volume, one is always flooded with a superfluity of choices in any domain [1]. A recommendation system uses the large volume of available text and sentiment data for summarization purposes to support serious and valid decisions. Recommender systems gather information from users about their preferences for particular items in order to make predictions, such as which bag I should buy or which paper I should read next [2]. Recommendations can be made based on a user's interests, which can be analyzed from the user's profile or from their online or offline behavior. An RS is a subclass of information filtering systems that tries to predict the "opinion" that a user would give to an item. Recommender frameworks have become extremely common in recent years and are used in a variety of areas. Some prevalent applications include music, books, movies, research papers, search queries, social tags, and products in general. There are also recommender frameworks for specializations, partnerships, jokes, restaurants, life insurance, and Twitter pages [3]. Similarly, journal recommendation has become an important topic of discussion for the research community that writes and publishes research articles, patents, and books. Because today we have numerous choices of journals, publishing annually, quarterly, monthly and even bi-monthly, it becomes very difficult to choose an appropriate journal to which to submit a manuscript.
With an increase in the publication of research papers in multiple journals of diversified fields, authors find it difficult to choose an appropriate journal for their research work. Submitting an article to the wrong journal may result in rejection; indeed, a main reason for rejection is that the paper was not submitted to a relevant journal, even when the paper itself is excellent. So there is a need to develop a recommendation system that can suggest suitable journals to authors. A journal recommendation system can provide services to authors on behalf of publishers of academic journals. The choice of journal directly influences authorial considerations such as the impact on practitioners, the CV value of the publication, and the risk of acceptance or rejection [4]. The core problems that arise while building a journal recommendation system are: • which dataset should be collected for journal recommendation; • where to store this amount of big data about journals; • how to effectively perform data mining and sentiment analysis to make better journal recommendations; • how to provide recommendations to users with accuracy and exactness; • which recommendation technique would be best for a journal recommendation system. Our proposed solution addresses the above-mentioned issues and is based upon user opinion to make suitable journal recommendations. Existing systems for journal recommendation work by matching the title and abstract of papers [5] and do not consider user experience with journals. Previously, most of the work relied on content similarity alone and did not focus on other aspects, e.g., low-level features. Our proposed system not only considers content similarity but also takes into account low-level features like subscription charges, access options, etc. The main contribution of our paper is that our system also collects user experiences. For this purpose, we have conducted a survey that gathers user experience. Combining content similarity with low-level features and user reviews for journal recommendation provides better recommendations. In this work, information about user experience with a set of journals covering different domains has been accumulated. This information incorporates the journal domain, name and survey questions which address user experience with the journal. Section 2 contains related work; Section 3 contains the methodology and proposed framework, followed by the experimental setup and results. II. LITERATURE REVIEW Recommendation systems play a significant role in e-business and information sharing systems. After two decades of research and many algorithms implemented for recommendation engines, it is clear that recommendation is not a one-size-fits-all problem. Recommendation systems must therefore be designed according to application-specific embedded tasks. Successful deployments must address the tasks users require, for which different design choices are in practice. If authors are assumed to behave rationally in the typical financial or economic sense, they should choose a journal for publication of their work according to where they can expect the highest average value adjusted for risk and expenses.
Journal recommendation systems have been studied by researchers from different backgrounds. For example, a recommendation system has been proposed to recommend appropriate journals considering factors like price, openness and subscription, rather than just matching content [5]. It has also been examined how an author's journal selection in development administration is influenced by quality and service perceptions [4]. A hybrid research paper recommender system has been introduced which improves research paper recommendations by combining keyword-based search with implicit and explicit ratings, citation analysis and source analysis [6]. The system uses the "Distance Similarity Index" (DSI) and the "In-text Impact Factor" (ItIF) methods to improve the quality of recommendations. A research paper recommender framework has been proposed based on the hypothesis that an author's previously published articles provide a clear indication of user interest. The system differentiates between senior and junior researchers and prunes unnecessary citations and references [7]. Filtering these information sources results in higher accuracy of recommendations. In [8], the authors discussed the online and offline evaluation of research paper recommender frameworks and concluded that offline evaluation in this domain does not provide promising results. Docear's research paper recommender framework uses content-based filtering, in which the user's data (citations, references, and papers) is organized in mind maps which are then utilized for recommendations [9]. A research paper recommender framework has been introduced using a Dynamic Normalized Tree of Concepts (DNTC) model and a complex ontology [10]. The system was evaluated offline using ACM digital library papers, and the results show that this model performs better than vector space models. The authors of [11] discussed how the Mendeley recommender system works by incorporating collaborative filtering and user feedback to produce recommendations. Results show that the proposed method provides better accuracy for new users. To help new researchers get an overview of the research performed in a specific area, the authors of [12] proposed a keyword-based retrieval procedure for providing an overview and a diverse set of papers as part of a preliminary reading list. A literature review has been presented on ontology-based recommender frameworks in the domain of e-learning [13]. This investigation demonstrates that combining knowledge-based recommendation with other recommendation methods can enhance the effectiveness of e-learning recommender systems. The authors of [14] discussed the performance of stereotype and most-popular recommendations in the domain of scholarly recommender frameworks. Researchers have discussed the new-item problem and proposed a method of automatically analyzing video and audio content through low-level characteristics rather than just focusing on high-level features of the video content [15]. That paper focused mainly on visual features. In [16], the authors proposed a real-time web service for providing recommendations for different items using the opinions and ratings of people on Twitter, Facebook and other social media sites. Reviews about four products given on the blippar site were analyzed using a CF-based approach.
A Latent Dirichlet Allocation approach used for sentiment mining and feature retrieval to improve the accuracy of recommendations is proposed in [17], and it was found that this technique provides the best results compared to typical clustering techniques. An efficient user-modelling technique based on mind maps to recommend research papers is presented in [18]. In that paper, numerous variables concerning mind-map-based user modelling were identified, and the variables' influence on user-modelling efficiency was assessed with an offline evaluation. In other work, the authors developed hierarchical Poisson matrix factorization (HPF) for recommendation purposes. The HPF model considers sparse user-activity data, where every user has given feedback on only a small subset of items [19]. HPF handles both explicit ratings, for example star ratings, and implicit ratings, for example views, clicks, or purchases. In [20], Apache Mahout is used to evaluate a TF-IDF-weighted clustering technique. A dataset of tweets is used to evaluate the effect of eliminating stop words from the dataset. The proposed system in [21] uses the slope-one recommendation algorithm to recommend micro-videos. The results show that the strength of the used algorithm provides a better visualization interface and that the Hadoop framework provides high-level performance. The challenge of using the MapReduce paradigm to parallelize the CF technique is addressed in [22]. The results show that CF algorithms are not well suited to the Hadoop platform, as it does not decrease the response time for an individual user. To overcome issues like scalability, sparsity and imprecision, a CF method with a dimensionality reduction technique is applied using Mahout in [23] to improve the prediction accuracy and quality of recommendations. Results show that approaches such as PCA and SVD can decrease the noise of high-dimensional data and provide an improvement in tackling the scalability and sparsity issues of prediction. In [24], the authors discussed that recommendation systems are important platforms for users pursuing technical ways to find the best choices available from a big amount of data. The directed edge recommendation problem is described in [25], where a user can recommend items to a connected user based on an algorithm that combines a sharing-preferences model and a user-preference model. Results demonstrate that incorporating the task context leads to more accurate recommendations compared to a group recommender system. The author of [26] provided an up-to-date and detailed survey of the recommender field, considering various kinds of interfaces, the range and diversity of different recommendation algorithms, the functionalities provided by these systems and their use of Artificial Intelligence methods. III. PROPOSED STUDY AND DESIGN The proposed approach comprises two frameworks targeting user opinion and the analysis of detailed content (i.e., journal archive data). These frameworks together provide a consolidated recommendation. The theme of the proposed work is to explore user opinions and archive analysis, which results in better recommendations.
In the preceding section, an introduction to recommender frameworks and related studies was provided. This section gives comprehensive insights into the proposed journal recommender framework. As described previously, some studies on journal recommendation have considered various factors like matching the contents of the manuscript, or content matching combined with publication charges, access options, etc. A. Framework for User Opinion Analysis In Fig. 1, a conceptual framework is provided to analyze user opinion. First of all, data is collected from users by means of a survey paper in which each user provides an opinion about his or her experience with journals. The gathered data is unstructured and requires some preprocessing before it can be analyzed. The preprocessing phase was a major challenge, and plenty of time was consumed during this phase. This textual preprocessing includes cleaning steps such as removing duplicate characters, replacing special characters with spaces, removing stop words and word stemming. From the cleaned data, attributes are selected and separated into numerical/categorical and textual attributes. Then, using different text analysis approaches, it is determined whether the user provides a positive or a negative opinion. The user opinion analyzed in this framework is further used in the second framework to provide recommendations. B. Framework for Semantic Similarity based Approach In Fig. 2, a conceptual framework for the semantic similarity based approach is provided. For recommendations, another dataset is gathered based on the survey data collected in the above-mentioned step. This dataset includes archives of the journals. In the preprocessing phase, the TF-IDF approach is used. A term-document matrix is generated that describes the frequency of input words in the collection of documents. Similarity is then measured using a KNN approach based on term-term correlations. For checking semantic relationships, we used an approach based on work which counts semantic connections between terms by utilizing a semantic kernel. A semantic relationship indicates which terms are correlated; in this manner it can enhance the clustering model. The work done in this regard also incorporates the Generalized Vector Space Model (GVSM). GVSM assumes that term vectors are linearly independent rather than pairwise orthogonal, and therefore computes term-term correlations. The similarity is measured using the approach defined in [27]. If X is a matrix which contains n archives (rows) and m terms (columns), then by applying GVSM we have the semantic kernel

K = X G X^T (1)

where K is the Gram matrix of the rows (documents) and G is the Gram matrix of the columns (terms). In this way, the cosine similarity between two document vectors d_i and d_j can be computed as

cos(d_i, d_j) = (d_i G d_j^T) / (√(d_i G d_i^T) √(d_j G d_j^T)) (2)

In the above equations G is essential, and it must satisfy certain properties: for example, G should be positive semi-definite and represent the inner products of the term vectors. Estimates of G can therefore be given as

G = X^T X (3)

G' = D^(-1/2) (X^T X) D^(-1/2) (4)

where D is an m×m diagonal matrix whose components are the diagonal components of X^T X. The semantic kernels that correspond to these estimates of G are then

K = X X^T X X^T (5)

K' = X D^(-1/2) (X^T X) D^(-1/2) X^T (6)

These are different measures of the semantic kernel based on term-term relationships, which is equivalent to mapping archives to a higher semantic space where correlated terms are related to each other.
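A minimal sketch of this similarity computation is given below (Python with scikit-learn): TF-IDF document vectors form the matrix X, the term-term Gram matrix of Eq. (3) plays the role of G, and the kernel cosine of Eq. (2) scores a query against the archive. The example abstracts and the query are hypothetical placeholders, not the study's dataset.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder archive abstracts; a real run would use the collected dataset
# of at least 40 papers per journal.
archive = [
    "scalable big data analytics with distributed stream processing",
    "semantic information models for advanced engineering informatics",
    "routing protocols and congestion control in computer networks",
]
query = "big data recommendation using semantic analysis"

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(archive).toarray()       # n documents x m terms
q = vec.transform([query]).toarray()[0]

G = X.T @ X                                    # term-term correlations, Eq. (3)

def gvsm_cos(a, b, G):
    # Kernel cosine of Eq. (2): (a G b) / sqrt((a G a)(b G b)).
    den = np.sqrt((a @ G @ a) * (b @ G @ b))
    return float(a @ G @ b) / den if den > 0 else 0.0

scores = sorted(((gvsm_cos(x, q, G), doc) for x, doc in zip(X, archive)),
                reverse=True)
for s, doc in scores:
    print(f"{s:.3f}  {doc}")

In the hybrid step described in Section IV, such similarity scores could then be re-ranked or filtered by the survey-derived average journal ratings, for example preferring the higher-rated journal among near-equal semantic matches.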
IV. EXPERIMENTAL DESIGN AND SETUP In this section, comprehensive details are provided about the collection of the dataset. Detailed results are also shown in this section. A. Data Collection and Analysis Process for User Opinion This study involves the steps that need to be addressed in order to recommend journals based on users' opinions about journals. First, the study design and the techniques available for survey-type research are discussed. After that, the critical components to be considered for survey-type research are explained. Further, the rules for setting up a questionnaire and the choice of target population are presented. The fundamental purpose of our study is to explore the role of "user experience" in creating a positive or negative effect on the journal selection of the research community. 1) Mode of Observation: This study is based on a survey, known as an ex-post-facto design. This type of study only reports what has happened or what is happening. It is a longitudinal study. Questionnaires were distributed to the faculty and data was collected face to face. 2) Target Population: The characteristics of the target population, the intended interest group of an investigation, are critical as they establish the framework of the research work. The targeted population is a pivotal point in this investigation; the following parameters have been considered in this regard: Age: 25-50 years; Education: Master's, M.Phil and PhD; Gender: both male and female. 3) Targeted Locations/Organization: Researchers in this geographical region were chosen, and the features of the targeted audience have been given. The following university, with the named departments, was selected for this study: "COMSATS University"; Departments: Bioinformatics, Computer Science, Math. 4) Observational Approach: In this work, the survey was the fundamental source of data gathered from the specified audience. The questionnaires of our study were utilized as the instrument for collecting the data essential for this investigation. A closed community has been considered, and closed-ended questions are incorporated in the questionnaire. The survey comprises different questions about the user experience, e.g., views about the journal's response time, subscription charges, etc. In addition, other factors were also considered so that the investigation would have all the essential data and information needed to successfully complete this study with regard to recommendation services. 5) Data Collection: Data was collected in the field, that is, on the campus of COMSATS Institute of Information Technology, from different departments. Prior to filling out the questionnaires, we led a session for the target audience so that every respondent would have the information needed to fill in the survey correctly. In addition, this action helped in obtaining the desired outcomes from this investigation. Survey papers were handed over to the researchers after a short description of the purpose and scope of this study. The essential information was recorded after getting back the filled questionnaires. This data was then adjusted as needed for recommendation purposes. For cleanness and simplicity, surveys were provided in two different modes, offline and online.
For the online mode, a survey was produced on Google Forms and made accessible by circulating its link, as this enables us to engage users who prefer the online medium. The link of the survey appears below: https://docs.google.com/forms/d/17fMH6u_6o_LxhTqhbYWYPxhbk2Sh8xANcgT0ZkBUYHw/edit In addition, for the second kind of audience, the survey was made available in hard copy so that individuals could easily give their opinion in the required format; this helped us attract a wider audience. 6) Data analysis: Analysis of the gathered information is a very important and crucial task, as it provides us with the information and results that we were looking for. In this study the information has been gathered and analyzed via different tools. This activity helped us in finding the relevant information. Moreover, the gathered textual information has been processed. This section provides the detailed results, which show that incorporating user experience has an impact on the selected domain of study and can improve recommendation results. Complete descriptions have been provided in previous sections. The following are the outcomes derived from the users in the form of a survey. All the gathered outcomes were plotted utilizing different tools and are presented here one by one, introducing the data regarding each question in this study. For simplicity and brevity, selected results are shown which give noteworthy information in this regard. The data in Fig. 3 show that 67% of researchers find the submission procedure helpful, 8% find it difficult, and 39% find the procedure fair. Fig. 4 reveals that 60% of researchers feel that archive papers do not help them in getting an idea about the journal, 5% feel that archive papers help them partially, while 35% feel archive papers are inappropriate and do not provide any idea about the journal. Fig. 5 illustrates that 32% of people agreed with the statement that the defined format was well elaborated, 47% felt that it was ambiguous, while 21% found it difficult. Fig. 6 presents that 24% of people are of the opinion that reviewers' comments were helpful for improving their manuscripts, 32% were not satisfied with the comments, and 44% found the comments ambiguous. Fig. 7 exhibits that 79% of researchers reveal that communication was supportive, 12% consider this communication fair, while 9% feel it was discouraging. All the opinions gathered from users via the questionnaires were processed and the results are shown. For simplicity and better recommendations, individual journal ratings were derived by considering the positive and negative opinions from users. The results for some of the journals are shown respectively. From the pool of forty journals, we tried to pick diverse journals. The following figures explain the experience of users with individual journals. As per the data, the experience of users with the "Big data research" journal is bad; the average rating of this journal is 1. It is described in Fig. 10.
B. Data Set Description for Archive Data For recommendations, we collected another dataset based on the survey data collected in the above-mentioned step. At least 40 research papers were collected, along with their title, abstract and keywords, for every journal about which users provided information in the survey. Journal attributes were also collected, including the aims and scope of the journal, impact factor, publication frequency and CiteScore. The user provides the title of the research paper, abstract and keywords in the form of text, which is considered as the input. First, recommendations are generated within the dataset. Then the recommendations are refined by combining them with user opinion. C. Journal Recommendations using Hybrid Approach As the survey data has been processed, the results on user experience are available. We now recommend journals by combining simple journal recommendations with user opinion. As defined above, term-to-term correlation is used to check similarity. To generate journal recommendations, a query in the form of an abstract related to computer science and big data is given. To check the similarity value of a given query, it is added to the previously collected dataset of journal papers. The results in Fig. 13 reveal that the given query has the highest similarity with the "Big data research" journal. According to the survey data, the average rating of the "Big data research" journal is 3. So, it can be suggested that the author submit the paper to this journal. A query in the form of keywords is provided which relates to information technology and big data combined with bioinformatics. The recommendation results in Fig. 14 show that the given keywords have the best match with the "Advanced Engineering Informatics" and "Big data research" journals. As per the survey information, the average ratings of the "Big data research" and "Advanced Engineering Informatics" journals are 3 and 4, respectively. Thus, the author can choose between these two journals according to priority. A general keyword related to big data is used as a query to check the recommendations about journals. The similarity values in Fig. 15 indicate that the provided query has the highest similarity with the "Big data research" journal and is also similar to the "Big data Analytics" journal. As per the survey result data, the "Big data analytics" journal has an average rating of 2 and the "Big data research" journal has an average rating of 3. Thus, the researcher can pick between these two journals as needed. For journal recommendations, a keyword related to bio is added to the dataset. The similarity values in Fig. 16 indicate that it is suitable to choose the "Biological Psychiatry" journal for the provided query. The survey results also give a rating of 4 to this journal. A keyword related to networks is introduced into the dataset as a testing query; it clearly has the highest similarity with the "Computer Networks" journal, as shown in Fig. 17. The rating for the "Computer Networks" journal is 2. V. CONCLUSION In this journal recommendation system, better results were achieved using both user opinion and archives. The results show that our model will help researchers speed up the paper submission procedure and enhance the user experience. Similarly, the selection of a good similarity measure for semantic analysis is a vital part of our proposed framework. In addition, the proposed work will be optimized into a web-based application, which will help us in making the user experience better.
In conclusion, this work may pave the way to other domains which certainly have an impact on the life of the user. In future, we aim to implement this work in different tools like Hadoop and Spark in order to compare their relative recommendation accuracy. Fig. 1. A conceptual framework for user opinion analysis. Fig. 8. Researchers do not have a good experience with the "Acta Biomaterialia" journal; the average rating of this journal is 2. Fig. 9. Researchers have a good experience with the "Biological Psychiatry" journal; the average rating of this journal is 3. Fig. 11. Researchers have an average experience with the "Advances in Electrical and Computer Engineering" journal; the average rating of this journal is 4. Fig. 12. Most researchers have a good experience with the "Computer Science Review" journal; the average rating of this journal is 4.
2018-01-05T23:50:24.493Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "0fa0626399f2d7abc6ff9e493c9b26e862eca382", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume8No12/Paper_27-Recommender_System_for_Journal_Articles.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0fa0626399f2d7abc6ff9e493c9b26e862eca382", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
253220750
pes2o/s2orc
v3-fos-license
Counteracting Immunosuppression in the Tumor Microenvironment by Oncolytic Newcastle Disease Virus and Cellular Immunotherapy An apparent paradox exists between the evidence for spontaneous systemic T cell-mediated anti-tumor immune responses in cancer patients, observed particularly in their bone marrow, and local tumor growth in the periphery. This phenomenon, known as "concomitant immunity", suggests that the local tumor and its tumor microenvironment (TME) prevent systemic antitumor immunity from becoming effective. Oncolytic Newcastle disease virus (NDV), an agent with inherent anti-neoplastic and immune stimulatory properties, is capable of breaking therapy resistance and immunosuppression. This review updates the latest information about immunosuppression by the TME and discusses mechanisms of how oncolytic viruses, in particular NDV, and cellular immunotherapy can counteract the immunosuppressive effect of the TME. With regard to cellular immunotherapy, the review presents pre-clinical studies of post-operative active-specific immunotherapy and of adoptive T cell-mediated therapy in immunocompetent mice. Memory T cell (MTC) transfer in tumor-challenged T cell-deficient nu/nu mice demonstrates the longevity and functionality of these cells. Graft-versus-leukemia (GvL) studies in mice demonstrate complete remission of late-stage disease including metastases and cachexia. T cell-based immunotherapy studies with human cells in human tumor-xenotransplanted NOD/SCID mice demonstrate the superiority of bone marrow-derived as compared to blood-derived MTCs. Results from clinical studies presented include vaccination studies using two different types of NDV-modified cancer vaccine and a pilot adoptive T cell-mediated therapy study using re-activated bone marrow-derived cancer-reactive MTCs. As an example of what can be expected from clinical immunotherapy against tumors with an immunosuppressive TME, results from vaccination studies are presented for the aggressive brain tumor glioblastoma multiforme. The last decades of basic research in virology, oncology and immunology can be considered a success story. Based on discoveries from these research areas, translational research and clinical studies have changed the way cancer is treated by introducing and including immunotherapy. Introduction Immunotherapy is a change of paradigm in the treatment of cancer. It focusses the attention in the fight against cancer on the immune system of the cancer patient and tries to translate knowledge from basic immunological research into new strategies of treatment. Such translational research is constantly generating new advances and approaches. This invited manuscript is a contribution to the Section "State-of-the-Art Biochemistry in Germany". The corresponding author (VS) is a German biochemist and immunologist, co-author SvG is a specialist in paediatric hemato-oncology and co-author WS is a specialist in pharmaceutical biology, tumor immunology and translational oncology. The review is authentic, with a focus on our own achievements embedded in related work and the latest findings in the field. Oncolytic viruses (OVs) are interesting anti-cancer agents with high tumor selectivity. They replicate in and kill cancer cells without damaging healthy cells. This review focusses on the avian OV Newcastle disease virus (NDV) with its inherent anti-neoplastic and immune stimulatory properties.
Avian NDV is the most extensively characterized member of the avulaviruses due to the high mortality rate and economic loss caused by virulent strains in the poultry industry. This enveloped paramyxovirus contains a non-segmented, negative-sense, single-stranded RNA genome encoding 6 structural proteins (N, P, L, M, HN, F) involved in the viral life cycle, which is limited to the cytoplasm of the host cell. A special property of oncolytic NDV consists in its potential to break therapy resistance in human cancer cells. It is therefore particularly suited to counteract the immunosuppressive effects of the tumor microenvironment (TME). T cell-based cancer-specific immune reactivity represents the basis of the types of immunotherapy discussed in this review: post-operative active-specific immunotherapy with NDV-modified vaccines (see 4.11. and 4.15.) and adoptive cellular immunotherapy (see 2.3. to 2.5.). We therefore start with T cells. Chapter 2 presents examples of spontaneous anti-tumor T cell-mediated immune responses, as evidenced in particular in the bone marrow (BM). It also provides examples of adoptive T cell-mediated immunotherapy studies in which immune memory T cells infiltrate the TME and lead to tumor rejection. Chapter 3 addresses cellular details of the immunosuppressive TME before chapter 4 describes the effect of NDV on the various cell types of the TME. While intratumoral application of NDV directly influences the TME, post-operative active-specific immunotherapy with NDV-modified vaccines stimulates tumor-reactive T cells which indirectly affect the TME. The fifth and final chapter reports on clinical studies of immunotherapy, using glioblastoma multiforme (GBM) with its immunosuppressive TME as an example. As an introduction to the topic, two up-to-date reviews may be helpful, one concerning oncolytic NDV [1], the other cellular immunotherapy [2]. Spontaneous Anti-Tumor T Cell Responses and Cellular Immunotherapy Studies The formation of a solid tumor by a neoplastic cell requires a support ecosystem, i.e., an appropriate TME, to allow growth and prevent immune attacks. In the absence of a TME, such a transformed cell can in principle initiate spontaneous host immune responses via the innate and the adaptive immune system. This paragraph provides evidence for spontaneous immune T cell reactivity from a well-defined murine tumor model system. It then presents the latest insights into spontaneous anti-tumor immune responses in humans. Cellular immunotherapy studies involve murine and human immune T cells. Evidence for Spontaneous Anti-Tumor T Cell Responses from a Mouse Tumor Model The highly aggressive murine ESb lymphoma, when transplanted into syngeneic mice subcutaneously (sc) or intraperitoneally (ip), grows, metastasizes and kills the host within 2-3 weeks. When transplanted into the ear pinna (ie), however, a site with a high density of dendritic cells (DCs), the cancer cells induce a strong immune response which prevents tumor growth and metastasis [3]. In these immune mice, induced ESb cancer-reactive CD8+ memory T cells (MTCs) control tumor dormancy in the bone marrow (BM) and establish long-term systemic immune resistance upon sc tumor cell challenge [3]. When ESb cells were transfected with the bacterial lacZ gene, it was possible to follow single tumor cells in tissues such as lymph nodes, spleen and BM of ESb-lacZ ie-transplanted mice.
LacZ, coding for the enzyme β-galactosidase (Gal), was not only a marker to visualize individual tumor cells but also served as a surrogate tumor-associated antigen (TAA) that induced major histocompatibility complex (MHC) class I-lacZ-peptide-specific CD8 cytotoxic T lymphocytes (CTL) [4]. Research on tumor dormancy in the BM revealed that BM can function as a priming site for spontaneous T cell responses against blood-borne antigens, including TAAs [4]. This was a surprise because textbook immunology teaches that BM is a primary lymphoid organ involved in hemato- and lymphopoiesis, while secondary lymphoid organs like lymph nodes and spleen are involved in initiating and facilitating immune responses. Due to tumor-induced angiogenesis, solid tumors become connected to the blood circulatory system so that tumor cells and TAAs can enter the blood. Blood-derived naïve T cells can home to BM sinus endothelium, transmigrate into the parenchyma and interact there with resident CD11c+ DCs. The latter are highly efficient in taking up exogenous blood-borne antigen and processing it via MHC class I and class II (MHC-I and MHC-II) pathways. Upon scanning the DCs for expression of blood-borne antigens, the T cells with the corresponding T-cell receptor (TCR) form clusters with the antigen-presenting cells (APCs) in BM stroma, become activated, proliferate and differentiate into MTCs. Both the activated T cells and the MTCs can transmigrate back into the BM sinuses and recirculate via the blood. A novel tumor model was established for the study of long-term protective immunity and immune T cell memory [5]. Tumor-reactive immune cells against ESb-lacZ tumor cells were generated from a naïve T cell repertoire by a well-established ie priming/ip restimulation protocol and transferred to tumor-inoculated T cell-deficient nude (nu/nu) mice. The cell transfer prevented tumor outgrowth and resulted in the persistence of a high frequency of Gal-specific CD8+ T cells in the BM and spleen, as demonstrated by tetramer staining of CD8 T cells specific for an immunodominant Gal epitope [5]. In contrast, immune cell transfer without tumor cell challenge did not result in detectable levels of Gal-specific CD8 MTCs. Long-term immune memory and tumor protection could be maintained over four successive transfers of Gal-primed T cells between tumor-inoculated nu/nu recipients. The Gal-specific CD8+ MTCs from the first transfer could be activated and recruited into the peritoneal cavity by ip tumor cell challenge. From there they were harvested for a second adoptive transfer together with tumor cell challenge. About four weeks later, the Gal-specific MTCs had returned to a resting state and were detected in the BM. This long-term experiment (>6 months) with four rounds of antigenic restimulation and adoptive immune cell transfer demonstrated the longevity and functionality of the MTCs [6]. The results also suggested that the BM microenvironment has special features that are of importance for the maintenance of tumor dormancy and immunological T-cell memory. A low level of persisting Gal antigen appeared to favour the selection of Gal-specific MTCs over irrelevant MTCs in BM niches of CD8+ MTCs [6]. Acquisition of a BM phenotype by recirculating and tissue-resident MTCs has been described [7]. Redirection to the BM of gene-modified T cells was reported to improve T cell persistence and antitumor functions [8].
A dynamic kinetic view of human T cell memory concluded that homeostasis of circulating, proliferating and resting MTCs is controlled by different rheostats: tissue-exit and tissue-entry signals for circulating MTCs, proliferation-inducing signals for proliferating MTCs, and availability of a survival niche for tissue-resident, resting MTCs [9]. Primary CD4 and CD8 T-cell responses generated in BM are autonomous and can occur in the absence of classical secondary lymphoid organs (lymph nodes and spleen) [4]. In spite of the absence of molecular adjuvants, the BM T cell responses to blood-borne surrogate TAAs were not tolerogenic and resulted in the generation of CTLs. Primary T-cell responses in BM were also discovered in mice reconstituted with transgenic T cells from OT-I or OT-II mice specific for ovalbumin (OVA) [4]. The BM microenvironment, upon entry of antigens in the absence of adjuvants, also facilitates DC (APC):CD4 T cell interactions and the maintenance of CD4 memory [10,11]. Why the BM microenvironment does not require adjuvants to initiate primary T cell responses against blood-borne antigens is not yet clear, but it is possible that naïve T cells in this environment are in a higher state of activation. An important role for BM as a secondary lymphoid organ was confirmed later by demonstrating high frequencies of Wilms tumor antigen 1 (WT1)-specific CD8+ T cells in BM from tumor-bearing patients [12]. BM represents an excellent site for long-term maintenance of memory CD4+ and CD8+ T cells due to special niches providing survival cytokines such as IL-7 and IL-15 [11,13]. The link between MTCs and stromal cells in survival niches is very robust and can provide efficient memory over a lifetime in tissues such as the BM [14]. Mutation-derived tumor neoantigens play an important role in generating spontaneous anti-tumor immune responses. In recent years, molecular pathways have been identified which influence T cell immunoreactivity to tumor neoantigens and cancer immunoediting. For example, durable CD8+ neoantigen-specific T cell immunity was discovered to be controlled through mRNA m6A and the YTHDF1 m6A binding protein in DCs involved in cross-priming [15]. Type I interferon (IFN-I) was found to activate APC function in DCs through IFN-stimulated genes (ISG+DCs). Unlike cross-presenting DC1, ISG+DCs acquire and present intact tumor-derived peptide-MHC (pMHC) complexes [16]. In addition to MHC-I neoantigens recognized by CD8+ T cells, MHC-II neoantigens recognized by CD4+ T cells have been identified [17]. These have a key function in shaping tumor immunity and the response to immunotherapy. Spontaneous Anti-Tumor Immune Responses in Cancer Patients Spontaneous immune responses to TAAs or tumor neoantigens have been described not only in animal model systems but also in cancer patients. Immune cells can be isolated (i) from tumor samples as tumor-infiltrating lymphocytes (TILs), (ii) from peripheral blood-derived mononuclear cells (PBMCs) or (iii) from BM-derived mononuclear cells. Spontaneous anti-tumor immune responses in cancer patients could be analyzed from BM aspirate samples. BM samples from 39 primary operated breast cancer patients and 11 healthy females were analyzed for the presence and frequencies of spontaneously induced MTCs with peptide-HLA-A2-restricted reactivity against 10 breast tumor-associated TAAs and 3 normal breast tissue-associated antigens in short-term IFN-γ enzyme-linked immunospot (ELISPOT) assays.
Of the patients, 67% recognized TAAs, with a mean frequency of 144 TAA-reactive cells per 10^6 T cells. Strong differences in reactivity were noticed among TAAs, ranging from 100% recognition of prostate-specific antigen (p141-149) to only 25% recognition of MUC1 (p12-20) or Her-2/neu (p369-377). Reactivity to normal breast tissue-associated antigens was low [18]. The study revealed the shaping of a polyvalent and highly individual BM T-cell repertoire in cancer patients [19]. Enrichment of MTCs and other profound immunological changes in the BM were reported from untreated breast cancer patients [20]. The proportion of MTCs among CD4+ and CD8+ T cells was much higher in BM from cancer patients than in BM from healthy donors. The extent of MTC increase was related to the size of the primary tumor. Patients with disseminated tumor cells in their BM had more memory CD4+ T cells and more CD56+CD8+ cells than patients with tumor cell-negative BM [20]. BM samples and peripheral blood from 41 pancreatic cancer patients were characterized for location, frequencies and functional potential of spontaneously induced MTCs specific for individual or common TAAs. Pancreatic cancer is highly malignant and dominated by Th2 cytokines in patients' sera, suggesting systemic tumor-induced immunosuppression. Surprisingly, high numbers of tumor-reactive T cells were found in all BM samples and in 50% of blood samples. These cells secreted the Th1 cytokine IFN-γ upon stimulation with TAAs [21]. Detailed studies of cognate interactions between MTCs and APCs from the BM of cancer patients revealed bidirectional cell stimulation, survival and antitumor activity in vivo [19]. For example, IFN-α, which can be induced in DCs by T cells, has a reciprocal effect on T cells by inducing the expression of IL-12 receptor ß, enabling the T cells to respond to IL-12 and to differentiate into Th1 cells. Other relevant cytokines in this cognate interaction between DCs and CD4+ and CD8+ T cells are IL-2, IFN-γ and TNF-α [22].

Therapy of Human Tumors in NOD/SCID Mice with Patient-Derived Reactivated MTCs from BM

Freshly isolated T cells from the BM of breast and pancreatic cancer patients recognized autologous tumor cells and rejected them in a xenotransplant model, demonstrating their functional and therapeutic potential [23]. In short-term culture with autologous DCs pre-pulsed with tumor lysates, patients' MTCs from BM (but not from PBMCs) could be specifically reactivated to IFN-γ-producing and cytotoxic effector cells [23]. A single ip transfer of such restimulated BM T cells into NOD/SCID mice caused regression of autologous tumor xenotransplants. This immune response was associated with infiltration by human T cells and with tumor cell apoptosis and necrosis. It demonstrated the in vivo therapeutic efficacy of ex vivo reactivated BM-derived cancer-reactive MTCs from cancer patients. Transferred BM-derived CD45RA(−) MTCs, but not CD45RA(+) naïve T cells, infiltrated autologous tumor but not autologous skin tissue. The TILs had a central or effector memory phenotype and produced perforin. Many of them expressed P-selectin glycoprotein ligand 1 and were found around P-selectin(+) tumor endothelium. Tumor infiltration included cluster formation in tumor tissue by MTCs with co-transferred DCs. Depletion of DCs from restimulation cultures before transfer to NOD/SCID mice reduced therapeutic efficiency, suggesting an important contribution of APC restimulation in tumor tissue [24].
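As a technical aside to the BM T cell studies in this section, the precursor frequencies reported for the breast cancer ELISPOT analysis above result from a simple background-corrected normalization of spot counts to the number of plated T cells. The following minimal Python sketch illustrates this arithmetic; the function name and all per-well numbers are hypothetical illustration values (chosen so that the result matches the mean frequency quoted above), not data from the cited studies.

```python
# Minimal sketch (not from the cited studies): converting ELISPOT spot counts
# into precursor frequencies per 10^6 T cells, the unit used above.

def elispot_frequency(spots_antigen: int, spots_background: int,
                      t_cells_per_well: float) -> float:
    """Background-corrected spot-forming cells per 10^6 plated T cells."""
    specific_spots = max(spots_antigen - spots_background, 0)
    return specific_spots / t_cells_per_well * 1e6

# Hypothetical illustration values: 41 spots in the TAA-stimulated well,
# 5 spots in the unstimulated control well, 2.5 x 10^5 T cells per well.
freq = elispot_frequency(spots_antigen=41, spots_background=5,
                         t_cells_per_well=2.5e5)
print(f"{freq:.0f} TAA-reactive cells per 10^6 T cells")  # prints: 144
```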
The adoptive transfer studies [23,24] demonstrated selective homing of human MTCs to human tumors in xenotransplanted mice and suggested that tumor rejection is based on the recognition of TAAs on tumor cells and DCs by autologous, specifically activated central and effector MTCs.

Therapeutic Potential of Cancer-Reactive MTCs from BM in Cancer Patients

A review from 2015 described the spontaneous induction of cancer-reactive MTCs from BM, their maintenance by the BM microenvironment and their therapeutic potential [25]. A pilot clinical study investigated adoptive immunotherapy of advanced metastasized breast cancer with BM-derived cancer-reactive MTCs [26]. The BM MTCs apparently had an extensive expansion capacity in the patients [25]. Immunological responder patients showed a significantly longer overall survival (OS) than nonresponders (median survival 58.6 vs. 13.6 months; p = 0.009) [27].

Cellular Immunotherapy Counteracting Advanced Metastasized Cancer

Effective immune rejection of advanced metastasized cancer was demonstrated in a graft-versus-leukemia (GvL) animal model of already cachectic mice [28]. In situ activated tumor-immune T cells, induced in allogeneic, tumor-resistant, MHC-identical but superantigen-different donor mice, could transfer strong GvL effects accompanied by only mild graft-versus-host (GvH) reactivity. A single systemic immune cell transfer into 5 Gy-irradiated cachectic DBA/2 mice, bearing syngeneic tumors and macrometastases established for up to four weeks, led to massive infiltration of tumor tissue by CD4+ and CD8+ donor T lymphocytes. Primary tumors of 1.5 cm diameter were encapsulated and rejected from the skin, and liver metastases were eradicated. For the first time, such adoptive cellular immunotherapy was followed in individual live animals by 31P-NMR spectroscopy of primary tumors. This allowed evaluation of changes in tumor tissue pH. An approximately 25,000-fold excess of metastatic tumor cells could be rejected, as revealed quantitatively by FACScan analysis of lacZ gene-transfected tumor cells [28]. Lessons from such GvL studies in animals about complete remission of cancer in late-stage disease by radiation and transfer of allogeneic MHC-matched immune T cells, in particular of MTCs from the BM [29], were: (i) reversion of tumor tissue pH from acid to neutral after 3-4 days as a first sign of the immunotherapeutic effect, (ii) donor CD4 T cell infiltration in the tumors 6 days after cell transfer, (iii) formation of a broad capsule of fibrous tissue between the tumor area and the skin, (iv) tumor rejection and long-term survival, (v) wound healing and scar tissue formation at sites of primary tumor rejection (skin) and at sites of metastases (liver and kidney), (vi) reconstitution of normal fur at the site of the rejected primary tumor, (vii) cellular interactions: donor CD4+ and CD8+ immune T-T cell interactions, donor T cell-host macrophage interactions around liver metastases, and vß6 donor T cells recognizing a tumor-associated viral superantigen (vSAG-7) interacting with tumor cells and APCs [29], (viii) reversibility of a state of cachexia, and (ix) disproval of the hypothesis that a tumor is a never-healing wound [30]. Table 1 lists the most important aspects presented in this chapter.

The Tumor Microenvironment

The TME consists of ECM and stromal cells, such as immune cells, mesenchymal cells (MSCs), cancer-associated fibroblasts (CAFs) and vascular endothelial cells. The ECM contains ECM proteins, type IV collagen, galectin-1, proteoglycans and glycoproteins.
Type IV collagen is the major component of the basement membrane that separates epithelium and epithelium-derived tumors (carcinomas) from stroma. The key enzymes and inhibitors regulating ECM turnover are matrix metalloproteinases (MMPs) and tissue inhibitors of MMPs (TIMPs). Unlike cancer cells, which transform through a series of genetic alterations, stromal cells are mostly genetically intact. However, stromal cells can become corrupted by malignant cells, which try to create a microenvironment permissive for tumor growth and cancer progression [31]. Key findings which have defined the TME have been reviewed [31]. Among the nine key findings are the "seed and soil" hypothesis by Paget from 1889 and the discovery of tumor angiogenesis by Folkman and colleagues in 1971 [32]. We provide a few examples to demonstrate interactions of tumor cells with their TME. Such interactions serve to advance tumor cell invasive growth, to suppress or evade attacks by the immune system and to establish local or distant metastases. An understanding of the cellular and molecular pathways that lead to an immunosuppressive TME is a prerequisite for targeted therapies against the TME.

BM TME

Solid tumors such as breast, prostate, and lung cancers frequently spread to bone, causing severe pain, disability and cancer-related deaths [33]. Transforming growth factor ß (TGF-ß), bone morphogenic protein (BMP) and Wnt signaling pathways are key mediators of paracrine signaling between bone stromal cells and tumor cells. Bone metastases from solid tumors change the complex BM microenvironment [34]. Novel translational approaches targeting the BM microenvironment have recently been presented [34]. Clinical trials targeting bone metastasis pathways use (i) monoclonal antibodies (mAbs) against TGF-ß, CCL2 or CXCL8 chemokines or (ii) small molecule inhibitors (SMIs) as antagonists of the chemokine receptor CXCR4 or as inhibitors of MMPs involved in matrix remodeling [34]. Other emerging therapeutic targets are CXCR1, RANKL, PTHrP, VEGF, and LOX [34].

CAFs in the TME

Recent reviews highlight the complexity of CAF biology, including CAF heterogeneity, functionality in drug resistance, TGF-ß-mediated immune evasion, contribution to a progressively fibrotic tumor stroma, the involved signaling pathways and the participating genes [35-37]. Infiltration of the TME by modified stromal cells initiates remodeling of the tumor ECM through increased secretion of fibronectin and collagen and expression of ECM receptors, ultimately generating a modified fibrotic desmoplastic TME. CAF-assisted contractility of this tumor matrix induces signaling pathways (e.g., NF-κB, JAK/STAT3), in conjunction with the direct interaction of plasminogen activator inhibitor-1 (PAI-1) with respective cell surface receptors (LRP1, uPAR) on inflammatory and cancer cells. The clinical outcomes are multidrug resistance, cancer stem cell self-renewal, poor disease outcome and shorter disease-free survival [38]. TGF-ß modulates ovarian cancer invasion by upregulating CAF-derived versican (VCAN) in the TME [39]. VCAN expression by CAFs is regulated through TGF-ß receptor type II and SMAD signaling. Upregulated VCAN promotes the motility and invasion of ovarian cancer cells by activating the NF-κB signaling pathway and by upregulating expression of CD44, MMP9, and a hyaluronan-mediated motility receptor [39].

CTLs eliminate tumor cells via a combination of killing modes [41].
Lethal hits at the lytic synapse between a CTL and a tumor cell lead within seconds to sustained Ca2+ release, membrane damage and caspase 3 activity. Against this attack, tumor cells can induce ultra-rapid defense mechanisms based on the synaptic lysosomal/late endosomal (LLE) membrane repair pathway [41]. To date, four modalities have been implicated in target cell death upon CTL attack: intrinsic apoptosis, extrinsic apoptosis, pyroptosis and ferroptosis. In addition to the ultra-rapid defense mechanisms, tumor cells are able to establish slower defense mechanisms (e.g., removing damaged organelles) and constitutive defense mechanisms (e.g., upregulation of inhibitor of apoptosis proteins) [41].

MDSCs and Immunosuppression

During tumor progression, immunosuppression is mediated, among others, by myeloid-derived suppressor cells (MDSCs) [42,43]. BM-derived immature myeloid cells (IMCs) differentiate, under steady-state conditions, into granulocytes, macrophages and DCs. Under chronic inflammatory conditions, typical for tumor progression, this differentiation is impaired, leading to accumulation of IMCs. Immunosuppressive pathways by MDSCs inhibit T cell functions through the expression of various cytokines and immune regulatory molecules, inhibition of lymphocyte homing, stimulation of other immunosuppressive cells, depletion of metabolites critical for T cell function, expression of ectoenzymes regulating adenosine metabolism and the production of reactive oxygen or nitrogen species [42]. There are two major groups of MDSCs, namely granulocytic/polymorphonuclear MDSCs (PMN-MDSCs) and monocytic MDSCs (M-MDSCs). Major developmental factors for PMN-MDSCs are high levels of granulocyte-macrophage colony-stimulating factor (GM-CSF), vascular endothelial growth factor (VEGF), IL-6, IL-1ß, adenosine and hypoxia-inducible factor 1α (HIF1α). Major developmental factors for M-MDSCs are high levels of macrophage colony-stimulating factor (M-CSF), VEGF, adenosine and HIF1α [43].

MDSC, CAF and Neutrophil Involvement in the Pre-Metastatic Niche

MDSCs, CAFs and neutrophils contribute to the formation of the pre-metastatic niche by "priming" it [44]. Neutrophils or PMN-MDSCs are recruited to the pre-metastatic niches mostly through the chemokine receptors CXCR2 and CXCR4. Neutrophil extracellular traps (NETs) are extracellular structures released by neutrophils in response to stimuli. They are composed of cytosolic and granule proteins as well as DNA. In pancreatic cancer, NETs were reported to be triggered by tissue inhibitor of metalloproteinase 1 (TIMP1) [45]. Preparation of pre-metastatic niches includes matrix remodeling, immunosuppression, NETs, reactive oxygen species (ROS) production, inflammation and tumor cell recruitment. PMN-MDSCs also escort tumor cells into the circulation. This leads to increased metastatic potential, to inhibition of NK cells and to increased extravasation. NETs finally trap tumor cells in the microvasculature [44]. It is clear from this information that a TME is relevant not only for a primary tumor but also for secondary or tertiary sites, i.e., metastases. The "seed and soil" hypothesis by Paget already stated that metastases need a proper "soil" in order to grow.

M2 TAMs

In the TME, tumor-associated macrophages (TAMs) are abundant among the stromal cells. TAMs are classified into two major subsets, M1-like and M2-like TAMs [46]. M1-like TAMs are activated by IFN-γ, TLRs, lipopolysaccharide and GM-CSF.
M1-like TAMs strengthen a T-helper 1 response and secrete pro-inflammatory cytokines. M2-like TAMs are activated by IL-4 and IL-13. They enhance a T-helper 2 response and participate in the regression of inflammation and in wound healing by secreting anti-inflammatory factors such as IL-10 and TGF-ß. M1-like TAMs secreting pro-inflammatory cytokines such as TNF-α, IL-1, IL-6, IL-12 and IL-23 have anticancer effects, while M2-like TAMs contribute to tumor progression [46]. Tumor-derived exosomal non-coding (nc) RNAs were found to induce M2 macrophage polarization through signaling pathway activation, signal transduction, and transcriptional and post-transcriptional regulation. Exosomes from TAMs also play a role in intercellular communication. TAM-derived exosomal ncRNAs promote tumor proliferation, angiogenesis, immunosuppression, chemoresistance and metastasis [47]. Signaling through signal transducer and activator of transcription 3 (STAT3) and NF-κB was reported to mediate crosstalk between M2 TAMs and malignant cells. Disruption of such signals caused a switch from the M2 phenotype (CD163) to the M1 phenotype (CD68), associated with reduced levels of IL-10, TGF-ß and CCL22 [48].

Tolerogenic DCs

DCs play an important role in the TME [49]. Their absence or paucity characterizes "cold" TMEs such as those of pancreatic cancer [50]. The DCs may be tolerogenic or may polarize the T cell response towards Th17. Factors regulating type 1 conventional DC (cDC1) function in the TME have been described [50]. cDC1 production of IL-12 can be directly inhibited by IL-10 released by macrophages or other immunosuppressive cells. Tumor-derived factors such as VEGF inhibit the maturation of cDC1 [51]. Tumors are frequently infiltrated by type 2 conventional DCs (cDC2). Upon migration to the lymph node, these DCs are able to initiate CD4+ T cell responses if they are not restrained by regulatory T cells (Tregs) [51,52]. Tolerogenic CD103+ DCs produce the enzyme IDO and secrete the cytokines IL-4 and IL-10 [53]. The acquisition of an immunosuppressive DC phenotype is tightly regulated by epigenetics [54].

Anergic T Cells and Treg Cells

A balance of positive and negative signals tunes the immune reactivity of T lymphocytes [55,56]. To avoid immune-mediated tissue damage and autoimmunity, the signals generated by immunoreceptors must be tightly controlled by negative signals [55]. The best-studied T cell-activating receptors are the TCR and CD28, and the best-studied T cell inhibitory receptors are PD-1 and CTLA-4. Activating receptors signal via tyrosine kinases (Tyr kinases) (e.g., ZAP70 and LCK) to produce diacylglycerol (DAG) via PLCγ1 and via phosphoinositide (PI) kinases such as PI3Kα. Conversely, inhibitory receptors signal via Tyr phosphatases (e.g., SHP-2) to metabolize DAG and via PI phosphatases (e.g., SHIP-1). The joint inhibitory action of PD-1 on the PI3K/AKT and MAPK pathways results in transcriptional modulation of cell cycle progression [55,56]. Anergic T cells from the TME are characterized by low expression of the TCR, of perforin and of Fas-L [51]. TILs can either respond to anti-PD-1/PD-L1 immune checkpoint inhibition (ICI) or be resistant. Resistance mechanisms of TILs from solid tumors to ICI have been described [55]. They have to do with negative receptor signaling in T cells [56]. RhoG, a member of the Rac family of small GTPases, induces T cell anergy by promoting the activities of transcription factors, including nuclear factor of activated T cells (NFAT)/AP-1 [57].
Accumulation of Tregs in the TME has been correlated with poor prognosis in many solid tumors [58]. A recent study reveals that obstruction of antitumor immunity by Tregs is due to promotion of T cell dysfunction and to restriction of clonal diversity in CD8+ TILs [58].

Dysfunctional NK Cells

Low numbers of dysfunctional NK cells are often observed in many advanced solid human cancers [59]. Potential mechanisms that influence suboptimal mature NK cell recruitment and function in the TME have been discussed [59]. The expression of major activating NK receptors, NK cytolytic activity and cytokine production were inhibited upon co-culture with PMN-MDSCs through cell-to-cell contact, soluble factors and exosomes [60].

Tumor-Derived Factors

Tumor-derived factors modulate the TME. They include growth factors (e.g., TGF-ß), immune inhibitory ligands (e.g., PD-L1), prostaglandins and lactic acid as a by-product of tumor metabolism. A variety of cytokines, chemokines and growth factors are produced in the TME by different cells, representing a complex ecosystem and network of cell interaction, regulation of differentiation, activation, function and survival or death of various cell types [62].

Metabolic Barrier, T Cells, Hypoxia and Tumor Dormancy

T cells encounter a hostile metabolic environment in tumors [63]. TILs isolated from clear cell renal carcinoma patients showed decreased glucose uptake as well as small, fragmented mitochondria with elevated ROS [63]. T cells primed in nutrient-rich lymphoid tissues enter tumors, where cancer cell metabolism and poor vascular exchange lead to competition for resources. One hostile aspect of the TME is hypoxia, created by the high metabolic rate of tumor cells in conjunction with inadequate vasculature. Under low-oxygen states, the transcription factor HIF is freed from its negative regulator von Hippel-Lindau (VHL) to upregulate its target genes [64]. T cells undergo metabolic re-programming in different stages of their life:

1. Naïve T cells take up glucose via the transporter Glut1. Glucose is metabolized to pyruvate, which fuels the tricarboxylic acid (TCA) cycle and oxidative phosphorylation (OXPHOS).
2. Upon cognate antigen encounter on APCs, T cells become activated and rapidly take up more glucose and, additionally, glutamine to fuel their bioenergetic needs. Activated T cells perform aerobic glycolysis, which shunts products of glycolysis to biosynthetic processes necessary for proliferation and effector function. Like tumor cells, activated T cells produce lactate as a byproduct.
3. Once the antigen is cleared, T cells can form long-lived MTCs. In memory cells, AMP-activated protein kinase (AMPK) signaling stimulates fatty acid oxidation. The fatty acids are synthesized upon uptake of glycerol. MTCs also increase their mitochondrial mass and spare respiratory capacity to prepare for a future encounter with cognate antigen.
4. T cells can become exhausted if they fail to clear antigens, such as during chronic infections or cancer. TILs isolated from tumors display elevated levels of PD-1. This decreases PI3K/AKT/mammalian target of rapamycin (mTOR) signaling and glycolysis. Exhausted TILs often have dysfunctional mitochondria and decreased mitochondrial mass [63].

A recent study revealed that TCR-induced upregulation of Myc-dependent glycolytic metabolism in murine CD8+ T cells is substantially inhibited by TGF-ß [65]. TGF-ß has pleiotropic effects on T cell populations and plays an essential role in the maintenance of immune tolerance [65].
Another recent study revealed that, in primary breast cancer, tumor cells that resist T cell attack become quiescent. Such quiescent cancer cells (QCCs) were found to form clusters (niches) with reduced immune cell infiltration [66]. A transcriptomic analysis of TILs inside and outside such QCC niches revealed hypoxia-induced programs and identified more exhausted T cells, tumor-protective fibroblasts, and dysfunctional DCs inside the clusters. Thus, QCCs constitute immunotherapy-resistant reservoirs that orchestrate a local hypoxic immune-suppressive milieu blocking T cell function [66]. Like estrogen receptor-positive breast cancer, prostate cancer can become undetectable after curative-intent radiation or surgery, only to recur years or decades later. For induction of tumor dormancy, prostate cancer cells respond to signals from their microenvironment, including TGF-ß2, BMP-7, GAS6, and Wnt-5a. These signals induce further signals and transcription factors, including SOX2 and NANOG, which likely affect the epigenome through histone modification [67]. That signals from the microenvironment can be of relevance for the immunobiology of cancer metastasis and can lead to shifts in tumor cell phenotypes was hypothesized as early as 1980 [68]. At that time, the main hypothesis discussed in the USA was that metastasis is a result of selection of tumor cell variants pre-existing in primary tumors [69]. Dormant tumor cells appear to have similarities with cancer stem cells [70]. Genetic makeup in tumor dormancy may be pivotal, but cellular context must be paramount [71]. The main pathways involved in hypoxia-dependent EMT are TGF-ß, PI3K/Akt, Wnt, and Jagged/Notch. Responsible for transducing TGF-ß signals are SMAD proteins. The SMAD complex binds specific DNA regions along with transcription factors (e.g., SNAIL and ZEB) in order to modulate EMT-related gene expression [64].

Therapy Resistance and Cancer Hallmarks

Therapy resistance is our last topic of the TME. Hanahan and Weinberg compiled eight key concepts with regard to cancer into hallmarks [73]: sustained proliferative signaling, resisting cell death, deregulating cellular energetics, activating invasion and metastasis, enabling replicative immortality, inducing angiogenesis, avoiding immune destruction and evading growth suppressors. Recently, new hallmarks have been added: (i) dedifferentiation and transdifferentiation, (ii) epigenetic dysregulation, (iii) altered microbiome, and (iv) altered neuronal signalling [74]. The new dimensions of cancer have been summarized by Hanahan [75]. Table 2 lists the topics discussed in chapter 3.

Counteracting Immunosuppression via Oncolytic Newcastle Disease Virus

The many features of Table 2 and the many examples of tumor-TME interactions presented demonstrate the complexity of this phenomenon. Fortunately, basic and translational research have provided many strategies on how to counteract the immunosuppressive TME. We will provide an overview focussing on the various cells of the TME and on oncolytic NDV.

Counteracting CAFs

It has been demonstrated that the cellular cross-talk between CAFs and cancer cells promotes OV activity [76]. TGF-ß, produced by tumor cells, reprogrammed CAFs, reduced their level of antiviral transcripts and rendered them sensitive to OV infection.
In turn, CAFs produced high levels of fibroblast growth factor 2 (FGF2), initiating a signaling cascade in cancer cells that reduced retinoic acid-inducible gene I (RIG-I) expression and impeded the ability of malignant cells to detect and respond to OV infection. Furthermore, in xenografts derived from pancreatic cancer patients, the expression of FGF2 correlated with the susceptibility of the cancer cells to OV infection. An OV engineered to express FGF2 showed improved therapeutic efficacy in tumor-bearing mice compared to the non-engineered parental virus [76]. That TGF-ß produced by tumor cells can reprogram CAFs and render them sensitive to OV infection could mean enhanced replication of oncolytic NDV in the TME upon intratumoral application and an increase in its therapeutic efficacy. In support of this conclusion, expression of RIG-I and other type I IFN-responsive genes (IRF3, IFN-ß, IRF7) was reported to determine resistance or susceptibility of cells to infection by NDV [77]. Low expression of RIG-I was associated with increased susceptibility to infection [77]. Direct evidence does not yet exist, however, that NDV treatment can revert CAF-induced immunosuppression.

ICI Treatment

One approach to changing the TME is immune checkpoint inhibition (ICI), which affects T cells. ICI can be considered at present the most successful cancer immunotherapy in solid malignancies. In a significant proportion of treated cancer patients, such treatment appears to turn "cold", therapy-resistant tumors into "hot", T cell-inflamed tumors. However, even in ICI-responsive tumors like malignant melanoma, a high percentage of patients remains unresponsive. In some successfully treated patients (complete remission of tumor lesions), early tumor regression was followed by a phase in which residual tumors remained dormant. The cytotoxic mechanisms of the regression phase included apoptosis, necrosis, necroptosis and immune cell-mediated cell death. To explain the dormant state, a recent review proposes immune (cytokine)-mediated induction of senescence in cancers as one important mechanism. The immune system's ability to establish defensive walls around tumors isolates tumor cells and keeps them in a non-proliferating state [67].

OV Treatment

OV cancer immunotherapy and the transformation of "cold" into "hot" tumors has been discussed and associated with four different types of immune system activation: (i) release of danger signals and DC maturation, (ii) T cell priming and trafficking, (iii) antibody-dependent cellular cytotoxicity (ADCC) and phagocytosis and (iv) T cell/NK cell-mediated tumor cell killing [78]. Oncolytic NDV appears to change cold into hot tumors via mechanisms (i) and (iv). Tumor cell infection by oncolytic NDV leads to induction of immunogenic cell death (ICD) with expression of pathogen-associated molecular patterns (PAMPs) (HN, ppp-RNA leader, and dsRNA) and release of damage-associated molecular patterns (DAMPs) (ecto-CRT, HSP, HMGB1, and ATP) [79]. A schematic illustration of the mechanisms of antitumor activity of NDV is shown in Figure 1. Further molecular details have been illustrated [80]. Pro-immunotherapeutic properties of OVs include immune activation at the tumor site and the possible effects of transgene expression [81]. Virus-based immuno-oncology models will help to further differentiate between cold and hot tumors. Recent reviews focus on OVs combined with bi- and tri-specific antibodies as next-generation cancer immunotherapy [82,83].
Also, OVs have been developed as nanomedicines against an immunosuppressive TME [84]. Future developments of OV therapy, including genetic modification and combination therapy, have been discussed [85].

OV Treatment Combined with ICI and Role of DCs

A phase Ib clinical trial tested the impact of intratumoral OV therapy with talimogene laherparepvec (TVEC) on CTL infiltration and therapeutic efficacy in n = 21 patients with advanced melanoma treated with an anti-PD-1 (pembrolizumab) antibody [86]. OV treatment promoted intratumoral T cell infiltration and improved the anti-PD-1 immunotherapy. The therapy was generally well tolerated, and the objective response rate was 62% [86]. Intra-tumoral application of oncolytic NDV to murine melanomas had abscopal effects at sites of secondary tumors and made these susceptible to ICI therapy [87]. In the context of oncolytic NDV therapy, innate immune sensing of viral RNA via RIG-I and activation of innate immune cells may enhance DC accumulation in tumors and make them more susceptible to ICI. A recent study reported that retinoic acid (the vitamin A derivative tretinoin) induces an IFN-I-driven inflammatory TME, sensitizing it to ICI [88]. Apart from T cells, DCs are also important in the TME. A recent paper demonstrated that expansion and activation of CD103+ DC progenitors at the tumor site can enhance tumor responses to therapeutic PD-L1 and BRAF inhibition [89]. Adoptive T cell immunotherapy studies of human tumors in NOD/SCID mice had demonstrated that co-transfer of TAA-laden DCs supports T cell effector functions and maintenance within the treated tumor tissue [24].

Downregulation of MDSCs

NDV can downregulate immunosuppression in the TME. One example is the effect of antitumor vaccination by NDV pHN plasmid DNA vaccination [90]. Such vaccination at the site of the mouse ear pinna induced high levels of systemic IFN-I and reduced tumor growth in a prophylactic model of subcutaneously implanted mammary carcinoma. Analysis of the TME revealed a significant increase in NK cells and a decrease in MDSCs [90].

Macrophage Activation and Polarization

NDV can also activate macrophages. It induces synthesis of nitric oxide (NO) and causes activation of NF-κB in murine macrophages. These were part of an activation process that included stimulation of adenosine deaminase and inhibition of 5′-nucleotidase [91]. Further studies revealed that NDV stimulated human monocytes to kill various human tumor cell lines and that the tumoricidal activity was mediated by TRAIL [92]. Soluble TRAIL-R2-Fc, but not soluble CD95-Fc or TNF-R2-Fc, showed a specific blocking effect. TRAIL induction on human monocytes by NDV was independent of viral replication and functioned also with UV-inactivated NDV [92]. These results suggest that oncolytic NDV in the TME can counteract M2 TAM-mediated immunosuppression (see Table 2).

DC Activation and Polarization

Oncolytic NDV was reported to activate a number of innate immunity sensing receptors in immune cells: the dsRNA-activated protein kinase (PKR), RIG-I, Toll-like receptors (TLRs) and the type I IFN receptor (IFNAR) [79,93]. The effects on DCs and Th cells were priming towards DC1 and Th1 responses, thereby counteracting Th2 and Th17 polarization. In human DCs, NDV infection was demonstrated by a systems biology analysis to upregulate 779 genes within 18 h through a choreographed cascade of transcription factors [94].
T Cell Activation and Polarization

T cells require for activation (i) signals via the antigen-specific T cell receptor complex TCR-CD3 as well as (ii) co-stimulatory signals via other receptors such as CD28. Infection of human melanoma cells by NDV was reported to provide co-stimulatory activity towards autologous melanoma-specific CD4+ T helper cell TILs. The co-stimulatory signals were independent of CD80/CD86 signaling [95]. The earliest report of T cell activation by NDV is from 1993. A greater than six-fold increase in peptide-specific CTL responses was observed. The findings suggested that NDV, or viral HN expressed on APCs or tumor cells, can exert a T cell co-stimulatory function [96]. An experimental study 10 years later revealed that modification of tumor cells by a low dose of NDV could potentiate tumor-specific CD8+ CTL activity via induction of IFN-I [97]. The generation of CTL activity in vitro could be blocked specifically by antisera to IFN-I. Similar effects were observed in vivo, suggesting that IFN-I is essential for the generation of CTL activity in general [97]. IFN-I was reported to provide a third signal to CD8+ T cells to stimulate clonal expansion and differentiation [98]. Naïve T cells (Tn) require two homeostatic signals for long-term survival: TCR-pMHC contact and IL-7 stimulation. Recently, it was reported that microbial exposure has an impact on Tn homeostasis. The conversion and expansion of long-lived Ly6C+ CD8+ Tn cells depended on IFN-I, which upregulates MHC class I and enhances tonic TCR signalling in differentiating Tn cells. Moreover, for these cells, IFN-I-mediated signals optimized their homing to secondary sites, extended their lifespan and enhanced their effector differentiation [99]. Upregulation of PD-L1 by oncogenic mutations such as EGFR and BRAF and by activation of PI3K and JAK-STAT3 in tumor cells represents critical pathways modulating tumor immune responses in the TME. STAT3 was recently demonstrated to contribute to oncolytic NDV-induced immunogenic cell death (ICD) in glioma, lung cancer and melanoma [100]. Whether strong T cell co-stimulation can cause re-activation of unreactive, possibly anergized MTCs from late-stage cancer patients was unknown. To investigate this, a bispecific anti-CD28 fusion protein (bsHN-CD28) was produced which can easily attach to the autologous NDV-modified tumor cell vaccine ATV-NDV. Fourteen colorectal carcinoma (CRC) patients with unresectable late-stage disease were treated by vaccination with ATV-NDV to which increasing amounts of bsHN-CD28 had been attached. While no severe adverse events were recorded, all patients showed an immunological response of tumor-reactive MTCs at least once during the course of five vaccinations. A dose-response relationship between the response and the amount of co-stimulatory protein was seen. A partial response of metastases was documented in four patients. The study suggests that the three-component vaccine is safe and can re-activate possibly anergized T cells in a chronic disease like advanced-stage cancer [101].

NK Cell Activation

Upon infection by oncolytic NDV, human carcinoma and melanoma cells showed enhanced expression of ligands for the NK cell cytotoxicity receptors NKp44 and NKp46 [102]. The HN protein of NDV served as the ligand. NKp44- and NKp46-CD3ζ lacZ-inducible reporter cells were activated by NDV-infected tumor cells. NDV-infected tumor cells stimulated NK cells to produce increased amounts of the effector lymphokines IFN-γ and TNF-α.
NK cell lysis of NDV-infected tumor cells was eliminated by treatment of the target cells with the neuraminidase inhibitor Neu5Ac2en. These results suggested that direct activation of NK cells contributes to the antitumor effects of NDV [102]. Further studies revealed that HN, upon interaction with the NKp46 receptor, upregulates expression of the tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) death receptor (TRAIL-R) in murine NK cells through activation of spleen tyrosine kinase (Syk) and nuclear factor kappa B (NF-κB) [103]. Exposure of NK and T cells to NDV resulted in enhanced tumoricidal activity that was mediated by upregulated TRAIL via an IFN-γ-dependent pathway [103].

Targeting Rac1 and Spread of NDV in Tumors

4.8.1. Targeting Rac1

Oncolytic NDV exerts anti-neoplastic as well as immune stimulatory properties. The molecular mechanisms of these dual properties have recently been elucidated [93]. One important aspect of the virus's anti-neoplastic activity is the targeting of migratory cancer cells via the Rho GTPase Rac1 [104]. Figure 1 shows a schematic illustration of a migratory and invasive glioblastoma cell. The direction of cell movement is accompanied by an increase in expression of Rac1 at the leading edge of the lamellipodia. NDV targets Rac1 at the lamellipodia via macropinocytosis/endocytosis. Following cell entry, NDV targets the cap-dependent translational machinery through the MNK1/2-eIF4E axis [93]. Tumor-selective virus replication then occurs in autophagosomes [56]. The F and HN proteins play important roles in virus release and immune cell activation. Oncolysis is exerted by ICD [93].

Spread of Oncolytic NDV in Tumors

Virus spread in tumors starts with virus progeny encapsulation, membrane budding and virus release mediated via M, HN and F [93]. The release of progeny virions from the surface of infected cells is facilitated by neuraminidase activity located at the sialidase ß-propeller domain of HN [92]. Another important virus protein for spread in tumors is the fusion protein F. Mutations at the cleavage site of the F precursor protein F0 facilitate multicyclic virus replication. The formation of syncytia by fusogenic NDV strains leads to virus spread to neighbouring cells and reduces exposure to virus-neutralizing antibodies [93]. NDV particles produced in autophagosomes can possibly be exported from the cell via a process designated secretory autophagy [105]. NDV infection of stem cell-enriched spheroids of lung cancer inhibited their 3D growth potential in vitro. The infection resulted in the degradation of LC3 and P62, two hallmarks of autophagy maturation. Apparently, NDV promoted autophagy flux in these spheroids, as confirmed by transmission electron microscopy [93]. NDV taken up in Rab5a-positive endosomes or macropinosomes can be recycled for export via exosomes [106]. This would increase viral spread in tumor tissue. Rab5a was found to be associated with genes involved in exosome secretion [107]. NDV-related exosomes containing NP proteins and microRNAs were reported to exhibit replication-promoting and IFN-ß-suppressing abilities [108,109].

Suppression of the Metabolic Barrier

The TME also represents a metabolic barrier (Table 2). Cancer cells increase glycolysis rates to generate ATP as the primary energy source for cell growth and proliferation. A recent study revealed that infection of human breast cancer cells with oncolytic NDV suppresses the glycolysis pathway.
The NDV-exposed cancer cells, in contrast to normal embryonic REF cells, showed decreased hexokinase activity, decreased pyruvate and ATP concentrations and decreased acidity. NDV infection induced cell death (oncolysis) in the cancer cells but not in the normal control cells [110]. Blockade of the immunosuppressive oncometabolic circuitry has been reported upon application of the oncolytic adenovirus Delta-24-RGDOX in combination with IDO inhibitors [111]. Another recent study reports that localized delivery of PD-1 inhibitors by an engineered oncolytic herpes simplex virus (YST-OVH) activates a collaborative intratumoral response to control tumors and synergizes with CTLA-4 or TIM-3 blockade [112]. Oncolytic NDV is a cancer-therapeutic biological agent that, in contrast to chemotherapy and radiotherapy, does not require cells to be in a proliferating state. The RNA virus replicates in the cytoplasm of cells and is independent of DNA replication. NDV can replicate in X-irradiated cells (e.g., the ATV-NDV vaccine) because X-irradiation damages DNA but not RNA. As a consequence, oncolytic NDV can potentially replicate in and destroy tumor cells in a resting state, such as tumor stem cells or cells from tumor dormancy, which may not be affected by chemo- or radiotherapy. This suggests that oncolytic virus therapy with NDV may well complement conventional cancer therapies [113].

Breaking Resistance to Hypoxia, Apoptosis, TRAIL, Drugs and Small Molecule Inhibitors (SMIs)

The center of a growing tumor is often hypoxic. A hypoxic TME induces the transcription factor HIF, which influences gene expression and contributes to tumor radio- and chemoresistance. It was reported that oncolytic NDV can break resistance to hypoxia [114]. Oncolytic NDV can also break resistance to apoptosis and TRAIL. An increased oncolytic activity was reported in a human non-small-cell lung cancer cell line overexpressing the anti-apoptotic protein Bcl-xL. The enhanced oncolytic activity was secondary to enhanced viral replication and syncytium formation [115]. A similar result was observed with apoptosis-resistant primary melanoma cells that overexpressed the inhibitor of apoptosis protein Livin [116]. Caspases could cleave Livin to create a truncated protein with pro-apoptotic activity [116]. TRAIL-resistant hepatocellular carcinoma-derived cell lines were found to be more susceptible to NDV-mediated oncolysis than TRAIL-sensitive cells. IFN-stimulated gene 12a over-expression or silencing enhanced or reduced, respectively, the cells' TRAIL sensitivity [117]. Recent treatment strategies incorporating ICIs and anti-angiogenic agents have brought many changes and advances in clinical cancer treatment [118]. However, challenges still exist with regard to immune-suppressive tumors, which are characterized by a lack of T cell infiltration and by treatment resistance. Crosstalk between angiogenesis and immune regulation in the TME has recently been reviewed [118]. Rac1 signaling has been identified as a major mediator of drug-resistance mechanisms [119]. Examples are v-raf murine sarcoma viral oncogene homolog B (BRAF) protein inhibitors in melanoma. Their efficacy is restricted by resistance mechanisms. Under NDV-mediated hyper-activation of Rac1, Rac1-GTP activates Pak1, leading to the downstream activation of mitogen-activated protein kinase kinase (MEK) and to bypassing of BRAF inhibition [119].
Breaking of T Cell Tolerance to TAA-Expressing Tumor Cells

NDV infection of human melanoma cells could break the tolerance of a melanoma-specific CD4+ T helper cell line [96]. Potentially anergized TAA-specific T cells from late-stage CRC patients could become re-activated by strong T cell co-stimulation involving NDV- and anti-CD28-mediated signals [101]. In tumor immunology, the concept of augmenting co-stimulatory signals in T cells is complementary to that of inhibiting negative signals delivered via co-inhibitory checkpoint receptors.

Breaking Resistance to Oncolysis, to ICI and to Anti-Viral Immunity

Intra-tumoral application of NDV to murine melanoma [86] and to oncolysis-resistant bladder cancer [120] had abscopal effects at sites of secondary tumors and made these susceptible to ICI therapy. Oncolysis-independent immune stimulatory effects of NDV are based on increased adhesiveness mediated by HN [92]. Anti-viral immunity is considered a major hurdle for effective therapeutic activity of OVs. With regard to NDV, it was reported from an animal model that pre-existing anti-viral immunity potentiated rather than inhibited its immunotherapeutic efficacy [121]. Chapter 4.10 has summarized evidence that oncolytic NDV has the potential to break therapy resistance and immune resistance. It is the first OV for which such potential has been reported. This explains why the authors focus on NDV.

Recruitment of Cancer-Reactive MTCs

Spontaneous anti-tumor T cell responses in the BM against blood-borne antigens, including TAAs, have been described, as has the presence of cancer-reactive MTCs in the BM of cancer patients (Table 1). The recruitment of such cells from the BM to the site of a primary tumor or a metastatic site by active-specific vaccination can be expected to exert a tumor-protective effect. One of many clinical studies supporting this concept was performed in colorectal carcinoma (CRC) patients. CRC is one of the leading causes of cancer-related deaths worldwide. Surgery remains the primary curative treatment, but nearly 50% of patients relapse as a consequence of micrometastatic or minimal residual disease (MRD) present at the time of surgery. A prospective randomized trial investigated the efficiency of adjuvant active-specific immunization (ASI) with the autologous NDV-modified tumor vaccine ATV-NDV in stage IV CRC patients as a tertiary prevention method following resection of liver metastases [122]. Fifty patients were available for analysis after a long follow-up period of about ten years. While rectal carcinoma patients apparently did not profit from this type of immunotherapy, a subgroup analysis revealed a significant advantage for vaccinated colon cancer patients with respect to overall survival (OS) and metastasis-free survival [122]. In the control arm, 78.6% had died; in the vaccinated arm, only 30.8%. The trial provides clinical evidence for the value and potential of the cancer vaccine ATV-NDV. The observed improvement of long-term survival was explained by activation and mobilization of a pre-existing repertoire of cancer-reactive MTCs [123], which reside in distinct niches of the patients' bone marrow [14]. This is made possible by the use of an autologous cancer vaccine. MTC niches neighbour those of hematopoietic (HSC) and mesenchymal (MSC) stem cells. BM MTCs also contain a subset of stem memory T cells (SMTs) in addition to effector (EMTs) and central memory T cells (CMTs).
The recruitment of cancer-reactive BM MTCs to the site of vaccination with ATV-NDV occurs through recognition of TAAs in association with NDV-induced pro-inflammatory cytokines and chemokines [123].

In Situ Vaccination

In situ vaccination against cancer is a concept being developed based on mechanisms of immunogenic cell death (ICD) and on DC vaccination. Knowledge about the therapeutic value of DC vaccines has increased in the past decade, and improvements of this "advanced therapy medicinal product" (ATMP) have been achieved. For instance, a superior monocyte-derived DC preparation has been reported. It includes short-term culture with IL-15, pro-inflammatory cytokines and immunological danger signals. In situ silencing of programmed death ligands potentiated the anti-tumor potency of such a DC vaccine [124].

Photothermal Treatment Inducing ICD

Recent studies have discovered that certain immunotherapeutic photosensitizers, such as Rose Bengal (RB), can improve antitumor immune responses by triggering ICD. "Eat me" and "danger" signals such as calreticulin (CRT) become expressed on the surface of tumor cells undergoing ICD. They enable immature DCs (iDCs) to phagocytose tumor cells and present TAA epitopes to T cells through MHC-I or -II molecules. A novel in situ DC vaccine triggered by RB was reported to enhance adaptive antitumor immunity [125]. Local tumor photothermal treatment with near-infrared light (NIR-II) is a promising strategy for triggering in situ tumor vaccination for cancer therapy, in particular of pre-cancerous skin lesions. However, the limited penetration of photothermal agents within tumors seriously limits their spatial effects. In a recent study, a deep tumor-penetrating gold nano-adjuvant is described which significantly inhibits tumor growth, induces a cascade of immune responses, generates adaptive immunity against re-challenged cancers and boosts an abscopal effect which completely inhibits pulmonary metastases [126].

IMI Strategy

A new individual multimodal cancer immunotherapy (IMI) strategy has been developed at IOZK (Cologne, Germany) [127]. It combines (i) local moderate electrohyperthermia (mEHT)/oncolytic NDV pre-treatment for in situ vaccination with (ii) specific autologous anti-tumor vaccination employing the ex vivo generated DC vaccine IO-VAC® (formerly termed VOL-DC). IO-VAC® is an ATMP product consisting of a modern DC vaccine pulsed with patient-derived NDV oncolysate. In the first step, the patient's immune system is conditioned by NDV towards a Th1-polarized immune response based on in situ induction of ICD combined with locally applied mEHT. Some results from this IMI therapy will be presented and discussed under 5.1.

Use of GM-CSF Modified NDV

The murine gene encoding GM-CSF was inserted in 2007 as an additional transcription unit at two different positions into the NDV genome. The recombinant virus rNDV-muGM-CSF with the strongest production of the transgene product was selected for further studies. Tumor vaccine cells infected with rNDV-GM-CSF stimulated human PBMCs to exert antitumor bystander effects in vitro in a tumor neutralization assay (TNA). These effects were significantly increased compared to rNDV without the transgene. In addition, rNDV-GM-CSF led to a much higher IFN-I production in PBMCs than rNDV when added as virus or as virus-modified vaccine. Monocytes and plasmacytoid DCs were demonstrated to contribute to the augmented IFN-α response.
Thus, the already inherent anti-neoplastic and immunostimulatory properties of NDV could be further augmented. The transgene product initiated the recruitment of DCs and a broad cascade of immunological effects [128]. A recent manuscript reports that a similar recombinant NDV (rNDV-huGM-CSF; MEDI5395 from AstraZeneca) combined broad oncolytic activity with the ability to modulate genes related to immune functionality in human tumor cells [129]. Intratumoral injection conferred antitumor effects in three syngeneic models in vivo. The efficacy could be further augmented by concomitant treatment with anti-PD-1/PD-L1. Ex vivo immune profiling, including TCR sequencing, revealed profound changes consistent with priming and potentiation of adaptive immunity and TME reprogramming toward an immune-permissive state [129]. A clinical study (NCT03889275) is in progress.

Use of Antibody Modified NDV

It was demonstrated that a recombinant NDV could express a full IgG antibody from two transgenes [130]. The antibody targeted an angiogenesis epitope on vascular endothelial cells of the TME. The use of antibody-modified NDV makes it possible to combine the advantages of oncolytic RNA viruses and monoclonal antibodies in a single powerful anticancer agent. In fact, an oncolytic NDV expressing a chimeric antibody against the TAA CD147 enhanced anti-tumor efficacy in orthotopic hepatoma-bearing mice [131].

Altering the TME by Systemic Transfer of OV-Loaded Carrier Cells

Activated T cells can be loaded with oncolytic NDV in such a way that the virus load is transferred to tumor target cells upon contact of the virus-loaded T cells with tumor cells [132]. Such NDV "hitchhiking" could potentially increase NDV tumor targeting after systemic transfer of virus-loaded T cells [132]. Oncolytic NDV could also be delivered to tumors by BM-derived MSCs [133]. OV delivery by MSCs enhanced therapeutic effects by altering the TME [133].

Active-Specific Immunization (ASI)

Prevention of metastatic spread was reported after post-operative active-specific immunization (ASI) with NDV-modified, but not with unmodified, irradiated autologous tumor cell vaccine [134]. About 50% of the mice immunized with the NDV-modified vaccine survived long-term, while mice immunized with the unmodified vaccine were dead within three weeks. Post-operative activation of tumor-specific CTL precursors (CTLPs) from mice with metastases required stimulation with the specific TAA plus additional signals. Such signals could be provided by NDV and resulted in the augmentation of CD4+ T helper and CD8+ CTLP T-T cell cooperation [135]. These findings provide a mechanistic explanation for the in vivo effect of the NDV-modified vaccine. Associated studies provided early evidence for the generation of protective immune T cell-mediated memory responses to cancer [136]. Results from clinical ASI studies employing the vaccine ATV-NDV have been presented under 4.10. ASI can be well combined with ICI therapies.

Adoptive Cellular Immunotherapy (ACT)

Successful examples of this approach have been presented in Chapters 2.3 to 2.5. Effective immune rejection was demonstrated in a GvL animal model of advanced leukemia (2.5). Human acute leukemias are also responsive to allogeneic ACT. This requires clinics which are specialized in dealing with problems related to graft-versus-host disease (GvHD) and host-versus-graft (HvG) reactivity.
A few selected publications from 2022 demonstrate further novel approaches of ACT: (i) adoptive immunotherapy with engineered invariant natural killer T (iNKT) cells to target cancer cells and the suppressive microenvironment [137]; off-the-shelf, third-party, HSC-engineered iNKT cells were demonstrated in xenograft models to ameliorate GvHD while preserving a GvL effect in the treatment of blood cancers [138]; (ii) use of IL-15 in cell-based cancer immunotherapy [139]; (iii) immunotherapy of TAA-positive common solid cancers with natural high-avidity TCR-engineered T cells [140]; and (iv) improvement of immunotherapeutic efficacy against solid tumors in mice with dual-specific chimeric antigen receptor (CAR) T cells expanded in vitro with TCR reactivity against OV-encoded antigens [141]. Illustrations of the fight of T cells against tumor cells can be found in several references of this publication [2,11,13,23,28,56,113,137]. In GBM patients, T cells can be activated in situ by vaccination. Another procedure is adoptive T cell immunotherapy (ACT). One study reported cross-talk between T cells and bone marrow hematopoietic stem and progenitor cells (HSPCs) during ACT in vivo in GBM hosts [142]. Transfer of HSPCs with concomitant ACT led to the production of activated CD86+CD11c+MHC-II+ cells, consistent with a DC phenotype and function, within the brain TME. These cells relied on T cell-released IFN-γ to differentiate into DCs, activate T cells, and reject intracranial tumors [142].

Tumor-Suppressing Functions of Immune Cells within the TME

Tumor-associated immune cells within the TME with tumor-suppressing function include NK cells, DCs, M1 TAMs and effector T and B cells. Effector T cells kill cancer cells by granule exocytosis and FasL-mediated apoptosis induction, polarize M2-TAMs to M1 TAMs and induce DC maturation. Effector B cells produce Th1 cytokines, enhance CTL activity and support NK cell-mediated tumor cell killing [143]. While tumors can polarize DCs, macrophages and T cells of the TME towards immunosuppression, immunotherapeutic strategies can change the polarization of these cells towards a tumor-suppressing effect. Table 3 lists the various aspects of counteracting immunosuppression in the TME via oncolytic NDV.

Post-Operative Vaccination with or without NDV of Glioblastoma Patients

Having presented evidence for spontaneous anti-tumor T cell responses in animal models and cancer patients, for immunosuppression by the TME and for the counteracting potential of oncolytic NDV, this chapter considers one fatal cancer disease of the brain, GBM, to explore by some examples where we stand at present with regard to immunotherapeutic strategies. The brain TME contains numerous distinct types of non-neoplastic cells which serve a diverse set of roles relevant to the formation, maintenance, and progression of central nervous system cancers [144,145]. T cells in the brain TME are low in numbers and characterized by tolerance, ignorance, anergy, and exhaustion. Distinct exhaustion profiles have been reported for GBM TILs in comparison to peripheral blood T cells [146]. Despite apparent challenges, such as the blood-brain barrier (BBB) composed of tight-junction endothelial cells and astrocyte endfeet, some endogenous T cells are nevertheless capable of infiltrating GBM.
A recent study identified T cell subsets expressing the chemokine receptors CCR2, CCR5, CXCR3, CXCR4 and CXCR6 and the integrin adhesion molecules CD49a and CD49d as being enriched in GBM tumors compared to matched peripheral blood T cells [146].

Vaccination Studies with NDV

The reasons for selecting GBM are as follows: (i) systemic application of NDV to GBM patients has led, in single-case studies, to impressive results [147,148]; (ii) apparently, NDV can pass the blood-brain barrier after systemic application [148]; (iii) Rac1 is targeted by NDV at the invasion front of migratory GBM cells [93,104]. The studies selected in Table 4 are based on T cell-based immunotherapeutic principles, including DCs as APCs and pMHC complexes as TAAs. They are post-operative vaccination studies without or with oncolytic NDV support. The earliest such study is from 2004 [149]. Operated adults with primary GBM were treated post-operatively by ASI with the already mentioned autologous tumor-cell vaccine ATV-NDV, prepared from patient-derived primary tumor cell cultures. Twenty-three such patients were compared in this non-randomized study to 87 similar non-vaccinated patients treated at the same institution and during the same time period by standard therapy. Of the vaccinated patients, 91% survived 1 year, 39% survived 2 years (compared to 11% in the control group), and 4% were long-term survivors. In the vaccinated group, immune monitoring revealed significant increases (i) in delayed-type hypersensitivity (DTH) skin reactivity towards autologous tumor cells, (ii) in the numbers of tumor-reactive MTCs in peripheral blood and (iii) in the numbers of CD8+ TILs in secondary tumors. The conclusion was that post-operative vaccination with ATV-NDV was feasible and safe and appeared to improve the prognosis of patients with GBM [149]. The second study describes a new strategy of cancer immunotherapy combining hyperthermia/oncolytic NDV pre-treatment with specific autologous anti-tumor vaccination employing the DC vaccine IO-VAC® [127]. The first Kaplan-Meier analysis of 10 treated GBM patients revealed a median OS of 30 months. This can be compared to a median OS of 14.6 months with standard radio/chemotherapy according to the Stupp protocol [127]. The third study describes the induction of ICD during maintenance chemotherapy and subsequent IMI for GBM [150]. This retrospective analysis of 60 adults with primary GBM treated at IOZK suggested that the additional induction of ICD via NDV/mEHT during temozolomide maintenance (TMZm) cycles is beneficial in improving OS [150]. The prognosis of GBM patients with isocitrate dehydrogenase 1 (IDH1) wild-type, MGMT promoter-unmethylated tumors remains poor. All adults meeting these criteria and treated from 06/2015 to 06/2021 at IOZK were selected for a retrospective analysis [151]. Group 1 patients (n = 9) were treated with surgery/radio(chemo)therapy and subsequently with IMI. Group 2 patients (n = 25) were treated with radiochemotherapy followed by TMZm plus IMI during and after TMZ. The mean OS of group 1 patients was 11 months, while that of group 2 patients was 22 months, with a two-year OS of 36%. The difference was significant, with a log-rank p of 0.0001. The conclusion was that a synergy between TMZ and IMI had improved the OS of group 2 patients [151]. In another retrospective analysis from IOZK, involving n = 70 GBM patients, the 2-year OS was 38.8%. When stratified for MGMT promoter methylation status, there was a highly significant difference [152].
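The OS comparisons in these retrospective analyses rest on standard Kaplan-Meier estimation and log-rank testing. As a minimal sketch of this type of analysis, assuming the open-source Python library lifelines, the following fragment estimates median OS per group and tests the group difference; all survival times, censoring flags and group labels are synthetic illustration values, not patient data from the cited studies.

```python
# Minimal sketch of a Kaplan-Meier / log-rank comparison of the kind used in
# the retrospective OS analyses above. All numbers are synthetic illustration
# values, not data from the cited IOZK studies.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical overall survival in months; event=1 means death observed,
# event=0 means the patient was censored (alive at last follow-up).
os_group1 = rng.exponential(scale=11, size=9)    # e.g., therapy regimen A
os_group2 = rng.exponential(scale=22, size=25)   # e.g., therapy regimen B
event1 = np.ones_like(os_group1)                 # all deaths observed
event2 = rng.integers(0, 2, size=os_group2.size) # some patients censored

kmf = KaplanMeierFitter()
kmf.fit(os_group1, event_observed=event1, label="group 1")
print("median OS group 1 (months):", kmf.median_survival_time_)
kmf.fit(os_group2, event_observed=event2, label="group 2")
print("median OS group 2 (months):", kmf.median_survival_time_)

# Log-rank test for the difference between the two survival curves.
result = logrank_test(os_group1, os_group2,
                      event_observed_A=event1, event_observed_B=event2)
print("log-rank p value:", result.p_value)
```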
One particularly innovative finding in study [152] was a case of complete remission with a specific T cell response. This case is presented in some detail to demonstrate the individuality and multimodality of the treatment process. An 18-year-old patient had an incomplete resection of a left frontal lobe IDH1 wild-type and MGMT-unmethylated GBM. The tumor mutational burden was low (0.5 variants/megabase), and there was no evidence for microsatellite instability or germline variants. At presentation her Karnofsky performance index was 90. She was lymphopenic with low NK cell functioning, and she had Th2/Th17 skewing. The treatment consisted of radiotherapy and, subsequently, five cycles of TMZm chemotherapy. She continued treatment for another seven TMZm chemotherapy cycles combined with 5-day ICD treatments, which were given during each TMZ cycle at days 8 to 12. Afterwards, she received two IO-VAC® DC vaccinations. The DCs were loaded with ICD-treatment-induced, serum-derived, antigenic extracellular microvesicles and apoptotic bodies. Later on, she received two IO-VAC® vaccines loaded with tumor-specific peptides based on the individualized tumor-specific neo-antigen detection test performed at CeGaT (www.CeGat.de). She underwent complete remission. From the T cell response analysis it was clear that surgery, radiochemotherapy, and five cycles of TMZm did not induce a tumor-specific T-cell response. However, the addition of seven ICD treatments to the last seven TMZm treatments and the first IO-VAC® vaccine generated a clear tumor-antigen-specific CD4+ and CD8+ T-cell response. The response was further boosted by the DC vaccines loaded with tumor-specific peptides. This observation has important implications: (i) it is no longer mandatory to have fresh frozen tumor material to prepare a tumor lysate as a TAA source; (ii) ICD treatments make it possible to obtain TAAs that are expressed within the body (e.g., in serum) at the time of treatment [152]. Brain tumors, unfortunately, can also affect children. The prognosis of children with diffuse intrinsic pontine glioma (DIPG) remains very poor despite radio- and chemotherapy or SMIs. Surgery is not an option. IOZK recently reported a single-institution experience with IMI involving n = 41 children with DIPG [153]. When IMI was part of primary treatment, median PFS and OS were 8.4 and 14.4 months from the time of diagnosis, respectively, with a 2-year OS of 10.7%. It was concluded that multimodal immunotherapy for these children is feasible without major toxicity [153].
Vaccination Studies without NDV
High-grade gliomas (HGG) have an incidence currently estimated at 14,000 new diagnoses per year, according to the 2007 WHO classification, which includes patients with anaplastic astrocytomas (WHO grade III) and with GBM (WHO grade IV). To evaluate the therapeutic efficacy of TAA-pulsed DC treatment, a systematic analysis of relevant published clinical studies in terms of patient survival was performed [154]. An electronic search yielded 189 references. From these, 9 articles were selected for the reasons presented there. A total of 409 patients, including historical cohorts, non-randomized and randomized controls with HGG, were the basis of the meta-analysis. DCs were matured using cocktails containing GM-CSF, IL-4, TNF-α, IL-1β, or PGE2. The sources of TAA were different. Most were derived from tumor cells: autologous irradiated tumor cells, autologous tumor lysate, HLA-I-eluted peptides, autologous acid-eluted tumor peptides and autologous heat-shocked tumor cells.
The routes of DC injection were mainly intradermal, intratumoral or subcutaneous. The numbers of TAA-pulsed DCs injected ranged from 1 × 10⁶ to 5 × 10⁸. Here we present 2-year OS data from seven of those trials (with n = 354 patients). The OS rates were 34% for HGG patients receiving DC treatment and 14% for the controls. The difference was highly significant (p < 0.00001) [154]. Comparisons were also made for 3-, 4-, and 5-year OS between the non-DC and DC groups in HGG patients using Forest plot analysis. The odds ratios (OR) all favoured the DC vaccination group [154]. A recent Nature article reports on a T helper type peptide vaccine targeting mutant IDH1 in newly diagnosed glioma [155]. Mutated IDH1 defines a molecularly distinct subtype of diffuse glioma. Pre-clinical studies had demonstrated that a specific peptide vaccine (IDH1-vac) induces specific therapeutic Th responses that are effective against IDH1 (R132H) mutant tumors in syngeneic MHC-humanized mice. A multicentre, single-arm, open-label, first-in-humans phase I trial was carried out in 33 patients with newly diagnosed WHO grade 3 and 4 IDH1 (R132H)+ astrocytomas (NCT02454634). Vaccine-induced immune responses were observed in 93.3% of patients across multiple MHC alleles. Three-year progression-free and death-free rates were 63% and 84%, respectively. Pseudoprogression was observed at high frequency and was associated with increased vaccine-induced T cell responses. The three-year survival rate of 84% is impressive but cannot be compared with the previous GBM studies, since two-thirds of the patients had grade 3 astrocytomas [155]. Table 4 contains a list of the mentioned GBM studies.
Conclusions
Avoiding immune destruction is one of the hallmarks of cancer. To achieve this, cancer cells interact with host cells and organize an immunosuppressive tumor microenvironment. One important factor in this context is TGF-β. It is secreted by tumor cells and regulatory T cells (Tregs) and affects cancer-associated fibroblasts (CAFs), macrophages (M2-TAM) and NK cells. In addition, it is involved in the formation of the pre-metastatic niche and in epithelial-to-mesenchymal cell transition (EMT) (Table 2). Tumor-promoting immune cells of the TME include, among others, M2-TAMs, Tregs and MDSCs. This review proposes to employ the avian oncolytic virus NDV and cellular immunotherapy to counteract the immunosuppressive influence of the TME. NDV is the first OV with reported potential to break therapy resistance, drug resistance and immune resistance (see Chapter 4.10). Intratumoral inoculation of NDV can counteract immunosuppression by inducing a strong IFN-I response and by activating the innate and adaptive immune systems. In addition to its immune stimulatory activities, NDV exerts a variety of anti-neoplastic functions, such as tumor-selective oncolysis and breaking several types of resistance. In contrast to the TME, the microenvironment of the bone marrow favors spontaneous anti-cancer immune responses. Tumor-induced angiogenesis connects a locally growing tumor with the blood circulatory system, thereby releasing tumor cells and TAAs into the blood. There is a bi-directional connection between blood and BM. All cellular components of the blood are derived from hematopoietic stem cells (HSCs) of the BM. The BM parenchyma contains, among others, resident CD11c+ DCs. These capture blood-borne antigens, including TAAs, process them and present them to CD4+ and CD8+ T cells arriving from the blood through BM sinuses.
A distinct speciality of the adaptive immune system with importance for the fight against cancer and its metastases is its memory function. Memory T cells consist of various subtypes and represent a very dynamic system of control. BM plays an important role in memory homeostasis and provides distinct niches for the maintenance and long-term survival of MTCs. Powerful MTCs could be generated in mice from a naïve T cell repertoire against a surrogate TAA. Upon transfer to tumor-cell-challenged T-cell-deficient mice, these MTCs protected the mice and thereafter returned into a resting state in niches of the BM. From there, they could be re-activated via antigenic challenge and recruited into the peritoneal cavity. Transfer of these peritoneal MTCs to secondary tumor-cell-challenged T-cell-deficient hosts again protected the mice. Four such successive transfers provided evidence for the longevity and functionality of TAA-reactive CD8+ MTCs. Spontaneously induced cancer-reactive MTCs have been documented to exist in the BM of patients with different types of cancer. Their mobilization and recruitment to the site of a tumor would be another way of counteracting an immunosuppressive TME. Post-operative active-specific immunization of cancer patients with autologous tumor vaccines modified by oncolytic NDV was apparently capable of mobilizing and recruiting cancer-reactive MTCs from the BM and/or from other tissue sites. An example is colon cancer (stage IV), where a randomized-controlled clinical study revealed a long-term survival benefit of about 30% for thus vaccinated patients. Cellular immunotherapies involving transfer of allogeneic MHC-matched immune T cells (GvL model) or re-activated human MTCs from the BM of cancer patients to tumor-bearing NOD/SCID mice (tumor xenotransplant model) led to infiltration of tumors by the T cells and to tumor rejection. An individualized multimodal immunotherapy strategy and protocol has been established at IOZK, Cologne, Germany. It employs systemic oncolytic NDV in combination with local mEHT in a pretreatment phase to polarize the patient's immune system towards DC1 and Th1 responses. The NDV-induced IFN-I was recently reported to prime a subtype of DCs (ISG+ DCs) to acquire and present whole pMHC complexes. Following the pretreatment phase, patients receive active-specific immunization with IO-VAC®, a patient-derived DC vaccine pulsed with NDV-mediated oncolysate. One fatal cancer of the brain, GBM, has been selected in this review to demonstrate what type of result can be obtained with immunotherapeutic approaches. The TME of GBM represents a particular challenge because of strong immunosuppression. Concordant results obtained in several GBM phase I/II studies, including a meta-analysis of 354 high-grade glioma patients vaccinated with DC vaccine, suggest that anti-tumor vaccination has the potential to prolong overall survival. Another recent experimental study in GBM hosts reported on cross-talk between adoptively transferred T cells and BM-derived hematopoietic stem cells. This led to the production of activated DCs within the brain TME. It can be concluded that immune cells within the TME with immunosuppressive functions (e.g., NK cells, DCs, TAMs, T and B cells) can be converted to immune cells with tumor-suppressing function. This can be achieved locally (e.g., by intratumoral application) or systemically. The review provides examples of both approaches, including OVs and cellular immunotherapy.
A few decades ago it was heavily disputed whether the immune system might have anything to do with cancer, especially with cancer in humans. Meanwhile, it can be concluded that the mere fact that tumors need an immunosuppressive microenvironment to grow is evidence for a role of immunosurveillance in cancer. Progress in research on molecular, cellular and tumor immunology has led to several Nobel prizes and to immunology-based therapeutics, such as antibodies (mAbs, including ICI reagents, and bs-Abs), cancer and DC vaccines, CAR-T cells and others. Such immunotherapeutics represent a change of paradigm in the treatment of cancer. The results presented demonstrate the importance of innovative experimental and translational studies for improving effective therapeutic treatments of cancer patients. Examples in this review have demonstrated that the potential of the immune system to fight cancer goes far beyond what present-day cancer immunotherapy achieves. It is therefore worth investing more research effort in this direction. Conflicts of Interest: The authors declare no conflict of interest.
Antagonistic interaction between caffeine and ketamine in zebrafish: Implications for aquatic toxicity
The coexistence of caffeine (CF) and ketamine (KET) in surface waters across Asia has been widely reported. Previous studies have implied that CF and KET may share a mechanism of action. However, the combined toxicity of these two chemicals on aquatic organisms remains unclear at environmental levels, and the underlying mechanisms are not well understood. Here we demonstrate that KET antagonizes the adverse effects of CF on zebrafish larvae by modulating the gamma-aminobutyric acid (GABA)ergic synapse pathway. Specifically, KET (10–250 ng L⁻¹) ameliorates the locomotor hyperactivity and impaired circadian rhythms in zebrafish larvae induced by 2 mg L⁻¹ of CF, showing a dose-dependent relationship. Additionally, the developmental abnormalities in zebrafish larvae exposed to CF are mitigated by KET, with an incidence rate reduced from 26.7% to 6.7%. The competition between CF and KET for binding sites on the GABA-A receptor (in situ and in silico) elucidates the antagonistic interactions between the two chemicals. Following a seven-day recovery period, the adverse outcomes of CF exposure persist in the fish, whereas the changes observed in the CF + KET groups are significantly alleviated, especially with KET at 10 ng L⁻¹. Based on these results, it is imperative to further assess the environmental risks associated with CF and KET co-pollution. This pilot study underscores the utility of systems toxicology approaches in estimating the combined toxicity of environmental chemicals on aquatic organisms. Moreover, the nighttime behavioral functions of fish could serve as a sensitive biomarker for evaluating the toxicity of psychoactive substances.
Introduction
Drug abuse is a prominent issue of concern from pole to pole [1]. Drugs are excreted by the human body as parent compounds or metabolites, enter sewage networks, and then contaminate surface water through effluent discharge [2]. As the types of drugs and the number of abusers grow, the number of pharmaceuticals detected in aquatic environments is also increasing [3]. According to the anatomical therapeutic chemical (ATC) classification system, 502 of the 4000 pharmaceuticals administered worldwide are psychoactive drugs [4]. Most share the same mode of action, altering the secretion and uptake of neurotransmitters in the brain, such as dopamine and gamma-aminobutyric acid (GABA) [5]. Considering that human neural architecture is evolutionarily conserved, psychopharmaceuticals designed for people may also interact with nontarget organisms [6]. Regulatory authorities have identified the mixture effect as a major concern in the environmental risk assessment of organic pollutants [7]. Therefore, it is of utmost importance to estimate the combined toxicity of psychopharmaceuticals in aquatic organisms. Caffeine (CF), a natural alkaloid, is the most widely consumed psychoactive substance, used as an additive in food products and many prescription drugs [8]. In China, CF is regulated as a class II psychotropic drug [9]. Statistically, CF abuse has increased rapidly in the northwest and northeast of China [10]. In particular, drug abusers frequently take CF with other illicit drugs (psychoactive substances), such as methamphetamine and ketamine (KET) [11].
The illegal use of KET as a recreational drug is widespread in China [1], and CF is often used as an additive in KET [12]. Consequently, the two psychoactive drugs have shown marked co-pollution in the rivers of China. The detected concentrations of CF and KET are up to 6167 and 9533 ng L⁻¹, respectively, in Taiwan [3], 3100 and 1100 ng L⁻¹ in Haikou [13], and 56.9 and 3.7 ng L⁻¹ in Wuhan [14,15]. Additionally, CF concentrations up to 0.7 and 1.1 mg L⁻¹ have been detected in surface water in Costa Rica [16]. There have been many studies on the ecotoxicity of CF and KET in aquatic organisms. For example, changes in the growth of the African clawed frog have been observed after four days of exposure to CF at 0.11 mg L⁻¹ [17]. A decrease in lysosomal membrane stability has been observed in mussels after exposure to CF at 500 ng L⁻¹ [18]. Meanwhile, the growth of Daphnia magna is inhibited by KET at 1000 mg L⁻¹ [19]. Exposure to KET at 100 ng L⁻¹ has shown significant teratogenic effects on Caenorhabditis elegans [20]. In fish, neurotoxicity [21,22], developmental toxicity [23,24], and oxidative stress [25,26] have been reported after exposure to CF or KET. Hence, the widespread coexistence of these two chemicals in Asian aquatic ecosystems may pose a significant environmental risk. However, their combined toxicity and the associated mechanisms remain unclear. Whether the presence of CF and KET in aquatic systems has combined effects on organisms typically depends on the overlap between their mechanisms of action [27]. A previous study revealed that CF, a nonselective adenosine receptor antagonist, delayed the circadian clock of humans [28]. Adenosine receptor A1 and A2a subtypes are mainly localized in striatopallidal GABAergic neurons of the brain [29]. Chronic injection of CF reduces the uptake of GABA and increases the release of GABA [30]. Intriguingly, convergent evidence suggests that KET exerts robust antidepressant effects by blocking N-methyl-D-aspartate receptors (NMDARs) on GABAergic interneurons [31]. Preclinical studies have associated KET's mechanisms of action with increased GABA levels [32]. Therefore, changes in GABAergic function might play a pivotal role in assessing the combined toxicity of CF and KET. Compared to other pollutants, psychoactive substances have the potential to mimic natural infochemicals in structure and disrupt intraspecies communication, including predator avoidance, navigation, and circadian rhythms [33]. Hence, behavioral functions are more sensitive than the classical biomarkers (i.e., mortality, growth, and fecundity) as endpoints for assessing the ecological risks of psychoactive substances. Developing behavioral ecotoxicology would benefit the risk assessment of chemicals in aquatic environments [34]. However, such toxicological data, especially on changes in behavioral phenotypes at nighttime mediated by circadian rhythm, are limited [35]. The modulation of GABA levels in the suprachiasmatic nuclei has been implicated in the synchronization of circadian rhythms in mammals [36]. Zebrafish (Danio rerio) have well-developed GABAergic neurotransmission, and their response to GABAergic hypnotics is similar to that in mammals [37]. The release and uptake of the inhibitory neurotransmitter GABA have been demonstrated to be related to the regulation of circadian clocks in zebrafish [38]. This provides new insight to further evaluate the ecological effects posed by CF and KET, particularly in vertebrate models such as zebrafish.
In this study, we use zebrafish larvae as model animals to assess the combined toxicity of CF and KET based on systems toxicology approaches. The behavioral (at nighttime), histological, morphological, and physiological (oxidative stress) indicators of fish are quantified as the toxicological endpoints. Then, the metabolomics profiles, molecular docking patterns between the chemicals and the GABA receptor, and associated mRNA levels are determined to elucidate the underlying mechanism. After a seven-day withdrawal period, the biomarkers (i.e., locomotion, neurotransmitter levels, and gene expression levels) of larvae are analyzed to evaluate whether the effects posed by CF and KET are persistent. The results provide empirical evidence for assessing the joint toxicity of CF and KET in aquatic systems for future studies.
Experimental design
Zebrafish maintenance and embryo collection were performed following standard procedures [39]. Briefly, wild-type (AB strain) zebrafish were cultured at 28 ± 0.5 °C with a photoperiod of 14 h light/10 h darkness and fed twice daily with Artemia spp. nauplii. Fertilized eggs (n ≈ 250) were obtained from adult fish by spawning in the morning, induced by the beginning of the light period. The embryos were rinsed several times using ultraviolet (UV)-sterilized water from our facility (28.5 °C, 200 mg L⁻¹ instant ocean salt, and 100 mg L⁻¹ sodium bicarbonate) before being randomly distributed [40]. Standards of CF (purity >99%) and the hydrochloride salt of KET (purity >99.5%) were purchased from Aladdin Biochemical Technology Co., Ltd. (Shanghai, China) and Sigma-Aldrich Co., Ltd. (St. Louis, Missouri, USA), respectively. Stock solutions (1 mg mL⁻¹) were prepared by diluting the standards using water from our facility. An overview of the experimental design is shown in Fig. 1a. Briefly, the exposure experiments were conducted in semi-closed water systems (200 mL) equipped with both mechanical and biological filtration. Six groups were set up in this study: control (water from our facility), CF (2 mg L⁻¹), and CF (2 mg L⁻¹) + KET (at concentrations of 10, 50, 100, and 250 ng L⁻¹).
150 hatched larvae (five days post-fertilization [dpf]) per group were placed into three semi-closed water systems (50 individuals per system). The exposure concentration of CF was selected based on environmental levels (up to 1.1 mg L⁻¹) [16] and a previous publication in which 193.82 and 0.039 mg L⁻¹ of CF were found to be thresholds for the photomotor responses of Danio rerio and Pimephales promelas larvae, respectively, in the dark [21]. The range of KET concentrations was determined according to the threshold of behavioral function disorders in Oryzias latipes induced by KET and the levels detected in fresh water [22]. The exposure period lasted 21 days, and the exposure solution was entirely renewed every 24 h. During exposure, the morphology of each fish larva was recorded using a stereomicroscope. After exposure, 60 fish per group were transferred to a fresh container (200 mL of system water) to acclimate for 1 h and were then immediately used for biomarker testing. The remaining fish were placed in new beakers containing clean system water (no chemicals) for a seven-day recovery period. At the end of this stage, all the fish were used to assay the corresponding biomarkers. The concentrations of CF and KET in the exposure media were confirmed using high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS), as previously described [13]. The actual concentrations were 98.0–102.6% of the nominal values (Supplementary Material Table S1), and the details are given in the Supplementary Materials.
Locomotion test
At the end of the exposure and recovery stages, the behavioral functions of the zebrafish larvae (11 individuals in each group) were evaluated using a system and software for animal movement tracking (EthoVision XT, Noldus Information Technology, the Netherlands), as previously described [38]. Briefly, the swimming trajectory was continuously recorded with an infrared-sensitive camera equipped with infrared light and a filter. The testing was performed at night (9 p.m.–12 a.m.) without any disturbances. After analyzing the baseline, we selected a 10-min video segment in which the fish had reached a steady state. From this segment, behavioral parameters such as immobility duration, mean velocity, total distance, and turn angle were automatically analyzed. Immobility duration (s min⁻¹) was calculated as the proportion of time during which the fish remained still out of the total measured time. Mean velocity (cm s⁻¹) represented the average distance traveled per unit of time. Total distance was the locomotor distance covered by the fish during the 10 min. Turn angle (degrees) measures the change in movement direction, either clockwise or counterclockwise.
Metabolic profiling determination
At the end of exposure, 25% of the fish larvae in the control and CF groups were collected. Subsequently, an accurately weighed fish sample (60 mg) from each group (n = 3) was placed in a 2 mL centrifuge tube containing three steel beads, and 1 mL of tissue extraction solution (75% methanol:chloroform (9:1, v:v), 25% H₂O) was added. The tube was put into a high-throughput tissue grinder and ground for 60 s at 50 Hz; this was repeated twice to blend the samples. The pretreatment of the mixtures, details of the instrument conditions (HPLC-MS/MS), and the data analysis are provided in the Supplementary Materials.
The metabolites involved in the GABAergic synapse, including GABA, oxoglutaric acid (α-KG), L-glutamic acid (Glu), and succinic acid (Suc), were quantitatively analyzed (n = 3) following the non-targeted metabolomic analysis. The absolute concentrations in the fish in the control and CF groups were detected by ultra-performance liquid chromatography (UPLC; ExionLC™ AD, SCIEX, Framingham, MA, USA) coupled with tandem mass spectrometry (MS/MS; QTRAP® 6500+, SCIEX, Framingham, MA, USA) in positive ionization mode. The parameters are shown in Supplementary Material Table S2, and the analysis details are given in the Supplementary Materials.
Histopathological brain analysis
Given the small size of the larvae, the entire body of each fish was fixed in 4% paraformaldehyde (Aladdin Biochemical Technology Co., Ltd., Shanghai, China) for 24 h. The back side of the fish was taken as the embedded surface in the paraffin. The blocks were cut into slices of 5 µm, ensuring that the brain zone was fully exposed in the cross-section. Each slice was stained with 0.5% toluidine blue (Nissl body staining of neurons). The changes in the histopathology of the brain tissues were observed and recorded with an optical microscope fitted with a charge-coupled device camera (n = 4), and the pathological scores were calculated according to established criteria [41].
Molecular docking
Molecular docking analysis was performed in silico using the "Discovering Active Sites of Homologous Proteins by Sequence Alignment" function module of Scigress (Ultra Version 3.0.0, Fujitsu) [42]. The 3D structure of the ligand-binding domain between the human GABA-A receptor α (GABAR; ID: 6D6T) and FYP (flumazenil) was downloaded from the Protein Data Bank website (http://www.rcsb.org). The amino acid sequence of the zebrafish GABAR protein (ID: AAI24698.1) was obtained from the National Center for Biotechnology Information database (https://www.ncbi.nlm.nih.gov/), and its 3D structure was predicted using the SWISS-MODEL online tool (https://swissmodel.expasy.org/), referring to the structure of human GABAR. The accuracy was evaluated using the SAVES v6.0 online tool (https://saves.mbi.ucla.edu/). On this basis, the binding affinities of CF and KET in the antagonist pocket of zebrafish GABAR were evaluated using the built-in program to discover active sites of homologous proteins via sequence alignment. The details are provided in the Supplementary Materials. The AutoDock scores (ΔG, kcal mol⁻¹) of the top 20 ligand-protein structures were calculated.
Gene expression analysis
At the end of the exposure and recovery stages, the zebrafish larvae from each group (n = 6) were collected, and the total RNA of each whole fish was extracted with TRIzol® reagent and liquid nitrogen and treated with the Turbo DNA-free kit (Ambion Inc., TX, USA) to eliminate DNA contamination. The details of the subsequent reverse transcription (cDNA synthesis), the quantitative polymerase chain reaction (qPCR) analysis method, the primers (Supplementary Material Table S3), and the instruments utilized are provided in the Supplementary Materials.
ROS visualization and enzymatic activity measurements
After exposure, in situ ROS in the larval gut was detected by 2′,7′-dichlorofluorescein (H₂DCF) (Sigma-Aldrich Co., Ltd., St. Louis, Missouri, USA) following a previously described protocol [43]. Briefly, fish larvae (n = 6) were incubated with 10 µM H₂DCF for 15 min in the dark at room temperature and then anesthetized with tricaine (20 mg L⁻¹; Sigma-Aldrich Co., Ltd., St. Louis, Missouri, USA) for 10 min. Fluorescence images of the larvae were taken using a fluorescence microscope (ZEISS Axio VertA1, Carl Zeiss AG, Oberkochen, Germany) immediately after rinsing three times in water from our facility. The excitation and emission wavelengths were 485 and 535 nm, respectively. The images were calibrated with Photoshop CC2015 (Adobe, USA) to eliminate background disturbances. Fluorescence intensities were quantified using Image-Pro Plus 6.0 software (Media Cybernetics, Silver Spring, USA). For measurements of enzymatic antioxidant activity, fish samples (n = 6) were randomly collected from each group and homogenized in PBS (pH = 7.2) on ice. The homogenate was centrifuged at 12,000×g and 4 °C for 30 min. The supernatant was pipetted into a fresh centrifuge tube and used for the measurement of superoxide dismutase (SOD) and catalase (CAT) activities [44]. Nitroblue tetrazolium and hydrogen peroxide were the substrates used for SOD and CAT, respectively. The determination wavelengths were 560 nm for SOD and 240 nm for CAT. The parameters are shown in Supplementary Material Table S2.
Gamma-aminobutyric acid and melatonin quantification
At the end of the exposure and recovery stages, fish samples from each group (n = 4) were homogenized in PBS (pH = 7.2) on ice and then centrifuged (3000×g, 4 °C) for 10 min. Given that secretion of the inhibitory neurotransmitter GABA and of melatonin (MTN) is associated with circadian clocks [45], the supernatant was used to determine the contents of GABA and MTN using ELISA kits (HEPENG Bio, Shanghai, China) according to the manufacturer's instructions. The parameters are shown in Supplementary Material Table S2, and the details are given in the Supplementary Materials.
Statistical analysis
All data are shown as the mean ± SD. The normality and homogeneity of variance of the data were checked using the Shapiro-Wilk test and the Brown-Forsythe method, respectively. Differences in biomarker levels between fish from different exposure groups were tested using one-way ANOVA followed by a post hoc Tukey's test (95% confidence interval). A χ² test was performed to examine the difference in the incidence of fish with abnormal morphology between the control and exposure groups. Concentration-dependent trends in the biomarkers across the exposure groups were assessed using the Jonckheere-Terpstra test. Differences with p-values less than 0.05 were considered statistically significant.
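As an illustration, the core of this statistical workflow can be sketched in a few lines of Python with scipy and statsmodels. The group values below are invented placeholders; the Jonckheere-Terpstra trend test is omitted because it has no scipy implementation and is typically run via a permutation procedure or a dedicated package.

```python
# Sketch of the statistical workflow described above (placeholder data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical immobility durations (s min-1) for three groups, n = 11 each.
groups = {
    "control":  rng.normal(44, 5, 11),
    "CF":       rng.normal(10, 4, 11),
    "CF+KET50": rng.normal(25, 5, 11),
}

# Normality (Shapiro-Wilk) and homogeneity of variance (Brown-Forsythe,
# i.e., Levene's test with median centering).
for name, x in groups.items():
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(x).pvalue, 3))
print("Brown-Forsythe p =",
      stats.levene(*groups.values(), center="median").pvalue)

# One-way ANOVA followed by Tukey's HSD post hoc test.
print("ANOVA p =", stats.f_oneway(*groups.values()).pvalue)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))

# Chi-square test on abnormality incidence, with counts reconstructed
# approximately from the reported percentages (2/60 control, 16/60 CF).
table = np.array([[2, 58],     # control: [abnormal, normal]
                  [16, 44]])   # CF
chi2, p, dof, expected = stats.chi2_contingency(table)
print("chi-square p =", p)
```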
Changes in locomotion and histopathology
After 21 days of exposure, the swimming trajectories of the fish were determined, and representative patterns from the different groups are shown in Fig. 1b. A relatively complex trajectory of the fish larvae was observed in the CF group compared to the simple track of the control group, providing evidence of stimulation by CF. Moreover, because consistently swimming in circles is the primary behavioral phenotype of depressive zebrafish [46], the same pattern observed in the CF group suggests potential depressive effects of CF on the fish. Intriguingly, the zebrafish from the 50–250 ng L⁻¹ KET groups showed the same swimming pattern as the fish from the control group, exploring the chamber with oblique turns and half-turn rotations [46]. Hyperactivity of zebrafish induced by CF (2 mg L⁻¹) was reported in a previous publication [47], while exposure to KET at 50 mg L⁻¹ for 24 h has been shown to markedly increase the swimming distance of zebrafish larvae [24]. However, depression-like behaviors were not observed in the CF + KET groups, which can be attributed to the sedative effects previously reported for joint CF and KET exposure [48]. As previously reported [38], the typical postures of zebrafish larvae in sleep, including floating with the head down and staying in a horizontal position close to the bottom, were observed in the control group. However, exposure to CF significantly disturbed the sleeping behavior of the zebrafish, and this phenomenon was mitigated by the combined treatment with KET. In summary, exposure to CF at high environmental concentrations (2 mg L⁻¹) induced depression-like behavior in zebrafish, and the addition of KET at trace levels (10–250 ng L⁻¹) alleviated these outcomes. Based on the behavioral criteria, the sleep state in zebrafish larvae is defined as more than 6 s of immobility in 10 s (60%) [49]. We statistically analyzed the corresponding behavioral indicators (Fig. 1c). The immobility duration was reduced from 43.58 s min⁻¹ (control) to 9.94 s min⁻¹ after exposure to CF (p < 0.0001), indicating disruption of the sleep state by CF. Meanwhile, the values of total distance, mean velocity, and turn angle of the fish significantly increased in the CF-treatment group (4827.12 cm, p = 0.0002; 50.06 cm s⁻¹, p < 0.0001; and 189.24°, p < 0.0001, respectively) compared to the control group (2378.23 cm, 23.67 cm s⁻¹, and 151.58°, respectively). The immobility duration was prolonged for the fish in the CF + KET groups, and the locomotor activity (i.e., total distance, mean velocity, and turn angle) was suppressed. The changes in immobility duration (p = 0.046, 0–250 ng L⁻¹), mean velocity (p = 0.048, 0–100 ng L⁻¹), and turn angle (p = 0.006, 0–250 ng L⁻¹) followed a dose-response pattern. In our study, the lowest observed effect concentrations of KET in the presence of 2 mg L⁻¹ of caffeine for immobility duration, total distance, mean velocity, and turn angle were 50, 10, 50, and 50 ng L⁻¹, respectively. The results highlight the antagonistic effects of CF and KET on the sleep behavior alterations of zebrafish at the environmental level. Notably, this effect was weakened at a high dose of KET (250 ng L⁻¹), implying the complexity of the combined toxicity of CF and KET.
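The four endpoints compared above are simple functions of the tracked (x, y) coordinates exported by the tracking software. A minimal sketch of how they can be derived is given below; the simulated trajectory, the 25 frames s⁻¹ frame rate, and the 0.2 cm s⁻¹ immobility threshold are all assumptions for illustration, not parameters taken from the study.

```python
# Deriving the four behavioral endpoints from an (x, y) trajectory.
# Trajectory, frame rate, and immobility threshold are illustrative only.
import numpy as np

fps = 25                                    # assumed frames per second
rng = np.random.default_rng(1)
# Simulated 10-min trajectory in cm (a random walk stands in for real data).
xy = np.cumsum(rng.normal(0, 0.05, size=(10 * 60 * fps, 2)), axis=0)

steps = np.diff(xy, axis=0)                 # per-frame displacement (cm)
dist = np.linalg.norm(steps, axis=1)
minutes = len(dist) / fps / 60

total_distance = dist.sum()                               # cm
mean_velocity = total_distance / (len(dist) / fps)        # cm s-1

# Immobility: time spent below an assumed 0.2 cm s-1 speed threshold,
# expressed in seconds per minute of observation (s min-1).
immobile_seconds = (dist * fps < 0.2).sum() / fps
immobility_duration = immobile_seconds / minutes

# Turn angle: mean absolute change in heading between successive steps,
# with differences wrapped to [-pi, pi] before averaging.
heading = np.arctan2(steps[:, 1], steps[:, 0])
dtheta = np.angle(np.exp(1j * np.diff(heading)))
turn_angle = np.degrees(np.abs(dtheta)).mean()            # degrees

print(total_distance, mean_velocity, immobility_duration, turn_angle)
```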
Given the potential of CF to modify cortical synapses and neuronal networks in the brain [50], we assayed the changes in the histopathology of the zebrafish brain. The periglomerular gray zone was selected as the target, since the retinorecipient areas of zebrafish are mainly localized in the mesencephalon, which is responsible for visually evoked behaviors (e.g., sleep and predation) (Fig. 1d) [51]. In comparison with the control (compact and regular granular cell layer), the cell layer showed a markedly haphazard sequence (black arrow) and neural apoptosis (red arrow) in the CF group. The detrimental effects on the zebrafish brain in the CF + KET groups (neural apoptosis) were clearly milder than in the CF group (disorganized granular cells plus neural apoptosis). The degree of neuronal injury (i.e., neuronal necrosis and apoptosis) was scored according to three types: neuronal necrosis (grades 1–3), laminar necrosis (grades 4–6), and confluent infarct (grades 7–9). The score in the CF-exposed group (4.3) was much higher than in the other groups, and the addition of KET mitigated the pathological changes (scores ranging from 1.8 to 0.8; Fig. 1d). Therefore, molecular docking (in silico) was performed to investigate the structural basis of the anti-GABAergic activity of CF using the "Discovering Active Sites of Homologous Proteins by Sequence Alignment" function module of Scigress (Ultra Version 3.0.0, Fujitsu). The 3D structure of the GABA-A receptor α (GABAR) in zebrafish was modeled using SWISS-MODEL (Supplementary Material Fig. S1b), and this structure matched the human GABAR template well (Supplementary Material Fig. S1a). The sequence identity, global model quality estimate, and global QMEANDisCo score were 86.46%, 0.73, and 0.73, respectively. The Ramachandran plot derived from a PROCHECK analysis indicated that over 90% of the dihedral angles fell in the most favored regions (crimson zone) (Supplementary Material Fig. S1c). Molecular docking was performed using FYP (flumazenil) as the template, since it is a selective and competitive GABA receptor antagonist that prevents benzodiazepine recognition [55]. FYP, CF, and KET were found to fit well into the predicted antagonist pocket of GABAR in the zebrafish (Supplementary Material Fig. S1d). Most of the adjacent surfaces of FYP (Supplementary Material Fig. S2a), CF (Supplementary Material Fig. S2b), and KET (Supplementary Material Fig. S2c) were hydrophilic (blue and purple-red), reflecting amino and hydroxyl groups in the structures of the ligands. The core hydrophobic moiety was provided by ARG35, LYS89, PHE90, GLY91, SER92, TYR95, PRO96, MET97, ILE100, ALA101, TYR102, SER132, SER133, GLU134, ARG135, and LEU136. Hydrogen bonds between CF and PHE90, and between KET and ARG135 and LEU136, determined the interactions of CF and KET with GABAR (Fig. 2d). Meanwhile, the ΔG values of FYP, CF, and KET binding to GABAR were −6.2, −4.3, and −4.9 kcal mol⁻¹, respectively.
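To put these docking scores in perspective, the standard relation ΔG = RT ln(Kd) converts them into rough dissociation constants. The back-of-the-envelope conversion below (T = 298 K assumed) is our illustration, not part of the original analysis.

```python
# Rough dissociation constants from the reported AutoDock scores via
# dG = RT * ln(Kd); a back-of-the-envelope illustration only.
import math

RT = 0.0019872 * 298.15   # gas constant (kcal mol-1 K-1) x T ~ 0.593 kcal mol-1
for ligand, dG in [("FYP", -6.2), ("CF", -4.3), ("KET", -4.9)]:
    kd = math.exp(dG / RT)                  # mol L-1
    print(f"{ligand}: dG = {dG} kcal/mol -> Kd ~ {kd * 1e6:.0f} uM")
# Resulting order of affinity: FYP (~29 uM) > KET (~260 uM) > CF (~700 uM),
# consistent with KET competing more strongly than CF for the same pocket.
```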
Based on the metabolomics and molecular docking results, GABA may play an important role in the response of fish exposed to CF and KET. A previous study elucidated that GABAergic amacrine cells are direct targets of melatonin (MTN) [56], and the administration of MTN has been found to affect behaviors by mediating the central GABAergic system [57]. Therefore, the relative contents of GABA and MTN (for the absolute contents, see Supplementary Material Table S4), as well as the transcriptional levels of genes within GABAergic synapses (i.e., gabra1, kcnj3a, mntr1a1, and mntr1ba), were determined. Compared with the control (set as 100%), the relative levels (except gabra1) decreased after exposure to CF and gradually increased with the addition of KET. The changes in kcnj3a (p = 0.034, 0–250 ng L⁻¹) were concentration-dependent (Fig. 2e). The significant upregulation of the gene gabra1 (Danio rerio gamma-aminobutyric acid type A receptor subunit α1) in the CF group (p < 0.05) indicated blockade of GABAR by CF, which was corroborated by the molecular docking. However, the GABA content of zebrafish larvae in the CF group significantly decreased. The contrary changes between gene expression and neurotransmitter secretion suggest that CF has an antagonistic effect on GABAR, triggering the compensation mechanism of the GABAergic system [58].
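Relative expression levels such as those reported for gabra1 are conventionally derived from qPCR Ct values via the 2^−ΔΔCt method; the sketch below assumes that convention. The Ct values are invented, and the reference gene is a placeholder, since the actual housekeeping gene is specified only in the paper's Supplementary Materials.

```python
# 2^-ddCt relative expression sketch. Ct values are invented and the
# reference gene is a placeholder, not the study's housekeeping gene.
import numpy as np

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. control by the 2^-ddCt method."""
    d_ct_treated = np.asarray(ct_target) - np.asarray(ct_ref)
    d_ct_control = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical gabra1 Ct values in CF-exposed vs. control larvae (n = 6).
fold = rel_expression(
    ct_target=[22.1, 22.3, 21.9, 22.0, 22.2, 22.4],       # gabra1, CF group
    ct_ref=[16.0, 16.1, 15.9, 16.0, 16.2, 16.1],          # reference gene
    ct_target_ctrl=[23.4, 23.5, 23.3, 23.6, 23.4, 23.5],  # gabra1, control
    ct_ref_ctrl=[16.0, 16.1, 16.0, 15.9, 16.1, 16.0],
)
print("gabra1 fold change vs. control:", fold.mean())  # > 1 means upregulated
```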
The antagonistic effects of KET on CF through mediating the GABAergic synapse pathway.a, The volcano plot of the metabolites between the control and CF groups was done using metabonomic analysis.Yellow dash: p ¼ 0.05; Gray dash: Absolute values of log 2 (fold change) ¼ 1. b, The interaction network between the main metabolites and corresponding pathways.Primary metabolites were selected based on different metabolite profiles.c, The absolute concentrations of the chemicals involved in the GABAergic synapse pathway using LC-MS/MS analysis.d, Binding sites of FYP, CF, and KET to GABA-A receptor antagonist pocket, the interaction modes, and the binding scores.e, Changes of the relative contents of neurotransmitters GABA and MTN based on the absolute contents and the relative expression levels of genes, including gabra1, mntr1a1, mntr1ba, and kcnj3a from different groups (CF and CF þ KET vs. the control). the CF group (Fig. 3c), which was consistent with the observed ROS accumulation in the gut (Fig. 3a).For the CF þ KET groups, the mRNA levels of sod1 and cat were downregulated in the 50 and 100 ng L À1 groups, as were the activities of SOD and CAT when compared to the group exposed solely to CF, while the transcription levels of the two genes in the 250 ng L À1 group were upregulated compared with those in the 100 ng L À1 group (Fig. 3c).These results indicate that the antagonistic effects posed by KET on the accumulation of ROS in the larval gut were triggered by CF.Alternatively, most ROS (~90%) is generated during adenosine triphosphate synthesis in the mitochondria, which correlates to apoptosis mediated via the bax-mitochondria-caspase protease pathway [63].Likewise, we found that the transcriptional expression of apoptotic genes (Supplementary Material Fig. S3), including tp53, aifm1, casp6, and casp9, was significantly upregulated by CF in normal zebrafish larvae.However, the relative expression levels of the above genes were considerably decreased when KET was added and returned to the levels of the control group in the higher KET group (100e250 ng L À1 ). Sleep deprivation can reduce growth hormone secretion [64], alter host defense responses [65], and cause the breakdown of the skin and mucosal barrier functions of neonatal rats [66], which may consequently affect infant development.The adverse effects posed by ROS accumulation have been shown in the larval development of flies and zebrafish [67].It has previously been reported that CF consumption may lead to developmental anomalies of Danio rerio [68].In the present study, hydrocardia (Fig. 3d(ii)e(v)), single eye (Fig. 3d(iii)), and morphological deformation (Fig. 3d(v)) were observed in larvae at the early stage (7e14 dpf) compared with the controls (Fig. 3d(i)).Generally, developmentally retarded zebrafish larvae cannot survive.The mortality rate of larvae (7e14 dpf) in the CF exposure group was 10.3%, and no lethal effects were observed in the CF þ KET groups.At the next stage (>14 dpf), abnormal spinal development (Fig. 3d(vii)e(x)) was found in larvae treated with CF compared to the normal morphology shown in Fig. 3d(vi), and the abnormal larvae showed obvious dyskinesia, although they were alive.As shown in Fig. 
As shown in Fig. 3e, the incidence of abnormalities increased up to 26.67% and was significantly higher than that in the control group (3.35%, p < 0.001). Surprisingly, combined CF and KET exposure decreased the incidence (18.33% at 10 ng L⁻¹ (p = 0.003), 15.00% at 50 ng L⁻¹ (p = 0.034), 8.30% at 100 ng L⁻¹ (p = 0.068), and 6.65% at 250 ng L⁻¹ (p = 0.072)). The expression levels of the associated genes, including bmp2, bmp4, gata4, and pth2ra, were quantified to further elucidate the underlying mechanisms of the developmental disorder. Bone morphogenetic proteins (BMPs) have been identified by their bone-inducing activities as members of the transforming growth factor beta (TGF-β) family [69]. Moreover, the skeletal deformation induced by triphenyltin has been attributed to the upregulation of BMP-related genes in medaka fish [70]. GATA4 (a zinc finger transcription factor) is essential for the formation of the proepicardium in vertebrates [71], while parathyroid hormone-related peptide (PTHrP) plays a crucial role in craniofacial skeletogenesis in zebrafish [72]. The significant upregulation of the genes bmp2, bmp4, and pth2ra and the suppression of the gene gata4 found in the CF group, compared with the control group, accounted for the teratogenic effect associated with the loss of sleep (Fig. 3f). KET alleviated the downregulation or upregulation of the genes induced by CF and thereby reduced the incidence of deformation.
Changes in biomarker levels in fish seven days post-recovery
A previous investigation using medaka fish as model animals treated with KET for 14 days found that the levels of ROS and SOD activity did not completely return to control levels after seven days of recovery [73]. Nevertheless, the ability of fish co-treated with CF and KET to recover is unknown. Hence, the associated biomarkers of zebrafish larvae exposed to CF or CF + KET were estimated after a seven-day recovery. The movement trajectories of the fish (Fig. 4a) featured a pattern similar to Fig. 1b, implying that the stimulation posed by CF extended beyond the 21-day exposure and was still measurable after a seven-day recovery period. The immobility duration (9.44 s min⁻¹) and mean velocity (42.19 cm s⁻¹) of fish in the CF group were significantly different from those in the control group (50.90 s min⁻¹ and 22.57 cm s⁻¹, respectively), with fish in the CF group showing a shorter immobility duration and a greater mean velocity (Fig. 4b). However, these parameters did not exhibit significant differences between the fish in the CF + KET groups and those in the control group as the KET concentrations increased. After the seven-day recovery, the changes in the relative levels of GABA and MTN (absolute data are shown in Supplementary Material Table S4) and in the expression of genes (gabra1, mntr1a1, mntr1ba, and kcnj3a) in fish in the CF group, compared to those in the control group, showed patterns similar to those observed after 21 days of exposure (Fig. 4c and d). Meanwhile, the relative GABA content in the CF + KET groups (50 to 250 ng L⁻¹) significantly increased after the seven-day recovery period (Fig. 4d). This indicates that the antagonistic effects of KET on the adverse outcomes of CF were still observable even after seven days of withdrawal. Notably, the relative expression levels of the gabra1 gene in the zebrafish were significantly upregulated in the CF + KET groups (10–100 ng L⁻¹) compared to the control group, contrary to the results after 21 days of exposure (Fig. 2e).
These findings further imply that the GABAergic synapse pathway may be crucial in mediating the combined toxicity of CF and KET. The upregulation of development-related genes, including bmp2, bmp4, gata4, and pth2ra, as well as of the sod1 and cat genes of larvae in the CF group, was still observed after seven days of recovery (Supplementary Material Fig. S4). There were no significant differences between the CF + KET and control groups. For the apoptosis-associated genes, concentration-dependent downregulation was found in the CF and CF + KET groups (Supplementary Material Fig. S4), demonstrating that the apoptotic effects abated after recovery. There are some limitations to consider. The changes in the indicators were evaluated after a short-term recovery stage (seven days) in this study, but the long-term effects (14 or 21 days) remain unclear. Notably, the persistence of the adverse effects induced by CF should be assessed based on long-term recovery. Hence, future investigations involving longer recovery durations and comparisons between withdrawal and continuous exposure experiments should be carried out.
Molecular mechanisms and environmental implications
Regulatory authorities have acknowledged that combined toxicity is a major concern in environmental risk assessments of organic contaminants [7]. Chemical cocktails may have multiple effects on the physiological, behavioral, and genetic systems of organisms [74]. CF and KET are primary combined pollutants in surface water, with concentrations ranging from 3.7 to 9533 ng L⁻¹. Furthermore, CF and KET have circadian-disrupting effects on organisms through an overlapping mode of action [28,75]. Sleep behavior, regulated by the summation of circadian rhythms and homeostasis, should be considered when assessing the environmental risk of psychoactive substances. The circadian rhythm disorders in fish species posed by psychoactive substances (e.g., CF and KET) entering aquatic systems should be a concern [76]. The results of the present experiments demonstrate that the duration of fish immobility during the dark period can efficiently reflect the phenotype of the alterations in sleep state induced by a combination of CF and KET, consistent with previously proposed criteria [49]. For the first time, we found that CF had adverse effects on sleep state and that KET mitigated this when added at trace levels. Traditional risk assessment models are applied to single chemicals [77], and might therefore overestimate the realistic toxicity risks related to the co-occurrence of CF and KET in the natural environment. Meanwhile, developing sensitive biomarkers to evaluate sleep behavior changes is crucial. According to the changes observed in this study, treatment with CF inhibited GABAergic synapse activity through a feedback pathway, while KET (10–100 ng L⁻¹) could mitigate this suppression by stimulating nervous impulses (Fig. 4e(i)).
CF can regulate GABA release through NMDAR activation [78]. Drinking regular caffeinated coffee causes a marked decrease in MTN metabolism through the following night [79]. Interestingly, the inhibition of NMDARs by KET on GABAergic interneurons has been reported to lead to pyramidal cell disinhibition and the enhancement of glutamatergic neurotransmission [80]. The administration of an inverse agonist at the benzodiazepine binding site of the GABA-A receptor promoted coherent network activity and exerted rapid antidepressant actions in some animal tests [81]. Evidence has shown that KET administration in mice selectively potentiates GABAergic synaptic inhibition and reverses behavioral despair [82]. Hence, the GABAR-associated pathway might be responsible for the antagonistic effects of CF and KET. Based on the metabolomic analysis, the changes in the corresponding gene expression and neurotransmitter contents, and the molecular docking, an alternative mechanism was postulated (Fig. 4e(ii)). Selective blockade of GABAR activity localized at the GABAergic cells in the periglomerular gray zone of the mesencephalon induced a decline in the content of GABA as well as of related metabolites, including Suc, α-KG, and Glu. Subsequently, the lack of GABA and Glu in the synaptic cleft would in turn trigger upregulation of the expression of the gabra1 gene and of sodium influx/potassium outflow. As a result, the normal circadian rhythm might be impaired by CF through this pathway, potentially inducing behavioral dysfunction in zebrafish at night. The subsequent addition of KET may reverse the changes above and ameliorate the adverse outcomes for the fish. Overall, the neurotransmitters and metabolites GABA, MTN, Suc, α-KG, and Glu are candidate molecular indicators for estimating the adverse effects of aquatic pollutants on the behavioral functions of fish at night. In the aquatic environment, a variety of emerging pollutants, such as psychoactive drugs [83] and perfluorooctanoic acid [84], possess GABAergic-disrupting effects. Accordingly, fish behavior at night is a valid indicator for assessing the environmental risk posed by cocktails of psychoactive substances. This study provides new insight into the combined toxicity of CF and KET in aquatic systems, including physiological and molecular biomarkers.
Conclusion
This study, using systems toxicology approaches, identified that exposure to caffeine (CF) markedly changes the behavioral functions of fish at night by disturbing their circadian rhythm and induces obvious abnormalities during their development. Notably, ketamine (KET) at environmental levels could significantly mitigate the adverse effects of CF on fish by mediating GABAergic synapse activity. After a seven-day recovery period, the adverse effects posed by CF and the antagonistic effects of KET on CF could still be observed. Considering the coexistence of CF and KET in aquatic environments in Asia, the antagonistic effects may result in overestimations of the environmental risks posed by the individual compounds. Furthermore, future experiments should be conducted to test the effects of CF and KET in zebrafish throughout their life cycle, from embryos to adults, to better estimate the environmental risks.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Effects of caffeine (CF) and CF + ketamine (KET) on the swimming behaviors and histopathology in brain tissues of zebrafish larvae after 21 days of exposure. a, Schematic of the experiment. Larvae at five days post-fertilization were treated with CF at 2 mg L⁻¹ or CF + KET (10–250 ng L⁻¹) for 21 days (salmon zone). The morphology was recorded daily. After exposure, the biomarkers were analyzed; part of the animals recovered for seven days (azure zone), and the same biomarkers were analyzed after recovery. b, Representative swimming trajectories of larvae from the control and exposure groups. c, Quantitative results of the behavioral indicators from the different groups, including immobility duration (s min⁻¹), total distance (cm), mean velocity (cm s⁻¹), and turn angle (mean degrees). d, Histopathological changes of the gray zone (red dashed rectangle) localized in the optic tectum of zebrafish from the different groups, and the calculated pathological scores. Black arrow: haphazard sequence; red arrow: apoptosis.
Fig. 2. The antagonistic effects of KET on CF through mediation of the GABAergic synapse pathway. a, Volcano plot of the metabolites in the control vs. CF groups from the metabolomic analysis. Yellow dashed line: p = 0.05; gray dashed lines: absolute value of log₂(fold change) = 1. b, The interaction network between the main metabolites and the corresponding pathways; primary metabolites were selected based on the differential metabolite profiles. c, Absolute concentrations of the chemicals involved in the GABAergic synapse pathway from LC-MS/MS analysis. d, Binding sites of FYP, CF, and KET in the GABA-A receptor antagonist pocket, the interaction modes, and the binding scores. e, Changes in the relative contents of the neurotransmitters GABA and MTN (based on the absolute contents) and the relative expression levels of the genes gabra1, mntr1a1, mntr1ba, and kcnj3a in the different groups (CF and CF + KET vs. the control).
Fig. 3. ROS accumulation in the gut, developmental abnormalities, and the changes in relative expression of associated genes in the different groups. a, The fluorescence area represents the ROS accumulation in the gut in the control, CF, and CF + KET groups, respectively. b, Changes in the relative intensity (% vs. the control) of the fluorescence. c, Changes in the relative expression levels of the genes sod1 and cat and the activities of SOD (control = 127.47 U mg⁻¹ protein) and CAT (control = 7.85 U mg⁻¹ protein). d, Abnormalities ((i)–(v): after 14 dpf; (vi)–(x): after 21 dpf) in the different groups. Red arrow: hydrocardia; black arrow: single eye; green arrow: skeletal deformity. e, The percentage of abnormalities in the different groups. f, Changes in the relative expression levels of development-associated genes, including bmp2, bmp4, gata4, and pth2ra.
Fig. 4. The behavioral parameters and associated biomarkers of zebrafish larvae from the different groups after the seven-day recovery. a, Representative swimming trajectories of zebrafish larvae from the different groups. b, Quantitative results of the behavioral indicators from the different groups, including immobility duration (s min⁻¹) and mean velocity (cm s⁻¹). c, The relative contents (% vs. the control) of the neurotransmitters GABA and MTN, based on the absolute contents, in the different groups (CF and CF + KET vs. the control). d, The relative expression levels of the genes gabra1, mntr1a1, mntr1ba, and kcnj3a in the different groups (CF and CF + KET vs. the control). e, Schematic of the underlying mechanism of the antagonistic effects posed by KET on the neurotoxicity of CF in zebrafish.
Developing porang agribusiness for multiple stakeholder benefits and supporting sustainable development in dryland areas of Lombok

Lombok Island, including its dryland area, has a high potential for developing many kinds of crops, not excluding porang (elephant foot yam). This study describes the development of porang agribusiness in Lombok, its opportunities, and ways to develop it further. The study uses several sources of data and several data collection methods, in an approach called triangulation. Primary data were collected through interviews guided by an instrument of unstructured questions listed as topics of investigation. Secondary data collection capitalized on data available from individuals and institutions. Data were analyzed mainly using descriptive statistics and qualitative evaluation. The results of the study evidenced several points. Porang is one of the promising crops to be developed in the dryland of Lombok Island. It grows easily and is adaptable to many agricultural conditions. It is also in high demand in several product forms. The consequence is that the development of this crop can benefit not only porang growers but also others, including traders, workers, and regional economic development as a whole. Above all, given that porang grows better with shade, its cultivation has implications for conserving nature better than cultivation without porang, adding environmental benefits to the economic and social ones. This study recommends developing porang in the dryland of Lombok and in other regions.

Introduction

Lombok has a high proportion of dryland. The total amount of dryland on Lombok Island was 221,484 ha in 2017, making up 63% of the total area of the island [1]. This land is currently planted with food crops, such as rice, corn, soybean, and vegetables, or with perennial crops, such as fruit trees and timber [2]. Lombok Island, including its dryland area, has a high potential for developing many kinds of crops, not excluding porang (elephant foot yam). Porang can be grown on Lombok Island since the island meets the agronomic requirements for porang growth [3][4][5]. This crop grows easily on the island; some of it has been growing by itself, for example in forest areas, without human involvement. On the other hand, demand for this crop and its processed products is currently very high, with some proportion of that demand unfulfilled, and the demand comes from many sources, including from overseas [6][7][8][9][10]. The combination of the strengths of porang farming with the high opportunity for its business development naturally leads people to develop this crop and to run businesses on farming and on the activities that follow. The purpose of this paper is to describe the development of porang agribusiness on Lombok Island, West Nusa Tenggara, its opportunities, and ways to develop it further. Following this introduction and the study method, the results and discussion section describes the agronomic requirements for porang growth, the development of porang production and its derivatives, the marketing of the products, and the added value of processed porang. These are all important for understanding the business development of this crop and its derivative products. The last part of the results and discussion section presents the wider impact of the porang business, not only on the business itself but also on society and the environment. This again shows the importance of developing the porang business on Lombok Island, or elsewhere.
Materials and methods

This study took place on Lombok Island, with special attention given to North Lombok, where the authors have more interactions than in other parts of the island. Data were collected using triangulation, that is, in combinations that complement one another, for example by applying several observers, information sources, theories, methods, and materials [11][12][13][14]. The combinations in this study include primary and secondary data collection.

Primary data collection

Primary data collection was carried out through surveys and interviews [14][15][16] with porang growers, traders, and related others. The interviews were conducted in unstructured mode in focus group discussions (FGD), guided by a list of the topics investigated.

Secondary data collection

Secondary data were sourced from the literature and written reports [14,16]. The literature relates particularly to available theories and published research findings, while the written reports were those available from individuals or bodies acting as data providers.

Combined primary and secondary data collection

In addition to the two sources above, notes from several meetings attended by the authors were also used. This is called a combined source here, since it contained a mix of both data types, provided by the same respondents or informants. The same informants provided data in both primary and secondary form: they expressed their responses and opinions orally in the meetings, and they also provided already available data in the form of written reports handed to the researchers. Of particular use were the notes from meetings that the authors attended in relation to porang. The first meeting on porang was held in Slelos Village, Gangga District, North Lombok Regency, on 22 January 2020. This meeting was attended by several porang growers, heads of farmer groups, village leaders, and other key persons. The second meeting was held at the Office of the Planning Board (Bappeda) of North Lombok Regency, on 31 January 2020. This meeting was attended by several heads of porang farmer groups, village leaders, officers from the Bappeda of North Lombok, and academics from the University of Mataram. The meetings were held in focus group discussion (FGD) format [17][18][19]. The information gathered in the meetings concerned production, processing, marketing, and other issues related to porang. The meetings were followed up with phone calls or WhatsApp messages with several attendees, in order to gather or complete information needed after the meetings. Another meeting on porang took the format of a national seminar or webinar titled 'Developing porang agribusiness for increasing society prosperity and improving environment in Indonesia'.

Data analysis

Data were then analyzed accordingly, mainly using descriptive statistics [20,21] and qualitative evaluation [22,23], to achieve the research objectives. Examples of the descriptive statistics applied in this study were the frequency of occurrences and the mode of the data. Qualitative evaluation included evaluations by informants or respondents of the gathered data, and evaluations by the researchers of the obtained data and the related findings.
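As a brief illustration of the descriptive statistics mentioned above, the sketch below computes frequencies of occurrence and the mode for a set of coded responses. The data, column names and categories are hypothetical placeholders, not the study's actual dataset.

```python
# Minimal sketch of the descriptive statistics described above (frequency of
# occurrences and mode). The data and column names are hypothetical examples,
# not the study's actual dataset.
import pandas as pd

# Hypothetical coded responses from porang growers and traders
responses = pd.DataFrame({
    "respondent_type": ["grower", "grower", "trader", "grower", "trader"],
    "land_type": ["forest", "rain-fed", "forest", "house yard", "forest"],
})

# Frequency of occurrences for each land type mentioned by respondents
frequency = responses["land_type"].value_counts()
print(frequency)

# Mode: the most frequently mentioned land type
print("Mode:", responses["land_type"].mode().iloc[0])
```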
Agronomic Requirements for Porang Production

Growing porang requires several agronomic conditions, including soil type, soil pH, elevation above sea level, and crop shade. Some of the properties of Lombok Island are described below and connected with these agronomic requirements. Porang can grow in any type of soil [3][4][5], a clear sign of the crop's adaptability. Since porang can grow in any type of soil, it has no problem growing in any part of Lombok Island. Another relevant soil property is pH: porang growth requires a pH of 6-7 [3,5], while the soil pH in Lombok was found to be 5-6 [24]. This indicates that porang can be grown on this soil: the required and actual ranges partly overlap, and the part that falls outside the requirement can be managed with specific efforts, for example by adding lime to the soil. Adding lime to the soil, which increases its pH, is an important action to help improve nutrient availability to the crops grown [25,26]. The elevation requirement for porang production is 0-700 m above sea level, with 100-600 m being the best range for porang growth [3,5]. Lombok Island lies 12-166 m above sea level [2]. Accordingly, from the elevation aspect, porang is suitable to be grown on Lombok Island. The suitability of the soil, in its type and pH, explains why porang can grow in many places on Lombok Island, including self-grown porang, in the sense that the crop has been growing without being planted or maintained by people. There is one more requirement for porang growth: shade [4,27,28], although some sources state that porang can also grow without shade [3,5]. The shade level at which porang grows best is about 40% [3][4][5]. The best plants to use as shade for porang are trees, such as teak and mahogany [3,5,29]. The need for shade from tree crops has implications for soil and land conservation and, consequently, for agricultural practice. Sustainable agriculture occurs as porang cultivation, together with the other plants its growth requires, conserves water and balances nutrients and gases for the growth of crops, both porang and other plants, on the land. Resources such as water and other improved environmental conditions become available year-round and from year to year, such that they can be utilized over a long period. Hence, growing porang can be concluded to support sustainable agricultural practice. Environmental sustainability is one of the three aspects of sustainable development, along with the economic and social aspects [30][31][32].

Development of Porang Production in Lombok

Information on porang development comes mainly from two meetings with porang communities and stakeholders, as described in the methods section; other sources are also included where necessary. Porang is found in many places in Lombok. Farmers and traders confirmed that porang can be found easily in their villages. Porang grows particularly on or near mountain slopes, and the crop can be found in many places around them. The crop has been growing without effort from growers: people did not plant it and did not maintain it, and they can come and harvest it whenever they wish. Given people's awareness of the high economic value of porang, there are currently more commercial plantations of porang than in the previous period. Porang is now grown not only in forests but also on other types of land, including rain-fed land, gardens, house yards, and so forth.
It can be stated that virtually all land is utilized more productively now than before. As a result, porang production has eventually increased, albeit with a lack of quantitative figures. Quantitative data will need to be compiled now and in the future for better management of, for instance, marketing and job creation for this product and its derivatives. The reasons for more planting and production, and for subsequent activities like processing and trading, can be described as rational economic behavior [33][34][35], in which people are motivated by the profits of doing so.

Porang Products

Porang is harvested in the form of wet yam [27,36,37]. In production locations, or in the fields, this wet porang is called raw porang. Raw porang is then processed, lightly or heavily, into several forms. The lightest processing of porang is to cut the raw porang into pieces and then dry them; this product is called porang chips. Porang chips are in turn processed into porang flour. The transformation into porang flour is considered heavy processing, since several treatments are applied to eliminate unwanted characteristics or contents of the flour. One content that needs to be removed from porang flour is calcium oxalate [38][39][40]. Porang flour can be used for many purposes in several products. As a flour, this porang product can function as a substitute for other flours, for example wheat flour or rice flour. Flour is used in the industries of several foods, like cookies, bakery products, and noodles. The flour can also serve other functions, such as an ice cream stabilizer [41], a health supplement [42,43], an input for environmentally friendly polymers [44], and several more.

Marketing Aspect of Porang

The increased production of porang on Lombok Island is obviously driven by market demand. Demand for porang in these locations is high and comes from several markets [see for examples, 6, 7-10]. Demand is reported as high here in the sense that all porang products available in the locations are taken up by buyers. Porang demand comes from local, domestic, and international markets. In the local market, such as in villages around Lombok Island, porang is sold in the form of fresh porang (raw porang yams just harvested from the land) and porang chips (harvested porang yams that are cut into pieces and then dried under the sun or in room conditions). In the domestic market (defined here as the market within Indonesia), porang in the two previously mentioned forms is traded, or more accurately bought, by traders from Java; in Java, the fresh porang or porang chips are then processed into porang flour. Examples of the international market for porang are the demand from Japan and China [8].

Added Value of Porang Products

Marketing porang in forms other than fresh porang creates added value for the product. Each processed form of porang, light or heavy, has a higher value than its fresh form. As an illustration, fresh porang at the village level in several locations on Lombok Island in early 2020 was priced at about Rp 10,000 per kg. The price increased markedly with processing. For instance, porang chips, the lightest processed product of porang, made by cutting the yams into pieces and then drying them, had a price of Rp 40,000 per kg, while porang flour, the more heavily processed product, had a price of Rp 300,000 per kg.
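The price jump described above can be summarized as simple multiples of the fresh-porang price. The short sketch below uses only the prices quoted in the text and, as the text itself cautions, deliberately ignores the fresh-to-processed conversion ratio, which was not measured.

```python
# Rough added-value multiples based on the prices quoted in the text
# (early 2020, village level, Lombok). Conversion ratios from fresh porang
# to chips or flour are NOT included, as the study itself notes.
prices_rp_per_kg = {"fresh": 10_000, "chips": 40_000, "flour": 300_000}

for product, price in prices_rp_per_kg.items():
    multiple = price / prices_rp_per_kg["fresh"]
    print(f"{product}: Rp {price:,} per kg ({multiple:.0f}x fresh price)")
```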
Furthermore, porang flour is used in many products, both as the main ingredient (in which porang contributes the most to the product) and as an additional ingredient. These main and additional products have prices far higher than fresh porang. This description of added value is admittedly rough, as the conversion ratio from fresh porang to processed porang has not been included; yet it at least indicates the high added value of processing porang, and therefore that this business is promising or profitable and attracts people to get involved. Profit is one of the main motives for people to participate in an activity [33,35,45]. This description of the price jump from fresh porang to several processed products indicates the increased value of the product, well known as value added or added value. In addition to form utility, the processed products have additional utilities, including utilities of time and place. As the products have been processed, they have a longer shelf-life; hence they can be stored and sold at times of high prices. Similarly, the processed products are easier to transport than the unprocessed ones, due to the smaller volume of the processed products, from which unwanted parts have been removed [46][47][48].

Economic and Social Impacts of the Production of Porang and its Derivatives

Porang production has been shown scientifically to have a positive impact on the environment, that is, to create a better environment for agricultural development, particularly by conserving water in the soil. Available water in the soil creates healthy soils, which have several important functions for plant growth. These include, among others, binding soil minerals, facilitating the transport of micro-organisms and dissolved chemical nutrients, and making nutrients available for plant uptake [see for example, 49]. The production of porang and its derivatives is not only good for the environment; it also brings positive economic and social impacts. These three aspects of development lead to sustainable development [31,50]. The economic aspect of sustainable development means that development should bring benefits, such as creating prosperity for the community. This economic benefit must also create social advantages, in the sense that the development, for example, creates jobs for the people, who are therefore happy with it. The porang business, as described previously, provides business profits for those who run it and also brings benefits to many groups of people, including porang farmers, traders, transporters, marketers, and workers, by creating jobs and sources of income.

Conclusions and Recommendations

Porang is one of the promising crops to be developed in the dryland of Lombok Island. It grows easily and is adaptable to many agricultural conditions. It is also in high demand in several product forms. The consequence is that the development of this crop can benefit not only porang growers but also others, including traders, workers, and regional economic development as a whole. Above all, given that porang grows better with shade, its cultivation has implications for conserving nature better than cultivation without porang, adding environmental benefits to the economic and social ones. This study clearly recommends developing porang in the dryland of Lombok and in other regions.
Acknowledgement

The authors of this paper express special thanks to: the University of Mataram, for funding this study; the informants, for sharing data and ideas for this study; the participants in the ICBB2020 seminar, for constructive comments and critiques; and others, for support in one way or another at any stage of this research.
Intensity-Modulated PM-PCF Sagnac Loop in a DWDM Setup for Strain Measurement

Featured Application: Potential application of the presented setup for strain measurement in a DWDM configuration.

Abstract: A novel intensity-modulated Sagnac loop sensor based on polarization-maintaining photonic crystal fiber (PM-PCF) in a setup with a dense wavelength division multiplexer (DWDM) for strain measurement is presented. The sensor head is made of PM-PCF spliced to single-mode fibers. The interferometer spectrum shifts in response to the longitudinal strain experienced by the PM-PCF. After passing the Sagnac loop, light is transmitted by a selected DWDM channel, resulting in a change in the output optical power due to the elongation of the PM-PCF. Hence, appropriate adjustment of the spectral characteristics of the DWDM channel and the PM-PCF Sagnac interferometer is required. However, the proposed setup utilizes an optical power measurement scheme, thereby avoiding expensive and complex optical spectrum analyzers. An additional feature is the possibility of multiplexing the PM-PCF Sagnac loop in order to create a fiber optic sensor network.

Introduction

Optical fiber sensors have been widely explored due to their potential advantages, i.e., immunity to electromagnetic interference, compact size, light weight and high sensitivity [1]. They can be applied as sensors for temperature [2,3], refractive index [4,5], humidity [6,7] or strain [8][9][10][11][12][13][14]. In fact, strain determination is one of the most important factors in structural health monitoring (SHM) [8]. So far, different fiber strain sensors have been presented, for example Mach-Zehnder interferometers [9,10], Fabry-Pérot interferometers [11], fiber Bragg gratings [12] or long period gratings [13,14]. In addition, Sagnac interferometers with highly birefringent fibers are also used for strain measurement. For instance, polarization-maintaining fibers (PMF) were proposed as sensor heads for strain detection [15]. However, conventional PMFs, i.e., PANDA or bow-tie, are susceptible not only to strain but also to ambient temperature. To overcome the temperature cross-sensitivity of PMF, Dong et al. [16] and Han [17] demonstrated the use of polarization-maintaining photonic crystal fiber (PM-PCF) for strain sensing, which is inherently insensitive to ambient temperature. Fu et al. presented a pressure sensor by applying PM-PCF within a Sagnac loop, achieving a sensitivity of 3.42 nm/MPa [18]. The PM-PCF Sagnac loop was also incorporated within a fiber ring laser (FRL) to evaluate its environmental stability [19]. However, current setups based on spectral analysis, which convert shifts in the transmission spectrum of the interferometer in response to elongation or pressure, are neither convenient nor portable. Therefore, intensity-modulated optical fiber sensor setups are of interest in order to avoid expensive and advanced spectral analysis instruments [10,20]. Hence a novel, cost-effective, highly sensitive PM-PCF Sagnac loop strain sensor connected to a DWDM (Dense Wavelength Division Multiplexer) and based on optical power measurement is herewith proposed. The incident light is modulated by the interferometer, and the output optical power is measured after passing through the DWDM. The elongation of the PM-PCF causes a shift in the interferometer spectrum, resulting in different output spectra. Dong et al.
showed the possibility of a 30 mε elongation of PM-PCF, which indicates good sensitivity, durability and efficient application in the field of strain sensing [16]. In the meantime, PM-PCF has low temperature sensitivity due to its structure, previously shown to be ~0.3 pm/°C [16,21]. The idea of the presented setup relies on a shift in the interferometer spectrum, which influences the light transmitted by the DWDM. Therefore, the investigation of axial strain is performed only with optical power meters. Approximately 11 dB of output optical power change was experimentally measured for elongation in a range of 0-2000 µε, which proves good sensing capabilities. The highest achieved sensitivity is approximately 0.01 dB/µε with a maximal resolution of 1 µε. By applying a reference measurement, incident light fluctuation can be eliminated. Additionally, the proposed setup may be easily multiplexed to create a sensor network consisting of PM-PCF Sagnac interferometers. The work presented here considers one possible sensor configuration utilizing PM-PCF. It sheds light on a new approach with the use of PM-PCF Sagnac loops and optical power meters in order to determine the elongation of the fiber itself. One great advantage of the proposed system is that it is cost-effective in comparison to the current literature that utilizes spectral analysis measurement, where the interrogation unit is also quite expensive [16,17,22]. Currently, PM-PCF fabrication technology is well developed, as evidenced by the wide commercial availability of these fibers. Only a small length of PM-PCF is needed to complete a single sensor unit. Another feature of the proposed setup is its fairly high resolution compared to other wavelength-based setups employing this type of photonic crystal fiber [16,17,22]. A disadvantage of the demonstrated setup is the need for careful adjustment of the PM-PCF Sagnac interferometer spectrum to the DWDM channel. However, preparing the proper PM-PCF length to align the spectra should not be a challenging problem. Another issue, which needs to be taken into consideration regarding practical implementation, is the protection of the single-mode fibers in order to avoid any bending and elongation of the non-sensing length of the fiber. This paper, for the first time to the best of the authors' knowledge, presents a PM-PCF Sagnac loop sensor setup connected to a DWDM for sensing applications. The paper presents the theoretical background, followed by the proposal for a sensor network as well as experimental results.

Theory

The Sagnac interferometer relies on the phase difference between two counter-propagating light beams. Introducing highly birefringent fiber inside the loop provides different light paths, which results in a specific interference pattern at the output. The phase difference can be formulated as follows [16]:

ϕ = 2πBL/λ (1)

where B is the birefringence of the PM-PCF, known as the difference between the effective refractive indices of the fast and slow axes, respectively, L is the length of the fiber and λ refers to the light wavelength. Given the phase difference (ϕ), the transmission spectrum of the interferometer can be presented according to the following equation [16]:

T = (1 − cos ϕ)/2 (2)

Indeed, the transmission spectrum is a periodic function depending on the phase difference (ϕ).
The wavelength spacing between two adjacent interferometer fringes can be approximated by the following function [16]:

S = λ²/(BL) (3)

According to Equation (3), the wavelength spacing of the interferometer fringes directly depends on the parameters of the fiber, i.e., its birefringence and length. Elongation of the fiber leads to a change of the phase difference (∆ϕ) between the counter-propagating light waves, which, following Equation (1), can be expressed as [16]:

∆ϕ = (2π/λ)·∆(BL) (4)

As a consequence, this change of phase difference causes the interferometer spectrum to red-shift with increasing longitudinal strain. The temperature effect is negligible because of the PCF structure. In the proposed setup, the output power can be estimated as an integral over the common spectrum of the Sagnac interferometer (T_INT) and the filter function of a given DWDM channel (T_DWDM) with respect to the incident broadband light emission (T_BLS):

P_out = ∫ T_BLS(λ)·T_INT(λ)·T_DWDM(λ) dλ (5)

The strain induces a change in the phase difference, which results in a shift of the interferometer spectrum. Thus, the change of optical power can be approximated as:

∆P ≈ ∫ T_BLS(λ)·[T_INT(λ; ϕ + ∆ϕ) − T_INT(λ; ϕ)]·T_DWDM(λ) dλ (6)

where T_INT(λ; ϕ) denotes the interferometer transmission at phase ϕ. The change of phase difference caused by elongation thus influences the transmitted power. Both the sensitivity and the measurement range are related to the edge slope of the interferometer and the spectral characteristics of the DWDM. According to Equation (3), the wavelength spacing of the interferometer fringes depends on both the incident light wavelength and the parameters of the fiber, i.e., birefringence and length. Thus, by adjusting the length of the PM-PCF, the measurement range can be modified.

Proposed Sensor Network

The sensor network consists of a broadband light source, which is split into N sensor units. A small fraction of the light is coupled out to the optical power reference measurement in order to eliminate power fluctuations. One sensor unit refers simply to one PM-PCF Sagnac loop. The main part of the light propagates through the fiber coupler, demultiplexer (DEMUX) and multiplexer (MUX). At the end, each DWDM channel is assigned to a given detector, which corresponds to one PM-PCF Sagnac loop. The whole scheme of the proposed sensor network is shown in Figure 1. In Figure 1, OPM (REF) refers to the optical power meter used for the reference measurement, OPM corresponds to the optical power meter used for measuring the optical power at the output of the n-th sensor unit, MUX is a multiplexer and DEMUX is a demultiplexer. The experiment was performed with one PM-PCF Sagnac loop (sensor unit) in order to prove the concept of the sensor network. Multiplexing of sensor units, by selecting different wavelengths, could realize the proposed sensor network, as it operates on a wavelength division multiplexing scheme. The presented sensor network proposal could then be implemented practically. The operating wavelength range of the DWDM is consistent with the International Telecommunication Union (ITU) recommendations.
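Before moving to the experimental setup, the following minimal numerical sketch ties Equations (1)-(5) together. It infers the birefringence-length product B·L from the roughly 6.5 nm fringe spacing reported in the experimental section, and it models the broadband source and the DWDM channel as a flat spectrum and a Gaussian passband; both of those spectral shapes, as well as the channel centre wavelength and bandwidth, are simplifying assumptions rather than the measured characteristics.

```python
import numpy as np

# Wavelength grid around the C-band region used in the experiment
lam = np.linspace(1540e-9, 1560e-9, 4001)  # wavelength in meters

# Infer the birefringence-length product B*L from Equation (3),
# S = lambda^2 / (B*L), using the ~6.5 nm fringe spacing reported
# near 1550 nm in the experimental section.
S = 6.5e-9
BL = (1550e-9) ** 2 / S          # ~3.7e-4 m
print(f"inferred B*L = {BL:.2e} m")

def sagnac_transmission(lam, BL, dphi=0.0):
    """Equation (2) with an optional strain-induced phase offset
    dphi from Equation (4)."""
    phi = 2 * np.pi * BL / lam + dphi
    return (1.0 - np.cos(phi)) / 2.0

# Assumed spectral shapes: flat broadband source and a Gaussian DWDM
# channel near a fringe edge (illustrative values, not measured ones).
lam0, fwhm = 1546.3e-9, 0.6e-9
T_dwdm = np.exp(-4.0 * np.log(2.0) * ((lam - lam0) / fwhm) ** 2)
T_bls = np.ones_like(lam)

# Equation (5): output power for a few strain-induced phase offsets
for dphi in (0.0, 0.2, 0.4):
    P = np.trapz(T_bls * sagnac_transmission(lam, BL, dphi) * T_dwdm, lam)
    print(f"dphi = {dphi:.1f} rad -> integrated power {P:.3e} (a.u.)")
```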
Experimental Setup

The experimental sensor setup is presented in Figure 2 and consists of a broadband light source (superluminescent light-emitting diode, λ_peak = 1544.4 nm, FWHM = 45.5 nm, Thorlabs), two 1 × 2 fiber couplers (split ratio 95:5), a 3-port circulator, a 1 × 2 fiber coupler (split ratio 50:50), a polarization controller, polarization-maintaining photonic crystal fiber (PM-PCF), a dense wavelength division multiplexer (DWDM, 100 GHz, 8 channels, Fiberon) and two hand-held optical power meters (OPM, detector: InGaAs, resolution 0.01 dB, Grandway). The first 1 × 2 fiber coupler (95:5) was used to eliminate fluctuations from the light source. The fluctuation of any light source's output power can be observed as a function of time. In order to compensate for these fluctuations, the fiber coupler was included in the setup; otherwise, the sensor output power value could be disturbed by light source power variation, thereby affecting the strain determination accuracy. A circulator was placed within the sensor setup so as to ensure light propagation in the proper direction and to prevent any light reflections from affecting the superluminescent diode.
In order to prepare the Sagnac interferometer, careful attention was paid to the splicing of PM-PCF to SMF (single-mode fiber) in order to enhance repeatability and minimize losses. Following the parameters presented by Xiao et al. [23], the PM-PCF was spliced to SMF using a commercial fusion splicer (FSU975, Ericsson). Additionally, appropriate adjustment of the polarization state within the Sagnac loop was made to provide adequate interferometer fringe visibility. A DWDM was incorporated to modulate the intensity of the light by aligning its spectrum with the edge slope of the PM-PCF Sagnac interferometer. Both the spectrum of the DWDM channel and that of the Sagnac loop interferometer are shown in Figure 3. Spectral analysis of Figure 3 reveals that the interferometer fringe spacing is approximately 6.5 nm (near λ = 1550 nm) with a fringe visibility of ~20 dB. The selected DWDM channel spectrum overlaps the slope of the interferometer spectrum, which ensures adequate operation of the proposed sensor. A shift in the interferometer spectrum provides different values of output optical power. The slope of the interferometer spectrum is also crucial for the sensor setup's capabilities, as it is directly related to the fringe spacing and the measurement range of the sensor. Hence, the length of the PM-PCF needs to be controlled in order to meet specific requirements. In summary, Table 1 presents all components used within the experimental sensor setup with their specified parameters.

Strain Response of the PM-PCF Sagnac Interferometer

Firstly, the strain response of the PM-PCF Sagnac interferometer was investigated over the range of 0-1500 µε in steps of 250 µε through the use of translation stages in order to prove the sensing idea. The spectra of the interferometer are presented in Figure 4, as well as the wavelength shift as a function of elongation.
Strain Response of the Intensity-Modulated DWDM PM-PCF Sagnac Loop Sensor The proposed sensor setup as depicted in Figure 2 was investigated by monitoring the output light after passing the Sagnac loop and the DWDM due to elongation of PM-PCF. The fiber was stretched over a range of 0-2000 µε in steps of 250 µε. The output transmission spectra are shown in Figure 5. An interferometer fringe at 1546.3 nm was selected to analyze shifts due to elongation of the PM-PCF. A linear response to strain is observed, which is in agreement with the literature. The sensitivity of the PM-PCF Sagnac loop is determined to be approximately 0.98 pm/µε. Strain Response of the Intensity-Modulated DWDM PM-PCF Sagnac Loop Sensor The proposed sensor setup as depicted in Figure 2 was investigated by monitoring the output light after passing the Sagnac loop and the DWDM due to elongation of PM-PCF. The fiber was stretched over a range of 0-2000 µε in steps of 250 µε. The output transmission spectra are shown in Figure 5. It could be observed from Figure 5 that the intensity of the output light is different due to the applied strain, i.e., the elongation of PM-PCF influences the spectrum of PM-PCF, which shifts towards longer wavelengths and thus the light coupling out from the Sagnac loop is accordingly modulated by the DWDM. The integral over the output spectrum corresponds to measured optical power, which determines the elongation of the fiber. An increase in strain causes an increase in output light. The intensity levels of the output spectrum are different due to the influence of edge slope spectrum of the interferometer. Thus, an investigation into the optical power change was conducted using the reference (P ref ) and output power (P out ), which eliminates the fluctuation of incident light. Multiple measurements were performed in order to examine the proposed experimental setup. The relationship between the change of optical power and the axial strain is presented in Figure 6. It could be observed from Figure 5 that the intensity of the output light is different due to the applied strain, i.e., the elongation of PM-PCF influences the spectrum of PM-PCF, which shifts towards longer wavelengths and thus the light coupling out from the Sagnac loop is accordingly modulated by the DWDM. The integral over the output spectrum corresponds to measured optical power, which determines the elongation of the fiber. An increase in strain causes an increase in output light. The intensity levels of the output spectrum are different due to the influence of edge slope spectrum of the interferometer. Thus, an investigation into the optical power change was conducted using the reference (Pref) and output power (Pout), which eliminates the fluctuation of incident light. Multiple measurements were performed in order to examine the proposed experimental setup. The relationship between the change of optical power and the axial strain is presented in Figure 6. The analysis of measurement data shows that the change in optical power exceeds 11 dB (~11.2 dB) within 2000 µε with a negligible deviation between the performed measurements, maximally ± 0.03 dB. A nonlinear response to strain is observed, which could be a result of the spectral correlation between the interferometer and DWDM. 
The analysis of the measurement data shows that the change in optical power exceeds 11 dB (~11.2 dB) within 2000 µε, with a negligible deviation between the performed measurements of maximally ±0.03 dB. A nonlinear response to strain is observed, which could be a result of the spectral correlation between the interferometer and the DWDM. A quadratic function was selected as the fitting function (R² = 0.999):

∆P = aS² + bS + c

In the equation above, S refers to the strain value applied to the PM-PCF (µε) and ∆P corresponds to the change in output power (dB). To determine the sensitivity at given strain values, the first-order derivative of the fitting function was calculated:

d(∆P)/dS = 2aS + b

Thus, the sensitivity is approximately 0.01 dB/µε at the initial value (0 µε) and 0.002 dB/µε at 2 mε, respectively. Assuming the resolution of the standard optical power meter used in the experiment (0.01 dB), the resolution of the proposed setup varies within the range of 1 µε to 5 µε over the measurement range. If an optical spectrum analyzer with a resolution of 0.01 nm were used with the experimental PM-PCF Sagnac interferometer (sensitivity of 0.98 pm/µε according to Figure 4), the achievable resolution would be ~10 µε, which is coarser than that of the proposed intensity-modulated sensor setup. The setup presented by Dong et al. had a sensitivity of 0.23 pm/µε and a resolution of ~43 µε [16]. A similar setup demonstrated by Han achieved a sensitivity of ~1.3 pm/µε [17]. Another comparable sensitivity was achieved by Orlando et al. [22], i.e., 1.21 pm/µε or 1.11 pm/µε depending on whether the PM-PCF is uncoated or coated (acrylate). Thus, compared to wavelength-based sensor setups employing this type of PCF [16,17], the presented sensor exhibits a higher resolution (~1 to 5 µε) and reduces system cost through the replacement of the OSA (Optical Spectrum Analyzer) with optical power meters.
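As a worked check on the figures above, the sketch below back-calculates illustrative quadratic-fit coefficients from the two reported sensitivities (0.01 dB/µε at 0 µε and 0.002 dB/µε at 2000 µε) and converts the 0.01 dB power-meter resolution into a strain resolution. The coefficients are inferred for illustration only and are not the paper's fitted values.

```python
# Back-calculate illustrative coefficients of dP = a*S^2 + b*S from the two
# reported sensitivities d(dP)/dS = 2aS + b: 0.01 dB/µε at S = 0 and
# 0.002 dB/µε at S = 2000 µε. These are NOT the paper's fitted values.
b = 0.01                          # dB/µε, sensitivity at S = 0
a = (0.002 - b) / (2 * 2000)      # dB/µε², from d(dP)/dS = 2aS + b

def sensitivity(S):
    """Local sensitivity in dB per microstrain at strain S (µε)."""
    return 2 * a * S + b

# Resolution = power-meter resolution / local sensitivity
opm_resolution = 0.01             # dB, as used in the experiment
for S in (0, 1000, 2000):
    res = opm_resolution / sensitivity(S)
    print(f"S = {S:4d} µε: sensitivity {sensitivity(S):.4f} dB/µε, "
          f"resolution ~{res:.1f} µε")
```

At S = 0 this reproduces the 1 µε resolution quoted in the text, and at S = 2000 µε it reproduces the 5 µε end of the stated range.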
Moreover, easy multiplexing of PM-PCF Sagnac loops is possible in order to create a sensor network, as its operation relies on the DWDM. It is also necessary to account for possible error resulting from temperature effects. This constraint has already been thoroughly investigated in the literature for this type of fiber [16,21,22], where an almost inherent insensitivity to ambient temperature was found for PM-PCF (~0.3 pm/°C). Assuming a temperature variation of 50 °C (0-50 °C), the induced error would be approximately 15 µε. This evidently shows that the thermal effect is negligible in the proposed setup. A further advantage of this setup is the utilization of cost-effective optical power measurement instead of spectral analysis. The measurement range is related to the physical parameters of the PM-PCF: an increase in PM-PCF length results in a smaller spacing of the interferometer fringes, which reduces the measurement range of the sensor setup.

Conclusions

An intensity-modulated PM-PCF Sagnac loop for strain measurement has been presented and experimentally verified. The sensing part contains a Sagnac interferometer using a PM-PCF, which was subjected to axial strain. Due to the elongation of the PM-PCF, the interferometer spectrum shifts. By adjusting the DWDM, the elongation of the PM-PCF can be determined by direct output optical power measurement without the need for an OSA. Experimental results showed an increase in the output optical power as a function of the longitudinal strain experienced by the PM-PCF. The setup has a maximal sensitivity of 0.01 dB/µε and a resolution of 1 µε when measured using standard optical power meters. Additionally, this setup can be multiplexed in order to build a fiber sensor network by exploiting different DWDM wavelengths. It greatly reduces cost due to the replacement of an expensive and advanced OSA with optical power meters.
Dual-Energy Heart CT: Beyond Better Angiography—Review

Heart CT has undergone substantial development from the use of calcium scores performed on electron beam CT to modern 256+-row CT scanners. The latest big step in its evolution was the invention of dual-energy scanners with much greater capabilities than just performing better ECG-gated angio-CT. In this review, we present the unique features of dual-energy CT in heart diagnostics.

Introduction

While cardiac CT was developed to assess coronary arteries and heart anatomy, thanks to technological development we can evaluate much more than that alone. The first step in its development was to increase the number of detector rows and the temporal and spatial resolution of the scanners. The next important step was the introduction of a dual-source CT scanner by Siemens Healthcare in 2006, capable of working in dual-energy mode [1]. Since then, different vendors have produced dual-energy (DECT) scanners of their own design, each with a set of unique advantages and disadvantages [2]. In this review, we summarize the possibilities of dual-energy CT scanners in heart diagnostics, and the limitations and advantages of the different types of DECT scanners.

Fundamentals of Dual-Energy CT

Conventional single-energy CT (SECT) scanners generate a beam of X-ray photons of different energies, with a maximal energy equal to the value of the peak voltage of the X-ray tube (kVp); a beam of this kind is polychromatic [3], and the images represent the attenuation of photons of all energies in each voxel. Dual-energy CT scanners acquire two sets of data with different energy levels for each voxel and create two sets of images independently for each energy, similarly to SECT [2]. Photons that travel through a patient's tissues interact with them through two main processes: Compton scattering and the photoelectric effect. In the photoelectric effect, the X-ray photon interacts with an atom's K-shell electron, causing its ejection from the shell. The likelihood of such an event is greatest if the energy of the X-ray photon is equal to or slightly above the binding energy of electrons in the K-shell, which is different for each element. The binding energy increases proportionally to the atomic number. Compton scattering is the ejection of electrons from the outer shell of an atom, and it mainly occurs in elements with low atomic numbers, such as hydrogen (Z = 1), oxygen (Z = 8) or carbon (Z = 6) [2,3].

Diagnostic Capabilities of DECT

All types of dual-energy CT scanners, regardless of their technical concept, have similar capabilities and avoid some of the single-energy CT limitations related to scanning with one polychromatic X-ray beam. Moreover, DECT scanners have some unique functions that are not available in traditional devices. Each substance has its own unique profile of absorption of X-rays of specific energies. Using images obtained with two different energies, we can calculate the concentration of any substance with a known attenuation curve [2]. This is possible thanks to the photoelectric effect, which is Z-number dependent [3]. Due to this relation, DECT scanners can create images coded with the concentrations of certain substances instead of the X-ray attenuation in voxels; another way of using these data is to remove specific substances from an image, e.g., iodine or calcium. By removing iodine, we can obtain an image very similar to a noncontrast image; these are called virtual noncontrast (VNC) images [5,7,12].
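As a rough illustration of this material-decomposition principle, the sketch below solves the two-energy, two-material linear system for a single voxel and derives a virtual noncontrast value by discarding the iodine contribution. All attenuation numbers are placeholders chosen to demonstrate the algebra, not calibrated scanner values.

```python
import numpy as np

# Illustrative linear attenuation coefficients (cm^-1) of the two basis
# materials at the two scan energies. Placeholder numbers chosen only to
# demonstrate the algebra, not calibrated scanner data.
#                 water  iodinated contrast
basis = np.array([[0.22, 0.55],    # low-energy scan
                  [0.18, 0.30]])   # high-energy scan

def decompose(mu_low, mu_high):
    """Solve the 2x2 system for (water, iodine) volume fractions."""
    return np.linalg.solve(basis, np.array([mu_low, mu_high]))

# Example voxel measured at both energies
w, i = decompose(mu_low=0.253, mu_high=0.192)
print(f"water fraction ~{w:.2f}, iodine fraction ~{i:.2f}")

# Virtual noncontrast (VNC) value: keep only the non-iodine contribution
vnc_mu = w * basis[0, 0]
print(f"VNC attenuation at low energy ~{vnc_mu:.3f} cm^-1")
```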
It has been proved by several authors that VNC images can successfully replace noncontrast scans in cardiac examinations and in examinations of other anatomical regions [4,10,13,14]. However, this technology has some limitations and cannot completely remove the attenuation from highly concentrated contrast, e.g., in the SVC, or the artifacts associated with it [15].

Effective Atomic Number Images

Having two sets of X-ray attenuation values for each voxel allows one to determine the composition of tissue by calculating the effective atomic number (Z_effective). This is the average atomic number of all atoms in the voxel. These values can be displayed as a grayscale image, as a color overlay on top of a standard image, or as a VMI [13]. The effective Z-number image can be used to differentiate two highly hyperdense structures, e.g., iodine in the lumen of an artery and calcification in its walls [4].
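For reference, a common approximation for the effective atomic number is the power-law formula over electron-fraction weights; the short sketch below applies it to water, which should come out near 7.4. The exponent 2.94 is the commonly quoted value for the photoelectric-dominated regime.

```python
# Effective atomic number via the classic power-law formula:
# Z_eff = (sum_i f_i * Z_i^2.94)^(1/2.94), where f_i is the fraction of
# electrons contributed by element i. Checked here on water (~7.4).
def z_effective(electron_fractions, m=2.94):
    return sum(f * z ** m for z, f in electron_fractions.items()) ** (1 / m)

# Water: H2O has 10 electrons per molecule -> H contributes 2/10, O 8/10
water = {1: 0.2, 8: 0.8}
print(f"Z_eff(water) = {z_effective(water):.2f}")  # ~7.42
```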
Types of DECT Scanners from Different Manufacturers

There are two types of dual-energy CT scanners: source-based and detector-based (Figure 1) [16]. More commonly used are the source-based CT scanners: dual-source, twin-beam, rapid kVp switching and sequential kVp switching. The only operational detector-based scanner is the layer detector, designed by Philips Healthcare, Best, Netherlands. The ideal detector for spectral imaging, the photon counting detector, has been engineered for many years, but it is still not suitable for use in CT scanners; small photon counting detectors have, however, been successfully used in mammography. Each of these technologies has its own advantages and disadvantages, which are discussed in more detail below (Table 1).

Dual-Source Dual-Energy CT

Siemens-designed dual-source CT scanners comprise two sets of X-ray tubes and detectors, shifted relative to each other by 90 or 95 degrees (Table 2). These scanners can operate as dual-energy ones when each tube is powered with a different kVp [2]. These scanners create monoenergetic images by blending the images obtained with the two detectors, an approach called image-based DECT [17]; in contrast to projection-based DECT, this type has very limited capabilities in terms of reducing beam-hardening artifacts [25]. The biggest advantages of dual-source scanners are their extraordinarily high temporal resolution, up to 66 ms when operating in single-energy mode [17], and their ability to independently modulate tube currents to reduce radiation doses and to install filters in order to increase the energy separation of the emitted photons [5,9,26,27]. The temporal resolution in dual-energy mode is not as extraordinary but is still high, at 125 ms [17,28]. Dual-source scanners can generate 40 keV-190 keV monoenergetic images (Table 1) [5]. Almost twice as much hardware, meaning a higher price and more components, is one of the main disadvantages. The different FOVs of the two detectors limit the area within which dual-energy data can be calculated. This is not a problem in the case of the heart, due to its central location, but it limits possible usage in the diagnostics of other organs in larger patients [11,29]. The FOV of the detector linked with the higher-voltage tube is 50 cm in all generations; the second detector's FOV has been expanded in each subsequent generation, but it remains the limiting factor for dual-energy data (Table 1). Carefully placing the patient in the center of the scanner is crucial [15]. The angular shift of the tubes causes a minimal delay between the registration of data from the same location by both tubes, which can generate misregistration artifacts [29,30]. Moreover, scattered photons that originate from one tube can reach the detector of the other and generate artifacts [27,29].

Split Filter DECT-TwinBeam

The second type of DECT developed by Siemens Healthcare, Erlangen, Germany, uses a set of X-ray tube filters to split the beam in half along the Z-axis. The gold filter eliminates high-energy photons from the beam generated at 120 kVp, while a zinc filter absorbs low-energy photons [31]. It is a cheaper solution than dual-source scanners because the only hardware added to a standard scanner is the set of filters [24]. However, due to the long delay between the two energy registrations (the time of a tube rotation with pitch = 1), this type of scanner has limited application in heart examination, which is confirmed by the lack of any publication in this field concerning this type of scanner.

Rapid-kVp-Switching DECT

The dual-energy solution developed by General Electric Healthcare, Waukesha, WI, USA, uses two unique elements: an X-ray tube capable of switching kVp between 80 and 140 kV [11,32] and an ultra-fast registering detector based on gemstones, with a shorter afterglow than the traditional materials used for detector construction [30,32,33]. During the acquisition, the kVp of the tube changes every 0.25 ms between 80 and 140 kV, which allows one to obtain two datasets from almost the same point. This allows one to use projection-based methods in reconstructing monoenergetic images [18]. In order to minimize misregistration artifacts, the speed of tube rotation is decreased [17]. The lower the voltage, the lower the number of generated photons at the same current; to overcome this limitation, the tube operates at 80 kV for 66% of the cycle [17,26]. Imaging the entire FOV [9,18] in dual energy from almost the same tube location almost completely eliminates misregistration artifacts and errors in the calculation of monoenergetic images [29,30]. The very rapid changes of kVp make it impossible to use filters to increase the energy separation between low- and high-energy photons [11,30]. For the same reason, current modulation cannot be applied to reduce the radiation dose [5]. In order to use projection-based reconstruction methods, the rotation speed has to be limited, because the tube has to be in the same spot, or very close to the same point, which limits the temporal resolution of the scanner. Moreover, the fixed settings of 80 kVp and 140 kVp limit the possibility of performing the examination in obese patients due to low-energy photon starvation [15].
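To illustrate the image-based approach mentioned above for dual-source scanners, the sketch below blends a low-kV and a high-kV image with a single weight. This linear blend is a deliberate simplification: clinical VMI synthesis relies on vendor-calibrated, energy-dependent weighting, and the weights and pixel values here are arbitrary demonstration data.

```python
import numpy as np

# Simplest illustration of image-based VMI synthesis: a weighted blend of
# the low-kV and high-kV images. Real scanners use vendor-calibrated,
# energy-dependent weights; w here is an arbitrary demonstration value.
def blend_vmi(img_low_kv, img_high_kv, w):
    return w * img_low_kv + (1.0 - w) * img_high_kv

low = np.array([[400.0, 60.0], [55.0, 35.0]])    # toy 80 kV HU patch
high = np.array([[250.0, 45.0], [50.0, 34.0]])   # toy 140 kV HU patch

# Low w mimics a high-keV VMI (less iodine signal, fewer blooming artifacts);
# high w mimics a low-keV VMI (stronger iodine contrast, more noise).
print(blend_vmi(low, high, w=0.2))  # high-keV-like image
print(blend_vmi(low, high, w=0.8))  # low-keV-like image
```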
Multilayer Detector CT

The only commercially available detector-based dual-energy CT scanner was created by Philips Healthcare, Best, Netherlands, who designed a dual-layer detector sometimes called a "sandwich detector". It consists of two layers: an inner layer that registers lower-energy photons and is transparent to high-energy ones, and an external layer that registers the latter [11,13]. A standard X-ray tube is used in this scanner. It is a truly projection-based DECT, which has huge advantages due to the greater possibility of artifact reduction. As in the rapid-kVp-switching DECT, dual-energy data are available for the entire FOV, but in contrast, there is no need to reduce the rotation speed, so temporal resolution is not compromised. Working at 120 kVp, these scanners operate as dual-energy scanners without any further modification, which improves workflow because personnel do not have to decide whether a specific examination has to be in DECT mode, as in the other types of DECT. Moreover, it does not carry the penalty of an extra radiation dose or the loss of the temporal resolution of the scanner [13]. It is possible that a few low-energy photons are not absorbed by the inner layer of the detector and reach and interact with the external one; the opposite scenario is also possible. Both would result in artifacts and errors in the calculation of monoenergetic images.

Sequence DECT

The simplest method is to scan the area of examination twice with different kVp. This solution was adopted by Toshiba Medical Systems, Tochigi, Japan, in their Aquilion CT scanners, which change kVp after a full gantry rotation. This does not require any specific hardware modification, just dedicated software. Due to the completely independent scanning in both cycles, dose reduction techniques are available, and similar signal-to-noise ratios are obtained, which improves the quality and accuracy of the monoenergetic images. The two scans of the same spot are delayed by the time of a tube rotation (the minimum rotation time for the Aquilion is 0.27 s), which increases the likelihood of motion artifacts and of differences in contrast concentration [2].

Coronary Artery Assessment

The primary goal of heart CT is the assessment of the coronary arteries, and improving the quality and diagnostic possibilities of this assessment is the main reason for performing the examination in dual-energy mode (Figure 2A-I). This is possible by reducing the number of blooming artifacts from stents and calcifications. Monoenergetic images, material-specific reconstructions and effective Z-number imaging are helpful in reaching that goal [13,34]. The blooming artifacts that originate from stents and calcified plaques are a reason for the overestimation of the degree of stenosis. They can be reduced by VMIs of high energy, e.g., 110 keV, which have been proven to strongly reduce artifacts from hyperdense metallic structures but are simultaneously less sensitive to iodine. For that reason, it is essential to assess the lumen of the coronary arteries using multiple VMIs. Calcium subtraction is another method of increasing the accuracy in heavily calcified coronary arteries [24,34,35]. The same phenomenon makes the assessment of stent lumina challenging. This problem was researched in detail by Hickethier et al. [36]. They reported that the amount and severity of blooming and beam-hardening artifacts depend on the stent's material and its structure. The VMIs are significantly more effective in visualizing stent lumina compared to standard polychromatic images, as they reduce blooming artifacts, decrease noise and increase the contrast of the images. The VMIs' capabilities depend on the metal the stent is made of; e.g., stainless steel artifacts can be almost completely eliminated, while tantalum artifacts are only slightly reduced [36]. However, the research of Hickethier et al.
The degree of stenosis caused by soft plaques can be better assessed by using low-energy VMIs, e.g., 50 keV, which increase the CNR and allow one to use a lower volume of contrast media [4]. Moreover, they can be used to salvage examinations with suboptimal vessel enhancement [9,13] (Figure 3) or to assess the pulmonary and coronary arteries in a single examination without an extra dose of contrast, making each coronary CTA a rule-out examination.

Figure 2. The best contrast-to-noise ratio is at 60-70 keV images (C,D); lower energies have higher iodine density but also much higher noise; higher-energy VMIs (F-H) are less useful due to low contrast density; (I) curved MPR reconstructed from the iodine(water) map can also be used to assess the lumen of the RCA.

Despite the constant development of CT scanners, invasive coronarography still has much better spatial and temporal resolution than any CT scanner, but it cannot provide any information about plaque composition. These data are only obtainable by performing intravascular ultrasound (IVUS), which is not widely available. This information is very important because plaques with thin fibrous caps or a large necrotic core are proven to be prone to rupture and cause myocardial infarction, which makes them very dangerous. SECT can provide limited information about plaque composition by evaluating its density. DECT can offer much more than just plaque density, by analyzing its atomic number [37-39].
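All of the VMI and material-specific applications above rest on the same two-basis material decomposition of the two energy datasets. The following is a minimal image-space sketch in Python; the attenuation numbers and helper names are illustrative assumptions of ours, not vendor calibration data:

```python
import numpy as np

# Assumed attenuation values of the two basis materials at the two effective
# energies and at a target VMI energy (illustrative numbers only).
MU = {
    "low":  {"water": 0.227, "iodine": 9.0},   # e.g. low-kVp effective energy
    "high": {"water": 0.184, "iodine": 3.0},   # e.g. high-kVp effective energy
    "vmi":  {"water": 0.200, "iodine": 5.0},   # target monoenergetic level
}

def decompose(mu_low, mu_high):
    """Solve the 2x2 basis-material system for (water, iodine) coefficients."""
    A = np.array([[MU["low"]["water"],  MU["low"]["iodine"]],
                  [MU["high"]["water"], MU["high"]["iodine"]]])
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

def virtual_monoenergetic(mu_low, mu_high):
    """Synthesize one voxel of a VMI from its low- and high-energy values."""
    c_water, c_iodine = decompose(mu_low, mu_high)
    return c_water * MU["vmi"]["water"] + c_iodine * MU["vmi"]["iodine"]

print(virtual_monoenergetic(0.30, 0.21))  # synthesized attenuation for one voxel
```

The iodine coefficient returned by the decomposition is exactly what an iodine(water) map displays, which is why the same acquisition supports both VMIs and material-specific images.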
Several studies have proved that it is possible to assess the composition of soft plaque using DECT and to more accurately detect vulnerable ones, which can lead to more intensive and potentially beneficial treatment for patients [37,40,41]. There are several characteristic CT features of vulnerable plaques. A study by Nakajima et al. determined that using a value of 9.3 as the effective atomic number cutoff has 90% sensitivity in distinguishing soft and fibrous plaques, while density with a cutoff value of 55 HU has only 62% sensitivity. A limitation of this study is its small population of just 18 patients [42]. In summary, plaque characterization in DECT has still not been fully researched and requires further investigation, but combining information on the effective atomic number with the CT features of unstable plaques (Figure 3) can help to determine the nature of atherosclerotic changes in the examined vessels [24]. We do not use DECT to characterize plaques in daily practice and rely on the CT features of vulnerability mentioned earlier.

Contrast Volume Reduction

By using low-energy VMIs, we can significantly increase the CNR and enhance the vessels in comparison to a SECT examination using the same volume of contrast media and iodine delivery rate, or we can obtain an image quality comparable to SECT with a lower volume of contrast media. Several papers present examinations performed with less than 50% of the standard volume of contrast without a loss of quality, which requires some modification of the contrast delivery protocol [7,43-45]. Reduced contrast volume is especially beneficial for patients with impaired renal function, due to dose-related contrast-induced nephrotoxicity. However, it does not reduce the risk of allergic reactions, which are not dose related [34,45]. The increased iodine sensitivity of low-energy VMIs allows one to assess smaller or poorly enhanced vessels [7,9] (Figure 3). It is important to note that the same iodine concentration may have slightly different Hounsfield unit values in the same VMI produced by different types of scanners [46], so it is very important to know the type of scanner installed in one's institution.

Radiation Dose Reduction

Coronary CTA originally had one of the highest radiation doses of all CT examinations; thanks to the development of prospective ECG-gating, current modulation and iterative reconstruction algorithms, it has been radically decreased, even to below 1 mSv [10]. It can be further reduced by omitting the noncontrast phase and using DECT's capability of creating virtual unenhanced images (VUIs) [13,24].
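The following paragraphs turn on replacing the true non-contrast scan with VUIs for calcium scoring. As background, here is a minimal sketch of the conventional Agatston computation; the 130 HU threshold and the density weighting factors are the standard ones, while the per-lesion bookkeeping and the omission of the minimum-lesion-area rule are simplifications of ours:

```python
import numpy as np

def agatston_weight(max_hu):
    # Standard Agatston density weighting factors
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_score(slices_hu, pixel_area_mm2, lesion_masks):
    """slices_hu: list of 2D HU arrays; lesion_masks: matching boolean masks,
    one candidate calcification per mask. Returns the summed Agatston score."""
    score = 0.0
    for hu, mask in zip(slices_hu, lesion_masks):
        lesion = hu[mask & (hu >= 130)]       # 130 HU threshold for calcium
        if lesion.size == 0:
            continue
        area = lesion.size * pixel_area_mm2   # lesion area in this slice
        score += area * agatston_weight(lesion.max())
    return score
```

This dependence on a fixed HU threshold is precisely why, as described next, the scoring software cannot be applied directly to maps coded in element concentrations.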
Many phantom- and human-based experiments have proved that there is a strong correlation between the Agatston score calculated from real unenhanced images and from VUIs. However, none of the vendors that provide DECT scanners have FDA- or EU-approved Agatston scoring software for use with DECT contrast images [4,10,14,24,47]. In order to use calcium scoring from VUIs in routine clinical practice, the scoring software has to be modified, because it uses a threshold of 130 HU to extract calcium, whereas water(iodine) maps are coded in element concentrations, making automatic extraction impossible (Figure 5). Most vendors also offer a method of obtaining VUIs coded in Hounsfield units, but this method tends to misclassify small calcifications as iodine and extract them as well. The greatest dose reduction can be achieved using third-generation dual-source scanners capable of performing coronary CTA with pitch = 3 and a submillisievert radiation dose, but this mode can be used only in patients with low heart rates [10].

Heart Perfusion

Coronary CTA performed with SECT can only assess the anatomy of the coronary arteries and the degree of stenosis, with almost no information on perfusion; only vast perfusion defects can be spotted as hypodense areas of myocardium. DECT CTA can be used to calculate the concentration of iodine in the myocardium distal to a stenosis and assess the significance of the stenosis.
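As an illustration of this static, iodine-map-based reading, the following sketch flags myocardial segments whose iodine density falls well below the myocardial median; the segment values and the 0.6 ratio are hypothetical, not validated cutoffs:

```python
# Crude stand-in for the visual comparison a reader performs on iodine(water)
# maps: flag segments with markedly reduced iodine density.
import statistics

segment_iodine = {          # AHA segment -> iodine density (mg/mL), hypothetical
    "basal_anterior": 2.4, "basal_inferior": 2.3, "mid_anterior": 2.5,
    "mid_inferior": 1.1,   "apical_anterior": 2.4, "apical_inferior": 1.0,
}

median_density = statistics.median(segment_iodine.values())
suspect = {seg for seg, val in segment_iodine.items()
           if val < 0.6 * median_density}   # illustrative ratio, not a validated cutoff
print(f"median {median_density:.1f} mg/mL; hypoperfusion suspected in: {sorted(suspect)}")
```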
The ability to simultaneously evaluate the morphology of a stenosis and its hemodynamic significance is not available in any other method of heart imaging [4,34,48]. However, it is a static evaluation depicting the contrast at the time of CTA scanning, which is usually less than 1 s; in contrast to classic perfusion CT, it does not produce information about blood flow over time. Using a DECT scanner can reduce the beam-hardening artifact originating from concentrated contrast in the SVC and the right chambers, which can mimic hypoperfused areas of the myocardium, and increase the accuracy of first-pass perfusion compared to SECT [9,34,48,49]. Dynamic examination is much more accurate than static examination and gives the opportunity to perform a quantitative assessment of myocardial perfusion, but at a higher radiation dose. Some centers perform, during one patient visit to the CT lab, a coronary CTA that also serves as a static rest perfusion study together with a dynamic examination under pharmacological stress to assess the influence of the detected changes on blood flow [48,50,51]. This protocol offers the most complete assessment of the coronary arteries and can be used to rule out any significant stenosis, but comes at the price of a relatively high radiation dose [52]. Both dynamic and static DECT heart perfusion have been evaluated against SPECT, MRI, PET, FFR and FFR-CT with very good correlations in several studies, which demonstrate that the dynamic examination is more sensitive and specific than the static one [53-57]. A recent study published by Ruiz-Muñoz et al. demonstrated the superiority of dual-energy perfusion over single-energy, with better sensitivity, specificity, and negative and positive predictive values in detecting significant stenosis, with SPECT and invasive coronarography used as the standard [58]. Similar conclusions on the better accuracy of static dual-energy perfusion over single-energy were reached by Assen et al., although both methods were inferior to dynamic perfusion [59]. The large multi-center clinical trial DECIDE-Gold was launched in 2014 in order to evaluate DECT perfusion in detecting significant coronary artery disease, but its results have not been published yet [60].

Myocarditis and Fibrosis

The modality of choice in the diagnostics of myocarditis and myocardial fibrosis is magnetic resonance with gadolinium contrast injection and the assessment of late gadolinium enhancement (LGE); however, some patients have contraindications for MR examination. For this population, heart CT with a delayed phase is an alternative method, especially when performed in the dual-energy mode. Inflammatory processes locally disturb the function of ion pumps and result in the leaking of gadolinium or iodine from vessels into the peripheral tissue, producing an enhancement better seen in late phases due to the trapping of contrast at the site of inflammation and its washout from normal muscle [50]. Low-energy VMIs and iodine(water) maps can clearly show regions of even weak enhancement and can be used to measure the iodine concentration in cases of doubt [61] (Figure 6A-C). It was proved by Ohta et al. that, due to the similar pharmacokinetic properties of gadolinium- and iodine-based contrast, DECT can be used in differentiating ischemic and non-ischemic cardiomyopathies in patients with heart failure. They report better concordance of iodine density maps with LGE in CMR studies than of low-keV VMIs [62].
A similar conclusion was provided by the study of Matsuda et al., who confirmed that late iodine enhancement can be used as a substitute for LGE in diagnosing infarction [63]. Adding a late phase acquired in the dual-energy mode to a perfusion examination can increase the sensitivity and specificity in detecting areas of infarction [59].

Figure 6. At 70 keV VMI, a subepicardial region of increased density in the lateral wall is visible in the delayed phase (A); it is caused by increased iodine uptake and reduced water concentration (B,C).

Differentiating Thrombus, Tumor and Artifacts

There are three causes of contrast filling defects of the heart chambers in coronary CTA: a thrombus, a tumor or a blood flow artifact. The first two require further diagnostic investigation due to their different treatments. The most common location of contrast filling defects in patients with atrial fibrillation is the left atrial appendage (LAA); it is also the most common location of intracardiac thrombus. Definite differentiation is possible by performing an additional scan in the venous phase, at the cost of an additional radiation dose; it can also be achieved by transesophageal echocardiography (TEE). DECT, by using low-energy VMIs or iodine(water) maps, can detect even minimal concentrations of iodine in the LAA and exclude the presence of a thrombus. Measurements of the iodine concentration are more accurate than the density of the LAA as measured in SECT (Figure 7A-D) [9,64], and, as proven by Hur et al., using 1.74 mg I/mL as the cutoff value for thrombus has 100% specificity. The third cause of a filling defect is the presence of a tumor in a chamber of the heart. As proven by Hong et al., it is possible to differentiate tumors and thrombus using dual-energy coronary CTA by measuring the iodine concentration [65,66].
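The Hur et al. cutoff lends itself to a one-line decision rule; a minimal sketch follows (the ROI value is hypothetical and would in practice come from the scanner's iodine(water) map):

```python
# Cutoff logic reported by Hur et al. [64]: an LAA iodine concentration below
# 1.74 mg I/mL is treated as consistent with thrombus rather than slow flow.
THROMBUS_CUTOFF_MG_PER_ML = 1.74

def classify_laa_defect(roi_iodine_mg_per_ml: float) -> str:
    if roi_iodine_mg_per_ml < THROMBUS_CUTOFF_MG_PER_ML:
        return "suspicious for thrombus"
    return "slow-flow artifact more likely"

print(classify_laa_defect(0.9))   # -> suspicious for thrombus
```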
Diagnosis of Implant-Related Pathologies

There is a growing population of patients with implanted intracardiac devices such as artificial valves and pacemaker electrodes, which can become a source of complications: the electrodes can break or perforate the heart wall. Besides, any foreign material can be colonized by pathogens and become a source of endocarditis. Visualizing vegetations on the surface of electrodes or parts of artificial valves can be hard due to beam-hardening, blooming and photon starvation artifacts. These artifacts can completely obscure small vegetations, which makes them hard to detect using SECT [13,67]. It has been proven that DECT can significantly reduce metal-related artifacts originating from artificial valves or electrodes (Figure 8A-D) [68].

Figure 7 (A-D). Case of a patient with chronic atrial fibrillation after 3 unsuccessful ablations, currently admitted due to chest pain. Coronary CTA was performed to rule out coronary artery stenosis. Differentiating a thrombus from a filling defect of the LAA using the iodine concentration is much more specific and sensitive than the use of Hounsfield unit ratios, as proven by Hur et al. [64].

Reduction of Metal-Related Artifacts

Very dense materials such as metal clips, electrodes or stents and massive calcifications cause artifacts due to their much higher absorption of X-ray photons compared to the surrounding tissues. These phenomena, in combination with how CT scanners reconstruct images, are the reasons why such structures are the source of many types of artifacts, such as beam hardening, blooming and photon starvation [69-71]. The beam-hardening artifact occurs when a polychromatic X-ray beam passes through a high-density structure, which absorbs disproportionally more low-energy photons than high-energy ones. This disproportion generates hyper- and hypodense streaks on the reconstructed images [5,69,72].
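The hardening mechanism can be demonstrated in a few lines: transmit a simple two-component spectrum through increasing thicknesses of material and watch the mean transmitted energy rise. The spectral weights and attenuation coefficients below are illustrative assumptions, not measured data:

```python
import numpy as np

# A toy polychromatic beam: two energy components, with the lower energy
# attenuated more strongly, as in real tissue and metal.
energies_keV = np.array([60.0, 120.0])
weights = np.array([0.7, 0.3])          # assumed spectral weights
mu_per_cm = np.array([0.40, 0.18])      # assumed attenuation coefficients

for thickness_cm in (0.0, 2.0, 5.0, 10.0):
    transmitted = weights * np.exp(-mu_per_cm * thickness_cm)
    mean_E = (energies_keV * transmitted).sum() / transmitted.sum()
    print(f"{thickness_cm:5.1f} cm -> mean transmitted energy {mean_E:6.1f} keV")
```

Because the surviving beam is progressively "harder" (higher mean energy), the reconstruction underestimates attenuation behind dense objects, producing the streaks described above.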
VMIs are much more resistant to these artifacts because they simulate images obtained with photons of a single energy: it is not possible to harden such a beam, only to attenuate it. Photon starvation artifacts are hypodense areas with increased noise around massive and dense structures, caused by their absorption of almost all photons. We can see them between hip prostheses or in the superior thoracic aperture; they are best seen in coronal MPR. Blooming artifacts are induced by the partial volume effect, which is related to the way scanners measure density and reconstruct images. As a result, hyperdense objects appear to be bigger than they really are, which in the case of stents or calcifications causes apparent narrowing of the lumen. Stehli et al. proved that VMIs of 80 keV and above are superior to SECT in reducing this type of artifact [73]. Stent- and surgical clip-related artifacts are a problem in the assessment of the arterial lumen during coronary CTA: the abovementioned types of artifacts make these structures appear bigger and the lumen smaller, which leads to the overestimation of stenosis and unnecessary invasive coronarography. VMIs and material-specific images, especially iodine(water) maps, are useful in reducing the artifacts [6,8,74], but they cannot eliminate them [73]. Moreover, the latest CT scanners, both DECT and SECT, are equipped with metal artifact reduction algorithms that can additionally be applied to increase image quality, e.g., MARS in GE scanners [75], O-MAR in Philips Healthcare, Best, Netherlands, or iMAR in Siemens Healthcare, Erlangen, Germany [20]. The severity of the artifacts and DECT's ability to reduce them are related to the stents' structure and composition. Nitinol structures create few artifacts, which are significantly reduced, whereas tantalum structures are sources of severe artifacts that are almost resistant to reduction using VMIs. Stent diameter is also a very important factor that influences the severity of artifacts [36,74].

Incidental Extracardiac Findings

Quite often the scanning area of coronary CTA contains important pathologies, such as enlarged lymph nodes, pulmonary nodules, liver changes or nodules in the adrenal glands. It has been proved that DECT can differentiate metastatic lymph nodes from inflammatory ones [12,76,77], and malignant from benign nodules in the lungs [12,76,78-80] and adrenal glands [76]. DECT pulmonary CTA is the most sensitive method of detecting pulmonary embolism; coronary CTA, however, is performed a few seconds after the contrast travels from the pulmonary into the systemic circulation. Its concentration in the pulmonary arteries is then too low to assess them using SECT, but low-energy VMIs and iodine(water) maps allow the assessment of the pulmonary circulation (Figure 9) [12,16,81,82].
Figure 9. Dual-energy coronary CTA performed due to worsening dyspnea and suspected CAD. There is a small clot in a peripheral artery in segment 9 of the right lung, which can easily be missed in the axial VMI at 70 keV (A). A large V-shaped area of hypoperfusion is visible in the sagittal reformat of the iodine(water) map in segment 9 (B); similar areas were discovered in segments 4R and 5R. Finally, a diagnosis of chronic peripheral pulmonary embolism was made.

Impact of DECT on Workflow in the Radiology Department

Dual-energy CT has greater diagnostic capabilities than SECT. Each type of scanner has its own unique advantages, disadvantages and limitations. When planning to purchase such a device, one should take into consideration what kind of examinations will mainly be performed on that scanner. If other types of examinations will also be performed, the main areas of diagnostic and scientific interest of the specific department have to be considered. Beyond the technical ability to perform dual-energy examinations, the knowledge of how to interpret them is even more important. Training radiologists in the interpretation of dual-energy examinations is time-consuming, and to be cost-effective it requires close cooperation between the radiologist and the radiographer. In every type of DECT except the multi-layer detector, it is necessary to plan specific examinations to be dual energy, which requires some work planning and patient selection. It is possible to perform every examination as a dual-energy examination, but in some cases there will be no useful information and patients will be exposed to an additional dose of radiation.
Every institution working with a DECT scanner has to develop its own way of organizing work with this type of machine. In our department, we select patients for DECT examination if the referral suggests pathologies that can be better assessed in that mode or if previous examinations were inconclusive. Due to their complexity, DECT scanners are more expensive than SECT ones, so their installation should be thoroughly thought out.

Limitations of DECT

Dual-energy CT has many advantages over SECT, so why is it not widely used? The main reason is probably the lack of knowledge among radiologists and hospital managers about its capabilities. Moreover, complicated, often unintuitive and expensive software is necessary to fully use the potential of this technology. Dual-energy scanners are about 25% more expensive to buy and operate than single-energy devices of a similar class, due to the highly complex elements produced exclusively for them, which increases their cost because of the small quantities produced. Furthermore, DECT, as with every imaging modality, has some limitations strongly connected with the type of scanner. The rapid-kVp-switching DECT is prone to motion artifacts due to its inferior temporal resolution, but it offers good energy separation and projection-based VMI reconstruction. Twin-beam, dual-source and sequential DECT scanners have better temporal resolution but come at the price of a delayed registration of the second energy dataset and the possible miscalculation of VMIs. Sandwich-detector scanners allow for the simultaneous registration of both energy datasets but are at risk of artifacts due to the misregistration of photons by the wrong layer of the detector.

The Future of Heart DECT

The current applications of DECT in heart diagnostics are presented in Table 3, and the most important studies comparing DECT with other modalities, together with their sensitivity and specificity, are presented in Table 4. Researchers are continuously looking for new applications of DECT in many fields, including the heart. There are several papers that describe the ability of DECT to estimate the extracellular volume (ECV), which is helpful in the diagnostics of cardiomyopathies. Until recently, only CMR was able to measure ECV. There are some discrepancies in the formulas used to calculate ECV depending on the type (image- or projection-based) of scanner [83,84]. The accuracy of this method has been proved in comparison with CMR and histological sampling [83,85]. DECT is the only one-stop imaging modality that allows one to assess the ECV and the coronary arteries simultaneously, as well as simultaneously assessing perfusion, the coronary arteries and plaque stability. This wide range of information that can be obtained during one examination is beyond the reach of invasive coronarography. It has been proved in many trials, e.g., the SCOT-HEART trial, that using CTA is cost-effective in the care of patients with stable chest pain.

Table 4. Summary of sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and significant details of the cited original studies comparing DECT with other modalities. n/a: not available.
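The ECV estimate mentioned above follows a simple relation between late-phase iodine densities and the hematocrit; a minimal sketch follows (the iodine-ratio form is the commonly used one, but, as noted above, the exact formula varies between image- and projection-based scanners, and the numbers here are hypothetical):

```python
# ECV ~ (1 - Hct) * (iodine_myocardium / iodine_blood_pool), from a DECT
# late-phase iodine map. All values below are hypothetical placeholders.
hematocrit = 0.42           # from the patient's blood test
iodine_myocardium = 0.9     # mg/mL, late-phase myocardial ROI
iodine_blood_pool = 2.8     # mg/mL, LV blood-pool ROI on the same map

ecv = (1.0 - hematocrit) * (iodine_myocardium / iodine_blood_pool)
print(f"ECV = {ecv:.1%}")   # ~18.6% here; commonly reported normal ranges are roughly 25-30%
```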
Customized gaming system engages young children in reaching and balance training

Purpose: Trunk stability, an important prerequisite for many activities of daily living, can be impaired in children with movement disorders. Current treatment options can be costly and fail to fully engage young participants. We developed an affordable, smart screen-based intervention and tested whether it engages young children in physical therapy goal-driven exercises.

Methods: Here we describe the ADAPT system, Aiding Distanced and Accessible Physical Therapy, which is a large touch-interactive device with customizable games. One such game, "Bubble Popper," encourages high repetitions of weight shifts, reaching, and balance training as the participant pops bubbles in sitting, kneeling, or standing positions.

Results: Sixteen participants aged 2-18 years were tested during physical therapy sessions. The number of screen touches and length of game play indicate high participant engagement. In trials lasting less than 3 min, on average, older participants (12-18 years) made 159 screen touches per trial while the younger participants (2-7 years) made 97. In a 30-min session, on average, older participants actively played the game for 12.49 min while younger participants played for 11.22 min.

Conclusion: The ADAPT system is a feasible means to engage young participants in reaching and balance training during physical therapy.

Motor disabilities are the most common functional disability in the United States, affecting nearly 14% of the nation's population. 1 Motor disabilities hinder a person's ability to complete everyday tasks. 1 Poor trunk control caused by motor disabilities affects balance control when sitting, standing, and walking. 2 These deficits, when occurring early in childhood, cause delays in achieving developmental milestones, leading to long-term physiological and psychological impairments and reduced quality of life. 3 Early physical therapy (before age five) may minimize these delays but can be hindered by a lack of long-term adherence. 4,5 Consistent adherence to a long-term physical therapy regimen promotes positive adaptations through neuroplasticity. 6 Low adherence has often been attributed to low levels of engagement. 7 Thus, improving engagement may be key to increasing adherence and reducing the effects of motor deficits. Virtual reality (VR) is thought to be an effective solution for engaging physical therapy for adults and older children, but is unfortunately unsuitable for young children. 8,9 Young children require customizable gaming specific to their developmental age to engage optimally. The Self-Determination Theory (SDT) is one framework defining key tenets of engagement in pediatric rehabilitation. 10 SDT theorizes that growth and well-being depend upon the fulfillment of three psychological needs: autonomy, competence, and relatedness. In rehabilitation, autonomy is given through authentic choice, competence through perceived excellence, and relatedness through meaningful connections and interactions. Engagement is raised by meeting these needs, which improves adherence both in-session and long term and leads to better patient outcomes. Here we describe the development and implementation of a VR-mimetic, smart screen-based intervention suitable for clinic and home use to address the need for greater engagement of young children in physical therapy goal-driven exercises.
We propose that a therapy solution which specifically aims to improve patient engagement during the session has strong potential to result in improved physical outcomes.

Participants

Our objective was to engage young children in therapy. We enrolled children aged 2-18 years to compare younger to older children. Participants were recruited from Kennedy Krieger Institute in Baltimore, Maryland, USA. Eligible inpatients and outpatients were selected by physical therapists (PTs) based on the appropriateness of the game for their treatment sessions. All participants or a parent/legal guardian gave informed and written consent to participate prior to study sessions according to the Johns Hopkins Medicine Institutional Review Board.

The ADAPT system

We developed the Aiding Distanced and Accessible Physical Therapy (ADAPT) system based on discussions between engineers and pediatric PTs. We identified a need for a system to engage young children in a greater dosing of therapeutic activities, similar to what can be accomplished for older individuals using a VR system. A review of existing technologies showed the lack of a VR system suitable for young children that is easy to set up and provides quantitative information. We achieve this by gamifying therapeutic exercises that promote balance control during reaching. Our system administers games requiring the child to physically interact with the touch-interactive display in ways that are clinically relevant. For example, to strengthen the trunk, children maintain balance while reaching far from midline in the game (Figure 1(a)). At the end of a session, the ADAPT system produces a statistical report of performance metrics. Figure 1(a) shows the design of the ADAPT system. The prototype evolved based on feedback from both PTs and children who used the system. The current design includes a large 55-inch TV mounted on a height-adjustable, rolling TV stand with a clear acrylic sheet protecting the screen. An infrared (IR) frame attached to the sheet registers the user's input. The equipment cost was $735, excluding the laptop computer. We created games for the ADAPT system using Unity® that promote patient engagement by upholding the core tenets of the SDT model. The Bubble Popper game promotes autonomy by letting children customize gameplay with background and in-game reward category (e.g., animals, superheroes) selections. The intuitive user interface allows children to participate in navigating through the game, further bolstering autonomy. Competence is promoted through adjustable difficulty settings (e.g., bubble size and speed, and range of play area), changeable before and during the game, allowing gameplay that accommodates the user's abilities (Figure 1(b)). At the end of the game, the participant is shown a congratulatory screen reading "You popped bubbles!". The fixed end screen ensures participants will not be discouraged by a lower score if they are given a particularly difficult task. This way, the participant feels pleased with their performance and is encouraged to play again, no matter their score. Finally, the Bubble Popper game fosters relatedness through its visually stimulating, game-like design, creating a fun experience. During the Bubble Popper game, bubbles spawn at random locations throughout the screen. Therapists are able to change the pattern of bubble spawning by altering game modes, as sketched below.
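The game itself is built in Unity; the following is purely an illustrative Python re-expression of the difficulty knobs and spawn modes described here, with names and defaults of our own choosing:

```python
import random
from dataclasses import dataclass

@dataclass
class BubbleSettings:
    """Difficulty knobs described in the text; defaults are illustrative."""
    radius_px: int = 80          # larger bubbles are easier targets
    speed_px_per_s: float = 40.0
    play_area: tuple = (0.0, 1.0, 0.0, 1.0)  # normalized x_min, x_max, y_min, y_max
    mode: str = "random"         # "random" or "quadrant" (see next paragraph)

def spawn_position(settings: BubbleSettings, spawn_index: int):
    """Return a normalized (x, y) spawn point for the next bubble."""
    x0, x1, y0, y1 = settings.play_area
    if settings.mode == "quadrant":
        # Cycle through quadrants so reaches sweep across the screen
        q = spawn_index % 4
        x0, x1 = (x0, (x0 + x1) / 2) if q in (0, 2) else ((x0 + x1) / 2, x1)
        y0, y1 = (y0, (y0 + y1) / 2) if q in (0, 1) else ((y0 + y1) / 2, y1)
    return random.uniform(x0, x1), random.uniform(y0, y1)
```

Narrowing `play_area` or enlarging `radius_px` makes the task easier, which is how a therapist can match the game to a child's current abilities.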
Setting examples include "Obstacle" mode, which obstructs portions of the screen, and "Quadrant" mode, which spawns bubbles sequentially across quadrants. Bubbles move around the screen at a pace decided by the therapist. When the child touches a bubble, the bubble pops and a "reward" appears. Participants are not penalized for missing any bubbles. The play time is preset by the therapist, and a motivational message appears on the screen at the end. The ADAPT system not only engages patients but also aids PTs. Therapists and parents can easily tailor the game to the child's specific therapeutic needs. The ADAPT system returns an encrypted output file of game data after each trial. These files can be uploaded to a website for analysis and generation of a PDF report. To preserve participant privacy, the files are 2-factor encrypted and destroyed during the process, ensuring that the files are never stored remotely. The report includes quantitative engagement metrics for evaluating a child's performance. The feedback provided by the data analysis reports allows therapists to track change over time in a clear, simple way.

Study design/testing procedure

Participants completed at least one 30-min physical therapy session that included playing the Bubble Popper game. The therapist oriented the participant by demonstrating the system, and the child selected their preferred in-game rewards. The PT administered each session, guiding the participant through a series of trials with different game modes and difficulty settings. The PT increased the level of difficulty during and across sessions to increase the movement challenge and prevent boredom. During the trials, the system recorded in-game performance data and the PT documented notes on the participant's subjective performance.

Participant characteristics

Participant characteristics including age, sex, disability level, and cognition were obtained from chart review and PT assessment, and are displayed in Tables 1 and 2.

Therapist-reported outcomes of engagement (qualitative)

The participant's therapist completed pre- and post-test engagement questionnaires. We report the mean and standard deviation (SD) of these tests.

Pretest. The Hopkins Rehabilitation Engagement Rating Scale (HRERS) establishes a baseline measure of the participant's engagement during traditional therapy. The HRERS evaluates five items on a 6-level Likert scale from 1 (never) to 6 (always). The HRERS has been shown to have an inter-rater reliability of 0.73 and an internal consistency of 0.91. 11

Post-test. We developed an engagement questionnaire to assess the PTs' and participants' subjective experience with the ADAPT system in reference to the participants' behavior during traditional therapy sessions. 12 The post-test evaluates nine items on a 5-level Likert scale from 1 (strongly disagree) to 5 (strongly agree). This questionnaire asks the therapist to assess the participant's performance, the participant's engagement, and whether the ADAPT system meets their therapy needs.

Performance metrics (quantitative)

The ADAPT system provides performance statistics on game play. Key outcome measures that demonstrate engagement include the total play time (minutes), the number of touches, the inter-touch interval (ITI), and the pop-to-touch ratio. The ITI is the amount of time between touches. The pop-to-touch ratio is the number of touches that resulted in a popped bubble divided by the number of touches to the screen.
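A minimal sketch of how these metrics can be computed from a touch log; the log format is hypothetical, and total play time is approximated here by the span of touch timestamps rather than the system's internal timer:

```python
# Hypothetical touch log: (timestamp_seconds, popped_a_bubble) tuples.
touches = [(0.8, True), (2.1, True), (3.0, False), (4.6, True), (6.9, True)]

timestamps = [t for t, _ in touches]
itis = [b - a for a, b in zip(timestamps, timestamps[1:])]   # inter-touch intervals

total_play_time_min = (timestamps[-1] - timestamps[0]) / 60  # primary engagement metric
mean_iti = sum(itis) / len(itis)
pop_to_touch = sum(popped for _, popped in touches) / len(touches)

print(f"play time: {total_play_time_min:.2f} min, mean ITI: {mean_iti:.2f} s, "
      f"pop-to-touch: {pop_to_touch:.2f}")
```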
We define total play time as our primary quantitative engagement metric, as an unengaged participant would stop playing the game sooner than a more engaged participant.

Results

The study sample included eight participants with cerebral palsy, two with complex regional pain syndrome, two with ataxia, one with spina bifida, and three with other movement disorders, for a total of 16 participants (6 males:10 females). Participants had a range of disabilities, from requiring trunk support in sitting to needing supervision for dynamic standing activities. Nine PTs used the game with the participants in a variety of treatment conditions, including prone over a therapy ball, quadruped, tall kneeling, sitting, and standing. Younger participants (age 2-7 years) engaged in a similar duration of therapeutic movement as the older participants (Figures 4(a) and (b)). Both younger and older participants performed up to 200 reaches during game play, with some participants reaching up to 300 times. Younger participants played the game for nearly the same amount of time as older participants (younger: 10.7 ± 3.6 min, older: 13.3 ± 5.6 min). Game output includes the number, timing, and location of all screen touches, regardless of accurate bubble pops, to assess the treatment dose and participant performance. Across ages, participants showed similar screen touch frequency (ITI, Figure 4(c)), though a few of the youngest showed slower touches. All participants showed similarly successful performance in the pop-to-touch ratio (Figure 4(d)), quantitatively indicating that the game is customizable enough to be challenging yet rewarding enough to motivate children of varying capability to make accurate touches. These results also show that the ADAPT system fulfills the core tenets of the SDT model. As can be seen from the ITI and the pop-to-touch ratio data, participants made clear, motivated attempts to pop the bubbles. Figure 4(e) is a spatio-temporal plot of the touch locations in a single trial, allowing a qualitative visual analysis of the ability of the ADAPT system to engage participants in reaching and balance training. Throughout the trial, the participant shifted from making touches in the center-left of the screen to the center-upper right of the screen and eventually to the bottom-right of the screen.

Discussion

Current physical therapy interventions for motor disabilities are often limited in achieving desired results due to the challenge of fully engaging young children in directed activities. Traditional therapy is both safe for young children and has been shown to be effective, but often fails to fully engage them. Specialized garments are safe for children but fail to train the muscles, decrease the independence of children, and are inconvenient to use. 13

Figure 2. Pre-test engagement survey. Prior to using the game with a participant, therapists were asked to rate the participant's typical therapy performance using the five questions from the Hopkins Rehabilitation Engagement Rating Scale (HRERS). Average therapist responses shown for each question. A total of 3 therapists were surveyed.

One of the most recent innovations in physical therapy with a high potential for engagement is virtual reality. Unfortunately, current designs for this technology prevent it from being used with young children. 9
Engineers and rehabilitation professionals collaborated to develop the ADAPT system with a design centered around the three core tenets of the Self-Determination Theory, to fulfill the need for engaging rehabilitation tools and address the shortcomings of existing solutions. Qualitative and quantitative assessments demonstrate its success in engaging young participants in reaching and balance training. Qualitatively, PTs report that therapy with the ADAPT system is as engaging as other delivery methods. Therapists agreed with the statements that the game made the delivery of therapy easier, that therapy with the game was more fun, and that the game was equally effective in meeting the therapy goals. Additionally, therapists affirmed that they would incorporate the game in a future therapy session. Quantitatively, the ADAPT system provides a number of metrics that can be used to measure therapy dose in young children with more ease and specificity than by other means. Young children demonstrated continuous engagement in a focused reaching activity while using the ADAPT system. Similar average game play times between younger and older participants highlight that both age groups were willing to engage in the game for similar amounts of time, and younger participants did not opt to stop a session early. The ADAPT system addresses several of the questions posed to developers of rehabilitation technology by Sulzer and Karfeld-Sulzer. 14 Specifically, the ADAPT system encourages active participation of the child, requires minimal setup and expertise to run, and facilitates therapy goals. During the design process, engineers consulted with PTs for insights on treating motor disabilities, standard engagement strategies during therapy, and meaningful measurements of performance. With this feedback, engineers designed the ADAPT system with the flexibility to facilitate different types of exercises in a range of positions. The design process of the ADAPT system affirms the importance of multidisciplinary collaboration in addressing rehabilitation needs. Furthermore, the ADAPT system has received positive and encouraging feedback at several presentations and conferences. 15-17 Most notably, the ADAPT system was presented at the American Physical Therapy Association's 2022 Combined Sections Meeting, a conference that allowed the ADAPT system to garner interest at the national level. 15

Limitations

The game settings and testing conditions were not uniform across trials. Therapists tailored each game to meet a participant's particular therapy goals. The screen size was changed partway through data collection: the initial prototype of the ADAPT system, which used rear-projection techniques with a custom-made screen (122 cm × 163 cm), was changed to a standard TV screen (123 cm × 76 cm) to improve device portability and reproducibility. Because therapists had routinely reduced the initial prototype's play area, the screen size change had a minimal effect on study results. Game variation is one of the ADAPT system's main strengths, enabling autonomous customization for therapists and participants alike. Therefore, while not consistent between participants, varying game settings across participants allows us to evaluate ADAPT in a realistic therapeutic setting.

Future directions

We plan to develop additional games for the ADAPT system that incorporate a cognitive component (i.e., card-matching). PTs are interested in an adaptive and responsive game algorithm based on a child's performance.
Ongoing research targets children ages 2-10 years old with balance impairments, in a movement comparison study of ADAPT game play versus traditional therapy activities. The two therapy delivery mechanisms are compared via the respective dosing of weight shifts, reaches, and overall movement.

Conclusion

Engagement is an essential but challenging factor in successful therapeutic outcomes. The ADAPT system is an affordable, smart screen-based intervention that engages young children in physical therapy goal-driven exercises.
Non-Canonical MSSM, Unification, And New Particles At The LHC

We consider non-canonical embeddings of the MSSM in high-dimensional orbifold GUTs based on the gauge symmetry SU(N), N = 5, 6, 7, 8. The hypercharge normalization factor k_Y can either have unique non-canonical values, such as 23/21 in a six-dimensional SU(7) model, or may lie in a (continuous) interval. Gauge coupling unification and gauge-Yukawa unification can be realized in these models by introducing new particles with masses in the TeV range which may be found at the LHC. In one such example there exist color-singlet fractionally charged states.

Introduction

High-dimensional orbifold grand unified theories (GUTs) [1,2] provide elegant solutions to the well-known problems encountered in four-dimensional (4D) GUTs such as SU(5) and SO(10), especially the doublet-triplet splitting problem and the proton decay problem. The non-supersymmetric version has, in particular, been exploited to show that unification of the standard model (SM) gauge couplings can be realized with a non-canonical embedding of U(1)_Y, the hypercharge component of the SM gauge group [3]. The couplings unify at M_GUT ≃ 4 × 10^16 GeV, which is also the scale at which the 4D N = 1 supersymmetry (SUSY) is broken, without introducing additional new particles. This approach has been taken a step further along two different directions. In [4] it was shown that by implementing additional gauge-Yukawa unification, the SM Higgs mass can be predicted. The mass turns out to be 135 ± 6 (144 ± 4) GeV with gauge-top (bottom/tau) Yukawa unification. This is encouraging because it is different from the prediction of 130 GeV in the minimal supersymmetric standard model (MSSM). In [5] these ideas were extended to the case of split supersymmetry, with similar predictions for the Higgs mass. In the orbifold scenario for GUT breaking, the supersymmetric GUT models exist in higher dimensions and are broken down, for the zero modes, to 4D N = 1 supersymmetric Standard-Model-like models by discrete symmetries on the extra-space manifolds [1,2]. The zero modes can be identified with the low-energy SM fermions and Higgs fields, allowing gauge-Higgs unification [6] and gauge-Yukawa unification [7]. For the canonical U(1)_Y normalization, the unification of the gauge couplings, the top and bottom quark Yukawa couplings, and the τ lepton Yukawa coupling can be realized in the 6D orbifold SU(8) and SU(9) models, and cannot be obtained in the orbifold SU(N) models with N < 8. Therefore, it is interesting to construct the minimal orbifold SU(N) model with gauge-Yukawa unification. In this paper, we show that the minimal model with the unification of the gauge couplings and third-family Yukawa couplings is the 6D orbifold SU(7) model with non-canonical U(1)_Y normalization k_Y = 23/21, where k_Y is defined in Eqs. (1) and (2). Moreover, we construct 7D SU(8) models with gauge-Yukawa unification and k_Y > 23/21. For completeness, we first consider the 6D orbifold SU(5) and SU(6) models with gauge-fermion and gauge-fermion-Higgs unification as a warm-up exercise. The 4D gauge group in these models is SU(3)_C × SU(2)_L × U(1)_Y, accompanied by one or several extra U(1) factors assumed to be broken at M_GUT. We define the unified gauge couplings at the GUT scale (M_GUT) as

g^2_GUT ≡ g^2_1 = g^2_2 = g^2_3 ,    (1)

where

g^2_1 ≡ k_Y g^2_Y ,    (2)

where k_Y is the U(1)_Y normalization factor, and g_Y, g_2, and g_3 are the gauge couplings for the U(1)_Y, SU(2)_L, and SU(3)_C gauge groups, respectively.
For the canonical U(1)_Y normalization, we have k_Y = 5/3. For orbifold GUTs where all of the SM fermions and Higgs fields are placed on a 3-brane at an orbifold fixed point, we can have any positive normalization for U(1)_Y, i.e., k_Y is an arbitrary positive real number. However, in this case charge quantization cannot be realized. We wish to consider the more interesting orbifold GUTs in which at least one of the SM fermions and Higgs fields arises from the zero modes of the bulk vector multiplet, so that their U(1)_Y charges can be determined. Charge quantization can then be achieved due to the gauge invariance of the Yukawa couplings and the anomaly-free conditions. In the orbifold models we consider, k_Y is then either uniquely determined to have a non-canonical value or lies in a continuous interval. For the latter case k_Y = 5/3 is possible, but there is no apparent reason why this value would be realized. Since the three SM gauge couplings unify quite nicely with the canonical hypercharge normalization, it can be argued that we should simply discard the models which do not predict k_Y = 5/3. However, unification in the MSSM with k_Y = 5/3 may well be accidental, and as the example of non-supersymmetric unification shows, there are different possibilities. In this paper we assume a non-canonical hypercharge normalization, as the models under consideration generally predict. We then discuss how gauge coupling unification and gauge-Yukawa unification can be obtained by adding a minimal set of vector-like particles to the MSSM spectrum. It is certainly our hope that these vector-like particles will be found at the Large Hadron Collider (LHC).

The paper is organized as follows. In sections 2 and 3 we consider SU(5) and SU(6) models. In the SU(5) model the only zero mode that can be introduced in the bulk is a quark doublet, and k_Y is predicted to be 1/15. The model can be extended to SU(6), with k_Y ≥ 1/15. We construct two SU(6) models with gauge-top and gauge-bottom Yukawa coupling unification, with k_Y = 2/3 and 1/3 respectively. We discuss SU(7) and SU(8) models in sections 4 and 5. We can have gauge-Yukawa unification for the third family in an SU(7) model, with k_Y = 23/21. This model can be extended to SU(8), with k_Y ≥ 23/21. Sections 6 and 7 concern gauge coupling unification and gauge-Yukawa unification with new particles in these models. We briefly remark on the Higgs mass in section 8 and conclude in section 9. Some details of the 6D and 7D orbifold models are provided in the two appendices.

SU(5) Models

We consider a 6D N = (1,1) supersymmetric SU(5) gauge theory compactified on the orbifold M^4 × T^2/Z_6 (for some details see Appendix A). The N = (1,1) supersymmetry in 6D has 16 supercharges and corresponds to N = 4 supersymmetry in 4D, and thus only the gauge multiplet can be introduced in the bulk. This multiplet can be decomposed under the 4D N = 1 supersymmetry into a vector multiplet V and three chiral multiplets Σ_1, Σ_2, and Σ_3 in the adjoint representation, where the fifth and sixth components of the gauge field, A_5 and A_6, are contained in the lowest component of Σ_1. To break the SU(5) gauge symmetry, we choose a suitable 5 × 5 matrix representation for R_{Γ_T}, expressed in terms of w = e^{iπ/3} with n_1 = 0. For the zero modes, the 6D N = (1,1) supersymmetric SU(5) gauge symmetry is then broken down to the 4D N = 1 supersymmetric SU(3)_C × SU(2)_L × U(1)_Y gauge symmetry [2].
We define the generator for U(1)_Y as follows:

T_{U(1)_Y} = diag(1/15, 1/15, 1/15, -1/10, -1/10).

Because tr[T^2_{U(1)_Y}] = 1/30, we obtain k_Y = 1/15. Under SU(3)_C × SU(2)_L × U(1)_Y, the adjoint representation 24 of SU(5) decomposes as

24 = (8,1)_{Q00} + (1,3)_{Q00} + (3,2)_{Q12} + (3bar,2)_{Q21} + (1,1)_{Q00},

where the last term (1,1)_{Q00} denotes the gauge field associated with U(1)_Y. The subscripts Qij, with Qij = -Qji, denote the charges under U(1)_Y, and in particular Q12 = 1/6. The Z_6 transformation properties for the decomposed components of V, Σ_1, Σ_2, and Σ_3 are given by the first 2 × 2 submatrices in Eqs. (78)-(81) in Appendix A. We choose the transformation phases accordingly, where k is given in Eqs. (76) and (77) in Appendix A. There are then no zero modes from the chiral multiplets Σ_2 and Σ_3, and only one zero mode, (3,2)_{Q12}, from the chiral multiplet Σ_1, which can be identified as the third-family quark doublet Q_3. The remaining MSSM matter fields and the two MSSM Higgs doublets can be put on the 3-brane at z = 0, where only the SM gauge symmetry is preserved.

SU(6) Models

For the SU(6) models where at least one of the SM fermions and Higgs fields arises from the zero modes of the chiral multiplets Σ_1, Σ_2 and Σ_3, we can show that the minimal normalization k_Y for U(1)_Y is 1/15, and the corresponding zero mode is the quark doublet, because it has the smallest U(1)_Y quantum number. Moreover, we can only have gauge-top or gauge-bottom quark Yukawa coupling unification, and we cannot obtain the right-handed leptons from the zero modes of the bulk vector multiplet. In the following subsections, we present three SU(6) models. In the first, the third-family quark doublet Q_3 is the only zero mode from the bulk vector multiplet, and k_Y is an arbitrary positive real number that is larger than or equal to 1/15. In the second and third SU(6) models, we have gauge-top and gauge-bottom quark Yukawa coupling unification, respectively. We consider 7D N = 1 supersymmetric SU(6) compactified on the orbifold M^4 × T^2/Z_6 × S^1/Z_2 (for some details see Appendix B), and 6D N = (1,1) supersymmetric SU(6) compactified on the orbifold M^4 × T^2/Z_6. The compactification process yields a 4D N = 1 supersymmetric SU(3)_C × SU(2)_L × U(1)_Y × U(1)_α gauge symmetry. The generator for U(1)_Y can be written as

T_{U(1)_Y} = diag(1/15 + a, 1/15 + a, 1/15 + a, -1/10 + a, -1/10 + a, -5a),

where a is a real number, with the U(1)_α generator the orthogonal combination. Because tr[T^2_{U(1)_Y}] = 1/30 + 30a^2, we obtain

k_Y = 1/15 + 60a^2.

The adjoint representation 35 of SU(6) decomposes under SU(3)_C × SU(2)_L × U(1)_Y × U(1)_α into blocks, where (1,1)_{Q00} in the third diagonal entry of the matrix and the last term (1,1)_{Q00} denote the gauge fields associated with U(1)_Y × U(1)_α. The subscripts Qij, with Qij = -Qji, are the charges under U(1)_Y × U(1)_α. The subscript Q00 = (0,0), and the other subscripts Qij with i ≠ j follow from the generators above. We will consider the following three models.

SU(6) Model I

Here the third-family quark doublet Q_3 is the only zero mode from the bulk vector multiplet, and a is an arbitrary real number, so that k_Y can take any value ≥ 1/15. To project out all the unwanted components in the chiral multiplets, we consider the 7D N = 1 supersymmetric SU(6). The N = 1 supersymmetry in 7D has 16 supercharges corresponding to N = 4 supersymmetry in 4D, and only the gauge supermultiplet can be introduced in the bulk. This multiplet can be decomposed under 4D N = 1 supersymmetry into a gauge vector multiplet V and three chiral multiplets Σ_1, Σ_2, and Σ_3, all in the adjoint representation, where the fifth and sixth components of the gauge field, A_5 and A_6, are contained in the lowest component of Σ_1, and the seventh component of the gauge field, A_7, is contained in the lowest component of Σ_2.
To break the SU(6) gauge symmetry, we choose 6 × 6 matrix representations for R_{Γ_T} and R_{Γ_S}, where n_1 = n_2 = 0. Note that R_{Γ_S} only breaks the additional supersymmetry. The Z_6 × Z_2 transformation properties of the decomposed components of V, Σ_1, Σ_2 and Σ_3 are the 3 × 3 submatrices in Eqs. (93)-(96) in Appendix B with the third and fourth rows and columns crossed out. With our choice of parities, there is no zero mode from the chiral multiplets Σ_2 and Σ_3, and only one zero mode, (3,2)_{Q12}, from the chiral multiplet Σ_1, which can be identified with the third-family quark doublet Q_3.

SU(6) Model II and SU(6) Model III

In this subsection we construct SU(6) models with gauge-top and gauge-bottom quark Yukawa coupling unification. We consider 6D N = (1,1) supersymmetric SU(6) compactified on the orbifold M^4 × T^2/Z_6. To break the SU(6) gauge symmetry, we choose a 6 × 6 matrix representation for R_{Γ_T}, where n_1 = n_2 = 0. The Z_6 transformation properties of the decomposed components of V, Σ_1, Σ_2 and Σ_3 are given by the first 3 × 3 submatrices in Eqs. (78)-(81) in Appendix A. We consider the following two models.

(A) SU(6) Model II (gauge-top quark Yukawa coupling unification). The zero modes from the chiral multiplets Σ_1, Σ_2 and Σ_3 are presented in Table 1. We can identify them as the third-family quark doublet, the right-handed top quark, and the MSSM Higgs doublets. From the trilinear term in the 6D bulk action, we obtain the top quark Yukawa term; thus, at M_GUT, the top quark Yukawa coupling y_t is related to the unified gauge coupling, with V the physical volume of the extra dimensions.

(B) SU(6) Model III (gauge-bottom quark Yukawa coupling unification). For this case we make a different choice of parities. The zero modes arise from the chiral multiplets Σ_1, Σ_2 and Σ_3 and are presented in Table 2. We can identify them as the third-family quark doublet, the right-handed bottom quark, and the MSSM Higgs doublets. From the trilinear term in the 6D bulk action, we obtain the bottom quark Yukawa term; thus, at M_GUT, the bottom quark Yukawa coupling y_b is related to the unified gauge coupling in the same way.

SU(7) Models

As discussed above, to achieve gauge-fermion-Higgs unification the minimal gauge group is SU(7), with the U(1)_Y normalization k_Y = 23/21 uniquely determined. This can be seen as follows. The U(1)_Y generator in SU(7) belongs to its Cartan subalgebra and can be parametrized by its diagonal entries. The traceless condition gives one relation among them, and gauge-fermion-Higgs unification fixes the rest, so we have a unique solution, for which tr[T^2_{U(1)_Y}] = 23/42. With the canonical normalization tr[T_i^2] = 1/2 of the non-abelian generators, we obtain k_Y = 23/21. We consider a 6D N = (1,1) supersymmetric SU(7) gauge theory compactified on the orbifold M^4 × T^2/Z_6 (for some details see Appendix A). To break SU(7), we select a 7 × 7 matrix representation for R_{Γ_T}, where n_1 = n_2 = n_3 = 0. Thus, for the zero modes, the 6D N = (1,1) supersymmetric SU(7) gauge symmetry is broken down to the 4D N = 1 supersymmetric SU(3)_C × SU(2)_L × U(1)_Y × U(1)_β × U(1)_γ gauge symmetry [2]. We assume that the two additional U(1) symmetries are spontaneously broken at M_GUT by the usual Higgs mechanism. It is conceivable that these two symmetries could play some useful role as flavor symmetries [8], but we will not pursue this any further here.
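To make the uniqueness argument concrete, the sketch below solves for a diagonal U(1)_Y generator diag(x1,x1,x1,x2,x2,x3,x4) of SU(7). The specific charge constraints used (quark doublet, right-handed up and down quarks) are our assumptions, standing in for the displayed conditions that did not survive extraction; they reproduce the stated tr[T^2] = 23/42 and k_Y = 23/21, and the remaining entries automatically yield the lepton/Higgs doublet and right-handed charged lepton hypercharges.

```python
from sympy import symbols, solve, Rational

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')

# Hypothetical gauge-fermion-Higgs unification constraints: the SU(7) adjoint
# must contain components with the third-family hypercharges
#   Q:   Y(3,2)    = x1 - x2 = 1/6
#   u^c: Y(3bar,1) = x3 - x1 = -2/3
#   d^c: Y(3bar,1) = x4 - x1 = 1/3
# together with tracelessness of diag(x1,x1,x1,x2,x2,x3,x4).
eqs = [x1 - x2 - Rational(1, 6),
       x3 - x1 + Rational(2, 3),
       x4 - x1 - Rational(1, 3),
       3*x1 + 2*x2 + x3 + x4]
sol = solve(eqs, [x1, x2, x3, x4])
diag = [sol[x1]]*3 + [sol[x2]]*2 + [sol[x3], sol[x4]]

tr_T2 = sum(v**2 for v in diag)
print(sol)       # {x1: 2/21, x2: -1/14, x3: -4/7, x4: 3/7}
print(tr_T2)     # 23/42, as stated in the text
print(2*tr_T2)   # k_Y = 23/21
# Consistency check: L/H_d, H_u and e^c hypercharges also come out right:
print(sol[x2]-sol[x4], sol[x2]-sol[x3], sol[x4]-sol[x3])  # -1/2, 1/2, 1
```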
We define the generators for U(1)_Y × U(1)_β × U(1)_γ accordingly. The SU(7) adjoint representation 48 decomposes under SU(3)_C × SU(2)_L × U(1)_Y × U(1)_β × U(1)_γ as indicated, where (1,1)_{Q00} in the third and fourth diagonal entries of the matrix and the last term (1,1)_{Q00} denote the gauge fields associated with U(1)_Y × U(1)_β × U(1)_γ. The Z_6 transformation properties of the decomposed components of V, Σ_1, Σ_2 and Σ_3 are given by Eqs. (78)-(81). We will consider two concrete models.

SU(7) Model I

We choose the parities accordingly, where k is given in Eqs. (76) and (77) in Appendix A. The zero modes from the chiral multiplets Σ_1, Σ_2 and Σ_3 are presented in Table 3. We can identify them as the third-family SM fermions and one pair of Higgs doublets. Interestingly, we do not obtain any exotic particle from the zero modes of the chiral multiplets. From the trilinear term in the 6D bulk action, we obtain the top quark and tau lepton Yukawa terms; thus, at M_GUT, these Yukawa couplings are related to the unified gauge coupling, where y_τ is the tau lepton Yukawa coupling. However, we do not obtain the bottom quark Yukawa term from the 6D bulk action.

SU(7) Model II

The zero modes from the chiral multiplets Σ_1, Σ_2 and Σ_3 are given in Table 4. We can identify them as the third-family SM fermions, the MSSM Higgs doublets, and an exotic (left-handed singlet) quark b_X. From the trilinear term in the 6D bulk action, we obtain the top quark, bottom quark, and tau lepton Yukawa terms.

Table 4: Zero modes from the chiral multiplets Σ_1, Σ_2 and Σ_3 in the SU(7) Model II.

Thus, at M_GUT, we have unification of the SM gauge couplings and the third-family SM fermion Yukawa couplings. We can give a GUT-scale mass to the exotic quark b_X by introducing an additional exotic quark b̄_X with quantum numbers (3̄,1)_{QX} on the observable 3-brane at z = 0, where QX = (1/3, −3, 0). Suppose we introduce one pair of SM singlets S′ and S̄′ with charges 1 and −1, respectively, whose VEVs break U(1)_γ at M_GUT. The exotic quarks b_X and b̄_X can then pair up and acquire a mass of order M_GUT via the brane-localized superpotential term S′ b_X b̄_X.

In the corresponding 7D construction, the zero modes from the chiral multiplets Σ_1, Σ_2 and Σ_3 are presented in Table 5. We can identify them as the third-family SM fermions, the MSSM Higgs doublets, and the exotic quark b_X. From the trilinear term in the 7D bulk action, we obtain the top quark, bottom quark, and tau lepton Yukawa terms; thus, at M_GUT, these Yukawa couplings are again related to the unified gauge coupling.

New Particles and Gauge Coupling Unification

For a non-canonical U(1)_Y normalization, it is necessary to introduce new particles to achieve unification. Here, as an example, we consider restoring gauge coupling unification by adding a minimal set of vector-like particles with SM quantum numbers. These particles can be put on the 3-brane at z = 0, and their masses can be of the order of the weak scale due to the Giudice-Masiero mechanism [9]. We denote these particles as u_x and so on, where u_x stands for the vector-like pair with the same quantum numbers as those of u + u^c. Although we employ two-loop renormalization group equations (RGEs) for the gauge couplings in the numerical calculations, for the discussion below we consider the one-loop β-coefficients for the MSSM and the vector-like particles. From the one-loop RGEs, it is straightforward to obtain relations among the couplings and scales, where s_W stands for sin θ_W, and α and α_s are the electromagnetic and strong couplings at m_Z. From Eq. (59), we see that b_3 − b_2 is an integer.
For the GUT scale to be smaller than the Planck scale and large enough to avoid the bounds on proton decay, Eq. (60) requires the contribution (b_3 − b_2)_x from the vector-like particles to vanish, assuming the latter have masses close to the weak scale. From Eq. (61), the range of (b_1 − b_2)_x allowing gauge coupling unification can be obtained, depending on the value of k_Y. Also, α_GUT ≪ 1 is required for perturbative unification. Simple examples that satisfy the above conditions are as follows. For k_Y = 1/15, as in the SU(5) model, gauge coupling unification can be restored by adding two sets of L_x + u_x; it can also be restored by adding L_x + u_x + 2e_x, or by adding 4e_x. For k_Y = 1/3, as in the SU(6) model with gauge-bottom quark Yukawa coupling unification, one can again add two sets of L_x + u_x, or 3e_x. For k_Y = 2/3, as in the SU(6) model with gauge-top quark Yukawa coupling unification, one can add L_x + u_x + e_x or 3(L_x + d_x) + e_x. Finally, for k_Y = 23/21, as in the SU(7) model with unification of the gauge couplings and the third-family Yukawa couplings, one can add L_x + u_x. Because such additional vector-like particles could be observed at the LHC and the ILC, these future experiments would allow us to distinguish between the models.

New Particles and Gauge-Yukawa Unification

In this section we probe gauge-Yukawa unification following the analysis in Ref. [10] (see also Ref. [11] for details and references). In our analysis we use the dimensional reduction (DR) renormalization scheme, which is known to be consistent with SUSY. The DR Yukawa couplings (y_{t,b,τ}) and gauge couplings (g_i) in the MSSM at the Z-boson mass scale are written in terms of the corresponding low-energy inputs and threshold corrections [13]. The quantities δ_{t,b,τ,g_i} represent SUSY threshold corrections; in our analysis we treat them as free parameters without specifying any particular SUSY breaking scenario. Once all the parameters δ_{t,b,τ,g_i} are specified, all DR couplings in the MSSM are determined at m_Z. We then use the two-loop RGEs for the MSSM gauge couplings and the one-loop RGEs for the Yukawa couplings to study the unification of couplings at the GUT scale. To study gauge-Yukawa unification, we look for a region where the top, bottom and tau Yukawa couplings are unified (y_t = y_b = y_τ ≡ y_G) at the GUT scale. We define the GUT scale M_G as the scale where g_1(M_G) = g_2(M_G) ≡ g_G. In our analysis we allow for the possibility that the strong gauge coupling is not exactly unified, i.e., g_3(M_G)^2/g_G^2 = 1 + ε_3, where ε_3 can be a few percent. This mismatch ε_3 from exact unification can be attributed to GUT-scale threshold corrections to the unified gauge coupling. First, we review gauge-Yukawa unification for the canonical case k_Y = 5/3. In Fig. 1, the contours of δ_b, tan β and ε_3 required for Yukawa unification at the GUT scale are shown as functions of δ_t and δ_{g_3}. To fix δ_{g_1,2}, we assume that all SUSY mass parameters contributing to δ_{g_1,2} are equal to 500 GeV (δ_{g_1} = −0.006 and δ_{g_2} = −0.02). As shown in Fig. 1, tan β should be about 52, and the value of δ_b should be a few percent, which is much smaller than one would naively expect in the large tan β case. Small values of δ_b significantly constrain the superpartner spectrum, as discussed in Refs. [14,11]. On the other hand, δ_t is in the expected range (see Ref. [15]).

Figure 1: Contours of δ_b (dotted lines in panel (a)), tan β (dashed lines in panel (b)) and ε_3 (dotted lines in panel (b)), shown as functions of δ_t and δ_{g_3}, as required for Yukawa unification (y_t = y_b = y_τ) at the GUT scale. After finding the region of Yukawa unification, contours of the parameter R (defined in the text) are plotted in panel (a); the shaded regions represent where gauge-Yukawa unification is achieved at the 5% level (R ≤ 1.05). Here we have fixed δ_τ = 0.02, δ_{g_1} = −0.006 and δ_{g_2} = −0.02.
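To illustrate the one-loop mechanics behind the Section 6 examples (e.g. k_Y = 1/15 with two sets of L_x + u_x added near m_Z), the sketch below evolves α_i^{-1} linearly in ln μ. The β-coefficient shifts (Δb_Y, Δb_2, Δb_3) = (22/3, 2, 2) and the rough weak-scale inputs are our assumptions (standard one-loop MSSM coefficients b_Y = 11, b_2 = 1, b_3 = −3); this is a back-of-the-envelope check, not the paper's two-loop computation.

```python
import math

# Rough m_Z inputs (assumed): alpha_em^-1 = 128, sin^2(theta_W) = 0.231, alpha_s = 0.118
a_em_inv, s2w, a_s = 128.0, 0.231, 0.118
aY_inv = a_em_inv * (1 - s2w)   # hypercharge coupling alpha_Y^-1
a2_inv = a_em_inv * s2w
a3_inv = 1.0 / a_s

kY = 1.0 / 15.0
# One-loop MSSM coefficients plus two sets of L_x + u_x: Delta b = (22/3, 2, 2)
bY, b2, b3 = 11.0 + 22.0/3.0, 1.0 + 2.0, -3.0 + 2.0
b1 = bY / kY                    # alpha_1 = kY * alpha_Y, so b_1 = b_Y / kY
a1_inv = aY_inv / kY

# alpha_i^-1(mu) = alpha_i^-1(m_Z) - b_i/(2*pi) * ln(mu/m_Z); solve alpha_1 = alpha_2
t = 2 * math.pi * (a1_inv - a2_inv) / (b1 - b2)
mZ = 91.19
M_G = mZ * math.exp(t)
aG_inv = a2_inv - b2 * t / (2 * math.pi)
a3_at_MG = a3_inv - b3 * t / (2 * math.pi)
print(f"M_GUT ~ {M_G:.2e} GeV, alpha_GUT^-1 ~ {aG_inv:.1f}, "
      f"alpha_3^-1(M_GUT) ~ {a3_at_MG:.1f}")
# Yields M_GUT ~ 3e16 GeV with alpha_1 = alpha_2 = alpha_3 agreeing to about
# a percent, consistent with the claim that 2(L_x + u_x) restores unification.
```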
After requiring Yukawa unification, we calculate a parameter R, which quantifies the deviation from exact gauge-Yukawa unification. In the shaded regions of Fig. 1, gauge-Yukawa unification is realized at the 5% level (R ≤ 1.05), while ε_3 is allowed to be a few percent. Next, we take k_Y = 23/21, as predicted by the SU(7) model, and give examples of how gauge-Yukawa unification might be realized. Gauge coupling unification can be restored by adding vector-like particles with SM quantum numbers, as in Section 6. A simple example for k_Y = 23/21 is adding one set of L_x + u_x. However, as shown in Fig. 2, Yukawa unification then requires δ_t to be shifted up by 0.06 compared to Fig. 1, which is not compatible with the SUSY threshold corrections in most of the parameter space. Note that δ_t can be modified if mixing in the top quark sector is allowed. We then have Yukawa and mass terms in which the primes denote weak eigenstates. Diagonalizing the mass matrix, we obtain the shifted top Yukawa coupling, with the following notation: y_{t0} is the value without mixing, x ≡ M/m_t, and ξ ≡ y′/y_t. Experimentally, M ≲ 200 GeV is excluded [16]. As an example we take M = 300 GeV. Precision electroweak data (more precisely, the bounds on the oblique parameter T) then require the extended CKM parameter V_xb ≲ 0.4 [17]. This constraint corresponds to ξ ≲ 0.5 and a downward shift in δ_t of 0.06. A similar example is adding one set of L_x + d_x + e_x. Gauge-Yukawa unification is then obtained with essentially the same parameters as above, since the β-coefficients are identical at one loop. In this case δ_t can be modified even without mixing, due to the new Yukawa couplings y_1 L_x H_d e^c_x + y_2 L^c_x H_u e_x. Shifting δ_t down appreciably requires no or only a weak y_1 coupling and a strong y_2 coupling; a numerical example is provided in Fig. 3.

Figure 3: Same as Fig. 1, but for k_Y = 23/21 with one set of L_x + d_x + e_x added at M = 300 GeV. The Yukawa coupling y_1 is assumed negligible, while y_2 is taken to be 0.7 at M, corresponding to ≃ 1.5 at the GUT scale.

Another way to restore gauge coupling unification while preserving Yukawa unification is to add vector-like charged singlets, allowing fractional charges. As an example, we again take k_Y = 23/21 and add two pairs of charged singlets with mass m_Z and charges ±1 and ±2/3. As shown in Fig. 4, gauge-Yukawa unification is then achieved similarly to the canonical case. In Fig. 5, we show the charge |Q| of a vector-like charged-singlet pair with mass m_Z that allows unification, for k_Y in the range 1/15 to 5/3. (Adding one pair with charges ±Q is equivalent at one loop to adding multiple pairs with charges ±Q_i if Q^2 = Σ_i Q_i^2.) Here we choose δ_{t,b,τ,g_i} such that Q = 0 for k_Y = 5/3 and α_s^{MS-bar}(m_Z) = 0.119. The ±0.01 uncertainty we display for α_s^{MS-bar}(m_Z) represents both SUSY and GUT threshold corrections.

Figure 5: |Q| of the vector-like charged singlet with mass m_Z allowing unification, for k_Y in the range 1/15 to 5/3.

For fractionally charged singlets, there is a constraint of about 10^{-22} such particles per nucleon [18]. This requires the particle mass M to be ≳ 10^4 T_r, where T_r is the reheating temperature [19].
Since T_r can be as low as a few MeV, this in principle allows fractionally charged singlets as light as allowed by accelerator searches. The mass limit from accelerators is around m_Z (for a review see Ref. [20]).

Higgs Mass

We end the paper with some remarks on the Higgs mass, where by the Higgs mass we refer to the mass of the light CP-even scalar. Assuming m_Z ≪ m_SUSY, where m_SUSY is the characteristic supersymmetric particle mass scale, the theory below m_SUSY is the SM with threshold effects at m_SUSY. The SM Higgs quartic coupling at m_SUSY is then fixed by the gauge couplings and tan β, where tan β is the ratio of the two supersymmetric Higgs vacuum expectation values and θ_W is the Weinberg angle. Since cos^2 θ_W = k_Y/(1 + k_Y) at M_GUT, the value of θ_W at m_SUSY depends on k_Y. The Higgs mass therefore also depends on the value of k_Y, but for SUSY broken at the TeV scale the effect is numerically insignificant, of order a few hundred MeV. The Higgs mass predictions are therefore practically the same as in the canonical MSSM [21] and in SUSY SO(10) with third-family Yukawa unification [14,22]. The Higgs mass upper bound for m_t = 172.5 GeV and m_SUSY = 1 TeV is ≈ 130 GeV [21].

Conclusion

We have considered a class of orbifold GUTs based on 6D N = (1,1) and 7D N = 1 supersymmetric SU(N) gauge theories, in which the 4D gauge group is SU(3)_C × SU(2)_L × U(1)_Y below the compactification scale. For the SU(5) model the only zero mode that can be introduced in the bulk is a quark doublet, while the SU(6) model allows gauge-Higgs unification. Finally, we can have gauge-Yukawa unification for the third family in SU(7) or higher-rank groups. Depending on the model, the U(1)_Y normalization factor k_Y is either uniquely determined to have a non-canonical value or lies in a continuous interval. Gauge coupling unification and gauge-Yukawa unification can be obtained for non-canonical k_Y values by adding particles to the MSSM spectrum. As examples, we introduce a minimal set of vector-like multiplets with SM quantum numbers or fractionally charged color singlets, assuming masses in the TeV range. The existence of such particles will be tested by the upcoming LHC.
2014-10-01T00:00:00.000Z
2006-08-16T00:00:00.000
{ "year": 2006, "sha1": "431beff5291e55b639a549f3d1d39d56a62e06b3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0608181", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "825579e53d55d3d6beb9b1e19e3ea4f2260d80c8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
27349494
pes2o/s2orc
v3-fos-license
Cell Replication and Angiogenesis in Central Nervous System Tumors and Their Relationship with the Expression of Tissue Prolactin and Hyperprolactinemia

This study aimed to assess the effect of intracellular prolactin (ICPRL) and hyperprolactinemia on cell replication, using an immunohistochemical (IHC) technique for Ki-67 and Mcm-2, and on angiogenesis, using IHC for endoglin CD-105, in central nervous system (CNS) tumors. This cross-sectional study included 79 cases of surgically excised primary CNS tumors of neuroepithelial origin (41.8% of all cases: 10.2% astrocytomas, 24% glioblastomas and 7.6% oligodendrogliomas) and meningeal origin (58.2% of all cases). Ki-67 and Mcm-2 indexes were calculated as a percentage of marked cells. The medians of the Ki-67 and Mcm-2 indexes were significantly lower in meningiomas than in glioblastomas (p < 0.001 for Ki-67 and p < 0.001 for Mcm-2) and oligodendrogliomas (p < 0.001 for Ki-67 and p = 0.02 for Mcm-2). A good correlation was observed between the Ki-67 and Mcm-2 replication markers (r_S = 0.60). There were no significant differences in vascular density between the different histological types. Immunohistochemistry for ICPRL was positive in 45.6% of the tumors. Serum prolactin (PRL) was elevated in 30.6% of the cases. Multiple regression analysis revealed no important effect of ICPRL or serum PRL on the Ki-67 and Mcm-2 indexes or on vascular density. The analysis of the combined impact of the ICPRL and serum PRL variables revealed a trend towards an increase in microvessel density in tumor tissue and a significant increase in the cell replication markers (p = 0.009 for Ki-67 and p = 0.05 for Mcm-2). PRL in tumor tissue may be one of the modulating factors of cell proliferation in the CNS.

Introduction

Prolactin (PRL) was originally identified as a neuroendocrine hormone of exclusively pituitary origin, but its presence and secretion have recently been described in other tissues [1,2]. The main extrapituitary sites of PRL production are the decidua, mammary tissue, T-lymphocytes, brain and endometrium [3,4]. Likewise, the PRL receptor (PRL-R) has already been found in the hypothalamus, choroid plexus and lymphocytes [3], as well as in prostate and breast tumors and in some cases of central nervous system (CNS) tumors [5-8]. More than 300 different organic functions have been reported for PRL in different tissues [4]. At the cellular level, PRL has mitogenic, antiapoptotic, morphogenic and secretory activity, as well as angiogenesis-modulating effects [5,9]. The association between PRL and breast cancer risk has been described by several authors [10-12], and the expression of PRL and PRL-R has also been detected in prostate cancers [13], where it shows a positive correlation with the histological grade of the tumor [14]. Evidence suggests that PRL stimulates cell proliferation, increases motility and modulates neovascularization in some tumor strains [5,13]. The role of PRL in the CNS is uncertain, although its mitogenic activity in astrocytes [15] and its proliferative effect in cultured meningioma [16,17] and glioblastoma [18] cells have previously been described.
A recent study demonstrated the presence of PRL and hyperprolactinemia in a series of CNS tumors [8]. Therefore, this study evaluated the possible association of intracellular PRL (ICPRL) and elevated serum PRL with cell proliferation, assessed using the Ki-67 antigen and the minichromosome maintenance protein 2 (Mcm-2), and with angiogenesis, assessed using endoglin (CD-105).

Materials and Methods

This cross-sectional study included 79 cases of primary CNS tumors of neuroepithelial (41.8%) and meningeal (58.2%) origin that were surgically excised at Hospital São José at Irmandade Santa Casa de Misericórdia de Porto Alegre (ISCMPA), Brazil, over a period of 40 months. Patient age ranged from 15 to 86 years (mean age = 55.6 years) and 67% were women. The neuroepithelial tumors were distributed as follows: astrocytomas (10.2%), glioblastomas (24%) and oligodendrogliomas (7.6%). The classification and grading of the tumors according to the World Health Organization (WHO) criteria [19] is shown in Table 1. This study was approved by the Research Ethics Committee of Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA). All patients signed an informed consent form, and the authors signed a term of confidentiality. Before surgery, patients were asked about exposure to medications that potentially increase serum PRL levels; an affirmative answer was considered an exclusion criterion, as were elevated levels of thyroid-stimulating hormone (TSH). One day before the procedure, serum PRL and TSH were measured by an automated direct chemiluminometric assay (Chiron Diagnostics Corp., East Walpole, MA). Prolactin levels above the reference range were considered hyperprolactinemia. The reference values were 2 - 17 ng/ml for men and 3 - 29 ng/ml for women for PRL, and 0.3 - 4.7 mIU/L for TSH. All patients were operated on by the same neurosurgeon. After the routine histopathological exam, a sample of the surgical specimen embedded in paraffin was cut into 3-μm sections and prepared for immunohistochemistry. The same pathologist examined all slides to confirm the diagnosis and determine tumor histological type and grade according to the WHO criteria [19].
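A trivial way to encode the sex-specific reference ranges above when preparing such a dataset (a minimal sketch; the data frame and column names are hypothetical, not from the paper):

```python
import pandas as pd

# Hypothetical patient table; column names are illustrative only.
df = pd.DataFrame({"sex": ["M", "F", "F"], "prl_ng_ml": [12.0, 41.0, 25.0]})

# Reference upper limits from the text: 17 ng/ml (men), 29 ng/ml (women).
upper = df["sex"].map({"M": 17.0, "F": 29.0})
df["hyperprolactinemia"] = df["prl_ng_ml"] > upper
print(df)
```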
Ki-67 and Mcm-2 indexes were calculated as the percentage of marked nuclei among about 1000 cells and expressed as the mean of the values found by two observers, who were blinded to the experiment [20,21]. To evaluate the expression of ICPRL, the unequivocal presence of at least 1% of tumor cells with clearly marked cytoplasm among 300 counted tumor cells was classified as positive [8,22]. Evaluation of microvascular density (MVD) using anti-CD-105 was performed with the Chalkley point counting method, internationally acknowledged as the criterion standard for the evaluation of MVD [23]. The technique consists of selecting three fields of greatest MVD (the so-called hotspots), which are subjectively chosen on each slide after scanning the tumor section in a microscopic field of low magnification (×10). The Chalkley grid with 25 random points was attached to the lens of a light microscope and, at a larger magnification (×200), directed to each hotspot so that the greatest number of grid points coincided with the endothelium or fell within the microvascular areas stained by IHC. Endothelial cells or cell groups were classified as countable microvessels. MVD CD-105 was evaluated according to the mean count of microvessels in the three hotspots, also called the Chalkley index or mean MVD. The Chalkley point count was performed independently by two experienced observers, and the final MVD was the mean of the two independent counts. There was good agreement between the readings of the two observers.

Median Ki-67 indexes in meningiomas, astrocytomas, glioblastomas and oligodendrogliomas were 3.0%, 4.7%, 10.4% and 18.6%, respectively, with a significant difference between meningiomas and glioblastomas (p < 0.001) and oligodendrogliomas (p < 0.001). Median vascular density values were 8.1, 6.7, 9.2 and 12.8 for meningiomas, astrocytomas, glioblastomas and oligodendrogliomas, respectively, but there were no significant differences between groups. Immunohistochemistry for ICPRL was positive in 36 (45.6%) of the tumors. There were no significant differences between groups with and without positivity for ICPRL regarding age, sex, tumor histological type or cell proliferation and angiogenesis markers (Tables 2 and 3). Figure 3 shows the cytoplasmic immunopositivity for ICPRL in a juxtanuclear distribution. Serum PRL samples were available for 62 of the 79 cases analyzed. Serum PRL levels ranged from 4 to 70 ng/ml and were high in 19 cases (30.6%). No significant differences were found in age, sex, histological type or cell proliferation and angiogenesis markers between the patients with hyperprolactinemia and those with normal serum PRL (Tables 4 and 5). When evaluated in isolation using a multiple regression model, no important effect was found for ICPRL or serum PRL on the Ki-67 and Mcm-2 indexes or on vascular density. To assess the possible combined impact of the ICPRL and serum PRL variables on cell replication and angiogenesis markers, the 62 samples were assembled into three groups: group 1 = positive ICPRL and hyperprolactinemia; group 2 = ICPRL or hyperprolactinemia; and group 3 = negative ICPRL and normal serum PRL. This analysis of gathered groups revealed a trend towards an increase in vascular density in the presence of ICPRL and/or hyperprolactinemia, which was significant for the Ki-67 and Mcm-2 indexes (Table 6).
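A sketch of how the quantitative readouts described above could be computed from raw counts (the numbers below are hypothetical toy values; the Chalkley index is the mean of three hotspot grid counts, the final MVD the mean over the two observers, and group comparisons use the Mann-Whitney test):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Chalkley counts (25-point grid) in three hotspots, for each of two observers
obs1 = np.array([9, 8, 10])
obs2 = np.array([8, 9, 9])
mvd = np.mean([obs1.mean(), obs2.mean()])  # final MVD = mean of the two Chalkley indexes
print(f"MVD = {mvd:.2f}")

# Ki-67 index: marked nuclei as a percentage of ~1000 counted cells
ki67_index = 100 * 104 / 1000              # e.g. 104 marked nuclei -> 10.4%
print(f"Ki-67 index = {ki67_index:.1f}%")

# Mann-Whitney comparison of Ki-67 indexes between two groups (toy data)
ki67_icprl_pos = [3.1, 10.4, 18.6, 4.7, 12.0]
ki67_icprl_neg = [2.8, 9.0, 15.2, 4.1, 8.5]
stat, p = mannwhitneyu(ki67_icprl_pos, ki67_icprl_neg, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```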
Discussion

The Ki-67 index, a parameter of cells in the cell cycle, has been extensively studied and validated as a good cell replication marker [24,25]. The Ki-67 index values in this series were similar to those reported in other studies [26], although higher in oligodendrogliomas when compared with the series studied by Wharton et al. [20]. More recently, Mcm-2, part of a pre-replication complex essential for the replication of eukaryotic cells, has been used to assess the cell cycle [27]. The Mcm-2 index in this study showed a good correlation with the Ki-67 index, with the median Mcm-2 being the higher of the two in all groups of tumors under study. This finding may be attributed to the fact that Mcm-2 can also identify cells in the transition from G1 to G0, whereas the Ki-67 marker can only identify cells in G1 [28]. The tumor angiogenesis grade, measured by microvascular density, has also been used as a potential prognostic marker and possible treatment target in CNS tumors [29], particularly for gliomas. In a study that evaluated angiogenesis in gliomas, Lebelt et al. [30] found a greater microvessel density in glioblastomas and a significant correlation with degree of malignancy. Angiogenesis in oligodendrogliomas, unlike in other CNS tumors, is little understood. Usually considered a slow-growing tumor, in some cases the course is more rapid and the histological features show vascular endothelial proliferation [31]. In a recent study, Netto et al. [32] found a significant difference in microvessel density between grade II and grade III oligodendrogliomas. Despite not reaching statistical significance, the highest median vascular density values in our study were found in oligodendrogliomas. Although the pattern of angiogenesis can be quite different among histological types of CNS tumors, in our series the comparison of median microvessel densities did not reveal any significant differences between the different tumor types in our sample. Barresi et al. [21] found a strong correlation between histological grade, the Ki-67 index and the extent of tumor vascularization using the CD-105 marker in meningiomas. Our study did not confirm their results, as the correlation between replication markers and angiogenesis markers was weak. The relationship between PRL and CNS tumors has attracted attention since 1980, when the first cases of hyperprolactinemia associated with meningiomas were reported [33,34]. Other cases of hyperprolactinemia associated with a gangliocytoma with immunohistochemically positive PRL [35], and of hyperprolactinemia with a third ventricle epidermoid cyst [36], were also reported. Ciccarelli et al. [7], in a series of CNS tumors, found high PRL levels in 27% to 61.5% of the different histological subtypes. In our study, although serum prolactin varied from 4 to 70 ng/ml and hyperprolactinemic levels varied from 20 to 70 ng/ml, the possibility of a PRL influence could not be disregarded, since hyperprolactinemia was found in 30.6% of all tumors.
The main source of PRL in the human organism is the anterior pituitary. However, there is clear evidence that several human cells/tissues physiologically express PRL, and the main extrapituitary sites of PRL production described are the decidua, mammary tissue, the prostate, the brain, the skin, T-lymphocytes and adipocytes [3,4,37]. In this study, immunohistochemistry for ICPRL was positive in 45.6% of the tumors. To the best of our knowledge, the literature describes just one study in which ICPRL in nervous system tumors was observed, in 21% of all tumor subtypes [8]. The relationship between ICPRL and serum PRL is complex, given that 1) serum PRL could act on tumor cells regardless of the PRL receptor; 2) intracellular expression of PRL may or may not reflect local origin; and 3) hyperprolactinemia does not necessarily reflect production by the tumor tissue. The functional impact of extrapituitary PRL has been mainly linked to tumorigenesis [37]. There has been growing interest in the recently recognized proliferative and angiogenic actions associated with the activity of PRL, an endocrine and autocrine/paracrine hormone [9,38]. In this sense, interest in substances with therapeutic potential against the proliferative action of extrapituitary PRL has grown with the evidence of tissue expression of PRL-R in 80% - 90% of breast cancers, with greater expression in neoplastic tissue than in the tissue adjacent to the tumor [39], and of the expression of PRL and activation of stat5a/b in association with tumor grade in prostate cancer [14]. In this series, the analysis of the combined impact of the ICPRL and serum PRL variables revealed a trend towards an increase in microvessel density in tumor tissue; regarding replication, despite assessing CNS tumors of different histological types and grades of malignancy, we found a significant increase in the cell replication markers. To our knowledge, this is the first study to assess the effect of ICPRL and of increased serum PRL levels on the cell cycle and angiogenesis in CNS tumors. The significant increase in the Ki-67 and Mcm-2 indexes when both variables (ICPRL and hyperprolactinemia) were positive suggests that PRL modulation has an effect on cell replication in tumor tissue. Future clinical studies should investigate possible progression paths in patients with CNS tumors who have hyperprolactinemia or ICPRL in tumor cells.

The median (minimum and maximum) values were used in the analysis of quantitative variables because of data asymmetry. The intraclass correlation coefficient (ICC) was used to analyze agreement between the two observers. The Mann-Whitney test was used to compare the Ki-67, Mcm-2 and CD-105 marker values between groups with positive and negative ICPRL according to the different types of tumors. A linear regression was run on logarithmically transformed data to assess the combined effect of ICPRL and serum PRL on cell proliferation and angiogenesis markers. The level of significance was set at 5%. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 15.0 for Windows (SPSS Inc., Chicago, IL).

Table 2. Distribution of cases according to age, sex, tumor histological type and ICPRL positivity. * Chi-square test; † Median (minimum and maximum).
Table 3. Values for cell proliferation (Ki-67 and Mcm-2) and angiogenesis (CD-105) markers in different histological types according to positive and negative ICPRL*. ICPRL = intracellular prolactin; n = number of cases; * Data presented as median (minimum and maximum); † Mann-Whitney test.

Table 5. Values for cell proliferation (Ki-67 and Mcm-2) and angiogenesis (CD-105) markers in different histological types according to serum PRL*. * Data presented as median (minimum and maximum); † Mann-Whitney test.
2017-10-31T18:40:08.877Z
2012-07-18T00:00:00.000
{ "year": 2012, "sha1": "17e7bb62899d85c14e83bcf45fa49168b9c884d4", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=21112", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "17e7bb62899d85c14e83bcf45fa49168b9c884d4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
209500970
pes2o/s2orc
v3-fos-license
The volumes of Miyauchi subgroups

Miyauchi described the L- and ε-factors attached to generic representations of the unramified unitary group of rank three in terms of local newforms defined by a sequence of subgroups. We calculate the volumes of these Miyauchi groups.

Miyauchi's theory of local newforms. Let E/F be a quadratic unramified extension of non-Archimedean local fields of characteristic zero. We use x̄ to denote the non-trivial Galois automorphism applied to x ∈ E. Let o, O be the rings of integers of the two fields F, E respectively. Let p, P be their maximal ideals, and q_F, q_E the cardinalities of their residue fields. Let J and U be as in (1.2); the group U is a rank 3 unitary group defined over F. There is a unique isomorphism class of genuine unitary groups over F [20, Section 1.9], so we are content to work with the above explicit model for this single isomorphism class of unitary groups. Consider the family of compact open subgroups Γ_n of U, defined for all n ≥ 0. We call these the Miyauchi subgroups, as defined in [17]. These are of particular interest because of the theory of local newforms established by Miyauchi [16], relating the notions of conductors and newforms to the L- and ε-factors attached to representations of U, as summarized in the theorem below. Choose a non-trivial unramified additive character ψ_E of E and a non-trivial additive character ψ_F of F. For an irreducible admissible generic representation π of U, let W(π, ψ_E) be its Whittaker model with respect to ψ_E. Gelbart, Piatetski-Shapiro [7] and Baruch [1] defined the L-factor L(s, π) associated to an irreducible generic representation π of U as the greatest common divisor of a family of explicit zeta integrals Z(s, W, Φ), where W ∈ W(π, ψ_E) and Φ is a function in C_c^∞(F^2). The ε-factor ε(s, π, ψ_E, ψ_F) is then defined as the quantity appearing in the functional equation satisfied by L(s, π). These fundamental quantities in the theory of automorphic forms on U can be characterized in a more explicit and computational way via the sequence of Miyauchi subgroups.

Theorem 1.1 (Miyauchi). Assume the residue characteristic of F is odd. Let π be an irreducible generic representation of U. For any n ≥ 0, let V(n) be the subspace of π fixed by Γ_n. Then (i) there exists n ≥ 0 such that V(n) is non-zero; (ii) if n_π is the least such integer, then V(n_π) is one-dimensional and its non-zero elements are called the newforms for π; (iii) if v is the newform for π such that the corresponding Whittaker function W_v satisfies W_v(I_3) = 1, then the corresponding Gelbart-Piatetski-Shapiro zeta integral Z(s, W_v, 1_{p^{n_π} ⊕ o}) matches the L-factor L(s, π); (iv) if ψ_F has conductor o, then the ε-factor of π satisfies the stated identity.

Since the subgroups Γ_n characterize the conductor by point (iv) of Miyauchi's theorem, their volumes should appear as invariants in many problems involving automorphic forms on unitary groups of rank 3. Theorem 1.2 is the analogue of, e.g.,
[15, Theorem 4.2.5], which is a central fact in the theory of automorphic forms, appealed to for many purposes: establishing the dimension of the space of automorphic cusp forms of a given level [15], sieving out the oldforms and the spectral multiplicities in arithmetic-statistics questions [9], or motivating normalizations in spectral theory and trace formulas [11], for instance. In a more representation-theoretic framework, the analogous result to Theorem 1.1 was established by Casselman [3] and Deligne [4] in the case of GL_2 and by Jacquet, Piatetski-Shapiro and Shalika [10] in the case of GL_m. More precisely, consider the classical congruence subgroups. These are of particular interest because of the following result, relating the theory of classical L-functions to the vectors fixed by these congruence subgroups, analogously to Theorem 1.1. Let ψ be a non-trivial unramified additive character of F, viewed as a character of the upper-triangular Borel subgroup of GL_m(F) as in [10, Eq. (2)]. For an irreducible admissible generic representation π of GL_m(F), let W(π, ψ) be its Whittaker model with respect to ψ.

Theorem 1.3. (i) There exists n ≥ 0 such that the subspace V(n) of vectors fixed by the n-th congruence subgroup is non-zero; (ii) if n_π is the least such integer, then V(n_π) is one-dimensional and its elements are called the newforms for π; (iii) the ε-factor ε(s, π, ψ) is q_F^{−n_π(s−1/2)} times a quantity independent of s.

Since these congruence subgroups characterize the conductor by point (iii) of the above theorem, their volumes govern the sizes of families of automorphic forms on GL_m. The indices (1.6) are therefore necessary to compute the main term in the automorphic Weyl law in the analytic-conductor aspect for GL_2 and its inner forms [2,13], where the characterization of the conductor in terms of congruence subgroups is used in a critical way. The strong similarity between Theorem 1.1 and Theorem 1.3, as well as the fundamental role played by the volumes of the subgroups Γ̃_n in the GL_m case, suggest that the volumes of the Miyauchi subgroups Γ_n will play a central role in the analytic theory of U. In particular, we expect that these volumes will appear in the main term of the automorphic Weyl law for rank 3 unitary groups; see [12, Chapter 5].

Outline of the proof. We follow the strategy of Roberts and Schmidt [21, Lemma 3.3.3], but encounter some new difficulties owing to the fact that U is non-split. The proof has several steps.

1. Consider the subgroups A_n of Γ_n, defined for any n ≥ 0, which are the analogues of the Klingen subgroups introduced by Roberts and Schmidt [21, Equation (2.5)] in the GSp(4) setting. Lemma 3.2 gives the decomposition of Γ_n into A_n-cosets in terms of the group of trace-zero elements of O/P^n. The sizes of these groups can then be computed using group cohomology and the integral normal basis theorem for unramified extensions of local fields (see Lemma 2.6). In particular, Lemma 3.2 reduces the computation of the volume of Γ_n to the calculation of the indices [Γ_0 : A_n].

2. Consider the subgroups B_n of Γ_n, and note that B_n contains A_n. Lemma 4.1 provides a short exact sequence in which E^1_{P^n} is the set of elements x in O with xx̄ = 1, reduced mod P^n. The cardinality of E^1_{P^n} may be calculated using Hilbert's theorem 90 (see Lemma 2.5).

3. Calculate the index of B_n in B_1. This is the content of Lemma 5.1 and uses the Iwahori factorization of reductive groups to establish a recursive relation for the index of B_n in B_{n−1}.
4. Finally, the index of B_1 in Γ_0 is computed in Lemma 6.3 by appealing to the Bruhat decomposition of the reduction modulo P of B_1.

Altogether, these results compile into Theorem 1.2. Throughout the proofs we may assume n ≥ 1, since we normalize Γ_0 to have measure one.

Comments on ramification. Even though Miyauchi's theory of newforms has only been established for unitary groups attached to unramified extensions E/F, the subgroups Γ_n can be defined without this assumption, and it would be natural to want to extend the results of this paper to allow for ramification. We discuss here some modifications it would induce. It seems reasonable to expect that the proof of Theorem 1.2 given in this paper could be adapted to allow for tame ramification in E/F; the generalization would force us to adapt most of the group cohomology results in Section 2. On the other hand, allowing E/F to be wildly ramified seems to be more difficult. The hypothesis that E/F is at most tamely ramified is used in Lemma 2.1 and also in the proofs of Lemmas 5.1 and 6.3 to compute the cardinalities of certain unipotent subgroups. Underlying these is the existence of a normal o-basis of O, which according to Noether's integral normal basis theorem happens if and only if E/F is at most tamely ramified. Since these generalizations would increase the length of the paper and it is not clear that the Miyauchi subgroups take the same shape for ramified extensions, we leave aside the question of ramification for the remainder of this work.

Let ϖ be a uniformizer of p. Since E/F is unramified, ϖ is also a uniformizer of P. Let E_{P^n} = O/P^n. Corollary 2.2 also descends to the finite additive rings E_{P^n}, as the next lemma shows.

Lemma 2.3. We have H^m(G, E_{P^n}) = 0 for all m, n ≥ 1.

Proof. We have a short exact sequence; taking the long exact sequence in cohomology, we find, for all m, n ≥ 1, that H^m(G, E_{P^n}) sits between two terms which vanish by Corollary 2.2. Hence H^m(G, E_{P^n}) = 0 for all m, n ≥ 1.

We also consider finite multiplicative groups. Let E^×_{P^n} = (O/P^n)^×.

Lemma 2.4. We have H^1(G, E^×_{P^n}) = 0 for all n ≥ 1.

Proof. The case n = 1 is just Hilbert's theorem 90: since E/F is unramified, G acts on the field E_P by Galois automorphisms, and so H^1(G, E^×_P) = 0. Assume henceforth that n ≥ 2. Hilbert's theorem 90 no longer applies, since E_{P^n} is not a field, but we can still show that H^1(G, E^×_{P^n}) = 0 as follows. Considering the appropriate short exact sequence and taking the long exact sequence in cohomology, we obtain (2.1). We would like to show that H^2(G, 1 + ϖ^n O) = 0. For any m ≥ 1, we consider the exact sequence relating the filtration steps; by the long exact sequence in cohomology, we obtain the required vanishing. Taking the long exact sequence in cohomology, applying Hilbert's theorem 90, and noting that the valuation map is surjective since E/F is unramified, we obtain H^1(G, O^×) = 0, and thus H^1(G, E^×_{P^n}) = 0 from (2.1).

Let E^0_{P^n} be the subgroup of E_{P^n} of trace-zero elements, that is, E^0_{P^n} = {x ∈ E_{P^n} : x + x̄ = 0}. Likewise, write E^1_{P^n} for the subgroup of E^×_{P^n} consisting of norm-one elements, i.e. E^1_{P^n} = {x ∈ E^×_{P^n} : xx̄ = 1}. Our intended application of the above group cohomology lemmas is to calculate the cardinalities of the finite groups E^0_{P^n} and E^1_{P^n}. For any G-module M, we write Z^1(G, M) for the group of 1-cocycles and B^1(G, M) for the set of 1-coboundaries.

Proof. Let σ denote the non-identity element of G. We have (2.4), where the map is given by ξ → ξ(σ).
We have an exact sequence in which f is given by the composition of x → x/x̄ and (2.4). The image of f is by definition the displayed set, where the map is given by ξ → ξ(σ). Now consider the exact sequence in which the map f is given by the composition of u → u − ū and (2.7). Since E/F is unramified, we have q_E = q_F^2. We have |E_{P^n}| = q_E^n and |o/(P^n ∩ o)| = q_F^n (again by the unramified hypothesis). Therefore, we have shown that |E^0_{P^n}| = q_E^{n/2} = q_F^n.

For every n ≥ 0 and k ≥ 0, let C^(k)_n be defined as in the display.

Lemma 3.1. For E/F unramified, we have the stated coset decomposition, where every Greek letter denotes a generic element of O except for the uniformizer ϖ. By calculating the determinant of the matrix (3.5), we find that det(g) ≡ αι mod P^k. Since det(g) ∈ O^×, if k ≥ 1 then we have (3.6). For such u and any 0 ≤ k < n, we claim that (3.7) holds, where ι and γ on the right side of (3.7) refer to entries of g ∈ C^(k)_n as in (3.5). We first show ⊆. Suppose g ∈ C^(k+1)_n is written as in (3.5), but with k + 1 in the upper-right entry in place of k. Then ι ∈ O^× by (3.6) for any k ≥ 0, since k + 1 ≥ 1. Calculating, to show that t_{n−k}(u)g lies in the right-hand side of (3.7) it suffices to show that ι ∈ O^× and u − ι^{−1}(γ + uι) ∈ P. The first of these is (3.6), and the second is clear after expanding out. Now we show ⊇. Suppose g ∈ C^(k)_n is written as in (3.5), ι ∈ O^×, and u ∈ O is such that u + ū = 0 and u − ι^{−1}γ ∈ P. Then γ − ιu ∈ P by hypothesis, so that t_{n−k}(−u)g ∈ C^(k+1)_n. The claim (3.7) follows. Note that if k ≥ 1, then the hypothesis ι ∈ O^× can be omitted from the right-hand side of (3.7), because it follows automatically from (3.6). Therefore, under the hypothesis that k ≥ 1, we obtain (3.8), and the result follows from Lemma 2.6.

It remains to treat the case k = 0. Under this hypothesis, we claim (3.9). To see ⊆, write g ∈ C^(1)_n in the form (3.5) and calculate. To see ⊇, write g ∈ C^(0)_n in the form (3.5) and recall the hypothesis ι ∈ P. Note that σ_n^2 = 1, and calculate; from this we see that ιϖ^{−n} ∈ P^{1−n}, as desired. Combining (3.7) and (3.9), the result again follows from Lemma 2.6. Since Γ_n = C
Since J t g −1 J = J t n −1 − JJ t t −1 JJ t n −1 + J, we derive for any n 1 that For n 1, this leads to the decomposition ⎛ where the condition β +β + αᾱ n = 0 on α, β ∈ E P is understood to mean that we sum over those cosets for which there exists a representative in O satisfying the indicated equation. Given α, β as above, it is clear by reducing the condition modulo P that β ∈ E 0 P . Conversely, given α * ∈ E P and β * ∈ E 0 P , there exists a representative β ∈ O for β * such that β +β = 0. Let α be any choice of representative for α * . Then αᾱ n ∈ p. Since E/F is at most tamely ramified, we have as a consequence of the integral normal basis theorem (see e.g. [6, §5 Theorem 2]) that Tr P = p, and so there exists z ∈ P such that αᾱ n = z +z. Then β − z is another representative for β * ∈ E 0 P , and (β − z) + (β − z) + αᾱ n = 0. Thus, the decomposition (5.3) may be re-written as We have |E P | = q E = q 2 F and |E 0 P | = q F by Lemma 2.6, so this finishes the proof by induction. 6. Index of B 1 . We use reduction modulo P and the Bruhat decomposition, and recall now some useful lemmas to do so. Since φ is surjective, for any representative of g ∈ G/H, we may choose a preimage g ∈ φ −1 (g). Then φ −1 (gH) = g φ −1 (H) and the lemma follows. We apply the Bruhat decomposition for U , following [5,Chapter 4]. Consider the group GL 3 over F p and the involution τ defined by g → J t g −1 J, where g denotes the Frobenius automorphism applied to g, i.e. the entries of g raised to the pth power. The fixed points of GL 3 by this involution are exactly the group U (cf. [5,Example 4.3.3]). Let T be the standard maximal torus of GL 3 , N = N (T ) its normalizer, and W = W (T ) the associated Weyl group. Then we have a Bruhat decomposition of U of the form Thus W τ consists of two elements, which may be represented by I and J (see (1.2)). Now we calculate U τ w for w = I and w = J. We first determine the sets {α ∈ Φ + : w(α) < 0}. Let U ij , 1 i = j 3, denote the root space of GL 3 that is non-zero in the ijth entry. Let α ij denote the corresponding root (see [5,Theorem 2.3.1(i)]). The set Φ + of positive roots corresponding to our choice of the standard upper triangular Borel is Φ + = {α 12 , α 13 , α 23 }. The Weyl group W acts on T by conjugation, and thus on Φ. Clearly, I fixes Φ, and one may compute that J(α 12 ) = α 32 , J(α 13 ) = α 31 , and J(α 23 ) = α 21 . Thus, {α ∈ Φ + : I(α) < 0} = ∅ and {α ∈ Φ + : J(α) < 0} = Φ + . Therefore we have This last group of unipotent matrices is in bijection with the set ∪ α∈O/P N α , where N α = {β ∈ O/P : β +β + αᾱ = 0} (cf. (5.4)). Recall (e.g. [6, §5 Theorem 2]) that Tr O = o since E/F is at most tamely ramified. Thus, there exists θ ∈ O/P such that θ+θ = −αᾱ. We then find that N α = θ+E 0 P , which is of cardinality q F by Lemma 2.6. Thus, the cardinality of the group of unipotent matrices in question is then q E q F = q 3 F . We conclude that [U : B] = q 3 F + 1, hence the lemma follows. We now collect the previous results. By Lemmas
2019-12-26T19:39:03.000Z
2019-12-26T00:00:00.000
{ "year": 2022, "sha1": "1ee073aec12f94a806ea7003cd1cf72c0d90019b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00013-022-01746-w.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0275ace7f019cfe67fe16c2d8bf2a26338f45f54", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Mathematics" ] }
85464073
pes2o/s2orc
v3-fos-license
Tariffs and Politics: Evidence from Trump's Trade Wars

We use the recent trade escalation between the US, China, the European Union (EU), Canada and Mexico to study whether retaliatory tariffs are politically targeted. Using aggregate and individual-level data, we find evidence that the retaliatory tariffs disproportionately targeted areas that swung to Trump in 2016, but not those that swung to other Republican candidates. We propose a novel simulation approach to construct counterfactual retaliation responses. This allows us both to quantify the extent of political targeting and to assess its general feasibility. Further, the counterfactual retaliation responses allow us to shed light on the potential trade-offs between achieving a high degree of political targeting and managing the risks to one's own economy. China, while constrained in its retaliation design, appears to put a large weight on achieving maximal political targeting. The EU seems successful in maximizing the degree of political targeting, while at the same time minimizing the potential damage to its own economy and consumers.

Introduction

Work by political scientists and economists suggests that a common factor linking the election of Donald Trump, the UK's Brexit vote and the wider populist surge in Western Europe may be a long-delayed political backlash against globalization (see Autor et al., 2016; Colantone and Stanig, 2018a,b). It is thus not surprising that US trade policy would see marked shifts under a Trump administration. Nevertheless, the announcement on March 1, 2018 that the US would impose a 25% tariff on steel and a 10% tariff on aluminium imports still came as a surprise. Initially exempt, Canada, Mexico and the EU became subject to the steel and aluminium tariffs from May 31, 2018. Additionally, on July 6 the Trump administration set a tariff of 25% on 818 categories of goods imported from China worth $50 billion. Following the announcement of tariffs, President Donald Trump asserted that "Trade wars are good, and easy to win." Despite this assertion, the dispute involving China, the European Union (EU), Canada and Mexico escalated, with reciprocal tariffs targeting imports from the US. As only few of the 316 GATT disputes that the WTO (2018) lists from 1948 to 1995 reached this stage of escalation, in which threatened tariffs are actually imposed and retaliation is triggered, we know little about how trade disputes are actually fought. In this paper, we use the recent trade escalation to study how trading partners engage the US in this trade dispute. In particular, this paper tackles two interrelated questions using both aggregate and individual-level data. In the first part, we ask whether the retaliatory tariffs are designed to target Trump's voter base. The large literature on pork barrel spending (e.g. Levitt and Snyder, 1995; Bickers and Stein, 1996; Canes-Wrone and Shotts, 2004; Berry et al., 2010; Berry and Fowler, 2016) suggests that politicians see value in spending money to gain the support of their voter base. By the same logic, negative shocks hitting Donald Trump's voter base may generate political pressure to remove tariffs and deter future protectionism. While the question of whether adverse economic shocks can produce political effects is a field of active and extensive research, findings so far suggest that the effects are highly heterogeneous and context-dependent (see Margalit, 2019 for an excellent review).
Margalit (2011) finds a distinct anti-incumbent effect of trade-related job losses vis-a-vis other types of economic shocks, while Scheve and Slaughter (2004) suggest that trade integration increases perceived insecurity. Feigenbaum and Hall (2015) show that politicians from districts most exposed to the "China shock" became more protectionist. This lends credence to the idea that the economic effects of retaliatory tariffs can shift political support. Yet a likely necessary condition is that retaliation is sufficiently targeted (see Kavaklı et al., 2017; Marinov, 2005). In the second part, we investigate both the feasibility of political targeting and the extent to which countries manage the harm that retaliation may inflict on their own economy. In the context of trade disputes, the structure of trade between countries may impose an important constraint (see Kavaklı et al., 2017). Given the large literature on the welfare-enhancing effects of trade (e.g. Frankel and Romer, 1999; Baldwin, 2004), it is widely accepted that tariffs, while able to help certain individual industries, are not only harmful for trading partners but also constitute an act of self-harm (Bown, 2004). As such, the design of retaliation may reflect not only the desire to induce economic and political pressure, but also domestic political and economic considerations (Davis, 2004; Barari et al., 2019). Our findings suggest that political targeting plays an important role in retaliation design. To assess the degree of political targeting, we construct a county-specific exposure measure similar to Autor et al. (2016) and Colantone and Stanig (2018a,b). Based on this exposure measure, we find that retaliatory tariffs target areas that swung to Trump in the 2016 presidential election. In contrast, areas that swung behind other Republican candidates in the House or Senate elections held on the same day were not targets of retaliatory tariffs. Using individual-level opinion polling data, we show that even among self-identified Republicans, retaliation appears distinctly targeted towards areas in which Republicans favored Donald Trump over other Republican contestants for the 2016 presidential nomination. Further, we document that the degree of political targeting appears to pick up a distinct shift in the geographic pattern of Republican party affiliation, but only after Donald Trump entered the 2016 presidential race in 2015. To assess both the feasibility and the degree of political targeting that retaliating countries could have achieved, we propose a novel simulation approach constructing counterfactual retaliation responses. This simulation approach further allows us to shed light on which other considerations are likely to play a role in the design of retaliation responses. For the EU, for example, it is known that policy preferences and national politics may impact the bloc's stance in international negotiations (Kleine and Minaudier, 2019; Wratil, 2019; Meunier, 2000). In the context of trade disputes, the EU transparently states its objectives of retaliation: retaliation should induce compliance of the US with trade rules, while mitigating harm to EU consumers and firms. Motivated by this example, we construct reduced-form measures proxying the potential for domestic harm of retaliation. In our analysis we compare the actual chosen retaliation response to the counterfactual baskets.
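The paper's exact simulation algorithm is not spelled out in this excerpt; the sketch below is one plausible implementation of the counterfactual-basket idea under stated assumptions. It repeatedly draws random product baskets whose total trade value matches the actual retaliation list, and compares a political-targeting score across draws. All column names and the scoring variable are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical product-level table: US exports by HS product, with a
# precomputed "political targeting" score (e.g. share of affected exports
# originating in counties that swung to Trump). Names are illustrative.
products = pd.DataFrame({
    "hs8": range(1000),
    "us_exports": rng.lognormal(3, 1.5, 1000),
    "swing_share": rng.uniform(0, 1, 1000),
})

def draw_basket(target_value, tol=0.05):
    """Randomly add products until the basket's trade value matches the
    actual retaliation list's value within a tolerance."""
    order = rng.permutation(len(products))
    total, chosen = 0.0, []
    for idx in order:
        row = products.iloc[idx]
        if total + row.us_exports > target_value * (1 + tol):
            continue
        chosen.append(idx)
        total += row.us_exports
        if total >= target_value * (1 - tol):
            break
    return products.iloc[chosen]

actual_value = 7.6e3  # value of the actual list, in the same (toy) units

# Distribution of political targeting across counterfactual baskets
scores = []
for _ in range(2000):
    b = draw_basket(actual_value)
    scores.append(np.average(b.swing_share, weights=b.us_exports))
print(np.percentile(scores, [5, 50, 95]))
# The actual basket's score can then be ranked against this distribution.
```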
The results confirm that, in line with its objectives, the EU achieved a high degree of political targeting while ensuring that the US is not the dominant supplier of the targeted products. For the Chinese tariffs we find that, due to the feasibility constraints imposed by the structure of China-US trade, any retaliation response that produces some degree of political targeting requires the imposition of tariffs on goods for which the US is a dominant supplier. This suggests that Chinese retaliation may be particularly harmful for Chinese consumers. Our results contribute to the literature on the politics of protectionist trade policies. The imposition of domestic tariffs has been attributed to the influence of interest groups (Grossman and Helpman, 1994), people's inequity aversion (Lü et al., 2012), the importance of tariffs as a source of revenue (Hansen, 1990), and the structure of consumer tastes (Baker, 2005), along with relative factor endowments (Scheve and Slaughter, 2001). Existing research further suggests that democracies are more likely to lower tariff barriers, but are more likely to protect their agricultural sectors and make use of non-tariff barriers (NTBs) (e.g. Kono, 2006; Barari et al., 2019). Cameron and Schuyler (2007) investigate the determinants of protectionism in the agricultural sector. In closely related work, Gawande and Hansen (1999) investigate the deterrence effect of NTBs and how retaliatory NTBs can be used to reduce foreign trade barriers. Our findings shed light on how other countries react to protectionism and the US's aggressive trade policy.

Context and Data

The international trading system after the Second World War was first institutionalized through the General Agreement on Tariffs and Trade (GATT) in 1948. It was a direct result of the failings of the international trade system during the Great Depression. In 1930, the Smoot-Hawley Act increased tariffs on more than 20,000 products imported by the US. This set off tit-for-tat retaliation. Irwin (1998) estimates that nearly a quarter of the observed 40% decline in imports can be attributed to the rise in the US tariff, which thereby contributed to the lengthening of the Great Depression. Through multiple GATT rounds from 1948 onwards, average tariff rates were reduced significantly. One of the most important features of the international trading system, which is now regulated by the WTO (the successor organisation to the GATT, established in 1995), is a formal Dispute Resolution System. In principle, governments are still able to restrict trade to foster non-economic social policy objectives, to ensure "fair competition", or to support preferential treatment of developing countries, regional free trade areas and customs unions. But measures of this kind are subject to scrutiny, should adhere to the broad principles of the WTO, and can be contested by WTO member countries by invoking the WTO's Dispute Resolution mechanism. Rosendorff (2005) and Sattler et al. (2014) provide evidence that the WTO's Dispute Resolution mechanism helps to enforce stable trade relationships. The Dispute Resolution mechanism also regulates the imposition of retaliation measures.

Retaliatory Tariffs as a Political Tool

The most recent precedent in which the international trading system came close to a similar escalation were the steel tariffs imposed by President George W. Bush. While this does not prove that the threat of retaliation was the reason why the tariffs were abandoned, it does suggest that it may have played a role.
The European Commission stands out in terms of transparently stating the objectives it aims to achieve in the context of trade disputes (see Baccini, 2010; Stasavage, 2004). Among the criteria that EU rules list for the selection of commercial policy measures is: "(c) availability of alternative sources of supply for the goods or services concerned, in order to avoid or minimise any negative impact on downstream industries, contracting authorities or entities, or final consumers within the Union". In other words, trade policy should aim to change the trade policy of the opposing country, while minimizing harm to the EU's own economy. To design the retaliation response, the European Commission is known to use an algorithm to select products against which retaliatory tariffs are targeted. This algorithm is naturally a closely guarded secret. (One of the authors of this paper had a conversation with an anonymous senior EU Commission source, who referred to the algorithm as the EU's "weapon of war" in the context of the trade dispute, indicating why it is so closely guarded.) The Chinese government does not publish its policy objectives in the trade dispute, but there is evidence that it also tries to target its tariffs against the electoral base of Donald Trump and the Republican party. For example, the Chinese as well as the EU's retaliation targeted bourbon whiskey produced in Kentucky. These examples suggest that the design of retaliatory tariffs shares some similarities with political sanctions. The growing literature on sanctions (see, for example, Elliott and Hufbauer, 1999; Eaton and Engers, 1992; Ahn and Ludema, 2017) understands sanctions as a political tool to induce compliance. In a closely related paper, Kavaklı et al. (2017) find that comparative advantage in exports and domestic production capabilities determine a country's ability to maximize the economic impact while minimizing the domestic costs of sanctions. In this literature, Dashti-Gibson et al. (1997) study the success factors of economic sanctions, while Marinov (2005) and Allen (2008) provide evidence that sanctions increase the probability of leadership change. In other related work, Draca et al. (2018) show that US sanctions against Iran do indeed target politically connected firms and actors. (Whether sanctions are effective in inducing compliance is a different question: Grossman et al. (2018) find that the EU's labelling of products from the West Bank produced, in the relative short term, a backlash in Israel and increased support for hardline policies. Similarly, Peeva (2019) suggests that sanctions against Putin following the Crimea annexation actually backfired and helped Putin's approval ratings.) In contrast, the political dimension of tariffs has so far been widely ignored. In our analysis, we investigate to what degree the retaliating countries systematically politically targeted their retaliation. For our analysis, we construct a measure of exposure to retaliatory tariffs for each US county, which we discuss next.

Descriptives of the retaliation measures

The retaliation measures against the US tariffs take the form of a list of products with descriptions and (typically) the Harmonized System (HS) code, along with an (additional) tariff rate to be imposed on imports of these goods stemming from the US. These lists are prepared through a consultative process in the case of the EU and Canada. They are lodged and registered with the WTO and there is typically a delay prior to the tariffs being implemented. For our analysis, we have obtained
retaliatory tariff lists from the EU, China, Mexico and Canada. We do not analyze the retaliation of other countries such as India and Turkey, as the overall trade volume, and therefore the retaliation, is far smaller. Appendix Figure A2 visualizes the distribution of the retaliation measures across coarse economic sectors. The figure suggests that manufacturing sector outputs, as well as agricultural commodities, were significant features of the retaliation lists. We next describe how we use the retaliation lists to construct a county's exposure to tariffs.

Measuring exposure to retaliation

We use two data sources to construct a county-level measure of exposure to retaliation measures. First, we use data from the Brookings (2017) Export Monitor. These data contain a measure of county-level exports across a set of 131 NAICS industries. (The data draw on a host of sources, including US goods trade data, service-sector export data from the Bureau of Economic Analysis (BEA), Internal Revenue Service (IRS) data for royalties, Moody's Analytics production estimates at the county level, and foreign students' expenditures from NAFSA. More details on Brookings (2017) can be found at https://www.brookings.edu/research/export-nation-2017/.) We denote by X_{c,i} the exports of industry i for each county c. The data also provide an estimate of total exports at the county level and the number of export-dependent jobs. The latter will be used to weight the regressions. Secondly, we use the individual retaliation lists L_r for r ∈ {EU, MX, CA, CN}. These are matched at the 8-digit HS level to the US trade data using export volume. (While the product codes are technically provided at the 10-digit level, the matching results are best at the 8-digit HS level due to slight discrepancies in the coding standard across countries; this introduces only a small amount of inconsequential noise.) To validate the matching, we compare the overall trade volume affected by tariffs with the official WTO submissions. For this exercise, we make use of HS-level U.S. import and export data from the U.S. Census Bureau (available at https://usatrade.census.gov/). In the case of the EU, the retaliation measures officially target trade worth USD 7.2 billion. Matching the EU list to the US trade data for 2017, we find that US exports worth USD 7.6 billion are affected by retaliation, suggesting that the overall magnitude is similar. To link the targeted exports to the different six-digit NAICS sectors that produce the goods (HS10 codes), we use the concordances between HS codes and NAICS/SIC codes from Schott (2008). These concordances map up to 10-digit commodity codes in the Harmonized System to SIC and NAICS codes. This allows us to merge the tariff lists with the employment data. In case multiple sectors are linked to an HS10 code, we retain the NAICS sector listed first in the concordance. As an illustration, consider the example of the EU's rebalancing measures, which include the item "10059000 Maize (excluding seeds for growing)". This HS code is mapped to the NAICS industry 111150, which stands for "Corn Farming". This procedure results in a list of tariff-exposed industries. Next, we collapse the total volume of affected trade to the four-digit industry level. This gives us a measure of exports T_{i,r} affected by retaliatory tariffs of country r for each four-digit industry i. We break this measure down to the county level using X_{c,i}, the amount of production of industry i in county c as measured by the Export Monitor data. In other words, the total export volume affected by tariffs is allocated to counties using each county's share of industry i's overall exports. The final exposure measure τ_{c,r} for county c and retaliation list r is given as:

τ_{c,r} = (1 / X_c) ∑_i (X_{c,i} / X_i) T_{i,r},  with X_i = ∑_c X_{c,i} and X_c = ∑_i X_{c,i}.

This measure approximates the exposure of counties to the retaliation measures of each retaliating country r. The measure is bounded between 0 and 1. If all industries in a county are unaffected by tariffs, the measure is 0. If the entire production of a good subject to retaliation were to take place in a single county, and if that county were to export only this good, the exposure measure would be 1.
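To make the construction concrete, the following is a minimal sketch in pandas, not the authors' code: all file and column names (retaliation_lists.csv, hs8, naics4, X_ci, and so on) are hypothetical placeholders for the retaliation lists, the Schott (2008) concordance, the 2017 US export data and the Export Monitor county-by-industry exports.

import pandas as pd

# Hypothetical inputs; names are illustrative, not from the paper.
retaliation = pd.read_csv("retaliation_lists.csv")   # country, hs8
us_exports  = pd.read_csv("us_exports_2017.csv")     # hs8, exports (2017 USD)
hs_naics    = pd.read_csv("schott_concordance.csv")  # hs8, naics4 (first-listed sector)
county_x    = pd.read_csv("export_monitor.csv")      # county, naics4, X_ci

# T_{i,r}: targeted 2017 export volume, collapsed to 4-digit NAICS industries.
affected = (retaliation.merge(us_exports, on="hs8")
                       .merge(hs_naics, on="hs8")
                       .groupby(["country", "naics4"], as_index=False)
                       .agg(T_ir=("exports", "sum")))

# County share of each industry's exports, X_{c,i} / X_i.
county_x["X_i"] = county_x.groupby("naics4")["X_ci"].transform("sum")
county_x["share_ci"] = county_x["X_ci"] / county_x["X_i"]

# tau_{c,r} = sum_i (X_{c,i}/X_i) * T_{i,r} / X_c, bounded in [0, 1]
# because T_{i,r} <= X_i for every industry i.
X_c = (county_x.groupby("county", as_index=False)["X_ci"].sum()
               .rename(columns={"X_ci": "X_c"}))
m = county_x.merge(affected, on="naics4")
m["affected_ci"] = m["share_ci"] * m["T_ir"]
tau = (m.groupby(["country", "county"], as_index=False)["affected_ci"].sum()
        .merge(X_c, on="county"))
tau["tau_cr"] = tau["affected_ci"] / tau["X_c"]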
Our approach is similar in spirit to Autor et al. (2013)-type labor market shocks. The main difference is that rather than constructing the measure based on sector-level employment figures, our measure is based on sector-level output figures. This should come closer to capturing the economic impact more broadly. As a robustness check, we consider an alternative exposure measure following Autor et al. (2013) and Kovak (2013), which uses the County Business Patterns (CBP) employment data to construct a county-level retaliation exposure based on sector-level employment shares. In Appendix Table A3 we show that results are similar when using this alternative measure. Since the added tariff rate was set at 25% for 85% of the products, our retaliation exposure measure ignores the actual added tariff rate. This also implies that the variation in our county-level exposure measure τ_{c,r} is driven by the choice of products and not by the choice of tariff rates. While this is only a small deviation from the actual data, it greatly simplifies the simulation of counterfactual retaliation baskets in Section 4. In Appendix Figure A1 we compare our exposure measure with the exposure measure that would result if we incorporated the actual added tariff rate. The two measures are statistically virtually identical.

Main political outcome measures

In the following, we describe the aggregate and individual-level data sources used to measure the extent of political targeting. In Appendix B we provide some auxiliary evidence, complementing related work, suggesting that retaliation was indeed effective in reducing US exports and led to a drop in export prices; that is, exposure to retaliation did produce an economic shock.

Aggregate election results

3 Was the retaliation politically targeted?

Descriptive evidence. We first provide descriptive evidence that counties with stronger support for the Republican party were more heavily targeted by tariffs (Figure 1). We estimate the following regression model:

y_{c,s} = α_s + β_r τ_{c,r} + ε_{c,s}  (1)

In this specification, y_{c,s} measures the vote share of the Republican party in county c in state s in 2016. As an alternative outcome we use Δy_{c,s}, the change in the Republican party vote share between the 2012 Presidential election and the 2016 Presidential election at the county level. τ_{c,r} is the county-level exposure measure for retaliatory tariff list r (for more details see Section 2.3). All regressions include state fixed effects; hence, we exploit within-state variation in retaliation exposure. Standard errors are clustered at the county level.
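A minimal sketch of how model (1) could be estimated with statsmodels follows; the data frame and column names are hypothetical, while the weighting by export-dependent jobs and the county-level clustering follow the description above.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-level frame; names are illustrative.
df = pd.read_csv("county_level.csv")  # rep_share_2016, tau_cr, state,
                                      # county, export_jobs

# Model (1): 2016 Republican vote share on retaliation exposure with
# state fixed effects, weighted by export-dependent jobs, standard
# errors clustered at the county level.
model = smf.wls("rep_share_2016 ~ tau_cr + C(state)",
                data=df, weights=df["export_jobs"])
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(fit.params["tau_cr"], fit.bse["tau_cr"])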
Results. The results from the estimation of model (1) are presented in Table 1. They suggest that counties more exposed to retaliatory tariffs had higher levels of support for Trump in the 2016 presidential election. Further, as indicated in Panel B, counties exposed to retaliation also saw larger swings in support from the 2012 Presidential election to the 2016 Presidential election. The point estimate in column (2) suggests that the counties most exposed to EU retaliation saw an average swing in the Republican candidate's vote share of 22 percentage points vis-a-vis counties not exposed to EU retaliation. As the retaliation exposure measures τ_{c,r} are bounded between zero and one, the coefficients are directly comparable. We find that the degree of political targeting is strongest for the EU's and Mexico's retaliation. We will revisit this result in our simulation study in Section 4. Before turning to the individual-level data, we next conduct further robustness checks for our baseline findings.

Robustness. We first explore whether the targeting was stronger for the presidential election than for the House and Senate elections held on the same day (Tuesday, November 8, 2016). The results of this exercise can be found in Appendix Table A1. Panel A explores Republican party vote shares. Throughout, there is a strong positive correlation; yet, we find no evidence for differences in targeting across election types. In Panel B we compare the changes in Republican candidate vote shares vis-a-vis the elections held in 2012 (for the Presidential and House elections). For the Senate election, we compare the change with the most recent prior Senate election (as only one third of Senators are up for election each time). In this specification it appears that the regression coefficient for retaliation exposure is markedly larger for the Presidential election but not for Republican candidates across other election types. This holds true despite the fact that voters could vote in all of these races on the same day in 2016. This provides some additional evidence that retaliation may have been targeted to hit areas that swung behind Trump in 2016. A potential rationale behind such a strategy could be that these voters may conceivably swing back (see Alesina and Rosenthal, 1995, 2006, or Scheve and Tomz, 1999 for work studying the dynamics of US presidential and midterm elections). In Appendix Table A2 we highlight that the correlation between retaliation exposure and (shifts in) support of Republican presidential candidates is distinctly stronger for the 2016 election. We investigate this observation further in the individual-level analysis. Our findings are similar when we use an alternative exposure measure based on sector-level employment shares inspired by Autor et al. (2013). Lastly, in Appendix Table A4 we show that our results are robust to the inclusion of additional control variables. (Note that here we focus on the combined retaliation exposure measure; the patterns are very similar when analyzing country by country.) First, we control for a county-level measure of the China shock used in Autor et al. (2013). This control is motivated by the work of Autor et al. (2016), who find that Trump performed better in counties that were more exposed to Chinese import competition. (Similar effects have been documented in the context of the UK and Western Europe more broadly (Colantone and Stanig, 2018a,b); Feigenbaum and Hall (2015) show that politicians from districts most exposed to the "China shock" became more protectionist.) In line with this result, we find that the estimated coefficient on the China shock is positive and significant. Yet, our retaliation exposure coefficient hardly changes. This is not surprising for two reasons. Naturally, a county's exposure to retaliation depends on the structure of trade between the US and the trading partner. Retaliation exposure is driven by US exports, while the China shock is based on US imports.
In addition, trade-dispute-induced retaliation can only produce economic shocks in the regions of the US in which the tradable-goods-producing sectors have survived the "China shock". We also control for the level of (and changes in) turnout in the 2016 presidential election. Guiso et al. (2018) suggest that the ability of populist candidates to affect turnout may be a key feature in understanding their success. Indeed, in Appendix Table A5 we document that places more exposed to retaliation saw, on average, lower levels of turnout. Yet, our observation that retaliation was politically targeted remains intact.

Cross-sectional individual-level data

We use repeated individual-level cross-sectional data from the Gallup Daily Tracker. This allows us to study the extent of support for Donald Trump using individual-level micro data, allowing us to control for a set of potential confounders. Further, we can exploit variation over time and draw comparisons to other Republican candidates.

Empirical specification. To leverage the individual-level data, we modify the above regression specification in the following way:

y_{i,c,t} = α_s + β_r τ_{c,r} + X_i' γ + ε_{i,c,t}  (2)

In this regression, y_{i,c,t} measures whether an individual i in county c in year t has a favorable view of Donald Trump as a candidate. In our analysis, we focus on the period from June 2015 to March 2016, prior to the election and prior to Donald Trump becoming the presumptive nominee. This allows us to compare the degree of targeting for other Republican candidates who were (still) in the race at the same time. The specification controls for state fixed effects α_s as well as a set of individual controls X_i. In particular, we control for the respondent's race across five categories, income across 12 categories, gender, and the year of the survey. In specifications where the dependent variable is not party affiliation, we also control for an indicator of whether a respondent describes themselves as Republican or leaning Republican. Since Republican party affiliation is observed consistently from 2012 onwards, we can further estimate a flexible difference-in-differences specification:

y_{i,c,t} = α_c + γ_t + Σ_t β_{r,t} (τ_{c,r} × 1[year = t]) + X_i' γ + ε_{i,c,t}  (3)

Since the regression contains county fixed effects α_c and time fixed effects γ_t, the coefficients β_{r,t} capture the differential changes in individuals' leaning towards the Republican party across our county-level measure of retaliation exposure. In other words, β_{r,t} picks up whether areas more exposed to retaliatory tariffs exhibited shifts in support for the Republican party relative to previous years. If retaliation was indeed targeted at counties with a Republican voter base that identifies with Donald Trump, we would expect the correlation between individual respondents' self-reported affinity towards the Republican party and the county-level retaliation exposure measure to increase with Trump's presidential run. Further, this analysis will also show whether there were changes in Republican support before Trump's campaign started. In this way, we can disentangle general shifts in political preferences or party affiliation from support for Trump as a candidate.
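The flexible specification (3) can be sketched as follows; the data frame and column names are hypothetical stand-ins for the Gallup microdata. The exposure-by-year interactions are built by hand relative to a 2015 base year so that the coefficient path β_{r,t} is directly readable; with many counties one would absorb the fixed effects rather than expand them as dummies.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Gallup-style microdata; names are illustrative.
g = pd.read_csv("gallup_tracker.csv")  # leans_rep, tau_cr, county, year,
                                       # race, income, gender, weight

# Exposure-by-year interactions relative to the 2015 base year.
base_year = 2015
years = sorted(y for y in g["year"].unique() if y != base_year)
for y in years:
    g[f"tau_x_{y}"] = g["tau_cr"] * (g["year"] == y).astype(float)

# County and year fixed effects absorb the main effects of tau_cr and year.
rhs = " + ".join(f"tau_x_{y}" for y in years)
model = smf.wls(f"leans_rep ~ {rhs} + C(county) + C(year) + C(race)"
                " + C(income) + C(gender)", data=g, weights=g["weight"])
fit = model.fit(cov_type="cluster", cov_kwds={"groups": g["county"]})
print(fit.params.filter(like="tau_x_"))  # the estimated path of beta_{r,t}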
Results. In Table 2 we present the corresponding estimates. A potential concern with these findings could be that the retaliation patterns simply capture geographic differences in Republican versus Democratic support. To highlight that retaliatory tariffs do indeed appear to target areas with strong support for Donald Trump, we analyze the period in which the Republican nomination was still open and included Donald Trump as a candidate (July 2015 until March 2016). We further focus on the subset of respondents who self-identify as Republican (≈ 23.4% of the sample). With this analysis we aim to capture whether retaliation exposure was distinctively targeted against areas that supported Trump rather than another Republican presidential candidate. The results are presented in Table 3: for the EU's and China's retaliation, exposure is positively associated with favorable views of Donald Trump, but not with favorable views of any of the other presidential candidates. For the Mexican and the Canadian retaliation, the correlation is also positive, but statistically insignificant. This finding suggests that retaliation was carefully chosen to target areas with Republican supporters with an affinity for Donald Trump. The specific targeting of Trump's voter base exhibits parallels to the targeting of politically connected firms by economic sanctions in Iran (Draca et al., 2018), both of which are likely to increase the pressure on the respective political leader. Lastly, in Figure 3 we present the estimated difference-in-differences coefficients from specification (3). The figure suggests that the correlation between a county's exposure to retaliation and individuals' leaning towards the Republican party becomes distinctly stronger from 2016 onwards. This suggests that retaliation was targeted against areas which increased their support for the Republican party relative to the 2015 baseline level. In other words, areas that were swayed to support Republicans during Trump's presidential run were more strongly targeted than areas that had always exhibited strong support for the Republican party. It is also worth noting that the trends prior to 2015 are flat. This suggests that our retaliation measure is not confounding other latent trends in the geography of Republican party affiliation that pre-date Donald Trump's candidacy. If we were simply picking up the trade-induced manufacturing sector decline (Autor et al., 2016), for example, these trends should be visible before the 2016 presidential election. We next explore a short individual-level panel highlighting that retaliation, especially from the EU and China, was targeted at areas of the US that saw sizable swings from supporting Obama in 2012 to supporting Trump in 2016.

Individual-level panel data

As an additional piece of evidence, we leverage the 2016 CCES study, which asked respondents if and for whom they voted in the 2012 and 2016 Presidential elections. The advantage of the CCES in comparison to the Gallup data is that it directly measures voting behavior instead of approval or party affiliation. In this way, the CCES data allow us to study whether individuals switched their party support vis-a-vis the 2012 election. We estimate regression specification (2). The set of individual-level controls X_i includes race, gender, age, income and political party affiliation. As we estimate the regression in first differences, we implicitly account for time-invariant individual-level characteristics (similar to individual fixed effects). In particular, we study the direction of the switch, i.e., whether retaliation was concentrated in counties with voters that swung from supporting Barack Obama to supporting Donald Trump.
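A sketch of this switch-voter analysis under the same caveats (hypothetical column names; the recalled-vote coding is purely illustrative):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical CCES-style frame; names and vote coding are illustrative.
cces = pd.read_csv("cces_2016.csv")  # vote_2012, vote_2016, tau_cr, state,
                                     # county, race, gender, age, income,
                                     # party_id, weight

# Outcome: voters who recall supporting Obama in 2012 and Trump in 2016.
cces["obama_to_trump"] = ((cces["vote_2012"] == "Obama") &
                          (cces["vote_2016"] == "Trump")).astype(int)

model = smf.wls("obama_to_trump ~ tau_cr + C(state) + C(race) + C(gender)"
                " + age + C(income) + C(party_id)",
                data=cces, weights=cces["weight"])
fit = model.fit(cov_type="cluster", cov_kwds={"groups": cces["county"]})
print(fit.params["tau_cr"])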
We present the results from this analysis in Table 4. The estimates are statistically significant for the EU and the Chinese retaliation exposure measures. The point estimates suggest that in the counties most targeted by EU retaliation, the likelihood that an individual voter is a swing voter who switched from supporting Obama to Trump is 7.6 percentage points higher. For counties exposed to Chinese retaliation at the same level, the likelihood is 3.8 percentage points higher. In Appendix Table A6 we confirm the results for the subset of voters whose voting status has been validated based on official voter lists. The patterns remain broadly the same, even though we lose some statistical precision. Taken together, the results suggest that retaliation appears to have been chosen to target counties in which Trump had a particular appeal and voters increased their support for the Republican party. The patterns documented across three different data sources are remarkably consistent. Additionally, the fact that the Trump administration provided billions of dollars in farm aid packages (see for example NYT, 2018), in line with the findings of Levitt and Snyder (1995) on the political relevance of federal outlays, suggests that the effect of retaliatory tariffs was felt in the targeted areas. In Appendix B we provide auxiliary evidence for the economic consequences of the tariffs. A remaining concern is that the underlying patterns could be spurious in a fashion that cannot be accounted for with individual-level or other county-level control variables. Specifically, one might worry that the specific mix of products that countries purchase from the US may mechanically constrain the structure of any retaliation response. To address this concern, we exploit the fact that for the initial wave of tariffs, which we study in this paper, the constraints on the retaliation response are quite well defined. This allows us to construct counterfactual baskets countries could have chosen and to evaluate the degree of political targeting against these counterfactuals. These counterfactual baskets additionally allow us to investigate other constraints on the retaliation response.

4 Counterfactual retaliation baskets

Is the observed targeting of Republican counties a mere artefact of the US export mix with specific trading partners, and do trading partners face trade-offs due to domestic constraints? In this section we attempt to answer these questions by proposing a simulation approach that exploits retaliation design constraints to construct feasible counterfactual retaliation baskets.

Retaliation design constraints. In our simulations, we leverage the fact that trading rules impose constraints on the design of retaliation (or, more formally, rebalancing measures). The key constraint is that the applied retaliatory tariffs should be commensurate with the US tariffs. For example, the tariffs imposed by the US on steel and aluminium affected around USD 7.2 billion of EU exports to the US, with an expected added overall tariff revenue volume of USD 1.6 billion. To comply with WTO rules, the EU's expected tariff revenues from the retaliation should not exceed this amount.
Our aim therefore is to identify a vector of products i among all traded HS goods categories for which there are non-zero imports, M_{i,r} > 0, into retaliating country r, along with a vector of non-zero tariff rates t_{i,r} > 0 to be applied, such that the combined expected tariff revenues ∑_{i∈S_r} t_{i,r} M_{i,r} are less than the expected tariff revenues T_r that the US levies on imports from country r. As previously discussed, the choice of the tariff rates is secondary for the retaliation wave we study: for 85% of the product classes included in the actual retaliation, the added tariff rate was fixed at 25%. For the counterfactual construction we therefore ignore the choice of the added tariff rate t_{i,r}, implicitly assuming a fixed rate t. With a fixed tariff rate, the above problem becomes a subset sum problem (see Figure A3). This potentially leaves a combinatorially vast set of product combinations for which the combined affected imports from the US are approximately equal to the trade affected by the US tariffs. To overcome this challenge, we use a probabilistic simulation approach to identify a set of alternative baskets.

Simulation approach. In particular, we use the following sampling procedure for each country's retaliation list L*_r:

While fewer than 1000 alternative retaliation baskets L_{i,r} have been found:
1. Randomly draw an integer N_i indicating the length of the retaliation list in terms of HS10 codes, allowing for a 20% deviation around the length N*_r of the actual retaliation list.
2. Draw a sample list L_{i,r} of N_i HS10 codes on which there are some exports from the US in 2017.
3. Compute the volume of exports from the US to country r that would be affected by retaliation if the sample list L_{i,r} were chosen: ∑_{h∈L_{i,r}} E_{h,US,r}.
4. If 0.9 < (∑_{h∈L_{i,r}} E_{h,US,r}) / (∑_{h∈L*_r} E_{h,US,r}) < 1.1, accept the candidate list L_{i,r}.

As indicated in the pseudo-code, we construct a counterfactual retaliation list by first choosing a similar number of products to target (allowing for a 20% deviation). We then sample a set of products to target and calculate the affected export volume. Lastly, we accept any list which affects a similar amount of exports as the actual list (allowing for a 10% deviation). The result of our sampling procedure is a set of retaliation lists that are similar to the original list in many dimensions but target a different set of US exports. While the simulation approach traces out some aspects of the "retaliation possibilities frontier", it ignores two potential strategic elements. First, retaliation lists may be designed in a way that preserves an option value to hit back in case of a further escalation. Second, retaliating countries may coordinate their retaliation responses to maximize their effectiveness. It is also important to note that the counterfactual retaliation bundles are not orthogonal to the actual retaliation basket (see Appendix Figure A4). The observed positive correlation results mechanically because the simulated baskets overlap with the actual retaliation basket.
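The sampling procedure above translates directly into code. The sketch below is a naive rejection sampler under the stated constraints; the inputs and names are hypothetical, and acceptance can be slow when export values are heavy-tailed.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical inputs: 2017 US exports to retaliating country r by HS10
# code, and the actually chosen retaliation list. Names are illustrative.
exports = pd.read_csv("us_exports_to_r_hs10.csv")  # hs10, value
actual = set(pd.read_csv("actual_list_r.csv")["hs10"])

value = exports.set_index("hs10")["value"]
codes = value.index.to_numpy()
n_star = len(actual)                    # length of the actual list
target = value.loc[list(actual)].sum()  # exports affected by the actual list

baskets = []
while len(baskets) < 1000:
    # Step 1: list length within +/- 20% of the actual list length.
    n_i = int(rng.integers(int(0.8 * n_star), int(1.2 * n_star) + 1))
    # Step 2: sample n_i HS10 codes with positive US exports.
    cand = rng.choice(codes, size=n_i, replace=False)
    # Steps 3-4: accept if the affected exports fall within +/- 10% of the
    # volume affected by the actual list.
    if 0.9 < value.loc[cand].sum() / target < 1.1:
        baskets.append(set(cand))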
Evaluating the degree of political targeting. The simulation approach is particularly useful as it allows us to quantify the degree of political targeting relative to the counterfactual baskets. More specifically, we evaluate whether the actual retaliation appears at the upper or lower end of the potential retaliation distribution. We also investigate the underlying trade-offs that countries face in their retaliation design. For this analysis, we estimate the regression models studied in the previous Tables 1, 2 and 4, replacing the actual exposure measure with the exposure measure implied by each counterfactual basket. Table 5 presents the share of the counterfactual estimates β̂_r that would imply a higher level of political targeting. (Note that there is a non-negligible cross-correlation across retaliation bundles. Appendix Figure A4 highlights that the implied measures and the actually chosen retaliation response have a positive correlation across almost each of the 1000 counterfactual bundles. This is a mechanical result: retaliation responses that meet the criteria are quite similar and produce some overlap, implying a mechanical cross-correlation.) In columns (1) and (2) we focus on the outcomes studied in Table 1. Column (1) suggests that for China there exist hardly any feasible and comparable retaliation responses that would produce a stronger degree of political targeting. Columns (3) to (8) repeat the exercise for the outcomes studied in Tables 2 and 4; the observed patterns are broadly similar. The finding that Canadian and Mexican retaliation, while being quite robustly associated with support for Donald Trump, does not appear to be at the upper end of the achievable targeting distribution suggests that other considerations may have played just as important a role. We next investigate which other objectives countries might include in their considerations.

Retaliation trade-offs. The previous section suggests that retaliation appears to specifically target parts of the US that swung to support Donald Trump. Yet, relative to a set of counterfactual retaliation responses, especially for Canada and Mexico, the implemented choice seems suboptimal. What may explain this observation? As our discussion of the EU's retaliation design objectives suggested, countries designing retaliation have multiple objectives. In the EU regulation constraining retaliation design, the mitigation of harm to consumers and firms features prominently, along with political effectiveness. In this section we construct a set of relevant measures that might constrain the retaliation choice. In particular, we investigate the role of revealed comparative advantage, import demand elasticities and the dominance of US exports.

Revealed comparative advantage. The first measure is an index of revealed comparative advantage (henceforth, RCA) as introduced by Balassa (1965). The intuition for this index, which is constructed from export data, is that a country appears to have a revealed comparative advantage in a good h if a higher share of that country's exports is accounted for by this good relative to the export share of this good across all trading countries. Formally, an RCA value above 1 for a specific good h indicates that a country has a revealed comparative advantage (see Kavaklı et al., 2017 for a recent example using RCA measures in the context of economic sanctions). When designing their retaliation response, countries might reasonably want to avoid goods for which the US has a revealed comparative advantage. We denote the implied average RCA for each retaliation list as RCA_{i,r}, which we weight by the implied volume of trade. (While the sum of the weights across baskets will be the same, as our counterfactual baskets target a similar volume of trade, the distribution of weights differs.) As the construction of the RCA indices requires trade data between all countries, we can only construct the RCA at the HS6 level, based on data from UN Comtrade.
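For concreteness, the Balassa (1965) index is standard; for a good h and the US it reads:

\mathrm{RCA}_{h}^{US} = \frac{X_{h}^{US} \,/\, \sum_{h'} X_{h'}^{US}}{\sum_{c} X_{h}^{c} \,/\, \sum_{c}\sum_{h'} X_{h'}^{c}},

where X_{h}^{c} denotes exports of good h by country c. The numerator is the share of good h in US exports, and the denominator is the share of good h in world exports, so values above 1 indicate a US revealed comparative advantage in h.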
Import demand elasticities. Whether a specific good is chosen for retaliation may also depend on the associated import demand elasticities. Presumably, for retaliation to be effective, goods for which import demand is found to be particularly price elastic should prove more effective targets. Further, tariffs on goods with a high import demand elasticity are less likely to affect domestic consumers. We therefore use the import demand elasticity estimates constructed by Soderbery (2018) at the HS4 level for each of the retaliating countries. As before, we compute the trade-weighted average import demand elasticity σ_{i,r} specific to a counterfactual retaliation list i for country r, and evaluate this against the elasticity σ_{r*} associated with the actually chosen retaliation response. (For the EU, we use the elasticities estimated by Soderbery (2018) for Germany, the US's biggest trading partner within the EU; the results are qualitatively similar if we use other EU countries as reference.)

Dominance of US exports. Countries may also want to avoid retaliating against, and thereby raising the cost of, specific imports for which the US is the predominant source. To measure this, we construct the share of imports I_{h,i,r} of a good h on a retaliation list i of country r that stems from the US relative to the rest of the world: s_{h,i,r} = I^{US}_{h,r} / I^{World}_{h,r}. We compute the trade-volume-weighted average implied share of US imports, s_{i,r}, across the goods in each counterfactual retaliation list i for country r. We then evaluate the corresponding share s_{r*} associated with the actual retaliation list against the counterfactual lists. This analysis is conducted at the HS6 level (based on UN Comtrade data). In Table 6 we report summary statistics for the three measures and how they compare across retaliation baskets. Ideally, in order to minimize harm to its own economy, a country would favor retaliating against goods with a low RCA, a large import demand elasticity and a low US import market share. In Table 7 we contrast how the distribution of counterfactual baskets compares with the actual retaliation response. The EU's retaliation appears to target goods for which the US has a weaker RCA and goods for which the US is a less dominant supplier. The Mexican response, on the other hand, appears to target a goods basket with a relatively high import demand elasticity and a lower revealed comparative advantage. We next shed light on the underlying trade-offs visually.

Results. For every (potential) retaliation list i of retaliating country r, we have now constructed a vector of attributes (β̂_{i,r}, RCA_{i,r}, s_{i,r}, σ_{i,r}). To illustrate the trade-offs and constraints imposed on retaliation design, we visualize the joint distribution of the pair (β̂_{i,r}, RCA_{i,r}). For countries with more diversified trade relationships with the US, there are far fewer constraints on retaliation design. Relative to the counterfactual baskets, we observe that, in particular for the EU and China, retaliation appears to have been chosen at the upper end. We are not aware of another paper that has explored retaliation in this way. For the EU, there exist very few alternative retaliation bundles that would produce a higher degree of political targeting and a lower RCA value. The same is true for Canada and, to a lesser extent, for Mexico. In Appendix Figure A5, we study the implied import demand elasticity.
The figure highlights that, for both Canada and Mexico, retaliation appears to be targeted towards goods with a high import demand elasticity and a higher degree of political targeting. Appendix Figure A6 studies the implied US market power for specific retaliation baskets. Based on this measure, the EU's retaliation response stands out in achieving a fair degree of political targeting while avoiding goods for which the US is a dominant supplier. In Table 8, we compute the shares of retaliation baskets that would imply a higher degree of political targeting while considering our other proxies capturing retaliation effectiveness and domestic economic harm. Throughout, the chosen retaliation appears at the upper end in terms of producing high political targeting with a low RCA. For the EU, only around 1% of the counterfactual retaliation responses would produce a higher degree of political targeting and a lower RCA. The Chinese retaliation response clearly stands out, as it appears to target goods with a high RCA. Much of this is dictated by the specific constraints that Chinese retaliation design faces, as the vast majority of other feasible retaliation baskets would produce no political targeting whatsoever.

Conclusion

Based on the recent trade escalation provoked by the administration of Donald Trump, this paper provides empirical evidence for the political targeting of retaliatory tariffs. Using a novel simulation approach, we show that retaliatory tariffs indeed disproportionately targeted more Republican areas. This suggests that retaliatory tariffs have a clear political dimension. We further illustrate that countries face a trade-off between the degree of political targeting and the potential harm done to their own economy. Our findings suggest that countries appear to put different weights on these two policy objectives. To the best of our knowledge, this paper is the first to empirically document this trade-off. Future work should hence investigate whether retaliation is effective in shaping the underlying trade-policy preferences of politicians and the electorate more broadly. This paper suggests that such an empirical study, for example using difference-in-differences designs, will have to find a way to navigate the endogeneity of retaliation exposure that this paper highlights.

Notes: The dependent variable is an indicator stating whether a respondent holds a favorable view of the candidate indicated. The responses include don't know, refused and those that hold no view; the patterns are robust to dropping these observations. Regressions include individual-level controls: the respondent's racial identity, income, Republican party affiliation, gender and the year of the survey. Regressions are weighted using survey weights provided by Gallup. Standard errors are clustered at the county level and are presented in parentheses; stars indicate *** p < 0.01, ** p < 0.05, * p < 0.1.

Notes: The dependent variable is a dummy variable indicated in the panel label. All regressions control for state fixed effects and are weighted with the provided survey weights. Regressions include individual-level controls: the respondent's racial identity, income, Republican party affiliation and gender. Standard errors are clustered at the county level and are presented in parentheses; stars indicate *** p < 0.01, ** p < 0.05, * p < 0.1.
Notes: The table reports the measures of the extent of political targeting implied by the set of simulated counterfactual retaliation baskets vis-a-vis the actually chosen retaliation response. The figures represent the share of retaliation baskets that imply a retaliation exposure measure above what is implied by the actually chosen retaliation response. Columns (1)-(2) study the county-level data explored in Table 1, columns (3)-(5) use the measures leveraged in Table 2, while columns (6)-(8) explore the measures studied in Table 4.

A Additional Figures and Tables

Figure A1: Ignoring the tariff rate does not skew the retaliation exposure measure: comparing the retaliation exposure measure including or ignoring the applied tariff rate.

Figure A2: Which sectors were targeted by retaliation measures? Combining the EU, Canada, Turkey, India and Chinese retaliation lists. Notes: The pie chart plots the trade-volume-weighted distribution of countermeasures across sectors using the 2017 export volume.

B What are the economic effects of retaliation?

B.1 Data on economic impact measures

As a first measure of economic impact, we study the effects of retaliation on trade flows and export price indices. While reduced trade flows could capture both trade disruption and trade diversion, any impact of retaliatory tariffs on export price indices is likely to indicate tangible economic shocks. These data are available for around 90 different four-digit NAICS sectors and will help complement the analysis of trade flows. Specifically, since trade flows may simply be re-routed, it could be that the income implications of the tariffs are limited. Hence, studying export price indices may help shed light on whether the tariffs actually did produce a negative income shock. Figure ?? presents the year-on-year changes in the export price indices of the US for agriculture and manufacturing sector outputs. Note that these figures do not account for the different sizes of the relevant sectors, but the observed deterioration in export prices following July 2018 is evident, indicating that export prices did indeed collapse. We also test this in a more robust econometric framework later on.

B.2 Empirical specification

Impact on exports. We first investigate the impact of retaliation on US trade flows. We use monthly US export data at the HS8 level to measure US exports to China, the EU, Canada and Mexico as well as to the rest of the world. We then estimate the following difference-in-differences regression:

y_{h,r,t} = β (T_{h,r} × Post_t) + α_{r,h} + ν_{i,t} + ε_{h,r,t}  (4)

In this specification, y measures US exports, and the index r indicates the country which retaliated against the US. T_{h,r} is an indicator variable which is 1 if good h was chosen to be included in the retaliation basket of country r, and Post_t indicates months from July 2018 onwards, when most retaliation measures became effective. The regressions control for a range of shifters and fixed effects. Most importantly, we include HS8-by-trading-country specific shifters α_{r,h}, capturing country r's specific tastes for imports of good h from the US. We also control for destination-country-specific time fixed effects as well as additional time fixed effects, indicated here by ν_{i,t}. These additional time effects can be specific to a destination country r, or can account for good-specific seasonality. The latter is particularly relevant as US agricultural exports are highly seasonal.
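A sketch of how specification (4) could be estimated; the monthly HS8-by-destination panel and its column names are hypothetical, and the fixed effects are written out as dummies only for readability (in practice one would absorb them, since HS8-by-destination cells number in the millions).

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly HS8-by-destination export panel; names illustrative.
panel = pd.read_csv("monthly_hs8_exports.csv")
# Columns: exports (monthly export value), T_hr (1 if the good is on the
#          destination's retaliation list), hs8, hs4, dest,
#          month ("YYYY-MM" strings, so lexicographic comparison works)

panel["post"] = (panel["month"] >= "2018-07").astype(int)
panel["treated_post"] = panel["T_hr"] * panel["post"]

# Specification (4): good-by-destination shifters, destination-by-time
# fixed effects; good-specific seasonality controls are omitted here.
model = smf.ols("exports ~ treated_post + C(hs8):C(dest) + C(dest):C(month)",
                data=panel)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": panel["hs4"]})
print(fit.params["treated_post"])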
Impact on export prices. Secondly, we estimate the impact of retaliation on export price indices. This analysis is based on export price indices constructed for 46 four-digit NAICS sectors by the Bureau of Labor Statistics. We study to what extent sectors more exposed to retaliation measures saw a differential change in their export prices. To do so, we construct the exposure of a NAICS4 sector n to retaliation from country r, denoted E_{n,r}, as follows. Having merged the HS8 export data to NAICS codes, we compute the total volume of US exports in 2017 at the four-digit NAICS level that would become subject to retaliatory tariffs from July 2018 by country r, and divide this by the overall export volume. The tariff exposure measure across the 46 four-digit industry groups for which it is constructed ranges from 0 to 34.6%, indicating that at the top 34.6% of the exports produced by an industry were affected by tariffs. The average exposure measure is 5%. We then estimate the following regression:

y_{n,t} = β_r (E_{n,r} × Post_t) + α_{n_j} + γ_t + ε_{n,t}  (5)

The dependent variable measures the export price index of the four-digit NAICS sector n. The sector fixed effects, α_{n_j}, are at the level of the three-digit or the four-digit sector. Hence, we explore both within- and between-NAICS-sector variation. We include time fixed effects γ_t throughout. Further, in some more demanding specifications we allow for time-by-first-digit NAICS sector fixed effects. These first-digit sectors broadly distinguish agriculture, mining and manufacturing. Standard errors are clustered at the four-digit NAICS sector level. The main coefficient of interest is β_r. We would expect this coefficient to be negative, indicating that, after retaliatory measures came into effect, export price indices decreased for exports from sectors with a higher retaliation exposure E_{n,r}.

B.3 Results

Impact on exports. The regression results are presented in Table B1. The point estimates in Panel A suggest that exports that were exposed to retaliation shrank by around 75%. Panels B-E explore to what extent this result is robust to the exclusion of specific trading partners. It becomes obvious that the Chinese retaliation accounts for around 50-60% of the estimated contraction of US exports. This is expected, since the Chinese retaliation was by far the most extensive given the structure of US trade with China. Nonetheless, goods targeted by the EU, Canada and Mexico also exhibit a significant reduction in exports to these markets. Overall, the point estimates suggest that each month around USD 2.55 billion worth of exports have either not taken place or been diverted as a result of the tariff measures, amounting to around USD 15.28 billion in aggregate from when the retaliation measures became effective in July 2018 until the end of 2018. Panel A in Figure B2 provides an event-study version of specification (4), estimating separate coefficients for each pre- and post-treatment month. The figure highlights the sharp contraction in export volumes since July 2018, when most retaliation measures became effective. In Appendix Figure B1, we estimate the event studies focusing on pairs of countries, studying US exports to a specific country that retaliated and to the rest of the world with just these two series. The results highlight a strong degree of seasonality in exports of goods that were subject to retaliation by China, which captures the agricultural crop cycle across the US. Notably, the peak in exports that should occur around the summer failed to materialize, as commodity exports fell significantly due to retaliation. The figure suggests significant contractions in bilateral exports relative to trade with the rest of the world across the dyads that were affected by the retaliatory measures.
These results do not preclude the possibility that most of this trade was rerouted and absorbed by other trading partners. Yet, using the case of soybeans, a look at aggregate numbers suggests that there is a net contraction of exports. In other words, exports to the rest of the world have not absorbed the tariff-induced reduction in demand. To show that the retaliatory tariffs likely also had a significant effect on incomes in the areas that produce the affected commodities (and do not merely capture trade re-routing), we next provide some evidence suggesting that US export price indices also declined significantly.

Impact on export prices. Table B2 presents the results from this analysis. Since the data are aggregated into far coarser industry sectors, the point estimates are unsurprisingly noisier. Nevertheless, the results suggest that export prices declined significantly in four-digit NAICS sectors that were more exposed to retaliatory tariffs. Panel A studies the overall sector-level retaliation exposure measure, while Panels B-E split the retaliation exposure measure by country. The findings indicate that, at the coarse four-digit level, only the retaliation by China, Mexico and Canada had a significant effect on export price indices. To reiterate, this is not surprising given the coarseness of the export price index data. The results in columns 1-3 also rely on variation between four-digit NAICS sectors. When we fully focus on within-sector variation over time (columns 4-6), the estimates are even noisier. To illustrate the timing of the effects, Panel B of Figure B2 shows that the contraction in export price indices occurs at the time of the introduction of the retaliatory measures, with export prices growing strongly in early 2018. The latter could partly reflect increased demand due to stockpiling. Taken together, the evidence from exports as well as export prices indicates that the retaliatory tariffs did indeed induce some economic harm on the affected sectors. In that sense, the tariffs were effective. As the last piece of our analysis, we now investigate whether the tariffs also had a political impact.

Notes: The figure plots estimates from difference-in-differences regressions. Panel A presents point estimates capturing the evolution of exports from the US to the EU, China, Canada, Mexico and the rest of the world over time for goods targeted by retaliation. The underlying regressions control for HS8-code-by-destination shifters, destination-by-time fixed effects and targeted-sector-specific seasonality. Standard errors are clustered at the 4-digit HS code level. The right Panel B presents results from a regression studying 46 export price indices constructed at the four-digit NAICS level. The plot presents point estimates capturing the evolution of export price indices over time as a function of a four-digit NAICS sector's exposure to retaliation measures, measured as the share of the sector's 2017 exports that was exposed to retaliation measures. The underlying regressions control for NAICS4 export price index fixed effects and time fixed effects; regressions are weighted by the 2017 overall export volume, and standard errors are clustered at the NAICS4 level. 90% confidence bands are indicated.

Notes: The dependent variable is the level of US exports at the HS8 level by month. Standard errors are clustered at the 4-digit HS good level, with stars indicating *** p < 0.01, ** p < 0.05, * p < 0.1.
2019-03-13T01:46:07.779Z
2019-03-01T00:00:00.000
{ "year": 2020, "sha1": "b8b0c439dea3c5a79aebc02b5e5fb14af032c664", "oa_license": "CCBY", "oa_url": "http://wrap.warwick.ac.uk/142997/13/WRAP-tariffs-politics-evidence-Trump%E2%80%99s-Trade-Wars-Fetzer-2020.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "f0d545396c9403e29b06c2decf21f45c3c1881c3", "s2fieldsofstudy": [ "Political Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
8489712
pes2o/s2orc
v3-fos-license
New records of mites (Acari: Spinturnicidae) associated with bats (Mammalia, Chiroptera) in two Brazilian biomes: Pantanal and Caatinga. A first survey of mite species that ectoparasitize bats in the states of Ceará and Mato Grosso was conducted. The specimens of bats and their mites were collected in areas of the Caatinga and Pantanal biomes. A total of 450 spinturnicids representing two genera and ten species was collected from 15 bat species in the Private Reserve of the Natural Patrimony Serra das Almas, Ceará State, Northeast Brazil, and 138 spinturnicids representing two genera and four species were found on seven bat species collected in the Private Reserve of the Natural Patrimony Sesc Pantanal, Mato Grosso State, Central-Western Brazil. The occurrence of the genus Cameronieta and of the species Mesoperiglischrus natali, as well as four new associations (Periglischrus iheringi – Chiroderma vizottoi; P. micronycteridis – Micronycteris sanborni; P. paracutisternus – Trachops cirrhosus; Spinturnix americanus – Myotis riparius), are registered for the first time in Brazil.

Introduction

The family Spinturnicidae comprises hematophagous mites found exclusively on bats. These mites go through five life cycle stages: egg, larva, protonymph, deutonymph, and adult. The egg and larval stages occur inside a pregnant female (RUDNICK, 1960), and, according to Almeida et al. (2015), nymph and adult mites mostly infest the plagiopatagium of bats. The most recent taxonomy for the Spinturnicidae lists four genera in the New World (HERRIN & TIPTON, 1975): Cameronieta Machado-Allison, 1965, which is exclusive to mormoopid bats; Periglischrus Kolenati, 1857, the largest genus, found on phyllostomid bats; Spinturnix Von Hayden, 1826, a cosmopolitan genus, with a majority of known species occurring in association with Old World bats of the subfamily Vespertilionoidae, and seven species recorded in the New World (HERRIN & TIPTON, 1975); and Paraspinturnix Rudnick, 1960, a monotypic genus that parasitizes the anal orifice of Myotis bats. A fifth genus, Mesoperiglischrus (DUSBÁBEK, 1968), was presented by Morales-Malacara at the 10th International Congress of Acarology (2001) as a valid genus with two species found on natalid bats (MORALES-MALACARA, 2001). Studies on the occurrence of spinturnicids in Brazil have been conducted with bats collected in the capital city of Brasília, in regions of the Cerrado (GETTINGER & GRIBEL, 1989), in Atlantic forest areas in the states of Minas Gerais (AZEVEDO et al., 2002; MORAS et al., 2013), Pernambuco (DANTAS-TORRES et al., 2009), Rio Grande do Sul (SILVA et al., 2009) and Rio de Janeiro (ALMEIDA et al., 2011), and in the Pantanal region, state of Mato Grosso do Sul (SILVA & GRACIOLLI, 2013), besides Confalonieri's dissertation (1976), which presents a biometric study of P. iheringi and P. ojastii. In the present paper, the diversity and distribution of ectoparasitic Spinturnicidae species found in surveys conducted in the Pantanal region in the state of Mato Grosso and in the Caatinga region in the state of Ceará are reported.

Materials and Methods

Species inventories were conducted in different areas of two Brazilian biomes, the Private Reserve of the Natural Patrimony (RPPN, from the original Portuguese) Serra das Almas and the RPPN Sesc Pantanal. The RPPN Serra das Almas (05° 15' S/41° 00' W) comprises 6,146 hectares and is considered an Outpost of the Caatinga Biosphere Reserve, situated in the municipality of Crateús, state of Ceará (ARAÚJO et al., 2011).
The RPPN Sesc Pantanal (16° 41' S/56° 24' W) represents the largest RPPN in Brazil, with approximately 106,000 hectares between the rivers Cuiabá and São Lourenço in the municipality of Barão de Melgaço, state of Mato Grosso. It is an important area for the protection of Brazilian biodiversity and the preservation of genetic resources (SILVA & ABDON, 1998). In the RPPN Serra das Almas, bats were collected during nine nights in the dry season (August 2012) and 10 nights in the rainy season (February 2013). In the RPPN Sesc Pantanal, bats were collected during 15 nights in the dry season (May 2008). In both areas, bats were collected with mist nets measuring from 6 to 18 meters in length and 2.5 meters in height, placed on existing trails or above streams. The sampling period extended for six hours after sunset. Bat specimens that were returned to the wild were released at the capture site following their identification in the field, and voucher specimens were fixed in 10% formaldehyde and preserved in 70% alcohol, as previously described by Vizotto & Taddei (1973) and Handley (1988), and catalogued in the National Museum (MN, from the original Portuguese) Mammal Collection and the Adriano Lucio Peracchi (ALP) collection, Universidade Federal Rural do Rio de Janeiro, Rio de Janeiro, Brazil. Silva et al. (2015) and Tavares (2009) describe the bat species collected in both study areas. The taxonomic nomenclature applied to bat species follows the one proposed by Nogueira et al. (2014).

Results and Discussion

A total of 450 Spinturnicidae mite specimens, representing two genera and 10 species, was collected from 15 bat species captured in the RPPN Serra das Almas. In the RPPN Sesc Pantanal, seven bat species were collected carrying 138 mites distributed in two genera and four species of the same family (Tables 1 and 2). Mite families, genera and species are presented in alphabetical order and by collection area. The host species is listed along with parasite load information. Released bats are listed following the same norms, but with the month (in Roman numerals) and year of capture (number of host), followed by parasite load. The following results constitute the first survey of Spinturnicidae mites for the Caatinga in the state of Ceará and for the Pantanal biome in Mato Grosso.

Comments: The genus Cameronieta is exclusive to mormoopid bats (HERRIN & TIPTON, 1975) and comprises six species: C. strandtmanni (TIBBETTS, 1957), C. thomasi (MACHADO-ALLISON, 1965b) and C. elongatus (FURMAN, 1966), reported from Venezuela; and C. machadoi Dusbabek, 1967, C. tibbettsi Dusbabek, 1967 and C. torrei Dusbabek, 1967, reported from Cuba. Although P. parnellii has already been associated with C. elongatus and C. tibbettsi in Venezuela and Cuba, respectively (DUSBÁBEK, 1967; HERRIN & TIPTON, 1975), the specimens collected could not be allocated to any known species because of the position of the sternal setae and the length of the podosomal and metasternal setae (ALMEIDA et al., unpublished). Furthermore, this is the first time the genus is reported in Brazil.

Comments: Herrin & Tipton (1975) described Periglischrus tonatii as a primary parasite of the genus Tonatia, and reported the occurrence of the mite in association with T. silvicola, T. brasiliensis and T. carrikeri, all of which currently belong to the genus Lophostoma (Lee et al., 2002). In southeast Mexico and Panama, Morales-Malacara & Juste (2002) described P. steresotrichus, a species morphologically close to P. tonatii, and P. eurysternus, which is close to
P. paratorrealbai, both in association with T. evotis (currently Lophostoma evotis) and T. saurophila, respectively. The specimens obtained from T. bidens in the RPPN Serra das Almas are phenetically close to P. torrealbai, but belong to a new species that is currently being described.

Comments: Considered a primary parasite of the genus Glossophaga, this species has been reported in Brazil in association with G. soricina in Rio de Janeiro, São Paulo, Brasília and Pernambuco (CONFALONIERI, 1976; GETTINGER & GRIBEL, 1989; DANTAS-TORRES et al., 2009).

Comments: This species is the one most often cited in studies of bat parasites, and it is found in association with emballonurid, noctilionid and mormoopid bats and with a majority of the Phyllostomidae subfamilies (HERRIN & TIPTON, 1975). Given this wide range of hosts, it is possible that P. iheringi in fact comprises a number of species (HERRIN & TIPTON, 1975). In Brazil, this mite has been reported on Artibeus lituratus (GETTINGER & GRIBEL, 1989; DANTAS-TORRES et al., 2009; SILVA et al., 2009; ALMEIDA et al., 2011; CONFALONIERI, 1976) and Sturnira lilium (AZEVEDO et al., 2002; ALMEIDA et al., 2011; DANTAS-TORRES et al., 2009; CONFALONIERI, 1976). Here, we report for the first time an association between P. iheringi and C. vizottoi. Because of the vast number of cases we found, this association is likely correct.

Comments: This species represents a primary parasite of the genus Micronycteris (FURMAN, 1966). In the state of Rio de Janeiro, it has been found in association with M. megalotis Gray, 1842 by Almeida et al. (2011). The association with M. sanborni is reported here for the first time in Brazil.

Comments: In Venezuela, Herrin & Tipton (1975) found this species in association with T. cirrhosus, its primary host. This is the first time that this association is reported in Brazil.

Comments: A primary parasite of the bat genus Phyllostomus (MACHADO-ALLISON, 1965a), this species has been reported in Brazil in association with P. discolor in Brasília (GETTINGER & GRIBEL, 1989) and with P. hastatus in Rio de Janeiro (ALMEIDA et al., 2011) and Minas Gerais (CONFALONIERI, 1976). This is the first time that the association with A. planirostris is reported in Brazil; however, because we found it on only one host, this association requires further evaluation before being considered valid. The parasite has been reported in association with A. planirostris in Venezuela (HERRIN & TIPTON, 1975).

Comments: This primary parasite of the genus Anoura (HERRIN & TIPTON, 1975) has been reported in Brazil in association with A. geoffroyi and Anoura sp. in the state of Rio Grande do Sul (SILVA et al., 2009) and with A. caudifer and G. soricina in Rio de Janeiro (CONFALONIERI, 1976).

Comments: This mite is known to parasitize the genus Myotis in neotropical regions (HERRIN & TIPTON, 1975). It has been found in Brazil on M. nigricans (CONFALONIERI, 1976; SILVA & GRACIOLLI, 2013) and Nyctinomops macrotis (CONFALONIERI, 1976). The association with M. riparius is reported here for the first time in Brazil.

Conclusions

The occurrence of these Spinturnicidae species is reported here for the first time for the state of Ceará and for the Pantanal region of Mato Grosso. The 15 bat species collected in the Caatinga and the seven collected in the Pantanal carried three genera and 11 species of Spinturnicidae mites, totaling 588 specimens. Furthermore, the occurrence of Cameronieta sp. and M. natali and the associations P. iheringi – C. vizottoi, P. micronycteridis – M. sanborni, P. paracutisternus – T. cirrhosus, and S. americanus – M. riparius
riparius were registered for the first time in Brazil. The available data for the family Spinturnicidae in Neotropical bats leave wide gaps in the understanding of host associations and parasite distribution. This reflects the lack of research focus on the ectoparasitic fauna of bats: although the collection methods are almost identical, researchers often neglect ectoparasites after capturing bats. Thus, we reaffirm the need for proper and standardized ectoparasite data collection that minimizes contamination, proper cataloguing in museums, and greater collaboration between mammalogists and ectoparasitologists in identifying species.
Mutual-Information Based Few-Shot Classification

We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. We motivate our transductive loss by deriving a formal relation between the classification accuracy and mutual-information maximization. Furthermore, we propose a new alternating-direction solver, which substantially speeds up transductive inference over gradient-based optimization, while yielding competitive accuracy. We also provide a convergence analysis of our solver based on Zangwill's theory and bound-optimization arguments. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, while used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios, with random tasks, domain shift and larger numbers of classes, as in the recently introduced META-DATASET. Our code is publicly available at https://github.com/mboudiaf/TIM. We also publicly release a standalone PyTorch implementation of META-DATASET, along with additional benchmarking results, at https://github.com/mboudiaf/pytorch-meta-dataset.

INTRODUCTION

Deep learning models have achieved unprecedented success, approaching human-level performances when trained on large-scale labeled data. Nevertheless, the generalization of such models might be seriously challenged when dealing with new (unseen) classes, with only a few labeled instances per class. Humans, however, can learn new tasks rapidly from a handful of instances, by leveraging context and prior knowledge. The few-shot learning (FSL) paradigm [11], [35], [54] attempts to bridge this gap, and has recently attracted substantial research interest, with a large body of very recent works, e.g., [7], [10], [12], [13], [19], [24], [34], [40], [44], [45], [48], [57], [59], among many others. In the few-shot setting, a model is first trained on labeled data with base classes. Then, model generalization is evaluated on few-shot tasks, composed of unlabeled samples from novel classes unseen during training (the query set), assuming only one or a few labeled samples (the support set) are given per novel class. Most of the existing approaches within the FSL framework are based on the "learning to learn" paradigm, or meta-learning [12], [28], [45], [48], [54], where the training set is viewed as a series of balanced tasks (or episodes) that simulate the test-time scenario. Popular works include prototypical networks [45], which describe each class with an embedding prototype and maximize the log-probability of query samples via episodic training, and matching networks [54], which predict query labels through an attention-weighted combination of the support labels. More closely related to our work, the recent transductive inference of Dhillon et al. [10] minimizes the entropy of the network softmax predictions at unlabeled query samples, reporting competitive few-shot performances, while using standard cross-entropy training on the base classes.
The competitive performance of [10] is in line with several recent inductive baselines [7], [49], [55], which reported that standard cross-entropy training for the base classes matches or exceeds the performances of more sophisticated meta-learning procedures. Also, the performance of [10] is in line with established results in the context of semi-supervised learning, where entropy minimization is widely used [2], [14], [37]. It is worth noting that the inference runtimes of transductive methods are, typically, much higher than their inductive counterparts. For instance, the authors of [10] fine-tune all the parameters of a deep network during inference, which is several orders of magnitude slower than inductive methods such as ProtoNet [45]. Also, based on matrix inversion, the transductive inference in [34] has a complexity that is cubic in the number of query samples.

Info-max principle: While the semi-supervised and few-shot learning works in [10], [14] build upon Barlow's principle of entropy minimization [1], our few-shot formulation is inspired by the general info-max principle enunciated by Linsker [31], which formally consists in maximizing the Mutual Information (MI) between the inputs and outputs of a system. In our case, the inputs are the query features and the outputs are their label predictions. The idea is also related to info-max in the context of clustering [21], [22], [27]. More generally, info-max principles, well established in the field of communications, were recently used in several deep-learning problems, e.g., representation learning [18], [52], metric learning [4] or domain adaptation [30], among other works.

Contributions

• We propose Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the MI between the query features and their label predictions for a few-shot task at inference, while minimizing the cross-entropy loss on the support set. We formally motivate the mutual-information loss as a surrogate of the classification error.

• We derive an alternating-direction solver for our loss, which substantially speeds up transductive inference over gradient-based optimization, while yielding competitive accuracy. Furthermore, we provide a convergence analysis based on Zangwill's theory and bound-optimization arguments.

• Following standard transductive few-shot settings, our comprehensive evaluations show that TIM outperforms state-of-the-art methods substantially across various datasets and networks, while using a simple cross-entropy training on the base classes, without complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging, recently introduced scenarios, with domain shifts and larger numbers of ways.

This work extends and generalizes in many different ways our preliminary results in [3], published at the NeurIPS 2020 conference. More specifically, it introduces an information-theoretic justification for the previous formulation in subsection 2.3, it provides new results on the convergence of our TIM-ADM algorithm in subsection 3.2 and subsection 3.3, and it reports several new experiments and benchmarking results on META-DATASET, a recently introduced, challenging few-shot dataset, in subsection 4.5.

Few-shot setting

Assume we are given a labeled training set X_base = {(x_i, y_i)}, where x_i denotes the raw features of sample i and y_i its associated one-hot encoded label.
Such a labeled set is often referred to as the meta-training or base dataset in the few-shot literature. Let Y_base denote the set of classes for this base dataset. The few-shot scenario assumes that we are given a test dataset, from which we create randomly sampled few-shot tasks, each with a few labeled examples.

Standard tasks: Traditionally, models are (trained and) evaluated on K-way N_S-shot tasks, which involve randomly sampling N_S labeled examples from each of K different classes, also chosen at random. Let S denote the set of these labeled examples, with size |S| = N_S · K, referred to as the support set. Furthermore, each task has a query set, denoted by Q, composed of |Q| = N_Q · K unlabeled (unseen) examples, N_Q from each of the K classes. With models trained on the base set, few-shot techniques use the labeled support sets to adapt to the tasks at hand, and are evaluated based on their performances on the unlabeled query sets.

Random tasks: Recently, there has been an increasing interest in moving towards random tasks, which arguably provide a more challenging but more realistic scenario. In particular, META-DATASET [50] proposes several improvements over the standard setting: breaking the symmetry in the support set by having each class contain a different random number of labelled samples, randomly sampling the total number of support samples for a task, and randomly sampling the total number of ways. Both the standard and the random task settings will be evaluated in section 4.

Proposed formulation

We begin by introducing some basic notation and definitions before presenting our overall Transductive Information Maximization (TIM) loss and the different optimization strategies for tackling it. For a given K-way few-shot task, with a support set S and a query set Q, let X denote the random variable associated with the raw features within S ∪ Q, and let Y ∈ Y = {1, . . . , K} be the random variable associated with the data labels. Let f_φ : X → Z ⊂ R^d denote the encoder (i.e., feature-extractor) function of a deep neural network, where φ denotes the trainable parameters, and Z stands for the set of embedded features. The encoder is first trained from the base training set X_base using the standard cross-entropy loss, without any meta-training or specific sampling schemes. Then, for each specific few-shot task, we propose to minimize a mutual-information loss defined over the query samples. Formally, we define a soft classifier associated to the random variable Ŷ ∈ Y and parametrized by a weight matrix W = [w_1, . . . , w_K] ∈ R^{K×d}, whose posterior distribution over labels given features², p_ik := P(Ŷ = k | X = x_i; W, φ), and marginal distribution over query labels, p̂_k := P(Ŷ_Q = k; W, φ), are given by:

$$p_{ik} \propto \exp\Big(-\frac{\tau}{2}\,\lVert z_i - w_k \rVert^2\Big), \qquad \widehat{p}_k = \frac{1}{|Q|}\sum_{i \in Q} p_{ik}, \tag{1}$$

where z_i = f_φ(x_i)/‖f_φ(x_i)‖₂ are the L2-normalized embedded features, and τ is a temperature parameter. Now, for each single few-shot task, we introduce our empirical weighted mutual information between the query samples and their latent labels, which integrates two terms: the first is an empirical (Monte-Carlo) estimate of the conditional entropy of labels given the query raw features, denoted Ĥ(Ŷ_Q | X_Q), while the second is the empirical label-marginal entropy, Ĥ(Ŷ_Q):

$$\widehat{\mathcal{I}}_{\alpha}(X_Q;\widehat{Y}_Q) = -\alpha\,\widehat{H}(\widehat{Y}_Q \mid X_Q) + \widehat{H}(\widehat{Y}_Q), \quad \text{with} \quad \widehat{H}(\widehat{Y}_Q \mid X_Q) = -\frac{1}{|Q|}\sum_{i\in Q}\sum_{k=1}^{K} p_{ik}\log p_{ik}, \quad \widehat{H}(\widehat{Y}_Q) = -\sum_{k=1}^{K}\widehat{p}_k\log\widehat{p}_k, \tag{2}$$

with α a non-negative hyper-parameter. Notice that setting α = 1 recovers the standard mutual information. Setting α < 1 allows us to down-weight the conditional entropy term, whose gradients may dominate the marginal-entropy gradients as the predictions move towards the vertices of the simplex. The role of both terms in Eq.
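For concreteness, the distance-based posterior and the estimated label marginal above can be sketched in a few lines of PyTorch. This is an illustrative sketch following the definitions in Eq. (1), not the authors' released code; the tensor names and shapes are our own convention.

```python
import torch

def posteriors(z, W, tau=15.0):
    """Soft-classifier posteriors p_ik ∝ exp(-tau/2 * ||z_i - w_k||^2).

    z: (n, d) L2-normalized embedded features
    W: (K, d) classifier weights
    returns: (n, K) rows on the probability simplex
    """
    d2 = torch.cdist(z, W) ** 2                 # squared distances ||z_i - w_k||^2
    return torch.softmax(-0.5 * tau * d2, dim=1)

# Toy example for a 5-way task with 75 query samples and d = 64
z_q = torch.nn.functional.normalize(torch.randn(75, 64), dim=1)
W = torch.randn(5, 64)
p = posteriors(z_q, W)          # (75, 5) posteriors p_ik
p_marginal = p.mean(dim=0)      # estimated label marginal over the query set
```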
(2) will be discussed after introducing our overall transductive inference loss in the following, by embedding supervision from the task's support set. We embed supervision information from the support set S by integrating a standard cross-entropy loss CE with the information measure in Eq. (2), which enables us to formulate our Transductive Information Maximization (TIM) loss as follows:

$$\min_{W} \;\; \lambda \cdot \underbrace{\Big(-\frac{1}{|S|}\sum_{i\in S}\sum_{k=1}^{K} y_{ik}\log p_{ik}\Big)}_{CE} \; - \; \widehat{\mathcal{I}}_{\alpha}(X_Q;\widehat{Y}_Q), \tag{3}$$

where y_ik denotes the k-th component of the one-hot encoded label y_i associated to the i-th support sample. The non-negative hyper-parameters α and λ will be fixed to α = λ = 0.1 in all our experiments.

(Footnote 2: In order to simplify our notations, we deliberately omit the dependence of the posteriors p_ik on the network parameters (φ, W). Also, p_ik takes the form of softmax predictions, but we omit the normalization constants.)

It is worth discussing in more detail the role (importance) of the mutual-information terms in (3):

• The conditional entropy Ĥ(Ŷ_Q | X_Q) aims at minimizing the uncertainty of the posteriors at unlabeled query samples, thereby encouraging the model to output confident predictions³. This entropy loss is widely used in the context of semi-supervised learning (SSL) [2], [14], [37], as it models effectively the cluster assumption: the classifier's boundaries should not occur at dense regions of the unlabeled features [14]. Recently, [10] introduced this term for few-shot learning, showing that entropy fine-tuning on query samples achieves competitive performances. In fact, if we remove the marginal entropy Ĥ(Ŷ_Q) in objective (3), our TIM objective reduces to the loss in [10]. The conditional entropy Ĥ(Ŷ_Q | X_Q) is of paramount importance, but its optimization requires special care, as its optima may easily lead to degenerate (non-suitable) solutions on the simplex vertices, mapping all samples to a single class. Such care may consist in using small learning rates and fine-tuning the whole network (which itself often contains several layers of regularization), as done in [10], both of which significantly slow down transductive inference.

• The label-marginal entropy regularizer Ĥ(Ŷ_Q) encourages the marginal distribution of labels to be uniform, thereby avoiding the degenerate solutions obtained when solely minimizing conditional entropy. Hence, it is highly important, as it removes the need for the implicit regularization mentioned in the previous paragraph. In particular, high-accuracy results can be obtained even when using higher learning rates and fine-tuning only a fraction of the network parameters (classifier weights W instead of the whole network), speeding up transductive runtimes substantially. As will be observed from our experiments, this term brings substantial improvements in performances (e.g., up to 10% increase in accuracy over entropy fine-tuning on the standard few-shot benchmarks), while facilitating optimization, thereby reducing transductive runtimes by orders of magnitude.

Mutual information and risk

We now give some theoretical justification for the proposed formulation, especially for the mutual information Î_α used in Eq. (3). First, let us make a subtle distinction between the soft decision Ŷ|X, which corresponds to sampling a decision from the softmax distribution output by the network, and the hard decision

$$y^*(x) := \arg\max_{y \in \mathcal{Y}} P(\widehat{Y} = y \mid X = x),$$

which simply picks the class with the highest softmax score. Let us now define the probability of classification error (or risk) as

$$P_e := P(\widehat{Y}_Q \neq Y_Q),$$

where we recall that X_Q, Y_Q model the data distribution on the query set.
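Putting the pieces together, the full transductive loss in Eq. (3) is straightforward to write down. The sketch below reuses the `posteriors` helper from the previous snippet and the experimental values α = λ = 0.1; it is a minimal illustration, not the reference implementation.

```python
import torch

def tim_loss(p_s, y_s, p_q, alpha=0.1, lam=0.1, eps=1e-12):
    """TIM loss: lambda * CE(support) - [ H(Y_Q) - alpha * H(Y_Q | X_Q) ].

    p_s: (|S|, K) support posteriors;  y_s: (|S|, K) one-hot support labels
    p_q: (|Q|, K) query posteriors
    """
    ce = -(y_s * torch.log(p_s + eps)).sum(dim=1).mean()      # supervised term
    h_cond = -(p_q * torch.log(p_q + eps)).sum(dim=1).mean()  # H(Y_Q | X_Q)
    p_bar = p_q.mean(dim=0)                                   # marginal over query labels
    h_marg = -(p_bar * torch.log(p_bar + eps)).sum()          # H(Y_Q)
    return lam * ce - (h_marg - alpha * h_cond)
```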
We argue that, without any assumption, mutual information need not be a well-suited criterion for classification purposes. To illustrate this point, consider any permutation (except the identity) of the labels π : Y → Y, and a classifier Ŷ|X such that P(Ŷ = π(y) | X = x, Y = y) = 1. Then, one can verify that the classifier Ŷ|X attains both 100% classification error and maximum mutual information I(X; Ŷ) = log(|Y|). Therefore, restricting assumptions on the classifier Ŷ|X must apply in order to relate the mutual-information objective to the probability of classification error. In this section, we address the following question: can we find sufficient conditions on the classifier Ŷ_Q|X_Q such that the mutual information I(X_Q; Ŷ_Q) and the classification error P_e can be explicitly related? In our following result, we draw a theoretical link between mutual-information maximization and classification error. Specifically, under the assumption that a classifier's confusion matrix is diagonally dominant, we show that its risk can be explicitly upper bounded in terms of the mutual information.

Proposition 1. Consider the classifier Ŷ_Q|X_Q defined on the query set Q. Assume that the confusion matrix of Ŷ_Q|X_Q is diagonally dominant. Without loss of generality, we assume there exists ε > 0 such that the off-diagonal condition (5) holds. Then the relation (6) holds, where δ(·) is a strictly increasing function on the restricted domain [0, (|Y| − 1)/|Y|], with δ(0) = 0 and f(0) = 0. In the case of a uniform prior distribution over classes, Y ∼ U[1 : K], expression (6) becomes expression (7).

The full proof of Proposition 1 is provided in the Supplemental. In all few-shot benchmarks, we consider a uniform distribution on the query set. Therefore, Eq. (7) holds on the query set, which clearly motivates the transductive mutual information. In the case of a perfectly diagonal confusion matrix, i.e., ε = 0, one can verify that maximum mutual information, i.e., Ĥ(Ŷ_Q) = log(K) and Ĥ(Ŷ_Q|X_Q) = 0, leads to perfect classification, P_e = 0. In practice, such an assumption is surely unrealistic, but we show in Figure 1 that the assumption of a diagonally dominant confusion matrix is verified on average, even at initialization.

OPTIMIZATION

At this stage, we consider that the feature extractor has already been trained on the base classes (using standard cross-entropy). We now propose two methods for minimizing our objective (3) for each test task. The first one is based on standard Gradient Descent (GD). The second is a novel way of optimizing mutual information, and is inspired by the Alternating Direction Method of Multipliers (ADMM). For both methods:

• The pre-trained feature extractor f_φ is kept fixed. Only the weights W are optimized for each task. This choice is discussed in detail in subsection 4.6. Overall, and interestingly, we found that fine-tuning only the classifier weights W, while fixing the feature-extractor parameters φ, yielded the best performances for our mutual-information loss.

• For each task, the weights W are initialized as the class prototypes of the support set:

$$w_k^{(0)} = \frac{\sum_{i \in S} y_{ik}\, z_i}{\sum_{i \in S} y_{ik}}.$$

Gradient descent (TIM-GD)

A straightforward way to minimize our loss in Eq. (3) is to perform gradient descent over W, which we update using all the samples from the few-shot task (both support and query) at once (i.e., no mini-batch sampling). This gradient approach yields our overall best results, while being one order of magnitude faster than the transductive entropy-based fine-tuning in [10].
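Combining the two bullet points above with the earlier helpers, TIM-GD inference reduces to a short optimization loop. The sketch below (prototype initialization, Adam on W only, frozen feature extractor, 1,000 iterations as in the experiments) is our own illustration of the procedure, not the released implementation; `posteriors` and `tim_loss` are the helpers sketched earlier.

```python
import torch

def tim_gd(z_s, y_s, z_q, n_iter=1000, tau=15.0):
    """z_s: (|S|, d) support features; y_s: (|S|, K) one-hot labels; z_q: (|Q|, d)."""
    # Initialize W as the class prototypes of the support set
    W = (y_s.t() @ z_s) / y_s.sum(dim=0, keepdim=True).t()   # (K, d)
    W = W.clone().requires_grad_(True)
    opt = torch.optim.Adam([W])                               # recommended defaults
    for _ in range(n_iter):
        loss = tim_loss(posteriors(z_s, W, tau), y_s, posteriors(z_q, W, tau))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return posteriors(z_q, W.detach(), tau).argmax(dim=1)     # query predictions
```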
As will be shown later in our experiments, the method in [10] needs to fine-tune the whole network (i.e., to update both φ and W), which provides implicit regularization, avoiding the degenerate solutions of entropy minimization. However, TIM-GD (with W-updates only) still remains two orders of magnitude slower than inductive closed-form solutions [45]. In the following, we present a more efficient solver for our problem. The algorithm associated to TIM-GD is presented in Algorithm 2.

Alternating direction method (TIM-ADM)

We derive an Alternating Direction Method (ADM) for minimizing our objective in (3). Such a scheme yields substantial speedups in transductive learning (one order of magnitude), while maintaining excellent accuracy performances. To do so, we introduce auxiliary variables representing latent assignments of query samples, and minimize a mixed-variable objective by alternating two sub-steps, one optimizing w.r.t. the classifier's weights W, and the other w.r.t. the auxiliary variables q.

Proposition 2. The objective L in Eq. (3) can be minimized via the constrained formulation (9) of the problem.

Proof: It is straightforward to notice that, when the equality constraints q_ik = p_ik are satisfied, the last term in objective (9), which can be viewed as a soft penalty for enforcing those equality constraints, vanishes. Objectives (3) and (9) then become equivalent.

Splitting the problem into sub-problems on W and q as in Eq. (9) is closely related to the general principle of ADMM (Alternating Direction Method of Multipliers) [5], except that the KL divergence is not a typical penalty for imposing the equality constraints⁴. Note that the multiplier β is kept fixed in practice and treated as a hyperparameter. The main idea is to decompose the original problem into two easier sub-problems, one over W and the other over q, which can be alternately solved, each in closed form. Interestingly, this KL penalty is important, as it completely removes the need for dual iterations for the simplex constraints in Eq. (9), yielding closed-form solutions. We now describe the TIM-ADM algorithm, which alternates, for t > 0, the closed-form q-updates (10) and W-updates (11), where we recall that ∝ means "proportional to" (with the correct constant such that Σ_k q_ik = 1).

(Footnote 4: Typically, ADMM methods use multiplier-based quadratic penalties for enforcing the equality constraint.)

Proposition 3. Assume that the Hessian matrices H_k^S and H_k^Q, defined w.r.t. the parameters w_k (with Id ∈ R^{d×d} the identity matrix appearing in their expressions), are both negative semi-definite. Then the ADM formulation in Proposition 2 can be minimized w.r.t. the auxiliary assignment variables q and the classifier weights W by alternating the closed-form updates (10) and (11). Specifically, the updates (10) and (11) for some t > 0 are guaranteed to fulfill the monotonic-decrease conditions (12) and (13).

Proof: A detailed proof is deferred to the supplementary material. Here, we summarize the main technical ingredients. Keeping the auxiliary variables q fixed, we optimize an auxiliary bound on Eq. (9) that is convex w.r.t. W. With W fixed, the objective (9) is strictly convex w.r.t. the auxiliary variables q, whose updates come from a closed-form solution of the KKT (Karush-Kuhn-Tucker) conditions. Interestingly, the negative entropy of the auxiliary variables, which appears in the penalty term, implicitly handles the simplex constraints, which removes the need for dual iterations to solve the KKT conditions.
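As a sketch of what the penalized problem of Proposition 2 evaluates, the snippet below writes the ADM objective with the mutual-information terms expressed through the auxiliary assignments q and the KL penalty β·KL(q_i‖p_i) enforcing q ≈ p. That the entropies act on q is our reading of the proof of Proposition 2 (the penalty vanishes and the two objectives coincide when q = p); the exact closed-form q- and W-updates are derived in the paper's Supplemental and are not reproduced here.

```python
import torch

def adm_objective(p_s, y_s, p_q, q, alpha=0.1, lam=0.1, beta=1.0, eps=1e-12):
    """L_ADM(W, q): the TIM loss written on the auxiliary assignments q,
    plus the KL penalty beta * mean_i KL(q_i || p_i) tying q to the posteriors p."""
    ce = -(y_s * torch.log(p_s + eps)).sum(dim=1).mean()
    h_cond = -(q * torch.log(q + eps)).sum(dim=1).mean()
    q_bar = q.mean(dim=0)
    h_marg = -(q_bar * torch.log(q_bar + eps)).sum()
    kl = (q * (torch.log(q + eps) - torch.log(p_q + eps))).sum(dim=1).mean()
    return lam * ce - (h_marg - alpha * h_cond) + beta * kl
```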
In Proposition 3, the symmetric matrices H_k^S and H_k^Q correspond to Hessian matrices w.r.t. the parameters w_k, and their negative semi-definiteness allows us to interpret the W-update (11) as a bound-optimization step. This assumption is empirically verified, as shown in Figure 2. The algorithm associated to TIM-ADM is presented in Algorithm 1. Note that the loss L_ADM is bounded from below, as the cross-entropy CE is positive, the two entropy terms are bounded between 0 and log(K), and the KL divergence is positive. Therefore, Proposition 3 allows us to affirm that, provided the assumptions are respected, the sequence of loss values {L_ADM(W^(t), q^(t))}_{t∈N} is both non-increasing and bounded from below, and hence converges. However, this does not inform us about the behavior of the parameter sequence {W^(t), q^(t)}_{t∈N}. The latter is examined in the next subsection 3.3.

Convergence of TIM-ADM

In this section, we study the convergence of the sequence {W^(t), q^(t)}_{t∈N} in the TIM-ADM method. The idea is to show that each q-update and each W-update strictly decreases the objective function unless the method has reached a stationary point. To formalize this idea, we analyze our proposed TIM-ADM algorithm through the lens of Zangwill's global convergence theory [60], which provides a simple but general framework to study the convergence of iterative algorithms. Note that this theory was already used to prove the convergence of the concave-convex [46] and the EM/GEM iterative procedures [58]. In particular, we show that all limit points of any sequence {W^(t), q^(t)}_{t∈N} produced by our algorithm are stationary points. To avoid interrupting the flow of the main paper, we hereby only provide our convergence result; we defer the technical background on the convergence of iterative algorithms that leads to our main result to the supplementary material. The full proof of Proposition 4 is provided in the Supplemental material. The proof is a direct application of Zangwill's convergence theorem.

Datasets

We provide an extensive evaluation of TIM on the following few-shot learning benchmarks.

Standard benchmarks: Standard benchmarks all use the standard K-way, N_S-shot task-generation procedure. Specifically, we experiment on:

• mini-ImageNet, which contains 100 classes, split as in [41] into 64/16/20 classes for training/validation/testing. Each class contains exactly 600 images.

• The Caltech-UCSD Birds 200 (CUB) dataset [56], which possesses 200 classes, split into 100/50/50 classes for training/validation/testing. Each class contains approximately 60 images.

• The tiered-ImageNet [42] dataset, which is composed of 608 classes. The train/val/test split of classes is 351/97/160. Each class contains close to 1,300 images.

While these benchmarks have traditionally been used to evaluate few-shot learning methods, their fixed task format raises questions as to the realism of the evaluation. In fact, [6] showed that using the same number of shots during training and evaluation already represents a learning bias.

META-DATASET: To complement our experiments on standard benchmarks, we use the recently introduced META-DATASET [50]. META-DATASET aggregates the most popular image classification benchmarks. In total, it combines 10 different datasets, including the well-known ImageNet dataset. For each dataset, the classes are split between training/validation/testing, roughly following a 70%/15%/15% proportion. For instance, the ImageNet classes are split into 712/158/130 train/val/test classes.
Therefore, the first challenge of META-DATASET lies in the presence of domain shift between the base training set and the test set. Second, META-DATASET offers a significantly more challenging task-generation process than the standard K-way N_S-shot tasks. In particular, each task has a random number of ways and of support and query shots. Moreover, the number of support samples varies across classes within a task. We refer the reader to [50] for more details on the task-generation process and the exact splits of each of the 10 datasets present in META-DATASET.

Hyperparameters

Standard benchmarks: Hyperparameters for TIM are kept fixed across benchmark experiments for both methods, TIM-GD and TIM-ADM. Specifically, the conditional entropy weight α and the cross-entropy weight λ in Objective (3) are both set to 0.1, and the penalty weight β is set to 1. The temperature parameter τ in the classifier is set to 15. For the TIM-GD method, we use the ADAM optimizer with the recommended parameters [25], and run 1,000 iterations for each task. For TIM-ADM, we run 150 iterations.

META-DATASET: Following the procedure of [50], the hyperparameters of each method are tuned (following the instructions of each method) on the validation split of ImageNet ILSVRC 2012, both for the TIM methods and the reproduced methods.

TABLE 1: Comparison to the state-of-the-art methods on mini-ImageNet, tiered-ImageNet and CUB. The methods are sub-grouped into transductive and inductive methods, as well as by backbone architecture. Our results (gray-shaded) are averaged over 10,000 episodes. "-" signifies the result is unavailable.

Data augmentation follows [62], and includes random cropping, color jitter and random horizontal flipping.

META-DATASET: For the newly introduced META-DATASET, we reimplement the data pipeline from scratch in PyTorch⁵ and we reproduce all compared methods in our framework.

(Footnote 5: We found the original TensorFlow implementation prohibitively slow when plugged into a PyTorch code base. We make our reimplementation publicly available at https://github.com/mboudiaf/pytorch-meta-dataset.)

For non-episodic methods, we train a ResNet-18 for 100,000 iterations on the train split of ImageNet ILSVRC 2012 [43]. Except for the total number of iterations, we use the exact same training procedure as for the standard benchmarks. We also reproduce the ProtoNet [45] episodic baseline, for which we also train a ResNet-18 for 100,000 episodes, with the same hyperparameters and augmentations as the non-episodic methods. Note that, contrary to [50], we train with fixed-size episodes, as doing otherwise would represent an unfair learning bias for the episodic method (i.e., knowing the test-time task-generation process prior to testing). Each training iteration comprises two 5-way, 5-shot episodes with 20 query shots, such that the number of samples processed in each batch (250) matches the batch size of the non-episodic methods (256).

Comparison on standard benchmarks

We first evaluate our methods TIM-GD and TIM-ADM on the widely adopted mini-ImageNet, tiered-ImageNet and CUB benchmark datasets, in the most common 1-shot 5-way and 5-shot 5-way scenarios, with 15 query shots for each class. Results are reported in Table 1, and are averaged over 10,000 episodes, following [55]. We can observe that both TIM-GD and TIM-ADM yield state-of-the-art performances, consistently across all standard datasets, scenarios and backbones.

TABLE 2: Results for the domain-shift setting mini-ImageNet → CUB. All methods use a ResNet-18 as backbone. The results obtained by our models (gray-shaded) are averaged over 10,000 episodes.
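Throughout, accuracies are reported as averages over many random episodes. A minimal sketch of this evaluation protocol is shown below; the helper name and the 95% normal-approximation interval are our own convention, not the paper's tooling.

```python
import numpy as np

def evaluate(run_episode, n_episodes=10_000):
    """Average accuracy over random episodes with a 95% confidence interval.

    `run_episode` is any callable that samples one few-shot task,
    runs inference (e.g., tim_gd) and returns the query accuracy.
    """
    accs = np.array([run_episode() for _ in range(n_episodes)])
    ci95 = 1.96 * accs.std() / np.sqrt(n_episodes)
    return accs.mean(), ci95
```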
Beyond standard benchmarks

Impact of domain shift: Chen et al. [7] recently showed that the performance of most meta-learning methods may drop drastically when a domain shift exists between the base training data and the test data. Surprisingly, the simplest discriminative baseline exhibited the best performance in this case. Therefore, we evaluate our methods in this challenging scenario. To this end, we simulate a domain shift by training the feature encoder on mini-ImageNet while evaluating the methods on CUB, similarly to the setting introduced in [7]. TIM-GD and TIM-ADM beat previous methods by significant margins in the domain-shift scenario, consistently with our results on the standard few-shot benchmarks, thereby demonstrating an increased potential of applicability to real-world situations.

Increasing the number of ways: Most few-shot papers only evaluate their method in the usual 5-way scenario. Nevertheless, [7] showed that meta-learning methods could be beaten by their discriminative baseline when more ways were introduced in each task. Therefore, we also provide results of our method in the more challenging 10-way and 20-way scenarios on mini-ImageNet. These results, which are presented in Table 3, show that TIM-GD outperforms other methods by significant margins in both settings.

Random tasks and domain shift: More recently, META-DATASET [50] was introduced to provide a more realistic evaluation of few-shot methods. META-DATASET combines randomness in both the number of samples and the number of ways, as well as domain-shift scenarios. To first validate our PyTorch implementation, we provide a comparison between the performance of the SimpleShot [55] baseline obtained with the original implementation and with our implementation in Table 4. We found a significant difference of 3% on average, which we eventually identified to be due to the absence of anti-aliasing when resizing images in the original implementation of [50]. More details on this can be found in the supplementary material. To provide the fairest comparison possible, we reproduce all methods with our implementation. The results are provided in Table 5. TIM-GD appears to be the best overall performing method, followed by TIM-ADM. The simple inductive Finetune baseline achieves impressive performance, even above the transductive method BD-CSPN [33]. Note that the episodic ProtoNet baseline performs dramatically worse than the other inductive baselines, which we hypothesize is due to the fact that it was trained on fixed-size episodes but tested on random tasks.

Ablation study

Influence of each term: We now assess the impact of each term⁶ in our loss in Eq. (3) on the final performance of our methods. The results are reported in Table 6. We observe that integrating the three terms in our loss consistently outperforms any other configuration. Interestingly, removing the label-marginal entropy, Ĥ(Ŷ_Q), reduces the performances significantly for both TIM-GD and TIM-ADM, particularly when only the classifier weights W are updated and the feature extractor φ is fixed. Such behavior could be explained by the following fact: the conditional entropy term, Ĥ(Ŷ_Q|X_Q), may yield degenerate solutions (assigning all query samples to a single class) on numerous tasks when used alone. This emphasizes the importance of the label-marginal entropy term Ĥ(Ŷ_Q) in our loss (3), which acts as a powerful regularizer preventing such trivial solutions.
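A quick numerical illustration of the failure mode discussed above: the conditional entropy alone happily accepts the degenerate one-class assignment, while the marginal-entropy term is exactly what penalizes it. The numbers below are a toy construction of ours, not an experiment from the paper.

```python
import torch

def h_cond(p, eps=1e-12):
    return -(p * torch.log(p + eps)).sum(dim=1).mean()   # H(Y|X) estimate

def h_marg(p, eps=1e-12):
    m = p.mean(dim=0)
    return -(m * torch.log(m + eps)).sum()               # H(Y) estimate

collapsed = torch.zeros(75, 5)
collapsed[:, 0] = 1.0                                    # all queries -> class 0
balanced = torch.eye(5).repeat_interleave(15, dim=0)     # confident and balanced

for name, p in [("collapsed", collapsed), ("balanced", balanced)]:
    print(name, float(h_cond(p)), float(h_marg(p)))
# Both assignments are perfectly confident (H(Y|X) = 0), but only the balanced
# one keeps H(Y) at its maximum log(5); the collapsed one has H(Y) = 0, so
# maximizing H(Y) - alpha * H(Y|X) rules the degenerate solution out.
```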
Fine-tuning the whole network vs the classifier only: While our TIM-GD and TIM-ADM optimize w.r.t. W and keep the base-trained encoder f_φ fixed at inference, the authors of [10] fine-tuned the whole network {W, φ} when performing their transductive entropy minimization. To assess both approaches, we add to Table 6 a variant of TIM-GD in which we fine-tune the whole network {W, φ}, using the same optimization procedure as in [10]. We found that, besides being much slower, fine-tuning the whole network for our objective in Eq. (3) degrades the performances, as also conveyed by the convergence plots in Figure 3. Interestingly, when fine-tuning the whole network {W, φ}, the absence of Ĥ(Ŷ_Q) in the entropy-based loss CE + Ĥ(Ŷ_Q|X_Q) does not cause the same drastic drop in performance as observed earlier when optimizing with respect to W only. We hypothesize that the network's intrinsic regularization (such as batch normalization) and the use of small learning rates, as prescribed by [10], help the optimization process, preventing the predictions from approaching the vertices of the simplex, where the entropy's gradients diverge.

(Footnote 6: The W- and q-updates of TIM-ADM associated to each configuration can be found in the supplementary material.)

TABLE 4: Comparison with the original implementation of META-DATASET [50]. We report the results of SimpleShot [55] with a ResNet-18. We found that activating anti-aliasing when resizing images to 84x84 in the original code of [50] (deactivated by default) can significantly improve the performances. More details can be found in the supplementary material. Results are averaged over 600 random episodes.

Inference run-times

Transductive methods are generally slower at inference than their inductive counterparts, with run-times that are, typically, several orders of magnitude larger. In Table 7, we measure the average adaptation time per few-shot task, defined as the time required by each method to build the final classifier, for a 5-shot 5-way task on mini-ImageNet using the WRN28-10 network. Table 7 conveys that our ADM optimization gains one order of magnitude in run-time over our gradient-based method, and more than two orders of magnitude in comparison to [10], which fine-tunes the whole network. Note that TIM-ADM still remains slower than the inductive baseline. Our methods were run on the same GTX 1080 Ti GPU, while the run-time of [10] is directly reported from the paper.

CONCLUSION

TIM inference establishes new state-of-the-art results on the standard few-shot benchmarks, as well as in more challenging scenarios, with random numbers of classes and samples, and domain shifts. We used feature extractors based on a simple base-class training with the standard cross-entropy loss, without resorting to the complex meta-training schemes that are often used and advocated in the recent few-shot literature. TIM is modular: it can be plugged on top of any feature extractor and base training, regardless of how the training was conducted. Therefore, while we do not claim that the very challenging few-shot problem is solved, we believe that our model-agnostic TIM inference should be used as a strong baseline for future few-shot learning research.

APPENDIX A: PROOF OF PROPOSITION 1

A.1 Preliminary results

We first derive some results that will be needed in the main proof.

Lemma A.1 (Soft classifier vs hard classifier). The following relations hold: where P_Δ = P(Ŷ ≠ y*(X)), and δ(·) is a strictly increasing function on the restricted domain [0, (|Y| − 1)/|Y|].
Proof: Let us introduce the error variable E := 1{Ŷ ≠ y*(X)}. Then, we have: because the variable (Y, E) can only contain more information than Y alone. Using the chain rule for conditional mutual information, we can write: For ease, let us note P(E = 1) = P_Δ. Then, one can verify that δ(·), defined through the binary entropy, is a strictly increasing function on the domain [0, (|Y| − 1)/|Y|]. We can prove the second inequality with similar arguments.

Lemma A.2 (Continuity of entropy [8]). For any arbitrary discrete random variables Y and Ŷ with probability distributions P_Y and P_Ŷ, respectively, it follows that: where ‖·‖₁ denotes the total variation distance.

Lemma A.3. Let us consider a soft classifier Ŷ|X, and assume that this classifier has a diagonally dominant confusion matrix. Then the following result holds: where P_Δ = P({Ŷ ≠ y*(X)}), and δ(·) is a strictly increasing function on the restricted domain [0, 1 − 1/|Y|].

Proof: We will start from a result of [26] that relates the conditional entropy to the MAP error probability: where φ* is a piecewise-linear convex function, and P_e^MAP is the error probability of the optimal MAP estimator of Y given y*(X), i.e.: In other words, for a given sample (x, y), this estimator is the best one at guessing the value of y given y*(x) only. Note that there is a difference a priori between P_e and P_e^MAP. Still, they are not completely unrelated:

• First, it always holds that P_e^MAP ≤ P_e. This can easily be seen because the identity estimator f(y*(X)) = y*(X) already achieves error P_e. Therefore, the best estimator can only be equal or better. Intuitively, the MAP estimator allows for "a posteriori" correction of the y*(X) predictions.

• Second, it can be shown that if the confusion matrix of the classifier Ŷ is diagonally dominant, i.e., for any y ≠ y': P(y*(X) = y, Y = y) ≥ P(y*(X) = y, Y = y') (17), then P_e^MAP = P_e. Condition (17) is exactly what we initially assumed, such that we can write P_e^MAP = P_e in the rest of the proof. Furthermore, if we consider the common case where P_e ≤ 1/2, then φ*(P_e) = 2·P_e. Putting it all together, we finally obtain that:

Finally, we need to relate I(Y; y*(X)) and I(Y; Ŷ). In order to do this, we use Lemma A.1, which allows us to write that: where P_Δ = P({Ŷ ≠ y*(X)}) reflects the uncertainty of the soft decision (for a very peaked soft decision, Ŷ = y*(X) with high probability). Therefore, we have shown that:

A.2 Proof

We now derive the proof of Proposition 1.

Proof: In Lemma A.3, we showed that: We begin by upper bounding the mutual information and the entropy as follows: where (22) follows from the data processing inequality. We now bound the absolute difference in (22) as follows: where P_Y is the prior probability distribution on the labels and P_Ŷ is the marginal probability on the labels computed from the data distribution. In order to show this, consider the following chain of inequalities: where ‖·‖₁ denotes the total variation distance, i.e., ‖P_Y − P_Ŷ‖₁; (24) follows from the continuity Lemma A.2 and (25) follows from Pinsker's inequality [8, Problem 3.18], which implies that: Provided that (ln 2)·D_KL(Ŷ‖Y)/2 ≤ (|Y| − 1)/|Y|, by combining expressions (25) and (22), we have shown that: We now bound the conditional entropy H(Ŷ|Y) in (27). We have assumed that there exists ε > 0 such that: This relation allows us to obtain a tighter upper bound on H(Ŷ|Y) than the naive log(K): One can check that, as the confusion matrix becomes perfectly diagonal (i.e., ε → 0), g(ε) goes to 0.
It remains to upper bound the uncertainty of the soft classifier, denoted by P_Δ. To this end, we begin by observing: At this point, we recall that y*(x) := arg max_y P_{Ŷ|X}(y|x), such that: Since this holds for every pair (x, y) ∈ X × Y, it holds in expectation that: with equality if Ŷ = y*(X) almost surely. We notice that, by Jensen's inequality: where equality holds if P_{Ŷ|X} is degenerate. From expressions (30) to (33), we have: Therefore, and again provided that H(Ŷ|X) ≤ (|Y| − 1)/|Y|, we can write: Finally, by combining (21), (27), (29) and (35) together, we obtain the claimed bound.

APPENDIX B: PROOF OF PROPOSITION 2

Proof. Let us start from the initial optimization problem: We can reformulate problem (36) using the ADM approach, i.e., by introducing auxiliary variables q = [q_ik] ∈ R^{|Q|×K} and enforcing the equality constraint q = p, with p = [p_ik] ∈ R^{|Q|×K}, in addition to pointwise simplex constraints: We can solve the constrained problem (38) with a penalty-based approach, which encourages the auxiliary pointwise predictions q_i = [q_i1, . . . , q_iK] to be close to our model's posteriors p_i. To add a penalty encouraging the equality constraints q_i = p_i, we use the Kullback-Leibler (KL) divergence, which is given by:

$$D_{KL}(q_i \,\|\, p_i) = \sum_{k=1}^{K} q_{ik} \log \frac{q_{ik}}{p_{ik}}.$$

Thus, our constrained optimization problem becomes: where β > 0 is the Lagrange multiplier associated with penalty (40). As said in the main text, we treat β as a fixed hyperparameter in practice.

APPENDIX C: PROOF OF PROPOSITION 3

Proof. Recall that we consider a softmax classifier over distances to the weights W = {w_1, . . . , w_K}. To simplify the notations, we omit the dependence upon φ in what follows and write z_i = f_φ(x_i)/‖f_φ(x_i)‖, such that:

$$p_{ik} \propto \exp\Big(-\frac{\tau}{2}\,\lVert z_i - w_k \rVert^2\Big).$$

Without loss of generality, we use τ = 1 in what follows. Plugging the expression of p_ik into Eq. (9) and grouping terms together, we get: Now, we can solve our problem approximately by alternating two sub-steps: one sub-step optimizes w.r.t. the classifier weights W while the auxiliary variables q are fixed; the other fixes W and updates q.

• q-update: With the weights W fixed, the objective is convex w.r.t. the auxiliary variables q_i (a sum of linear and convex functions) and the simplex constraints are affine. Therefore, one can minimize this constrained convex problem for each q_i by solving the Karush-Kuhn-Tucker (KKT) conditions⁷. The KKT conditions yield closed-form solutions for both the primal variable q_i and the dual variable (Lagrange multiplier) corresponding to the simplex constraint Σ_j q_ij = 1. Interestingly, the negative entropy of the auxiliary variables, i.e., Σ_k q_ik log q_ik, which appears in the penalty term, implicitly handles the non-negativity constraints q_i ≥ 0. In fact, this negative entropy acts as a barrier function, restricting the domain of each q_i to non-negative values, which avoids extra dual variables and Lagrangian-dual inner iterations for the constraints q_i ≥ 0. As we will see, the closed-form solutions of the KKT conditions satisfy these non-negativity constraints without explicitly imposing them. In addition to non-negativity, for each point i, we need to handle the probability simplex constraint Σ_k q_ik = 1. Let γ_i ∈ R denote the Lagrange multiplier corresponding to this constraint. The KKT conditions correspond to setting the following gradient of the Lagrangian function to zero, while enforcing the simplex constraints:

(Footnote 7: Note that strong duality holds, since the objective is convex and the simplex constraints are affine. This means that the solutions of the KKT conditions minimize the objective.)
This yields: Applying the simplex constraint Σ_j q_ij = 1 to (45), the Lagrange multiplier γ_i verifies: Hence, plugging (46) into (45) yields: Using the definition of q̄_k, we can decouple this equation, which gives a closed-form expression for q̄_k. Plugging this back into Eq. (47), we get: Notice that q_ik ≥ 0; hence the solution fulfils the positivity constraint of the original problem. Therefore, by updating q^(t+1) using (50), we can guarantee that the solution q^(t+1) satisfies the constraints.

• W-update: Without loss of generality, we derive the update for w_k, k ∈ {1, . . . , K}. Omitting the terms that do not involve w_k, Eq. (43) reads: One can notice that objective (43) is not convex w.r.t. w_k. In fact, it can be split into convex and non-convex parts as in Eq. (52). Thus, we cannot simply set the gradients to 0 to get the optimal w_k.

Concavity of C_S and C_Q: We show in what follows that, in practical cases, the non-convex parts are actually concave. To see this, we derive the Hessian of C_S. Computing the derivative of p_ik and putting (53) and (54) together, we have: By assumption, (56) is negative semi-definite, which allows us to say that C_S is a concave function of w_k, for all k. The exact same reasoning applies to C_Q.

Concave-convex procedure: Given the concavity of C_S and C_Q, we find ourselves in the well-known convex-concave setting. Concave-convex techniques proceed as follows: for a function in the form of a sum of a concave term and a convex term, the concave part is replaced by its first-order approximation, while the convex part is kept as is. The result forms an auxiliary bound on the function W → L_ADM(W, q). In our case, linearizing the concave part C_S of the objective at the current solution W^(t) yields: The exact same thing can be done with C_Q. Therefore, the initial objective (52) is upper bounded by: with equality if w_k = w_k^(t). Now, the whole benefit of E is that it is strictly convex in w_k, and its global optimum can be obtained in closed form by simply setting its gradient to 0: Setting the right-hand side of (58) to 0 exactly recovers the update (11), and we can guarantee that the solution W^(t+1) improves the initial objective.

APPENDIX D: DETAILS OF THE ADM ABLATION

In Table 8, we provide the W- and q-updates for each configuration of the TIM-ADM ablation study, whose results were presented in Table 6. The proof for each of these updates is very similar to the proof of Proposition 3 detailed in section C; therefore, we do not detail it here.

TABLE 8: The W- and q-updates for each case of the ablation study. "-" refers to the updates in Proposition 3. "NA" refers to non-applicable.

APPENDIX E: PROOF OF PROPOSITION 4

E.1 Background

In this section, we introduce the minimal set of elements required to understand Zangwill's theory, upon which our own convergence result is based. We first introduce the central concept of a point-to-set map ψ, which maps a point θ ∈ Θ to a set of points ψ(θ) ⊂ Θ. Intuitively, ψ has to be understood as representing one iteration of the algorithm considered, which, from a point θ^(t) in the parameter space Θ, outputs a new point θ^(t+1) ∈ ψ(θ^(t)) from a set of (local minima) points. We hereby recall the notion of closedness, which generalizes the concept of continuity from standard point-to-point maps to point-to-set maps: Let us also assume that ∀n ∈ N, θ_n ∈ ψ(θ_n). The point-to-set map ψ is said to be closed on the set Θ if it is closed at every point of Θ.
As noted in [46], the general idea for proving the convergence of an iterative algorithm is to properly set Γ and L. The natural choice for Γ is the set of fixed points of the algorithm, Γ = {θ ∈ Θ | ψ(θ) = {θ}}, and for L the loss the algorithm minimizes. With that in mind, assumption (2) in Theorem E.1 simply ensures that, while the algorithm has not reached the stationary points Γ, the loss is strictly decreasing. We first state a result from [15] that will be used in the main proof: Then, ψ is closed at a if ψ(a) is nonempty.

E.2 Proof

We now start the main proof of Proposition 4.

Proof. The idea of the proof is to apply Theorem E.1 with the right ingredients.

Definition of ψ: First, we define the point-to-set maps associated to the q- and W-updates: ψ_W(W^(t), q^(t+1)) = (W^(t+1), q^(t+1)), where W^(t+1) and q^(t+1) are given by (11) and (10), respectively. The point-to-set map associated to the algorithm is then simply defined as the composition of the two. Then, we define Γ as the set of fixed points of ψ. We now define our parameter space. Given that the loss keeps decreasing, {(W^(t), q^(t))}_{t∈N} has to live in the following parameter space: where L_0 = {W | L_ADM(W, q_0) ≤ L_ADM(W_0, q_0)} represents a sublevel set of L_ADM(·, q_0).

Assumption (1): Using the continuity of L in both W and q, together with Lemma E.2, the closedness of ψ_W and ψ_q follows.

APPENDIX F: ANTI-ALIASING FOR META-DATASET

In our experimental section, we showed that our PyTorch implementation of META-DATASET [50] yields significant gains over the original implementation. We found this to be caused by a simple but important implementation detail: the resize transform. In particular, two elements of the resizing have different default behaviors in the TensorFlow and PyTorch frameworks:

• PyTorch resizes an image with dimensions (H, W) to a fixed size R by multiplying both dimensions by max(R/H, R/W), which we can then complement with a central crop to obtain an R×R image with a preserved aspect ratio. In contrast, TensorFlow resizes the image by scaling with a factor min(R/H, R/W) and padding the rest with zeros.

• More importantly, PyTorch uses anti-aliasing by default in its resize function, while TensorFlow does not. This typically leads to seemingly more pixelated images, which can produce significant differences on datasets where tiny details matter a lot (for instance, the beak of a bird in CUB). A visual illustration of this phenomenon is presented in Figure 4.

Fig. 4: Visual inspection of resized bird images from CUB [56] with (right) and without (left) anti-aliasing. The left pictures tend to have a more pixelated and less smooth appearance than those on the right. While this difference may look subtle to our eye, it certainly is not for a network.
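For reference, a hedged sketch of the PyTorch-side preprocessing described above (the exact transform composition is our assumption): torchvision's `Resize` with an integer size scales the shorter side to R, which is the same as multiplying both dimensions by max(R/H, R/W), and a center crop then yields the R×R image. The `antialias` keyword applies to tensor inputs in recent torchvision releases; PIL inputs are anti-aliased by default.

```python
from torchvision import transforms

R = 84  # target resolution used on the standard benchmarks

# Shorter side -> R (i.e., scale by max(R/H, R/W)), then center-crop to R x R,
# with anti-aliasing enabled, preserving the aspect ratio.
resize = transforms.Compose([
    transforms.Resize(R, antialias=True),
    transforms.CenterCrop(R),
])
```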
Randomised comparison of drug-eluting versus bare-metal stenting in patients with non-ST elevation myocardial infarction

Objective The superiority of drug-eluting stents (DES) over bare-metal stents (BMS) in patients with ST elevation myocardial infarction (STEMI) is well studied; however, randomised data in patients with non-ST elevation myocardial infarction (NSTEMI) are lacking. The objective of this study was to investigate whether stenting with everolimus-eluting stents (EES) safely reduces restenosis in patients with NSTEMI as compared to BMS. Methods ELISA-3 patients were asked to participate in the angiographic substudy and were randomised to DE (Xience V) or BM (Vision) stenting (ELISA-3 group). The primary end point was minimal luminal diameter (MLD) at 9-month follow-up angiography. In addition, 296 patients with NSTEMI who were excluded or did not want to participate in the ELISA-3 trial (RELI group) were randomised to DE or BM stenting and underwent clinical follow-up only (major adverse cardiac events (MACE), stent thrombosis (ST)). A pooled analysis was performed to assess an effect on clinical outcome. Results 178 of 540 ELISA-3 patients participated in the angiographic substudy. MLD at 9 months angiography was 2.37±0.63 mm (DES) versus 1.84±0.62 mm (BMS), p<0.001. Binary restenosis occurred in 1.9% in the DES group versus 16.7% in the BMS group (RR 0.11, 95% CI 0.02 to 0.84, p=0.007). In the pooled analysis, the incidence of MACE, target vessel revascularisation and ST at 2 years follow-up in the DES versus BMS group was 12.5% versus 16.0% (p=0.28), 4.0% versus 10.4% (p=0.009) and 1.3% versus 3.0% (p=0.34), respectively. Conclusions In patients with NSTEMI, use of EES is safe and decreases both angiographic and clinical restenosis as compared to BMS (http://www.isrctn.com/search?q=39230163). Trial registration number 39230163; Post-results.

INTRODUCTION

Percutaneous coronary intervention with bare-metal stent implantation is associated with high restenosis rates as compared to the first-generation drug-eluting stents (DES). [1][2][3][4] The second-generation everolimus-eluting stent (EES) has shown a strong antiproliferative effect, with a non-inferior efficacy profile compared to the first-generation DES but with an improved safety profile. While the effect of DE versus BM stenting in ST elevation myocardial infarction (STEMI) populations has been extensively evaluated, consistently showing that the second-generation DES are as safe as bare-metal stents (BMS) in terms of stent thrombosis while reducing restenosis rates, [5][6][7][8] there are no randomised studies comparing DES versus its BMS counterpart in the setting of non-STEMI (NSTEMI). This subset of patients, however, comprises up to 50% of patients included in some stent trials, particularly those with an all-comer design. This evidence has translated into a class I, level of evidence A recommendation in current clinical guidelines for the use of new-generation DES over BMS.9

KEY QUESTIONS

What is already known about this subject?
▸ The superiority of drug-eluting stents (DES) over bare-metal stents (BMS) in patients with ST elevation myocardial infarction (STEMI) is well studied.

What does this study add?
▸ This trial provides randomised data showing that, also in non-STEMI (NSTEMI), everolimus-eluting stents are safe and decrease restenosis compared to BMS.

How might this impact on clinical practice?
▸ Considering that randomised data on the usage of second-generation DES in patients with NSTEMI are scarce, our study provides more evidence that our current clinical practice of treating patients with NSTEMI with DES is safer and more efficient than treating with BMS.

Montalescot et al10 demonstrated that patients with STEMI and NSTEMI have similar in-hospital and long-term prognoses, as well as similar independent correlates of outcome, despite different in-hospital management and despite differences in lesion pathology. In STEMI, the culprit artery is usually occluded by a red thrombus, whereas in NSTEMI the culprit artery is usually patent, with a non-occlusive white thrombus. Patient characteristics also differ: the NSTEMI population is older and has a higher cardiovascular risk profile, more often with diabetes and hypertension. Patients with NSTEMI have more extensive coronary artery disease than patients with STEMI and more often a personal history of coronary heart disease.11 In this randomised study, we focus on the effects of the use of an EES on the incidence of restenosis and on long-term safety in terms of MACE in this population with NSTEMI, treated with either DES or its bare-metal counterpart.

METHODS

In this article, we describe the results of the ELISA-3 angiographic substudy and the ELISA prospective Registry (RELI). The rationale, design and primary results of ELISA-3 have been previously described.12 Briefly, the ELISA-3 trial is a prospective multicentre randomised controlled trial, in which 542 patients, hospitalised with non-ST elevation acute coronary syndrome (NSTE-ACS), were randomised to either an immediate (angiography and revascularisation if appropriate, <12 hours) or a delayed invasive strategy (>48 hours after randomisation). This prespecified substudy investigates whether stenting with EES safely decreases the incidence of restenosis compared to stenting with a BMS with the same stent frame design. Patients were eligible if they were hospitalised with ischaemic chest pain or dyspnoea at rest, with the last episode occurring 24 hours or less before randomisation, and had at least two of the following three high-risk characteristics: (1) evidence of extensive myocardial ischaemia on ECG (shown by new cumulative ST depression >5 mm or temporary ST segment elevation in two contiguous leads <30 min), (2) elevated biomarkers (troponin T >0.10 μg/L or myoglobin >150 μg/L) or elevated CK-MB fraction (>6% of total CK), (3) age above 65 years. Exclusion criteria were persistent ST segment elevation, symptoms of ongoing myocardial ischaemia despite optimal medical therapy, contraindication for diagnostic angiography, active bleeding, cardiogenic shock, acute posterior infarction and life expectancy <1 year. During the same study period, patients with NSTEMI who did not want to participate in, or who did not meet the inclusion criteria for, high-risk NSTEMI of the ELISA-3 study were recruited into the ELISA prospective registry. Patients in both the ELISA-3 trial and the ELISA registry who underwent coronary angiography and were deemed appropriate for percutaneous coronary intervention (PCI) and stenting underwent randomisation in the catheterisation laboratory to either EE or BM stenting. Patients with multiple lesions in need of more than one stent were treated with the same type of stent for all lesions.
Patients received dual antiplatelet therapy (acetylsalicylic acid and clopidogrel) for the duration of 1 year. Between July 2007 and June 2012, 542 patients were randomised in the ELISA-3 trial. In total, 344 of these patients were eligible for PCI, and 178 of these patients underwent a second randomisation to EES (n=87) or BMS (n=91). In the same period, 296 patients in the ELISA registry group were also randomised (EES n=147, BMS n=149). Patients in the ELISA-3 group were planned to undergo coronary angiography at 9 months, whereas patients in the prospective registry were followed for 2 years for clinical end points without planned follow-up angiography (figure 1). The trial was conducted in six Dutch hospitals, of which one had 24-hour facilities for (primary) PCI and coronary artery bypass graft (CABG) surgery. The study complied with the Declaration of Helsinki, was approved by the ethics committee of Isala, Zwolle, the Netherlands, and all patients gave written informed consent before entering the study or the registry. The study was registered in the ISRCTN Register (ISRCTN39230163).

Randomisation and treatment

Patients were randomised by a closed-envelope system to blinded stent designs. Operators were blinded to the device used, and the clinical end points were adjudicated by investigators blinded to patients' treatment allocation (flow chart: summary of the study design). Coronary angioplasty was performed according to the local standards of the intervention centre. All patients were treated according to the guidelines. Concomitant medication included a loading dose of aspirin (500 mg orally or intravenously), clopidogrel (600 mg orally) and 5000 IU unfractionated heparin intravenously as soon as possible after diagnosis. Tirofiban (bolus of 25 μg/kg followed by continuous infusion of 0.15 μg/kg/min), nitrates, β-blockers and calcium channel blockers were given at the discretion of the investigator.

Definitions

Procedure time was defined as the time interval between placement of the arterial sheath and removal of the guiding catheter. Clinical procedural success was defined as immediate angiographic success (defined as a diameter stenosis post-procedure of <50% (visual assessment) and TIMI 3 flow) without major in-hospital complication, including death, myocardial infarction (MI), stent thrombosis or emergency coronary artery bypass surgery. MI was defined by the presence of new Q waves or a creatine kinase level or MB fraction at least twice the upper limit of normal. Lesions were classified according to the definitions recommended by the American College of Cardiology/American Heart Association task force. Stent thrombosis was defined as complete occlusion of the stented lesion at follow-up angiography or at recurrent angiography performed because of recurrent chest pain and signs of ischaemia.

End points

The primary end point of the ELISA-3 angiographic substudy was the extent of restenosis, expressed as the difference in minimal luminal diameter at 9-month follow-up angiography, as assessed by an independent core laboratory. We conducted a pooled analysis of the ELISA-3 and the prospective ELISA registry patients, in which the incidence of definite stent thrombosis at 2 years follow-up was the key secondary and safety end point. The incidence of MACE at 2 years follow-up was an exploratory end point in this pooled analysis.
Qualitative and quantitative coronary analysis
Coronary angiograms were performed before angioplasty, immediately after angioplasty and at 9-month follow-up. Standard acquisition procedures were followed for qualitative and quantitative coronary angiography analysis. To improve the accuracy and reproducibility of measurements, intracoronary isosorbide-dinitrate (1-3 mg) was given before the initial and final post-stent placement angiograms. Angiographies were recorded on CD-ROM. Matched orthogonal views were used for quantitative analysis at each control. Dye-filled guiding catheters were used for magnification calibration. Data collection included assessment of TIMI flow grade, lesion eccentricity, estimation of thrombus load and AHA/ACC classification. An independent laboratory (DIAGRAM, Zwolle, the Netherlands) performed routine quantitative coronary angiography measurements using the Coronary Angiography Analysis System (CAAS II System). Two orthogonal angiographic views with minimised vessel foreshortening were obtained, and the angiogram showing the most severe stenosis was selected for quantitative coronary analysis. Postprocedure and follow-up angiograms, which duplicate the initial orthogonal views, were obtained after the removal of the balloon and guidewire.

Follow-up
Coronary angiography was planned at 9 months in the ELISA-3 angiographic substudy patients. Coronary angiography could be performed prematurely on the basis of clinical indications; it was used as the follow-up angiogram in the case of restenosis or if performed after 4 months. When it was performed within 4 months without evidence of restenosis, angiographic control was repeated at 9 months. All major clinical events, including death, MI, readmission to hospital for unstable angina pectoris and the need for additional (ischaemia-driven) revascularisation of the target vessel, were monitored at the time of repeated angiography or by phone at 9 and 24 months for all patients and adjudicated by two independent physicians blinded to randomised treatment.

Figure 1 Flow diagram of study design. BMS, bare-metal stent; EES, everolimus-eluting stent; MACE, major adverse cardiac events (composite of death, myocardial infarction and target vessel revascularisation); PCI, percutaneous coronary intervention.

Statistical analysis
Previous studies have shown that it is reasonable to assume that the MLD measurement after angioplasty follows a normal distribution. It is expected that in all groups the mean will be ∼1.9 mm and the SD will be ∼0.5 mm. Allowing for a type I error of 5% and a dropout rate of 20%, a sample of 280 patients (140 per group) will give 85% power to prove superiority of coated stenting compared to the use of a non-coated stent. Data were analysed according to the intention-to-treat principle. Continuous variables were expressed as means±SD and were compared between the intervention groups using a Mann-Whitney U test. Categorical data were described by proportions and compared with the χ2 or Fisher's exact test. Logistic regression was used to calculate the p value of the interaction between the effect of the intervention and the prespecified subgroups on the primary end point. All tests were two-sided and an α of 5% was used. Statistical analysis was performed with SPSS (V.20; SPSS, Chicago, Illinois, USA). MACE survival Kaplan-Meier curves were obtained and compared by means of the log-rank test.
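The sample-size reasoning above can be checked numerically. The following Python sketch (using statsmodels, an assumption of this illustration; the software actually used for the design calculation is not stated) back-solves the MLD difference detectable with 85% power under the stated assumptions, and then evaluates the power available with the 178 patients randomised in the substudy, which comes out close to the ∼78% quoted in the limitations.

```python
# Minimal sketch of the power calculation described above; illustrative only.
# Assumptions: two-sided two-sample comparison of mean MLD, SD ~0.5 mm,
# alpha = 5%, 140 patients per group with the 20% dropout applied up front.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

SD_MLD = 0.5                      # expected SD of MLD in mm (from the text)
n_analysed = 140 * (1 - 0.20)     # 112 evaluable patients per group

# Back-solve the standardized effect size detectable with 85% power.
d = analysis.solve_power(nobs1=n_analysed, alpha=0.05, power=0.85,
                         ratio=1.0, alternative="two-sided")
print(f"detectable difference ~ {d * SD_MLD:.2f} mm (d = {d:.2f})")  # ~0.20 mm

# Power with the 178 patients (89 per group) actually randomised in the
# angiographic substudy: roughly 0.75-0.78, matching the quoted ~78%.
power_178 = analysis.power(effect_size=d, nobs1=89, alpha=0.05,
                           ratio=1.0, alternative="two-sided")
print(f"power with 89 patients/group: {power_178:.0%}")
```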
Baseline characteristics
Between July 2007 and June 2012, 178 ELISA-3 patients (87 EES, 91 BMS) and 296 ELISA registry patients (147 EES, 149 BMS) were randomised. Baseline characteristics in the ELISA-3 population were well balanced between the treatment groups (table 1). There was a significant difference in age between the ELISA-3 and the ELISA registry group (68.0±10.9 vs 63.6±12.5 years, p<0.001); other baseline characteristics did not differ between ELISA-3 and the registry group (tables 2 and 3).

Angiographic outcome
Follow-up angiography was performed at 9 months in 124 (70%) of the ELISA-3 patients. Baseline characteristics of patients who declined follow-up angiography were similar to those of patients who had a follow-up angiography. The primary end point, the degree of restenosis expressed as MLD, was significantly different when comparing DES to BMS (2.37±0.63 mm vs 1.84±0.62 mm, p<0.001) (table 4). The incidence of binary restenosis, defined as a diameter stenosis at 9-month follow-up of more than 50%, was 1.9% in the DES group vs 16.7% in the BMS group (RR 0.11, 95% CI 0.02 to 0.84, p=0.01); a worked recomputation of this relative risk appears in the sketch below.

Clinical outcome
In the ELISA-3 group, clinical follow-up at 24 months was complete in 173 (97%) patients.

DISCUSSION
The main finding of this study was that the use of an everolimus-eluting second-generation DES was safe and decreased restenosis, angiographic as well as clinical, in patients with NSTEMI. In STEMI, Laarman et al 7 found no significant benefit associated with the use of first-generation paclitaxel-eluting stents in primary PCI as compared with uncoated stents with the same design. Spaulding et al, 13 however, found a significant reduction in the incidence of target-vessel failure at 1 year using a sirolimus-eluting stent, compared with uncoated stents. Rates of stent thrombosis were similar in the coated and uncoated stent groups in both studies. In the EXAMINATION trial, an all-comer trial in 1498 patients with STEMI comparing second-generation EES versus BMS, Sabate et al 14 showed that the rate of target lesion revascularisation and the rate of stent thrombosis were reduced in recipients of EES. The same result on stent thrombosis was found in a subgroup of patients with NSTE-ACS from the BASKET-PROVE trial; 15 however, neither trial was sufficiently powered for this end point and the latter was a post hoc analysis.

Although there is growing evidence that the cobalt-chromium (CoCr)-EES is safe, there is still debate about the relative safety of DES compared to BMS related to stent thrombosis. Pathological studies suggest that the permanent presence of polymers may result in chronic arterial inflammation, resulting in delayed endothelial healing and late thrombotic events. 16 A large meta-analysis in 2007 comparing BMS and first-generation DES strengthened concerns about late and very late stent thrombosis with paclitaxel-eluting stents. 17 Recently, however, it has been shown that second-generation polymers (ie, polyvinylidene fluoride-co-hexafluoropropene (PVDF-HFP)) used in current DES provide a more biocompatible surface than early-generation polymers, 18 and Kolandaivelu et al 19 showed in a controlled model of early ST that drug-eluting polymer-coated stents are even consistently less, not more, thrombogenic than matched bare metal platforms.
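Returning to the binary restenosis result reported under Angiographic outcome above, the relative risk and its confidence interval can be reproduced with the standard Katz log-interval method. The event counts below are assumptions chosen only to match the published rates (1.9% vs 16.7% among the 124 patients with follow-up angiography); the paper reports the rates, RR and CI but not the raw counts.

```python
# Minimal sketch of a relative risk with a Katz log 95% CI; illustrative only.
import math

a, n1 = 1, 52    # assumed restenosis events / patients, DES group (~1.9%)
b, n2 = 12, 72   # assumed restenosis events / patients, BMS group (~16.7%)

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # Katz log method
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f} to {hi:.2f})")
# -> RR ~0.12, CI ~(0.02 to 0.86); close to the reported RR 0.11
#    (0.02 to 0.84), with the residual gap due to the assumed counts.
```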
Continuous refinement in stent design and the development of thinner stent struts have resulted in significantly lower rates of stent thrombosis; thus, even larger sample sizes are nowadays required to accurately estimate differences between stents, and as such many RCTs are presently underpowered for this end point. For this reason, Palmerini et al 20 conducted a large network meta-analysis of RCTs comparing the risk of thrombosis between bare-metal, first-generation and second-generation DES. They reported a profound reduction of stent thrombosis with cobalt-chromium EES, compared with other DES as well as with BMS, at 2-year follow-up. These findings were corroborated by the results of another meta-analysis of 4896 patients comparing the cobalt-chromium EES with its uncoated, otherwise identical metallic counterpart, showing improvement in cardiovascular outcomes including cardiac survival, MI and overall stent thrombosis with the cobalt-chromium EES. 21

The issue of restenosis is often thought of as trivial, not having any influence on clinical end points, but there is evidence that in ∼10% of cases, patients with in-stent restenosis present with recurrent MI instead of just angina. 22 In our study, restenosis rates were significantly lower in the EES group at 9 months angiographic follow-up, which is consistent with findings in previous trials. Our study, however, is the first randomised trial to investigate the safety and efficacy of second-generation DES in a NSTEMI population. Patients with NSTEMI differ from those with STEMI. In STEMI, the culprit artery is usually occluded by a thrombus, whereas in NSTEMI the culprit artery is usually patent with a non-occlusive thrombus, but both conditions stem from the same pathophysiological process. 10 23 24 Moreover, patients with NSTEMI are older and have more comorbidity as compared to patients with STEMI, reflecting their worse long-term clinical outcome. This study shows that DE stenting in this patient population is safe and reduces long-term target vessel revascularisation.

Limitations of the study
Several limitations of the present study should be acknowledged. Most important was the lower than expected inclusion rate in the ELISA-3 angiographic substudy. When inclusion in the main study was finished, of 344 eligible patients only 178 were randomised in this angiographic substudy, giving ∼78% power to prove superiority of the DES, whereas our power calculations had anticipated the recruitment of 280 patients. Furthermore, we encountered a higher than expected loss to angiographic follow-up at 9 months. We conducted a pooled analysis of the ELISA-3 and the ELISA Registry patients to gain more power with regard to the safety of DE versus BM stenting in terms of clinical outcome; this study, however, was not powered to show differences in MACE.

Conclusion
In patients with NSTEMI, the use of an EES, a second-generation DES, is safe and decreases both angiographic and clinical restenosis as compared to a cobalt-chromium BMS.
Non-animal replacement methods for veterinary vaccine potency testing: state of the science and future directions

NICEATM and ICCVAM convened an international workshop to review the state of the science of human and veterinary vaccine potency and safety testing methods and to identify opportunities to advance new and improved methods that can further reduce, refine, and replace animal use. Six topics were addressed in detail by speakers and workshop participants and are reported in a series of six reports. This workshop report, the second in the series, provides recommendations for current and future use of non-animal methods and strategies for veterinary vaccine potency testing. Workshop participants recommended that future efforts to replace animal use give priority to vaccines (1) that use large numbers of animals per test and for which many serials are produced annually, (2) that involve significant animal pain and distress during procedures, (3) for which the functional protective antigen has been identified, (4) that involve foreign animal/zoonotic organisms that are dangerous to humans, and (5) that involve pathogens that can be easily spread to wildlife populations. Vaccines identified as the highest priorities were those for rabies, Leptospira spp., Clostridium spp., erysipelas, foreign animal diseases (FAD), poultry diseases, and fish diseases. Further research on the identification, purification, and characterization of vaccine protective antigens in veterinary vaccines was also identified as a priority. Workshop participants recommended priority research, development, and validation activities to address critical knowledge and data gaps, including opportunities to apply new science and technology. Recommendations included (1) investigations into the relative impact of various adjuvants on antigen quantification assays, (2) investigations into extraction methods that could be used for vaccines containing adjuvants that can interfere with antigen assays, and (3) review of the current status of rabies and tetanus human vaccine in vitro potency methods for their potential application to the corresponding veterinary vaccines. Workshop participants recommended enhanced international harmonization and cooperation and closer collaborations between human and veterinary researchers to expedite progress. Implementation of the workshop recommendations is expected to advance alternative in vitro methods for veterinary vaccine potency testing that will benefit animal welfare and replace animal use while ensuring continued protection of human and animal health.

Introduction
Veterinary vaccines contribute to improved human and animal health and welfare by preventing and controlling infectious agents that can cause disease and death. However, the testing necessary to ensure vaccine effectiveness and safety can involve large numbers of animals and significant pain and distress. In the United States, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) promote the scientific validation and regulatory acceptance of test methods that accurately assess the safety of chemicals and products while reducing, refining (less pain and distress), and replacing animal use. Accordingly, NICEATM and ICCVAM recently identified vaccine potency and safety testing as one of their four highest priorities [1].
ICCVAM is an interagency committee of Federal agencies that is charged by law with evaluating new, revised, and alternative test methods with regulatory applicability. ICCVAM members represent 15 U.S. Federal regulatory and research agencies that require, use, generate, or disseminate safety testing data. These include the Department of Agriculture (USDA), which regulates veterinary vaccines, and the Food and Drug Administration (FDA), which regulates human vaccines. ICCVAM is a permanent interagency committee of the National Institute of Environmental Health Sciences (NIEHS) under NICEATM. NICEATM administers ICCVAM, provides scientific and operational support for ICCVAM-related activities, and conducts international validation studies on promising new safety testing methods. NICEATM and ICCVAM serve a critical public health role in translating research advances from the bench into standardized safety testing methods that can be used in regulatory practice to prevent disease and injury.

To promote and advance the development and use of scientifically valid alternative methods for human and veterinary vaccine testing, NICEATM and ICCVAM organized the International Workshop on Alternative Methods to Reduce, Refine, and Replace the Use of Animals in Vaccine Potency and Safety Testing: State of the Science and Future Directions. The workshop was held at the National Institutes of Health in Bethesda, Maryland, on September 14-16, 2010. It was organized in conjunction with the European Centre for the Validation of Alternative Methods (ECVAM), the Japanese Center for the Validation of Alternative Methods (JaCVAM), and Health Canada. The workshop addressed the state of the science of human and veterinary vaccine potency and safety testing. Participants developed recommendations for future progress in three major areas: (1) in vitro replacement methods for potency testing; (2) reduction and refinement methods for potency testing; and (3) reduction, refinement, and replacement methods for vaccine safety testing [2]. Reports were prepared for each of the three topics for human vaccines and for each of the three topics for veterinary vaccines [3,4,5,6,7,8]. This report addresses methods and strategies for the replacement of animal use for potency testing of veterinary vaccines.

Goals and organization of the workshop
The goals of the international workshop were to (1) identify and promote the implementation of currently available and accepted alternative methods that can reduce, refine, and replace the use of animals in human and veterinary vaccine potency and safety testing; (2) review the state of the science of alternative methods and identify knowledge and data gaps that need to be addressed; and (3) identify and prioritize research, development, and validation efforts needed to address these gaps in order to advance alternative methods that will also ensure continued protection of human and animal health. The workshop was organized with four plenary sessions and three breakout group sessions.
In the breakout sessions, workshop participants:
- Identified criteria to prioritize vaccine potency and safety tests for future alternative test method development and identified high priorities using these criteria
- Reviewed the current state of the science of alternative methods and discussed ways to promote the implementation of available methods
- Identified knowledge and data gaps that need to be addressed
- Identified and prioritized research, development, and validation efforts needed to address these gaps in order to advance alternative methods while ensuring continued protection of human and animal health

The workshop opened with a plenary session in which expert scientists and regulatory authorities from the United States, Europe, Japan, and Canada outlined the importance of vaccines to human and animal health [9,10] and described national and international regulatory testing requirements for human and veterinary vaccines [2,11,12,13,14,15,16]. Authorities emphasized that, following the regulatory approval of a vaccine, testing is required to ensure that each subsequent production lot is pure, safe, and sufficiently potent to generate a protective immune response in people or animals [11,12]. The second plenary session addressed methods that have been accepted and methods that are in development that do not require the use of animals for assessing the potency of vaccines [17,18,19,20]. This was followed by breakout sessions to discuss the state of the science and recommendations for future progress for in vitro potency tests for human and veterinary vaccines. This paper provides workshop recommendations to advance the use and development of alternative methods that can replace animals for the potency testing of veterinary vaccines. Recommendations for human vaccines are available elsewhere in these proceedings [3].

The third plenary session addressed (1) potency testing methods that refine procedures to avoid or lessen pain and distress by incorporating earlier humane endpoints or by using antibody quantification tests instead of challenge tests and (2) methods and approaches that reduce the number of animals required for each test [21,22,23,24,25,26,27]. Breakout groups then discussed the state of the science and developed recommendations for future progress. Workshop recommendations to advance the use and development of alternative methods that can reduce and refine animal use for potency testing of human vaccines [5] and veterinary vaccines [6] are available in the respective papers in these proceedings. The final plenary session addressed methods and approaches for reducing, refining, and replacing animal use to assess the safety of serial production lots of human and veterinary vaccines [11,28,29,30]. Breakout groups then discussed the state of the science and developed recommendations for advancing alternative methods for vaccine safety testing. Workshop recommendations to advance the use and development of alternative methods for safety testing of human vaccines [7] and veterinary vaccines [8] are available in these proceedings.

Requirements for veterinary vaccine potency testing
Strict regulations and guidelines are designed to ensure that every veterinary vaccine distributed in or from the United States is pure, safe, potent, and effective [31].
An estimated 18,000 serials (batches) of veterinary vaccines are released annually in the United States for approximately 2000 different products that protect animals from 213 different animal diseases [12]. Given that many inactivated vaccines still require animals for potency testing, significant numbers of animals are necessary. Veterinary vaccines contribute to the health and well-being of people and animals. In addition to controlling and preventing diseases of companion and domestic animals, vaccines help ensure a safe and efficient global food supply. They reduce the transmission of zoonotic and foodborne infections from animals to people. Vaccines also reduce the need for low-level antibiotics to control some diseases in food animals. Due to the number of animals used annually for the release of veterinary vaccines, global regulatory agencies actively encourage the evaluation, development, and implementation of novel approaches that reduce, refine, and replace (3Rs) the use of animals in vaccine safety and potency product release testing [12,14,22].

Prioritizing vaccine potency tests for future replacement activities
Potency testing procedures for many veterinary vaccines still require the use of animals; therefore, the development and validation of additional replacement tests could significantly benefit animal health and welfare. Workshop participants prioritized the veterinary vaccines that should be targeted for further development and validation of in vitro replacement tests. The criteria for prioritization included:
- Vaccines that use large numbers of animals per test and for which many serials are produced annually
- Vaccines that involve significant animal pain and distress during testing procedures
- Vaccines for which the functional protective antigen has been identified and characterized
- Vaccines that involve foreign animal/zoonotic organisms
- Vaccines that involve pathogens that can be easily spread to wildlife populations

Based on these criteria, the following vaccines were given highest priority for further development of alternative replacement methods:
- Rabies vaccines
- Leptospira spp. vaccines
- Clostridium spp. vaccines
- Erysipelas vaccines
- Vaccines for foreign animal diseases (FADs), especially those posing viral biohazards that require enhanced security and biosafety measures (e.g., foot and mouth disease [FMD] and bluetongue disease)
- Poultry vaccines
- Fish vaccines
- New vaccines that are currently undergoing prelicensing development and evaluation

Rabies, Leptospira spp., and Clostridium spp. vaccines were identified as the highest priorities because their required potency tests use large numbers of animals and involve significant pain and distress. For example, analysis of serials released in the UK between 2007 and 2009 indicated that potency tests involving live challenge testing for Leptospira spp. and rabies vaccines accounted for a high proportion (>25%) of animals used in batch potency testing [14]. Vaccine challenge tests that require live viruses and bacteria that are hazardous to laboratory workers, livestock, companion animals, and wildlife were also considered high priorities (e.g., rabies and FMD vaccines). In addition, prioritization of vaccines for which the functional protective antigen has previously been identified would greatly facilitate the successful development of antigen quantification methods. Finally, new vaccines were included as high priorities in order to encourage the development of replacement alternatives early in the development cycle.
As shown in Table 1, several of the vaccines identified as high priorities, those that currently use animals in vaccination-challenge or toxin-neutralization testing, have alternative serology methods either in development or accepted for use by specific regulatory authorities. Therefore, validated refinement methods already exist and represent critical first steps toward the ultimate goal of identifying in vitro replacement methods for these high-priority vaccines. For many veterinary vaccines, regional differences affect the availability and implementation of in vitro replacement assays. For example, the USDA published an in vitro ELISA potency test for inactivated swine erysipelas vaccine (Erysipelothrix rhusiopathiae), while the European Directorate for the Quality of Medicines & HealthCare (EDQM) published a mouse-based serology test in the European Pharmacopoeia (Ph. Eur.) (Table 2). The EDQM has developed, validated, and approved an in vitro test for inactivated Newcastle disease vaccine that is not a standard requirement in the United States (Table 2). Clearly, improved international communication and harmonization may expand the number of veterinary vaccines for which replacement methods are available and/or accepted for use. However, regional differences in disease status, product composition, number of manufacturers, and funding may all affect priorities established in those specific regions.

State of the science
Current veterinary vaccines consist of (1) modified live (attenuated) viruses and bacteria, (2) inactivated (killed) viruses and bacteria, (3) toxoids or bacterin-toxoids, (4) peptide and subunit vaccines, and (5) genetically engineered products. The general types of potency tests employed by vaccine manufacturers include the following:
- Titration of live organisms (in vitro but occasionally in vivo)
- In vitro assays such as ELISAs or other quantitative methods
- Serology methods (in vivo to in vitro)
- Vaccination-challenge in vivo methods using either the host animal (fish, poultry) or laboratory animals (e.g., hamsters, mice) [17]

For a typical U.S. veterinary vaccine manufacturer, 37% of tests use in vitro titration assays, 22% use in vitro ELISAs, 12% use some other in vitro method, 8% use in vivo serology tests, and 21% use in vivo vaccination-challenge methods [17]. These data exclude poultry and fish vaccine potency testing but do suggest that in vitro methods are being applied for most potency testing conducted on veterinary vaccines. Animal welfare concerns, increased scientific accuracy, and the financial benefits associated with in vitro assays provide significant incentives to veterinary vaccine manufacturers for the replacement of animals in potency testing procedures, especially if a vaccine product can be released without the potential concern for repeat in vivo testing [17,60].

Modified live vaccines
In vitro potency testing procedures are currently used in the release of many modified live (attenuated) and genetically modified vaccines (Table 2) but are not widely used for the potency release of inactivated vaccines. In the United States, the USDA's Center for Veterinary Biologics (CVB) publishes many supplemental assay methods (SAMs) that provide detailed, validated protocols for the safe and effective potency testing of specific veterinary vaccines. To further facilitate the use of alternative in vitro methods, the CVB and other regulatory authorities provide many of the critical reagents and reference standards necessary to conduct these potency assays.
In vitro potency methods for the quantification of several modified live bacterial vaccines are currently outlined in publicly available USDA SAMs. For example, enumeration methods that quantify the colony-forming units (CFUs) of specified live organisms are described for Brucella abortus [48], Erysipelothrix rhusiopathiae [61], and avirulent Pasteurella haemolytica (new name Mannheimia haemolytica) [62] vaccines. In addition, the CVB has published an in vitro potency assay that uses indirect fluorescent antibody staining of inoculated cell culture to quantify bacterial titers for Chlamydophila felis (formerly feline Chlamydia psittaci) [63]. As the majority of bacterial vaccines for veterinary use are inactivated, toxoid- or bacterin-toxoid-based, there are relatively few modified live bacterial vaccines available for veterinary use.

For live or genetically engineered viruses, virus titration is performed in cell cultures using endpoints such as plaque formation; cytopathology; and, indirectly, virus neutralization by virus-specific serological reagents. For example, in vitro titration assays utilizing the enumeration of plaque-forming units (PFUs) are available for feline calicivirus [64], feline rhinotracheitis virus [65], and Marek's disease vaccines [66]. For other live viral vaccines, the virus is quantified by determining its cytopathic effect in primary cell culture. These include vaccines for the following:
- Porcine transmissible gastroenteritis [67]
- Porcine rotavirus [68]
- Infectious canine hepatitis [69]
- Canine adenovirus [70]
- Canine distemper [64]
- Infectious bursal disease [71]

Finally, some modified live viral vaccines, such as those for feline panleukopenia [72] and canine parvovirus [73], quantify virus titers using direct or indirect fluorescent antibody staining of virus-inoculated cell cultures. Although these assay methods are approved by the USDA, it is often difficult to estimate which procedures are routinely used to release vaccine products because product-specific validation is required. However, it is estimated that approximately 50% of all U.S. veterinary vaccine serials are now released based on in vitro potency testing [26]. Examples of modified live veterinary vaccine potency assays that do not require the use of animals are provided in Table 2.

Other live vaccines, such as mink distemper virus vaccines [74], use an alternative in vitro system to quantify viral content by counting viral plaques that grow on the chorioallantoic membrane of inoculated chicken embryos. For live chicken embryo-adapted Chlamydophila felis vaccine [75], embryonated chicken eggs are used as the indicator host system to determine vaccine titer (Table 2). In addition, a procedure is available for titrating Newcastle disease virus (NDV), infectious bronchitis virus (IBV), and combination NDV-IBV vaccines through the inoculation of embryonated chicken eggs in order to calculate the 50% egg infective dose (EID50) [76]. The majority of modified live vaccines use in vitro methods for potency release; however, some live attenuated vaccines, such as ovine ecthyma vaccine for sheep [77], still require a target animal vaccination-challenge potency test.

Inactivated vaccines
For many inactivated veterinary vaccines, especially bacterial vaccines, a key hurdle to the successful development of in vitro antigen quantification assays is the lack of protective antigen identity and the inclusion of complex adjuvants in vaccine formulations [60].
Therefore, many inactivated veterinary vaccines still require in vivo methods (i.e., serology or vaccination-challenge methods) for determining relative potency. However, there are specific examples in which the protective antigen for an inactivated bacterial vaccine has been identified and used to develop a specific ELISA quantification assay based on comparison to a reference standard of antigen. These include reference standards available from the CVB or product-specific standards developed by the manufacturer. Examples include Erysipelothrix rhusiopathiae bacterins tested for the 65 kD protein [78] and Escherichia coli bacterins tested for the K99, K88, 987P, and P41 pili [79]. The development of the swine erysipelas potency test also included extensive work to develop humane endpoints [80] and an ELISA serology test [40]. For the potency determination of various Leptospira interrogans serovars, an in vitro ELISA assay is used to measure the relative potency of specific bacterins compared to a suitably qualified reference standard, such as the one available from the CVB. The Leptospira interrogans serovars tested in this way include pomona [81], canicola [48], grippotyphosa [82], and icterohaemorrhagiae [83].

Published in vitro assays are also available for selected inactivated virus vaccines. For example, the potency of an inactivated respiratory cattle vaccine containing several bovine respiratory viruses (bovine viral diarrhea [BVD], bovine respiratory syncytial virus [BRSV], bovine rhinotracheitis [BRV], bovine parainfluenza [PI3]) is determined using an ELISA assay relative to a reference standard [84]. Additional in vitro methods have been published for feline leukemia virus GP70 antigen quantification [85] and inactivated canine coronavirus vaccines [86]. An in vitro ELISA antigen quantification assay for inactivated NDV vaccine has been developed and validated by the EDQM [18,87,88,89]. The successful transition from an in vivo assay to an in vitro ELISA was aided by the fact that there was a strong correlation between antigen content and antibody response. The antigen-specific antibodies correlated with protection, and the European NDV vaccines were a homologous group with a single serotype and comparable oil-based adjuvants. Even with these distinct advantages, the replacement test took almost 10 years (1997-2006) to be incorporated in the EU monograph for inactivated Newcastle vaccines [18,88,90,91]. Although publication in the EU monograph is encouraging, the in vitro assay is only optional because it is one of several approved assays for inactivated Newcastle vaccines currently included in the monograph. Accordingly, it is difficult to estimate how widely this or any other replacement assay is used by vaccine manufacturers to release vaccine products.

Table 2 provides specific examples of potency tests for inactivated veterinary vaccines that do not require the use of animals. This is not an exhaustive list, and in some cases general methods are available, often without detailed methodologies. Adding to the complexity, these references do not clearly indicate what assays are being used to release a product. Nor do they indicate that multiple methods may be available and approved for a specific vaccine by a specific regulatory agency. The proceedings of the EDQM International Symposium on Alternatives to Animal Testing included a report provided by vaccine manufacturer Intervet International on the development of alternative veterinary vaccine potency tests [92].
According to this report, alternative in vitro potency tests for inactivated veterinary vaccines are described in only a few individual monographs. For example, of the inactivated mammalian veterinary vaccines released from the Intervet Boxmeer facility, 33 separate potency tests are conducted, of which three utilize vaccination-challenge tests, 28 use serology, and two use in vitro techniques. The EU monographs provide detailed descriptions of only 13 of the 33 tests. Both of the in vitro tests used by Intervet are described. For inactivated poultry vaccines, Intervet conducts 16 potency tests: three use vaccination-challenge methods, 14 use serology, and one has a serology or challenge option. Twelve of these potency tests are currently described in EU monographs, with one in vitro alternative also described (currently not in use by Intervet) [92]. For fish vaccines, Intervet uses 11 potency tests, all of them vaccination-challenge tests. Five of the 11 are described in EU monographs. As yet, no in vitro alternatives are provided. Although this represents only one vaccine manufacturer's potency release of inactivated veterinary products, for which fewer in vitro methods currently exist, it does provide some indication of the potency tests utilized and the need for improved availability of both general and detailed in vitro methods.

Knowledge gaps and priority research, development, and validation activities
The development of in vitro potency assays for the highest-priority vaccines that still use animals requires an understanding of the knowledge and data gaps that have delayed the introduction of such non-animal assays. Understanding the protective antigen was identified as the primary technical issue. However, for many veterinary vaccines, especially bacterial vaccine products, the protective antigen is unknown or is a complex combination of antigens [17]. Therefore, development of antigen quantification tests is technically difficult because demonstrating a dose response between an antigen and protection in the target species may not be possible. Future efforts could focus upon cloning the genes for the protective antigens or obtaining the rights to those genes that have already been cloned during the development of reference standards. Purification methods could then be developed for the protective antigens, these antigens characterized, and appropriate assays developed and validated. Purified antigens may be made available to industry as reference standards. The availability of reference standards would enable vaccine manufacturers to develop their own standards for in-house evaluations. Regulatory agencies such as the Animal and Plant Health Inspection Service (APHIS), the CVB, and the Biological Standardisation Programme (BSP) under the EDQM develop, produce, characterize, and distribute reference standards and other critical reagents. These references are provided to manufacturers to use in developing assays; comparing direct or indirect potency; or independently testing efficacy, identity, and purity.

The challenges caused by the adjuvants that are present in many veterinary vaccines present the second key technical issue identified by workshop participants. These challenges must be addressed during the development of in vitro replacement alternatives. Typically, in vivo potency tests evaluate the protective or immune response to the complete vaccine, including non-antigenic material (e.g., adjuvants, excipients).
However, many vaccines typically use adjuvants such as mineral oil and aluminum salts, which may interfere with in vitro quantification methods. Therefore, these adjuvants would need to be separated from the antigen component of the vaccine before in vitro potency testing. Because the adjuvant is a critical component for developing the appropriate protective response for inactivated vaccines, additional in vitro tests may be required to ensure their quality. Regardless, when antigen quantification methods are being developed, the effect of an adjuvant on the immunogenicity of the protective antigen will also need to be investigated [18,88,91]. In addition, the effect of the inactivant on in vitro potency methods must be investigated. A recent study showed that the method of inactivation (in this case, formaldehyde) of an oil-based adjuvanted inactivated Newcastle vaccine lowered the in vitro ELISA potency result but did not affect the in vivo potency result compared to the use of the inactivant β-propiolactone [110]. This study indicated that the in vitro potency results for commercial Newcastle vaccines inactivated with formaldehyde cannot be directly compared to those inactivated using β-propiolactone [110].

Validation of an in vitro potency assay begins when the assay is initially developed and involves establishing its relationship to efficacy in the target species. The protective antigen (protein) must be identified, purified, and shown to elicit protection in vaccinated animals. Antibodies to that protein should neutralize infectivity of the pathogen. Extensive validation continues through the assessment of the assay's precision, accuracy, and ruggedness, toward the transition to implementation and use over time [111]. Workshop participants recommended the following high-priority research, development, and validation activities.

Rabies vaccines
The current in vivo potency test for inactivated veterinary rabies vaccine comprises a multidilution vaccination-challenge test in mice, traditionally termed the National Institutes of Health (NIH) test. It is known to be highly variable, with a high frequency of invalid results [112,113,114]. Recently implemented reduction and refinement alternatives to this test include (1) the use of a single-dilution vaccination (reduction) [53], which results in a significant reduction in animal usage to approximately 60 mice per test, and (2) the incorporation of earlier humane endpoints of paresis, paralysis, and convulsions (refinement) [52]. In addition, several alternative serological methods have also been developed in which the rabies virus neutralizing antibodies are quantified from the serum of immunized animals. Two such serological methods include the rapid fluorescent focus inhibition test (RFFIT) [55,115] and the fluorescent antibody virus neutralization test (FAVN) [54]. According to the European Pharmacopoeia, the RFFIT may be used after a correlation has been established with the mouse vaccination-challenge in vivo test. A recent study demonstrated good correlation between results from the RFFIT and the traditional in vivo challenge assay [55]. The RFFIT is also reproducible within and between laboratories, providing a potential alternative to the mouse vaccination-challenge assay [22,56]. In fact, the European Pharmacopoeia recently published a revised draft monograph incorporating the RFFIT potency assay for inactivated rabies vaccines for veterinary use [53].
Considering these recent developments, workshop participants recommended a focused international workshop to discuss the barriers to international implementation of the RFFIT. Several types of antigen quantification tests are currently in development for inactivated rabies veterinary vaccines, including single radial diffusion tests, antibody-binding tests, and ELISA methods [57,116,117]. Although the ELISA assays are reproducible, inexpensive, and quantitative, they are currently product specific, and reagents are not universally available [116]. In addition, it has yet to be demonstrated that the antigen concentration in the vaccine can be correlated with an ability to stimulate a protective immune response [118]. Furthermore, guidance and/or recommendations from global regulatory agencies are necessary to resolve how any new alternative assay (i.e., serological or antigen quantification) can be validated against the current, highly variable in vivo assay [22,116]. There was broad recognition and general consensus among workshop participants that interaction between the human/veterinary regulatory agencies and vaccine manufacturers should be expanded. Such interaction would significantly increase, where appropriate, information exchange to keep all parties current on possible approaches that can be used to further the development and implementation of replacement alternatives for vaccine potency testing.

The potency release test used for human rabies vaccines is similar to that used for veterinary products. All U.S.-licensed rabies vaccines for human use define potency as the geometric mean of two valid NIH potency tests with humane endpoints defined [19]. In the EU, a similar vaccination-challenge procedure with humane endpoints is also described for human rabies vaccines [119]. The FDA has approved the replacement of several animal-based immunogenicity assays with ELISA-based potency assays for some vaccine products, but this does not include human rabies vaccines [19]. At issue is the fact that, although the neutralizing antigens are well defined, a clear correlation has not been demonstrated among the amount of antigen required to induce an immune response in animals, the amount of antigen measured using alternative in vitro assays, and the immune response in human vaccinees [19]. Consequently, serological assays may be required to serve as an intermediate step toward the successful development of an in vitro antigen quantification test. Although the development of a single potency test (i.e., serological, antigen quantification) for both human and veterinary rabies vaccines is the desired goal, it may be necessary to adapt the test for both product-specific and strain-specific vaccines [116]. Because of the clear synergies between human and veterinary rabies vaccines, workshop participants recommended as a priority that manufacturers and regulatory agencies worldwide collaborate on the development and validation of a refinement or replacement assay for all rabies vaccine products.

Leptospira spp. vaccines
Briefly, the current in vivo Leptospira potency test consists of a vaccination-challenge procedure in hamsters, with a lethal endpoint assessed 14 days later. The in vivo test is time consuming (more than five weeks) and exposes laboratory personnel to live, viable Leptospira, a zoonotic pathogen.
The USDA recently developed a sandwich ELISA as an alternative in vitro test, using a rabbit polyclonal capture antibody and a specific mouse monoclonal detecting antibody, to measure the relative potency of specific bacterins compared to a qualified reference standard for several Leptospira interrogans serovars, including pomona [81], canicola [37], grippotyphosa [82], and icterohaemorrhagiae [83]. Studies still to be completed include the testing of adjuvants and other vaccine components for assay interference [17]. The in vivo and in vitro assay methods are currently published by the USDA in SAMs and in European monographs (e.g., canine leptospiral antigen quantification method, Ph. Eur. Monograph 447; Leptospira hardjo antigen quantification method, Ph. Eur. Monograph 1939) [49,51,60,117]. In summary, the development and validation of an in vitro potency assay is product- and manufacturer-specific, and manufacturers must perform the necessary studies using specific regulatory memorandums as guidance throughout this process. As a secondary priority, workshop participants recommended the continued development and implementation of ELISA antigen quantification methods, including research into the effects of adjuvants and other vaccine excipients, and the harmonization of these tests among global regulatory authorities.

Clostridium spp. vaccines
The typical potency test for veterinary Clostridium spp. vaccines is an in vivo rabbit/mouse toxin-neutralization test currently used, for example, for Clostridium novyi [32,33,109] and Clostridium perfringens [37,38] (Table 1). However, alternative methods for Clostridial toxoid potency testing have also been developed and published [34,35,109]. For example, a serological potency test for Clostridium perfringens [39] and Clostridium septicum [36] vaccines has been accepted by European regulatory authorities, although product-specific validation is still required by each vaccine manufacturer [109]. For Clostridium chauvoei, an alternative approach using a validated ELISA method has been described [102,103], and an in vitro replacement test for Clostridium hemolyticum utilizing toxin-neutralizing antibodies against the characterized protective antigen is also described [22]. Potentially, all the Clostridium protective antigens could be evaluated by antigen quantification methods, such as quantitative ELISAs, after the protective antigen has been identified by gene cloning or after rights to the protective gene have been obtained from sources that have cloned the genes for the purpose of developing reference standards. Based upon the published literature and available regulatory methods, replacement of the toxin-neutralization test for specific Clostridium spp. vaccines is a realistic goal but will require the global recognition of reference vaccines and the identification of the target antigens for these vaccines.

In addition to rabies vaccines, workshop participants agreed that a synergy among experts in human and veterinary tetanus vaccines could facilitate and expedite the development of a replacement potency test for both of these vaccine products. Currently, in the United States and the EU, the potency tests for human and veterinary vaccines consist of vaccination of guinea pigs and serological evaluation of antitetanus toxoid antibodies by an indirect ELISA [44] or a toxin-binding inhibition (ToBI) test [45,47].
Efforts to develop a replacement test for either human or veterinary tetanus vaccines are impeded by the facts that toxoid vaccines are not well characterized and that potential analytical tests, including physiochemical and immunochemical tests, require much greater data generation, characterization, and validation for in-process and final product characterization [120]. A proposed blueprint for the development of an in vitro replacement potency test for Clostridium tetani included (1) the validation of currently available physiochemical and immunochemical tests, (2) parallel testing of vaccines by in vitro and serological methods, and (3) regulatory acceptance and implementation [120,121]. A focused, coordinated effort by human and veterinary tetanus vaccine experts to develop a replacement implementation plan was given a high priority by all workshop participants.

Foreign animal disease vaccines
Vaccines for foreign animal diseases were identified as high priorities due to the biohazard imposed upon laboratory workers and the threat to livestock and wildlife. Foot and mouth disease is the most economically important viral livestock disease worldwide, infecting both domestic and wild cloven-footed animals including cattle, swine, sheep, goats, and deer [122,123,124]. Control of FMD has proven difficult because of the rapid replication of the virus, persistence of the virus in both infected and vaccinated animals, existence of multiple serotypes, and the lack of a globally available and effective vaccine supply [124,125,126]. Inactivated vaccines are commonly used but are limited by their short shelf life, the short duration of immunity, the need to include many antigens to obtain broad immunity, and biosafety concerns with production of live virus [123,124,127]. Improved vaccines currently in development include (1) recombinant protein and peptide vaccines, (2) DNA vaccines, (3) empty capsid vaccines, and (4) adenoviral or fowlpox-vectored vaccines [122,123,124,127]. There is also a growing need for a marker FMD vaccine that would differentiate infected from vaccinated animals (DIVA). The development of such a vaccine would be significant because vaccination can interfere with disease surveillance using serological testing, and may result in a country's loss of FMD-free status and substantial economic loss [128]. As superior, functionally characterized vaccines are developed, greater opportunities to reduce, refine, or replace animal use in potency testing will undoubtedly arise.

To date, the most successful vaccine strategy has been the development of a recombinant, replication-defective human adenovirus type 5 that expresses the FMD capsid sequence. Solid efficacy has been demonstrated in cattle and swine [124]. However, it is uncertain whether a single vaccine approach can successfully overcome all the shortcomings of the current inactivated vaccines. A combination of different vaccine strategies is likely to be required for effective disease control [124,125]. Currently, the vast majority of FMD infections occur in Asia, Africa, and South America. FMD-free regions include North America, Europe, and Australia [124,125]. Because of significant safety concerns associated with the production of large amounts of FMD virus, the United States prohibits live virus vaccine production on its mainland [124].
To achieve global disease control, vaccines with improved thermostability and a longer duration of immunity are required, especially in those regions of the world without advanced infrastructures [125]. For the complete control and eradication of FMD, vaccination, surveillance, and an effective monitoring program are necessities [126].

Poultry vaccines
Workshop participants recommended poultry vaccines as priorities for future research and development of in vitro assays because of the large number of target animals currently used in vaccination-challenge and vaccination-serology testing procedures. In vitro potency testing of live viruses is typically performed in primary cell cultures using endpoints such as plaque formation and cytopathology. Examples of live virus poultry vaccines that use in vitro potency assessment include those for Marek's disease [129] and infectious bursal disease [130]. Other examples of non-animal potency testing for poultry vaccines include a procedure for titrating Newcastle disease virus (NDV) vaccine, infectious bronchitis virus (IBV) vaccine, and a combination NDV-IBV vaccine that uses embryonated chicken eggs to determine the EID50 [76]; a worked example of this type of 50% endpoint calculation appears in the sketch below. As described earlier, an in vitro ELISA antigen quantification for inactivated NDV is validated and accepted for use in the EU [18,87,88,89]. Additional antigen quantification assays have been developed for infectious bursal disease virus and IBV vaccines; however, inadequate funding has prevented further validation [18,87,131]. Although the technology is now available, sufficient resources and efforts must still be adequately applied to validate these replacement potency assays and gain regulatory approval. Finally, as new and better characterized poultry vaccines are developed through the use of viral-vectored systems, purified recombinant proteins, or DNA vaccines, alternative in vitro approaches to potency testing should become available [128].

Fish vaccines
Fish vaccine potency tests were highlighted at the workshop because of the large number of animals used, including unvaccinated controls, in vaccination-challenge procedures [14]. The majority of fish vaccine potency release tests consist of host animal vaccination-challenge methods. Little progress has been achieved in reducing, refining, and replacing the use of animals (fish) for this process [132]. Fish inactivated bacterial vaccines have been successfully used in aquaculture, but only recently has the industry developed effective viral vaccines. The number of available fish vaccines increased significantly in the 1990s [133]. Increasingly, adjuvants and immunostimulants are being used to enhance vaccine potency in fish, thereby further complicating the ability to develop refinement or replacement potency testing procedures [134]. For many fish vaccines, the correlation of serological response and protection is not well established either, impeding the development of serological potency tests [132]. However, some protective antigens have been identified for inactivated bacterial vaccines, such as those protecting from Vibrio salmonicida and Vibrio anguillarum diseases. This suggests that serology or antigen quantification methods could be developed for selected vaccine products [132]. Finally, research and development efforts are expected to expand as additional fish vaccines enter the market and more animal health companies develop vaccines for aquaculture use.
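As background to the egg-inoculation titrations mentioned under Poultry vaccines above, the sketch below shows how a 50% endpoint such as the EID50 is classically calculated with the Reed-Muench method. The egg counts are hypothetical illustration data, not values from any cited SAM or monograph.

```python
# Minimal Reed-Muench sketch for a 50% endpoint titre (e.g., EID50);
# the titration data below are hypothetical.
# Each tuple: (log10 dilution, infected eggs, uninfected eggs).
titration = [(-5, 8, 0), (-6, 5, 3), (-7, 1, 7)]

# Reed-Muench pools responses: infected counts accumulate from the most
# dilute row upward, uninfected counts from the most concentrated row down.
cum_inf, cum_uninf = [0] * len(titration), [0] * len(titration)
running = 0
for i in range(len(titration) - 1, -1, -1):
    running += titration[i][1]
    cum_inf[i] = running
running = 0
for i, row in enumerate(titration):
    running += row[2]
    cum_uninf[i] = running

pct = [100 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

# Interpolate between the dilutions bracketing 50% infection
# (tenfold dilution steps, so the proportionate distance is in log10 units).
for i in range(len(pct) - 1):
    if pct[i] >= 50 > pct[i + 1]:
        prop_dist = (pct[i] - 50) / (pct[i] - pct[i + 1])
        log_endpoint = titration[i][0] - prop_dist
        print(f"50% endpoint ~ 10^{-log_endpoint:.2f} EID50 per inoculum")
```

With the hypothetical counts above, the cumulative infection rates are 100%, 66.7% and 9.1%, giving an endpoint of roughly 10^6.29 EID50 per inoculum volume.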
Each of the priority vaccines described above requires a significant investment of time and resources because of (1) the complexities associated with moving from an in vivo test method to one that does not require animals and (2) the costs associated with the significant research, development, and validation of in vitro vaccine potency test methods [18,22]. Therefore, early and frequent interactions with regulators are strongly encouraged throughout this process to maximize the likelihood of a final product that will be accepted by regulatory authorities and to avoid any unnecessary delays.

Achieving broader acceptance and use of currently available non-animal replacement methods for veterinary vaccine potency testing
Workshop participants agreed that the primary impediment to broader acceptance and use of available non-animal replacement methods is the associated cost and time required for each vaccine manufacturer to conduct a product-specific validation of the in vitro potency assay for each specific vaccine. In addition, the lack of international harmonization on alternative potency methods often means that the veterinary vaccine manufacturer must perform multiple potency release tests for the same vaccine depending on its point of manufacture and use. As a starting point, workshop participants recommended that regulatory agencies harmonize the general principles for the validation of alternative potency tests. In the United States, the CVB has issued general guidelines on the validation of in vitro potency assays [111] and on relative potency assays and reference preparations based on ELISA antigen quantification [135].

International organizations also play an important role in this harmonization process. The International Cooperation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH) is a trilateral program of collaboration among the regulatory authorities and animal health industries of the European Union, Japan, and the United States. The VICH aims to harmonize technical requirements for the registration of veterinary medicinal products by establishing and implementing specific guidelines after extensive input and review from national regulatory authorities. The VICH was established under the auspices of the World Organization of Animal Health (OIE), which participates as an associate member in the VICH process by supporting and disseminating the outcomes at a worldwide level (http://www.vichsec.org). As VICH guidelines are developed and reviewed by members of the international animal health community, there is increased acceptance of the regulatory principles that should facilitate faster and more uniform implementation. Examples of VICH guidelines that have been adopted by APHIS include VICH GL 41: Examination of Live Veterinary Vaccines in Target Animals for Absence of Reversion to Virulence (VICH 2007; adopted by the U.S. in 2008) and VICH GL 44: Target Animal Safety for Veterinary Live and Inactivated Vaccines (VICH 2008; adopted by the U.S. in 2010). In addition, a draft guideline is in development by VICH to consider a waiver for the Target Animal Batch Safety Test [26].

In addition to harmonizing general principles, there is a need to harmonize the testing procedures for individual vaccine antigens, including development of the necessary reagents. For example, reference standards such as specific antibodies, viruses, bacteria, and antigens can be accessed from the CVB by U.S.
entities to aid in the development of in vitro potency assays. Broad international availability of reference standards, supported by the national and regional regulatory authorities, would greatly help to convert animal-based tests to non-animal assays. Additionally, universal reference standards could be monitored and maintained by organizations such as the OIE, USDA, World Health Organization (WHO), or EDQM. The availability of reference standards is a key factor in the ability of vaccine manufacturers to switch to an in vitro replacement assay. For example, in an ELISA, the reference must be analyzed in conjunction with the sample so that a direct comparison of test vaccine to a known reference can be used to determine a relative potency. Relative potency is defined by the CVB as the potency of a product as determined by comparison with an approved reference [135]. For in vitro antigen potency assays, the unknown is typically compared with a working reference that was generated from the master reference. The master reference potency must have been previously correlated, directly or indirectly, to host animal immunogenicity. As the master reference is correlated to host immunogenicity, its relative stability must be monitored over time to ensure that the reference remains stable during storage. Currently, in the United States a frozen master reference is allowed a maximum dating of five years or, if stored under refrigeration, a maximum dating of two years [135]. After the dating period, each reference must be requalified in the host animal immunogenicity test. To avoid the use of additional animals for requalification, workshop participants recommended that requalification be conducted in any currently acceptable potency test. Development of new requalification tests is the responsibility of the vaccine manufacturer. This requires significant resources, especially in the development stage. Vaccine manufacturers cannot afford to dedicate these resources to products that are older and less profitable. Therefore, prioritization of veterinary vaccines for replacement testing and the potential availability of reference standards can significantly accelerate the animal test replacement process. Workshop participants recommended that stability monitoring for both products and reference standards be considered early in the development process. They recommended that regulatory authorities work with industry stakeholders to set expectations for the stability monitoring program [17]. The stability monitoring of references typically requires that multiple previously validated tests be conducted on a 3-, 6-, or 12-month schedule (Brown 2010, personal communication). As test methods change, so might the stability monitoring methods and even the reference standard itself. Consequently, regulatory agencies may require flexibility to work with the vaccine manufacturers in bridging reference standards and methodologies as industry moves toward in vitro replacement assays. Regulatory guidance will also be required on the development and application of new technologies to the development of veterinary vaccines, such as genetically engineered (rDNA) products, including inactivated/subunit, live (or inactivated) gene-deleted, or live vector (gene insertion) products [17]. Clearly, vaccine manufacturers must decide which products to prioritize for specific non-animal replacement potency testing.
Typically, given the outlay of industry resources required, this decision is based upon product revenue and profitability. To aid in this process and to expedite replacement testing, sufficient resources are essential to develop and maintain reference standards specifically for industry use. In addition, broad accessibility of general procedural guidelines (as well as specific testing procedures) for individual antigens would further facilitate the international harmonization of replacement assay development and use.
Other issues to be addressed to facilitate the replacement of animals in veterinary vaccine potency testing
A key issue that should be addressed is the available funding for research and development of alternative methods. This research and development should be funded not just by industry stakeholders but also by government granting agencies, industry associations, and animal welfare advocacy groups. For example, the U.S. National Institutes of Health may offer funding opportunities for veterinary vaccines for those animal diseases associated with human health, such as rabies. Furthermore, academic research into test method alternatives should be promoted, and manufacturers should be encouraged, where appropriate, to present and/or publish their research findings regarding their alternative test methods. Workshop participants also encouraged the increased availability of regulatory guidance documents in the public domain. As indicated in Section 5, the inclusion of adjuvants in veterinary vaccines complicates the development of alternative methods because of their reported interference with antigen quantification assays. Consequently, priority should be given to developing replacement potency tests for vaccines that do not contain adjuvants. Where adjuvants are required, priority should be given to those adjuvants for which methods already exist to separate the adjuvant from the antigen. Newly developed adjuvants improve the immune response but may also be more difficult to separate from the antigen. In such instances, regulatory agencies may consider allowing manufacturers to measure potency on the bulk material, before the addition of adjuvant, or allowing antigen testing on the bulk material with an additional characterization/quantitative test on the final product. There is a clear need for further research on simpler adjuvants (and/or the methods to extract them) that may exert an effect on the animal's immune system but that do not directly interact with the antigen. Detailed protocols for available replacement alternatives that have been reviewed and endorsed by scientific groups should be readily available in the public domain to facilitate scientific exchange and consideration. For example, detailed protocols and supporting data for validated methods, such as those that appear or are referenced in the European Pharmacopoeia monographs, should be freely available to manufacturers and the scientific community to facilitate the implementation of alternative methods. Further incentives for industry stakeholders to develop, validate, and implement alternative methods need to be clearly conveyed and implemented by regulatory agencies. Workshop participants identified several examples of incentives that may be considered attractive to relevant vaccine manufacturers, including an expedited regulatory review time, waiving the variation fee (if applicable), and the opportunity to utilize intermittent in vivo/in vitro parallel data to expedite validation of new in vitro methods.
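To make the relative-potency comparison described earlier concrete, here is a minimal sketch. The OD readings are invented, and the half-maximum interpolation is a crude stand-in for the validated parallel-line or four-parameter logistic analyses used in actual release testing:

```python
def half_max_point(log_dilutions, od):
    """Linearly interpolate the log-dilution at which the OD falls to half
    of its maximum; a crude stand-in for a full parallel-line analysis."""
    target = max(od) / 2
    for i in range(len(od) - 1):
        if od[i] >= target > od[i + 1]:
            frac = (od[i] - target) / (od[i] - od[i + 1])
            return log_dilutions[i] + frac * (log_dilutions[i + 1] - log_dilutions[i])
    raise ValueError("half-maximal response not bracketed")

# Invented ELISA readings for serial two-fold dilutions (log2 units).
log_dil = [0, 1, 2, 3, 4, 5]
od_reference = [1.9, 1.7, 1.2, 0.6, 0.3, 0.1]
od_test      = [2.0, 1.8, 1.5, 0.9, 0.4, 0.2]

# The horizontal shift between the two curves gives the relative potency:
# the test lot can be diluted further before losing half its signal.
shift = half_max_point(log_dil, od_test) - half_max_point(log_dil, od_reference)
relative_potency = 2 ** shift
print(f"relative potency ~ {relative_potency:.2f}")
```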
Discussion
This was the first international workshop in the United States that focused on the reduction, refinement, and replacement of animal use for safety and potency release testing of both human and veterinary vaccines. A key accomplishment of the workshop was bringing together experts from industry, academia, and government in the areas of safety and potency testing for both human and animal vaccines. There was broad recognition among the vaccine manufacturers and regulatory authorities, and a general consensus among the participants, that international workshops vastly improve information exchange not only between global regions but also between regulatory authorities (e.g., the USDA and the FDA) in the same country. This interaction may accelerate development of alternative methods once priorities are firmly established. The presentations and subsequent breakout group sessions allowed participants to clarify the current status of in vitro replacement testing procedures and establish the key criteria to identify those vaccines for prioritization. A focus on inactivated vaccines for rabies, Leptospira spp., and Clostridium spp. diseases was generated from this debate. An important outcome of this workshop was the recommendation for a similar international workshop to specifically discuss the development, validation, and implementation of alternative reduction, refinement, and replacement potency testing assays for rabies vaccines for both human and veterinary use. This workshop is currently scheduled for October 11-13, 2011, in Ames, Iowa. The workshop reflected a growing awareness of the need for alternative tests for both poultry and fish vaccines, in which the vaccines are typically tested in large numbers of target animals. Because the number of fish vaccines has grown significantly in the last 20 years, much more research and greater focus are needed to identify protective antigens for replacement testing. Finally, workshop participants recognized the uniqueness of veterinary vaccines and the need to focus on more-modern, stronger revenue-generating vaccines that can support the cost of new test method development. This workshop also brought attention to (1) the development and use of more-complex adjuvants and (2) the use of multiple adjuvants to generate solid and sustained immunity with poorer immunogens (vaccines) and to lower vaccine antigen levels. The use of more-complex or multiple adjuvants further complicates potency replacement efforts and therefore highlights the need for much more extensive research into simpler adjuvants and/or methods to extract them from the protective antigen. Workshop participants were encouraged by the significant number (estimated to be between 50% and 70%) of veterinary vaccines, especially the modified live viral and bacterial vaccines, that now use in vitro potency tests. Clearly, better estimates of the number of veterinary vaccine serials released using replacement methods would be beneficial and would also focus the discussion on those vaccines for which replacement potency testing is not yet available or in use. Accessing information on the current state of the art of veterinary vaccine potency tests is challenging because some procedures or general guidelines are not universally available. This results in an unnecessary hindrance to the implementation of the 3Rs for vaccine product release. The growing role of international organizations such as the VICH and the OIE is apparent.
Workshop participants agreed that the harmonization of guidelines and reference standards for broad use by the vaccine community would likely increase the interaction between those organizations and the national regulatory groups. In addition, workshop participants clearly expressed the need for additional funding for these regulatory groups to allow greater availability of some of these key reagents (e.g., reference standards) to vaccine manufacturers. Although the vaccine companies must develop and validate product-specific assays, the reference standards would provide the basis for this further development and validation. This workshop set the stage for a series of specific workshops on the identified priority vaccines. Based upon the general scientific literature and the presentations at the workshop, there is broad international consensus to reduce, refine, and replace the use of animals for both human and veterinary vaccine potency testing. Implementation of the workshop recommendations discussed in this report is expected to advance alternative methods for veterinary vaccine potency testing that will benefit animal welfare while ensuring continued protection of human and animal health.
Conclusions
This veterinary vaccine session summarized the current status of in vitro potency testing for veterinary vaccines and identified the critical issues to further advance and implement in vitro replacement assays for currently used in vivo challenge or toxin-neutralization testing. To focus these efforts, criteria were established for vaccines that should have the highest priority for development of replacement testing methods. Based upon these criteria, the highest-priority vaccines were identified as those for rabies, Leptospira spp., Clostridium spp., erysipelas, foreign animal diseases (e.g., FMD), poultry diseases, and fish diseases. Workshop participants also prioritized the research, development, and validation activities necessary to expedite veterinary vaccine potency testing with fewer animals. Workshop participants recognized that there are special considerations with veterinary vaccines due to the complexity of antigenic material and the inclusion of complex adjuvants. They acknowledged that, in many cases, reduction/refinement testing may precede the introduction of in vitro replacement assays. This, combined with the number of veterinary vaccines and their value to the veterinary industry, suggests that the priorities identified are correct and have the highest chance of successful implementation. There was consensus among workshop participants on the need for more universally available reagents and harmonized approaches. The successful implementation of these activities will require additional resources at both national and international levels. Finally, workshop participants agreed that the continued interaction of the global vaccine community (i.e., manufacturers, regulatory agencies, animal health organizations), both human and veterinary, could expedite the unified goal of the replacement of animals for veterinary vaccine potency testing.
2017-10-17T09:58:36.879Z
2011-12-23T00:00:00.000
{ "year": 2011, "sha1": "6b83784051f3198aae8de3cd9bad3c3f61d79f6e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.provac.2011.10.005", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "eeed3ffbb9e4731570065f7634b6b7927a729862", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
11129935
pes2o/s2orc
v3-fos-license
Multidimensional dosimetry of ¹⁰⁶Ru eye plaques using EBT3 films and its impact on treatment planning. PURPOSE The purpose of this study was to establish a method to perform multidimensional radiochromic film measurements of (106)Ru plaques and to benchmark the resulting dose distributions against Monte Carlo simulations (MC), microdiamond, and diode measurements. METHODS Absolute dose rates and relative dose distributions in multiple planes were determined for three different plaque models (CCB, CCA, and COB), and three different plaques per model, using EBT3 films in an in-house developed polystyrene phantom and the MCNP6 MC code. Dose difference maps were generated to analyze interplaque variations for a specific type, and for comparing measurements against MC simulations. Furthermore, dose distributions were validated against values specified by the manufacturer (BEBIG) and microdiamond and diode measurements in a water scanning phantom. Radial profiles were assessed and used to estimate dosimetric margins for a given combination of representative tumor geometry and plaque size. RESULTS Absolute dose rates at a reference depth of 2 mm on the central axis of the plaque show an agreement better than 5% (10%) when comparing film measurements (MCNP6) to the manufacturer's data. The reproducibility of depth-dose profile measurements was <7% (2 SD) for all investigated detectors and plaque types. Dose difference maps revealed minor interplaque deviations for a specific plaque type due to inhomogeneities of the active layer. The evaluation of dosimetric margins showed that for a majority of the investigated cases, the tumor was not completely covered by the 100% isodose prescribed to the tumor apex if the difference between geometrical plaque size and tumor base was ≤4 mm. CONCLUSIONS EBT3 film dosimetry in an in-house developed phantom was successfully used to characterize the dosimetric properties of different (106)Ru plaque models. The film measurements were validated against MC calculations and other experimental methods and showed a good agreement with data from BEBIG well within published tolerances. The dosimetric information as well as interplaque comparison can be used for comprehensive quality assurance and for considerations in the treatment planning of ophthalmic brachytherapy.
INTRODUCTION
In the treatment of uveal melanoma, different external beam therapy techniques, i.e., stereotactic radiotherapy using photons, 1-3 particle beam therapy with protons 4,5 and helium ion 6,7 beams, have been employed, as well as brachytherapy utilizing different radionuclides, e.g., 125 I, 90 Sr, and 106 Ru. Whereas 125 I plaques are widely used in North America, 106 Ru plaques are preferred across Europe. 8 Due to lower penetration depths of their beta particles, 106 Ru plaques are likely to cause fewer severe side effects in small-apex tumors adjacent to the optical nerve and macula. 8 The sharper penumbra, however, requires an accurate positioning of the plaque by the surgeon to achieve a high local tumor control. 9 The steep dose gradients and the generally confined geometry of the different 106 Ru plaque models imply dosimetric challenges. Hence, sufficiently small sensitive volumes are a prerequisite for accurate dosimetry of ophthalmologic brachytherapy plaques, as demonstrated in several studies. A comprehensive comparison was presented by Soares et al. 10 who compared most commonly used detectors in beta-ray dosimetry for one concave plaque model (CCB).
They were able to reduce the uncertainty down to ±15% as compared to the 95% confidence interval of ±25% specified by the manufacturer. 11 Other studies investigated the use of radiochromic films, 10,12-17 TLDs, 15,16,18-23 extrapolation ionization chambers, 24 and diodes, 25 which are considered to be a standard for 125 I eye plaque dosimetry. Other dosimetric techniques which are less widely used, such as plastic scintillators, 11,26,27 three-dimensional liquid scintillators, 28 and polymer gel dosimetry, 29 were explored as well. The manufacturer of BEBIG eye plaques also uses plastic scintillators to provide the reference data for the customer. The purpose of this study is to present a method to determine two-dimensional (2D) relative dose distributions and also absolute dose rates at reference depths for three different models of 106 Ru eye plaques (CCA, CCB, and COB) in a specially designed phantom with radiochromic films. In this context, 2D measurements were compared with Monte Carlo (MC) simulations, and 1D relative depth profiles and off-axis profiles were benchmarked against a diamond detector and a Si-diode in a standard water scanning phantom, to validate the results from radiochromic film measurements and MC calculations. Inhomogeneities in the active layer among different plaques of the same type and their impact on treatment planning were investigated as well. Finally, results of 2D measurements were used to determine margins for typical uveal melanoma treatments. According to the authors' knowledge, this has not been addressed by others previously.
2.A. 106 Ru eye plaques
In this paper, three different types of BEBIG plaques were used that are typically applied for different tumor sizes, i.e., CCA, CCB, and COB. For each plaque model, three different plaques were evaluated (CCA model numbers 1263, 1382, and S1505, CCB model numbers 1780, 1855, and S2042, and COB model numbers 935, 980, and S1027, Eckert & Ziegler, Germany). The COB type has a small indentation for the optical nerve that allows for the irradiation of juxtapapillary melanomas [see Fig. 2(b) below]. All 106 Ru sources are applied on a 0.2 mm silver shell which is coated with a 0.7 mm silver backing to shield the beta-rays on the convex side of the plaque. The low-energy beta-particles (E max = 39 keV) produced by the embedded layer of 106 Ru are absorbed by a 0.1 mm silver radiation exit window. 30 Having a half-life of 368 days, Ruthenium-106 decays to Rhodium-106 ( 106 Rh), whose beta emission has a maximum energy of 3.54 MeV and a mean energy of 1.42 MeV. 106 Rh then decays into stable palladium ( 106 Pd) with a half-life of about 30 s. The spherical plaques have an inner radius of 12 mm to conform to the eye, with diameters ranging from 15.3 to 20.2 mm. These dimensions include an inactive rim of 0.7-1 mm (according to the manufacturer), depending on the type of plaque.
2.B. Radiochromic film calibration and measurements
Gafchromic EBT3 (ISP Technologies, Inc., Wayne, NJ) films were used according to published guidelines. 31 They are of water-equivalent material and provide an excellent spatial resolution for 2D dosimetry. EBT3 type films consist of a sensitive polyethylene terephthalate (PTP) layer coated by a thin layer of protecting emulsion, yielding a total thickness of 0.3 mm. Upon irradiation, the sensitive layer turns dark without the need for postirradiation film development. They show no significant energy dependence with regard to the dose response of a 106 Ru beta ray emitter. 17,32
For calibration, films were cut into pieces of 30 × 30 mm 2 and exposed to a 60 Co photon beam at a depth of 4 cm in a polymethyl methacrylate (PMMA) phantom under fixed SSD conditions in a 10 × 10 cm 2 reference field. Additional 10 cm water-equivalent material was used for backscatter. Films were irradiated in ten dose steps between 0 and 25 Gy. One piece of film was left unirradiated for background reading. The films were then digitally read out with an EPSON Perfection V700 scanner (image resolution of 0.169 mm/pixel) 24 h after exposure, to account for postexposure darkening. 33 For further evaluation only the red component of the RGB image, which shows the highest sensitivity, was used. To obtain a calibration curve, the mean optical densities (OD) of an area of 20 × 20 mm 2 in the center of the films were determined. From these OD values, the net optical densities (netOD) were calculated according to the procedure proposed by Devic et al., 34 netOD = log 10 (SV BG /SV), where SV BG is the average pixel gray value from the red color channel of the unirradiated film, and SV is the averaged scan value in the center region of interest of the irradiated calibration films. For film handling, general recommendations for radiochromic film dosimetry were followed. 31 Special care was put on the film cutting procedure. A sharp scalpel and hard metal surface were used for the manual film cutting. Additionally, the cutting device was inclined to minimize crimpling on that side of the film. Bending of the film was found to be the main cause for separation of the film protective layers and was therefore avoided altogether. By doing this, the whole of the film except the first 0.5-1.0 mm could be used to extract dosimetric information. To avoid penetration of water, a thin layer of nail polish was carefully applied on the film edges. The films were calibrated up to doses of about 25 Gy and irradiated up to doses between 15 and 18 Gy. This dose level allowed achieving adequate doses even at distances of 12 mm from the plaque's surface. No significant saturation effects were observed for this calibrated dose range.
FIG. 1. Schematic illustration of the polystyrene eye phantom. It enables horizontal off-axis measurements at different depths in the x-y plane and vertical measurements by inserting a properly cut film piece (highlighted in red) in the x-z plane.
A dedicated polystyrene eye phantom was developed in-house. Polystyrene has near water-like properties and is commonly used as phantom material. 30 The phantom consists of a spherical cap with a 12 mm radius and a height of 7 mm, to fit to the shape of the 106 Ru plaques, and several interchangeable slabs (see Fig. 1). The base of the phantom was built from horizontal slabs to allow for film measurements at different depths, each with a square cross-section of 30 × 30 mm 2 and thicknesses of 1-3 mm. The phantom can be split into two segments in order to insert an EBT3 film parallel to the central axis z (see Fig. 1). This enables 2D depth dose measurements from the eye plaque surface down to a maximum depth of 22 mm, representative for a human eye. To account for possible backscatter and to allow for a robust assembly of the polystyrene slabs, the eye phantom was embedded in PMMA. In order to avoid air pockets, the whole eye phantom assembly can be filled with water. Experimental values for reference dose rates were exclusively obtained with EBT3 films, as these were the only detectors available for this study that can be calibrated.
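As a minimal sketch of the netOD computation and calibration lookup described above (the scan values and calibration pairs are invented; a real evaluation would fit a smooth calibration function to the measured points rather than interpolate linearly):

```python
from math import log10

def net_od(sv_background, sv):
    """netOD = log10(SV_BG / SV), with SV the mean red-channel scan value
    of the irradiated film and SV_BG that of the unirradiated film."""
    return log10(sv_background / sv)

def dose_from_netod(nod, calibration):
    """Piecewise-linear interpolation on (netOD, dose in Gy) calibration pairs."""
    pts = sorted(calibration)
    for (x0, d0), (x1, d1) in zip(pts, pts[1:]):
        if x0 <= nod <= x1:
            return d0 + (nod - x0) / (x1 - x0) * (d1 - d0)
    raise ValueError("netOD outside calibrated range")

# Invented calibration derived from the 0-25 Gy film exposures.
calib = [(0.00, 0.0), (0.10, 2.5), (0.22, 6.0), (0.35, 12.0), (0.50, 25.0)]
nod = net_od(sv_background=42000, sv=21500)  # hypothetical 16-bit scan values
print(f"netOD = {nod:.3f}, dose ~ {dose_from_netod(nod, calib):.1f} Gy")
```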
To account for the non-water equivalence of the polystyrene phantom, the dose rate was corrected to absorbed dose to water using reference parameters from ICRU Report 72. 30 Film results were compared to absolute dose rates from MC simulations as well as to the dosimetric reference data provided by the manufacturer. 2D dose distributions measured parallel and perpendicular to the central axis were compared with MC results using dose difference maps. To investigate the reproducibility of the setup, both horizontal and vertical measurements were repeated three times with individually cut film pieces, applying the same read-out protocol each time.
2.C. Measurement comparison with high spatial resolution point detectors
For relative dose measurements, a newly developed single crystal diamond detector (PTW-Freiburg, Germany) and a p-type silicon diode (PTW-Freiburg, Germany) were used. The microdiamond detector has a 1 µm thick sensitive layer with a volume of 0.004 mm 3 and is thus suitable for measuring along steep dose gradients. While having identical outer dimensions of 7 mm in diameter, the Si-diode has a slightly larger sensitive volume of 0.03 mm 3 (30 µm thickness, 1 mm 2 circular area). Despite the larger sensitive thicknesses compared to films, these detectors provide distinct advantages over ionization chambers. The effective point of measurement depends not only on the water equivalent thickness of the detector coating and the thickness of the sensitive volume, respectively, but also on the offset of the 7 mm diameter detectors and the curved inner surface of the plaque. This may introduce uncertainties which we estimate to be less than 10% at the reference depth of 2 mm, in accordance with Soares et al. 10 For the measurements, the plaques were carefully placed in a paraffin wax mold with the active concave side face up. They were reproducibly aligned with respect to an orthogonal laser beam system in a linear accelerator bunker. The detectors of identical outer dimensions were each mounted on the robotic step motor of a standard scanning water phantom (IBA Dosimetry, Schwarzenbruck, Germany). The high precision motor enabled high resolution dose profile acquisition in steps of 1 mm. The whole setup was surrounded by water. In the setup shown in Fig. 2(a), depth dose profiles along the central axis, from the surface to a depth of 10 mm in 1 mm steps, were determined. Figure 2(b) illustrates off-axis profile measurements perpendicular to the central axis across the plaque's rim. The off-axis measurements were also used to confirm the centered position of the detectors on the central axis for (a) by determining the dose minimum corresponding to the center of the plaque. All measurements were repeated five times in individually set up sessions.
2.D. Monte Carlo simulations using MCNP6
All experimental setups described in Secs. 2.B and 2.C were simulated using MCNP6, an MC code that can be used for simulating neutron, photon, and electron transport. The purpose of the MC simulations was to reproduce the performed measurements in order to validate the underlying physical dose calculation model, which is intended to be used for treatment planning at a later stage. After confirming the findings of previous studies, 35,36 the 106 Ru beta spectrum as well as the prompt gamma emissions of 106 Ru/ 106 Rh were disregarded and the final dose calculations were performed using the 106 Rh beta energy spectrum as given in ICRU Report 72. 30
For the simulation of electrons as well as the secondary photons produced by the electron transport in the media (E, P mode), MCNP6's ITS option was chosen since its accuracy for tissue equivalent materials was confirmed in numerous studies. 37-40 Furthermore, the ESTEP default option was used. The geometries of the three investigated plaque types (CCB, CCA, and COB) were modeled based on the data made available by the manufacturer for our specific plaques, identified by serial numbers. Concerning the relative dose measurements with the diamond detector and the silicon diode, the absorbed dose in water was calculated at the respective detector positions as indicated in Fig. 2 for each specific experiment. In order to compare the absolute dose rate with the data provided by BEBIG, the absorbed dose to water was calculated in volumes that had the identical geometry, dimension, and position as the scintillation detectors used by the manufacturer to determine their dosimetric reference data. By applying the stated reference activity of the manufacturer, the respective MC results were converted to an absolute dose rate corresponding to a specific date. The dosimetric reference data from BEBIG are based on measurements performed in water. Although electron transport in soft tissue and water hardly differs, additional simulations were performed with soft tissue as surrounding medium to assess systematic differences. For this investigation, even two different soft tissue definitions were considered, i.e., the elemental compositions listed in the Oak Ridge National Laboratory's technical report ORNL TM-8381 (Ref. 41) and the one published in ICRU Report No. 44. 42 In order to determine how well the MC simulations can reproduce the experiments, all measurements performed with radiochromic films were simulated by specifically modeling the polystyrene eye phantom, including its material composition. As described before, mesh tallies were superimposed in the respective planes with the same spatial resolution as the film scan. Despite the water-like properties of polystyrene, this approach was chosen in order to minimize systematic uncertainties regarding the comparison of experiment and simulation. Dependent on the specific simulation, between 3 and 20 × 10 6 starting particles were used.
2.E. Evaluation of 2D dose distributions
Plaque nonuniformity is an important parameter to characterize the homogeneous activity distribution for radioactive plaques in brachytherapy. Tests for source nonuniformity were performed on a circle of radius r according to published ICRU guidelines. 30 Ḋ min and Ḋ max describe the minimum and maximum absorbed dose rates, respectively, whereas Ḋ avg is the average dose rate. The source nonuniformity U ICRU should not exceed a maximum of 30%. 43 For concave plaques, the measurement of source uniformity was particularly challenging because it contains both the effects of an inhomogeneous source and those of varying distance when the plaque is not perfectly leveled. Therefore, an additional setup was used in which the plaques were placed on a thin PMMA slab with circular cutouts to fit the different plaque types, as shown in Fig. 2(c). A piece of film was irradiated with the plaque leveled as well as possible, to obtain the pure source nonuniformity index. Vertical film measurements were used to investigate the penumbra generated by the transition between active 106 Ru layer and inactive rim.
This region changes with depth and indicates the usable area of differently sized plaques. If the plaques show a low nonuniformity index across the plaque surface, one can consider vertical measurements from a single upright film piece to be representative for all vertical cross-sections through the central axis. From these 2D dose distributions, the plaque specific dosimetric characteristics in the penumbra region can be assessed. The dose reduction across the inactive edge of the plaque can be expressed by plotting equidistant dose profiles with respect to the plaque's surface. Such profiles determined along a constant radius give an idea of the active width of the plaque including its variation, as opposed to the geometrical width of the different plaque types. From such measurement results it is possible to estimate dosimetric margins, tumor coverage, and doses to adjacent organs at risk. Figure 3 illustrates this conceptual idea in more detail. 2D dose distributions were superimposed on model geometries of dome-shaped tumors in order to obtain dosimetric margins and coverage of a broad range of representative tumor volumes. Target coverage was evaluated as follows: the 100% isodose was defined at the tumor apex, which reflects the simple treatment planning of eye plaque therapy if a minimum target dose is prescribed. 44,45 Three- or two-dimensional dose calculation and dose volume histogram based evaluation are not widely available for plaque therapy of uveal melanoma, but might contribute to establish DVH-based dose-response relationships for tumors and organs at risk in the future. The representative tumor model illustrated in Fig. 3 consists of a dome-shaped segment of a circle. Equation (3) describes the radius of the segment of the circle, in which r eye is the radius of the eye and α 0 is the angle defined by the tumor base (in one direction from the central axis). From this, one can derive the angle β n defined by points P n lying on the equidistant profiles with radius r n (n = 2 or 3 mm) at the surface of the tumor. Equation (4) was used to describe the extension of the tumor from the central axis to the respective points P n as a multivariate function depending on the shape of the tumor for different base diameters and apex heights, respectively. By converting the angles β n into radians via β n = b n /r n , this yields a measure of the tumor extent b n at depths of 2 and 3 mm. From vertical film measurements, equidistant profiles were obtained and used to describe the extent of dose coverage (100% isodose) attributed to a certain apex height.
RESULTS
The statistical error (k = 2) of all MC simulations referred to in Secs. 3.A and 3.B was ≤1.4%.
3.A. Percent depth dose on central axis
Results of the relative depth dose along the central axis, normalized at a depth of 2 mm, are shown in Fig. 4 for all three investigated types of plaques. Film data consist of three independent measurements of the same plaque and two measurements of different plaques. Film measurements were weighted according to the number of measurements performed on the same plaque, and finally averaged values for three different plaques were used for dosimetric comparisons. Microdiamond and diode measurements were repeated five times on the same plaque. The data obtained from the different measurement techniques show good agreement with the MC derived data as well as with the manufacturer's specification. The film measurements showed deviations of less than 6% for depths up to 7 mm from the plaque surface for CCB and CCA type data.
For the COB type plaque, deviations are typically higher. This can also be seen in detail in Fig. 5(a), in which percent depth dose data from the specifications of different plaques of the same type were averaged and plotted with the resulting standard deviation (k = 2). These were compared to the averaged film measurements and plotted in terms of relative deviations. The deviations increase significantly for depths larger than 7 mm (CCB and CCA) and 5 mm (COB). The diode (diamond) detector results showed dosimetric deviations of less than 5% (7%) for the first 7 mm (5 mm). At larger depths, deviations increased up to 40%. The reproducibility of measurements is shown in Table I. The given values refer to the lowest precision within the 95% confidence level found among the detectors over the whole depth dose curve.
3.B. Lateral dose distribution and source nonuniformity
The off-axis profiles determined with the microdiamond and diode showed an agreement within 10% compared to the MC simulations for points located at distances ≤8 mm off-axis. These profile measurements also confirmed that the central position of the detector is crucial. The evaluation of the source nonuniformity yielded mean indices of 15.51%, 16.81%, and 14.69% for the CCB type plaques. CCA type plaques yielded values of 15.79%, 18.51%, and 13.45%, and COB values of 17.67%, 20.15%, and 18.50%. The indices are well within the recommended limits of 30%. 43
3.C. Reference absolute dose rate
Reference dose rates were determined at a depth of 2 mm in accordance with the report of The Netherlands Commission on Radiation Dosimetry. 43 The values were directly taken from the netOD of the vertical film strips. The averaged dose rates over three separate measurements are summarized in Table II together with MC derived data and the manufacturer's specifications.
TABLE I. Comparison of the reproducibility (k = 2) of the used detectors with respect to the different plaque types. Film measurements were performed three times, diamond and diode measurements five times on the same plaque.
EBT3 film measurements were, in general, in good agreement with the manufacturer's reference data, with deviations (k = 2) of 0.4% ± 2.7% (CCB), 1.4% ± 4.5% (CCA), and 4.6% ± 2.9% (COB), respectively. The largest deviations of about 4.6% were found for the COB type plaque. In this case, the large experimental uncertainty might originate from a systematic offset in the alignment of the plaque and EBT3 film, because it was aimed to determine the dose along the cross-section through the optical nerve sparing notch [see Fig. 2(b)]. For the smallest plaque, i.e., the CCA type, a standard deviation (k = 2) of 4.5% was achieved, which indicated reproducible results between three independent measurements. The deviations between the MC simulation and the BEBIG data were higher than the ones obtained when comparing with film data. Therefore, a more detailed analysis was performed in a depth interval between 1 and 10 mm. The absolute dose rate differences in mGy/min between MCNP6 and BEBIG are displayed in Fig. 5(b) as the difference from the BEBIG value at every measuring point. As BEBIG data are associated with an uncertainty 52 of about 20% (k = 2) whereas the statistical error of the simulation is negligible (see Table I), these deviations were considered to be acceptable. At a depth of 1 mm, the simulation results for the CCB, CCA, and COB plaques reached only 85%, 89%, and 80% of the values specified by the manufacturer.
This divergence, however, quickly vanishes and drops below 5% with respect to the normalized dose rates at distances of 2.5, 1.8, and 3 mm for CCB, CCA, and COB, respectively.
TABLE II. Absorbed dose rates and uncertainties (k = 2) for three plaque models (CCB, CCA, and COB) on the central axis at a depth of 2 mm obtained with EBT3 film, MCNP6, and from the BEBIG data.
Repeated simulations of the absolute dose rate with the soft tissue definitions of Oak Ridge National Laboratory and ICRU did not show any significant difference in the volume around the plaque that was considered to be relevant for the dosimetry of 106 Ru plaques, i.e., the area up to 10 mm from the plaque along the central axis, when compared to the results obtained assuming water as medium.
3.D. Dose difference maps of vertical 2D distributions
Two-dimensional dose distributions for all three plaque types are shown in Figs. 6(a)-6(c). Film measurements were compared to MC simulations by means of absolute dose difference maps and maps showing deviations between averaged film measurements from different plaques of the same type and MC simulations [see Figs. 6(d)-6(i)]. In order to align 2D maps, translational shifts of up to 2 pixels in the vertical and up to 6 pixels in the lateral direction were applied (pixel width 0.169 mm) by a single rigid transformation. For the absolute dose difference maps [Figs. 6(d)-6(f)], the largest deviations of up to 150 cGy between simulation and measurement were found close to the plaque surface within the first 1.5 mm, corresponding to about 15% dose difference with respect to the reference dose at 2 mm depth. At distances between 2 and 7 mm, the absolute dose difference decreases to <50 cGy (5%) and vanishes to 0 cGy beyond that depth. Increased differences in the dose penumbra region around the edges of the plaques can be explained either by a rotational misalignment (which was not corrected, to conserve the dosimetric information of the relatively coarse dose grid) or by a mismatch between the simulated 106 Ru layer and the actual extent of the measured plaque. Differences in the penumbra regions were especially observed for the COB type plaque. After averaging the film measurements for three different plaques of the same type, the influence of source inhomogeneities could be minimized, but still remained prominent for the COB type plaque [Figs. 6(g)-6(i)]. Local relative deviations remained within 5% up to 7 mm from the plaque surface for CCB and CCA type plaques and up to 5 mm for COB. Results on the reproducibility of the 2D film measurements are shown in Figs. 6(j)-6(l) for two measurements on the same plaque. The influence of source inhomogeneity among plaques of the same plaque type was investigated by comparing film measurements of different plaques [Figs. 6(m)-6(o)]. When comparing relative deviations between two film measurements of different plaques, a good agreement in the central region was observed up to 5-7 mm depth. For film measurements of the same plaque, good agreement was not only observed in the central region up to 7 mm depth, but was found for a wider region in the lateral direction as well. At larger depths, relative deviations increased up to 20%-40% as the absorbed doses approached 0 cGy, for both comparisons. Discrepancies in the high dose region (>10% of reference dose) appeared primarily for the interplaque comparison close to the plaque edges and were most pronounced for the COB type.
This effect can most likely be explained by minor differences in the 106 Ru layer extent close to the edges in different plaques of the same type, and again by a rotational misalignment of the plaque between different measurements. The comparison of film measurements of the same plaque showed a generally better agreement, especially in the lateral direction.
3.E. Penumbra characteristics and tumor coverage
The dosimetric characteristics along equidistant "radial" profiles are shown on the left side in Fig. 7 for both CCA and CCB type plaques. These profiles were in turn used to estimate the tumor extension x tumor in the lateral direction from the center at point P n (n = 2 or 3 mm) that can be treated, considering different scenarios of tumor apex heights and basal diameters, respectively. Typically, the dose D CTV is prescribed to the tumor apex. Hence, with increasing apex height the overall area that is covered by this prescription dose (D CTV = 100%) is decreased, as represented by the area x 100% enclosed under the dose profile in Fig. 7. However, at the same time the geometric extension of the tumor increases. Any points beyond this area receive less than 100% of the prescribed dose. The difference between the tumor extension x tumor and the overall margin x 100% illustrates the specific margin x margin that guarantees complete tumor coverage. Alternatively, this can be understood as the tolerable shift of the plaque without compromising full tumor coverage with respect to the 100% prescription isodose, thus simulating an error in surgical positioning. Finally, margin matrices were derived that describe x margin (in mm) for various scenarios of apex heights and base diameters for CCA and CCB type plaques. Typically, margins increase with decreasing apex height and base diameter. Negative values in Fig. 7 indicate an underdosage at the respective points. Such scenarios were found for especially large basal diameters (d base > 11 mm) for the small CCA type plaque. On the other hand, a typically good coverage was found for these scenarios for the larger CCB type plaque, with margins well above 1 mm.
FIG. 7. Equidistant radial profiles at depths of 2 and 3 mm for plaque types CCA (a) and CCB (d). The corresponding margins x margin to cover points P 2mm and P 3mm with the prescribed dose D CTV are given in terms of mm. They are represented as a function of apex height h apex and base diameter d base for the CCA type [(b) and (c)] and CCB type [(e) and (f)] plaque at points P 2mm and P 3mm .
DISCUSSION
Multidimensional dosimetric measurements for eye plaques as used in brachytherapy for uveal melanoma have been described in other studies. 17,29 In our study, a recent type of radiochromic film was used and a purpose-built phantom enabled measurements in several planes. A reproducible method for eye plaque dosimetry was established that allowed evaluating the dose characteristics. Such dosimetric information can be directly used in a simple treatment planning procedure. The most noticeable dosimetric deviation was observed between MC simulations and reference values specified by the manufacturer in the vicinity of the plaque, with dose differences ranging from −20% (for COB) to −10% (for CCA). BEBIG data for an individual plaque were measured with scintillators, 52 which might overestimate the dose due to Cherenkov radiation effects. 46
The Cherenkov effect vanishes once the velocity of the charged particle drops below the speed of light in water, which corresponds to an electron energy of about 0.26 MeV assuming special relativity. The mean electron energy of 0.52 MeV was determined by MC simulations at the most distant measurement point of 10 mm, indicating that a potential effect stemming from Cherenkov radiation would affect the entire depth dose range. Since the observed deviations between measurements and MC simulations only occur at depths below 2 mm, effects from Cherenkov radiation were ruled out. A more likely explanation for these large deviations lies in uncertainties with respect to the size, extension, and homogeneity of the active layer of the plaque. Also, the comparison of the simulations with the measurements of the 2D vertical planes (see Fig. 7) showed divergent results close to the plaque (at a depth of 1-2 mm). Another possible source of error could be the assumption of the effective point of measurement to be the center of the sensitive volume of each detector. The reproducibility of the film measurements was validated for the same plaque on the central axis and in 2D. Deviations between consecutive measurements of the same plaque were lower than for measurements of different plaques of the same type, especially in the lateral direction. This suggests that, while there are considerable uncertainties associated with the difficulty of 106 Ru eye plaque dosimetry, plaque inhomogeneities have a measurable impact on film measurements. Overall, a good agreement between measured and calculated 1D and 2D dose distributions was found. The dosimetric information allowed describing the penumbra region of 106 Ru plaques in terms of the lateral dose falloff and the usable area of the investigated plaque models as a function of depth along the central axis. Moreover, from such dosimetric information, margins and coverage values can be extracted for the different types of plaques and applied in treatment planning, as illustrated by the following considerations. According to the authors' knowledge, such an extrapolation of the 2D dosimetric characteristics of plaques and the margins has not been addressed previously in such detail. A tumor base diameter (e.g., determined from funduscopy) and an apex height of a dome-shaped tumor (e.g., determined from ultrasound imaging) were used as input parameters to determine the plaque size and prescribed dose. The size of the plaque (15.3 mm for CCA and ∼20 mm for CCB and COB) was chosen with respect to the clinical target volume (CTV), which usually exceeds the gross tumor volume (GTV) by 1-2 mm on each side. In addition, a margin of 1 mm was added to account for dosimetric uncertainties. According to this protocol, one can evaluate the suitability of differently sized plaques to fully cover a target volume in terms of its CTV dimension from the matrices presented in Fig. 7. For example, a tumor with a CTV of 5 mm apex height and 13 mm base diameter yields a margin of −0.06 mm at a depth of 3 mm (P 3mm ) and 0.45 mm at 2 mm depth (P 2mm ) when overlaid with the experimentally determined dose distribution of a CCA type plaque. This means that P 3mm cannot be completely covered by the prescribed 100% isodose and P 2mm would not allow for the full 1 mm safety margin. With a CCB type plaque, the same CTV can be treated with margins well above 1 mm.
This exemplifies that even if the physical diameter of the plaque matches or exceeds the CTV diameter, it is not guaranteed that the entire tumor volume receives the full prescribed dose. So far, a perfectly dome-shaped target volume was assumed, but other, more bulky tumor geometries that have been described need to be addressed in more detail from a geometric point of view. 45,47 The observed interplaque differences emphasize the importance of these dosimetric safety margins. Sources of inhomogeneities among different plaques of the same type may be the heterogeneous source material as well as uncertainties in determining the edge of the active layer in the rim of the plaque. This may lead to local dose deviations not only close to the plaque surface but also in regions around the central axis. However, while the relative local discrepancies can be large, the introduction of a 1 mm dosimetric margin would in most cases be sufficient. As seen in Fig. 5(a), but also in the 2D dose map evaluation in Fig. 6, the local deviation between film measurements and BEBIG data (film and MC, respectively) increases significantly for larger distances. While the absolute differences in absorbed doses at these distances remain rather small due to the low dose rates, for very large tumor apex heights these uncertainties must be considered when estimating the dose to adjacent healthy tissue. While tumor control, recurrence rate, and overall survival are the most important aspects in cancer management, the outcome of 106 Ru brachytherapy in terms of treatment-related toxicity and visual acuity is also important. 44,47-51 According to the authors' knowledge, so far no detailed studies on dose-response relations for organs at risk have been published for brachytherapy of uveal melanoma, since 3D treatment planning systems (TPSs) including volumetric segmentation based on 3D image sets and three-dimensional dose calculations are not common standard in clinical practice. In order to better describe the tumor coverage and relate applied doses to observed side effects, such a widespread use of a TPS might be desirable. Multidimensional dose measurements for 106 Ru plaques as presented above are a first step, e.g., to benchmark TPS dose calculations.
CONCLUSION
EBT3 films and an in-house developed phantom enabled multidimensional dosimetric measurements for 106 Ru plaques. Profiles determined along the central axis and those determined off-axis showed good agreement with the values specified by the manufacturer, with MC simulations, and with point detectors, well within the experimental uncertainties or tolerances given by the manufacturer. Dosimetric information from such multidimensional film measurements can be utilized for treatment planning, i.e., to assess target coverage and to establish margin concepts for representative tumor models.
ACKNOWLEDGMENTS
The authors would like to gratefully thank Michael Andrássy and Carmen Schulz from BEBIG for providing information on plaque geometry and composition as well as details with regard to the dosimetric reference data, such as the methods of measurement and the respective error margins. The associated communication and discussions were highly appreciated. The study was financially supported by the Austrian Science Fund (FWF) Project No. P25936.
2018-04-03T00:22:50.719Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "9d03f0e0ed70a04027f74ee2e9a0de8e3f26df00", "oa_license": "CCBY", "oa_url": "https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1118/1.4929564", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "aeb59ae46611c146ca00c0ccff34932ab09ea61b", "s2fieldsofstudy": [ "Engineering", "Medicine", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
119316385
pes2o/s2orc
v3-fos-license
Large sieve with sparse sets of moduli for $\mathbb{Z}[i]$ We establish a general large sieve inequality with sparse sets $\mathcal{S}$ of moduli in the Gaussian integers which are in a sense well-distributed in arithmetic progressions. This extends earlier work of S. Baier on the large sieve with sparse sets of moduli. We then use this result to obtain large sieve bounds for the cases when $\mathcal{S}$ consists of squares of Gaussian integers and of Gaussian primes. Our bound for the case of square moduli improves a recent result by the authors.
Introduction
The classical large sieve inequality with additive characters asserts that
$$\sum_{q\le Q}\ \sum_{\substack{a=1\\ (a,q)=1}}^{q}\ \Bigg|\sum_{M<n\le M+N} a_n e\Big(n\cdot\frac{a}{q}\Big)\Bigg|^2 \le \left(N+Q^2\right)\sum_{M<n\le M+N}|a_n|^2,$$
where $Q,N\in\mathbb{N}$, $M\in\mathbb{Z}$ and $\{a_n\}$ is any arbitrary sequence of complex numbers. There are numerous applications of this inequality in analytic number theory, in particular, in sieve theory and to questions regarding the distribution of arithmetic functions in arithmetic progressions. The large sieve with sparse sets of moduli $q$, in particular with prime moduli and with square moduli, was investigated by Wolke, Zhao and the first-named author in a series of papers (see [1], [2], [4] and [13]). In the case of prime moduli, it was established by D. Wolke [13] that
$$\sum_{p\le Q}\ \sum_{a=1}^{p-1}\ \Bigg|\sum_{M<n\le M+N} a_n e\Big(n\cdot\frac{a}{p}\Big)\Bigg|^2 \ll \cdots \tag{1}$$
provided that $Q\ge 10$, $N=Q^{1+\delta}$, $0<\delta<1$. Here, $C$ is an absolute constant. In the case of square moduli, it was first established by Zhao [12] that
$$\sum_{q\le Q}\ \sum_{\substack{a=1\\ (a,q)=1}}^{q^2}\ \Bigg|\sum_{M<n\le M+N} a_n e\Big(n\cdot\frac{a}{q^2}\Big)\Bigg|^2 \ll \cdots \tag{2}$$
S. Baier improved this to
$$\sum_{q\le Q}\ \sum_{\substack{a=1\\ (a,q)=1}}^{q^2}\ \Bigg|\sum_{M<n\le M+N} a_n e\Big(n\cdot\frac{a}{q^2}\Big)\Bigg|^2 \ll_\varepsilon (QN)^\varepsilon\left(Q^3+Q^2\sqrt{N}+N\right)\sum_{M<n\le M+N}|a_n|^2. \tag{3}$$
A further improvement was obtained by S. Baier and L. Zhao in [4], where the term $N+Q^2\sqrt{N}$ was replaced by $N+\min\{\sqrt{Q}N,\ Q^2\sqrt{N}\}$. To date, this is the best known bound. A generalization of the large sieve for number fields was established by M. Huxley [9]. For the number field $\mathbb{Q}(i)$, it takes the form
$$\sum_{\substack{q\in\mathbb{Z}[i]\\ 0<\mathcal{N}(q)\le Q}}\ \sum_{\substack{a\bmod q\\ (a,q)=1}}\ \Bigg|\sum_{\substack{n\in\mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n e\big(\Re(n\cdot a/q)\big)\Bigg|^2 \ll \left(Q^2+N\right)\sum_{\substack{n\in\mathbb{Z}[i]\\ \mathcal{N}(n)\le N}}|a_n|^2. \tag{4}$$
Here as in the following, $\mathcal{N}(q)$ denotes the norm of $q\in\mathbb{Z}[i]$, given by $\mathcal{N}(q)=q\bar{q}=|q|^2$. In [3], we studied the large sieve with square moduli for the number field $\mathbb{Q}(i)$, i.e. we investigated the order of magnitude of the expression
$$\sum_{\substack{q\in\mathbb{Z}[i]\\ 0<\mathcal{N}(q)\le Q}}\ \sum_{\substack{a\bmod q^2\\ (a,q)=1}}\ \Bigg|\sum_{\substack{n\in\mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n e\big(\Re(n\cdot a/q^2)\big)\Bigg|^2.$$
We established an analogue of (2), namely the inequality
$$\cdots \tag{5}$$
For comparison, Huxley's version (4) of the large sieve in $\mathbb{Z}[i]$ with the set of moduli extended to all $q$ with $0<\mathcal{N}(q)\le Q$ implies only the bound
$$\cdots \tag{6}$$
which is weaker than (5) if $Q\gg N^{2/7+\varepsilon}$. On the other hand, it is easy to show that
$$\sum_{\substack{a\bmod q^2\\ (a,q)=1}}\ \Bigg|\sum_{\substack{n\in\mathbb{Z}[i]\\ \mathcal{N}(n)\le N}} a_n e\big(\Re(n\cdot a/q^2)\big)\Bigg|^2 \ll \cdots \tag{7}$$
(in particular, this follows from our later Theorem 5 with $\Delta=1/\mathcal{N}(q)^2$ and $(x_r)$ being the sequence formed by all Farey fractions $a/q^2$ with $1\le|a|\le|q|^2$ and $(a,q)=1$), which implies the bound
$$\cdots \tag{8}$$
by summing up (7) over all $q\in\mathbb{Z}[i]$ with $\mathcal{N}(q)\le Q$. This bound is weaker than (5) if $Q\ll N^{1/2-\varepsilon}$. Thus, (5) is sharper than both (6) and (8) if $N^{2/7+\varepsilon}\ll Q\ll N^{1/2-\varepsilon}$. The goal of this paper is to improve (4) for sparse sets $\mathcal{S}$ of moduli which are in a sense well-distributed in arithmetic progressions. As a consequence, we derive an analogue of (3) for $\mathbb{Q}(i)$, thus improving (5). Also, we establish a large sieve inequality with Gaussian prime moduli which is an analogue of (1). Similarly as in [3], our method starts with an application of the large sieve for $\mathbb{R}^2$. Then we convert the resulting counting problem back into one for $\mathbb{Q}(i)$. At this stage, we deviate significantly from the method in [3], where we used Fourier analytic tools to attack the said counting problem. Instead, we proceed along similar lines as in [1], only using Diophantine approximation and elementary counting arguments.
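The classical inequality stated at the beginning of this introduction is easy to probe numerically. The following sketch, with arbitrary parameters and random coefficients, computes the ratio of the left-hand side to $(N+Q^2)\sum|a_n|^2$; it should never exceed 1:

```python
import cmath
import random
from math import gcd

def large_sieve_ratio(M, N, Q, trials=5):
    """Empirical check of the classical large sieve: the ratio
    LHS / ((N + Q^2) * sum |a_n|^2) should never exceed 1."""
    worst = 0.0
    for _ in range(trials):
        # Random complex coefficients a_n for M < n <= M + N.
        a = {n: complex(random.gauss(0, 1), random.gauss(0, 1))
             for n in range(M + 1, M + N + 1)}
        lhs = 0.0
        for q in range(1, Q + 1):
            for r in range(1, q + 1):
                if gcd(r, q) == 1:
                    s = sum(c * cmath.exp(2j * cmath.pi * n * r / q)
                            for n, c in a.items())
                    lhs += abs(s) ** 2
        rhs = (N + Q * Q) * sum(abs(c) ** 2 for c in a.values())
        worst = max(worst, lhs / rhs)
    return worst

print(large_sieve_ratio(M=0, N=50, Q=8))  # prints a value below 1
```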
Main results
Throughout this paper, we reserve the symbols $c_i$ ($i=1,2,\ldots$) for absolute constants and the symbol $\varepsilon$ for an arbitrary (small) positive number. The $\ll$-constants in our estimates may depend on $\varepsilon$. As usual in analytic number theory, $\varepsilon$ may be different from line to line. We further suppose $(a_n)_{n\in\mathbb{Z}[i]}$ to be any sequence of complex numbers and $Q,N\in\mathbb{N}$. For $\alpha\in\mathbb{Z}[i]$, we set
$$\cdots$$
and
$$Z:=\sum_{\substack{n\in\mathbb{Z}[i]\\ \mathcal{N}(n)\le N}}|a_n|^2. \tag{10}$$
We further suppose that $\mathcal{S}\subseteq B(0,Q^{1/2})$, where $B(0,Q^{1/2})$ denotes the closed ball with center 0 and radius $Q^{1/2}$, i.e.
$$B(0,Q^{1/2})=\{z\in\mathbb{C}:\ |z|\le Q^{1/2}\}.$$
We note that
$$\cdots$$
We shall require that the number of elements of $\mathcal{S}_t$ in small regions of arithmetic progressions in $\mathbb{Z}[i]$ (which form shifted lattices in $\mathbb{C}$) does not differ too much from the expected number. To measure the distribution of $\mathcal{S}_t$ in regions of arithmetic progressions, we define the quantity
$$A_t(u,k,l):=\cdots$$
Here $B(y,u)$ denotes the closed ball with center $y$ and radius $u$, i.e.
$$B(y,u)=\{z\in\mathbb{C}:\ |z-y|\le u\}.$$
We first establish the following large sieve inequality for general sets $\mathcal{S}$ of moduli in $\mathbb{Z}[i]$.
Theorem 1. We have
$$\cdots$$
where $Z$ is defined as in (10).
If we assume the set $\mathcal{S}_t$ to be nearly evenly distributed in the residue classes $l \bmod k$,
$$\cdots$$
This suggests imposing a condition of the form
$$\cdots$$
where $X\ge 1$ is thought to be small compared to $Q$ and $N$. Under the condition (12), we shall infer the following bound from Theorem 1.
Theorem 2. Suppose that condition (11) holds for all
$$\cdots$$
Inequality (13) is stronger than the "trivial bound" following directly from Huxley's large sieve (4), if
$$\cdots$$
Employing Theorem 2 with $\mathcal{S}$ a set of non-zero squares of norm $\le Q^2$, we shall derive the following improvement of (5).
Theorem 3. We have
$$\cdots$$
where $\varepsilon$ is any positive constant, and the implied $\ll$-constant depends only on $\varepsilon$. This bound is stronger than the three bounds (5), (6) and (8) in certain ranges of $Q$. When $\mathcal{S}$ is the full set of all Gaussian primes with norm $\le Q$, we shall establish the following version of the large sieve for $\mathbb{Z}[i]$.
Theorem 4. We have
$$\cdots$$
where $p$ runs over the Gaussian primes.
Large sieve for $\mathbb{R}^2$
We shall employ the following version of the large sieve for $\mathbb{R}^2$, proved below.
Theorem 5. Suppose that $(x_r)$ is a sequence of points in $\mathbb{R}^2$ satisfying
$$\cdots$$
where $\|\cdot\|_2$ is the Euclidean norm on $\mathbb{R}^2$. Then,
$$\cdots$$
Here as in the following, $\|x\|_2$ denotes the Euclidean norm of $x=(x_1,x_2)\in\mathbb{R}^2$, given by $\|x\|_2=\sqrt{x_1^2+x_2^2}$. To prove Theorem 5, we use the duality principle and the Poisson summation formula for $\mathbb{R}^2$.
Proposition 1 (Duality principle, Theorem 288 in [8]). Let $C=[c_{mn}]$ be a finite matrix with complex entries and let $\Delta\ge 0$. The following two statements are equivalent:
(1) For any complex numbers $a_n$, we have
$$\sum_m \Big|\sum_n a_n c_{mn}\Big|^2 \le \Delta \sum_n |a_n|^2.$$
(2) For any complex numbers $b_m$, we have
$$\sum_n \Big|\sum_m b_m c_{mn}\Big|^2 \le \Delta \sum_m |b_m|^2.$$
Proposition 2 (Poisson summation formula, see [11]). Let $f:\mathbb{R}^2\to\mathbb{C}$ be a smooth function of rapid decay and $\Lambda$ be a lattice of full rank in $\mathbb{R}^2$. Then
$$\sum_{\lambda\in\Lambda} f(\lambda) = \frac{1}{\mathrm{Vol}(\mathbb{R}^2/\Lambda)} \sum_{\lambda'\in\Lambda'} \hat{f}(\lambda'),$$
where $\Lambda'$ is the dual lattice, $\hat{f}$ is the Fourier transform of $f$, defined as
$$\hat{f}(\xi) = \int_{\mathbb{R}^2} f(x)\, e^{-2\pi i\, x\cdot\xi}\, dx,$$
and $\mathrm{Vol}(\mathbb{R}^2/\Lambda)$ is the volume of a fundamental mesh of $\Lambda$. Here as in the following, by rapid decay we mean that the function $f:\mathbb{R}^2\to\mathbb{C}$ satisfies $f(x)\ll_A (1+\|x\|_2)^{-A}$ for every $A>0$.
Conversion into a counting problem
Now we return to the large sieve for $\mathbb{Q}(i)$. We aim to estimate the quantity
$$\cdots$$
Our first step is to re-write $U$ in the form
$$\cdots$$
where $q=u+iv$, $a=x+iy$, $n=s+it$. To bound $U$, we employ Theorem 5, which immediately gives us the following.
Corollary 1. For $U$ as defined in (20), we have the bound
$$\cdots$$
where $Z$ is defined as in (10) and
$$\cdots$$
Thus, we have converted the problem into a counting problem in $\mathbb{R}^2$, which we shall now interpret as a counting problem in $\mathbb{C}$. We observe that
$$\cdots$$
It follows that
$$\cdots$$
Now we are left with counting Farey fractions in $\mathbb{C}$.
Counting Farey fractions in small regions in $\mathbb{C}$
To estimate $P(\alpha)$, we approximate $\alpha$ by a suitable element of $\mathbb{Q}(i)$.
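As a toy illustration of this approximation step, the following brute-force search (not the construction behind the Dirichlet approximation theorem in $\mathbb{C}$, and with a hypothetical target $\alpha$) finds a Gaussian-rational approximation $b/r$ with a norm-bounded denominator:

```python
import math

def best_gaussian_approximation(alpha, R):
    """Brute-force search for b, r in Z[i] with 0 < |r|^2 <= R minimizing
    |alpha - b/r|; a naive illustration of approximating a complex number
    by elements of Q(i) with a norm-bounded denominator."""
    best = None
    bound = math.isqrt(R)
    for u in range(-bound, bound + 1):
        for v in range(-bound, bound + 1):
            if 0 < u * u + v * v <= R:
                r = complex(u, v)
                w = alpha * r
                # Round to the nearest Gaussian integer b.
                b = complex(round(w.real), round(w.imag))
                err = abs(alpha - b / r)
                if best is None or err < best[0]:
                    best = (err, b, r)
    return best

err, b, r = best_gaussian_approximation(complex(0.7236, 1.4142), R=100)
print(f"alpha ~ {b}/{r}, error {err:.2e}")
```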
Using the Dirichlet approximation theorem in $\mathbb{C}$ (see [7]), $\alpha$ can be written in the form
$$\alpha=\frac{b}{r}+z$$
with $b,r,z$ satisfying (23). Thus, it suffices to estimate $P(b/r+z)$ for all $b,r,z$ satisfying (23). We further note that we can restrict ourselves to a convenient subrange of the parameters, and we deduce the following first estimates (Lemma 6 and Lemma 7), where $B(0,\sqrt{Q})$ is the closed ball with center $0$ and radius $\sqrt{Q}$.

Proof of Lemma 7: Define the counting quantity $\Pi(y,\delta)$. Then, if $\delta\le Q$, we obtain the claimed bound. ✷

Proofs of Theorems 1 and 2

Next, we express $\Pi(y,\delta)$ in terms of $A_t(u,k,l)$. This shall lead us to an estimate for $P(b/r+z)$ (Lemma 8), where $\overline{b}$ is defined by $b\overline{b}\equiv 1 \bmod r$.

Proof of Theorem 2: Using equation (12), we bound the resulting sums over the divisors $t\mid r$ and the shifts $0<|m|\le\cdots$, and we deduce that the right-hand side of the inequality in Theorem 1 is dominated by the bound claimed in Theorem 2. This completes the proof. ✷

Proof of Theorem 3

In this section, we derive Theorem 3 from Theorem 2. First, we rewrite the sum in question as a sum over the set $\mathcal{S}$ of non-zero squares with norm $\le Q^2$. We split up the set $\mathcal{S}$ into $O(\log 2Q)$ dyadic subsets determined by ranges $Q_0<N(q)\le 2Q_0$, where $2Q_0\le Q^2$. Then we shall apply Theorem 2 to bound each of the resulting quantities.

As previously, let
$$t=\epsilon\, p_1^{v_1}\cdots p_n^{v_n}$$
be a prime factorization of $t$ in $\mathbb{Z}[i]$ (unique up to associates of $p_1,\ldots,p_n$ and the unit $\epsilon$), and for $i=1,2,\ldots,n$ define the associated exponents. Noting that $|q|>\sqrt{Q_0}/|t|$ if $q\in\mathcal{S}_t(Q_0)$, we aim to verify the condition (12) for $X=N^{\varepsilon}$. Thus, our next task is to bound the cardinality of $A(y)$.

Let $\delta_t(k,l)$ be the number of solutions $x \bmod k$ to the congruence
$$x^2 g_t \equiv l \bmod k.$$
Then the number of Gaussian integers $x$ contained in a ball $B(a,r)$ and satisfying the congruence $x^2 g_t\equiv l \bmod k$ is
$$\ll \left(1+\frac{r^2}{N(k)}\right)\delta_t(k,l).$$
We deduce (33) by ignoring the condition that $q_2^2 g_t\in B(y,u)$. We shall use (33) if $\sqrt{2Q_0}/(2|t|)<u\le \sqrt{2Q_0}/|t|$. If $u\le \sqrt{2Q_0}/(2|t|)$, then we obtain a stronger bound as follows, where we define the square root of the complex number $s=\rho e^{i\varphi}$ to be $\rho^{1/2}e^{i\varphi/2}$, with $\rho:=|s|$ and $-\pi<\varphi=\arg(s)\le\pi$. The set in question consists of two connected components, one containing $\sqrt{y/g_t}$ and the other one containing $-\sqrt{y/g_t}$. By symmetry, it suffices to look at the case when $q_2$ is contained in the first component. Moreover, we may restrict ourselves to the case when $0\le\arg(y/g_t)\le\pi/2$, since all other cases are similar. In this case, $\Re(q_2)\ge 0$. Combining (34) and (35), we obtain a bound in terms of $|yg_t|$, and thus arrive at an estimate which also subsumes the bound (33) for the case $\sqrt{2Q_0}/(2|t|)<u\le\sqrt{2Q_0}/|t|$.
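As an aside, the kind of counting quantity that condition (12) constrains is easy to explore numerically for a concrete sparse set such as the Gaussian squares. The sketch below counts elements of such a set lying in a ball $B(y,u)$ and in a residue class $l \bmod k$; the function names and the normalization are ours and purely illustrative, not the paper's exact definition of $A_t(u,k,l)$:

```python
# Count elements of a sparse set S of Gaussian integers lying in the closed
# ball B(y, u) and in the residue class l mod k -- an illustrative analogue
# of the counting quantity A_t(u, k, l). Names/normalizations are ours.

def congruent(a: complex, l: complex, k: complex) -> bool:
    """True if a = l (mod k) in Z[i], i.e. (a - l)/k lies in Z[i]."""
    z = (a - l) / k
    return abs(z.real - round(z.real)) < 1e-9 and abs(z.imag - round(z.imag)) < 1e-9

def count_in_ball(S, y: complex, u: float, k: complex, l: complex) -> int:
    return sum(1 for q in S if abs(q - y) <= u and congruent(q, l, k))

# S = non-zero squares q^2 with N(q) <= Q (so the moduli have norm <= Q^2).
Q = 400
S = {complex(a, b) ** 2
     for a in range(-20, 21) for b in range(-20, 21)
     if (a, b) != (0, 0) and a * a + b * b <= Q}

# Squares within distance Q of the origin that are = 0 mod (1 + i):
print(count_in_ball(S, y=0 + 0j, u=float(Q), k=1 + 1j, l=0 + 0j))
```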
Mapping of the three-dimensional lymphatic microvasculature in bladder tumours using light-sheet microscopy

Background: Cancers are heterogeneous and contain various types of irregular structures that can go undetected when examining them with standard two-dimensional microscopes. Studies of intricate networks of vasculature systems, e.g., the tumour lymphatic microvessels, benefit largely from three-dimensional imaging data analysis.
Methods: The new DIPCO (Diagnosing Immunolabeled Paraffin-Embedded Cleared Organs) imaging platform uses three-dimensional light-sheet microscopy and whole-mount immunolabelling of cleared samples to study proteins and micro-anatomies deep inside tumours.
Results: Here, we uncovered the whole three-dimensional lymphatic microvasculature of formalin-fixed paraffin-embedded (FFPE) tumours from a cohort of 30 patients with bladder cancer. Our results revealed more heterogeneous spatial deviations in more advanced bladder tumours. We also showed that three-dimensional imaging could determine tumour stage and identify vascular or lymphatic system invasion with higher accuracy than standard two-dimensional histological diagnostic methods. There was no association between sample storage times and outcomes, demonstrating that the DIPCO pipeline can be successfully applied to old FFPE samples.
Conclusions: Studying tumour samples with three-dimensional imaging could help us understand the pathological nature of cancers and provide essential information that might improve the accuracy of cancer staging.

INTRODUCTION
Cancers are heterogeneous, and various types of irregular structures exist within them in three dimensions (3D). [1][2][3][4] To date, the lack of techniques and methods has limited researchers' and physicians' ability to spatially elucidate the entire cancer landscape. Studying solid tumours with traditional two-dimensional (2D) light microscopy restricts our findings to surface pictures. 5 However, recent advances in tissue-clearing techniques and light-sheet microscopy have enabled high-end 3D visualisation deep inside samples. [6][7][8][9] Additionally, we recently optimised the use of formalin-fixed paraffin-embedded (FFPE) samples for whole-mount immunolabeling, clearing, and imaging with light-sheet microscopy, naming the approach DIPCO (Diagnosing Immunolabeled Paraffin-Embedded Cleared Organs, Fig. 1a). 5 The time is ripe for a new imaging platform to characterise cancers, one that fills the information gap created by studying 3D objects, such as cancerous tumours, with 2D microscopy.

Studies of vasculature systems, e.g., the tumour lymphatic microvessels, benefit largely from 3D imaging data analysis. Lymphatic dissemination is the major pathway for systemic tumour spread in patients with urinary bladder cancer. 10 However, little knowledge exists about the spatial distribution of lymphatic microvessels within intact human bladder tumours. Herein, we applied the DIPCO pipeline to answer this question. Further, we demonstrated that cancer staging by 3D imaging data analysis provides greater accuracy than standard 2D histological diagnostic methods.

Sample collection
Thirty human FFPE samples from bladder cancers were included; namely, 2 from the Karolinska University Hospital in Sweden and 28 from the Medical University of Lublin in Poland. The tissues were fixed after surgery using formaldehyde and were then embedded in paraffin. One tissue block was randomly picked from each patient for further experiments.
All tumours were histologically confirmed to be urothelial carcinomas.

Immunohistochemistry of paraffin-embedded sections (IHC-P)
FFPE sections (4-6 μm) were deparaffinised and rehydrated. Antigen retrieval was then performed, and endogenous peroxidase was quenched. After blocking, the sections were incubated overnight with the primary antibody against LYVE-1 (1:100, # ab33682, Abcam), followed by the appropriate species-specific secondary antibody. The specificity of the LYVE-1 immunosignal for detecting tumour lymphatic vessels was tested and confirmed using an alternative lymphatic marker, Podoplanin (1:100, # ab10288, Abcam). Images were acquired with a fluorescence microscope (Cell Observer, Carl Zeiss, Jena, Germany).

Preparation and image processing for 3D analysis
Preparation and 3D imaging data processing of samples are described elsewhere. 5 The lymphatic endothelial hyaluronan receptor LYVE-1 was targeted to label lymphatics within tumours. 11,12 The primary and secondary antibodies used were anti-LYVE-1 (1:100, # ab33682, Abcam) and an Alexa 647-conjugated affinity-purified F(ab')2 fragment antibody (1:200, # 711-605-152, Jackson ImmunoResearch Laboratories), respectively. For tissue clearing, immunolabeled samples were incubated in methanol, dichloromethane, and finally dibenzyl ether. 6 Cleared tumours were imaged using a custom-built light-sheet microscope. 13 Amira (FEI) software was used for 3D volume rendering, vessel segmentation, and quantification. 14 Images were processed and normalised using Amira and ImageJ (National Institutes of Health, Washington, DC) software. Lymphatics were segmented according to the LYVE-1 immunosignal level 15 using an intensity-based threshold and the spatial graph view algorithms of the Amira suite, which also calculated vessel length and radius. Each vessel was automatically separated into segments at every branch point, and these segments were used for the analyses. The spatial heterogeneity of LYVE-1 expression was examined by calculating the kurtosis, skewness, and variance of the LYVE-1 expression density for each 5-μm Z-section. 5

Statistics
The values are given as the mean ± SE, median and interquartile range (IQR) for continuous variables, and frequency with percentage for categorical variables. Variables between groups were compared using the Mann-Whitney U-test. To assess the ability of the DIPCO pipeline, we carried out a receiver operating characteristic (ROC) curve analysis to distinguish cancers with advanced stages and vascular or lymphatic system invasion, i.e., lymphovascular invasion plus positive lymph node involvement. Finally, an area under the curve (AUC) value with a 95% confidence interval (CI) was determined for discrimination. Statistical significance was accepted for P values < 0.05. All analyses were performed using the SPSS version 22.0 statistical software package.

RESULTS AND DISCUSSION
Clinical FFPE samples from a cohort of 30 patients with bladder cancer, of which 20% had low-grade tumours and 80% had high-grade tumours, were assessed. The pathological T stage was Ta-1 in 53%, T2 in 20%, and T3-4 in 27% of cases. Lymphovascular invasion was observed in five patients (17%) and positive lymph node involvement in four patients (13%). The median storage time for the FFPE samples was 20 months (IQR, 13-71) (Fig. 1b), and no association between sample storage time and imaging quality was observed. The FFPE tumours were cleared (Fig. 1c), immunolabeled for LYVE-1 and studied by applying the DIPCO pipeline. 5
LYVE-1 is predominantly expressed on the initial lymphatic vessels and not on the collecting lymphatics. 16,17 The 3D imaging data analysis revealed heterogeneous lymphatic microvessels (Fig. 1d, Supplementary Video 1) with diversified vessel thicknesses (Fig. 1e, Supplementary Video 2) throughout the entire tumour. The specificity of the LYVE-1 antigen for detecting tumour lymphatics in bladder tumour tissue was verified using the alternative lymphatic marker Podoplanin (Supplementary Fig. 1). We then examined seven parameters: the 2D LYVE-1 density, the 3D LYVE-1 density, the architectural features of lymphatic vessel length and radius, and the spatial heterogeneity features of LYVE-1 density kurtosis, skewness, and variance (Supplementary Table 1). Of these parameters, only the 2D LYVE-1 density was acquired using 2D imaging of IHC-P. The 3D imaging data analysis demonstrated that the LYVE-1 density kurtosis was significantly higher in advanced tumour stages (Fig. 1f). There was no association between sample storage times and outcomes (Table 1), indicating that the DIPCO pipeline can be applied to study old FFPE samples.

To determine the clinical relevance of examining bladder tumours with 3D imaging, ROC curves were then constructed to predict pathological features, such as advanced pT stage (Fig. 2a) and positive vascular or lymphatic system invasion (Fig. 2b). Measuring the 2D LYVE-1 density gave AUC values of 0.568 for detecting pT3 tumours or greater and 0.609 for detecting positive vascular or lymphatic system invasion (Fig. 2c). Both values improved when the parameters were transferred into 3D, especially for predicting positive vascular or lymphatic system invasion (AUC = 0.658). Furthermore, assessing the LYVE-1 density kurtosis revealed the highest AUC values: 0.756 for detecting pT3 tumours or greater and 0.702 for detecting positive vascular or lymphatic system invasion (Fig. 2c). The remaining 3D imaging parameters, lymphatic vessel length and radius, were also better at predicting cancer stage than 2D imaging (Fig. 2c).

Light-sheet microscopy offers 3D imaging of cleared tissues deep inside samples. Furthermore, this new technology can visualise immunolabeled samples with micro-scale resolution, possibly providing a diagnostic advantage over other 3D imaging alternatives, e.g., high-resolution magnetic resonance microscopy. 18 Our results showed that light-sheet microscopy of cleared bladder tumours could uncover lymphatic microvasculatures running throughout entire tumours. Moreover, there was no association between the sample storage time and the quality of the 3D imaging analysis, as the DIPCO pipeline could be successfully applied to FFPE samples that had been stored for over 9 years.

Lymphangiogenesis is vital for various cancer types, participating in tumour development and metastasis. 19,20 We examined the clinical relevance of using 3D imaging to analyse the expression pattern of the lymphatic vessel marker LYVE-1, revealing more heterogeneous spatial deviations in samples from patients with advanced bladder cancers. The highest AUC values of LYVE-1 density kurtosis were obtained for diagnosing pT3 tumours or greater (0.756) and positive vascular or lymphatic system invasion (0.702), exceeding the values obtained from 2D imaging data analysis.
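The per-Z-section heterogeneity metrics and the ROC analysis described above are straightforward to compute. A minimal sketch follows, using a placeholder image stack and toy per-patient labels in place of the study's actual data; variable names and values are illustrative assumptions:

```python
# Sketch of per-Z-section heterogeneity metrics (kurtosis, skewness, variance
# of LYVE-1 density) and ROC AUC estimation; all inputs are placeholders.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.metrics import roc_auc_score

def heterogeneity_features(stack: np.ndarray) -> dict:
    """Kurtosis, skewness and variance of mean signal density across Z-sections."""
    density_per_section = stack.reshape(stack.shape[0], -1).mean(axis=1)
    return {
        "kurtosis": kurtosis(density_per_section),
        "skewness": skew(density_per_section),
        "variance": np.var(density_per_section),
    }

rng = np.random.default_rng(0)
stack = rng.random((120, 256, 256))          # placeholder 5-um Z-sections (z, y, x)
print(heterogeneity_features(stack))

# ROC AUC for discriminating >= pT3 tumours using a candidate 3D metric:
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = pT3 or greater (toy data)
scores = np.array([0.2, 0.4, 0.9, 0.7, 0.3, 0.8, 0.5, 0.6])
print("AUC =", roc_auc_score(labels, scores))
```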
Precisely characterising and diagnosing tumours is essential for physicians to establish appropriate counselling and treatment. In summary, these results show the capacity of light-sheet microscopy to phenotypically characterise intact bladder tumours and to improve the accuracy of cancer staging. The limitations of this study are its retrospective nature, the small cohort, and the lack of survival data with treatment annotations, with the exception of three patients who were deceased.
Cell cycle exit during bortezomib-induced osteogenic differentiation of mesenchymal stem cells was mediated by Xbp1s-upregulated p21Cip1 and p27Kip1

Abstract
Mesenchymal stem cells (MSCs) are multipotent cells capable of differentiating into a variety of cell types. Bortezomib, the first approved proteasome inhibitor used for the treatment of multiple myeloma (MM), has been shown to induce osteoblast differentiation, making it beneficial for myeloma bone disease. In the present study, we aimed to investigate the effects and underlying mechanisms of bortezomib on the cell cycle during osteogenic differentiation. We confirmed that low doses of bortezomib can induce MSCs towards osteogenic differentiation, but high doses are toxic. In the course of bortezomib-induced osteogenic differentiation, we observed cell cycle exit characterized by G0/G1 phase cell cycle arrest with a significant reduction in cell proliferation. Additionally, we found that the cell cycle exit was tightly related to the induction of the cyclin-dependent kinase inhibitors p21Cip1 and p27Kip1. Notably, we further demonstrated that the up-regulation of p21Cip1 and p27Kip1 is transcriptionally dependent on the bortezomib-activated ER stress signalling branch Ire1α/Xbp1s. Taken together, these findings reveal an intracellular pathway that integrates proteasome inhibition, osteogenic differentiation and the cell cycle through activation of the ER stress signalling branch Ire1α/Xbp1s.

medicine and immune diseases. 3,4 Nonetheless, our understanding of the mechanisms by which MSCs impact clinical and immunological abnormalities in these diseases remains incomplete. For instance, MSCs have been suggested to be attracted to primary tumours, thereby contributing to tumour metastasis as well as drug resistance. [5][6][7][8][9][10] On the other hand, chemotherapeutic drug treatments have been shown to alter the phenotype and differentiation potential of MSCs, and even render them more chemoprotective of the tumour cells. [11][12][13][14] Accordingly, further therapeutic efforts to target MSCs may help to prevent chemoresistance and disease relapse in tumours.

The proteasome is a central component of the protein degradation machinery in eukaryotic cells. Inhibition of the proteasome has emerged as a powerful approach for the treatment of multiple myeloma (MM), a haematologic cancer characterized by the accumulation of malignant plasma cells in the bone marrow (BM). 15 Bortezomib, as the first approved proteasome inhibitor, has been used as a first-line drug for the treatment of MM. 16 In addition to its direct antitumour activity, bortezomib also exerts bone protection effects in MM patients. Of note, the effect of bortezomib on bone formation has been suggested to be related to the enhanced differentiation of MSCs towards osteoblasts. 13,17,18 Although the fate determination and terminal differentiation of MSCs are known to be tightly controlled by diverse transcription factors and signalling pathways, many observations have identified important connections between cell fate decisions and the cell cycle machinery in pluripotent stem cells. [19][20][21] For example, terminal differentiation is usually associated with cell cycle exit, and the transition through mitosis and G1 phase plays an essential role in establishing a window of opportunity for pluripotency exit and the initiation of differentiation. 19
The purpose of this study was to determine the mechanisms by which the bortezomib-induced differentiation of MSCs towards osteoblasts affects the cell cycle machinery.

The antibodies against p21Cip1, p27Kip1, X-box-binding protein 1 (Xbp1s), activating transcription factor 6 (Atf6), 78 kDa glucose-regulated protein (Grp78), C/EBP homologous protein (Chop), cyclin D3, cyclin E1, cyclin-dependent kinase 2 (CDK2), cyclin-dependent kinase 4 (CDK4) and β-actin were obtained from Proteintech (Wuhan, Hubei, China), and the antibody against activating transcription factor 4 (Atf4) was obtained from Santa Cruz Biotechnology (Dallas, TX, USA). All other chemicals were obtained from Sigma-Aldrich (Burlington, MA, USA) unless otherwise specified.

mBM-MSC isolation and expansion
Inbred male C57BL/6 mice aged 4-6 weeks were purchased from the animal centre of Xi'an Jiaotong University, and housed and treated according to conditions approved by the Ethical Committee for Animal Experiments of the Xi'an Jiaotong University Health Science Center (No. 2015-123). In brief, individual mice were killed by cervical dislocation, and the whole body was soaked thoroughly with 70% ethanol solution for 2 min. Following dissection of the hind legs and vertebrae, all tissues were removed from around the bones, and the bones were placed in a Petri dish with 5 mL of Dulbecco's modified Eagle's medium (DMEM; HyClone, Logan, Utah, USA). The ligaments between the femur and hip were cut, and the bone was cut below the ankle joint. The tibia was separated from the femur by bending slightly at the knee joint. Holding the femur/tibia with sterile forceps, both epiphyses were then removed with sterile scissors. The contents of the bones were then flushed, with a 1-mL syringe with a needle, into a Petri dish with 5 mL of medium. The medium was then aspirated and flushed several times to disperse the bone marrow cells. The vertebrae were then crushed with the backside of a 5-mL syringe in 5 mL of medium. The cell suspensions were then filtered through a nylon filter (70-μm mesh diameter) into a 50-mL tube.

Alizarin Red S staining
mBM-MSCs were plated in 35-mm-diameter culture dishes and grown in DMEM containing 10% FBS, 100 U/mL penicillin, 100 μg/mL streptomycin, and 2 mM L-glutamine at 37°C in a humidified incubator with 5% CO2 in air. When the cell density reached 70%-80%, the cells were subjected to osteogenic induction, and mineralization was subsequently assessed by Alizarin Red S staining.

RNA purification and real-time PCR analysis
Total RNA from cells was extracted using an Ultrapure RNA Kit. The primers used are listed in Table S1.

Western blotting analysis
Western blotting was performed as described previously. 16

Chromatin immunoprecipitation
Chromatin immunoprecipitation (ChIP) was performed as described previously. 16 Briefly, mBM-MSCs treated with vehicle or 2.5 nM bortezomib for 16 h were cross-linked with 1% formaldehyde. Immunoprecipitated DNA was analysed by real-time PCR with primers (Table S2) covering the putative regions of the p21Cip1 and p27Kip1 promoters.

Statistical analysis
Results were statistically analysed in GraphPad Prism 5.0 (GraphPad Software Inc, San Diego, CA, USA) and presented as mean ± SEM. Statistically significant differences between two groups were assessed by a two-tailed unpaired t test. P < .05 was considered statistically significant.

Bortezomib decreases mBM-MSC cell viability
To assess the effects of bortezomib on cell viability, we performed MTT assays in mBM-MSCs grown in various concentrations of bortezomib for 24 h and 48 h. As shown in Figure 1A, bortezomib decreased mBM-MSC viability in a dose-dependent manner, with low doses being well tolerated and high doses being toxic. Alizarin Red S staining further showed that mBM-MSCs induced with low doses of bortezomib deposited much higher amounts of calcium phosphate crystals than the control cells.
To further prove the regulatory role of bortezomib in osteogenesis, we measured changes in other bone formation markers and confirmed that bortezomib can induce the expression of Runx2, Sp7, Col1A1, alkaline phosphatase (ALP) and osteocalcin (OCN/BGLAP) (Figure S1).

Bortezomib inhibits mBM-MSC cell proliferation
Given the potential association between cell differentiation and proliferation, we further investigated the effects of bortezomib on cell proliferation during bortezomib-induced osteogenic differentiation. Using an EdU incorporation assay, we found that bortezomib dose-dependently decreased the number of EdU-positive mBM-MSCs, which represent the proliferating population (Figure 2A and B).

Bortezomib induces G0/G1 phase cell cycle arrest
Based on the finding above that bortezomib inhibits the proliferation of mBM-MSCs, we further analysed its effect on the cell cycle distribution. As shown in Figure 2C and D, bortezomib treatment for 24 h significantly induced G0/G1 phase arrest in mBM-MSCs. Compared with the control group, the percentages of G0/G1 phase cells treated with 2.5 nM and 5 nM bortezomib were increased from 55.14 ± 5.132 to 67.36 ± 6.067 and 68.117 ± 2.743, respectively. In contrast, the proportion of S phase cells was decreased from 32.017 ± 1.991 to 21.807 ± 2.844 and 19.940 ± 4.321. However, there was no significant change in the proportion of cells in the G2/M phase.

Bortezomib triggers changes in the cell cycle machinery
To further determine the molecular mechanism underlying the G0/G1 phase cell cycle arrest, we examined the effects of bortezomib on the expression of G0/G1 phase-associated cyclins, cyclin-dependent kinases (CDKs) and cyclin-dependent kinase inhibitors (CKIs). As shown in Figure 3A, bortezomib treatment had no effect on the expression of cyclin D3 and cyclin E1. However, the expression of Cdk2 and Cdk4 was markedly decreased by bortezomib (Figure 3B). In contrast, the expression of p21Cip1 and p27Kip1 was significantly increased by bortezomib (Figure 3C). In line with the increase in p21Cip1 and p27Kip1 at the protein level, we further observed that the mRNA levels of p21Cip1 and p27Kip1 were significantly up-regulated by bortezomib (Figure 3D).

ER stress signalling via Xbp1s is involved in the transcriptional regulation of bortezomib-induced p21Cip1 and p27Kip1
To further investigate whether ER stress is involved in bortezomib-induced G0/G1 phase arrest, we analysed the expression of key ER stress signalling-related proteins, including the ER stress markers Grp78 and Chop, as well as the three major regulators Xbp1s, Atf4 and Atf6, in response to bortezomib treatment. As shown in Figure 4A, bortezomib markedly induced these ER stress-related proteins, indicating activation of ER stress signalling.

To validate the regulatory relationship between the activation of ER stress signalling and the induction of p21Cip1 and p27Kip1, we used MKC3946 (an inhibitor of inositol-requiring enzyme 1α (IRE1α)) and GSK2606414 (an inhibitor of double-stranded RNA-activated protein kinase [PKR]-like ER kinase (PERK)) to block the bortezomib-activated ER stress signalling pathways accordingly. As shown in Figure 4B, when the bortezomib-induced Xbp1s was abolished by MKC3946, we also found a decrease in the expression of p21Cip1 and p27Kip1. However, when GSK2606414 was used to block PERK-Atf4 signalling, although we observed a marked decrease in Atf4, it had no significant effects on the expression of p21Cip1 and p27Kip1 (Figure 4C).
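Group comparisons such as the G0/G1 percentages above rest on the two-tailed unpaired t test described in the statistics section. A minimal sketch of that computation follows; the replicate values are invented for illustration (only their means echo the figures reported above):

```python
# Two-tailed unpaired t test on toy G0/G1-phase percentages (n = 3 per group),
# mirroring the statistics described above; replicate values are illustrative.
import numpy as np
from scipy import stats

control = np.array([55.1, 60.2, 50.1])      # % G0/G1, vehicle
treated = np.array([67.4, 73.4, 61.3])      # % G0/G1, 2.5 nM bortezomib

t, p = stats.ttest_ind(control, treated)    # two-tailed by default
print(f"mean+/-SEM control: {control.mean():.2f} +/- {stats.sem(control):.2f}")
print(f"mean+/-SEM treated: {treated.mean():.2f} +/- {stats.sem(treated):.2f}")
print(f"t = {t:.3f}, P = {p:.4f} -> {'significant' if p < .05 else 'ns'} at P < .05")
```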
More importantly, we further confirmed that the MKC3946-mediated abolition of p21Cip1 and p27Kip1 up-regulation occurred at the mRNA level, as validated by real-time PCR (Figure 4D). Given the potential effects of MKC3946 on the cells, we further analysed the changes in the cell cycle and found that the combination of MKC3946 with bortezomib significantly decreased the percentage of S phase cells, but MKC3946 alone had no effect on the cell cycle distribution (Figure S2). These results strongly suggest that the activation of Xbp1s may be tightly associated with the expression of p21Cip1 and p27Kip1.

Enforced expression of XBP1s up-regulates p21Cip1 and p27Kip1 and induces G0/G1 cell cycle arrest in mBM-MSCs
To further investigate the role of Xbp1s in cell cycle arrest, we used a Tet-On lentiviral system to overexpress human spliced XBP1 in mBM-MSCs. We found that enforced expression of XBP1s inhibited cell cycle progression, up-regulating p21Cip1 and p27Kip1 and inducing G0/G1 arrest.

Transcriptional regulation of p21Cip1 and p27Kip1 by Xbp1s
To elucidate the potential transcriptional regulation by Xbp1s, we sought to determine whether Xbp1s binds to the p21Cip1 and p27Kip1 promoters.

DISCUSSION
The development of multicellular organisms relies on the temporal and spatial control of cell proliferation and differentiation. 19,[22][23][24] Developmental signals not only direct cell cycle progression but also set the frame for cell cycle regulation by determining cell type-specific cell cycle modes. 25,26 Usually, inhibition of the cell cycle is a requisite for terminal differentiation. 23,25,27,28 However, the precise cell cycle mechanisms of the growth/differentiation transition remain unclear. In this study, we found that there exists a cell cycle exit, mediated by the accumulation of the CKIs p21Cip1 and p27Kip1, during bortezomib-induced osteogenic differentiation of MSCs, and that this accumulation is transcriptionally driven by Xbp1s.

Bortezomib is an inhibitor of the 26S proteasome, which plays a central role in protein degradation. The introduction of bortezomib has been a major breakthrough in the treatment of MM. 29 Besides its anti-MM activity, both preclinical and clinical data also substantiate that bortezomib plays a significantly beneficial role in bone formation. 30 Increased osteoblast differentiation in the BM has been hypothesized as one possible mechanism behind this bone protection. 17,[31][32][33][34][35] In the current study, using mBM-MSCs as an in vitro model, we demonstrated that bortezomib can induce osteogenic differentiation, as validated by the markedly enhanced ARS staining. Our findings in mBM-MSCs confirmed the previous reports in human MSCs. 13,36 Meanwhile, the EdU incorporation assay demonstrated that cell proliferation was almost entirely blocked by bortezomib. Cell cycle analysis further indicated that a G0/G1 phase arrest was induced by bortezomib in mBM-MSCs. These findings strongly indicate a link between G0/G1 phase arrest and bortezomib-induced differentiation.

Cell cycle progression is tightly governed by CDKs, which are the major regulators of the cell division cycle, activated by cyclin binding and inhibited by CKIs. 36,37 Close cooperation within this trio is necessary for ensuring orderly progression through, or exit from, the cell cycle. For this reason, we further studied the changes in cyclins, CDKs and CKIs in response to bortezomib treatment and found that the expression of the G0/G1 phase-related CDKs Cdk2 and Cdk4 was decreased by bortezomib. More importantly, the expression of p21Cip1 and p27Kip1 was observed to be increased significantly by bortezomib.
Considering that p21Cip1 and p27Kip1 have been extensively characterized as negative regulators of progression through G1 to S phase in mammalian cells, and that several lines of evidence suggest that p21Cip1 and p27Kip1 exert similar effects on cell cycle progression by mediating the inhibition of Cdk2 and/or Cdk4 activities, 38,39 it is reasonable to speculate that the induction of p21Cip1 and p27Kip1 may play an important role in the cell cycle exit induced by bortezomib.

It is known that p21Cip1 and p27Kip1 can inhibit cell cycle progression in response to numerous stimuli, but little is known about their regulation by ER stress signalling, which operates through the three major branches IRE1α-XBP1, PERK-Atf4 and Atf6. 44 Focusing on the mechanisms inducing p21Cip1 and p27Kip1, we further investigated whether their up-regulation is related to the ER stress signalling activated by bortezomib. Firstly, we found that the up-regulation of p21Cip1 and p27Kip1 occurred at the mRNA level. Next, we confirmed that bortezomib can activate both the PERK-Atf4 and IRE1α-Xbp1s signalling pathways in mBM-MSCs. Thirdly, we confirmed that Xbp1s, rather than Atf4, plays the major role in regulating p21Cip1 and p27Kip1 expression. More importantly, by performing ChIP assays, we demonstrated the direct interaction between Xbp1s and the promoters of p21Cip1 and p27Kip1, further supporting the role of Xbp1s in transactivating the transcriptional activity of p21Cip1 and p27Kip1.

Xbp1 is a bZIP (basic-region leucine zipper) transcription factor that interacts specifically with the conserved X2 boxes of major histocompatibility complex class II gene promoters. 45 Xbp1 can yield two isoforms: unspliced Xbp1 (Xbp1u) and spliced Xbp1 (Xbp1s). In response to ER stress, the mRNA of Xbp1u is spliced to generate Xbp1s, which is considered the active form, playing a pivotal role in ER stress signalling. Nonetheless, Xbp1u has also been shown to inhibit Xbp1s-mediated effects. For example, Xbp1u has been demonstrated to down-regulate the expression of p21Cip1 by negatively regulating the p53/p21 axis. 46 Moreover, it has been indicated that Xbp1s is essential for bone morphogenic protein 2-induced osteoblast differentiation through up-regulating the transcription of Osterix, an osteoblast-specific transcription factor. 47 We further demonstrated that Xbp1s plays central roles in regulating several osteogenic differentiation-related genes in response to bortezomib stimuli (data not shown). Focusing on the effect of Xbp1s on the cell cycle, we further showed that forced expression of XBP1s in mBM-MSCs can directly trigger the accumulation of p21Cip1 and p27Kip1. Meanwhile, we observed that forced expression of Xbp1s can drive mBM-MSC differentiation into osteoblasts (data not shown).

FIGURE 6: Xbp1s binds to the promoters of p21Cip1 and p27Kip1. (A) Graphic representation of the putative Xbp1s binding sites in the p21Cip1 and p27Kip1 promoters. Two putative Xbp1s binding sites were identified in the promoters of p21Cip1 and p27Kip1 by searching the Eukaryotic Promoter Database. (B-C) Chromatin immunoprecipitation followed by real-time PCR assay of Xbp1s binding to the p21Cip1 and p27Kip1 promoters in response to 0 and 2.5 nM bortezomib treatment for 16 h. Results are expressed as percentage of input. *P < 0.05 compared with control (n = 3).

FIGURE 7: Diagrammatic presentation of the potential mechanism of cell cycle exit during bortezomib-induced osteogenic differentiation of mBM-MSCs.
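The ChIP-qPCR results in Figure 6 are "expressed as percentage of input". A minimal sketch of that standard calculation follows (adjust the input Ct for the input dilution, then convert the ΔCt to a fraction); the dilution factor and Ct values are illustrative assumptions, not the study's data:

```python
# Percent-of-input calculation for ChIP-qPCR (illustrative values only).
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """ChIP signal as % of input chromatin.

    ct_input is first adjusted to represent 100% input, assuming the saved
    input sample was `input_fraction` of the chromatin used per IP.
    """
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Toy Ct values for an Xbp1s IP at a p21Cip1 promoter site, 1% input saved:
print(f"{percent_input(ct_ip=28.5, ct_input=24.0, input_fraction=0.01):.3f}% of input")
```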
In addition to the well-known function of CKIs in cell cycle control, it is becoming increasingly apparent that CKIs also play indispensable roles in processes such as transcription and epigenetic regulation. Both p21Cip1 and p27Kip1 are known to interact with a range of transcription factors involved in modulating the expression of numerous genes in various biological processes. 48 In this regard, one limitation of this study is that we cannot conclude whether the up-regulated p21Cip1 and p27Kip1 directly stimulate the expression of osteogenic-related genes. Secondly, we cannot conclude whether p21Cip1 and p27Kip1 play redundant roles in this process. For example, although both p21Cip1 and p27Kip1 proteins were induced during erythroid differentiation, only p27Kip1 is associated with the inactivation of Cdk2, and p21Cip1 may have a function independent of growth arrest during erythroid differentiation. 49 In myeloid leukaemia cells, p21Cip1 and p27Kip1 have been demonstrated to induce distinct cell cycle effects and differentiation programmes. 39

CONCLUSIONS
In this study, we demonstrated that bortezomib-induced p21Cip1 and p27Kip1 are required for cell cycle exit during osteogenic differentiation and that the induction of p21Cip1 and p27Kip1 by bortezomib is transcriptionally regulated by activation of the ER stress signalling pathway Ire1α/Xbp1s. These findings may provide valuable information enabling a better understanding of the mechanisms underlying proteasome inhibitor-induced osteogenic differentiation of MSCs.

ACKNOWLEDGEMENTS
This research was supported by the National Natural Science Foundation of China.

CONFLICT OF INTEREST
The authors report no conflict of interest.

AUTHOR CONTRIBUTIONS
JH and DZ designed the experiments, analysed and interpreted the experimental results and wrote the manuscript. RF, LL, LL, YM and NL performed most of the experiments and analysed the experimental data. PC and RAW carried out Western blotting and real-time PCR analysis. BW made substantial contributions to the conception and design of the study and revised the manuscript. All authors read and approved the manuscript.

DATA AVAILABILITY STATEMENT
The data used to support the findings of this study are available from the corresponding author upon request.
catena-Poly[[chloridotris(1,3-thiazolidine-2-thione-κS)cadmium(II)]-μ-chlorido]

In the structure of the title compound, [CdCl2(C3H5NS2)3]n, the CdII atom is coordinated by three S and three Cl atoms in a mer arrangement.

Structure description
1,3-Thiazolidine-2-thione (tzdSH: C3H5NS2) is a well-known heterocyclic thione/thiol ligand. Crystallographic studies and investigations of its modes of coordination have been reported (Saithong et al., 2007). We are interested in the coordination behaviour and structure of tzdSH complexes with CdII chloride. The synthesis is accompanied by a transformation of tzdSH (C3H5NS2, thiol form) into a tzdt ligand (C3H5NS2, thione form). A similar transformation was described previously by Saithong et al. (2014). Metal complexes of thiones and thionates were reviewed by Raper (1997). The above structural studies show that thiones coordinate to cadmium(II) via the sulfur atom. To further investigate the structural aspects of such complexes, we report in this work a complex with a CdII:thione ratio of 1:3.

The asymmetric unit consists of a cadmium(II) ion bonded to three 1,3-thiazolidine-2-thione moieties via the exocyclic sulfur atom and to two Cl atoms (Fig. 1). The Cd—S and Cd—Cl bond lengths are in the ranges 2.7004 (11)-2.7347 (13) and 2.5430 (12)-2.7258 (16) Å, respectively. These bond lengths are slightly different from those reported in the literature [Cd—S = 2.604 Å and Cd—Cl = 2.7105 Å; Bell et al., 2004]. This may be due to the intramolecular hydrogen bonds observed in the crystal structure. In the crystal, one of the Cl− anions connects two neighbouring CdII centres, leading to polymeric chains. No hydrogen bonds are observed between the chains. The structure of the compound can be described as parallel chains running along the a-axis direction. The conformation of the chains is stabilized by N—H⋯Cl hydrogen bonds (Table 1, Fig. 2).

Figure 1: The asymmetric unit of the title compound with displacement ellipsoids drawn at the 50% probability level.
Figure 2: Packing diagram of the title compound. N—H⋯Cl hydrogen bonds are shown as light blue dashed lines.

Synthesis and crystallization
Crystals appeared after the light yellow filtrate had been kept at room temperature for two days (yield 75%).

Refinement
Crystal data, data collection and structure refinement details are summarized in Table 2.

Special details
Geometry. All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.
Refinement. H atoms were refined using a riding model with N—H = 0.86 Å or C—H = 0.97 Å and U(H) = 1.2Ueq(C,N).
Unified Framework for Secrecy Characteristics With Mixture of Gaussian (MoG) Distribution

The mixture of Gaussian (MoG) distribution was proposed to model wireless channels by implementing the completely unsupervised expectation-maximization (EM) learning algorithm. Given its high convenience for density estimation applications, this letter investigates the secrecy metrics, including the secrecy outage probability (SOP), the lower bound of the SOP, the probability of non-zero secrecy capacity (PNZ), and the average secrecy capacity (ASC), from the information-theoretic perspective. The above-mentioned metrics are derived with simple and unified closed-form expressions. The effectiveness of our obtained analytical expressions is successfully examined and compared with Monte-Carlo simulations. One can conclude that this letter provides a simple but effective closed-form secrecy analysis solution exploiting the MoG distribution.

To this end, we were motivated to seek a more general and flexible model, one that can encompass or generalize most of the well-known fading channel models to a large extent. The mixture gamma (MG) distribution and the Fox's H-function distribution have proved to be two promising candidates to address the aforementioned concern. The MG distribution was proposed by Atapattu et al. in [15] to model the signal-to-noise ratio (SNR) of wireless channels. This distribution can characterize the SNRs of composite fading channels with high accuracy. The application of the MG distribution to characterizing physical layer security was effectively verified in [3], where the secrecy outage probability (SOP), the probability of non-zero secrecy capacity (PNZ), and the average secrecy capacity (ASC) are derived with closed-form expressions in terms of the Fox's H-function. In parallel, the feasibility of utilizing the Fox's H-function distribution was explored in [12], which provides a unified secrecy analysis framework; the analytical results therein indicate that a simple transformation of the fading channel characteristics into the Fox's H-function form can largely encompass the existing works [1], [5]-[7], [10].

As discussed earlier, both the MG and Fox's H-function distributions are useful and beneficial, but they are limited to the scenario in which all the channel characteristics, i.e., the probability density functions (PDFs) and cumulative distribution functions (CDFs) of the fading channel models, are known. To the authors' best knowledge, no work has considered the scenario in which the PDFs and CDFs are not exactly known. A possible answer to such a concern is the mixture of Gaussian (MoG) distribution. Selim et al. in [16] proposed the MoG distribution to model wireless channels, with the unsupervised expectation-maximization (EM) learning algorithm utilized to estimate the parameters of the MoG distribution. The findings of [16] show that the MoG distribution is especially advantageous for approximating any arbitrarily shaped non-Gaussian density, and can accurately model both composite and non-composite channels with a very simple expression. Motivated by [16], the main contributions of this letter are twofold: (1) providing a simple but effective information-theoretic secrecy analysis solution under the condition of unknown fading channel characteristics.
Specifically, highly accurate closed-form SOP, PNZ, and ASC expressions are derived with the aid of the MoG distribution; and (2) validating the tightness of our obtained results with Monte-Carlo simulations.

II. SYSTEM MODEL
Consider the classic Alice-Bob-Eve wiretap model. It is assumed that the instantaneous received SNRs at Bob and Eve are $\gamma_i=\bar{\gamma}_i h_i^2$, $i\in\{B,E\}$, where $\bar{\gamma}_i$ is the average received SNR and $h_i$ is the channel coefficient, modeled as a MoG-distributed random variable (RV). The PDF and CDF of $\gamma_i$ are respectively given by [16, eqs. (23) and (45)]:
$$f_{\gamma_i}(\gamma)=\sum_{l=1}^{C_i}\frac{w_l}{2\sqrt{2\pi\,\eta_l\,\bar{\gamma}_i\,\gamma}}\,\exp\!\left(-\frac{\big(\sqrt{\gamma/\bar{\gamma}_i}-\mu_l\big)^2}{2\eta_l}\right), \qquad \mathrm{(1a)}$$
$$F_{\gamma_i}(\gamma)\overset{(a)}{=}\sum_{l=1}^{C_i}w_l\,\Phi\!\left(\frac{\sqrt{\gamma/\bar{\gamma}_i}-\mu_l}{\sqrt{\eta_l}}\right) \qquad \mathrm{(1b)}$$
$$\phantom{F_{\gamma_i}(\gamma)}=\frac{1}{2}\sum_{l=1}^{C_i}w_l\left[1+\mathrm{erf}\!\left(\frac{\sqrt{\gamma/\bar{\gamma}_i}-\mu_l}{\sqrt{2\eta_l}}\right)\right], \qquad \mathrm{(1c)}$$
where $C_i$ represents the number of Gaussian components, and $w_l>0$, $\mu_l$ and $\eta_l$ are the $l$th weight, mean, and variance, with the constraint $\sum_{l=1}^{C_i}w_l=1$; these parameters can be evaluated using the unsupervised EM learning algorithm. $\bar{\gamma}_i$ is the average SNR at the receiver. $\mathrm{erf}(x)$ and $\Phi(x)$ are the error function and the CDF of the standard normal distribution, respectively. Step (a) is developed for the sake of simplifying the following derivations.

According to [1], for one realization of the $(\gamma_B,\gamma_E)$ pair, the instantaneous secrecy capacity over quasi-static wiretap fading channels is defined as
$$C_s=\big[\log_2(1+\gamma_B)-\log_2(1+\gamma_E)\big]^{+}, \qquad (2)$$
where $[x]^{+}=\max\{x,0\}$.

A. SOP Characterization
Secure communication can be guaranteed only when the target secrecy rate $R_t$ is lower than the instantaneous secrecy capacity. The SOP is a pivotal and crucial secrecy indicator, widely used to characterize the probability that perfect secrecy is compromised.

Theorem 1: The SOP admits a simple, unified closed-form expression.

Proof: For a given target secrecy rate $R_t$, the SOP is mathematically defined as $P_{out}=\Pr(C_s\le R_t)$ [6], which can be further developed as
$$P_{out}=\Pr\big(\gamma_B\le R_s\,\gamma_E+W\big), \qquad (4)$$
where $R_s=2^{R_t}$ and $W=2^{R_t}-1$. Next, plugging (1a) and (1b) into (4), step (b) is developed by applying the change of variables $y=\sqrt{\gamma/\bar{\gamma}_E}$. Since $y$ is a normally distributed RV, i.e., $y\sim\mathcal{N}(\mu_k,\eta_k)$, subsequently applying the result given in [9, eq. (4)], the proof for $P_{out}$ is achieved.

Due to the difficulty of deriving the exact closed-form SOP expression, the lower bound of the SOP, $P^{L}_{out}$, is widely used to capture the asymptotic behavior of the SOP in two scenarios: (i) $R_t\to 0$; and (ii) both $\gamma_B$ and $\gamma_E$ operating in the high-SNR regime. As such, $P^{L}_{out}$ is developed as
$$P^{L}_{out}=\Pr\big(\gamma_B\le R_s\,\gamma_E\big)\le P_{out}. \qquad (6)$$
Next, substituting (1a) and (1c) into (6) and subsequently making the change of variables $y=\sqrt{\gamma/\bar{\gamma}_E}$ yields an integral $U$; applying [17, eq. (3.462.1)] to $U$ and after some mathematical manipulations, $P^{L}_{out}$ is eventually derived. For simplicity of notation in what follows, let $\rho=\bar{\gamma}_B/\bar{\gamma}_E$.

B. PNZ Characterization
The PNZ is regarded as another important secrecy metric, measuring the existence of a positive secrecy capacity with probability $P_{nz}$; mathematically, a positive secrecy capacity is achieved when $\gamma_B>\gamma_E$.

Theorem 2: The PNZ is given in closed form by (9a); apparently, $P_{nz}$ does not vary with the change of $\rho$.

Proof: Revisiting the definition of $P_{nz}$ [12], i.e.,
$$P_{nz}=\int_{0}^{\infty}F_{\gamma_E}(\gamma)\,f_{\gamma_B}(\gamma)\,d\gamma,$$
and then following the same procedure as in the proof of $P^{L}_{out}$, the proof is finished. Equation (9b) is obtained by using the relation between $\mathrm{erf}(x)$ and $\Phi(x)$.

C. ASC Characterization
The ASC is another secrecy metric, quantifying the maximum achievable average secrecy rate.

Theorem 3: The ASC is given by (10), as shown at the bottom of the page, where $G_B(x,\bar{\gamma}_B,\mu,\eta,\rho)$ is an auxiliary function defined therein.
Proof: By averaging (2) over $\gamma_B$ and $\gamma_E$, the ASC is mathematically expressed as [12, eq. (6)]
$$\bar{C}_s=I_1+I_2-I_3.$$
Next, substituting (1a) and (1b) into $I_1$ and performing the change of variables $y=\sqrt{\gamma/\bar{\gamma}_B}$, we obtain $I_1$ in closed form, where step (c) is developed by using [9, eq. (4)]. Similarly, $I_2$ and $I_3$ can be obtained. After some simple mathematical manipulations, the proof of $\bar{C}_s$ is finished.

IV. NUMERICAL RESULTS AND DISCUSSIONS
In this section, the accuracy of our derived analytical results is validated by performing Monte-Carlo simulations over $\kappa-\mu$ fading channels, assuming that the main channel and the wiretap channel undergo the same fading conditions, herein $\kappa=3$, $\mu=1$, with the estimated parameters for the MoG distribution adopted from [16, Tab. 4]. In order to encompass more fading models, we also plot the SOP over Rayleigh, Nakagami-m, Weibull, and $\alpha-\mu$ fading channels in Fig. 1(b). (The approximation parameters used to estimate the $\alpha-\mu$ distribution in the manner of the MoG distribution are obtained using the method given in [18, Appendix B].) The SOP, PNZ, and ASC are respectively plotted and compared with Monte-Carlo simulations in Figs. 1-4. Apparently, one can observe excellent agreement between our analytical and simulated results.

Fig. 1. $P_{out}$ against $\bar{\gamma}_B$ when $R_t=0.5$ over (a) $\kappa-\mu$ fading channels; and (b) Rayleigh, Nakagami-m, Weibull, and $\alpha-\mu$ fading channels [6] with $\bar{\gamma}_E=5$ dB.

A. Numerical Results
Figs. 1 and 2 plot the SOP, $P_{out}$, and the lower bound of the SOP, $P^{L}_{out}$. An increase in $\bar{\gamma}_E$ means an improving received SNR at Eve; physically speaking, secure communication gradually faces higher risk. Besides, the lower bound of the SOP shows an increasingly tight approximation to the exact SOP when (i) $R_t$ goes to 0, i.e., observing from (4) and (6), as $R_t\to 0$ we have $W\to 0$, which diminishes the gap between $P^{L}_{out}$ and $P_{out}$ (practically speaking, Alice adopts a vanishing transmission rate); and (ii) $\bar{\gamma}_E$ lies in the high-SNR regime, which can be physically interpreted as Eve being close to Alice.

Fig. 3 depicts the PNZ, as given in (9a). In line with the behavior of $P_{out}$, higher $\bar{\gamma}_E$ values lead to lower PNZ performance; this means that Eve becomes largely capable of wiretapping the legitimate link. Fig. 4 shows that the analytical ASC, as given in (10), demonstrates an increasing tendency with regard to $\rho$. A larger $\rho$ indicates a bigger gap between $\bar{\gamma}_B$ and $\bar{\gamma}_E$, thereby resulting in a higher $\bar{C}_s$. Conclusively speaking, inspired by [8], [16], the MoG distribution is feasible and applicable in cellular device-to-device, vehicle-to-vehicle and on-body communications.

B. Accuracy Analysis
As observed from Figs. 1-4, our analytical results present an excellent match with the Monte-Carlo simulations. For the purpose of illustrating the tightness of the analytical results, a useful measure, namely the analytical error, is used as the accuracy indicator [9]:
$$\text{analytical error}=\left(1-\frac{\text{analytical results}}{\text{simulation results}}\right)\times 100\%. \qquad (13)$$
As shown in Table I, the analytical errors for the PNZ and ASC are considerably small, within ±1%. The analytical error for the SOP gradually increases as $\rho$ increases, but stays within ±3%. Conclusively speaking, our derivations are highly accurate.

TABLE I: Analytical error against $\rho$ (dB) when $\bar{\gamma}_E=0$ dB.

V. CONCLUSION
In this letter, the feasibility of the MoG distribution for PLS analysis was explored. The secrecy metrics, including $P_{out}$, $P^{L}_{out}$, $P_{nz}$, and $\bar{C}_s$, are respectively derived with simple closed-form expressions.
The accuracy of our analytical results is further validated by performing Monte-Carlo simulations. This letter offers a unified and effective framework for analyzing physical layer security over fading channels. The MoG approach is especially beneficial when the main channel and the wiretap channel confront different types of fading conditions, e.g., a mixture of composite and non-composite fading channels.
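A minimal sketch of the kind of Monte-Carlo validation used in Section IV follows: it draws Bob's and Eve's envelopes from a two-component Gaussian mixture (the component parameters are arbitrary illustrations, not the EM-fitted values of [16]), forms the SNRs, estimates the SOP empirically, and evaluates the analytical-error metric of eq. (13) against a stand-in closed-form value:

```python
# Monte-Carlo SOP estimate with MoG-modeled envelopes plus the "analytical
# error" of eq. (13); mixture parameters and the closed-form stand-in value
# are illustrative placeholders, not the paper's fitted numbers.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
Rt = 0.5
Rs, W = 2.0 ** Rt, 2.0 ** Rt - 1.0

w, mu, eta = np.array([0.6, 0.4]), np.array([0.8, 1.4]), np.array([0.05, 0.1])

def mog_snr(gbar: float, size: int) -> np.ndarray:
    comp = rng.choice(len(w), size=size, p=w)
    h = rng.normal(mu[comp], np.sqrt(eta[comp]))
    return gbar * np.maximum(h, 0.0) ** 2   # negative-h mass is negligible here

gB = mog_snr(10.0, n)                       # average SNR at Bob (linear scale)
gE = mog_snr(2.0, n)                        # average SNR at Eve, so rho = 5

sop_sim = np.mean(gB <= Rs * gE + W)        # Pr(C_s <= R_t), cf. eq. (4)
sop_closed_form = 0.25                      # placeholder analytical value
error = (1.0 - sop_closed_form / sop_sim) * 100.0    # eq. (13)
print(f"simulated SOP = {sop_sim:.4f}, analytical error = {error:+.2f}%")
```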
Modeling Chemical Reactivity in Ionic Detergent Micelles: a Review of Fundamentals

Ionic detergent micelles have the capacity to solubilize organic substrates, interact selectively with counterions, repel coions, exhibit partial "dissociation" of the counterions, grow in size with added salt, affect the position of chemical equilibria, accelerate or inhibit the rates of chemical reactions, modulate photochemical reactivity and determine the dynamics of diffusion or near-diffusion controlled processes. Many of these phenomena can be understood and analyzed quantitatively in terms of relatively simple models for binding, selectivity and electrostatics that often require no knowledge of micellar structure or dynamics (the pseudophase limit), without compromising chemical intuition. An overview is provided of our current understanding of the interplay between micellar structure and electrostatics, selectivity, solubilization, and reactivity, and their role in the development of quantitative formalisms for analyzing micellar effects on reactivity and equilibria.

Introduction
In the mid-1970s, micellar catalysis was still viewed as a model for enzymatic catalysis, and several attempts had been made to analyze and understand micellar effects on reaction rates. Notable among these were the enzyme-like substrate-binding model of Menger and Portnoy 1 and the model of Berezin and collaborators, 2 both of which were (as it later turned out) more adequate for uni- and bimolecular reactions of non-ionic species in non-ionic micelles. Micellar effects on indicator equilibria were known in the literature and attributed to interaction of the charged forms of the indicator with the oppositely charged surface. These precursor works have been reviewed, 3,4 including in the context of their relationship to pseudophase ion exchange (PPIE). 5 The present paper will outline the development of PPIE and our increased understanding of ionic interactions in micellar systems over the last 4 decades.

Micellization and the Critical Micelle Concentration (CMC)
Our starting point is the classical pseudophase treatment of the phenomenon of micellization. 3 The pseudophase model for micellization treats the formation of micelles as if it were a "charged-phase" separation at the critical micelle concentration (CMC) rather than a stepwise aggregation of monomers to form the micelles. If the micellization of an ionic detergent DY involves the association of an average number $N_{ag}$ of monovalent detergent monomers (of concentration $[m]_{aq}$) with $bN_{ag}$ monovalent counterions of type Y (present in concentration $[Y]_{aq}$) in the aqueous phase to form the micelles, M:
$$N_{ag}\,\mathrm{m} + b\,N_{ag}\,\mathrm{Y} \rightleftharpoons \mathrm{M} \qquad (1)$$
the corresponding equilibrium relation can be written as:
$$K_{CMC} = \frac{[m]_{aq}\,[Y]_{aq}^{\,b}}{[M]^{1/N_{ag}}} \qquad (2)$$
The basic reason that this pseudophase description of micellization works as well as it does, despite the fact that the micelles are actually aggregates dispersed in the solution, is that micellization is typically highly cooperative, occurring over a very small concentration range, and micelle aggregation numbers, $N_{ag}$, are of the order of ca. 100. Thus, for typical CMC values of $10^{-2}$-$10^{-3}$ mol L$^{-1}$ and $N_{ag}$ of ca. 70-150, the value of $[M]^{1/N_{ag}}$ approaches unity, which reduces the right-hand side of this equation to the "charged-phase"-separation equilibrium expression.
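A quick numerical check makes the cooperativity argument above concrete: for aggregation numbers of the order of 100, the factor $[M]^{1/N_{ag}}$ in equation (2) stays close to unity over the whole realistic range of micelle concentrations. A minimal sketch:

```python
# Why the pseudophase ("charged-phase separation") picture works: the factor
# [M]^(1/N_ag) in eq. (2) is near 1 for typical aggregation numbers N_ag.
for M in [1e-6, 1e-5, 1e-4, 1e-3]:      # micelle concentration, mol/L
    for Nag in [70, 100, 150]:
        print(f"[M] = {M:.0e} M, N_ag = {Nag:3d}:  [M]^(1/N_ag) = {M ** (1 / Nag):.3f}")
```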
At the CMC, $[Y]_{aq}$ is equal to the CMC plus the concentration of added common-counterion salt, i.e., $[Y]_{aq} = \mathrm{CMC} + [Y]_{ad}$. The value of the constant $K_{CMC}$ can be calculated from CMC°, the value of the CMC in the absence of added salt:
$$\log K_{CMC} = (1+b)\,\log \mathrm{CMC}^{\circ} \qquad (3)$$
This leads directly to the classic Corrin-Harkins relation for the decrease of the CMC with added common-counterion salt:
$$\log \mathrm{CMC} = (1+b)\,\log \mathrm{CMC}^{\circ} - b\,\log\big(\mathrm{CMC} + [Y]_{ad}\big) \qquad (4)$$
In principle, the same relationship should also allow the estimation of the free or non-micellized detergent monomer concentration $[m]_{aq}$ in the intermicellar aqueous phase above the CMC by simply replacing CMC by $[m]_{aq}$ and taking into account the additional counterions in the aqueous phase due to the partial dissociation of the micelles:
$$[m]_{aq}\,\big([m]_{aq} + \alpha\,(C_T - [m]_{aq}) + [Y]_{ad}\big)^{b} = K_{CMC} \qquad (5)$$
where $C_T$ is the total concentration of added surfactant monomers, of which $C_T - [m]_{aq}$ are micellized, and $\alpha$ is the apparent degree of counterion dissociation from the micelle. Equation 5 predicts that the free monomer concentration of an ionic detergent will reach its maximum concentration at the CMC and then decrease as the detergent concentration is increased above the CMC. What is constant above the CMC is not $[m]_{aq}$, but rather the product $[m]_{aq}[Y]_{aq}^{\,b}$, the square root of which (for a monovalent detergent DY) is the mean ionic activity of the detergent in the intermicellar aqueous phase.

The confirmation of this predicted behavior for the free monomer concentration of sodium dodecyl sulfate (SDS) using a dodecylsulfate ion-selective electrode 6 led us to attempt to measure $[m]_{aq}$ of N-hexadecylpyridinium chloride (HPCl) above the CMC using fluorescence quenching. Thus, we chose the water-soluble cationic fluorescence probe 9-(3-(N,N,N-trimethylammonium)propyl)anthracene (TMPA+), the fluorescence of which is efficiently quenched by pyridinium ions. Although we had naively expected the probe to remain in the intermicellar aqueous phase, we soon realized that, because it was also an amphiphilic molecule, it could be partially incorporated into the quencher micelles. Indeed, the fluorescence quenching obeyed the Stern-Volmer equation for mixed static-dynamic quenching, the static component being due to the incorporation of the probe into the HPCl micelles:
$$\frac{F_f^{\,o}}{F_f} = \big(1 + K_{SV}\,[m]_{aq}\big)\big(1 + K_S\,C_D\big) \qquad (6)$$
In this equation, $F_f^{\,o}$ ($\tau_f^{\,o}$) and $F_f$ ($\tau_f$) are the fluorescence quantum yields (lifetimes) of the probe in the absence and presence of HP+, $K_{SV}$ is the Stern-Volmer constant for quenching of the probe in the intermicellar aqueous phase by the non-micellized HP+ monomers, $K_S$ is the equilibrium constant for incorporation of the probe into the micelles, and $C_D$ is the concentration of micellized detergent. In the micelles, the probe is nonfluorescent because it is totally quenched by the high local concentration of HP+. On the other hand, the fluorescence lifetime ratio depended on the dynamic quenching of the probe by HP+ monomer free in the aqueous phase, i.e., $\tau_f^{\,o}/\tau_f = (1 + K_{SV}[m]_{aq})$. Because $K_{SV}$ could be determined from the quenching behavior below the CMC, the fluorescence lifetime ratio then permitted estimation of $[m]_{aq}$ above the CMC as a function of detergent concentration, as initially planned. 7
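Equation (5) has no closed-form solution for $[m]_{aq}$, but it is a one-line numerical root-finding problem. The sketch below uses round, SDS-like illustrative parameters (not fitted values) and shows the predicted maximum at the CMC followed by a decline:

```python
# Numerical solution of eq. (5) for the free-monomer concentration [m]_aq as
# a function of total detergent C_T; parameters are illustrative (SDS-like:
# CMC0 ~ 8 mM, b ~ 0.7, alpha = 1 - b), with no added salt.
from scipy.optimize import brentq

CMC0, b, Yad = 8e-3, 0.7, 0.0
alpha = 1.0 - b
K = CMC0 ** (1.0 + b)                    # eq. (3): log K = (1+b) log CMC0

def free_monomer(CT: float) -> float:
    if CT <= CMC0:
        return CT                        # below the CMC: no micelles
    f = lambda m: m * (m + alpha * (CT - m) + Yad) ** b - K   # eq. (5)
    return brentq(f, 1e-9, CMC0)

for CT in [5e-3, 8e-3, 20e-3, 50e-3, 100e-3]:
    print(f"C_T = {CT*1e3:5.1f} mM  ->  [m]_aq = {free_monomer(CT)*1e3:.2f} mM")
```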
This initial study led to two new investigations. In order to understand the incorporation of amphiphilic organic ions like TMPA+ into like-charged ionic micelles, the experimental system was switched to the partitioning of the carboxylate anion of 1-pyrenebutyric acid (PBA−) into micelles of SDS. 8 The relatively long-lived fluorescence of PBA− in the aqueous phase could be selectively quenched by the iodide anion, which was shown to be micelle-excluded because it did not alter the lifetime of PBA− in the micellar phase. This permitted the determination of the fraction of PBA− in each phase and hence the incorporation constant $K_S$ of PBA−. The value of $K_S$ was found to be highly dependent on $[\mathrm{Na}]_{aq}$, the concentration of sodium counterions free in the intermicellar aqueous phase. Indeed, in order to obtain coherent results, $[\mathrm{Na}]_{aq}$ had to be maintained constant by appropriate additions of the non-quencher salt NaCl to compensate for variations in the free Na+ derived from micellar dissociation and added NaI. This study 8 provided two important lessons: (i) the proposal by Larry Romsted 9 that the apparent degree of counterion dissociation from ionic micelles, α, might be relatively constant and insensitive to detergent or added salt concentration appeared to work quite nicely; and (ii) when highly charged interfaces are involved, the important parameter is the net counterion concentration (and composition), and it is this that must be maintained constant in the aqueous phase, not the ionic strength.

In the second investigation, we opted to use the very long-lived emission of the tris(bipyridine)ruthenium(II) dication, Ru(bpy)3 2+, as the water-soluble probe and N-dodecyl-4-cyanopyridinium (DCP+) bromide as the surfactant, as an alternative to our previous anthracene-derived probe/HPCl system, to determine DCP+ free monomer concentrations via emission quenching. We also prepared the short-chain, hydrophilic, non-micellizing N-methyl-4-cyanopyridinium cation (MCP+) in order to determine the dependence of the quenching rate constant $K_{SV}$ on added salt concentration (using the extended Debye-Hückel relationship with the ionic strength replaced by the aqueous counterion concentration). From the quenching of Ru(bpy)3 2+ by MCP+ in the absence and presence of micellar hexadecyltrimethylammonium bromide (CTAB), it was possible to show that both of these ions were indeed excluded from cationic micelles, i.e., resided exclusively in the intermicellar aqueous phase.

A literature search (prompted by a question from Henrique Toma following a presentation of our preliminary results) indicated that MCP+ undergoes alkaline hydrolysis to give two products, 11 but at appreciable rates only for pH > 10. However, at the micelle surface, micellar catalysis of the hydrolysis of DCP+ might occur at much lower pH values.
12 had just reported the binding of protons to the surface of SDS micelles based on pH measurements in the aqueous phase, but the same method failed for hydroxide ion binding to CTAB (in part because the CTA+ cation binds strongly to the glass electrode, creating a junction potential that prevents accurate pH measurements). Together with Hernan Chaimovich, we realized that, because the MCP+ cation was restricted to the aqueous phase of CTAB, the rate constant for alkaline hydrolysis of MCP+ as a function of [CTAB] should be proportional to the amount of hydroxide ion free in the intermicellar aqueous phase. Indeed, this proved to be the case and we were soon ready to publish 13 the first actual measurements of the intermicellar hydroxide ion concentration as a function of [CTAB]. The problem was then how to describe the observed behavior using what we knew at the time about ionic micellar systems and counterions. The solution of the kinetic system of successive replacements of bromide ions by hydroxide ions at the CTAB micelle surface (ensconced in a footnote in reference 14) proved to be a binomial distribution of micelles with zero, one, two, three, etc. bound hydroxide ions. In hindsight, it was obvious that binding to a fixed number of sites in which occupied sites are no longer available would necessarily lead to a binomial distribution, but at the time, it provided the impetus for understanding how to count ions in ionic micellar systems in a very straightforward manner. 14

Counting Ions in Ionic Micellar Systems - the Basics of Pseudophase Ion Exchange

The Romsted 9 assumption of a constant degree of micellar dissociation (α), together with our initial studies, showed that, in a micellar solution of detergent D+Y− containing a common-counterion salt, e.g., NaY, the analytical concentrations of Y in the micellar ([Y]_m) and aqueous ([Y]_aq) compartments or pseudophases of the solution could be expressed as: 14

$[Y]_m = (1 - \alpha) C_D$  (7)

$[Y]_{aq} = \alpha C_D + CMC + [Y]_{ad}$  (8)

where the concentration of micellized detergent, C_D, can be approximated as the total detergent concentration (C_T) minus the CMC, and [Y]_ad is the concentration of the added salt. Upon addition of a foreign counterion salt, e.g., NaX, the assumption that a part [X]_m of the total added X counterions, [X]_T, dislocates an equivalent amount of Y ions from the micelle surface into the aqueous phase is equivalent to the equilibrium:

$X_{aq} + Y_m \rightleftharpoons X_m + Y_{aq}$  (9)

This equilibrium is governed by an ion exchange selectivity coefficient K_X/Y reflecting the difference in affinity of X and Y for the micellar surface:

$K_{X/Y} = \frac{[X]_m [Y]_{aq}}{[X]_{aq} [Y]_m}$  (10)

where the concentrations of X and Y in the two phases can be written as:

$[X]_{aq} = [X]_T - [X]_m$  (11)

$[Y]_m = (1 - \alpha) C_D - [X]_m$  (12)

$[Y]_{aq} = \alpha C_D + CMC + [Y]_{ad} + [X]_m$  (13)

A priori, this system of four equations has only two unknowns, the analytical concentration of X in either the micellar or aqueous phase and the value of K_X/Y. Hence, by assuming different values of K_OH/Br for the selectivity of hydroxide ion binding to CTAB micelles, we could predict [OH]_aq as a function of [CTAB] and compare the results to our experimental values 13 obtained from the rate of alkaline hydrolysis of MCP+.
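For a given K_X/Y, the four-equation system above reduces to a single root-finding problem in [X]_m. A minimal sketch, assuming equations 10-13 as reconstructed above, with illustrative parameter values (the K_OH/Br value below is only an order-of-magnitude placeholder):

```python
# Minimal sketch of the PPIE ion-exchange bookkeeping (equations 10-13 as
# reconstructed above): given the total foreign counterion [X]_T, find the
# amount [X]_m bound at the micelle surface. Parameter values are
# illustrative only.
from scipy.optimize import brentq

def X_micellar(X_T, C_D, K_XY, alpha=0.2, CMC=9e-4, Y_ad=0.0):
    """Analytical concentration of X bound to the micelles (mol/L)."""
    def f(Xm):
        X_aq = X_T - Xm                         # equation 11
        Y_m = (1 - alpha) * C_D - Xm            # equation 12
        Y_aq = alpha * C_D + CMC + Y_ad + Xm    # equation 13
        return K_XY * X_aq * Y_m - Xm * Y_aq    # equation 10 rearranged
    upper = min(X_T, (1 - alpha) * C_D)         # [X]_m cannot exceed either bound
    return brentq(f, 0.0, upper * (1 - 1e-9))

# e.g. hydroxide/bromide exchange at a CTAB-like surface; the selectivity
# K_OH/Br << 1 below is an order-of-magnitude placeholder:
Xm = X_micellar(X_T=0.01, C_D=0.05, K_XY=0.08)
print(f"[OH]_m = {Xm*1e3:.2f} mM, [OH]_aq = {(0.01 - Xm)*1e3:.2f} mM")
```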
Pseudophase Ion Exchange (PPIE) and Reaction Kinetics

If the total concentration of X is known, the analytical concentration at the micelle surface, [X]_m, in mol L-1 of micellar solution, can then be converted into the local concentration [X]_mloc at the micellar surface by dividing [X]_m by the volume fraction of the micelles in liters of micellar pseudophase/liter of micellar solution. 14 We assumed (purely for simplicity) this volume fraction to be the concentration of micellized detergent times the molar volume of the detergent, V_m:

$[X]_{mloc} = \frac{[X]_m}{C_D V_m}$  (14)

For CTAB, with α = 0.2 and V_m = 0.37 L mol-1, the total local counterion concentration at the micelle surface, given by the expression [X]_mloc + [Y]_mloc = (1 − α)/V_m, is of the order of 2.2 mol L-1. For SDS, with α = 0.25 and V_m = 0.25 L mol-1, the local counterion concentration is ca. 3 mol L-1. These high counterion concentrations at the micelle surface can have a large influence on bimolecular reaction rates simply as the result of a local concentration effect.

The application of these ideas to local concentration effects on the rates of chemical reactions performed in micellar solutions was then straightforward. 5,14,15 For a bimolecular reaction between a non-ionic substrate S and a foreign counterion X, the observed rate constant, k_obs, under pseudo-first-order conditions (excess X) will depend on the fraction of S in each pseudophase [f_m and f_aq, where f_aq = 1/(1 + K_S C_D), governed by the micellar incorporation coefficient of the substrate, K_S], the "true" second-order rate constants (k_2m and k_2aq) and the local reactive ion concentrations in each phase:

$k_{obs} = f_{aq} k_{2aq} [X]_{aq} + f_m k_{2m} [X]_{mloc}$  (15)

If K_S, K_X/Y, k_2aq and α are known or can be estimated independently, the only parameter that is needed to fit kinetic profiles of k_obs vs. detergent concentration is k_2m, the second-order rate constant in the micellar pseudophase.

On the other hand, many bimolecular reactions of interest in micelles are performed in buffered solutions. If the solution is correctly buffered (vide infra), then it is the concentration of the reactive ion in the aqueous phase, [X]_aq, that is constant and not the total concentration of X, which varies as the detergent concentration is varied. 5,14,16 However, using the known value of [X]_aq in the expression for K_X/Y (equation 10), together with equations 12 and 13 for [Y]_m and [Y]_aq, respectively, [X]_m can also be calculated in the presence of buffer if K_X/Y is known. Hence, the same ion counting approach can be employed to analyze reaction rate constants for bimolecular reactions in which the reactive counterion is appropriately buffered.
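Putting equations 10-15 together reproduces the familiar shape of k_obs versus detergent concentration: a maximum near the CMC, where substrate and reactive counterion are concentrated into a small micellar volume, followed by a decline as the reagents are diluted over more micelles. A minimal sketch, with all parameter values invented for illustration:

```python
# Minimal sketch of a PPIE kinetic profile (equations 10-15 as reconstructed
# above) for a bimolecular reaction S + X in micellar D+Y-. All parameter
# values are invented for illustration.
from scipy.optimize import brentq

alpha, CMC, Vm = 0.2, 9e-4, 0.37   # CTAB-like values quoted in the text
K_S, K_XY = 1.0e3, 0.08            # substrate binding and X/Y selectivity (assumed)
k2aq = 1.0                         # L mol^-1 s^-1 (assumed)
k2m = k2aq                         # the typical PPIE finding: k2m ~ k2aq
X_T = 0.005                        # total reactive counterion, mol/L

def k_obs(C_T):
    C_D = max(C_T - CMC, 1e-12)    # micellized detergent
    def f(Xm):                     # ion-exchange mass balance (eqs 10-13)
        return (K_XY * (X_T - Xm) * ((1 - alpha) * C_D - Xm)
                - Xm * (alpha * C_D + CMC + Xm))
    Xm = brentq(f, 0.0, min(X_T, (1 - alpha) * C_D) * (1 - 1e-9))
    f_aq = 1.0 / (1.0 + K_S * C_D)         # fraction of substrate in water
    X_mloc = Xm / (C_D * Vm)               # equation 14
    return f_aq * k2aq * (X_T - Xm) + (1.0 - f_aq) * k2m * X_mloc  # equation 15

for C_T in (0.002, 0.005, 0.01, 0.02, 0.05):
    print(f"C_T = {C_T:.3f} M  ->  k_obs = {k_obs(C_T):.4f} s^-1")
```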
How, then, can one buffer a micellar solution so that [X]_aq is indeed reasonably constant? There are in principle two ways to buffer an ion concentration: (i) use an excess of a slightly soluble salt of the reactive species X (e.g., Mg(OH)_2 to maintain [OH]_aq in micellar CTAB); or (ii) use a buffer for which the ions involved in the buffering equilibrium are both coions (or a coion and a very hydrophilic neutral species) and the counterions of the buffer are the same as those of the detergent. Thus, for SDS, appropriate buffers for maintaining the intermicellar pH might be sodium H_2PO_4^-/HPO_4^2- or HCO_3^-/CO_3^2-. For CTAB, bis-tris hydrobromide or low concentrations of tris hydrobromide are adequate. In both cases, the contribution of the buffer components to [Y]_ad is known and the buffer ions are restricted primarily to the aqueous phase. An inappropriate choice that probably will not adequately buffer the intermicellar pH would be, for example, H_2PO_4^-/HPO_4^2- in micellar CTAB, because the mono- and divalent phosphate counterions bind differently to the micelle, altering their relative concentrations in the aqueous phase, and they compete with Br^- and OH^- for the micelle surface.

Knowing how to buffer micellar solutions properly permitted the analysis of ionic micellar effects on the dissociation of weak acids, HA, like phenols and thiols, in CTAB, where the conjugate base, A^-, is a counterion. 5,14 In this case, the apparent dissociation constant, pK_ap, is defined in terms of the aqueous hydrogen ion concentration and the total (micellar plus aqueous) concentrations of the acid and conjugate base:

$K_{ap} = \frac{[H^+]_{aq}\,([A^-]_m + [A^-]_{aq})}{[HA]_m + [HA]_{aq}}$  (16)

Once [A^-]_m is known, one can then analyze bimolecular reactions such as the thiolysis or oximolysis of esters, where the reactive nucleophile is the weak-acid-derived anion. 5,17

Implications of Simple PPIE

This simple PPIE approach, which included most of the previous models as limiting cases, nicely reproduced most of the known reactivity patterns in ionic micellar solutions. 5,14,15 Moreover, when the apparent rate constants of bimolecular reactions were corrected by PPIE for the effects of the local concentration of the reagents at the micelle surface, the true second-order rate constants in the aqueous and micellar pseudophases (k_2aq and k_2m) usually were found to be remarkably similar in magnitude. 15 The inescapable conclusion was that, in most cases, intrinsic micellar effects on reactivity were not particularly large and perhaps even non-existent (for unimolecular reactions, there are modest effects 15 that have been interpreted in terms of an equivalent homogeneous medium, see reference 18). As a consequence, PPIE could do more than just analyze reactivity patterns. By assuming that k_2m = k_2aq, one could now actually make predictions of the expected reactivity patterns when reasonable estimates of the requisite parameters such as substrate incorporation coefficients, ion exchange selectivities and degrees of micellar dissociation were available.
On the other hand, the ability to make predictions of the "expected" reactivity patterns also permitted the detection of situations in which the simple PPIE approach apparently failed. Early on, inadequacies were found in the treatment of reactivity patterns in hexadecyltrimethylammonium hydroxide or fluoride, CTAOH 19 or CTAF, 20 i.e., cationic micelles with highly hydrophilic hydroxide or fluoride counterions. The micelles of both of these detergents have aggregation numbers that are substantially smaller than those of CTAB, and their size and apparent degree of micellar dissociation, α, change with detergent concentration. Hence, the apparent breakdown of PPIE in these surfactants was not a problem of the model per se, but rather of the inadequacy of the assumption of constant α, as shown by the agreement between PPIE and experiment when the variation of α was taken into account. 15,20

A further practical requirement for predictive applications of PPIE is the a priori estimation of the micellar incorporation coefficients, K_S, of neutral solutes from solute structure. 21-23 Linear solvation energy relationships (LSERs) based on Abraham solute parameters have been particularly useful for this purpose. In the Abraham approach, the transfer of a solute from water to the micelle is assumed to be the sum of five free energy contributions: (i) the difference in cavitation energy between water and the micelle, which is proportional to the (appropriately scaled) molar volume of the solute, V; (ii) the solute polarizability in excess of that of an alkane, E, which can be calculated from the refractive index of the solute; (iii) the solute dipolarity, S, which accounts for dipolar interactions; (iv) the solute hydrogen bond basicity, B, or propensity to accept hydrogen bonds; and (v) the solute hydrogen bond acidity, A, or hydrogen bond donating ability. Values of these solute parameters are currently available for several thousand molecules. 24 Transforming this into a LSER for K_S gives an equation of the form:

log K_S = constant + eE + sS + aA + bB + vV  (17)

For SDS, multiple regression of a large number of K_S values provided the following quantitative relationship, in which the coefficients reflect the relative contribution of each term to the overall free energy change for incorporation of the solute into the micelle:

log K_S = 0.08 + 0.58E − 1.09S + 0.03A − 3.40B + 3.81V  (18)

Similarly, for CTAB, the corresponding relationship was found to be:

log K_S = −0.57 + 0.57E − 0.15S + 0.85A − 3.61B + 3.36V  (19)

In both cases, the coefficients with the greatest magnitude are those associated with the size of the solute and its hydrogen bond basicity. Greater solute size, which encompasses the hydrophobic effect, favors incorporation of the solute into the micelle, reflecting the much higher cavitation energy of water relative to the micelle. In contrast, a greater solute hydrogen bond basicity disfavors incorporation into the micelles, indicating that the aqueous phase is a much better hydrogen bond donor than the solubilization environment sensed by the solute in the micelle. An important point that should not be overlooked is that this type of LSER should work well only if the nature of the average solubilization environment is reasonably similar for all of the solutes, despite their structural diversity. Thus, although it has been speculated for decades that solutes of different hydrophobicities might solubilize in different regions of micelles (hydrocarbon core, micelle-water interface, etc.), solute incorporation into micelles, as measured by K_S, provides no evidence for the necessity to assume the existence of distinct micellar solubilization environments for different classes of solutes, at least for SDS and CTAB micelles.
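Equations 18 and 19 can be applied directly once the Abraham descriptors of a solute are known. The sketch below encodes the two regressions; the solute descriptors used in the example are round, phenol-like values chosen for illustration, not tabulated data.

```python
# Sketch applying equations 18 and 19 to estimate log K_S from Abraham
# descriptors. The solute descriptors below are round, phenol-like values
# chosen for illustration, not tabulated data.
COEFF = {  # constant, e, s, a, b, v
    "SDS":  ( 0.08, 0.58, -1.09, 0.03, -3.40, 3.81),
    "CTAB": (-0.57, 0.57, -0.15, 0.85, -3.61, 3.36),
}

def log_KS(micelle, E, S, A, B, V):
    c, e, s, a, b, v = COEFF[micelle]
    return c + e * E + s * S + a * A + b * B + v * V

solute = dict(E=0.80, S=0.89, A=0.60, B=0.30, V=0.78)  # phenol-like (assumed)
for mic in ("SDS", "CTAB"):
    print(f"{mic}: log K_S = {log_KS(mic, **solute):.2f}")
```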
Although difficult to apply to multifunctional solutes, this LSER approach does provide a qualitative framework for estimating reasonable magnitudes of solute incorporation coefficients from solute structure.

Lessons from the Simple Electrostatics of Ionic Micelles

Ionic micelles have relatively high electrostatic potentials at their surface, and it is this potential that attracts counterions to - and repels coions from - the vicinity of the surface. How, then, does the PPIE approach avoid an explicit consideration of the micellar surface potential? The traditional model for counterion binding to micelles assumes that a certain fraction of the counterions penetrate in between the ionic headgroups of the ionic surfactant, forming the Stern layer, while the remainder are distributed around the micelle in the diffuse electrical double layer. For a planar interface, the parameter that reflects the thickness of the double layer, or the (approximately exponential) decay of the potential with distance out from the interface, is the Debye length 1/κ, given by the relationship 1/κ (in nm) = 0.3/I^1/2, where I is the ionic strength of the bulk aqueous phase. Thus, for a cationic micelle with a radius of 2.2 nm in a solution with an aqueous counterion concentration of 0.005 mol L-1 (ca. 0.021 mol L-1 detergent), 1/κ = 4.2 nm and the double layer should extend out at least 10-15 nm from the micelle center. On the other hand, for an aggregation number, N_ag, of about 90, corresponding to a micelle concentration of C_D/N_ag = 0.0002 mol L-1, the average midpoint between the centers of any two micelles is only 0.735/(C_D/N_ag)^1/3 = 12.2 nm. Consequently, except at low detergent concentrations in the presence of high concentrations of added salt, the electrical double layers of adjacent ionic micelles will overlap, i.e., the micelles will interact electrostatically with each other and the electrostatic potential will pass through a minimum at the midpoint between micelles rather than decay to zero far from the micelles. This results in a continuous variation of the electrostatic potential, and hence of the local concentrations of counterions and coions, throughout the solution, as shown schematically in Figure 1 and more quantitatively for the potential in Figure 2. The micellar counterions that are at the midpoint between micelles, where the potential goes through the minimum, no longer pertain to the peripheral regions of the double layer but rather to the intermicellar aqueous phase, i.e., these are the counterions that give rise to the apparent dissociation of the micelles.

An important parameter in colloidal electrostatics is the dimensionless charge density parameter ξ_o, dependent on the geometry of the charged particle. 25 Thus, for an infinitely long rod-like particle:

$\xi_{or} = \frac{l_B}{L}$  (20)

where L is the distance between primary charges along the polyelectrolyte chain. For a charged spherical micelle:

$\xi_{om} = \pi l_B a_m (\sigma/e)$  (21)

where a_rod and a_m are the radii of the rod or spherical micelle and σ/e is the charge per unit area on the surface of the colloidal particle [= N_ag/(4πa_m^2) for a spherical micelle]. The Bjerrum length, l_B, corresponds to the distance at which the interaction between two elementary charges is equal to the available thermal energy. For a medium of relative dielectric constant ε_r, the value of l_B is given by: 25

$l_B = \frac{e^2}{4\pi \varepsilon_o \varepsilon_r k_B T}$  (22)

where e is the elementary charge, k_B the Boltzmann constant and ε_o the permittivity of vacuum. In water at 25 °C, the value of l_B is 0.72 nm.
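The numerical estimates quoted in this section are easy to verify. The sketch below recomputes l_B from equation 22, the Debye length from the 0.3/I^1/2 rule of thumb, ξ_om from equation 21 (as reconstructed above), and the intermicellar midpoint distance; the CMC used for the micelle molarity is an assumed value, so small rounding differences are expected.

```python
# Reproducing the numerical estimates in this section from first principles.
# Formulas follow equations 20-22 and the Debye-length rule of thumb quoted
# in the text; the CMC used below is assumed, so slight rounding differences
# are expected.
import numpy as np

e, eps0, kB, T, eps_r = 1.602e-19, 8.854e-12, 1.381e-23, 298.15, 78.3
lB = e**2 / (4 * np.pi * eps0 * eps_r * kB * T) * 1e9   # Bjerrum length, nm
print(f"l_B = {lB:.2f} nm")                             # ~0.72 nm

I = 0.005                                 # mol/L aqueous counterion concentration
print(f"1/kappa = {0.3 / I**0.5:.1f} nm")               # ~4.2 nm

a_m, N_ag = 2.3, 100                      # CTAB-like micelle (text example)
sigma_e = N_ag / (4 * np.pi * a_m**2)     # charges per nm^2
xi_om = np.pi * lB * a_m * sigma_e        # equation 21 (as reconstructed)
print(f"xi_om = {xi_om:.1f}")                           # ~7.8

C_mic = (0.021 - 0.0012) / 90             # micelle molarity ~2.2e-4 mol/L (assumed CMC)
print(f"midpoint distance = {0.735 / C_mic**(1/3):.1f} nm")   # ~12.2 nm
```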
For an infinitely long rod-like polyelectrolyte, it was well known that the value of ξ_or determines whether or not part of the counterions will "condense" on the polyelectrolyte; when they do, the apparent degree of counterion dissociation can be estimated from the Manning 26 relationship α_rod ≈ 1/ξ_or. Thus, a decrease in the spacing, L, between the charges on the polyelectrolyte backbone increases ξ_or and decreases α_rod. From Poisson-Boltzmann calculations for finite concentrations of charged spheres in the absence of added salt, we obtained an analogous expression (equation 23) for the apparent degree of micellar dissociation, α_mic, in terms of ξ_om. This equation nicely rationalizes the general features of the behavior of α for micelles. Thus, for a CTAB-like micelle with a radius of a_m = 2.3 nm and an aggregation number of ca. 100 (0.66 nm^2 per detergent headgroup charge), ξ_om = 7.8 and α_mic = 0.19, in good agreement with the experimental α value of ca. 0.2. A CTAOH micelle with about half the aggregation number of CTAB should have an α about twice that of CTAB. On the other hand, the value of α should decrease gradually as the chain length of the detergent increases and approach zero for particles with very large radii, such as the external surface of charged vesicles.

A particularly useful relationship between counterion selectivity and the electrostatic potential was first derived by Plaisance and Ter-Minassian-Saraga 27 in 1976 in a study of specific ion effects on cationic polyelectrolyte monolayers. In a micellar solution of the monovalent cationic detergent D+Y−, three locations are assumed for the Y counterions derived from the micelle (Figure 3): (i) a fraction α of the micellar counterions in the intermicellar aqueous phase; (ii) counterions that have penetrated into the Stern layer, interact with the detergent headgroups with the average binding energy φ_Y and compensate a fraction θ°_Y of the headgroup charge; and (iii) a fraction 1 − α − θ°_Y of counterions in the electrical double layer around the micelle. The local concentration of Y ions in the double layer is a function of the electrostatic potential difference, ψ(r), relative to that at the midpoint between micelles:

$[Y]_{loc}(r) = [Y]_{aq} \exp(F\psi(r)/RT)$  (24)

where F is the Faraday constant. Assuming that the affinity of the Y ions for the surface depends on the concentration of Y ions just outside the Stern layer, equal to [Y]_aq exp(Fψ_o/RT), where ψ_o is the micellar surface potential, θ°_Y can be expressed in terms of the simple binding isotherm:

$\theta^o_Y = \frac{K_Y [Y]_{aq} \exp(F\psi_o/RT)}{1 + K_Y [Y]_{aq} \exp(F\psi_o/RT)}$  (25)

where the affinity constant K_Y (equation 26) reflects the average binding energy φ_Y of the Y ion in the Stern layer. For a CTAB-like micelle with α = 0.2, a typical estimate of θ°_Y would be about 0.60-0.65, meaning that only about 15-20% of the counterions pertain to the double layer. Rearrangement of this last equation provides the following relationship for the reduced surface potential Fψ_o/RT:

$\frac{F\psi_o}{RT} = \ln\left[\frac{\theta^o_Y}{(1 - \theta^o_Y) K_Y [Y]_{aq}}\right]$  (27)

For the case of two monovalent counterions X and Y, the corresponding expression for the net fraction of counterions in the Stern layer, θ_XY, can be written as:

$\theta_{XY} = \frac{(K_Y [Y]_{aq} + K_X [X]_{aq}) \exp(F\psi_o/RT)}{1 + (K_Y [Y]_{aq} + K_X [X]_{aq}) \exp(F\psi_o/RT)}$  (28)

Solving for the surface potential gives:

$\frac{F\psi_o}{RT} = \ln\left[\frac{\theta_{XY}}{(1 - \theta_{XY}) K_Y ([Y]_{aq} + K_{X/Y} [X]_{aq})}\right]$  (29)

where we have expressed the ion exchange selectivity coefficient K_X/Y as:

$K_{X/Y} = \frac{K_X}{K_Y}$  (30)

The salient feature of equation 29 is that it tells us that the effect of a mixture of common (Y) and non-common (X) counterions on the properties of the ionic micelle DY will be determined by the equivalent counterion concentration [Y]_aq + K_X/Y [X]_aq. Thus, the contribution of X is modulated by its selectivity relative to the common counterion Y.
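Equations 25 and 27 are straightforward to evaluate. The sketch below inverts the isotherm for the reduced surface potential; the affinity constant K_Y and aqueous counterion concentration are arbitrary illustrative values, so only the order of magnitude of the resulting potential is meaningful.

```python
# Sketch evaluating the reconstructed isotherm (equations 25 and 27). K_Y and
# [Y]_aq are arbitrary illustrative values; only the order of magnitude of
# the resulting surface potential is meaningful.
import numpy as np

def theta_Y(K_Y, Y_aq, psi_red):
    """Stern-layer coverage for reduced surface potential psi_red = F*psi_o/RT."""
    x = K_Y * Y_aq * np.exp(psi_red)
    return x / (1.0 + x)

def psi_red_from_theta(theta, K_Y, Y_aq):
    """Equation 27: invert the isotherm for the reduced surface potential."""
    return np.log(theta / ((1.0 - theta) * K_Y * Y_aq))

K_Y, Y_aq, theta = 0.05, 0.01, 0.62    # theta ~0.60-0.65 for CTAB-like micelles
psi = psi_red_from_theta(theta, K_Y, Y_aq)
print(f"F*psi_o/RT = {psi:.2f}  (psi_o ~ {psi*25.7:.0f} mV at 25 C)")
print(f"round-trip theta = {theta_Y(K_Y, Y_aq, psi):.2f}")
```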
Equations 27-30 can be readily generalized to the case of mixtures of mono- and divalent counterions by replacing Fψ_o/RT by the more general term −z_X Fψ_o/RT, or to micelles formed by divalent detergent ions. Although this is beyond the scope of the current review, the general result is that the effect of counterion valence on the micellar properties scales as the selectivity times the ion concentration raised to the inverse power of the valence of the ion, i.e., monovalent ions scale as their concentration, divalent ions as the square root of their concentration and trivalent ions as the cube root of their concentration in the aqueous phase. The simple scaling of the relative electrostatic effects of counterions nicely rationalized the effects of cations of different valences on zwitterionic sulfobetaine micelles saturated with perchlorate ion. 28

We can now interpret the consequences of the addition of a non-common counterion salt on a variety of micellar properties, some of which puzzled researchers for years. 29 Thus, the CMC of an ionic detergent should obey the modified Corrin-Harkins relationship (compare to equation 4):

$\log CMC = (1 + b_Y) \log CMC_{oY} - b_Y \log([Y]_{aq} + K_{X/Y} [X]_{aq})$  (31)

where b_Y is the slope of the common-salt Corrin-Harkins plot and CMC_oY is the CMC of DY in the absence of added salt. As shown in Figure 4, this is indeed the case. As the micellar surface potential decreases, the free detergent monomer concentration, [m]_aq, will also decrease. This can be investigated indirectly by looking at the effect of added salt on the incorporation of a monomer-like molecule (or pseudomonomer) 30,31 such as the N-hexadecylpyridinium cation into a like-charged CTACl micelle, for which the incorporation can be formulated as a monomer (m)-pseudomonomer (PM) exchange equilibrium:

$PM_{aq} + \text{micelle} \rightleftharpoons m_{aq} + PM_m$  (32)

The apparent incorporation coefficient of the pseudomonomer reflects the added-salt-induced changes in the free monomer concentration, which in turn depends on the equivalent counterion concentration [Y]_aq + K_X/Y [X]_aq.

Since the rates of entry and exit of ionic species from ionic micelles depend on the electrostatic field around the micelle, the question arises as to what extent the field affects the individual entry and exit rates. 31,32 The ratio of entry (k_+) and exit (k_−) rate constants for a counterion (equation 33) can be separated into the individual rates (equations 34 and 35) by introducing a parameter δ, which provides a measure of the fraction of the overall electrostatic work that must be overcome for the ion to escape from the micelle; 1 − δ is then the corresponding fraction at which capture of the ion by the surface becomes irreversible. Studies of the dynamics of the incorporation of thiosulfate ion into CTACl micelles and of the N-ethylpyridinium ion and Cu(II) in SDS showed that both k_+ and k_− are sensitive to salt concentration. 32 For pseudomonomers like N-alkylpyridinium ions in CTACl, however, the rate constants for micellar entrance were very sensitive to salt concentration (δ close to zero), but insensitive to the alkyl chain length. In contrast, the exit rate constants were relatively insensitive to salt concentration but very sensitive to the alkyl chain length, i.e., the exit rate constants are controlled almost entirely by the hydrophobicity of the pseudomonomer, becoming larger as the alkyl chain length decreases. 30
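The practical content of equation 31 is that a foreign salt lowers the CMC through the equivalent concentration [Y]_aq + K_X/Y [X]_aq. A minimal self-consistent sketch (with no added common salt, the CMC itself supplies the common counterion in the aqueous phase); CMC_oY, b_Y and K_X/Y are illustrative values:

```python
# Minimal self-consistent sketch of equation 31: foreign salt NaX lowers the
# CMC through the equivalent concentration [Y]_aq + K_X/Y [X]_aq. With no
# added common salt, [Y]_aq at the CMC is the CMC itself. Parameter values
# are illustrative.
import numpy as np
from scipy.optimize import brentq

def cmc_with_foreign_salt(X_ad, CMC0Y=8e-3, bY=0.7, K_XY=0.5):
    def f(c):  # log c must equal the right-hand side of equation 31
        rhs = (1 + bY) * np.log10(CMC0Y) - bY * np.log10(c + K_XY * X_ad)
        return np.log10(c) - rhs
    return brentq(f, 1e-6, CMC0Y)

for X in (1e-3, 3e-3, 1e-2, 3e-2, 1e-1):
    print(f"[X]_ad = {X:.3f} M  ->  CMC = {cmc_with_foreign_salt(X)*1e3:.2f} mM")
```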
What, then, does a consideration of micellar electrostatics tell us about the PPIE model for treating reactivity patterns in ionic micellar solution? The first conclusion is that, although the ratio of local concentrations is equal to that of the analytical concentrations for the exchange of counterions of the same valence, the same is not true for the exchange of counterions of different valences. Hence, the ion exchange selectivity coefficients should really be expressed in terms of the local concentrations of the micellar ions and the valences of the ions involved in the exchange (equation 36). 33 The second conclusion is that, instead of just having micellar and aqueous counterions, we actually have micellar Stern-layer counterions, micellar double-layer counterions and aqueous counterions. By assuming that the micelles contribute αC_D counterions to the aqueous phase, ascribing (1 − α)C_D counterions to the micellar pseudophase lumps the counterions in the double layer together with those in the Stern layer. In principle, monovalent-divalent counterion selectivities determined in the aqueous phase (by ultrafiltration) and at the micelle surface (by fluorescence quenching) should be different if this were a significant problem for the model; however, these measurements failed to show differences in the selectivities. 33

Micellar electrostatics also identified an additional limitation of the way the local concentrations at the micelle surface were expressed in the original formalism. By assuming that there are (1 − α)C_D counterions at the micelle surface, the original PPIE ion counting scheme fails to count the additional contribution of the aqueous ions to the local counterion concentration at the micelle surface (it counts only the surface excess of counterions). Consequently, the local counterion concentration extrapolates to the wrong limit as α goes to unity, i.e., in this limit it predicts that there are zero counterions at the micellar surface and hence zero reaction. Indeed, the true local counterion concentrations should be written as:

$[Y]_{mloc} = \frac{[Y]_m}{C_D V_m} + [Y]_{aq}$  (37)

which goes to the proper limit of [Y]_mloc = [Y]_aq when [Y]_m = 0. In most cases, the concentrations in the aqueous phase are small relative to the local concentrations at the micelle surface. However, inclusion of this last term will become essential, e.g., for the situation of a reaction involving an ionic nucleophile in mixed ionic-nonionic detergent micelles, where α will tend to unity as the proportion of ionic detergent decreases. In addition, as shown by Romsted, 34 this term is necessary in order to reproduce the measured local concentrations of counterions at the micelle surface of CTACl in the presence of very high added (> 0.2 mol L-1) concentrations of NaCl.
Conclusions

In this work, we have touched on just a few of the potential effects of charged interfaces like ionic detergent micelles on reactivity and equilibria. These effects typically derive from the capacity of these ionic surfactant aggregates to solubilize organic substrates, interact selectively with counterions, repel coions, exhibit partial "dissociation" of the counterions, grow in size with added salt, and determine the dynamics of diffusion or near-diffusion-controlled processes. Most micellar effects on equilibria and ground-state chemical reactions can be understood in terms of the relatively simple PPIE formalism, which is still the chemically most satisfying approach for analyzing and predicting the effects of the charged interfaces of ionic association colloids such as micelles, 15 vesicles, 17 microemulsions, 35 etc. on reaction rates and equilibria. The model does not require explicit consideration of factors such as the size, shape, curvature or dynamics of the aggregates or interaggregate interactions. The coulombic and specific interactions of the ions with the surface are incorporated into the model via ion exchange selectivity coefficients, and the non-uniform distribution of the ions throughout the aqueous phase of the solution need not be taken into account.

Treating the aggregates as if they were a separate pseudophase, ignoring the structure or dynamics of the charged interface, can be shown to be valid as long as the equilibration of at least one of the reactive species between the aqueous and micellar phases and among the ensemble of micelles is faster than the rate of reaction (the pseudophase limit). 36 This is true for all equilibria, essentially all ground-state reactions and even for many excited-state processes. On the down side, pseudophase-limit phenomena reflect time-averaged properties of the system, rather than instantaneous properties of the charged interface. Hence, these phenomena cannot and do not provide meaningful insight into things like the dynamics or structure of the aggregates, the sites of reaction within the aggregate, the orientation of molecules in the aggregates, etc.

Over the years, several limitations of the original formulation have been identified and can be readily incorporated into the model when necessary. The assumption of a constant degree of counterion binding to the surface breaks down for highly hydrophilic counterions, but this can be taken into account by employing the requisite variable values of α in the model. The consequences of the failure to add the contribution of the aqueous counterion concentration to the local counterion concentration at the surface are manifested only in certain situations that can now be readily anticipated. Thus, PPIE provides us with a working understanding of how counterions compete at the surface and of the relationship between the properties of the interface and the ionic composition of the medium. The counterion types and concentrations in the aqueous phase modulate the surface potential and hence the size and shape of the aggregates. The proper counting of the ions and the ability to attribute them unambiguously to either the aqueous or micellar pseudophases requires a thorough knowledge of the ionic composition of the medium. The use of ions that will certainly cause undesirable interferences (such as buffer ions that interact with the interface) must be avoided.
Finally, since the medium effects of micelles on the reactions of polar organic molecules and ions generally appear to be similar to those in water, PPIE is more than a model for deriving rate or apparent equilibrium constants from kinetic or equilibrium data. Fairly reliable methods are available for estimating counterion exchange selectivities, the binding constants of neutral substrates and the values of α. Hence, the assumption of similar reactivity in water and the micelle allows PPIE to be employed as a tool for experimental design, i.e., to predict a priori the expected effects of a charged interface on the reaction or equilibrium of interest.

Acknowledgments

Fellowship and research support from the CNPq is gratefully acknowledged, as are INCT-Catalysis and the USP Consortium for Photochemical Technology. None of these ideas would have matured without the valuable insights and collaboration of the students and colleagues mentioned in the references. The author also thanks Erick Leite Bastos for the design art photography of the cover image.

Figure 1. Three-dimensional cartoon representation of the electrostatic potential in the solution around a hexagonally-packed ensemble of charged spheres (a_m = 2.3 nm; N_ag = 75; CMC = 1 mmol L-1; C_D = 30 mmol L-1) and the corresponding local concentrations of coions and counterions.

Figure 2. Solutions of the Poisson-Boltzmann equation for the electrostatic potential between two positively-charged CTACl-like charged spheres (a_m = 2.2 nm; N_ag = 90; CMC = 0; C_T = C_D = 0.020 mol L-1) in water at 30 °C.

Figure 3. Cartoon representation of the distribution of ions according to the model of Plaisance and Ter-Minassian-Saraga, 27 with indications of the corresponding electrostatic (ψ) and Stern-layer (φ) potentials.

Figure 4. Correlation of foreign counterion effects on the CMC of SDS via: (upper panel) a Corrin-Harkins type relationship (equation 4); or (lower panel) the relationship modified to include ion exchange selectivity (equation 31). Experimental data from reference 29.

The author is at the Institute of Chemistry, USP, with over 150 publications. He is a Senior Editor of Langmuir, on the editorial boards of Photochemical & Photobiological Sciences and the Brazilian Journal of Chemical Engineering, and a co-organizer of the AutoOrg meetings. He is a past director of the Photochemistry Division of the SBQ, a member of the Brazilian Academy of Sciences, an RSC and IUPAC fellow, an Advisory Board member and fellow of the Inter-American Photochemical Society, and the 2015 recipient of the JBCS Medal of Honor.
Determination of thiol/disulphide homeostasis as a new indicator of oxidative stress in dairy cows with subclinical endometritis

The objective of this study was to determine thiol/disulphide homeostasis (TDH) in infertile cows with subclinical endometritis (SCE). Endometrial cytological samples were collected using a cytobrush to diagnose SCE in 36 infertile cows. According to the results of the cytology examination, those with acute endometritis were classified as Group I (n = 20) and those with chronic endometritis were classified as Group II (n = 16). A control group was formed of heifers as Group III (n = 20). Blood samples were taken from each group on the day of diagnosis (day 0) to analyse TDH. In the cytology examination, both the Giemsa method and immunocytochemical staining were applied to determine chronic inflammation and activity status. In 55.55% (20/36) of the infertile cows with cytological endometritis, the inflammation was determined to be active, and in 44.44% (16/36) it had become chronic. The native thiol and total thiol levels were found to be statistically significantly lower in the acute (206.54 ± 8.30 μmol/L; 227.11 ± 9.30 μmol/L) and chronic SCE cases (225.15 ± 11.89 μmol/L; 247.96 ± 10.80 μmol/L) compared to the healthy control group (308.47 ± 13.59 μmol/L; 336.83 ± 15.5 μmol/L, respectively) (P<0.001). Disulphide levels and the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios were similar in all the groups (P>0.05). The diagnostic accuracy of native thiol, which can be used in the diagnosis of SCE, was 92.8%, that of total thiol was 89.3% and that of disulphide was 64.3% according to the ROC curve analysis. These results demonstrate that TDH is a reliable and sensitive indicator of oxidative stress in cow SCE, and that abnormal TDH might play a role in SCE pathogenic mechanisms. This is the first study to evaluate thiol/disulphide homeostasis in dairy cows with SCE as a new indicator of oxidative stress.

Introduction

Infertility is one of the major problems affecting reproductive performance on dairy farms. By extending the period between calving and reconception, causing reproductive failure, leading to the feeding of infertile animals in vain, and incurring extra labour, infertility in cows leads to economic losses (SENÜVER and NAK, 2015). The presence of subclinical endometritis (SCE) is one of the etiological factors of infertility (KASIMANICKAM et al., 2004; GILBERT et al., 2005). SCE is an inflammation of the endometrium without the clinical signs of endometritis (SHELDON et al., 2006; ORUC et al., 2015). Studies in the literature have demonstrated that SCE in dairy cows has a negative impact on subsequent reproductive performance (GILBERT et al., 2005; KASIMANICKAM et al., 2004).

Thiols, known as functional sulfhydryl (-SH) groups, are of vital importance in preventing oxidative stress (OS) forming in cells (KEMP et al., 2008). While thiol components are mainly formed of albumin and other proteins, a small proportion consist of low molecular weight thiols such as cysteine, glutathione, homocysteine and γ-glutamyl cysteine (TURELL et al., 2013). In proteins, the thiol groups of the sulphur-containing amino acids (methionine, cysteine) are the primary targets of reactive oxygen species (ROS). Thiol groups in the presence of ROS are oxidised and converted into disulphide bonds, also known as sulphur bridges. This transformation is an indicator of protein oxidation and, under this OS, thiol/disulphide homeostasis (TDH) is disrupted (JONES and LIANG, 2009). Just as this disruption may directly lead to certain diseases, some diseases may also result in such a disruption. Thiols and disulphides have significant roles in detoxification, apoptosis, antioxidant defence, enzyme activity regulation, receptors, transporters, Na-K channels and transcription (BISWAS et al., 2006; CIRCU and AW, 2010). In humans, abnormal TDH, which is a part of the antioxidant defence, has been examined in the pathogenesis of some diseases, such as preeclampsia (ÖZLER et al., 2015), diabetes mellitus (ATEŞ et al., 2016), some anomalies that occur during pregnancy such as recurrent miscarriage, cardiovascular diseases (ALTIPARMAK et al., 2016) and cancer (PRABHU et al., 2014).

The parameters of TDH include native thiol, total thiol, disulphide and the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios. Although it has been possible to measure one side (reductive thiol) of these bilaterally balanced variables since 1979 (ELLMAN and LYSKO, 1979), with the recent development of a new method (EREL and NESELIOGLU, 2014) the levels of both of these two variables can be calculated separately and in total, and the ratios between both sides of the system can also be analysed. To the best of our knowledge, there have been no studies in the literature to date concerning TDH in infertile cows. Therefore, this study provides the first report on this subject. The aim of this study was to determine thiol/disulphide homeostasis in infertile dairy cows with SCE, and to compare these with healthy heifers. The hypothesis of the study was that this homeostasis might have a role in the etiopathogenesis mechanisms of infertility problems in cows with SCE.

Materials and methods

Animals, housing and feeding. This study included 40 Holstein-Friesian dairy cows, aged 3-8 years, that had not become pregnant despite 3 inseminations and had no anomalies (abnormal uterine discharge, pyometra, urovagina, pneumovagina, perineal defects) upon gynaecological examination. The study also included a control group of 20 healthy heifers aged 11-16 months.

The present study was conducted on a private dairy farm where the cows were housed in semi-open free-range barns throughout the whole year and were fed twice daily with a feed mixture which included corn and grass silage, hay, triticale, canola and a balanced grain ration; water intake was ad libitum. The herd consisted of 350 milking cows. Primiparous and multiparous cows were housed together and milked twice daily.

Dairy records and anamnesis. Data related to the animals' age, calving date and count, birth-to-first-insemination time, insemination date and count, and incidence of puerperal period problems were collected from the dairy records and the anamnesis.

Study design and grouping. For determination of the cows without clinical endometritis and ovarian problems, a gynaecological examination was conducted of the vulva, tail and perineum by inspection, transrectal palpation, vaginoscopy and ultrasonography. Endometrial smear samples were taken from the cows that were not found to have any clinical or anatomic anomalies in the examination (n = 40). The infertile cows with no pathological problems in the clinical examinations were evaluated according to the cytology examination results (n = 40). Those with acute endometritis were included in Group I (n = 20) and those with chronic endometritis were included in Group II (n = 16).
A total of 4 samples were not readable. Group III was the control group, consisting of healthy heifers with no previous gynaecological anomalies (n = 20). These heifers were used as the control group because it is difficult to select cows at approximately 200 days postpartum that can be assumed to be free of SCE.

Cytological sampling and evaluation. The cytology samples of the cows with infertility problems were taken with an endocervical brush, according to the method suggested by KASIMANICKAM et al. (2004). The collected samples were placed on slides and transferred to the laboratory after fixation. The smear samples were stained using the Giemsa method, and a proportion of 5% neutrophils or greater was defined as the threshold for SCE (MELCHER et al., 2014). The presence of inflammatory cells and the characteristics of the inflammation were evaluated according to the design of POLAT et al. (2015). To determine the inflammatory status according to this method, the percentages of polymorphonuclear (PMN) cells and lymphocytes (LYM) were calculated. Using these criteria, cytopathological classifications were determined (PMN+LYM ≥5%: acute SCE; LYM ≥5%: chronic SCE).

Within the cytology examination, immunocytochemical staining was also applied to determine chronic inflammation and activity status. The collected samples were incubated for 20 min with bovine serum albumin (BSA) solution (1 g BSA in 100 ml PBS). To determine T lymphocytes and active inflammation, the smear samples were incubated for 20 min with CD3 (Sigma, Catalog no: C7930, dilution 1/200) and IFNγ (AbdSerotec, Catalog no: MCA1964, dilution 1/300) primary antibodies, respectively. For dilution, antibody diluent solution (Catalog no: 003118, Thermo Fisher Scientific) was used. Secondly, a Mouse and Rabbit Specific HRP/DAB Detection IHC Kit (Abcam, Catalog no: Ab80436) was used. The preparations were then counterstained with Mayer's haematoxylin and exposed to a graded alcohol-xylol series. Finally, these sections were covered with mounting medium (Entellan®, Merck, 107960) and analysed under a light microscope. The ratio of immunopositive lymphocytes to all lymphocytes was evaluated.

Collection of blood samples. Blood samples were collected from the coccygeal vein of all groups on the day of examination (day 0). The samples were put into 10 ml evacuated glass tubes that did not include any anticoagulant. The collected samples were centrifuged for 15 min at 3000 revolutions/min to extract the serum, which was then placed in Eppendorf tubes and stored at -20 °C until analysis.

Determining thiol/disulphide homeostasis. Thiol/disulphide parameters were determined in the collected blood samples according to EREL and NESELIOGLU (2014). TDH was analysed by a fully automated method, which allows the evaluation of the two sides of thiol/disulphide homeostasis. This technique uses sodium borohydride (NaBH4) for the reduction of the dynamic disulphide bonds to functional thiol groups. Formaldehyde was used to eliminate all the unused NaBH4, to prevent extra reduction of the 5,5′-dithiobis-2-nitrobenzoic acid (DTNB) and of the disulphide bond formed after the DTNB reaction. After taking the measurements of the native thiol, total thiol and disulphide levels, the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios were calculated (EREL and NESELIOGLU, 2014). The results were obtained as μmol/L.
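For concreteness, the grouping rules and the TDH bookkeeping described above can be expressed as two small helper functions. This is only a sketch: the tie-break between the two overlapping cytological thresholds is our reading of the criteria, and the disulphide formula (half the difference between total and native thiol) follows the published Erel and Neselioglu (2014) method rather than anything stated explicitly here.

```python
# Sketch of the grouping rules and TDH bookkeeping described above. The
# tie-break between the two overlapping cytological thresholds is an assumed
# reading of the criteria; the disulphide formula follows the Erel and
# Neselioglu (2014) method (half the total-minus-native difference).
def classify_sce(pmn_pct, lym_pct):
    """Cytopathological class from PMN and lymphocyte percentages."""
    if pmn_pct + lym_pct >= 5.0 and pmn_pct >= lym_pct:
        return "acute SCE"        # PMN + LYM >= 5%, neutrophil-dominated
    if lym_pct >= 5.0:
        return "chronic SCE"      # LYM >= 5%
    return "no SCE"

def tdh_profile(native_thiol, total_thiol):
    """All six TDH parameters; inputs in umol/L."""
    disulphide = (total_thiol - native_thiol) / 2.0
    return {
        "native thiol": native_thiol,
        "total thiol": total_thiol,
        "disulphide": disulphide,
        "disulphide/total thiol": disulphide / total_thiol,
        "native thiol/total thiol": native_thiol / total_thiol,
        "disulphide/native thiol": disulphide / native_thiol,
    }

print(classify_sce(pmn_pct=7.0, lym_pct=1.0))          # acute SCE
print(tdh_profile(206.54, 227.11)["disulphide"])       # ~10.3 umol/L
```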
Animal rights statement. In this study, the Ethics Committee report was received in accordance with the directions of the Dollvet Inc. Experimental Animals Local Ethics Board (2018/01).

Statistical methods. Data obtained in the study were analysed statistically using NCSS 9.1 software. Conformity of the data to the normal distribution was determined using the Shapiro-Wilk test. Evaluation of the significance of differences between the study groups in the serum TDH profiles was conducted using one-way ANOVA and post hoc Tukey tests. Receiver-operating characteristic (ROC) analysis was applied to calculate the optimal positivity threshold for the TDH parameters used in the diagnosis of SCE. Therefore, each value measured for the antioxidants was considered as a cut-off point. Sensitivity, specificity and accuracy rates were calculated for each cut-off point and ROC curves were created. Summary statistics of variables are reported as mean ± standard deviation (SD) values. A value of P<0.05 was accepted as statistically significant.

Results

The study population consisted of one herd of 20 heifers, 11 primiparous (first lactation) and 29 multiparous (second lactation or greater) cows.

Cytology findings. During the study period, a total of 40 cows without vaginal discharge were included in the study. Cytological samples were obtained from 36 cows; 4 samples were not readable. After the cytology examination, SCE was cytopathologically classified as acute or chronic according to the evaluation of the cells in the collected samples. In the cows determined to have acute endometritis, dense neutrophils were observed together with prismatic epithelium. In the cows with chronic endometritis, the number of lymphocytes was higher (Fig. 1). In 55.55% (20/36) of the infertile cows with subclinical endometritis, the inflammation was determined to be active, and in 44.44% (16/36) it had become chronic.

Thiol/disulphide homeostasis. A statistically significant difference was determined between the control, acute and chronic SCE groups in respect of the native thiol and total thiol variables (P<0.001). The mean native thiol and total thiol values in the acute and chronic SCE groups were lower than those of the control group. The disulphide level and the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios were found to be similar in all the groups (P>0.05). When the acute and chronic SCE groups were compared, the levels were determined to be statistically similar (P>0.05). In Groups I and II, the thiol levels were found to be reduced when compared with the control group. The serum TDH profiles of the cows are summarized in Table 1.

A significant association was determined between the native thiol and total thiol variables and birth count (P<0.001). In this comparison, the mean native thiol and total thiol values of the heifers were higher than those of the primiparous and multiparous cows. No significant difference was observed in respect of the mean disulphide level or the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios (P>0.05) (Table 2).

In the ROC curve analysis, each point on the curve represents the sensitivity and specificity pair of the test for a given cut-off point. Any increase in sensitivity is achieved by a reduction in specificity. As the curve in the ROC plane approaches the upper left corner, the area under the curve (AUC), and therefore the accuracy of the test, increases.
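The cut-off search described above (every measured value tried as a candidate threshold, with sensitivity and specificity computed for each) corresponds to a standard ROC analysis. A minimal sketch with toy numbers standing in for the serum measurements; note that the sign of the score is flipped because lower thiol indicates disease:

```python
# Sketch of the cut-off search described above, using scikit-learn. The toy
# arrays stand in for the serum measurements; lower thiol indicates disease,
# so the score sign is flipped. Youden's J picks the optimal cut-off.
import numpy as np
from sklearn.metrics import roc_curve, auc

y = np.array([1, 1, 1, 1, 0, 0, 0, 0])     # 1 = SCE, 0 = healthy control
native = np.array([205., 220., 240., 260., 280., 300., 310., 330.])  # umol/L

fpr, tpr, thr = roc_curve(y, -native)
print(f"AUC = {auc(fpr, tpr):.2f}")

best = np.argmax(tpr - fpr)                 # Youden's J = sens + spec - 1
print(f"optimal cut-off: native thiol <= {-thr[best]:.1f} umol/L "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")
```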
The significance of the difference of the areas under the curve from 0.5 was assessed using Z statistics. The areas under the ROC curve for the native thiol, total thiol and disulphide values in the animals with SCE were significantly different from 0.5 (0.97, 0.98 and 0.65, respectively) (P<0.05). No statistically significant difference from 0.5 was determined for the AUCs of the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios (0.54, 0.55 and 0.54, respectively) (P>0.05) (Table 3; Fig. 2). Sensitivity, specificity and diagnostic accuracy ratios were determined for the native thiol, total thiol and disulphide values, which, according to the AUC results, were found to be usable in the diagnosis of SCE and to change significantly with the disease. The values giving the highest sensitivity and specificity, 280.7 μmol/L for native thiol, 288.3 μmol/L for total thiol and 8.7 μmol/L for disulphide, were determined as the optimal positivity thresholds. According to these threshold values, the diagnostic accuracy of native thiol, which can be used in the diagnosis of SCE, was calculated as 92.8%, that of total thiol as 89.3% and that of disulphide as 64.3% (Tables 4 to 6).

Discussion

Subclinical endometritis is an inflammatory disease and is one of the major problems affecting reproductive performance (KASIMANICKAM et al., 2004; GILBERT et al., 2005; SHELDON et al., 2006). In many inflammatory diseases, an increase in the production of pro-inflammatory cytokines has been associated with an increase in oxidative stress (OS) mediators. OS is known to play an important role in the pathogenesis of many reproductive events such as embryonic losses, endometritis, follicular cysts and repeat breeder syndrome in cows (ANNE and JACQUEZ, 2002; CELI et al., 2011; EMRE et al., 2017; RIZZO et al., 2007; TALUKDER et al., 2014). In many studies, it has been suggested that in cases of uterine infections, Hp, SAA and ceruloplasmin levels are significantly higher than in healthy animals (BISWAL et al., 2014; CHAN et al., 2004; KAYA et al., 2016). HEIDARPOUR et al. (2012) reported that serum MDA levels were higher in cows with endometritis. In another study, plasma NO concentrations were found to be higher in cows with clinical and subclinical endometritis than in healthy control cows (LI et al., 2010). Similarly, KRISHNAN et al. (2014) identified relatively higher plasma concentrations of NO and LPO, as well as hydrogen peroxide production, in SCE. MUSAL et al. (2004) found that albumin, Hp and SAA levels were higher in cows with endometritis than in healthy animals. BRODZKI et al. (2015) showed that the serum levels of cytokines and acute phase proteins were higher in cows with subclinical endometritis compared to healthy cows. However, in another study conducted on pasture-based cows, there was no association of peripartum Hp with endometritis (BURKE et al., 2010). TDH is one of the important markers of oxidative stress (SEN, 1998; BISWAS et al., 2006). According to our review of the literature, no previous study has investigated TDH as a marker for OS in animals, except for rats. However, these parameters have been analysed in human medicine in the pathogenesis of many problems.
For example, in studies on the relationship between facial paresis and TDH, native thiol and total thiol levels were found to be significantly lower, while the disulphide level was calculated to be higher, compared to the control group (BABADEMEZ et al., 2017; DEMIR et al., 2018). When compared with control groups, serum thiol levels have also been determined to be significantly lower in obstetric diseases, such as pre-eclampsia, gestational diabetes mellitus, pregnancies complicated by idiopathic recurrent pregnancy loss and idiopathic intrauterine growth restriction (ÖZLER et al., 2015; ERKENEKLI et al., 2016; KORKMAZ et al., 2016; CETIN et al., 2018a). One study found that native and total thiol levels decreased and disulphide levels increased in pregnancies with hyperemesis gravidarum. In another study, there was an increase in the disulphide/thiol ratio in patients with idiopathic recurrent pregnancy loss, although there was no difference in disulphide levels (ERKENEKLI et al., 2016). However, in a further study, no difference was found in maternal thiol/disulphide profiles between patients with pregnancy complicated by preterm prelabour membrane rupture and a control group (CETIN et al., 2018b).

In the current study, native thiol and total thiol levels were determined to be lower in the acute and chronic SCE groups of cows than in the healthy control group of heifers. However, it should also be taken into account that the control group of heifers may have higher thiol levels and a better antioxidant status than healthy multiparous cows. In addition, in both the acute and chronic SCE cases, native thiol and total thiol levels were lower than those of the healthy control group (P<0.001), whereas the disulphide levels and the disulphide/total thiol, native thiol/total thiol and disulphide/native thiol ratios were similar in all the groups (P>0.05). In the comparison between the acute and chronic SCE groups, the thiol levels were also similar (P>0.05). It was observed that the increase in OS in the SCE groups may decrease the native thiol and total thiol levels. This indicates a higher OS level in the SCE groups than in the healthy control group. However, unlike some previous studies (GÜMÜŞYAYLA et al., 2016; AKTAŞ et al., 2017; BABADEMEZ et al., 2017), when the disulphide level was analysed in the current study, it was found to be at a similar level to that of the control group. BERNABUCCI et al. (2005) reported that plasma and erythrocyte SH levels were a better measure of oxidative status in transition dairy cows. They examined only the level of total thiol in their study, whereas in our study, besides total thiol, native thiol and four further parameters were also examined. This is the first study to measure the parameters of thiol/disulphide homeostasis in dairy cows with SCE. In addition, the results of our study suggest that the OS in SCE might involve a mechanism independent of disulphide levels. In the light of these findings, it can be said that the results of the test might reveal a role in the etiopathogenesis mechanism of infertility problems. These findings are compatible with previously published data from clinical studies. Therefore, it may be possible to decrease or prevent the impact of oxidative stress on SCE through treatment.

ROC curve analysis is a commonly used method for the capacity evaluation and comparison of diagnostic tests. Using specificity and sensitivity values, this method identifies the best cut-off points for the categorization of experimental groups.
The accuracy of the categorization depends on the size of the area under the ROC curve. This is a commonly used criterion for the selection of the correct diagnostic test (GARDNER and GREINER, 2006; GÜRCAN and BABAK, 2013). ROC curve analysis has been shown to be a complementary calculation method for identifying the degree of chronic endometritis (GÜRCAN and BABAK, 2013). The results of the current study showed the diagnostic accuracy of native thiol, which can be used in the diagnosis of SCE, to be 92.8%, while that of total thiol was 89.3% and that of disulphide was 64.3%. UELAND et al. (1996) suggest that the altered redox thiol status of vascular patients should be considered in the light of antioxidant status in cardiovascular disease. ELMAS et al. (2017) also showed that TDH can identify obese children with cardiovascular inflammation with adequate sensitivity and specificity. From the results obtained in the current study, it was concluded that the thiol parameters could be used as an auxiliary diagnostic method in the diagnosis of SCE.

When the age distribution of TDH was evaluated in the study, the levels were, naturally, found to be higher in the heifers (healthy animals). However, this study demonstrated that parity had no significant influence on TDH. Since parity did not change TDH, it can be said that SCE resulted in similar OS conditions in both primiparous and multiparous cows. However, native and total thiol concentrations were found to be numerically lower in primiparous cows when compared to multiparous cows (P>0.05). This could stem from the adaptation of heifers after their first birth to metabolic changes, or from the formation of a defence mechanism against free oxygen radicals, which are produced in larger amounts in this period.

Conclusions

This is the first study to determine thiol/disulphide homeostasis in Holstein dairy cows with SCE. The findings show that infertility in cows can be associated with their thiol balance. TDH was found to be a useful indicator of oxidant/antioxidant imbalance and may be used as a practical marker in the diagnosis of SCE. However, the higher thiol level in the control group may have been due to the better antioxidant status of the selected heifers compared to healthy multiparous cows. Nevertheless, to clarify the potential correlations and examine the cause-and-effect relationships that have been proposed in previous studies, and confirmed in the current study, there is a need for further studies with a greater number of cases, which can present more comprehensive data and more detailed analyses.

Conflict of Interest Statement

The authors declare that there is no conflict of interest regarding the publication of this article.

Financial Disclosure Statement

This study was financially supported by Harran University (Project No. 17112).
Question Relevance in Visual Question Answering

Free-form and open-ended Visual Question Answering systems solve the problem of providing an accurate natural language answer to a question pertaining to an image. Current VQA systems do not evaluate whether the posed question is relevant to the input image and hence provide nonsensical answers when posed with questions irrelevant to an image. In this paper, we solve the problem of identifying the relevance of the posed question to an image. We address the problem as two sub-problems. We first identify if the question is visual or not. If the question is visual, we then determine if it is relevant to the image or not. For the second problem, we generate a large dataset from existing visual question answering datasets in order to enable the training of complex architectures and model the relevance of a visual question to an image. We also compare the results of our Long Short-Term Memory (LSTM) Recurrent Neural Network based models to Logistic Regression, XGBoost and multi-layer perceptron based approaches to the problem.

INTRODUCTION

The task of automatically answering questions in the context of visual information has gained prominence in the last few years. Being able to answer open-ended questions about an image is a challenging task, but one of great practical significance. For instance, visually impaired individuals might inquire about different aspects of an image in the form of free-form questions. However, when Visual Question Answering (VQA) systems are provided with irrelevant questions, they tend to provide nonsensical answers. VQA systems in real-world scenarios are expected to be sophisticated enough to identify the relevance of the posed free-form questions to an input image, in order to answer them better. There are two aspects of the relevance of a question to the input image:

(1) Non-visual questions, which do not require any input image to answer.
(2) False-premise questions, which require an input image but do not pertain to the provided input image.

In this project, we formulate the problem as follows: given an image and a natural language question, identify if the question is relevant to the input image. For visual versus non-visual question detection, we present the results of two approaches. The first approach is based on training a Logistic Regression model using unigrams, bigrams and trigrams of Part-of-Speech (POS) tags of the question. In the second approach, we use a Long Short-Term Memory (LSTM) Recurrent Neural Network trained on Part-of-Speech (POS) tags to capture the linguistic structure of questions. For the next sub-problem of identifying the true versus false premise of a visual question for an image, we curate a much larger dataset compared to the existing datasets for the problem, using variations of different existing data extraction methodologies. We also present the results of different models used for modeling the true versus false premise problem. In our initial approaches, we use Logistic Regression and an XGBoost classifier with both visual and textual features to model the problem. We also explore several Long Short-Term Memory (LSTM) Recurrent Neural Network architectures and a multi-layer perceptron network to model the problem. Our code is available at [1].

RELATED WORK

There have been significant advances in recent years in identifying the similarity between images and textual information. Text-based image retrieval [12] systems and visual semantic alignments in image captioning models [11], [8] are some examples of efforts in that direction. While some systems do not answer when the input is ill-formed or likely to result in failure, some others try to find the most meaningful answer to such inputs. [7] tries separating visual text from non-visual text in image descriptions and uses them for enhancing image captioning systems. These ideas can be used to boost the performance of the visual question relevance task.
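To make the first approach from the introduction concrete, the sketch below assembles a POS-tag n-gram Logistic Regression pipeline. This is an assumed reconstruction, not our released code: the tokenizer/tagger choice, the hyperparameters and the tiny inline dataset are placeholders.

```python
# Assumed reconstruction of the POS n-gram Logistic Regression baseline.
# Requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def to_pos(question):
    # "What color is the car?" -> "WP NN VBZ DT NN ."
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(question)))

train_q = ["What color is the umbrella?", "How many dogs are there?",
           "What is the meaning of life?", "Who was the first US president?"]
train_y = [1, 1, 0, 0]                      # 1 = visual, 0 = non-visual

clf = make_pipeline(
    # POS-tag uni/bi/trigrams; keep tags like "PRP$" intact
    CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+", lowercase=False),
    LogisticRegression(max_iter=1000),
)
clf.fit([to_pos(q) for q in train_q], train_y)
print(clf.predict([to_pos("Is the plate on the table empty?")]))
```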
Text-based image retrieval [12] systems and visual semantic alignments in image captioning models [11], [8] are some examples of efforts in that direction. While some systems do not answer when the input is ill-formed or likely to result in failure, some others try to find the most meaningful answer to such inputs. [7] tries separating visual text from non-visual text in image descriptions and uses them for enhancing image captioning systems. These ideas can be used to boost the performance of the visual question relevance task. Much of our work is based on the problem and approaches presented in [16]. In this paper, the authors identify the two facets of question relevance for Visual Question Answering, i.e. categorizing questions as visual versus non-visual, and then identifying whether a question has a true premise for an image. The paper also provides several baselines for both problems. For the visual versus non-visual classification, the authors propose a heuristic-based and an LSTM-based approach. And for the true versus false premise problem, there are three baselines based on entropy from VQA models, question-caption similarity and question-question similarity. For the problem of identifying false premises, [13] also makes a significant contribution by extracting the premise from a question. The authors of this work also go on to create a well-curated, much larger and more class-balanced dataset for the true versus false premise problem, based on the VQA dataset [4]. In this paper, the concept of a premise in a question is explored in greater detail and a more diverse set of problems is addressed, such as the following: given an image and a question with a false premise, can we predict the premise in the question that cannot be answered by the image? However, the focus of our work is on exploring scalable algorithms and architectures for answering if the question can be answered in the context of the image. In the context of establishing semantic relationships between images and text, the work done in [2] is also quite relevant; the authors propose a holistic embedding technique for combining multiple and diverse language representations for better transfer of knowledge, by mapping visual and language parts into a common embedding space. Although our problem is supervised, we believe that such an embedding could help improve the identification of question relevance for an image. DATASET While Visual Question Answering has been a widely explored problem, question relevance is relatively new, owing to which there are no existing large-sized and diversely representative datasets. For the first task of detecting visual versus non-visual questions, we refer to the methodology used in [16]. Since the VQA 2.0 dataset [9] is now available, we use the training, validation and test questions for images from this dataset as visual questions. For non-visual questions, we use the philosophical and general knowledge questions provided by [16]. Combining the two sources and eliminating duplicate questions, we have 160,010 questions for training, 82,067 questions for validation and 148,927 questions for testing. The key limitations of this dataset are the class bias, and the fact that the non-visual questions are not diversely representative of all possible non-visual questions. The dataset for the second sub-problem of detecting questions with false premises is based on the VQA corpus [4].
The data acquisition involves creating image-question pairs with true and false premises for the question. The questions in the VQA dataset can be assumed to have true premises, since they were manually generated. To identify questions with false premises, we choose to use questions from the same dataset, but for other images. Here, we explore three approaches: (1) Question Similarity In this approach, for every image, we use the set of true-premise questions from the VQA dataset [4], and extract the k least similar questions from the set. As a similarity measure, we try doc2vec similarity and word2vec similarity over keywords in a question (nouns, verbs and adjectives). (2) Visual True vs False Premise Questions (VTFQ) Dataset In this approach, we use the dataset presented in [16], where the methodology is similar to the one described for question similarity, but instead of using a question similarity measure, the authors sample random questions from the set, and have Amazon Mechanical Turk (AMT) workers annotate the questions as relevant or irrelevant to the corresponding images. The VTFQ dataset consists of 10,793 question-image pairs with 1,500 unique images, of which 79% of the pairs have a false premise. (3) Question Relevance Prediction and Explanation (QRPE) Dataset This dataset is presented by [13], with a methodology that uses VQA 1.0 questions, COCO images and Visual Genome annotations. The approach first involves a premise extraction for a given question, where a question premise is defined as a semantic tuple comprising objects, objects and their attributes, or objects and their relationships in an image. Then, for a given question-image pair in the VQA dataset, the set of all images for which exactly one premise of the question is false is created; these are referred to as negative images. To make the false-premise detection problem challenging, the negative images that are most similar to the positive image (i.e., the image with a true premise for the question) are chosen. The QRPE dataset consists of 53,911 tuples of the form (I+, Q, P, I−), where (I+, Q) is a pair of a positive image and a true-premise question, and I− is the negative image with P as the premise in Q that is false for I−. Note that only a single negative image per question-image pair is extracted using this approach. The problem with the first two approaches is that using random or least similar questions for an image would make the problem of false-premise detection much easier than the case where a single premise of the question were to be false in the context of the image, or where the negative images are very similar to the positive images. While this is addressed in the QRPE dataset, it has only 53,911 positive and negative image pairs, generated from the VQA dataset, which is much larger to begin with. This is largely because of the constraints imposed in the dataset construction, such as restricting the questions and objects to be of specific categories. The constraints are placed in order to minimize the noisy samples, since the data is generated heuristically. Since the dataset is too small, most of the current state-of-the-art models use pre-trained VQA or image captioning models trained on the COCO dataset. We hypothesize that a much larger dataset with only a marginal reduction in robustness would help build a more effective model for classifying true vs false premise, since a large dataset would enable end-to-end training of deep architectures to optimize the classification performance for this task.
To test this hypothesis, we present an extended QRPE dataset that is built by making some modifications to the methodology in [13]. The first difference is that we use VQA 2.0 [9], which has twice as many image-question pairs and also contains complementary image-question pairs. In addition, we relax some constraints on question types (while regulating the robustness of the dataset) as well as increase the number of negative images generated for each question from 1 to 10. Lastly, we use all the image-question pairs in VQA 2.0 as true-premise pairs, including the ones that do not have negative images. This is because the goal of our dataset is to have a large dataset with good representation of both classes, and not to generate (I+, Q, P, I−) tuples. Dataset Construction: Since the true-premise instances are directly derived from the VQA 2.0 dataset, the key challenge lies in generating a mapping of images to questions with false premises. To ensure that the irrelevance of the question does not become trivial, we use only the top 10 most similar images to the positive image for the question. To compute the similarity of images, cosine similarity of VGG-16 image features [17] is used, since the network has been pre-trained on 1.3M images. To identify the set of image-question pairs with false premises, two kinds of premises are considered: first-order premises, which involve a single object, and second-order premises, which involve an object together with one of its attributes. To get the premises for a given image, we construct scene graphs by using the semantic tuple extraction pipeline used in the SPICE metric [3], which is an image captioning metric. We compare these premises with the Visual Genome scene graphs for the COCO images, and use the images whose attribute is an antonym of the attribute in the question. Table 1 shows the data characteristics of the dataset created using this methodology and the applied modifications. Some negative images generated for false first- and second-order premises are illustrated in Figures 1 and 2. Notice how the objects in the negative images for the first-order premise are different from, but look very similar to, the object in the positive image (in this case, the dog). On the other hand, for the second-order premise, the object (container) is the same in the negative images, but the attribute is different (large vs. small). Thus, it is much harder to identify false second-order premises than false first-order premises. In some cases it is not obvious whether the premise is false; in this example, it may not be clear whether the container should be considered large or small. This is why we focus more on classifying the first-order pairs, as they are more definitive. APPROACH Among the different models experimented with, we first present the ones for visual-vs-non-visual question detection, followed by the models used for the true-vs-false premise question relevance problem. Visual vs. Non-visual Question Detection We observe that non-visual questions have a different linguistic structure than visual questions. For example, non-visual questions such as "Name the national rugby team of Argentina." or "Who is the president of Zimbabwe?" often differ in structure from visual questions such as "Is this truck yellow?" and "What color are the giraffes?". Hence, we use Spacy [10] to process all questions and obtain Part-of-Speech (POS) tags as features. We compare two models, a Logistic Regression model versus an LSTM-RNN based approach. (1) Logistic Regression We trained a Logistic Regression model using POS tags of the questions as features.
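A minimal sketch of this pipeline with spaCy and scikit-learn might look as follows; the library usage, the spaCy model name and the toy data are our assumptions, since the paper only states that Spacy was used for POS tagging:

import spacy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nlp = spacy.load("en_core_web_sm")  # assumed model; any English pipeline works

def pos_string(question):
    # Map a question to its sequence of coarse POS tags, e.g.
    # "Is this truck yellow?" -> "AUX DET NOUN ADJ PUNCT"
    return " ".join(tok.pos_ for tok in nlp(question))

# Unigrams and bigrams of POS tags feed a logistic regression classifier.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+", lowercase=False),
    LogisticRegression(max_iter=1000),
)

# Toy fit; the actual training uses the combined VQA / non-visual corpus.
questions = ["Is this truck yellow?", "Who is the president of Zimbabwe?"]
labels = [1, 0]  # 1 = visual, 0 = non-visual
clf.fit([pos_string(q) for q in questions], labels)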
We also experimented with larger feature sets using bigrams and trigrams of POS tags. We implemented a scalable streaming version of logistic regression for training. We assume that the validation and test datasets fit in memory for this problem. (2) LSTM We also trained an LSTM model using the architecture from [16] for modeling visual vs. non-visual question detection. This architecture is shown in Figure 3. True vs. False Premise Question Relevance Detection While visual versus non-visual question detection depends only on the posed textual question, true versus false premise question relevance detection requires joint modeling of the image and the question. We obtain visual and textual representations to model them together. For all our models, we use a pre-trained VGGNet [18] convolutional neural network and obtain the output of its seventh, fully connected layer (FC7) as visual features for the images. (1) Logistic Regression Our first approach to modeling the problem of true versus false premise question relevance was to use a scalable Logistic Regression classifier combining visual and textual features. For the visual features, we used Principal Component Analysis to reduce the 4096-dimensional FC7 features of an image obtained from a pre-trained VGGNet to 300 dimensions. For the textual features, we trained a FastText model [5] on all the VQA questions to obtain average word embeddings for the questions as features. We combine these representations to learn a model to classify whether the question is relevant to the image or not. (2) XGBoost Gradient Boosting Classifier We use the same visual and textual feature representations described above for training an XGBoost Gradient Boosting Classifier [6]. We use a disk-based implementation of XGBoost to accommodate the large training dataset size. (3) Multi-Layer Perceptron We also explore a multi-layer perceptron using the 4096 image features concatenated with 300-dimensional Glove embedding features of the words in the question as input. We use two hidden layers with 5000 and 500 hidden units respectively. The output layer has one unit modeling the probability of question relevance, trained using a binary cross-entropy loss. (Figure 4: RelNet1: Image features after PCA are input to an LSTM layer at every time step. The question is modeled using another LSTM layer whose output is also input to the final LSTM layer.) (4) LSTM We also train different variants of an LSTM architecture for jointly modeling image and textual inputs. These architectures are described briefly below. All our architecture variants are named RelNet (Relevance Net for Question Relevance) for reference. In all architectures, the question is input to the LSTM using an Embedding layer. We initialize the embedding layer using 300-dimensional Glove embeddings [15]. (a) RelNet1 This architecture is shown in Figure 4. In this network, we used Principal Component Analysis to reduce the 4096-dimensional FC7 features of an image obtained from a pre-trained VGGNet to 300 dimensions. We use an LSTM layer to model the input question. The output of this LSTM layer is concatenated with the image features at every time step and then fed to another LSTM layer. This LSTM layer is trained to model the probability of the question being relevant to the image, using a binary cross-entropy loss. (b) RelNet2 This architecture is shown in Figure 5.
In this network, we modify the architecture of RelNet1 by training an embedding layer to reduce the 4096-dimensional FC7 features of an image to 300 dimensions, instead of using Principal Component Analysis. (c) RelNet3 This architecture is shown in Figure 6. In this network, we modify the architecture of RelNet2. We do not concatenate the image features with the output of the first LSTM layer at every time step. Instead, we input the image features only at the first time step of the final LSTM layer. The output of the language LSTM layer is fed to the final LSTM layer from the second time step onwards. (d) RelNet4 This architecture is shown in Figure 7. In this network, we do not have a first LSTM layer to model the input question. We directly input the question to the final LSTM layer from the second time step using an Embedding layer. EXPERIMENTS AND RESULTS We compare our models with several baseline approaches. We train LSTM-based methods on an NVIDIA Tesla K80 GPU. The following sections present the results of the various models for both sub-problems. Visual vs. Non-Visual Question Detection The results for visual vs. non-visual models are presented in Table 2 and Table 3. Since there is a class imbalance problem in the datasets, we report the average per-class (i.e., normalized) metrics for all approaches. Table 2 compares the results for the three Logistic Regression models, using unigrams, bigrams and trigrams of POS tags as features. As can be observed from the table, the addition of bigrams of POS tags as features helped improve the precision and recall of both classes significantly in comparison to using only unigrams of POS tags as features. However, using trigrams as additional features did not give a significant improvement in the metrics. Additionally, using trigrams as features substantially increases the computational time. Hence, we use unigram- and bigram-based logistic regression to compare with the results from the LSTM model. We trained several LSTM models with varying embedding and hidden vector dimensionality on an NVIDIA Tesla K80 GPU. We observed that all models performed similarly across different metrics. Hence, we provide the results of replicating the model provided in [16]. Since we use the VQA 2.0 dataset [9] and a different set of POS tags from Spacy, the results differ from those of the original paper for the same model. Table 4 compares the accuracy metric for various models on the three datasets. The two baseline models presented in the table are obtained from [13]. The VQA-Bin baseline is modeled using a pre-trained deeper LSTM VQA architecture with fine-tuning for the binary question relevance detection task. The QPC-Sim baseline uses a pretrained image captioning model to automatically provide natural language image descriptions and identifies question relevance based on a learned similarity between the question, the premise and the generated image caption. Since the baselines were not trained on our dataset, we present the baseline results for the QRPE dataset's first-order and second-order image-question pairs. All our models are trained on the generated first-order train dataset and tested on the generated first-order test dataset, the second-order test dataset and the QRPE test dataset. All our models are trained end-to-end without relying on other tasks like image captioning and visual question answering. By using a larger dataset, we are able to obtain good performance on the QRPE dataset by directly modeling question relevance.
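For concreteness, the multi-layer perceptron described earlier can be sketched in Keras roughly as follows; the layer sizes and the binary cross-entropy loss come from the text, while the activations, optimizer and variable names are our assumptions:

from tensorflow import keras
from tensorflow.keras import layers

# 4096-dim FC7 image features and a 300-dim averaged GloVe question
# embedding, concatenated into a single input vector.
image_in = keras.Input(shape=(4096,), name="fc7_image_features")
question_in = keras.Input(shape=(300,), name="glove_question_embedding")

x = layers.Concatenate()([image_in, question_in])
x = layers.Dense(5000, activation="relu")(x)   # first hidden layer
x = layers.Dense(500, activation="relu")(x)    # second hidden layer
relevance = layers.Dense(1, activation="sigmoid")(x)  # P(question is relevant)

model = keras.Model([image_in, question_in], relevance)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])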
Usage of pretrained models like VQA and image captioning models is limiting because of the need to have relevant datasets for those tasks that share similar images with the question relevance task. These models have inherent errors in their tasks and also induce additional errors for question relevance datasets which do not share the same images. Hence, by generating a larger dataset, we were able to directly model the question relevance detection task reasonably well. ANALYSIS For the first task of visual vs non-visual question detection, from Table 3 we can observe that logistic regression performs better than the LSTM-based approach on some metrics and comparably on others. However, we provide a scalable streaming implementation of logistic regression, which takes significantly less time for training in comparison to the LSTM. It can also be inferred from the high precision and recall that this is a much simpler problem compared to the second sub-problem. One of the possible reasons for this, we believe, is that the data for non-visual questions is not well represented (since most of them are general-knowledge or philosophical questions) and the complex models are undesirably learning the ad-hoc attributes of the two classes. A possible future work in this regard would be to collate a richer set of datasets for non-visual questions from multiple sources. For the true versus false premise detection, it can be observed from the results in Table 4 that our models perform quite well compared to the baselines on the extended first-order and second-order datasets. Among the LSTM architectures, RelNet1 and RelNet4 performed well on the test dataset. RelNet1 has the PCA dimensionality-reduced information in its image features, thereby eliminating the need to learn a rich image representation from the training data. RelNet4 is a much simpler model in terms of the number of parameters and network layers for the model to learn. Hence, we believe that these two networks provided good results on the test datasets. From a computational perspective, all the RelNets and the multilayer perceptron model took 12-17 minutes per epoch, and we trained each of the models for 20-30 epochs. Since the generated first-order dataset is large, we used a training data generator to generate images batch by batch. We use Keras to code all our models. For the XGBoost model, we use the open source disk-based implementation of XGBoost [6]. The Logistic Regression model is a streaming implementation which avoids loading the entire training data into memory. In order to test the generalization of our models as well as the quality of our dataset, we have also tested our models on the QRPE dataset, and found that the performance is as good as some of the baselines (VQA-Bin), but not as good as the best performing models (QPC-Sim). This can be attributed to two key reasons. The first one is that the models on the QRPE dataset are a lot more complex, since they use pre-trained VQA and image captioning models. Secondly, the construction of the dataset may itself introduce a bias in the model. A possible way to circumvent this bias is to use a combination of differently constructed datasets (like QRPE and VTFQ) as validation sets and minimize the generalization loss while training.
Lastly, from the consistent under-performance of our models on the QRPE dataset compared to our test dataset, we can also infer that the classification task on the QRPE dataset is more challenging, since we loosened the constraints for filtering question types while constructing the extended dataset. CONCLUSION AND FUTURE WORK In this project, we attempt the problem of identifying the relevance of questions posed to visual question answering systems by exploring several approaches that yield similar or better results (by virtue of larger training data). For the first sub-problem of visual versus non-visual question detection, we provide a time-efficient and scalable implementation of logistic regression. This approach provides comparable or better results on all metrics in comparison to strong baselines set by LSTM-based approaches. To solve the second sub-problem of true versus false premise detection using end-to-end classifiers, we propose an extended QRPE dataset using a modified QRPE data-generation pipeline and VQA 2.0, consisting of over 3.6M image-question pairs. We then experiment with multiple families of models for classifying true versus false premises, such as XGBoost, Logistic Regression and LSTM-based RNN models. Each of these approaches has been trained end-to-end on the generated first-order dataset, thereby avoiding the use of pre-trained VQA and image captioning models. While the models performed better than the baselines on our dataset, they did not outperform the state-of-the-art model (QPC-Sim) on the QRPE dataset, which we believe is because our models have much simpler architectures compared to the pre-trained models that QPC-Sim uses. There are many directions for future work in this area. For the dataset of true versus false premise detection, third-order false premises (which include the relationships between objects in the image) could also be included. Given the larger data, many deep generative models can be trained to outperform the pre-trained models. As for features, we have considered only CNN features for images and word embeddings for words, but many different representations of images and words can be explored, with the option of training the language model and the CNN specifically for this task. A natural extension to the problem of question relevance using premises is to explain why the question is not relevant, i.e. which premises are false and what additional information is required to answer the question. With greater capabilities in identifying and commenting on the relevance of questions for images, visual question answering would have a much larger applicability in practical settings.
2018-07-23T06:01:44.000Z
2018-07-23T00:00:00.000
{ "year": 2018, "sha1": "b56b001dff2e13f8b6488a9473163fff31f2953f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a838a1184cb9ca86ae910509bb318266101ae656", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119321887
pes2o/s2orc
v3-fos-license
Path-dependent convex conservation laws For scalar conservation laws driven by a rough path $z(t)$, in the sense of Lions, Perthame and Souganidis in arXiv:1309.1931, we show that it is possible to replace $z(t)$ by a piecewise linear path, and still obtain the same solution at a given time, under the assumption of a convex flux function in one spatial dimension. This result is connected to the spatial regularity of solutions. We show that solutions are spatially Lipschitz continuous for a given set of times, depending on the path and the initial data. Fine properties of the map $z \mapsto u(\tau)$, for a fixed time $\tau$, are studied. We provide a detailed description of the properties of the rough path $z(t)$ that influence the solution. This description is extracted by a "factorization" of the solution operator (at time $\tau$). In a companion paper, we make use of the observations herein to construct computationally efficient numerical methods. Introduction We are interested in scalar conservation laws of the form (1.1) ∂_t u + ∂_x f(u) ż = 0 on (0, T) × R, u(0, x) = u_0(x) for x ∈ R, where 0 < T < ∞ is some fixed final time. The (rough) path z : (0, ∞) → R, the initial value u_0 : R → R, and the flux f : R → R are given functions, whereas u is the unknown function that is sought. The time derivative of z(t) is denoted by ż. Regarding the flux, the standing assumption is f ∈ C² and f is strictly convex. Bear in mind that (1.1) reduces to a standard conservation law in the event z(t) = t. It is well known that such equations are well posed within the framework of Kružkov entropy solutions [12] or, equivalently, kinetic solutions [39]. More precisely, assuming for example u_0 ∈ (L^∞ ∩ L¹)(R), there exists a unique function u ∈ C([0, T]; L¹(R)) satisfying u(0, x) = u_0(x) and (1.2) ∂_t S(u) + ∂_x Q(u) ż ≤ 0 in the sense of distributions, for all convex entropy, entropy-flux pairs (S, Q), i.e., S ∈ C² convex and Q′ = S′f′. If z(t) is a Brownian path (i.e., a realization of a Brownian motion), then z(t) is merely Hölder continuous (of infinite variation) and the conservation law (1.1) is no longer well defined; in this case one could replace (1.1) by the stochastic partial differential equation (SPDE) (1.3) du + ∂_x f(u) • dz = 0, where • denotes the Stratonovich differential. Aiming for a different approach, Lions, Perthame, and Souganidis [34] recently introduced a pathwise notion of entropy/kinetic solution to (1.1), defined for any z ∈ C([0, T]), which is consistent with the notion of Kružkov entropy solution for regular paths z(t). It is proved in [34] that the pathwise solution is stable with respect to uniform convergence of the path. More precisely, assuming u_0 ∈ BV(R) (u_0 is of bounded variation), we have the following result [34, Theorem 3.2]: Let u_i be the pathwise entropy/kinetic solution of (1.1) with path z_i and initial condition u_0^i, for i = 1, 2. Then there is a constant C > 0 such that, for t ∈ [0, T], the continuous dependence estimate (1.6) below holds. Consequently, given a sequence of regular (say, Lipschitz) paths {z_n}_{n≥0} converging uniformly to z as n → ∞, the corresponding Kružkov entropy solutions {u_n}_{n≥0} of (1.1) converge to the entropy/kinetic pathwise solution u in C([0, T]; L¹(R)) as n → ∞. As such, the interpretation of (1.1) in terms of (1.4) is associated with the Stratonovich interpretation of (1.1). In view of the consistency between (1.2) and (1.4), in what follows we will refer to Kružkov entropy solutions and pathwise entropy/kinetic solutions both simply as entropy solutions.
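To fix ideas, on any interval where the path is linear with slope c, equation (1.1) reduces to the standard conservation law ∂_t u + c ∂_x f(u) = 0, which can be advanced with any monotone scheme under a CFL condition involving |c|. The following minimal sketch (our illustration, not the finite volume method of [26]) evolves a solution segment by segment along a piecewise linear path, assuming a Burgers-type flux, periodic boundary conditions and a Lax-Friedrichs discretization:

import numpy as np

def lax_friedrichs_step(u, c, dt, dx, f):
    # One Lax-Friedrichs step for u_t + c * f(u)_x = 0 (periodic BC).
    F = c * f(u)
    return 0.5 * (np.roll(u, 1) + np.roll(u, -1)) - dt / (2.0 * dx) * (np.roll(F, -1) - np.roll(F, 1))

def solve_along_path(u0, times, z, dx, f, df_max):
    # Evolve u_t + f(u)_x * zdot = 0 along a piecewise linear path with
    # nodes (times[k], z[k]); on each segment zdot is the constant slope c,
    # so the CFL restriction only involves |c| * max|f'|.
    u = u0.copy()
    for t0, t1, z0, z1 in zip(times[:-1], times[1:], z[:-1], z[1:]):
        c = (z1 - z0) / (t1 - t0)
        if c == 0.0:
            continue  # flat segment: the solution does not change
        dt_max = 0.9 * dx / (abs(c) * df_max)
        t = t0
        while t < t1:
            dt = min(dt_max, t1 - t)
            u = lax_friedrichs_step(u, c, dt, dx, f)
            t += dt
    return u

# Example: Burgers flux f(u) = u^2/2 and a zig-zag (non-monotone) path z.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
u = solve_along_path(u0, [0.0, 0.5, 1.0], [0.0, 0.3, 0.1], dx=x[1] - x[0], f=lambda v: 0.5 * v * v, df_max=1.0)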
In this paper, we approximate the entropy solution u of (1.1) by a sequence {u_n}_{n≥0} of solutions utilizing piecewise linear approximations {z_n}_{n≥0} of the rough path z ∈ C([0, T]). The "continuous dependence on the data" estimate (1.6) ensures that this approximation converges to the correct solution of (1.1). A motivation for exploring such approximations is their relevance to numerical methods. The computational difficulties associated with solving (1.1) numerically stem from the infinite variation of the rough path z(t). This forces the time step to be very small due to the well-known CFL stability condition, linking the temporal and spatial discretization parameters. Our main result, valid for convex flux functions f, states that it is possible to replace the rough path by a piecewise linear path of finite variation, and still obtain the same solution at a fixed time. In [26] we make use of this result to construct computationally efficient finite volume methods. Let us discuss in more detail our results relating to the path-dependence of entropy solutions. Suppose the path z(t) is piecewise linear and continuous, and let u be the corresponding entropy solution to (1.1). Fix a time τ ∈ [0, T]. We seek the "simplest" path z̃ such that the corresponding solution ũ satisfies ũ(τ, ·) = u(τ, ·). To motivate, fix a time interval [t_1, t_2] with 0 ≤ t_1 < t_2 ≤ τ, and suppose for simplicity that z(t) ≥ z(t_1) on [t_1, t_2]. Let v be the entropy solution of the corresponding autonomous problem (with ż ≡ 1) starting from u(t_1) at time z(t_1). Then u(t_2) = v(z(t_2)) in either of the two following cases: (i) z is monotone on [t_1, t_2]; (ii) u (and v) is a classical solution (without shocks) on [t_1, t_2]. Consequently, we may replace parts of the path z satisfying either (i) or (ii) by straight line segments, i.e., replace z by the path z̃ that agrees with z outside [t_1, t_2] and is linear on [t_1, t_2]. As the solution ũ (corresponding to z̃) satisfies ũ(t_2) = v(z̃(t_2)) = v(z(t_2)) = u(t_2), it follows that u(τ) = ũ(τ). In view of this, we can "simplify" the path z(t) by replacing the parts where it satisfies either (i) or (ii) by straight line segments. It is easy to determine which parts satisfy (i), i.e., where the path is monotone. Considering (ii), we need to determine the parts of the path z on which u (and v) is a classical solution (without shocks). To this end, let us recall the so-called Oleȋnik estimate for strictly convex fluxes f (and ż = 1) [12], which bounds the one-sided spatial increments of the solution: for example, with f = u²/2, the only admissible shocks are those for which the left value is larger than the right. Similarly, with f = u²/2 and ż = −1, only upward jumping shocks are admissible. When ż(t) oscillates (i.e., takes on positive and negative values, say ±1), one observes that shocks in u can only exist when the path z takes on values not already assumed at some earlier point in time. Consequently, starting from a piecewise linear path z, it is possible to "inductively" construct a new path z̃(t), with smaller total variation, by replacing appropriate parts of z(t) by linear segments. This inductive procedure, which we describe in detail in an upcoming section, gives rise to a "minimal" path associated with the final time τ and the initial data u_0. We refer to the resulting path as the oscillating running min/max path and label it Orm_{τ,u_0}(z) (see Definition 2.2 below). Setting z̃ = Orm_{τ,u_0}(z), we have ũ(τ) = u(τ). Note that the application of the Oleȋnik estimate depended on ż being piecewise constant. However, for sufficiently smooth u_0, it turns out that z ↦ Orm_{τ,u_0}(z) is well defined for any continuous path z. Indeed, for a general path z, we proceed by suitable piecewise linear approximation. This approach may be viewed as a factorization of the solution operator.
To see how this is related to the construction of the map z ↦ Orm_{τ,u_0}(z), we identify two paths z and z̃ as long as u(τ) = ũ(τ), where z ↦ u(τ) and z̃ ↦ ũ(τ). This naturally leads to a factorization of the solution operator (one for each fixed time τ) as a composition of a quotient map and an injective map, see Figure 2. Up to precomposition by a nondecreasing function, the quotient map may be identified with the map z ↦ Orm_{τ,u_0}(z). The injective map is associated with the solution operator restricted to piecewise linear paths. Another question raised in this work is related to the optimal choice of paths relative to the continuous dependence estimate (1.6). To be more precise, in view of the above discussion, there exists for each path z a multitude of paths z̃ such that, for a given time τ ∈ [0, T] and initial condition u_0, the corresponding solutions u and ũ to (1.1) satisfy u(τ) = ũ(τ). In order to improve (1.6), one may search for paths z̃_1, z̃_2 satisfying u_1(τ) = ũ_1(τ), u_2(τ) = ũ_2(τ) such that sup_{0≤t≤τ} |z̃_1(t) − z̃_2(t)| is as small as possible. In Theorem 2.10 below, it is shown that this minimization problem may be bounded in terms of a second minimization problem solvable by dynamic programming. Before ending this introduction, we mention that many researchers have recently studied the effect of adding randomness to conservation laws and other related nonlinear partial differential equations. This includes stochastic transport equations, which bear some resemblance to (1.3), in which the velocity field has low regularity and the "transportation noise" is driven by a Wiener process W(t). For some representative results, see e.g. [3,21,37,38]. In a different direction, many mathematical papers [8,9,11,13,14,15,19,27,32,30,20,41,40] have studied the effect of Itô stochastic forcing on conservation laws with nonlinear flux f and noise coefficient σ, where W(t) is a (finite or infinite dimensional) Wiener process. Numerical methods are studied in [2,7,6,5,29,31,17,18,33]. The remaining part of this paper is organized as follows: In Section 2, the main results of the paper are presented, without proofs, along with the notation necessary to make the statements precise. In Section 3, proofs of the given results are presented. Main results To state the main results precisely, we introduce some notation and definitions. The regularity of u_0 is quantified by two numbers 0 ≤ M_+, M_− ≤ ∞ satisfying (2.1). Furthermore, we define the sets on which ρ_z^± is strictly increasing/decreasing. The next lemma summarizes the essential properties of these sets. The sets B_z^± are closed with respect to increasing sequences, i.e., if {t_n}_{n≥0} ⊂ B_z^± satisfies t_n ↑ t for some t ≥ 0, then t ∈ B_z^±. Hence, where ∂_− denotes the left derivative and cl_− denotes the closure with respect to increasing sequences. Consequently, for piecewise linear z there exist 0 ≤ N_± < ∞ and 0 ≤ s_±, where we use the convention that the union is empty if N_± = 0. We may now give the precise definition of the Oscillating Running Min/Max. Even though the Oscillating Running Min/Max only depends on z, τ, M_−, M_+, we are often interested in a specific initial condition u_0 satisfying (2.1) for some given numbers 0 ≤ M_−, M_+ ≤ ∞. In such situations we often write Orm_{τ,u_0}(z) instead of Orm_{τ,M_±}(z). Let us mention that Orm_{τ,M_±}(z) is well defined for any path z ∈ C^0([0, T]), given that 0 ≤ min{M_+, M_−} < ∞, see Lemma 3.9. In view of the above discussion, there emerges a natural equivalence relation on the set of paths.
For convenience, the relation is here defined on an arbitrary interval. Let u_i be the entropy solution to (1.1) with path z_i and initial condition u_0, for i = 1, 2. If u_1(t_2) = u_2(t_2), we say that z_1 is equivalent to z_2, written z_1 ∼ z_2, on [t_1, t_2] with initial condition u_0. We are now ready to state the result alluded to above. As mentioned above, for piecewise linear paths, the Oleȋnik estimate implies that the solution u(t) is (spatially Lipschitz) continuous for t in certain regions of the path. In the following theorem this result is extended, via Theorem 2.4, to the case of a general path z ∈ C^0([0, T]). Remark 2.6. A priori, the left/right limits should be interpreted as essential limits and the statement should be restricted to points −∞ < x < y < ∞ such that these limits exist. However, whenever the lower or upper bound is finite, it implies that u(t, ·) belongs to BV_loc(R) and the left/right limits exist in the classical sense. Remark 2.7. In [22], the authors investigate regularity properties of solutions to the equation where z is a continuous path, and F is a nonlinear function meeting the standard assumptions from the theory of viscosity solutions of fully nonlinear degenerate parabolic PDEs. An L^∞-bound on the second derivative D²v is established in [22]. In the special case, this estimate reduces to the Lipschitz (W^{1,∞}) bound ess sup_{x ≠ y} This estimate is similar to the one provided by Theorem 2.5, which in the special case f(u) = u²/2 can be recast as , for a.e. x, y ∈ R. Although the results are similar, both relying on the strict convexity of the flux (with the one in [22] restricted to f = u²/2), the proofs are different. We work at the level of conservation laws and use the method of generalized characteristics. The argument in [22] relies on semiconvexity preservation properties of Hamilton-Jacobi equations. In view of Theorem 2.4, the equivalence class of a given path is nontrivial. The following result yields a condition sufficient for two paths to be equivalent. Let Then Remark 2.9. We note that the existence of α_1, α_2 is closely related to the problem of optimal transport (on R) [42]. Here we have two (continuous) transference plans, represented by α_1, α_2, which should satisfy two transportation problems. Recall that ρ^+_{z_i} is constant whenever ρ^−_{z_i} is decreasing, while ρ^−_{z_i} is constant as long as ρ^+_{z_i} is increasing, i = 1, 2. It seems likely that the condition (2.5) is also necessary, at least on a more restricted space of paths, cf. Lemma 3.8. For z_1, z_2 as in Theorem 2.8, we write It is obvious that the relation ∼_• is both reflexive and symmetric, i.e., that Then there exist, at least in the piecewise linear setting, nondecreasing surjective maps ζ_1, ζ_2 such that α_2 ∘ ζ_1 = β_1 ∘ ζ_2, cf. Lemma 3.8. Hence, ∼_• is also transitive. For a path z, we denote its equivalence class with respect to τ and M_± by [z]_{τ,M_±}, that is, Regarding the continuous dependence estimate (1.6), one may exchange the uniform distance between two paths for the distance between two equivalence classes, where ζ ∼_{τ,M^i_±} z is shorthand notation for ζ ∼ z on [0, τ] for any initial condition u_0 satisfying (2.1) with M_± = M^i_±. Therefore, (1.6) may be replaced accordingly. In view of Theorem 2.8, one may hypothesize that the distance (2.6) can be estimated in terms of the minimization problem above. Our next result shows that this is indeed the case. To make the statement precise, we need to introduce some notation: let {(τ_m, z(τ_m))} denote the interpolation points associated with Orm_{τ,M_±}(z), cf. Definition 2.2, and set, with the convention that κ_± = ∞ if the set is empty.
Set ι(t) = t. Let us give a geometrical interpretation of the minimization problem. To this end, let α be a path in [0, τ]². Given a subset S ⊂ [0, τ]², denote by T_α(S) the first time α hits S. By Lemma 3.11 and the continuity of α, the value Φ[z_1, z_2](α_1, α_2) depends only on where the path α hits L_+ ∪ L_−; here L_± are the line segments depicted in Figure 3. As a result, Φ[z_1, z_2](α_1, α_2) is a function of the path α, independent of its parameterization. From the viewpoint of factoring the solution operator, Theorem 2.10 supplies a description of the metric induced by the uniform norm on the quotient space, cf. Figure 2 (the map [z] ↦ u(τ)). Based on the above observations, we now give an outline of how the minimization problem may be solved using dynamic programming. Introduce a cost function c, where 1_S is the characteristic function of S. For any s ∈ [0, τ]², let A_s be the set of monotone paths connecting s and (τ, τ), where α_1, α_2 are nondecreasing and continuous. Define a value function V; then V(0, 0) is the sought value. Let us show how to compute V on the grid G, where {τ^i_n}_{n=0}^{N_i} = T_{z_i}. First note that for s with s_1 = τ, the set of admissible paths A_s consists of paths tracing out the straight line connecting s and (τ, τ). To compute V on the remaining part of G, define the squares Q_{j,k}. As ∂_+Q_{j,k} ⊂ ∂_−Q_{j−1,k} ∪ ∂_−Q_{j,k−1} for j, k > 1, we may compute V on the entire grid G, starting in the upper right square Q_{1,1} and tracing our way down to the lower left square Q_{N_1,N_2}. Proofs of main results In this section we provide detailed proofs of Theorems 2.4, 2.5, 2.8, and 2.10. Let v be the entropy solution of the autonomous conservation law with flux f and initial data u_0 on [z(0), ∞) × R, and set u(t, x) := v(z(t), x). Then, formally, it follows that u solves (1.1). Let us take a closer look at this substitution, by considering the viscous approximation. That is, let v_ε be the classical solution to the corresponding parabolic problem. Then u_ε(t, x) := v_ε(z(t), x) satisfies an approximate entropy inequality for any convex entropy, entropy-flux pair (S, Q). A priori, due to the factor ż, the limiting solution does not necessarily dissipate the entropy. However, if ∂_x v(z(t), x) = ∂_x u(t, x) is bounded, then the dissipation vanishes as ε → 0 and u ought to be a solution. Also, if ż ≥ 0, then u_ε ought to converge to the entropy solution to (1.1). Similarly, if ż ≤ 0, we let v_ε solve the parabolic problem with flux −f, on (−∞, −z(0)) × R, and take u_ε(t, x) = v_ε(−z(t), x). These observations are formalized in the next two lemmas. We consider first the case that z is monotone. In the following discussion it will be convenient for us to talk about backward entropy solutions. For us the natural backward solution is the (forward) entropy solution to the problem with flux −f. That is, the entropy solution to (3.2) on (−∞, 0] × R is obtained by solving the corresponding problem for w with flux −f. By the one-to-one correspondence v → w it is clear that the associated notion of backward entropy solution is well posed for any u_0 ∈ (L^∞ ∩ L¹)(R). The backward/forward entropy solution to (3.2) on (−∞, ∞) × R is obtained by considering the forward solution for t ≥ 0 and the backward solution for t < 0. The fact that the initial condition was specified at time t_0 = 0 was somewhat arbitrary, and the extension to general t_0 ∈ R may be obtained by the substitution t → t − t_0. Let u be the entropy solution to (1.1) on [0, τ] with initial condition u_0, and let v be the backward/forward entropy solution to (3.2). Furthermore, u is nondissipative, i.e., for any convex entropy, entropy-flux pair (S, Q) and for any ϕ ∈ C_c^∞([0, τ) × R), (3.4) holds. Proof.
By the weak formulation of (3.3), using the Lipschitz continuity in x, for any [t_1, t_2] ⊂ (z_min, z_max) we obtain a corresponding bound. Consequently, v is locally Lipschitz continuous in time. Let us consider an approximation. By Lipschitz continuity, m_v ∈ L^∞([z_min + ε, z_max − ε] × R) for ε > 0. Moreover, by the chain rule, m_v(z, x) = 0 for almost all (z, x) ∈ [z_min, z_max] × R. Hence, we conclude that (3.4) holds. Combining Lemmas 3.2 and 3.3 we obtain the following. Let z be a Lipschitz continuous path, satisfying for some τ > 0 the assumption above. Let v be the backward/forward entropy solution to (3.2). Proof. Recall that v is composed of a backward and a forward solution. That is, where w_± are the entropy solutions to the corresponding forward problems. Consequently, the conclusion follows by Lemma 3.3. We apply the convention (0)^{−1} = ∞. Proof. Let 0 ≤ t ≤ T be given. By assumption there exists a finite sequence {t_n}_{n=0}^N, 0 = t_0 < · · · < t_N = T, such that the graph of z is a straight line on each interval [t_n, t_{n+1}], n = 0, . . . , N − 1. Without loss of generality, we will prove the result for t ∈ {t_n}_{n=0}^N. Let P_n be the statement of the lemma for t = t_n. We need to prove that P_{n+1} holds given the validity of P_n, where P_n precisely reads as above, with ∆z_n = z(t_{n+1}) − z(t_n). Therefore, the induction step follows. 3.3. Equivalence. Let us first, for convenience, collect some consequences of the above results in terms of the equivalence relation, see Definition 2.3. Proof. Let u_1, u_2 denote the entropy solutions to (1.1) associated with the paths z_1, z_2 and initial condition u_0. Next we provide a preliminary version of Theorem 2.8. Before proving the claim, let us see why the result follows. Let D be any finite union of squares, i.e., D = ∪_{(j,k)∈I} Q_{j,k}, where I is a finite index set. Then, for such D, let the lower and upper boundary be defined by functions g_i such that: (1) g_i is left continuous. Proof of Claim 2. To this end, we might as well assume s_1 ≤ s̃_1 and s_2 ≥ s̃_2, the other cases being either trivial or analogous. As ρ_2 is nondecreasing, but then, as ρ_1 is nondecreasing, ρ_1(s_1) = ρ_1(s̃_1). Similarly, ρ_2(s_2) = ρ_2(s̃_2). Consequently (3.12) follows. Likewise, it is easily seen that the straight line connecting s and s* belongs to S. Similarly for s̃ and s*. To prove (3.13) we observe that one estimate holds; on the other hand, a matching reverse estimate holds. Combining the two yields (3.13). Finally, for any 1 ≤ p ≤ ∞, and so u_0^ε → u_0 in L^p(R). Proof of Theorem 2.4. Assume u_0 ∈ BV(R). We construct an approximation of the path z by supplementing the interpolation points {(τ_m, z(τ_m))}. Before proving the claim, we verify that Theorem 2.4 follows. As z is uniformly continuous on [0, τ], we may pick the sequence {t_n}_{n=0}^∞ such that z_n → z uniformly on [0, τ]. Let u_n be the solution to (1.1) with path z_n and initial condition u_0. By (1.6) it follows that u_n(τ) → u(τ) in L¹(R). But by (3.14), u_n(τ) = u^0(τ) for all n ≥ 0. Hence u^0(τ) = u(τ), which proves that the equivalence holds for all initial functions u_0 ∈ (L¹ ∩ L^∞ ∩ BV)(R). Next, we want to apply Theorem 2.4 to prove Theorem 2.8. An important step in this direction is the following observation, which is also of importance for Theorem 2.10.
2017-11-06T11:29:05.000Z
2017-11-06T00:00:00.000
{ "year": 2017, "sha1": "bd95d8bf35e7d3f85ab89b95ea8af732e960d415", "oa_license": "CCBYNCND", "oa_url": "https://www.duo.uio.no/bitstream/10852/70901/2/hkrs.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bd95d8bf35e7d3f85ab89b95ea8af732e960d415", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
7902171
pes2o/s2orc
v3-fos-license
Attention, Visual Perception and their Relationship to Sport Performance in Fencing Attention and visual perception are important in fencing, as they affect the levels of performance and achievement in fencers. This study identifies the levels of attention and visual perception among male and female fencers and the relationship between attention and visual perception dimensions and sport performance in fencing. The researcher employed a descriptive method in a sample of 16 fencers during the 2010/2011 season. The sample was comprised of eight males and eight females who participated in the 11-year stage of the Cairo Championships. The Test of Attentional and Interpersonal Style, which was designed by Nideffer and translated by Allawi (1998), was applied. The test consisted of 59 statements that measured seven dimensions. The Test of Visual Perception Skills designed by Alsmadune (2005), which includes seven dimensions, was also used. Among females, a positive and statistically significant correlation between the achievement level and Visual Discrimination, Visual-Spatial Relationships, Visual Sequential Memory, Narrow Attentional Focus and Information Processing was observed, while among males, there was a positive and statistically significant correlation between the achievement level and Visual Discrimination, Visual Sequential Memory, Broad External Attentional Focus and Information Processing. For both males and females, a positive and statistically significant correlation between achievement level and Visual Discrimination, Visual Sequential Memory, Broad External Attentional Focus, Narrow Attentional Focus and Information Processing was found. There were statistically significant differences between males and females in Visual Discrimination and Visual-Form Constancy. Introduction Each athletic activity has its own unique psychological characteristics. These characteristics are related to the activity's natural components and contents, as well as to its requirements for an athlete's motor abilities, tactical capabilities and higher mental capabilities, such as cognition, perception, memorization, attention and thinking. According to Deary and Howard (1989), there are performance skills in many sports activities that are difficult to observe. Using film analysis, these authors confirmed eye movements that are invisible to the naked eye. They described the phenomenon as optical anticipation. An example of this phenomenon is related to the difficulty of following a baseball pitch in the last 8-10 feet before it strikes the bat (Deary and Howard, 1989). Optical anticipation appears more clearly in fencing. The fencer, referee and even viewers can suffer from the phenomenon when they are reviewing and analyzing a filmed performance. Fencing is a sport that is characterized by rapid motor performance. For example, the execution of an attack takes fractions of a second. The difficulty of reviewing performances in fencing translates to a need for a high degree of optical concentration. Concentration is needed to follow the movements of the feet, body and armed hand of each fencer. A follow-up electrical system is a requirement for this sport. In fencing, each individual's ability level depends on many variables. Visual variables are the most important, including the accuracy and quality of vision.
A visual acuity of 6/6 means that an athlete can see things clearly, but it does not mean that the athlete can determine his/her place in space, how quickly his/her opponent moves or whether the direction of an object will change. Visual processing is responsible for these abilities. Ariel (2012) suggested that the visual sense plays an important role in physical activity. The visual sense provides athletes with an estimated 80% of the sensory input that occurs during physical activity, especially in activities that require advanced perceptual senses. The perceptual senses are the visual skills that provide athletes with accurate and rapid information; they are considered to be the first step in information processing. The more unclear, incomplete or confused the information and data are, the lower the expected response from the athlete (Ariel, 2012). Although perception and attention are two separate processes, they are also related. Attention occurs first, but perception interferes with it; attention is a basic condition for perception to occur (Hagemann et al., 2010). Furthermore, attention and perception are mutually influenced and affected by each other. In many cases, attention can be directed from within an individual, which means that he/she can choose what to focus on or search for specific environmental stimuli to achieve a particular goal (Parkin, 2000). The direction of attention is usually affected by environmental stimuli located in the individual's area of attention (Hagemann et al., 2010). Attention is one of the most important mental processes for the growth of an individual's knowledge. Attention enables the individual to select various sensory stimuli to acquire skills and to form appropriate behavioral habits. Attention allows the individual to adapt to his/her environment (Parkin, 2000). Attention is considered to be one of the important psychological factors that determine superiority in fencing. Attention is of great significance for fencers. Mental abilities, such as attention, perception, intelligence, reaction and expectation, are considered to be the most important factors that must be managed. Mental abilities play a major role in motor behavior, as well as emotions and responses during participation in physical activity in sports. Using mental abilities and emotional factors at their highest limits enhances an athlete's effort during training and competitions. For these reasons, the levels of attention and visual perception in fencers were identified and their relationship to a fencer's achievement level was measured. Aims of the study This study aimed to identify the following parameters: -Male and female fencers' attention levels -Male and female fencers' visual perception levels -The relationship between attention and visual perception dimensions and fencers' achievement levels Material and Methods Data collection 1- The Test of Attentional and Interpersonal Style (TAIS), designed by Nideffer and translated by Allawi (1998). 2- The Test of Visual Perception Skills (Alsamadone, 2005). Study method Taking into account the nature of the study, the researcher used the descriptive method. Participants The study sample included 16 fencers registered in the 2010/2011 season of the Egyptian Federation of Fencing. The sample was comprised of eight males and eight females who participated in the 11-year stage of the Cairo Championships.
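The correlational analyses reported in the Results can be reproduced with standard statistical tools. The sketch below is purely illustrative: the paper does not state which correlation statistic or difference test was used, so Pearson's r and an independent-samples t-test are assumed, and the scores are synthetic stand-ins rather than the study's data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Synthetic stand-in scores for n = 8 fencers (one value per fencer).
achievement = rng.normal(50.0, 10.0, size=8)
visual_discrimination = achievement + rng.normal(0.0, 5.0, size=8)

# Correlation between a perception dimension and the achievement level.
r, p = stats.pearsonr(achievement, visual_discrimination)
print(f"r = {r:.2f}, p = {p:.3f}")  # significant if p is below the chosen alpha

# Male vs. female difference on a dimension (independent-samples t-test).
males = rng.normal(52.0, 6.0, size=8)
females = rng.normal(58.0, 6.0, size=8)
t_stat, p_diff = stats.ttest_ind(males, females)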
Survey study The researcher conducted a survey, between November 25 and November 30, 2010, in a sample of six fencers from the same community who were not included in the study sample. For visual perception items, the validity coefficient ranged from 0.887 to 0.954, and the Alpha reliability coefficient was 0.912. For attention items, the validity coefficient ranged from 0.895 to 0.931, and the Alpha reliability coefficient was 0.916. Basic study: The researcher conducted the TAIS and the Test of Visual Perception Skills in the basic study sample at the Egyptian Fencing Club on December 2-3, 2010. RESULTS Measurements and analysis showed that the highest attention dimension scores for both males and females were obtained for Broad External Attentional Focus (BET), Information Processing (INFP) and Narrow Attentional Focus (NAR) (Figure 1). Further analysis revealed that the highest visual perception scores for male fencers were obtained for Visual-Spatial Relationships (VSR), Visual Sequential Memory (VSM) and Visual Figure-Ground (VFG); for female fencers, the highest scores were Visual-Spatial Relationships (VSR), Visual Sequential Memory (VSM) and Visual Discrimination (VD) (Figure 2). For male fencers, the achievement level was correlated with VD and VSM. For female fencers, the achievement level was correlated with VD, VSR and VSM. For the combined group, the achievement level was correlated with VD and VSM (Table 1). Besides, for male fencers, the achievement level was correlated with BET and INFP; for female fencers, the achievement level was correlated with NAR and INFP. For the combined group, the achievement level was correlated with BET, NAR and INFP (Table 2). Analysis revealed statistically significant differences between males and females in Visual Discrimination and Visual-Form Constancy (Table 3). Discussion The study sample presented high scores for the following dimensions of attention: BET, INFP and NAR. The dimension BET illustrates that fencers are able to integrate several external variables at the same time. The dimension INFP shows that individuals tend to process a great deal of information, and their informative-cognitive worlds are filled with various types of information. Finally, the dimension NAR expresses fencers' ability to narrow their attention when desired, and it reflects their ability to focus on one thing or one idea. There were no significant differences between male and female fencers, and the attention dimensions were similar in importance for males and females. There were differences between male and female fencers in terms of the high scores obtained for dimensions of visual perception. Visual Discrimination (VD) was the most important dimension for female fencers, whereas for male fencers, VD ranked only sixth in importance. Furthermore, female fencers were differentiated from male fencers in the Visual-Form Constancy (VFC) dimension. However, there was no clear evidence of significant differences between male and female fencers for the remaining dimensions of visual perception. This finding indicates the distinctiveness of female fencers in some of the visual perception dimensions. However, the entire study sample showed high scores for Visual Memory (VM) and Visual Sequential Memory (VSM). Athletes require different visual capabilities for different sports.
Peripheral vision, optical depth, central vision, visual memory, visual concentration and visual reaction are the most important of these capabilities, and their importance varies according to the different requirements of the game in question. Generally, sports with faster performance requirements, such as fencing, demand advanced visual capabilities and highly distinctive visual abilities. Thus, there is a need for athletes to have intact senses. In the study sample, the attention dimensions that influenced the achievement level were Broad External Attentional Focus (BET), Information Processing (INFP) and Narrow Attentional Focus (NAR). Borysiuk and Waskiewicz (2008) indicated that BET and NAR are interrelated: the focus of optical vision on detailed optical information is related to the target position and keeping the body in a balanced position (stability). While peripheral vision is responsible for discrimination among stimuli, contrasts, movements and timing (which make up three-dimensional vision), fencers require NAR in the stable state, and during movement, they require peripheral vision. Wood and Abernethy (1997) indicated that the sharpness of vision for moving objects changes from 60% to 70% per second. Hagemann et al. (2010) suggested that since fencing movements are extremely fast, early recognition of the target area of an opponent's attack is expected to be a key factor for success. This hypothesis was confirmed by the expert-advanced-novice differences observed under all of the experimental conditions. In particular, top-ranked fencers were able to extract markedly more information and use that information to predict the direction of their opponent's attack (Hagemann et al., 2010). Borysiuk and Waskiewicz (2008) suggested that the way information is acquired from the environment and the different perception speeds of the individual senses affect the efficacy of technical and tactical actions. The above information is extremely useful in motor training for the development of fencing techniques. A fencer can prepare appropriate strategies to perceive the position of the opponent's blade (its point and bell guard in particular) to prepare for offensive actions. Knowledge about the opponent's movements, distance evaluations and visual concentration are the most useful signals. They provide the fencer and the coach with valuable feedback and permit a strategy of switching from focal vision to ambient vision and vice versa to be employed. These strategies develop concentration, which improves the ability to react to initial signals and anticipate the opponent's actions. They allow a fencer to recognize significant signals and reject misleading information, such as an opponent's feints (Borysiuk and Waskiewicz, 2008). The perception dimensions that were clearly related to and affected the achievement level in the study sample were Visual Discrimination (VD), Visual Sequence Memory (VSM) and Visual-Spatial Relationships (VSR). Sight plays a major role in the ability to reach high achievement levels because of its association with attention concentration. These dimensions are correlated because an increased ability to distinguish optical depth and improved visual concentration and visual memory (visual capabilities) have a positive impact on performance. Zarrad (2001) suggested that these visual capabilities represent an early stage in the preparation for the processing of information and visual stimuli.
As a knowledge-related ability, visual perception is primarily based on other cognitive abilities that manage visual stimuli, such as attention, memory and thinking. Although most explanatory cognitive theories regard visual perception as more efficient and faster when the optical memory and the data storage of stimuli are accurate, we cannot separate visual perception from other cognitive processes, as visual perception overlaps and interacts with other aspects of cognition (Abdul Hamid, 2006).

Conclusions
Based on the results of this study, the following conclusions can be drawn:
1. For fencers, the most important dimensions of attention are Broad External Attentional Focus (BET), Information Processing (INFP) and Narrow Attentional Focus (NAR).
2. The most important dimensions of visual perception for male fencers are Visual-Spatial Relationships (VSR), Visual Sequential Memory (VSM) and Visual Figure-Ground (VFG); for females, the most important dimensions are Visual-Spatial Relationships (VSR), Visual Sequential Memory (VSM) and Visual Discrimination (VD).
3. There are no significant differences between male and female fencers in the dimensions of attention, but female fencers are differentiated by two dimensions of visual perception: Visual Discrimination (VD) and Visual-Form Constancy (VFC).
4. The dimensions of attention most strongly correlated with fencer achievement levels are Broad External Attentional Focus (BET), Information Processing (INFP) and Narrow Attentional Focus (NAR).
5. The dimensions of visual perception most strongly correlated with fencer achievement levels are Visual Sequential Memory (VSM), Visual-Spatial Relationships (VSR) and Visual Discrimination (VD).
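For readers who wish to probe the pattern of results reported above, the following is a minimal sketch, using entirely hypothetical scores (the paper does not state its statistics software; SciPy is used here purely for illustration), of the kind of dimension-achievement correlation and male-female comparison analyses described in the Results.

```python
# Illustrative reanalysis sketch; all values below are hypothetical
# placeholders, not the study's actual measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20  # hypothetical sample size

# Hypothetical per-fencer scores for two visual perception dimensions
# and an achievement-level variable loosely related to them.
vd = rng.normal(50, 10, n)    # Visual Discrimination (VD)
vsm = rng.normal(50, 10, n)   # Visual Sequential Memory (VSM)
achievement = 0.5 * vd + 0.3 * vsm + rng.normal(0, 5, n)

# Correlation of each dimension with the achievement level
# (the correlation coefficient used in the paper is not specified;
# Pearson's r is shown here as one plausible choice).
for name, scores in [("VD", vd), ("VSM", vsm)]:
    r, p = stats.pearsonr(scores, achievement)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")

# Male-female comparison on one dimension, analogous to the
# significant VD and VFC differences reported in Table 3.
males, females = vd[:10], vd[10:]
t, p = stats.ttest_ind(males, females)
print(f"VD by sex: t = {t:.2f}, p = {p:.3f}")
```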
Stories of how to give or take – towards a typology of social policy reform narratives

ABSTRACT
Narrative stories are crucial to policy change, as they decisively contribute to how policy problems and policies are defined. While this seems to apply to social policy in particular, narrative stories have remained under-researched and have not been systematically compared for this area. In this article, we theorise on narratives in social policy by focusing on how similarities and differences between narratives in old- and new-social-risks policy reforms can be conceptualised, taking into account expansion and retrenchment. To systematically link those types of social policy reform with narrative elements, we rely on stories of control and helplessness, as well as on the deservingness or undeservingness associated with different target populations. Thereby, distinct types of social policy reform narratives are identified: stories of giving-to-give, giving-to-shape, taking-to-take, taking-to-control, and taking-out-of-helplessness. The article concludes with empirical illustrations of those narrative types, which stem from the case studies presented in this Special Issue.

Introduction
In policy research, it is well known that there is no clear link between social problems and the policies that are adopted in response to them. At least since the argumentative turn (Fischer & Forester, 1993), most policy scholars have shared the view that stories, narratives, metaphors, symbols and the like play an important role in policymaking, which is why they now constitute important concepts in most established theories of the policy process (Weible & Sabatier, 2017). The aim of this Special Issue is to move these ideational concepts into the foreground of policy research by focusing on one central concept, namely that of narrative stories (Stone, 2012). Narrative stories are (often highly simplified) stories about how good or bad things happen. As 'the depiction of [...] a problem strongly suggests a [certain] solution to the problem' (Birkland, 2007, p. 73), stories about the problem can then be linked with specific policies. When it comes to policy reforms, narrative stories of change are especially important. They can either take the form of stories of decline (making the case that a crisis is likely to occur if measures to prevent it are not undertaken), or the form of stories of rising and progress (Stone, 2012, pp. 160-165).

The 'narrative' has long exceeded the more or less defined analytical borders of literary as well as public research (Hajer, 1993; Majone, 1989; Stone, 1989). Taken up by political advisors and consultancies, the term has moved into politics, and today it is an integral part of political commentary in the media and popular debates. Not least, narratives have recently been connected with debates about post-truth politics and 'the dangers of (deceitful) storytelling' (Foroughi, Gabriel, McCalman, & Tourish, 2017), so that narratives employed by political actors have become a 'serious concern in society at large' (ibid.). However, a contrary, optimistic view is also connected with narratives, stressing their power to foster cohesion; EU commissioner Fischler, for example, stated that what Europe needed for the future was not an institutional restructuring, but a new narrative (Welt, 2016). In post-industrial societies, as McBeth, Jones, and Shanahan (2014, p. 225) highlight, policy entrepreneurs 'expend considerable energy turning public policy debates into battles over competing narratives'.
This Special Issue focuses on the role of narratives in social policy reform. Social policy is highly important in electoral terms and is therefore one of the main fields in which political parties compete (Häusermann, Picot, & Geering, 2013). Moreover, it has been an area of fundamental transformation over the past years: Next to the sheer size of its financial budget and continuous costs, it has, at least in Western European welfare states, oftentimes involved significant retrenchment (e.g. in pensions), as well as new, sizable investments (e.g. in childcare). All of those reforms have to be 'sold' in the policy process and communicated to relevant political actors as well as voter groups (König, 2016). Unpopular social policy reforms, especially if not well communicated, can have disastrous political consequences. A case in point is the Hartz reforms in German labour market policy, where the political consequences of welfare state retrenchment have gone 'far beyond electoral punishment' (Fervers, 2019). For all those reasons, we should expect narrative stories to be of particular importance in the area of social policy.

Cox argued that successful welfare reform depends on the 'social construction of the need to reform' (Cox, 2001, p. 464), i.e. the ability of political leaders to frame issues in a path-shaping way, which generates support for reforms and helps to overcome obstacles. Crucial in this regard is the social construction of target populations, which influences reform and then also becomes 'embedded in policy as messages' (Schneider & Ingram, 1993, p. 334). By giving or taking social rights, social policy regularly involves the distinction of 'deserving' and 'undeserving' groups. Such deservingness constructions are an essential element of reform narratives. Yet, while ideational concepts are regularly applied to analyse social policy change (see e.g. Béland & Mahon, 2016), narratives have received less attention (but see e.g. Needham, 2011; Newman & Vidler, 2006). Moreover, up to now, research on social policy has not become a focus within the Narrative Policy Framework (NPF) (to our knowledge, with the sole exception of Xiarchogiannopoulou, 2015). Against this backdrop, in this conceptual article and in the different contributions to this Special Issue of Policy and Society, we ask:
• How are social policy problems constructed through narrative stories?
• How are narrative stories used by policy actors to link specific policies and problems?
• Are there systematic narrative differences between old- and new-social-risks policies?

The aim is twofold. First, the Special Issue takes stock of the different ways narratives are used in recent social policy reforms dealing with so-called old- and new-social-risks policies (Bonoli, 2005; Häusermann, 2012), by bringing together case studies from different policy fields and from both 'established' and 'emerging' welfare states. Second, the Special Issue brings together different theoretical and methodological perspectives from policy process research working with narratives as well as ideational concepts in order to explain social policy reform. This enables us to assess how the concept of narratives can be integrated and analysed in different theoretical and empirical contexts. Through this, the Special Issue opens a new comparative angle on the role that narrative stories play in different social policy areas.

2. Narratives of social policy reform: a conceptual framework

2.1 What are narratives?
Ideational and, more specifically, narrative perspectives have a long-standing tradition in policy research, and were firmly established from the late 1980s and early 1990s with major works by Stone (2012, first published 1988), Majone (1989), Hajer (1993), Fischer and Forester (1993, 2003), Yanow (1995), and others. 'Ideas' can comprise different aspects, and very different types of ideas may be studied (Béland, 2016), ranging from rather broad policy paradigms (Hall, 1993) to narrower concepts such as framing (Rein & Schön, 1993). From an ideational perspective, framing processes and narrative stories are in the foreground of the policy process: Studied as 'argumentative forms of language' (van Eeten, 2007, p. 253), they help us understand how problems are defined, how policies are attached to them, and how this 'normative leap' (Rein & Schön, 1993) takes place, i.e. the linkage from description to prescription. Moreover, frames are used by policy actors to legitimise policy proposals; they are thus 'strategic and political in nature' (Béland, 2016). Then again, the label of narrative, too, 'can be read to imply different methods, units of analysis, and research goals' (van Eeten, 2007, p. 251), which may be summarised in such different endeavours as the 'narrative analysis of policy', the 'analysis of policy narratives', or even 'the narrative of policy analysis' (ibid.).

Our approach to narratives of social policy reform is rooted in a social constructivist epistemology. While a material reality exists, such as e.g. increasing life expectancy, there can be very different ways in which people (in our case: policymakers) interpret and thus make sense of this reality (Dodge, 2015). Against this backdrop, Stone argued that narratives are 'the principal means for defining and contesting policy problems' (Stone, 2012, p. 158). As such, they play a role, at least implicitly, in many theories of the policy process, such as the Multiple Streams Framework or the Advocacy Coalition Framework. Within the Narrative Policy Framework (NPF), they have relatively recently even been put centre stage (McBeth et al., 2014; Shanahan, Jones, McBeth, & Lane, 2013). The aim of the approach is to be a 'bridge between postpositivists, who assert that public policymaking is contextualised through narratives and social constructions, and positivists, who contend that legitimacy is grounded in falsifiable claims' (Shanahan et al., 2013, p. 453). One expression of this approach is that the NPF stipulates policy narratives to have 'generalized narrative elements (form) that can be applied across different policy contexts' (McBeth et al., 2014, p. 228).

Stone highlights that 'We don't usually think of policy as literature, but most definitions of policy problems have a narrative structure' (Stone, 2012, p. 158), including some type of change, 'heroes', 'villains' and 'innocent victims' as well as some explanation 'of how the world works'. A narrative portrays a certain issue as a sequence of actions and/or events, thus establishing both a chronological order and a causal relation between them. Narratives therefore present a specific (and, normally, highly simplified) version of the issue (Münch, 2016, pp. 84-85). When analysing narrative stories, attention to the actors who create these narratives is crucial.
The NPF, in its attempt to systematise the analysis of narrative stories, presents a more detailed account of narratives' core elements, distinguishing setting (e.g. institutional context, economic conditions); characters (which may include victims, villains, and heroes); plot (providing the arc of action); and moral (most importantly: promoting a policy solution). Not all of those elements need to be present at all times, but to qualify as a policy narrative, the NPF holds that at least one mention of a character and of a public policy preference or stance is required (McBeth et al., 2014, p. 229). Analytically, narratives can be studied at the micro or meso level; under certain conditions, studies at the macro level are also possible (Jones & McBeth, 2010). They are thus set below the level of discourses or paradigms, but of course the identification of narratives can tell us something about dominant discourses or paradigms within a society (Hall, 1993). Against this backdrop, we assume that narrative stories may also contribute to paradigmatic change, as they can present some paradigms as normatively adequate while demonising others, especially if these stories are often repeated by powerful actors and, consequently, rigidify in the public discourse (see e.g. Hay, 2001).

The lower analytical level of narrative stories comes with a valuable analytical advantage, which may apply to the social policy case in particular: Ideational factors have increasingly been identified as crucial here to understand the magnitude and direction of reform. However, the analytical focus is usually on broader ideas and paradigms such as the 'social-investment state'. These are at the same time criticised for being highly ambiguous concepts, which are understood in very different ways by political actors, and which can in principle lead to diverse or even contradictory policy outputs (see e.g. Béland & Mahon, 2016; Garritzmann, Häusermann, Palier, & Zollinger, 2017). By contrast, narratives as they are used by political actors in the political process are arguably easier to measure, while at the same time they carry high analytical value for understanding reform: its underlying ideas as well as strategic action.

Stone (2012) distinguishes four types of storylines within stories of change as well as stories of power. Stories of change can be broadly sub-divided into stories of decline versus stories of rising. Both may co-exist in a certain policy sector, and possibly intermingle. Stories of decline are more common, as actors use them to illustrate how things will get worse if nothing is done, or rather: if a certain measure is not taken or, vice versa, exactly if it is taken (Stone, 2012, p. 160). There are specific variations of these broader forms, such as the stymied progress story (according to which there is no decline (yet), but there will be if this-and-that is not done) and the change-is-only-an-illusion story. Stories of power may be broadly sub-divided into stories of helplessness and stories of control (Stone, 2012). Again, the two are rather two sides of the same coin, as policymakers will, for instance, highlight the strength of their preferred solution to a certain problem ('control') by relating it to how we used to think that nothing could be done about this problem, or to how competing proposals are not effective ('helplessness'). Specific variations of stories of power include the conspiracy, according to which 'harm has been deliberately caused or knowingly tolerated' (Stone, 2012, p. 167), as well as the blame-the-victim story. A precondition for narrative stories becoming powerful is ambiguity: the multiple meanings of social phenomena (Kingdon, 1984).
Only if we assume that reality as we observe it cannot be reduced to one unequivocal meaning can narratives play a dominant role in the policy process. Ambiguity can then be used strategically to 'create alliances around a common policy or rule' (Stone, 2012, p. 181). A 'fertility rate of 1.3' in a country, for example, can then be told either as a story of insufficient policies for work-family reconciliation, as a story of structural transformations of educational pathways and family life, or as a story of today's young generation shying away from taking up responsibility. Against that backdrop, narratives are crucial for the problem-definition process. In politics, 'we look for causes not only to understand how the world works but to assign responsibility for problems' (Stone, 2012, p. 207). Accordingly, in our different fertility rate stories, the blame falls either on politicians, on structural conditions, or on the childless individuals.

Stone (2012, p. 208) introduces the concept of causal stories as another element of problem definition. With a view to less complex problems, four types of causal stories are distinguished, depending first on whether problems are thought to be subject to purposeful or purposeless action, and second on whether their consequences are intended or unintended. Many policy problems, however, require 'a more complex model of cause' (Stone, 2012, p. 215), examples of which would be blaming complex systems, institutional causes (such as 'structural' unemployment), or historical causes (such as the consequences of past policy decisions and path dependency). Stone (2012) describes how symbols, numbers and literary devices are used (strategically) within narrative stories to define problems; and Schlaufer (2018) has recently shown how evidence is related to all the different narrative elements, e.g. to support a policy solution. Actors use synecdoches (where the whole is represented by one of its parts, such as 'typical cases'), for instance the 'Polish plumber' representing 'cheap labour' coming from Eastern European countries. Metaphors go beyond description, as they usually already 'imply a larger narrative story and a prescription for action' (Stone, 2012, p. 171). An example is the 'social hammock', which indicates overly generous social benefits that would prevent beneficiaries from participating in the labour market. In the following section, we look at the role of narratives in social policy more closely.

2.2 Narratives and social policy reform

As elaborated in the introduction, we can expect narrative stories to play a particularly important role in the area of social policy. At the core of social policy are social rights, which give a certain entitlement (e.g. to a cash benefit or service) to certain groups, conditioned by specific eligibility criteria. People are thus immediately affected by social policy reforms and, correspondingly, these reforms (especially unpopular ones) need to be well communicated and may otherwise have disastrous political consequences (Fervers, 2019; Häusermann et al., 2013; König, 2016). Thus, the 'struggle over ideas' (Stone, 2012, p. 13) that, according to Stone, takes place by making use of narratives becomes crucial. We hypothesise that narratives show systematic variation for different types of social policy reform. Taking into account Hemerijck's (2013) distinction of four related dimensions of welfare state recalibration, narratives focus particularly on three of those dimensions.
The first is functional recalibration, involving 'the changing nature of social risks' (such as altered family structures, unemployment, or population ageing) and 'the kinds of interventions that are required' (Hemerijck, 2013, p. 105). The second is distributive recalibration, which focuses on 'the rebalancing of welfare provision across policy clienteles and organized interest' (Hemerijck, 2013, p. 110), acknowledging that welfare provision is unequally distributed across social-risks categories, and so are the gains and losses of welfare reform. Third, though in a broader sense, the narratives also shed light on normative recalibration, which depicts orientations and adaptive pressures on values, symbols or images of 'social justice', involving e.g. equality, redistribution, and the rights and responsibilities of citizens and the state. Finally, although in a more indirect way, they also speak to institutional recalibration, which concerns reforms in the levels or rules of decision-making, institutional re-design, or shifting responsibilities between different welfare providers.

Two crucial dimensions of reform type for the corresponding narratives that we expect are expansion versus retrenchment, as well as old- versus new-social-risks policies. It is well established in the welfare state literature that expansionary and retrenching measures require differing (communicative) strategies from policymakers, most notably characterised by the concepts of credit-claiming and blame avoidance (Green-Pedersen, 2002; Pierson, 1994). Furthermore, social policy reforms have to deal with two kinds of policies, namely policies of old and new social risks (Borosch, Kuhlmann, & Blum, 2016). Old-social-risks policies are at the core of the traditional welfare state: They primarily focus on the (monetary) compensation of risks over the life course, such as old age, unemployment and illness. In contrast, new-social-risks policies are rooted in the transformation of the traditional male-breadwinner principle as well as the tertiarisation of employment; they address issues such as work-family balance or lone parenthood (Bonoli, 2005). Häusermann (2012) convincingly argued that social policy fields cannot generally be divided into 'old' and 'new', but rather that policy instruments directed at old and new social risks can be found in all social policy areas (though to differing extents). While there is a shared intersubjective understanding in the academic discourse that there are systematic differences between old- and new-social-risks policies, these categories themselves can of course also be portrayed as the result of academic-discursive practice. In this Special Issue, we are interested in the question of how policymakers construct different narratives to justify reforms in what we consider systematically different policy areas. In fact, although the strict dichotomy of old and new social risks can be questioned, especially given the sharp increase in mass unemployment since the financial crisis (Hemerijck, 2017, p. 8), few scholars would deny that policies addressing old or new social risks assign rather different roles to the welfare state, and that political actors need to take this into account when conducting reforms. For instance, policymakers may use similar narratives to 'sell' the retrenchment of public pension schemes and the retrenchment of unemployment benefits (which are both considered old-social-risks policies, albeit in different policy fields).
In areas of new-social-risks policies, such as childcare or active labour market policies, we frequently find expansionary measures, which may be linked to shared narratives relating to the paradigm of a 'social investment welfare state' (Morel, Palier, & Palme, 2012). Finally, it is interesting to see how narratives connected to 'old' and 'new' social policies in one particular policy field interact, such as the retrenchment of contribution-based pension schemes and the expansion of universal minimum pensions (see Blum, in this volume).

Which, then, are the elements along which narratives for the different reform types systematically differ? One crucial element of narratives is that they imply causes: to provide meaning, but also to ascribe responsibility for problems (Stone, 2012, p. 206). With social risks and social problems, Stone (2012, p. 223) highlights that people will have, to a certain extent, 'stable, overall outlooks on responsibility', related also to the role of different welfare providers. To argue for social policy reform, narratives can essentially contain a story about who should get (or lose!) what and why, a question that is linked to the different dimensions of welfare state recalibration (Hemerijck, 2013). In fact, the social construction of target populations (Schneider & Ingram, 1993) is at the core of politics, consisting of 'shared characteristics that distinguish a target population as socially meaningful', and 'the attribution of specific, valence-oriented values, symbols, and images to the characteristics' (Schneider & Ingram, 1993, p. 335). Constructions of 'deserving' and 'undeserving' groups are key here, and they are also an important construction criterion in the social policy literature (van Oorschot, 2000). Deservingness criteria relate to a number of other criteria, such as 'intelligent/stupid', 'honest/dishonest', or 'public-spirited/selfish' (Schneider & Ingram, 1993, p. 335). In sum, they result in 'positive' or 'negative' constructions of groups, which are based on general ascriptions that serve to legitimise policy reforms (Schneider & Ingram, 1993, p. 339), and which are constructed and maintained through narrative stories in the public discourse. Combined with reflections on the power of these groups (e.g. size, voting strength, or organisational capacity), Schneider and Ingram identify 'four types of target populations' (Schneider & Ingram, 1993, 2017) and provide examples for each of the four types. We can call the different types the main characters of the narrative stories, and their associated constructions of 'deservingness' and 'undeservingness' can take the place of the distinction between heroes, villains, and victims:
(1) Advantaged: powerful and positively constructed (e.g. the elderly, middle class, soldiers/military)
(2) Contenders: powerful but negatively constructed (e.g. the rich)
(3) Dependants: weak but positively constructed (e.g. mothers, children, poor families)
(4) Deviants: weak and negatively constructed (e.g. welfare cheats, undocumented immigrants, drug addicts)
The social construction of target populations will differ between countries (see e.g. van Oorschot, 2006 for European countries); and constructions may be more or less disputed or consensual. Locating groups (and, importantly, not individuals!) between positive and negative social constructions is thus a matter of empirical research (Schneider & Ingram, 2017).
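Schneider and Ingram's typology is, in effect, a two-by-two classification over power and social construction. Purely as an illustration (this coding helper is not part of the original framework; the function and its boolean inputs are hypothetical), a minimal Python sketch of how a coder might map those two judgements onto the four target-population types could look as follows:

```python
# Hypothetical coding helper for Schneider and Ingram's (1993) typology;
# the labels follow the text above, the function itself is illustrative only.
def target_population_type(powerful: bool, positively_constructed: bool) -> str:
    """Map judgements of power and social construction onto the four types."""
    if powerful and positively_constructed:
        return "advantaged"   # e.g. the elderly, middle class
    if powerful:
        return "contenders"   # e.g. the rich
    if positively_constructed:
        return "dependants"   # e.g. mothers, children, poor families
    return "deviants"         # e.g. welfare cheats, undocumented immigrants

# Example: pensioners would typically be coded as powerful and
# positively constructed, hence 'advantaged'.
print(target_population_type(powerful=True, positively_constructed=True))
```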
The perception of who is considered deserving and who is not can of course be subject to change, pointing to distributive and particularly normative recalibrations of the welfare state. For instance, Hemerijck (2013, p. 108) concludes that under 'neoliberalism virtually all welfare beneficiaries were seen as "undeserving", unwilling to work, social profiteers'.

2.3 Four types of social policy reform narratives

In the following, we aim to theorise how group constructions are incorporated in different narratives of social policy reform. Relating the stories of change to the social policy case, it is crucial to note that Stone (2012, pp. 160-161) describes stories of decline as more common than stories of 'progress alone', and that this has later been confirmed by NPF research (Shanahan et al., 2013, p. 468; Schlaufer, 2018). With a view to the welfare state literature, it is plausible that 'decline stories' particularly dominate social policy cases in established welfare states, given the need to construct reform imperatives, and that 'crisis narratives' (Kuipers, 2006, p. 33) help 'undermine the mechanisms of institutional reproduction and thereby create opportunities to establish new parameters and a new institutional framework' (ibid.). Mostly, facts and figures will be recited in the beginning to diagnose a certain problem, to which the reform solution can then be narratively linked, often as a means to regain power or even to return to a story of progress and things getting better again (Stone, 2012, p. 160). As Goerres, Kumli, and Karlsen (2019, p. 4) have recently highlighted, all intended reform policies will be legitimised by explaining how they alleviate the 'unsustainability brought on by reform pressures'. Some differences according to reform type may exist; social-investment reforms, for example, sometimes entail a 'pure' story of progress (see below). Yet overall, decline narratives can be considered prevalent in most sorts of social policy reform, and they are thus ill-suited to distinguishing different stories. Rather, to link the named narrative elements systematically with the type of social policy reform, we rely on the construction of (un-)deservingness by Schneider and Ingram, combined with Stone's stories of control and helplessness. As explained above, we also follow Schneider and Ingram by highlighting the different target populations to which the different stories refer, which can be defined as the main characters of the respective narratives. Thereby, narratives are distinguished for four ideal reform cases, namely:
(I) expanding old-social-risks policies;
(II) expanding new-social-risks policies;
(III) retrenching old-social-risks policies;
(IV) retrenching new-social-risks policies.
Table 1 summarises how narratives may be expected to differ systematically for those four reform types, and which characters we are most likely to find. We assume that expanding social policies is generally easier to communicate to the public than retrenching them (Pierson, 1994). With regard to narrative stories, we argue that expansion is primarily legitimised by constructing groups as deserving. These groups do, however, differ between old- and new-social-risks policies.

(I) Regarding expansionary reforms in old-social-risks policies, the dominant understanding in the literature on established welfare states indicates that these are currently few. The narrative context is mostly one of decline, where pension or unemployment policies are under pressure.
Yet the key characters here are the advantaged (Schneider & Ingram, 1993), meaning that those groups are not only seen as deserving, but also as powerful, with pensioners being a case in point. Thus, if expansion occurs, it seems most likely to be argued for by pointing to the deservingness of the reform beneficiaries. An example is 'people who have worked hard their whole life' deserving an increase in pension levels as an acknowledgement of their lifetime achievements. In some cases, there can also be more progress-inspired stories, where, due to positive economic and fiscal conditions, expansionary reforms for deserving groups are appropriate. For this reform type, we should also find attempts to defend the status quo, i.e. not to cut back on existing benefits and therewith accept rising costs without any 'expansionary' effort as such, e.g. due to rising unemployment or an increasing number of pensioners. Again, narratives can be expected to draw on the deservingness of the affected groups ('We can't take from them what they have earned!'). Narratives can thus be labelled 'stories of giving-to-give', where, in view of a particular problem, spending on the affected, deserving group is described as a positive thing. Based on Green-Pedersen's (2002, pp. 39-40) analysis, this may particularly be found in the fields of pensions or health.

(II) Turning to expansionary reforms in the field of new-social-risks policies, these seem, in the current climate, potentially easier to narrate than could be expected for the affected target population of dependants, which is politically weak (Schneider & Ingram, 1993). Yet the groups affected by new-social-risks policies are also seen as highly deserving (e.g. children, single parents) and, what is more, the corresponding problems are more and more understood to be insufficiently covered in traditional welfare states (e.g. work-family reconciliation, care needs in old age). On a more meta-level, indeed the whole paradigm of a 'new welfare state' (Esping-Andersen, Gallie, Hemerijck, & Myles, 2002) can be seen as reallocating resources through the expansion of new-social-risks policies, where beneficiaries are constructed as 'deserving' (in the sense of: under-protected) and thus in need of empowerment through social policy. Such empowerment happens, for example, through redefining care work from a private to an (also) public responsibility (normative recalibration). Correspondingly, narratives to defend the status quo and not retrench new-social-risks policies can also be expected to rely strongly on the deservingness dimension ('we can't take from the most needy'). Social-investment policies can also provide examples of (pure!) stories of progress, where not always problem pressure or even a 'crisis narrative' (Kuipers, 2006) is constructed, but sometimes reform-related progress itself spans the 'reform imperative': Then, social policy is described as a productive factor, which can prepare for the knowledge-based economy and lead to economic growth (Morel et al., 2012, p. 12).

In the field of expansionary social policies, we have thus hypothesised deservingness to be the central criterion. Narratives between old- and new-social-risks policies are distinguished by different political rationales and normative assessments (acknowledgement/empowerment) as well as by the related target groups (advantaged vs. dependants).
Vice versa, when retrenching social policies, which is politically more risky and thus more challenging when it comes to reform legitimisation, we hypothesise deservingness, but also control and helplessness, to be the defining features, which are again related to different target groups.

(III) When it comes to retrenching old-social-risks policies, first of all, undeservingness constructions become important. Green-Pedersen (2002) argued that benefits related to labour-market participation and social security are regarded as less 'deserving' per se than benefits in other areas, such as old-age security or family policy, where benefits are more widely accepted. Constructions of certain groups as 'undeserving' and norms of 'individual responsibility' are thus particularly present in labour-market-related reforms, whereas they are less important e.g. in pensions: 'Benefits are given to the unemployed because they cannot find a job, not because they deserve it' (Green-Pedersen, 2002, p. 40). If certain groups are marked as 'undeserving', it can be avoided that benefit cuts acquire a negative connotation. The notion of undeservingness often corresponds to the 'blaming the victim' narrative (Stone, 2012, p. 212), which suggests that e.g. 'many poor people calculate the economic returns of receiving welfare versus the returns from working at low-wage jobs, find welfare yields higher returns, and so choose to take welfare and remain poor' (ibid.). In blaming-the-victim stories, control is regained (after a constructed situation of helplessness) by assigning the power to control to the individuals themselves (Stone, 2012). However, those taking-to-take narratives can only be employed if the target groups are constructed as deviants (e.g. welfare cheats) or possibly contenders (e.g. the rich). For the advantaged, as the traditional protégés of old-social-risks policies, it would be politically too risky to describe beneficiaries as undeserving in order to legitimise retrenchment. In those cases, we should expect the construction of helplessness. In other words, the reform narrative is likely to focus on how a situation is worsening through circumstances one claims to be unable to change (e.g. rising pension costs due to population ageing). Still, political control may also be regained by assigning some of the power to control the situation to the advantaged individuals themselves, e.g. by constructing situations requiring everyone to 'tighten their belts' and urging them to take their share as a matter of fairness (e.g. through making private pension provisions). Narratives for retrenching old-social-risks policies can therefore avoid blame in different ways, either by shifting it to the affected group or to some 'external developments', which can possibly be controlled through joint responsibility.

(IV) Finally, the comparative welfare state literature indicates that in new-social-risks policies, retrenchment reforms are less likely, but they do occur (see e.g. Borosch et al., 2016; van Kersbergen, Vis, & Hemerijck, 2014). As for retrenchment in old-social-risks policies, we should expect stories of undeservingness to be present insofar as the target groups can be constructed as deviants (e.g. illegal immigrants) or contenders (e.g. the rich). Though the latter phenomenon may not be too frequent, there are cases where e.g. the 'super-rich' have been excluded from benefit access (e.g. making parents with a household income over 500,000 euro ineligible for parental leave benefits in Germany since 2011).
Yet when dependants are affected by retrenchment reforms (e.g. children), we should expect helplessness constructions to be prominent. Policymakers would want to describe themselves as unable to do anything else but cut back on benefits or services, particularly in the context of (economic) crisis conditions. Even more than in the third reform type, the narrative may here include labelling reforms as 'without alternative' (König, 2016), since, in contrast to old-social-risks protégés, no 'power to control' can be shifted to the recipients themselves (e.g. by 'becoming active' or taking self-responsibility). In the welfare state literature, this has been described as a 'playing the crisis card' strategy (see Goerres et al., 2019, p. 3; Kuipers, 2006); and in terms of the respective stories, blame is here less shifted than it is constructed to be 'absent' (in the sense of: 'no one is to blame, it's the crisis, and our hands are tied').

Admittedly, we have at times drawn with a broad brush here in order to distil the ideal types. The dichotomy of expansion and retrenchment (like that of old and new social risks) does not always hold, and might sometimes be better described as a restructuring of welfare. Nevertheless, we assume that our typology captures a wide range of reform efforts with regard to narrative strategies. Still, it may be objected that some empirically relevant cases, such as the expansion of social policies for traditional 'deviants', are neglected, and also that different constructions of target groups might sometimes be found in empirical settings. For instance, the contribution by Galanti and Sacchi (2019, in this Special Issue) shows how the expansion of unemployment benefits for young people (who are typically constructed as dependants) was narrated as a story of giving-to-give. Most importantly, then, the outlined ideal types of social policy reform narratives need to be qualified in empirical application.

3. Narratives of social policy reforms: evidence from the Special Issue contributions

The contributions to this Special Issue study the role of narrative stories in three different policy fields, namely pension, labour market, and child and elderly care policy, which are all characterised, to differing degrees, by a mixture of old- and new-social-risks policies (Häusermann, 2012). The contributions show that different types of narrative stories can indeed be identified in processes of social policy reform. Moreover, they show that the analysis of narratives is highly compatible with established concepts and theories of the policy process. For instance, Béland (2019, in this volume) argues that narrative stories and the strategic use of these narratives by actors interact with political institutions to bring about policy change, while Galanti and Sacchi (2019, in this volume) combine the Multiple Streams Framework with the concept of narrative stories. As Schneider and Ingram (2017, p. 333) put it, their theory of the social construction of target groups was meant to complement other theories, explicitly addressing Stone's, by 'unpacking the details of how issues are framed and making explicit the elements of policy design'.

The contributions by Béland, Hagelund and Grødem, and Blum engage with the field of pensions. Béland (2019) analyses narrative stories in pension policy in Canada and the US over the last 25 years.
With the narrative story of a demographic time bomb, Béland identifies a story of decline used to justify the retrenchment of old-social-risks policies, pointing also to how narratives can spread across borders. While in Canada incremental reforms of the existing pension system took place, in the US attempts to reform the pension system failed, which can be traced back to distinct national decision-making rules. The contribution illuminates the importance of taking policy legacies and institutional factors into account when analysing narrative stories, particularly in the move from problem definition to policy adoption.

Comparing two policy sectors in one country, Hagelund and Grødem (2019) analyse the different outcomes of occupational pension scheme reforms in Norway in the private and the public sector: While in the private sector negotiations between the social partners and the state led to fundamental policy change, such an agreement could not be reached in the public sector. Relying on discursive institutionalism, Hagelund and Grødem show that the ability or inability to establish a strong coordinative discourse among the core actors can explain this outcome. Within both discourses, the rhetorical figure of the 'toiler' was present, highlighting the deservingness of old-age pensioners. However, when negotiations moved to the public sector, the metaphor was not sufficiently adjusted and ended up as a 'cognitive lock', hampering reform rather than promoting it.

Blum (2019) studies how the taking-to-control narrative employed for the far-reaching German pension reform of the early 2000s was (at least partly) deconstructed again, and which alternative narratives were told for pension reform proposals in recent years. In particular, Blum is interested in the argumentative couplings of those narratives, i.e. how the linkage between a problem, a certain policy proposal, and its strategic-political functions was argumentatively achieved. In quintessence, the study shows how a giving-to-give narrative was successfully employed for a pension reform in 2014, which mainly built on the deservingness of the affected groups and was politically driven. By contrast, the varying and contested deservingness of the affected groups made it much more difficult to establish a new narrative for the more 'problem-solving'-driven proposal of a minimum-pension scheme, a reform which has not found agreement to date.

Bandelow and Hornung, Vogeler, as well as Galanti and Sacchi focus on the role of narrative stories in the field of labour market policy. Bandelow and Hornung (2019) compare the role of narratives in the recent French labour market policy reforms and the German Hartz reforms. They identify a similar narrative in both countries that puts forward overregulated labour markets as a key problem for long-term unemployment, and suggests deregulation as a solution. The structure of these narrative stories resembles taking-to-control stories, constructing a necessity to reform in the wake of an economic crisis. What is more, by relying on Discourse Network Analysis, Bandelow and Hornung show that instead of legitimising reforms to the public, narratives were rather a strategic means for programmatic elites to strengthen in-group identification.

Vogeler's (2019) contribution also analyses an encompassing labour market reform that aimed to achieve a greater deregulation of the labour market, namely the 2017 Brazilian labour market reform, which was rooted in a neoliberal policy paradigm.
Vogeler argues that the political process accompanying the reform was characterised by competing policy paradigms, that party competition was crucial for understanding this process, and that supporters and opponents of the reform used narrative stories to strengthen their preferred policy paradigm. Her analysis especially illuminates the different stories of change and stories of power that were used by supporters of the reform to strengthen the neoliberal policy paradigm.

Combining the Multiple Streams Framework with narrative stories, Galanti and Sacchi (2019) analyse the reform process of the Italian Jobs Act (2014-16). They show the role of narrative stories in constructing labour market problems by focusing on both supporters and opponents of the reform, and highlight how Prime Minister Renzi, as a policy entrepreneur, made use of narrative stories. Galanti and Sacchi's analysis shows that the reform process was characterised by different narrative stories, and that the supporters of the reform relied on both stories of giving-to-give and stories of giving-to-shape. What is more, they show that during the reform process, the narratives focused more and more on Renzi's policy style, and less on the policies that were being implemented with the Jobs Act. Relatedly, they conclude that the Jobs Act lacked a narrative that was able to successfully convey a new understanding of social policy; a finding that can be paralleled with Bandelow and Hornung's analysis in this volume.

Finally, two contributions deal with child and elderly care policy. Van Gerven (2019) analyses narrative stories in the recent political processes connected to population ageing in China, thus expanding the geographical scope of this Special Issue to an Asian country. The contribution identifies two types of stories when it comes to constructing policy problems: While stories of giving-to-give are related to inequality and health care policies, stories of giving-to-shape are linked to long-term care policies. The latter focus specifically on the deservingness of the elderly and are embedded in a larger story of decline that legitimises the limited role of the state.

Nygard, Nyby and Kuisma (2019) study the narrative stories used to legitimate Finnish family policy reforms in the period between 2007 and 2017. They state that family policy can generally be linked to both old- and new-social-risks policies. Furthermore, similar to Vogeler's contribution in this Special Issue, they link the narrative stories more broadly to paradigmatic changes following a social-investment perspective (new social risks), a more traditional redistribution perspective (old social risks), or a neoliberal austerity perspective. In their analysis of Finnish family policy over time, they identify a paradigm shift (which also corresponds to a narrative shift) from the paradigms of social investment and redistribution to austerity. Reform legitimisation drew on different kinds of stories, including stories of giving-to-shape, of taking-to-control and of taking-out-of-helplessness.

For old-social-risks policies, the contributions show stories of taking-to-control, such as the 'demographic time bomb' argument told to legitimise pension retrenchment (Béland; Hagelund and Grødem), or the narrative of too generous benefits in labour market policies (Bandelow and Hornung). Yet stories of giving-to-give are also to be found, putting the deservingness of reform-affected groups at the centre (Blum; Galanti and Sacchi).
Where the deservingness of the targeted groups is less widely believed in, it can prove difficult to construct a narrative for expansionary old-social-risks policy reform (Blum). Different narrative strategies are employed for legitimising new-social-risks policy reforms, such as giving-to-shape stories in Chinese long-term care policies (Van Gerven) or stories of progress in (at least former) Finnish family policy reforms (Nygard, Nyby and Kuisma).

If we finally abstract from the very different narrative stories to the overall rationale of the concept of narrative stories, we see that all contributions share the assumption that social policy reform is not an exercise in rational problem solving: Rather, it is a fight over the construction of dominant problem definitions and of the policy solutions that are deemed appropriate. The concept of narrative stories enables policy scholars to analyse how these constructions take place, and how they are used in social policy reform. Narratives also have strategic functions here in the face of ambiguity (Stone, 2012, p. 181). Linking this to the welfare state literature, it has been pointed out how the ambiguity of social-investment ideas opens room for 'ambiguous agreements' (Palier, 2005), where actors pursue different goals with the same policy instrument (Häusermann & Kübler, 2010); hence narratives can be used strategically, in particular to build modernising reform coalitions (Garritzmann et al., 2017, p. 15). The systematic linking to the notion of constructed target populations (Schneider & Ingram, 1993) contributes to explaining the increasingly recognised phenomenon that significant welfare state reform does not only happen through defensive 'blame avoidance', but that even retrenchment and painful reforms 'may be successfully legitimised, such that decision-makers want to publicly take responsibility for them' (Goerres et al., 2019, p. 2).
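As a compact recap, the five reform-narrative types developed in section 2 can be read as a lookup over reform direction, risk type and the dominant construction of the target group. The Python sketch below is one possible, deliberately simplified encoding of that reading; the keys and assignments compress the qualifications discussed above (for instance, taking-to-take stories can also accompany new-social-risks retrenchment aimed at deviants) and are illustrative rather than an exhaustive rendering of Table 1.

```python
# Simplified, illustrative encoding of the five social policy reform
# narrative types; one reading of the typology, not a definitive scheme.
NARRATIVE_TYPES = {
    # (reform direction, risk type, dominant target construction)
    ("expansion", "old", "advantaged"): "giving-to-give",        # deservingness, acknowledgement
    ("expansion", "new", "dependants"): "giving-to-shape",       # deservingness, empowerment
    ("retrenchment", "old", "deviants"): "taking-to-take",       # undeservingness, blame-the-victim
    ("retrenchment", "old", "advantaged"): "taking-to-control",  # shared responsibility to 'tighten belts'
    ("retrenchment", "new", "dependants"): "taking-out-of-helplessness",  # 'no alternative'
}

reform = ("retrenchment", "old", "advantaged")
print(NARRATIVE_TYPES.get(reform, "no ideal-typical narrative"))
```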
An Interactive Approach to Contraception Teaching Amongst Medical Undergraduates

Contraception is an essential requirement for most women in the reproductive age group. This paper describes an interactive learning programme called Advancing Contraception Teaching (ACT) intended to increase knowledge and better foster workforce readiness amongst medical graduates. It was evaluated by a single-arm pre- and follow-up post-test study design and feedback analysis. The Wilcoxon signed-rank test showed that the median of the differences between the pre-test and post-test knowledge scores was significantly different from zero (p < .001), and feedback was largely positive. The ACT programme is an effective interactive method of teaching contraception to medical undergraduates, demonstrating a positive learning gradient and an improved learning experience. It can be transformed into a remote learning programme with contact time required only for the hands-on section. It can also be used as a training module for junior doctors not exposed to a comprehensive undergraduate contraception teaching module.

Introduction
Contraception is a fundamental primary care right of women in the reproductive age group. This is relevant in the current global social environment, where increasingly early exposure to sexual encounters escalates the risk of unwanted pregnancies and sexually transmitted infections (STIs). Data from the United Nations global perspective showed that an estimated 33 million pregnancies worldwide were unintended (United Nations, 2015). A study published in Lancet Global Health showed that between 2010 and 2014 an estimated 44% of pregnancies worldwide were unintended (Bearak, et al., 2018). Unsafe sex has been identified by the World Health Organisation as the second most important risk factor for disease, disability and death in lower-income countries and the ninth in higher-income countries. Unintended pregnancies and STIs are negative consequences of unsafe sex, hence the need for effective contraception education.

It is important to ensure that physicians are competent in advising on the available methods of contraception. They are integral to improving sexual and reproductive health, thus reducing the rates of unwanted pregnancies and unsafe terminations. Approximately 90% of women rely on physicians' advice to make contraceptive choices (ESHRE Capri Workshop Group, 2014). Hence, medical education must enable professionals to become clinically competent and develop the critical thinking needed to provide evidence-based contraceptive options and counselling. This paper describes an interactive learning programme called Advancing Contraception Teaching (ACT) intended to ensure that future clinicians are knowledgeable about and confident in providing contraceptive options. Interactive learning here combines a traditional lecture with active student participation in question-and-answer sessions, curated stations enabling students to work with each other and with the instructor to enhance learning, and finally learning practical skills on simulators. This study is based in Malaysia, where unintended pregnancies and baby abandonment rates are high (The Star, 2018; Yusof, et al., 2018). The contraceptive prevalence rate in Malaysia for modern methods was about 40%, with an existing gap of unmet needs of up to 18% (United Nations, 2017).
This may indicate a glaring disparity between the obvious need for counselling and adequate contraceptive provision, and the lack thereof. The ACT programme is based on the pedagogical concept of active learning, where students participate in and interact with the learning process, as opposed to a passive flow of information from tutor to student. Combining workshop-style learning with didactic teaching is known to be more effective than traditional didactic lectures alone (Satterlee, et al., 2008). To date, there are a limited number of studies looking into the effectiveness of using blended or interactive methods to teach contraception. ACT is an innovative, comprehensive and interactive method of teaching contraception to medical undergraduates that is easily replicable by other medical schools and can be used to better prepare junior doctors with no prior exposure to a comprehensive contraception module.

Methods
This project evaluation was assessed by the Monash University Human Research Ethics Committee (MUHREC-18344) to be a quality assurance project and was considered suitable for verbal consent. The study aimed to evaluate the impact of the ACT programme on medical students' knowledge and learning experience. This pre-test and follow-up post-test single-arm study was conducted at Monash University Malaysia from February to May 2019. All 120 fourth-year students enrolled in the five-year Doctor of Medicine course were invited to attend the ACT programme during their General Practice and Women's Health rotation. The students were reassured that test scores would not affect their summative assessment for the MD course. Participation in the research was voluntary; students could participate in the ACT programme but decline participation in the evaluation. Students undergoing General Practice or Women's Health rotations in the same semester were recruited at the start of the rotation to ensure no prior exposure to contraception teaching. As a single-arm pre-post study, the participating students served as their own controls, with the pre-test assessing their knowledge before the intervention. The follow-up post-test to evaluate the effectiveness of the ACT programme was conducted two weeks later to mitigate short-term recall.

The ACT programme was structured such that pre-reading material was provided two weeks before the intervention to encourage preparation. The intervention started with a pre-test, which was followed by a lecture, with the opportunity for students to clarify any discrepancies in their knowledge and understanding of the subject. The workshop was conducted with 30 students in each session, who were then subdivided into groups of 6 students. They rotated through five stations every 20 minutes. Students were offered a paper-based evaluation survey upon completion of the workshop. A four-domain questionnaire and an open-ended question were used to evaluate the students' learning experience.

Pre-test and Post-test Content
The team of tutors from General Practice and Women's Health created 12 multiple choice questions (MCQs) that tested students' knowledge of various contraceptive methods based on the curriculum. Each question had 5 branches that carried an equal weightage of 1 mark each. No marks were deducted for wrong answers. The maximum achievable score was 60. Both the pre-test and the post-test used the same MCQs, but the question order was rearranged in the post-test.
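The pre/post comparison reported in the Results was run in SPSS; as a point of reference, the following is a minimal sketch of an equivalent analysis in Python with SciPy, using hypothetical matched scores rather than the study's data.

```python
# Illustrative pre/post analysis; the scores are hypothetical placeholders
# (maximum achievable score: 60), not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 69  # number of matched pre/post submissions in the study

pre = rng.integers(18, 36, n)          # hypothetical pre-test scores
post = pre + rng.integers(-2, 12, n)   # hypothetical post-test scores

# Wilcoxon signed-rank test on the paired differences.
stat, p = stats.wilcoxon(post, pre)
print(f"W = {stat:.1f}, p = {p:.4f}")

# Median and interquartile range, as reported in the paper.
for label, scores in [("pre", pre), ("post", post)]:
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    print(f"{label}: median = {med:.1f} (IQR: {q1:.1f}-{q3:.1f})")
```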
Workshop overview The workshop was divided into 5 stations covering components of contraception training, as depicted in Table 1. Each station was approached using case scenarios, which facilitated clinical reasoning and counselling skills. In the long-acting reversible contraceptives stations, students were shown, and allowed to practise, insertions and removals of intrauterine contraceptive devices (IUCDs) and subdermal implants on simulators. Oral Contraceptives The combined hormonal station was conducted through a thorough discussion based on given scenarios. The discussion involved the initial assessment (history and examination) of patients requesting contraception. The students were introduced to the Medical Eligibility Criteria (MEC) for contraception. They were asked to refer to the MEC and look for suitable contraception for women with various medical conditions. Further discussion took place on the various types of oral contraceptives (how to start them, what to do about missed pills or delayed administration of a patch or vaginal ring, and potential side effects). The final part of the discussion was about Gillick competence and the Fraser guidelines, based on the scenario of an underage girl requesting contraception. Injectables and Barrier Contraceptives This station was conducted based on the contraceptive needs in three different scenarios. The first scenario was for a migrant of low socio-economic status, the second was for a perimenopausal woman, and the third dealt with postpartum contraception advice. In all the scenarios, the discussions covered not only contraceptive advice but also the prevention of STIs and other risk factors noted in each scenario. Intrauterine Contraceptive Device (IUCD) In this station, students were initially quizzed on the risks and benefits of the IUCD and on patient suitability. They were then shown the various instruments used in the process of an IUCD insertion, including a Cusco's speculum, sponge forceps, uterine sound, tenaculum or vulsellum, and a few different IUCDs, e.g. the Mona Lisa Cu 375 and the Mirena. The tutor then demonstrated a step-by-step insertion and removal of an IUCD. The pelvic models used for this station allowed the actual insertion and removal of the IUCD. The students then individually performed the procedure under supervision. Subdermal Implants Summarised reading material was used during the discussion to consolidate understanding of subdermal implants. This was followed by a demonstration of subdermal implant insertion and removal on arm models. The students practised in teams of two, demonstrating counselling for a patient who opts for the device. They then attempted inserting and removing the implant under supervision. Sterilization Students viewed a short video clip on laparoscopic tubal ligation. This was followed by a robust discussion on counselling patients for tubal ligation and vasectomy. The tutor facilitating the station acted as the simulated patient. Two case scenarios were discussed. The first case was an obese diabetic woman wanting tubal ligation. The option of a vasectomy for her healthy spouse was explored. The second involved counselling a multipara wanting tubal ligation. Data Collection The follow-up post-test was held two weeks after the workshop. Only pre-test and post-test submissions that could be matched were included in the quantitative analysis. Students' performance was reported using frequency statistics.
The IBM SPSS software (version 24; IBM Corp., Armonk, NY) was used to analyse the collected data. A paired t-test was used to analyse the difference between the mean scores of the pre-tests and post-tests. A student evaluation form, with four specific questions assessing the usefulness of the programme and a section to provide feedback, was offered at the end of the workshop. Results Post-test scores showed a significant improvement over pre-test scores. Students also evaluated the usefulness of the programme positively. Pre-test and Post-test results Of the 120 students who participated in the workshop, 115 students attempted the pre-test and 77 students attempted the post-test. The pre-test participation was higher as it was administered as part of the overview lecture, whereas the post-test was administered after a two-week interval. Only submissions of those who had completed both the pre- and post-tests were analysed. There were also incomplete submissions, which prevented the matching of pre- and post-test scores. This reduced the number to 69. The median (interquartile range, IQR) pre-test and post-test knowledge scores (n = 69) were 29.0 (IQR: 24.0-31.5) and 34.0 (IQR: 31.5-36.0), respectively. The Wilcoxon signed-rank test showed that the median of differences between the pre-test and post-test knowledge scores (median change in score of 4.0 (IQR: 0.5-10.0)) was significant (p < .001).
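The matched pre/post comparison reported above can be sketched as follows; the score vectors are simulated stand-ins, not the study's data.

```r
# Hypothetical matched MCQ scores (out of 60) for n = 69 students
set.seed(1)
pre  <- pmin(60, pmax(0, round(rnorm(69, mean = 29, sd = 5))))
post <- pmin(60, pre + round(rnorm(69, mean = 4.5, sd = 4)))

# Medians and interquartile ranges, as reported in the Results
quantile(pre,  probs = c(0.25, 0.50, 0.75))
quantile(post, probs = c(0.25, 0.50, 0.75))

# Wilcoxon signed-rank test on the paired differences
wilcox.test(post, pre, paired = TRUE)
```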
Student satisfaction with learning contraception through the ACT programme A total of 93 students (77.5%) responded to the questionnaire immediately after the workshop, using Likert scale responses (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree) to the four questions evaluating the usefulness of the workshop. Mean scores and standard deviations were calculated for each question (Table 2). The mean scores for each statement were all greater than 4, which indicates that the students had a positive perception of the usefulness of the programme. Feedback Analysis Three assessors were responsible for critically evaluating the feedback to minimise researcher bias. Conventional content analysis was performed on the open-ended feedback section covering students' overall thoughts on the ACT programme, excluding the post-test as it was held 2 weeks after the workshop. The open-ended feedback section generated responses from 71 students. Some gave more than one response. These were analysed and categorised into two main categories of "appreciative" and "critical", and then further subdivided into themes. The six key themes that emerged from the feedback received are detailed in Table 3. The majority of students perceived the workshop as a positive learning experience. Some students expressed their appreciation of the interactive nature of this session and hoped more such workshops could be held on other topics. "Great session, extremely helpful to have so many tutors in one place for us to ask questions and learn." "Very good interactive session." Scenario-based teaching Students felt that the scenario-based teaching session encouraged them to integrate their knowledge into clinical practice. "Very well organized workshop. Case based approach helps to relate theoretical knowledge to clinical practice." "The format of the workshop, the lecture prior to it, everything from start-end was arranged quite well. The consolidation of the theory into stations with scenarios has proven to be very effective in the learning process, also in remembering facts and the theory behind. It would be great if we could have more workshops like this on different topics every week, would be incredibly helpful for us." Duration Several students felt that the 20-minute duration for each station was inadequate and, as a result, stations that covered more material were a bit rushed. "The stations were more kind of a rush, maybe extra 5 minutes would help. Overall well executed." "This year's workshop is much more useful as it is more Interactive with plenty of scenarios. Room for improvement is on timing, I believe 30 minutes per station would be more beneficial, if not 25 minutes." Reading materials While the students generally appreciated the workshop, some students criticised the pre-reading materials given to them. They expressed their preference for more condensed and comprehensive notes. "Workshop is helpful in consolidating information. Reading material had too many articles to read, would prefer more comprehensive & consolidated reading materials. Time for teaching station is just nice." Discussion This paper describes an innovative educational practice of teaching contraception to medical undergraduates. It evaluates the students' learning experience in the ACT programme using pre-test and post-test methods and feedback analysis. In this study, innovations were made to the traditional lecture-based teaching on contraception at our university. We provided pre-reading material two weeks before the pre-test so that students would have background knowledge before attempting the pre-test. The active session then enhanced and strengthened the previously acquired knowledge. Incorporating communication and clinical reasoning skills into the case scenarios and hands-on procedures/simulation further augmented the learning experience, as students could critically analyse case scenarios and plan holistic, comprehensive management. This helped with the retention of the knowledge gained. Although it was only a 17% improvement over the baseline score, the follow-up post-test conducted two weeks after the intervention showed statistically significant evidence of improvement in knowledge. This suggests that the ACT programme improved the students' performance in the MCQ test on the assessment of contraception. Counselling skills could not be assessed by the MCQs, but the facilitators ensured that every student participated in the active session. The students learned not only from participating and observing their peers but also from feedback given immediately. Although Problem Based Learning (PBL) and simulation are used extensively by medical schools in Malaysia, active participation by facilitators might not be present. In a study conducted by a Malaysian medical school, students lamented the lack of interaction with the facilitators and some students dominating discussions while others remained passive (Barman, et al., 2006). The feedback on the effectiveness of the programme showed high mean scores for each statement, all greater than 4 (on a scale of 1 to 5), indicating that the students had a positive perception of the usefulness of the programme. Traditional teaching methods were didactic and limited to a single discipline. In more recent years, medical schools have incorporated case discussions and PBL, with some element of handling contraceptive devices.
There are limited reports on a comprehensive learning modality like ours. We integrated Women's Health with General Practice to consolidate the learning of contraception, thus enabling students to apply the skills acquired in both primary care and gynaecology settings. The ACT programme consisted of an overview introductory lecture followed by the interactive workshop, hence making it a comprehensive approach to learning contraception. Increasingly, lecture-style teaching is being abandoned in favour of interactive workshops to promote student engagement. However, a multinational study of resident emergency physicians in four Asian hospitals showed that those who attended the lecture sessions followed by the clinical skills simulation performed better than those who did it in reverse. The authors opined that the didactic lectures equipped the participants with the background knowledge required to understand the clinical simulations better (Li, et al., 2012). Interactive workshops add value to the learning experience of undergraduate students. The format of integrating a clinical case and relating it to theoretical knowledge proved to be an effective learning process. Our interactive workshop employed a more comprehensive strategy than that of a team that compared standard lectures with interactive lectures on contraception. Both groups were given handbooks on contraception and attended lectures. Those in the interactive lectures group were additionally given 2-4 questions to think about and present their answers during the interactive lectures. However, no statistical difference in pre-test versus post-test scores was recorded (Cwiak, et al., 2004). Another option for teaching contraception is team-based learning, where small groups discuss the given topic before presentation. The ACT programme was curated to explore self-study followed by consolidation of learned materials through group discussion at the counselling sections of the various stations. The post-test showed significant improvement in scores although it was held two weeks after the workshop to negate short-term recall. Student feedback was also positive. Mody et al. (2013) compared team-based learning against traditional lectures. Students in team-based learning had curated pre-reading material as well as small group discussions before participating in mini-presentations of various contraceptive topics. Although there was no statistically significant difference in the score improvements between the pre-tests and post-tests of the two groups, students in the team-based learning group reported higher satisfaction scores (Mody, et al., 2013). A similar study conducted in Ethiopia also compared conventional didactic lectures on contraception with a curriculum that added simulated sessions. The students in the intervention group had additional hour-long tutorials and seminars with simulated sessions on IUCD and implant insertions and removals. The intervention group demonstrated higher knowledge scores and was more skilled in both IUCD and implant insertion and removal (Gebremeskel, et al., 2018). The ACT programme explored counselling as a form of consolidating students' understanding of the subject. Students found the session beneficial. Students who opt for an Obstetrics & Gynaecology rotation in Year 5 have the opportunity to attend a contraceptive clinic session to observe how patients are counselled on various methods of suitable contraceptives.
This is in line with Swamy et al., who opined that learning actively in a clinically realistic environment may promote the retrieval of the information in future clinical practice (Swamy, et al., 2013). Limitations of the study The limitations of this study are as follows. It was not done in a clinical setting that would reflect readiness to practise, as this study was conducted on medical undergraduates. There was also no pre-post control group comparing didactic session teaching to the comprehensive ACT programme. Furthermore, not all students completed the pre- and post-tests, as it was not compulsory to do so. Conclusion Contraception counselling is a skill that requires not only knowledge of all the available methods but also clinical reasoning to select the most effective and appropriate contraceptive method for patients. This study demonstrates that effective interactive teaching of contraception helps to increase students' ability to do so. However, we need to refine the amount of pre-reading material given and increase the time spent on individual stations to enhance the ACT programme. It is also simple enough to be transformed into a remote learning programme with contact time required only for practising device insertions. It can be utilised to teach junior doctors who did not have prior exposure to a comprehensive contraception module. Abbreviations ACT - Advancing Contraception Teaching; IUCD - Intrauterine Contraceptive Device; IQR - Interquartile Range; MCQ - Multiple Choice Questions; MEC - Medical Eligibility Criteria; OCP - Oral Contraceptive Pill. Ethics approval and consent to participate This project evaluation was assessed by the Monash University Human Research Ethics Committee (MUHREC 18344) to be a quality assurance project and was considered suitable for verbal consent. Availability of data and material The data that support the findings of this study are available from Monash University, Malaysia, but restrictions apply to the availability of these data due to student confidentiality. Data are, however, available from the authors upon reasonable request that does not compromise student confidentiality.
Pentraxin 3 Promotes Glioblastoma Progression by Negatively Regulating Cell Autophagy Glioblastoma is the most malignant tumor arising from the central nervous system, with a median survival time of less than 14.6 months. Pentraxin 3 has been shown to be associated with poor survival outcomes in various tumors. Recently, several studies revealed its association with glioblastoma progression, but the mechanism remains unknown. Autophagy is a form of programmed cell death and plays a critical role in tumor progression. In this study, pentraxin 3 is identified as a prognostic biomarker of glioblastoma that can promote glioblastoma progression by negatively modulating tumor cell autophagy. The transcription factor JUN is proposed to participate in the modulation of cell autophagy by regulating pentraxin 3 expression. This work reveals a novel mechanism of pentraxin 3-mediated glioblastoma progression. Furthermore, JUN is identified as a potential transcription factor involved in pentraxin 3-mediated tumor cell autophagy. INTRODUCTION Gliomas are primary tumors that originate in the brain parenchyma and can be classified according to the type of glial cell involved in the tumor. The World Health Organization classifies glioma into four grades based on malignancy. GBM, WHO grade IV, is the most malignant type, with a median survival time of less than 15 months (Yang et al., 2011a; Hong et al., 2012; Ouyang et al., 2016). Current clinical treatment includes maximal surgical resection followed by postoperative radiotherapy and concurrent chemotherapy (Stupp et al., 2005; Gusyatiner and Hegi, 2018), but patients' survival outcomes remain unsatisfactory. Recently, several factors have been identified and applied to predict survival outcome in the clinic, such as the subtype of GBM (Verhaak et al., 2010) and the status of IDH1 (Wang et al., 2013). On account of GBM heterogeneity, insights into potential prognostic factors are urgently needed. Pentraxin 3, also known as TSG-14, is an inflammatory molecule belonging to the pentraxin family and is mainly secreted by inflammatory cells such as dendritic cells and macrophages (Liu et al., 2011; Bonavita et al., 2015). Recently, PTX3 has been shown to play a role in tumor progression. For example, PTX3 affects tumor proliferation and apoptosis by interacting with the PI3K/AKT/mTOR signaling pathway in lung cancer and breast cancer (Thomas et al., 2017). PTX3 is involved in the epithelial-mesenchymal transition in melanoma (Ronca et al., 2013) and breast cancer. Notably, PTX3 can interact with the fibroblast growth factor-2/fibroblast growth factor receptor system to promote tumor progression (Bonavita et al., 2015; Ying et al., 2016; Giacomini et al., 2018). In glioma, a previous study confirmed that decreasing the expression of PTX3 impaired glioma cell proliferation and invasion (Tung et al., 2016). However, the role of PTX3 in GBM is poorly understood. Autophagy is a form of programmed cell death that acts as a response to unfavorable factors such as hypoxia and nutrient deficiency (Stavoe and Holzbaur, 2019). Activating autophagy can therefore prevent tumor progression and increase tumor sensitivity to chemo- or radiotherapy (Perez-Hernandez et al., 2019). Previous studies showed that PTX3 can affect cell autophagy, but their relationship in GBM is unknown (Giorgi et al., 2015; Wu et al., 2015).
In this study, we analyzed the expression profile of PTX3, its ability to predict survival outcome, and its potential mechanisms in affecting GBM progression based on The Cancer Genome Atlas (TCGA) dataset. The results were verified in the Chinese Glioma Genome Atlas (CGGA) dataset. Then, we performed in vitro experiments to show that PTX3 affects tumor cell viability and autophagy. By integrating the results from the in vitro experiments and bioinformatics, we showed that PTX3 negatively modulates cell autophagy and that the transcription factor JUN might regulate PTX3 expression. Cell Culture and Transfection Human GBM cells (U87-MG) were purchased from the Chinese Academy of Sciences. Glioma cells were maintained in DMEM medium with 10% fetal bovine serum and 1% penicillin-streptomycin at 37 °C and 5% CO2. Cells were randomly divided into different groups: the control group, the siRNA-NC (si-NC) group, the siRNA-PTX3 (si-PTX3) group, the overexpression-JUN group, and the overexpression-JUN with siRNA-PTX3 group. The siRNA against PTX3 (5′-GGTCAAAGCCACAGATGTA-3′) and the JUN overexpression plasmid were obtained from RiboBio (Guangzhou, China). Five microliters of siRNA-PTX3 (or 2.5 µg of the JUN overexpression plasmid) and 5 µl lipofectamine (RiboBio, China) were added to 100 µl serum-free DMEM. Then, 1 ml DMEM was added and the mixed solution was incubated at 37 °C for 6 h. The medium was discarded after 72 h and cells were washed with PBS twice for further experiments. The autophagic flux assay followed a similar process as previously described. Cells were separated into five groups and processed with the autophagy inhibitor Bafilomycin A1 (Baf A1; Abcam): A, the control group without Baf A1; B, the si-PTX3 group without Baf A1; C, the si-PTX3 group with Baf A1; D, the control group with Baf A1; E, the si-NC group with Baf A1. CCK8 Assay Transfected tumor cells in the logarithmic growth phase were obtained and digested for the CCK8 assay. 5 × 10³ glioma cells in 100 µl of medium were placed into 96-well plates. The absorbance at 450 nm was measured every 24 h over the following 3 days. Colony Forming Assay Cells were digested and plated in 6-well plates (300 cells per well) and cultured with 5% CO2 at 37 °C for 2 weeks. The colonies were then fixed with 4% methanol (1 ml per well) for 15 min and stained with crystal violet for 30 min at room temperature. After photographing, the stain was dissolved with 10% acetic acid, and the absorbance was measured at 550 nm. Immunofluorescence Assay Cells were fixed with 4% paraformaldehyde and then permeabilized with 0.3% Triton at 37 °C. After blocking with 3% BSA for 60 min, cells were incubated with rabbit anti-LC3B (1:200, ab51520; Abcam) overnight at 4 °C. On the second day, cells were incubated with fluorescein isothiocyanate (FITC)-conjugated secondary antibodies (green) at 37 °C for 90 min and stained with DAPI (blue) at 37 °C for 10 min. Observation and photography were conducted by confocal microscopy. Data Collection and Single-Cell Analysis RNA-seq data of glioma and the corresponding clinical information were acquired from the TCGA database and the CGGA database. All data were converted into TPM values for further analysis. For the single-cell analysis, three GBM samples from GSE139448 were processed and normalized with the R package "Seurat" (functions "NormalizeData" and "FindVariableGenes") (Wang et al., 2020). The GO analysis based on PTX3 expression was performed as described below. Expression profiles of PTX3 and JUN were plotted with the Seurat violin-plot function ("VlnPlot"). All data were obtained from online public databases, and the corresponding ethics statements can be found on their websites.
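A minimal sketch of the single-cell normalization step just described; the toy count matrix stands in for one GSE139448 sample, and FindVariableFeatures is the current Seurat name for the older "FindVariableGenes".

```r
library(Seurat)

# Toy gene-by-cell count matrix standing in for one GBM sample (hypothetical)
set.seed(7)
counts <- matrix(rpois(2000, lambda = 1), nrow = 100,
                 dimnames = list(paste0("Gene", 1:100), paste0("Cell", 1:20)))
counts["Gene1", ] <- rpois(20, 5)  # pretend "Gene1" is a PTX3-like gene

seu <- CreateSeuratObject(counts = counts)
seu <- NormalizeData(seu)           # log-normalization
seu <- FindVariableFeatures(seu)    # "FindVariableGenes" in Seurat v2
VlnPlot(seu, features = "Gene1")    # violin plot, as used for PTX3 and JUN
```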
GO Analysis and Gene Set Enrichment Analyses (GSEA) Genes with an adjusted p-value < 0.05 and an absolute fold change larger than 2.0 were considered statistically significant. Gene Ontology (GO) analysis of the aberrantly expressed genes was performed with the GSVA analysis, and a false discovery rate (FDR) < 0.05 was considered statistically significant. The GSEA analysis was conducted to illustrate the relationship between PTX3 expression and hallmark gene sets from the Molecular Signatures Database (MSigDB). Survival Analysis Patients were subdivided into high and low groups according to median PTX3 expression. The overall survival (OS), progression-free interval (PFI), and disease-specific survival (DSS) rates of patients in the low and high groups were compared by the Kaplan-Meier method with the log-rank test. ROC curves and AUCs were used to evaluate the predictive performance of PTX3 expression in various aspects, including 3- and 5-year OS, GBM subtype (classical, mesenchymal, neural, proneural) and IDH status (wildtype, mutant). Mutation and Copy Number Variation Analysis Single nucleotide polymorphisms (SNPs) and somatic CNVs were downloaded from the TCGA database. CNV regions on chromosomes associated with PTX3 expression were analyzed using GISTIC 2.0. The Venn diagram was generated with TBtools (Chengjie Chen, Rui Xia, Hao Chen & Yehua He. TBtools, a Toolkit for Biologists integrating various HTS-data handling tools with a user-friendly interface. Preprint at https://www.biorxiv.org/content/10.1101/289660v1, 2018). Statistical Analyses Differences in PTX3 expression profiles across WHO grades, GBM subtypes and treatment outcomes were analyzed using Wilcoxon rank testing. Kaplan-Meier survival curves were generated and compared using the log-rank test. The Pearson correlation was applied to evaluate linear relationships between gene expression levels. Univariate and multivariate Cox regression analyses, and LASSO regression analyses, were performed in R/Bioconductor (version 3.6.2). Statistical analyses of the colony-forming assay and the CCK8 assay were carried out with GraphPad Prism (version 8.0). Two-way ANOVA followed by Tukey's post-hoc test was used for comparisons of more than two groups. A P-value < 0.05 was considered statistically significant. PTX3 Expression Is Elevated in Glioblastoma PTX3 expression profiles of pan-cancer and normal tissues were obtained from the TCGA database and the GTEx database. PTX3 expression in glioma was higher than in normal tissue (P < 0.001; Figure 1A). In glioma, PTX3 expression increased as tumor grade increased (P < 0.001; Figure 1B). Based on treatment outcome after the first course, PTX3 expression was significantly higher in patients with PD than in the other three groups (CR, PR, and SD) in glioma from the TCGA database (P < 0.001, Supplementary Figure S1A). Therefore, high PTX3 expression indicates a worse survival outcome. The IDH status serves as a prognostic biomarker in the clinic (Chen et al., 2019), and the MGMT status can predict tumor sensitivity to temozolomide (Hegi et al., 2005). In our work, PTX3 was enriched in IDH-wildtype GBM (TCGA: P < 0.001, Figure 1C; CGGA: P < 0.001, Figure 1D) and in MGMT-unmethylated glioma (P < 0.001, Figure 1E). However, no significant expression difference was observed in GBM based on the MGMT status (P > 0.05, Figure 1F) in the TCGA dataset.
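Before the subtype and survival results below, here is a minimal sketch of the median-split Kaplan-Meier comparison described in the Survival Analysis subsection; the simulated data frame stands in for the per-patient TCGA table, whose actual column names are not given in the paper.

```r
library(survival)

# Simulated stand-in for per-patient data: PTX3 (TPM), OS time and OS status
set.seed(3)
df <- data.frame(PTX3      = rexp(100, 0.1),
                 OS_time   = rexp(100, 1 / 500),
                 OS_status = rbinom(100, 1, 0.7))
df$group <- ifelse(df$PTX3 > median(df$PTX3), "high", "low")

fit <- survfit(Surv(OS_time, OS_status) ~ group, data = df)  # Kaplan-Meier curves
survdiff(Surv(OS_time, OS_status) ~ group, data = df)        # log-rank test
```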
As for GBM subtypes, mesenchymal GBM exhibited the worst survival outcome and the highest PTX3 expression, while proneural GBM showed the opposite, in the TCGA microarray database (Figures 1G,H and Supplementary Figure S1B). Therefore, high PTX3 is associated with aggressive glioma. PTX3 Acts as a Prognostic Prediction Biomarker and Indicates Worse Survival Outcome Patients were subdivided into high- or low-risk groups based on median PTX3 expression to analyze differences in survival outcome. In the TCGA database, high PTX3 expression suggested a worse survival outcome than low PTX3 expression in glioma (P < 0.001; Figure 2A). Similarly, the low-risk group manifested a better survival outcome relative to the high-risk group in GBM in the TCGA sequencing (P = 0.007; Figure 2B) and microarray (P = 0.0024; Figure 2C) databases. The same result was confirmed in the CGGA database (P = 0.0012; Figure 2D). The survival outcome of patients receiving radiotherapy in the low PTX3 group was better than in the high PTX3 group in the TCGA microarray database.

[Figure 1 legend: (B) Sequencing data of PTX3 mRNA levels in WHO grades II, III, and IV from the TCGA dataset. PTX3 expression is related to IDH status in GBM in the TCGA microarray dataset (C, P < 0.001) and the CGGA dataset (D, P < 0.001). PTX3 expression is significantly elevated in the MGMT-unmethylated group compared with the MGMT-methylated group in glioma (E, P < 0.001) from the TCGA sequencing dataset, while a similar difference is not observed in GBM (F, array) from the TCGA array dataset. (G,H) PTX3 expression profiles in GBM subtypes from the TCGA microarray dataset. MES, mesenchymal; PN, proneural; NE, neural; CL, classical. NS, not statistically significant; *P < 0.05; **P < 0.01; ***P < 0.001.]

ROC curves and AUCs were calculated to reveal the prognostic prediction ability of PTX3. The 3- and 5-year survival probabilities based on PTX3 expression were calculated (3-year: AUC = 0.84; 5-year: AUC = 0.792; Figure 2G). The AUCs calculated according to IDH status (AUC = 0.852, Figure 2H) and GBM subtype (AUC = 0.79, Figure 2I) in the TCGA microarray database were also computed. The same results were verified in the TCGA sequencing dataset (IDH: AUC = 0.839; subtypes: AUC = 0.842; Supplementary Figures S1G,H). Univariate and multivariate Cox regression analyses were also performed to evaluate the prognostic prediction ability of PTX3 (Supplementary Tables S1, S2). Therefore, PTX3 promotes tumor progression and its expression can predict survival outcome. Biofunction Prediction of PTX3 Next, we predicted the potential biological functions of PTX3 by conducting the GO analysis (Figures 3A-C), the single-cell analysis (Figure 3D), and the GSEA analysis (Figures 3E-G and Supplementary Figure S1I). The results suggested that PTX3 is involved in the negative modulation of cell autophagy and in extracellular matrix disassembly. Therefore, PTX3 might promote tumor progression by inhibiting tumor cell autophagy. To clarify the association between PTX3 and genes involved in the negative modulation of autophagy, we first identified differentially expressed genes (DEGs) between the high and low PTX3 expression groups. Genes related to the negative regulation of autophagy were then obtained from the MSigDB (http://www.gsea-msigdb.org/gsea/msigdb/cards/GO_NEGATIVE_REGULATION_OF_AUTOPHAGY). Three genes, HMOX1, IL10RA, and TREM2, were identified by intersecting the DEGs with the autophagy-related genes (Figure 3H). The correlation coefficients were also calculated (Supplementary Figure S2).
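The intersection step above can be sketched as follows; the gene lists and the toy TPM matrix are hypothetical stand-ins for the study's actual objects.

```r
# Hypothetical gene lists: DEGs (|FC| > 2, adjusted p < 0.05) and the
# MSigDB GO_NEGATIVE_REGULATION_OF_AUTOPHAGY set (illustrative members only)
degs            <- c("HMOX1", "IL10RA", "TREM2", "EGFR", "VIM")
autophagy_genes <- c("HMOX1", "IL10RA", "TREM2", "BCL2")
shared <- intersect(degs, autophagy_genes)   # HMOX1, IL10RA, TREM2

# Pearson correlation of each shared gene with PTX3 across samples
set.seed(5)
expr <- matrix(rnorm(4 * 50), nrow = 4,
               dimnames = list(c("PTX3", shared), NULL))  # toy TPM matrix
sapply(shared, function(g) cor(expr["PTX3", ], expr[g, ], method = "pearson"))
```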
PTX3 Affects Tumor Cell Viability by Negatively Modulating Cell Autophagy We next tested whether PTX3 can affect GBM cell viability. The CCK8 assay indicated that cell proliferation was inhibited by silencing PTX3 expression (Figure 4A). The Western blotting assay showed that the expression levels of the autophagy-related proteins beclin1 and LC3B were elevated in the si-PTX3 group (Figure 4B). Notably, the expression of LC3B-II was higher in the si-PTX3 group relative to the other groups, indicating activated cell autophagy. Next, the autophagic flux assay was performed to determine the source of LC3B-II. The increased LC3B-II expression in the si-PTX3 group could not be reversed by adding the autophagy inhibitor Baf A1 (Figure 4C). Therefore, PTX3 can negatively modulate cell autophagy. The LC3B expression profile was also examined by immunofluorescence, suggesting that U87MG cells in the si-PTX3 group tended to accumulate more LC3B in the cytoplasm (Figure 4D). The colony-forming assay also suggested that the viability of U87MG cells was inhibited when PTX3 expression was decreased (Figure 4E). Thus, PTX3 affects tumor cell viability by negatively modulating cell autophagy. Expression Profile and Biofunction of Transcription Factor JUN Based on gene expression correlation, we identified a positive correlation between JUN expression and PTX3 expression (Figure 5A and Supplementary Figure S3A). Likewise, JUN can predict survival outcome based on its expression (Supplementary Figures S3B,C). JUN can bind to chromosome 3, as does polymerase (RNA) II (DNA-directed) polypeptide A (a regulator of messenger RNA synthesis) (Figure 5B). A previous study also proved that JUN binds to the promoter of PTX3 to regulate PTX3 expression (Chang et al., 2015). Next, we explored the biofunction of JUN; the results (Supplementary Figure S3E) and the single-cell analysis (Figure 5E) also support that high JUN expression is associated with the negative modulation of cell autophagy. JUN Affects Glioblastoma Cell Proliferation, Viability and Autophagy The CCK8 assay indicated that increased JUN expression can promote tumor cell proliferation, but silencing PTX3 expression can reverse that process (Figure 6A). Next, the Western blotting assay was performed to examine the relationship between JUN, PTX3 and the autophagy-related proteins (Figure 6B). JUN significantly increased PTX3 expression relative to the control group. In the meantime, less LC3B-I was converted into LC3B-II, indicating that cell autophagy was inhibited. Cell autophagy was re-activated when PTX3 expression was inhibited. The colony-forming assay also supported that JUN can affect tumor viability through affecting PTX3 expression (Figure 6C). Therefore, JUN can affect U87MG cell proliferation, viability and autophagy. DISCUSSION Previous studies confirmed elevated PTX3 levels in tumor tissue as a biomarker of poor survival outcome (Locatelli et al., 2013; Tarassishin et al., 2014). In this work, we also show that PTX3 expression is associated with aggressive types of glioma. High PTX3 expression indicates a worse survival outcome. Silencing PTX3 expression impaired tumor cell colony-forming and proliferation ability in vitro. Thus, PTX3 acts as a prognostic biomarker of glioma. PTX3 was initially identified as an inflammatory factor belonging to the pentraxin family (Bonavita et al., 2015). Its expression is also enriched in immune cells based on the single-cell analysis. Therefore, PTX3 might be able to affect the tumor immune landscape (Netti et al., 2020).
Tumors can be labeled as 'hot' or 'cold' according to their response to immunotherapy, and the degree of immune cell infiltration determines tumor sensitivity to immunotherapy (Doni et al., 2019; Tomaszewski et al., 2019). A previous study proved that PTX3-deficient tumors manifested high macrophage infiltration, more cytokine production and high complement activation (Bonavita et al., 2015). However, the association between PTX3 and immune cells is unclear and requires further research. The JUN oncogene, also known as c-Jun or AP-1, belongs to the Jun family and encodes a component of the activator protein-1 complex (Meng and Xia, 2011). A previous study has already confirmed that JUN can bind to the PTX3 promoter to regulate its expression (Chang et al., 2015). Other studies illustrated that JUN plays a critical role in tumor progression. For example, the MAPK/JNK pathway can regulate cell autophagy, and c-Jun is recognized as one of its downstream targets (Zhou et al., 2015). Activation of the PI3K/Akt pathway and the NF-κB pathway can initiate JUN expression in head and neck cancer (Chang et al., 2015). Factors such as astrocyte elevated gene 1 (Liu et al., 2017), microRNA-4476 (Lin et al., 2020), and 3-phosphoinositide-dependent protein kinase 1 (Luo et al., 2018) can also activate c-Jun expression. Therefore, JUN participates in the regulation of cell autophagy by affecting PTX3 expression. The single nucleotide polymorphism analysis suggested that the mutation ratios of EGFR and PTEN were higher in the high PTX3 expression group relative to the low PTX3 expression group, while IDH1, ATRX and TP53 mutations were enriched in the low PTX3 expression group. EGFR and PTEN mutations are common in GBM and actively participate in promoting tumor progression (Brennan et al., 2013; Han et al., 2016). TP53 is a recognized tumor suppressor, and its mutation can induce tumorigenesis (Wang et al., 2014). IDH1 and ATRX mutations have been confirmed as biomarkers indicating a better survival outcome in the clinic (Wang et al., 2013; Aquilanti et al., 2018). Therefore, the SNP analysis supported that the low PTX3 expression group indicates a better survival outcome relative to the high PTX3 expression group. The CNV analysis indicated that EGFR was amplified in the high PTX3 expression group, while deletions of regions such as ERRFI1 and NF1 were mainly observed in the high PTX3 expression group. Previous studies identified that high EGFR expression promotes tumor progression (Hatanpaa et al., 2010), and that high ERRFI1 expression (Duncan et al., 2010) can slow tumor progression. In general, PTX3 is a prognostic biomarker in GBM, and PTX3 promotes GBM progression through the negative modulation of cell autophagy. AUTHOR CONTRIBUTIONS ZW, XW, and NZ prepared the manuscript, analyzed the data, and performed the experiments. HZ and ZD analyzed the data. MZ modified the manuscript. SF and QC designed the project and gave final approval of the manuscript for publication. All authors contributed to the article and approved the submitted version.
Nerve growth factor and glutamate increase the density and expression of substance P-containing nerve fibers in healthy human masseter muscles Nocifensive behavior induced by injection of glutamate or nerve growth factor (NGF) into the rat masseter muscle is mediated, in part, through the activation of peripheral NMDA receptors. However, information is lacking about the mechanisms that contribute to the pain and sensitization induced by these substances in humans. Immunohistochemical analysis of microbiopsies obtained from human masseter muscle was used to investigate whether injection of glutamate into the NGF-sensitized masseter muscle alters the density of putative sensory afferent (SP-expressing) fibers or their expression of the NMDA receptor subtype 2B (NR2B) or NGF. The relationship between expression and pain characteristics was also examined. NGF and glutamate administration increased the density of putative sensory afferent muscle fibers and their expression of NR2B and NGF (P < 0.050). This increase in expression was greater in women than in men (P < 0.050). Expression of NR2B receptors by putative sensory afferent fibers was positively correlated with pain characteristics. The results suggest that increased expression of peripheral NMDA receptors partly contributes to the increased pain and sensitivity induced by intramuscular injection of NGF and glutamate in healthy humans, a model of myofascial temporomandibular disorder (TMD) pain. Whether a similar increase in peripheral NMDA expression occurs in patients with painful TMDs warrants further investigation. […] from different cells [19][20][21][22][23]. NGF binds to tyrosine kinase A (TrkA) receptors on nociceptive endings 24, which in turn are retrogradely transported from the site of inflammation to the dorsal root ganglion, leading to increased SP transcription 25. In rats, experimentally induced inflammation of the gastrocnemius and soleus muscles increased the density of nerve fibers (mainly perivascular fibers) expressing SP or NGF 26. However, administration of NGF alone has been shown to induce sensory hypersensitivity without any inflammatory response 27. Therefore, it is of interest to know whether these neurotransmitters are also involved in experimentally induced masseter myalgia, especially since the majority of TMDs do not show signs of inflammatory changes 28,29. Myalgia can be initiated by noxious stimuli (mechanical or chemical) applied to the muscle, which can activate specific receptors in the muscle nerve fibers as a consequence of the elevation of algogenic substances, such as glutamate 30. In patients suffering from TMD-related myalgia, the glutamate level is elevated in the masseter muscle interstitial fluid as well as in the saliva and plasma 31,32. In rats, injection of either NGF or glutamate induces muscle sensitization, in part, through the activation of peripheral N-methyl-d-aspartate (NMDA) receptors (glutamate receptors) 8,11,12,33. While these findings in animals suggest an interaction between NGF and NMDA receptors, their interaction in human muscle is still unknown.
Therefore, this study aimed to investigate the effect of NGF and glutamate injections on the density of nerve fibers in general, and on the density of putative sensory afferent (SP-expressing) nerve fibers as well as their expression of NMDA receptor subtype 2B (NR2B) and NGF, in the human masseter muscle; to correlate expression with pain characteristics; and to determine any possible sex-related differences in these effects of the combined injections. Methods Participants. Advertisements were posted on the Aarhus University campus (Denmark) and on an internet page (https://aucobe.sona-systems.com/). In total, 15 healthy women and 15 age-matched healthy men (mean (SD) 24 (4) years of age) were recruited. Screening for TMD was accomplished by using the diagnostic criteria for TMD (DC/TMD) 2. Exclusion criteria were TMD-related pain, facial pain, palpatory tenderness, neurological disorder, inflammatory diseases, fibromyalgia, whiplash-associated disorders, neuropathic disorders, or pregnancy. Participants were instructed not to use anti-inflammatory or analgesic medication for at least 24 h before the procedures and until the end of the experiment. All participants gave informed consent. The experiment followed the guidelines of the Helsinki declaration and was approved by the ethics committee in Aarhus, Denmark (Midtjylland, approval No. 1-10-72-199-15). Study design. The study comprised three sessions (day 0, day 3, and day 4). On day 0, 0.4 mL NGF (25 μg/mL sterile solution; Skanderborg Apotek, Aarhus, Denmark) was injected into the masseter muscle on the experimental side (left side). On day 3, glutamate (1 M, 0.2 mL sterile solution; Skanderborg Apotek, Aarhus, Denmark) was injected into the masseter on the same side. The solution had previously been shown to exert its effect on nerve fibers through the activation of peripheral NMDA receptors and not through its hypertonicity 8. The technique followed for the injections (glutamate or NGF) was standard and precise, as previously described 34. An aqueous solution was used as the solvent for both injections. Masseter microbiopsies were obtained both on day 0 from the control side and on day 4 from the experimental (injection) side, as previously described in detail 35,36. In each session, the pressure pain threshold (PPT), chewing-evoked pain and temporal summation pain were recorded from both sides before (i.e. baseline) and 5 min after the injections. Pain intensity at rest was recorded directly after the injections on days 0 and 3 (Fig. 1a). The results from these examinations as well as the DC/TMD results have been presented elsewhere 15. Immunohistochemistry and image analysis. Biopsy samples were fixed overnight at 4 °C with 4% paraformaldehyde. Prior to freezing (−80 °C), samples were rinsed with phosphate-buffered saline (PBS) and dehydrated first with a 20% and then with a 40% sucrose solution. Sliced sections were treated with 10% normal donkey serum in PBS for 1 h, and then incubated for 24 h with primary antibodies against PGP 9.5 (1:250, anti-human mouse monoclonal, ABCAM Inc, Cambridge, England, ab72911), NR2B (1:200, anti-human rabbit polyclonal, ABCAM Inc, Cambridge, England, ab65783), SP (1:1000, anti-human guinea pig polyclonal, ABCAM Inc, Cambridge, England, ab10353), and NGF (1:20, anti-human goat polyclonal, R&D Systems Inc, 614 McKinley PL NE, Minneapolis, AF-256-NA).
Alexa Fluor 488 donkey anti-mouse, Alexa Fluor 546 donkey anti-rabbit, Alexa Fluor 633 donkey anti-goat (ThermoFisher, Burlington, ON, Canada), and Alexa Fluor 405 donkey anti-guinea pig (Sigma-Aldrich, MO, USA) at a concentration of 1:700 were used as the corresponding fluorescent secondary antibodies for PGP 9.5, NR2B, NGF and SP, respectively. The antibody specificity was checked by omitting the primary antibody. Images of the stained sections were taken with a Leica TCS SPE confocal microscope (Leica Microsystems, Wetzlar, Germany). The analysis was performed blindly by a researcher who did not participate in biopsy collection or coding. To distinguish and count PGP 9.5-positive nerve fibers and to measure their area, the analysis and image processing program ImageJ (Image Processing and Analysis in Java; National Institutes of Health, USA) was used. The program was also used to identify the co-expression of nerve fibers (PGP 9.5) with other molecules (SP, NR2B and NGF) 36. Fibers were recognized as positive whenever PGP 9.5 fluorescent signals were greater than +2 standard deviations (SDs) above the mean background of the image and had a minimum width and length of 4 μm 11. PGP 9.5-positive nerve fibers were found in different tissues within the muscle; however, the current study looked only at the nerve fibers that were associated with myocytes and connective tissue. Hematoxylin staining was performed to facilitate the description of cell types within the biopsies. Groups of tubular or round, well-defined cells with multiple nuclei at the periphery were regarded as myocytes 37, while irregular tissue containing many cells and loose or dense fibers surrounding the myocytes was considered connective tissue 38. Nerve fibers expressing SP were regarded as putative sensory afferent nerve fibers (Fig. 1b). Nerve fiber density was calculated as the number of positive fibers in a tissue divided by the total area of the same tissue on an image, averaged over the number of images per subject. Expression frequency equals the number of PGP 9.5-positive fibers that co-expressed other substances divided by the total number of PGP 9.5-positive fibers in the image, averaged over the number of images for the subject. Specific details of the image analysis using the ImageJ program have been explained previously in another study by the authors 36. Statistical analysis. Thirty participants were sufficient to test the hypothesis at a significance level of 0.05 and a power of 0.80, assuming an estimated difference of 30% and a standard deviation of 50% in nerve fiber expression. Moreover, it has been shown that groups of 12 or more are enough to detect significant sex differences in experimental pain models 39,40. When data were compared between days, only data from participants whose biopsies contained the same tissue (connective tissue or myocytes) on both days were included in the analysis. However, when data were analyzed for correlations, all actual data from participants were included (Table 1). Significant differences and interactions between factors (sex and day) in the density and expression frequency of nerve fibers were detected using a parametric two-way repeated-measures (RM) ANOVA; this was followed by post-hoc comparisons using the Bonferroni test.
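A minimal sketch of this repeated-measures comparison in R; the long-format data frame and its column names are hypothetical, not the study's actual variables, and a dedicated mixed-model package would be an equally valid choice.

```r
# Hypothetical long-format data: 30 subjects (sex fixed per subject) x 2 days
set.seed(9)
d <- data.frame(subject = factor(rep(1:30, each = 2)),
                sex     = factor(rep(rep(c("man", "woman"), each = 2), 15)),
                day     = factor(rep(c("day0", "day4"), 30)),
                freq    = runif(60, 0, 100))  # expression frequency (%)

fit <- aov(freq ~ sex * day + Error(subject/day), data = d)
summary(fit)  # two-way RM ANOVA: effects of sex, day and their interaction

# Bonferroni-corrected pairwise comparisons as the post-hoc step
pairwise.t.test(d$freq, interaction(d$sex, d$day),
                p.adjust.method = "bonferroni")
```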
The Pearson or Spearman correlations (depending on whether the data were normally distributed) were used to examine the relationship between the peak pain intensity (5 min after the glutamate injection on day 3) and the expression of SP, NGF and NR2B alone, or their co-expressions (SP with NGF, SP with NR2B, NGF with NR2B, and all together), by nerve fibers on day 4, and also to test the relationship between the change (day 3 post glutamate injection/day 0 baseline) in mechanical pain characteristics and the expression of SP, NGF and NR2B on day 4. The level of significance was set to P < 0.05. For the statistical analysis, the SigmaPlot for Windows version 14.0 software (Systat Software Inc., San Jose, CA, USA) was used. Results The results of the combined effect of NGF and glutamate on the density of nerve fibers and the expression of receptors and neuropeptides are presented. The differences between connective tissue and myocytes in density as well as in nerve fiber expression have been presented elsewhere 36. The combined effect of NGF and glutamate on the density of nerve fibers. The average density of PGP 9.5-positive nerve fibers associated with myocytes did not differ between days (F = 0.842, P = 0.374) or between sexes (F = 0.063, P = 0.805). However, the density of PGP 9.5-positive nerve fibers expressing SP was significantly greater on day 4 compared to day 0 (F = 6.970, P = 0.019). No sex-related differences were detected (F = 1.459, P = 0.247). No significant interaction was found between the factors (F = 0.078, P = 0.783). There were no significant differences in the average density of PGP 9.5-positive nerve fibers between days (F = 0.494, P = 0.489) or between sexes (F = 0.100, P = 0.754) within connective tissue. No significant differences in the average density of PGP 9.5-positive nerve fibers expressing SP between days (F = 0.871, P = 0.361) or between sexes (F = 1.504, P = 0.233) were identified. The combined effect of injections on the expression of receptors and neuropeptides. Myocytes. Analyzed data from all participants showed an increase in the frequency of nerve fiber expression of SP alone (F = 13.713, P = 0.002), with NR2B (F = 10.599, P = 0.006) and with NGF (F = 5.151, P = 0.040), as well as all three markers together (F = 4.774, P = 0.046), on day 4 compared to day 0. No significant differences were detected between days in the nerve fiber expression of NR2B or NGF alone, or both of them in combination (Table 2). There were no significant differences in the expression frequency of SP, NR2B, or NGF between sexes. However, within day 4, the frequency of nerve fibers expressing SP alone (P = 0.032) was significantly higher in women compared with men (Table 2). Connective tissue. When data from all participants were used for the analysis, no significant changes in the frequency of expression of SP, NR2B, or NGF by nerve fibers were detected between days (Table 3). However, the nerve fiber expression of SP (F = 6.296, P = 0.020), NR2B (F = 4.956, P = 0.037), SP with NR2B (F = 8.366, P = 0.008), SP with NGF (F = 4.375, P = 0.048) and all three markers together (F = 4.716, P = 0.041) was significantly higher in women when compared with men.
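Before the correlation results below, here is a minimal sketch of the Pearson-versus-Spearman choice described in the statistical analysis; the vectors and the Shapiro-Wilk normality criterion are assumptions, as the paper does not state which normality test was used.

```r
# Hypothetical vectors: marker expression on day 4 and % change in pain
set.seed(11)
x <- runif(28, 0, 60)                 # e.g. SP-with-NR2B expression (%)
y <- 100 + 0.5 * x + rnorm(28, 0, 15) # % change in temporal summation pain

# Choose the correlation method from normality of both variables
normal <- shapiro.test(x)$p.value > 0.05 && shapiro.test(y)$p.value > 0.05
cor.test(x, y, method = if (normal) "pearson" else "spearman")
```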
The correlation between the expression of putative sensory afferent nerve fibers and the mechanical sensitivity induced by NGF and glutamate. Myocytes. There was a significant positive correlation between the nerve fiber expression of SP alone and the percentage change in temporal summation pain (day 3 post glutamate injection/day 0 baseline) (r = 0.415, n = 23, P = 0.048). The nerve fiber expression of SP in combination with NR2B was also correlated with the percentage change in temporal summation pain (r = 0.437, n = 23, P = 0.036). No significant correlation was found for the other mechanical sensitivity parameters (PPT and chewing-evoked pain) (P > 0.05).

Table 2. The frequency (%) expression of nerve fibers associated with myocytes. The table presents the mean (SD) frequency of nerve fibers expressing markers alone and in co-expression (SP with NR2B, SP with NGF, NR2B with NGF, or All), from all participants, and from men and women separately, on days 0 and 4. All = SP with NR2B and NGF. ## Significant differences between days in all participants (2-way RM ANOVA; P < 0.05). # Significant differences between days (Bonferroni; P < 0.05) within men or women. *Significant differences between men and women (Bonferroni; P < 0.05) within day 0 or day 4.

Connective tissue. Data from all participants showed significant positive correlations between the expression of SP with NR2B by nerve fibers on day 4 and the percentage change (day 3 post glutamate injection/day 0 baseline) in temporal summation pain (r = 0.448, n = 28, P = 0.016) (Fig. 2A) as well as chewing-evoked pain (r_s = 0.394, n = 28, P = 0.037) (Fig. 2B). No significant correlations were found for PPT. The correlation between the expression of SP with NR2B and pain induced by glutamate. Myocytes. No significant correlations were found. Connective tissue. A significant positive correlation was detected between the nerve fiber expression of SP with NR2B and the peak glutamate-evoked pain intensity in all participants (r = 0.463, n = 28, P = 0.013) (Fig. 2C). However, no significant correlation was found when the data from men (r = 0.315, n = 13, P = 0.294) and women (r = 0.495, n = 15, P = 0.060) were analyzed separately. Discussion The main findings of the current study were that: (1) the combination of NGF and glutamate injections into the masseter muscle increases the density and expression of putative sensory afferent nerve fibers that express SP; (2) the density of putative sensory afferent nerve fibers does not differ between sexes, but the expression of putative sensory afferent nerve fibers is greater in women than in men; (3) the increase in temporal summation pain and chewing-evoked pain induced by NGF and glutamate is positively correlated with the expression of NR2B by putative sensory nerve fibers; and (4) peak glutamate-induced pain intensity is positively correlated with the expression of NR2B by putative sensory nerve fibers in the NGF-sensitized masseter muscle. SP is an important peptide that is involved in many physiological and pathophysiological processes, including nociception 41. At the level of the trigeminal ganglion, SP was mostly found in unmyelinated, small-diameter nerve fibers 42, which comprise 10-30% of all trigeminal nerve fibers 43. SP-expressing sensory afferent nerve fibers project to the brainstem trigeminal sensory nucleus and, upon release, increase the excitability of trigeminal sensory neurons 44,45.

Table 3. The frequency (%) expression of nerve fibers within connective tissue. The table presents the mean (SD) frequency of nerve fibers expressing markers alone and in co-expression (SP with NR2B, SP with NGF, NR2B with NGF, or All), from all participants, and from men and women separately, on days 0 and 4. All = SP with NR2B and NGF. *Significant differences between men and women (Bonferroni; P < 0.05) within day 0 or day 4.

Injury induced by extraction of maxillary molars increases the number of SP-immunoreactive neurons in the trigeminal ganglion 46. Moreover, NGF increases the expression of SP by masseter ganglion neurons in female rats 11. In rats, it has been shown that NGF can increase the level of SP at the dorsal root ganglion 49. Thus, NGF contributes to central sensitization partly by increasing the release of neuropeptides, including SP 47. The current study is, to our knowledge, the first to show an increase in the frequency of nerve fiber expression of SP as a consequence of NGF and glutamate injections into human muscles. This finding may suggest the involvement of this nociceptive peptide in peripheral sensitization. However, a recently published article demonstrated a lack of change in plasma SP levels in patients with TMD compared to healthy controls 32. Moreover, injection of SP into the temporalis muscle was not painful, and injection into the tibialis muscle failed to produce hyperalgesia to PPT 48,49. Large-scale trials targeting SP antagonism in humans failed to successfully reduce pain 50. These contradictory results could be attributed to differences between rats and humans in terms of the contribution of SP to pain mechanisms, as in humans SP can contribute to other biological processes such as vasodilatation and inflammation 51,52. It is still possible that SP acts to enhance glutamate-induced mechanical sensitization of the masticatory muscles in humans 49,53. There is some evidence which suggests that NGF modulates both the expression and function of NMDA receptors to alter the response properties of masseter muscle sensory afferent nerve fibers. NGF injection into the masseter muscle of healthy participants induced mechanical sensitization that lasted up to 3 weeks 13,16. In rats, there is evidence that NGF-induced masseter muscle sensitization is caused by increased expression of NMDA receptors by sensory afferent fibers 11. Injection of glutamate can also evoke masseter muscle sensitization through the activation of peripheral NMDA receptors 12,54. Injection of glutamate into masseter muscles pretreated with NGF failed to cause an increase or decrease in PPT 15, which could be an indication that glutamate and NGF share a common pathway, thus preventing additional sensitization when these substances are used together 13. It is also possible that NGF produced a ceiling effect on PPT, so that additional sensitization by glutamate was not possible. The current study showed that experimental myalgia induced by NGF and glutamate can increase the density of masseter sensory afferent nerve fibers as well as the expression of NR2B and NGF. This finding supports the suggested interaction of NGF and NMDA receptors 11 and may indicate their involvement in the peripheral mechanism that underlies muscle sensitization. Earlier studies showed that an injection of NGF into the masseter muscle reduced the PPT more in women than in men 16,55. The reduction of the PPT in women was associated with an increased expression of NMDA receptors by sensory afferent nerve fibers 56.
It has also been reported that injection of glutamate into the masseter muscle produces more intense pain in women than in men 40,57. In female rats, increased estrogen levels are associated with a higher expression of NR2B subunit-containing NMDA receptors by masseter ganglion neurons and an increased sensory afferent nerve fiber discharge evoked by injection of NMDA into the masseter muscle 54. NGF-induced mechanical sensitization has been suggested to be due to the greater co-expression of SP with NR2B in masseter ganglion neurons of female as compared with male rats 11. Consistent with these animal reports, the current study showed sex differences related to the putative sensory afferent nerve fiber expression of NR2B subunit-containing NMDA receptors as well as NGF. Hence, one can speculate that the sex-related differences in NGF-induced muscle sensitization and glutamate-evoked muscle pain are similar in rats and humans, and thereby also attributable to the sensory afferent nerve fiber expression of NR2B and NGF. However, the present study found no association between the expression of NMDA receptors by nerve fibers and the change in PPT in women. The lack of association in the current study, compared with our previous study 56, is not unexpected, as no sex-related differences in masseter muscle sensitization were detected after injection of glutamate into the masseter muscle 8,40. In a previous study, we presented a positive correlation between the peripheral nerve fiber expression of NMDA receptors and pain characteristics after glutamate injection into the masseter muscle 56. The present experimental pain model also demonstrates a significant association between the nerve fiber expression of peripheral NMDA receptors and pain characteristics (pain intensity, chewing-evoked pain, and temporal summation pain) reminiscent of the pain complaints of individuals with TMD myalgia. Masticatory muscle pain and fatigue induced by chewing are common signs and symptoms of TMD myalgia 58. The peak pain intensity reported by healthy participants after injection of glutamate into the masseter muscle is similar to the peak pain reported by patients with TMD 7. Temporal summation of fixed mechanical stimulation can reflect central sensitization in an experimental pain model and has been shown to be dependent on the activation of NMDA receptors 59. Temporal summation pain reported by patients with TMD is higher than in healthy controls 60,61. Together, these findings strengthen the theory that NMDA receptors expressed peripherally by sensory afferent fibers and/or in the central nervous system play a role in the pathogenesis of TMD myalgia 12,33,62, and that increasing NMDA receptor expression may make the muscle more sensitive to pain. A possible limitation of this study is that the number of sensory afferent fibers within muscle tissues was likely underestimated, as the expression of other neuropeptides associated with sensory afferent fibers was not assessed. For example, calcitonin gene-related peptide (CGRP) is thought to play an important role in nociception 63 and is highly expressed by masseter muscle sensory afferent nerve fibers in rats 11. Indeed, co-expression of CGRP and NMDA receptors was also increased in female rats after masseter muscle injection of NGF. If NGF exerts a similar effect on these sensory fibers in humans, then our study also likely underestimated the increase in NMDA receptor expression that could be induced by NGF.
Unfortunately, at the time this study was conducted, there was no available CGRP antibody compatible with the NMDA and PGP 9.5 antibodies for human tissue. Future studies will be required to address this question.
Conclusions
Injections of NGF or glutamate are useful experimental models for the investigation of TMD myalgia 16. The current study has demonstrated that cellular and molecular changes occur after combined injection of glutamate and NGF into the human masseter muscle. The combined injections increased the density of SP-expressing nerve fibers as well as the expression of NMDA receptors and NGF by putative sensory afferent nerve fibers. This increase in expression was significantly greater in women than in men. Further, the expression of NMDA receptors by putative sensory afferent nerve fibers was significantly associated with increased pain during functional activity and increased muscle pain sensitivity in both men and women. Hence, and in accordance with previous animal findings 11,12, the current human study concludes that NMDA receptors and NGF expressed by peripheral putative sensory afferent nerve fibers appear to play an important role in the mechanisms of muscle sensitization and in the sex-related differences in pain reports associated with this model of masseter muscle myalgia. However, whether these results are relevant to the mechanisms of clinical muscle pain, for example, in myofascial TMD, is an open question that warrants further investigation.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Magnetically Bioprinted Human Myometrial 3D Cell Rings as a Model for Uterine Contractility
Deregulation of uterine contractility can cause common pathological disorders of the female reproductive system, including preterm labor, infertility, inappropriate implantation, and irregular menstrual cycle. A better understanding of human myometrium contractility is essential to designing and testing interventions for these important clinical problems. Robust studies on the physiology of human uterine contractions require in vitro models utilizing a human source. Importantly, uterine contractility is a three-dimensionally (3D)-coordinated phenomenon and should be studied in a 3D environment. Here, we propose and assess for the first time a 3D in vitro model for the evaluation of human uterine contractility. Magnetic 3D bioprinting is applied to pattern human myometrium cells into rings, which are then monitored for contractility over time and as a function of various clinically relevant agents. Commercially available and patient-derived myometrium cells were magnetically bioprinted into rings in 384-well format for high-throughput uterine contractility analysis. The bioprinted uterine rings from various cell origins and patients show different patterns of contractility and respond differently to the clinically relevant uterine contractility inhibitors indomethacin and nifedipine. We believe that the novel system will serve as a useful tool to evaluate the physiology of human parturition while enabling high-throughput testing of multiple agents and conditions.
Introduction
The uterus is an organ of the female reproductive system. It is a hollow organ with three main layers: a well-differentiated endometrial lining; a thick smooth muscle layer, known as the "myometrium"; and an outer serosal layer [1][2][3]. The myometrium is the main layer responsible for uterine contractions. Uterine contractions are very important for multiple reproductive functions, such as the menstrual cycle, the transport of sperm and embryos, pregnancy, and parturition [4,5]. Deregulation of uterine contractility can underlie common pathological disorders, including preterm labor and premature birth, infertility, abnormal implantation, and irregular menstrual cycle [6][7][8]. Moreover, coordinated activity of uterine myometrial cells is required for the initiation and progression of a successful labor course [9,10]. On the other hand, if uterine contractility is impaired, it significantly affects the progression of normal labor [2]. In the past few decades, there has been progress in shedding more light on the physiology of endometrial functions in normal and pathological conditions. A better understanding of endometrial functions and their regulation resulted in the development of several important interventions in the areas of conception, contraception, and normalization of menstrual function [3]. However, although the importance of abnormal uterine contractility is well acknowledged, there has been rather insufficient research focusing on the role of the uterine myometrium in common disorders of the female reproductive system. A better understanding of human myometrium physiology and contractility is essential to designing and testing interventions that can prevent or treat the important clinical problems noted above. However, the challenge is to identify an assay that accurately and efficiently models uterine contractility.
In vivo models are, in general, costly, low-throughput, and time-consuming [11], but more importantly, there are sharp differences between species in birthing patterns, reflecting different biological bases, that render these assays poorly predictive [12]. Ex vivo tests, such as organ chamber systems, can be accurate predictors of uterine contractility, yet suffer from sample inconsistencies, scarcity, and equipment costs [13][14][15]. An in vitro cell culture model is a viable alternative that overcomes issues not only with scarcity and cost, but also with reproducibility and throughput. However, the majority of in vitro cell culture models are two-dimensional (2D) monolayers that poorly mimic native tissue environments. For one, the typical plastic or glass substrates are much stiffer than in vivo tissue, let alone uteri. These models also misrepresent extracellular matrix (ECM) structure and composition, as well as the cell-cell and cell-ECM interactions that the ECM supports. Lastly, biochemical and nutrient access is far different in 2D than in vivo, as every cell is uniformly exposed to the surrounding environment [16][17][18]. A potential solution lies in three-dimensional (3D) cell culture models that can recreate tissue structure and ECM composition in vitro. A variety of different 3D cell culture platforms exist: protein gels, such as Matrigel® (BD Biosciences, San Jose, CA, USA) and collagen, that recreate ECM composition [19]; polymer scaffolds that reproduce tissue structure and material properties [20]; hanging-drop spheroids that use the surface tension of liquid droplets to aggregate cells into spheroids [21,22]; round-bottom plates that use plate geometry to aggregate cells in spherical-bottom wells [23]; and nano-patterned plates, with features <1 µm patterned within the well, where cells aggregate. While these systems approximate tissue environments to varying degrees, they have technical and cost limitations, such as long fabrication times, specialized equipment, and either attachment to stiff substrates that influences cell behavior or detachment that makes 3D cultures difficult to handle. Moreover, these platforms can only create spheroids, not more complex patterns such as rings or strips that would better approximate uterine smooth muscle contractility. Collagen gels could be a solution to recapitulate the collagenous myometrium in cylindrical or strip form, but these gels require a complex fabrication process that limits throughput. Thus, there is a need for a 3D cell culture platform that can create complex patterns with speed and ease to recapitulate uterine contractility in vitro. To fill in these gaps in our knowledge of uterine physiology, we propose here, for the first time, a three-dimensional (3D) in vitro model of human uterine myometrial cells for the evaluation of baseline uterine contractility physiology and of contractility as a function of various pathological conditions affecting pregnant women's health. The human myometrial cells are magnetically bioprinted into hollow rings (similar to cross-sections of the uterus, a hollow organ), and their contractility is evaluated over time and as a function of tocolytic agents that have been shown to affect uterine contractions clinically. We believe that the novel system introduced in this work will serve as a valuable tool for the evaluation of the physiology of human uterine contractions, with the ability for high-throughput testing of multiple agents and conditions simultaneously.
Bioprinting Commercially Available Human Myometrium Cells
In order to design an in vitro system for the evaluation of human uterine (myometrial) contractility, we initially determined the optimal conditions of the assay, including the cells/ring (well) concentration and the times of levitation and printing. Using commercially available smooth muscle cells (SMC)-A and SMC-B, we found that magnetized myometrial cells can be bioprinted into competent rings at or above 50,000 cells/ring. However, our imaging tools, namely an iPod (Apple Computer, Cupertino, CA, USA) and analytical software, could only detect full rings with at least 100,000 cells/ring, due to the cell density. Thus, 100,000 cells/ring was the concentration used for the remainder of the experiments. Figure 1a schematically shows the process flow diagram and Figure 1b presents iPod images of SMC-A cells bioprinted at various cell densities.
Figure 1. Full rings were detectable by the software starting at 100,000 cells (100 K)/ring, or 40,427 cells/mm², which was used as the cell density for this assay. Scale bar = 5 mm. (c) The contraction of SMC-A ring area, as measured in pixels, as a function of time at various cell densities. The rings contracted immediately after printing, suggesting that the levitation and printing times of 2 and 1 h, respectively, were sufficient to produce a contractile ring. Cell/ring values in the legend are in thousands. (d) Tocolytic effect of indomethacin on myometrial SMC contractility, as detected with the iPod-driven imaging system; scale bar = 1 mm.
When we allowed these rings to contract, we found that they contracted immediately after printing (Figure 1c,d and Figure S1). This is in agreement with the nature of smooth muscle cells and suggests that cells within the ring rearranged into a contractile state during levitation and printing, such that the ring could contract once the magnet was removed. These results show that 2 h of levitation time and 1 h of printing time were sufficient to print contractile rings, as was also shown previously for endothelial smooth muscle cells and other cell types [24][25][26]. These parameters were used for subsequent experiments.
Bioprinting Freshly Excised and Cryopreserved Patient-Derived Myometrium Cells
Next, we tested the response of the bioprinted uterine rings to commonly used agents that inhibit uterine contractility. For this purpose, we used tocolytic drugs, which are used clinically to inhibit uterine contractions during preterm labor, thus preventing preterm birth [27]. The most commonly used tocolytics are indomethacin (which acts by inhibiting prostaglandin production) and nifedipine (a calcium channel blocker). Ibuprofen was used as a control drug for two reasons: (1) it has no effect on uterine contractility in the clinical setting; and (2) it has been previously shown to have an effect on other smooth muscle cells [24]. The drugs were added to the 3D bioprinted myometrial rings at various concentrations, and ring contractility was measured. Myometrial smooth muscle cells derived from three patients were bioprinted into rings at a concentration of 1 × 10⁵ cells/well in a 384-well plate by first levitating the cells for 2 h and then printing the cells into rings for 1 h, as described above. These rings were dosed with varying concentrations of the clinically used tocolytics, indomethacin and nifedipine, and ibuprofen as a control. Once printed and removed off the magnet, the rings began to contract immediately, as evidenced by the quick drop in ring area in the negative control rings (Figure 2). As can be seen from Figure 2, all tocolytics inhibited the contraction of the bioprinted myometrial rings, and the time-response was concentration-dependent. Interestingly, cells from different patients reacted differently to the three drugs.
Figure 2. The contraction profiles of myometrial smooth muscle rings from different patients exposed to varying concentrations of different compounds. The myometrial smooth muscle rings were contractile, as shown by the sudden drop in area of the negative controls. As expected, the clinically used tocolytics indomethacin and nifedipine had an inhibitory effect on myometrial smooth muscle ring contractility by slowing contraction or, in the case of both indomethacin and nifedipine, nearly stopping it. Overall, this assay detected dose-dependent effects on uterine contractility. Each concentration (dose) in the range of 0-1 µM is represented by a different color on the graphs, as indicated in the legend.
Using the change in ring area after 2 h of contraction as the endpoint, a significant effect of concentration was found for the tocolytic agents, but not for ibuprofen, for Patients 2 and 3. As the data in Figure 3 show, indomethacin and nifedipine had relaxant effects on contraction, nearly stopping it. As expected, the dose responses and half-maximal inhibitory concentration (IC50) values varied between patients (Table 1). Overall, this new myometrial contractility assay is able to detect dose-dependent and patient-dependent uterine responses.
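Although the authors performed the curve fitting in OriginPro, the analysis described here (a 2 h ring-area endpoint normalized between the maximum and minimum contractions, fitted to a sigmoidal curve to extract the IC50) can be sketched in Python. The Hill parameterization, starting guesses, and dose-response values below are illustrative assumptions, not the authors' code or data.

```python
# Sketch of the IC50 extraction described above; the Hill equation is one
# common "sigmoidal curve" (the authors used OriginPro, not this code).
import numpy as np
from scipy.optimize import curve_fit

def hill_curve(dose, bottom, top, ic50, hill):
    """Sigmoidal dose-response; `ic50` is the half-maximal dose."""
    return bottom + (top - bottom) * dose**hill / (ic50**hill + dose**hill)

# Hypothetical endpoints: change in ring area at 2 h, normalized between the
# maximum and minimum contraction, at eight doses (uM) of one tocolytic.
doses = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 1.0])
inhibition = np.array([0.02, 0.05, 0.10, 0.25, 0.48, 0.72, 0.90, 0.97])

popt, _ = curve_fit(hill_curve, doses, inhibition, p0=[0.0, 1.0, 1e-2, 1.0])
print(f"IC50 ≈ {popt[2]:.3g} uM")  # ~0.01 uM for this synthetic data
```

Fitting each drug and patient separately in this way would reproduce the patient-to-patient IC50 spread reported in Table 1.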
We further tested cryopreserved cells from the same patients and found that, after one month, the responses to the tocolytic agents were maintained. As an example, Figure S2 shows a comparison of the time- and dose-dependent contractility profiles of patient-derived uterine cells.
Discussion
As mentioned above, robust studies to better understand the physiology of human uterine contractions cannot be performed in vivo and require in vitro models that utilize a human source. Moreover, since tissue contractility is a phenomenon that occurs in a three-dimensional environment enabling cell-cell interactions, it has to be evaluated in the same manner. In this work, we proposed and evaluated a 3D cell culture platform that could potentially overcome the above limitations to build a uterine contractility assay, based on magnetic 3D bioprinting [28][29][30][31]. The principle behind magnetic 3D bioprinting is the magnetization of cells and their aggregation with magnetic forces to form and pattern 3D cell culture models. Cells are magnetized by incubation with a biocompatible nanoparticle assembly consisting of gold, iron oxide, and poly-L-lysine [31]. These cells can then be aggregated using magnetic forces, particularly into 3D patterns, such as spheroids or hollow rings, at the bottom of a well in a multi-well plate [24,25,[28][29][30][31][32]. Once aggregated, these cells interact and build ECM to recapitulate native tissue environments [29,31]. Neither the nanoparticles nor the magnetic forces have any deleterious effects on cell behavior [24][25][26][27][28][29][30][31][32][33]. Using this platform, we have designed a uterine contractility assay that is simple yet robust and predictive. We successfully bioprinted myometrial rings in 384-well format (Figure 1) and were able to follow changes in ring area, the pharmacodynamic sign of contraction of the 3D structures, over time. Consistent with smooth muscle cell physiology, the bioprinted hollow myometrial rings were found to contract immediately after printing (Figure 1 and Figure S1). Exposure to the tocolytic compounds indomethacin and nifedipine, clinically used for inhibition of myometrial contractions, affected the contraction of the myometrial rings in a dose-dependent manner (Figure 2). On the other hand, the response of the bioprinted rings to ibuprofen, which has no effect on uterine contractions in the clinic, was negligible. By testing varying dosages, an efficacy profile for the tocolytic drugs can be assessed (Figure 3 and Table 1).
Using the same principle, we have previously developed assays for wound healing in rings [24] and toxicity in spheroids [32]. We have shown that the uterine rings can be bioprinted from primary cells obtained from patients during a cesarean section. It is noteworthy that, using a small tissue biopsy that can be easily obtained in a clinical setting, we were able to assess a multitude of agents and conditions in a high-throughput manner. These findings show the feasibility of using the 3D uterine contractility assay in the future for the personalization of therapies for uterine contractility disorders, such as preterm labor, infertility, inappropriate implantation, irregular menstrual cycles, and others. Moreover, while there is no doubt that this novel assay should be validated and compared to relevant clinical data, the fact that myometrium samples from various patients responded differently to agents commonly used in clinical settings can shed more light on the individual physiology of uterine contractions and the consequent pathologies related to them. Interestingly, freezing had no significant effect on the contractility profiles, and freshly excised cells had dose and time response curves similar to those of cells from the same patient that had undergone a freezing and thawing cycle (Figure S2). In conclusion, we believe that, by using 3D bioprinting of human myometrial cells, we will address an unmet need for high-throughput evaluation of uterine contractility for basic, translational, and clinical research. We anticipate that this in vitro myometrial contractility assay can shed more light on the physiology of human uterine contractions and can be used as a valuable tool in the clinical setting to personalize therapies for uterine contractility disorders.
Commercially Available Human Uterine Smooth Muscle Cells
Human uterine smooth muscle cells (HUtSMCs) were obtained from PromoCell GmbH (Heidelberg, Germany). The cells are referred to in the manuscript as SMC-A and SMC-B (PromoCell catalogue numbers C-12575 and C-12576, respectively). The cells were grown in PromoCell smooth muscle cell medium with 1% penicillin/streptomycin (P/S, Sigma, St. Louis, MO, USA). These cells were cultured in an incubator (37 °C, 5% CO2) with daily media exchange for the first 3 days, and every other day thereafter. Prior to magnetic bioprinting, the cells were assessed for viability (CellTiter-Glo, Promega, Madison, WI, USA).
Patient-Derived Human Uterine Myometrial Cells
Primary human uterine smooth muscle cells (SMCs) were obtained from uterine biopsies from women undergoing scheduled cesarean section at term gestation (greater than 37 weeks of pregnancy) who had given written informed consent according to the Institutional Review Board (IRB)-approved protocol (University of Texas Health Science Center at Houston, HSC-MS-14-0370). Women with more than 3 contractions per hour, rupture of membranes, placenta previa, known infections, or uterine leiomyomas, and women under the age of 18, were excluded. Biopsies were taken from the upper edge of the lower segment of the transverse uterine incision (2 × 2 × 4 cm) and placed in Hank's balanced salt solution (HBSS) without Ca2+ and Mg2+. The biopsies were then prepared into a cell culture [34]. The biopsies were finely minced and digested into cells with 0.1% trypsin (Sigma) and 0.1% DNAse (Sigma) in HBSS for 30 min in a shaking incubator (37 °C).
After centrifugation (400×g for 5 min), the enzymes were replaced with 0.2% collagenase Type I (Sigma) in HBSS to digest for another 30 min in a shaking incubator. The resulting cell and tissue suspension was then filtered and centrifuged, and the cells were resuspended in Roswell Park Memorial Institute (RPMI) 1640 medium (Sigma) with 10% fetal bovine serum (FBS, Sigma) and 1% penicillin/streptomycin (P/S). These cells were then seeded and cultured as described above for the primary human uterine smooth muscle cells [34].
Cryopreservation of Cells from Uterine Samples from Patients
For freezing (cryopreserving) the cells, we used different conditions: (a) flash freezing, in which the tissue was transferred immediately to a liquid nitrogen tank for long-term storage; (b) slow freezing, in which the tissue was frozen stepwise at 4 °C for 20 min, at −80 °C overnight, and then in liquid nitrogen; and (c) the cryobox method, in which the tissue was placed immediately into a CoolCell (Biocision, San Rafael, CA, USA) to freeze overnight at −80 °C and was then transferred into liquid nitrogen. In all three cases, the cryoprotectant was 10% dimethylsulfoxide (DMSO) in SMC medium. The remaining unfrozen tissues were immediately harvested for cells as a control. After one month of storage, the tissues were thawed, the cryopreservation medium was replaced with HBSS without calcium and magnesium, and the tissues were finely minced and prepared into a cell culture as described above. Based on the viability assay (CellTiter-Glo, Promega, Madison, WI, USA), we found that cryopreserving tissue using the cryobox method was the most efficient method, yielding >90% viable cells compared with freshly processed tissues. Thus, the dose-response experiments on cryopreserved (frozen) samples reported here are based on this method.
Magnetic 3D Bioprinting of Human Myometrial Cells
SMCs were magnetically 3D bioprinted into rings for this uterine contractility assay. SMCs were printed in a similar manner to a previous study using primary human tracheal SMCs [24]. Briefly, monolayers of SMCs at 70%-80% confluence were magnetized by adding a magnetic nanoparticle assembly (NanoShuttle, NS, Nano3D Biosciences, Houston, TX, USA) at a concentration of 1 µL per 1 × 10⁴ cells for static incubation overnight. The method of cell magnetization was previously described in detail for other cell types [24][25][26]. The next day, the magnetized SMCs were detached, counted, and resuspended into cell-repellent 6-well plates (Greiner Bio-One, Frickenhausen, Germany) at a concentration of 3.2 × 10⁶ cells/well in 2 mL of media (1.6 × 10⁶ cells/mL). These SMCs were then levitated off the well bottom, where they aggregate and form ECM endogenously, by placing a magnetic levitation drive of six neodymium magnets atop the plate. Based on our prior publications on levitation and bioprinting of other cell types, ECM is produced by the cells starting from 30 min of levitation [25,35]. After 2 h of levitation, the SMCs were resuspended in media and then redistributed into cell-repellent 384-well plates (Greiner Bio-One) at a concentration of 1 × 10⁵ cells/well in 80 µL of media (1.25 × 10⁶ cells/mL). We used a levitation time of 2 h because, at later times, SMCs formed very tight structures that could not be dispersed for bioprinting.
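As a minimal sketch of the seeding arithmetic in this protocol (the 1 µL per 10⁴ cells NanoShuttle ratio and the stated volumes come from the text; the helper functions are hypothetical names of ours), the stated concentrations can be verified as follows.

```python
# Sketch of the seeding arithmetic described above (helper names are ours).
import math

def nanoshuttle_volume_ul(n_cells: float, ratio_ul_per_10k: float = 1.0) -> float:
    """NanoShuttle volume for a given cell count, at 1 uL per 1e4 cells."""
    return n_cells / 1e4 * ratio_ul_per_10k

def seeding_concentration(n_cells: float, volume_ml: float) -> float:
    """Cells per mL for a given seeding volume."""
    return n_cells / volume_ml

# Levitation step: 3.2e6 cells/well in 2 mL -> 1.6e6 cells/mL (as stated).
assert math.isclose(seeding_concentration(3.2e6, 2.0), 1.6e6)
# Printing step: 1e5 cells/well in 80 uL (0.08 mL) -> 1.25e6 cells/mL (as stated).
assert math.isclose(seeding_concentration(1e5, 0.08), 1.25e6)
print(nanoshuttle_volume_ul(3.2e6))  # 320 uL NanoShuttle per 6-well levitation well
```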
The SMC rings were then printed by placing the plate of cells atop a magnetic drive of 384 ring-shaped magnets (0.125 OD × 0.0625 ID) for 1 h to attract the SMCs to the bottom of the wells and form 1 ring/well.
Myometrial Smooth Muscle Cell Ring Contractility Assay
The contraction of magnetically 3D bioprinted SMC rings was used to assess the efficacy of various tocolytics. After printing the SMC rings, compounds, diluted from stock in media, were added to the wells. Three drugs were tested at eight concentrations in triplicate: indomethacin (Sigma) and nifedipine (Sigma), which are used clinically, and ibuprofen (Sigma), which served as a control. Negative control wells received media with 1% DMSO (Sigma). Once the compounds were added, the plate of SMC rings was removed from the magnets to allow the cells to contract and moved immediately to an imaging system utilizing a mobile device (iPod, Apple Computer, Cupertino, CA, USA), as was done in previous studies. The mobile device was programmed using an app (Experimental Assistant, Nano3D Biosciences) to image the plate every 30 s for 10 h. Once the SMC rings finished contracting, the images were moved from the mobile device to a separate computer, where they were batch-analyzed to measure ring area over time using custom image analysis software written in the Python programming language. The endpoint used to assess the tocolytic dose responses was the change in ring area after 2 h, normalized between the maximum and minimum contractions. The dose response was fit to a sigmoidal curve (OriginPro, OriginLab, Northampton, MA, USA), and the IC50 was obtained from the curve.
Statistical Analysis
The contraction data from the uterine smooth muscle cell ring assay were statistically analyzed using ANOVA tests (OriginPro, Northampton, MA, USA): one-way for the effect of drug concentration, and two-way for the effects of drug concentration and cell source. Significance was defined as p < 0.05. Error bars represent standard error.
Conflicts of Interest: Nano3D Biosciences, The University of Texas Health Science Center at Houston (UTHSC), and Houston Methodist Research Institute (HMRI), along with their researchers, have filed a patent on the technology and intellectual property reported here. If licensing or commercialization occurs, the researchers are entitled to standard royalties. Glauco R. Souza has equity in Nano3D Biosciences, Inc. Jacob A. Gage and Hubert Tseng hold stock options in Nano3D Biosciences. UTHSC and HMRI manage the terms of these arrangements in accordance with their established institutional conflict-of-interest policies.
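The batch ring-area measurement described in the Methods above relied on custom Python software that is not public. A minimal sketch of the general approach (thresholding time-lapse frames and counting ring pixels) might look like the following; the file-naming pattern, the threshold value, and the dark-ring-on-light-background assumption are ours, not the authors'.

```python
# Sketch of a batch ring-area measurement over a time-lapse image series.
import glob
import numpy as np
from PIL import Image

def ring_area_px(path: str, threshold: int = 100) -> int:
    """Count pixels darker than `threshold` as ring area (cells appear dark)."""
    gray = np.asarray(Image.open(path).convert("L"))
    return int((gray < threshold).sum())

# One frame every 30 s for 10 h -> up to 1200 frames per well.
frames = sorted(glob.glob("well_A1_*.png"))
areas = [ring_area_px(f) for f in frames]
# Normalize to the initial (fully open) ring so wells are comparable.
norm = [a / areas[0] for a in areas] if areas else []
```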
miR-625 reverses multidrug resistance in gastric cancer cells by directly targeting ALDH1A1
Background: microRNAs (miRNAs) are emerging as critical regulators of multidrug resistance (MDR) in gastric cancer, a major cause of chemotherapy failure. miR-625 is downregulated in gastric cancer and negatively associated with metastasis. In the current study, we aimed to investigate whether miR-625 regulates MDR in gastric cancer. Methods: The level of miR-625 in gastric cancer cells with or without MDR was quantified by quantitative reverse transcription PCR (qRT-PCR) analysis. The sensitivity of gastric cancer cells to chemotherapeutic agents was assessed by MTT assay. Protein expression was determined by Western blot analysis, and a luciferase reporter assay was applied to confirm miR-625 regulation of the potential target. Results: miR-625 is downregulated in MDR gastric cancer cells compared with their chemosensitive counterparts. In addition, miR-625 increases the sensitivity, and promotes the apoptosis, of gastric cancer cells treated with different chemotherapeutic agents. Moreover, miR-625 directly targets aldehyde dehydrogenase 1A1 (ALDH1A1), and importantly, restoration of ALDH1A1 expression rescues the effects of miR-625 on MDR in gastric cancer cells. Conclusion: miR-625 reverses MDR in gastric cancer cells by targeting ALDH1A1. Hence, our study identifies miR-625 as a novel regulator of MDR in gastric cancer cells and implicates its potential application for overcoming MDR in gastric cancer chemotherapy.
Introduction
Gastric cancer is the fifth most common malignancy and the second leading cause of cancer-related death in the world. 1 Most patients are diagnosed at an advanced stage or relapse after surgical resection, and systemic chemotherapy is currently the mainstay treatment for advanced gastric cancer. 2 However, in many cases, patients show a poor initial response or develop intrinsic or acquired resistance to chemotherapy, known as multidrug resistance (MDR), which is a major obstacle to effective chemotherapy and leads to a poor prognosis for gastric cancer patients. 3 The development of gastric cancer MDR is a very complicated process, and a large number of drug resistance-related molecules have been shown to play an important role, such as P-glycoprotein/ABCB1 and MRP1/ABCC1. 4 However, the mechanisms underlying gastric cancer MDR are still not well understood. microRNAs (miRNAs) are a group of small noncoding RNAs that posttranscriptionally regulate gene expression through translational inhibition and mRNA destabilization. 5 To date, the roles of several miRNAs in gastric cancer MDR have been investigated. For example, miR-15b and miR-16 modulate gastric cancer MDR by modulating apoptosis through targeting BCL2. 6 PTEN is a target of miR-19a/b, which mediates their promotive effect on MDR in gastric cancer. 7 Additionally, miR-106a induces MDR in gastric cancer by targeting RUNX3. 8 Moreover, miR-508-5p regulates MDR in gastric cancer by targeting ABCB1 and ZNRD1. 9 These studies suggest that miRNAs can regulate gastric cancer MDR by targeting different genes. In a recent study, miR-625 was found to be significantly downregulated and negatively correlated with lymph node metastasis in gastric cancer, and miR-625 was also shown to inhibit the invasion and metastasis of gastric cancer cells by targeting ILK. 10 However, to the best of our knowledge, whether miR-625 is associated with MDR in gastric cancer, and the underlying molecular mechanisms, have not been reported.
In this study, we investigated the regulation and functional role of miR-625 in gastric cancer MDR by taking advantage of SGC7901 cells and their MDR variants, SGC7901/VCR and SGC7901/ADR.
Cell lines and culture
The cell line SGC7901 was purchased from the Shanghai Cell Bank of the Chinese Academy of Sciences. The cells were cultured in RPMI-1640 medium (Invitrogen) supplemented with 10% fetal bovine serum (Invitrogen), 100 U/ml penicillin sodium, and 100 µg/ml streptomycin at 37 °C in an incubator with 5% CO2. The MDR variants of SGC7901, SGC7901/VCR and SGC7901/ADR, were established in our lab as described previously, 11 and were cultured with the addition of 1 mg/ml VCR and 0.5 mg/ml ADR, respectively, to maintain their MDR phenotype. All procedures were conducted in accordance with protocols approved by the Ethics Committee of Linyi People's Hospital.
qRT-PCR analysis
SGC7901 cells were harvested and total RNA was isolated using TRIzol reagent (Invitrogen). The expression of miR-625 was quantified by stem-loop RT followed by TaqMan PCR analysis as previously described, 12 using an All-in-One miRNA qRT-PCR Detection Kit (GeneCopoeia, Rockville, MD, USA) according to the manufacturer's protocols. The results were calculated by the 2^−ΔΔCT method 13 and normalized to U6 snRNA. ALDH1A1 mRNA expression was quantified by qRT-PCR analysis using the SYBR Green PCR Kit (Takara Bio Inc., Otsu, Japan), and the results were normalized to GAPDH. Each reaction was conducted in triplicate. Primer sequences are listed as follows: miR-625 GSP: 5ʹ-GCGGCAGACTATAGAACTTT-3ʹ; R: 5ʹ-CAGTGCGTGTCGTGGA-3ʹ; ALDH1A1 F: 5ʹ-AGGGGCAGCCATTTCTTCTCA-3ʹ; R: 5ʹ-CACGGGCCTCCTCCACATT-3ʹ.
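Because the 2^−ΔΔCT method underlies all of the relative expression values reported below, a minimal sketch of the calculation may help; the Ct values are invented for illustration, and the function name is ours.

```python
# Sketch of the 2^-ddCt relative-quantification method cited above (Ct values
# are invented; in the study, miR-625 was normalized to U6 and ALDH1A1 to GAPDH).

def rel_expression(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in sample vs. control, normalized to a reference."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_ctrl)

# Example: miR-625 in SGC7901/ADR vs. parental SGC7901, normalized to U6.
fold = rel_expression(ct_target_sample=28.5, ct_ref_sample=18.0,
                      ct_target_ctrl=25.0, ct_ref_ctrl=18.2)
print(f"relative miR-625 expression ≈ {fold:.2f}")  # <1 indicates downregulation
```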
In vitro drug sensitivity assay
SGC7901 cells were transfected with antagomir-625, and SGC7901/ADR and SGC7901/VCR cells were transfected with mimic-mir625, using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. Transfection of antagomir-NC and mimic-NC served as controls. Drug sensitivity was determined as previously described. 6 Briefly, at 48 h after transfection, 5 × 10³ cells were seeded into 96-well plates and then treated with different concentrations of chemotherapeutic agents, including vincristine (VCR), adriamycin (ADR), 5-fluorouracil (5-FU), and cisplatin (CDDP). At 48 h after treatment, a 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT; Sigma, St Louis, MO, USA) assay was performed, and the absorbance at 490 nm was recorded using a spectrophotometer. The concentration at which each drug produced 50% inhibition of growth (IC50) was calculated from the relative survival curve. Each treatment was performed with 5 replicates.
Flow cytometry analysis of apoptosis
SGC7901 cells were harvested and washed with PBS. Cell apoptosis was then assessed using an Annexin-V-FITC apoptosis detection kit (BD, Franklin Lakes, NJ, USA) in combination with flow cytometry analysis as previously described. 14 A FACSCalibur flow cytometer (BD Biosciences) was used, and the data were analyzed using FlowJo software (FlowJo, Ashland, OR, USA).
Luciferase activity assay
The 3ʹ-UTR of human ALDH1A1 cDNA containing the putative binding site for miR-625 was amplified by PCR and inserted downstream of the luciferase gene in the pGL3 vector (Promega, Madison, WI). A mutant ALDH1A1 3ʹ-UTR construct was generated using the QuikChange II Site-Directed Mutagenesis Kit (Stratagene, La Jolla, CA) according to the manufacturer's protocols. Both the wild-type (wt) and mutant (mut) constructs were confirmed by DNA sequencing. To determine luciferase activity, SGC7901 cells were plated at 1 × 10⁵ cells per well in 24-well plates. 200 ng of pGL3-wt-3ʹ-UTR or pGL3-mut-3ʹ-UTR plus 50 ng of pRL-TK Renilla luciferase (Promega, Madison, WI, USA) were cotransfected with 50 pmol of antagomir-NC, antagomir-625, mimic-NC, or mimic-mir625 using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. At 48 h after transfection, luciferase activity was measured using the Dual Luciferase Reporter Assay System (Promega). Firefly luciferase activity was normalized to that of Renilla luciferase. Each treatment was performed in triplicate.
Statistical analysis
Data are presented as the mean ± SEM. The two-tailed Student's t-test and one-way or two-way analysis of variance (ANOVA) were applied to compare the data. Statistical analysis was carried out using SPSS 11.0 software (SPSS Inc., Chicago, IL, USA). Differences were considered statistically significant if p < 0.05.
miR-625 expression is decreased in MDR gastric cancer cells
We established two gastric cancer cell variants from parental SGC7901 cells, SGC7901/ADR and SGC7901/VCR, which are resistant to treatment with adriamycin (ADR) and vincristine (VCR), respectively. 9 The resistant characteristics of SGC7901/ADR cells to ADR (Figure 1A) and SGC7901/VCR cells to VCR (Figure 1B) were confirmed by their robust cell viability under continuous drug treatment, compared with sensitive SGC7901 cells.
Figure 1. SGC7901/ADR (A) and SGC7901/VCR (B) cells were obtained from parental SGC7901 cells by stepwise selection with vincristine (VCR) and adriamycin (ADR) treatment, respectively. 1 mg/ml VCR and 0.5 mg/ml ADR were used to maintain their MDR phenotype. SGC7901/ADR cells were treated with 10 µg/ml ADR (A) and SGC7901/VCR cells were treated with 10 µg/ml VCR (B) for consecutive days as indicated. The parental SGC7901 cell line was used as a negative control. Drug sensitivity was determined by MTT assay. The percentage of viable cells is shown (%). Each treatment condition was performed in 5 replicates. (C and D) The expression of miR-625 in SGC7901/ADR cells (C) and SGC7901/VCR cells (D) was determined by qRT-PCR analysis. Results were normalized to U6 snRNA and expressed relative to SGC7901 cells. Each column represents the mean value from 3 replicates. Data are presented as the mean ± SEM. ANOVA with a post hoc Dunnett's test (A and B); two-tailed Student's t-test (C and D). **P<0.01.
To explore whether miR-625 is associated with MDR development in gastric cancer, we compared its expression in SGC7901 cells with that in SGC7901/ADR and SGC7901/VCR cells. As analyzed by quantitative reverse transcription PCR (qRT-PCR) assay, in comparison with SGC7901 cells, miR-625 expression was markedly downregulated in both SGC7901/ADR cells (Figure 1C) and SGC7901/VCR cells (Figure 1D). Therefore, these results may imply an inverse correlation between miR-625 expression and MDR development in gastric cancer.
miR-625 sensitizes MDR gastric cancer cells to chemotherapeutic agents
To establish whether miR-625 is functionally involved in MDR development in gastric cancer, we antagonized miR-625 in SGC7901 cells with the transfection of a specific antagomir (antagomir-625). Compared with the negative control antagomir (antagomir-NC), the miR-625 level was indeed silenced upon antagomir-625 transfection (Figure 2A).
We next treated these cells with commonly utilized chemotherapeutic agents for gastric cancer, including ADR, VCR, 5-fluorouracil (5-FU), and cis-diamminedichloroplatinum (CDDP). 15 An in vitro drug sensitivity assay, which evaluated the concentration at which each drug produced 50% inhibition of growth (IC50), showed that miR-625 inhibition in SGC7901 cells by antagomir-625 increased the IC50 values for all the tested chemotherapeutic agents (Figure 2B), suggesting that miR-625 inhibition increases MDR in SGC7901 cells, which parallels the miR-625 downregulation observed in MDR SGC7901 cells (Figure 1). Based on these results, we asked whether miR-625 restoration decreases MDR in SGC7901/ADR and SGC7901/VCR cells. To test this possibility, miR-625 was overexpressed in these cells by transfection with mimic-mir625, which was validated by qRT-PCR analysis (Figure 2C and D).
Figure 2. (A) SGC7901 cells were transfected with antagomir-625 or antagomir-NC. At 2 days after transfection, the miR-625 level was determined by qRT-PCR analysis. Results were normalized to U6 snRNA. The expression relative to that in the antagomir-NC group is shown. Each column represents the mean value from 3 replicates. (B) SGC7901 cells were transfected as in (A), and further treated with ADR, VCR, 5-FU, and CDDP for 2 days. Drug sensitivity was determined by MTT assay. The IC50 is shown. Each concentration treatment was performed in 5 replicates. (C and D) SGC7901/ADR (C) and SGC7901/VCR (D) cells were transfected with mimic-mir625 or negative control (mimic-NC). At 2 days after transfection, the miR-625 level was determined by qRT-PCR analysis. Results were normalized to U6 snRNA. The expression relative to that in the mimic-NC group is shown. Each column represents the mean value from 3 replicates. (E and F) SGC7901/ADR cells (E) and SGC7901/VCR cells (F) were transfected as in (C and D), and further treated with ADR, VCR, 5-FU, and CDDP for 2 days. Drug sensitivity was determined by MTT assay. The IC50 is shown. Each concentration treatment was performed in 5 replicates. Data are presented as the mean ± SEM. Two-tailed Student's t-test. **P<0.01; *P<0.05.
As expected, compared with mimic-NC, miR-625 overexpression mediated by mimic-mir625 transfection led to reduced IC50 values for the 4 chemotherapeutic agents in both SGC7901/ADR cells (Figure 2E) and SGC7901/VCR cells (Figure 2F). Therefore, these results indicate that miR-625 is able to reverse MDR in gastric cancer.
miR-625 promotes apoptosis in MDR gastric cancer cells treated with chemotherapeutic agents
The evasion of apoptosis induced by the cytotoxicity of chemotherapeutic agents is an important molecular mechanism of tumor resistance to chemotherapy. 16,17 We asked whether miR-625 reverses MDR in gastric cancer cells by promoting apoptosis. To address this issue, we checked the expression of apoptosis-associated markers, including Bax, Bcl-2, and cleaved caspase-3. 18,19 Western blot analysis revealed that miR-625 inhibition decreased the expression of Bax and cleaved caspase-3 and, meanwhile, increased Bcl-2 expression in SGC7901 cells treated with the 4 chemotherapeutic agents (Figure 3A), consistent with the increased survival rate and MDR of these cells (Figure 2B).
Conversely, miR-625 overexpression via mimic-mir625 transfection increased the expression of Bax and cleaved caspase-3 and decreased Bcl-2 expression in both SGC7901/ADR cells (Figure 3B) and SGC7901/VCR cells (Figure 3C) under treatment with the 4 chemotherapeutic agents, which is also in concert with the results shown in Figure 2E and F. Altogether, these observations suggest that miR-625 reverses MDR in gastric cancer by resensitizing cells to apoptosis induced by chemotherapeutic agents.
ALDH1A1 is a direct target of miR-625
miRNAs exert their versatile biological activities by regulating gene expression through targeting complementary mRNAs. 20 To elucidate how miR-625 reverses MDR in gastric cancer, its mRNA targets were predicted using the TargetScan tool. 21 Among these putative targets, aldehyde dehydrogenase 1A1 (ALDH1A1) attracted our attention (Figure 4A), since its overexpression is associated with the aggressiveness and poor prognosis of gastric cancer, 22 and it also plays a role in cancer drug resistance. 23 We confirmed whether ALDH1A1 is a direct target of miR-625 in SGC7901 cells by luciferase reporter assay. As shown, miR-625 silencing increased the luciferase activity of the wild-type but not the mutant form of the ALDH1A1 construct (Figure 4B). Conversely, miR-625 overexpression decreased the luciferase activity of the wild-type ALDH1A1 construct, with the mutant form unaffected (Figure 4C). These results prove that miR-625 can directly target ALDH1A1. In addition, and consistent with the downregulation of miR-625 in SGC7901/ADR cells (Figure 1C) and SGC7901/VCR cells (Figure 1D), both the mRNA level (Figure 4D) and protein level (Figure 4E) of ALDH1A1 were elevated in these cells. Furthermore, miR-625 silencing resulted in increased ALDH1A1 expression in SGC7901 cells (Figure 4F), and in agreement with this, miR-625 overexpression led to decreased ALDH1A1 expression in SGC7901/ADR cells and SGC7901/VCR cells (Figure 4G). Thus, these data demonstrate that ALDH1A1 could be a direct target of miR-625.
miR-625 reverses MDR in gastric cancer cells by suppressing ALDH1A1
Finally, we aimed to clarify whether ALDH1A1 contributes to miR-625 function in gastric cancer MDR. For this purpose, we depleted ALDH1A1 expression in SGC7901 cells using the small interfering RNA (siRNA) technique. Indeed, siRNA targeting ALDH1A1 resulted in a pronounced reduction of ALDH1A1 expression in SGC7901 cells (Figure 5A). Significantly, the increases in IC50 values for the 4 tested chemotherapeutic agents caused by miR-625 silencing were largely reversed by ALDH1A1 depletion (Figure 5B), proving that ALDH1A1 upregulation mediated by miR-625 silencing plays a critical role in increasing MDR in gastric cancer. Consistently, in SGC7901/ADR cells, the decrease in MDR caused by miR-625 overexpression was largely recovered when ALDH1A1 expression was restored by overexpression (Figure 5C and D). Furthermore, similar results were obtained in SGC7901/VCR cells (Figure 5E and F). Collectively, these lines of evidence establish ALDH1A1 as a critical target through which miR-625 reverses MDR in gastric cancer cells.
Figure 4. (B) SGC7901 cells were transfected with antagomir-625 or antagomir-NC in combination with a luciferase reporter construct containing the wild-type or mutant 3ʹ-UTR of ALDH1A1. At 2 days after transfection, luciferase activity was measured. Firefly luciferase activity was normalized to that of Renilla luciferase. Each treatment condition was performed in triplicate.
(C) SGC7901 cells were transfected with mimic-mir625 or mimic-NC in combination with a luciferase reporter construct containing the wild-type or mutant 3ʹ-UTR of ALDH1A1. At 2 days after transfection, luciferase activity was measured. Firefly luciferase activity was normalized to that of Renilla luciferase. Each treatment condition was performed in triplicate. (D) The mRNA level of ALDH1A1 in SGC7901/ADR and SGC7901/VCR cells was determined by qRT-PCR analysis. Results were normalized to U6 snRNA. The expression relative to that in parental SGC7901 cells is shown. Each column represents the mean value from 3 replicates. (E) The protein level of ALDH1A1 in parental SGC7901, SGC7901/ADR, and SGC7901/VCR cells was determined by Western blot analysis. β-actin was utilized as a loading control. Representative results from 3 independent experiments are shown. (F) SGC7901 cells were transfected with antagomir-625 or antagomir-NC. At 2 days after transfection, the protein level of ALDH1A1 was determined by Western blot analysis. (G) SGC7901/ADR cells and SGC7901/VCR cells were transfected with mimic-mir625 or mimic-NC. At 2 days after transfection, the protein level of ALDH1A1 was determined by Western blot analysis. β-actin was utilized as a loading control. Representative results from 3 independent experiments are shown. Data are presented as the mean ± SEM. Two-tailed Student's t-test. **P<0.01; NS, not significant.
Discussion
Acquired MDR is a major cause of chemotherapy failure during gastric cancer treatment. 24 Apart from the well-recognized drug-resistance ATP-binding cassette transporters, such as P-glycoprotein (P-gp) and MDR-associated protein 1 (MRP1), 25 recent studies have revealed that several new molecules and mechanisms are also associated with the development of MDR in gastric cancer, including some miRNAs. 4,26,27 For example, miR-15b and miR-16 influence gastric cancer MDR via modulation of apoptosis by targeting BCL2. 6 Therefore, the connection between miRNAs and gastric cancer MDR may provide novel therapeutic targets for overcoming gastric cancer MDR. In the present study, we report that miR-625 reverses gastric cancer MDR, with the targeting of ALDH1A1 constituting the predominant mechanism. Thus, we identify miR-625 as a novel miRNA regulator of gastric cancer MDR and also highlight an important role of ALDH1A1 in mediating miR-625 function. The SGC7901/VCR and SGC7901/ADR cell lines are two MDR variants derived from SGC7901 cells, and they are frequently used as experimental models in this research field. 28 The aberrant expression of some miRNAs has been observed in gastric cancer, and these dysregulated miRNAs have the potential to be used as biomarkers and therapeutic targets in gastric cancer. 29,30 In addition, abnormal expression of some miRNAs has also been found in clinical gastric cancer tissues with MDR, such as miR-30a. 31 Moreover, in SGC7901/VCR and/or SGC7901/ADR cells, multiple miRNAs display abnormal expression compared with sensitive SGC7901 cells, such as miR-19a/b, 7 miR-106a, 8 and miR-508-5p. 9 In the current study, we found that, in contrast to SGC7901 cells, miR-625 expression was markedly decreased in both SGC7901/VCR and SGC7901/ADR cells. Thus, we reveal for the first time a negative correlation between miR-625 expression and gastric cancer MDR. Notably, in a previous study, miR-625 expression was found to be significantly downregulated and inversely correlated with lymph node metastasis in gastric cancer. 10
Together with our findings, it appears that miR-625 is negatively associated not only with progression and metastasis but also with MDR development in gastric cancer, possibly suggesting that it is a tumor-suppressive miRNA in gastric cancer. In addition to drug resistance and metastasis, miR-625 also influences other cancer-related activities, including proliferation in esophageal cancer, 32 migration and invasion in hepatocellular carcinoma, 33 and metabolism in melanoma. 34 To better understand the association between miR-625 and gastric cancer, it would be necessary to investigate whether miR-625 plays a role in other activities of gastric cancer and how its expression is downregulated during tumorigenesis and MDR development. As described above, some miRNAs have been shown to reverse gastric cancer MDR. For example, miR-15b and miR-16 were reported to reverse gastric cancer MDR by promoting apoptosis via targeting the common molecule BCL2, a well-known anti-apoptotic regulator. 6 Besides, miR-508-5p and miR-129-5p reverse gastric cancer MDR by targeting ABCB1 and ZNRD1, 9 and ABC transporters, 35 respectively. These previous studies suggest that different miRNAs reverse gastric cancer MDR through distinct mechanisms. We found that miR-625 resensitized MDR gastric cancer cells to four chemotherapeutic agents, including ADR, VCR, 5-FU, and CDDP, and that this was accompanied by promoted apoptosis of gastric cancer cells, indicating that miR-625 reverses gastric cancer MDR by promoting the apoptosis induced by chemotherapeutic agents. It has been shown that miR-625 induces apoptosis and increases the chemosensitivity of glioma to temozolomide. 36 Moreover, miR-625 also promotes apoptosis and increases the chemosensitivity of lymphoblastic leukemia cells to vincristine and cytarabine. 37 Therefore, we speculate that promoting the cytotoxicity-induced apoptosis caused by chemotherapeutic agents may be a common mechanism by which miR-625 reverses chemoresistance. Given the similarity shared by these findings, it is likely that miR-625 may reverse chemoresistance in other cancer types; further efforts are needed to validate this speculation. It has been established that elevated expression and activity of ALDH1A1, which functions as a detoxifying enzyme, are important features of tumor-initiating and/or cancer stem cells in multiple types of cancer. 38 In fact, ALDH1A1 is involved in many biological processes, including the oxidative stress response, cell differentiation, and drug resistance. 39 ALDH1A1 mediates temozolomide resistance in glioblastoma 39 and confers gemcitabine resistance in pancreatic adenocarcinoma cells. 38 ALDH1A1 also induces resistance to CHOP in diffuse large B-cell lymphoma. 40 Mechanistically, we provide evidence demonstrating that ALDH1A1 is a direct target of miR-625 and that miR-625 reverses gastric cancer MDR by suppressing ALDH1A1 expression. Thus, our study extends ALDH1A1 function to gastric cancer MDR. A limitation of our study is the lack of molecular evidence elucidating how ALDH1A1 regulates MDR in gastric cancer. Previous studies have shown that the mechanisms underlying ALDH1A1-conferred chemoresistance are associated with the activation and upregulation of drug transporters and survival proteins, such as P-glycoprotein, AKT, and BCL2. 41,42 Therefore, it is quite possible that impairment of these mechanisms also contributes to the miR-625-promoted apoptosis and reversal of MDR in gastric cancer cells treated with chemotherapeutic agents.
It would be interesting to test whether this is the case. Nevertheless, other possibilities cannot be ruled out, since we discovered that ALDH1A1 restoration did not completely rescue the miR-625-reversed gastric cancer MDR. Beyond ALDH1A1, further efforts are required to discover other targets that mediate the effect of miR-625 on reversing gastric cancer MDR. Furthermore, investigating whether the miR-625/ALDH1A1 axis reverses MDR in gastric cancer in vivo, such as in a xenografted tumor model, and whether miR-625 and ALDH1A1 are associated with MDR in clinical gastric cancer samples, would not only strengthen our in vitro findings but also carry more profound significance for therapeutically exploiting this axis in the future. In conclusion, we reveal a novel role of miR-625 and ALDH1A1 in the modulation of gastric cancer MDR. Our findings suggest that upregulating the miR-625 level or directly targeting ALDH1A1 might provide clinical benefit for reversing gastric cancer MDR, thereby improving the effectiveness of chemotherapy for gastric cancer patients. Disclosure The authors report no conflicts of interest in this work.
Infection Prevention and Control in the Tropics Tropical settings present unique challenges to the practice of infection prevention and control. These are multi-faceted due to differences in the climate, culture, and social and political milieu of low- and middle-income countries situated in the tropics, as well as the lack of resources. The emergence of communicable diseases and low vaccination coverage also lead to nosocomial augmentation of community outbreaks, further increasing the economic burden of hospital management. Addressing these challenges requires innovative, low-cost, and tailored solutions suited to the tropical environment. Standard Infection Control Practices Standard infection control precautions ("Standard Precautions") apply to all patients, regardless of their reason for admission or infection status (Table 20.1). 2 The most important element of Standard Precautions is hand hygiene, which can be handwashing with soap and water or use of alcohol-based gels or foams that do not require water. Guidelines published by the HICPAC/SHEA/APIC/IDSA Hand Hygiene Task Force provide specific recommendations. 3 Alcohol-based hand rubs can be used where there is limited access to water. They have better acceptability, less skin irritation, and quicker application compared with soap and water, resulting in improved compliance. Commercially prepared products are available, but a low-cost gel can be prepared by hospital pharmacies using 20 mL of glycerin, propylene glycol, or sorbitol mixed with 980 mL of >70% isopropanol (see the dilution sketch below). Gels combining chlorhexidine and alcohol may be more effective than alcohol alone because of chlorhexidine's prolonged bactericidal effect, but they are expensive. They should be limited to situations when a high degree of hand antisepsis is necessary, such as before surgical procedures and placing invasive devices. Alcohol-based hand rubs should be combined with feedback and awareness messages and other basic infection control practices.

Table 20.1 Standard Precautions
Hand hygiene: employ after touching blood, body fluids, secretions, or contaminated items; immediately after removing gloves; and between patient contacts.
Personal protective equipment (PPE):
Gloves: for touching blood, body fluids, secretions, or contaminated items; for touching mucous membranes and non-intact skin.
Gowns: during procedures and patient care activities when contact of clothing/exposed skin with blood, body fluids, or secretions is anticipated.
Mask, eye protection: during procedures and patient care activities likely to generate splashes or sprays of blood, body fluids, or secretions.

With standardized surveillance definitions, rates of targeted HAIs can be calculated and compared across institutions, as well as before and after interventions. Although passive surveillance (based on clinical samples) is less costly and labor intensive, it can miss a reservoir of asymptomatic, colonized patients. Active surveillance involves screening asymptomatic patients for resistant organisms and can lead to rapid isolation of colonized patients. However, the patient populations that should be targeted for screening and the optimal screening method remain unresolved. Cost is another major limiting factor. Hospitals should assess what is feasible in their setting.
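As a worked check of the low-cost formulation above: mixing 980 mL of 70% v/v isopropanol with 20 mL of humectant yields a final alcohol concentration of about 68.6% v/v, still above the roughly 60% commonly cited as effective (that threshold is a general rule of thumb, not a figure from this chapter). A minimal sketch of the dilution arithmetic, with the batch scaling a pharmacy might use:

```python
def final_alcohol_pct(alcohol_ml: float, alcohol_pct: float, humectant_ml: float) -> float:
    """Final % v/v alcohol after adding a non-alcoholic humectant."""
    pure_alcohol = alcohol_ml * alcohol_pct / 100.0
    return 100.0 * pure_alcohol / (alcohol_ml + humectant_ml)

def scale_recipe(batch_litres: float, base_alcohol_ml: float = 980.0,
                 base_humectant_ml: float = 20.0) -> tuple[float, float]:
    """Scale the 1 L recipe (980 mL isopropanol + 20 mL glycerin) to a larger batch."""
    factor = batch_litres * 1000.0 / (base_alcohol_ml + base_humectant_ml)
    return base_alcohol_ml * factor, base_humectant_ml * factor

pct = final_alcohol_pct(980, 70, 20)   # ≈ 68.6% v/v
alcohol, glycerin = scale_recipe(10)   # 9800 mL isopropanol, 200 mL glycerin
print(f"{pct:.1f}% v/v; 10 L batch needs {alcohol:.0f} mL alcohol + {glycerin:.0f} mL humectant")
```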
To overcome surveillance shortcomings, the World Health Organization (WHO) has developed a low-cost, computer-based antimicrobial resistance (AMR) surveillance program (WHONET) that can monitor resistance trends and generate locally applicable guidelines on antimicrobial use. 4 An additional impediment to surveying resistant organisms is the lack of reliable culture and susceptibility data, as standardization and quality assurance of microbiology laboratories are not enforced in most developing countries. Burden of Health Care-Associated Infections in Developing Countries HAIs are a serious problem in high-income countries; 1.7 million cases and an estimated 100,000 deaths per annum are reported in the United States. 5 In a meta-analysis from low-income countries, HAI rates were found to be higher than in the United States or Europe, 6 and gram-negative bacilli were the most common nosocomial pathogens. High rates of gram-negative HAIs have been documented in neonatal nurseries in low-income countries, with rates threefold to twentyfold higher than in developed regions. 7,8 HAIs are responsible for increased morbidity and mortality, are a waste of precious resources, and subvert patient expectations of quality medical care. This increases negativity toward the health care system, especially because patients bear the costs of HAIs in many developing countries. Reducing the risk of HAIs in developing countries is a priority of the WHO. 9 Hospitals with inadequate vector control can amplify vector-borne illnesses such as malaria, dengue, leishmaniasis, and filariasis because of infected patients in an overcrowded environment. 14 Viral Hemorrhagic Fevers VHFs such as Lassa, Ebola, Marburg, and Crimean-Congo hemorrhagic fever present unique challenges for infection control measures. Nosocomial transmission can occur directly from the patient, when transferring the dead body, or through contact with infectious fluids, contaminated equipment, or needle stick injuries. Standard precautions combined with strict contact precautions and single-room isolation, especially for acutely bleeding patients or those with profuse diarrhea or vomiting, are recommended until discharge to prevent nosocomial transmission. 15 Patients can be cohorted in a designated area; failing that, they can be housed in a portion of a larger ward, in an uncrowded corner of a large hall, in rooms designated for airborne isolation, or in private rooms. In the Ebola virus outbreak of 2014, treatment centers were created for assessing, observing, and treating patients suspected of having Ebola. Health care workers should be specifically trained in caring for these patients, and other personnel should be restricted. Personal protective equipment (PPE) should include a scrub suit, gloves, and waterproof boots (if the floor is soiled), over which a disposable gown, plastic apron, thick gloves, fluid-resistant particulate respirator (FFP2- or EN-certified equivalent or U.S. NIOSH-certified N95), and protective goggles or face shield should be worn.
If this level of protection is not available, alternatives are old shirts for scrubs, washable cotton gowns for disposable gowns, plastic bags for boots, plastic sheets or plastic cloth for aprons, commercially available eyeglasses for eye protection, and plastic bottles modified for sharps disposal. In regions prone to VHF outbreaks, a VHF coordinator should be appointed to oversee preparations and response and to coordinate activity and mobilize communities for rapid control. 16 Risk Factors for Health Care-Associated Infections in Developing Countries Universal risk factors for HAIs include severity of underlying disease and factors associated with poor patient outcomes, such as malnutrition, length of hospital stay, inter-hospital transfers, use of invasive medical devices (intravascular devices, urinary catheters, intubation, and mechanical ventilation), surgery, and prolonged and/or broad-spectrum antimicrobial therapy. In developing countries there are multiple additional contributors (Table 20.3). These include lack of surveillance to control infections and outbreaks, inappropriate antibiotic use, non-adherence to infection control practices, inadequate sterilization of medical equipment, reuse of single-use devices, and reservoirs of infection in places such as contaminated food and water in the hospital. Staff training, adequately sterilizing equipment, and improving compliance with hand hygiene are easier to address than are overcrowding and understaffing. Crossover of Community Infections into Hospitals Several recent outbreaks in the tropics have mandated development of infection control strategies specific to the transmission dynamics of infectious agents (see Table 20.2). Outbreaks of cholera, measles, non-typhoidal Salmonella, and other fecal-oral-transmitted organisms have been reported. 12 Drivers of infections include overcrowding, improper patient isolation, the presence of visitors and outsiders, contaminated food products brought into the hospital, and infected hospital food-handlers. 13 A high prevalence of multidrug-resistant organisms is an additional concern. Control of vaccine-preventable airborne infections such as measles rests on maintaining herd immunity, vaccinating the non-immune, prompt diagnosis, and early institution of airborne isolation precautions. 21 Emerging Infections Emerging infections such as Zika, Chikungunya, West Nile virus, and plague (see Fig. 20.1) require vector control measures and transmission-based precautions in addition to standard precautions. Tuberculosis Individuals co-infected with HIV and tuberculosis (TB) have rapidly progressive disease. Those with pulmonary disease are highly infectious via aerosolized droplet nuclei, posing challenges for infection control and a risk to health care and laboratory workers. U.S. CDC guidelines recommend rapid diagnosis and treatment, isolation in negative-pressure rooms, and special masks to prevent nosocomial transmission, which are rarely feasible in resource-poor settings. However, early diagnosis and treatment, outpatient evaluation of suspected TB patients, a separate TB ward with adequate ventilation using exhaust fans and large open windows to allow ultraviolet (UV) rays from sunlight, early collection of samples, disinfecting sputum containers, and treating the sputum with household bleach can be applied. 22 The WHO has published guidelines to control TB transmission in health care settings. 23 Viral Respiratory Infections Respiratory viral illnesses with significant morbidity and a high transmission potential, such as MERS-CoV and influenza, need transmission-based precautions.
The Centers for Disease Control and Prevention (CDC) and WHO advise standard, contact, and airborne precautions for these patients. [17][18][19][20] Contact precautions should be continued for more than 24 hours after symptom resolution. Health care worker vaccination against influenza virus may only provide minimal protection against novel influenza strains and therefore may not be feasible to sustain in areas with limited resources. Device-Associated Infections Device-associated infections (DAIs) include central line (CL)-associated bloodstream infections, catheter-associated urinary tract infections, and ventilator-associated pneumonia. Invasive device use in developing countries has increased without prerequisite infection control measures, resulting in higher rates of DAIs than in industrialized countries. Surgical Site Infections Surgical procedures are associated with higher post-operative wound infection rates due to inadequate aseptic precautions. Although most data are anecdotal, surgical wound infections are reported to be as high as 12.5% in Vietnam and 19.6% in Kenya. Recent data also show that surgical site infection (SSI) rates are higher in warmer climates. 37 A surgical checklist developed by the WHO has reduced surgical mortality and morbidity by encouraging the use of simple measures by surgery, anesthesia, and nursing staff. Ensuring delivery of antibiotic prophylaxis in the operating room using verbal confirmation alone improved antibiotic prophylaxis compliance from 56% to 83%. Chlorhexidine-alcohol is the antiseptic of choice for pre-operative surgical-site skin cleansing and is superior to povidone-iodine in preventing post-operative wound infections. 38 Chlorhexidine-gluconate-based scrubs are more effective than povidone-iodine-based aqueous scrubs in reducing bacterial contamination on staff hands before operations. S. aureus-associated post-operative wound infections can be decreased by treating nasal carriers of S. aureus with pre-operative mupirocin nasal ointment and chlorhexidine soap. However, application in developing countries may be limited by the need to identify S. aureus carriers using rapid DNA detection. Unsafe Injections and Needle Stick Injuries Unsafe injections and sharps injuries are instrumental in transmitting blood-borne pathogens such as hepatitis B and C and HIV. 39,40 It is estimated that 16 billion syringes are sold worldwide each year, the vast majority in developing countries; injection rates vary from 1.7 to 11.3 per person per year. Up to 75% of these may be non-sterilized. Needle stick injuries to health care workers are another source of blood-borne pathogen infection. Needle sticks result from lack of training, improper disposal and destruction of needles, attempts to recap needles, and other unsafe practices. Trainee staff and nurses are most at risk when drawing blood. Improving injection safety requires programmatic reform at a national level. Although expensive, the availability of needle disposal kits and disposable "auto-destruct" syringes should be increased; however, there is little evidence of their efficacy and cost-effectiveness in low-resource settings. Health care workers, medical and allied health students, and the public should be educated about the dangers of unsafe injections, and health care workers should be trained in safe practices. Surveillance of needle stick injuries and post-exposure prophylaxis for health care workers should be part of hospital infection-control programs.
Antimicrobial Resistance AMR is a global health crisis, 24,25 with carbapenem and colistin resistance recently emerging among gram-negative organisms. 26,27 The proportion of resistant organisms such as methicillin-resistant Staphylococcus aureus, extended-spectrum β-lactamase-producing Enterobacteriaceae, and multidrug-resistant Pseudomonas aeruginosa and Acinetobacter spp. is substantially higher in developing countries. 6 Factors that predispose to AMR infections are misuse of broad-spectrum antimicrobials (inappropriate prescription, suboptimal dosing and duration), low-potency antibiotic formulations, poor hospital hygiene, overcrowding, lack of infection control, unavailability of reliable diagnostic and susceptibility testing, and a lack of personnel trained in controlling infections. Managing AMR requires adherence to infection control and restricted antibiotic use. Hospitals and health care facilities should initiate antibiotic stewardship programs (ASPs) that can reduce AMR and associated costs. ASPs function best with the collaboration and support of physicians, infection control teams, nurses, microbiology laboratories, pharmacy services, quality management teams, and information systems. However, because ASPs depend on the availability of diagnostic laboratories and information systems, they may be challenging to implement where trained personnel and resources are limited. Moreover, it is difficult to restrict antibiotics where third-generation cephalosporins and fluoroquinolones are freely available over-the-counter and are widely used. 28 Countries that adhere to the WHO's essential drug policies provide greater access to essential drugs for vulnerable populations, with less indiscriminate prescription of antimicrobials and injections. 29 Sepsis and HAIs in Neonates Newborn care and neonatal sepsis are major challenges. Lack of infection prevention antepartum and intrapartum, overcrowding, poor hand hygiene, and invasive devices for ventilatory support and vascular access contribute to high rates of infections in newborns, especially premature infants. High antibiotic use exerts antibiotic selection pressure, and an overwhelming proportion of neonatal intensive care infections are resistant to multiple antibiotic classes. 8,30 Pan-resistant Acinetobacter and Pseudomonas are common. Infection control in the labor ward and neonatal intensive care unit (NICU) requires hand hygiene and rational antibiotic use, along with an appropriately trained and motivated workforce. 7,8,31 Although preventive measures such as using chlorhexidine gluconate and catheter care bundles in NICUs have proven effective against neonatal HAIs, 32 resource constraints and lack of public health attention to generating inexpensive solutions hinder control efforts. Hospital Design in the Tropics Hospital design affects thermal comfort, availability of clean air, control of air movement, and indoor air quality. In many tropical regions, resource limitations and electric power shortages prevent the use of heating, ventilation, and air conditioning (HVAC) systems. Because airborne isolation of patients with TB, measles, and varicella pneumonia employs modifications of HVAC, isolating these patients is difficult without HVAC. Multi-bed hospitals can employ hybrid natural and mechanical ventilation to optimize air movement and exchanges per hour.
Not many solutions are available for thermal comfort in hot and humid climates, but design features to decrease indoor temperature, such as cantilevered roofs, can be added to existing buildings. Cockroaches, ants, bedbugs, flies, and rodents abound in tropical regions and can carry microorganisms. 33 Their proliferation can be controlled by adequate plumbing, waste disposal, regular laundry of linen, and encasing pillows and mattresses in plastic. 34 Waste disposal can decrease rodent infestations, and screened doors and windows, as well as traps, are used in vector control. New and more cost-effective light-emitting diode (LED) insect traps are effective. Strengthening Health Systems in the Tropics Infection control can be achieved if strong institutional commitment exists. Despite the challenges, studies reviewing the cost-effectiveness of even minimal infection control measures are universally optimistic. These measures lower the costs incurred from HAIs due to longer hospital stays, greater disease morbidity and mortality, and antimicrobial agent use. The effectiveness of infection control measures can be used as an indicator of the quality of hospital care. 41 Any intervention program should comprise a holistic approach that includes basic infection control measures. The most effective solutions will be those that are indigenously developed and implemented and improved through active learning cycles and feedback. Local research is necessary to identify critical points in infection transmission and solutions to address these.
5G-Based Telerobotic Ultrasound System Improves Access to Breast Examination in Rural and Remote Areas: A Prospective and Two-Scenario Study Objective: Ultrasound (US) plays an important role in the diagnosis and management of breast diseases; however, effective breast US screening is lacking in rural and remote areas. To alleviate this issue, we prospectively evaluated the clinical availability of 5G-based telerobotic US technology for breast examinations in rural and remote areas. Methods: Between September 2020 and March 2021, 63 patients underwent conventional and telerobotic US examinations in a rural island (Scenario A), while 20 patients underwent telerobotic US examination in a mobile car located in a remote county (Scenario B) in May 2021. The safety, duration, US image quality, consistency, and acceptability of the 5G-based telerobotic US were assessed. Results: In Scenario A, the average duration of the telerobotic US procedure was longer than that of conventional US (10.3 ± 3.3 min vs. 7.6 ± 3.0 min, p = 0.017), but their average imaging scores were similar (4.86 vs. 4.90, p = 0.159). Two cases of gynecomastia, one of lactation mastitis, and one of postoperative breast effusion were diagnosed and 32 nodules were detected using the two US methods. There was good interobserver agreement between the US features and BI-RADS categories of the identical nodules (ICC = 0.795–1.000). In Scenario B, breast nodules were detected in 65% of the patients using telerobotic US. Its average duration was 10.1 ± 2.3 min, and the average imaging score was 4.85. Overall, 90.4% of the patients were willing to choose telerobotic US in the future, and tele-sonologists were satisfied with 85.5% of the examinations. Conclusion: The 5G-based telerobotic US system is feasible for providing effective breast examinations in rural and remote areas. Introduction Ultrasound (US) is a unique medical imaging technology that plays an important role in the diagnosis and management of breast diseases, owing to its advantages such as real-time scanning, convenience, and radiation-free features [1]. Breast cancer is not only the most frequently diagnosed cancer but also the leading cause of cancer-related death among women worldwide [2]. Women have a 12.3% risk of developing breast cancer during their lifetime [3]. Notably, men are also at a risk of possibly developing breast cancer, with a prognosis worse than that of women [4]. Compared to advanced breast cancer, early breast cancer is considered potentially curable [5]. Routine breast imaging is important for the detection of early breast cancer. Mammography and US are the most common screening modalities used for breast cancer detection. The density of mammary glands is higher in Asian women than in Western women; hence, US is more suitable for breast examination than mammography in Asian women [6][7][8]. Moreover, US is the primary imaging tool for definitive diagnosis, severity assessment, and treatment effect evaluation of non-tumour breast diseases, such as mastitis and gynecomastia [9,10]. Effective breast US examination is crucial to obtain credible results, particularly in some developing Asian countries with inadequate medical resources. However, US examination is highly operator-dependent, and there is a lack of experienced sonologists in rural and remote areas [11]. Consequently, many patients in these areas have to travel to large cities to access high-level medical care in large hospitals. 
This could lead to delayed diagnosis and treatment, an increased economic burden on patients, and increased medical resource pressure on higher-level hospitals [12]. The current medical resources are far from sufficient to meet this requirement, especially in rural and remote areas. Telemedicine has been developed to solve the problem of unbalanced distribution of medical resources [13]. Many studies have demonstrated that telemedicine can reduce the need for long-distance transportation of patients, leading to time and cost savings for the patients [14,15]. Tele-US, a branch of telemedicine, transmits US images through wired or wireless networks from remote areas to large hospitals for consultation with experienced sonologists [16]. Additionally, the telerobotic US system allows experienced sonologists in large hospitals to operate the transducer remotely and transmit US images in real time through satellite, terrestrial, or broadband links [17,18]. Early research on different types of commercial robotic US systems has been published [19][20][21]. A series of studies by Adams et al. demonstrated that it is feasible to use a commercial telerobotic US system (the MELODY system, with a three-degrees-of-freedom [DOFs] manipulator) for adult abdominal and obstetric examinations [19,20]. However, in the MELODY system, the tele-sonologist cannot control the pressure or movement of the transducer, thus requiring an on-site assistant to hold the robotic arm and apply pressure to the patient's body. A 2009 study by Boman et al. presented the development and technical assessment of long-distance, real-time echocardiography for cardiovascular consultation, using the Medirob Tele system, a serial-robot platform, as a diagnostic tool in rural areas [21]. Nevertheless, this system is specific to echocardiography and is not widely used in clinical practice. In comparison with previous telerobotic US systems, a new generation of telerobotic US system (MGIUS-R3), with a greater number of DOFs and fifth-generation (5G) data modules, enables a tele-sonologist to remotely control, in real time, the subtle movements of the US transducer (rotating, rocking, and tilting) in the scanning area by manipulating a dummy US probe. Meanwhile, advances in and popularisation of 5G mobile communication technology have further promoted the development of telerobotic US for clinical applications. During the coronavirus disease (COVID-19) outbreak and pandemic, the use of the same type of 5G-based telerobotic US system for cardiopulmonary assessment of patients in isolated wards or intensive care units was reported in several studies [22][23][24]. However, the usefulness of 5G-based telerobotic US in breast examinations has not been elucidated. The implementation of this technology may provide a unique solution for breast examinations in rural and remote areas with limited medical resources. Thus, we designed a prospective and controlled study to assess the clinical value of the 5G-based telerobotic US system in breast examination in two different scenarios, a rural island and a remote county, to expand the scope of its application. Study Design This prospective study adhered to the tenets of the Declaration of Helsinki, and it was approved by the Institutional Review Board of Shanghai Tenth People's Hospital (NO. SHSY-IEC-4.1/21-334/01).
Informed consent was obtained from all the enrolled patients. Study Cohort Patients from Chongming Island who were referred to Chongming Second People's Hospital for breast US examination from September 2020 to March 2021 were consecutively recruited for Scenario A. Chongming Second People's Hospital on Chongming Island, China, is located 72 km away from Shanghai Tenth People's Hospital, a tertiary referral centre in the central zone of Shanghai, China. The inclusion criteria were age ≥18 years and ≤80 years and agreement to participate in the study with signed informed consent provided. The exclusion criteria were robotic arm failure of the telerobotic US system and incomplete US imaging data. Finally, 63 patients who underwent both conventional and 5G-based telerobotic breast US examinations were enrolled in this study for Scenario A. With the same inclusion and exclusion criteria as for Scenario A, 20 patients from Anji County who underwent 5G-based telerobotic breast US examinations were recruited in May 2021 for Scenario B. Anji County, Zhejiang Province, China, is located 220 km away from Shanghai Tenth People's Hospital. Sonologists and On-Site Assistant Six sonologists and one on-site assistant participated in this study. Five tele-sonologists with 5-20 years of work experience in breast US scanning conducted 5G-based telerobotic US examinations at Shanghai Tenth People's Hospital in central Shanghai. An on-site sonologist with 15 years of work experience in breast US scanning performed conventional US examinations at Chongming Second People's Hospital, Chongming Island. The on-site assistant, a hospital auxiliary staff member with one year of work experience, was in charge of guiding the patient examination in an orderly manner, operating the telerobotic US patient-side subsystem, applying coupling agents, protecting the patient's privacy, and recording the examination time. Before the study, each tele-sonologist attended a theoretical learning session on operating the telerobotic US doctor-side subsystem. Meanwhile, the on-site assistant attended a theoretical learning session on operating the telerobotic US patient-side subsystem and the basic anatomy of the breast and axilla. With the help of the on-site assistant, each tele-sonologist independently and completely executed telerobotic breast US scanning for three additional volunteers by following a standardised breast US examination protocol. 5G-Based Telerobotic US System This study adopted a commercial telerobotic US system (MGIUS-R3; MGI Tech Co., Ltd., Shenzhen, China), which included a doctor-side subsystem and a patient-side subsystem (Figure 1). This telerobotic US system was a complete set of equipment that obtained European CE certification, Australian Drug Administration certification, and China CFDA certification. The two subsystems were connected by a 5G network (China Mobile Communications Corporation, Shanghai, China). The telerobotic US system included a commercial QueCtel 5G data module. The transmission between the 5G data modules and the host computer occurred through the M.2 Key-B interface (PCI-E 3.0 2X, up to 1.97 GB/s, approximately 15 Gbps, including the USB 2.0/3.0 interface for dialling and receiving AT commands). Mobile data communication was applied between the 5G data module and the 5G base station.
The 5G communication had multi-frequency band coverage at the same time, while the 5G data module had a corresponding frequency management mode that could conduct frequency searches from high to low frequencies. The microbase stations operated in a frequency band of approximately 5 GHz. In this frequency band, the maximum download and upload speeds were 930 Mbps and 130 Mbps, respectively. The end-to-end delay of the remote parameter tuning was less than 200 ms. The latest fabrication materials and technologies for radiofrequency devices were used to address the potential issues of the high frequencies used in 5G, such as conversion efficiency, linearity, and directionality in the process of converting electrical energy to electromagnetic waves. The doctor-side subsystem at Shanghai Tenth People's Hospital in central Shanghai was equipped with a robot control console, US imaging control system, and audio-video communication system (Figure 1). The robot control console consisted of a mobile dummy US probe (built-in gesture sensor and "UP" button) and a contact plate (built-in position and pressure sensors). By associating the robot coordinate system with the robot control console coordinate system and motion transformations, the robot control console could manage six DOFs of the robotic arm to achieve the desired movement of the probe. Thus, the action of the operator was consistent with the action of the robotic arm. The gesture sensor managed three DOFs for rotation, the position sensor managed two DOFs for movement on the horizontal plane, and the pressure sensor and "UP" button managed one DOF for the down and up movements, respectively. Using the US system control panel on the doctor's side, all parameters and functions of the US imaging system, including gain, depth, focus, measurement, and colour Doppler parameters on the patient-side subsystem, could be adjusted and implemented in real time by the tele-sonologist. During the telerobotic US examination, the quantified transmission delay was measured and dynamically displayed on the screen to the remote doctor. The colour of the icon was green when the delay time was less than 100 ms, yellow when it was 100-500 ms, and red when it was greater than 500 ms. It could assist the tele-sonologist in determining the delay status.
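The console-to-robot mapping described above is essentially a function from four sensor readings to a six-DOF motion command. A minimal sketch of that mapping follows; all type and field names are hypothetical illustrations, not the MGIUS-R3 vendor API.

```python
from dataclasses import dataclass

@dataclass
class ConsoleState:
    """One sample from the dummy-probe console (hypothetical field names)."""
    roll: float; pitch: float; yaw: float   # gesture sensor: 3 rotational DOFs (rad)
    x: float; y: float                      # contact-plate position sensor: 2 planar DOFs (m)
    pressure: float                         # pressure sensor: drives downward motion (N)
    up_pressed: bool                        # "UP" button: drives upward motion

@dataclass
class ArmCommand:
    """Six-DOF target pose increment for the patient-side arm."""
    dx: float; dy: float; dz: float
    droll: float; dpitch: float; dyaw: float

def map_console_to_arm(s: ConsoleState, z_gain: float = 0.001) -> ArmCommand:
    # Horizontal translation follows the probe's position on the contact plate;
    # the vertical DOF is shared by the pressure sensor (down) and the UP button (up).
    dz = z_gain if s.up_pressed else -z_gain * s.pressure
    return ArmCommand(dx=s.x, dy=s.y, dz=dz,
                      droll=s.roll, dpitch=s.pitch, dyaw=s.yaw)

# Example: a light press with a small planar slide and no rotation.
print(map_console_to_arm(ConsoleState(0.0, 0.0, 0.0, 0.01, 0.02, 1.5, False)))
```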
The same patient-side subsystems were located in Chongming Second People's Hospital on Chongming Island and in a mobile car parked in Anji County (Figure 1). The patient-side subsystem was equipped with a six-DOF collaborative robotic arm (UR5; Universal Robots, Odense, Denmark), a portable US imaging system with a 5-12 MHz linear array transducer (Wisonic Clover 60; Huasheng Medical Systems, Shenzhen, China), and an audio-video communication system. The six-DOF robotic arm could conduct six-dimensional motion in space to control the posture of the US transducer. The positioning accuracy was up to 0.1 mm. A force sensor at the front end of the robotic arm provided real-time force feedback information. During the interaction between the US transducer and the human body, the force sensor recorded three-dimensional (3D) force information in real time. The vertical component of the 3D force was fed back to the controller as the actual contact force. Additionally, the sensitivity of the contact force measurement was accurate to 0.1 N. Meanwhile, the magnitude of the contact force was displayed synchronously and dynamically on the screen of the doctor-side subsystem to the tele-sonologist. Screen imaging of the US machine and of the scene, including the motion of the robotic arm, the position of the US transducer, and the posture of the patient, captured by an angle-adjustable camera with a magnification function, could be transmitted to experts in real time and dynamically displayed by the doctor-side subsystem to the tele-sonologist. This system had multiple protective designs to ensure patient safety. The robotic arm had maximum limit settings for the moving speed (≤0.275 m/s) and contact pressure (5 N), and the patient-side subsystem had an emergency stop function for the robotic arm. If the robotic arm was out of control, the assistant would press the emergency stop button, and the robotic arm would stop immediately because power to the robot was cut. Furthermore, the robot motion stopped within 250 ms if the robotic arm detected that the collision force exceeded 120 N.
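The delay indicator and the protective limits above amount to a small set of threshold checks evaluated on every control cycle. A minimal watchdog sketch using the thresholds quoted in this section (function and variable names are illustrative, not vendor API; whether an over-limit command is clamped rather than stopped is an assumption):

```python
MAX_SPEED_MS = 0.275      # m/s, arm speed limit
MAX_CONTACT_N = 5.0       # N, contact-pressure limit
COLLISION_STOP_N = 120.0  # N, force that triggers a stop within 250 ms

def delay_colour(delay_ms: float) -> str:
    """Colour of the latency icon shown to the tele-sonologist."""
    if delay_ms < 100:
        return "green"
    if delay_ms <= 500:
        return "yellow"
    return "red"

def check_cycle(speed_ms: float, contact_n: float) -> str:
    """Return the action the controller should take for this control cycle."""
    if contact_n >= COLLISION_STOP_N:
        return "emergency_stop"   # motion must stop within 250 ms
    if speed_ms > MAX_SPEED_MS or contact_n > MAX_CONTACT_N:
        return "clamp"            # saturate the command at the limit (assumed behaviour)
    return "ok"

assert delay_colour(80) == "green" and delay_colour(300) == "yellow"
assert check_cycle(speed_ms=0.3, contact_n=2.0) == "clamp"
```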
5G-Based Telerobotic Breast US Examination Protocol In Scenario A, the patients first underwent conventional breast US examination by an on-site sonologist at Chongming Second People's Hospital, using the same US imaging system (Wisonic Clover 60), as the reference standard. Subsequently, telerobotic breast US examinations were performed by the tele-sonologist at Shanghai Tenth People's Hospital for these patients. Both types of US examinations followed a standardised breast examination protocol [25]. The tele-sonologists and on-site sonologists were blinded to each other's US examination findings. In Scenario B, the patients only underwent telerobotic breast US examinations by the tele-sonologist at Shanghai Tenth People's Hospital. The specific procedure of the 5G-based telerobotic breast US examination is described here. Before the examination, the on-site assistant attached the US linear array transducer to the robotic arm, and the tele-sonologist checked the 5G network connectivity and the control of the robotic arm. The on-site assistant requested a breast US examination and registered the basic information of the patients (including name, sex, and age) in the patient-side system, and it was entered in the doctor-side system. The tele-sonologist then received the examination request, adjusted the examination mode to breast scanning, and asked the patients for their chief complaint and medical history via the audio-video communication system. The patients took off their clothes and lay down on the examination bed in the supine position, with their arms raised overhead. The tele-sonologist instructed the patient to assume the appropriate breast examination posture with the help of the on-site assistant via the audio-video communication system. The on-site assistant applied adequate coupling agents to the bilateral breasts and axillae of the patient. The tele-sonologist activated the examination button, and the robot arm was moved over the patient. The on-site assistant then dragged the US transducer and positioned it on the patient's breast (quick positioning). By manipulating the dummy probe, the tele-sonologist then scanned the four quadrants of the breast and the nipple and axilla (Figure 2). For both conventional and 5G-based telerobotic US examinations, the information described below was noted in the protocol registration (Figure 3). For each breast, at least six US images, including the four quadrants (the thickest part), nipple, and axilla, were stored. If one or more breast nodules were detected, then only the largest nodule or the one most suspicious of malignancy on US was chosen as the target nodule. If other breast lesions were detected, the most clinically significant lesion was selected as the target lesion. At least three US images (one grayscale US image, one grayscale US image with callipers, and one colour Doppler flow image) and a video of the target nodule/lesion at the largest cross-section were stored. When storing the US images, a picture of the overall external environment of the transducer's position was also stored for further analysis. The durations of both the conventional and 5G-based telerobotic US procedures were recorded from when the patient was ready on the examination bed to the end of image collection. Additionally, all data collection and organisation were completed by an independent coordinator. All patients were assigned sequential numbers prior to the examination, and folders containing the US images were named according to those numbers.
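The storage rules above (six images per breast, plus three images and one video for any target nodule) are easy to encode as a completeness check when organising the numbered study folders. A small sketch of such a validator; the folder layout, view tags, and file-name suffixes are assumptions made for illustration, not part of the study's software.

```python
REQUIRED_VIEWS = {"UOQ", "UIQ", "LOQ", "LIQ", "nipple", "axilla"}  # per breast

def study_complete(views_by_breast: dict[str, set[str]],
                   target_nodule_files: list[str]) -> list[str]:
    """Return a list of protocol violations for one patient's image folder."""
    problems = []
    for breast in ("left", "right"):
        missing = REQUIRED_VIEWS - views_by_breast.get(breast, set())
        if missing:
            problems.append(f"{breast} breast missing views: {sorted(missing)}")
    if target_nodule_files:  # only checked when a target nodule was recorded
        kinds = {f.split("_")[-1] for f in target_nodule_files}
        for kind in ("gray.png", "calliper.png", "doppler.png", "clip.mp4"):
            if kind not in kinds:
                problems.append(f"target nodule missing {kind}")
    return problems

# Example: patient folder "017" with a complete right breast but no axilla view on the left.
print(study_complete({"left": REQUIRED_VIEWS - {"axilla"},
                      "right": set(REQUIRED_VIEWS)}, []))
```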
US Images Interpretation Two sonologists with 12 and 13 years of work experience interpreted all the US images in random order. A consensus was reached after discussion in case of disagreement between the two interpreting sonologists. The assessment of the US images was performed in two steps. First, the quality of all the US images was scored using a subjective quality scoring method [26]. The scoring was as follows: 1 point: very poor (image quality is severely impaired); 2 points: poor (image quality is impaired); 3 points: fair (image quality hinders viewing slightly but is acceptable for interpretation); 4 points: excellent (minor suggestions for improvement but viewing is unhindered); 5 points: perfect (no suggestion for improvement). Second, the target breast nodules were assessed. The US characteristics and categories of the breast nodules were assessed based on the Breast Imaging Reporting and Data System (BI-RADS) of the American College of Radiology [27]. Patients' and Tele-Sonologists' Assessments After each 5G-based telerobotic US examination, the patients and tele-sonologists were asked to complete a corresponding questionnaire about their experience of the 5G-based telerobotic US examination. Statistical Analysis Statistical analysis was performed using Statistical Product and Service Solutions (version 20.0; IBM Corporation, Armonk, NY, USA). The measurements of the same breast nodules and axillary lymph nodes were compared between the conventional and telerobotic US findings using a paired-sample t-test. The intraclass correlation coefficient (ICC) with confidence intervals was calculated to evaluate the consistency between the US features and BI-RADS categories of the same breast nodules and was interpreted as follows: ICC > 0.90 indicated excellent consistency, ICC = 0.75-0.90 indicated good consistency, ICC = 0.50-0.74 indicated moderate consistency, and ICC < 0.50 indicated poor consistency. Two-tailed p-values < 0.05 indicated a statistically significant difference.
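The two statistical tools used here are a paired-sample t-test on repeated measurements of the same nodules and banded interpretation of the ICC. A minimal sketch of both with scipy; the sample values are illustrative, not study data.

```python
import numpy as np
from scipy import stats

def icc_band(icc: float) -> str:
    """Interpretation bands used in this study."""
    if icc > 0.90:
        return "excellent"
    if icc >= 0.75:
        return "good"
    if icc >= 0.50:
        return "moderate"
    return "poor"

# Paired t-test: the same nodule measured by conventional vs telerobotic US (mm).
conventional = np.array([7.1, 9.4, 6.2, 11.0, 8.3])
telerobotic = np.array([7.0, 9.6, 6.1, 10.8, 8.4])
t, p = stats.ttest_rel(conventional, telerobotic)  # two-tailed by default
print(f"t = {t:.2f}, p = {p:.3f}, band for ICC 0.795: {icc_band(0.795)}")
```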
Patient Demographics and Examination Information The mean age of the patients in this study (n = 83; 2 males and 81 females) was 50.7 ± 13.1 years (range, 24-72 years). Among them, 11 females and 1 male belonged to the young-aged group of 20-34 years, 28 females to the middle-aged group of 35-49 years, and 44 females and 1 male to the old-aged group of 50-80 years. Safety and Duration of 5G-Based Telerobotic Breast US Examinations During the telerobotic breast US examinations, none of the participants had any injuries in either scenario, highlighting the safety of the 5G-based telerobotic US system. The average duration of the 5G-based telerobotic breast US examinations was 10.3 ± 2.7 min (range, 5-22 min). In Scenario A, the average durations of the 5G-based telerobotic US examinations and conventional US examinations were 10.3 ± 3.3 min (range, 5-22 min) and 7.6 ± 3.0 min (range, 4-16 min), respectively. Overall, the average duration of the 5G-based telerobotic US examination was approximately 2.7 min longer than that of the conventional US examination. In Scenario B, the average duration of the 5G-based telerobotic US examinations was 10.1 ± 2.3 min (range, 8-14 min). 5G-Based Telerobotic Breast US Findings In Scenario A, 34 breast nodules were detected using 5G-based telerobotic US and 35 using conventional US. Moreover, 32 breast nodules identified on 5G-based telerobotic US examination were consistent with those detected on conventional US examination (Figure 4). In addition to the breast nodules, two cases of gynecomastia, one of lactation mastitis, and one of postoperative breast effusion were diagnosed using both these US procedures. The 5G-based telerobotic US examinations missed three breast nodules classified as BI-RADS 3. Among them, one breast nodule was located in the outer quadrant of the left breast in a 72-year-old woman, and two breast nodules were located in the outer quadrant of the right breast in two obese women with body mass indexes of 33 and 34. Conventional US examinations missed two breast nodules classified as BI-RADS 3 (Table 2). In Scenario B, breast nodules were detected in 65% of the patients (13 of 20) using 5G-based telerobotic US. There were 11 breast nodules (mean transverse diameter of 7.6 mm and anteroposterior diameter of 3.8 mm) classified as BI-RADS 3 and 2 breast nodules (mean transverse diameter of 6.1 mm and anteroposterior diameter of 5.3 mm) classified as BI-RADS 4. The two patients with suspicious malignant breast nodules belonged to the middle-aged and old-aged groups, respectively. US Image Quality Assessment of 5G-Based Telerobotic Breast Examinations Two sonologists scored all the US images using a five-point Likert scale. The average US image quality score of the 5G-based telerobotic US was 4.86. In Scenario A, the average US image quality scores of the 5G-based telerobotic and conventional US systems were 4.86 and 4.90, respectively. A paired-sample t-test found no significant difference between them (p = 0.159) (Table 3). Moreover, 88.9% of the cases assessed using 5G-based telerobotic US had a score of 5 points. In Scenario B, the average US image quality score of the 5G-based telerobotic system was 4.85. This result indicates that the 5G-based telerobotic US system can obtain high-quality US images. Consistency between the Findings of 5G-Based Telerobotic and Conventional US Examinations in Scenario A A paired-sample t-test revealed no significant differences between the 5G-based telerobotic and conventional US examinations in the transverse and anteroposterior diameter measurements of the same breast nodules and axillary lymph nodes (Table 4).
Good interobserver agreement between the 5G-based telerobotic and conventional US examinations was observed in the US features of the same breast nodules for the parameters of shape, orientation, margin, echo pattern, posterior features, calcifications, and BI-RADS category (ICC = 0.893, 0.795, 0.874, 1.000, 0.963, 0.882, and 0.984, respectively) (Table 5). Patients' Assessments In total, 91.6% of patients enrolled in the two scenarios reported no discomfort or uneasiness during the 5G-based telerobotic US examination, and 94% of the patients were not afraid of the robotic arm. Only one female had obvious tenderness in the bilateral whole breast. Furthermore, 92.7% of females considered the duration of the 5G-based telerobotic US examination strongly or somewhat acceptable. Of the patients, 90.4% were willing to undergo 5G-based telerobotic US examination, and 89.2% were willing to pay an extra fee for it in the future. Overall, the 5G-based telerobotic US system was well accepted by the patients (Table 6). Tele-Sonologists' Assessments The tele-sonologists reported no obvious delay or difficulty during the majority of 5G-based telerobotic US examinations (97.6% and 81.9%, respectively). However, they expressed concern about the scope of scanning in patients with large breasts. The tele-sonologists were satisfied with the duration of 86.7% of the telerobotic US examinations and with 85.5% of the US images transmitted from the 5G-based telerobotic US system. Of the tele-sonologists, 84.3% were willing to use the 5G-based telerobotic US system as a routine US examination tool (Table 6). Table 6. Answers of patients and tele-sonologists to the questionnaires in the two scenarios. Except where indicated, data in parentheses are percentages. Discussion Our work possesses several strengths. To the best of our knowledge, this is the first prospective, two-scenario study exploring the practical value of the 5G-based telerobotic US system in breast scanning and diagnosis. We propose a standardised breast US examination protocol for the 5G-based telerobotic US system that can efficiently cover the bilateral whole breasts and axillae. It will facilitate standardised breast US examinations as well as training and clinical development of this cutting-edge technology. Through the application of 5G network technology, 83 breast US examinations were successfully completed with the telerobotic US diagnostic system in two scenarios: a community hospital located on a rural island and a mobile car located in a remote county. In terms of safety, duration, US image quality, consistency with conventional US results, and acceptability for patients and tele-sonologists, this study demonstrates that a 5G-based telerobotic US diagnostic system has a relatively high level of feasibility for the diagnosis and management of breast diseases. Many patients in rural and remote areas with limited medical resources are forced to go to higher-level hospitals for US examinations. In Scenario A, there were only two large community hospitals on Chongming Island, which has a high proportion of patients in the older age groups. Chongming Second People's Hospital, one of the two hospitals in this study, has only two junior sonologists dedicated to US. In Scenario B, no hospitals or imaging centres were available in Anji County, reflecting its limited medical resources.
Chongming Island and Anji County are 72 km and 220 km away from central Shanghai, respectively, corresponding to at least 1.5 h and 3 h of one-way driving time without traffic jams, respectively. In Scenario A, the average duration of 5G-based telerobotic breast US examinations was approximately 36% longer than that of conventional US examinations (10.3 ± 3.3 min vs. 7.6 ± 3.0 min). This is consistent with the result of a previous study by Arbeille et al., which found that the duration of teleoperated foetal US examinations was approximately 30% longer than that of conventional US examinations [28]. Notably, the average duration of the 5G-based telerobotic breast US examinations in Scenario A (10.3 ± 3.3 min) was similar to that in Scenario B (10.1 ± 2.3 min). Despite the slightly increased examination time with 5G-based telerobotic US, most patients found it acceptable, as reported in the questionnaire survey conducted in our study. Considering the need for the corollary equipment and an on-site assistant, the examination cost of the 5G-based telerobotic US system is higher than that of the conventional US system. However, from a societal perspective, 5G-based telerobotic US examination as a distant diagnosis strategy will result in an overall cost reduction owing to a reduction in the travel-related expenses of the patients [29]. Moreover, the questionnaire survey revealed that most patients from the two scenarios had a high acceptance of the 5G-based telerobotic US system and were willing to pay an additional examination fee. Thus, 5G-based telerobotic US will help save travel time and expenses for patients from rural and remote areas lacking medical resources. In addition, 5G-based telerobotic US can also effectively minimise the transport of large numbers of patients to higher-level hospitals and relieve the burden on these hospitals. In light of the global spread of COVID-19 and the emergence of new viral variants, it is important to reduce unnecessary human transportation to prevent and control the epidemic. Furthermore, sonologist groups can provide subspecialty expertise to a greater number of underserved communities, expanding their reach and creating additional revenue opportunities. Additionally, work-related musculoskeletal disorders are prevalent among sonologists, with previous studies indicating that approximately 90% of sonologists complain of musculoskeletal discomfort or pain when conducting US scans [30]. By reducing the scanning forces applied by sonologists and providing an ergonomic arm cushion, a 5G-based telerobotic US system could help relieve occupational chronic musculoskeletal injuries. During the 5G-based telerobotic breast US examinations, none of the patients suffered any injuries, and 98.8% reported no discomfort in both scenarios. This system has dual protection devices to ensure patient safety and provides real-time force feedback information from the force sensor, visualised as a column, to ensure patient comfort. Nevertheless, one female patient (1.2%) experienced obvious breast tenderness during telerobotic US examination, probably because she was premenstrual, which made the bilateral whole breast regions swollen and sensitive [31]. Thus, inquiries regarding the patient's menstrual cycle should be routinely made, and the contact force applied should be appropriately reduced in those who are menstruating or premenstrual.
Two-way and real-time data communication of the telerobotic US system between the tele-sonologist and patient sites requires a large broadband capacity. These data flows generally include robot control, US system control, force feedback, US videos, and multiple audio-visual signals. US video data require more bandwidth than the other data, which affects the performance of US diagnosis to a certain extent [17,32,33]. Arbeille et al. used a robotic arm to perform remote abdominal US examinations via a 250 Kbps network, and the degradation of US image quality led to six (17%) cases of missed lesions [17]. Our study indicated that the quality of US images transmitted from the telerobotic US system over the 5G network was comparable to that of the conventional US system, fulfilling the diagnostic demand. The transmission speed (peak rate of up to 20 Gbps) of the latest 5G network is 1.5 times better than that of the current 4G network, and the delay (approximately 1-10 ms) is reduced by a factor of 10 [34]. The application of 5G telecommunication technology substantially increases the data transmission capacity and resolves the issues of latency, allowing long-distance original US image acquisition, transmission, analysis, and processing, with high-precision synchronisation of multiple audio-visual signals. In Scenario A, two cases of gynecomastia, one of lactation mastitis, and one of postoperative breast effusion were diagnosed and 32 breast nodules were detected using the two US methods. Moreover, there were no major differences between the parameters measured using the two US methods. The 5G-based telerobotic US detected 92.7% of the breast abnormalities. This indicates that the 5G-based telerobotic US system has high diagnostic performance. The six-DOF robotic arm facilitates fine breast examination, and the tele-US control panel could be used to achieve remote adjustment of the US parameters, including frequency, gain, depth, and parameter measurement. However, three breast nodules were missed in the 5G-based telerobotic US examinations. When storing the US images, a picture of the overall external environment of the transducer position was also stored. By analysing these photographs and the US data, the potential reasons for the missed nodules were investigated. Poor contact between the US transducer and the loose skin surface of older female patients might have increased the chances of incomplete breast scanning. A greater amount of coupling agent should be applied to ensure optimum contact force. Moreover, it is difficult to scan large breasts in obese patients. Although the six-DOF arm can move flexibly along the contours of the human body, it still has restrictions on the side of the body, resulting in difficulties in scanning the outer quadrant of the breast. It is necessary to adjust the patient's position to achieve appropriate contact with the transducer on the side [22]. This would be beneficial not only for the localisation of breast lesions but also in follow-up examinations. Both static US images and dynamic videos were stored during the examination, which was conducive to repeated continuous observations. In contrast, two breast nodules were detected only with the 5G-based telerobotic US system and not with the conventional US system. This could be attributed to the operator dependency of US examination [20]. In Scenario B, the positive disease detection rate in the mobile car group was 65%.
Among these females, two with suspicious malignant breast nodules (BI-RADS 4) were recommended to undergo further examinations and treatment. Takeuchi et al. reported the testing of a mobile robotic tele-echo system placed in an ambulance, which successfully transmitted clear real-time echo images of the patient's abdomen over private networks to the destination hospital from where the device was being remotely operated [33]. Owing to its unique advantages, a mobile car equipped with a 5G-based telerobotic US system can be parked in remote areas where hospitals or imaging centres are not available. This technology holds the potential to allow patients to stay in their home community for US examinations while improving their access to the imaging expertise offered at larger centres. The 5G-based telerobotic US system is still in its infancy and requires further advancements. It does not yet provide realistic haptic feedback or 3D information. Haptic feedback restoration in medical robotics platforms is gaining growing attention; the recent integration of haptic feedback into the well-known da Vinci system shows how it can help the operator make better decisions [35]. Thus, additional investigations based on force sensing might also help prevent uncomfortable conditions in patients. Meanwhile, the two-dimensional visual information from the camera obviously limits 3D space perception during breast examination. The degree of immersion, often referred to as the "sense of being there" experienced by the operators, is a factor in the success of tele-health installations. Further efforts can use high-definition 3D video conferencing technology, which offers a compelling mechanism to achieve a sense of immersion and contributes to an enhanced quality of use [36,37]. Furthermore, to achieve remote US instrument control, a special US imaging system was selected; we hope this device can be made compatible with more brands of US imaging systems in the future, which will help generalise this technology. Despite its positive outcomes, our study has several limitations. First, it was a single-centre study with a limited sample size; multicentre and large-scale studies are needed to validate our results. Second, there was inherent subjectivity in our simplified questionnaire survey for the tele-sonologists; it was designed to rapidly evaluate the direct impressions of the tele-sonologists during the telerobotic US examinations but might have resulted in inaccurate assessment. Third, the 5G-based telerobotic US examination was not compared with the conventional US examination in the mobile car setting (Scenario B). However, these differences were highlighted in Scenario A, and the purpose of this study was to examine the clinical practice management considerations for implementing 5G-based telerobotic US in a real-world setting. Finally, the tele-sonologists in the doctor-side subsystem in this study were all senior sonologists. We did not investigate the telerobotic breast US examination results of tele-sonologists with different levels of work experience, nor did we determine whether experienced tele-sonologists could easily master the telerobotic US system. Conclusions This research preliminarily demonstrates that a 5G-based telerobotic US system can achieve image quality and diagnostic results for breast examination comparable to those of the conventional US system.
Moreover, it provides a novel and promising solution for the application of quality breast US in rural areas lacking experienced sonologists and in remote areas where hospitals or imaging centres are not available. It might help provide increased access to breast examination in rural and remote areas and greater equity in the delivery of health care services. Currently, artificial intelligence (deep learning and machine learning) studies are evolving rapidly and have many potential applications in breast US, such as lesion detection, BI-RADS classification, breast cancer risk prediction, and breast cancer molecular subtype prediction [38][39][40][41]. Further studies on integrating artificial intelligence technology into the 5G-based telerobotic US system would greatly facilitate the accurate and efficient detection and diagnosis of breast disease and increase the application scope of this technology. Informed Consent Statement: Written informed consent was obtained from all subjects involved in the study. Data Availability Statement: The datasets generated and analysed during the current study are not publicly available but are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
2023-01-21T16:07:11.608Z
2023-01-18T00:00:00.000
{ "year": 2023, "sha1": "4cbee85528df4f5db64483e388b0daac57a6573e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/13/3/362/pdf?version=1674057222", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93fa67ff26dd7c025bf604a24c6a9e6347b53107", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
199326162
pes2o/s2orc
v3-fos-license
Leadership Succession and Sustainability of Small Family Owned Businesses in South East Nigeria Despite the acclaimed contributions of small businesses to the growth of various economies, Nigeria inclusive, the majority of family-owned businesses are confronted with a similar challenge in the area of business continuity. With the dynamic and turbulent nature of Nigeria's business environment, an increasing number of small family businesses operating in Nigeria have either shut down or stopped operating at the retirement, incapacitation or death of the business proprietors/owners, owing to the absence of a clear succession plan, a vision disconnect between the owner and the successor, and a lack of interest, requisite drive, technical knowledge and capabilities to manage the business prudently. Hence, this study investigates leadership succession and sustainability of small family-owned businesses in Anambra, South East Nigeria. The study employed the survey research design, carried out in the Onitsha and Nnewi commercial and industrial hubs of Anambra State. The simple random sampling technique was employed to select a sample of 298 registered small business owners. A five-point Likert structured 6-item questionnaire was adopted for data collection. The study employed the Pearson Product Moment Correlation to determine the relationship between the dependent and independent variables. Also, the Paired Sample t-test was employed to verify the existence of statistical evidence proving that the mean difference between the paired observations in each hypothesis is significantly different from zero. The findings revealed that mentoring and human capital development have a significant influence on the sustainability of small family owned businesses. The study therefore recommended that family business owners should identify the successor early enough and adopt mentorship as a process for equipping the successor, who must willingly show genuine interest and must not be coerced into the business, and that adequate time should be devoted to the training of chosen successors in order to equip them with the relevant skills that will make the business survive across generations. Introduction In Nigeria and other developing and developed nations operating free market economies, small businesses, considered as privately owned indigenous enterprises, are perceived as catalysts and engines of economic growth, employment generation, and wealth creation and redistribution, which makes their roles and contributions to economic and national development invaluable. Most of these small businesses in Nigeria, as in other countries of the world, are owned by individuals who operate them together with family members, and are known as family-owned businesses. This is the case of businesses in the Onitsha and Nnewi commercial and industrial hubs of Anambra State, in South East Nigeria, known for their uncommon entrepreneurial aptitude [1].
One of the problems faced by small businesses in Nigeria and elsewhere globally has been planned succession, by way of the purposeful transfer of operational, management and ownership controls from the proprietors to the next generation; Obasan [2] asserts that effective management of succession issues is the single most important legacy one generation can bequeath to the next. In practice, succession is the mentoring of employees or family members for leadership development, to sustain or restore business survival, performance or growth when successors advance to management positions after the departure, retirement, incapacitation or death of incumbents [3]. Leadership succession in an organisation is a thoughtful and logical effort to ensure continuity of needed leadership in critical positions, retain knowledge and develop both intellectual and practical capital (competitive advantage) to exploit future opportunities, by encouraging personnel development. Summarily stated, succession entails planning, which is a pre-emptive effort to guarantee the seamless transition of an enterprise from its proprietor to a chosen successor through consistent mentoring and effective human capital development; this involves articulating a strategic action plan to guarantee the human measures required to make firm sustainability and growth possible [4]. According to the empirical literature [5], most family-owned businesses that endeavour to effect leadership and control succession of the enterprise to family members in the succeeding generation often encounter the unexpected outcome that the supposed family members do not possess the capabilities necessary for the sustainable management and operation of the enterprise [6]. The resultant effect is that only very few enterprises outlive the first generation [6], for which a range of dynamics are likely responsible; however, a vast number of these enterprises fail because succession action plans were not considered [7]. Of importance is the fact that leadership succession in family businesses is often complicated, owing to the sentiments and associations involved, as well as the perceived myths and suspicions surrounding the discussion of issues relating to death, the fear of aging and financial realities. Several studies have also ascribed most proprietors' hesitancy to make succession plans to factors such as the proprietor's strong passion and emotional attachment to the enterprise, a phobia of retirement, incapacitation or death, financial insecurity and an absence of interest. It is therefore not unexpected that most small business owners rarely retire from their own businesses, but prefer dying while still managing the business [1]. Central to influencing succession plans are the roles of mentoring and human capital development of prospective successors. Communicating the shared vision and goals of the small family business to the potential successor is fundamental to effective leadership succession, while mentoring the prospective successor in the operational strategies of the business has the capacity to boost expertise and tacit knowledge in the firm, thus increasing the possibility of successful succession [4].
The essence of human capital development is to ensure that the appropriate person is taken through the stages of preparation for leadership responsibilities at the right time, to ensure the acquisition of the knowledge and skills necessary for the survival and growth of the business; hence the need to equip the successor for a seamless transition to the next generation, and the need to examine leadership succession and sustainability of small family owned businesses in Anambra, South East Nigeria. Statement of the Problem Despite the acclaimed contributions of small businesses to the growth of various economies, Nigeria inclusive, the majority of family-owned businesses are confronted with the challenge of business continuity: 95% of these small businesses do not survive up to the third generation of enterprise ownership, less than one-third make it to the second generation, and less than half of those in a second generation make it to the third generation when the owner retires or dies. With the dynamic and turbulent nature of Nigeria's business environment, increasing growth in marketplace competition, and technological awareness and revolution, an increasing number of family businesses operating in the Onitsha and Nnewi areas of Anambra State, and in other states in Nigeria, have either shut down or stopped operating effectively at the incapacitation, retirement or death of the proprietors, owing to the absence of a clear succession plan; a disconnect between the vision of the business owner and that of the relatives or children who have benefited from the fortunes of the business to acquire educational and professional qualifications; and scepticism about transferring the business to relatives or children who have no interest, requisite drive, technical knowledge or capabilities to manage the business prudently. Therefore, small family-owned businesses are open to significant sustainability risk due to inadequate or totally absent succession plans, which is the basis for this study. Objectives of the Study The primary purpose of this study, therefore, is to investigate leadership succession and sustainability of small family-owned businesses in Anambra, South East Nigeria; the specific objectives are: 1) to examine the influence of mentoring on the sustainability of small family owned businesses in Anambra State; 2) to determine the influence of human capital development on the sustainability of small family-owned businesses in Anambra State. Leadership Succession Globally, succession is described as the process in which managerial control is transferred from one executive or one generation to another [5]. Succession comprises all pre- and post-transfer activities surrounding the actual transfer of control [8]. It ensures leadership continuity in key positions, retains and develops intellectual and knowledge capital for the future, and encourages individual advancement [1]. Within the family business environment, succession is described as the measures and procedures that ensure the transfer of leadership from one member of the family to another [9]. It is the process by which ownership and control of the commercial infrastructure or factors of production built by one generation of a family are transferred to the next [10]. Furthermore, succession involves the handover of any form of commercial investment from the proprietor to a family member deemed capable and chosen to continue the business operations [6].
Hence, the choice of a successor in a family business is often restricted to members of the proprietor's nuclear family [11]. Succession planning, being a proactive measure, facilitates a seamless transition of the business from the proprietor to a chosen successor [4]; hence, the success of succession from one leader or generation to another is significant to developing the competitive edge and sustainability of any organisation regardless of the nature of its ownership. The succession process thus galvanises firms towards strategic approaches to employee capacity evaluation, talent identification and management, and leadership development [12] [13]. A well-structured succession gives business organisations (family businesses inclusive) adequate time to offer specialised human capital development to the talented member of the family who has shown capacity and is likely to be selected as a replacement for the retiring owner. Some of the broad benefits of pre-planned succession in a family business are the active mentoring of a chosen successor; the plugging of talent and capacity gaps for the future of the business; a reduced possibility of business shut-down at the incapacitation or demise of its owner; helping the business remain competitive; and support for short- and long-term business growth [14]. For successful leadership transfer and sustainability of the business, prior to the successor taking over leadership control, the outgoing leadership needs to feel comfortable with the family and business dimensions and with the successor's capacity and qualifications. The process of succession in family-owned businesses encompasses numerous interests and participants, including family members, other executives in the business, bankers, suppliers and customers, and has the tendency to affect future business relationships [2]; thus, for a smooth succession process, key participants must come to an understanding to continue the working relationship with one another. Other important factors to consider prior to the completion of the succession process are firm continuity and the sustainability of profitability, growth and competitiveness beyond the owner's capacity. Mentorship The concept of mentorship has been described as the act of developing identified individuals with talents and equipping them with the know-how and skills needed to be effective [15] in a particular area. It covers all premeditated undertakings encouraged by firms to develop their employees (who may be family members in the case of family-owned businesses) in order to sustain and improve the firm's competitive advantage [16]. Mentorship, according to Nnabuife and Okoli [1], encompasses the process that brings experienced and inexperienced individuals together for knowledge sharing and transfer, and for the development of work capacity and personal effectiveness [17]. Mentoring in a family business enables the sharing and transfer of the wealth of knowledge acquired in the line of business [18] over the years to the mentee (the next generation), to ensure the continuity of the patriarch's legacy [19]. The transfer of knowledge across generations within a family-owned business helps bridge the gaps in practical learning and decision making not captured by the formal educational system [20].
Hence, the knowledge being transferred and shared forms the core of the firm's foundation bequeathed to the next generation for the competitive advantage needed to sustain the profitability, growth and success of the family-owned business [1]. However, the kind of mentorship experienced in these family-owned businesses often falls between informal and formal mentorship [21], owing to the fact that such relationships are usually neither appropriately structured nor entirely unstructured [22]. In essence, mentoring is a globally acknowledged tool employed in building potential leaders, strengthening organisational and employee capabilities, broadening employee intelligence [19], developing organisational knowledge, and sustaining the firm's competitive advantage [23]. Human Capital Development Human capital, being one of a firm's intangible and most valuable assets, encompasses all the inherent and acquired capabilities of the individuals engaged within the firm [24]. These capabilities, Elikwu [25] asserts, include various academic qualifications, technical, conceptual and analytical skills, experiences, potentials and competencies. Greenwood, et al. [26] argue that if an organisation knows the contribution of its human capital to the firm's performance, that contribution can be measured and effectively managed. Beyond being considered every firm's core strategic resource [27], human capital has also been acknowledged to possess inimitable potential, premised on the assumption that every employee possesses the dynamic capability to add value uniquely and in diverse ways. The inimitability concept is associated with the human free will theory; hence, Reed, Srinivasan and Doty [28] assert that the ability to add value uniquely in diverse ways allows human capital to be associated with the resource-based view, which posits that the inimitability of an organisation's internal resources [29] enables its capability to contribute towards sustainable competitive advantage [30]. Hence, when the human resource function is captured in a firm's strategy, it has the potential to enhance organisational performance [26]. Sustainability The concept of sustainability is similar to the going-concern concept attributed to businesses, which requires the sustenance and maintenance of a business on a long-term basis before it can be termed a going concern. Thus, for family-owned businesses, Ogundele, Idris & Ahmed-Ogundipe [5] assert that the extent to which the business's life can be stretched while achieving its primary goal is referred to as sustainability, which is invariably affected by diverse factors such as proper succession planning [1]; the ability to anticipate and respond to change; operation of the business as a separate legal entity, which entails separating personal funds and assets from business funds; putting in place a system capable of carrying on the business operations independently of the owner; and mentoring, training and equipping the successor with the internal workings of the business [2]. One related study adopted a survey approach and employed a purposive sampling technique: a sample of 30 family businesses was selected and primary data were collected using a structured questionnaire and interview questions. That study employed qualitative and quantitative methods of research, and the collated data were analysed using the SPSS 17 software program.
The results of that study revealed that most family business initiators do not always have the notion of sustainability in mind before they die and hence do not prepare for succession. Empirical Review Akpan and Ukpai [4] examined how succession planning influences the survival of small businesses operating within the Makurdi metropolis. The study adopted a descriptive survey design, with a population of 560 family business owners and 120 proprietors drawn from the population as the sample for the study. Data were collected using a close-ended questionnaire; the analysis was carried out using means and standard deviations, while the posited hypothesis was tested using ANOVA. The study indicated that manpower training significantly influences the longevity of small businesses; however, no significant variance exists between the mean responses of female and male small business owners. The finding implies that small business owners need to put a succession plan in place to achieve the desired business sustainability. The study therefore recommended that small business owners should identify successors in good time in order to have adequate time for the required training and development that will ensure the survival of the family business through several generations. In a related study, Cherono, Towett and Njeje [33] examined mentorship practices and employees' performance in manufacturing businesses. The data for that study were collected through the administration of a structured questionnaire, and the study adopted both inferential and descriptive statistics to arrive at conclusions on the correlation between the variables being investigated. The posited hypotheses were tested using multiple regression analysis, and the findings revealed that a significant correlation exists between leadership-knowledge transfer-innovative-talent development mentorship and employees' performance. The study therefore recommended that manufacturing businesses should consider mentorship practices as part of their employee performance enhancement strategy. Methodology The study adopted a survey design and was carried out in the Onitsha and Nnewi commercial and industrial hubs of Anambra State. The population of the study was anchored on 965 owners of small businesses with the characteristics of family-owned businesses registered with the Anambra State Ministry of Commerce. A sample size of 283 was derived using the Taro Yamane formula and augmented by 20% to 340 to ensure a usable returned sample of 283 or more; the computation is sketched below. Out of the three hundred and forty (340) copies of the structured questionnaire administered, two hundred and ninety-eight (298), representing 87.6%, were returned and found good for the data analysis; respondents were selected using the simple random sampling technique. The structured questionnaire was adopted and restructured into 6-item questions by the researchers, called the Mentoring and Human Capital Development of Successors Influences Sustainability of Small Family Business (MHCDSISSFB) questionnaire, and was used for data collection. Data for this study were collected mainly from a primary source through a questionnaire adapted from Akpan and Ukpai [4], restructured, and administered to the small family businesses. The SPSS statistical package version 21 was used in analysing the data. The answer options for the questionnaire were developed using a Likert scale: SA-Strongly Agree, A-Agree, U-Uncertain, D-Disagree, SD-Strongly Disagree. In analysing the data collected from the questionnaire, descriptive statistics were employed, and Pearson's Product Moment Correlation was employed in testing the two hypotheses.
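As a worked check of the sample-size computation (assuming the conventional 5% margin of error, which the text does not state explicitly), the Taro Yamane formula gives:

\[
n \;=\; \frac{N}{1 + N e^{2}} \;=\; \frac{965}{1 + 965 \times (0.05)^{2}} \;\approx\; 283,
\qquad
n_{\text{adjusted}} \;=\; 283 \times 1.2 \;\approx\; 340.
\]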
Also, the Paired Sample t-test was employed to verify the existence of statistical evidence proving that the mean difference between the paired observations in each hypothesis is significantly different from zero. The test applied a 95% confidence interval and a 5% level of significance. Results and Findings Analysis of the demographic information of the participants shows that 68% of the respondents were males while the remaining 32% were females. Regarding their highest educational qualification, the responses show that 41.9% were graduates holding the B.Sc., B.A., B.Tech or Higher National Diploma, 21.6% were holders of NCE/ND certificates, 26.5% were holders of the SSCE/WASSCE, while the remaining 10% were holders of various post-graduate degrees. The result for statement one in Table 1 shows that 4.4% and 29.5% of the sample agreed and strongly agreed, respectively, that family members are by choice part of the daily running and operations of the business. However, the majority of the respondents, representing 38.6% and 20.8%, disagreed and strongly disagreed, respectively. Most of these proprietors complained that their family members, more especially their children, have their own dreams they are pursuing which do not align with those of the family business. This implies that the majority of the business owners compel their family members to be part of the business. The result for statement two of the table indicates that 46.3% agreed that the involvement of family members in decision making will ensure the emergence of a well-equipped successor; this was further strengthened by the 8.7% who strongly agreed. However, 35.9% of the sample disagreed, while 5.4% strongly disagreed. This implies that the involvement of family members in the decision-making processes that affect the survival, profitability and growth of the business will ensure the emergence of a well-equipped successor. The result for statement three of the table indicates that most of the respondents, 45% and 2.8%, agreed and strongly agreed, respectively, that entrepreneurial mentoring improves successors' innovation/creativity for the survival and growth of the business. However, 32.9% and 4.4% disagreed and strongly disagreed, respectively. This implies that the majority of the respondents are of the opinion that the process of entrepreneurial mentoring helps the successor understand the intrigues and trends of the business, which improves the successor's creativity/innovation for the survival and growth of the business when in control. Also, the result for statement four of the table indicates that 48.7% and 20.4% of the respondents variously agreed that it is important that a successor works outside the organisation before joining the business, while 19.8% and 7.4% of the respondents variously disagreed. This implies that working knowledge and exposure to various organisational goals, visions and missions are needed by a successor to perform well. The result for statement five of the table indicates that the majority of the respondents agreed that small businesses thrive when successors are trained in effective resource management; this was affirmed by a total of 55% of the entire sample. However, 35% and 7% of the respondents disagreed and strongly disagreed, respectively. This implies that human capital development in the area of effective resource management will make small businesses thrive when successors are in control.
Furthermore, the result for statement six of the table indicates that most of the respondents, representing 49.4% and 11.4% of the sample, agreed and strongly agreed, respectively, that the acquisition of relevant skills enhances successors' operational ability to manage a small business, while a total of 37% of the respondents disagreed. This implies that for successors to effectively manage the business when in control, they require various forms of organisational skills (Table 2 and Table 3). Test of Hypotheses Table 4 shows the correlations among the paired observations employed in testing the hypotheses. In the first pair, the correlation value of 0.692 with its corresponding probability value of 0.007 shows that the paired observations (that is, responses on mentoring and responses on sustainability of family-owned businesses) were strongly and positively correlated. Because the p-value of 0.007 is less than the 5% level of significance, we accept H1, that the statistical difference between the paired means is not zero. Also, within the second pair, the correlation value of 0.701 and its corresponding p-value of 0.000 show that the paired observations were strongly and positively correlated; as such, H1 was also accepted, that the statistical difference between the paired means is not zero. Discussion of Major Findings The correlation result revealed a (rho) value of 0.898, indicating a positive and very strong correlation among the variables, while the calculated p-value for the variables in Table 4 is 0.000, which is less than the 0.05 level of significance. We therefore reject the null hypothesis (H01) and accept the alternate hypothesis, and conclude that mentoring has an influence on the sustainability of small family-owned businesses in Anambra State. The implication is that when family members are by choice part of the daily running and operations of the business, their interest in making the business succeed is kindled and there is a shared goal and vision. Also, when small family business owners involve family members in decision making, this strengthens the chances of a better-equipped successor emerging. Furthermore, it implies that entrepreneurial mentoring improves successors' innovation/creativity for the survival and growth of the business. This finding supports the finding of Nnabuife and Okoli [1], which revealed that mentorship is very important in the quest to preserve the sustainability of family-owned businesses. This finding is also in alignment with the result of Adedayo, Olanipekun and Ojo [6], which indicated that a strong positive relationship exists between succession planning and an organisation's sustainability once the founder grooms the successor with his experience to understand the management intricacies of the business. Furthermore, the finding is in agreement with the findings of Cherono, Towett and Njeje [33], whose study revealed that a significant correlation exists between leadership-knowledge transfer-innovative-talent development mentorship and employees' performance; and with Ofobruku and Nwakoby [23], whose study revealed a positive effect of mentoring on the performance of employees, with career support having a more positive effect on employees' performance than psychosocial support.
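To make the hypothesis-testing procedure concrete, the minimal sketch below reproduces the style of analysis described in the methodology (Pearson's correlation followed by a paired-sample t-test) on hypothetical Likert-scale responses; the variable names and data are illustrative stand-ins, not the study's data:

import numpy as np
from scipy import stats

# Hypothetical paired Likert-scale responses (1-5) from 298 respondents;
# "mentoring" and "sustainability" are illustrative stand-ins for the
# study's questionnaire scores, not its actual data.
rng = np.random.default_rng(42)
mentoring = rng.integers(1, 6, size=298).astype(float)
sustainability = np.clip(mentoring + rng.normal(0.0, 1.0, size=298), 1, 5)

# Pearson's Product Moment Correlation between the paired responses
r, p_corr = stats.pearsonr(mentoring, sustainability)

# Paired-sample t-test: is the mean difference between pairs zero?
t_stat, p_ttest = stats.ttest_rel(mentoring, sustainability)

alpha = 0.05  # 5% level of significance, as applied in the study
print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")
print(f"Paired t  = {t_stat:.3f}, p = {p_ttest:.4f}")
if p_corr < alpha:
    print("Reject H0: the paired responses are significantly correlated.")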
The correlation result in Table 3 indicated a (rho) value of 0.955, indicating a positive and very strong correlation between the variables of interest, while the p-value for the variables is 0.000, which is less than 0.05; there is therefore enough evidence to reject the null hypothesis (H02) and conclude that human capital development has an influence on the sustainability of small family-owned businesses in Anambra State. The implication is that it is important for the successor to acquire some working experience outside the business in order to appreciate the contribution of employees' working relationships to the survival of the business, to acquire knowledge and training in resource management, and to acquire the relevant skills that enhance the successor's operational ability. This finding is in agreement with the finding of Akpan and Ukpai [4], which revealed that manpower training influences the longevity of small-scale businesses. Also, the finding aligns with Akinyele, Ogbari, Akinyele and Dibia [3], whose study confirmed that succession planning and career development have a significant impact on organisational survival; hence, there is a perceived employee need for career development opportunities for progressive advancement, in order to be adequately positioned for the institution's succession needs, thus ensuring the survival and perpetuity of the university. However, the finding contradicts the position of Osibanjo, Abiodun and Obamiro, whose finding indicated that supervisor support, career development and turnover rate all have an insignificant correlation with organisational survival. Conclusions and Recommendations The study therefore concludes that there is a positive and very strong correlation between mentoring and the sustainability of family-owned businesses, and that there is also a positive and strong relationship between human capital development and the sustainability of small family-owned businesses, which implies that mentoring and human capacity development have a positive relationship with the longevity of family-owned businesses. In line with the conclusions and findings of this study, the following recommendations were made: 1) Family businesses desirous of continuity beyond the existence of the founder should identify the successor early enough and adopt mentorship as a process for equipping the successor, who must willingly show genuine interest in the business. 2) Adequate time should be devoted to the training of chosen successors, in order to equip them with the relevant skills to see their businesses survive and successfully pass from one generation to another.
2019-08-03T00:34:34.740Z
2019-05-05T00:00:00.000
{ "year": 2019, "sha1": "2fe5af91d386a7208409ed4059a2cc3bb9cdeed7", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=93599", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5c90703531efa46f25c51b167df15845d2fa0dd3", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
235898468
pes2o/s2orc
v3-fos-license
Non-Oil Sector and Economic Growth in Nigeria: The National Accounts Perspective This study examines the impact of expansion in the non-oil sector on the sustainable economic growth of the Nigerian economy. The study sourced data from the Central Bank of Nigeria (CBN) statistical bulletin covering the period 2000-2019. An economic growth model was formulated using the study variables and estimated using vector auto-regression (VAR) techniques; other diagnostic tests, such as the Roots of Characteristic Polynomial test for VAR model stability, the Augmented Dickey-Fuller test for time series stationarity, and Granger causality tests, were conducted to ensure the reliability of the model estimates. The analysis revealed that the estimated model is stable, while the VAR and variance decomposition results show that real gross domestic product is strongly endogenous in the short run but weakly endogenous in the long run. Further findings suggest that in the long run the non-oil sector is strongly endogenous to real gross domestic product (92% contribution). The study, therefore, recommends diversification of the Nigerian economy by focusing more attention on the agriculture, solid minerals, and service sectors, as they tend to influence economic growth in the long run. More so, improved frameworks of accounting in the area of non-oil revenues are desirable for the accountancy profession. Introduction From the creation of the state called "Nigeria", it has always been an agrarian society, built on cash crops like palm produce, cocoa, groundnut, rubber, and timber. However, the discovery of oil and its boom in the 1970s made the country swing into oil production, which has resulted in a total neglect of the rich agro-productiveness of the country. Anyaehie and Areji (2015) noted that at Nigeria's independence in 1960, the nation's main resources were agriculture and the extraction of solid minerals, until the discovery of oil took over and the country forgot its starting points and lost direction. As noted in the studies of Adams (2016) and Okezie and Azubike (2016), Nigeria was a major producer of groundnuts (peanuts), cocoa, coffee, cotton, palm oil, and rubber, but lost this position because of its over-dependence on oil. The near-total dependence on the oil sector, considering its volatility and the fluctuation of price levels globally, has dire implications for emerging economies. Even though oil revenue has contributed immensely to the economic growth of the country, it has not been a reliable source of revenue due to unpredictable global crises and fluctuations in the price level. For instance, Igberaese (2013) observed that in 1973-1974, in 1980-1981 and in 1981-1986, oil production decreased and oil prices collapsed; in 2001, there was the Asian crisis, while in 2007-2011 there was a global crisis which affected the prices of oil in the global market. Also, from June 2014 up till 2015, oil prices dwindled from $112 per barrel down to $38 per barrel, mainly aggravated by Middle East unrest and war. The global diminution in oil supply and slow demand further pushed the price down to $31.4 in 2016, with negative implications for emerging countries (Mobosi, Okafor, & Asoh, 2017). Hitherto, the governments of Nigeria have not totally turned away from oil revenue, probably in the belief that the world will recover to normalcy in due time.
However, the pandemic outbreak in late 2019, coupled with the shrink in the price of crude oil as a result of the United States of America's reduction in the number of barrels it imports from producing nations, affected the global market terribly. As such, a country whose major revenue is dependent on oil has had no option but to reconsider diversification of its economy towards the non-oil sector. This has in no small measure contributed to the shaking economies of major oil exporters like Nigeria, Saudi Arabia, Iraq and Libya. The 2019-2020 year was a turbulent one for Nigeria due to the double shock of the oil price decline and the COVID-19 pandemic, which prompted a call for reliance on non-oil revenue to provide for the government and the public. Non-oil revenues are proceeds generated from sources other than oil-producing activities. These revenues include those from companies not engaged in oil and gas exploration; the relevant activity sectors comprise agriculture, industry, construction, trade and services. The reasons why diversification of the Nigerian economy is imperative were made manifest at the consultative forum of the Minister of Finance, Budget and Planning and the Organised Private Sector held on 10th July 2020 concerning the impact of the global pandemic caused by COVID-19 on the implementation of the national budget. The forum highlighted the following issues: crude oil prices declined sharply in the world market, with the Bonny Light crude oil price dropping from a peak of US$72.2pb on January 7, 2020 to below US$20pb in April 2020; in effect, the US$57 crude oil price benchmark on which the 2020 budget was based became unsustainable. The impact of these developments is about a 65% decline in projected net 2020 government revenues from the oil and gas sector, with adverse consequences for foreign exchange inflows into the economy. The growth rate in Sub-Saharan Africa is projected to decline to about -3.2% in 2020, with a steep recovery to 3.4% in 2021 (Ministry of Finance, Budget and Planning, 2020). Also, Nigeria experienced a decline in real GDP growth from 2.55% in the 4th quarter of 2019 to 1.87% in the 1st quarter of 2020, an effect of the global COVID-19 pandemic; overall, real GDP was projected to contract by 4.2% in 2020 (NBS, 2020). The Central Bank of Nigeria (2014) reported that agriculture became the second leading sector after oil, its share of Nigeria's GDP having fallen from 48% in 1970 to 20.6% in 1980 before subsequently growing to 23.3% in 2005. Again, the sectoral contribution to Nigerian economic growth stood at 39.21% in 2013 and 41.93% of GDP in the third quarter of 2014 (Orji, 2018). There are diverse views and mixed literature as to whether oil or non-oil revenue will sustain the economy and bring about the even development and economic growth that a country aspires to in meeting the constitutional obligations of its government. As envisaged in Central Bank of Nigeria (1998), cited in Aigbedion and Iyayi (2007), the oil sector has contributed more to total federally generated revenue than the non-oil sectors and should not be neglected but rather diversified, because the sector grew steadily over the years, such that between 1970 and 1998 earnings from oil rose from 75.3% to a peak of 84.1%.
Other researchers are of the view that the oil-based economy has not sufficiently sustained the nation, since poverty levels and the unemployment rate are still on the increase, and they advocate a sustainable path away from oil. Olayungbo and Olayemi (2018) and Oyakhilomen and Zibah (2014) advised the government to channel its revenue base towards a more plural set of sectors in which agriculture plays the lead. This advice is intended to aid our agrarian society, which is ordinarily in line with the theory of comparative advantage. The notion of diversification is a means of channelling effort into producing that for which the nation has greater potential. Hitherto, most studies have supported a total overhaul of the economy rather than of the petroleum sector only; the call has become a national debate. In view of the above, Onodugo, Amujiri, and Nwuba (2015) argued that diversification is very essential, since crude oil is an exhaustible asset and reliance on it can no longer sustain the Nigerian economy. Suberu, Ajala, Akande and Olure-Bank (2015) stated that the options for diversifying an economy are numerous, ranging from agriculture, industrialisation, tourism and financial services to entertainment, information and communication technology and mining. As encapsulated in the work of Uzonwanne (2015), diversification of the economy requires active participation in a wide range of sectors, strongly integrated across diverse regions, that are able to generate vigorous growth and enormous potential for sustaining economic growth. Over time, it has been observed that successive governments have shied away from a total neglect of the oil sector due to its volatility, and they have made efforts to implement policies that would lead to a well-sustained economy. From 2010-2011, a strategic reform known as the Agricultural Transformation Agenda (ATA) was formulated, which stipulated that agriculture is a good venture and that every policy should therefore support it. The ATA was seen as a good proposal to re-engage key stakeholders in Nigerian agriculture and to diversify towards how an agrarian society could be self-sustaining. In 2016, the Central Bank of Nigeria, under the administration of President Muhammadu Buhari, launched the Anchor Borrowers' Programme (ABP), which was aimed at fast-tracking rural farmers' access to finance for productivity. It facilitated a method whereby loans were given to farmers for improved agricultural produce, and encouraged mechanised farming for larger production. More recently, the call by the President Muhammadu Buhari-led administration for Nigeria to move away from oil and rely on locally made products, termed "Grow What You Eat", has been seen as a good move, most especially amid this twofold shock to the country (the fall in oil prices and COVID-19). Hence, this study seeks to ascertain the extent to which the non-oil revenue components (agriculture, solid minerals, trade, and services) can sustain the economy of Nigeria. As Orji (2018) argued, if the Nigerian economy is to be rescued onto the path of sustainable growth and external viability, then it must enthusiastically attend to the question of the place of non-oil exports in Nigeria's economic growth and of what dynamics are accountable for the advancement of the non-oil sector. Review of Related Literature Nigeria has been known as an agrarian economy bestowed with substantial natural resources.
These natural resources include agricultural resources, solid minerals, mines and many others. Agriculture has narrowly been seen as the science of producing food and cash crops for export and for human consumption. Orji (2018) documented that mineral resources are categorised by the Geological Survey Department according to their uses as mineral fuels (lignite, coal, thorium, bitumen and uranium); metallic minerals (manganese, lead, copper, iron, nickel, zinc, aluminium, and tin); and structural building minerals (limestone, stone, gypsum, asbestos, gravel, marble, sand, and ceramic minerals such as fluorspar, clay, dolomite and feldspar). Principally, these mineral resources are in most cases found in rural areas. Despite that, rural areas in Nigeria have not been developed well enough to reflect these God-given endowments: good road networks are not provided, unemployment rates are high, the standard of living is poor, and access to good water and other essential public goods is lacking for the people of the country's rural areas. In the study of Anríquez and Stamoulis (2007), it was observed that of the globally estimated 1.2 billion extremely poor people, 75% live in rural areas, and for the most part they depend on agriculture, forestry, fisheries and related activities for survival. Omorogiuwa, Zivkovic, and Ademoh (2014) maintained that Nigeria has 75 percent of its land suitable for agriculture, but only about 40% is cultivated for this purpose, thereby giving the country enough room to focus on and attend to food security, agricultural plans and employment for all. David, Noah, and Agbalajobi (2016) opined that although the contribution of the mining sector to the country's GDP is about 0.5%, which is very poor and unfavourable compared to some Sub-Saharan countries such as DR Congo, Botswana and Namibia, the sector has the potential to resuscitate the nation and bring the needed economic development if well employed. Ariyo (1997), in Uzonwanne (2015), argued that agriculture in Nigeria has undergone many years of disrepair: unbalanced and defectively conceived government policies, mismanagement, a dearth of meaningful inducements to farmers from government, poor basic infrastructure and numerous bureaucratic bottlenecks in executing policies. Sharing the view of Yesufu (1960), Olayungbo et al. (2018) saw reasons why the government should consider trading oil for non-oil revenue: about 70% of the rural population of Nigeria engaged in one type of agricultural activity or the other, and between 1963 and 1964 the non-oil sector contributed up to 65% of the nation's gross domestic product (GDP). Doki et al. (2019) see diversification as a turning point that would help grow the GDP per capita of the country until such a time as the observed turning point is no longer beneficial and it becomes necessary for the country to re-specialise. Diversification refers to a strategic direction that a nation uses to expand production or markets by means of either internal or external development (Adams, 2016). In the view of Anyaehie et al. (2015), it does not always mean an increase in output but encompasses the stabilisation of economies by diversifying their economic base. Hence, diversification is seen as an adaptive ability with long-term prospects.
Some of the sectoral contributors to the pool of non-oil revenue in Nigeria have been projected to have performed well were it not for the recklessness and abandonment of these sectors by successive governments. Diversification is to be considered according to country-specific needs, and it is a must for a country like Nigeria to embrace diversification in order to come out of the mess of oil price volatility. In the Nigerian case, options for diversifying the economy abound, ranging from agriculture and solid minerals to trade and services. As justified in the study of Olajide, Akinlabi and Tijani (n.d.), the agricultural sector is projected to account for 34.4 percent of the variation in gross domestic product (GDP) between 1970 and 2010 in Nigeria, but it suffered neglect during the heydays of the oil boom in the 1970s. The agricultural sector comprises crop production, livestock, forestry and fishing. According to the National Bureau of Statistics (2019), trade comprises saleable items that can generate revenue for the federal government, while services include arts, entertainment and recreation, transportation, information and communication technology, education, real estate, and human health and social services. Arguably, the mining sector's contribution to economic growth seems saturated, with diminishing returns envisaged. Economic growth, on the other hand, refers to an increase in a country's national output over a period of time, usually one year. It is usually measured as the percentage increase in the Gross Domestic Product (GDP) of a country over a period of one year. Basically, economic growth is captured by two measures, nominal and real. Doki and Tyokohol (2019) observed that nominal GDP absorbs inflation in its computation, while real GDP is the calculation in which an adjustment is made to eliminate the distorting effects of inflation. This study concentrated on real gross domestic product because of this necessary provision for inflation. Ericsson and Löf (2019) observed that the contribution of minerals and mining to GDP and exports reached a maximum during the mining boom in 2011, and regrettably the figures for mining's contribution had declined for most countries by 2016. Mobosi et al. (2017) argued that all the sectors that make up the non-oil sector, such as agriculture, industry, construction, services and trade, should be adequately developed, since they showed serious contributions to output growth during the period of economic boom. The non-oil sector was the nation's glory before the discovery of crude oil and, having realised the pitfalls that dependence on oil has caused the nation, the government has made enormous efforts to re-awaken this dead glory. NEITI (2013) reported in its audit of 2007-2010 that over N2.21 billion was remitted as royalties by companies operating in the sector, that about N51.4 billion was realised as taxes, and that annual surface rent payments amounted to over N173.94 million, alongside N122.92 million in levies to the Nigerian government. Most countries of the world now channel their strength into producing goods and rendering services that they are originally known for. It is succinctly documented in O'Toole (2007) that comparative cost advantages are now the reasoning behind some countries producing agricultural and mineral commodities while others produce industrial goods.
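As a stylised illustration of this reasoning (the output figures below are invented for exposition, not drawn from Nigerian data), suppose one worker-year yields either 10 tonnes of agro produce or 2 units of manufactures in country A, and either 12 tonnes or 6 units in country B. The opportunity costs of agro produce are then:

\[
\text{country A: } \frac{2}{10} = 0.2 \text{ units per tonne},
\qquad
\text{country B: } \frac{6}{12} = 0.5 \text{ units per tonne},
\]

so country A holds the comparative advantage in agro produce even though country B is absolutely more productive in both goods, and both countries gain by specialising and trading.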
In a situation where a country's share of agriculture in overall employment is huge, broad-based growth in agricultural incomes should actually be encouraged to prompt growth in the overall economy (Oyakhilomen, et al., 2014). The underpinning theory of this study is the comparative cost advantage theory. It was propounded by Adam Smith in 1776 and further developed by David Ricardo. The proponents of comparative cost advantage argued that a country should embark on the production of those goods, or the rendering of those services, that it can produce best. Comparative cost advantage is believed to favour diversification of the Nigerian economy simply because it centres on the fact that a country like Nigeria should produce more of her "best". In this case, Nigeria should comfortably rely on producing agro produce rather than depending on exhaustible oil over which she has no control. Igberaese (2013) observed that mainstream economics argues that countries should produce and export according to their comparative advantage, and that this will benefit countries if they accept the cost advantage of the trading partner and focus on producing a commodity in which they can play a leading role. Edeme, Onoja, and Damulak (2018) believed that proper records and accounts of solid minerals have not been reflected in the nation's economy, and as such there is a need for a wide awakening of inter-agency cooperation to monitor the volume of mineral resources illicitly leaving the shores of the nation without proper account. The need to increase revenue from this sector has brought about many programmes initiated by the government to revitalise the agricultural sector (Ogunbiyi & Abina, 2019). Some of these programmes in times past were the Anchor Borrowers' Programme (ABP), the National Economic Empowerment and Development Strategy (NEEDS) and the Agricultural Transformation Agenda (ATA). The most recent among these programmes is diversification from oil dependence to a non-oil base. This diversification should be holistic in nature and, as such, should embrace accountability and transparency. Suberu et al. (2015) researched the diversification of the Nigerian economy towards sustainable growth and economic development, employing the descriptive method of analysis. The study divulged that for the nation to break loose from the challenges intrinsic to a mono-economy, particularly one dominated by oil revenue, which is subject to price shocks and unfavourable quota arrangements, there is a need for diversification; it suggested the agricultural sector as the probable choice for diversifying the economy. As a matter of fact, the unimpressive performance of non-oil revenue in times past promoted oil revenue as the alternative source. However, the oil sector is characterised by external factors ranging from price fluctuations to oil demand. The period between December 2019 and July 2020 left the world with unending ugly stories about the fall in oil prices which COVID-19 brought; hence, agrarian nations opt for diversification of their economies towards non-oil revenue. Uzonwanne (2015) saw the need to research economic diversification in Nigeria in the face of dwindling oil revenue, making use of secondary data with the help of the descriptive method.
The data revealed that Nigeria's over-dependency on oil has contributed to the poor management of human capital/resources, which has led to the migration of many talented citizens of the country to other countries in search of a better life. It maintained further that the neglect of non-oil revenue has led to constant depreciation in the GDP of the country; hence the clarion call for urgent diversification of the Nigerian economy. More so, there exists a positive relationship between economic growth in Nigeria and diversification into other sectors, since proper management of human resources, huge investment and concentration on agriculture have brought economic value. The study therefore recommends that the Nigerian government should urgently create an enabling environment that will favour diversification of the economy, discourage the mono-economy system and pay more attention to a heterogeneous economy. Oyakhilomen et al. (2014) researched the relationship between agricultural production and the growth of the Nigerian economy with the aim of poverty reduction. They employed time series data with the help of unit root tests and the bounds (ARDL) testing approach to co-integration. The results revealed that agricultural production was significant in influencing the favourable trend of economic growth in Nigeria. It was recommended that adequate policies be designed and implemented to alleviate rural poverty through stepped-up investments in agricultural development. Orji (2018) studied the expansion of Nigeria's economy via solid minerals and agriculture in the light of a declining economy. The study employed correlation, co-integration and regression tests. The results revealed that agricultural commodity export prices have a significant and positive effect on Nigerian economic growth, and that solid mineral production has significant short- and long-run impacts on the Nigerian economy. The study recommended the implementation of a comprehensive inventory of mineral resource prospects, as well as actively upholding the development of these resources for both local and foreign consumption. Onodugo et al. (2015) examined the diversification of the Nigerian economy as regards economic development. They discovered that for the Nigerian economy to be diversified there is a dire need for a paradigm shift in economic policies and for the political will to implement changes in policies; their data also revealed that the neglect of agriculture has led to constant depreciation in the GDP of the country. Olayungbo and Olayemi (2018) studied the dynamic relationships among non-oil revenue, government spending and economic growth in Nigeria. They estimated an error correction model, impulse responses and Granger causality tests, and the findings were mixed. First, the study revealed a negative effect of government spending on economic growth, while non-oil revenue showed a positive effect on economic growth. Second, it found that non-oil revenue shocks affect economic growth negatively, while the government spending shock was positive. David et al. (2016) analysed the role of the mining sector in Nigeria's economic development, making use of time series data with the help of an Error Correction Model in ascertaining the relationship between the mining sector and economic development. The findings disclosed that the value of solid minerals has a positive relationship with economic development in the country.
It therefore recommended that Nigeria urgently develop her enormous mining potential in a manner that diversifies the economy and achieves rapid economic growth. Olajide, Akinlabi, and Tijani (n.d.) examined the association between agricultural resources and economic growth in Nigeria. Economic growth was proxied by gross domestic product (GDP). It employed the ordinary least squares (OLS) regression method. The results disclosed a positive relationship between gross domestic product (GDP) and agricultural output in Nigeria. It concluded that the government should make concerted efforts to improve the agricultural sector by granting farmers incentives, access to good roads, and adequate funding. Okezie and Azubike (2016) researched the impact of non-oil revenue on government revenue and economic growth in Nigeria. It considered secondary data obtained from the statistical bulletin of the Central Bank of Nigeria, which were analyzed using ordinary least squares regression. The results of the analysis showed a positive and significant contribution of non-oil revenue to economic growth and a positive but slightly insignificant contribution to government revenue. It proposed diversification to the government; any effort to sabotage this course must be nipped in the bud, as the development of the non-oil sector remains a veritable channel for tapping into Nigeria's hidden wealth. Ogunbiyi and Abina (2019) researched the nexus between oil and non-oil revenue and economic growth. Economic development was proxied by the human development index as the dependent variable, while oil and non-oil revenue were used as independent variables. It obtained data from the Central Bank of Nigeria bulletin and Index Mundi for the period 1981 to 2018, and employed descriptive statistics, the Augmented Dickey-Fuller unit root test, Johansen co-integration and error correction estimates. The estimated results disclose that oil revenue has a negative but significant relationship with the human development index. The negative contribution is believed to stem from the resource-curse ideology. On the other hand, non-oil revenue has a positive but insignificant association with the dependent variable. Aptly, the study underscored the need for diversification of exportable products. Mobosi, et al. (2017) studied how government diversification policy and the industrial sector have impacted output growth in Nigeria. It made use of time series data for the period 1970-2016, adopting an error correction classical linear regression approach and trend analyses. On average, the results revealed that industrial share of GDP and output growth per capita in Nigeria exhibit positive reactions to observable changes in the index of government diversification (DIV), human capital per person (HK), the number of persons employed (EMP), and domestic credit allocated to the private sector by banks (CRA). Edeme, et al. (2018) studied the role of solid mineral development in attaining sustainable growth in Nigeria. It adopted time series data with emphasis on GDP per capita, foreign trade balance, solid minerals output, domestic interest rate, gross domestic savings and inflation from 1960-2015. The linear growth regression model showed that solid minerals positively and significantly affect sustainable growth.
It further revealed that solid minerals are strongly significant but negatively associated with foreign exchange, due largely to the illegal movement of mineral commodities across the shores of the country. It recommended that much attention be focused on the development of solid minerals to shield the economy from the vagaries of the present economic woes. Doki, et al. (2019) carried out a study on how export diversification affects economic growth. It made use of the bounds co-integration test and the error correction model (ECM) under the autoregressive distributed lags (ARDL) framework. The results showed that export diversification has a positive but insignificant effect on economic growth in Nigeria in both the long and short run. It thus recommended that the government intensify the effort to diversify the economy and properly channel resources towards manufacturing and service exports, with the expectation that the bulk of revenue will come from these sectors.

The order of integration of the series was examined with the Augmented Dickey-Fuller (ADF) regression which, reconstructed in standard notation from the definitions given in the text, is

$\Delta y_t = \alpha + \delta y_{t-1} + \sum_{i=1}^{n} \gamma_i \Delta y_{t-i} + \varepsilon_t$    (12)

where $\Delta$ is the first difference operator, $\alpha$ and $\delta$ are constant parameters and $\varepsilon_t$ is a stationary stochastic process. To determine the order of integration of a series, equation 12 is modified to capture the second difference of the lagged first difference and n lags of the second difference as follows:

$\Delta^2 y_t = \mu + \theta \Delta y_{t-1} + \sum_{i=1}^{n} \gamma_i \Delta^2 y_{t-i} + \varepsilon_{it}$

where $\mu$ and $\theta$ are constant parameters. The n lagged difference terms are included so that the error terms $\varepsilon_t$ and $\varepsilon_{it}$ in both equations are serially independent. A stationary time series is said to be integrated of order zero, I(0), and a time series $y_t$ is defined to be integrated of order one, I(1), if $\Delta y_t$ is a stationary time series (Gujarati, 2003). Also, we conducted a Granger causality test to estimate the short-run link among the variables.

Results and Discussions

Descriptive statistics were computed to determine whether the data set has a normal distribution. They cover the mean and median as measures of average, the standard deviation as a measure of spread and variation, skewness, which looks at symmetry, and kurtosis, which looks at the centrality of the peak. From the results, none of the non-oil revenue variables exhibited negative average values. The skewness values for all the variables were negative, implying that they are skewed to the left; two of the variables are approximately symmetric, as their values of -0.1198 and -0.2258 are greater than -0.5, while the remaining values (-0.5478, -0.9368 and -0.6509) are moderately skewed, being greater than -1. Furthermore, the Jarque-Bera statistics do not reject the null hypothesis that all the variables are normally distributed. To check for randomness in the data series, a runs test of randomness was carried out. The runs test suggests that we reject the null hypothesis that the sequence was produced in a random manner and accept the alternative of no randomness in the series, as the p-values of all variables were less than 1%. Prior to assessing the conditional variance, it is practical to test for unit roots in the series using the Augmented Dickey-Fuller (ADF) test. The ADF unit root test indicates that RGDP and SOLMNR were stationary at order 2, SRVCS and TRADE were stationary at order 1, while AGRIC was stationary at level.
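The pre-estimation checks just described can be sketched with standard statistical libraries. The following is a minimal illustration, not the authors' code: it assumes statsmodels and scipy are available, and since the underlying CBN series are not reproduced here, simulated random-walk data stand in for RGDP, AGRIC, SOLMNR, SRVCS and TRADE.

```python
# A minimal sketch of the descriptive-statistics and ADF checks above.
# Not the authors' code: simulated data stand in for the CBN series.
import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis, jarque_bera
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
names = ["RGDP", "AGRIC", "SOLMNR", "SRVCS", "TRADE"]
df = pd.DataFrame({n: np.cumsum(rng.normal(size=40)) for n in names})

# Mean, median, spread, skewness, kurtosis and the Jarque-Bera normality test
for n in names:
    x = df[n].to_numpy()
    jb_stat, jb_p = jarque_bera(x)
    print(f"{n}: mean={x.mean():.2f} median={np.median(x):.2f} "
          f"sd={x.std(ddof=1):.2f} skew={skew(x):.2f} "
          f"kurt={kurtosis(x):.2f} JB p={jb_p:.3f}")

# ADF test, differencing until stationarity to find the order of integration:
# stationary in levels -> I(0), after one difference -> I(1), and so on.
def integration_order(x, alpha=0.05, max_d=2):
    for d in range(max_d + 1):
        if adfuller(x, autolag="AIC")[1] < alpha:  # [1] is the ADF p-value
            return d
        x = np.diff(x)
    return max_d

for n in names:
    print(f"{n}: I({integration_order(df[n].to_numpy())})")
```

On the actual series, the same loop should reproduce the orders reported above: I(2) for RGDP and SOLMNR, I(1) for SRVCS and TRADE, and I(0) for AGRIC.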
A correlation test was carried out to understand the relatedness among the variables using the Pearson test of correlation; the resulting matrix relates real gross domestic product to the contributions of the non-oil revenue variables to RGDP. The results show that the correlations between RGDP and the other non-oil revenue variables are positive and statistically significant. This is expected, as each of the variables is part of aggregate RGDP. The stability of the AR roots of the polynomial was tested using the AR root table and AR root graph diagnostic tests. These two tests report the inverse roots of the characteristic AR polynomial; the estimated VAR is stable (stationary) if all roots have modulus less than one (in the table) and lie inside the unit circle (in the graph). If the VAR is not stable, certain results (such as impulse response standard errors) are not valid. There will be kp roots, where k is the number of endogenous variables and p is the largest lag. If a VEC with r cointegrating relations is estimated, k − r roots should be equal to unity. The results show that the VAR is stable, as all the moduli except one are less than one, and this can be confirmed from the AR roots graph. In the RGDP model, RGDP strongly influences itself: going by the t-statistic of 1.686 on RGDP(-1), the past realization of RGDP is associated with a 141.9% increase in RGDP on average, ceteris paribus. For the AGRIC coefficient, a percentage increase in AGRIC accounts for a 92% increase in RGDP; solid minerals (SOLMNR(-2)) and services (SRVCS(-1)) also have a significant influence on real gross domestic product. In the AGRIC model, the past realization of AGRIC strongly influences itself, going by the t-statistic of 4.76; solid minerals (SOLMNR(-1)) and trade (TRADE(-1)) also have a great influence on AGRIC. In the SOLMNR model, only the past realization of solid minerals (SOLMNR(-1)) has a strong influence on itself; the other variables exhibit a weak influence. In the SRVCS model, real gross domestic product (RGDP(-2)), agriculture (AGRIC(-1), AGRIC(-2)), solid minerals (SOLMNR(-1)) and services itself (SRVCS(-1)) strongly predict revenue from government services. Likewise, in the trade model, all the other non-oil revenue sources significantly predicted government revenue from trade. For the OLS estimates of the individual models, the adjusted R² shows a good model fit, as its values were close to 1, with very high F-statistics. A variance decomposition (VD) test was carried out to determine how much of the future uncertainty of one time series is caused by future shocks to the other time series in the model. This can evolve over time, so shocks to a series may not be very important in the short run but very important in the long run. In this study, the researchers selected 5 years as the forecast period; years one and two are interpreted as the short-run period, while years 3 to 5 are interpreted as the long-run period. A sketch of how the stability check and this decomposition could be computed is given below.
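The sketch below continues the earlier one (same simulated stand-in data and caveats). Note that statsmodels reports the roots of the characteristic polynomial, which are the reciprocals of the "inverse roots" quoted from the table and graph above.

```python
# Fit a VAR, check stability via the AR roots, then compute the FEVD.
import numpy as np
from statsmodels.tsa.api import VAR

res = VAR(df).fit(maxlags=2)  # df comes from the previous sketch

# res.roots are characteristic-polynomial roots: the VAR is stable when
# their moduli exceed one, equivalently when the inverse roots (the
# EViews-style table/graph convention) lie inside the unit circle.
print("stable:", res.is_stable())
print("inverse-root moduli:", np.sort(1.0 / np.abs(res.roots)))

# Forecast error variance decomposition over a 5-year horizon:
# years 1-2 read as the short run, years 3-5 as the long run.
res.fevd(5).summary()
```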
From the result of the RGDP model, in the short run, 100% of the forecast error variance in real gross domestic product is explained by the variable itself in year one. In year two, 74.6% of the forecast error variance is explained by the variable itself, while approximately 25% is explained by the other variables, with AGRIC having the highest impact (10.8%). This means the other variables in the model do not have any strong influence on RGDP in the short run; RGDP is strongly exogenous in the short run. In the long run, the analysis shows that real gross domestic product does not have a strong influence on itself: its own share of forecast error variance in year three is 7.5%, while revenue from agriculture, revenue from services, and revenue from solid minerals account for 35.2%, 30.9%, and 26.3%, respectively. Similar findings were evident in years four and five. This implies that in the long run, real gross domestic product is strongly influenced by agriculture, services, and solid mineral revenues. From the result of the AGRIC model, in the short run, 99.9% of the forecast error variance in real revenue from agriculture is explained by the variable itself in year one. In year two, 47.6% of the forecast error variance is explained by the variable itself, while approximately 52% is explained by the other variables, with SRVCS having the highest impact (31.5%). This means the other variables in the model do not have any strong influence on AGRIC in year one, but in year two they have a significant influence on the variable itself. In the long run, the analysis shows that revenue from agriculture does not have a strong influence on itself: its own share of forecast error variance in years three to five is below 40%, while the other variables control over 60% of the variation, with revenue from services and solid minerals controlling the major share. The result for SOLMNR shows that it does not influence itself in either the short or the long run. In the short run, less than 35% of the variability in its forecast error variance is caused by itself, while in the long run, less than 25% is caused by itself. Real gross domestic product accounts for 63.1% and 40.0% of the variability in solid mineral revenue in years one and two, respectively, while revenue from agriculture and services strongly influences solid mineral revenue throughout the long-run periods. In the SRVCS and TRADE models, the variables do not have a strong influence on themselves in either the short or the long run. Revenue from services is basically influenced by revenue from agriculture and solid minerals, while in the TRADE model, real gross domestic product and agriculture strongly influence trade in the short run; agriculture, services, and solid minerals influence it in the long run. The aim of conducting pairwise Granger causality tests is to check whether an endogenous variable can be treated as an exogenous or explanatory variable. From the results, none of the variables has a significant probability value, suggesting that none has the power to Granger-cause RGDP single-handedly, but all of them jointly can cause a change in RGDP at the 10% level of significance (p-value 0.0819). These findings are in line with the variance decomposition results for RGDP, where most of the variables jointly pose a strong influence on the RGDP forecast error variance in the long-run period. A sketch of such pairwise tests follows.
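Under the same assumptions as the earlier blocks (simulated stand-in data), the pairwise tests could be run with statsmodels' grangercausalitytests, which tests whether the second column of a two-column array Granger-causes the first:

```python
# Pairwise Granger causality of each non-oil series toward RGDP.
from statsmodels.tsa.stattools import grangercausalitytests

for n in ["AGRIC", "SOLMNR", "SRVCS", "TRADE"]:
    out = grangercausalitytests(df[["RGDP", n]], maxlag=2)
    f_stat, p_val, _, _ = out[2][0]["ssr_ftest"]  # F-test at lag 2
    print(f"{n} -> RGDP: F={f_stat:.2f}, p={p_val:.3f}")
```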
Conclusion and Recommendations

This study has explored the impact of non-oil sector diversification on the economic growth of Nigeria. The study's conclusion revolves around the fact that RGDP is strongly endogenous in the short run but weakly endogenous in the long run. Specifically, our results reveal that in the long run, real gross domestic product does not have a strong influence on itself: its own share of forecast error variance in year three is 7.5%, while revenue from agriculture, revenue from services, and revenue from solid minerals account for 35.2%, 30.9%, and 26.3%, respectively; similar findings were evident in years four and five. This implies that in the long run, real gross domestic product is strongly influenced by agriculture, services, and solid mineral revenues. Further findings suggest that in the future, revenue from agriculture, solid minerals, and services will strongly influence economic growth, as they are strongly endogenous to RGDP. The results of our study corroborate the findings of prior studies conducted by Orji (2018), Edeme, et al. (2018), and Doki, et al. (2019). The study therefore recommends diversification of the Nigerian economy by focusing more attention on the agriculture, solid minerals, and service sectors, as they tend to influence economic growth in the long run. More so, improved accounting frameworks in the areas of non-oil revenue are desirable for the accountancy profession. Monetary authorities such as the CBN should, in conjunction with commercial banks, intensify action to grant interest-free loans to small and medium enterprises engaged in agro and allied production. Infrastructure such as constant power supply, a good road network and security architecture should be provided by government to facilitate production, trade and services. In addition, incentives and tax holidays should be granted to local and international investors interested in investing in the non-oil sectors of the Nigerian economy. Appropriate legislation should be enacted by regulatory authorities to stop illegal mining, and sophisticated technology should perhaps be deployed to boost the mining sector, generating employment and government revenue at all levels.
Studies on nickel(II) and palladium(II) complexes with some tetraazamacrocycles containing tellurium

The synthesis of 10-membered and 12-membered tellurium-containing tetraazamacrocyclic complexes of divalent nickel and palladium by template condensation of diaryltellurium dichlorides (aryl = p-hydroxyphenyl, 4-hydroxy-3-methylphenyl, p-methoxyphenyl) with 1,2-diaminoethane and 1,3-diaminopropane in the presence of the metal dichloride is reported. The resulting complexes were subjected to elemental analyses, magnetic measurements, and electronic absorption, infrared, and proton magnetic resonance spectral studies. The formation of the proposed macrocyclic skeletons and their donor sites were identified based on the spectral studies. A distorted octahedral structure for the nickel complexes and a square-planar structure for the palladium complexes in the solid state are suggested.

INTRODUCTION

The coordination chemistry of organotellurium ligands containing hard donor atoms, such as nitrogen and oxygen, along with soft tellurium is interesting, as such a ligand framework can provide insight into the competitive coordination behavior between hard and soft donors towards a metal center. 1,2,10-12 Some recent publications 13-15 show the development of tellurium-containing macrocycles. Srivastava et al. 16 reported the route to the synthesis of metal complexes with tellurium-containing macrocycles. In continuation of earlier work, 17,18 the synthesis and characterization of divalent nickel and palladium complexes with six novel tellurium tetraazamacrocycles (Te2N4 system) are reported herein.

RESULTS AND DISCUSSION

The formation of diaryltellurium(IV) dichlorides by the reactions of TeCl4 with phenol, 19 o-cresol 20 and anisole 21 involves two-step reactions. The first step is an electrophilic substitution of the phenyl ring by a trichlorotellurium moiety at the position para to the hydroxyl or methoxy group. This can be represented by the following equation:

R-H + TeCl4 → RTeCl3 + HCl (1)

(R = p-hydroxyphenyl, p-methoxyphenyl or 4-hydroxy-3-methylphenyl).

In the second step, these aryltellurium trichlorides further react with phenol/o-cresol or anisole to give the diaryltellurium(IV) dichlorides as per the following equation:

RTeCl3 + R-H → R2TeCl2 + HCl (2)

These diaryltellurium dichlorides, when refluxed with 1,2-diaminoethane or 1,3-diaminopropane in the presence of NiCl2/PdCl2 in 2:2:1 molar ratios, yielded 10-membered and 12-membered tetraazamacrocyclic complexes, respectively, as shown in Scheme 1. These complexes are colored, crystalline solids, fairly stable in dry air and soluble only in polar donor organic solvents. The analytical data and physical properties of the complexes are presented in Table I.

Infrared spectra

The important IR bands and their assignments are reported in Table II. The spectra are quite complex, and an attempt has thus been made to draw conclusions by comparing the spectra of the metal complexes with those of the corresponding constituent diaryltellurium dichlorides and diaminoalkanes.
The metal complexes under study did not show bands characteristic of a free NH2 group; instead, all the complexes exhibit a single sharp absorption band at around 3180-3250 cm-1 (sometimes overlapped with O-H) attributed to ν(N-H) vibrations. The assignment of this sharp band is based on the fact that macrocyclic ligands that have a coordinated secondary amino group show bands 18,22-24 in the vicinity of 3200 cm-1. This contention finds support 22 from the appearance of bands of medium to strong intensity at 1627-1655 cm-1 and 809-827 cm-1, assigned to N-H deformation coupled with N-H out-of-plane bending vibrations. The bands at 1156-1185 cm-1 may reasonably be assigned to C-N stretching vibrations. 18,25,26 The above observations strongly suggest 18,22,25,26 that the proposed macrocyclic framework was formed. The formation of a tellurium-containing macrocyclic ring is supported by the appearance of new weak-intensity bands around 420-410 cm-1 due to Te-N. 18,27 Evidence for the formation of the proposed macrocycles and coordination through N atoms is further supported by new medium to weak intensity bands at around 480-450 cm-1, assignable to Ni-N stretching. 28 The M-Cl and Pd-N vibrations could not be ascertained due to the non-availability of far-infrared data.

Proton magnetic resonance spectra

The proton chemical shifts for the metal complexes with 10-membered and 12-membered tetraazamacrocycles that are soluble in DMSO-d6 are presented in Tables III and IV, respectively. The phenyl protons in the metal complexes resonated slightly upfield (6.89-7.88 ppm) from those of the parent diaryltellurium dichlorides, 19,20,29 due to an increase in electron density at the tellurium atom as a result of the replacement of 2 Cl by 2 N atoms. Ethylenediamine, H2N-(CH2)2-NH2, shows 30 two sets of four equivalent protons each, at 1.19 (a) and 2.74 ppm (b). The metal complexes did not show any signal attributable to free -NH2 groups; instead, a broad singlet at around 1.74-2.04 ppm was observed, which may be assigned to a coordinated secondary amino group. 31 This confirms the formation of the proposed 10-membered macrocycle skeleton. The deshielding of the -NH- protons further suggests the donation of electron density to the metal ions. The methylene protons in these metal complexes resonated at 2.17-2.50 ppm as a multiplet, as reported for other tetraazamacrocycles derived from ethylenediamine. 18,24,26 1,3-Diaminopropane shows 32 proton resonances at 1.15 (4H), 2.76 (4H) and 1.59 (2H) ppm due to the amino, methylene (adjacent to N) and middle methylene groups, respectively. The metal complexes did not show any signal due to free amino groups. Instead, a broad singlet at 1.78-1.91 ppm, assignable to a coordinated secondary amino group, 31 confirms the formation of a 12-membered tellurium-containing tetraazamacrocycle skeleton. The middle methylene protons and those adjacent to the N atoms resonate at 2.01-2.50 ppm and 2.86-3.33 ppm, respectively. This behavior of the complexes under study is quite similar to that of other tetraazamacrocycles derived from 1,3-diaminopropane. 18,24,33 Furthermore, the independence of the aryl proton chemical shifts from the nature of the metal ions precludes the possibility of a Te-M bond. The proton magnetic resonance studies on these Ni(II) and Pd(II) complexes support the tetradentate nature of these ligands through four N atoms, as predicted by the infrared studies.
Electronic absorption spectra and magnetic studies

The electronic absorption and magnetic moment data for the complexes are presented in Table V. The electronic spectra of all six Ni(II) complexes exhibited three spin-allowed transitions, 3A2g → 3T2g, 3T1g(F) and 3T1g(P), which appeared at 9850-10700 cm-1, 13600-15052 cm-1 and 24096-28668 cm-1, respectively. This spectral pattern corresponds to an octahedral/distorted octahedral geometry. 16,34,35 Also, the ratio ν2/ν1 ≈ 1.4 is indicative 36,37 of an octahedral stereochemistry for all these Ni(II) complexes. The third spin-allowed d-d transition appeared as a broad shoulder on the CT bands and extended up to 22×10^3 cm-1, as reported 38 for other tetraazamacrocyclic complexes of Ni(II). The magnetic moment values of the studied Ni(II) complexes (2.90-3.44 μB) also suggest an octahedral stereochemistry for these complexes and rule out the possibility of a square-planar geometry. 39,40 The Pd(II) complexes under study displayed two bands at 20829-24752 cm-1 and 26900-28800 cm-1, which may be assigned to the 1A1g → 1A2g and 1A1g → 1B1g transitions. These transitions in the Pd(II) complexes establish a square-planar coordination around palladium. 34,41 This was further supported by their diamagnetic nature. Based on the above studies, nickel appears to be hexa-coordinated, especially in the solid state, presumably in a distorted octahedral fashion involving four N atoms of the tetraazamacrocycle and two chlorine atoms, whereas palladium is tetra-coordinated in a square-planar arrangement involving four N atoms of the macrocyclic ring. However, the proton magnetic resonance spectral pattern of the nickel complexes also indicated 42 the presence of a diamagnetic square-planar configuration. Probably, dissociation of the chloride anions occurs in solution and the nickel(II) complexes form an equilibrium of octahedral (paramagnetic) and square-planar (diamagnetic) species.
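As a quick numerical aside (not part of the original paper), the ν2/ν1 diagnostic quoted above can be checked directly from the reported band ranges:

```python
# nu2/nu1 ratio from the reported Ni(II) band positions (cm-1); values
# near 1.4 are taken above as indicative of octahedral stereochemistry.
nu1 = (9850, 10700)   # 3A2g -> 3T2g
nu2 = (13600, 15052)  # 3A2g -> 3T1g(F)
print(f"nu2/nu1 spans {nu2[0]/nu1[1]:.2f} to {nu2[1]/nu1[0]:.2f}")  # ~1.27-1.53
```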
EXPERIMENTAL

All the preparations were performed under a dry N2 atmosphere, and the solvents were dried and purified by standard methods before use.

Synthesis of metal complexes with tellurium-containing 10-membered and 12-membered tetraazamacrocycles

The complexes were prepared by template condensation of the diaryltellurium dichlorides with diaminoalkanes in the presence of metal dichlorides in a 2:2:1 molar ratio. A general procedure is given below. A saturated methanolic solution of 4.0 mmol of diaryltellurium dichloride (1.538 g, 1.650 g and 1.650 g for bis(p-hydroxyphenyl), bis(4-hydroxy-3-methylphenyl) and bis(p-methoxyphenyl)tellurium dichlorides, respectively) was added to ethylenediamine (0.240 g, 4.0 mmol) or 1,3-diaminopropane (0.296 g, 4.0 mmol) in about 10 mL dry methanol under constant stirring. An immediate change in color was observed, along with a little turbidity. The contents were stirred and refluxed for about 3 h. This was followed by the addition of a solution of 2.0 mmol of metal dichloride (0.575 g and 0.355 g for NiCl2·6H2O and PdCl2, respectively) in about 10 mL methanol. This resulted in a distinct color change along with slight precipitation of a solid product. The solution was then refluxed for about 6 h and cooled. The small amount of colored solid that separated was filtered off, and the filtrate was concentrated to about one third of its original volume and kept in a freezer (0 °C) overnight to obtain a second crop of the crystalline product. This was filtered, washed with benzene and dried in a vacuum desiccator over P4O10. The purity of these compounds was controlled by TLC using silica gel-G.

Analytical methods and physical measurements

Carbon, hydrogen, and nitrogen analyses were obtained micro-analytically from the Sophisticated Analytical Instrumentation Facility (SAIF), Panjab University, Chandigarh. The tellurium and chlorine contents were determined volumetrically 43 and palladium gravimetrically. 43 Nickel was estimated by atomic absorption spectrophotometry. The IR spectra were recorded in the region 4000-400 cm-1 at the SAIF on a Perkin Elmer Model 2000 FTIR spectrometer using the KBr pellet technique. The 1H-NMR spectra were recorded at Kurukshetra University, Kurukshetra, on a Bruker XWIN-NMR Avance 300 operating at 300.13 MHz in DMSO-d6, using tetramethylsilane as an internal reference. The magnetic suscepti-

Scheme 1. Formation of the Ni(II) and Pd(II) complexes.
Table I. Physical characteristics and analytical results of the metal complexes.
Table II. Important IR data (cm-1) for the metal complexes (s - strong, m - medium, vs - very strong; some bands mixed with the moisture band).
Neurogenic bowel dysfunction in patients with spinal cord injury, myelomeningocele, multiple sclerosis and Parkinson's disease

Abstract

Exciting new features have been described concerning neurogenic bowel dysfunction, including interactions between the central nervous system, the enteric nervous system, axonal injury, neuronal loss, neurotransmission of noxious and non-noxious stimuli, and the fields of gastroenterology and neurology. Patients with spinal cord injury, myelomeningocele, multiple sclerosis and Parkinson's disease present with serious upper and lower bowel dysfunctions characterized by constipation, incontinence, gastrointestinal motor dysfunction and altered visceral sensitivity. Spinal cord injury is associated with severe autonomic dysfunction, and bowel dysfunction is a major physical and psychological burden for these patients. An adult myelomeningocele patient commonly has multiple problems reflecting the multisystemic nature of the disease. Multiple sclerosis is a neurodegenerative disorder in which axonal injury, neuronal loss, and atrophy of the central nervous system can lead to permanent neurological damage and clinical disability. Parkinson's disease is a multisystem disorder involving dopaminergic, noradrenergic, serotoninergic and cholinergic systems, characterized by motor and non-motor symptoms. Parkinson's disease affects several neuronal structures outside the substantia nigra, among which is the enteric nervous system. Recent reports have shown that the lesions in the enteric nervous system occur in very early stages of the disease, even before the involvement of the central nervous system. This has led to the postulation that the enteric nervous system could be critical in the pathophysiology of Parkinson's disease, as it could represent the point of entry for a putative environmental factor to initiate the pathological process. This review covers the data related to the etiology, epidemiology, clinical expression, pathophysiology, genetic aspects, gastrointestinal motor dysfunction, visceral sensitivity, management, prevention and prognosis of neurogenic bowel dysfunction in patients with these neurological diseases. Embryological, morphological and experimental studies on animal models and humans are also taken into account.
INTRODUCTION

ETIOLOGY

SCI etiology is generally divided into traumatic and nontraumatic causes [12]. The onset of NTD occurs at 21-28 days of embryonic development [13]. MMC results from a lack of closure of the neural tube during this stage [14]. Its etiology is complex, involving both genetic and environmental factors [15]. A maternal effect, as well as a gender-influenced effect, has been suggested as part of its etiology [16]. Although there are more than 200 small animal models of NTD, most of them do not replicate the human disease phenotype. The candidate genes studied for risk association with spina bifida include those important in folic acid metabolism, glucose metabolism, retinoid metabolism and apoptosis, and those that regulate transcription in early embryogenesis [17]. MS is an etiologically unknown disease with no cure [7]. It is the leading cause of neurological disability in young adults, affecting over two million people worldwide. MS has been considered a chronic, inflammatory disorder of the CNS white matter in which demyelination results in the ensuing physical disability. Recently, MS has become increasingly viewed as a neurodegenerative disorder in which axonal injury, neuronal loss, and atrophy of the CNS can lead to permanent neurological damage and clinical disability [9]. GI dysmotility in PD has been attributed to peripheral neurotoxin action [18]. Recently, it has been suggested that sporadic PD has a long prodromal period and that several non-motor features develop during this period. Hawkes et al [19] proposed that a neurotropic viral pathogen may enter the brain via the nasal route, with anterograde progression into the temporal lobe, or via the gastric route, secondary to the swallowing of nasal secretions. These might contain the neurotropic pathogen that, after penetration of the epithelial lining, could enter the axons of the Meissner plexus and, through transsynaptic transmission, reach the preganglionic parasympathetic motor neurons of the vagus nerve. This would allow retrograde transport into the medulla and from there into the pons and midbrain until the substantia nigra is reached [19]. A summary of the suggested pathogenesis of GI disorders underlying PD is shown in Table 1.

EPIDEMIOLOGY

Traumatic SCI represents a significant public health problem worldwide [20]. Each year, 11 000 individuals are estimated to sustain SCI in the United States [21], with a mortality rate of 27.4 per million people.
An annual incidence of 33.6 per million is reported in Greece and 19.5 per million in Sweden [22], while in Denmark the number of SCI patients is about 3000. NTD is the second most common birth defect, with an incidence of 1/1000; MMC is the most common subtype (66.9%) [16]. NTD is rarely reported in black Americans and Japanese, but is not so rare in Cameroon and among sub-Saharan black Africans, with an incidence of 1.9 cases per 1000 births [23]. In Switzerland, the incidence of NTD in children is 0.13 per thousand, corresponding to 9-10 affected newborns each year [15], while in Thailand the incidence is 0.67 per 1000 births [24]. NTD is reported in adolescents aged 15-18 years [25] and in young adults aged 20-23 years [26]. MS affects young and middle-aged people [27]; the mean age at disease onset is 30.7 ± 6.4 years, and it is believed that pregnancy, postpartum status and vaccines [8], as well as infection with Epstein-Barr virus [28], may influence the onset and course of the disease. An increase among females and an almost universal increase in prevalence and incidence have been reported, challenging the theory of a geographical gradient of incidence in Europe and North America [29]. MS affects 100 000 people in the United Kingdom [30], with a prevalence of 30.9/100 000 in Herzegovina [31]. An association between the risk of MS and the season of birth suggests that decreased exposure to sunshine in winter, leading to low vitamin D levels during pregnancy, is an area that needs further research [32]. PD is the second most common neurodegenerative disease after Alzheimer's disease [11], affecting one million people in the United States each year [33] and 20% of the population aged > 65 years in Mexico [34]. It is described in sporadic and familial forms [35] (in the latter, at least 2 individuals are affected within 2-3 consecutive generations of a family).

SYMPTOMS

Neurophysiologic testing of the sacral reflex is useful in the diagnosis of sacral lower motor neuron lesions, and increased elicitability of the penilo-cavernosus reflex is reported in patients with chronic SCI [36]. Patients with SCI may present [4] with brain anatomical changes related to the loss of motor control, chronic neuropathic [37] and abdominal pain [38], urinary [39] and sexual dysfunction [40], decubiti [41], neurogenic immune depression syndrome [42], and an increased risk of a depressive disorder [43]. Spinal cord lesions affect colorectal motility, anorectal sensation and anal sphincter function, and cause neurogenic constipation [44]. Defecation is abnormal in 68% of cases; digital stimulation is required by 20%, suppositories by 10% and enemas by 28% of cases. The time spent on each defecation is more than 30 min in 24% of cases. In children aged four years or older, daily fecal incontinence occurred in 14% and weekly incontinence in 14% of cases [45]. SCI patients usually do not perceive the normal desire for defecation, rather describing it as abdominal distension, a hardened or cool abdomen, hardening of the legs, abdominal pain, chills and dizziness, itching of the head, and a feeling of pain at the sacrum level [4]. Additionally, SCI subjects may develop autonomic dysreflexia in response to a noxious stimulus [46]. Cardiovascular dysregulation, characterized by paroxysmal high blood pressure episodes, is its most prominent feature and is precipitated by manual emptying of rectal contents and by gastric and bowel distension [47].
Given the gravity of this issue, an NBD score (0-6 very minor, 7-9 minor, 10-13 moderate and 14 or more severe) [48], international bowel function basic [49] and extended [50] SCI data sets, as well as an international standard to document the remaining autonomic function after SCI [40], have been developed. Prenatal screening with α-fetoprotein and ultrasonography has allowed the prenatal diagnosis of NTD in current obstetric care [51]. In an animal model with naturally occurring spina bifida (the curly tail/loop tail mouse), using standard enzyme-linked immunosorbent assay techniques, the detection of amniotic fluid levels of the neurofilament heavy chain, glial fibrillary acidic protein and S100B seems to provide important information for balancing the risks and benefits, both to mother and child, of in utero surgery for MMC [52]. Colorectal problems are common in children with MMC, and their impact on quality of life becomes more severe as the child grows up. Diagnosis of MS is made according to the McDonald and the Poser criteria, with the McDonald criteria showing a higher sensitivity for diagnosis [53]. Bowel symptoms are reported to be common in MS, including constipation (29%-43%) and fecal incontinence (over 50%), with 34% of patients spending more than 30 min a day managing their bowel movements [30]. Neurogenic dysphagia is also present [54]. Autonomic dysreflexia may occur in MS [55], characterized by hypertensive attacks, palpitations, difficulty in breathing, headaches and flushing [56]. Autonomic symptoms include disorders of micturition, impotence, sudomotor and GI disturbances, orthostatic intolerance and sleep disorders [57]. Neuropsychiatric symptoms include abnormalities in cognition, mood and behavior (major depression, fatigue, bipolar disorder, euphoria, pathological laughing and crying, anxiety, psychosis and personality changes). Major depression is a common neuropsychiatric disorder, with an approximate 50% lifetime prevalence rate [58]. Pediatric MS has been identified as an important childhood acquired neurologic disease [59]. GI diagnosis in PD [60] includes history, clinical examination, barium meal, breath test, stomach scintigraphy and colonic transit time [61]. Oropharyngeal dysphagia is recognized by difficulty in transferring a food bolus from the mouth to the esophagus or by signs and symptoms of aspiration pneumonia or nasal regurgitation [62]. PD is now considered a neurodegenerative process that affects several neuronal structures outside the substantia nigra. Reports have shown that lesions in the ENS occur at a very early stage of the disease, even before CNS involvement [11]. GI symptoms are very important, as GI diseases may also display neurological dysfunction as part of their clinical picture [63]. PD patients have motor and non-motor fluctuations, the latter classified into three groups: autonomic, psychiatric, and sensory [64]. GI dysfunction is the most common non-motor symptom and comprises sialorrhea, swallowing disorders [65], dysphagia [66], acid regurgitation, pyrosis [67], early satiety, weight loss, constipation [68], incomplete rectal emptying, the need for assisted defecation and an increased need for oral laxatives [69].

Genetic factors

Data obtained from 1066 NTD families, 66.9% with MMC, suggest a maternal effect, as well as a gender-influenced effect, in the etiology of NTD [16].
Telomerase, the reverse transcriptase that maintains telomere DNA, is important for neural tube development and the bilateral symmetry of the brain. However, it has been reported that variants in the telomerase RNA component (TERC) are unlikely to be a major risk factor for the most common form of human NTD, lumbosacral MMC [70]. An association between a polymorphism in the ABCB1 gene and PD has been observed. The ATP-binding cassette, sub-family B, member 1 (ABCB1) gene, encoding P-glycoprotein (P-gp), has been implicated in the pathophysiology of PD due to its role in regulating the transport of endogenous molecules and exogenous toxins. ABCB1 polymorphisms thus constitute an example of how genetic predisposition and environmental influences may combine to increase the risk of PD [71]. On the other hand, extensive ENS abnormalities in mice transgenic for PD-associated α-synuclein gene mutations precede CNS changes. Most PD is sporadic and of unknown etiology, but a fraction is familial. Among the familial forms of PD, a small portion is caused by missense (A53T, A30P and E46K) and copy number mutations in SNCA, which encodes α-synuclein, a primary protein constituent of Lewy bodies, the pathognomonic protein aggregates found in neurons in PD [72].

Gastrointestinal motor dysfunction and visceral sensitivity

Fecal incontinence in SCI, MMC and MS is mainly due to abnormal rectosigmoid compliance and recto-anal reflexes, loss of recto-anal sensitivity and loss of voluntary control of the external anal sphincter [73]. Constipation, on the other hand, is probably due to immobilization, abnormal colonic contractility, tone and recto-anal reflexes, or side effects of medication. SCI patients have a higher incidence of esophagitis and esophageal motor abnormalities [74], gastric stasis, paralytic ileus, abdominal distension [75], partial or complete loss of the sensations accompanying defecation, constipation [75], hemorrhoids [76], and a greater need for assisted digital evacuation than controls [75]. Studies have shown a range of neurological alterations, such as low-amplitude, slowly propagating, abnormal peristaltic esophageal contractions [74], a decrease in phase III of the interdigestive motor complex [77], reduced gastric emptying [78], delayed GI transit, higher colonic myoelectric activity, reduced emptying of the left colon, and a suboptimal postprandial colonic response [79]. Visceral sensitivity testing according to Wietek et al [80] may be a future requirement, in addition to the American Spinal Injury Association (ASIA) criteria, in assessing the completeness of cord lesions in patients diagnosed with complete spinal cord transection, as some report the sensation of distension of the rectum. In our laboratory, using barostat methodology, we found that patients with complete supraconal SCI preserve rectal sensation and present with impaired rectal tone and an impaired response to food. These data support the view that barostat sensitivity studies can complement the ASIA criteria in confirming a complete injury. Our results also suggest that intact neural transmission between the spinal cord and higher centers is essential for noxious, but not for non-noxious, stimuli, that patients with supraconal lesions may present postprandial visceral hypersensitivity, and that incontinence and constipation may not be related solely to the continuity of the spinal cord [4,81].
Suttor et al [82], using a dual barostat in six cervical SCI patients without NBD, reported that intact neural transmission between the spinal cord and higher centres is not essential for a normal colorectal motor response to feeding and distension. Lumbosacral neuropathy was demonstrated in 90% of SCI subjects [83] using translumbar and trans-sacral motor-evoked potentials. In MMC, studies have revealed swallowing disorders characterized by difficulty in bolus formation, nasopharyngeal and gastroesophageal reflux, tracheobronchial aspiration, and vocal cord paralysis [84], as well as a longer mean colonic transit time not related to the level of the spinal lesion [85] and a reduction in anal sphincter pressure [86]. Ventriculoperitoneal shunt malfunction may occur in patients with MMC, and severe constipation, which increases intra-abdominal pressure and thereby intracranial pressure, seems to be one of the causes [87]. Visceral sensitivity studies with the barostat reveal that constipated children with MMC present with impaired rectal tone, an impaired response to food and postprandial visceral hypersensitivity [88]. GI dysfunction occurs in MS as in other neurologic diseases [63]. A slow gastric emptying rate [89], increased colonic transit time [90], absent postprandial colonic motor and myoelectric responses [91], altered maximal contraction pressures and anal inhibitory reflex threshold [92], impaired function of the external anal sphincter, and increased thresholds of conscious rectal sensation [93] have been reported. Paradoxical puborectalis contraction is common in MS patients with constipation [94], and it seems that autonomic dysreflexia occurs due to bladder distension [56]. A summary of the suggested pathogenesis of GI disorders underlying spinal cord injury, myelomeningocele, and multiple sclerosis is shown in Table 2. In PD, dysphagia, impaired gastric emptying and constipation may precede the clinical diagnosis by years [61]. ENS involvement could be critical, as it may represent a point of entry for a putative environmental factor to initiate the pathological process [11]. On the other hand, the mechanisms underlying enteric autonomic dysfunction may involve the enteric dopaminergic or nitrergic systems. It has been reported that rats with a unilateral 6-hydroxydopamine lesion of the nigrostriatal dopaminergic neurons develop marked inhibition of propulsive activity compared with sham-operated controls. The results suggest that disturbed distal intestinal transit may occur as a consequence of reduced propulsive motility, probably due to an impairment of nitric oxide-mediated descending inhibition during peristalsis [95]. Neurogenic dysphagia may also appear in PD. It may be caused by a disruption in different parts of the CNS (at the supranuclear level, the level of the motor and sensory nuclei taking part in the swallowing process, or the peripheral nerve level) or by a neuromuscular disorder [54]. It has also been suggested that levodopa plays a role in the oral phase of deglutition in PD [96]. Dysphagia is present in up to 50% of PD cases and seems to be correlated with manometric irregularities [97,98]. Castell et al [97] described esophageal manometric abnormalities in 73% of PD patients, characterized by complete aperistalsis or multiple simultaneous contractions (diffuse esophageal spasm) of the distal esophagus.
They also reported repetitive proximal esophageal contractions [99], a very interesting finding supporting a previous report of a link among PD, achalasia [100] and scleroderma (e.g., PD and achalasia show Lewy bodies in the esophageal myenteric plexuses and the substantia nigra, as well as evidence of degeneration of the dorsal motor nucleus of the vagus), with esophageal manometric abnormalities found in all three diseases. A link between PD and Helicobacter pylori (H. pylori) [101] has also been described, in which H. pylori eradication may improve the clinical status of infected PD patients with motor fluctuations by modifying L-dopa pharmacokinetics [102]. Neurotensin, a 13-amino-acid neurohormone located in synaptic vesicles and released from neuronal terminals in a calcium-dependent manner, is involved in the pathophysiology of PD and other neurodegenerative conditions [103].

Table 2. Suggested pathogenesis of GI disorders underlying spinal cord injury, myelomeningocele and multiple sclerosis
Multiple sclerosis: paradoxical puborectalis contraction → constipation [94]
Multiple sclerosis: bladder distension → autonomic dysreflexia [56]
Myelomeningocele: severe constipation → ventriculoperitoneal shunt malfunction [87]
Myelomeningocele: visceral hypersensitivity → constipation with impaired rectal tone and response to food [88]
Myelomeningocele: higher spinal level of cord lesion, completeness of cord injury and longer duration of injury → severe neurogenic bowel dysfunction [20]
Spinal cord injury: noxious stimulus → autonomic dysreflexia [46]
Spinal cord injury: manual emptying of rectal contents, gastric and bowel distension → cardiovascular dysregulation [47]

Constipation and gastric atony are important non-motor symptoms [104]. There is a trend toward decreased gastric motility in PD patients as compared with healthy controls, due mainly to a significant reduction in the amplitude of peristaltic contractions [105]; other authors have found gastric dysrhythmias indicating gastric pacemaker disturbances [106]. Slow colonic transit has been reported [107], and anorectal manometry has documented decreased basal anal sphincter pressures, prominent phasic fluctuations of squeeze pressure, and a hyper-contractile external sphincter response to the rectosphincteric reflex. It has also been suggested that dystonia of the external anal sphincter causes difficult rectal evacuation, and that the loss of dopaminergic neurons in the ENS may lead to slow-transit constipation [73].

MANAGEMENT

Managing SCI bowel function is complex and time consuming, and management remains conservative [75]. The use of manual evacuation [108], treatment with oral laxatives [108] and abdominal massage [109] have all been reported. Transanal irrigation is reported to be safe and can be used in most patients suffering from NBD [110]; it represents a lower total cost than conservative bowel management [111]; however, its rate of success is only 35% after 3 years [110]. Recent approaches include sacral neuromodulation [112] and dorsal penile/clitoral nerve neuromodulation for the treatment of constipation, as well as magnetic stimulation for NBD treatment [113]. Other options include colostomy, ileostomy, the Malone antegrade continence enema, and sacral anterior root stimulator implantation [114]. However, good-quality research data are needed to evaluate the effects of these treatments for this condition.
For MMC patients with constipation, polyethylene glycol [44,115] and transanal irrigation [116] seem to be effective; however, a majority of children found the procedure time consuming and felt it did not help them to achieve independence at the toilet [117]. For incontinence, the approaches include intravesical [118] and transrectal electrostimulation [119]; nevertheless, these procedures lack well-designed controlled trials. For constipation and incontinence, biofeedback is used [120]. Surgical closure of MMC is usually performed in the early postnatal period; however, not all patients benefit from fetal surgery in the same way [121]. The management of cervical MMC is early surgical treatment with microneurosurgical techniques. Surgical excision of the lesions with intradural exploration of the sac to release any potential adhesion bands is safe and effective [122]. The current therapies for MS are few, symptom-related, and experimental [7]. In patients seen for constipation, incontinence, or a combination of these symptoms, a beneficial effect of biofeedback was seen in some but not all patients [123]. Other approaches include the oral administration of the probiotic bacteria Lactobacillus casei and Bifidobacterium breve, which do not seem to exacerbate neurological symptoms [124]. An overactive bladder is successfully treated in 51% of cases with anticholinergic medication [125]. The use of agonists or antagonists of prostaglandin receptors may be considered as a new therapeutic protocol in MS, because prostaglandins, as arachidonic acid-derived autacoids, play a role in the modulation of many physiological systems, including the CNS, and their production is associated with inflammation, a feature of MS [126]. Levodopa, a prodrug of dopamine, is one of the main treatment options in PD [127]. However, in contrast to the motor disorders, pelvic autonomic dysfunction is often refractory to levodopa treatment [128]. One point to bear in mind is that treatments should facilitate the intestinal absorption of levodopa [128]. Current levodopa products are formulated with aromatic amino acid decarboxylase inhibitors, such as carbidopa or benserazide, to prevent the metabolism of levodopa in the GI tract and systemic circulation [127]. Food appears to affect the absorption of levodopa, but its effects vary with the formulation, and studies suggest that a high-protein diet may compete with the uptake of levodopa into the brain, thus reducing its effects [127]. Regarding disturbed motility of the upper GI tract, hypersalivation is reported to be reduced by anticholinergics or botulinum toxin injections [61], while therapy for dysphagia includes rehabilitative, surgical, and pharmacologic treatments [129]. Regarding constipation, tegaserod improves both bowel movement frequency and stool consistency [130]. Mosapride citrate, a 5-HT4 agonist and partial 5-HT3 antagonist, in contrast to cisapride, does not block K+ channels or D2 dopaminergic receptors [131]. Other prokinetic agents include metoclopramide, domperidone, trimebutine, cisapride, prucalopride, and itopride [132]. Polyethylene glycol [61], functional magnetic stimulation [133], and psyllium are also used [134].
However, the clinical significance of any of these results is difficult to interpret, and it is not possible to draw any recommendation for bowel care from the published trials until well-designed controlled trials with adequate numbers of patients and clinically relevant outcome measures become available [134]. Recently, stem cells have been used as an alternative source of biological material for neural transplantation to treat PD. The potential benefits are relief of parkinsonian symptoms and a reduction in the doses of antiparkinsonian drugs. However, the potential risks include tumor formation, inappropriate stem cell migration, immune rejection of transplanted stem cells, hemorrhage during neurosurgery and postoperative infection [135].

PREVENTION AND PREDICTORS

An analysis of predictors of severe NBD in SCI shows that those with a cervical or thoracic injury had a higher risk of severe NBD than those with a lumbar spine injury. Those classified as ASIA A also had a 12.8-fold higher risk of severe NBD than persons with ASIA D. In addition, a longer duration of injury (≥ 10 years) was another risk factor for severe NBD, and moderate-to-severe depression was associated with reduced bowel function. The results showed that a higher spinal level of cord lesion, completeness of cord injury and a longer duration of injury (≥ 10 years) could predict the severity of NBD in patients with SCI [20]. It is reported that clinical variables are not the best predictors of long-term mortality in SCI; instead, the significant effects of poor social participation and functional limitations seem to persist after adjustment for other variables [136]. Folic acid supplementation has reduced the incidence of NTD in several geographical regions. However, the incidence is still high and associated with serious morbidity [137]. A study of newborn babies with NTD and their mothers revealed an association between NTD and decreased hair zinc levels, so large population-based studies are recommended to confirm the association between zinc and NTD [138]. The prevalence of scoliosis in patients with MMC has been reported to be as high as 80%-90%. A study aiming to determine the clinical and radiographic predictors of scoliosis in patients with MMC reported that the clinical motor level, ambulatory status, and the level of the last intact laminar arch are predictive factors for the development of scoliosis. It has been suggested that, in patients with MMC, the term scoliosis should be reserved for curves of > 20 degrees; it is also noteworthy that new curves may continue to develop until the age of fifteen years [139]. Other authors, attempting to obtain a spine deformity predictor based on a neurological classification performed at five years of age, report that group I (L5 or below) is a predictor of the absence of spinal deformity, group III (L1-L2) or group IV (T12 and above) is a predictor of spinal deformity, and group IV is a predictor of kyphosis. These data confirm that future spinal disorders are expected in some patients, while no spinal deformity is expected in others [140]. Other reports indicate that a horizontal sacrum is an indicator of a tethered spinal cord in spina bifida aperta and occulta, as the signs and symptoms indicative of a tethered spinal cord appear to correspond to increases in the lumbosacral angle [141].
It is also reported that behavior regulation problems in children with MMC are predicted by parental psychological distress, and that more shunt-related surgeries and a history of seizures predict poorer metacognitive abilities [142]. It seems that adults with MMC and shunted hydrocephalus may be at risk of decreased survival [143]. Inadequate serum vitamin D concentrations are associated with complications of some health problems including MS, which supports a possible role for vitamin D supplementation as an adjuvant therapy [144]. In addition, it has been suggested that the favorable effect of sunlight, ascribed to an increased synthesis of vitamin D, may prevent certain autoimmune diseases, particularly MS; for this reason, limited sunbathing should be publicly encouraged [145]. It has also been suggested that altering the composition of the gut flora may affect susceptibility to experimental autoimmune encephalomyelitis, an animal model of MS [146]. These data could have significant implications for the prevention and treatment of autoimmune diseases. In relation to this, an interesting new proposal holds that the GI tract is a vulnerable area through which pathogens (such as H. pylori) may influence the brain and induce MS, mainly via fast axonal transport by the afferent neurons connecting the GI tract to the brain [147].

Symptoms such as dysphagia, impaired gastric emptying and constipation may precede the clinical diagnosis of PD by years and, in the future, these symptoms might serve as useful early indicators of the premotor stage [61]. Motor handicaps, such as rigor and action tremor, are independent predictors of solid gastric emptying [148]. It is currently recommended that the approach to PD should include strategies for detecting the disease earlier in its course and, eventually, intervening when the disease is in its nascent stage. The term Parkinson's associated risk syndrome has been coined to describe patients at risk of developing PD. These patients may have genetic risk factors or may have subtle, early non-motor symptoms including abnormalities in olfaction, GI function, cardiac imaging, vision, behavior, and cognition [149].

Embryology and morphology

Considerable insight into both normal neural tube closure and the factors possibly disrupting this process has been gained in recent years; yet the mechanisms by which NTD arises, as well as its embryogenesis, remain elusive [150]. Normal brain development throughout childhood and adolescence is characterized by decreased cortical thickness in the frontal regions and region-specific patterns of increased white matter myelination and volume. Subjects with MMC show reduced white matter and increased neocortical thickness in the frontal regions, suggesting that spina bifida may reflect a long-term disruption of brain development that extends far beyond the NTD in the first week of gestation [151]. These variations in diffusion metrics in MMC children are suggestive of abnormal white matter development and persistent degeneration with advancing age [152]. In rat fetuses with retinoic acid-induced MMC, the normal smooth muscle and myenteric plexus development of the rectum and the normal innervation of the anal sphincters and pelvic floor suggest that MMC is not associated with a global neuromuscular alteration in the development of lower GI structures [153]. Besides, fetal surgery for repair of MMC allows normal development of the anal sphincter muscles in sheep.
Histopathologically, in the external sphincter muscles the muscle fibers were dense, while in the internal sphincter muscles the endomysial spaces were small, the myofibrils were numerous, and the fascicular units were larger than those in unrepaired fetal sheep [154]. Studies of the development of the pelvic floor muscles in murine embryos with anorectal malformations demonstrate that the embryos show an impaired anatomic framework of the pelvis, possibly caused by anomalous neural development, whereas muscle development proceeds physiologically. These results support the hypothesis that pelvic floor muscles may function in children with anorectal malformations, in whom neural abnormalities such as MMC have been ruled out, if the surgical correction is appropriately completed [155]. A mouse model has been reported indicating that anorectal malformations and anterior sacral MMC formation share the same embryogenic pathway [156]. Indeed, some of the brain malformations associated with MMC in human patients are also found in the uncorrected fetal lamb model of MMC [157].

The late stage of gestation is important due to the presence of morphological changes. An in-utero topographic analysis of astrocytes and neuronal cells in the spinal cord of mutant mice with MMC revealed that at day 16.5 of gestation there is a deterioration of neural tissue in MMC fetuses, mainly in the posterior region, progressing until the end of gestation with a marked loss of neurons in the entire MMC placode. This study delineated the quantitative changes in astrocytes and neurons associated with MMC development during the late stages of gestation [158]. Data from other investigators show, in curly tail/loop tail mouse fetuses, that around birth the unprotected neural tissue is progressively destroyed [159].

Traditionally, PD is attributed to the loss of mesencephalic dopamine-containing neurons; nonetheless, additional nuclei, such as the dorsal motor nucleus of the vagus nerve and specific central noradrenergic nuclei, are now identified as targets of PD [160]. As early as 1988, Wakabayashi [161] described the presence of Lewy bodies in Auerbach's and Meissner's plexuses of the lower esophagus, indicating that these are also involved in PD. Later on, the presence of α-synuclein immunoreactive inclusions in neurons of the submucosal Meissner plexus, whose axons project into the gastric mucosa and terminate in direct proximity to fundic glands, was reported [162]. The authors propose that these elements could provide the first link in an uninterrupted series of susceptible neurons that extend from the enteric tract to the CNS. The existence of such an unbroken neuronal chain lends support to the hypothesis that a putative environmental pathogen capable of passing the gastric epithelial lining might induce α-synuclein misfolding and aggregation in specific cell types of the submucosal plexus and reach the brain via a consecutive series of projection neurons. A recent study aimed at characterizing the neurochemical coding of the ENS in the colon of a monkey model of PD showed that the parkinsonian insult induces major changes in the myenteric plexus and, to a lesser extent, in the submucosal plexus of monkeys. These data reinforce the observation that lesions of the ENS occur in the course of PD and that this might be related to the GI dysfunction observed in this pathology [163].
Experimental approaches and animal models

Animal models used in MMC include an ovine model based on fetal lambs [164], fetal sheep [165], a Macaca mulatta model [166], a mouse model [158], and a fetal rabbit model [167]. Several experimental approaches have been used. To study the correction of an MMC-like defect in pregnant rabbits, a spinal defect was surgically created in some of their fetuses at 23 d of gestation. The spinal defect was successfully repaired, and the fetal rabbit model was established for the study of intrauterine correction of an MMC-like defect [167]. A new gasless fetoscopic surgery for the correction of an MMC-like defect in fetal sheep served as an alternative to current techniques used for fetal endoscopic surgery [165]. A Macaca mulatta model was used to replicate MMC and to evaluate options for prenatal management, such as the placement of an impermeable silicone mesh which protects the spine from amniotic fluid, with results similar to skin closure [166]. In-utero analyses of astrocytes and neuronal cells in the spinal cord of mutant mice with MMC using the curly tail/loop-tail mouse model have been reported. At day 16.5 of gestation, a deterioration of neural tissue in MMC fetuses was observed, mainly in the posterior region, progressing until the end of gestation with a marked loss of neurons in the entire MMC placode. These results support the current concept of placode protection through in-utero surgery for fetuses with MMC [158]. Recently, the notion of prenatal neural stem cell delivery to the spinal cord as an adjuvant to fetal repair of spina bifida has been proposed [164].

The main animal model in MS was developed in mice and is called experimental autoimmune encephalomyelitis [7]. In this experimental model, it was reported that gut flora may influence the development of experimental autoimmune encephalomyelitis [146], and that, despite reported blood-brain barrier disruption, CNS penetration of small-molecule therapeutics does not increase in MS-related animal models [168]. The migratory potential, the differentiation pattern and the long-term survival of neural precursor cells in this experimental autoimmune encephalomyelitis mouse model have been investigated. The results suggest that inflammation triggers migration, whereas the anti-inflammatory component is a prerequisite for neural precursor cells to follow glial differentiation into myelinating oligodendrocytes [169]. An exciting new finding with this model is that a novel regulator of leukocyte transmigration into the CNS, denominated extracellular matrix metalloproteinase inducer (EMMPRIN), indeed regulates leukocyte trafficking by increasing matrix metalloproteinase activity. Amelioration of the clinical signs of experimental autoimmune encephalomyelitis by anti-EMMPRIN antibodies was critically dependent on their administration around the period of onset of clinical signs, which is typically associated with a significant influx of leukocytes into the CNS. These results identify EMMPRIN as a novel therapeutic target in MS [170].

Several experimental approaches in PD deal with GI issues using diverse animal models such as rats, mice and primates. The advent of transgenic technologies has contributed to the development of several new mouse models, many of which recapitulate some aspects of the disease; however, no model has been demonstrated to faithfully reproduce the full constellation of symptoms seen in human PD [171].
As GI dysmotility in PD has been attributed in part to peripheral neurotoxin action, rats with salsolinol-induced PD were studied to evaluate its effects on intramuscular interstitial cells of Cajal, duodenal myoelectrical activity and vagal afferent activity. The results suggest a direct effect of salsolinol on both the interstitial cells of Cajal and the neuronal pathways for gastro-duodenal reflexes [18]. Delayed gastric emptying and ENS dysfunction in the rotenone model of PD suggested that enteric inhibitory neurons may be particularly vulnerable to the effects of mitochondrial inhibition by parkinsonian neurotoxins and provide evidence that parkinsonian GI abnormalities can be modeled in rodents [68]. Studies assessing the responses of myenteric neurons to structural and functional damage by neurotoxins in vitro reveal that neural responses to toxic factors are initially unique but then converge into robust axonal regeneration, whereas neurotransmitter release is both vulnerable to damage and slow to recover [172]. The prototypical parkinsonian neurotoxin, MPTP, used as a selective dopamine neuron toxin in the ENS of a mouse model, produces loss of enteric dopaminergic neurons and changes in colon motility [173], and its use in a primate animal model reveals changes in the myenteric plexus and, to a lesser extent, in the submucosal plexus. These models further reinforce the observation that lesions of the ENS occur in the course of PD, which might be related to the GI dysfunction observed in this pathology [163]. To determine the changes in the dopaminergic system of the GI tract, two kinds of rodent models were used: in one, 6-hydroxydopamine was microinjected into the bilateral substantia nigra of rats; in the other, MPTP was injected intraperitoneally into mice. The results suggest that the different alterations of the dopaminergic system observed in the GI tract of the two PD models might underlie differences in GI symptoms in PD patients and might be correlated with disease severity and disease process [174]. In a similar rat model, it is reported that a unilateral 6-hydroxydopamine lesion of nigrostriatal dopaminergic neurons led to a marked inhibition of propulsive activity compared with sham-operated controls, suggesting that disturbed distal gut transit, reminiscent of constipation in the clinical setting, may occur as a consequence of reduced propulsive motility, likely due to an impairment of nitric oxide-mediated descending inhibition during peristalsis [95]. Observations in parkinsonian primates showed that when implanted undifferentiated human neural stem cells survived, they had a functional impact, as assessed quantitatively by behavioral improvement in this dopamine-deficit model [175]. Nonmotor symptoms of PD studied in an animal model with reduced monoamine storage capacity suggest that monoamine dysfunction may contribute to many of the nonmotor symptoms of PD, and interventions aimed at restoring monoamine function may be beneficial in treating the disease [176]. In a clinical approach, it was demonstrated that the delay in gastric emptying did not differ between untreated early-stage and treated advanced-stage PD patients, suggesting that delayed gastric emptying may be a marker of the pre-clinical stage of PD [177].

CONCLUSION

This article reviews the current knowledge across the neurological diseases associated with neurogenic bowel dysfunction and the common issues in need of clarification.
The hope is that, with a full perspective of the situation, researchers can generate new ideas useful for prevention, cure, or, at least in the meantime, a better quality of life for patients.
The Dynamics of Total Organic Matter (TOM) on Sangkuriang Catfish (Clarias gariepinus) Farming at UPT PTPBP2KP and the Effectiveness of Freshwater Bivalve (Anodonta woodiana) in Reducing the Total Organic Matter with Varying Density

Fish farming activities often leave organic waste which can degrade water quality. Efforts to decrease the amount of organic matter biologically are therefore needed, such as the use of aquatic animals to reduce harmful residues. The purposes of this study were to observe the dynamics of total organic matter (TOM) in Sangkuriang Catfish (Clarias gariepinus) farming and to determine the density of freshwater bivalve (Anodonta woodiana) that most effectively decreases TOM in the effluent of Sangkuriang Catfish farming. This study employed survey and experimental methods. In the survey, total organic matter rose from the inlet to the outlet to about 319% of the inlet value. In the experiment, the best treatment was 75% surface coverage by freshwater bivalve (Anodonta woodiana), which decreased total organic matter (TOM) by about 88% over 16 hours of immersion.

Introduction

Aquaculture is defined as the cultivation of fish, shellfish (oysters, mussels, clams, and crustaceans), or plants (seaweed and algae) in inland or coastal areas, involving maintenance processes that increase fish production [1]. Recently, the aquaculture sector has accounted for a large share of total fish production in the world [2]. Ponds are the most common aquaculture production system: about 40% of world production of freshwater fishes, and virtually all crustaceans, are cultured in ponds. Intensively managed ponds accumulate organic matter during the culture cycle because of large external inputs (feeds and fertilizers) [3]. Total Organic Matter (TOM) is an important sediment parameter and a primary source of food for benthic organisms, and it structures the composition of the benthic fauna. Excess organic matter leads to contamination of the sediments: when sediments contain a large amount of organic matter, contaminants occur in particulate form, whereas when sediments contain a small amount of organic matter, contaminants are present in the pore water [4]. An approach to reduce total organic matter that exceeds the limit is therefore needed, preferably a biological one without adverse side effects. Anodonta woodiana, also called Kijing Taiwan or the Chinese pond mussel, is a freshwater bivalve that lives on the bottom of water bodies and is relatively sedentary. It is a filter feeder that can serve as a bioindicator in polluted environments: pollutants entering its body can be traced through the hemocyte profile (THC and DHC), in which hyaline cells indicate the presence of foreign objects [5,6,7]. The purposes of this study were to investigate the dynamics of Total Organic Matter (TOM) in Sangkuriang Catfish (Clarias gariepinus) farming at UPT PTPBP2KP, Kepanjen, Malang, and to determine the effective density of freshwater bivalve (Anodonta woodiana) for decreasing TOM in the effluent of Sangkuriang Catfish (Clarias gariepinus) farming at the UPT Freshwater Fisheries Sumberpasir laboratory, University of Brawijaya, Malang, East Java.
The study area comprised 3 stations: an inlet, the Sangkuriang Catfish (Clarias gariepinus) pond, and an outlet. Station 1, the inlet, was located at the entrance point of the water storage before the water flowed into the Sangkuriang Catfish pond; it was sampled once a day. Station 2 was the Sangkuriang Catfish pond; each pond measured approximately 2 m x 3 m with a depth of 1.5 m, and there were 10 ponds in total. Samples at station 2 were taken twice a day, in the morning (before feeding) and at noon (after feeding). Station 3, the outlet, was located at the exit point of the water carrying the residues of Sangkuriang Catfish farming; it was sampled once a day. The freshwater bivalve (Anodonta woodiana) treatment was conducted at the UPT Freshwater Fisheries Sumberpasir laboratory, University of Brawijaya, Malang, East Java.

Method

2.2.1. Sample preparation
Water samples for measuring total organic matter (TOM) at the three stations (inlet, Sangkuriang Catfish pond, and outlet) were collected with a water sampler, put into 1.5-liter bottles, and kept in a coolbox with ice until use to avoid degradation of the organic matter by decomposer bacteria. Samples were taken weekly for five weeks. The freshwater bivalves (Anodonta woodiana) used as biofilter animals in this study were obtained from a fish pond at UPR Sumber Mina Lestari, Dau, Malang City, East Java. The bivalves were acclimatized in freshwater for about 24 hours without feeding before being used in the experiment.

Physical and chemical parameters
Physical and chemical parameters were assayed to establish the water quality status of the water samples. The physical parameter was temperature (alcohol thermometer); the chemical parameters were pH (Testr 30), dissolved oxygen (DO; Lutron DO-5510), ammonia (UV-vis spectrophotometer), and TOM (KMnO4 method). Temperature, pH, and DO were assayed in situ, while ammonia and TOM were assayed ex situ at the UPT Freshwater Fisheries Sumberpasir laboratory, University of Brawijaya, Malang. Measurements were carried out in triplicate and repeated every week for five weeks.

Animal experiment
The freshwater bivalves (Anodonta woodiana) used in this study measured 8 to 9 cm. Twenty-five tanks of 106.76 cm2 each were filled with 10 L of effluent water from Sangkuriang Catfish (Clarias gariepinus) farming at the Sumberpasir laboratory. The tanks were separated into 5 groups: group A, 0% coverage with no freshwater bivalves (control); group B, 100% bivalve coverage with 20 organisms; group C, 75% coverage with 15 organisms; group D, 50% coverage with 10 organisms; and group E, 25% coverage with 5 organisms. Each group was run in five replicates, following [8]. TOM was assayed at different time points: each group was checked every 4 hours, from hour 0 to hour 16. The decrease in TOM was assayed conventionally using KMnO4 as an oxidizing agent.

Data analysis
Data were analyzed by one-way analysis of variance (ANOVA) in SigmaPlot ver. 12.0 followed by Tukey's test. Data are reported as mean ± standard deviation, with significance set at p < 0.05.
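To make the analysis pipeline concrete, the following is a minimal sketch of the one-way ANOVA with Tukey's post hoc test described above, reproduced in Python rather than SigmaPlot. The group means are taken from the reported hour-16 results, but the replicate values themselves are simulated here for illustration only.

```python
# Minimal sketch of the analysis described above: one-way ANOVA across the
# five coverage groups followed by Tukey's HSD (the study used SigmaPlot 12.0;
# here scipy/statsmodels stand in). Replicate values are simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical hour-16 TOM values (mg/L), five replicates per group,
# centered on the reported group means.
group_means = {"A (0%)": 13.39, "B (100%)": 9.61, "C (75%)": 6.57,
               "D (50%)": 8.85, "E (25%)": 8.34}
data = {g: rng.normal(m, 1.5, size=5) for g, m in group_means.items()}

# One-way ANOVA: does mean TOM differ across coverage groups?
f_stat, p_val = stats.f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD identifies which pairs of groups differ (alpha = 0.05).
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 5)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```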
Physico-chemical assay at UPT PTPBP2KP Kepanjen
The physicochemical assay was conducted to establish the water quality status of the water samples. Based on the physical and chemical analysis, the average temperature was 27.2 to 28.0 °C, pH was 7.0 to 7.3, DO was 4.5 to 6.5 mg/L, and ammonia was 0.1 to 0.3 mg/L (Table 1). Based on Indonesian Government Regulation Number 82 of 2001 (aquaculture standard), the standards for physical and chemical parameters are pH 6 to 9, DO ≥ 3 mg/L, and ammonia ≤ 0.02 mg/L. The temperature range in this study was adequate; the reported suitable temperature range for African catfish Clarias gariepinus is 27.1 to 27.3 °C [9]. pH and DO met the aquaculture standard, but ammonia exceeded it.

TOM analysis of Sangkuriang Catfish (Clarias gariepinus) farming at UPT PTPBP2KP Kepanjen showed different results across stations, reflecting differences in aquaculture activity. The TOM level at the inlet was always lower than in the Sangkuriang Catfish pond and at the outlet. At the inlet the average TOM was 27.98 mg/L; in the Sangkuriang Catfish pond it averaged 87.23 mg/L before feeding and 62.34 mg/L after feeding; at the outlet it was 89.31 mg/L. TOM correlates strongly with ammonia: as TOM increases, ammonia increases, because ammonia arises naturally in water from the microbiological decomposition of nitrogen compounds in organic matter [10]. Thus, TOM at the outlet reached about 319% of the inlet value. At the inlet, total organic matter was not very high because the water came from groundwater and underwent sedimentation treatment before being streamed into the Sangkuriang Catfish pond. In the Sangkuriang Catfish pond, TOM before feeding was higher than after feeding, possibly because the organic matter in the pond before feeding was supplied by fish excretion and other organic sources such as phytoplankton and benthos. As [11] explained, plants, animals, and microorganisms can all be recognized as organic matter; almost all organisms use carbohydrates as a source of energy, but some bacteria prefer to consume smaller molecules such as nucleic acids and proteins. A further reason the organic matter after feeding was lower than before feeding is the modest feeding rate: the fish were fed 0.5 kg per day at a stocking density of 30 fish of about 1 kg body weight each, i.e., only about 1.7% of body weight, which does not exceed the standard daily ration of 2% of body weight [12]. Aquaculture systems often have a limited capacity for self-purification; they are considered low-stability systems with continuous exchange of matter and energy, which can decrease internal entropy [13]. At the outlet, organic matter accumulated, producing the highest TOM values. A small amount of TOM in the pond is necessary, but when TOM exceeds the standard it can harm aquatic organisms because anaerobic conditions develop at the sediment-water interface. Under such conditions, organic compounds decompose into products such as NO2, H2S, NH3, and CH4, which are toxic to fish even at low concentrations [14].

Analysis of TOM after freshwater bivalve Anodonta woodiana treatment
Total Organic Matter (TOM) comprises organic material in dissolved, suspended, and colloidal forms. As [15] explained, organic matter forms in aquatic ecosystems when solar energy trapped by photosynthesizing plants passes to herbivores that feed on the plants and then to carnivores that eat the herbivores.
These processes lead to an accumulation of feces and dead plant and animal bodies in the pond. A high level of organic matter in the water puts the balance of aquatic organisms at risk. In several countries, disposal of concentrated organic waste, including the residues of aquaculture activity, is now treated as pollution [16,17]. In this study, the TOM in the effluent of Sangkuriang catfish (Clarias gariepinus) farming was reduced by freshwater bivalve (Anodonta woodiana) acting as an organic-matter feeder. Figure 1 shows that in group A (control) TOM decreased significantly between the fourth and sixteenth hours, from 53.8 ± 0.00 mg/L to 13.39 ± 2.10 mg/L, a 75% decrease; this occurred because the organic matter settled to the bottom of the tank even without the help of freshwater bivalves. In group B, TOM decreased from 53.8 ± 0.00 mg/L to 9.61 ± 2.77 mg/L over the same period, an 82% decrease. Group C showed the largest decrease in TOM, from 53.8 ± 0.00 mg/L to 6.57 ± 1.38 mg/L, i.e., 88%, between the fourth and sixteenth hours. Groups D and E also showed TOM reductions, of 83% and 84%, with final values of 8.85 ± 2.36 mg/L and 8.34 ± 1.70 mg/L, respectively. Across all treatments, tanks containing freshwater bivalves reduced TOM more than the control (no freshwater bivalve). Freshwater bivalves strongly influence ecosystem processes in freshwater systems and can thus act as important filter feeders that directly affect benthic processes by burrowing in the sediments [18].

Figure 1. TOM levels (mg/L) after treatment with varying densities of freshwater bivalve (Anodonta woodiana) at varying time points. Error bars show the standard deviation (SD); significance was set at p < 0.05.

Conclusion
Total Organic Matter (TOM) in Sangkuriang Catfish (Clarias gariepinus) farming reached its highest value at the outlet, about 319% of the inlet value, and the freshwater bivalve (Anodonta woodiana) can be used as an agent of TOM degradation: among the tested densities, 75% coverage (group C) was the most effective, decreasing TOM by about 88% over 16 hours of immersion. Large-scale application is needed in the future to confirm these findings.
Stemless Total Shoulder Arthroplasty With Orthobiologic Augmentation

Total shoulder arthroplasty (TSA) has evolved over the years and is used for a variety of indications, with arthritis being the most common. Stemless TSA is a unique bone-preserving design that can eliminate rotational malalignment. Additionally, recent literature has found utility in the use of biological mesh and a platelet-rich plasma injection to improve healing. The purpose of this article is to outline the process of TSA using a stemless system and how to incorporate the use of amnion matrix and platelet-rich plasma into the surgical technique.

Total shoulder arthroplasty (TSA) is an increasingly performed surgical technique used to restore function and reduce pain when the glenohumeral joint is compromised [1-3]. Common indications for shoulder replacement include osteoarthritis (OA), inflammatory arthritis, and proximal humeral fractures [4]. These pathologies can cause reduced joint mobility, persistent pain, and weakness. In 2008, OA was the primary diagnosis for 77% of TSAs performed in the United States [5]. Population-based studies suggest that 16.1% to 20.1% of adults older than 65 years have radiographic evidence of glenohumeral OA [6]. The gold standard of treatment for end-stage glenohumeral OA has been the TSA, with some populations reaching a 10-year implant survival rate of 96% [7,8].

TSAs have typically consisted of a humeral stem, a prosthetic head, and a glenoid component [9]. Advancement in design has changed the components from constrained designs to more anatomic designs [9,10]. Although traditional stemmed implants have been studied extensively and have shown good outcomes, they face challenges regarding humeral bone loss and the difficult treatment of periprosthetic fractures and cases of revision surgery [11-13]. Stemless shoulder replacements have been shown to accurately reproduce the humeral head position independent of the humeral shaft orientation [13-15]. This allows the center of rotation, inclination, retroversion, and offset to be accurately re-created [13,15,16]. In addition to fewer intraoperative complications, stemless designs preserve the humeral bone stock, which facilitates the treatment of periprosthetic fractures and cases of revision surgery [11,12,15,17-19].

Despite improvements in implant design, stemless TSAs do not change the surgical approach and still require detachment of the subscapularis tendon. Failure of the subscapularis repair is an important source of post-arthroplasty complications [20-24]. The tendon requires repair during surgery, which is limited by surgical fixation and at risk of complications such as repair failure, tearing, anterior instability, glenoid loosening, and polyethylene wear [25-27]. The use of platelet-rich plasma (PRP) and amnion matrix for rotator cuff repairs could improve outcomes by promoting healing [28]. The use of orthobiologics may be applicable to the subscapularis closure after TSA. The purpose of this article is to describe a technique for a stemless TSA with the use of a biological membrane and PRP to facilitate the healing process (Video 1).

Surgical Technique

Preoperative Considerations
Preoperative assessment consists of a physical examination and radiographs to identify the degree of OA (Fig 1). The patient is placed in the beach-chair position and anesthetized with general anesthesia. All bony prominences are well padded, and the neck is well positioned.
The operative shoulder is treated with skin preparation solution and then draped in sterile fashion. An arm holder is used for positioning.

Surgical Approach to Glenohumeral Joint
A curvilinear incision is started at the border of the midsection of the clavicle and proceeds down the arm following the deltopectoral groove. Blunt dissection is performed, and the cephalic vein is mobilized away from the interval. Once the interval is achieved, a deep Kolbel retractor is used to expose the biceps tendon and subscapularis. No. 2 FiberWire (Arthrex, Naples, FL) is used to tag the subscapularis tendon. The subscapularis is then released and retracted. The joint capsule is incised, the humeral head is subluxated, and the arm is rotated externally for maximum exposure.

Humeral Head Resection
A rongeur is used to excise osteophytes to view the native contour of the humeral head and neck. The humerus is dislocated anteriorly with a large Darrach retractor to expose the head. A freehand cut of the anatomic head is made using a sagittal saw at the anatomic neck, with care taken to protect the rotator cuff with the Darrach retractor (Fig 2).

Trunnion Preparation
The humeral head implant is sized using the native head resection. The trunnion guide is then compared with the humerus and fixed into the bone by advancing the pegs (Fig 3). Reaming is performed by hand using a core reamer to create the implantation site (Fig 4). The guide is then removed, and a protective cap is placed on the humeral head.

Fig 1. Preoperative standing radiograph (anteroposterior view) of the right (R) shoulder showing signs of glenohumeral arthritis after arthroscopic debridement. Notable bone spurring is apparent with inferior osteophyte formation. Owing to the patient's young age and good bone density, the patient is a candidate for stemless arthroplasty.

Fig 2. Intraoperative image of the right shoulder taken from anterior to posterior showing humeral head resection. The patient is placed in the beach-chair position with the right shoulder draped into the surgical field. A freehand cut of the anatomic head is made using a sagittal saw at the anatomic neck, with care taken to protect the rotator cuff by using Darrach retractors. A guide is available depending on surgeon preference.

Glenoid Arthroplasty
A Kolbel retractor is placed to expose the glenoid. The labrum is excised to expose the cortical rim. By use of a pin guide, a guidewire is advanced centrally into the glenoid. A semicircular reamer is used to denude the cartilage, followed by use of a 6-mm drill to make the center hole. A glenoid drill guide is placed over the glenoid and secured with the drill through the superior hole. The inferior keel is reamed and then expanded using a broach. After irrigation, simple cement is introduced into the glenoid fossa. The glenoid component (Arthrex Univers Vaultlock) is then impacted.

Humeral Head Replacement
The protective cap is removed, and the trunnion guide is replaced. The centering drill guide is then fully seated into the trunnion guide. A drill center guide pin with laser lines is advanced to the lateral cortex without breaching it. The pin is used to measure the depth of the trunnion, so care must be taken not to breach the lateral cortex (Fig 5). After the size is determined, the guide and template are removed. The trunnion implant is placed over the drill guide. This implant is impacted into place with the handle placed over the guide.
The guide is removed, and the cage screw is placed in the trunnion and advanced using a screwdriver inserted through the impactor handle (Fig 6). The humeral head implant (Arthrex Eclipse) is then placed on the trunnion, impacted, and checked (Fig 7). The shoulder is reduced and taken through the motion arc with 40° of external rotation and 60° of internal rotation. There should be roughly 50% posterior translation. The joint is then copiously irrigated.

Subscapularis Repair and Orthobiologic Augmentation
The rotator cuff interval is closed and the subscapularis is repaired in a side-to-side fashion with No. 2 FiberWire sutures. An amnion matrix (Arthrex) is then applied over the subscapularis repair and fixed using No. 0 Vicryl sutures (Ethicon, Somerville, NJ) with the epithelial layer facing up (Figs 8 and 9). Prior to implantation, 40 mL of the patient's blood is drawn and centrifuged down for a PRP injection. Ten milliliters of PRP is injected into the amnion matrix as well as intra-articularly (Fig 10). The deltopectoral interval is closed using Vicryl. The subcutaneous tissue and skin are closed. Follow-up imaging shows intact hardware with bone infiltration (Fig 11). Table 1 shows pearls and pitfalls related to the procedure.

Fig 6. Intraoperative image of the right shoulder taken from anterior to posterior showing the cage screw placed in the trunnion and advanced using a screwdriver inserted through the impactor handle. The patient is placed in the beach-chair position with the right shoulder draped into the surgical field. The cage screw will hold the trunnion and implant in place.

Fig 7. Intraoperative image of the right shoulder taken from anterior to posterior showing the humeral head implant (Arthrex) after being placed on the trunnion and impacted. The patient is placed in the beach-chair position with the right shoulder draped into the surgical field. Once the implant is impacted into place, the shoulder is taken through the range-of-motion arc.

Discussion
The demand for shoulder arthroplasty is projected to increase by 8.2% per year in patients younger than 55 years; this rise indicates a need for TSAs with better outcomes for younger, active patients that can easily undergo revision [29]. The stemless implant has secure bony fixation and ingrowth, avoids stem-related complications, and decreases blood loss and operative time [13,30,31]. In patients with post-traumatic OA, stemless TSAs are also useful because they can better accommodate operative hardware that would normally obstruct a conventional stem [32]. The main disadvantage of stemless TSAs is the need for good humeral bone density, limiting their use in elderly patients [32]. Table 2 lists advantages and disadvantages of the proposed technique.

The proposed technique (Video 1) uses a stemless prosthesis and orthobiologic augmentation to hasten TSA recovery. An amnion matrix is used to promote healing of the subscapularis tendon. Postoperative subscapularis failure, which has an incidence rate of approximately 3% [26], has been associated with anterior instability and subluxation of the humeral head, as well as the need for additional surgery [33]. Incorporation of amnion promotes healing in partial rotator cuff tears and may be applicable to subscapularis repair [28,34]. Although relatively few clinical data are available, amnion matrices have been shown to improve the outcomes of supraspinatus tears in a canine model [35].
More generally, amnion matrices can facilitate tendon healing and may result in a quicker recovery [36]. Our technique used PRP to further promote healing of the subscapularis and surrounding tissues. Application of PRP to a recently repaired rotator cuff tendon has been shown to increase the vascularity of the tendon in the early stages of healing [37]. Despite mixed evidence, several studies have indicated that PRP reduces pain and blood loss and improves short-term outcomes of arthroplasty [38-41]. Although the use of PRP injections during TSA has not been well studied, the relative safety and the evidence of PRP's positive effect on soft-tissue healing make the use of PRP a prudent decision to optimize TSA outcomes.

In conclusion, the use of the stemless system along with biological agents likely produces an optimal outcome for patients with adequate humeral bone stock and those wishing to return to active lifestyles. When compared with traditional stemmed arthroplasty, the proposed method preserves more of the original anatomy, spares potential operative complications, and offers flexibility for revision. Orthobiologic augmentation potentially provides repair optimization with few risks. Longitudinal studies are required to determine more accurate life span estimates.

Table 1. Pearls and pitfalls
Pearls:
- Size the trunnion so that it covers the cortical bone.
- Perform freehand cutting of the humeral head to match the native joint anatomy.
- Use the impactor handle to hold the trunnion in place to prevent angling or rotation while advancing the cage screw.
Pitfalls:
- Avoid placing the amnion matrix with the epithelial layer down.
- Avoid sizing the trunnion using the osteophytes.
- Avoid drilling through cortical bone when measuring the cage screw.
- Avoid oversizing the cage screw by using the last size passed by the guide pin into the drill guide.
Physical activity and neurocognitive functioning in aging - a condensed updated review

This condensed review gives an overview of two methodological approaches to studying the impact of physical activity on cognition in the elderly, namely cross-sectional studies and randomized controlled intervention studies with pre- and post-measures. Moreover, this review includes studies investigating different types of physical activity and their relation to cognitive functions in older age. Behavioral data are considered, but the main focus lies on neuroscientific methods like event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI).

Background

Healthy aging is associated with a decline in sensory, motor and specific cognitive functions. However, such declines depend on various factors, such as genetics and lifestyle [1]. In particular, physical activity and training not only improve physical and motor but also cognitive functions [2-11] and reduce the risk of cognitive decline and dementia in later life [8,12-19]. Generally, executive functions that control lower-level functions appear to benefit most from physical activity, as documented in several review articles [2,20,21]. Neurophysiological intervention studies show both general effects, such as increased cerebral blood flow, and specific effects on certain brain structures and functions [5,11,22]. Different types of exercise appear to exert distinct effects on brain and cognition [23]. Cross-sectional studies overall show a positive association between regular physical exercise and cognitive functions. However, from this type of study the causal relationship between physical activity and cognitive functions cannot be unequivocally drawn. Nevertheless, meta-analyses of behavioral and neuroscientific studies showed stable associations between physical fitness and cognitive, mainly executive, functions in older age [20]. Randomized intervention studies with control groups, which represent a more valid method for drawing causal relationships between variables, show even more consistent effects on cognition in older adults [2,5,24], but see [25]. The aim of the next two sections is to provide an overview of cognitive changes associated with physical activity in elderly individuals across electrophysiological and fMRI studies using cross-sectional and randomized controlled trial designs.

Cross-sectional (correlative) studies

Structural MRI studies revealed a consistent positive relationship between cardiorespiratory fitness and both brain volume and functional activation in cortical regions including the anterior cingulate, lateral prefrontal, and lateral parietal cortex [5,11]. Also, higher physical fitness is associated with greater gray matter volume in the hippocampus [26]. Those regions are linked to cognitive, in particular executive and memory, functions, which are known to deteriorate with increasing age. Indeed, high physical activity is related to high cognitive performance [8]. The review of Lautenschlager et al. [27] also suggests a robust and dose-dependent relationship between physical exercise and cognitive performance in older adults. Concerning the specificity of cognitive functions, Desjardins-Crépeau et al. [28] accumulated evidence that high physical fitness was associated with greater processing speed and better executive functions, while memory performance assessed by the Rey Auditory Verbal Learning Test (RAVLT) was less improved.
The fMRI study conducted by Prakash and co-authors [22] found less interference and higher accuracy, accompanied by enhanced activity in anterior brain regions, in physically active seniors across several executive control tasks. Another correlational study using a larger sample of older adults found an association between physical fitness and reaction times on the incompatible trials of the flanker task, such that physically active adults responded faster and showed higher working memory capacity during a 2-back task than their less active peers [29]. In a longitudinal correlational study of about 1400 participants aged 19-94 years, Wendell et al. [30] found that neuropsychological performance was positively associated with maximum oxygen consumption. In a longitudinal correlational study with nearly 5000 participants, Chang et al. [31] found a significant association between midlife habitual physical activity and executive functions measured 25 years later. These data suggest that long-term physical activity may counteract age-related decline of executive functions. Generally, previous research indicates that older adults who engage in physically active recreational activities or have higher cardiovascular fitness are at lower risk of cognitive decline compared to inactive older adults [12-19, 21, 22, 25, 32-37].

Event-related potentials derived from the electroencephalogram (EEG) during performance of cognitive tasks offer insights into the mechanisms underlying sensory and cognitive functions. Because ERPs have excellent time resolution, each process can be analyzed separately. Different ERP components reflect specific sensory, cognitive and central motor functions and can thus help pinpoint the origin of behavioral effects. However, owing to the low spatial resolution of this method, the spatial aspect is less informative; more informative are the temporal and functional properties of the neurobehavioral gains due to physical activity. It has been shown that not only performance in executive tasks and fMRI activity patterns but also some ERP components are modulated by physical activity [9,20].

Using a cross-sectional design, Berchicci et al. [38] investigated movement-related cortical potentials (MRCPs) in a sample of 130 participants between 16 and 86 years of age, divided into regularly physically active and low active groups. They found faster responses and shorter MRCP latencies in the former than in the latter group. Interestingly, the magnitude of the speed and latency differences began to increase after the age of 30, suggesting that older people benefit more from physical activity than younger ones. Other cross-sectional studies also observed a positive relationship between physical activity and cognitive functions in the elderly. For example, Taddei and colleagues [39] used a cross-sectional design to study the performance of young fencers, older fencers, and non-fencers in a go/no-go task. In keeping with previous research on young fencers [40], they reported faster reaction times and an earlier and larger N2 in fencers compared to non-fencers. Hillman et al. [41] found lower mixing and switching costs and shorter latencies and larger amplitudes of the P3b component in physically active than in inactive older individuals. Themanson et al. [42] reported lower mixing costs but not switching costs in active vs. low active older adults. More recently, Dai and colleagues [43] compared three groups of older adults: open-skill exercisers (e.g. tennis), closed-skill exercisers (e.g. jogging), and irregularly active individuals.
The total duration of physical activity in these groups ranged between 11 and 13 years. The open-skill group showed the lowest mixing costs, followed by the closed-skill and no-activity groups. No effects on local switching costs were obtained. The ERPs showed larger P3b amplitudes in both active groups, supporting the findings of Hillman et al. [41]. In a recent cross-sectional study, Gajewski and Falkenstein [44,45] investigated the impact of life-long physical activity (about 50 years) on executive functions and ERPs. The authors compared two groups of either physically high or low active healthy older men. Physical activity was quantified on three different time scales: long-term activity (across decades) was assessed by self-reports, mid-term activity (the last 2 years) was assessed by a questionnaire with detailed information about weekly time spent on physical activity, and current activity was assessed by bicycle ergometry. The groups differed significantly in all three dimensions; in other respects the groups were carefully matched. The study targeted executive functions, namely inhibition (Stroop task), task switching, and working memory (memory-based switch task). In both tasks physically active seniors showed better behavioral performance, particularly under interference and task-switching demands. The interference score in the Stroop task was negatively correlated with physical activity. In both tasks the ERPs revealed a shorter latency of the P2, reflecting faster recall of stimulus-response mappings, and generally more negative amplitudes over fronto-central brain regions, such as the N2 and N450, indicating enhanced inhibitory processing in the high active relative to the low active group. A similarly enhanced frontocentral activity was observed in young vs. old subjects (unpublished data), suggesting that the ERP pattern (e.g. amplitudes, timing or even morphology of ERP components) in physically active seniors becomes similar to the ERPs observed in young adults. A further study tested auditory distraction while the subjects had to respond to short and long auditory stimuli [46]. Occasionally the auditory stimuli had a slightly different frequency, which was task-irrelevant. The frequency deviations impaired performance more in physically low active than in high active seniors. This was accompanied by a stronger frontal positivity (P3a) and increased activation of the anterior cingulate cortex, suggesting a stronger involuntary shift of attention towards task-irrelevant stimulus features in low active compared to highly active seniors. The results also showed a positive relationship between physical fitness and attentional control, presumably due to more focused attentional resources and enhanced inhibition of irrelevant stimulus features in physically active seniors. As in the review of Desjardins-Crépeau [28], short- and long-term memory performance in this study did not differ between physically high active and low active individuals. Other tasks, such as visual search and simple go/no-go tasks, also showed no group difference. Hence, it appears that not all but rather specific executive functions (mostly inhibition of distracting events) are improved in older subjects by lifelong physical activity. Finally, a recent review by Prakash et al. [21] provides evidence for physical activity being associated with a modest reduction in the relative risk of cognitive decline. An evaluation of the physical activity-cognition link across the life span provides modest support for the effect of physical activity on preserving and even enhancing cognitive vitality and the associated neural circuitry in older adults, with the majority of benefits seen for tasks that are supported by the prefrontal cortex and the hippocampus.
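Since several of the findings above rest on ERP components (P2, N2, P3a, P3b) extracted from the EEG, a minimal sketch of how an ERP is derived may be useful: epochs time-locked to stimulus onsets are cut from the continuous recording, baseline-corrected, and averaged, so that activity unrelated to the event cancels out. The sketch uses synthetic single-channel data and plain numpy; the sampling rate, epoch window, and P3 search window are illustrative assumptions, not parameters from any cited study.

```python
# Minimal sketch of ERP derivation: average EEG epochs time-locked to
# stimulus onsets. Synthetic single-channel data; numbers are illustrative.
import numpy as np

fs = 250                                                       # sampling rate (Hz)
eeg = np.random.default_rng(1).normal(0, 10, size=fs * 600)    # 10 min of "EEG" (µV)
events = np.arange(fs * 2, eeg.size - fs, fs * 2)              # stimulus onsets (samples)

pre, post = int(0.2 * fs), int(0.8 * fs)                       # epoch: -200 ms to +800 ms
epochs = np.stack([eeg[e - pre : e + post] for e in events])

# Baseline-correct each epoch by its own pre-stimulus mean, then average
# across epochs to obtain the ERP.
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
erp = epochs.mean(axis=0)

# Peak amplitude and latency in a P3-like window (300-600 ms) -- the kind
# of measures compared between active and inactive groups in the studies above.
t = (np.arange(-pre, post) / fs) * 1000                        # time axis in ms
win = (t >= 300) & (t <= 600)
print(f"P3-like peak: {erp[win].max():.2f} µV at {t[win][erp[win].argmax()]:.0f} ms")
```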
Randomized controlled intervention studies with pre- and post-measures

As shown, most cross-sectional studies reveal good evidence for a positive association between physical exercise and cognition, particularly executive functions, in older populations. However, given the correlative nature of these studies, causation cannot be established. Therefore, a number of randomized, controlled intervention studies with pre- and post-measures have been conducted with older adults. The duration of these trainings varied widely. Berryman et al. [47] compared the effects of different short-term (8-week) physical interventions in 51 older adults. All groups showed similar improvements in cognition, with the maximum effect on inhibition. Forte et al. [48] trained 42 older adults for 3 months in either coordination training (a multicomponent program prioritizing neuromuscular coordination, balance, agility, and cognitive executive control) or classic machine-based resistance training for muscle strength conditioning. Inhibitory control improved after the intervention, independent of training type. Similarly, Liu-Ambrose et al. [49] found improved inhibitory control in a Stroop test after 12 months of resistance training relative to the control group. Predovan et al. [50] reported lower interference susceptibility in seniors after a 3-month aerobic training program relative to non-trained persons; the increase in physical capacity was associated with lower interference scores. Albinet et al. [51] trained older adults with an aerobic exercise vs. a stretching program for 12 weeks, measuring executive control with the Wisconsin Card Sorting Test. Only the participants in the aerobic group improved their test performance. These results confirm that physical training improves executive functions. However, some specific memory functions also appear to profit from physical training: Erickson et al. [52] administered a 12-month aerobic exercise program to sedentary older adults, and the active participants showed an improvement in spatial memory. Colcombe and Kramer [2] conducted a meta-analysis of randomized controlled trials concerning the effect of physical training on cognition in healthy older adults. They obtained a stable effect of physical training with a moderate effect size (0.48); the largest effect size was observed for executive control processes (0.68). The effects were larger when aerobic training was combined with strength and flexibility training. In addition, a session duration of at least 30 min and a total duration of at least 6 months appeared necessary to produce stable effects on cognition. A later review by Kramer and Erickson [8] suggests that moderate-intensity exercise of about one hour per session, at a frequency of at least 3 sessions per week, yields greater cognitive and brain effects. A more recent meta-analysis including randomized controlled studies from 1966 until 2009 with a total of 2,049 participants showed modest improvements in processing speed, attention and executive functions, whereas memory effects were less consistent [24].
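The effect sizes quoted above are standardized mean differences. As a worked illustration, the sketch below computes Cohen's d from group summary statistics; the means, standard deviations, and sample sizes are hypothetical, chosen only so the result lands near the 0.68 estimate reported for executive control processes, and are not taken from any cited trial.

```python
# Minimal sketch of the standardized mean difference (Cohen's d) that
# meta-analyses such as Colcombe and Kramer's aggregate across trials.
# All group statistics below are hypothetical illustration values.
import math

def cohens_d(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    # Pooled standard deviation across the treatment and control groups.
    pooled = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                       / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled

# Hypothetical executive-function gain scores: training group vs. control.
d = cohens_d(mean_tx=5.2, sd_tx=4.0, n_tx=30,
             mean_ctrl=2.4, sd_ctrl=4.2, n_ctrl=30)
print(f"Cohen's d = {d:.2f}")   # ~0.68, near the executive-control estimate
```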
A meta-analysis by Hindin and Zelinski [53] indicates that aerobic exercise interventions have moderate to medium-sized effects on executive function and memory. The review of Kirk-Sanchez and McGough [54] stated that many studies demonstrate positive effects of exercise on cognitive performance, while others show minimal to no effect. This is in line with the recent meta-analysis of Kelly et al. [25], which analyzed 25 randomized controlled studies investigating the cognitive effects of aerobic exercise, resistance training and Tai Chi. The authors found evidence that resistance training and Tai Chi have cognitive benefits among seniors, whereas no consistent significant effects were found for aerobic exercise. Hence, the variables that influence the impact of physical activity on cognition have to be explored in more detail in the future. Important variables are training type, frequency, duration and intensity, but factors like age, baseline level of physical performance, and sensory and general cognitive abilities also have to be considered. Several recent studies investigated different types of physical training. Voelcker-Rehage et al. [23] compared 12 months of cardiovascular training with coordination training in older adults. Changes in brain activation were investigated by functional magnetic resonance imaging (fMRI). Both groups improved in executive functioning and perceptual speed. The fMRI revealed unspecific changes for both groups in prefrontal areas, as well as training-specific changes in other brain regions. In a follow-up study, the group showed that motor fitness due to coordination training led to an increase in subcortical brain areas relevant for motor control [55]. Recent reviews have summarized the relationship between physical activity and cognitive functions in the elderly, focusing on different types of physical activity, such as aerobic, resistance and coordination training, and evaluating the underlying neuronal mechanisms [21,56,57]. Physical training interventions result in various effects on brain structure and function. Gomez-Pinilla and Hillman [9] proposed that exercise influences cognition by affecting molecular events related to the management of energy metabolism and synaptic plasticity. Chapman et al. [57] found increased cerebral blood flow (CBF) and a related increase in memory performance in older adults after three months of physical training. In the 12-month study of Erickson [52] cited above, the participants who completed the aerobic training program showed an increase in the volume of the anterior hippocampus, which was related to an improvement in spatial memory. The increased hippocampal volume was associated with greater serum levels of BDNF, a mediator of neurogenesis. Different types of physical activity affect specific neurotrophins relevant for neural plasticity: whereas aerobic exercise up-regulates the metabolism of BDNF, resistance exercise seems to stimulate insulin-like growth factor 1 production. Both are thought to facilitate neurogenesis, synaptogenesis and angiogenesis through partly interacting pathways [56]. Thus, combined physical interventions may have the largest impact on neural plasticity. In a recent study, Kleemeyer et al. [58] investigated the effects of a six-month fitness training on hippocampal microstructure and volume in sedentary elderly adults. More positive changes in fitness were associated with more positive changes in tissue density, and more positive changes in tissue density were associated with more positive changes in volume.
The authors conclude that fitness-related changes in hippocampal volume may be brought about by changes in tissue density. In a randomized intervention study Gajewski et al. [59,60] compared the effects of a combined strength and aerobic training to those of two other active groups (PC-based cognitive training, relaxation training) and a no-contact group in 142 healthy seniors (aged 65 and above). The active trainings were administered by skilled trainers for 4 months, twice a week, with a session length of 90 min. A battery of psychometric tests was administered before and after the training. While the effects of the cognitive training were largest, the physical training also yielded specific effects on cognition, e.g. on processing speed, executive functions (speed in the interference condition of the Stroop test and in PC-based task switching), and some aspects of memory (delayed recall in the VLMT). However, working memory, as measured by the ratio of detected targets in the 2-back task, was not improved; instead, the reaction times of target detection were faster. Thus, the improvements mainly affected the speed of performance, while the quality of performance was less improved, inversely to the effects of cognitive training, which mainly improved accuracy. Hence, physical training may improve the performance of the elderly in everyday situations where speed rather than accuracy is relevant, such as braking in traffic, suggesting a faster coupling of perception with action or, alternatively, a lower motor threshold without changed perceptual and cognitive abilities. However, no effects were found on the electrophysiological measures that were used in some of the tests. This may be due to the fact that the training duration and/or intensity were too low to produce stable effects at the electrophysiological level.

As with all interventions that require effort and extra time, motivation has to be maximized to yield high compliance of older trainees. Hence, virtual-reality enhanced types of exercise ("exergaming") may increase motivation and also success. Anderson-Hanley et al. [61] investigated the effect of stationary cycling with virtual reality tours ("cybercycling") on cognitive functions. The authors found that cybercycling improved executive functions and enhanced BDNF more than traditional exercise. Chao et al. [62] reviewed studies that investigated the effects of the Nintendo Wii™ exergames on cognition, physical function, and psychosocial outcomes in older adults. Indeed, positive effects on physical function, cognition and quality of life were found.

A similar but more everyday-like approach is natural physical activity, which consists not only of aerobic and strength but also of coordinative and cognitive exercise. One such multilevel exercise is dancing. Effects of dancing on cognitive performance as well as on brain structure and functions have been found [63,64]. Kattenstroth and colleagues [63] investigated the effects of a 6-month dance class (1 h/week) in physically non-active elderly men and women. The participants learned step sequences of increasing complexity without a dance partner. Beneficial effects were found for dance-related parameters such as posture and reaction times, but also for cognitive, tactile and motor performance and subjective well-being. However, the data basis concerning dancing and similar activities is still very scarce [65]. As a most important everyday-life outcome, dancing appears to prevent falls, a major source of illness and death in aged people [66].
Since dancing is much more motivating and joyful than standard exercise, more intervention studies on different types and formats of dancing are necessary. Such studies should also use neurophysiological measures in order to shed more light on the effects of such interventions at the brain level. In particular, the electroencephalogram and event-related potentials should be used more frequently to unveil changes of brain mechanisms due to the interventions.

Apart from physical activity, cognitive activity appears to improve cognitive functions such as task switching, visual search or working memory in older adults, as also clearly seen in the above-mentioned study [59,60]; see also [56,67] for overviews. Since cognitive and physical training appear to tackle different aspects of cognition, it should be advantageous to combine both [68]. Indeed, the combination appears to yield larger effects on cognition in older adults than physical or cognitive training alone [69-76]. It has been argued that cognitive and physical training produce synergistic effects beyond those of either one individually [56,77]. However, the underlying biological and functional mechanisms of these synergistic effects are currently unknown. Hence, more combination studies using neuroimaging methods are necessary. Also, apart from combining aerobic and coordinative training, dancing requires multiple cognitive functions, which confirms its suitability as a combined intervention to improve physical and mental well-being in older adults.

Conclusion

Physical activity certainly enhances physical fitness but also cognitive fitness. Physical activity is related to unspecific and specific brain changes, the latter depending on the type of activity. Such brain changes are accompanied by improved cognitive functions. Higher-level functions such as executive functions are more improved than lower-level functions. Combined programs which embrace aerobic, strength and coordination training are more favorable, since the different aspects of such training induce different brain and behavioral changes. It is probably even more effective to combine complex physical training with cognitive training. In this respect, natural activities such as regular dancing, which affect physical, coordinative and cognitive functions, offer the maximum benefit to preserve and even improve physical and mental fitness in advanced age. In future studies such combined or natural activities should be explored in intervention trials with adequate active control conditions. From a scientific perspective, such studies should not only use subjective and behavioral measures but also non-intrusive neurophysiological methods such as electroencephalography.
Nontrivial magnetic field related phenomena in single-layer graphene on a ferroelectric substrate

The review is focused on our recent predictions of nontrivial physical phenomena, related to magnetic field, taking place in the nanostructure single-layer graphene on ferroelectric substrate. In particular, we predicted that 180-degree domain walls in a strained ferroelectric film can induce p-n junctions in a graphene channel and lead to unusual temperature and gate voltage dependences of the perpendicular modes ν of the integer quantum Hall effect. The non-integer numbers and their irregular sequence principally differ from the conventional sequence ν = 3/2, 5/3, …. The unusual ν-numbers originate from significantly different numbers of the edge modes, ν1 and ν2, corresponding to different concentrations of carriers in the left (n1) and right (n2) ferroelectric domains at the p-n junction boundary. The difference between n1 and n2 disappears with the vanishing of the film's spontaneous polarization in the paraelectric phase, which can occur in a wide temperature range depending on the misfit strain originating from the film-substrate lattice mismatch. Next, we studied the electric conductivity of the system ferromagnetic dielectric - graphene channel - ferroelectric substrate. The magnetic dielectric locally transforms the band spectrum of graphene by inducing an energy gap in it and making it spin-asymmetric with respect to the free electrons. It was demonstrated that if the Fermi level in the graphene channel belongs to energy intervals where the graphene band spectrum, modified by EuO, becomes sharply spin-asymmetric, such a device can be an ideal non-volatile spin filter. The practical application of the system under consideration would be restricted by the Curie temperature of the ferromagnet. Controlling the Fermi level (e.g. by temperature, which changes the ferroelectric polarization) can convert a spin filter into a spin valve.

Introduction

After single-layer graphene was obtained experimentally for the first time in 2004 as a conducting channel for a FET, many remarkable effects caused by its Dirac-like spectrum, such as the Klein paradox, were observed in desktop experiments [1,2,3,4], although previously they had been treated as a part of high-energy physics only. On the other hand, many other effects, like the quantum Hall effect (QHE), which had previously been observed only in low-temperature experiments, were studied at ambient conditions; moreover, the unconventional integer QHE was predicted theoretically and observed experimentally [5,6,7]. In recent years special attention has been paid to studies of graphene in different "smart" systems (like graphene on ferroelectric, see e.g. [8]) with regard to their possible application in ultrafast non-volatile electronic devices of a new generation, and to graphene applications in spintronics.

This review is focused on recent predictions of unusual physical phenomena, related to magnetic field, taking place in the nanostructure single-layer graphene on ferroelectric substrate. In particular, we predicted that 180-degree domain walls in a strained ferroelectric film can induce p-n junctions in a graphene channel and lead to unusual temperature and gate voltage dependences of the perpendicular modes ν⊥ of the integer quantum Hall effect [9]. The non-integer numbers and their irregular sequence principally differ from the conventional sequence ν⊥ = 3/2, 5/3, ….
The unusual ν⊥ numbers originate from significantly different numbers of the edge modes, ν1 and ν2, corresponding to different concentrations of carriers in the left (n1) and right (n2) ferroelectric domains at the p-n junction boundary. The difference between n1 and n2 disappears with the vanishing of the film's spontaneous polarization in the paraelectric phase, which can occur in a wide temperature range depending on the misfit strain originating from the film-substrate lattice mismatch. Next, we studied the electric conductivity of the system ferromagnetic dielectric - graphene channel - ferroelectric substrate [10]. The magnetic dielectric locally transforms the band spectrum of graphene by inducing an energy gap in it and making it spin-asymmetric with respect to the free electrons. It was demonstrated that if the Fermi level in the graphene channel belongs to energy intervals where the graphene band spectrum, modified by EuO, becomes sharply spin-asymmetric, such a device can be an ideal non-volatile spin filter. The practical application of the system under consideration would be restricted by the Curie temperature of the ferromagnet. Controlling the Fermi level (e.g. by temperature, which changes the ferroelectric polarization) can convert a spin filter into a spin valve.

Integer quantum Hall effect in a graphene channel with a p-n junction at a domain wall in a strained ferroelectric film

It has been demonstrated theoretically that the Dirac-like spectrum of graphene [1-4] and, consequently, the additional double degeneration of the zero Landau level (LL), which is common to the conduction and valence bands, result in an unconventional form of the integer quantum Hall effect (IQHE) [5-7]:

$$\sigma_{xy} = \pm\frac{4e^2}{h}\left(k+\frac{1}{2}\right).$$

Here $\sigma_{xy}$ is the xy-component of the conductance tensor and ν is the number of edge modes [6]. The Hall plateaus are centered around the values $\nu = 4(k + 1/2)$, where $k \geq 0$ is an integer. Nonzero k numbers are given by the expression [6]

$$k = \left[\frac{n}{4 n_B}\right], \qquad n_B = \frac{eB}{h}.$$

The symbol "[…]" stands for the integer part of a number, n is the 2D concentration of electrons in the graphene channel, and $n_B$ is the density of magnetic field flux threading the 2D surface, so that $n/(4 n_B)$ corresponds to the degree of the k-th LL occupation (each graphene LL is four-fold degenerate).

Abanin and Levitov [11] explained the peculiarities of the IQHE observed in graphene with a p-n junction across the conduction channel [12]. It has been demonstrated that the electron and hole modes can mix at the p-n boundary in the bipolar regime, leading to current partition and quantized shot noise. In contrast, the formation of the IQHE with p-n junctions created along the longitudinal direction of the graphene channel has been studied recently [13], and enhanced conductance can be observed in the case of bipolar doping. In both cases (p-n junction along and across the channel) the observation of the IQHE can be exploited to probe the behavior and interaction of quantum Hall channels. Recently [14,15] we studied the conductivity of a graphene channel with a p-n junction induced by a 180-degree ferroelectric domain wall (FDW) in a ferroelectric substrate. A p-n junction in graphene at a FDW was studied experimentally in Refs. [16,17]. Later on we studied the dynamics of p-n junctions induced in a graphene channel by FDW motion in the substrate [18] and demonstrated how the number of p-n junctions in a channel between the source and drain electrodes can be varied by the motion of FDWs in the ferroelectric substrate [19].
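For readers who want to reproduce the arithmetic behind such mode counts, the following minimal sketch (ours, not taken from Refs. [9,11]) combines the plateau-index formula above with the full mode-mixing result of Abanin and Levitov, where the junction conductance in the bipolar regime is governed by ν1ν2/(ν1 + ν2) and in the unipolar regime by the smaller filling factor. Function names and the example numbers are illustrative assumptions.

```python
# Hedged sketch: effective number of perpendicular IQHE modes nu_perp for a
# graphene p-n junction, following the full mode-mixing picture of Ref. [11].
def filling_factor(n, B):
    """Half-integer graphene filling nu = +/-4(k + 1/2) for 2D carrier
    density n (m^-2, sign gives electrons/holes) and magnetic field B (T)."""
    e, h = 1.602e-19, 6.626e-34
    n_B = e * B / h              # density of flux quanta threading the sheet
    k = int(abs(n) / (4 * n_B))  # index of the highest (partially) filled LL
    sign = 1 if n >= 0 else -1
    return sign * 4 * (k + 0.5)

def nu_perp(nu1, nu2):
    """Edge modes across the junction: min() in the unipolar regime,
    full mixing nu1*nu2/(nu1+nu2) in the bipolar regime."""
    if nu1 * nu2 > 0:                 # same carrier type (n-n' or p-p')
        return min(abs(nu1), abs(nu2))
    a, b = abs(nu1), abs(nu2)         # bipolar p-n junction
    return a * b / (a + b)

print(filling_factor(3e15, 5.0))  # -> 2.0 (k = 0 plateau at B = 5 T)
print(nu_perp(2, -6))             # -> 1.5, the conventional 3/2 plateau
print(nu_perp(2, -10))            # -> 1.666..., i.e. the 5/3 plateau
```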
On the basis of these works we studied [9] the peculiarities of the IQHE in a graphene channel with a p-n junction at a FDW in a strained ferroelectric film on a thick rigid substrate electrode. Noteworthy, the temperature of the phase transition from the ferroelectric (FE) to the paraelectric (PE) phase taking place in a strained ferroelectric film can be shifted by a misfit strain over a wide temperature range, including room temperature [20]. Utilizing this fact, we revealed unusual features of the IQHE in graphene at room temperature, unknown earlier. Although the QHE has an integer character on both sides of the FDW, the total number of perpendicular modes proves to be non-integer in a very unusual way. Let us focus on the result in more detail. The system under consideration in Ref. [9] is shown in Fig. 1.

We regarded the graphene channel as highly homogeneous (see e.g. Ref. [22] for experiment), so that the scattering at short-range graphene defects is much smaller than the scattering at surface ionized impurities. We consider a single-layer graphene at ambient conditions; therefore there should not be any significant thermal effect on the graphene edge states, which can be considered within the approximation of sharp boundaries (see e.g. Refs. [23] and [11]). For a ferroelectric this is possible only if the thickness of the 180-degree FDW is much smaller than the domain width and the graphene channel length. To minimize the surface-induced domain wall broadening [24], the dead layer between the ferroelectric and graphene should be absent and the concentration of free carriers in graphene should be high enough to provide effective screening of the spontaneous polarization in the ferroelectric. Since modern interface engineering of epitaxial heterostructures allows one to control the existence of dead layers and their properties [25], one can consider the temperature dependence of the 180-degree FDW intrinsic width w(T) given in Refs. [25,26], together with the corresponding expression for the out-of-plane polarization of the strained film [20]; there T_C is the Curie temperature of the bulk ferroelectric, Q12 is the electrostriction coefficient, ε_b is the "background" permittivity of the ferroelectric [28,29], and ε0 is the dielectric permittivity of vacuum. We derived [9] approximate analytical expressions for the carrier concentrations n1 and n2, which depend on the film polarization [20] and on the film thickness h due to finite size effects [32]; these expressions involve the screening length l_S in the graphene layer, which is typically much smaller than 0.5 nm [30,31]. The transition temperature from the FE to the PE phase can be made close to room temperature by using an appropriate film-substrate pair, which defines the misfit strain u_m, and by decreasing the film thickness below 100 nm owing to the thickness-induced phase transition into the PE phase [32,33]. As seen from Fig. 2, the number of perpendicular modes ν⊥ varies from small integers to high non-integer numbers in the vicinity of the transition temperature from the ferroelectric to the paraelectric phase.

Thus, we predicted that 180-degree FDWs in a ferroelectric substrate, which induce p-n junctions in a graphene channel, lead to nontrivial temperature and gate voltage dependences of the perpendicular and parallel modes of the unconventional IQHE. Unexpectedly, the number of perpendicular modes ν⊥, corresponding to a p-n junction across the graphene conducting channel, varies from integers to different non-integer numbers depending on the gate voltage, temperature and oxide layer thickness, e.g.
ν⊥ = 1.94, 2, …, 5.1, 6.9, …, 9.1, …, 23, 37.4 for the first Hall plateaus, where the smaller numbers correspond to temperatures in the vicinity of the transition from the ferroelectric to the paraelectric phase.

Magnetic dielectric - graphene - ferroelectric system as a promising non-volatile device for modern spintronics

A field effect transistor (FET) with a graphene channel on a dielectric substrate was created for the first time in 2004 [34], and multiple attempts have been made to use the unique properties of the new 2D material in spintronics [35,36]. Shortly thereafter it was concluded that graphene is poorly attractive for spintronics, since the magnetoresistance changes of a graphene-based spin valve upon reversal of the magnetization at the ferromagnetic contacts are very small [35]. However, effective spin valves with a graphene "spacer" and cobalt contacts were soon created [37], and intensive efforts have been made to improve them [38,39], including the proposal to use a single-layer graphene channel in an active ferromagnetic element [40]. To realize this, the dielectric ferromagnet EuO was imposed on part of the graphene channel to induce a strong spin polarization of the graphene π-orbitals. The graphene band states split into subbands with "up" and "down" spin orientations, and EuO induces an energy gap between these bands [41]. Recently [42] we have shown that it is possible to create a non-volatile spin valve / spin filter similar to that proposed in Ref. [40], where, however, the appropriate location of the Fermi level is provided not by the gate voltage but by the spontaneous polarization of the ferroelectric substrate (see Fig. 3a).

The single-layer graphene channel is considered as an infinitely thin two-dimensional (2D) gapless semiconductor of rectangular shape with length L and width W. Since we regard L as smaller than the electron mean free path, the channel conductivity takes place in the ballistic regime. The graphene channel is placed on a single-domain ferroelectric film with a spontaneous polarization P_S. The ferroelectric substrate can be used for doping the graphene conductive channel with a significant number of carriers without the traditional application of a gate voltage, the 2D carrier concentration being n = P_S/e, where e is the electron charge. The sign "+" of P_S corresponds to a positive bound charge at the graphene-ferroelectric interface, and thus to graphene doping with electrons. The sign "-" corresponds to a negative bound charge at the interface and to channel doping with holes. As in Ref. [40], a magnetic dielectric EuO of length l < L is superimposed on the graphene channel, and a top gate is placed above the magnetic dielectric.

Isolated single-layer graphene is a 2D gapless semiconductor with a linear band spectrum near the Dirac point [35], E(k) = ±ħ v_F k, where k is the wave vector value, v_F = 10^6 m/s is the Fermi velocity, and the signs "+" and "-" correspond to the conduction and valence bands, respectively. In the section of the graphene channel located under EuO, the spectrum becomes spin-dependent [41]: the subbands for the two spin projections σ acquire their own gaps and slightly different Fermi velocities [40]. The gap in the energy spectrum of graphene induced by the dielectric ferromagnet is shown in Fig. 3b.
Note here that these spectra are supported by first-principles calculations [40]. Although an electron would pass through a non-magnetic graphene channel of length L without scattering, the presence of a section of length l < L in the channel, where the graphene has pronounced magnetic properties, makes it necessary to account for the local scattering of carriers with different spin signs. The conductivity of the graphene channel, taking into account the double degeneration of graphene at the points K, K', is described by the modified Landauer formula [46,47]

$$G = \frac{2e^2}{h}\sum_{\sigma}\sum_{m=1}^{M} T_{m,\sigma},$$

where M is the number of conductance modes. The transmission coefficient $T_{m,\sigma}$ is the probability that the electron passes the "magnetic" section of length l without scattering, and it depends on the value of the electron spin; for the full conductivity it is necessary to sum over both spin values. Using the relation l ≪ L, the number of modes for an electron in a graphene channel of width W can be taken as M = Int[2W/λ], where the symbol "Int" denotes the integer part. Using the relation between the 2D concentration of electrons and the Fermi energy in graphene, $E_F = \hbar v_F \sqrt{\pi n}$, we obtain for the de Broglie wavelength $\lambda = 2\sqrt{\pi/n}$ [48]. The graphene screening length l_S is usually smaller (or significantly smaller) than 0.1 nm [50,51], and ε_fb is an effective dielectric permittivity of the interfacial or "passive" layer on the ferroelectric surface [14]. Equation (2.5b) of Ref. [10] is valid under the condition of an ideal electric contact between the ferroelectric film and the graphene channel, the absence of a dead layer being assumed. Since n = P_S/e, the Fermi energy is proportional to the square root of the polarization; the influence of temperature and film thickness on the Fermi energy is presented in Fig. 4. If, for sufficiently long channels, the conduction regime becomes diffusive, an additional factor appears involving the ratio of the electron free path at the Fermi energy to the channel length [47]. If the scattering occurs predominantly on ionized impurities in the substrate (the most common case), then the conductivity will depend linearly on the carrier concentration and hence on the polarization [47].

Conclusion

To summarize, in Ref. [9] we revealed unusual features of the IQHE in graphene, unknown earlier. Although the QHE has an integer character on both sides of the FDW, the total number of perpendicular modes proves to be non-integer in a very unusual way. The nature of this effect principally differs from the traditional non-integer QHE (see e.g. [52]); no condensation of the electron gas into a special liquid state occurs in this case. A similar non-integer effect was observed earlier for a p-n junction in a graphene channel created by two gates [12]; however, the presence of a ferroelectric substrate with FDWs modifies the character of the non-integer QHE and introduces new smart details into it. In [10] we considered the conductivity of a system in which a magnetic dielectric is placed on a graphene conducting channel, which in turn is deposited on a ferroelectric substrate. The magnetic dielectric locally transforms the band spectrum of graphene by inducing an energy gap in it and making it asymmetric with respect to the spin of the free electrons. The range of spontaneous polarizations of ferroelectrics (2-5) mC/m², which can easily be realized in thin (10-100) nm films of proper and incipient ferroelectrics, was under examination.
If the Fermi level in the graphene channel belongs to energy intervals where the graphene band spectrum, modified by EuO, becomes sharply spin-asymmetric, such a device can be an ideal non-volatile spin filter; however, it cannot operate without the top gate. Note that the problem solved in Ref. [10] has a framework character. The practical application of the system under consideration is restricted by the relatively low ferromagnetic transition temperature of EuO. However, as demonstrated by Hallal et al. [53], alternative magnetic insulators with higher Curie temperatures can cause a similar local transformation of the graphene band spectrum. According to the first-principles calculations [53], the energy gaps imposed in graphene by the magnetic insulator Y3Fe5O12 (YIG) are similar to the ones described by Eq. (2.2); however, the "useful" energy ranges with spin asymmetry are several times wider there, which makes such a system more convenient for practical usage. The high ferromagnetic transition temperature of YIG (T_C = 550 K) permits the system to operate under ambient conditions. However, quite recently Song [54] demonstrated that the gate-induced spin valve based on graphene/YIG (or on graphene/EuS) also involves a heavy electron doping, 0.78 eV, which corresponds to a giant spontaneous polarization of 1.5 C/m². Therefore, the system magnetic insulator - graphene - ferroelectric can be treated as a promising one for spintronic devices of a new generation.
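As a closing illustration of the ballistic-conductance estimate discussed above, the sketch below chains together n = P_S/e, the graphene Fermi wavelength λ = 2√(π/n), the mode count M = Int[2W/λ], and a toy spin-dependent transmission (1 for the passing spin, 0 for the spin whose states fall into the proximity-induced gap). It is our hedged reconstruction, not the actual computation of Ref. [10]; all sample numbers are illustrative.

```python
import math

E = 1.602e-19        # elementary charge, C
H = 6.626e-34        # Planck constant, J*s

def carrier_density(P_s):
    """2D carrier concentration (m^-2) induced by polarization P_s (C/m^2)."""
    return P_s / E

def fermi_wavelength(n):
    """lambda = 2*sqrt(pi/n), since k_F = sqrt(pi*n) in graphene."""
    return 2.0 * math.sqrt(math.pi / n)

def conductance(W, P_s, T_up=1.0, T_down=0.0):
    """G = (2e^2/h) * M * (T_up + T_down); factor 2 is the valley degeneracy."""
    n = carrier_density(P_s)
    M = int(2.0 * W / fermi_wavelength(n))   # number of transverse modes
    return (2 * E**2 / H) * M * (T_up + T_down)

# Example: W = 1 um channel, P_s = 3 mC/m^2 (within the 2-5 mC/m^2 range above),
# ideal spin filter where only the spin-up states are transmitted
print(conductance(1e-6, 3e-3))               # -> ~6e-3 S
```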
Characterization of Plastics and Polymers: A Comprehensive Study

Since the beginning of polymer synthesis, enormous modification and development have taken place in this field, resulting in a world where people cannot imagine a single day without a polymer product. Polymers are light in weight, affordable and have the potential to provide strength similar to traditional metallic objects. As the demand for polymer products increases rapidly, their characterization and the evaluation of their mechanical properties become essential for reliable, scientific and cost-effective product design. Recently, the incorporation of biopolymers into the global market has made this industry more attractive to consumers and government bodies. But the main challenge lies in characterization, as plastics and polymers found in nature or synthesized artificially have a wide range of physical and chemical properties. So a particular set of instruments that can evaluate the properties of one group of polymers may not be usable for a different group with the same accuracy and technique. Researchers are working incessantly to find effective methods for this purpose, and many of them have succeeded in determining crucial properties of polymers like strength, elastic modulus, viscosity, hardness etc. The present work briefly discusses the superiority of polymer materials and the research work that has been carried out so far for their characterization. A study of biopolymers, their characterization and their necessity in the context of environmental sustainability is also included in this literature.

Introduction

In the advancement of civilisation, the role of plastics and polymers is too large to capture in a few sentences. The two terms are interrelated: plastics are a special type of polymer with huge application as raw materials for the manufacture of various modern appliances, while polymers form a big umbrella under which plastics exist. So not all polymers can be termed plastics. As the name indicates, a polymer is formed by the combination of monomer units and can have a wide range of physical and chemical properties. From small household applications to heavy industrial equipment, its presence is very much dominant. The reason behind this is its favourable properties, i.e. lightness, durability, chemical inertness (in most cases) and many other desired mechanical properties like hardness, strength per unit weight, scratch resistance etc. [1,2]. Even our body is full of natural polymers, i.e. polysaccharides, polynucleotides (DNA), proteins etc. [3]. The applicability of polymers has now expanded beyond common domestic applications. For example, polyurethanes can be used in multipurpose areas viz. furniture, the automobile industry, coatings, construction and even biomedical engineering due to their antimicrobial properties [4]. All these things make it impossible to think of a world without polymers and their products, but they also have a very adverse impact on the environment. So it is reasonable to invest effort not only in replacing plastic products with other biodegradable materials but also in recycling them to the maximum possible extent. That is why this is one of the most demanded topics of research nowadays. Several works have been done so far in this field, and some of their fruitful results are advanced waste management processes, biopolymers, biodegradable bioplastics, reinforced plastics and so on [5-7].
Polymers being widely used materials in many fields, their testing and characterization become essential for designing more reliable products with known or predictable material properties. The natures and properties of polymers are quite complicated to test and evaluate because of their wide range of physical states and the unavailability of a unique testing methodology [8,9]. From soft foams, viscous latexes and rubbers to very hard and brittle thermoplasts, all are different types and classes of polymers having different properties. Several unique physical testing methods are required for each unique class of polymers, and that is the most challenging issue in this field of research [10]. Physical testing is mostly used to determine the mechanical properties of polymers, namely hardness, tensile strength, flexural rigidity, scratch and wear resistance etc. Apart from that, a more detailed mechanical characterization can also be done for a sophisticated design of plastic products, covering creep, fatigue, fracture, impact strength, toughness, thermal stress etc.; these data also play a vital role in failure analysis [11,12]. Analysis of mechanical properties alone is not sufficient for the characterization of polymers, as they can also be susceptible to chemical degradation, environmental corrosion, microbial (for biopolymers) and photolytic (due to sunlight) degradation, moisture-induced damage etc. These factors underline the need for detailed chemical characterization of polymeric substances [13]. Properties of plastics and polymers are found to be highly influenced by functionalization, hydrocarbon chain length, and the incorporation of fibres, nanoparticles and different types of micro- and nanofillers [14-18]. It has been found that the interfacial property of graphene oxide and a polymer matrix can be enhanced by controlling the functional group polarity of the polymer: optimum polarity can increase the hydrogen bond density at the interface, resulting in higher adhesion and superior strength [14]. Similarly, a study by Jian et al. reveals that the interfacial shear strength and Young's modulus of a CNT-epoxy nanocomposite polymer are improved by chemical functionalization of the CNTs [19]. In modern applications, the use of polymer coatings has drawn the attention of many engineering industries due to their favourable surface properties. A significant portion of polymer coatings is used to increase the durability of the substrate surface, which has led to rigorous study and research on the evaluation of important surface properties like scratch, wear, hardness and corrosion resistance in recent years [20-24]. Due to the unique properties of plastics and polymers and their constant worldwide demand, they will continue to be an integral part of civilisation in the future, and research in this field will go on in search of new materials with more flexibility and advanced properties, able to serve the needs of the respective engineering industries in a better way. The present work is focused on the analysis and comparative study of different types of plastics and polymers. Specifically, the characterization processes of synthetic and biopolymers and the path towards sustainability are discussed in detail.
About plastics and polymers

Polymerization is a basic chemical process in which monomer units (ethylene, propylene etc.) are combined into a chain or a complex network with the help of a specific chemical reaction mechanism, basically addition or condensation [25] (Fig. 1). Polyethylene and polypropylene, the plastics most used in our everyday life, originate from crude oil and natural gas refining. First, naphtha is separated from the refinery by-products and then passed through a cracking process at around 800 °C, where the monomers, i.e. ethylene and propylene, are formed. The monomer is then compressed and cooled to a liquid state, and polymerisation is initiated by a free-radical reaction with the help of an "initiator" or "catalyst" like an organic peroxide. Finally, the polymers are collected as plastic pellets and sent to factories for casting into different products. These two are examples of addition, or chain-growth, polymerisation, which mainly takes place by free-radical exchange, while condensation polymerisation is one which gives water as a by-product through the substitution of hydroxyl (OH) radicals (Fig. 1: representation of the addition and condensation polymerisation processes).
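As a schematic illustration of the two mechanisms just described (the specific structures of Fig. 1 are not reproduced here), the addition polymerisation of ethylene and a generic condensation polyesterification between a diol and a diacid, written for ideal linear chains, read:

$$ n\,\mathrm{CH_2{=}CH_2} \;\longrightarrow\; [-\mathrm{CH_2{-}CH_2}-]_n $$

$$ n\,\mathrm{HO{-}R{-}OH} \;+\; n\,\mathrm{HOOC{-}R'{-}COOH} \;\longrightarrow\; [-\mathrm{O{-}R{-}O{-}CO{-}R'{-}CO}-]_n \;+\; (2n-1)\,\mathrm{H_2O} $$

In the first reaction the double bond simply opens and the whole monomer is incorporated into the chain, whereas in the second each new ester linkage releases one water molecule, which is the defining feature of condensation polymerisation.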
Characterization of traditional polymers and innovations in this field In polymer science, the role of synthetic polymers extracted from fossil fuels has been significant from its beginning due to its available production method and desirable material properties. For this reason, synthesized from renewable resources have not been able to replace their position completely till now. In our modern applications the fossil-based plastics i.e. PE (polyethylene), PP(polypropylene), PET(Polyethylene terephthalate), PS(polystyrene), PVC(pol chloride) etc. are still dominating as they are light, cost-effective, available in different forms and meet the design criteria properly. Among them PE and PP are crystalline at room temperature and can be used as a moulded object, especially the PET has glass transition temperature well above the room Representation of addition and condensation polymerisation processes. factories for casting into different products. These two are the example of addition or chain growth polymerisation which mainly takes place by a free radical exchange while the condensation polymerisation is one which gives water as a by-product radical. Polymerisation process is not limited to a small boundary. In fact, polymers were there from the creation of the universe and life and a wide range of . In recent years, the based polymers has inspired chemists, microbiologists and process engineers to work on their synthesis in detail. Research has revealed that biopolymers can be produced organic substances like starch, cellulose, lactic acid, polysaccharide, polyamides etc which can be synthesized or degraded in the environment eco-friendly with the . Some basic types of biodegradable and non-biodegradable based on their source [28] based plastics (fossil resources) caprolactone) (PCL) poly(butylene succinate/adipate) poly(butylene adipate-co-terephthalate) In polymer science, the role of synthetic polymers extracted from fossil fuels has been significant from its beginning due to its available production method and desirable material properties. For this reason, synthesized from renewable resources have not been able to replace their based plastics i.e. PE (polyethylene), PP(polypropylene), PET(Polyethylene terephthalate), PS(polystyrene), PVC(polyvinyl effective, available in different forms and meet the design criteria properly. Among them PE and PP are crystalline at room temperature and can be ET has glass transition temperature well above the room temperature which provides it higher hardness and dimensional stability. It is obvious that characterisation of any material requires proper testing methods to predict its physical and chemical properties which in fact, is the main requirement of engineering product design. There are several physical and chemical testing methods of polymers available today and many more are in the research stage [10,13]. Nowadays, as the use of recycled plastics has been emphasized worldwide as a part of sustainable development, its characterization becomes an important factor in design. As an example, the PET scrap is seen to be reusable after its separation from unwanted foreign particles, homogenization and heat treatment which gives it uniform mechanical properties. Its tensile properties, dynamic viscosity and thermo-oxidative stability are also studied for a detailed mechanical characterization [29]. 
PE and PP are widely abundant in domestic waste, and there is a good possibility of recycling them if the waste is collected to the maximum extent by the municipality and other concerned authorities. There are many existing post-processing methods, i.e. grinding, blending, homogenization, composite mixing and heat treatments, which can make them reusable. It is found that the tensile properties improve if the plastic scrap mixture is finely milled to powder, and alloy formation among the polymers also becomes possible [30]. Researchers also suggest that the addition of waste rubber dust from the textile industry to a recycled PP blend elevates its bending, tensile and impact strength and its damping characteristics [31]. Besides, PVC also contributes significantly to plastic waste due to its heavy use in data and power transmission cables and in fluid transmission pipes. A detailed mechanical characterization and composition analysis of recycled PVC (r-PVC) reveals its reusability and processability after separation from the waste. Because of some degradation of its mechanical properties over time, researchers suggest blending r-PVC with other recycled plastics or additives to improve its properties [32,33]. Research in the field of polymer science has been conducted extensively for the last few decades in search of more effective polymers with superior mechanical and chemical properties, and demand is increasing day by day as polymers benefit industry as well as society [34,35]. A polymer alloy can sometimes be a good replacement for the pure one; e.g. the PET/PP blend is more versatile than the individual polymers in many fields, giving better stiffness, thermal and mechanical properties, favourable permeability etc. [36]. Super polystyrene, the copolymer of styrene and 1,1-diphenylethylene (DPE), is another example of modifying an existing polymer. It exhibits long-term service capability at elevated temperature and a higher glass transition temperature due to the presence of the bulky DPE group, making it highly suitable as an insulating material and for pump housings, fuse boxes, microwave dishes etc. [2]. Recently a detailed characterization of polyvinylpyrrolidone, a new-generation polymer, revealed its usefulness and versatility in biomedical applications, as it is biocompatible, chemically stable, non-toxic, highly soluble in different organic solvents and capable of forming complexes with both hydrophilic and hydrophobic substances. It also has good electrical properties, adhesiveness and high solubility in water, which make it suitable for many applications beyond medicine [1]. Menčík et al. [37] carried out a detailed study on the characterization of viscoelastic plastics, viz. polymethyl methacrylate, epoxy resin and tooth enamel. They successfully performed nanoindentation tests on the said polymers to estimate material properties like elastic modulus and surface hardness. Besides, a separate creep test over a period of 3700 seconds was performed using a Berkovich indenter to evaluate the creep data. The average values of hardness and elastic modulus were found to be 0.205 GPa and 3.66 GPa, respectively.
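Hardness and modulus values of this kind are typically extracted from nanoindentation load-displacement curves via the Oliver-Pharr relations. The following minimal sketch shows that evaluation for an ideal Berkovich tip (projected contact area A_c = 24.5 h_c²); the input numbers are illustrative, not the raw data of Ref. [37].

```python
import math

# Hedged sketch of the Oliver-Pharr evaluation behind nanoindentation results:
# hardness H = P_max / A_c and reduced modulus E_r = sqrt(pi) * S / (2*sqrt(A_c)).
def berkovich_area(h_c):
    """Projected contact area of an ideal Berkovich tip at contact depth h_c (m)."""
    return 24.5 * h_c**2

def hardness(P_max, h_c):
    """Indentation hardness in Pa from peak load P_max (N)."""
    return P_max / berkovich_area(h_c)

def reduced_modulus(S, h_c):
    """S is the unloading stiffness dP/dh (N/m) measured at P_max."""
    return math.sqrt(math.pi) * S / (2.0 * math.sqrt(berkovich_area(h_c)))

# Illustrative indent on a glassy polymer: P_max = 10 mN, h_c = 1.4 um, S = 2.4e4 N/m
print(hardness(10e-3, 1.4e-6) / 1e9, "GPa")         # ~0.2 GPa
print(reduced_modulus(2.4e4, 1.4e-6) / 1e9, "GPa")  # ~3 GPa
```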
Nowadays the study and characterization of polymers and polymer composites is being carried out extensively in various research and development sectors. Their applications are found to be useful in almost every industry, like renewable energy, aircraft, automobiles and electrical equipment, biomedical applications and even modern drug synthesis processes [38]. The carbon nanotube (CNT) is one of the most promising materials for future innovation because of its many exceptional properties, which include excellent mechanical, thermal and electrical characteristics. The requirement for high-strength, lightweight materials in various engineering industries like automobile, aerospace, biomedical, defence etc. has resulted in rigorous study of CNT-based composite and nanocomposite materials [39,40]. The outer surface of a CNT is hydrophobic in nature and chemically inert, which makes it suitable for many applications; for example, a CNT-polymer based film electrode can be successfully utilized to detect glucose in an aqueous system [41,42]. Not only that, it has also been found experimentally that anticancer-drug-loaded multiwalled CNTs have the ability to target and destroy cancer cells more effectively [43]. Characterization of a CNT-reinforced epoxy composite was done by Dan et al. using finite element simulation to evaluate its mechanical properties, like the stress-strain relationship and tensile strength, and the results were validated by experimental data [44]. Advanced research suggests that brake liners made of CNT-reinforced epoxy and phenol-formaldehyde composite exhibit superior friction and wear characteristics as compared to the traditional asbestos liner [45]. Gupta et al. conducted a mechanical characterization of a carbon fibre reinforced polycarbonate polymer composite using different techniques like a universal testing machine, three-point bending and compressive loading. They successfully determined the effect of carbon fibre on the tensile, compressive and flexural properties, toughness and ductility of the 3D-printed polymer composite [11]. Due to increasing health concerns in our modern era, the synthesis and characterization of antimicrobial plastics and polymers are being pursued widely, especially for biomedical applications [46]. Researchers found that existing synthetic polymers can be made antibacterial by applying a layer of antimicrobial peptides (AMPs) on their surfaces. As this is expensive and complex in production, the synthesis of other cost-effective polymers is also being studied [47,48]. In recent years, the use of wood plastic composite (WPC) in automobile, aerospace and building structure components has increased significantly due to its low production cost, adequate strength and sustainability as compared to glass fibre composites. WPC can be manufactured from the recycled plastic blend separated from construction and demolition waste [49]. Mechanical characterization shows that it is less strong than the reference material made with low-density PE (LDPE) but stiffer than the reference. A similar kind of study and characterization has been performed for WPC manufactured from a recycled PE/PP blend reinforced with external fibre material, which is a more effective and advanced method to meet the required strength criteria of a design [50]. Modern high-performance engineering plastics can be a better option for the future because of their many favourable properties compared to the traditional ones, but their synthesis cost is very high. Aromatic poly(imino ether) (PIE) is one of them; it shows a promising future because of its low synthesis cost, high thermal stability and high decomposition temperature [51].
Semicrystalline polymers are also playing an important role in materials synthesis because of their exceptional thermomechanical properties, which are desirable in many applications. So their characterization also becomes necessary for further product development. Voyiadjis et al. [52] conducted nanoindentation tests on poly(ether-ether-ketone) (PEEK) to study its localized nanomechanical properties. Several other studies are also going on in this field, for example on avoiding the density variation caused by chain termination at the surface of semicrystalline polymers: chain termination and dangling of fibres at the end surfaces can reduce the density there by 17% of the average density [53]. Research done in this field is quite vast and multipurpose. Newer methods and techniques are yet to come for the study and characterization of plastics and polymers, which will certainly enrich science and technology and consequently human civilization.

Biopolymers, their characterization and sustainability

The journey of plastics and polymers started a long time ago, in the second half of the 19th century, when humans developed celluloid from cellulose fibre, a natural polymer extracted from wood or straw [54]. Since then, tremendous development has taken place in this field to satisfy the needs of our civilisation, resulting in a world full of plastics and polymers. Consequently, the dependence on artificial plastics has increased, which has been affecting our environment and ecosystem for a long time. Microplastics are among the most dangerous plastic derivatives, found mainly in soil and water. They are tiny plastic particles produced by the gradual decay of plastic from various sources, viz. tubes, tires, waste containers, textiles, packaging industries etc. [55]. Hence, the study, synthesis and characterization of alternative polymers, i.e. the biopolymers, has been the epicentre of research for the last few decades [56-58]. Biopolymers are those polymers that are produced from renewable and natural resources, while biodegradable polymers are polymers that dissolve or degrade easily in the environment, producing non-toxic compounds or gases [26,59]. Bioplastics/biopolymers may be biobased (from biomass), biodegradable, or both. So a fully biobased plastic may not be biodegradable, and a fully oil-based (fossil) plastic may be completely biodegradable. Degradation of these polymers is also highly dependent on the sink in which they finally end up, such as seawater, marine environments or soil. Haider et al. [56] gave a detailed review of the biodegradation characterization of polymers and pointed out the need for practical experimental studies apart from laboratory experiments, so that all the random factors in the environment can be taken into account. Several microorganisms like aerobes, anaerobes, photosynthetic bacteria, archaebacteria etc., which are mainly abundant in soil and compost, are responsible for the biodegradation of bioplastics [40]. Hydrolysis is one of the most important biodegradation processes, capable of decomposing polyesters, polyethers, cellulose derivatives, starch etc. [58]. As already discussed, the demand for biopolymers has been increasing gradually, and consequently existing biopolymers are also being modified by composite blending, fibre reinforcement, the addition of external functional groups to the existing hydrocarbon chain, etc.
These processes are seen to be effective in achieving properties superior to the existing ones. When polyvinyl alcohol (PVA) is blended with corn starch and lignocellulosic fibre, its thermal stability, water permeability and biodegradation rate improve, which is suitable for application-specific product design [60]. Research suggests that potato and yam can be good raw materials for starch-based polymer synthesis due to their high starch content and excellent biodegradability in soil. Besides, the resulting polymer shows moderate thermal stability, confirmed by thermogravimetric analysis (TGA), and a workable tensile strength (0.6-1.9 MPa) suitable for low-strength applications [61,62]. Composite blending is a widely accepted modern polymerisation process, and the blending of synthetic polymers with biopolymers is a part of that. Its objective is to determine the resulting characteristics of the composite polymer and whether it is more beneficial than the virgin ones or not. As an example, a mixture of low-density polyethylene (LDPE) with corn starch has been studied, revealing a reduced melt flow index (MFI) and an increased elastic modulus [63]. Among biopolymers, edible films are showing an increasing demand in the food processing industry, as they are digestible and pose no threat to the human body. An edible film is a thin layer of edible polymer that can be placed over or between food components, and it plays a vital role in food preservation and distribution [64,65]. There are many processes for the production of edible films, and research is continuing to find more efficient and cost-effective synthesis processes. Edible film produced from grass pea flour in the presence of the transglutaminase enzyme has good potential because of its desired properties. It has been found experimentally that the presence of microbial transglutaminase makes it mechanically resistant and gives it a better digestive property. Besides, a scanning electron microscopy (SEM) study confirms the homogeneous structure obtained by the enzyme treatment [66]. It is a matter of deep concern that the earth's mineral oil sources are limited and will be completely depleted in a few decades. So human civilisation cannot remain fully dependent on traditional synthetic polymers for long. Researchers are trying to find alternative sustainable resources for their replacement, and polyhydroxyalkanoates (PHAs) can be used quite satisfactorily for this purpose. They are fully biodegradable, immunologically inert, highly suitable for biomedical applications, and exhibit many desirable properties like mineral-oil-based polymers [67]. PHAs are a type of polyester produced by microbial fermentation. Because of the low production yield, the overall cost of PHAs is high, creating a big barrier to their large-scale use in the polymer industry [68]. Besides, proteins, lipids, fibres and polysaccharides are also widely available biopolymers obtained from plant and animal sources [69]. Efforts have been made to utilize these polymers in product synthesis through suitable modification and homogenization techniques. Among them, proteins have many available plant- and animal-based sources, like oilseeds, eggs, milk etc., and, most importantly, the wastes and surpluses from the food processing industry contribute the most. A detailed characterization of protein-based polymers reveals that the protein collected from these sources can be made usable by suitable mixing and other thermomechanical treatments.
Experiments show that the addition of plasticizer, the degree of blending and the moulding process significantly improve the Young's modulus, tensile strength, water uptake capacity and other properties of the polymer [70]. The diverse properties of polymers and their composites, along with the help of additive manufacturing, have brought phenomenal changes to the biomedical field. They are capable of providing the required strength, corrosion resistance and antimicrobial properties, which make them perfect raw materials for producing customized biodevices, scaffolds for tissue culture, artificial bone replacements, heart valves, drug carriers and so on [71-73]. The wide availability, easy processability and many other impressive properties of plant-based cellulose fibre also make it a good replacement for synthetic polymers. It possesses higher mechanical strength than protein-based polymers, and natural-fibre-reinforced composite polymers even have the potential to compete with existing metals and ceramics [74].

Conclusion

The modern civilisation we live in is indeed nothing but the result of incessant innovation and advancement in the field of science and technology. The prime objective of any scientific research is to understand the way the universe works and to utilize our available resources for the welfare of mankind in the most efficient manner. It has been discussed many times in this article that the role of plastics and polymers in our life is huge. From our body to every corner of the outer world, polymers exist where we often do not realize it. Due to their immense importance, research in this field is still going on on a large scale. In most cases, plastics prove to be more flexible, user-friendly and lightweight, with a higher strength-to-weight ratio, compared to metals. Not only that, plastic products are aesthetically and ergonomically far superior to traditional metal ones. The invention of newer synthesis processes has resulted in the creation of innovative polymers with excellent properties which never existed before. Due to increasing demand, there is tough competition among manufacturers, and day by day newer plastic and polymer products are being launched in the market with better performance and reliability. Characterization helps to determine the actual nature and characteristics of a polymer, which in fact determines the capability of that material to perform in a specific application. Detailed knowledge about a particular material also improves the design methodology and the safety of the designed product. However, the characterization and testing procedures of polymers are not standardized, unlike those of metals, because of their wide range of physical and chemical states. In the past few decades, due to the rising consciousness of the sustainable exploitation of natural resources, the emphasis on biopolymers has increased significantly. Undoubtedly, biopolymers are going to replace traditional mineral-oil-based polymers soon, and this is the only way to keep the planet habitable for our future generations. The advancement in the field of biopolymers achieved so far is promising, and the future scope of study in this field is widening day by day, as the whole world tries to move towards a greener and cleaner environment with minimal waste. The main advantages of biopolymers are that they are made from renewable resources and that most of them can be made fully biodegradable in a suitable medium (sink).
So, finally, it can be concluded that the characterisation of plastics and polymers is one of the prime requirements for product design, development and quality assurance. More innovations are yet to occur in this field, which will hopefully serve the needs of human civilisation in a better way.
2022-02-23T20:07:29.128Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "6f6c36ea41e4e16521e722ee1ce5c45a77893fea", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1225/1/012033", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6f6c36ea41e4e16521e722ee1ce5c45a77893fea", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
188070190
pes2o/s2orc
v3-fos-license
Investigating the unwinding behavior of technical yarns and development of a new sensor system for the braiding process A new cost-efficient sensor module for the detection of thread tension anomalies in braiding machines was developed. The sensor module is mainly attached to the body of the braiding machine and works by contactlessly detecting the positions of the levers of the yarn tensioning units of the bobbin carriers through magnets and stationary Hall effect sensors as the bobbins pass by. This way, time-discrete estimations of the tension of the moving braiding yarns can be calculated. The sensor module was validated by investigating the unwinding behavior of several kinds of technical yarns from bobbins on a stationary test stand which simulates the unwinding process during braiding. Flawless reference measurements revealed that the signals from the Hall probe are in good agreement with precise yarn tension measurements obtained simultaneously from a deflection roller based yarn tension measurement device. Further measurements with purposefully provoked unwinding-related irregularities showed that braiding defects are foreshadowed by prominent variations in yarn tension which the Hall effect sensors are able to detect. Finally, experiments with the sensors installed into a running braiding machine were conducted. In this near-production environment, the sensor module was capable of identifying irregularities soon enough before major braiding defects evolved. Introduction The quality of braided textiles from reinforcement fibers and the stability of the process are negatively affected by irregularities that occur during braiding. Since irregularities can lead to braiding defects which cause material waste and machine downtime, machine productivity is reduced and additional costs for error cause analysis and error correction arise. Previous investigations conducted by Ebel et al. [1] have shown that braiding defects are often induced by a small cause and evolve through various stages to a major failure event. The more advanced a defect is when it is detected, the higher the aforementioned additional error costs are. Thus, the development of an online monitoring system for the braiding process which is able to detect the evolution of braiding defects already in early stages is highly desirable. As typical braiding defects, Ebel et al. [1] mention a generally fuzzy braid due to fiber damage, loops in the fabricated preform due to a loss of yarn tension, gaps in the preform as well as yarn breakages. Whereas the fuzzy braid and loops in the preform can be seen as optical defects, gaps in the preform and yarn breakages have a significant impact on the mechanical performance of the finished part and on machine uptime. They elaborate that these gaps and yarn breakages are commonly caused by a specific effect named fibrous ring. This effect is described as a ring-shaped accumulation of carbon filaments which impedes the yarn from unwinding from the bobbin and consequently increases the yarn tension. It originates primarily from yarn predamage and partly from unsuitable rewinding parameters. Furthermore, they point out that especially the occurrence of yarn breakages as a result of the formation of fibrous rings significantly reduces the productivity of a braiding machine.
In their endurance braiding tests, which were performed on an axial braiding machine with 60 carriers, a horn gear diameter of 120 mm and a horn gear speed of 120 rpm, they manufactured triaxial hoses with glass fiber as braiding yarns (braiding angle: 45°) and carbon fiber as UD yarns. They observed a machine downtime of up to 26 % of the total production time due to the necessity of manually repairing or rethreading yarns which had broken due to fibrous rings. However, Ebel explains in his work [2] that in a production environment with trained workers, this portion of machine downtime may be lower compared to the investigated research environment. Mierzwa et al. [3] delineate the extent of the deterioration in mechanical properties due to local yarn gaps in braided preforms (braiding angle: 45°) fabricated from T700SC 24k 60E carbon fiber yarns from Toray Industries, Inc. In their test program, they purposefully introduced 4 mm wide gaps into the preforms and infiltrated them using the VAP (Vacuum Assisted Process) and the RTM (Resin Transfer Molding) process. In their subsequent coupon tests of the specimens manufactured by the VAP method, they observed a 36 % reduction in tensile and a 33 % reduction in compressive strength when the gap was oriented perpendicularly to the loading direction. However, they did not observe any significant reduction in strength of the specimens produced by the RTM process. They concluded that, in contrast to the rigid RTM tooling on both sides of the preform, the flexible vacuum bag promoted fiber undulations in the vicinity of the gap which led to the loss of strength. Existing systems for the detection of process irregularities during braiding In order to avoid the effects of the braiding defects described above, some sensor systems for the monitoring of the braiding process already exist. On the one hand, there are systems which make use of tactile sensors that are fixed to the body of the braiding machine (bobbin carrier independent systems). On the other hand, there are approaches which include the installation of sensors onto the bobbin carriers (bobbin carrier dependent systems). One of the commercially available bobbin carrier independent systems comprises rudimentary switches which jut into the tracks of the bobbin carriers. When a total loss of yarn tension arises, these switches are activated by extensions of the levers or sliders which are part of the yarn tensioning mechanism of the bobbin carriers (cf. Figure 1). Such a system may serve as a trigger to stop the braiding machine on occurrence of a yarn breakage to avert the production of a braid with loose or missing yarns. An advantage of such a system is its simplicity and cost-efficiency. However, this kind of system causes a considerable amount of machine downtime because it only responds when a yarn has already broken. Due to the above-mentioned necessity of manually repairing or rethreading yarns after a breakage has occurred, resolving a braiding process defect in this final stage is much more labor intensive than in earlier stages like gap formation due to a moderate increase in yarn tension. The latter defect can in most cases simply be resolved by removing a fibrous ring on the respective bobbin. Another bobbin carrier independent system, invented by Lenkeit [4], makes use of a force sensor with a skid attached to it.
The sensor and its skid are arranged between the plane spanned by the uppermost thread guiding elements of the bobbin carriers and the braiding point in a way that the yarns periodically touch and slide along the skid of the force sensor as the bobbins travel through the braiding machine along their closed tracks. In doing so, the skid deflects the yarns by a defined angle. Hence, the tension of each thread can be calculated from the force measured by the sensor at discrete time intervals. Such an arrangement can detect process irregularities that result in a variation in yarn tension before a thread has broken. Nonetheless, a major drawback is the yarn damage that may be caused when the yarns touch the skid at high speed. This point is particularly relevant when processing carbon fiber yarns, which consist of thin, brittle filaments. An example of a bobbin carrier dependent sensor system is provided by Braeuner [5]. He designed a whole new bobbin carrier with a slide that is displaced by the yarn tension against a resilient element. The yarn tension is determined by sensing the position of the slide along its track on the bobbin carrier. Furthermore, Braeuner's invention also comprises a communication module and an actively driven material buffer so that the yarn tension can be controlled wirelessly while the braiding machine is running. A similar bobbin carrier dependent sensor system was developed by von Reden [6]. He mounted a linear potentiometer as a force sensor, an actively driven bobbin, a control unit, a data transfer unit and an energy supply unit onto a specially constructed bobbin carrier (cf. Figure 2). Moreover, his bobbin carrier is characterized by a spindle and motor which move the yarn deflection element up and down parallel to the central axis of the bobbin. This way, the yarn is always unwound perpendicularly from the bobbin and a defined angle of attack of the yarn tension on the yarn deflection element is ensured. This is needed to obtain precise measurements of the yarn tension from the linear potentiometer. Both of these bobbin carrier dependent systems are able to measure the yarn tension during the braiding process very accurately and are therefore capable of detecting irregularities at short response times. Major drawbacks of these systems are, however, the comparatively high costs for energy supply units, sensor and communication hardware as well as the effort it takes to install the hardware on all bobbins (up to several hundred) of a braiding machine. Mindful of the existing sensor systems and their inherent strengths and weaknesses, we are currently working on an online monitoring system for the braiding process which is cost-efficient on the one hand while being sufficiently precise to determine unusual variations in yarn tension on the other hand. Furthermore, the system shall be modular so that it can be used in the production of a broad spectrum of products ranging from mass goods such as shoelaces (trimmed-down version of the monitoring system) to high performance carbon fiber reinforced composite parts (full version of the monitoring system). The aim is to be able to predictively stop the braiding machine before process irregularities lead to yarn breakages and consequently to labor intensive error correction. The first sensor module of the system in development, with its measurement principle and validation experiments, is presented in the paper at hand.
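To make the deflection-based measurement principle concrete: for an ideal, frictionless deflection element with wrap angle theta, the two yarn legs pull on the element with a resultant force R = 2 T sin(theta/2), so the tension follows as T = R / (2 sin(theta/2)). The following minimal Python sketch illustrates this relation; it is not code from the cited works, and the function name and example values are hypothetical.

import math

def yarn_tension_from_deflection(resultant_force_n: float, wrap_angle_deg: float) -> float:
    """Estimate yarn tension from the resultant force on a deflection element.

    For a frictionless deflection with wrap angle theta, the two yarn legs
    produce a resultant R = 2 * T * sin(theta / 2) on the element, so
    T = R / (2 * sin(theta / 2)).
    """
    theta = math.radians(wrap_angle_deg)
    return resultant_force_n / (2.0 * math.sin(theta / 2.0))

# Hypothetical example: a 90-degree deflection reading 4.2 N implies roughly
# 3.0 N of yarn tension.
print(f"{yarn_tension_from_deflection(4.2, 90.0):.2f} N")

Note that for a 180° wrap the formula reduces to R = 2T, consistent with the "two times the unwinding yarn tension" applied to the lever by the 180° deflection pulley described in the following section; in practice, friction between yarn and deflection element would add a capstan-type correction to this idealized formula.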
New concept for thread tension anomaly detection during braiding During operation of the braiding machine, the bobbin carriers are moved through the machine by rotating horn gears along closed, intersecting tracks. At the same time, yarn is tangentially unwound from the bobbins. If the thread tension drops below a desired value, a thread tensioning unit at each bobbin carrier prevents the bobbin from rotating around its central axis by means of a locking pin (cf. Figure 1) which engages with lateral notches in the bobbins. As more yarn is pulled by the braiding machine, the yarn tension increases. Two times the unwinding yarn tension plus a frictional component is applied to the lever of the yarn tensioning unit by a 180° deflection pulley (cf. Figure 1 and Figure 3). This way, the increasing yarn tension lifts the lever against a spring incorporated inside the bobbin carrier until the release force is reached. At this point, the lever retracts the locking pin. The given yarn tension then causes the bobbin to rotate around its central axis and yarn is unwound from the bobbin. This in turn leads to another drop in yarn tension, which causes the lever to move downwards and reengage the locking pin. During braiding, cycles of engaging and disengaging the locking pin according to the given yarn tension succeed each other. The newly developed sensor module for the detection of thread tension anomalies makes secondary use of the above-depicted and already existing yarn tensioning unit of each bobbin carrier as a kind of spring balance. Thereby, the number of required additional components is reduced. To measure the thread tension as the bobbins travel through the machine, the position of the lever of the yarn tensioning unit is detected by the new sensor module. For this, the module comprises magnets which are mounted onto the levers of the yarn tensioning units of the bobbin carriers, Hall effect sensors which are fixed to the body of the braiding machine and an Arduino microcontroller as a computing device. Additionally, LEDs were arranged near the braiding machine as a visual indicator to mark the position of an anomalous bobbin carrier for maintenance personnel. As magnets, permanent, cylindrical neodymium magnets with a diameter of 8 mm, a height of 3 mm, an energy density of approximately 342-366 kJ m⁻³ and a maximum service temperature of 80 °C (quality class N45) were used. In order to reduce their susceptibility to corrosion, the magnets were coated with an epoxy resin film. Firstly, the magnets need to be attached to the yarn tensioning units of the bobbin carriers. For this, a single magnet is pressed into a recess in a 3D printed housing (cf. Figure 3). The housing in turn features a slot so that it can tightly be pushed onto the extension of the lever of the thread tensioning unit of the bobbin carriers. For a more elaborate version than the prototype described herein, a lever of the thread tensioning mechanism with an integrated magnet is conceivable. Secondly, the Hall probe (an Iduino SE022 analog Hall sensor module) needs to be held in place by a 3D printed, height-adjustable fixture in such a way that it is able to detect the magnetic flux density of the field created by the magnet that is attached to the lever of the yarn tensioning unit. Because this lever is rotatably mounted on the bobbin carrier, the distance between the magnet and the Hall probe as well as the orientation of the magnet relative to the probe alter with varying yarn tension.
Hence, the analog signal from the Hall probe is a non-linear function of the yarn tension. The corresponding mapping function can be obtained from experiments; a minimal sketch of fitting such a mapping from calibration data follows below. Every time a bobbin carrier passes by the stationary sensor, the corresponding yarn tension can now be estimated contactlessly. Since the sensor is arranged next to the closed track along which the bobbin carriers are travelling during operation of the braiding machine, a single sensor may serve to determine thread tension anomalies of all bobbins on one track. Due to the fact that there are two of these closed tracks with opposite directions of bobbin movement in a rotational braiding machine, at least two Hall sensors are required to monitor all braiding yarns of a machine. However, the system can only measure the corresponding yarn tension of a bobbin at discrete time intervals. If multiple sensors are arranged along the track, "blind spots" can be reduced and along with that the overall response time of the system can be improved. Certainly, all sensor fixtures need to be adjusted to exactly the same height in this case to generate comparable sensor signals. To conclude this chapter, it should be noted that the measurement principle described above can be applied to radial as well as axial braiding machines as long as their bobbin carriers comprise a spring based thread tensioning unit. Other sensor types such as optical, acoustic or inductive sensors to determine the position of the lever of the thread tensioning unit were also considered. The given requirements regarding precision and especially costs were, however, best met by the chosen approach via magnets and Hall effect sensors. Validation experiments of the sensor concept To simulate and closely study the unwinding process during braiding, an unwinding test stand is available at the Chair of Carbon Composites at TU Munich. In order to validate the new sensor module, the stationary bobbin carrier of this unwinding test stand was equipped with the components of the sensor module mentioned above (housing for magnet, neodymium magnet, height-adjustable fixture and Hall probe). Furthermore, a self-constructed rotary position transducer was added to keep track of the length of the yarn that is being unwound as well as to precisely adjust the unwinding speed. The test stand works as follows: A NEMA-23 bipolar precision stepper motor with a 15:1 gear box from Phidgets Inc. winds the yarn onto a reel, thereby unwinding it from a bobbin which is located on a stationary bobbin carrier. The unwinding process is recorded by an SLR camera, the position of the lever of the thread tensioning unit is determined by the Hall sensor and the yarn tension is measured by a load cell mounted onto a 90° deflection roller (M1391 from Tensometric Messtechnik GmbH). The data is acquired using a USB-6009 data acquisition device from National Instruments and MATLAB R2015b. Three different yarn materials were investigated: a double-folded polyester (PES) monofilament yarn with a diameter of 0.25 mm, a double-folded PES multifilament yarn with a titer of 300 tex of the individual yarns and a carbon fiber yarn of the type Tenax®-E HTS40 F13 12K with a titer of 800 tex. All three yarns were tested at unwinding speeds of 40 mm s⁻¹ and 80 mm s⁻¹, respectively.
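As referenced above, the voltage-to-tension mapping is empirical. The following Python sketch fits a low-order polynomial to hypothetical calibration pairs of simultaneous load-cell tension and Hall voltage readings; the numbers are illustrative assumptions, not the authors' calibration data.

import numpy as np

# Hypothetical calibration pairs recorded simultaneously from the deflection
# roller load cell (tension, N) and the Hall probe (voltage, V).
hall_voltage_v = np.array([1.2, 1.8, 2.4, 2.9, 3.3, 3.7, 4.0, 4.2])
tension_n = np.array([0.5, 1.2, 2.1, 3.2, 4.6, 6.4, 8.8, 11.5])

# Fit a cubic polynomial as an empirical, non-linear mapping function.
voltage_to_tension = np.poly1d(np.polyfit(hall_voltage_v, tension_n, deg=3))

def estimate_tension(voltage: float) -> float:
    """Map a single Hall sensor reading to an estimated yarn tension in N."""
    return float(voltage_to_tension(voltage))

print(f"{estimate_tension(3.5):.2f} N")  # interpolated tension estimate

A spline or lookup-table interpolation would serve equally well; the main practical requirements are that the mapping is monotone over the working range and that it is re-calibrated whenever the sensor fixture height changes, since all fixtures must sit at the same height to yield comparable signals.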
To keep the duration of a single test at about 20 minutes, 50 m of yarn were wound onto the bobbins for the configurations with the lower speed and 100 m for the configurations with the higher speed. Moreover, all configurations of yarn material and unwinding speed were investigated with a compression spring of the yarn tensioning mechanism with a release force equivalent to a mass of 350 g. Additionally, the PES monofil was tested with a 130 g-spring, and the PES multifil as well as the carbon fiber yarn were analyzed with a 700 g-spring. Finally, three reference measurements of each configuration to study the behavior of the yarn materials when they are unwound flawlessly as well as five measurements with provoked irregularities were conducted. Flawless reference measurements on a stationary test stand The first question that needed to be answered by the validation experiments was whether the measurements from the cost-efficient sensor module described in the previous chapter were in reasonable agreement with the measurements obtained simultaneously from the deflection roller based yarn tension measurement device. Exemplary measurement results of yarn tension and voltage of the Hall sensor from a flawless test with the carbon fiber yarn are shown in Figure 4. The diagram in Figure 4a reveals that the periodic oscillations in yarn tension with the higher frequency, created by cycles of engaging and disengaging the locking pin and determined by the deflection roller based yarn tension measurement device, are well represented by the Hall sensor module. In the diagram in Figure 4b, a superimposed fluctuation in yarn tension with a lower frequency, which was coincidentally discovered, is noticeable. Since its frequency is equivalent to the rotation frequency of the bobbin, this superimposed fluctuation is presumably caused by the central axes of the bobbin and bobbin carrier not being perfectly aligned. Even these fine variations in yarn tension are captured by the Hall sensor. Measurements with other materials and unwinding parameters were in line with these findings. Apart from this slight superimposed variation in yarn tension, a second, stronger superimposed fluctuation in yarn tension was observed when conducting experiments with the 700 g-spring. To illustrate this, a comparison between measurements obtained from test runs with a 350 g-spring and a 700 g-spring is shown in Figure 5. Whereas the yarn tension remained in a corridor with constant lower and upper bounds of about 3 N and 4.5 N for the 350 g-spring (cf. Figure 5a), the upper bound of the yarn tension in particular varied between about 9 N and 16 N for the 700 g-spring (cf. Figure 5b). Matching the measurements with the videos obtained from the SLR camera revealed that the values of the upper bound of the yarn tension depend on the longitudinal position on the bobbin from which the yarn is being unwound. Unwinding the yarn from the lower (upper) part of the bobbin led to a lower (higher) upper bound of the oscillating yarn tension. This is due to the fact that the first yarn deflection element is, in contrast to the one displayed in Figure 2, fixed on the bobbin carrier that was used. Since there is always a bit of play of the bobbin on the bobbin carrier, the varying angle of attack of the yarn tension on the bobbin causes the bobbin to be lifted from (pushed down towards) the locking pin.
This means that the locking pin needs to be retracted less (more) to disengage from the lateral notches in the bobbin. Such behavior was particularly pronounced when testing the PES multifilament as well as the carbon fiber yarn with the 700 g-spring. When testing all of the materials with the 350 g-spring, this behavior was partly observable, and it was not observable when testing the PES monofilament with the 130 g-spring. Since the measurements obtained from the Hall effect sensor showed similar patterns, the authors concluded that this measurement concept was in principle suitable for detecting thread tension anomalies during braiding. Measurements with provoked irregularities on a stationary test stand The second question that had to be addressed was whether the sensor module, when integrated into a braiding machine, would be capable of detecting unwinding-related braiding process irregularities soon enough before defects reach their final stage (yarn breakage). In order to clarify this issue, flaws were purposefully introduced into the yarns. The carbon fiber yarn was manipulated by predamaging it during the rewinding step with sandpaper with a particle size of 800 mesh. This reinforces the tendency of the yarn to form fibrous rings during unwinding. The double-folded PES multifil was manipulated by rewinding it onto the bobbins with diverging yarn tensions of 3.8 N and 8.0 N (determined by a portable yarn tension measurement device of the type DTMX-500-U from Hans Schmidt & Co. GmbH). Shortly before the unwinding test, the yarn with the lower tension during rewinding was unwound one revolution from the bobbin while the other yarn remained unaffected, a flaw that can be introduced by maintenance personnel when replacing an empty bobbin in a machine, for example. The double-folded PES monofil was manipulated by rewinding it at diverging yarn tensions only, namely 4.6 N and 12.1 N. In a production environment, only slight yarn tension differences occur. However, with several kilometers of yarn being wound onto a bobbin, even small differences in yarn strain can accumulate to large differences in yarn length. The extreme manipulation procedures cause both of the PES yarns to reliably develop loops at the bobbin during unwinding at the test stand. Eventually, the loops become knotted and impede the unwinding process. A simple, hypothetical trigger criterion for the sensor module was then formulated: as soon as a Hall sensor detects a lever of the thread tensioning unit of a bobbin carrier in a running braiding machine which is in its uppermost position, the corresponding bobbin is considered to show a process irregularity. The idea behind this criterion is as follows: During braiding, the machine constantly pulls the yarn. The yarn tension ultimately reaches the release force of the spring of the bobbin carrier, the locking pin is retracted, the bobbin is then free to rotate and yarn can be unwound. Consequently, the yarn tension must drop, and the lever of the yarn tensioning unit also moves to a lower position. If an unwinding-related irregularity occurs, the yarn cannot be unwound properly from the bobbin, although the locking pin is retracted. Since the braiding process goes on, the machine keeps pulling the yarn. Therefore, the yarn tension increases even more, causing the lever of the yarn tensioning unit to move further upwards than the point of the release force.
This position of the lever beyond the point of the release force (lever deflection) occurs when an irregularity is present. In the diagram in Figure 6a, an exemplary measurement of an unwinding test conducted with a carbon fiber yarn with a provoked fibrous ring is shown. Prominent rises in yarn tension throughout the test run precede the final yarn breakage. The rises in yarn tension are accompanied by comparatively long lever deflections that are detected by the Hall effect sensor. The diagram in Figure 6b shows a representative measurement curve of an unwinding experiment with a manipulated PES multifilament. Rises in yarn tension preceding the final yarn breakage are also observable during this experiment. However, the lever deflections are shorter than the deflections during the experiments with the carbon fiber yarn. In the diagram in Figure 6c, an exemplary measurement of a PES monofilament is depicted. In general, with this kind of material only a few lever deflections were observable shortly before the yarn broke. Figure 6. Measurements of the yarn tension (blue) and the Hall voltage (red) during unwinding of a manipulated carbon fiber yarn (a), a manipulated PES multifilament yarn (b) and a manipulated PES monofilament (c) at an unwinding rate of 40 mm s⁻¹ and with a 350 g-spring. The unwound lengths of yarn from the bobbins when the lever was detected in its uppermost position were analyzed for all tests which showed unwinding irregularities. The condensed results of this analysis are depicted in Table 1. The threshold above which the lever was assumed to be in its uppermost position was set to 4.2 V for the data shown in the table. Different thresholds may lead to slightly different results; however, the general findings remain the same. For reasons of space, the detailed results concerning the influences of unwinding speeds and release forces of the compression springs are not discussed herein. The table reveals that there is a considerable number of lever deflections which foreshadow a yarn breakage or an overload of the stepper motor. Since the sensor module only acquires time-discrete estimations of the yarn tension, it is crucial to know how long, in terms of unwound yarn length, the lever deflections last. The table shows that there are in fact very short, and therefore almost undetectable, minimum lever deflections (1-2 mm) for all three yarn materials. However, the significantly higher mean values of the unwound yarn lengths during single lever deflections suggest that most of the deflections are detectable by the sensor system. This statement is underpinned by the mean cumulated length of unwound yarn, a measure of the likelihood that any lever deflection of a moving bobbin carrier is detected by a stationary sensor: on average, at least 1.1 m of yarn is unwound from a manipulated bobbin while the lever of the yarn tensioning mechanism is deflected upwards. Nevertheless, the overall minimum of the longest lever deflection per test indicates that the stationary sensors may not always predictively detect all major braiding defects. With this key figure being in the range of about 0.5 m for the carbon fiber yarn and considering typical circumferences and fiber angles of braided carbon composite parts, a single sensor per track is expected to work very well for the detection of fibrous rings.
This is because the length of the yarn unwound from a bobbin during a full circulation through the machine is in most cases less than the determined lead time. The lead time in terms of unwound yarn length is already shorter in the case of the PES multifil (237 mm). Due to the fact that typical products made from this material (e.g. shoelaces) also have significantly smaller circumferences, the detection probability of unwinding irregularities may still be characterized as sufficient. However, the detection of loops which entangle during unwinding of the double-folded PES monofil is not fully guaranteed, since the lead time was, in the worst-case measurement, only 47 mm. Test of the sensor module in an operating braiding machine On the stationary test stand, the measurement principle was shown to be capable, in principle, of detecting fibrous rings when processing the carbon fiber yarn. The sensor module was then tested in a running braiding machine to validate its functionality in a near-production environment. For this, 64 bobbin carriers of an RF 1/128-100 braiding machine from Herzog GmbH were equipped with bobbins onto which 100 m of the carbon yarn were wound. For the first test, no bobbin was manipulated. For the second test, one bobbin was manipulated in a way that it already featured a fibrous ring before braiding. Four bobbin carriers from the same track, including the one carrying the manipulated bobbin, were equipped with magnets. One stationary Hall effect sensor was installed and the height of its fixture adjusted in a way that the sensor was able to detect the positions of the levers of the yarn tensioning units of the bobbin carriers as they pass by. A cylindrical mandrel with a diameter of 65 mm and a length of approximately 2 m was then overbraided. The speed of horn gear rotation was set to 60 rpm and the haul-off speed of the mandrel was set to 12 mm s⁻¹. This resulted in a braiding angle of approximately 47°. The measurement curve from the first, flawless test is depicted in Figure 7a. At the beginning of the experiment, the braiding yarns are not yet fully tensioned. Due to the fact that there are 32 horn gears in the machine, it takes 16 s for a bobbin carrier to complete one circulation through the braiding machine at the adjusted speed. Since four bobbin carriers from the same track were equipped with magnets, there are peaks in the signal from the Hall effect sensor every 4 s. Also, every fourth peak is induced by the same bobbin carrier. The maximum values of the peaks are around 3 V. These values indicate that the yarn is unwound regularly from the bobbins. By contrast, in the diagram shown in Figure 7b, there are two prominent peaks in the Hall voltage with maximum values of more than 4 V at about 32 s and 48 s. These high peak values are caused by the increased yarn tension which is induced by the fibrous ring that was purposefully introduced before braiding. The yarn that is excessively tensioned by the fibrous ring eventually breaks; this is already observable at second 64 and clearly visible from the regular sequence of small peaks starting at second 80. This means that the sensor module was capable of detecting an unusual rise in yarn tension well before a yarn breakage occurred. Conclusion and outlook The development of a first sensor module as part of a larger, comprehensive sensor system for the braiding process was presented.
The sensor module is bobbin carrier independent and works by contactlessly detecting the position of the lever of the yarn tensioning mechanism of the bobbin carriers through magnets and Hall effect sensors as the bobbins pass by the sensors. This way, the module enables estimating the yarn tension of braiding yarns at discrete time intervals. Validation experiments on a stationary test stand with carbon fiber yarns, a double-folded PES multifil and a double-folded PES monofil were conducted. The experiments, with several spring release forces and unwinding speeds as variation parameters, proved that the detection of the position of the said lever by means of the sensor module provides a good estimation of the yarn tension. Furthermore, braiding irregularities were purposefully provoked by incorporating flaws into the bobbins which hamper the unwinding process. The results on the mean cumulated unwound length of the yarn during a lever deflection obtained from the experiments with provoked irregularities revealed that the sensor module is in general very well capable of detecting process irregularities before they have led to major braiding defects like yarn breakage. However, if the worst-case test runs are taken as the assessment basis, it becomes apparent that the lead time, in terms of braidable yarn length until final yarn failure, is significantly lower for the PES monofil (47 mm) than for the PES multifil and the carbon fiber yarn (237 mm and 510 mm, respectively). If a high error detection reliability is required, the number of stationary Hall sensors along the tracks of the bobbin carriers, which acquire the data in a time-discrete manner, has to be increased according to the circumference of the braid and the braiding angle. The sensor system was also tested during operation of an RF 1/128-100 braiding machine from Herzog GmbH. In this near-production environment, the sensor module was able to detect an anomalous rise in yarn tension caused by a purposefully introduced fibrous ring soon enough before the yarn broke. Future work will involve the development of additional sensor modules which will be part of the envisaged, integrated sensor system. Ideas for these modules include the measurement of reaction forces at the braiding ring, the optical observation of the braid formation zone as well as the development of a tension control unit for the rewinding step. Subsequently, real-time capable algorithms which process the data gathered by all sensor modules need to be implemented. Finally, the cost-effectiveness and the economic benefit of the whole system have to be evaluated under near-production conditions.
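As a closing illustration of the hypothetical 4.2 V trigger criterion and the deflection-length analysis summarized in Table 1, the following Python sketch detects contiguous lever deflections in a sampled Hall voltage trace and converts their durations into unwound yarn lengths. The trace, sampling rate and unwinding speed below are made-up example values, not the authors' recorded data.

import numpy as np

def deflection_lengths_mm(voltage: np.ndarray, fs_hz: float,
                          speed_mm_s: float, threshold_v: float = 4.2) -> np.ndarray:
    """Return the unwound yarn length (mm) of each contiguous lever deflection.

    A deflection is a run of samples in which the Hall voltage exceeds the
    threshold, i.e. the lever sits beyond the release-force position.
    """
    above = voltage > threshold_v
    edges = np.diff(above.astype(int))        # +1 at run starts, -1 at run ends
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:                              # run already active at trace start
        starts = np.r_[0, starts]
    if above[-1]:                             # run still active at trace end
        ends = np.r_[ends, above.size]
    return (ends - starts) / fs_hz * speed_mm_s

# Hypothetical 1 kHz recording at 40 mm/s unwinding speed.
fs, speed = 1000.0, 40.0
t = np.arange(0.0, 10.0, 1.0 / fs)
signal = 3.0 + 0.5 * np.sin(2.0 * np.pi * 2.0 * t)   # regular locking-pin cycles
signal[4000:4150] = 4.5                              # injected anomaly (~150 ms)
lengths = deflection_lengths_mm(signal, fs, speed)
print(lengths, lengths.sum())  # per-deflection and cumulated unwound lengths

From such per-deflection lengths, minimum, mean and cumulated values of the kind reported in Table 1 follow directly, and the cumulated value can be compared against the yarn length consumed per carrier circulation to size the number of sensors per track.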
2019-06-13T13:18:52.025Z
2018-09-21T00:00:00.000
{ "year": 2018, "sha1": "e46179b821edf2f21e0dc404901fae0517bcfa36", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/406/1/012065", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c49abbbd753231d6ca0606638680b82b6f1a074d", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
209489226
pes2o/s2orc
v3-fos-license
Evaluation of alkenones, a renewably sourced, plant-derived wax as a structuring agent for lipsticks Abstract OBJECTIVE Waxes are used as structuring agents in lipsticks. A variety of waxes are combined in a single lipstick to provide good stability, pleasant texture and good pay-off. Due to significant growth in demand for natural, green and sustainable products, there is a constant search for alternatives to animal-derived and petroleum-derived ingredients. In this study, a green, non-animal-derived wax, namely long-chain ketones (referred to as alkenones), sourced from marine microalgae, was formulated into lipsticks and evaluated as a structuring agent. METHODS Alkenones were used as a substitute for microcrystalline wax, ozokerite and candelilla wax, typical structuring agents. In total, 384 lipsticks were formulated across four formulations: L1 (control, no alkenones), L2 (alkenones as a substitute for ozokerite), L3 (alkenones as a substitute for microcrystalline wax) and L4 (alkenones as a substitute for candelilla wax). Products were tested for hardness (bending force), stiffness, firmness (needle penetration), pay-off (using a texture analyser and a consumer panel), friction, melting point and stability for 12 weeks at 25 and 45°C. RESULTS Alkenones influenced each characteristic evaluated. In general, lipsticks with alkenones (L2-L4) became softer and easier to bend compared to the control (L1). In terms of firmness, lipsticks were similar to the control, except for L4, which was significantly (P < 0.05) firmer. The effect on pay-off was not consistent. L2 and L3 had higher pay-off to skin and fabric than L1. In addition, L4 had the lowest amount transferred, but it still had the highest colour intensity on skin. Alkenones influenced friction (glide) positively; the average friction decreased for L2-L4. The lowest friction (i.e. best glide) was shown by L4. Melting point of the lipsticks was lower when alkenones were present. Overall, L4, containing 7% alkenones in combination with microcrystalline wax, ozokerite and carnauba wax, was found to have the most desirable attributes, including ease of bending, high level of firmness, low pay-off in terms of amount, high colour intensity on skin and low friction (i.e. better glide). Consumers preferred L4 the most overall. CONCLUSION Results of this study indicate that alkenones offer a sustainable, non-animal and non-petroleum-derived choice as a structuring agent for lipsticks. Introduction Lipsticks have been among the most popular colour cosmetic products worldwide for decades, across generations [1][2][3][4]. They serve the purpose of enhancing the appearance of the lips by adding colour and/or shine. Waxes are key components in lipsticks because they provide rigidity and appropriate hardness to the stick. Typically, multiple waxes are combined in a single lipstick to achieve the desired hardness, stability, texture and pay-off [5,6]. A variety of waxes are available on the market today for lipsticks; they differ mainly in hardness, melting point and source. Commonly used waxes include microcrystalline wax (a synthetic ingredient derived from petroleum), ozokerite (a mineral wax derived from shale), beeswax (animal-derived) and candelilla wax and carnauba wax (plant-derived ingredients). Candelilla wax and carnauba wax are only available in certain parts of the world. Candelilla wax is obtained from the leaves of a small shrub native to northern Mexico and the southwestern United States, Euphorbia cerifera and Euphorbia antisyphilitica [7].
Carnauba wax is obtained from the leaves of the Brazilian carnauba palm tree (Copernicia cerifera) that grows along riverbanks, valleys and lagoons in the northern and northeastern parts of Brazil [8]. The availability of these waxes can be affected by environmental factors, as has been seen in the case of candelilla wax. In the last two decades, there has been a shortage of candelilla wax on the market due to climatic factors. According to forecasts [9], candelilla wax supply and price will continue to fluctuate. As a result, there is a high demand for suitable structuring agent alternatives in lipsticks and lip balms [10,11]. Cosmetic consumers are becoming more conscious and increasingly concerned with the ingredients used in their products [12]. As trends such as green, sustainable, natural and vegan are gaining popularity [13], synthetic and animal-derived ingredients are less favoured among certain consumer groups. Therefore, companies are continuously searching for new natural and sustainable ingredients they could use in their products. Alkenones are an off-white waxy solid at room temperature (Figure 1) with a melting point range of 71.1-77.4°C [14]. Alkenones are a family of unique lipids biosynthesized by certain haptophyte microalgae [15], including the industrially grown Isochrysis sp. (Chromista, Haptophyta) [16]. Hence, they are a marine-based vegan ingredient. They are renewable, and therefore green, which fits well into the popular natural/sustainable and vegan trend. The microalgae that produce alkenones can be grown in many locations; therefore, alkenone availability is not as limited as that of some other waxes. Marine-based ingredients, including microalgae-sourced ingredients, are becoming popular in cosmetics [17] due to their natural origin and richness in vitamins [18], minerals [19], proteins and essential fatty acids [20]. Given their waxy nature and reasonably high melting point, we argue that alkenones represent an attractive and as-yet-undeveloped class of natural ingredients that may find useful applications in a variety of cosmetic and personal care applications. The alkenones used in this study have been previously described and characterized extensively [21][22][23], and their potential application in personal care products, specifically in sunscreens, has been studied by our research group [14,24]. In a previous study, we found alkenones to be a viable alternative to microcrystalline wax and ozokerite in lipsticks and lip balms [14] based on visual observations during a 10-week period at two different temperatures. The goal of this study was to compare alkenones to waxes commonly used in lipsticks and instrumentally evaluate the effect of alkenones on the hardness, stiffness, firmness and pay-off of lipsticks. Additionally, we evaluated consumers' preference for the lipsticks made in this study. Materials The marine microalgae Isochrysis was purchased from Necton S.A. (Olhão, Portugal). Alkenones were isolated and purified from the Isochrysis biomass as previously described [25]. The remaining ingredients, including a preservative blend (propylene glycol (and) diazolidinyl urea (and) methylparaben (and) propylparaben), were purchased from Making Cosmetics (Snoqualmie, WA, USA). All ingredients were cosmetic grade.
Two commercial lipsticks were tested in this study as controls, including Wet and Wild Silk Finish Lipstick in Dark Wine colour (hereinafter referred to as 'C1'), purchased at a local store (Dollar General, Toledo, OH), and L'Oréal Colour Riche Comfortable Creamy Matte Lipstick in Matte-ly in Love colour (hereinafter referred to as 'C2'), purchased online (Amazon.com). Formulation of lipsticks Phase A was heated to 80°C to melt the waxes. Phase B components were added to the melted phase A, and then the mixture was removed from heat. Phase C was added to phase A/B, and the mixture was poured into a metal lipstick mould while still hot. When settled, the sticks were topped off and the mould was placed in the freezer (-18°C) for 15 min. The lipsticks were then removed from the mould and inserted into plastic lipstick cases. Bending force and stiffness Samples were kept in stability cabinets (at 25 and 45°C) for 24 h before testing. Bending force of each lipstick was determined using a TA.XTPlus texture analyser (Texture Technologies Corp., Hamilton, MA) with a TTC bending force fixture (Figure 2a). To determine the bending force and stiffness, each lipstick was raised to maximum length and clamped horizontally. The upper bending fixture contacted the sample approximately 3 mm away from the tip. Test mode was set to 'measure compression', and target mode was set to 'distance'. Trigger force was 20.0 g. The pre-test, test and post-test speeds were set to 1.0, 1.0 and 10.0 mm s⁻¹, respectively. Five lipsticks of each type were tested. Exponent Stable Micro Systems software (version 6.1.10.0) was used to generate hardness curves. Once the trigger force was attained, the probe moved down 7 mm and the sample was bent until it broke away from the main body. This was shown as the maximum force value, which indicated the hardness of the sample. The gradient of the curve during the bending action represents the 'stiffness' (or resilience) of the sample [29]. Needle penetration test Samples were kept in stability cabinets for 24 h before testing. Firmness of each lipstick was determined using a TA.XTPlus texture analyser (Texture Technologies Corp., Hamilton, MA) with a TTC needle penetration fixture (Figure 2b). Firmness was tested at room temperature (25°C) and elevated temperature (45°C). To determine firmness, each lipstick was placed on its side on a plate positioned centrally under the needle probe. Test mode was set to 'measure compression', and target mode was set to 'distance'. The needle penetration distance was 7 mm, and trigger force was 5.0 g. The pre-test, test and post-test speeds were set to 1.0, 2.0 and 10.0 mm s⁻¹, respectively. Five lipsticks of each type were tested. Exponent Stable Micro Systems software (version 6.1.10.0) was used to generate hardness curves. During the test, the needle moved down to penetrate the sample. The maximum force value was measured as the force required to penetrate the sample to the specified distance (i.e. 7 mm). The needle penetration test can also indicate the presence of unwanted air bubbles or grainy texture, which would be seen as fluctuations in the force values during penetration, as a result of either incomplete colorant dispersion or the working and cooling processes during manufacture. This 'grainy' texture would be perceived by the consumer as undesirable.
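The hardness and stiffness values described above are read off the recorded force-distance curves. The following Python sketch is a minimal illustration of that post-processing (peak force as hardness, slope of the initial rise as stiffness); the synthetic curve and the 50%-of-peak cut-off are demonstration assumptions, not the Exponent software's algorithm.

import numpy as np

def hardness_and_stiffness(distance_mm: np.ndarray, force_g: np.ndarray):
    """Derive hardness and stiffness from a bending force-distance curve.

    Hardness is taken as the maximum force before the stick breaks; stiffness
    is the gradient (g/mm) of the initial, approximately linear bending region.
    """
    peak = int(np.argmax(force_g))
    hardness = float(force_g[peak])
    # Fit the rise up to 50 % of peak force as the linear bending region.
    rise_d, rise_f = distance_mm[:peak + 1], force_g[:peak + 1]
    mask = rise_f <= 0.5 * hardness
    slope, _ = np.polyfit(rise_d[mask], rise_f[mask], 1)
    return hardness, float(slope)

# Synthetic curve: force rises while bending, then the stick snaps near 5 mm.
d = np.linspace(0.0, 7.0, 200)
f = np.where(d < 5.0, 90.0 * d, 450.0 * np.exp(-3.0 * (d - 5.0)))
print(hardness_and_stiffness(d, f))  # approx. (450 g, 90 g/mm)

Restricting the fit to the rising part of the curve keeps the post-break tail from biasing the stiffness estimate.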
Pay-off Pay-off to fabric of each lipstick was determined using a TA.XTPlus texture analyser (Texture Technologies Corp., Hamilton, MA) with a TTC vertical friction rig and a stationary vertical plate (Figure 2c). The vertical friction rig was secured onto the machine arm and placed vertically, parallel to the stationary vertical plate, with a 5 mm distance between the rig and plate. The fabric was cut into 15 × 13 cm pieces (Mainstays 200 thread count fitted sheet, Walmart, Bentonville, AR, USA) and ironed. To determine pay-off, each lipstick was prepared by cutting a clean flat surface with a sharp blade and attached to the vertical friction rig. Each piece of fabric was weighed before the study, then mounted on the stationary vertical plate using four binder clips to create a flat, smooth surface. The test setting was 'Cycle Until Count in Tension' for three cycles over a distance of 55 mm at a speed of 5 mm s⁻¹. Each piece of fabric was weighed after the test, and the weight difference was calculated. Kinetic friction, that is, the resistance to maintaining movement at a specific constant speed, was also measured using the same setup. Two lipsticks from each batch were used for the pay-off and friction test, and each lipstick was tested three times. Pay-off to skin (i.e. forearm) was also tested using volunteers. Each lipstick was applied to the forearm in three layers (moving the lipstick up-down-up) onto a 5-cm long area of the inner forearm. Each lipstick was weighed on a balance before and after the test in order to calculate the amount of lipstick transferred to skin. At the end of the pay-off test, a picture was taken of each volunteer's forearm and colour intensity was evaluated visually. Consumer study Fourteen consumers, aged between 18 and 29 years, were recruited for the panel. Consumers of both genders and any ethnicity were invited to participate. A main inclusion criterion was that participants must have had prior experience with lipstick. The majority of participants were female (93%). Ethnicities included Caucasian/White/European (68%), Asian/Pacific Islander (18%), African American/African/Black/Caribbean (7%) and other (7%). Panellists were asked not to wear any lipstick or lip balm prior to arriving at the consumer test. They were instructed to first look at the lipsticks in the lipstick case, then apply each lipstick one by one to the lips, applying as much lipstick as they typically would in real-world conditions and using the same number of strokes for each lipstick. In addition, panellists were asked to complete a paper-based survey before, during and after applying the lipsticks. Testing was administered in a research laboratory under artificial daylight-type illumination at room temperature (between 22°C and 25°C). The consumer panel test was performed by ACT Solutions Corp (Newark, DE). The survey included four different sensory methods, that is, check-all-that-apply (CATA) questions, preference, ranking and rating tests (Figure 3). The in vitro dermal irritation potential of alkenones was evaluated previously by Consumer Product Testing Company, Inc. (Fairfield, NJ). Results indicated that alkenones are safe for topical application and not expected to cause irritation in vivo [30]. Differential scanning analysis Melting point was determined by differential scanning calorimetry (DSC) using a Mettler MT aluminium crucible.
DSC was performed at a 10°C min⁻¹ ramp from 0 to 200°C using a DSC Q20 (TA Instruments, New Castle, DE) attached to an F25-ME refrigerated/heating circulator (Julabo, Allentown, PA). Nitrogen gas was purged at a rate of 50 mL min⁻¹. TA Universal analysis software was used to obtain the scans. Stability Stability of the lipsticks was monitored at two temperatures, room temperature (25°C) and an elevated temperature (45°C), in stability cabinets for 12 weeks. Samples were checked visually and tested instrumentally at day 1 and weeks 4, 8 and 12 in their final containers. Data analysis Differences in lipstick hardness, stiffness, firmness and pay-off were evaluated using one-way ANOVA followed by Tukey's multiple comparison test using SPSS Statistics 21 software (IBM, Armonk, NY). A P value less than 0.05 was taken as the minimal degree of statistical significance. Hardness and stiffness Lipstick hardness is an essential characteristic; lipsticks must not bend, crumble, crack or break during application. Hardness depends primarily on the type and amount of waxes in the formulation, as well as the oil:wax ratio. In our samples, the oil:wax ratio was constant; only a single wax was different in each lipstick. Hardness (force needed to bend) of L1 and L4 did not change significantly over the 12 weeks at 25°C; however, L2 and L3 became softer over time at room temperature (P < 0.05) (Figure 4a). The difference in hardness between the control (L1) and L4 at 25°C was notable throughout the testing period; L4 was statistically significantly softer and easier to bend (P < 0.05). Two commercial lipsticks (C1 and C2) were tested as well at both temperatures as controls to understand how our lipsticks compared to commercially available lipsticks. Testing of C1 and C2 was only performed at a single timepoint. Hardness of C1 and C2 at 25°C fluctuated greatly; C1 was similar to L4, and C2 was similar to L1-L3. Hardness of all of our lipsticks was considered acceptable since the values were similar to those of the commercial products. At 45°C, hardness of all lipsticks was lower, as expected. Higher temperature softens waxes; therefore, the lipsticks became softer. Similar to 25°C, lipsticks at 45°C had similar hardness throughout the testing period. An interesting observation was that the hardness of L4 at 45°C started high compared to L1 and then significantly decreased over the 12 weeks (P < 0.05). No other change was observed. L2 was the softest at 45°C. It can be summarized that alkenones made the lipsticks softer at 25°C; this difference was not as noticeable at 45°C. Stiffness refers to how easily the lipstick can be bent. A stiff sample lacks flexibility and is hard to bend. At 25°C, the stiffness of L4 was the lowest of L1-L4 throughout the testing period, and it was also the closest to the commercial lipsticks (Figure 4b). The values for L1-L3 were higher, meaning they were less flexible. At 45°C, L4 was the least flexible on day 1, but over the weeks, all lipsticks containing alkenones became less stiff than the control. The results indicate that alkenones made the lipsticks more flexible, especially in a higher concentration when used as an alternative to candelilla wax. Needle penetration test Firmness is a characteristic that needs to be balanced with softness. If a lipstick is too firm, it is difficult to apply and may feel waxy and dragging, especially at lower temperatures.
If a lipstick is too soft, it may have an undesirably high pay-off, leading to a sticky sensation and faster product usage [31]. Firmness of the lipsticks remained consistent throughout the testing period at 25°C. L4 was the firmest compared to all other lipsticks (P < 0.05) (Figure 5). C1 and C2 were the least firm, suggesting that they were softer than our lipsticks. At 45°C, all lipsticks became softer and easier to penetrate, as expected, similar to bending. At elevated temperature, the firmness of most lipsticks remained relatively consistent, except for L4, which became significantly softer over the testing period (P < 0.05). Alkenones did not affect firmness markedly in a lower concentration; however, they increased firmness significantly in a higher concentration. Table II shows pay-off to fabric and skin (average ± SD, in mg); Figure 6 shows (a) the lipsticks formulated and tested in this study, L1 to L4 from left to right, and (b) a visual representation of pay-off to skin after three strokes. Pay-off Pay-off is often used as an indirect measure of how well a product works from a consumer's perspective. When choosing lipsticks, consumers usually swatch samples to see the colour and test pay-off. Too high a pay-off can lead to product build-up on the lips and a waxy sensation. Too low a pay-off demands that consumers apply the product in multiple layers to achieve the desired colour and coverage. Friction is the resisting force that arises when one surface slides over another [29]. A lipstick that generates a lot of friction feels 'sticky' and loses some of the glide and easy application that consumers desire. As for pay-off to fabric, C2, L1 and L4 were lower pay-off lipsticks, whereas C1, L2 and L3 had higher amounts transferred (P < 0.05) (Table II). In the case of pay-off to skin, C1, C2 and L4 had a lower pay-off, and L1-L3 had higher values. Of our lipsticks, L2 and L3 transferred the highest amounts, whereas L4 transferred the lowest amount to both fabric and skin. This difference is probably related to the interaction of the waxes and oils in the lipsticks [32]. Interestingly, whereas L4 transferred the lowest amount of product to both fabric and skin, it still had the highest colour intensity on the skin after the same number of strokes (determined visually, Figure 6b). These results imply that L4 would be the most desired because it had a low amount of product transferred but still achieved the most intense colour on skin, which can translate into slower product consumption. The kinetic friction force decreased over the cycles either dramatically (≥30%) or slightly (<30%) (Figure 7). The highest average friction was observed for L1 and L2. The decrease in friction was very high for L2 (32% change between cycles 1 and 3), whereas it was negligible for L4. There was not a clear target for the extent of decrease based on the commercial products; the decrease for C1 was 32%, whereas for C2 it was only 6%. Pay-off most likely affected the friction during application. Lipsticks with higher pay-off values showed a higher change in friction. L4 had the lowest pay-off to both fabric and skin, and it had the lowest overall change in friction as well. The minimal change in friction implies that L4 glided easily from the beginning, whereas our other lipsticks dragged more at first and then became smoother to apply. Consumer study In the container, consumers found all lipsticks, except for L3, evenly coloured (Figure 6a).
L1, L2 and L4 were scored as matte, whereas L3 was glossy/shiny in the container according to the participants. When applied to the skin, all lipsticks were described as glossy/shiny and evenly coloured. In the preference test, L4 was selected as the most preferred lipstick by 71% of the participants, whereas L2 and L3 were in second place. No consumer indicated they would purchase/use L1. In the ranking test, consumers were asked to rank samples based on pay-off to skin. The overall ranking of the lipsticks was the following: L4 > L2 > L3 > L1. In the rating test, most consumers (64%) liked L4 the most, followed by L3, then L1 and L2. The commercial products were not included in the consumer study because we did not know the pigment content of those lipsticks. Colour intensity of the commercial lipsticks was higher; they probably contained more pigments than our lipsticks. Since colour is determined by the amount of pigments used in a lipstick, including them in the consumer study would not have been a fair comparison. Overall, consumer study results indicated that participants felt L4 had the best coverage and colour intensity (was the most pigmented) on the skin and was soft, smooth and the easiest to apply. These results correlate well with our instrumental pay-off and friction (Figure 7) and colour intensity results (Figure 6b). Visually, we also found L4 to have the most pigmented colour on skin from the same number of strokes. It should be noted that all of our lipsticks contained the same pigment solid content; therefore, any difference in colour on the skin can be attributed to the composition. DSC The melting point of the lipsticks is displayed in Table III. The DSC thermogram of each lipstick exhibited a melting range with one or more characteristic endothermic melting peaks within the range. The melting point of our control (L1) was higher and similar to C1 and C2, whereas L2-L4 had lower melting points. Although alkenones have a very similar melting point range to the comparator waxes, lipsticks with alkenones had lower melting points overall. A lower melting point may affect the stability of lipsticks, as those with lower melting points may also soften at a lower temperature, leading to a product that is more sensitive to accidental exposure to higher temperatures (e.g. leaving a lipstick in a hot car). Stability study All lipsticks remained stable at 25°C and 45°C, except for L2, which started sweating at week 8 at 45°C. Although differences were seen in the melting point of lipsticks with and without alkenones (lipsticks with alkenones had a lower melting point), no significant negative effect of alkenones was observed on the stability of the lipsticks. Conclusions In this study, alkenones were used as an alternative to three commonly used waxes in lipsticks, and the effect of alkenones on the quality and performance of lipsticks was evaluated. Alkenones influenced each characteristic evaluated. In general, lipsticks with alkenones (L2-L4) became softer and easier to bend than the control (L1). L2-L4 were more flexible than the control. In terms of firmness (measured via the needle penetration test), lipsticks were similar to the control, except for L4, which was significantly firmer. The effect on pay-off was not consistent; no trends were observed. L2 and L3 had higher pay-off to skin and fabric than the control. L4 had the lowest amount transferred, but it still had the highest colour intensity on skin.
Alkenones influenced friction (glide) positively: the average friction decreased for L2-L4 compared to L1. The lowest friction (i.e., best glide) was shown by L4. The melting point of the lipsticks was lower when alkenones were present in the mixture. Overall, L4, the lipstick containing the highest amount of alkenones in combination with microcrystalline wax, ozokerite and carnauba wax, was found to have the most desirable attributes, including ease of bending, a high level of firmness, low pay-off in terms of amount, high colour intensity on skin and lower friction (i.e., better glide). The consumer study revealed that consumers liked L4 the most overall.
Bone marrow origin of Ia-positive cells in the medulla of the rat thymus

Irradiated rats were reconstituted with bone marrow from F1 hybrids. Ia antigen of donor-bone marrow origin was detected by an immunoperoxidase technique on cryostat sections and found predominantly in the medulla of rat thymus 2 wk after reconstitution. These Ia-bearing cells increased in number with time after reconstitution, but the Ia on the cortical epithelial cells remained of host origin. The nature of the bone marrow-derived cells and their implication for major histocompatibility complex restriction are discussed.

The thymus is of key importance in the development of the T cell repertoire, including the acquisition of tolerance to self antigens and the ability to recognize foreign antigens in association with their own major histocompatibility complex (MHC) antigens (1-3). The Ia antigens of the MHC have been characterized from several species, and are present on B cells, dendritic cells, and a subpopulation of macrophages in rats (4)(5)(6)(7). In mouse and human thymus, Ia antigen has been localized on the elongated epithelial cells in the cortex with more confluent staining of the medulla (8)(9)(10)(11). This paper describes a similar localization for Ia antigen in rat thymus. Radiation chimeras were used to show that Ia-bearing cells in the medulla were bone marrow-derived, but the majority of Ia-bearing cells in the cortex remained of host origin. This finding may help to explain conflicting results obtained on the effect of the recipient's thymus phenotype on MHC restriction in radiation chimeras (2,3,12).

Materials and Methods

Rats. Two inbred strains, PVG.RT1c and PVG.RT1u, and their F1 hybrids were from the specific-pathogen-free unit of the Sir William Dunn School of Pathology, University of Oxford, Oxford, England (13). Chimeric rats were prepared by irradiating male PVG.RT1c rats [...].

Localization Procedure. Lymphoid organs were removed at various times after reconstitution and 5-μm cryostat sections cut. The staining procedure was as described previously (7), except that the peroxidase-labeled antibody was used at 40 μg protein/ml to ensure saturation and maximum sensitivity.

[...] rat strains tested. Both monoclonals cross-react with Ia antigens from mouse strains and map to the I-A region (4). Fig. 1 (A and B) shows the strong labeling obtained in the thymic cortex and medulla of a PVG.RT1c rat stained with MRC OX 4. In the cortex, the staining has a lattice-like pattern, suggesting that it is present on the thymic epithelium, as has been described in the human and the mouse (8)(9)(10)(11). In the medulla the staining was strong and more confluent, but could be associated with large irregular cells in some areas (Fig. 1 A and B). Areas containing unlabeled lymphocytes were also visible. The specificity of the method and the failure of MRC OX 3 to recognize Ia antigen in PVG.RT1c tissue is illustrated in Fig. 1C.

Localization of Ia Antigens in the Thymus of Irradiated Rats Reconstituted with F1 Bone Marrow. Chimeric rats were prepared by irradiation of PVG.RT1c (MRC OX 3 negative) rats and reconstitution with bone marrow from (PVG.RT1u × PVG.RT1c) F1 hybrids (MRC OX 3 positive). Localization of Ia antigen by MRC OX 3 in the thymus 2 wk after reconstitution (Fig. 2A) showed antigen of donor-bone marrow origin mainly on cells in the medulla, with a few positive cells scattered in the cortex.
When both donor-derived and host Ia antigens were demonstrated using MRC OX 4 (Fig. 2B), staining was widespread throughout the cortex and the medulla. Thus the majority of the Ia antigen in the thymic cortex of the chimera remained of recipient type. After 2 wk, there was considerable variation within the thymus; some parts resembled normal thymus, whereas other parts that showed virtually no restoration with lymphocytes gave heavy confluent staining with MRC OX 3 (Fig. 2). Thymuses examined at 4, 8, and 12 wk after reconstitution had increasingly normal morphology as the organ became repopulated with lymphoid cells. However, cells with Ia of donor origin were abundant and remained confined mainly to the medulla (Fig. 3). These stained cells were large with irregular outlines (Fig. 4A), and some of the staining may be a result of internal antigen. In all the chimeras examined (up to 12 wk), there was no staining of the epithelial network in the cortex with MRC OX 3 (Fig. 4B), although this stained well with MRC OX 4. MRC OX 3 gives weaker staining of Ia-positive cells in PVG.RT1u rats than MRC OX 4 (4), but it is still clearly detectable in the F1 hybrid (Fig. 4C).

Discussion

In the thymuses of normal and chimeric rats, Ia antigen was present on two cell types of different origin. In the cortex, Ia antigen was distributed in a lattice-like pattern on epithelial cells (Figs. 1-3) and remained of host origin in radiation chimeras. This suggests that it is produced by these cells, and not acquired from bone marrow-derived cells. The medulla contained large, irregular cells that were strongly stained for Ia antigen that was of donor bone marrow origin (Figs. 2 and 3). Although Ia antigen has been detected on ~20% of rat thymocytes by analysis on the fluorescence-activated cell sorter (4, 13), the labeling was weak and would not account for the labeling observed here. The staining of Ia-positive cells in the medulla of the thymus resembled that obtained in the T-dependent areas of spleen and lymph node ([8]; and A. N. Barclay and G. Mayrhofer, unpublished observations). This, and the fact that they are bone marrow derived, suggests that these cells may be analogous to dendritic or interdigitating cells. Cells in this category, which includes Langerhans cells, bear Ia (5,6), and at least some can present antigen (5,6,12). Cells with a similar ultrastructure have been observed in the thymus (5). If the medullary Ia-bearing cells can present antigen, their function may be to produce tolerance to self antigens. The medulla is a likely place for such a process, as it is more accessible to both circulating cells (15) and macromolecules (16) than the cortex. The presence of increasing amounts of donor-bone marrow-derived Ia antigen in the thymus during reconstitution of radiation chimeras affects the interpretation of all experiments in which similar chimeras are employed to study the effect of the thymus epithelium on restriction of T cells to Ia antigen. It may help to explain some of the conflicting data obtained in mice (2, 3), as results will depend on which Ia antigen is important for restriction, i.e., the Ia on cortical epithelial cells or the Ia on medullary cells. Longo and Schwartz (12) have recently shown that T cells produced by the thymus shortly after reconstitution of radiation chimeras were restricted to host Ia antigen, but that later populations were restricted to donor Ia antigen in the mouse.
This, together with our results, suggests that the final restriction of T cells to Ia antigen may be imposed at the level of Ia-bearing medullary cells. Whether the Ia-bearing epithelial cells in the thymus cortex have a role in restriction remains to be resolved.

Summary

Irradiated rats were reconstituted with bone marrow from F1 hybrids. Ia antigen of donor-bone marrow origin was detected by an immunoperoxidase technique on cryostat sections and found predominantly in the medulla of rat thymus 2 wk after reconstitution. These Ia-bearing cells increased in number with time after reconstitution, but the Ia on the cortical epithelial cells remained of host origin. The nature of the bone marrow-derived cells and their implication for major histocompatibility complex restriction are discussed.

We thank Dr. Don Mason and Dr. Alan Williams for helpful discussions.

Received for publication 19 January 1981 and in revised form 16 March 1981.
Comparative effectiveness of biologics for patients with moderate-to-severe psoriasis and special area involvement: week 12 results from the observational Psoriasis Study of Health Outcomes (PSoHO)

Introduction: Psoriasis localized at the scalp, face, nails, genitalia, palms, and soles can exacerbate the disease burden. Real-world studies comparing the effectiveness of treatments for these special areas are limited.

Methods: The Psoriasis Study of Health Outcomes (PSoHO) is an international, prospective, non-interventional study comparing the effectiveness of anti-interleukin (IL)-17A biologics (ixekizumab and secukinumab) with other approved biologics, and the pairwise comparative effectiveness of ixekizumab relative to five other individual biologics, for patients with moderate-to-severe psoriasis. To determine special area involvement, physicians answered binary questions at baseline and week 12. The proportion of patients who achieved special area clearance at week 12 was assessed. Missing outcome data were imputed as non-response. Comparative treatment analyses were conducted using frequentist model averaging.

Results: Of the 1,978 patients included, 83.4% had at least one special area involved at baseline, with the scalp (66.7%) as the most frequently affected part, followed by nails (37.9%), face/neck (36.9%), genitalia (25.6%), and palms and/or soles (22.2%). Patients with scalp, nail, or genital, but not palmoplantar or face/neck psoriasis, had significantly higher odds of achieving clearance at week 12 in the anti-IL-17A cohort compared to the other biologics cohort. Patients with scalp psoriasis had a 10–20% higher response rate and significantly greater odds (1.8–2.3) of achieving clearance at week 12 with ixekizumab compared to the included biologics.

Conclusion: Biologics demonstrate a high level of clearance of special areas at week 12 in a real-world setting. Patients with scalp, nail, or genital involvement have significantly higher odds of clearance at week 12 with anti-IL-17A biologics compared to other biologics.
Introduction

Psoriasis (PsO) is a common, chronic, immune-mediated inflammatory disease that can affect all parts of the body, yet the involvement of some special areas of the body is associated with a disproportionate impact on daily functioning and quality of life (1)(2)(3)(4). Reduction in a patient's quality of life is likely due to the associated symptoms, treatment challenges, or the visibility of psoriasis lesions in these special areas, including the scalp, face, nails, genitals, and palms and soles (5)(6)(7)(8). However, large real-world studies that evaluate and compare the effectiveness of different treatments for PsO localized in these special areas are still limited (2,3,9). The Psoriasis Study of Health Outcomes (PSoHO) is a large, international, prospective, non-interventional study that compares the effectiveness of biologics for patients with moderate-to-severe PsO (10, 11). In this study, we investigate the prevalence of special area involvement in a real-world setting and the comparative effectiveness of approved biologics for the treatment of patients with special area involvement of the scalp, genitalia, nails, face and/or neck, or palms and/or soles. We evaluate the comparative effectiveness of anti-interleukin (IL)-17A biologics compared to other approved biologics for the clearance of PsO in these special areas and provide pairwise comparative effectiveness of ixekizumab (IXE) compared to five other individual biologics (10-12).

Methods

Details of the PSoHO study and enrolled patients have been published previously (10, 11). Briefly, the PSoHO study enrolled 1,981 adult patients from 23 countries with a confirmed diagnosis (at least 6 months before baseline) of moderate-to-severe PsO who initiated or switched biologic treatment during routine medical care (10). At baseline and week 12, physicians answered binary questions to determine special area involvement of the scalp, genitalia, nails, face and/or neck, and palms and/or soles. Prescribed biologics were grouped into the anti-IL-17A antibodies cohort [IXE and secukinumab (SEC)] and a second cohort of other biologics [brodalumab, adalimumab (ADA), certolizumab, etanercept, infliximab, ustekinumab (UST), guselkumab (GUS), risankizumab (RIS), and tildrakizumab]. Only treatment groups with more than 100 patients are shown (IXE, SEC, GUS, RIS, ADA, and UST). Descriptive statistics and comparative effectiveness analyses using frequentist model averaging (FMA) are reported as previously published (10). Pairwise comparisons of baseline demographics between the anti-IL-17A and other biologic cohorts and IXE compared to other individual biologics were performed using Fisher's exact test or chi-square for categorical variables and analysis of variance (ANOVA) or exact P-value from the median test (Monte Carlo estimate) for continuous variables. For each special area, analyses were completed for patients with special area involvement at baseline and a valid result at week 12. Adjusted comparative analyses between cohorts or treatments determined the odds ratios (ORs) of patients with involvement of a specific special area at baseline who achieved complete clearance at week 12. Models were adjusted for the covariates previously described (10). Statistical significance is indicated when the 95% confidence intervals (CIs) do not cross the null hypothesis (OR = 1).
Unadjusted response rates for this outcome are also reported, with missing data imputed as non-response. The impact of any potential unmeasured confounding was assessed using the E-value (13). All patients were required to give informed consent for participation in the study. The study was registered at the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCEPP) (14) and was conducted according to Good Pharmacoepidemiology Practices guidelines and the Declaration of Helsinki.

Results

Of the 1,978 patients with special area data at baseline, 83.4% (n = 1,650) had at least one special area involved, with the scalp (66.7%) the most frequently affected, followed by nails (37.9%), face and/or neck (36.9%), genitalia (25.6%), and palms and/or soles (22.2%) (Table 1). Of these 1,650 patients, 66% had more than one special area involved and 5.0% had involvement in all five special areas (Figure 1). Compared to the other biologics cohort, the anti-IL-17A cohort had higher unadjusted response rates and at least 50% greater odds of achieving clearance of scalp (OR 1.5; CIs 1.2, 1.9), genital (OR 1.6; CIs 1.1, 2.5), or nail (OR 1.9; CIs 1.4, 2.4) psoriasis at week 12 (Figure 2). No significant differences between cohorts were determined for patients with either face and/or neck or palmoplantar involvement, although slightly higher unadjusted response rates for clearance of these areas were achieved in the anti-IL-17A cohort compared to the other biologics cohort. In patients who received the EMA-approved on-label dosing, treatment results for special area clearance were comparable to those of the entire patient cohort (Supplementary Figure S1). For patients with scalp involvement, the E-value for scalp clearance for the comparison of the anti-IL-17A cohort with the other biologics cohort was 1.75 [FMA OR (95% CI) = 1.5 (1.2, 1.9)], and the E-value for the lower confidence limit of the point estimate was 1.42. This E-value analysis indicated no substantial confounding (a risk ratio association of >1.75 with both the treatment selection and the outcome would be required to impact the observed treatment estimate).
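The reported E-values can be reproduced with the published VanderWeele-Ding formula; the sketch below uses the standard square-root approximation that converts an odds ratio for a relatively common outcome into an approximate risk ratio, and it recovers the 1.75 and 1.42 quoted above.

```python
import math

def e_value(odds_ratio: float) -> float:
    """E-value for an odds ratio of a common outcome.

    Approximates the risk ratio as sqrt(OR) (VanderWeele & Ding),
    then applies E = RR + sqrt(RR * (RR - 1)).
    """
    rr = math.sqrt(odds_ratio)
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(1.5), 2))  # point estimate of the FMA OR -> 1.75
print(round(e_value(1.2), 2))  # lower 95% confidence limit   -> 1.42
```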
At week 12, IXE-treated patients had a higher unadjusted response rate (74%) for scalp psoriasis clearance compared to patients treated with all other studied biologics (54-65%) (Figure 3A) (Supplementary Table 1). Moreover, patients treated with IXE had 1.8-2.3 times higher odds of achieving scalp psoriasis clearance at week 12 than patients treated with any of the other comparator biologics. For patients with genital involvement, treatment with SEC, IXE, and RIS resulted in unadjusted response rates of over 80% (Figure 3B). IXE also had significantly higher odds (2.6 times) of genital psoriasis resolution at week 12 compared with UST. The greatest variability in unadjusted response rates for biologics was shown for the resolution of nail involvement (40-67%), with the highest response rate shown with IXE (Figure 3C). IXE-treated patients also had significantly higher odds of nail clearance at week 12 than GUS and ADA. No statistically significant differences in comparative effectiveness were observed between IXE and other treatments for the clearance of face and/or neck or palmoplantar involvement (Figures 3D, E). All biological treatments resulted in a high proportion of patients with clearance of face and/or neck involvement (73-84%) at week 12, but lower unadjusted response rates were reported for patients with palmoplantar (65-79%) involvement. In patients who received the EMA-approved on-label dosing (1,764/1,978; 89.2%), treatment results for special area clearance were comparable with those of the entire patient cohort (Supplementary Table 2), with the exception that ixekizumab-treated patients had significantly higher odds of nail clearance than risankizumab-treated patients (OR 2.0; CIs 1.1, 3.3; Supplementary Figure S2C).

[Figure 2 caption: Unadjusted response rates and comparative adjusted odds ratios for the anti-IL-17A cohort compared to the other biologics cohort for patients with scalp, genital, nail, face and/or neck or palmoplantar involvement at baseline and with complete clearance of these special areas at week 12. Comparative results are statistically significant if 95% CIs of the odds ratios do not cover 1. Missing data imputed as non-response. CI, confidence interval; IL, interleukin.]

Discussion

In this real-world study population of 1,978 patients with moderate-to-severe PsO, the involvement of one or more special areas was prevalent. This aligns with other studies showing that PsO in one special area can increase the likelihood of involvement of other special areas, as well as of more severe disease (3,5,9,15). In PSoHO, anti-IL-17A biologics show significantly greater effectiveness for scalp, genital, or nail psoriasis clearance compared with other included biologics in real-world clinical practice. The anti-IL-17A cohort also shows numerically higher response rates for clearance of all special areas at week 12 compared with the other biologics cohort. Since lack of effectiveness for special areas is one of the main reasons that patients report non-compliance with topical treatments (16), knowing the comparative effectiveness of biologics in clearing various special areas can help to inform treatment decisions. The data presented here confirm the effectiveness of anti-IL-17A biologics (10,11,17,18) and extend this result to PsO in special areas of the body that are regarded as burdensome and sometimes difficult to treat. The PSoHO study shows that approximately two-thirds of patients with special area involvement have more than one special area involved. Scalp psoriasis was the most common special area for patients in PSoHO (66.7%), which reflects other real-world studies that record a prevalence ranging from 38 to 65% (3,5,9,19). Patients with scalp involvement report greater disease and itch severity compared with those without scalp involvement (3,7). Topical treatments are often the first option for treatment, even though the presence of hair makes the scalp less accessible, even for foams and solutions (2,19). However, data from this study highlight the effectiveness of anti-IL-17A biologics, and, in particular, IXE at week 12 for the treatment of scalp psoriasis. Higher response rates and significantly higher odds of scalp clearance at week 12 were achieved with anti-IL-17A biologics compared with the other biologics.
With more than 74% of patients achieving scalp psoriasis clearance, IXE-treated patients had a higher unadjusted response rate compared to SEC (62%), GUS (61%), RIS (65%), ADA (58%), and UST (54%) and significantly greater odds (1.8-2.3) of achieving scalp psoriasis clearance at week 12. These results confirm the primary PSoHO data (10) and extend them to patients with scalp psoriasis. More than a quarter of patients in the PSoHO study reported the presence of genital psoriasis, which is within the range of previous reports (20). Approximately 29-63% of patients with PsO are impacted by genital psoriasis at some point during the course of their disease (16,20-22). However, genital psoriasis remains significantly underdiagnosed, with one study reporting that 60% of patients with PsO were never examined in the genital area by their dermatologist (23).

[Figure 3 caption: Unadjusted response rates and comparative adjusted odds ratios of ixekizumab versus individual treatments for patients with baseline involvement and clearance at week 12 of (A) scalp psoriasis, (B) genital psoriasis, (C) nail psoriasis, (D) face and/or neck psoriasis, (E) palmoplantar psoriasis. Comparative results are statistically significant if 95% CIs of the odds ratios do not cover 1. + denotes that the lower CI is <1. Missing outcome data imputed as non-response. CI, confidence interval.]

Furthermore, the burden of genital psoriasis is profound and has a significant impact on sexual health, resulting in greater stigmatization and lower self-esteem than more visible special areas (20,24,25). For the treatment of genital psoriasis, patients had significantly higher odds of clearance in the anti-IL-17A cohort compared to the other biologics cohort. This result also reflects that SEC and IXE treatment led to the highest proportion of patients with resolution of genital psoriasis at week 12. These results support other recent studies showing the rapid resolution of genital psoriasis with IXE (22,26,27). The prevalence of nail psoriasis varies widely in the literature, from 10 to 82% (28), but was reported for over a third of patients in PSoHO. Compared with other special areas, the management of nail psoriasis is particularly challenging (29,30). This was reflected in the PSoHO data, as treatment of nail psoriasis resulted in the greatest variability in response rates across biologics. Nevertheless, patients treated with anti-IL-17A biologics had significantly higher odds of clearance at week 12 compared with other biologics. IXE had 2-27% higher response rates than the other individual biologics (40-64%), and IXE-treated patients had significantly higher odds of achieving clearance than with GUS and ADA. These data mirror the IXORA-R and SPIRIT-H2H clinical trial data, whereby IXE demonstrated superior efficacy compared to GUS, as well as ADA, in the resolution of nail psoriasis at week 24 (31, 32). However, the use of binary questions gives rise to substantially higher unadjusted response rates than those expected using more formal assessments, such as the modified nail psoriasis severity index (mNAPSI) (33). Additionally, it would be premature to make a final assessment of nail psoriasis at 12 weeks, as longer periods are required for the nail plate to grow out and for treatment effectiveness to be evaluated.
This is exemplified by one study, in which differences in treatment effectiveness between IXE and UST only emerged beyond 12 weeks (34). As such, it is prudent to wait for longer-term PSoHO results that will also include specific assessments of nail psoriasis, such as the mNAPSI. Facial psoriasis was previously considered to be uncommon, yet in line with other studies (5,35), PSoHO shows over a third of patients have psoriasis in this special area. Compared to other body areas that may be hidden more easily, people with facial psoriasis often feel stigmatized, which can result in isolation, depression, and reduced quality of life (36, 37). In PSoHO, there was a consistently high proportion of patients (>70%) who achieved clearance of facial psoriasis at week 12 irrespective of the biologics used, with the highest response rates with RIS, ADA, and IXE. Similar to other studies, 22.2% of PSoHO patients had palmoplantar involvement, which, together with nail psoriasis, is arguably the most difficult-to-treat special area (5,38). Patients with palmoplantar psoriasis report greater physical disability, pain, fatigue, and lower quality-of-life scores than those without palmoplantar involvement (3,39). Interestingly, no significant differences between treatments were found, though unadjusted response rates for palmoplantar psoriasis clearance were numerically the highest for UST, SEC, and IXE. Observational studies have inherent limitations, including measured and unmeasured confounding bias compared with randomized clinical trials. However, the application of FMA can accommodate some of these uncertainties in model choice through its machine learning framework. The statistical precision of these comparative analyses was constrained by the number of representative patients with involvement of each special area and the respective covariates used. Limitations of this study include the grouping of non-anti-IL-17A biologics into a single category, the use of binary questions without corresponding scores, such as palmoplantar PASI (PPASI), psoriasis scalp severity index (PSSI) or mNAPSI, and the relatively short follow-up period of 12 weeks. Longer treatment periods may be necessary to fully assess and conclude the comparative effectiveness of the biologics included. Additionally, some special areas may also be challenging for the physician to differentiate, such as between the face and the scalp, which may result in overlap. It is also not possible to exclude the possibility that patients used topical treatments in addition to biologics; this remains to be investigated. This study contributes to our understanding of treatment in these special areas by providing the comparative effectiveness of different biologics for achieving clearance of special areas after 12 weeks. In general, biologics demonstrate a high level of clearance of these special areas at week 12 in a real-world setting. In particular, patients with scalp, nail, or genital, but not palmoplantar or face and neck, involvement have significantly higher odds of achieving clearance of these areas at week 12 with anti-IL-17A biologics compared with other biologics.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Ethics statement

All of the necessary central or local IRB and/or Ethics Committee approvals were obtained for this multisite, international study by United BioSource LLC (UBC). The patients/participants provided their written informed consent to participate in this study.

Author contributions

CS and ER were involved with the conception and design of the work. CS, NH, and CM carried out the analysis of data. SP, ER, LP, RV, NT, NH, GG, CS, CM, and PB were involved with the interpretation of data for the work. NH, CS, and CM drafted the work. All authors contributed to the critical revision of the manuscript and approved the submitted version.

Funding

This study and manuscript were funded by Eli Lilly and Company.
Subungual exostosis: satisfactory aesthetic and functional outcome five years after exeresis

Case report of a 16-year-old female patient with a clinical and histopathological diagnosis of subungual exostosis in the right hallux, submitted to total excision of the lesion and followed up for five years with an excellent aesthetic result.

CASE REPORT

A 16-year-old patient, female, phototype II, presented a painful bone excrescence in the right hallux (Figure 1) during a dermatologic examination. An ultrasound of the preoperative lesion was performed, which elucidated the irregularity of the bone contour of the distal phalanx of the first toe. A complete marginal excision of the lesion was performed, with the material submitted for histopathological analysis, which confirmed the diagnosis of bone exostosis (Figures 2 and 3). Five months after surgery (Figure 4), both the radiography and ultrasound were unaltered, showing normal nail plates. During the postoperative period the patient was already reporting how satisfied she was with the absence of pain and the good appearance of the region. After five years, it was possible to observe an excellent aesthetic result, without nail dystrophy and with no functional impairment of the affected hallux (Figure 5).

INTRODUCTION

Subungual exostosis is a benign bone tumor, encapsulated by fibrocartilage, which mainly affects the distal hallux phalange, with a higher occurrence in adolescents and young female adults [1]. Its etiology remains unknown, with a probable association with previous traumas, which would explain its greater occurrence in the first toe. Clinically, it presents as a painful nodule or painful hardened tumor at the distal end that produces lifting and deformity of the nail. Among the differential diagnoses, malignant tumors, viral wart, fibroma, pyogenic granuloma, or subungual osteochondroma can be cited. Performing an imaging examination, such as an ultrasound or radiography, allows for the visualization of abnormal bone growth with opacity and soft tissue involvement.

DISCUSSION

Subungual exostosis is a rare benign tumor; however, it represents the bone condition most frequently associated with lesions in the nail, with probable traumatic etiology. The diagnosis is clinical and may be paired with radiography. In the case reported, the patient presented alteration of the distal phalanx with irregularity of the bone contour, characteristic of the disease. Pain is a very common symptom because it is a bone alteration. The presence of this symptom becomes important when considering differential diagnoses, such as malignant tumors, viral wart, fibroma, pyogenic granuloma, or subungual osteochondroma [7]. Surgical treatment with resection of the whole tumor area is the recommended therapy, aiming to minimize damage to the nail bed and ungual matrix and to avoid onychodystrophy, a common complication of the treatment. The patient in this case, five years after the exeresis, presented an excellent aesthetic result, without nail dystrophy, no functional impairment of the affected hallux, and, most importantly, no local recurrence.

[Figure 1: Painful bony protrusion in the right hallux. Figure 3: Nail bed presenting a nodular hyperplastic lesion in the reticular chorion, the central area of which consists of osteoid tissue surrounded by hyaline cartilaginous tissue. Figure 5: After five years, excellent aesthetic results without nail dystrophy.]
A New Way of Measuring Openness: The Open Governance Index

Open source software is now "business as usual" in the mobile industry. While much attention is given to the importance of open source licenses, we argue in this article that the governance model can be as necessary to a project's success and that projects vary widely in the governance models - whether open or closed - that they employ. Open source governance models describe the control points that are used to influence open source projects with regard to access to the source code, how the source code is developed, how derivatives are created, and the community structure of the project. Governance determines who has control over the project beyond what is deemed legally necessary via the open source licenses for that project. The purpose of our research is to define and measure the governance of open source projects, in other words, the extent to which decision-making in an open source project is "open" or "closed". We analyzed eight open source projects using 13 specific governance criteria across four areas of governance: access, development, derivatives, and community.

Introduction

Much has been written and debated regarding open source licenses - from the early days of the GPL license to the modern days of the Android open source platform. Yet we believe that there is one very important aspect of open source projects that has been neglected: open source governance models. While licenses determine rights to use, copy, and modify, governance determines the rights to visibility, influence, and derivative creation (Table 1). And while licenses apply to the source code, governance applies to the project or platform. More importantly, the governance model describes the control points used in an open source project - such as Android, Qt, or WebKit - and is a key determinant in the success or failure of a platform.
The governance model used by an open source project encapsulates all the hard questions about a project. Who decides on the project roadmap? How transparent are the decision-making processes? Can anyone follow the discussions and meetings taking place in the community? Can anyone create derivatives based on that project? What compliance requirements are there, and how are these enforced? Governance determines who has influence and control over the project or platform - beyond what is legally deemed in the open source license. In today's world of commercially-led mobile open source projects, it is not enough to understand the open source license used by a project. It is the governance model that determines whether or not decision making within an open source project is open, accessible, and transparent to all users or whether it is concentrated amongst a specific set of users.

Our findings suggest that the most open platforms will be most successful in the long term; however, we acknowledge exceptions to this rule. We also identify best practices that are common across these open source projects with regard to source code access, development of source code, management of derivatives, and community structure. These best practices increase the likelihood of developer use of a project. Transparency of, and access to, the actual decision-making process (accessibility) are governance criteria that are not readily captured by describing governance models as either flat or hierarchical. In this article, we firstly explain the key governance criteria that we used to analyze eight different mobile open source projects and the outcome of this analysis. We then examine why Android has been so successful given that we find it is also the least open mobile open source project. Following from this, we identify best practices used by the most successful open source projects across the four governance areas of access, development, derivatives, and community. Finally, we suggest areas for future research and provide some conclusions regarding our research findings to date.

Analysis of Governance Models

We set out with an ambitious goal: to measure openness - the degree to which an open source project is "open" or "closed" - in ways that are rarely discussed publicly or covered in its license.
We set out to define and measure the governance of open source projects in a transparent and comprehensive manner - much like how open source licenses are defined and classified into "copyleft", "permissive", and so on. Unlike open source licenses, the governance model is made up of less visible terms, conditions, and control points that determine access, influence, decisions, and derivatives of that project. We researched eight mobile open source projects: Android, MeeGo, Linux, Qt, WebKit, Mozilla, Eclipse, and Symbian. We selected these projects based on breadth of coverage; we picked both successful (Android) and unsuccessful projects (Symbian); both single-sponsor (Qt) and multi-sponsor projects (Eclipse); and both projects based on meritocracy (Linux) and on membership status (Eclipse). Our research was carried out over a six-month period. Based on our analysis, we published a report in which we proposed the Open Governance Index (OGI), a measure of open source project "openness" (Vision Mobile, 2011; http://www.visionmobile.com/research.php#OGI). The OGI comprises 13 metrics (Box 1, reproduced below) across the four areas of governance: access (availability of latest source code, developer support mechanisms, public roadmap, and transparency of decision making), development, derivatives, and community.
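The OGI scores discussed below are percentages aggregated from these criteria. As a purely illustrative sketch of how such a criteria-based index could be computed - the criterion names, the equal weighting, and the inputs below are hypothetical placeholders, not the report's actual scoring rubric - consider:

```python
# Hypothetical sketch of a criteria-based openness index; the criteria
# and equal weighting below are illustrative, not the actual OGI rubric.
CRITERIA = {
    "access": ["source_code_available", "developer_support", "public_roadmap",
               "transparent_decisions"],
    "development": ["transparent_contributions", "accessible_committership"],
    "derivatives": ["no_trademark_gate", "open_go_to_market"],
    "community": ["flat_structure"],
}

def openness_score(project: dict) -> float:
    """Openness expressed as the percentage of criteria a project satisfies."""
    names = [name for group in CRITERIA.values() for name in group]
    met = sum(bool(project.get(name)) for name in names)
    return 100.0 * met / len(names)

# A mostly closed project satisfying only two of nine placeholder criteria
# scores low, echoing how a closed project lands near the bottom of the index.
closed_project = {"source_code_available": True, "developer_support": True}
print(f"{openness_score(closed_project):.0f}%")  # -> 22%
```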
Are "Open" Projects More Successful?

A successful open source project demonstrates long-term involvement of users and developers, along with a substantial number of derivatives, and the project continually develops, matures, and evolves over time. Our research suggests that platforms that are most open will be most successful in the long term. Eclipse, Linux, WebKit, and Mozilla each testify to this through their high OGI scores (Table 2). In terms of openness, Eclipse is by far the most open platform across access, development, derivatives, and community attributes of governance. It is closely followed by Linux and WebKit, and then Mozilla, MeeGo, Symbian, and Qt. Seven of the eight platforms reviewed fell within 30 percentage points of each other in the OGI. Our research has identified certain attributes of successful open source projects. These attributes are: timely access to source code, strong developer tools, process transparency, accessibility to contributing code, and accessibility to becoming a committer. Equal and fair treatment of developers (i.e., "meritocracy") has become the norm and is expected by developers with regard to their involvement in open source projects. We also note that there are common areas where most open source projects struggle to be "open". These attributes coalesce around decision making with regard to the project roadmap and committing code to the project. In particular, we find that open source projects that originate from commercial organizations struggle most with relinquishing project control, which is not surprising, considering the structured and hierarchical decision-making structure of most organizations.

The Android paradox

Android ranks as the most closed project we examined, with an OGI score of 23%. Yet, at the same time, it is one of the most successful projects in the history of open source. Is Android proof that open governance is not needed to warrant success in an open source project? Android's success has little to do with the open source licensing of the public codebase. Android would not have risen to its current ubiquity were it not for Google's financial muscle and famed engineering team. Development of the Android platform has occurred without the need for external developers or the involvement of a commercial community. Google has provided Android at "less than zero" cost, since its core business is not software or search, but driving ads to eyeballs. As is now well understood, Google's strategy has been to subsidize Android such that it can deliver cheap handsets and low-cost wireless Internet access in order to drive more eyeballs to Google's ad inventory. More importantly, Android would not have risen were it not for the billions of dollars that OEMs and network operators have invested in the platform.

[Box 1. The OGI governance criteria include the following questions:
- Developer support mechanisms - are project mailing lists, forums, bug-tracking databases, source code repositories, developer documentation, and developer tools available to all developers? Is the project roadmap available publicly?
- Transparency of decision mechanisms - are project meeting minutes/discussions publicly available such that it is possible to understand why and how decisions are made relating to the project?
- Transparency of contributions and acceptance process - is the code contribution and acceptance process clear, with progress updates of the contribution provided (via Bugzilla or similar)?
- Transparency of contributions to the project - can you identify from whom source code contributions originated?
- Accessibility to become a committer - are the requirements and process to become a committer documented, and is this an equitable process (i.e., can all developers potentially become committers)? Note that a "committer" is a developer who can commit code to the open source project. The terms "maintainer" and "reviewer" are also used as alternatives by some projects.
- Transparency of committers - can you identify the committers to the project?
- Does the contribution license require a copyright assignment, a copyright license, or patent grant?
- Are trademarks used to control how and where the platform is used via enforcing a compliance process prior to distribution?
- Are go-to-market channels for applications derivatives constrained by the project in terms of approval, distribution, or discovery?
- Is the community structure flat or hierarchical (i.e., are there tiered rights depending on membership status)?]

Best Practices

Based on our research of major mobile open source projects, we have outlined the best practices for governance models. These practices are listed across the four key areas of governance: access, development, derivatives, and community.

Access

The minimum requirement for any project to be an open source project is source-code access such that developers can easily read, download, change, and run the code. There should be no developer discrimination; all source code should be available to all developers in a timely manner. Restrictions with regard to source code should be at a minimum, and there should be no preferential access for specific developers because this can cause friction and lead to branching of the project. All open source projects should use open source licenses that are approved by the Open Source Initiative (OSI; http://www.opensource.org). The next most important requirement is ease of access to developer tools, mailing lists, and forums, such that developers can get up to speed on the specifics of the project and build and run the code with minimum effort.

Development

As much as possible, a simple code contributions process should operate freely and without any hindrance.
While we appreciate valid intellectual property concerns, such as the risk of copyright infringement, these should not complicate the contributions process any more than necessary. We also note that none of the projects reviewed in this study mandate copyright assignment; this is a good example of why copyright assignment is largely unnecessary. A broad copyright (and ideally patent) license for use of the work should suffice, provided the project has researched and identified the appropriate open source license under which to distribute the project. Copyright assignment is only ever needed when the project decides to change the terms under which it licenses the source code of the project, and this should be largely unnecessary, provided that the correct open source license is identified in the first place. Given that the success of open source projects is largely based on the accrual of developer interest and support, we identify the transparency of decision-making and equitable treatment of all developers (such that they can become project committers) as being critical to long-term success. Restriction of commit rights to specific developers or organizations is a sure way to lose developer support in the long run because developers become frustrated with the inability to commit code themselves, especially if their contributions are continually rejected or ignored. Developers often need to know where the project is headed, how it will get there, and why it is headed in that direction. They also often want the opportunity to influence the project to meet their own needs (i.e., to "scratch their own itch"). The main means by which developers can achieve this influence is by being able to commit code to the project. Therefore, it should be possible for all developers to commit code to the project, once they have shown sufficient knowledge of the code to do so. This is where meritocracy comes into play: those that "do" should be rewarded accordingly. Additionally, the project should provide transparent project metrics regarding where contributions come from and who committed them. With regard to the actual development process itself, the project should have a policy of contributing to upstream projects first (if the project comprises other open source projects) such that changes and benefits accrue to upstream and downstream projects.

Derivatives

Compliance frameworks are becoming more and more common among open source projects in order to deter fragmentation and ensure that applications are transferable across multiple platforms or operating systems. However, the best mechanism to keep compliance requirements honest is to make the compliance process as independent and transparent as possible such that it cannot be manipulated by any one developer or organization. For example, MeeGo has asked the Linux Foundation to manage its trademark compliance requirements so that they are independent of the project.

Community

A number of the projects we reviewed use a not-for-profit foundation structure to provide independence, such that the platform is not controlled by any one organization. Other projects have established a formal association with the Linux Foundation, and this lends strong "open source credibility" to the project. Another aspect of open source communities is the method by which authority is exercised within the community.
For example, we note that both Linux and Mozilla use the benevolent dictator model, where decisions regarding disputes are made by one person. Whilst this process may work, it is still centralization of authority and decision-making, and as such it does not easily allow for others to permeate this decision-making process.

Evolving the Open Governance Index

We aim to continue the discussion on governance, to refine our criteria even further, and to make the OGI measure as meaningful as possible for the open source community. One of the first suggestions has been with regard to adding a time dimension to the criteria (i.e., does openness change over time?). Our vision for the Open Governance Index is for it to be a robust and, as much as is possible, objective measure of governance for open source projects. We believe that this is necessary so that users of and contributors to open source projects, including commercial entities, understand the means by which they can, or cannot, influence the direction and content of the project.

Conclusion

Today, open source software is "business as usual" in the mobile industry. It is proven that open source platforms such as Android can be as successful as proprietary platforms in terms of platform adoption, device sales, and applications development. And while open source plays a key role in developer attraction, it does not predetermine success. The mobile open source project space is undergoing consolidation to the extent that: 1. Symbian is no longer an active project, having been closed by Nokia and brought in-house while Nokia refocuses its effort using the Windows Mobile platform. 2. Nokia sold the commercial licensing rights for Qt to Digia in March 2011 and advised in November 2011 that they would "abnegate ownership" of Qt to focus on being maintainers only. This consolidation does not detract from the fact that mobile open source platforms can be very successful - witness Linux, Eclipse, and Android - but it does reiterate the importance of organizational support to the success of any open source project and community. To become a successful open source project, we find that there are best practices, as we have detailed in this article, which should be used to provide the best possible likelihood of success. "Open governance" goes hand-in-hand with "open source"; it is about ensuring that developers and users have equal freedoms not just to use, but also to modify and build on, the project.
Adsorbent Ability of Treated Peganum harmala-L Seeds for the Removal of Ni (II) from Aqueous Solutions: Kinetic, Equilibrium and Thermodynamic Studies

The main goal of this study was to evaluate the performance of a new adsorbent, treated Peganum harmala-L seeds (TPHS), for the removal of Ni (II) from aqueous solution. Batch experiments were performed as a function of various experimental parameters. The adsorption studies included both equilibrium adsorption isotherms and kinetics. Equilibrium data fitted very well with the Langmuir isotherm model. The maximum adsorption capacity was determined to be 91.74 mg/g at pH 7. Kinetic studies showed better applicability of the pseudo-second-order model for both adsorbents. The negative value of ΔG° confirmed the feasibility and spontaneity of Ni (II) adsorption onto TPHS.

Introduction

Removal of heavy metals from wastewaters and industrial wastes has become a very important environmental issue. Nickel salts are commonly used in silver refineries, electroplating, zinc base casting, storage battery industries, printing, and in the production of some alloys, which discharge significant amounts of nickel in various forms to the environment. At higher concentrations, Ni (II) causes lung, nose, and bone cancer; headache; dizziness; nausea and vomiting; chest pain; tightness of the chest; dry cough and shortness of breath; rapid respiration; cyanosis; and extreme weakness. Hence, it is essential to remove Ni (II) from industrial wastewaters before they are discharged into natural water sources [1]. Adsorption is considered an effective, efficient, and economic method for water purification [2]. Since the performance of an adsorptive separation is directly dependent on the quality and cost effectiveness of the adsorbent, the last decade has seen a continuous improvement in the development of effective novel adsorbents in the form of activated carbons [3], zeolites [4], clay minerals [5], chitosan [6], lignocelluloses [7], natural inorganic minerals [8], and so forth. Adsorption onto activated carbon (AC) has proven to be one of the most effective and reliable physicochemical treatment methodologies. AC from cheap and readily available sources has been successfully employed for the removal of heavy metals [9]. There is only limited research on the preparation of activated carbons or modified natural adsorbents using Peganum harmala-L and their application for removing nickel from wastewaters. Peganum harmala, commonly called Esfand, Wild rue, Syrian rue, or African rue, is a plant of the family Nitrariaceae. This plant is native from the eastern Iranian region west to India. Peganum harmala-L is abundant and inexpensive in Iran. However, microorganism-based and other biomasses often need to be prepared before application as biosorbents of metal ions. This would increase the cost of the overall wastewater treatment process. Nickel was selected as the adsorbate because its compounds have applications in many industrial processes such as nonferrous metal, mineral processing, paint formulation, electroplating, batteries manufacturing, forging, porcelain enameling, and copper sulphate production. The chronic toxicity of nickel to humans and the environment has been well documented. High concentrations of nickel cause cancer of the lung, nose, and bone [10].
This work investigates the potential of treated Peganum harmala-L seeds in the removal of nickel ions from aqueous solutions. Batch studies were conducted using synthetic Ni ion solutions to assess the adsorption kinetic, isotherm, and thermodynamic models. The structure of raw and treated Peganum harmala-L was characterized by FTIR and SEM analyses. A series of tests covering pH, adsorbent dosage, contact time, initial solution concentration and temperature was carried out to study their effects on Ni ion adsorption onto treated Peganum harmala-L seeds.

Preparation of Treated Peganum Harmala-L Seeds. Peganum harmala-L seeds were obtained from Khomein (located in the south of Markazi province) in Iran. Raw seeds were air-dried, crushed, and impregnated with diluted H2SO4 (3:1). The materials were then treated in a hot-air oven at 80 °C for 24 h. The carbonized material was washed with distilled water and then soaked in 1% NaHCO3 solution to remove any remaining acid. It was washed with distilled water until the pH of the TPHS reached 6.5, dried at 105 °C, and sieved to a particle size smaller than 0.125 mm.

Preparation of Ni (II) Solutions. The Ni ion concentration in the solutions was determined by Atomic Absorption Spectrometer (SHIMADZU model AA 680). Before the analysis, the samples were diluted with distilled water to concentrations of less than 100 ppm Ni (II). The pH was measured with a Metrohm 827 (Swiss-made) pH meter. All chemicals were reagent grade and used without further purification. A synthetic stock solution of 1000 mg/L Ni was prepared by dissolving analytical grade Ni(NO3)2·6H2O in double-distilled water. Working solutions of the desired concentrations were then prepared by successive dilution. All solutions were made using deionized distilled water.

Batch Adsorption Experiments. Batch experiments were conducted in order to study the effect of important parameters such as pH, adsorbent dose, contact time, and initial ion concentration on the adsorptive removal of Ni ions using TPHS. In the batch adsorption experiments, 10 mL of Ni (II) solution of initial concentration 100 mg/L was contacted with 10 mg TPHS. The contents were placed on a stirrer and gently shaken at 150 rpm. After shaking, the solution was filtered, and the remaining Ni concentration was analyzed. The effects of pH (2-10), contact time (5-90 min), adsorbent dosage (1.0-15.0 mg), ion concentration (0.1-500 mg/L), and temperature (288-318 K) were tested. The amount of Ni (II) adsorbed at equilibrium (q_e, mg/g) and the Ni (II) removal efficiency (%) were calculated from the following equations:

$$q_e = \frac{(C_0 - C_e)V}{m}, \qquad \text{Removal}(\%) = \frac{C_0 - C_e}{C_0} \times 100,$$

where C_e and C_0 are the final and initial concentrations of Ni (II) (mg/L), respectively, V is the volume of the Ni (II) solution (L), and m is the mass of the TPHS sample used in the experiment (g).

SEM Analysis and FTIR Spectroscopy. The surface structure of the untreated and treated adsorbent was characterized using Scanning Electron Microscopy. To determine the functional groups involved in the metal adsorption process, the untreated and treated adsorbents were analyzed using a Fourier Transform Infrared Spectrometer (Bruker Co. Tensor 27, Germany) in the range of 400-4000 cm⁻¹.
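As a quick sanity check on the two equations above, a minimal Python sketch; the concentration values below are illustrative placeholders, not measured results:

```python
def adsorption_metrics(c0, ce, volume_l, mass_g):
    """Equilibrium uptake q_e (mg/g) and removal efficiency (%) from the equations above."""
    qe = (c0 - ce) * volume_l / mass_g   # mg/g
    removal = (c0 - ce) / c0 * 100.0     # %
    return qe, removal

# Illustrative values mirroring the batch set-up: 10 mL of 100 mg/L Ni(II), 10 mg TPHS
qe, removal = adsorption_metrics(c0=100.0, ce=15.0, volume_l=0.010, mass_g=0.010)
print(f"q_e = {qe:.1f} mg/g, removal = {removal:.1f}%")
```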
Pseudo-First-Order and Pseudo-Second-Order Kinetic Models. The pseudo-first-order equation is given by Lagergren:

$$\frac{dq_t}{dt} = k_1 (q_e - q_t),$$

which can be expressed in the linear form

$$\ln(q_e - q_t) = \ln q_e - k_1 t,$$

where q_t (mg/g) is the amount of Ni (II) adsorbed at time t (min), q_e (mg/g) is the amount of Ni (II) adsorbed on the adsorbent under equilibrium conditions, and k_1 is the pseudo-first-order rate constant (1/min). The rate constant k_1 (1/min) was calculated from the slope of the plot of ln(q_e − q_t) versus t for nickel. The pseudo-second-order model is given by Ho and McKay:

$$\frac{dq_t}{dt} = k_2 (q_e - q_t)^2,$$

which can be rewritten as

$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e},$$

where k_2 is the pseudo-second-order rate constant (g/mg·min). The linear plot of t/q_t against t was made in order to calculate the second-order rate constant k_2 and the equilibrium adsorption capacity q_e from the intercept and slope, respectively.

Intraparticle Model. Weber and Morris proposed a kinetic model for the diffusion-controlled sorption process. The intraparticle diffusion equation is

$$q_t = k_{id}\, t^{0.5} + C,$$

where k_id is the intraparticle rate constant (mg/g·min^0.5). The slope of the plot of q_t against t^0.5 gives the value of the intraparticle rate constant, and C (mg/g) is a constant that gives an idea about the thickness of the boundary layer; that is, the larger the value of C, the greater the boundary layer effect.

Adsorption Isotherms. In order to describe the adsorption isotherm, three important isotherms were selected in this study: the Langmuir, Freundlich, and Temkin isotherms. The Langmuir isotherm [14], in its linear form, is

$$\frac{C_e}{q_e} = \frac{1}{q_m K_L} + \frac{C_e}{q_m},$$

where C_e is the equilibrium concentration (mg/L), q_e is the amount adsorbed onto the adsorbent (mg/g), q_m is the complete monolayer adsorption capacity (mg/g), and K_L is the equilibrium adsorption constant (L/mg). The essential features of the Langmuir isotherm can be expressed in terms of the dimensionless separation factor [15]

$$R_L = \frac{1}{1 + K_L C_0},$$

where C_0 (mg/L) is the initial concentration of adsorbate and K_L is the Langmuir constant (L/mg). There are four possibilities for the R_L value: for favorable adsorption, 0 < R_L < 1; for unfavorable adsorption, R_L > 1; for linear adsorption, R_L = 1; and for irreversible adsorption, R_L = 0. The Freundlich isotherm [16], in its linear form, is

$$\ln q_e = \ln K_F + \frac{1}{n} \ln C_e,$$

where K_F (mg/g) is the Freundlich adsorption constant and 1/n is a measure of the adsorption intensity. Values in the range 1 < n < 10 indicate that adsorption is considered favorable. The Temkin isotherm [17] is

$$q_e = \frac{RT}{b_T} \ln(A_T C_e),$$

where A_T (L/mol) is the binding equilibrium constant corresponding to the maximum binding energy, b_T is the constant related to the heat of adsorption, R (8.314 J/mol·K) is the universal gas constant, and T (K) is the absolute temperature.

Adsorption Thermodynamics. Thermodynamic parameters, namely the change in Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), were calculated in order to determine the feasibility of adsorption, using the following equations:

$$\Delta G^\circ = -RT \ln K_d, \qquad \ln K_d = \frac{\Delta S^\circ}{R} - \frac{\Delta H^\circ}{RT},$$

where K_d = q_e/C_e, q_e is the amount of ion adsorbed (mg/g), C_e is the equilibrium concentration (mg/L), T is the temperature (K), and R is the gas constant (8.314 J/mol·K) [18]. ΔH° and ΔS° values were calculated from the slope and intercept of the plot of ln K_d versus 1/T.
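A compact sketch of how the linearized pseudo-second-order and van't Hoff fits above are computed in practice; the data arrays are illustrative placeholders, not the measured values from this study:

```python
import numpy as np

# --- Pseudo-second-order fit: t/qt = 1/(k2*qe^2) + t/qe ---
t = np.array([5.0, 10, 20, 30, 45, 60, 90])              # min (illustrative)
qt = np.array([40.0, 58, 72, 78, 82, 84, 85])            # mg/g (illustrative)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope                                     # mg/g, from the slope
k2_fit = slope**2 / intercept                            # g/(mg*min), intercept = 1/(k2*qe^2)

# --- van't Hoff fit: ln Kd = dS/R - dH/(R*T) ---
R = 8.314                                                # J/(mol*K)
T = np.array([288.0, 298.0, 308.0, 318.0])               # K
Kd = np.array([5.5, 6.8, 8.0, 8.9])                      # illustrative qe/Ce values
vslope, vintercept = np.polyfit(1.0 / T, np.log(Kd), 1)
dH = -vslope * R / 1000.0                                # kJ/mol, from the slope
dS = vintercept * R                                      # J/(mol*K), from the intercept
dG = -R * T * np.log(Kd) / 1000.0                        # kJ/mol at each temperature
print(qe_fit, k2_fit, dH, dS, dG)
```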
Effect of pH on the Adsorption. The pH of the system is very important for the adsorption capacity due to its influence on the surface properties of the adsorbent and the ionization/dissociation of the adsorbate molecule. The effect of pH (from 2.0 to 10.0) on Ni (II) adsorption by TPHS is shown in Figure 3(a). As the initial pH value increased from 2.0 to 8.0, the removal of Ni (II) by TPHS increased from 65.96% to 89.79%. Further increasing the pH value from 8.0 to 10.0 did not significantly enhance the removal of Ni (II) (from 89.79% to 93.17%). The main reason for these phenomena can be explained as follows. When the initial pH value was low, many H3O+ ions competed with Ni (II) for the exchange sites in the adsorbent [19], thus hindering the removal of Ni2+. When the initial pH value increased, the concentration of H3O+ ions decreased, and more Ni (II) could react with the freed exchange sites. As can be seen in Figure 3(a), the highest nickel ion uptake (93%) was observed at pH 10, but this partly reflects precipitation of Ni ions between pH 8 and 10. Thus, the optimized pH was taken as 7, and the rest of the experiments were conducted at this pH. The mechanism of the biosorption of Ni (II) onto TPHS can be represented by surface ion-exchange expressions.

Effect of Adsorbent Dosage on the Adsorption. The effect of TPHS dosage on Ni (II) removal is shown in Figure 3(b); increasing the dosage increased the removal percentage. Thus, 7.0 mg of TPHS was fixed as the optimum dosage, and the rest of the studies were carried out at this dosage.

Effect of Contact Time on the Adsorption. Equilibrium time is one of the most important parameters in the design of economical wastewater treatment systems. The effect of contact time on the removal of Ni (II) by TPHS at C_0 = 100 mg/L and m = 7 mg showed rapid adsorption of Ni (II) in the first 20 min; thereafter, the adsorption rate decreased gradually, and adsorption reached equilibrium in about 90 min, as shown in Figure 3(c). It may also be observed from Figure 3(c) that more than 84.62% of the ion adsorption takes place within a contact time of 20 min and increases gradually thereafter. The rapid adsorption at the initial contact time was due to the availability of more active surface on the adsorbents, which leads to fast adsorption of the ion from the solution. The later, slower rate of ion adsorption probably occurred due to the lower availability of active sites on the surface of the adsorbent as well as the slow diffusion of the solute into the adsorbent pores [20]. Hence, a 90 min contact time was chosen as the optimized time for the adsorbent in the later experiments.
Effect of Initial Metal Concentration on the Adsorption. The effect of the initial concentration C_0 on the removal of Ni ions by TPHS is shown in Figure 3(d). When C_0 increased from 0.1 to 500 mg/L, the amount of Ni ions adsorbed per unit mass of TPHS (q_e) increased from 0.12 to 89.69 mg/g, whereas the percentage of Ni ion removal decreased from 90.0% to 12.55%. The percentage removal of the ions decreased slowly in the concentration range of 0.1-50.0 mg/L but fell rapidly from 50.0 to 500.0 mg/L. Ion removal is therefore highly concentration dependent at higher concentrations. This can be explained by the fact that the adsorbent has a limited number of active sites, which become saturated above a certain concentration. At low ion concentrations, the ratio of surface active sites to total ions in the solution is high, and hence all ions may interact with the active functional groups on the surface of the adsorbent and be removed from solution. However, with increasing ion concentrations, the number of active adsorption sites is no longer sufficient [21].

Effect of Temperature on the Adsorption. Temperature can affect the adsorption rate. The effect of temperature on nickel adsorption was investigated in the range of 288-318 K; the results are presented in Figure 3(e). The adsorption capacity increased with the temperature of the solution: increasing it from 288 to 318 K raised the removal percentage from 84.62% to 89.93%.

Kinetic Study. In order to investigate the adsorption behavior of metal ions on the adsorbent, the pseudo-first-order, pseudo-second-order, and intraparticle models were used. The fitted linear forms of the pseudo-first-order and pseudo-second-order models are shown in Figures 4(a) and 4(b). The values of the adsorption rate constants k_1, k_2, and q_e for the two kinetic models were calculated by the method described above. Table 1 shows these adsorption rate constants with their regression coefficients (R²). Comparing the R² values of the pseudo-second-order and pseudo-first-order fits, the pseudo-second-order model is more appropriate. The kinetic experimental data were predicted well by this model, and the correlation coefficient for the experiment was greater than 0.99. For Ni2+, the q_e value predicted from the second-order model is much closer to the experimental value than that from the first-order model. The pseudo-first-order and pseudo-second-order models cannot identify the diffusion mechanism. To determine whether intraparticle diffusion is the rate-limiting step in the adsorption of Ni (II) onto TPHS, the intraparticle diffusion model proposed by Weber and Morris was used to analyze the kinetic results; the plot of q_t versus t^{1/2} is shown in Figure 4(c). As can be seen from Figure 4(c), the fitted line does not pass through the origin and the correlation coefficient (R²) is less than 0.99, suggesting that two or more steps are involved in the nickel adsorption onto the prepared adsorbent. The deviation from a straight line in the Weber and Morris model may be due to differences in the rate of mass transfer in the initial and final stages of adsorption [10].

Equilibrium Study. Isotherms state the particular correlation between the amount of Ni (II) adsorbed onto the TPHS surface under given empirical conditions and the equilibrium concentration of Ni (II) in the liquid phase. The linear plot of the Langmuir isotherm for Ni (II) ion adsorption on TPHS is shown in Figure 5(a).
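In practice, the Langmuir parameters come from a straight-line fit of C_e/q_e against C_e, and the separation factor follows directly. A minimal Python sketch with illustrative (not measured) equilibrium data:

```python
import numpy as np

# Illustrative equilibrium data (not the paper's measurements)
ce = np.array([2.0, 8.0, 25.0, 80.0, 200.0, 437.0])     # mg/L
qe = np.array([8.5, 26.0, 50.0, 72.0, 84.0, 89.7])      # mg/g

# Langmuir linear form: Ce/qe = 1/(qm*KL) + Ce/qm
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm = 1.0 / slope                     # mg/g, from the slope
kl = slope / intercept               # L/mg, since intercept = 1/(qm*KL)

c0 = np.array([0.1, 1.0, 10.0, 100.0, 500.0])
rl = 1.0 / (1.0 + kl * c0)           # separation factor at each initial concentration
print(f"qm = {qm:.1f} mg/g, KL = {kl:.3f} L/mg, RL = {np.round(rl, 3)}")
```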
The constants calculated for the isotherm models described above, together with the correlation coefficients, are given in Table 2. As can be seen from Table 2, the Langmuir isotherm shows an adequate fit to the experimental data. The maximum adsorption capacity of TPHS (q_m) and the adsorption energy coefficient (K_L), calculated from the slope and the intercept of the linear plot, were 91.74 mg/g and 0.049 L/mg at 25 °C, respectively. Figure 5(b) shows the variation of the separation factor (R_L) with initial Ni ion concentration. The R_L values were in the range 0-1 (from 0.9951 to 0.0392), indicating that the adsorption of Ni ions by the adsorbent is favorable. R_L approaching zero with increasing C_0 means that the adsorption of Ni ions onto TPHS is less favorable at high initial Ni ion concentrations.

Linear plots of the Freundlich and Temkin isotherms for Ni (II) ion adsorption on TPHS are shown in Figures 5(c) and 5(d). The Freundlich and Temkin constants calculated from the linear equations are summarized in Table 2. The value of 1/n was in the range 0-1, which shows favorable adsorption on the adsorbent. The correlation coefficients of the Freundlich and Temkin fits were lower than the Langmuir value. The suitability of the Langmuir model for interpreting the experimental data suggests that ion adsorption is limited to monolayer coverage [27]. From linear regression of the data points, the R² value of 0.81 for Ni (II) is rather low, which indicates that the biosorption of Ni (II) did not follow the Temkin isotherm closely.

Comparison of Adsorption Capacity with Different Adsorbents Reported in the Literature. The maximum monolayer adsorption capacity of the TPHS adsorbent for the removal of Ni (II) was compared with those of other adsorbents reported in the literature; the values are shown in Table 3. It is clear from Table 3 that the adsorption capacity of the TPHS adsorbent was comparable to or greater than many of those previously reported.

Adsorption Thermodynamics. The positive value of ΔS° indicates increasing randomness at the solid-solution interface during the adsorption process [28]. According to Huang et al. [29], the ΔH° value for physisorption is smaller than 40 kJ/mol, and the value obtained in this study (ΔH° = 11.979 kJ/mol) shows that the adsorption of Ni (II) onto TPHS was a physisorption process.

Conclusions

This study demonstrated that a treated adsorbent developed from Peganum harmala-L seeds can be used as an effective adsorbent for the removal of Ni (II) from aqueous solution. The equilibrium time for adsorption of Ni (II) from aqueous solutions was achieved within 90 min of contact time. Kinetics studies showed better applicability of the pseudo-second-order model for this adsorbent. The isotherm study indicated that the adsorption data correlated well with the Langmuir isotherm model. The maximum monolayer adsorption capacity for the removal of Ni (II) using the TPHS adsorbent was found to be 91.74 mg/g. Thermodynamic constants were also evaluated using equilibrium constants changing with temperature. The negative values of ΔG° suggested that the adsorption was spontaneous in nature. Finally, it can be concluded that the use of TPHS as an adsorbent may be an alternative to more costly materials, such as ion-exchange resins and carbon nanotubes, for the treatment of liquid wastes containing the toxic Ni (II) metal ion.

Figure 3: (a) Effect of pH, (b) effect of TPHS dosage, (c) effect of contact time, (d) effect of initial concentration, and (e) effect of temperature on removal of Ni ion by TPHS.
Figure 4: (a) Plot of the pseudo-first-order model. (b) Plot of the pseudo-second-order model. (c) Plot of the intraparticle model of Ni (II) on TPHS.

Table 1: Kinetic constants for the adsorption of Ni ions on TPHS.

Table 2: Isotherm constants for the adsorption of Ni ions on TPHS.

Table 3: Comparison of maximum monolayer adsorption capacities for Ni (II) on different adsorbents.
Adsorbent | q_m (mg/g) | Conditions | Reference
… | … | T = 25 °C, adsorbent dosage = 10 g/L | Din and Mirza [25]
Modified Moringa oleifera leaves powder | 138.04 | T = 20 °C, pH = 6.0, adsorbent dosage = 10 g/L | Reddy et al. [26]
Treated Peganum harmala-L seeds | 91.74 | T = 25 °C, pH = 7.0, adsorbent dosage = 0.7 g/L | This study

Table 4: Thermodynamic constants for the adsorption of Ni ions on TPHS.
Hierarchical sparsity priors for regression models We focus on the increasingly important area of sparse regression problems where there are many variables and the effects of a large subset of these are negligible. This paper describes the construction of hierarchical prior distributions when the effects are considered related. These priors allow dependence between the regression coefficients and encourage related shrinkage towards zero of different regression coefficients. The properties of these priors are discussed and applications to linear models with interactions and generalized additive models are used as illustrations. Ideas of heredity relating different levels of interaction are encompassed. Introduction Regression modelling is an important means of understanding the effect of predictor variables on a response. These effects can be hard to estimate if the predictor variables are highly correlated (the problem of collinearity) or there are large numbers of predictor variables. Standard estimators such as least squares tend to have large standard errors in these cases which make interpretation difficult and can lead to over-fitting. These problems can be addressed using traditional variable selection procedures, such as stepwise regression and subset selection in the classical paradigm (Hastie et al., 2001) or "spikeand-slab" priors in the Bayesian framework (Mitchell and Beauchamp, 1988). More recently, alternative regularization methods have been proposed which use penalized maximum likelihood methods in a classical framework or absolutely continuous priors in a Bayesian framework. These methods assume that the effects are sparse which means that only a subset (which is often considered small) of the predictor variables has an effect on the response. Many methods have been proposed which eliminate the subset of variables which have little or no effect and so allow better estimation of the important effects. These methods can, in turn, lead to more interpretable models and better outof-sample prediction. Most work in the area of regularization has not explicitly included any known relationships between the predictor variables in the analysis. Many penalty functions in penalized maximum likelihood methods are expressed as a sum of terms for each predictor variable whereas priors in Bayesian analyses are constructed as products of terms for each regression coefficient (implying a priori independence between the regression coefficients). However, in many data sets, there are known relationships between the predictor variables which we wish to include in the analysis. For example, suppose that we use a linear model with main effects and two-way interaction terms. One commonly used heuristic in variable selection is that a two-way interaction term can only be included if both main effects terms are included. This can be interpreted as a belief that the absolute size of the two-way interaction coefficient is related to the two associated main effect coefficients (if either main effect has a small absolute coefficient then the interaction term must also have a small absolute coefficient). Of course, other assumptions could be made but it is clear that it is often natural to assume a relationship between the usefulness of the interaction term and the usefulness of the main effects. Several approaches have been developed in the literature to allow these relationships to be included in the analysis. 
Perhaps, the most popular is the group lasso (Yuan and Lin, 2006) where predictor variables are divided into groups and a penalty function is developed for which the penalized maximum likelihood estimates of the regression coefficients in a group are either all zero or all non-zero. Therefore, variable selection occurs at the level of the groups rather than the individual predictor variables. A similar approach for non-overlapping groups was developed by Jacob et al. (2009). The group lasso approach was extended to linear models with two-way interactions by Yuan et al. (2007) and to more complicated problems by Yuan et al. (2009). The Bayesian approach to regularization defines a prior distribution for the regression coefficients which encourages a proportion of those coefficients to be shrunk to a value close to zero (or exactly zero). Many priors have been proposed including: the double exponential (Park and Casella, 2008;Hans, 2009) (leading to the Bayesian Lasso), the normal-gamma (Caron and Doucet, 2008;Griffin and Brown, 2010) the Bayesian elastic net (Hans, 2011), the horseshoe prior (Carvalho et al., 2010), Normal-Exponential-Gamma (NEG) (Griffin and Brown, 2011),the generalized Beta mixtures (Armagan et al., 2011), the generalized t or double Pareto prior (Armagan et al., 2013) and the exponential power prior (Polson et al., 2013). All these priors are scale mixtures of normal distributions and these papers have shown how the shape of the mixing distribution is critical for successful modelling of sparse effects. Some work has used maximum a posteriori (MAP) estimates whilst others have looked at full posterior inference. The former have usually been proposed for problems with large numbers of regressors and used the EM algorithm for inference. Full posterior inference is usually implemented using Markov chain Monte Carlo methods and this approach will be followed in this paper. Prior distributions which include known relationships between variables have also been considered. A Bayesian version of the group Lasso was developed by Kyung et al. (2010) and Raman et al. (2009). A rather different approach is taken by Griffin and Brown (2012) who defined priors which allow correlation between the effects rather than the absolute effect sizes (as implied by the group Lasso). It has also been applied to unifying and robustifying ridge and g-priors for regression in Griffin and Brown (2013). The variable selection problem in the linear model with interactions has been approached by Chipman et al. (1997) using "spike-and-slab" prior distributions. More recently, structured priors have been proposed in biological application, e.g. Stingo et al. (2011) and Li and Zhang (2010). In this paper, a method for building prior distributions for structured regression problems (where relationships between the predictor variables can be assumed) is developed. The prior involves organising the regression coefficients in a hierarchical structure where the regression coefficients at one level depend on a subset of the effect sizes at a higher level. This is a fairly general structure which can include both overlapping and non-overlapping groups in a simple way, whilst also expressing much more complicated structures. The paper is organized as follows. Section 2 explains the use of normal-gamma and normal gamma-gamma (or generalized beta mixture) priors for sparse regression problems. 
Section 3 develops hierarchical structured regression models using a hierarchical prior with motivating examples in section 3.1 of the linear model with interaction terms and the generalized additive model (GAM). The general construction is given in section 3.2 and its use in specific statistical models in section 3.3. Section 4 briefly discusses computational strategies for models using these priors. Section 5 includes applications of the models introduced in section 3.1 to data sets. Section 6 gives a brief discussion. Shrinkage characterisation and proofs of theorems are given in the Appendices A and B respectively. Continuous priors for sparse regression The normal linear regression model for an (n × 1)-dimensional vector of responses y and an (n × p)-dimensional design matrix X is where ∼ N(0, σ 2 I n ), 1 is a (n × 1)-dimensional vector of 1's, α is an intercept and β is a (p × 1)-dimensional vector of regression coefficients. The prior for α and σ −2 is chosen to be the scale-invariant choice p(α, σ 2 ) ∝ σ −2 and we will concentrate on the choice of prior for the regression coefficients β, which will be assumed independent of α and σ 2 , in the rest of the paper. It is assumed that the variables have been measured on comparable scales (or scaled to have comparable scales). A commonly chosen prior for regression coefficients is the normal distribution, which is conjugate for a linear regression model with normal errors. It is well-known that this prior is not suitable for problems where the regression coefficients are thought to be sparse (that is a proportion of the regression coefficients are zero or very close to zero). Scale mixtures of normals are typically used as priors for these problems (Polson and Scott, 2011) in which the prior density can be expressed as where G is a distribution function, whose density is g (if it exists). The "spikeand-slab" prior (Mitchell and Beauchamp, 1988) and the stochastic search variable selection prior (George and McCulloch, 1993) fit into this structure with G chosen to be a discrete mixing distribution with two possible values. These priors have been extensively applied in Bayesian analyses. Alternatively, G can be chosen to be absolutely continuous. The conditional variance Ψ i has a simple interpretation as "importance" or relevance (Tipping, 2000;Bishop and Tipping, 2000) of a variable in the regression (larger scales are associated with larger absolute values of coefficients and so imply more importance) . In this paper, we will look at generalizations of normal-gamma prior distribution (Caron and Doucet, 2008;Griffin and Brown, 2010) or a generalized beta mixture prior distribution (Armagan et al., 2011) for β i . The generalised beta mixture prior distribution, which we will refer to as the normal-gamma-gamma prior distribution, is expressed in hierarchical form: The prior mean of β j is 0 and the prior variance is V[β j ] = E(Ψ j ) = λd c−1 if c > 1. The hyperparameters have simple interpretations: d is a scale parameter, λ controls the behaviour of the distribution close to zero and c controls the tail behaviour of the distribution. The marginal density of β j is not available in closed form but the marginal distribution of Ψ j is a gamma-gamma distribution which has the density . This prior will be written Ψ j ∼ GG(λ, c, d); and corresponds to the invertedbeta-2 distribution of Raiffa and Schlaifer (1961, section 7.4.2). 
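To make the hierarchy concrete, here is a small Monte Carlo sketch of drawing coefficients from the normal-gamma-gamma prior, using a rate parameterisation consistent with the stated prior variance λd/(c − 1); the hyperparameter values are illustrative, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ngg(n, lam, c, d):
    """Draw coefficients from the normal-gamma-gamma (generalized beta mixture) prior:
    gamma_j ~ Ga(c, rate=d), Psi_j | gamma_j ~ Ga(lam, rate=gamma_j), beta_j ~ N(0, Psi_j)."""
    gamma = rng.gamma(shape=c, scale=1.0 / d, size=n)
    psi = rng.gamma(shape=lam, scale=1.0 / gamma)
    return rng.normal(0.0, np.sqrt(psi))

beta = sample_ngg(200_000, lam=0.5, c=3.0, d=1.0)
print("theoretical prior variance lam*d/(c-1):", 0.5 * 1.0 / (3.0 - 1.0))
print("Monte Carlo prior variance:          ", round(beta.var(), 3))
```

Returning to the gamma-gamma mixing distribution of Raiffa and Schlaifer (1961):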
The authors showed that the monotone transformation Ψ j +d Ψ j has a beta distribution with parameters λ and c implying that the median is d if λ = c. This is a useful characterisation if the mean does not exist which happens if c ≤ 1. In particular, this is true for the horseshoe prior which occurs if λ = c = 1/2. Several of the absolutely continuous priors for regression coefficients described in Section 1 can be written as special cases including the NEG prior which arises when λ = 1 and the normal-gamma prior distribution which arises if c/d = µ as c → ∞. Shrinkage results for regression models which express the posterior expectation and variance in terms of the least squares estimate of β and the variance of its sampling distribution (for n > p) have been derived by several authors including Carvalho et al. (2010), Griffin and Brown (2010) and Polson and Scott (2012). We now show generally for scale mixtures of normals that the amount of shrinkage in a simple linear regression only depends on the t statistic for the regression coefficients (rather than the least squares estimates and the standard error separately). The proof and some illustrative graphs of shrinkage are given in Appendix A. In the case of the scale mixture of normals considered in this paper, we have π β (β) = N(β|0, Ψ)g(Ψ) dΨ and so π τ (τ ) = N(τ |0, Ψ )g Ψ (Ψ ) dΨ where g Ψ (Ψ ) = SE 2 g(SE 2 Ψ). Returning to our specific cases, the normal-gamma prior leads to Ψ ∼ Ga(λ, γ i /SE 2 ) and Ψ ∼ Ga(λ, γ i ); with γ i ∼ Ga(c, d SE 2 ) for the normal-gamma-gamma prior. Therefore, the shrinkage induced by the posterior expectation (relative to the least squares estimate) can be expressed in terms of a scale defined relative to the standard error. This simplifies the presentation of the shrinkage function for different choices of prior as they can be presented relative to a standard scale. The effect of changing the standard error or the scale of the prior distribution is just to re-scale the x-axis of the graphs. The sparsity of a set of regression coefficients can be considered to be the proportion which have values close to zero. The graphs in Figure 13 in Appendix A suggest that smaller values of λ will increasingly favour sparser sets of regression coefficients since small values of t are likely to be shrunk very close to zero. This is intuitively reasonable since this parameter controls the shape of the distribution of Ψ i at small values for both priors, gamma and gamma-gamma. Consequently, we define the sparsity shape parameter for a prior distribution in terms of the prior density of Ψ i as where p(Ψ i ) is the prior density of Ψ i . This will be simply λ in the case of both the normal-gamma and normal-gamma-gamma prior distributions and indicates the shape of the prior distribution of Ψ i close to zero. The use of the supremum or least upper bound leads to clearer results in some special cases discussed in section 3.2. Motivating Examples Before developing our general hierarchical shrinkage results we first set the context in terms of linear models with interactions and GAMs. These illustrate the need for priors which can express relationships between regression coefficients with different levels of sparsity for some regression coefficients. Linear models with interaction terms Variable selection and regularization methods for linear models with interactions have received some attention in the literature. The model assumes that response y i observed with covariates X i1 , . . . 
, X ip can be expressed as where i ∼ N(0, σ 2 ). Many authors have argued for the principle of marginality or the idea of effect heredity (Chipman et al., 1997). These approaches assume that the inclusion of an interaction should be contingent on the inclusion of main effects. Chipman et al. (1997) introduced two forms of the heredity principle. The first is strong heredity which states that an interaction can only be included if both main effects are included. The second is weak heredity which states that an interaction can be included if at least one main effects is included. In models with higher order interactions, there is clearly potential for many different rules for including an interaction which would depend on the inclusion or exclusion of main effects and lower order interactions. Chipman et al. (1997) described a "spike-and-slab"-based approach to this problem which allows both strong and weak heredity. An extension of the LARS algorithm (Efron et al., 2004) to include these principles is described in Yuan et al. (2007), which is generalized to other regression problems in Yuan et al. (2009). The use of strong or weak heredity suggests beliefs which are inconsistent with an assumption of prior independence between the regression coefficients. It is also clear that, a priori, the scale of the interaction coefficient should depend on the magnitude, but not the sign, of the main effect coefficients with the coefficients of the interactions being sparser than the coefficients of the main effects. Generalized additive models The GAM (Hastie and Tibshirani, 1993) is a non-linear regression model which represents the mean of the response as a linear combination of potentially nonlinear functions of each variable so that Reviews of Bayesian analysis of these models are given by Kohn et al. (2001) and Denison et al. (2002). A common approach assumes that each non-linear function can be represented as a linear combination of basis functions so that where g(x, τ j1 ), . . . , g(x, τ jK ) are a set of basis functions with knot points τ j1 , . . . , τ jK . This leads to a linear model for the responses The set of knot points is often chosen to be relatively large and many γ jk 's are set to zero to avoid over-fitting. In a Bayesian framework, this is usually approached as a variable selection problem and so we effectively have p different variable selection problems (one for each variable). We will refer to this as selection at the basis level. There is also potentially the more standard variable selection problem of choosing a subset of the variables which are useful for predicting the response. The effect of the j-th variable is removed from the model if θ j1 , . . . , θ jq and γ j1 , . . . , γ jK are all set to zero. We refer to this as selection at the variable level. In this model, prior independence between the coefficients for the j-th variable (θ j1 , . . . , θ jq ) and (γ j1 , . . . , γ jK ) seems unreasonable and dependence in size (rather than the sign) of these coefficients will be reasonable in many problems. Typically, we would like different amounts of sparsity at the basis level and the variable level which suggests a prior with at least two sparsity parameters. General construction The examples in section 3.1 illustrate the need for priors which allow dependence in the size of regression coefficients but not their sign with hyperparameters that control the amount of sparsity implied by the prior for different regression coefficients. 
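Before turning to the general construction, the GAM basis expansion described in section 3.1.2 can be made concrete with a short sketch; the piecewise-linear basis g(x, τ) = (x − τ)_+ with evenly spaced knots is the choice used later in the examples, and the sketch is illustrative only:

```python
import numpy as np

def pw_linear_basis(x, n_knots=10):
    """Design matrix [x, (x - tau_1)_+, ..., (x - tau_K)_+] for one variable
    rescaled to [0, 1], with evenly spaced knots tau_k = (k-1)/(K-1)."""
    taus = np.linspace(0.0, 1.0, n_knots)
    linear = x[:, None]                                        # theta-level (polynomial) term
    splines = np.clip(x[:, None] - taus[None, :], 0.0, None)   # gamma-level basis terms
    return np.hstack([linear, splines])

x = np.random.default_rng(1).uniform(0, 1, size=200)
X = pw_linear_basis(x, n_knots=10)
print(X.shape)  # (200, 11): one linear column plus 10 spline columns
```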
The Bayesian group lasso (Kyung et al., 2010;Raman et al., 2009) is one example of a prior which allows dependence between the size of regression coefficients but no correlation in the signs. It is assumed that the regression coefficients are divided into disjoint groups This induces correlation in the conditional variances of the regression coefficients, Ψ i D (i) jj for j = 1, . . . , p i , but not necessarily in the regression coefficients (the correlation between b ij and b ik will be zero if D The group lasso prior is a simple way of building dependence between regression coefficients if they can be divided into groups. We consider a more general structure for the prior of the regression coefficients, β = (β 1 , . . . , β p ), in (1). We assume that the elements of β are independent conditional on Ψ = (Ψ 1 , . . . , Ψ p ) and β j ∼ N(0, Ψ j ), j = 1, . . . , p. The parameters Ψ j is the conditional variance of β j and smaller values of Ψ j imply typically smaller values of |β j |. Building hierarchical priors for Ψ allows the construction of a prior with correlated Ψ but not β. This form of dependence is important. The scale Ψ j can be interpreted as the importance of the j-th variable in the regression and so correlation Ψ j and Ψ k implies a relationship between the importance of the j-th and k-th variables. Lack of correlation between the regression coefficient would imply, for example, no correlation in the sign of regression coefficients, which is a natural assumption in many regression problems. The construction could be extended to a model where the regression coefficients are correlated by assuming that β are dependent conditional on Ψ. We assume that the regression coefficients can be arranged in levels. Let there be L levels and β (l) be the (p l × 1)-dimensional vector of regression coefficients in the l-th level. The regression coefficients at a particular level will have the same sparsity a priori and their scales will usually depend on scales of regression coefficients in lower levels. For example, in the linear model with two-way interactions described in section 3.1.1, one level would include the main effects and the other level would include the interactions. As we have already discussed, it seems natural to allow the scale of a two-way interaction to depend on the scale of the two associated main effects in the prior and and that the interactions are a priori sparser than the main effect. Our general prior assumes that j are given independent prior distributions with mean 1 and s (l) j is the sparsity shape parameter of η j d which mimics the normal-gamma prior distribution where the sparsity shape parameter is the shape parameter of the gamma distributions and d can be interpreted as a scale parameter. The function f jl will usually be a simple function using combinations of additions and multiplications to allow easy calculation of its expectation and clear understanding of the sparsity. Products have the useful property of being small if one element in the product is small and sum have the useful property of being small if all elements in the sum are small. The Bayesian group lasso arises from taking a single level, setting Ψ j if i and j are in the same group and choosing η (l) j to have a gamma distribution. A sensible choice of sparsity is important for good estimation of the regression coefficient. Therefore, it is important to consider the sparsity shape parameters of the distributions of the regression coefficients induced by this prior. 
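As a concrete toy instance of this construction (anticipating the strong-heredity prior of section 3.3.1), the following sketch builds the conditional variances for main effects and two-way interactions; the gamma mixing distributions with mean 1 and the specific hyperparameter values are illustrative assumptions, not the paper's prescription:

```python
import numpy as np

rng = np.random.default_rng(4)

def strong_heredity_scales(q, lam1=1.0, lam2=0.3, d=1.0):
    """Two-level scale construction: the interaction variance is proportional to
    eta_jk * eta_j * eta_k, so it is small whenever either main-effect scale is small
    (strong heredity). The eta's are given mean-1 gamma priors for illustration."""
    eta_main = rng.gamma(lam1, 1.0 / lam1, size=q)   # E[eta_j] = 1
    psi_main = d * eta_main                          # level-1 (main effect) variances
    psi_int = np.zeros((q, q))
    for j in range(q):
        for k in range(j):
            eta_jk = rng.gamma(lam2, 1.0 / lam2)     # E[eta_jk] = 1, sparser level
            psi_int[j, k] = d * eta_jk * eta_main[j] * eta_main[k]
    return psi_main, psi_int

psi_main, psi_int = strong_heredity_scales(q=5)
# A weak-heredity variant would replace the product eta_main[j]*eta_main[k]
# by a sum, which is small only if both main-effect scales are small.
```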
The sparsity within the l-th level is controlled by the sparsity shape parameter of the marginal distribution of Ψ (l) j . It is also interesting to consider the sparsity shape parameter of the distribution of Ψ (l) j conditional on Ψ (1) , . . . , Ψ (l−1) . We refer to the sparsity shape parameter of the marginal distribution of Ψ (l) j as the marginal sparsity shape parameter and the sparsity shape parameter of the conditional distribution of Ψ (l) j given Ψ (1) , . . . , Ψ (l−1) as the conditional sparsity shape parameter. We similarly distinguish between the shrinkage induced by the marginal and conditional distributions. The conditional sparsity shape parameter and shrinkage are more easily understood than the marginal sparsity shape parameter and shrinkage. The conditional sparsity shape parameter is given by the sparsity shape parameter of η (l) j and the conditional shrinkage has scale of d . Therefore, smaller values of f jl Ψ (1) , . . . , Ψ (l−1) lead to larger amounts of shrinkage at all values of the t-statistic in the characterisation of Appendix A. To characterise the marginal sparsity shape parameter, we will consider functions, f jl Ψ (1) , . . . , Ψ (l−1) , formed through products or sums. An interesting special case is the product of two gamma random variables Ψ = η 1 η 2 for which the density has an analytic expression, where K ν (·) is the modified Bessel function of the third kind (Abramowitz and Stegun, 1964, pg. 374). The distribution is called the K-distribution (Jakeman and Pusey, 1978) in several areas of physics. Using a small value approximation (Abramowitz and Stegun, 1964, eqn 9.6.9), this density at a value of Ψ near zero is approximately proportional to and so the sparsity shape parameter is min{λ 1 , λ 2 } which is in agreement with Theorem 1. Theorem 1 can be extended to the gamma-gamma case giving: Therefore, the shape close to zero of the products of either a normal-gamma or normal-gamma-gamma distribution is controlled by the shape parameters λ 1 , . . . , λ K rather than the other parameters. The previous results relate to the shape of the prior density for Ψ i close to zero when it is defined through products or sums. The appropriateness of the marginal sparsity shape parameter can be checked by comparing the shrinkage profiles for a product or a sum of normal-gamma (or normal-gammagamma) distributed random variables and for a single normal-gamma (or normal-gamma-gamma) distributed random variable with the marginal sparsity shape parameter of the product or sum. If the concept of marginal sparsity is useful then we would expect the shrinkage profiles to be similar. Figure 1 and Figure 2 show the shrinkage curves for different choices of products of two and three normal-gamma distributions respectively with d = 1/SE 2 . The marginal sparsity shape parameter is λ 1 and the shrinkage curve for a single normal-gamma prior with sparsity shape parameter of λ 1 is also shown. The shape of the shrinkage curves are very similar for different choices of λ 2 with shrinkage decreasing slightly as λ 2 becomes larger. The effect is more pronounced if λ 1 is smaller. The results with the product of three normal-gamma distributions are similar. This suggests that the sparsity shape parameter (although fairly crude) does give comparable forms of shrinkage for different values of t. Figures 3 and 4 show similar graphs for the NGG case with different values of c which show results that are very similar to the normal-gamma case. 
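The claim that the sparsity shape parameter of a product of gamma variables is min{λ_i} can also be checked numerically: the empirical CDF near zero of Ga(0.5, 1) × Ga(3, 1), which behaves like ε^0.5, should match that of a single Ga(0.5, 1) draw. A rough Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
prod = rng.gamma(0.5, 1.0, n) * rng.gamma(3.0, 1.0, n)   # product of Ga(0.5,1) and Ga(3,1)
single = rng.gamma(0.5, 1.0, n)                          # reference with shape 0.5

eps = np.array([1e-4, 1e-3, 1e-2])
for name, x in [("product", prod), ("single Ga(0.5)", single)]:
    cdf = np.array([(x < e).mean() for e in eps])
    slope = np.polyfit(np.log(eps), np.log(cdf), 1)[0]   # local power-law exponent of the CDF
    print(f"{name}: local CDF slope ~ {slope:.2f} (theory: 0.5)")
```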
Linear model with interaction terms In our framework, we interpret strong heredity as a prior belief that β jk will be strongly shrunk to zero if either β j or β k are strongly shrunk to zero. We interpret weak heredity as a prior belief that β jk will be strongly shrunk to zero if both β j and β k are strongly shrunk to zero. These prior beliefs can be represented using a hierarchical sparsity prior. First, we define two levels: the main effect level and the interaction level. The first level (the main effect level) has p 1 = q terms listed as β 1 , . . . , β q and the second level (the interaction level) has p 2 = q(q − 1)/2 terms listed as β jk for k = 1, . . . , j − 1, j = 1, . . . , p. In the case of strong heredity, we use the prior The prior variance of β jk is small if either η k is small (and hence also the prior variances of β j and β k ). Therefore, an interaction term β jk will tend to be small (since its variance is small) if either β j is small (which implies that its prior variance is small) or β k is small (which implies that its prior variance is small). In the case of weak heredity, we use the prior The prior variance of β jk is small if the prior variances of both β j and β k are small. Therefore, the interaction terms will tend to be small if and only if both β j and β k are small (using similar reasoning to the strong heredity case). In general we would assume that λ 2 < λ 1 since the interactions will tend to be sparser than the main effects. This implies that the marginal and conditional sparsity shape parameters of the main effects are λ 1 and the t the marginal and conditional sparsity shape parameters of the interactions are λ 2 GAM models In section 3.1.2, we discussed how inference in the GAM model could be seen as a two-level variable selection problem (at the basis level and at the variable level). This can be approached using a hierarchical sparsity prior by defining the first level (the variable level) by p 1 = pq terms θ jk for j = 1, . . . , p, k = 1, . . . , q and the second level (the basis level) by p 2 = pK terms γ jk for j = 1, . . . , p and k = 1, . . . , K. We propose the prior The prior implies that all polynomial term coefficients have the same conditional prior variance. A small value of the parameter η j implies that the j-th variable is unimportant and will effect the shrinkage of both the polynomial coefficients θ j1 , . . . , θ jq and basis function coefficients γ j1 , . . . , γ jK leading to shrinkage at the variable level. The variable selection problem at the basis level is achieved through the different values of η jk which allow some basis function coefficients to be set very close to zero. The results in section 3 suggest that the marginal sparsity shape parameters of the basis functions are min{λ 1 , λ 2 }, and the conditional sparsity shape parameters of the basis functions are λ 2 . The marginal and conditional sparsity shape parameters of the variables are λ 1 . GAM models with interactions A more elaborate form of GAM allows for functions, f jl (·, ·), modelling nonlinear interaction effects. In this case, the GAM model is extended to where, again, i ∼ N(0, σ 2 ). A hierarchical sparsity prior can be constructed for this problem by combining the prior for a GAM with only main effects and the prior for the linear model with interactions. We define a first level (the variable level) with p 1 = pq M terms θ (M ) jk for j = 1, . . . , p, k = 1, . . . 
, q_M, and a second level (the interaction level) with p_2 = p(p − 1)/2 · q_I² terms θ^(I)_jklm for j = 1, . . . , p, k = 1, . . . , j − 1, l = 1, . . . , q_I, m = 1, . . . , q_I. The third and fourth levels contain the basis functions for the main effects and the interactions respectively. The third level has p_3 = pK terms γ^(M)_jk for j = 1, . . . , p, k = 1, . . . , K and the fourth level has p_4 = p(p − 1)/2 · K² terms γ^(I)_jklm for j = 1, . . . , p, k = 1, . . . , j − 1, l = 1, . . . , K, m = 1, . . . , K. The proposed prior, with strong heredity, takes η^(2)_jk ∼ GG(λ_2, c, d) and builds the interaction scales from the product η^(2)_jk η^(1)_j η^(1)_k, in direct analogy with section 3.3.1. If η^(1)_j is small then both the main effect coefficients and the interaction coefficients involving the j-th variable will be small. This allows variable selection at the main effect and interaction term levels. The prior also links the priors for the interactions and main effects (and, consequently, their associated basis function coefficients) since the interaction scale η^(2)_jk η^(1)_j η^(1)_k will be small if either η^(1)_j or η^(1)_k is small. The marginal sparsities are λ_1 for level 1, min{λ_1, λ_2} for level 2, min{λ_1, λ_3} for level 3 and min{λ_1, λ_2, λ_4} for level 4. The conditional sparsities are λ_2 for level 2, λ_3 for level 3 and λ_4 for level 4.

Computational strategy

Posterior inference with these priors can be made using Markov chain Monte Carlo methods. In this section, we will describe the general strategy for inference rather than describe algorithms for specific models. We will assume the general model in which X^(l) is an (n × p_l)-dimensional matrix whose columns are given by the variables in the l-th level and the errors are i.i.d. Typically, the distribution of η^(l)_j has parameters which are denoted φ^(l). The Gibbs sampler will be used to sample from the posterior distribution of the parameters (α, β, σ, Ψ, d, φ) where β = {β^(l) | l = 1, . . . , L}, Ψ = {Ψ^(l) | l = 1, . . . , L} and φ = {φ^(l) | l = 1, . . . , L}. The full conditional distributions of (α, β) and σ² follow from standard results for Bayesian linear regression models. The parameters Ψ, d and φ are updated one element at a time by adaptive Metropolis-Hastings random walk steps using a variation on the algorithm proposed by Atchadé and Rosenthal (2005). The output of adaptive Metropolis-Hastings algorithms is not Markovian (since the proposal distribution is allowed to depend on the previous values of the Markov chain) and so standard Markov chain theory cannot be used to show that the resulting chain is ergodic. Relatively simple conditions for the ergodicity of adaptive Metropolis-Hastings algorithms are given by Roberts and Rosenthal (2007). Our algorithms meet these conditions with the additional restriction that Ψ, d and φ are bounded above (at a very large value). Suppose that we wish to update φ^(l) at iteration i (the same idea will also be used to update the elements of Ψ and d). A new value φ^(l)' is proposed by a random walk whose increment variance σ²_(i)(φ^(l)) is tuned adaptively; the notation σ²_(i)(φ^(l)) makes the dependence on the previous values of the chain explicit, and the induced transition density of the proposal is denoted q_{σ²_(i)(φ^(l))}(φ^(l), φ^(l)'). The value φ^(l)' is accepted or rejected using the standard Metropolis-Hastings acceptance probability. The variance of the increment is updated by a stochastic approximation step of the form

$$\log \sigma^2_{(i+1)} = \log \sigma^2_{(i)} + i^{-a}\,(\alpha_i - \tau),$$

where α_i is the acceptance probability at iteration i and 1/2 < a ≤ 1. This algorithm leads to an average acceptance rate which converges to τ. We choose a = 0.55 and τ = 0.3 (following the suggestion of Roberts and Rosenthal (2009)) in our examples. The posterior distribution can be highly multi-modal and so it is necessary to use parallel tempering to improve the mixing. An effective, adaptive implementation is described by Miasojedow et al. (2013).
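A sketch of one adaptive random-walk update in the spirit of the Atchadé and Rosenthal (2005) scheme described above; the log-scale proposal for positive parameters is an illustrative choice, not necessarily the authors' exact parameterisation:

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_rw_step(x, log_post, log_sigma2, i, a=0.55, tau=0.3):
    """One adaptive random-walk Metropolis step for a positive scalar parameter.
    The log proposal variance is nudged by i**(-a) * (accept_prob - tau) so the
    average acceptance rate tends to tau. `log_post` is the log posterior density."""
    prop = x * np.exp(np.sqrt(np.exp(log_sigma2)) * rng.normal())  # RW on the log scale
    # MH log acceptance ratio; the log-scale RW contributes a Jacobian term prop/x
    log_alpha = min(0.0, log_post(prop) - log_post(x) + np.log(prop) - np.log(x))
    accept = np.log(rng.uniform()) < log_alpha
    log_sigma2 += i ** (-a) * (np.exp(log_alpha) - tau)            # adapt the step size
    return (prop if accept else x), log_sigma2

# Example usage: target a Ga(2, 1) density for a positive scalar (illustrative target)
log_post = lambda v: np.log(v) - v if v > 0 else -np.inf
x, ls2 = 1.0, 0.0
for i in range(1, 5001):
    x, ls2 = adaptive_rw_step(x, log_post, ls2, i)
```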
Example 1: Blood glucose data

A blood glucose data set has been studied by Hamada and Wu (1992) amongst others. Yuan et al. (2007) analysed these data using their extension of the LARS algorithm which includes both strong and weak heredity. The data have one two-level factor and seven three-level factors. The experimental design and data are given in Yuan et al. (2007). We followed their analysis by fitting a linear model with interactions and by including the three-level factors as linear and quadratic effects using orthogonal polynomials. The model in section 3.1.1 was extended to allow for quadratic effects, and the prior proposed in section 3.3.1 was extended with δ_j ∼ N(0, λ_1 d η^(1)_j) for the quadratic-effect coefficients. The parameter c was chosen to be 2, giving a heavy tail to the NGG distributions (but also a finite variance). The priors for the hyperparameters of the model were as follows. The sparsity parameter for the main effects was given the prior λ_1 ∼ Ex(1), which centred the prior over a heavy-tailed version of the Bayesian lasso. We defined λ_2 = rλ_1 and assumed that the interactions were sparser than the main effects, which implied that r < 1, and so we chose r ∼ Be(2, 6), whose mean E[r] = 1/4 suggests that the interactions will be much sparser than the main effects. The scale parameter, d, was given the prior p(d) ∝ (1 + d)^{−2}, which implied that E[d] = 1 with a heavy tail.

The marginal posterior distributions of the regression coefficients using the strong heredity prior are presented in Figure 5. The most important terms were the interactions between C and H, which had posterior medians well away from zero and some 95% credibility intervals which did not include zero. In particular, the interactions of the linear and quadratic effects of C with the quadratic effect of H were the most important terms. The interaction AH also showed some signs of being important since, although the posterior median was zero for both regression coefficients, the 95% posterior credibility intervals placed substantial mass on positive and negative values for the linear and quadratic effects respectively. The linear and quadratic effects of C also seemed important, with posterior medians away from zero and support for a wide range of values. All other effects had posterior medians which were very close to zero with a 95% credibility interval concentrated around 0. The marginal posterior distributions of the Ψ's are shown in Figure 6. The variable C had the largest posterior median main effect followed by A and H. In terms of the interactions, it was clear that AH and CH had the largest upper points of the 95% credible intervals, which illustrated the importance of these interactions in the model. All these results were consistent with inference about the regression coefficients but gave a clearer picture of the importance of different variables. The prior with weak heredity was also fitted and the results showed a very similar picture to those using the strong heredity prior. The Ψ's for the main effects of C and H were estimated to be slightly smaller than under strong heredity and the other main effects were estimated to be slightly larger. This reflected the importance of the interaction CH in the model. Under strong heredity, there was stronger evidence of the importance of the main effects of C and H. The Ψ's for the importance of the interactions AH and CH were estimated to be slightly smaller.
The inferences about the sparsity shape parameters λ_1 and λ_2 and the scale parameter d are shown in Table 1. The parameter λ_1 had a posterior median of 0.48, which indicated that some effects were close to zero. The parameter λ_2 had a much smaller posterior median, which indicated that the interactions were much sparser than the main effects. These results were consistent with the inferences about the regression coefficients.

Example 2: Prostate data

We considered all variables to be continuous apart from svi, which is binary (it should be noted that Gleason score is ordinal and has 4 observed levels (scores of 6, 7, 8 and 9) in the data). Previous modelling had often included the continuous variables as linear effects. An exception is Lai et al. (2012), who considered flexibly modelling their effects. We followed this approach using the GAM model in section 3.1.2 with the prior described in section 3.3.2. All continuous variables were normalized to have a minimum of 0 and a maximum of 1. A piecewise-linear spline basis, g(x, τ_k) = (x − τ_k)_+, was assumed for each variable, where (x)_+ = max{0, x} and τ_k = (k − 1)/(K − 1) for k = 1, . . . , K. In this example, we used K = 60. The priors for the hyperparameters were: λ_1 ∼ Ga(1, 1), λ_2 ∼ Ga(1, 10), and p(d) ∝ (1 + d)^{−2}. These priors were also used in Example 1 and the justification is the same. The prior for λ_2 implied that E[λ_2] = 0.1, which suggested much greater sparsity in the basis function coefficients (since only a few are typically needed to model the functional effect).

The estimated effect of each variable, divided by x, can be interpreted as the variable-dependent linear regression effect for the j-th variable. The effect of lv was clearly important, with a posterior median increasing from 0.88 to 2.91 over the range of the data. The effect of lw also seemed important and relatively constant over the range of the data. The other variables were clearly less important, with posterior medians which were constant and close to zero and narrower 95% credible intervals than the other variables. The effect of svi had a posterior median of 0.58 with a 95% credible interval of (0.08, 1.06), which indicated the importance of this variable for the regression model. Posterior summaries of the hyperparameters are shown in Table 2, and a summary of the posterior distributions of the variable-specific λ_2 is shown in Figure 9. The posterior median of λ_1 is close to 1, indicating that only some of the variables are important but that there is not a high degree of sparsity. The parameter λ_2 indicates the sparsity in the coefficients of the spline basis for each variable. A smaller value of λ_2 indicates that fewer splines are needed to model the effect of that variable.

Example 3: Computer data

Data on the characteristics and performance of 209 CPUs were considered by Ein-Dor and Feldmesser (1987) and subsequently analysed by Gustafson (2000) using Bayesian non-linear regression techniques. The response is the performance of the CPU. In common with Gustafson (2000), we consider 5 predictors: A, the machine cycle time (in nanoseconds); B, the average main memory size (in kilobytes); C, the cache memory size (in kilobytes); D, the minimum number of input channels; and E, the maximum number of input channels. In a similar spirit to Gustafson (2000), we modelled the data using a GAM with interactions as described in section 3.3.3, together with the associated prior structure. Gustafson (2000) used a square-root transformation of the predictors since these are highly skewed. In principle, the distribution of the variables should not matter in non-linear regression modelling.
However, the knots are evenly spaced, and so it is useful to have the data relatively evenly spread across the range of the knots. We found that a log transformation of the response led to better-behaved residuals than the untransformed response, and we also transformed the variables by f(x) = log(1 + x). All transformed variables were subsequently rescaled to have a minimum of 0 and a maximum of 1. We assumed a non-linear form for the effect of each variable and for the interaction effects. We used the model in (3) with q_M = 1, q_I = 0 and g_j(x, τ) = (x − τ)_+. The number of knots was K = 10. There were 5 main effects and 10 interactions, which leads to 1055 regression parameters in the model. The priors for the hyperparameters of the model were as follows. The sparsity parameters for the main effects and interaction terms were chosen as in Example 1, with λ_1 ∼ Ex(1) and λ_2 = rλ_1 where r ∼ Be(2, 6), whose mean E[r] = 1/4 suggests that the interactions are much sparser than the main effects. The conditional sparsity shape parameters for the nonlinear terms were chosen to be λ_3 ∼ Ga(1, 10) and λ_4 ∼ Ga(1, 100), which indicated that nonlinear terms were less likely to be included in the interaction function than in the main effects function (reflecting the larger number of terms in the interaction function). The scale parameter, d, was given the prior p(d) ∝ (1 + d)^{−2}, which implied that E[d] = 1 but with a heavy tail.

The estimated main effects and interactions are shown in Figure 10. The effects of A, D and E were small, whereas B and C had increasing, nonlinear effects, with a largest effect of roughly 4 for B and roughly 2 for C. The interaction effects mostly had a posterior median of zero. The main exception was the interaction between B and C, which had a posterior median of -4 when both B and C are 1. This indicated that the effect of large values of B and C was over-estimated by the linear effects alone. Figure 11 shows the posteriors of the Ψ's for the main effects and interactions. These results were consistent with the estimated effects. The variables B and C had the largest posterior medians and upper points of the 95% credible intervals for the main effects. Similarly, the interaction between B and C had a larger posterior median and upper point of the 95% credible interval than the other interactions. The posterior median of λ_1 indicates that most effects are relatively important (although this is estimated with a wide 95% credible interval due to the small number of regressors). The posterior median of λ_2 indicates that the interactions are much sparser than the main effects. The sparsity parameters λ_3 and λ_4 indicate the amount by which the effects deviate from linearity. The posterior medians of λ_3 for B and C are much larger than for the other variables. This indicates a departure from linearity, which is confirmed by the estimated regression effects in Figure 10. The posterior distributions of λ_4 are fairly similar, indicating little difference in the level of departure from linearity for the interactions (the relatively small effect of the interactions leads to a small amount of information about these parameters).

Discussion

This paper describes a hierarchical approach to prior construction in sparse regression problems. We assume that variables can be divided into levels and that the relationships between the regression coefficients can be expressed hierarchically.
Discussion

This paper describes a hierarchical approach to prior construction in sparse regression problems. We assume that variables can be divided into levels and that the relationships between the regression coefficients can be expressed hierarchically. The framework allows control of both the conditional sparsity and the marginal sparsity of groups of regression coefficients at different levels of the prior. These priors have natural applications in problems with such structure, such as models with interactions and non-linear Bayesian regression models. These priors are able to find sparse estimates in situations where there are large numbers of parameters. We feel that these approaches will have many applications. For example, Kalli and Griffin (2012) use a simple, two-stage hierarchical prior in a regression model with time-varying regression coefficients. This allows control of both sparsity across variables (where all values in the series are shrunk to zero) and sparsity within the series for a particular variable.

Figure 13 shows the shrinkage curves, 1 − S(t), for the normal-gamma and normal-gamma-gamma prior distributions with different choices of λ (for the normal-gamma prior) and λ and c (for the normal-gamma-gamma prior). Since the prior mean of Ψ is fixed, the NGG curves will tend to the NG curve as c → ∞. The graphs show that the shape of the shrinkage curve for small values of t is largely controlled by the scale and the value of λ. Decreasing λ leads to larger amounts of shrinkage for small t, and the change from high to low levels of shrinkage happens more abruptly. The scale changes the position of the shrinkage curve, with a larger mean of Ψ_i leading to more shrinkage. It follows that E[β | β̂] = (1 − S(t)) β̂. The prior π_τ is just the prior for β induced through rescaling by a factor of the standard error.

Part (ii). In this case, Ψ ∼ Ga(Σ_{i=1}^{K} λ_i, 1) and so the sparsity shape parameter is Σ_{i=1}^{K} λ_i.

B.3 Proof of Theorem 2

Part (i). Suppose that λ_1 = min{λ_i}; the integral is then a constant, since we are integrating kernels of GG(λ_i − λ_1, λ_1 + c_i, 1) distributions and λ_i ≥ λ_1. Therefore, the sparsity parameter of the marginal distribution of Ψ_i is given by the simple form min{λ_i}.
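The Part (ii) step relies on the standard fact that a sum of independent Ga(λ_i, 1) random variables is Ga(Σ λ_i, 1); a quick Monte Carlo check in Python (the λ values here are illustrative):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lams = [0.3, 0.7, 1.5]  # illustrative lambda_i values

# Psi as the sum of independent Ga(lambda_i, 1) draws
psi = sum(rng.gamma(lam, 1.0, size=200_000) for lam in lams)

# Compare with a Ga(sum(lambda_i), 1) distribution via a KS test;
# the p-value should not be small, since the two laws coincide.
print(stats.kstest(psi, "gamma", args=(sum(lams),)).pvalue)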
The effects of lateral halogen substituents on the low-temperature cybotactic nematic phase in oxadiazole based bent-core liquid crystals We have previously demonstrated that the incorporation of lateral methyl groups on oxadiazole-based liquid crystals leads to relatively low-temperature cybotactic nematic phases which, in some cases, supercool to room temperature. We report here the synthesis and phase behaviour of related compounds that possess lateral halogen groups and in some cases, lateral methyl groups as well. Derivatives with three lateral substituents (one halogen and two methyl groups) in a specific pattern supercool in the nematic phase to room temperature. As was the case with the previously reported trimethylated derivatives, the low-temperature nematic phase is glassy in nature. Two of the new trisubstituted derivatives (with bromo and chloro groups) remain in the nematic phase upon subsequent heating until transitioning to the isotropic phase indicating that the low-temperature nematic phase may be more stable than that shown by the trimethylated analogue. Preliminary X-ray diffraction analysis confirms the presence of a tilted cybotactic nematic phase. In addition, the splitting observed in the wide-angle scattering feature is indicative of enhanced local biaxial packing. Introduction Since it was first proposed in 1970, [1] there has been a great deal of interest in the biaxial nematic (N b ) phase due to the potential for enhancements to display applications such as faster electro-optic switching and improved viewing angles. [2][3][4][5] Evidence from 2 H NMR spectroscopy, conoscopic microscopy, and X-ray diffraction (XRD) was originally thought to be proof for the N b phase in bent-core liquid crystals derived from a 1,3,4-oxadiazole bisphenol (ODBP) moiety. [6,7] However, more recent work (on oxadiazoles and other bent-core mesogens (BCM)) indicates that the unique behaviour observed may be field induced and due to the presence of cybotactic (smectic-like) clusters within the nematic phase (designated N CybC or N CybA ). [8][9][10][11][12][13][14][15][16][17][18][19][20] Further evidence of the cluster model has been found by examining the effects of an applied electric or magnetic field on the nematic phase in BCM. [21][22][23][24] Dramatic shifts in transition temperatures (up to 11 K) were observed as the strength of the field was increased. Such substantial shifts can be best accounted for by considering that the applied field is coupling to biaxial clusters rather than simply to individual molecules. As additional proof, smectic layers within a nematic phase of a BCM have been imaged using cryo-transmission electron microscopy. [25] The cluster model has also been used to explain some other interesting features of BCM, such as the fact that chiral domains are observed when the sample is viewed by polarising microscopy (POM), [26][27][28][29][30][31][32] and ferroelectric switching behaviour in the nematic phase of 1,2,4-oxadiazolebased liquid crystals. [33][34][35][36][37] However, some differences of opinion regarding the precise cause of the observed behaviour in the nematic phase remain. [38][39][40][41] Clearly, there continues to be much to learn about the fundamentals of how the bent-core molecules are interacting to establish supramolecular structures in the nematic phase and this is being investigated using a variety of techniques. 
[42][43][44][45][46][47][48][49] Moreover, several recent studies suggest that fast electro-optical switching about the minor director of a BCM in a field-induced N b phase is possible, so continued investigations into this unique liquid crystal phase hold technological promise. [50][51][52] One of the challenges in studying these types of compounds is that most of the derivatives that show the unique N Cyb phase do so at very high temperatures. In order to more fully comprehend details about the phase (and to be able to exploit possible practical applications), it is necessary to prepare derivatives that exhibit this phase closer to room temperature. Currently, there are only a handful of derivatives that show a cybotactic nematic phase at relatively low temperatures including some cyanoresorcinol based derivatives, [13] fluorenone-or thiadiazole-based derivatives with pendant alkoxy groups, [53][54][55][56][57][58][59][60] and low molar mass organosiloxane multipodes. [61,62] Some of these derivatives supercool in the nematic phase to room temperature or below. In these instances, the low-temperature phase tends to be very viscous or glass-like. Our approach to lowering transition temperatures has been to incorporate lateral methyl groups on the aromatic rings of the oxadiazole system. [63][64][65][66][67] This has led to considerably lower transition temperatures compared to the non-methylated analogues; derivatives with three lateral methyl groups supercool to room temperature in the nematic phase ( Figure 1) and XRD studies indicate the presence of cybotactic clusters. In the case of the compound shown below, OC4 2MePh(mono3MeODBP), the nematic phase persists at room temperature for nearly 24 hours before crystallisation occurs. In addition, at room temperature, the nematic phase was no longer fluid but glassy. This result is consistent with the phase behaviour observed in other bent-core systems that show supercooling of a nematic phase to low temperatures. The supercooling effect can be attributed to two factors which are both related to the presence of lateral substituents and prevent facile crystallisation: (1) restricted internal rotation allows for the presence of a variety of conformers and (2) rotation about the long axes of the molecules is slowed. Recently, we reported a novel splitting of the wide-angle XRD pattern in bent-core molecules. It represents the first XRD evidence of local biaxial order in this class of nematogens. This behaviour is only present in the trimethylated derivatives that supercool in the nematic phase to room temperature and is suggestive of enhanced biaxial ordering in the cybotactic clusters of such mesogens. [65,66] The effects of lateral polar groups (halogens, cyano, etc.) on BCM have also been investigated. [5,13,26,27,[68][69][70][71][72][73][74] This is of interest because such groups will exaggerate the electrostatic profile (molecular dipole), and computational studies indicate that increasing the transverse dipole might stabilise the biaxial nematic phase (this idea has been investigated experimentally as well). [75][76][77][78] It has been demonstrated that subtle changes in the nature of the lateral groups can alter both the stability and types of liquid crystalline phases observed. For example, in resorcinol based bent-core liquid crystals, the presence of a lateral cyano group at the central benzene ring leads to the formation of a nematic phase, while a derivative with a lateral methyl group in the same position shows no nematic phase. 
[13] Given these precedents, we were interested to learn how lateral halogen substituents would affect the nematic phase in oxadiazole-based liquid crystals. The steric effects of these groups can be compared using the van der Waals (VDW) volumes of each substituent: a fluoro group has a VDW volume of 6.20 cm³/mol, a chloro group 12.24 cm³/mol, and a bromo group 14.60 cm³/mol. [79] Therefore, the steric effects of the chloro and bromo groups on the phase behaviour should be comparable to that of the methyl group (which has a VDW volume of 13.67 cm³/mol), while the fluoro group should create a much weaker steric effect. Of course, the halogen substituents will alter the molecular dipoles, so both steric and electronic effects must be considered. In this paper, we will discuss the phase behaviour of oxadiazole-based liquid crystals that possess a lateral halogen substituent (F, Cl, or Br) on one of the inner benzene rings.

Synthesis

The target compounds were prepared as shown in Scheme 1 using methodology previously described. [16a] The appropriately substituted 4-hydroxybenzoic acid is reacted with 4-hydroxybenzhydrazide in the presence of a carbodiimide coupling agent (EDC) and HOBt to give compound 1. Ring closure with an excess of thionyl chloride yields the oxadiazole bisphenol core, 2. This synthetic intermediate can then be reacted with the appropriate benzoic acid using carbodiimide coupling to yield the target compound, 3. Nine derivatives have been prepared which differ only in the position of the lateral methyl groups (if any) and the identity of the halogen (in all cases the halogen is at the 3 position of one of the inner benzene rings). In the past, we have prepared derivatives that possess a very long terminal alkoxy group or a very short alkoxy group. Given that we were interested in the nematic phases of these compounds, we focused solely on short-chain derivatives. Therefore, all of the derivatives described in this paper have two butoxy end chains. Figure 2 shows a representative structure along with the short-hand nomenclature. The chloro and bromo derivatives are named in a similar fashion with the exception that the core is designated (mono3ClODBP) or (mono3BrODBP).

Phase behaviour

A table showing all transition temperatures and enthalpies on the initial heating run is provided in Supplemental Material Table S1. The phase behaviour (heating and cooling) of the derivatives with no methyl group (OC4 Ph(mono3XODBP)) and a methyl group at the 3 position of the outer benzene rings (OC4 3MePh(mono3XODBP)) is shown in Figure 3. For the compounds without a lateral methyl group, the fluoro derivative (OC4 Ph(mono3FODBP)) exhibits a wide nematic range of over 100°C with a clearing point of 280°C. This is approximately the same clearing temperature as observed for the originally reported (unsubstituted) oxadiazole derivative, but the fluorine analogue has a larger nematic range. The presence of the fluorine obviously causes very little steric disruption and seems to stabilise the nematic phase somewhat compared with the non-halogenated species, as the onset temperature in the former compound is lower by 40°C. The chloro derivative (OC4 Ph(mono3ClODBP)) has a slightly lower clearing temperature than the fluoro derivative, but it too has a large nematic range of about 120°C. The bromo derivative (OC4 Ph(mono3BrODBP)) has about the same nematic onset temperature as the fluoro derivative, but a slightly shorter nematic range.
The fluoro derivative shows only a small amount of supercooling in the nematic phase, but this effect is considerably larger for the bromo and chloro derivatives, with both crystallising below 100°C. For derivatives with a methyl group at the 3 position of both outer rings (OC4 3MePh(mono3XODBP)), the clearing points are reduced considerably compared to the non-methylated analogues, and the nematic ranges are reduced as well (Figure 3). The bromo derivative (OC4 3MePh(mono3BrODBP)) exhibits the shortest nematic range (18°C) and lowest clearing point (183°C). All three of these compounds supercool in the nematic phase with crystallisation occurring below 100°C. As expected, based on our studies with trimethylated analogues, the trisubstituted derivatives with methyl groups at the 2 position of the outer benzene rings (OC4 2MePh(mono3XODBP)) show the most promising phase behaviour. All three of these derivatives show lower nematic onset and clearing temperatures than the other halogenated analogues. In addition, all of them supercool in the nematic phase to room temperature. A bar graph illustrates this phase behaviour (Figure 4); an additional segment is included for the subsequent heating run. While the fluoro analogue crystallises upon heating, as was observed for the previously reported trimethylated derivatives, the bromo and chloro derivatives do not and remain in the nematic phase all the way to the isotropisation point. This is illustrated in Figure 5, which shows a differential scanning calorimetry (DSC) thermogram of the chloro derivative, OC4 2MePh(mono3ClODBP), including both the first and second heating and cooling cycles. However, if this compound is held at 70°C in the nematic phase for several hours, slow recrystallisation is observed. Thermograms for the fluoro and bromo derivatives can be found in the Supplemental Material (Figures S1 and S2). As was the case for the previously described trimethylated derivative, OC4 2MePh(mono3MeODBP), all of the halogenated derivatives with the same substitution pattern form a viscous nematic glass at room temperature. But while the previously reported trimethylated compound crystallises after 24 hours at room temperature, the chloro and bromo derivatives remain in the nematic phase at room temperature for up to 2 weeks before crystallising. Attempts to observe a Tg by DSC have been challenging, as this transition is known to be a weak event for low molar mass (monomeric) species. However, we were able to observe a possible Tg at 66°C for OC4 2MePh(mono3ClODBP) when a large sample size and a rapid heating rate were used (see Figure S3). This result is consistent with microscopy studies, as the nematic phase of this compound seems to lose most of its fluidity by 60°C upon cooling. When these derivatives are studied by POM, the nematic phase shows a mixture of 4 and 2 brush disclinations (Figure 6), as was observed for the related compounds with lateral methyl groups described in [63]. In comparing the phase behaviour of the three halogenated derivatives, OC4 2MePh(mono3XODBP), both steric and electronic effects must be considered. Since the dipole can be altered considerably depending upon the conformation of the outer benzene rings, we have chosen to calculate the dipoles for simplified molecules (the 2,5-diphenyl-1,3,4-oxadiazole unit) using Spartan '10 DFT B3LYP parameters (Figure 7).
The fact that the fluorine substituent is so much smaller than the chlorine and bromine is likely the reason for the higher clearing temperature of the fluoro derivative. The steric effects of the bigger chloro and bromo substituents are also likely responsible for the fact that these compounds remain in the nematic phase upon heating from the supercooled nematic phase, while the fluoro derivative crystallises upon heating in the glassy nematic phase. The larger groups can more effectively restrict intramolecular flexibility and rotation of the molecules about their long axes, thereby kinetically preventing crystallisation by disrupting packing. If the larger dipoles for the chloro and bromo derivatives were responsible for the phase behaviour, we might then expect the clearing temperatures for these compounds to be higher as well. It is more difficult to rationalise the difference in phase behaviour between the chloro, bromo, and trimethylated derivative (OC4 2MePh(mono3MeODBP), Figure 1), given that all three of the substituents are similar in size. Since the trimethylated analogue crystallises upon heating from the supercooled nematic phase while the bromo and chloro derivatives do not, it seems unlikely that steric differences can account for this behaviour. However, the dipoles mentioned above for the bromo and chloro derivatives are higher than for the trimethylated analogue (3.14 D). The direction of the dipole is different as well; for the halogenated compounds, the dipole angles across the oxadiazole moiety in the general direction of the halogen, while for the trimethylated derivative, the dipole generally bisects the oxadiazole moiety. It may be that these electronic factors conspire to stabilise the nematic phase for the chloro and bromo compounds or act to raise the barrier to rotation, thus slowing the crystallisation process.

X-ray diffraction studies

Preliminary XRD measurements were performed on OC4 2MePh(mono3ClODBP). The small-angle XRD patterns taken on cooling from the isotropic melt under an aligning magnetic field (horizontal in all figures) are shown in Figure 8. They show the typical four-spot feature indicative of the tilted, i.e. smectic C-like, cybotactic nematic (N CybC) phase. We verified that the cooled sample retained a nearly unchanged pattern after being kept on the shelf at room temperature for 24 hours (Figure 8f). The q-space position of the intensity maxima [4a,b] in the pattern at T = 100°C corresponds to a molecular length L = 35.6 ± 0.3 Å, a smectic layer spacing d = 23.8 ± 0.4 Å and a tilt angle β = 48° ± 2°. These values are very similar to those previously reported for the trimethylated analogue, [63] as expected from the similar steric attributes of the chloro and methyl groups. In the wake of the recent observation of a splitting in the wide-angle XRD diffuse feature of trimethylated BCM, [65,66] we performed additional wide-angle measurements on the chloro derivative. Selected patterns taken on cooling under magnetic field are shown in Figure 9(a)-(c). Similarly to what was observed in the trimethylated counterparts, the equatorial wide-angle diffuse feature can be resolved into two diffraction peaks, labelled p1 and p2 in Figure 9(d)-(f), corresponding to distinct transverse intermolecular distances d1 ≈ 4.7 Å and d2 ≈ 3.8 Å. These values can be associated with the in-plane width of the aromatic rings and the face-to-face distance between stacked π-systems, respectively, providing direct XRD evidence of (local) biaxial ordering.
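As a consistency check on these values (a small Python sketch; the smectic C-like relation d = L·cos β between layer spacing, molecular length and tilt is the standard geometric assumption for a tilted cybotactic cluster, and the numbers are taken from the text):

import math

L = 35.6  # molecular length (angstroms), from the small-angle pattern
d = 23.8  # smectic layer spacing (angstroms)

# For a tilted (SmC-like) cybotactic cluster, d = L * cos(beta),
# so the tilt angle follows as beta = arccos(d / L).
beta = math.degrees(math.acos(d / L))
print(round(beta, 1))  # ~48.0 deg, matching the reported 48 +/- 2 deg

# The layer spacing itself follows from the small-angle peak position q
# via d = 2*pi/q.
print(round(2 * math.pi / d, 3))  # expected q ~ 0.264 inverse angstroms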
Overall, the splitting was comparable to that previously reported for the trimethylated analogue, the effect being stronger at low temperatures but still clearly detectable close to the clearing point. The possibility to resolve such a minor difference in the transverse intermolecular distances, a peculiarity of this class of trisubstituted BCMs, confirms an enhanced local biaxial packing of these mesogens compared to other BCMs studied so far. Further investigation is ongoing to elucidate the specific roles played by steric hindrance and dipole interactions in restricting rotation.

General methods

All experiments were performed at room temperature and atmospheric pressure unless otherwise stated. All chemicals were acquired from Sigma-Aldrich Company in Milwaukee, Wisconsin and used without further purification unless otherwise stated. NMR spectra were obtained using a JEOL ECA-400 MHz FT-NMR with sample changer and auto tuning probe. A TA Instruments Model 2920 DSC and a Nikon Labophot-2-Pol microscope equipped with a Mettler Toledo Hot Stage FP82HT and a FP90 Central Processor were used for thermal analysis. Silica Gel 60 (230-400 mesh) from EMD, and Ottawa Sand Standard, 20-30 mesh, were used for column chromatography. All non-commercially available alkoxybenzoic acid derivatives were prepared according to published procedures. [64] Elemental analyses were carried out by Atlantic Microlab, Inc. (Norcross, GA). XRD measurements were performed at the European Synchrotron Radiation Facility (ESRF), Grenoble, France, on the BM26B DUBBLE beamline. Samples were studied in glass capillaries (1 mm diameter) mounted into a temperature-controlled sample holder allowing application of a static magnetic field (1 T for small-angle measurements, 2.2 T for wide-angle measurements) orthogonal to the X-ray beam. The beam energy was 12 keV (λ = 1.03 Å), while the sample-to-detector distance was 1.286 m for small-angle measurements (a vacuum chamber was inserted between the sample and the detector to reduce air scattering) and 0.189 m for wide-angle measurements.

General synthetic procedures

Examples of each type of representative synthetic step are given below. Details for all other derivatives (including elemental analyses (Table S2)) are provided in the Supplemental Material.

3-Fluoro-4-hydroxy-N-(4-hydroxybenzoyl)benzhydrazide

3-Fluoro-4-hydroxybenzoic acid (0.502 g, 3.21 mmol), 4-hydroxybenzhydrazide (0.488 g, 3.20 mmol), and 1-hydroxybenzotriazole (HOBt) (0.446 g, 3.30 mmol) were added to a 250 mL three-neck round bottom flask equipped with a stir bar, and dissolved in dimethylformamide (DMF) (15 mL). The reaction mixture was then put under a nitrogen atmosphere. Upon dissolution, N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDC HCl) (0.632 g, 3.29 mmol) was added. The solution was allowed to stir for 3 days, after which it was slowly pipetted into stirring DI water (200 mL). A white precipitate formed, which was collected by Buchner filtration (0.749 g, 81.1% yield) and used without further purification.

2-(4-Hydroxy-3-fluorophenyl)-5-(4-hydroxyphenyl)-1,3,4-oxadiazole

3-Fluoro-4-hydroxy-N-(4-hydroxybenzoyl)benzhydrazide (0.749 g, 2.60 mmol) and thionyl chloride (7.5 mL, 103 mmol) were added to a 50 mL round bottom flask which was fitted with a reflux condenser and a water trap. The creamy white solution was refluxed for 20 minutes before it turned amber in colour, and this was then refluxed for an additional 2 hours.
The solution was then poured onto ice, which resulted in the formation of a yellow precipitate. The product was collected by Buchner filtration and washed with cold water, and then was recrystallised using a 3:2 ethanol:water solution to give a pale yellow solid (0.5239 g, 74.1% yield).

Figure 9. (a-c) Wide-angle XRD patterns of OC4 2MePh(mono3ClODBP) taken on cooling from the isotropic under a horizontal aligning magnetic field. Because of the pattern symmetry, only the upper equatorial feature is shown. (d-f) Fit of the corresponding q-scans along the equatorial axis by means of three Pearson VII lineshapes (note that p0, the low hump in the small q region, is just peripheral scattering from the four-spot feature).

OC4 2MePh(mono3FODBP)

2-(4-Hydroxy-3-fluorophenyl)-5-(4-hydroxyphenyl)-1,3,4-oxadiazole (0.152 g, 0.551 mmol), DMAP (0.034 g, 0.28 mmol), and 4-butoxy-2-methylbenzoic acid (0.232 g, 1.12 mmol) were added to a 2-neck round bottom flask equipped with a stir bar and dissolved in dry dichloromethane (25 mL). This solution was then put under a nitrogen atmosphere. After stirring for 5 minutes, EDC HCl (0.216 g, 1.14 mmol) was added and the reaction mixture was allowed to stir for 3 days, after which TLC indicated that the reaction was complete (2:1:1 hexane:DCM:acetone, Rf = 0.62). DI water (25 mL) and dichloromethane (25 mL) were then added and the solution was allowed to stir for 5 minutes before being transferred to a separatory funnel. The organic extract was then washed once with water (25 mL), once with brine (25 mL), and once with 2 M HCl (25 mL). The combined aqueous washings were back-extracted with dichloromethane. The combined organic extracts were dried over magnesium sulphate, filtered, and the solvent removed by rotary evaporation, resulting in a white powder. The crude product was then purified using flash column chromatography (20:10:1.5 dichloromethane:hexanes:ethyl acetate, Rf = 0.16). Solvent was then removed by rotary evaporation to give a white solid. The product was recrystallised using 1:30 chloroform:ethanol to give a white crystalline solid (0.214 g, 59.6% yield).
Adherence to Antimicrobial Inhalational Anthrax Prophylaxis among Postal Workers, Washington, D.C., 2001

In October 2001, two envelopes containing Bacillus anthracis spores were processed at the Washington, D.C., Processing and Distribution Center of the U.S. Postal Service; inhalational anthrax developed in four workers at this facility. More than 2,000 workers were advised to complete 60 days of postexposure prophylaxis to prevent inhalational anthrax. Interventions to promote adherence were carried out to support workers, and qualitative information was collected to evaluate our interventions. A quantitative survey was administered to a convenience sample of workers to assess factors influencing adherence. No anthrax infections developed in any workers involved in the interventions or interviews. Of 245 workers, 98 (40%) reported full adherence to prophylaxis, and 45 (18%) had completely discontinued it. Experiencing adverse effects to prophylaxis, anxiety, and being <45 years old were risk factors for discontinuing prophylaxis. Interventions, especially frequent visits by public health staff, proved effective in supporting adherence.

In October 2001, two letters with Bacillus anthracis spores were mailed to offices on Capitol Hill, Washington, D.C. Both letters were processed at the Washington, D.C., Processing and Distribution Center (DCPDC) of the U.S. Postal Service (USPS). Inhalational anthrax developed in four DCPDC postal workers; two died. More than 2,000 workers and business visitors to the private work areas of DCPDC were potentially exposed to aerosolized B. anthracis spores during October 12-21 (1,2). To prevent inhalational anthrax, 60 days of antimicrobial therapy was recommended (primary: ciprofloxacin 500 mg orally twice a day or doxycycline 100 mg orally twice a day; alternative: amoxicillin 500 mg orally twice a day). Although inhalational anthrax most often develops in the first 7-10 days after exposure, incubation periods as long as 43 days have been reported in Sverdlovsk, Russia (3); in animal studies, inhalational anthrax occurred after 58 days despite 30 days of antimicrobial therapy (4). Therefore, completion of the full 60 days of prophylactic antimicrobial therapy was essential for all postal workers potentially exposed to B. anthracis spores at the DCPDC. Adherence to long-term drug regimens is problematic, and multiple factors influence adherence status, such as regimen factors (e.g., number of pills needed daily), structural factors (e.g., ability to access drugs), individual factors (e.g., cognitive limitations, depression), and health-care provider factors (e.g., ability to listen to and communicate effectively with patients) (5-10). Among the DCPDC workers, typical adherence issues associated with short-course antimicrobial therapy were complicated by the high levels of stress associated with the bioterrorism event and the illnesses and deaths of coworkers; stigma from other postal workers and community members because of erroneous concerns that DCPDC workers were contagious; and the relatively longer duration and potential adverse effects associated with the therapy. The DCPDC facility was closed October 21, 2001, and employees were displaced to work in other area mail facilities, contributing to ongoing disruptions of the workers' daily lives and further complicating adherence.
Last, the dynamic nature of the bioterrorist event created a system of evolving health-risk communication that, combined with the many inconsistent sources of information about the event and anthrax, contributed to confusion and misinformation. In response to the first bioterrorism-related outbreak of inhalational anthrax in the United States, strategies to promote adherence to antimicrobial prophylaxis among more than 2,000 DCPDC workers were rapidly implemented. To facilitate future adherence activities in similar events, we evaluated the interventions that were used to support adherence and examined the factors that influenced adherence to the prophylactic regimen in DCPDC workers.

Qualitative Data Collection

Qualitative data were collected from open-ended interviews (i.e., ones in which the interviewer writes down the exact responses of the interviewee) with convenience samples of the postal worker population throughout the 60-day period to develop and evaluate the interventions and to collect information on the determinants of adherence. The findings from the qualitative interviews were used to develop and validate the close-ended questions (i.e., those with a defined set of answers to choose from, such as yes or no) included in the quantitative survey questionnaire. Information was collected through observation, one-on-one contact, informal small group discussions, and focus group interviews with workers, as well as through interactions with USPS management, worker union representatives, and USPS Employee Assistance Program personnel. Two staff members from the Centers for Disease Control and Prevention (CDC) conducted five focus group interviews with DCPDC workers during December 13-16, 2001. DCPDC shift supervisors selected six to eight workers to participate in each focus group. During the interviews, workers' responses were noted verbatim on a large flip chart visible to participants at all times. The first author also carried out individual qualitative open-ended interviews during routine interactions with workers throughout December 2001. The first author conducted all analyses. Notes were immediately reviewed for accuracy at the completion of all interviews and entered into a word-processing software program. Qualitative analysis included several rounds of coding by subject or theme, as well as content analysis and comparison of responses across groups. Analysis focused on both commonly repeated themes (reported by at least 50% of the respondents) and rare points of view.

Interventions to Promote Adherence

To develop appropriate adherence interventions, we obtained support from the USPS management, Employee Assistance Program, and postal service unions. We conducted open-ended interviews with postal workers from various jobs and shifts and incorporated known adherence strategies (5,6,8,10,11) to develop interventions. Public health staff carried out repeated group question-and-answer sessions and informal contact with workers. These sessions consisted of large and small group and one-on-one interactions to counsel workers. Motivational messages were distributed through the USPS communication infrastructure.
In addition, several types of written materials were distributed at the worksite and to workers' homes, including booklets of frequently asked questions about anthrax and antimicrobial therapy, antimicrobial pocket guides with calendar memory aids, and handouts describing ways to minimize stress and recognize the known adverse effects of antimicrobial therapy, such as gastrointestinal upset and yeast infection. Posters and table tents, both with motivational messages, were placed in the workplace. We also provided a letter for workers to take to their personal health-care provider clarifying which area postal workers needed extended prophylaxis and the recommended regimens. This letter was also distributed directly to area health-care providers. Further, after free antimicrobial agents were no longer available, access to antimicrobial agents and reimbursements was facilitated. Finally, clinical team members and a local health-care provider answered specific questions about adverse effects or potential drug interactions, and the local health-care provider consulted with workers free of charge. In addition, multiple Morbidity and Mortality Weekly Reports (12-14), Health Alert Network alerts, and live broadcasts were disseminated throughout the prophylaxis period to give health-care providers detailed information on which groups needed extended prophylaxis, the recommended regimens, and clinical signs of inhalational anthrax disease.

Quantitative Survey

At five mail facilities, trained interviewers administered a close-ended questionnaire to a convenience sample of all DCPDC employees working the day shift (7 a.m.-3 p.m.). Most (80%) of the displaced DCPDC employees worked at these five facilities. Compared with the day shift, more employees work the swing shift and night shift, when the mail collected during the day is processed. The questionnaire collected information on demographic characteristics, adherence behaviors, enablers and obstacles to adherence, and information about the implementation of interventions. To assess adherence, workers were asked to respond to five questions located throughout the survey (for example, "Are you still taking antibiotics for anthrax?" [possible responses: No, Yes, Declined] and "If you forgot to take any of your pills yesterday, how many pills did you miss?" [possible responses: None, One, Two, Three]). Because we were interested in adherence to the recommendation to complete 60 days of prophylaxis, workers were divided into one of three categories. Adherence was defined as full if workers reported they continuously took their antimicrobial therapy throughout the 60-day period, never reduced their dosage, and did not forget any pills the previous day. Adherence was defined as intermediate if workers reduced the dosage, forgot a pill the previous day, or stopped their antimicrobial therapy and restarted at least once. Adherence was defined as discontinued if workers stopped their antimicrobial therapy and never restarted. To analyze predictors of nonadherence, we carried out a three-step logistic regression modeling procedure. First, we modeled overall nonadherence (intermediate adherence and discontinued groups combined) compared with full adherence. For this model, we were interested in understanding the differences between those workers who were fully adherent and those who were not fully adherent, including workers who completely discontinued therapy. Second, we modeled intermediate adherence compared with full adherence.
For this model, we were interested in understanding the differences between those who were nonadherent but who had not completely discontinued therapy and those who were fully adherent. Third, we modeled the discontinued group compared with the full adherence group. For this model, we were interested in assessing the differences between those who had completely discontinued therapy and those who were fully adherent. Variables examined were based on previously published articles on adherence and those associated with perceived risk and potential exposure to B. anthracis spores in this setting. Inhalational anthrax developed in employees who worked on a sorter machine and in the government mail section of the DCPDC (2). Variables included age, sex, race, perceived risk of breathing in B. anthracis spores, work location during the exposure period, work description at the time of interview, trouble remembering to take pills, experiencing anxiety, physical signs of stress, severity of adverse effects, and adverse effects negatively affecting work performance. For all analyses, SAS 8.2 (SAS Institute, Inc., Cary, NC) was used. For univariate analysis, two-tailed p values were calculated by chi-square test for dichotomous variables. Potential covariates for the logistic regression models included those with p<0.20 in univariate analysis, and possible confounders. We followed a backward elimination strategy to remove nonsignificant covariates in building final parsimonious models. A p<0.05 was determined to be statistically significant. For all qualitative and quantitative interviews, workers were informed that their participation was voluntary and anonymous. Anthrax infections did not develop in any of the workers who participated in the interventions or interviews.

Comparison of Adherence among Workers

Among those who completed the questionnaire, 98 (40%) reported full adherence, 45 (18%) discontinued prophylaxis and never restarted, and 102 (42%) were classified as intermediate. Overall, 186 (76%) workers were taking prophylaxis at the time of the interview, including 88 (86%) of the 102 classified as in the intermediate group. Among the intermediate group, 14 (14%) reported discontinuing prophylaxis and restarting at least once, but they were not taking antibiotics at the time of the interview. A total of 45 workers from the discontinued group and 48 workers from the intermediate group reported stopping prophylaxis. Among the 102 workers classified as intermediate, 40 (39%) reported ever reducing the dosage, 65 (64%) forgot to take at least one pill the previous day, and 48 (47%) reported discontinuing prophylaxis and restarting at least once. Among those who restarted, 20 (42%) missed at least one pill the previous day, and 22 (46%) reported they had ever reduced the dosage. We examined reasons for stopping prophylactic antimicrobial therapy (Table 1). Most workers reported that several factors influenced their decision to discontinue prophylaxis; 60% cited five or more reasons. Trouble managing adverse effects to antimicrobial agents was the most common reason. Concern over possible long-term adverse effects associated with prolonged antimicrobial therapy was the second most common reason for stopping. Less frequently cited reasons included lack of support at work (16 workers, 17%), difficulty getting an appointment with a health-care provider (9, 10%), being advised by a health-care provider to stop (7, 7%), and the expense of the health-care provider visit or the antibiotic (6, 6%). Similar reasons were given by the workers who reported reducing the dosage of the prescribed antimicrobial therapy. Workers who stopped therapy also reported lacking sufficient information about anthrax and antimicrobial therapy, specifically, information from USPS or CDC.
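The three adherence categories defined in the methods reduce to a simple decision rule; a minimal sketch in Python (the argument names are illustrative, not the study's variable coding):

def classify_adherence(stopped_never_restarted,
                       stopped_and_restarted,
                       ever_reduced_dosage,
                       pills_missed_yesterday):
    # Study definitions: 'discontinued' = stopped and never restarted;
    # 'intermediate' = reduced dosage, missed a pill the previous day,
    # or stopped and restarted at least once; 'full' = none of the above.
    if stopped_never_restarted:
        return "discontinued"
    if stopped_and_restarted or ever_reduced_dosage or pills_missed_yesterday > 0:
        return "intermediate"
    return "full"

# Example: a worker who restarted once and missed one pill yesterday
print(classify_adherence(False, True, False, 1))  # intermediate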
Predictors of Nonadherence

We wanted to understand the differences between those who were not fully adherent, excluding those who completely discontinued therapy, compared with those who were fully adherent. We therefore modeled intermediate adherence compared with full adherence. Characteristics of these populations and the univariate analysis are in Table 2. Independent predictors of intermediate adherence included experiencing "a lot" of adverse effects to antimicrobial therapy, trouble remembering to take pills, as well as age <45 years (Table 3). Experiencing "a lot" of adverse effects, trouble remembering to take pills, and age <45 years were also risk factors for nonadherence in a model combining the intermediate adherence and discontinued groups compared with full adherence (data not shown). We wanted to understand the differences between those who completely discontinued therapy and those who were fully adherent. We therefore modeled the discontinued group compared with the full adherence group. Characteristics of these populations and the univariate analysis can be found in Table 4. Independent predictors of discontinuing therapy included experiencing "a lot" of adverse effects, anxiety, and age <45 years (Table 5). Those workers who reported a high perceived risk of having breathed in B. anthracis spores during October 12-21, 2001, were significantly less likely to have discontinued therapy. Those who experienced five or more physical signs of stress were also significantly less likely to have discontinued therapy.

Postal Workers' Experiences and Qualitative Evaluation of Interventions

A total of 38 workers participated in five focus groups, and 22 participated in individual qualitative interviews. The age, sex, and race/ethnic characteristics of the qualitative interview participants were similar to those of respondents to the survey questionnaire. When asked in focus groups and individual qualitative interviews about which adherence interventions were helpful, workers consistently cited repeated visits by public health staff to worksites. Workers reported that the ability to ask personal questions and the distribution of various materials covering multiple health- and work-related issues helped workers complete prophylaxis and promoted adherence by providing accurate and needed information about anthrax, antimicrobial therapy, risk for disease, and the outbreak investigation. Workers reported that this information helped reduce their stress levels and motivated them to continue prophylaxis. Workers recalled receiving little information at the free antimicrobial distribution sites, and some had forgotten or misunderstood the initial information given. Several opportunities to speak with public health staff were necessary to clarify questions, especially as new issues arose. However, some workers complained that public health staff could not provide adequate answers to all their questions, such as those related to the long-term status of viable B. anthracis spores inhaled into the lung, the long-term effects of extended antimicrobial therapy, environmental sampling results, the need for personal protective gear, and other occupational health concerns. In the questionnaire, 82% of workers reported they wanted to receive public health information in a variety of formats, including both oral and written, as well as information from the media. The questionnaire showed that only 3% of workers did not participate in oral communication interventions, 2% did not receive written materials distributed to employees at the worksite or at their homes, and 21% did not see posted signs and messages at work.
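The analysis itself was run in SAS 8.2; purely as an illustration of the screening-plus-backward-elimination strategy described in the methods (univariate p < 0.20 to enter, backward removal until every remaining covariate has p < 0.05), here is a hedged Python sketch using statsmodels, with hypothetical column names:

import statsmodels.api as sm

def backward_eliminate(df, outcome, covariates, alpha=0.05):
    # df is a pandas DataFrame; fit a logistic model and repeatedly drop
    # the least significant covariate until all remaining have p < alpha.
    covs = list(covariates)
    while True:
        X = sm.add_constant(df[covs])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        if pvals.max() < alpha or len(covs) == 1:
            return fit
        covs.remove(pvals.idxmax())

# Illustrative use with hypothetical survey columns:
# fit = backward_eliminate(survey, "discontinued",
#                          ["age_lt_45", "adverse_effects_alot", "anxiety"])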
Discussion

After the first bioterrorism-related anthrax outbreak in the United States, we rapidly developed and implemented multiple adherence interventions to prevent inhalational anthrax in >2,000 DCPDC workers. This was the first time adherence interventions have been conducted and evaluated in an applied public health bioterrorism response. Our interventions promoted the message that adherence was essential for the full 60 days of antimicrobial therapy. Further, the interventions were carried out during the entire 60-day period. Seventy-six percent of postal workers were taking antimicrobial prophylaxis at the time of the evaluation. Despite differences in assessing adherence, the adherence found in this study was relatively high compared with other studies of adherence to short-course antimicrobial therapy. For example, Ley (15) reported approximately 50% adherence in a review of adherence studies to short-course antibiotics, and Brookoff (16) reported only 31% adherence to a 10-day course of doxycycline (n=386) for outpatient treatment of pelvic inflammatory disease. Many issues hindered adherence in this anthrax outbreak, including adverse effects of the antimicrobial prophylaxis, such as gastrointestinal upset and yeast infection, trouble remembering to take the pills, perceived risk, anxiety, and physical signs of stress. Although these factors occurred in the context of a bioterrorism event, similar adherence obstacles have been reported elsewhere (5,7,17,18). Additional issues complicating adherence among postal workers included the large number of workers affected, occupational health and other work-related issues, the limited capacity of local departments of health to undertake a program to promote adherence for a large number of people in an emergency, and the hysteria and media coverage associated with this bioterrorism event, which likely magnified miscommunication and workers' confusion. In developing the intervention protocols, we drew upon lessons learned from adherence strategies for isoniazid treatment for latent tuberculosis infection and highly active antiretroviral therapy for HIV infection. Studies of these strategies conclude that interventions must be multifaceted, ongoing, flexible, individualized, and repetitive to achieve optimal adherence levels (5,8,9,18-20). Our interventions included many of these characteristics, such as repeated visits, clarifying questions, counseling workers, incorporating pill-taking into daily routines, and providing workers with as much information as possible about anthrax and antimicrobial therapy. Inhalational anthrax as a disease and bioterrorism-associated disease are complex issues, and relaying this information to people was difficult. Therefore, multiple formats (verbal, written, and graphic) were necessary to effectively communicate information to workers. Many workers mistook signs of stress (e.g., complaints of fatigue, lack of sexual drive, and increased crying) for adverse effects of the antimicrobial therapy.
Further, the stress associated with the bioterrorist event magnified the adverse effects associated with prophylaxis. For some symptoms, such as gastrointestinal upset, distinguishing between the adverse effects of stress and those of the antimicrobial therapy was impossible. Those who worked close to areas where coworkers with inhalational anthrax had worked reported more physical signs of stress, had a higher perceived risk of having breathed in B. anthracis spores, and were also more likely to have continued therapy. Those who had anxiety were more likely to have discontinued therapy. Published articles report associations between anxiety or depression and nonadherence (7,17), and some researchers posit that the inability to cope with anxiety is the better predictor of nonadherence (17). These findings highlight the importance of communicating early and repeatedly the known adverse effects people should expect, and how to manage all potential effects, including those caused by prophylaxis and by stress or anxiety related to bioterrorist events. Only self-reports were collected to assess adherence in this evaluation. Several studies suggest that self-reporting overestimates adherence, while reports of nonadherence are usually valid (5,7). Therefore, our results may have overestimated adherence, but it is unlikely that we overestimated the number of persons who discontinued prophylaxis. Data were collected from a convenience sample and may not be representative of all DCPDC workers. A March 2002 phone survey among DCPDC workers (62% response rate) reported similar age, sex, and race/ethnicity characteristics (21). Because we did not have a control group who did not receive interventions to promote adherence, we cannot measure the effectiveness of our interventions; however, our adherence findings were similar to those of other studies that were not implemented in the setting of a bioterrorist emergency response (7,8,11). In addition, the evaluation was conducted during the holiday season, the busiest time of the year for the USPS, and we were permitted to conduct the questionnaire only with workers on the day shift (7 a.m.-3 p.m.). The experiences of day-shift workers may be different from those who work other shifts, although, based on the qualitative interviews carried out with workers from all shifts and the continual interactions with workers throughout the 60-day period, these findings likely reflect the experiences of most DCPDC workers. Last, our evaluation may have been affected by the general media coverage of the bioterrorism events. Nonadherence is common and should be expected in all settings, especially in a bioterrorism-related context that involves further challenges and complications to adherence. Considering the large number of workers who took less than the recommended regimen, evaluating adherence promotion interventions during bioterrorist outbreaks is very important. In emergency settings, adherence programs may overburden local departments of health because they require ongoing personal interactions and are labor-intensive when large numbers of people are affected. Efforts to develop a plan to promote adherence in the event of a bioterrorism outbreak, which could be tailored to the situation and implemented immediately, will aid future public health emergency responses where adherence to recommended prophylaxis is necessary to save lives. During occupational exposures, supplementing occupational health resources may be necessary.
To optimally promote adherence, such plans should incorporate continual interaction with the affected persons, provide consistent and clear messages, and include interventions that help persons incorporate pill-taking into daily routines and manage known adverse effects, including those caused by prophylaxis, anxiety, and stress related to bioterrorism events.
Induction and nosocomial dissemination of carbapenem- and polymyxin-resistant Klebsiella pneumoniae

Introduction: Polymyxins are antimicrobial agents capable of controlling carbapenemase-producing Klebsiella pneumoniae infection. Methods: We report a cluster of four patients colonized or infected by polymyxin-resistant and Klebsiella pneumoniae carbapenemase (KPC)-producing K. pneumoniae. Results: Three patients were hospitalized in adjacent wards, and two were admitted to the intensive care unit. The index case maintained prolonged intestinal colonization by KPC-producing K. pneumoniae. Three patients received polymyxin B before the isolation of polymyxin-resistant K. pneumoniae. Conclusions: Colonization by KPC-producing K. pneumoniae and previous use of polymyxin B may be causally related to the development of polymyxin-resistant microorganisms.

Klebsiella pneumoniae has adapted to the extensive and intensive use of antibacterial drugs in hospitals. Over the last 30 years, K. pneumoniae went from having partial resistance to ampicillin and narrow-spectrum cephalosporins to current pandemic resistance to broad-spectrum cephalosporins due to extended-spectrum beta-lactamase production, as well as multidrug resistance to penicillins, cephalosporins, and monobactams (1). Likewise, carbapenemase-producing strains such as Klebsiella pneumoniae carbapenemase (KPC) and New Delhi metallo-beta-lactamase producers, which are not susceptible to imipenem or other carbapenem drugs, have rapidly emerged and spread worldwide (2,3). Colistin and polymyxin B are among the few remaining drugs able to combat these multidrug-resistant strains and have satisfactory efficacy in the treatment of patients infected with KPC-producing K. pneumoniae (4). However, KPC producers also resistant to polymyxins have recently been detected (5). Here, we report the cases of four patients involved in a cluster of polymyxin-resistant and KPC-producing K. pneumoniae in order to clarify the factors associated with the induction and dissemination of this extensively drug-resistant strain.

Infection by polymyxin-resistant KPC-producing K. pneumoniae occurred in patients admitted to the University Hospital, Faculty of Medicine of Ribeirão Preto, University of São Paulo, Brazil, a public university hospital that provides tertiary medical care, in November 2011. The hospital units where transmission and/or isolation of the extensively drug-resistant clone occurred are located on the 6th floor (hematology ward and bone marrow transplant unit), 5th floor (geriatric ward), and 2nd floor (intensive care unit [ICU]). Clinical and epidemiological data were obtained retrospectively from the patients' medical records.

Bacterial identification and initial susceptibility testing were performed by using the VITEK 2 automated microbial identification system (BioMérieux, Marcy l'Étoile, France). E-test® strips (BioMérieux) were used to determine the minimum inhibitory concentrations (MICs) of colistin and polymyxin B for KPC-producing K. pneumoniae isolates in duplicate. The MICs for colistin were interpreted according to the European Committee on Antimicrobial Susceptibility Testing (EUCAST) guidelines (6). The breakpoints proposed for Acinetobacter baumannii (7) were adopted for the MICs of polymyxin B.
Polymyxin-resistant and KPC-producing K. pneumoniae were isolated from the four patients from December 8, 2011 to January 27, 2012. All patients had serious underlying diseases, and three were receiving immunosuppressive drugs (Table 1). All four patients had been hospitalized previously. Two were readmitted to the bone marrow transplant unit, one to the hematology ward, and one to the geriatric ward; the latter two were also hospitalized for some time in the ICU (Figure 1). Their clinical data are presented below.

FIGURE 1 - A cluster of carbapenem- and polymyxin-resistant Klebsiella pneumoniae cases in four patients: hospital ward, transference to the intensive care unit, and time of isolation of carbapenemase-producing K. pneumoniae susceptible (x) or resistant to polymyxins (X), and death (+).

Case 1

Case 1 was that of a 39-year-old woman with acute myeloid leukemia who was admitted to the hematology ward for chemotherapy. She developed pulmonary aspergillosis and bloodstream infection with KPC-producing K. pneumoniae as a consequence of neutropenia and had been treated with voriconazole, polymyxin B (25 days), and tigecycline (14 days). Although KPC-producing K. pneumoniae was further isolated within 10 days of the initiation of treatment with polymyxin B, both infections were controlled. The patient was readmitted and underwent bone marrow transplantation. She developed another bloodstream infection with KPC-producing K. pneumoniae and was treated again with polymyxin B (25 days) plus tigecycline (21 days). The two polymyxin B treatments were separated by 50 days. The patient developed severe mucositis after bone marrow transplantation, and polymyxin-resistant and KPC-producing K. pneumoniae was isolated from an oropharyngeal ulcer 14 days after restarting polymyxin B. However, she died from a new bloodstream infection with multi-susceptible K. pneumoniae. Throughout both admissions, polymyxin-susceptible and KPC-producing K. pneumoniae was isolated from 10 rectal swabs, revealing persistent colonization for more than 100 days.

Case 2

Case 2 was that of a 74-year-old woman who had been suffering from Sheehan syndrome for 30 years and developed acute myeloid leukemia. While receiving chemotherapy, she suffered several complications and consequently stayed in the ICU for 20 days. She developed febrile neutropenia and received courses of antibacterial drugs including polymyxin B. On the 42nd day of hospitalization, a rectal swab culture revealed KPC-producing K. pneumoniae. Polymyxin-resistant and KPC-producing K. pneumoniae was subsequently isolated from two blood cultures on the 49th day and a urine culture on the 53rd day. The patient was being treated with meropenem, followed by the addition of amikacin. However, she developed septic shock and died.

Case 3

Case 3 was that of a 36-year-old man who experienced acute myeloid leukemia relapse after bone marrow transplantation and was readmitted to the bone marrow transplantation ward for chemotherapy. He developed febrile neutropenia, tonsillitis, pulmonary infiltrate, and cellulitis at the venous catheter implantation site. Antibacterial agents and voriconazole were administered continuously. On the 49th day of hospitalization, polymyxin-resistant and KPC-producing K. pneumoniae was isolated from two consecutive blood cultures. The patient was treated empirically with polymyxin B and tigecycline for two days, but suffered septic shock and died.

Case 4

Case 4 was that of a 64-year-old woman who had chronic arterial disease, diabetes mellitus, and chronic renal failure. She was
admitted to the geriatric ward and subsequently transferred to the ICU seven days after case 2 left the unit. She underwent surgery for revascularization of the right leg. However, a surgical site infection occurred, from which A. baumannii and KPC-producing K. pneumoniae were isolated. Empirical antibiotic therapy was replaced with polymyxin B and gentamicin. After 19 days of polymyxin B treatment, she developed fever and cellulitis at the venous catheter insertion site. The catheter was removed, and polymyxin-resistant and KPC-producing K. pneumoniae was isolated from its tip. The patient was treated with tigecycline, and her fever resolved.

Polymyxin B at 20,000 IU/kg/day was administered in cases 1, 2, and 4. The microbiological data of the patients are shown in Table 2. In addition to resistance to polymyxin B and colistin, three KPC-producing K. pneumoniae isolates were also resistant or had intermediate susceptibility to tigecycline. However, all isolates were susceptible to amikacin.

The present report describes the clinical and hospital factors associated with the emergence and transmission of polymyxin-resistant K. pneumoniae. Three previously studied isolates (cases 2-4) carried the blaKPC-2 and qnrS1 genes and belong to sequence type (ST)-11, an internationally occurring high-risk clone (8); these bacterial isolates simultaneously carry genes encoding virulent phenotypes and genes related to multidrug resistance. Different multidrug-resistant K. pneumoniae clones associated with patient colonization or infection, such as extended-spectrum and CTX-M beta-lactamase producers, have been detected in the hospital where the cluster occurred (9). The isolation of carbapenem-resistant K. pneumoniae from blood cultures and other samples has increased in recent years; these isolates were characterized as KPC-2 producers as well as ST-258, ST-11, and ST-48 clones (10). This epidemiological change is similar to those that have occurred in hospitals in other regions and countries, i.e., increases in cases of infection attributed to ST-11 and other clones of KPC-producing K. pneumoniae (11).

The dissemination of Gram-negative bacilli resistant to imipenem and other carbapenem drugs has led to increased use of polymyxins for infected patients in hospitals. However, previous use of colistin is the only known independent risk factor for the isolation of Gram-negative bacilli resistant to this antibiotic (12). In three of the four cases reported herein, polymyxin B had been administered for over 14, 19, and 25 days, respectively, immediately before the isolation of polymyxin-resistant and KPC-producing K. pneumoniae. The highest MIC was observed in an isolate from case 1, who received polymyxin for the longest period (25 + 14 days). The emergence of polymyxin B-resistant isolates has been observed during treatment with this drug for KPC-producing K. pneumoniae infection or colonization, probably due to the selective pressure of polymyxin B on the heterogeneous bacterial population (13). An in vitro study of multidrug-resistant colistin-susceptible K. pneumoniae revealed that colistin had a rapid bactericidal effect albeit a low post-antibiotic effect; bacterial regrowth was attributed to the heteroresistance phenomenon, which was detected in 15 of 16 isolates (14). In case 1, the persistence of bacteremia during the first KPC-producing K.
pneumoniae infection for up to 10 days of polymyxin B administration, as well as rectal colonization for more than 100 days despite the therapeutic course with this antibiotic, are suggestive of heteroresistance.

Other factors may be involved in the development of polymyxin resistance. In the present cluster, the patients had severe organic alterations and were instrumented with catheters and drains. Three patients received immunosuppressants, and all four received broad-spectrum antibiotic therapy for more than 30 days, which favored infection with Gram-negative bacilli. Advanced age, a history of surgery, and the administration of
Smartphone-Based Social Distance Detection Technology with Near-Ultrasonic Signal

With the emergence of COVID-19, social distancing detection is a crucial technique for epidemic prevention and control. However, the current mainstream detection technologies cannot obtain accurate social distance in real time. To address this problem, this paper presents a first study on smartphone-based social distance detection technology based on near-ultrasonic signals. Firstly, according to the auditory characteristics of the human ear and smartphone frequency response characteristics, a group of 18 kHz-23 kHz inaudible Chirp signals accompanied by single-frequency signals is designed to complete ranging and ID identification in a short time. Secondly, an improved mutual ranging algorithm is proposed that combines cubic spline interpolation with a two-stage search to obtain robust mutual ranging performance against multipath and NLoS effects. Thirdly, a hybrid channel access protocol is proposed, consisting of Chirp BOK, FDMA, and CSMA/CA, to increase the number of concurrent nodes and reduce the probability of collision. The results show that with our ranging algorithm, 95% of mutual ranging errors within 5 m are less than 10 cm, the best performance compared with the other traditional methods in both LoS and NLoS conditions. The protocol can efficiently utilize the limited near-ultrasonic channel resources and achieve a high ranging refresh rate while reducing the collision probability. Our study realizes high-precision, high-refresh-rate social distance detection on smartphones and has significant application value during an epidemic.

Introduction

Recently, the outbreak of novel coronavirus pneumonia (COVID-19) has been a "black swan" event facing all mankind. The world's politics, economy, and culture have undergone tremendous changes. In the process of fighting against the virus, people gradually realized that, facing an epidemic that will not disappear quickly, it is necessary to establish a long-term prevention and control mechanism for such infectious diseases [1]. In response to a sudden epidemic, a much-debated question is whether the regular epidemic prevention and control measures of governments tend to go to two extremes: one is to allow the epidemic to rage for the sake of economic development, and the other is to prohibit all social activities to prevent the spread of the epidemic. Neither deals with the unbalanced relationship between epidemic prevention and economic development [2]. Therefore, using techniques to strictly control residents' social safety distance and to trace contacts is critical to ensuring normal social operation and reducing the cost of epidemic prevention, rather than banning all social activities [3][4][5][6]. The accurate measurement of social distance is a basic technology throughout the prevention, investigation, and research of the epidemic [7].

In response to these challenges and problems, we propose a novel acoustic social distance detection system based on mobile phones, and its performance is evaluated.
Hence, our main contributions are:
• Firstly, a high-precision smartphone-based social distance detection technology with a near-ultrasonic signal is proposed;
• Secondly, combined with short-distance crowd channel characteristics, a group of 18 kHz-23 kHz Chirp signals with single-frequency signals is designed and optimized to support ranging and coding;
• Thirdly, a precise mutual ranging algorithm is developed using cubic spline interpolation and a two-stage search to obtain robust mutual ranging results against multipath and NLoS effects. Together these realize social distance detection with a high refresh rate and high accuracy;
• Additionally, combining Chirp BOK, FDMA, and CSMA/CA, a hybrid channel access protocol is proposed. Simulation based on measured parameters verifies that the protocol can increase the number of concurrent nodes and reduce the probability of collision;
• Furthermore, a considerable number of experiments in several scenarios are carried out and demonstrate the robustness and feasibility of this system.
The architecture of the system involved in this article is shown in Figure 2.

Related Works

Nevertheless, although there are several technologies supporting positioning and ranging based on smartphones, they remain challenging to apply to smartphone mutual ranging.

Wi-Fi

Among them, Wi-Fi approaches based on Wi-Fi RSSI [15], fingerprints [16], and Wi-Fi RTT [17] are compatible with smartphones and require no additional custom hardware. Google in Android 9 was able to reach an accuracy of 1 m-2 m based on a new API. However, all of them require special Wi-Fi access points (APs), which makes direct ranging between smartphones impossible. Wi-Fi RSSI was developed earlier, but it is difficult to accurately estimate the channel attenuation model due to the complex indoor environment and severe NLoS, which affects positioning accuracy. Wi-Fi AOA and Wi-Fi CSI are also applicable to high-precision distance measuring, but they are incompatible with smartphones [18,19].
BLE

BLE RSSI obtains better performance than Wi-Fi RSSI. Google and Apple [20] use an interface to measure social distance using Bluetooth RSSI, and the government of Singapore launched the Bluetooth-based BlueTrace project [21]. To assist coronavirus contact tracing, Ref. [22] reported on BLE received signal strength in different scenarios, which provides a novel application. However, it also found that the relationship between BLE strength and transmission distance is not unique, which poses a challenge for contact tracing. BLE can also utilize fingerprints to realize a positioning accuracy of 4 m [23]. BLE AOA/AOD promotes the prosperity of indoor positioning with centimetre-level positioning accuracy [24]. Besides, the iBeacon system introduced by Apple is based on the RSSI ranging method, and its positioning accuracy can reach 2-3 m [25]. Quuppa offers a one-size-fits-all technology solution for tracking tags and devices in real time with centimetre-level accuracy [26]. However, due to the short signal transmission distance, it is necessary to deploy many base stations (BS) to achieve high-precision positioning. BLE AOA is expensive to produce. iBeacon provides a novel smartphone mutual ranging method, but because RSSI cannot be applied over long distances, it is only suitable for a ranging range of 1 m~2 m. Therefore, BLE is limited by its characteristics.

UWB

UWB can also achieve ranging accuracy of 5 cm to 10 cm in ideal scenarios [27] based on RSSI [28], TDOA [29,30], and TOA [31]. UWB ToF ranging can obtain reliable dm-level accuracy. UWB systems are also generally resistant to multipath interference [32], but only a few mobile phones, such as the Samsung Galaxy Note 20 or Apple iPhone 11, have UWB modules, which also cause additional energy consumption [33]. Furthermore, UWB only supports interaction between mobile phones and tags rather than between phones and phones. Hence, it is also challenging to utilize UWB modules in smartphone-based social distance detection.

PDR

PDR mainly uses the accelerometer to measure speed and then uses the magnetometer and gyroscope to determine heading, so as to calculate the relative displacement of the pedestrian [34,35]. However, due to serious electromagnetic interference in the environment, it is difficult for PDR to accurately estimate the heading angle, resulting in increased positioning errors. Hence, UWB and PDR cannot be applied to smartphone-based social distance detection.

Acoustic

Many acoustic ranging and positioning systems have been developed recently; Refs. [36][37][38][39] combined acoustic signals with custom equipment to build indoor positioning and tracking systems that obtain the location information of a target. However, this approach usually requires additional equipment such as acoustic BSs, which limits its use in social distance detection. To realize robust ranging, Refs. [40,41] utilize the Chirp signal, but the NLoS effect is not considered in their experiments. In [42], the fractional Fourier transform is applied to mitigate the multipath effect, but its complexity is overwhelming. A novel encoding and distance detection system based on Chirp is shown in [43], but the signal length is too long (300 ms), causing a low ranging refresh rate. Although Ref. [44] is one of the few studies using acoustic signals to detect social distance, its precision and robustness are not satisfactory.
When implementing distance detection systems, ranging accuracy, range, cost, and smartphone compatibility are considered the four principal factors. The comparison is presented in Table 1; according to Table 1, the acoustic signal is suitable for social distance detection based on smartphones. In summary, none of the technologies mentioned can easily and robustly achieve mutual ranging between mobile phones for detecting social distance.

Mutual Ranging Principle of BeepBeep

If the clocks of two nodes could be synchronized accurately, the distance between them could be calculated from the time of arrival (TOA). However, due to the call-scheduling error of the operating system, it is difficult to achieve real-time synchronization. To address this situation, mutual ranging can be applied. Figure 3 shows the process of smartphone mutual ranging [14].

Firstly, the time difference T_p is presented as:

T_p = (T_S + T_R)/2, (1)

where T_S and T_R are the time of arrival from node A to node B and the time of arrival from node B to node A, respectively. The final distance between the smartphones is then:

D = v_s × T_p, (2)

where v_s is the speed of sound. Despite the delays introduced by the operating system and application software, this method can calculate the distance between nodes without precise clock synchronization. Hence, it is significant to accurately measure T_S and T_R.
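To make the two-way ranging arithmetic concrete, the sketch below (ours, not from the paper) computes the distance from the four self-recorded arrival times, using the T_AA, T_BA, T_BB, T_AB notation that the paper introduces later; the speed of sound, the sampling rate, and neglecting each phone's internal speaker-to-microphone distance are assumptions.

```python
FS = 48_000        # sampling rate (Hz), matching the paper's setting
V_SOUND = 346.0    # assumed speed of sound (m/s) at roughly 25 degrees C

def beepbeep_distance(t_aa, t_ba, t_bb, t_ab, fs=FS, v=V_SOUND):
    """Two-way ranging from sample indices in each phone's own recording.

    t_aa: A's own chirp at A's microphone    t_ba: B's chirp at A's microphone
    t_bb: B's own chirp at B's microphone    t_ab: A's chirp at B's microphone
    Internal speaker-microphone offsets are neglected in this sketch.
    """
    elapsed_a = (t_ba - t_aa) / fs   # measured entirely inside A's recording
    elapsed_b = (t_ab - t_bb) / fs   # may be negative; only the sum matters
    # The unknown clock offset between the phones cancels in the sum below.
    return v * (elapsed_a + elapsed_b) / 2.0

# Example: A chirps first, B replies ~0.3 s later, phones ~3 m apart
print(beepbeep_distance(t_aa=10_000, t_ba=24_816, t_bb=19_400, t_ab=5_416))
# -> about 3.0 m
```

Because each elapsed time is measured within a single phone's recording, no cross-device synchronization is needed, which is exactly the property the text relies on.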
The above analysis (Figure 3, Equations (1) and (2)) is based on the conceptual framework proposed by BeepBeep. However, in practice, the signals sent by mobile phones are often affected by reverberation and NLoS, which makes it impossible to detect the arrival time accurately. Meanwhile, as Figure 4 shows, there are usually N nodes (A, B, C, ..., N) in a node cluster in broadcast mode. Under unilateral bidirectional ranging, each node theoretically needs to send N − 1 signals (node A, for example, must conduct interactive ranging with nodes B to N respectively). In Equation (2), T_S and T_R are processed separately; hence, although the nodes that measure each other may not receive each other's signal immediately, ranging accuracy is not affected [45]. However, because precise clock synchronization between nodes is impossible, the signals sent by multiple nodes arrive at a receiver at random times, resulting in overlapping collisions, that is, multiple access interference (MAI).

In summary, the mutual ranging principle of BeepBeep by itself cannot overcome ranging error and MAI. Therefore, it is important to design a suitable multiple access protocol (MAC) and robust signal detection algorithms.

Signal Frequency Band

In social distancing detection scenarios, it is necessary to use acoustic frequency bands that cannot be heard by human ears but can be sent and received by mobile phones. First of all, this study uses a standard sound source to test the microphone performance of a variety of mobile phones in an anechoic chamber. Figure 5 shows the frequency responses of different commercial mobile phone microphones. In order to avoid interference from ambient noise, we conduct the test in an anechoic chamber and use the sound source to emit white noise over the full frequency band. In order to avoid interference from the frequency response characteristics of the sound source itself, we use a microphone with a relatively flat frequency response to record audio at the same time as the device under test, so as to calibrate the frequency response characteristics. In each experiment, the reference microphone and the device under test record the acoustic signal for 20 s simultaneously, and the corresponding frequency response curves are obtained. The measurement result is obtained by subtracting the frequency response curve of the standard microphone from that of the device under test. It can be found that the signal strength of different frequency components recorded by a mobile phone differs, with a certain decline in the higher frequency band. The frequency band available on mobile phones is 0 Hz-23 kHz.
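The calibration step just described (subtracting the reference microphone's curve from the device under test's curve) can be sketched as follows; the use of Welch power spectral densities and the segment length are our assumptions, not details stated in the paper.

```python
import numpy as np
from scipy.signal import welch

FS = 48_000  # sampling rate (Hz)

def relative_response_db(dut: np.ndarray, reference: np.ndarray, fs: int = FS):
    """Frequency response of the device under test relative to a flat
    reference microphone, both recording the same white-noise source."""
    f, p_dut = welch(dut, fs=fs, nperseg=4096)
    _, p_ref = welch(reference, fs=fs, nperseg=4096)
    # Dividing the spectra cancels the (non-flat) spectrum of the source.
    return f, 10 * np.log10(p_dut / p_ref)
```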
Notably, the chosen frequency band should make the signal inaudible. According to research on human hearing, the human ear has different sensitivities to sound signals from 20 Hz to 20 kHz. Under the same sound pressure level, it is most sensitive to sound between 1 kHz and 5 kHz, while sound above 18 kHz can hardly be heard by the human ear and therefore causes no additional noise pollution. Figure 6 presents the hearing threshold of the human ear together with the average sound pressure level (SPL), as a function of distance, of a linear frequency modulation (LFM) signal spanning 18 kHz to 23 kHz; the LFM signal is used to cover the full frequency band. The theoretical relationship between the hearing threshold T_q (in dB SPL) and the frequency f (in Hz) is [46]:

T_q(f) = 3.64 (f/1000)^(-0.8) − 6.5 exp(−0.6 (f/1000 − 3.3)^2) + 10^(−3) (f/1000)^4. (3)

It can be found that the maximum SPL of the signal does not exceed the human hearing threshold at 18 kHz. Figure 6b illustrates that the sound pressure level of the near-ultrasonic signal attenuates rapidly over long distances, so node clusters two or more hops apart can perform mutual ranging at the same time without long-distance conflict. Therefore, this frequency band can be used in social distance detection.

According to the frequency response curve of the mobile phone and the hearing threshold of the human ear, this paper uses the acoustic signal frequency range of 18 kHz-23 kHz for social distance detection, which is called the near-ultrasonic signal.
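Equation (3) is the standard threshold-in-quiet approximation; assuming that form, a quick evaluation shows why the chosen band is effectively inaudible: the threshold climbs from under 1 dB SPL near 5 kHz to above 100 dB SPL at 18 kHz.

```python
import numpy as np

def hearing_threshold_db_spl(f_hz):
    """Threshold in quiet (dB SPL) as a function of frequency, Equation (3)."""
    f = np.asarray(f_hz, dtype=float) / 1000.0  # convert Hz to kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

for f in (1_000, 5_000, 18_000, 20_000):
    print(f, "Hz ->", round(float(hearing_threshold_db_spl(f)), 1), "dB SPL")
```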
Signal Wave Design: Chirp Signal

Since the acoustic signal is affected by the multipath effect, environmental noise, and the Doppler effect, traditional single-frequency coded signals are difficult to use [47]. To deal with this problem, a well-compressed Chirp signal is used. Here, the Chirp is defined as:

x(t) = cos(2π f_0 t + π k t²), 0 ≤ t ≤ T, (4)

where f_0 is the starting frequency (carrier frequency), f_1 is the end frequency, k is the frequency modulation slope with k = (f_1 − f_0)/T, and T is the signal duration. For an up-Chirp, k > 0, and k < 0 for a down-Chirp. Figure 7 shows an up-Chirp with a frequency range of 17 kHz-18 kHz. Here, the sampling rate is set to 48 kHz based on smartphone devices.

However, when the speaker emits near-ultrasonic band signals, audible noise appears at the beginning and end of the sound, which is called low-frequency leakage. In order to solve this problem, the waveform of the Chirp signal is reconstructed: a combination of a Blackman window and a rectangular window is used to process the LFM signal x(n). The rectangular window preserves the whole signal but cannot correct low-frequency leakage, while the Blackman window smooths the start and end of the signal at the cost of signal strength. Hence, using the two windows together both realizes a stable change of signal amplitude at the edges and retains the energy of the middle part of the signal to ensure the transmission distance: the composite window (Equations (5) and (6)) tapers the first and last segments of x(n) with the rising and falling halves of a Blackman window and leaves the middle samples unweighted, where N is the length of the signal.

Chirp signals can easily exploit their auto-correlation properties to determine the type and arrival time of the signal thanks to pulse compression. The auto-correlation is:

r(t) = ∫_0^(T−|t|) x(τ) x(τ + |t|) dτ, (7)

where T is the signal duration and the time-bandwidth product is kT² = |(f_1 − f_0)T|. Equation (7) shows that the larger (f_1 − f_0)T, the larger the peak of r(t), which results in a smaller probability of decoding error and ranging error. Figures 8 and 9 illustrate, under different SNRs, the correlation value as a function of (f_1 − f_0) and T.
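A minimal sketch of the windowed chirp described above; the fraction of the signal tapered by the Blackman halves (25% at each edge here) is our assumption, since the paper does not state the exact split.

```python
import numpy as np
from scipy.signal import chirp

FS = 48_000             # sampling rate (Hz)
T_SIG = 0.020           # 20 ms, the duration chosen in the paper
F0, F1 = 18_000, 19_000

def windowed_chirp(f0=F0, f1=F1, duration=T_SIG, fs=FS, taper=0.25):
    """Up-chirp whose first and last `taper` fraction is shaped by the
    rising/falling halves of a Blackman window (rectangular in between),
    suppressing audible low-frequency leakage at the signal edges."""
    t = np.arange(int(duration * fs)) / fs
    x = chirp(t, f0=f0, t1=duration, f1=f1, method="linear")
    n_edge = int(taper * len(x))          # assumed taper length per edge
    edge = np.blackman(2 * n_edge)
    w = np.ones(len(x))
    w[:n_edge] = edge[:n_edge]            # rising half of the Blackman window
    w[-n_edge:] = edge[-n_edge:]          # falling half
    return x * w
```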
The noise used for the SNR is white Gaussian noise, and an additive white Gaussian noise (AWGN) channel is implemented to evaluate the anti-interference capability of the signal. To evaluate the quality of the cross-correlation results, the impulse response width (IRW) is introduced. A large IRW results in lower ranging accuracy, while a too-small IRW leads to a waste of frequency band and time resources. Figure 8 shows that at an SNR of 20 dB, with a bandwidth of 1 kHz, the IRW decreases steadily as T increases. When T = 20 ms, the IRW is 2.7 ms, and when T is longer than 20 ms, the IRW becomes smaller than 2 ms, which wastes time. Figure 9 shows that when T is 20 ms, the IRW decreases as the bandwidth increases. If the bandwidth is smaller than 1 kHz, the IRW is larger than 4 ms, which degrades the ranging accuracy. Besides, the simulation results also illustrate that the cross-correlation can well offset the interference caused by white Gaussian noise. Considering both ranging accuracy and refresh rate, we chose a Chirp signal with (f_1 − f_0) = 1000 Hz and T = 20 ms.
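The IRW trade-off can be reproduced in a few lines: correlate a chirp with itself and measure the main-lobe width at a fixed drop below the peak. The 3 dB drop level is our choice; the paper does not specify its IRW definition.

```python
import numpy as np
from scipy.signal import chirp, correlate

def irw_ms(x, fs=48_000, drop_db=3.0):
    """Main-lobe width of |autocorrelation| at drop_db below the peak (ms)."""
    r = np.abs(correlate(x, x, mode="full"))
    peak = int(r.argmax())
    level = r[peak] / 10 ** (drop_db / 20)
    lo = hi = peak
    while lo > 0 and r[lo - 1] >= level:           # walk left along the lobe
        lo -= 1
    while hi < len(r) - 1 and r[hi + 1] >= level:  # walk right along the lobe
        hi += 1
    return 1000 * (hi - lo + 1) / fs

fs, T = 48_000, 0.020
t = np.arange(int(T * fs)) / fs
for bw in (500, 1000, 2000):   # wider bandwidth -> narrower main lobe
    x = chirp(t, f0=18_000, t1=T, f1=18_000 + bw, method="linear")
    print(bw, "Hz bandwidth ->", round(irw_ms(x, fs), 3), "ms")
```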
Chirp Binary Orthogonal Sub-Band Signal Wave Design

The frequency band available on mobile phones is 18 kHz-23 kHz; however, with high refresh rates and crowded access nodes, this band becomes a limiting resource. The Chirp signal can achieve high-precision delay estimation, and Chirp Binary Orthogonal Keying (BOK) can encode 0 and 1 with up-Chirps and down-Chirps. However, under Chirp BOK, the amount of data that can be transmitted per unit time is small, resulting in an excessively long total signal length and a low refresh rate for multi-node ranging.

To overcome this problem, we propose a sub-band signal scheme that uses the Chirp for ranging and ID identification (encoding and decoding), with a single-frequency signal used for auxiliary coding. Based on the analysis of signal waves and signal bandwidth, this paper divides 18 kHz-23 kHz into 5 sub-bands and performs channel allocation on this basis. In order to complete multi-node ranging, encoding, and decoding in a short time, that is, to achieve a higher refresh rate, this paper superimposes the Chirp signal and the single-frequency signal in the time domain and completes the ranging and ID identification functions at the same time. In addition, pseudo-orthogonality exists between Chirp signals in different frequency bands and between up-Chirps and down-Chirps, which helps increase concurrency in multi-node mutual ranging. The design of the signals is shown in Figure 10.

To superimpose the two signals in the time domain and realize the ranging and coding functions at the same time, the signal design needs to consider the following elements:
1. Up-Chirps and down-Chirps can be concurrent. When an up-Chirp and a down-Chirp overlap and conflict at the receiving end, it must still be possible to distinguish which Chirp signal a single-frequency signal corresponds to;
2. The design must still guarantee a high delay estimation resolution;
3. Due to possible Doppler frequency shifts, a guard interval needs to be reserved between different single-frequency signals.

Take the 18 kHz-19 kHz Chirp signal as an example; three signals are shown in Figure 11. Figure 11a,d,g represents the superposition of the Chirp signal and an 18,500 Hz tone, the original Chirp signal, and the superposition of the Chirp signal and a 19,000 Hz tone, respectively. Figure 11b,e,h shows the cross-correlation results with the 18 kHz-19 kHz Chirp. When the single-frequency signal lies in the same frequency band as the Chirp, the cross-correlation sidelobes increase, which degrades the ranging performance of the Chirp signal. However, as Figure 11e,h shows, if the single-frequency signal band does not overlap the Chirp frequency band, there is no significant impact.
Based on the above analysis, the following signals are designed in this paper. In the process of mutual ranging, the signal sent by the transmitter is either a Chirp signal alone or the time-domain superposition of a Chirp signal and a single-frequency signal, where the frequency band of the Chirp component is separated from that of the single-frequency component. For example, Figure 10b shows that the 18 kHz-19 kHz up-Chirp can be matched with the 19,250 Hz single-frequency signal in the 19 kHz-20 kHz band, and the 19 kHz-18 kHz down-Chirp can be matched with the 19,750 Hz single-frequency signal in the 19 kHz-20 kHz band. The four kinds of signals within 18 kHz-19 kHz, for example, are respectively allocated to four nodes in the same node cluster C for transmission. The reason the single-frequency signals matched with the up-Chirp and the down-Chirp differ is that, when an up-Chirp and a down-Chirp overlap and conflict at the receiving end, each single-frequency signal can still be attributed to its corresponding Chirp signal. Across the 5 kHz available band from 18 kHz to 23 kHz, with 500 Hz as the guard interval, there are 10 kinds of single-frequency signals (18,250 Hz, 18,750 Hz, ..., 22,750 Hz); together with the up-Chirp and down-Chirp in each of the five frequency bands, they form a total of 20 kinds of signals, which are respectively assigned to nodes 1 to 20 in the same node cluster for transmission.
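As an illustration of the allocation just described, the sketch below composes one node's transmit signal (an 18 kHz-19 kHz up-chirp plus its matched 19,250 Hz tone); the equal-weight mix of the two components is an assumption.

```python
import numpy as np
from scipy.signal import chirp

FS = 48_000

def node_signal(band_khz=18, up=True, tone_hz=19_250, duration=0.020, fs=FS):
    """One node's transmit signal: a 1 kHz chirp in its sub-band,
    superimposed in the time domain with a single-frequency tone that
    lies in a *different* sub-band (keeping correlation sidelobes low)."""
    t = np.arange(int(duration * fs)) / fs
    f0, f1 = band_khz * 1000, (band_khz + 1) * 1000
    if not up:
        f0, f1 = f1, f0                   # down-chirp: swap start/end frequency
    c = chirp(t, f0=f0, t1=duration, f1=f1, method="linear")
    tone = np.cos(2 * np.pi * tone_hz * t)
    return 0.5 * c + 0.5 * tone           # equal-weight mix (our assumption)

sig = node_signal()  # e.g. node 1: 18-19 kHz up-chirp + 19,250 Hz tone
```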
Robust Mutual Ranging Algorithm

Based on the orthogonality and correlation properties of the Chirp signals, signal ranging can be performed by generalized cross-correlation (GCC). However, the traditional correlation algorithm cannot achieve good time delay estimation accuracy. To rectify this problem, as Figure 12 shows, a robust mutual ranging algorithm named env-two-stage is introduced. Firstly, a discrete GCC function is used to roughly calculate the time delay. Secondly, a cubic spline envelope is implemented to remove small noise and environmental effects. Thirdly, a two-stage search is utilized to obtain an accurate ranging result.

In addition, because the Doppler effect can cause signal recognition errors, a guard interval is required between single-frequency signals. Taking the walking speed of a natural person, 146 cm/s, as an example, the frequency offset Δf is:

Δf = ((ν + ν_r)/(ν − ν_s) − 1) f, (8)

where f represents the original signal frequency, ν is the propagation speed of the sound wave, ν_r is the speed of the mobile receiver, and ν_s is the movement speed of the sender. Based on Equation (8), a guard interval of 250 Hz is enough to deal with the Doppler effect.

Traditional Correlation Algorithm

The discrete GCC function (also called a matched filter) of the received signal X(i) and the reference signal Y(i) is:

R_xy(n) = Σ_i X(i + n) Y(i). (9)

However, the computational complexity of Equation (9) is high. The frequency-domain GCC is introduced to decrease the complexity:

R_xy = IFFT(FFT(X) · conj(FFT(Y))). (10)

With Equation (10), the complexity of the GCC decreases to O(n log m). After the GCC, the time of arrival can be determined and the up-Chirp and down-Chirp can be distinguished. The time of arrival T_0 is [48]:

T_0 = argmax_n R_xy(n). (11)

However, as Figure 13 shows, T_0 is affected and lagged by the multipath effect, which is modeled as:

R̂_xy(t) = Σ_i α_i R_s(t − τ_i) + n(t), (12)

where R̂_xy is the received GCC signal, R_s(t) is the original GCC result, n(t) is the environmental noise, and α_i is the coefficient of the i-th path (with delay τ_i).
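A compact rendering of Equations (9)-(11): the correlation is computed in the frequency domain, and the coarse time of arrival is taken as the largest peak. The power-of-two FFT size is an implementation choice of ours.

```python
import numpy as np

def gcc(received: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Cross-correlation via FFT (Equation (10)): much cheaper than the
    direct sum of Equation (9) for long recordings."""
    n = len(received) + len(reference) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two >= n
    R = np.fft.rfft(received, nfft) * np.conj(np.fft.rfft(reference, nfft))
    return np.fft.irfft(R, nfft)[:n]

def coarse_toa(received, reference):
    """Coarse time of arrival (Equation (11)): index of the largest peak."""
    return int(np.argmax(np.abs(gcc(received, reference))))
```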
Due to this problem, Tweet [49] used parabolic interpolation to remove the multipath signal, and Ref. [40] applied a two-stage search to find the first direct arrival signal. However, parabolic interpolation cannot fully fit the signal information, especially under a strong multipath effect, and a two-stage search alone is still limited by small-scale environmental noise. Therefore, a novel robust distance detection algorithm is proposed based on improved cubic spline interpolation and a coarse-and-fine search.

Supposing there are n sample points in a received R̂_xy(n), the interval is divided into subintervals of width

Δ = (b − a)/n, (13)

where a and b are the start and end points of the signal. Define h_i = n_{i+1} − n_i, M(n) = R̂_xy(n), and M_i = M(n_i); the cubic spline interpolation (Equation (14)) is then built piecewise on each subinterval [n_i, n_{i+1}] from the values M_i. Compared with traditional cubic spline fitting, the proposed method reduces computational complexity. After the spline fitting, the received f(n) can be divided into groups f_r(n) corresponding to 1 s time ranges.

First, the time of arrival from the speaker of smartphone A to the microphone of smartphone A, T_AA, is taken as the location of the largest correlation peak:

T_AA = argmax_i f_r(i). (15)

Then, the time of arrival from the speaker of smartphone B to the microphone of smartphone A, T_BA, is calculated as the largest peak at least gap samples away from T_AA:

T_BA = argmax_{i,j} f_r(·), i = 1, 2, ..., T_AA − gap, j = T_AA + gap, ..., N, (16)

with gap = 2000. Then, T_BB and T_AB can be obtained in the same way. To estimate the accurate time of arrival, take T_AA as an example: its refined estimate is the first sample whose envelope exceeds a fraction a of the peak value,

T̂_AA = min{ t : f(t) ≥ a · Peak_AA }, (17)

where Peak_AA represents the maximum correlation value at T_AA. Ideally, T̂_AA is the real time of arrival. In practice, due to environmental noise and small-scale multipath, spurious peaks occur, which result in larger errors. Figure 14 illustrates the case of spurious peaks. To handle this case, let a increase gradually in steps of 0.01 from a_min to 1, let TS be the resulting set of T_AA estimates, and let diffTS(k) = TS(k + 1) − TS(k), k = 1, 2, ..., N − 1, where N is the length of TS. Based on the above description, the final time of arrival T̂_AA is the estimate at which consecutive values of TS stabilize, i.e., where diffTS stays near zero. (18)

Figure 14a shows an example of the env-two-stage result. Under multipath effects and environmental noise, compared with the Peak method (error of 15 cm), the algorithm performs better (error of 6 cm). Figure 14b presents a case where the algorithm does not apply cubic spline interpolation: spurious peaks always occur. It can be found that, combined with cubic spline interpolation, the error becomes smaller.
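The following sketch captures the env-two-stage idea under our own simplifications: the spline envelope is fitted through the local maxima of |GCC|, and the plateau rule for picking the final estimate from the threshold sweep is an assumed reading of the diffTS criterion, not the paper's exact code.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

def env_two_stage(r, a_min=0.3, step=0.01):
    """First-path ToA from a GCC magnitude curve (sketch, assumptions above)."""
    r = np.abs(np.asarray(r, dtype=float))
    peaks, _ = find_peaks(r)
    if len(peaks) >= 4:                      # envelope through local maxima
        env = CubicSpline(peaks, r[peaks])(np.arange(len(r)))
    else:
        env = r                              # too few peaks: use |GCC| directly
    peak_pos = int(np.argmax(env))
    peak_val = env[peak_pos]
    candidates = []
    for a in np.arange(a_min, 1.0 + 1e-9, step):
        # first sample before the main peak exceeding the fraction a of it
        idx = np.flatnonzero(env[: peak_pos + 1] >= a * peak_val)
        candidates.append(int(idx[0]) if len(idx) else peak_pos)
    ts = np.array(candidates)
    stable = np.flatnonzero(np.diff(ts) == 0)  # assumed diffTS plateau rule
    return int(ts[stable[0]]) if len(stable) else int(ts[-1])
```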
Improved Identification of ID Sub-Bands

To realize efficient ID identification of the different sub-bands, the fractional Fourier transform (FrFt) is utilized to process the Chirp signals. The FrFt can be understood as the representation of the signal in the fractional Fourier domain formed by rotating the coordinate axes of the time-frequency plane counterclockwise around the origin by an arbitrary angle [50]. The FrFt of a signal x(t) is expressed as:

X_p(u) = ∫ x(t) K_p(t, u) dt, (19)

where the kernel K_p(t, u) is defined as:

K_p(t, u) = A_α exp(iπ(t² cot α − 2tu csc α + u² cot α)),

with A_α = √((1 − i cot α)/2π) [51]. As for the Chirp identification, the optimal α is defined as 2 arccot(−(f_0 − f_1)/f_s)/π, where f_s is the sampling rate of the system. Based on the FrFt with the optimal α, the designed signal is transformed into an impulse-like response; the output is shown in Figure 15. The result of the FrFt is not disturbed by the single-frequency signals.

The center frequency f_center of the Chirp signal is recovered from the location x = argmax_u K(u, p) of the impulse-like peak in the fractional domain (Equation (20)). According to Equation (20), the type and frequency range of the Chirp can be identified. Since the signal bandwidth is preset (1 kHz), the type of the Chirp signal can be identified based on the value of f_center. After that, according to the result of the FrFt, a short-term Fourier transform (STFT) is used to identify the single-frequency signals. Figure 16 illustrates the results of the FrFt and STFT at an SNR of 5 dB. The different nodes can be distinguished by the proposed method.
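Rather than reimplementing the discrete FrFt, the sketch below uses an equivalent trick for chirp identification: multiplying the input by the conjugate of a candidate chirp (dechirping) turns a matching chirp into a single tone that the FFT concentrates into one sharp bin, much as the FrFt does at the optimal order. The peak-to-median score and the band list are our assumptions; this is a stand-in, not the paper's detector.

```python
import numpy as np

FS = 48_000

def dechirp_score(x: np.ndarray, f0: float, f1: float, fs: int = FS) -> float:
    """How strongly x contains a chirp sweeping f0 -> f1 over its duration."""
    t = np.arange(len(x)) / fs
    k = (f1 - f0) * fs / len(x)              # sweep rate of the candidate
    ref = np.exp(-1j * (2 * np.pi * f0 * t + np.pi * k * t ** 2))
    spec = np.abs(np.fft.rfft(x * ref))      # matching chirp -> one sharp bin
    return float(spec.max() / (np.median(spec) + 1e-12))

def identify_band(x, bands=((18e3, 19e3), (19e3, 20e3), (20e3, 21e3),
                            (21e3, 22e3), (22e3, 23e3))):
    """Pick the (band, direction) whose dechirped spectrum is most peaked."""
    scores = {}
    for f0, f1 in bands:
        scores[(f0, f1, "up")] = dechirp_score(x, f0, f1)
        scores[(f0, f1, "down")] = dechirp_score(x, f1, f0)
    return max(scores, key=scores.get)
```

Because a superimposed single-frequency tone stays spread out after dechirping, it does not disturb the score, mirroring the paper's observation that the FrFt result is unaffected by the tones.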
Hybrid Channel Access Schemes

According to Section 3.1, due to MAI, the acoustic channel must be allocated and used efficiently and fairly when a crowd of multiple nodes performs mutual ranging [52]. To solve this optimization problem, Ref. [53] selects a signal sequence, but it assumes perfect synchronization between the transmitted orthogonal signals and no multipath. In actual scenarios, there are only pseudo-orthogonal near-ultrasonic signals; that is, there will still be some interference between the signals. This makes near-ultrasonic mutual ranging face an extreme conflict challenge, as shown in Figure 17, so algorithms are needed to avoid conflicts as much as possible. Here, a hybrid MAC protocol is proposed to deal with the crowd conflict situation.

Figure 17. Simulation of multi-node near-ultrasonic ranging conflict. Each row represents the time axis of one node; "+" indicates the moment a sender sends a signal, "o" the moment a receiver receives the signal, and "x" the end of the received signal. Node 6 decides to send a signal at the red "+" moment, and the signal arrives at all other nodes in turn; the arrival times are marked with red "o" in the other rows and end at red "x". Because of the long propagation delay, when the signal sent by node 6 has not yet reached node 8, node 8 mistakenly thinks the channel is idle and sends a signal at the blue "+" moment; its arrivals at the other nodes are marked with blue "o" and end at blue "x". In this process, node 8's signal collides with the signal sent by node 6.

• CSMA/CA
The Carrier Sense Multiple Access (CSMA) method improves on the ALOHA method. After monitoring the channel, CSMA can adopt three types of backoff algorithms. Because of the large difference between the transmitted and received signal energy in a wireless local area network, it is impossible to monitor while sending, so the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) method cannot be used. Near-ultrasonic signals on mobile phones allow duplex transmission and reception, but more time resources would be wasted due to the long propagation delay of near-ultrasonic signals. Therefore, the collision-avoidance idea of CSMA/CA is used for channel access. In CSMA/CA mode, each node can use the Distributed Coordination Function (DCF) and independently competes for the channel transmission right. When a node needs to send a signal, it first backs off randomly to avoid collision (unless the channel has not been used recently and is idle). A node that successfully receives a frame needs to send an acknowledgment immediately; if the sender does not receive an acknowledgment, it doubles the number of backoff time slots and retransmits until the upper limit of the number of retransmissions is reached [54].

• FDMA
Frequency division multiple access (FDMA) divides the total bandwidth into multiple orthogonal channels, and each user occupies one channel. Using FDMA, multiple nodes can send ranging signals at the same time, so that more ranging tasks can be completed per unit time.
However, when the total bandwidth resource is limited, the more mutually orthogonal frequency bands are divided, the narrower the bandwidth of each band, which reduces the accuracy of delay estimation. Therefore, if only FDMA is used, the number of concurrent nodes and the node capacity are limited.

• Chirp BOK
Because of the autocorrelation and energy aggregation characteristics of the Chirp signal in the time and frequency domains, Chirp BOK is a spread-spectrum communication scheme that utilizes up-Chirp and down-Chirp signals to transmit binary data. However, if only the Chirp BOK method is used for signal encoding, the amount of data that can be transmitted per unit time is small, resulting in an excessively long total signal length and a low refresh rate for multi-node ranging.

The 20 kinds of signals in the near-ultrasonic time-frequency diagram designed in this paper can uniquely encode 20 nodes in a cluster, with different signals sent by different nodes. Compared with the frame structure in IEEE 802.11, the signal frames for social distancing scenarios have the following differences: (1) the frame is mainly used for encoding nodes and delay estimation, and the frame length is relatively fixed; (2) in social distance measurement, if a control frame were added, its length would be similar to that of the frame body, and an acknowledgement (ACK) frame would significantly increase the mutual ranging period, so the social distance measurement scenario introduces neither ACK confirmation frames nor control frames. The protocol is therefore re-designed for the social distancing setting:

1. The near-ultrasonic signal is used for ranging and coding. Since the length of the frame is close to the maximum signal propagation delay between nodes, a longer propagation delay will increase the probability of collision.
Therefore, on the basis of CSMA/CA and drawing on the idea of P-persistence, when the channel is idle and the backoff is over, a node no longer transmits with 100% probability but transmits with probability P;
2. Based on the designed signals, and combining the pseudo-orthogonal characteristics of FDMA and Chirp BOK, multiple pseudo-orthogonal signals can be sent simultaneously;
3. In the CSMA/CA method, when a node needs to send data for the first time (rather than retransmitting after a failure), it may send after waiting for the Distributed Inter-frame Spacing (DIFS) if the channel is idle. However, in social safety distance perception, at the beginning of each round of node-cluster mutual ranging, nodes may all try to send at once, so a backoff scheme is adopted for all nodes.

In a round of node mutual ranging, each node only needs to send a signal once. After each node is assigned a different signal, each node first randomly selects a backoff time: even if the channel is idle, nodes back off randomly for a period of time before sending. If there are N nodes in total, node i (i = 1, 2, ..., N) randomly selects an integer Z_i in the range [0, CW_i − 1] and sets its backoff counter to Z_i × T_slot, where T_slot is the length of a time slot and CW_i is determined by the node density and the motion status of node i. The node monitors the channel for an inter-frame space (IFS). If the channel is idle (the channel is also considered idle when only a pseudo-orthogonal signal is detected), the node starts to count down its backoff counter. If an orthogonal frame is detected, the counter is suspended; after the frame ends, the node again monitors the channel for the IFS duration, and if the channel is idle the counter continues to decrease. When the counter reaches zero, the node transmits with probability P and defers to the next time slot with probability (1 − P). In the latter case, if the channel is still idle, it again transmits with probability P and defers with probability (1 − P), and so on, unless another node starts to send a signal.
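A toy slot-level simulation of rules 1-3 is sketched below; all numeric parameters (contention window, persistence probability, frame length in slots) are chosen by us for illustration, and pseudo-orthogonal concurrency and propagation delay are not modelled, so this only demonstrates the P-persistent backoff mechanics.

```python
import random

def run_round(n_nodes=20, cw=16, p=0.6, sim_slots=200, frame_slots=3):
    """One mutual-ranging round: every node backs off a random number of
    slots even on an idle channel, then transmits with probability p when
    its counter reaches zero (assumed parameters; see lead-in)."""
    counters = {i: random.randint(0, cw - 1) for i in range(n_nodes)}
    busy_until, sent, collisions = 0, set(), 0
    for slot in range(sim_slots):
        if slot < busy_until:
            continue                     # channel busy: counters are frozen
        ready = [i for i, c in counters.items()
                 if i not in sent and c == 0 and random.random() < p]
        for i in counters:               # idle slot: count down the backoffs
            if i not in sent and counters[i] > 0:
                counters[i] -= 1
        if len(ready) == 1:
            sent.add(ready[0])
            busy_until = slot + frame_slots
        elif len(ready) > 1:
            collisions += 1              # simultaneous starts collide
            for i in ready:              # collided nodes draw a new backoff
                counters[i] = random.randint(0, cw - 1)
            busy_until = slot + frame_slots
    return len(sent), collisions

print(run_round())  # (nodes that completed their transmission, collisions)
```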
Secondly, signals from multiple nodes may collide and overlap at the receiver. The receiving end first uses the fractional Fourier transform (FrFT) to decode the multiple Chirp components in the received signal. Then, according to whether the decoded band carries an up-chirp or a down-chirp, it looks for the corresponding possible single-frequency signal; if the energy of that single-frequency band in the received signal exceeds a threshold, the single-frequency signal is taken to be present (a minimal sketch of such a band-energy test is given after the experimental setup below). This process decodes the signals of the different nodes, after which the time delay is estimated by the env-two-stage method to obtain the time stamps of the signals of the different nodes. Finally, for the MAI problem, we propose a new MAC protocol that integrates CSMA/CA, FDMA, and Chirp BOK. With this protocol, conflicts in mutual ranging between different nodes are largely avoided, the mutual ranging period is shortened, and the efficiency of mutual ranging is improved.

Experiment of Ranging In order to further verify the applicability of the proposed ranging method, three experimental cases are carried out. The ranging results are compared across the traditional method that uses only the maximum of the cross-correlation (referred to as Peak) [41,48,55-57], the two-stage search alone [40], and the proposed env-two-stage method. The two smartphones are a Huawei Mate 40 Pro and a Huawei P20 Pro. Each experiment is conducted for 150 runs. Figure 19 shows the experimental scenarios: Case 1 LoS (Figure 19a) Indoor, quiet, meeting room: The experimental ranging distances are 1 m, 2 m, 3 m, 4 m, and 5 m. There is no obstruction between the two smartphones, and signals are transmitted directly between them. The environmental temperature is 30 °C; Case 2 NLoS by people (Figure 19b) Indoor, quiet, corridor of nucleic acid amplification testing: The experimental ranging distance is 3 m, and a human body forms a shield at 20 cm from one of the smartphones to simulate people holding mobile phones.
The environmental temperature is 30 °C; Case 3 NLoS in canteen (Figure 19c) Indoor, noisy, canteen: The experimental ranging distance is 1.5 m and the smartphones are placed on the table, with an acrylic board used as a shield between the mobile phones to simulate phones placed on the table while people eat. The environmental temperature is 22 °C.

To facilitate the collection of experimental data, this article uses Android Studio to develop Android-side testing software for basic interface calls and system testing. At the same time, the Netty open-source framework is used to develop asynchronous network applications for the test. The host computer controls multiple mobile phones to conduct large amounts of data collection and ranging experiments automatically. Figure 20 shows the mobile phone test software interface. The SEND and RECORD buttons on the interface are used for customized acoustic signal sending and recording in signal design and testing. The CONNECTION button is used for communication between multiple mobile phones and the host computer, and the host computer can send instructions for receiving and sending sound signals to the individual mobile phones.
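Returning to the receiver pipeline sketched in the system overview, the following minimal Python example shows one way the single-frequency band-energy test could be implemented; the analysis window, the bandwidth around the candidate tone, and the threshold ratio are illustrative assumptions, and the FrFT-based chirp decoding is not reproduced here.

```python
import numpy as np

def tone_present(x, fs, tone_hz, half_bw_hz=50.0, thresh_ratio=5.0):
    """Decide whether a candidate single-frequency component is present.

    x: received samples; fs: sample rate (e.g., 48,000 Hz as in the tests).
    half_bw_hz and thresh_ratio are illustrative tuning assumptions: the
    mean spectral energy density in a narrow band around tone_hz must
    exceed thresh_ratio times the mean density of the whole spectrum.
    """
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs > tone_hz - half_bw_hz) & (freqs < tone_hz + half_bw_hz)
    return spec[band].mean() > thresh_ratio * (spec.mean() + 1e-12)

# Synthetic check: a noisy 19,250 Hz tone should be detected.
fs = 48000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 19250 * t) + 0.1 * np.random.randn(t.size)
print(tone_present(x, fs, 19250.0))  # expected: True
```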
To use the near-ultrasonic signal for time delay estimation and for encoding and decoding, it is necessary to obtain the original, unprocessed acoustic signal. Generally, however, when the mobile phone is used to answer a call, its multiple microphones cooperate to reduce call noise, and the audio is processed automatically. In this article, the AudioSource parameter is set to UNPROCESSED, so that the original audio PCM file can be obtained on Android 7 and above; an algorithm for converting to a WAV file is then added to the application, and the file header is added to the WAV file. At the same time, SAMPLE_RATE_INHZ (the sampling rate) is set to 48,000 Hz and AUDIO_FORMAT (the data format) is set to 16-bit PCM to obtain high-quality sound signals.

Notably, in actual situations people may hold mobile phones in various postures. In order to verify the signal characteristics generated by different poses, an acoustic imager is utilized. The acoustic imager can display the sound field strengths of the signal by means of a heat map. The four different poses, the signal cross-correlation, and the imaging results are shown in Figure 21. Figure 21a-d shows that, although the sound signal is partially attenuated, the sound field strengths of the acoustic signals from the mobile phone in the four postures are concentrated at the location of the speaker, which shows that the signals from the mobile phone in these postures can be regarded as LoS. Figure 21e-h illustrates that the signal correlation performance is good under all four postures. This result also shows that the proposed scheme is not affected by the gesture of holding the smartphone or by a clothes pocket. In the absence of obvious obstacles, the designed signal sent by the smartphone can be regarded as a LoS signal (Case 1).

MAC Simulation Settings Based on the experimental results, the proposed hybrid MAC simulation parameters are set up as shown in Table 2. The evaluation indicators are the total period of node-group mutual ranging and the number of nodes in conflict. The results will be compared with ALOHA.
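As a concrete illustration of the access procedure being simulated, the following minimal Python sketch implements the p-persistent backoff decision described earlier; CW = 8 and P = 0.8 follow the simulation settings (Table 2), while the slot length is an illustrative assumption of this sketch.

```python
import random

T_SLOT = 0.005   # slot length in seconds (illustrative assumption)
CW = 8           # competition window, per the simulation settings
P = 0.8          # p-persistence probability, per the simulation settings

def initial_backoff():
    """Random initial backoff: Z in [0, CW-1], counter = Z * T_slot."""
    return random.randrange(CW) * T_SLOT

def step(channel_idle, counter):
    """Advance one slot; return (send_now, new_counter).

    Pseudo-orthogonal traffic counts as an idle channel, so the caller
    passes channel_idle=True in that case; non-orthogonal traffic
    suspends the countdown.
    """
    if not channel_idle:
        return False, counter
    counter -= T_SLOT
    if counter > 0:
        return False, counter
    if random.random() < P:       # send with probability P
        return True, 0.0
    return False, T_SLOT          # defer one slot and try again

counter, sent = initial_backoff(), False
while not sent:
    sent, counter = step(channel_idle=True, counter=counter)
print("frame transmitted")
```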
Table 2. Simulation parameters (excerpt): competition window, 8; simulation step, 0.1 mm; speed of sound, 343 m/s; CSMA, P-persistent (P = 0.8).

Figure 22 shows the signal spectrum in the time and frequency domains and the cross-correlation results in Case 1 at 5 m. It can be seen that, in this case, signal reverberation and multipath are not serious, the time delay is short, and the peak of the cross-correlation is clear. The ranging results in Case 1 are shown in Figure 23. The box figure illustrates that in Case 1 the error distribution produced by env-two-stage is more concentrated, with fewer outliers and smaller errors. Notably, negative errors are used only in the box figure to show the distribution of the errors; in the CDF plot and the summary table, the error is the absolute (positive) error. Since the influence of multipath is not serious here, the cubic spline interpolation method has the greatest correction effect on the error. Figure 24 shows the CDF of the different ranging methods in Case 1. From the CDF results in Figure 24a, under LoS, Peak and two-stage perform similarly, while env-two-stage obtains a better performance in Figure 24b. These results demonstrate that env-two-stage gives clearly better results than the comparison algorithms under LoS. Figure 25 shows the spectral (a) and cross-correlation (b) properties of the signals in Case 2 at a test distance of 3 m under NLoS, where the barrier is a person. Obviously, the reverberation of the signal is very serious and it is difficult to search for the correct time delay.

Case 2 NLoS by People The experiment results in Case 2 are shown in Figure 26.
As can be seen in Figure 26a, under NLoS the error distribution of env-two-stage is closer to a normal distribution than those of the other methods. In addition, env-two-stage achieves a mean ranging error of 0.206 m, the best among the compared algorithms. The reason is that, under body occlusion, the envelope can filter small-scale environmental noise while the two-stage search can overcome the multipath effect.

Case 3 Canteen In Case 3, as Figure 27a presents, the signal is reflected by the desktop and the glass plate, which results in poor signal quality. Figure 27b demonstrates that in Case 3 the first peak and the largest peak are indistinguishable in the signal after cross-correlation processing. Hence, traditional methods cannot be applied in Case 3. Figure 28 demonstrates that in Case 3 env-two-stage is still the best method. In Figure 28a, env-two-stage shows excellent performance in general. In Figure 28b, although the median error of two-stage is smaller than that of env-two-stage, it has more outliers. In contrast, the traditional Peak method cannot work in Case 3.

All the experimental results, obtained in Case 1, Case 2, and Case 3, are summarized in Table 3. The results verify that the detection accuracy of the env-two-stage algorithm is significantly better than that of the traditional Peak and two-stage algorithms, especially in NLoS scenes. There may be two reasons. The first is that the envelope can remove small-scale multipath and environmental noise. The second is that the two-stage part of the algorithm can find the first path in the case of severe reverberation. This also explains why, in Case 1 at a distance of 1 m, the Peak algorithm obtains a better performance: noise and multipath are not serious in that situation.
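The following minimal Python sketch illustrates the envelope-plus-first-path idea credited here for env-two-stage's robustness; the relative threshold and the simple peak test are illustrative assumptions of this sketch and do not reproduce the paper's exact two-stage search.

```python
import numpy as np
from scipy.signal import hilbert, correlate

def env_first_path_delay(rx, ref, fs, rel_thresh=0.4):
    """Estimate the arrival delay as the first strong envelope peak.

    rx: received samples; ref: known transmitted signal; fs: sample rate.
    rel_thresh is an illustrative assumption: the first local maximum of
    the cross-correlation envelope exceeding rel_thresh times the global
    maximum is taken as the first path, instead of the global (possibly
    multipath-shifted) maximum used by the plain Peak method.
    """
    xc = correlate(rx, ref, mode="full")
    env = np.abs(hilbert(xc))        # envelope suppresses small-scale ripple
    peak = env.max()
    lag0 = len(ref) - 1              # index of zero lag for mode="full"
    for i in range(lag0 + 1, len(env) - 1):
        if env[i] >= rel_thresh * peak and env[i - 1] <= env[i] >= env[i + 1]:
            return (i - lag0) / fs   # first qualifying peak = first path
    return (int(np.argmax(env)) - lag0) / fs  # fallback: global maximum

# distance_m = 343.0 * env_first_path_delay(rx, ref, fs=48000)
```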
MAC Simulation Based on the experimental results, the pure contention channel access protocol and the proposed hybrid channel access protocol are simulated, respectively. In each simulation, different numbers of nodes are randomly placed within a range of five meters in diameter. In order to simulate differences in people's estimation of social distance, the distance between nodes is kept greater than 0.8 m. The simulations were performed 100 times to compare the efficiency of mutual ranging and the conflict situation. The specific parameters are shown in Table 2. Figure 29 shows the relationship between node conflicts and personnel density under CSMA/CA. It is found that as the node density increases, the total period of node-cluster mutual ranging and the probability of conflict increase continuously. A total mutual ranging period of more than 500 milliseconds is too long, and the refresh rate is too low for crowds whose topology changes dynamically. Conflict avoidance reduces the conflict probability, but as the node density grows the conflict probability rises rapidly. For different numbers of nodes (node densities), 100 simulations were carried out, and the node-group mutual ranging period and the number of colliding nodes under the hybrid channel access method were obtained. The results are shown in Figure 30. It is found that as the node density increases, the total period of node-cluster mutual ranging continues to increase; however, since the pseudo-orthogonal signals based on FDMA and Chirp BOK can be partially concurrent, the conflict probability is zero up to a population density of 0.66 people/m², and as the population density increases further the conflict probability remains at a very low level. At the same time, the total ranging period can meet the demand. By adopting the hybrid channel access method, time resources can be utilized more evenly. Although signals in different frequency bands are orthogonal and can be concurrent, when the signals of multiple nodes are sent within a certain period the ranging accuracy in that period is affected; the hybrid channel access protocol adopts the idea of CSMA/CA to keep the concurrency within a certain range and thereby mitigates this problem. The performance of the channel access protocols is compared, as shown in Figure 31. Three conclusions can be drawn: 1. As the density of nodes (personnel) increases, it takes longer for node clusters to complete a round of mutual ranging under the three channel access protocols; 2. Compared with contention-only channel access, the hybrid channel access protocol proposed in this paper greatly shortens the total period of node-cluster mutual ranging and can complete the mutual ranging task in a shorter time, which helps cope with crowds whose topology changes over time; this improvement is more obvious at higher node densities; 3.
Compared with contention-only channel access, the hybrid channel access also greatly reduces the collision probability at the same node density, and the advantage is more obvious when the node density is larger. Under the hybrid access mode, the number of conflicting nodes is 0 for up to 10 nodes (corresponding to a node density of 0.51 people/m²), because there are 10 pseudo-orthogonal sub-channels in total and, when customized signals are assigned to nodes, pseudo-orthogonal signals (pseudo-orthogonality between signals in different frequency bands and between Chirp BOK codes) are preferentially selected.

Conclusions In this paper, a novel smartphone-based social distance detection technology using a near-ultrasonic signal is presented and evaluated. Through the comprehensive design of the signal and the channel access, high-precision, high-refresh-rate social distance measurement based on smartphones is realized. The main findings of the paper are as follows:
2022-09-30T15:27:33.150Z
2022-09-27T00:00:00.000
{ "year": 2022, "sha1": "85da794eec67495ed6d4016000cfd09c6e14d027", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/19/7345/pdf?version=1664512316", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "166ca4e5fa2dd6c11be83df54f00d41641e508f5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
219730031
pes2o/s2orc
v3-fos-license
Laparoscopic-based perivascular renal sympathetic nerve denervation: a feasibility study in a porcine model Background This study aims to evaluate the effects and safety of laparoscopic-based perivascular renal sympathetic nerve denervation (RDN) in a porcine model fed a high-fat diet. Method Thirty-six high-fat diet-fed Bama minipigs were randomly divided into an RDN group (n = 18), in which minipigs received laparoscopic-based perivascular RDN, and a sham group (n = 18). All pigs were fed the high-fat diet after the operation to establish a model of obesity-induced hypertension. Bama pigs in the RDN and sham groups were killed at 3 time points [2 days after RDN (n = 6), day 90 (n = 6) and day 180 (n = 6)]. Result The systolic blood pressure (SBP) and noradrenaline (NE) concentration in the kidney tissue were significantly lower in the RDN group than in the sham group at 2 days (113.83 ± 3.26 mmHg vs 129.67 ± 3.32 mmHg, P = 0.011, and 112.02 ± 17.34 ng/g vs 268.48 ± 20.61 ng/g, P < 0.001, respectively), 90 days (116.83 ± 3.88 mmHg vs 145.00 ± 4.22 mmHg, P = 0.001) and 180 days (129.33 ± 2.87 mmHg vs 168.57 ± 2.86 mmHg, P < 0.001, and 152.15 ± 16.61 ng/g vs 318.97 ± 24.84 ng/g, P < 0.001, respectively) after the operation. The diastolic blood pressure (DBP) was significantly lower in the RDN group than in the sham group at 90 and 180 days after the operation (72.17 ± 2.7 mmHg vs 81.50 ± 2.22 mmHg, P = 0.037, and 76.83 ± 2.75 mmHg vs 86.33 ± 2.22 mmHg, P = 0.021, respectively). Based on the pathological evaluation, the renal sympathetic nerve fascicles were successfully disrupted by radiofrequency energy after laparoscopic-based perivascular RDN, but the intima was intact. Tyrosine hydroxylase (TH) expression was decreased, while the expression of the S100 protein was increased in treated renal arteries after RDN. Conclusions Laparoscopic-based perivascular RDN prevented the occurrence and development of hypertension, and thus it may be an efficient and safe method for controlling blood pressure in an experimental model.

RDN has been regarded as a promising method for the treatment of resistant hypertension [3]. With minimal surface wounding, this technique is proposed to destroy the renal sympathetic nerve contained in the arterial wall, suppress the overactivated sympathetic nervous system, lower blood pressure and decrease the intake of medicine by patients suffering from hypertension. However, the results of clinical studies remain controversial in light of the outcome of the Symplicity HTN-3 trial [4]; the positive findings of the SPYRAL HTN-ON MED [5] and RADIANCE-HTN SOLO [6] trials published in the Lancet and presented at EURO PCR in May 2018 have provided some insights into the effectiveness of RDN. Comparing the results of different studies, the ablation devices used are presumed to be one of the significant factors contributing to the success of RDN. A reliable ablation method may be necessary for a successful RDN procedure. Since anatomical studies show that renal sympathetic nerves are located mainly near the adventitia and are not distributed through the intima of the renal arteries, we attempted to perform laparoscopic-based perivascular RDN in this study. The purpose of this study was to evaluate the safety and efficacy of laparoscopic-based perivascular RDN in a porcine model.
Materials and methods Thirty-six male Bama pigs (8 months old, 22.42 ± 0.78 kg) were purchased from Beijing Strong Century Minipigs Breeding Base [Beijing, China, approval number: SYXK(jing) 2018-0040]. All pigs were maintained at a controlled temperature and humidity on a 12:12-h dark-light cycle. All experimental procedures were approved by the Animal Care Ethics Committee of the Henan Provincial People's Hospital, and were conducted in accordance with the American Physiology Society's "Guides for the Care and Use of Laboratory Animals" published by the National Institutes of Health. Thirty-six Bama swine were randomly assigned into two groups: an RDN group (n = 18) and a sham group (n = 18). All animals were fed separately. A high-fat (4100 kcal/kg) diet consisting of protein (10%), fat (41%), carbohydrates (43%) and minerals (6%) was initiated on the first day after RDN. The daily high-fat diet intake was approximately 5% of the body weight of each animal, and was adjusted dynamically every 2 weeks. Before the operations, animals were provided a liquid diet for 2 days, followed by fasting for 12 h with free access to water. Venous blood was collected for measurements of renal function and lipid levels while pigs were under general anesthesia. Anticoagulant therapy was provided by an initial unfractionated heparin bolus (100 IU/kg IV), and 1000 IU was supplemented every additional hour. Renal arterial angiography and optical coherence tomography (OCT) (Light-Lab Imaging, Inc., USA) were performed to record the images of both renal arteries before the operation. Each minipig in the RDN group (n = 18) underwent bilateral laparoscopic-based perivascular RDN, and every minipig in the sham group (n = 18) underwent the same procedure except for radiofrequency ablation. Renal arterial angiography and OCT were performed again for follow-up before all swine were killed. Follow-up renal arterial angiography and OCT were performed to identify injury to the endothelium, lumen stenosis, thrombosis and any other abnormalities in the renal arteries of 6 pigs in the sham group and 6 pigs in the treatment group prior to killing at 3 time points [2 days (n = 6), 90 days (n = 6) and 180 days (n = 6) after RDN]. Immediately thereafter, the pigs were heparinized and euthanized under deep anesthesia. The kidneys were removed and stored in liquid nitrogen, and the renal arteries were removed and stored in formalin and liquid nitrogen for further processing prior to pathological and molecular biological detection, respectively. Anesthesia and surgical technique The pigs were sedated by administering an intramuscular injection of ketamine hydrochloride (5 mg/kg), midazolam (0.5 mg/kg), and atropine (0.5 mg) to induce anesthesia, and an infusion channel was established by venous puncture. Intravenous supplemental propofol (3 mg/kg), sufentanil (1 μg/kg) and vecuronium bromide (0.1 mg/kg) were administered before endotracheal intubation, after which the animal was connected to the anesthesia machine. Sevoflurane (0.5-1.5 MAC) was inhaled throughout the operation and its flow was controlled through the anesthesia ventilator (Dräger Fabius, Germany). The ventilator parameters were set as follows: tidal volume 12-15 ml/kg, respiratory rate 16-18 breaths/min, and an inspiration-to-expiration ratio of 1:2 to maintain anesthesia. Anesthesia was administered and monitored by a senior anesthesiologist. Pigs were placed in the lateral position and secured with straps.
The first incision was made approximately 2 cm below the intersection of the posterior axillary line and the rib margin, and pneumoperitoneum was achieved with CO2 insufflation. The second incision was made approximately 2 cm above the intersection of the midaxillary line and the spina iliaca. The third incision was made symmetric to the first incision in the anterior axillary line. A 10-mm trocar was then inserted into the second incision. Subsequently, two 5-mm trocars were inserted into the first and the third incisions (Fig. 1). A celioscope lens was introduced through the 10-mm trocar and 2 graspers were introduced through the two 5-mm trocars. After the selected renal artery was exposed and separated by the 2 graspers, a 7 Fr radiofrequency (RF) ablation catheter (Biosense Webster, Diamond Bar, CA, USA), connected to an RF generator (Biosense Webster, Diamond Bar, CA, USA), was introduced into the retroperitoneum through a 5-mm trocar and carried by a grasper. We applied the catheter from the adventitia of the renal arteries to deliver discrete radiofrequency ablations of 8 W or less, lasting up to 120 s each, at up to six points separated both longitudinally and rotationally. A saline solution was injected at the tip of the RF catheter to control the temperature. After the ablation of one renal artery, the incisions were sutured layer by layer, and then the contralateral renal artery was ablated as well. After surgery, gentamicin sulfate (10 mg/kg) was administered intramuscularly for 7 days. This technique was performed by experienced urologists. Body weight and detection of blood biochemical parameters Fasting pigs were weighed before the operation and every 2 weeks thereafter. Blood samples were collected from the superior vena cava after anesthesia at the 4 time points described above. Serum creatinine, total cholesterol (TC) and triglyceride (TG) levels were detected using an automatic biochemical analyzer (Rayto, Shenzhen, China). Serum neutrophil gelatinase-associated lipocalin (NGAL) and cystatin C levels and renal noradrenaline (NE) concentrations were detected using enzyme-linked immunosorbent assay (ELISA) kits (USCN Business Co. Ltd., Wuhan, China). Verification of laparoscopic-based perivascular renal nerve denervation BP was measured using a BP-2010E monitor (Softron, China) at baseline and 2, 90 and 180 days after surgery to verify the effectiveness and safety of the laparoscopic-based perivascular RDN procedure, and the change in BP compared with the baseline (ΔBP) was also calculated. The NE concentration in the renal tissue was detected using an ELISA kit (USCN Business Co. Ltd., Wuhan, China). Renal arteries stored in formalin were embedded in paraffin and sliced into 5-μm sections. Slides containing the renal vessels were stained with hematoxylin and eosin (HE) to evaluate the microscopic structures of the renal sympathetic nerve fascicles and the arterial wall. In addition, immunohistochemical staining for the S100 protein (ab868, Abcam, Cambridge, UK) and TH (tyrosine hydroxylase) (ab75875, Abcam, Cambridge, UK) was performed. An independent, experienced neuropathologist examined all histology sections for evidence of injury to the nerve, renal arterial walls and peri-arterial renal connective tissue containing the renal nerves in a blinded manner. The renal arterial tissue was homogenized in RIPA buffer (Biotime, Beijing, China) using a Polytron homogenizer and incubated at 4 °C for 30 min.
Lysates were centrifuged at 12,000g for 10 min at 4 °C; the supernatants were collected and the concentrations were determined using the BCA method. Equal amounts of solubilized proteins were separated on 10% polyacrylamide SDS gels and transferred onto polyvinylidene difluoride membranes (Millipore, USA). Membranes were incubated with 5% nonfat milk for 1 h at room temperature and then incubated with the indicated antibodies, rabbit anti-S100 (diluted 1:500), rabbit anti-TH (diluted 1:1000) (both from Abcam, USA) or rabbit anti-GAPDH (diluted 1:500) (Biotime, China), overnight at 4 °C. The membranes were incubated with a horseradish peroxidase-conjugated anti-rabbit secondary antibody (Biotime, China, diluted 1:2000 in TBST) for 1 h at room temperature and then detected using an ECL Kit (Millipore, USA) in 3 separate experiments, and the average of the 3 values was recorded. Statistical analysis Body weights of pigs, BP, ΔBP, serum TC/TG, Cr, NGAL, cystatin C, S100 protein, and TH levels in the renal arterial wall and NE levels in the renal tissue were compared between the RDN group and sham group at the same time points. Continuous data are presented as means ± standard errors (SE). The normality of the distribution was assessed with the Shapiro-Wilk test. Variables with normal distributions were compared using t tests, whereas variables with skewed distributions were compared with the Mann-Whitney U test. All statistical analyses were performed with SPSS 20.0 software (SPSS Inc., Chicago, IL, USA). P values < 0.05 were considered statistically significant. Result All 36 pigs underwent surgery. Of these animals, 18 underwent bilateral laparoscopic-based perivascular RDN, and the remaining pigs underwent sham operations. No side effects were observed and no pigs died unexpectedly during the experiment. Each artery in the RDN group was ablated at 6 points longitudinally and rotationally, the ablation points were uniformly distributed in the main renal artery, and the ablation time of every point was 120 s. The mean energy delivered to the tissues was 7.9 ± 0.43 W, the temperature was 43.95 ± 1.45 °C and the impedance was 210.78 ± 4.71 Ω. Body weight, serum total cholesterol, triglyceride, creatinine, cystatin C and neutrophil gelatinase-associated lipocalin levels After the pigs consumed the high-fat diet, body weight and serum TC and TG levels were significantly increased. The body weight of the Bama pigs increased significantly from 21.69 ± 0.78 kg at baseline to 64.15 ± 3.12 kg (P < 0.001) at day 180 in the RDN group, but the difference was not significant compared with the sham group. Serum TC and TG levels were increased from 2.56 ± 0.14 mmol/l and 1.07 ± 0.10 mmol/l at baseline to 3.64 ± 0.29 mmol/l (P = 0.004) and 1.73 ± 0.13 mmol/l (P = 0.001), respectively, at day 180 in the RDN group, but were not significantly different from the values of the sham group. The serum creatinine, cystatin C and neutrophil gelatinase-associated lipocalin (NGAL) levels were not significantly different between the two groups (Fig. 2). Changes in blood pressure at the 4 time points Before the surgery and high-fat diet feeding, the baseline systolic blood pressure was 127.67 ± 2.67 mmHg in the RDN group and 128.78 ± 2.08 mmHg in the sham group (P = 0.743), while diastolic blood pressure was 75.61 ± 1.70 mmHg in the RDN group and 74.50 ± 2.87 mmHg in the sham group, and the differences in SBP and DBP were not significant between the two groups (P = 0.678).
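As a purely illustrative aid, the following Python sketch mirrors the normality-gated comparison described in the statistical analysis above (Shapiro-Wilk, then t test or Mann-Whitney U); the group data here are hypothetical numbers, not the study's measurements.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Shapiro-Wilk-gated two-group comparison, as described in the text."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Hypothetical SBP readings (mmHg) for six animals per group:
rng = np.random.default_rng(0)
rdn = rng.normal(116.8, 3.9, 6)
sham = rng.normal(145.0, 4.2, 6)
print(compare_groups(rdn, sham))
```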
Arteriography, optical coherence tomography and pathological evaluation of the arterial lumen and arterial wall Arteriography is the gold standard for identifying narrowing of the vascular lumen, while OCT, which has a high axial resolution of 10-20 μm, accurately visualizes arterial wall lesions. All renal arteries were assessed using arteriography and OCT scanning. Spasms were observed immediately after laparoscopic-based perivascular RDN (Fig. 4), but no spasms were observed during subsequent assessments on days 2, 90 and 180. No aneurysmal changes, thrombi or other abnormalities were noted in the lumen or arterial wall in the 180-day study. HE staining did not reveal thrombi, dissections, aneurysms, perforations, hematomas, neointimal formation, or negative remodeling. Representative images of the arterial lumen at different time points are shown in Fig. 5. Fig. 5 Representative images of renal arteries in the sham group and RDN group captured on days 2, 90 and 180 after surgery. a, e, i Representative images of untreated renal arteries (RA, OCT, and HE staining, respectively); the renal arterial wall appears intact without evidence of injury or inflammation. b, f, j Representative images of the renal artery 2 days after RDN (RA, OCT, and HE staining, respectively); no obvious injury was observed in the arterial wall using HE staining. c, g, k Representative images of the renal artery 90 days after RDN (RA, OCT, and HE staining, respectively); no spasms, stenosis, plaques or dissection were observed. d, h, l Representative images of the renal artery 180 days after RDN (RA, OCT, and HE staining, respectively); no spasms, stenosis, plaques or dissection were observed. Pathological and immunohistochemical evaluations of nerve fascicles Two days after laparoscopic-based perivascular RDN, HE staining of the affected arterial section indicated that the nerve fascicles surrounding the renal arteries exhibited vacuolization and nuclear pyknosis, and the endoneurium became disorganized. In contrast, the tissues of the sham group were intact. The S100 protein is a marker of Schwann cells (SCs), which wrap around the axons of nerve cells. TH is the rate-limiting enzyme in catecholamine synthesis within the postganglionic nerve terminals, and TH expression has also been used as a functional marker of the activity of renal sympathetic nerve fascicles. Immunohistochemical staining for the S100 and TH proteins revealed no significant differences between the sham group and the RDN group 2 days after laparoscopic-based perivascular RDN. At day 90, the affected nerve epineurium resembled a thick layer, the nerve bundles had atrophied, and endoneurial and perineural fibrosis were observed. Stained sections from the RDN group showed a marked decrease in TH protein levels but an increase in S100 protein levels compared with the sham group. At day 180, the thick layer and the endoneurial and perineural fibrosis were still observed. In addition, some disordered nerve regrowth mixed with connective tissue was observed at the site of radiofrequency ablation, along with a decrease in immunohistochemical staining for the S100 and TH proteins compared with the sham group but a slight increase compared with day 90. Representative images of the sympathetic nerve fascicles at different time points are shown in Fig. 6.
Fig. 6 Representative images of pathological sections of renal nerves from the sham group and RDN group at 2, 90 and 180 days after surgery. a, e, i Representative images of untreated nerves (HE staining and immunohistochemical staining for TH and S100, respectively); normal nerve fascicles surrounded by a thin perineurium and epineurium were observed in the untreated renal artery, and immunohistochemical staining for TH and the S100 protein showed moderate expression. b, f, j Representative images of nerve fibers at 2 days after RDN (HE staining and immunohistochemical staining for TH and S100, respectively); a broken perineurium surrounded the atrophic renal nerve and no inflammatory component was observed; immunohistochemical staining for TH and the S100 protein showed no obvious differences from the tissue before RDN. c, g, k Representative images of nerve fibers at 90 days after RDN (HE staining and immunohistochemical staining for TH and S100, respectively); the epineurium and endoneurium are thickened and fibrosis is observed in the perineural tissue and nerve fascicles; immunohistochemical staining showed a slight decrease in TH expression and an increase in S100 expression. d, h, l Representative images of nerve fascicles at 180 days after RDN (HE staining and immunohistochemical staining for TH and S100, respectively); some newly regenerated nerve fascicles were observed, and immunohistochemical staining for TH and S100 showed moderate expression. Expression of the TH and S100 proteins in renal arteries, and norepinephrine concentration in the renal tissue Two days after surgery, the expression of both the TH and S100 proteins was not significantly different between the RDN group and the sham group. Significantly lower TH expression was observed in the RDN group than in the sham group at 90 days and 180 days after surgery, while the expression of the S100 protein in the treated renal arteries was increased at day 90 and day 180 after RDN compared with the sham group (Fig. 7). Discussion According to several studies, RDN may be an efficient approach for treating resistant hypertension. Despite the negative results of Symplicity HTN-3 [4], the new findings from the SPYRAL HTN-ON MED [5] and RADIANCE-HTN SOLO [6] studies have encouraged researchers. The ablation devices used in these two studies may be one of the significant factors contributing to the success of RDN. In the present study, we performed laparoscopic-based perivascular RDN and a sham operation in two groups of minipigs. In the RDN group, the nerve fascicles contained in the renal arterial wall were successfully destroyed. Following high-fat diet intake, both SBP and DBP increased with the progression of obesity, consistent with several previous studies [7,8]. However, SBP, ΔSBP, DBP, ΔDBP and NE levels in the renal tissue of the RDN group were significantly lower than in the sham group in the current study, indicating that laparoscopic-based perivascular RDN may be a feasible strategy to prevent the emergence and development of hypertension, and may be a potential treatment for hypertension. Our previous study [9] and other studies [10,11] also confirmed that ablating renal nerves from the adventitia of the renal artery destroys the renal sympathetic nerve and suppresses the overactivated sympathetic nervous system. NE, the main catecholamine neurotransmitter in the sympathetic nervous system, is an important indicator of the function of the sympathetic nerve fascicles.
As shown in several studies [9,12], RDN results in a marked decrease in the NE spillover rate, indicating that RDN disrupts the function of the sympathetic nerve fascicles in the renal arteries and suppresses the overactivated sympathetic nervous system. In the present study, significantly lower NE levels were observed in the renal tissue from the RDN group than in the sham group after the operation, indicating decreased activity of the sympathetic nervous system. This decrease was also confirmed by the low expression of TH detected using immunohistochemical staining and western blotting. TH is the enzyme responsible for catalyzing the conversion of the amino acid l-tyrosine to l-3,4-dihydroxyphenylalanine (L-DOPA), a precursor of dopamine, which, in turn, is a precursor for the important neurotransmitter NE. Lower expression levels of TH indicate a reduction in NE synthesis rates. During the self-healing process, TH expression was partially recovered at day 180 compared with day 2 and day 90, but was still lower than in the sham group. This finding may indicate that the function of the injured peripheral nerve is partially but not completely restored, despite the regeneration of small nerve fascicles in the arterial wall. HE staining indicated vacuolization and nuclear pyknosis of the affected nerve fascicles, and the endoneurium became disorganized after RDN, followed by endoneurial and perineural fibrosis, the formation of a thick layer and disordered nerve regrowth at the ablation site. Most of the affected nerve fascicles were visually enlarged, which may be caused by fibroblast proliferation and fibrous scar formation. Moreover, the regenerating nerve fascicles infiltrated the surrounding tissue. This phenomenon is consistent with the process of peripheral nerve injury and regeneration [13] and has also been reported in other studies [14]. The S100 protein is a marker of SCs, which play an important role in the development and regeneration of nerve fascicles after peripheral nerve injury. The increased expression of the S100 protein indicates the proliferation of SCs, as well as the repair and regeneration of renal nerve fascicles. Compared with the results obtained on day 2, the expression of the S100 protein in the RDN group was significantly increased on day 90 and day 180, which may be caused by the new growth of nerve fibers. Undoubtedly, with the improvement of ablation devices, the availability of catheter-based endovascular RDN, which is performed in an interventional manner, may be substantially improved, as suggested by the results of the SPYRAL HTN-ON MED and RADIANCE-HTN SOLO trials. Nevertheless, the problems associated with this procedure should not be ignored. Currently, although the number of patients who have undergone percutaneous endovascular RDN is limited and the follow-up period is short, some side effects have been reported [15], i.e., lumen stenosis, arterial wall hematoma, intimal tears, intraluminal thrombosis, intima-media thickening [16][17][18][19], etc. Most of these may be caused by the direct delivery of RF energy or by stimulation of the endothelium by the wire and catheter. Theoretically, by decreasing direct stimulation of the arterial intima by RF energy, wires and catheters, perivascular RDN may be much safer than percutaneous endovascular RDN.
Based on the results of routine renal angiography, we performed OCT scanning, which has a high axial resolution of 10-20 μm [20], to visualize vascular lesions that are not apparent on angiography [21] and to verify the safety of laparoscopic-based perivascular RDN. According to the results of renal arteriography, OCT and pathological evaluation, we did not observe any obvious injury or other serious complications over the 180-day study of laparoscopic-based perivascular renal sympathetic nerve denervation, although transient spasms were observed in some segments of the renal arteries, either at the ablation sites or at other sites, immediately after RDN. This phenomenon has also been reported in other studies of endovascular RDN [21]; while the mechanism is unknown, it may partly be due to RF stimulation of the smooth muscle of the renal arterial wall. The spasms disappeared in the follow-up renal arteriography and OCT scans. No statistically significant differences were observed in the serum creatinine, cystatin C and neutrophil gelatinase-associated lipocalin levels at any time point, indicating that laparoscopic-based perivascular RDN did not obviously alter renal function. In summary, laparoscopic-based perivascular RDN destroys the nerves surrounding the renal arteries and decreases the elevated SBP, DBP and secretion of the sympathetically active molecule NE in obese pigs, but does not exert obvious effects on the arterial lumen and arterial intima. The intima is crucial for maintaining normal vascular structure and function and exerts atheroprotective effects in vivo by releasing substances that promote anticoagulation, inhibit inflammation, and induce vasodilatation [22]; injury to the endothelium may therefore be associated with a high risk of renal artery atherosclerosis. Without contrast agent infusion, X-ray exposure, or direct stimulation of the renal arterial intima, laparoscopic-based perivascular RDN also represents an alternative to traditional RDN, if needed by patients. Conclusions As shown in the present study, laparoscopic-based perivascular RDN successfully destroys sympathetic nerve fascicles and prevents the increasing trends in SBP, DBP, renal NE levels and TH expression in the renal arteries of Bama minipigs fed a high-fat diet, indicating that laparoscopic-based perivascular RDN prevents the occurrence and development of hypertension and may be an alternative strategy for controlling blood pressure. Limitation This study has some limitations. One limitation is that we did not establish a hypertension model before the operation because of the long duration required; we fed the minipigs a high-fat diet after RDN to increase their weight and blood pressure, and our aim was to analyze the effect of laparoscopic-based perivascular RDN on the increase in BP. This preclinical study was performed in a small number of Bama minipigs, and the results should be confirmed in additional high-quality trials with larger samples and longer follow-up. Moreover, as this was an initial attempt to confirm the feasibility of the approach, specialized catheters or forceps for perivascular RDN are unavailable, and appropriate devices must be designed and produced before this method is applied in clinical practice.
2020-06-18T14:36:35.058Z
2020-06-18T00:00:00.000
{ "year": 2020, "sha1": "d64acf4ca41dec5aaed9b2de99840f0c2e3f0bcc", "oa_license": "CCBY", "oa_url": "https://eurjmedres.biomedcentral.com/track/pdf/10.1186/s40001-020-00422-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d64acf4ca41dec5aaed9b2de99840f0c2e3f0bcc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1984151
pes2o/s2orc
v3-fos-license
A Model for Calcium Permeation into Small Intestine An in vitro model was used to simulate the intestinal permeation of calcium ions depending on the type of salt (carbonate, fumarate, citrate, or gluconate), its concentration (1.0, 2.5, 5.0, or 10 mmol/l), and pH (1.3, 4.2, 6.2, or 7.5). To simulate the conditions for calcium permeation in a patient in a fasting state, the solutions were placed in contact with segments of pig small intestine under conditions simulating the stomach, duodenum, jejunum, and ileum. The percent permeation, its rate, and its half-time were measured in each case. In all cases, the maximum permeation was seen at 1 mM concentration, depending on pH: 100% for carbonate at pH 1.3; 82% for fumarate at pH 6.2; 79.5% for citrate at pH 4.2, and 81% for gluconate at pH 7.4. The maximum rate of permeation (% h−1) was also observed at 1 mM: 2.16 for carbonate at pH 1.3, 0.29 for fumarate at pH 6.2, 0.26 for citrate at pH 4.2, and 0.28 for gluconate at pH 7.4. The shortest half-time of permeation (t1/2, h) for 1 mM solutions also depended on pH (in parentheses): carbonate 0.3 (1.3), fumarate 2.4 (6.2), citrate 2.6 (4.2), and gluconate 2.5 (7.4). The results suggest that calcium carbonate and citrate can be recommended to patients with normal gastric acidity and hyperacidity, while fumarate and gluconate can be recommended to patients with hypoacidity.

Calcium supplementation typically provides 1,000 mg elemental calcium as inorganic and organic salts [1,5]. Common forms of oral administration of calcium include calcium carbonate, gluconate, lactogluconate, glucolactobionate, dobesilate, citrate, lactate, and others [6]. The calcium content differs among the various salts. Among these salts, calcium carbonate is used most often because it contains the highest amount of elemental calcium, ∼40%. However, despite the large variety of calcium preparations available for calcium supplementation, their effectiveness is inadequate because of poor absorption after oral administration [6]. To make calcium absorption more effective, researchers look for substances that enhance its gastrointestinal absorption [3,7]. To learn how much substance passes through GI cell membranes after oral administration, both in vitro models using Franz diffusion cells and in vivo models are used in absorption studies [8][9][10]. The purpose of this study was to determine the degree and half-time of Ca(II) ion permeation depending on the type of calcium salt (carbonate, fumarate, citrate, or gluconate), its concentration (1, 2.5, 5, or 10 mmol/l), and the pH of the medium (1.3, 4.2, 6.2, or 7.5). The organic salts are commonly used in supplementation protocols [1,6]. Calcium carbonate was used as reference. The permeation of Ca(II) ions at these concentrations and pH values was measured using swine small intestine. Tested ions passed from the donor environment (stomach) to the acceptor environment, which corresponded to the natural condition of different parts of the GI tract (stomach, duodenum, jejunum, or ileum). An in vitro model was applied to study the permeation process. The results were used to estimate the permeation of the different solutions tested after oral administration to a patient in a fasting state, where gastric pH is about 1.3 [11]. Reagents The purity of all reagents was 99.9-99.99%, i.e., contaminants cannot be detected by conventional methods of analysis. Segments of pig small intestine were obtained and handled in accordance with the methodology of maintenance of organs and tissues intended for transplantation [12].
Once removed from the carcass and dissected, the intestines were washed with 0.9% NaCl solution until an absorbance of less than 0.02 at 278 nm was reached, and then quickly frozen at −20 °C until needed for the experiments. Body Fluids Body fluids simulating the natural conditions of certain gastrointestinal sections were used: artificial gastric juice containing pepsin at pH 1.3 to simulate the stomach, artificial intestinal liquids at pH 4.2 and 6.2 containing pepsin and pancreatin to simulate the duodenal and jejunal environments, and artificial intestinal fluid at pH 7.5 containing pancreatin to imitate the ileal environment [11]. Determination of Calcium Ions For the quantitative determination of Ca(II), a UV-Vis spectrometer (Marcel Media, France) was used following a validated method [13]. Research Model The experimental model consisted of a standard Franz diffusion cell [8]. The cell consisted of two chambers holding 2 ml each. One acted as the donor chamber (donor compartment, D) and the other as the acceptor chamber (acceptor compartment, A). The two chambers were kept at the same level and were separated by the small intestine tissue to be tested. Chamber D was filled with 2 ml of the artificial gastric juice in which a calcium salt was dissolved (carbonate, fumarate, citrate, or gluconate) at different concentrations (1, 2.5, 5, or 10 mmol/l). Compartment A was filled with 2 ml of the appropriate fluid simulating different sections of the small intestine at pH 1.3, 4.2, 6.2, or 7.5. After 0, 0.25, 0.50, 0.75, 1, 2, 3, 4, 5, and 6 h, the entire amount of liquid was collected from A and the absorbance was read at 570 nm to determine the amount of Ca(II). The experiment was designed following a Latin square 4×3 scheme [14]. The plan of the experiment is presented in Table 2. The process of Ca(II) penetration was traced according to the type of calcium salt: a1, carbonate; a2, fumarate; a3, citrate; or a4, gluconate; the calcium concentration: b1, 1 mM; b2, 2.5 mM; b3, 5 mM; or b4, 10 mM; and the pH of the acceptor environment: c1, 1.3; c2, 4.2; c3, 6.2; or c4, 7.5. Ions penetrated following first-order kinetics. The ion transfer rate constant (k) was calculated from the first-order kinetic equation, where C1 is the total Ca(II) in A at time t1 = 0 h and C2 is the total Ca(II) in A at t2 = 6 h. From the rate constant, the half-time of penetration is given by t1/2 = ln 2/k ≈ 0.693/k. Statistical Analyses The results are presented as the mean (x) of ten samples. The standard deviation (SD) and correlation coefficients (r²) were calculated. Correlations were calculated between calcium concentration and acceptor environment pH as well as the degree and half-time of Ca(II) penetration. The Student t test was used to establish statistical significance, set at p<0.05. The software packages Excel (Microsoft) and Statistica for Windows 5.1 (StatSoft Inc.) were used for all calculations. Results and Discussion The experiments were designed to simulate the conditions under which calcium salts are administered orally on an empty stomach. In such a case, the gastric pH is about 1.3 [11]. Calcium ion solutions passed from the chamber mimicking the stomach to others simulating different parts of the gastrointestinal tract, under conditions dependent on the chemical form of calcium, its concentration, and the pH in the acceptor chamber. The degree of ion migration from D to A varied: 9.6-100% for carbonate, 18.3-81.2% for fumarate, 17.8-79.5% for citrate, and 21.2-81.0% for gluconate.
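As an illustrative aid, the following minimal Python sketch computes a first-order rate constant and the corresponding penetration half-time from a single end-point measurement; treating the unpermeated fraction as exponentially decaying is an assumption made here for illustration and is not necessarily the paper's exact calculation.

```python
import math

def half_time(frac_permeated, t_hours):
    """First-order estimate of t1/2 from one end-point measurement.

    Assumes (illustratively) that the unpermeated fraction decays as
    exp(-k * t); then t1/2 = ln 2 / k ~ 0.693 / k. Note that 100%
    permeation makes this degenerate (ln 0), so partial transfer is used.
    """
    k = -math.log(1.0 - frac_permeated) / t_hours   # h^-1
    return math.log(2.0) / k

# 79.5% permeated after 6 h (the 1 mM citrate case at pH 4.2) gives
# roughly 2.6 h, consistent with the half-time reported in the abstract.
print(round(half_time(0.795, 6.0), 2))
```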
One hundred percent penetration was seen for 1 mM calcium carbonate. Increasing the concentration 10-fold caused a nearly 10-fold decrease (to 9.6%) in the degree of penetration. The effect of concentration on ion penetration and the correlations (r²) are given in Fig. 1. The degree of ion penetration for the organic forms of calcium was significantly different from that of the calcium carbonate solutions, with the exception of 5 mM fumarate. The lower the calcium concentration, the greater the degree of ion penetration: calcium citrate (r² = −0.908), carbonate (r² = −0.821), and fumarate (r² = −0.811). No such significant dependence was observed in the case of the calcium gluconate solutions (r² = −0.368). These in vitro results are consistent with previously published data on the effects of the dose and the molecular mass of the calcium salt anion (fumarate, gluconate, and chloride) on the absorption of Ca(II) ions: the amount of absorbed Ca(II) was not associated with the molecular mass of the anion [15]. The degree of calcium absorption in vivo, however, depends primarily on the needs of the body. It was found that at the age of 25, absorption is reduced to 20-30%, and after 35 years of age, it drops down to 15%. In patients with osteoporosis, the body's requirement for calcium increases significantly [1,4,16].

All tested substances were salts of a weak acid and a strong base, and their highest concentration was 10-fold lower than the concentration of hydrochloric acid in the stomach, conditions ensuring total ionization (α = 100%). Figure 2 presents the effect of acceptor environment pH on ion penetration and the correlations (r²). Interestingly, the degree of Ca(II) penetration at pH 1.3 was not the greatest and differed significantly depending on the type of salt. Only the Ca(II) ions from calcium carbonate penetrated completely into an environment of pH 1.3; under these conditions, penetration was 57.2% for ions from citrate, 39.7% from gluconate, and 18.3% from calcium fumarate. Calcium carbonate is not soluble in water, but in contact with gastric hydrochloric acid it changes into the chloride, becoming freely soluble and best absorbed [7]. This result suggests that calcium carbonate could be administered on an empty stomach or before meals. Calcium from carbonate did not pass into near-neutral to alkaline media. The environment with a pH of 4.2 favored 79.8% penetration of calcium from citrate, while more neutral conditions favored ions from fumarate (82%) and gluconate (81%). Penetration was directly correlated with the acidity of the acceptor environment in the cases of carbonate (r = −0.987) and citrate (r = −0.654), and inversely for fumarate (r = 0.900) and gluconate (r = 0.412), suggesting that calcium citrate could be used in patients with hyperacidity [1,17]. Calcium in citrate is bound to a weak organic acid, which favors absorption; the amount of ionized calcium remains stable for several hours even after pancreatic juice secretion [7].

The half-times of permeation (t1/2) of the different solutions tested and the respective correlations are shown in Fig. 3. The correlation was directly dependent on the concentration for carbonate (r² = 0.995), citrate (r² = 0.988), and fumarate (r² = 0.711), but not for gluconate (r² = 0.023). Calcium carbonate is one of the most commonly administered calcium salts due to its highest elemental content, ∼40%. If administered at a concentration of 1 mM (40 mg of calcium), 20 mg of ions would pass within 30 min.
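As a quick check, this estimate follows from the half-time reported above for 1 mM carbonate at pH 1.3:

$$m(t_{1/2}) = \tfrac{1}{2}\,m_0 = \tfrac{1}{2} \times 40\ \mathrm{mg} = 20\ \mathrm{mg}, \qquad t_{1/2} = 0.3\ \mathrm{h} \approx 18\ \mathrm{min} < 30\ \mathrm{min}$$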
Our results show that the longest penetration time (41.3 h) was recorded for 10 mM calcium carbonate. Thus, 10 mM calcium carbonate (400 mg elemental calcium) would result in the absorption of only 200 mg of this element in about 41 h. This means that the oral administration of 400 mg calcium (as carbonate) in a fasting patient would result in stomach retention of the element, limiting its use as a supplement. It is then preferable to administer the salt in smaller doses. Patients, however, usually prefer to take the entire daily ration of the supplement in a single dose, which would be of little help. In view of this, it seems that it would be much more effective to administer calcium gluconate or fumarate. The Ca(II) penetration half-times from 10 mM solutions of these salts were 8.22 and 15.68 h, respectively, i.e., 5.0 and 2.6 times shorter than for calcium carbonate. It should be noted, however, that the calcium content in gluconate is only ∼8.9%; supplementation with this salt therefore requires larger doses than the other calcium salts tested. To provide 500 mg of calcium, a patient should take up to 5.6 g of calcium gluconate (500 mg/0.089 ≈ 5.6 g), 2.4 g of citrate, 2.6 g of fumarate, and only 1.25 g of calcium carbonate (500 mg/0.40 = 1.25 g). This makes calcium fumarate an interesting choice [15-17]. Its effectiveness and tolerance in managing calcium and phosphorus disturbances in patients on dialysis due to chronic renal failure were determined in comparison with calcium carbonate treatment [16]. A similar metabolic compensating effect was also found when it was administered to female rats with experimentally induced osteoporosis [16]. In rats, the absorption of calcium fumarate is comparable with other calcium salts [18,19].

The effect of pH on the Ca(II) ion penetration half-time in the acceptor environment is shown in Fig. 4, and the penetration parameters of the selected salts depending on the conditions are presented in Table 3. A higher pH in the acceptor environment favored a longer migration time for calcium ions from carbonate (r² = 0.927) and citrate (r² = 0.682) solutions. With a higher pH, ions from fumarate (r² = −0.951) penetrated into the acceptor more rapidly. No significant relationship was observed in the case of calcium gluconate (r² = −0.055). These results indicate that calcium from carbonate and citrate is well absorbed in patients with normal stomach acidity and hyperacidity, while supplementation with fumarate and gluconate will be more effective in patients with hypoacidity.

Conclusion
On the basis of in vitro studies, we can draw conclusions on the absorption of the tested calcium salts after oral administration in a fasting state [10]. Taking into account the degree of Ca(II) penetration, the examined salts can be classified into groups of substances of high (>90%), medium (50-89%), or poor (<50%) absorption. Calcium carbonate will be well absorbed at a concentration of 1 mM (40 mg calcium) at pH 1.3. Substances with medium absorption include calcium citrate at 1 and 2.5 mM (40 and 100 mg of calcium, respectively) at pH values of 4.2 and 1.3, calcium fumarate at pH 6.2 and 7.5, and gluconate in an environment with a pH value of 7.5. The degree and half-time of Ca(II) penetration depend on the type of calcium salt, its concentration, and the acceptor pH. During oral supplementation with calcium salts, they should be administered at low doses to make them more accessible to the organism.
Suitable conditions for absorption of calcium salts may provide higher bioavailability and thus higher treatment efficacy during supplementation. Calcium from carbonate and citrate should be used in patients with normal stomach acidity and hyperacidity, while calcium fumarate and gluconate should be recommended for supplementation in patients with hypoacidity.
A Mesh Deformation Method for CFD-Based Hull Form Optimization

Computational fluid dynamics (CFD) is an effective tool for ship resistance prediction and hull form optimization. A three-dimensional volume mesh is essential for CFD simulation, and mesh generation requires much time and effort. Mesh deformation can reduce the time for mesh generation and simulation. The radial basis function (RBF) and inverse distance weighted (IDW) methods are well-known mesh deformation methods. In this study, the two methods are compared and a novel mesh deformation method for hull form optimization is proposed. For the comparison, a circular cylinder polyhedral mesh was deformed to the National Advisory Committee for Aeronautics (NACA) 0012 mesh. The results showed that the RBF method is faster than the IDW method, but the deformed mesh quality using the IDW method is better than that using the RBF method. Thus, the RBF method was modified to improve the deformed mesh quality. The centroids of the boundary layer cells were added to the control points, and the displacements of the centroids were calculated using the IDW method. The cells far from the ship were aligned to the free surface to minimize the numerical diffusion of the volume of fluid function. Therefore, the deformable region was limited by the deformed boundary, which reduced the time required for mesh deformation. To validate its applicability, the proposed method was applied to vary the bow shape of the Japan Bulk Carrier (JBC). The resistances were calculated with the deformed meshes. The calculation time was reduced to approximately one-third by using the result of the initial hull form as the initial condition. Thus, the proposed mesh deformation method is efficient and effective enough for CFD-based hull form optimization.

Introduction
Computational fluid dynamics (CFD) is one of the standard tools for estimating the resistance of a ship in calm water. Naval architects iterate over hull form variation, grid generation, and CFD calculation to minimize the resistance. Even when the hull form variation is extremely localized and small, the grid must be regenerated. Many shipbuilding and design companies are exerting efforts to automate the procedure to reduce the time and cost, and many studies on CFD-based design optimization have been conducted. An optimization algorithm that minimizes the number of iterations is most important, and a hull form variation method driven by the design parameters is essential. Kim and Yang [1] applied two surface modification methods for hull form optimization: one based on a radial basis function (RBF), the other on a sectional area curve. Kim and Yang's [1] RBF method uses only 6 control points as design variables to minimize the resistance of the Korea Research Institute for Ships and Ocean Engineering (KRISO) Container Ship (KCS) at three speeds. The resistance of the modified hull was evaluated using a method based on the Neumann-Michell theory, which uses only a surface mesh. Kim et al. [2] applied the same method used by Kim and Yang [1] to improve the resistance and seakeeping performance of the US Navy surface combatant David Taylor Model Basin (DTMB) 5415. Mahmood and Huang [3] optimized a bulbous bow to minimize the total resistance using a genetic algorithm. They used ANSYS FLUENT and GAMBIT (Ansys, Inc., Canonsburg, PA, USA) for resistance calculation and mesh generation, respectively.
A GAMBIT journal file was created to automate the hull form variation and volume mesh generation in accordance with the design parameters. Zhang et al. [4] proposed an improved particle swarm optimization algorithm, where Siemens STAR-CCM+ (Siemens Industry Software Ltd., Plano, TX, USA) was used for volume mesh generation and resistance calculation. The hull form was varied using an arbitrary shape deformation (ASD) technique proposed by Sun et al. [5]. The ASD technique is based on a B-spline and requires a control volume to be set up around the body with many control points and connections. Volume mesh deformation methods have been developed to simplify the optimization process and reduce the turnaround time, as shown in Figure 1. Mesh deformation is much faster than grid generation, and a simulation with a deformed mesh can use the results of the original mesh as the initial condition; such successive calculation also reduces the calculation time. Morris et al. [6] developed a mesh deformation method based on the RBF method, in which the control points of the RBF method were used as design parameters. The method was independent of both the flow solver and the grid generator. Morris et al. [6] applied the method to optimize airfoils with feasible sequential quadratic programming, and concluded that the method was extremely fast and efficient and that the deformed mesh quality was very high. Sieger et al. [7] compared the classical free-form deformation (FFD), direct manipulation FFD, and RBF methods, and concluded that the RBF method was much faster and more precise than the other two. Luke et al. [8] proposed a mesh deformation method based on inverse distance weighted (IDW) interpolation. Their method interpolated the translational and rotational displacements using the IDW method, and the parallelization of the algorithm was also described. They showed that the non-orthogonality of the boundary layer of a mesh deformed using the IDW method is better than that obtained with the RBF method if the rotation of the body surface is large. He et al. [9] applied the IDW method to optimize an airfoil that starts from a circle; to show the robustness of the IDW method, a two-dimensional (2D) mesh for the circle was deformed to the mesh of NACA 0012. They concluded that the IDW method is better than the RBF method in terms of the non-orthogonality of the boundary layer. The TransFinite Interpolation (TFI) method is also a popular and efficient method for structured grids; however, the TFI method is difficult to apply to polyhedral meshes because of the irregular distribution of mesh points [10].

In this paper, Section 2 introduces the RBF, IDW, and improved RBF methods. To compare the quality and time for deformation, a polyhedral mesh for a circular cylinder is deformed to the mesh for NACA 0012. The results show that the RBF method has problems with non-orthogonality in the boundary layer cells, while the IDW method takes much longer than the RBF method. The non-orthogonality of the improved RBF method is as good as that of the IDW method, and the turnaround time is shorter than with either method. To check the applicability of the improved RBF method, the polyhedral mesh for the Japan bulk carrier (JBC) resistance calculation is deformed and the resistance is calculated in Section 3. Because the mesh topology is identical to that of the original mesh, the result of the original mesh is used as the initial condition for the deformed mesh; therefore, the time for the solution to converge is reduced by two-thirds.
Mesh Deformation Methods
In this section, the RBF and IDW methods are introduced and compared, and an improved RBF method to remedy their shortcomings is proposed. A mesh for a circular cylinder was deformed to the mesh of NACA 0012 using the RBF, IDW, and proposed methods. The red points in Figure 2 denote the control points on the surface of the circle, and the blue ones indicate the displaced points on the surface of NACA 0012. The number of control points is 256, and the points move only in the vertical direction. The vertical movements make the normal vectors of the boundary faces rotate. The initial grid for the circular cylinder is shown in Figure 3. The grid is a polyhedral mesh generated using snappyHexMesh, a standard built-in utility of OpenFOAM (The OpenFOAM Foundation Ltd., London, U.K.). The number of cells is 103,160, and the thickness of the first boundary layer cell is 1% of the cylinder diameter. The top, bottom, right, and left boundaries were set as fixed boundary conditions, and the front and back boundaries as symmetric boundary conditions. To satisfy the fixed boundary condition, the grid points of the fixed boundary were added to the control points, with their displacements set to zero.

RBF Method
The RBF method is an interpolation method that uses the distances between a grid point and the control points; it has been described by Sieger et al. [7], Boer et al. [11], Botsch and Kobbelt [12], Jakobsson and Amoignon [13], and Michler [14]. The basic formula for the displacement is presented in Equation (1):

$$d(\mathbf{x}) = \sum_{i=1}^{N} w_i\,U(R_i) + a_0 + a_1 x + a_2 y + a_3 z \qquad (1)$$

$$U(R) = R^2 \ln R \qquad (2)$$

$$R_i = \lVert \mathbf{x} - \mathbf{x}_i \rVert \qquad (3)$$

Here, R_i is the distance between the grid point and the i-th control point, as defined in Equation (3), and U is a basis function. In this study, the thin plate spline (TPS) of Equation (2) is applied as the basis function; the TPS provides a minimal and smooth displacement distribution. Details about basis functions can be found in [15]. The unknowns of Equation (1), a_i and w_i, are obtained by solving Equation (4):

$$\begin{bmatrix} K & P \\ P^{\mathsf{T}} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{w} \\ \mathbf{a} \end{bmatrix} = \begin{bmatrix} \vec{v} \\ \mathbf{0} \end{bmatrix} \qquad (4)$$

The partitioned matrix K is determined by the distances between the control points, and P is composed of the coordinates of the control points. The column vector $\vec{v}$ contains the displacements of the control points.
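As a concrete illustration of Equations (1)-(4), the following minimal Python sketch fits and evaluates a TPS-based RBF displacement field; the control points and displacements are toy values, not the meshes used in this paper:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def tps(r):
    # Thin plate spline basis U(R) = R^2 ln R, with U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0.0, r**2 * np.log(r), 0.0)

def rbf_fit(ctrl, disp):
    """Solve Eq. (4), [[K, P], [P^T, 0]] [w; a] = [v; 0], for one displacement component."""
    n, d = ctrl.shape
    K = tps(np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), ctrl])                  # columns: 1, x, y (, z)
    A = np.block([[K, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([disp, np.zeros(d + 1)])
    return lu_solve(lu_factor(A), rhs)                      # dense direct (LU) solve

def rbf_eval(ctrl, coef, pts):
    """Evaluate Eq. (1) at arbitrary grid points."""
    n = len(ctrl)
    w, a = coef[:n], coef[n:]
    K = tps(np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ w + P @ a

# toy example: three 2D control points with vertical displacements
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
dy = np.array([0.0, 0.0, 0.1])
coef = rbf_fit(ctrl, dy)
print(rbf_eval(ctrl, coef, np.array([[0.5, 0.5]])))
```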
Equation (4) is solved with LU decomposition instead of an iterative method because the matrix is dense and small. Because the size of the matrix in Equation (4) is proportional to the number of control points, only every fourth point on the fixed boundary is added to the control points. Sheng and Allen [16] applied a greedy data reduction algorithm to reduce the matrix size and calculation time, and Coulier and Darve [17] developed the inverse fast multipole method to reduce the computational time. After the mesh deformation, the normal component of the displacement of the grid points on the symmetric plane is removed to satisfy the symmetric boundary condition. The deformed mesh using the RBF method is illustrated in Figure 4. The overall domain is deformed smoothly, but the boundary layer is thinner than in the initial grid and the non-orthogonality is worse. The maximum skewness and maximum non-orthogonality are compared in Table 1. The mesh deformation takes approximately 45 s. The results are similar to those of He et al. [9], who conducted a similar mesh deformation using the RBF and IDW methods with a 2D structured grid.

IDW Method
The IDW method is an interpolation method. The displacement of the grid points is calculated using Equation (9), where R_i denotes the distance as defined in Equation (3). M_i and T_i in Equation (10) are the rotation matrix and translation displacement of the boundary face, respectively. The weighting function is given by Equation (11).

$$d(\mathbf{x}) = \frac{\sum_i w(R_i)\,\mathbf{s}_i}{\sum_i w(R_i)} \qquad (9)$$

$$\mathbf{s}_i = M_i(\mathbf{x} - \mathbf{x}_i) + T_i \qquad (10)$$

$$w(R_i) = A_i \left[\left(\frac{L_{def}}{R_i}\right)^{a} + \left(\frac{\alpha L_{def}}{R_i}\right)^{b}\right] \qquad (11)$$

Luke et al. [8] suggested these values: a = 3, b = 5, and α = 0.25. L_def is recommended to be the maximum distance between mesh points, and A_i is the area of the boundary face. In this study, a, b, and α are set as 3, 0, and 0, respectively. Because of the irregular face area distribution of the boundaries, the thickness of the boundary layer becomes uneven. The calculation time is also reduced since the second term is not calculated. Figure 5 depicts the deformed mesh using the IDW method. The non-orthogonality of the boundary cells is much better than with the RBF method. The thickness of the boundary cells is almost equal to that of the initial grid, except around the leading and trailing edges, where it is slightly larger. The displacements of grid points far from the deformed boundary are smaller than those by the RBF method. The grid quality of the deformed mesh is compared with that of the initial grid in Table 2. The time for mesh deformation is approximately 115 s, which is ≈2.6 times longer than with the RBF method.
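A minimal Python sketch of the translation-only variant used here (the rotation term M_i is omitted, matching the choice b = α = 0 above; the face and point arrays are toy values, not data from the paper):

```python
import numpy as np

def idw_displacement(pts, face_c, face_disp, face_area, L_def, a=3.0, b=5.0, alpha=0.25):
    """Translation-only sketch of Eqs. (9)-(11): s_i reduced to the translation T_i."""
    R = np.linalg.norm(pts[:, None, :] - face_c[None, :, :], axis=-1)  # distances R_i
    R = np.maximum(R, 1e-12)                                           # guard against R -> 0
    w = face_area * ((L_def / R) ** a + (alpha * L_def / R) ** b)      # Eq. (11)
    return (w @ face_disp) / w.sum(axis=1, keepdims=True)              # Eq. (9)

# toy usage: two boundary faces, one pushing nearby interior points upward
face_c    = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
face_disp = np.array([[0.0, 0.1, 0.0], [0.0, 0.0, 0.0]])
face_area = np.array([1.0, 1.0])
pts       = np.array([[0.2, 0.5, 0.0], [0.8, 0.5, 0.0]])
print(idw_displacement(pts, face_c, face_disp, face_area, L_def=2.0))
```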
Improved RBF Method
The drawback of the RBF method is the non-orthogonality of the boundary layer. To reduce the non-orthogonality, the centroids of the first boundary cells are added as control points, and the displacements of these new control points are calculated using the IDW method. To calculate the translational and rotational displacements, the RBF calculation is repeated twice. First, the boundary face is deformed with only the initial control points, and the displacements of the centroids of the first boundary cells are calculated from the displacements of the deformed boundary face by the IDW method. Second, the displacements of the volume mesh grid points are calculated with the grid points of the deformed boundary and the centroids of the first boundary cells. To reduce the calculation time, only every fourth point of the grid points and centroids is used as a control point.

It is difficult to apply both the RBF and IDW methods to problems involving the free surface, because the grid around the free surface has to be aligned to the free surface to minimize the numerical diffusion of the volume of fluid (VOF) function. Therefore, the deformable region must be limited. The cells around the deformed boundary are designated as deformable cells, while those outside this region are set as frozen cells. The faces shared by the deformable and frozen cells are designated as fixed faces, and the points on the fixed faces are added to the fixed control points.

Figure 6 displays the deformed mesh. The non-orthogonality of the boundary cells is better than with the RBF and IDW methods, and the thickness of the boundary cells around the leading and trailing edges is well preserved. The cells in the yellow circle are deformable cells.
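The deformable-region limitation above reduces to a distance test on cell centroids. A minimal Python sketch, in which classify_cells and the radius r_deform are hypothetical helpers for illustration, not code from the paper:

```python
import numpy as np

def classify_cells(cell_centers, boundary_pts, r_deform):
    """Mark cells within r_deform of the deformed boundary as deformable; the rest stay frozen."""
    d = np.linalg.norm(cell_centers[:, None, :] - boundary_pts[None, :, :], axis=-1)
    return d.min(axis=1) < r_deform  # True: deformable, False: frozen
```

Faces shared by a deformable cell and a frozen cell then become the fixed faces, whose points enter the system as zero-displacement control points.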
The quality of the deformed mesh is compared with that of the initial mesh in Table 3. The deformed mesh quality using the proposed method is as good as that by the IDW method. Moreover, the proposed method is much faster than the others owing to the limited deformation region.

Mesh Deformation for Hull Form Variation
The proposed mesh deformation method was applied to the mesh for a ship resistance calculation to examine its applicability to CFD-based optimization. The ship used in the calculations is the JBC model. The scale ratio and the draft are 1:40 and 0.4125 m (16.5 m in full scale), respectively, and the speed is 1.179 m/s (14.5 knots in full scale). The initial grid used for the JBC resistance calculation is illustrated in Figure 7. The number of cells is 2,402,361, the Y+ of the first layer is approximately 50, and the number of boundary layers is 4 with an expansion ratio of approximately 1.3. The running attitude of the ship is fixed in the even keel condition. The simulation was conducted using interFoam, a standard solver of OpenFOAM; kOmegaSST and nutUSpaldingWallFunction were used as the turbulence model and wall function, respectively.

Three points on the forward perpendicular (F.P.) line were moved by 5 mm (0.2 m in full scale) to make an alternative hull form. The hull surface is split by the yellow dashed line in Figure 8: the hull surface in front of the line was set as a deformable patch, while that behind the line was set as a fixed patch.

The deformed hull is depicted in Figure 9. The red lines indicate the JBC station lines, whereas the blue lines denote the station lines of the deformed hull. Figure 10 displays a slice of the deformed mesh at the F.P. Because of the small deformation, the variation in mesh quality is small enough to be ignored.
The time to deform the mesh with 2 million cells is approximately 118-120 s with one core of an Intel Xeon CPU E5-2630 v3 at 2.4 GHz; the turnaround time is reasonably small. The mesh deformations and CFD simulations are conducted by a shell script.

The resistance histories of the JBC and the deformed hulls are compared in Figure 11. The calculations of the deformed hulls converged much faster than the JBC calculation because the result of the JBC calculation was used as the initial condition for the deformed hull resistance calculations. The calculation of the JBC resistance took approximately 60 s of flow time, whereas the calculations of the deformed hulls took 20 s of flow time. The variations in the resistances are small because the hull form variation is small. The resistance coefficients are compared in Table 4.

Table 4. Resistance coefficients.
                          JBC          Case 1       Case 2       Case 3
Resistance coefficient    4.186×10−3   4.187×10−3   4.184×10−3   4.186×10−3

The pressure distributions and wave height contours are compared in Figures 12 and 13, respectively. It was found that the result of the initial hull form can be used as the initial condition for the deformed hull resistance calculation.
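The deform-then-restart loop described above can be scripted in a few lines. A minimal Python sketch, in which the case directory names and deform_mesh.py are hypothetical placeholders (interFoam is the actual OpenFOAM solver named above):

```python
import shutil
import subprocess

CASES = ["case1", "case2", "case3"]  # hypothetical deformed-hull variants

for case in CASES:
    # start from the converged base case so its fields serve as the initial condition
    shutil.copytree("JBC_base", case, dirs_exist_ok=True)
    # hypothetical driver applying the improved-RBF mesh deformation to this case
    subprocess.run(["python", "deform_mesh.py", case], check=True)
    # interFoam then restarts from the copied fields, cutting the convergence time
    subprocess.run(["interFoam", "-case", case], check=True)
```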
Conclusions
In this study, two methods for mesh deformation, namely, the RBF and IDW methods, were compared, and an improved RBF method was proposed for largely deformed meshes. The RBF method was much faster than the IDW method, but the quality of the deformed mesh using the IDW method was better than that by the RBF method. The quality of the deformed mesh by the RBF method was improved by adding the centroids of the boundary cells to the control points; the displacements of the centroids were calculated using the IDW method. The deformable region was limited for the problem involving the free surface, which also reduced the calculation time. The improved RBF method was applied to the mesh for the JBC resistance calculation to validate its applicability. The resistance was calculated by varying the bow shape with three control points. It took approximately 120 s for the mesh to deform, which is short enough for practical problems. The calculation result of the initial hull form was used as the initial condition for the deformed hull form, which reduced the calculation time to approximately one-third of that of the initial hull form. Thus, the improved RBF method proposed in this study is effective and efficient for hull form variation. In the future, CFD-based hull form optimization will be conducted using the proposed mesh deformation method together with an optimization algorithm such as sequential quadratic programming or an adjoint variable method.

Funding: This study was supported by a research fund from Chosun University (K207177004).

Conflicts of Interest: The authors declare no conflicts of interest.
A Ternary Magnetic Recyclable ZnO/Fe3O4/g-C3N4 Composite Photocatalyst for Efficient Photodegradation of Monoazo Dye

To develop a highly efficient visible-light-induced and conveniently recyclable photocatalyst, in this study a ternary magnetic ZnO/Fe3O4/g-C3N4 composite photocatalyst was synthesized for the photodegradation of monoazo dyes. The structure and optical performance of the composite photocatalyst were characterized using X-ray diffraction (XRD), transmission electron microscopy (TEM), energy dispersive spectroscopy (EDS), photoluminescence (PL) spectra, ultraviolet-visible diffuse reflection, and photo-electrochemistry. The photocatalytic activities of the prepared ZnO/Fe3O4/g-C3N4 nanocomposites were notably improved and were significantly higher than those of pure g-C3N4 and ZnO. Given the presence of the heterojunction between the interfaces of g-C3N4 and ZnO, the higher response to visible light and the separation efficiency of the photo-induced electrons and holes enhanced the photocatalytic activities of the ZnO/Fe3O4/g-C3N4 nanocomposites. The stability experiment revealed that ZnO/Fe3O4/g-C3N4-50% retains a relatively high photocatalytic activity after 5 recycles. The degradation efficiencies of MO, AYR, and OG over ZnO/Fe3O4/g-C3N4-50% were 97.87%, 98.05%, and 83.35%, respectively, which was attributed to the number of dye molecules adsorbed on the photocatalyst and the structure of the azo dye molecule. Azo dyes could be effectively and rapidly photodegraded by the obtained photocatalyst. Therefore, this environment-friendly photocatalyst could be widely applied to the treatment of dye-contaminated wastewater.

Introduction
As a major global environmental issue, a significant amount of pollutants is discharged into lakes, rivers, and ground water due to rapid industrialization, which leads to water pollution. It is estimated that approximately 10-15% of organic dyes are discharged, and these have carcinogenic and mutagenic effects on humans [1]. Therefore, methods that treat industrial wastewater, particularly organic dyes, are currently under investigation. Among various methods, the use of photocatalytic technology with photocatalysts to degrade environmental pollutants is considered a promising approach [2,3]. Furthermore, ZnO is one of the most widely used photocatalysts, given its high photosensitivity, low cost, and environmentally friendly nature [4,5]. However, pure ZnO is subject to three major drawbacks. First, it can only absorb the ultraviolet (UV) part of solar energy, with wavelengths less than 368 nm, due to its wide band gap (3.37 eV), which limits its practical applications when sunlight is the energy source [6]. Second, the fast recombination of its photogenerated electron-hole pairs leads to a low photocatalytic activity [7]. Third, the re-collection of ultrafine ZnO nanoparticles from wastewater using filtration and centrifugation is difficult to achieve, which limits its large-scale practical applications in industry. Hence, in recent years, there have been several attempts to develop multi-functional photocatalysts based on ZnO nanomaterials, with high recyclability and excellent photocatalytic performance in the UV and visible irradiation ranges. Different strategies have been implemented to overcome the first and second drawbacks of ZnO, such as doping, surface modification with metal nanoparticles, and the development of heterostructures [8-10].
Among these, coupling ZnO with a narrow band gap semiconductor with a higher conduction band (CB) can effectively extend the range of light absorption and accelerate the separation of the electron-hole pairs. Graphite-like carbon nitride (g-C3N4), which has a band gap of 2.70 eV, has been explored as a promising metal-free material for the conversion of solar energy into electricity or chemical energy [11,12]. It has attracted significant attention due to its excellent photocatalytic performance, chemical and thermal stability, and favorable electronic structure, given the strong covalent bonds between the carbon and nitrogen atoms. However, a high recombination rate of photo-induced electron-hole pairs limits its photocatalytic performance [13]. Wide-bandgap semiconductors can be combined with g-C3N4 to achieve improved charge separation [7,14,15]. Based on the abovementioned methods, the combination of ZnO (a wide-bandgap semiconductor) and g-C3N4 (a narrow-bandgap semiconductor) as a composite photocatalyst prevents the recombination of photogenerated electron-hole pairs and extends the light-absorption range of ZnO to the visible light spectrum. However, in most of the reported works, ZnO/g-C3N4 photocatalysts have low catalytic performance and are difficult to recover and reuse. Fortunately, Fe3O4 has been widely used in the preparation of magnetic photocatalysts, due to its good magnetic properties, low cost, good stability, and environment-friendly nature [16]. Hence, the preparation of novel visible-light-driven magnetic ZnO/Fe3O4/g-C3N4 photocatalysts is significant, and it is important to further improve the photocatalytic efficiency. In addition, how the structure of monoazo dyes affects the photodegradation process of the photocatalyst has not been reported yet, so it is very interesting to explore this and provide a reliable theoretical basis for the application of photocatalysts in the efficient and fast treatment of dye wastewaters.

In this study, a novel and efficient photocatalyst of ZnO/Fe3O4/g-C3N4 nanocomposites was successfully prepared. The crystal structure, chemical states, and optical properties of the photocatalyst were characterized using X-ray diffraction (XRD), transmission electron microscopy (TEM), energy dispersive spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), photoluminescence (PL), vibrating sample magnetometry (VSM), and UV-vis diffuse reflectance spectroscopy (DRS). The photocatalytic performance of the photocatalyst was investigated through its degradation of methyl orange (MO) under visible light irradiation. The degradation of different monoazo dyes (MO, alizarin yellow R (AYR), and orange G (OG)) over ZnO/Fe3O4/g-C3N4 was also investigated. Moreover, to further evaluate the possible mechanism of the photocatalytic degradation of azo dyes, a free radical capture experiment and the PL technique were employed.

Preparation of Fe3O4
…100-mL flask. Thereafter, the solution was stirred at 70°C for 60 min in a nitrogen atmosphere, after which 5 mL of aqueous ammonia (25%) was added to the solution under stirring. The obtained dark brown suspension was stirred for an additional 60 min and washed twice with water and ethanol, successively. The solid was then separated from the liquid phase using a magnetic field. The prepared dark brown sample was dried in a vacuum oven at 40°C for 12 h.

Preparation of ZnO/Fe3O4
The photocatalyst was prepared based on previous studies [17].
In a representative synthesis, solution A was prepared by dissolving zinc acetate (2.196 g) in EtOH (60 mL) and stirring at 60°C in a water bath for 30 min. Solution B was obtained by adding 5.040 g of oxalic acid to 80 mL of EtOH under stirring at 50°C for 30 min. Solution B was then added dropwise to the warm solution A and stirred continuously at room temperature for 1 h to obtain the sol. Thereafter, to obtain a homogeneous gel, the sol was aged in a sealed environment for a period of time. The product was dried for 24 h in a vacuum oven at 80°C. Finally, ZnO was obtained by thermal treatment at 400°C for 2 h. To prepare ZnO/Fe3O4, 0.12 g of Fe3O4 was dispersed in solution A.

Preparation of ZnO/Fe3O4/g-C3N4
For the preparation of ZnO/Fe3O4/g-C3N4, a homogeneous mixture was obtained by vigorously grinding 1 g of ZnO/Fe3O4 and melamine at a mass ratio of 1:1 and then dispersing the mixture in 20 ml of deionized water. The suspension was ultrasonicated for 1 h. Thereafter, the precursors were dried at 70°C overnight to remove the solvent, and the obtained solid was annealed at 550°C for 2 h in air. The magnetic ZnO/Fe3O4/g-C3N4-50% composite was thereby obtained. The amount of g-C3N4 was adjusted by controlling the amount of melamine (0.25 g, 1 g, or 2.3 g) during the preparation of the ZnO/Fe3O4/g-C3N4 nanocomposites, and the relevant products were denoted as ZnO/Fe3O4/g-C3N4-20%, ZnO/Fe3O4/g-C3N4-50%, and ZnO/Fe3O4/g-C3N4-70%, respectively.

Characterization Methods
The XRD spectra of the samples were recorded using a Rigaku Geigerflex D/Max B diffractometer with Cu-Kα radiation. TEM was conducted using a Tecnai G2 F20 (USA) microscope, and EDS spectra were acquired using an energy-dispersive X-ray spectrometer attached to the TEM instrument. A surface area analyzer (Micromeritics ASAP-2020, USA) was used to characterize the pore volume, pore size distribution, and specific surface area of the samples by N2 adsorption at 77 K. To determine the optical band gap of the photocatalysts, UV-visible absorption spectra were obtained using a UV-visible spectrophotometer with BaSO4 as the reflectance standard (Hitachi UV-4100, Japan). The surface composition and chemical states of the samples were investigated using XPS (250XI ESCA) equipped with a Mg Kα X-ray source (1253.6 eV). The PL spectra of the samples were recorded using a fluorescence spectrophotometer (FLsp920, England) at room temperature, with a Xe lamp as the excitation light source. Photoelectrochemical measurements were conducted in three-electrode quartz cells with a 0.1-M Na2SO4 electrolyte solution; a platinum wire was used as the counter electrode and an Ag/AgCl electrode as the reference electrode. The working electrode was prepared as follows: 10 mg of the as-prepared photocatalyst was suspended in 1 mL of deionized water, dip-coated onto an indium-tin oxide (ITO) glass electrode with dimensions of 10 mm × 20 mm, and dried under an infrared lamp.

Photocatalytic Activity for Azo Dyes
Photocatalytic experiments were conducted using a 500-W Xe lamp with a 420-nm cut-off filter at 25°C to study the visible light degradation of the MO, AYR, and OG solutions. In a typical test, 10 mg of catalyst was added to 50 mL of azo dye solution (30 mg/L). The mixture was kept in the dark for 30 min to promote the adsorption of the azo dye on the surface of the photocatalyst and was then irradiated under the Xe lamp to degrade the azo dye. After the degradation experiment, each sample was filtered with a 0.45-μm filter membrane to remove the photocatalyst particles, and the concentrations of MO, AYR, and OG in the supernatant liquid were measured using a UV-5100N spectrophotometer at λmax = 466 nm, 373 nm, and 475 nm, respectively. The degradation efficiency (η) of the azo dye was calculated as follows:

$$\eta = \frac{C_0 - C_t}{C_0} \times 100\%$$

where C0 and Ct are the concentrations of the azo dye at the initial and specified irradiation times, respectively.
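Given a concentration-time series, the efficiency above and a pseudo-first-order rate constant (used later in the Results section) can be computed directly. A minimal Python sketch with hypothetical readings:

```python
import numpy as np

t  = np.array([0, 30, 60, 90, 120])          # irradiation time, min (hypothetical data)
Ct = np.array([30.0, 15.2, 7.9, 4.1, 2.2])   # dye concentration, mg/L (hypothetical data)

eta = (Ct[0] - Ct) / Ct[0] * 100             # degradation efficiency, %
k, _ = np.polyfit(t, np.log(Ct[0] / Ct), 1)  # pseudo-first-order fit: ln(C0/Ct) = k t
print(f"eta(final) = {eta[-1]:.1f} %, k = {k:.4f} 1/min")
```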
Results and Discussion

XRD
The XRD patterns of the prepared samples are consistent with the literature [17]. The strongest peak of the g-C3N4 sample corresponds to the (002) plane of its layered structure at 2θ = 27.3°. As reported, the g-C3N4 structure also has a weak diffraction peak at 2θ = 13.2°, which is attributed to the (100) crystal plane of g-C3N4. The width of the diffraction peak decreased, which indicates the influence of geometric constraints on the nanopore wall [7]. The XRD patterns of the ZnO/Fe3O4/g-C3N4-x samples included all the typical peaks of g-C3N4, ZnO, and Fe3O4. The diffraction peaks located at 30.4°, 35.7°, and 43.4° correspond to the (220), (311), and (400) planes of Fe3O4 [18,19]. Moreover, the intensity of the characteristic peak of g-C3N4 gradually strengthened with an increase in the amount of g-C3N4, whereas the peak intensities of ZnO and Fe3O4 gradually decreased. No g-C3N4 characteristic peak was observed in the ZnO/Fe3O4/g-C3N4-20% sample, which can be attributed to the low content of g-C3N4 in the composite. No other peaks were observed in any of the samples, confirming their high purity.

TEM and EDS
The structure of the samples was evaluated using TEM, as shown in Fig. 2. The TEM image of pure ZnO displays the typical hexagonal wurtzite structure (Fig. 2a), which is consistent with the XRD results. The TEM image of g-C3N4 (Fig. 2b) displays its layered, platelet-like morphology with smooth, paper-fold thin sheets, similar to the morphology of graphene nanosheets. As seen from the TEM image of ZnO/Fe3O4/g-C3N4-50% (Fig. 2c), a large amount of photocatalyst accumulated on the layered structure of g-C3N4. The EDS results for ZnO/Fe3O4/g-C3N4-50% are presented in Fig. 3. The sample shows peaks of the Zn, C, N, Fe, and O elements, which also proves that the ZnO/Fe3O4/g-C3N4 composite was prepared successfully. However, the peak of Fe is relatively low, suggesting that the content of Fe3O4 in the ZnO/Fe3O4/g-C3N4 composites is low. Given that Cu was used as a carrier in the TEM analysis, characteristic peaks of Cu were detected in the EDS analysis [20].

XPS
To investigate the surface composition and chemical states of the prepared composite catalysts, XPS was conducted on ZnO/Fe3O4/g-C3N4-50%. The survey spectrum reveals the presence of C, N, O, Zn, and Fe (Fig. 4a). Figure 4b reveals that the C 1s spectrum has three characteristic peaks. The peak located at 284.6 eV is attributed to hydrocarbons in the XPS instrument and to the sp2-hybridized carbon atoms in the aromatic ring bonded to N (N-C=N). A second peak, with a binding energy of 286.5 eV, is attributed to sp3-hybridized carbon (C-(N)3).
The peak at a binding energy of 287.8 eV is attributed to C-N-C in the graphite phase [21]. The N 1s XPS spectrum is presented in Fig. 4c. A major peak is located at 397.9 eV, corresponding to aromatic N bonded to two C atoms (C=N-C). A weaker characteristic peak is located at 399.2 eV, which is mainly attributed to the tertiary nitrogen (N-(C)3) that links the basic structure (C6N7), or to amino groups related to structural defects and incomplete condensation ((C)2-N-H) [22]. The XPS spectrum of O 1s is presented in Fig. 4d; the peak at 530.1 eV corresponds to the O2− ion in the Zn-O bond of the ZnO hexagonal wurtzite structure [23], and the peak at 531.8 eV corresponds to oxygen vacancies in ZnO. In the Zn 2p XPS spectrum (Fig. 4e), there are two characteristic peaks at binding energies of 1021.4 eV and 1044.3 eV; the separation between the two peaks is 22.9 eV, which matches the standard reference value for zinc oxide and indicates that the zinc ion in the composite was in the +2 state [23]. In the XPS spectrum of Fe 2p (Fig. 4f), the two peaks located at 710.6 eV and 724.4 eV correspond to the 2p3/2 and 2p1/2 orbitals, respectively [24]. These results reveal that g-C3N4 is composited on the ZnO, which may promote the absorption of visible light and improve the transfer and separation of charge carriers, thus enhancing the photocatalytic activity [25].

UV-vis DRS
Diffuse reflectance spectroscopy was used to investigate the light absorption behavior of the photocatalysts; the results are presented in Fig. 5. An absorption edge with a significant red shift may improve photocatalytic performance in the visible region. In the ultraviolet region, pure ZnO demonstrated strong absorption up to a wavelength of 388 nm, which corresponds to a band gap of 3.20 eV. Different from the ZnO absorption behavior, g-C3N4 yields an absorption edge at 460 nm, and the corresponding band gap energy is 2.70 eV, which indicates a higher response for photocatalytic activity under visible light [26]. Compared with pure ZnO or g-C3N4, the absorption edge of the ZnO/Fe3O4/g-C3N4 composite material shifted significantly to longer wavelengths, i.e., to the lower energy region. These results may be due to the synergistic relationship between g-C3N4 and ZnO in the composite samples, which is consistent with the report by Le et al. [7]. The red shift of the absorption edge of ZnO/Fe3O4/g-C3N4 increased with an increase in the g-C3N4 loading up to 50%; however, the absorption edge decreased when the g-C3N4 loading was 70%, possibly because g-C3N4 loading above the optimal level shields the light absorption by ZnO. Therefore, among all the prepared samples, the ZnO/Fe3O4/g-C3N4-50% composite exhibited the most extensive and strongest absorption of visible light. This is similar to the results obtained by Jo et al., who reported that ZnO-50%/g-C3N4 exhibited the strongest absorption of visible light [1]. The strongest absorption of visible light increased the generation of electron-hole pairs under visible light irradiation, resulting in a higher photocatalytic activity.
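The band gaps quoted above follow from the absorption edges through the Planck relation Eg (eV) ≈ 1240/λ (nm); a one-line check in Python:

```python
# hc ≈ 1240 eV·nm, so E_g [eV] ≈ 1240 / λ_edge [nm]
for name, lam in [("ZnO", 388.0), ("g-C3N4", 460.0)]:
    print(f"{name}: E_g ≈ {1240.0 / lam:.2f} eV")  # 3.20 eV and 2.70 eV, matching the DRS values
```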
PL
The effect of the synergistic relationship between ZnO and g-C3N4 on the photocatalysis was further evaluated using PL. The PL spectra of ZnO, g-C3N4, and ZnO/Fe3O4/g-C3N4-50% are presented in Fig. 6. The excitation wavelength was 300 nm, the PL of the samples was measured at room temperature, and the emission spectra in the range of 300-800 nm were recorded. It is common knowledge that the recombination of electron-hole pairs inside semiconductors releases energy in the form of PL; in general, a lower PL intensity indicates a lower recombination rate of carriers, which leads to efficient photocatalytic activity. In the PL spectrum, g-C3N4 exhibited a strong emission peak at approximately 460 nm, which is in accordance with the UV-vis results (Fig. 5) and the literature [7]. The emission peak of pure ZnO was lower than that of g-C3N4, at approximately 410 nm [21]. Compared with the PL peak of pure ZnO, the emission peak of the ZnO/Fe3O4/g-C3N4-50% composite photocatalyst was red-shifted and its peak intensity was significantly reduced; indeed, the PL peak intensity of the ZnO/Fe3O4/g-C3N4-50% composite photocatalyst was the lowest. Based on these results, it was concluded that the electron-hole pairs photogenerated by the ZnO/Fe3O4/g-C3N4-50% nanocomposites under visible light irradiation can be effectively transferred at the interface of the heterostructure. Thus, the electron-hole recombination rate decreased, which resulted in the highest photocatalytic activity under visible light irradiation.

Electrochemical Analysis
Photocatalytic redox reactions are closely related to the separation, migration, and capture of photogenerated electrons in semiconductor photocatalysts. To qualitatively evaluate the photo-induced charge separation efficiency during the photocatalytic reaction, the photocurrent responses of the ZnO, g-C3N4, and ZnO/Fe3O4/g-C3N4-50% nanocomposites were investigated under visible light irradiation. Figure 7a presents the photocurrent-time (I-t) curves of the three samples under intermittent illumination. From the figure, it can be seen that once the irradiation was turned off, the photocurrent value decreased abruptly, and the photocurrent returned to a constant value when the light was turned on again. This phenomenon is reproducible, which indicates that most of the photogenerated electrons were transferred to the surface of the sample and a photocurrent was generated under visible light irradiation. Pure ZnO demonstrates the weakest photocurrent response under visible light irradiation due to its wide band gap, whereas the ZnO/Fe3O4/g-C3N4-50% composite samples exhibited the highest photocurrent intensities. The results suggest that the relationship between ZnO and g-C3N4 is beneficial for improving the separation efficiency and transfer of photogenerated electrons and holes [27]. This is consistent with the PL results. The electrochemical impedance spectroscopy (EIS) results of the samples are presented in Fig. 7b. The arcs in the EIS spectra reflect the charge transfer resistance at the electrode/electrolyte interface; a smaller arc represents a lower resistance and thus a higher efficiency of charge transfer [27]. The arc radius of the ZnO/Fe3O4/g-C3N4-50% composite photocatalyst is smaller than those of ZnO and g-C3N4, which indicates that the charge transfer resistance of the ZnO/Fe3O4/g-C3N4-50% interface was the smallest.
Thus, the photo-induced electron-hole pairs exhibited the highest separation and transfer efficiency.

Magnetic Properties
The hysteresis loops of ZnO, Fe3O4, and ZnO/Fe3O4/g-C3N4-50% are presented in Fig. 8. The results reveal that pure ZnO is non-magnetic, pure Fe3O4 exhibited the strongest saturation magnetization, and the saturation magnetization of ZnO/Fe3O4/g-C3N4-50% was lower than that of pure Fe3O4, which is attributed to the presence of the non-magnetic substances ZnO and g-C3N4. No hysteresis, remanence, or coercivity was observed in the hysteresis loop of ZnO/Fe3O4/g-C3N4-50%; therefore, the sample was superparamagnetic. Moreover, the saturation magnetization of the composite photocatalyst was sufficient to separate it from solution using an external magnetic field, as shown in Fig. 8 (inset), which promoted photocatalyst recovery and increased its recyclability.

Photocatalytic Properties
The degradation of MO over the different photocatalysts is presented in Fig. 9a. Pure ZnO only slightly degraded methyl orange under visible light irradiation, given that its wide band gap allows it to respond only to ultraviolet light. The degradation efficiency of pure g-C3N4 for methyl orange was also not very high: despite its response to visible light, its high photoelectron-hole pair recombination rate resulted in low photocatalytic activity. The photodegradation efficiency of MO on the ZnO/Fe3O4/g-C3N4-50% composite photocatalyst was higher than that of the other catalysts, for the following three reasons. First, the UV-vis spectra indicated that the ZnO/Fe3O4/g-C3N4-50% composite photocatalyst exhibited the strongest visible light response intensity and a large visible light absorption range. Second, the PL and electrochemical results revealed that the electron-hole pair recombination rate of ZnO/Fe3O4/g-C3N4-50% was the lowest. Third, the electrochemical results indicated that the photoelectron transfer rate of the ZnO/Fe3O4/g-C3N4-50% photocatalyst was the fastest compared with the single photocatalysts. In addition, the kinetics of the degradation of MO on the photocatalysts were evaluated (Fig. 9b). The results revealed that the degradation kinetics of MO on the different photocatalysts followed the first-order kinetic model, and the degradation rate constants are presented in Table 2. The apparent rate constant of ZnO/Fe3O4/g-C3N4-50% was the highest (0.02430 min−1), which was higher than the degradation rates of the g-C3N4/Fe3O4/TiO2 and TiO2/biochar composite catalysts [28,29]. Moreover, ZnO/Fe3O4/g-C3N4-50% exhibited a higher photocatalytic rate relative to g-C3N4/Fe3O4/AgI for the degradation of MO (0.0016 min−1) [10].

Stability of ZnO/Fe3O4/g-C3N4-50% Composite Photocatalyst
The stability of photocatalysts is a critical factor in relation to large-scale application. To evaluate the stability of the ZnO/Fe3O4/g-C3N4-50% composite photocatalyst, recycling experiments were conducted for the degradation of MO under visible light irradiation. The photocatalyst was collected by magnetic decantation and then washed using distilled water and ethanol. Thereafter, it was dried in an oven at 80°C. The sample was reused for subsequent degradation, and the results are presented in Fig. 10a.
Stability of ZnO/Fe 3 O 4 /g-C 3 N 4 -50% Composite Photocatalyst
The stability of photocatalysts is a critical factor in relation to large-scale technology application. To evaluate the stability of the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% composite photocatalyst, recycling experiments were conducted on the photocatalyst for the degradation of MO under visible light irradiation. The photocatalyst was collected by magnetic decantation and then washed using distilled water and ethanol. Thereafter, it was dried in an oven at 80 °C. The sample was reused for subsequent degradation, and the results are presented in Fig. 10a. The composite photocatalyst maintained a very high photocatalytic activity, and the removal rate of MO on the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% composite photocatalyst was 95.3% after 5 cycles. In addition, there was only a slight decrease in the amount of photocatalyst during the cycle processes. Therefore, the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% composite photocatalyst exhibited high stability under visible light irradiation. To further evaluate the stability of the ZnO/Fe 3 O 4 /g-C 3 N 4 -50%, samples were collected after 5 cycles for XRD testing and compared with the XRD pattern of the sample before cycling. The results are presented in Fig. 10b. No significant changes were observed in the structure of the photocatalyst before and after use, which indicates that the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% photocatalyst was highly stable.
Degradation of Monoazo Dyes on ZnO/Fe 3 O 4 /g-C 3 N 4 -50%
For the evaluation of the photocatalytic degradation behavior of different monoazo dyes, the degradation of MO, AYR, and OG over ZnO/Fe 3 O 4 /g-C 3 N 4 -50% is presented in Fig. 11. The plots of the absorbance with respect to the wavelength for the MO, AYR, and OG degradations over ZnO/Fe 3 O 4 /g-C 3 N 4 -50% at various irradiation times are presented in Figs. 11a-c. The maximum absorption wavelengths of MO and AYR before and after degradation were 466 nm and 372 nm, respectively. As shown in Fig. 11d, the degradation efficiency of OG over the composite photocatalyst was the lowest. There are two possible reasons for this phenomenon. First, as can be seen in Fig. 11d, the adsorption efficiency of OG on the photocatalyst was the lowest. The lower adsorption efficiency of OG can be explained by the steric limit of a large aromatic molecule, which reduced the number of OG molecules adsorbed on the photocatalyst. The lower adsorption efficiency of the azo dye therefore resulted in a smaller number of molecules concentrated on the active sites of the photocatalyst, which decreased the degradation efficiency of the azo dye [30]. Second, AYR has a high degradation efficiency, which is related to the presence of a carboxyl group that can react with h + via the photo-Kolbe reaction. However, the lower degradation efficiencies of OG and MO could be due to the presence of a withdrawing SO 3 − group, and an increasing number of sulfonic acid groups could inhibit degradation of the dye [31]. The properties of the three dyes are listed in Table 1. The molecular weight and the number of sulfonic acid groups were in the following order: AYR < MO < OG. Therefore, the degradation efficiency of OG over ZnO/Fe 3 O 4 /g-C 3 N 4 -50% was the lowest. It is also necessary to investigate the relationship between the molecular weight of the azo dye and its degradation efficiency. Figure 12 reveals that the molecular weight of the azo dye had a good negative correlation with the degradation efficiency (R 2 = 0.9776); a higher molecular weight of the azo dye resulted in a lower degradation efficiency. These results are consistent with those presented above.
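The molecular-weight correlation in Fig. 12 can be reproduced in outline with a simple linear fit. In the sketch below the degradation efficiencies follow the values reported in this work, while the molecular weights of the three dyes are nominal handbook values that may differ from those used by the authors, so the resulting R 2 is only illustrative.

```python
import numpy as np

# Degradation efficiencies (%) over the composite photocatalyst as reported
# above; the molecular weights (g/mol) of AYR, MO and OG are assumed nominal
# values, not taken from Table 1 of this work.
mw = np.array([287.2, 327.3, 452.4])        # AYR, MO, OG (assumed)
eff = np.array([98.05, 97.87, 83.35])       # AYR, MO, OG (reported)

# Linear regression of efficiency on molecular weight and the coefficient of
# determination R^2, quantifying the negative correlation.
slope, intercept = np.polyfit(mw, eff, 1)
pred = slope * mw + intercept
r2 = 1.0 - np.sum((eff - pred) ** 2) / np.sum((eff - eff.mean()) ** 2)
print(f"slope = {slope:.4f} % per g/mol, R^2 = {r2:.4f}")
```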
Mechanism for Photocatalytic Degradation
To further investigate the mechanism of the photocatalyst for the degradation of MO under visible light irradiation, radical, electron, and hole scavenging experiments were conducted to detect the main active species in the photocatalytic process. The ·OH, ·O 2 − , h + , and e − species were eliminated using tert-butanol (t-BuOH), p-benzoquinone (p-BQ), ammonium oxalate (AO), and K 2 S 2 O 8 , respectively. The degradation efficiencies of MO on the photocatalyst in the presence of the scavengers are presented in Fig. 13. The removal rate of MO was significantly reduced after the addition of t-BuOH and p-BQ. Conversely, the removal efficiency of MO was not significantly reduced in the presence of AO and K 2 S 2 O 8 . Therefore, the active species that play a critical role during the photocatalytic degradation of MO over the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% photocatalyst are ·OH and ·O 2 − . Based on the relevant literature and experimental results (including the physicochemical properties, photocatalytic performance, and detected active components), a possible photocatalytic mechanism of the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% nanocomposites under visible light irradiation is proposed. It is common knowledge that ZnO and g-C 3 N 4 are typical n-type semiconductors. Therefore, an n-n heterojunction is formed at the interface between the g-C 3 N 4 and ZnO nanoparticles. The ZnO/Fe 3 O 4 /g-C 3 N 4 -50% can be excited to generate electrons and holes under visible light irradiation. The excited electrons are then transferred from the CB of the g-C 3 N 4 to the CB of the ZnO. The improvement of the photocatalytic performance of the composite photocatalyst is mainly due to the effective separation of photogenerated electrons and holes at the heterojunction interface [32]. Given that the CB edge potential of g-C 3 N 4 is more negative than that of ZnO, the excited electrons in g-C 3 N 4 are transferred to the CB of ZnO, and the holes are retained in the valence band (VB) of g-C 3 N 4 [33,34]. In turn, the holes of ZnO are injected into the VB of g-C 3 N 4 . Therefore, an internal electrostatic potential is formed in the space charge region, which promotes the separation of the photogenerated carriers. The charge transferred to the surface of the compound semiconductor reacts with water and dissolved oxygen to produce ·OH and ·O 2 − , or it reacts directly with MO. From Fig. 13, it can be seen that ·OH and ·O 2 − play a vital role in the degradation of MO on the composite photocatalysts. Therefore, the possible photocatalytic steps can be summarized as follows: under visible light irradiation, the composite is excited to generate e − and h + ; the photogenerated electrons reduce dissolved O 2 to ·O 2 − , while the holes oxidize water to ·OH; these reactive species, together with the holes, then attack and degrade the MO molecules. Based on the above discussion, it was concluded that the photocatalytic activity of the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% nanocomposite semiconductor was significantly improved. This was because of the following two reasons: (1) the heterostructure between g-C 3 N 4 and ZnO improved the light absorption properties, and (2) the synergistic effect of the internal electric field and the matched band structure of g-C 3 N 4 and ZnO increased the separation rate of the photogenerated carriers (Fig. 14).
Conclusions
In this study, ternary magnetic ZnO/Fe 3 O 4 /g-C 3 N 4 nanocomposites were successfully fabricated as novel recyclable visible-light-driven photocatalysts. Among all the prepared photocatalysts, the ZnO/Fe 3 O 4 /g-C 3 N 4 -50% composite photocatalyst exhibited the most efficient photocatalytic activity, due to the improved light absorption properties resulting from the heterojunction structure between g-C 3 N 4 and ZnO, in addition to the synergistic effect of their internal electric field and matched energy band structure. Moreover, the separation rate of the photogenerated carriers was high. The degradation efficiencies of MO, AYR, and OG over ZnO/Fe 3 O 4 /g-C 3 N 4 -50% were 97.87%, 98.05%, and 83.35%, respectively.
This was due to the number of dye molecules adsorbed on the photocatalyst; in addition, the structure of the azo dye molecule had an influence on the degradation. The kinetics of the degradation of MO on the composite photocatalyst were in accordance with first-order kinetics. Furthermore, the addition of Fe 3 O 4 significantly improved the stability and recyclability of the photocatalyst. Hydroxyl radicals and superoxide ions were the main reactive species, which indicates that the azo dyes share the same degradation mechanism.
Metabolic Profiling Reveals That the Olfactory Cues in the Duck Uropygial Gland Potentially Act as Sex Pheromones Simple Summary For birds, the uropygial gland is a special organ. We believe that its secretion can be used as a pheromone between the sexes to play a role in mate selection and mating. Therefore, we studied the chemical composition of duck uropygial gland secretions and the differences between males and females. After a series of screenings, 24 different volatile metabolites were obtained in our experiment. On this basis, five extremely significant volatile metabolites were screened out—significantly more males than females. The results show that these volatile substances are potential sex pheromone substances, which may be the critical olfactory clues for birds to choose mates. Our results lay the foundation for further research on whether uropygial gland secretion affects duck reproduction and production. Abstract The exchange of information between animals is crucial for maintaining social relations, individual survival, and reproduction, etc. The uropygial gland is a particular secretion gland found in birds. We speculated that uropygial gland secretions might act as a chemical signal responsible for sexual communication. We employed non-targeted metabolomic technology through liquid chromatography and mass spectrometry (LC-MS) to identifying duck uropygial gland secretions. We observed 11,311 and 14,321 chemical substances in the uropygial gland secretion for positive and negative ion modes, respectively. Based on their relative contents, principal component analysis (PCA) showed that gender significantly affects the metabolite composition of the duck uropygial gland. A total of 3831 and 4510 differential metabolites were further identified between the two sexes at the positive and negative ion modes, respectively. Of them, 139 differential metabolites were finally annotated. Among the 80 differential metabolites that reached an extremely significant difference (p < 0.01), we identified 24 volatile substances. Moreover, we further demonstrated that five kinds of volatile substances are highly repeatable in all testing ducks, including picolinic acid, 3-Hydroxypicolinic acid, indoleacetaldehyde, 3-hydroxymethylglutaric acid, and 3-methyl-2-oxovaleric acid. All these substances are significantly higher in males than in females, and their functions are involved in the reproduction processes of birds. Our data implied that these volatile substances act as sex pheromones and may be crucial olfactory clues for mate selection between birds. Our findings laid the foundation for future research on whether uropygial gland secretion can affect ducks’ reproduction and production. communication can be classified into olfactory (chemical) communication [1], auditory communication [2], visual communication [3], tactile communication, and electrical communication, etc. For visual communication, the visual acuity of birds is excellent and extends to ultraviolet. Birds, such as peacocks, often use bright feather colors to perform elaborate sexual displays [4]. For olfactory communication, studies have shown that in quails, the deprivation of olfactory inputs decreases the neuronal activation induced by sexual interactions with a female. Different bird groups, such as petrels, auklets, and ducks, have been shown to produce special odors, which may play an important role as pheromones in within-species social interactions [4]. 
The chemical compounds emitted by animals, and which are used in mate attraction and mate choice, are called pheromones [5]. Sex pheromones are substances released by animals of both sexes to identify each other. Through this substance, females and males can approach each other, thereby, leading to mating. Generally, females secrete sporadic sex pheromones to induce sexual excitement in active males, but there are also species in which sex pheromones are secreted by males. I nsects are also sensitive to specific sex pheromones. Based on this, the sex attractant has been developed as a high-tech bionic product that simulates the insect sex pheromone in nature and is released to the field through a releaser to trap and kill heterosexual pests [6]. Some insectivorous birds can exploit the pheromones emitted by female moths to attract males as a method of prey detection [7]. In the terrestrial salamander (Plethodon shermani) [8], a male delivers proteinaceous pheromones to the female as part of their ritualistic courtship behavior. These pheromones increase the female's receptivity to mating, as shown by the reduction in courtship duration. Human pheromones also have sex distinctions; the male pheromone is androstenedione [9], and the female pheromone is estradiol [10]. Many animals have a particular organ that secretes special odors, such as the glandular sac of the male musk deer, which can secrete musk to attract females [11]. Civets have a scent gland in the perineum that secretes civet scent, which plays a role in marking territory and attracting the opposite sex [12,13]. For birds, the uropygial gland is a special skin derivative and hole plasma secretory gland. The uropygial gland is usually arranged in pairs, parallel to the back of the tail feather base, and located under the skin. As the largest exocrine gland in birds, the uropygial gland usually secretes lipids and aliphatic monoesters, which are likely to be essential sources of chemical signals for birds. Using gas chromatography-mass spectrometry (GC-MS), Burger et al. identified the uropygial gland secretion of greenwood hoopoe (Upupa epops). They showed the volatile substances consisted of short-chain fatty acids, aldehydes, aliphatic and heterocyclic aromatic amines, ketones, and dimethyl sulfides [14]. Researchers have detected gender differences in the chemical composition of the uropygial gland waxes in domestic ducks before the nesting period [15]. Studies have also found that the volatile substances secreted by the uropygial glands are related to the behavior [16] and reproductive activities of avian species [17]. Additionally, it has been proved that the presence of a fatty acid mixture (called soothing pheromones) in the uropygial gland secretions of hens has a soothing effect on chicks, reducing their stress, anxiety, and aggressive behavior, and promoting the growth of chicks [18]. Furthermore, studies have explained the biological functions of uropygial glands in mate selection [19], production performance [20], and reproductive performance [21,22], etc. Research shows that a passerine species can discriminate the sex of conspecifics by relying on chemical cues, which suggests that uropygial gland secretion may potentially function as a chemical signal used in mate choice and/or intrasexual competition in this species [23]. Hirao et al. found that roosters are more likely to mate with hens with uropygial glands when compared to hens with uropygial glands removed. 
This means that the uropygial glands of hens can emit some odor information, and male chickens can distinguish by smell [19]. Although growing evidence supported that those chemical compounds emitted by the uropygial gland of birds may play a role in individual recognition, the possible role of chemical cues in the sexual selection of birds has only been preliminarily studied. Waterfowl have well-developed uropygial glands [24]. Research has shown that male domestic ducks with the olfactory nerve removed exhibited significantly inhibited sexual behavior, implying that chemical signals may play a role in duck courtship behaviors [25]. We hypothesized that uropygial gland secretions might act as a chemical signal responsible for sexual communication in ducks. Considering that high-performance liquid chromatography (HPLC) is a chromatographic technique with strong versatility and analytical capabilities, it is suitable for any compound with solubility in liquids and is widely used to quantify small molecules and ions, as well as the separation and purification of large molecules [26]. Therefore, it is necessary to give a comprehensive view of the chemical compounds secreted by the uropygial glands of ducks based on non-targeted metabolomics of LC-MS, and to show the differences of ducks between the sexes to further explore whether there were differences in the uropygial gland, and what the primary differences were between olfactory cues. Birds and Sampling The animal treatment and welfare protocol listed below has been approved by the Sichuan Agricultural University Animal Ethical and Welfare Committee, ethic code is 20190035. The Nonghua strain ducks used in this study were provided by the waterfowl breeding farm of Sichuan Agricultural University. The study was conducted from August 2020 to January 2021. A total of 40 healthy ducks, including half male and half female ones with similar body weight, were reared together in the floor-reared system with 5 cm-thick sawdust bedding covering the concrete floor. The stocking density was 1 duck/m 2 . The temperature of the ducks' room was maintained between 20 and 30 • C. The ducks were fed a standard growth period and layer duck period diet throughout the trial (Supplementary Table S1). At 20 weeks of age the ducks had reached primiparous and 6 male and 6 female ducks were randomly selected for exsanguination. The secretion of the uropygial glands was collected from the left side of the uropygial gland. Finally, the samples were kept in dry ice and then were sent to Suzhou Panomick Biopharmaceutical Technology Co., Ltd. for metabonomic analysis. Metabolite Extraction Samples were thawed at 4 • C, transferred 100 mg into 2 mL centrifuge tubes, then 600 µL 2-chlorophenyl alanine (4 ppm) of methanol (−20 • C) was added and shaken for 30 s. This is followed by 100 mg glass beads being added and put into the tissue grinder, ground for 90 s at 60 Hz. After ultrasound at room temperature for 10 min and being centrifuged at 4 • C for 10 min at 12,000 rpm, the supernatant was filtered through a 0.22 µm membrane to obtain the prepared samples for liquid chromatograph-mass spectrometer (LC-MS). A quantity of 20 µL from each sample was taken to generate the quality control (QC) samples, and the rest were used for the LC-MS analysis [27]. 
Mass Spectrometry Conditions The electrospray ionization mass spectrometer (ESI-MSN) experiments were executed on the Thermo Q Exactive Plus mass spectrometer (Q Exactive HF-X, Thermo Fisher Technologies, Shanghai, China) with the spray voltage of 3.5 kV and −2.5 kV in positive and negative modes. Sheath gas and auxiliary gas were set at 30 and 10 arbitrary units. The capillary temperature was 325 • C. The analyzer scanned over a mass range of m/z 81-1000 for a full scan at a mass resolution of 70,000. Data-dependent acquisition (DDA) MS/MS experiments were performed with a high-energy collision dissociation (HCD) scan. The normalized collision energy was 30 eV. Dynamic exclusion was implemented to remove some unnecessary information in MS/MS spectra [28]. Qualitative and Quantitative Analysis of Metabolites The obtained raw data were converted into mzXML format (xcms input file format) through ProteoWizard software (v3.0.8789) [29]. The XCMS package of R (v3.3.2) was used for peak identification, peak filtration, and peak alignment. The main parameters were bw = 5, ppm = 15, peakwidth = c (5, 30), mzwid = 0.015, mzdiff = 0.01, method = "centWave". A data matrix was obtained, including mass to charge ratio (m/z), retention time, peak area (intensity), and other information; these precursor molecules were obtained according to positive ion mode and negative ion mode, then these data were exported to Excel for subsequent analysis. Finally, the data of different magnitudes were analyzed and batch normalization of peak area was performed on the data. The qualitative metabolite analysis firstly confirmed the precise molecular weight of the metabolite (molecular weight error < 30 ppm), and then the MS/MS product ion spectrum of the metabolites were matched with the structural data acquired from the Human Metabolome Database (HMDB), Metlin, Massbank, Lipymaps, Mzclound, and the self-built Standard Product Database (http://query.biodeep.cn/, accessed on 4 September 2021), using the Mass Fragment software (2022 Waters, Shanghai, China). Differential Metabolite Screening and Functional Analysis We usually use the VIP (variable influence on projection) value to measure the intensity of the influence of the expression of each metabolite on the discrimination of each group of samples, thereby assisting the screening of marker metabolites. The sum of the squares of all VIP values equals the total number of variables in the model, so the average value is 1. When the VIP of a variable is >1, the variable is important. Differential metabolite screening criteria are p-value ≤ 0.05 and VIP ≥ 1. Among the total differential metabolites, 139 differential metabolites were annotated. First, 80 extremely significant differential metabolites were selected according to (p < 0.01), and then 24 volatile differential metabolites were screened according to boiling point (50-260 • C). Based on the thermal map cluster analysis, to narrow the range, 5 extremely significant volatile differential metabolites were screened out according to these 24. The PHEATMAP package in R (v3.3.2) [30] was used to perform agglomerative hierarchical clustering on each data set. Based on the MeTPA database [31], the differential metabolites were enriched with the Kyoto Encyclopedia of Genes and Genomes (KEGG), and we analyzed the metabolic pathways related to the differential metabolites in each group. 
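As a rough illustration of the screening rules described above (p-value ≤ 0.05 and VIP ≥ 1, extremely significant metabolites at p < 0.01, and volatile candidates selected by boiling point), a minimal Python sketch is given below. The table file and its column names are hypothetical stand-ins and are not taken from this study.

```python
import pandas as pd

# Hypothetical peak table: one row per metabolite with a p-value from the
# group comparison, the OPLS-DA VIP score and a boiling point in degrees C.
df = pd.read_csv("metabolite_table.csv")

# Step 1: differential metabolites (p-value <= 0.05 and VIP >= 1).
differential = df[(df["p_value"] <= 0.05) & (df["VIP"] >= 1)]

# Step 2: extremely significant differential metabolites (p < 0.01).
extreme = differential[differential["p_value"] < 0.01]

# Step 3: volatile candidates selected by boiling point (50-260 degrees C).
volatile = extreme[extreme["boiling_point_C"].between(50, 260)]

print(len(differential), len(extreme), len(volatile))
```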
The Chemical Components of the Uropygial Glands Can Help Distinguish the Ducks with Different Genders After chromatographic separation, the outflow components continued to enter the mass spectrometry. Mass spectrometry continuously scans these components and collects data. One mass spectrometry was obtained for each scan, and the ions with the highest intensity in each mass spectrometry were selected for continuous description. The spectrum was obtained by taking the ion intensity as the ordinate and time as the abscissa (Figure 1). Under the conditions of positive and negative ions, there are significant differences in the mass spectrometry of the uropygial gland secretions in female ducks and male ducks. Most of the chromatographic peaks of the male ducks are significantly higher than that of the female ducks. These results indicated that sex differences greatly influence the metabolites' composition of the uropygial glands in ducks. We performed quality control steps during the mass spectrometry-based metabolomics to obtain reliable and high-quality metabolomics data. Through the principal component analysis (PCA) plot (Figure 2A), all QC samples were gathered, indicating the data repeatability is reasonable and the LC-MS system was stable. Then, QC data normalized the raw data to omit the batch effects. This study performed the orthogonal projections to latent structures discriminant analysis (OPLS-DA) method on the two groups of samples. The OPLS-DA analysis can show clear differences between groups ( Figure 2B). The OPLS-DA score plots showed that the male and female duck samples were significantly separated in the positive and negative ion modes. The OPLS-DA model parameters in positive ion mode are R 2 Y = 0.999, Q 2 = 0.886; in negative ion mode they are R 2 Y = 1, Q 2 = 0.862. The R 2 Y and Q 2 values were close to 1.0 in the positive and negative ion modes, and the gap was less than 0.2 ( Table 1), indicating that the OPLS-DA model had good accuracy and reliable predictive power. Screening Differential Metabolites in Duck Uropygial Gland between Two Sexes The LC-MS helped identify 11,311 metabolites in duck uropygial secretions in positive ion mode and 14,321 in negative ion mode. Then, the differential metabolites were screened under the standards of p-value ≤ 0.05 and VIP ≥ 1 ( Figure 3A). Under the positive ion mode, 3831 differential metabolites, with 1493 up-regulated and 2338 down-regulated, have been screened in the duck uropygial gland between the two sexes ( Figure 3B). In the negative ion mode, there were 4510 different metabolites, of which 1583 were upregulated, and 2927 were down-regulated ( Figure 3B). Because the standard products in our information database are limited, some cannot be annotated. We have only a small fraction of metabolites annotated. By annotating all metabolites, a total of 139 differential metabolites were annotated, of which 49 were up-regulated, and 90 were down-regulated ( Figure 3C, Supplementary Table S2). The relative value of the metabolites of all the differential annotated metabolites was used to perform a hierarchical cluster analysis. It can be seen that the annotated metabolites were divided into 17 main sub-clusters based on their relative content (Supplementary Figure S1), implying a functional difference among these metabolites. 
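For readers who want to reproduce the kind of PCA quality-control check described above on their own peak-area matrix, a minimal scikit-learn sketch follows. The input matrix, its size, and the autoscaling step are assumptions for illustration and are not the exact pipeline used in this study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assumed peak-area matrix: rows are samples (e.g., QC, male and female
# ducks), columns are metabolite features; random numbers stand in for
# real LC-MS intensities.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=5.0, sigma=1.0, size=(14, 500))

# Autoscale the features and project the samples onto the first two
# principal components; tightly clustered QC samples indicate a stable run.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores.shape)  # (14, 2) coordinates for a PCA score plot
```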
Figure 3. The enrichment and function annotation of differential metabolites between the sexes: (A) volcano maps showing all of the differential metabolites in positive and negative ion modes; (B) histogram of differential metabolite statistics, where the red blocks represent up-regulated metabolites and the blue blocks represent down-regulated metabolites; (C) histogram of the annotated differential metabolites between the sexes.
KEGG Enrichment Based on the Differential Annotated Metabolites
The 139 differential metabolites in the duck uropygial gland between the two sexes were used for KEGG pathway enrichment. The results showed that a total of 77 pathways had been enriched (Supplementary Table S3). Here, we list the top 25 significantly enriched metabolic pathways (Figure 4), including amino acid metabolism, lipid metabolism, carbohydrate metabolism, signaling molecules and interactions, and nucleotide metabolism, etc. Each of the following pathways contained at least four differential metabolites: the ABC transporter pathway; purine metabolism; pyrimidine metabolism; arginine and proline metabolism; the neuroactive ligand-receptor interaction; β-alanine metabolism; aminoacyl-tRNA biosynthesis; fatty acid biosynthesis; saturated fatty acid biosynthesis; alanine, aspartic acid, and glutamate metabolism; glutathione metabolism; cysteine and methionine metabolism; and tyrosine metabolism. The following pathways were enriched with at least two differential metabolites: the calcium signaling pathway; taurine and hypotaurine metabolism; sphingolipid metabolism; pantothenic acid and coenzyme A biosynthesis; pyruvate metabolism; phenylalanine, tyrosine, and tryptophan biosynthesis; galactose metabolism; ascorbic acid and aldonic acid metabolism; niacin and niacinamide metabolism; neomycin, kanamycin, and gentamicin biosynthesis; and tryptophan metabolism.
Figure 4. The top 25 KEGG enrichment pathways based on the differential metabolites in the uropygial gland between male and female ducks. The dot size represents the number of differential metabolites in the corresponding pathway (the larger the dot, the more differential metabolites enriched in the pathway); the ratio is the number of differential metabolites in the corresponding pathway relative to the total number of identified metabolites; and bubbles with different colors indicate the degree of significance.
The Enrichment of Volatile Substances Potentially as Olfactory Cues in Duck Uropygial Gland for Chemical Signals
The volatile substances are easily emitted into the environment and can be considered candidates for the olfactory cues that convey chemical signals. Among the 80 differential metabolites that reached an extremely significant difference (p < 0.01), we enriched the volatile substances according to the boiling point of each differential metabolite (50-260 °C) between the two sexes. We identified 24 volatile substances (Table 2) and performed a heat map analysis on them (Figure 5). Using heatmaps, we can visually judge the differences between samples/groups by the shades and differences of colors; combined with the significance results of the statistical tests, the direction of significance can be assessed. Furthermore, we demonstrated that five volatile substances could distinguish the sex groups of ducks very well. They included picolinic acid, 3-hydroxypicolinic acid, indoleacetaldehyde, 3-hydroxymethylglutaric acid, and 3-methyl-2-oxovaleric acid (Figure 6).
All these substances are significantly higher in males than in females.
Figure 5. (A) Heat map of the 24 volatile substances identified among the 80 significantly differential metabolites between the two sexes; (B) hierarchical clustering showing that five volatile substances could distinguish the sex groups of ducks. In the classification bar, red represents DM and yellow represents DG; in the legend, the color represents the relative content (the redder the color, the higher the expression level, and the bluer the color, the lower the expression level). Values are Z-score normalized peak responses of metabolites.
Figure 6. Histograms of the differences between the sexes in the relative expression of the volatile secretions from the uropygial glands. All these substances are significantly higher in males than in females.
Discussion
It is well-known that the lipid substances secreted by the uropygial glands are beneficial for protecting feathers. The uropygial gland may play an essential role in maintaining the integrity of feathers. It is generally believed that birds peck the secretions from the uropygial gland with their beaks and smear them on their feathers during preening to increase waterproofing and resistance to pathogens [32]. In our study, many chemical compounds were identified in the uropygial gland secretions. According to the metabolites annotated, lipids accounted for 12.245% of the total annotated metabolites, supporting the theory that the uropygial glands can secrete large amounts of lipids related to their protective function [33]. The uropygial gland is a special skin derivative specific to birds. Leclaire et al. found that uropygial gland secretions were significantly different between females and males. They inferred that uropygial gland secretions might be important clues for mating among birds [34]. Moreover, according to Hirao's study, the mating frequency of males with females that had uropygial glands was significantly higher than that with females whose uropygial glands had been removed [19], which indicates that uropygial gland secretions may affect mating and mate selection among birds. Among the 80 differential metabolites with extremely significant differences in our study, the volatile substances accounted for 31.6%. The volatile substances are more easily emitted into the environment, and a high proportion of the total differential substances between male and female ducks were volatile, which implies a potential role for these volatile substances as sex pheromones. Zhang et al. also observed that the proportion of volatile octadecyl alcohol, nonadecanol, and eicosanol in the uropygial gland secretion of male parrots was four times higher than that of females, and proved that 3-alkanol compounds were the main substances transmitting sexual signals between male and female parrots [35]. Although there is some difference in the compositions of the volatile substances identified in our study, both studies identified alcohols, supporting the idea that the volatile substances secreted by the uropygial glands might act as sex pheromones and transmit sexual signals between the sexes. In our results, the uropygial gland secretions of male ducks contained significantly higher levels of several substances than those of females, including picolinic acid, 3-hydroxypicolinic acid, indoleacetaldehyde, 3-hydroxymethylglutaric acid, and 3-methyl-2-oxovaleric acid. Stralendorff et al. found that 3-methyl-2-oxovaleric acid is a broth-flavored ingredient, and its concentration in male urine is about 20 times that of female urine [36].
It can also represent the typical odor of male tree shrews (Tupaia belangeri). Moreover, these substances can be used as pheromones for signal communication between the sexes. Some studies have shown that indole acetaldehyde is a decomposition product of tryptophan and can attract lacewings [37]. Picolinic acid is a demethylated analogue of trigonelline, and studies by Poulin have shown that trigonelline can alert mud crabs to the presence of their natural enemy, the blue crab [38]. These studies provide necessary evidence supporting the claim that these differential volatile substances might be candidates for the sex pheromones responsible for information exchange between the sexes. We performed a KEGG enrichment analysis on the above five volatile metabolites and found that two of them were enriched. The enriched pathways included valine, leucine, and isoleucine biosynthesis; valine, leucine, and isoleucine degradation; and tryptophan metabolism. The three enriched metabolic pathways are all amino acid metabolism pathways, which may be the main mechanism determining the gender difference of the uropygial gland. Additionally, studies have shown that androgens and estrogens can promote pheromone contents in mouse urine, such as 2-heptanone and R,R-dehydroexotropin [39]. Asnani et al. [40] measured the regulatory effects of hormones on the uropygial glands of male adult pigeons. They found that adrenal steroids are mainly involved in the regulation of uropygial glands. There are also studies showing that testosterone can affect the secretions of the uropygial glands in birds [41]. In the chicken uropygial gland, the expression of pro-opiomelanocortin (POMC) and melanocortin receptor 5 (MC5-R) has been reported [42,43]. Although no sex hormones have been identified as significantly differential metabolites in our study, we demonstrated a significant difference in the secretions of duck uropygial glands between the two sexes, which may be because of the regulatory effects of sex hormones between males and females. As a result, the pathways of these amino acid metabolites may also be disturbed by sex hormones, resulting in changes in the content of related substances between the sexes.
Conclusions
In summary, this study, which combined LC-MS and non-targeted metabolomics, revealed that duck uropygial glands secrete substances (including volatile substances) that are very different between the sexes. Picolinic acid, 3-hydroxypicolinic acid, indoleacetaldehyde, 3-hydroxymethylglutaric acid, and 3-methyl-2-oxovaleric acid in the uropygial gland secretions of male ducks were significantly higher than in those of females, which suggests that they are candidate pheromones for information exchange between the sexes. These substances are essential clues for mate selection and mating among birds. In addition, the KEGG analysis showed that three amino acid metabolism pathways, including valine, leucine, and isoleucine biosynthesis; valine, leucine, and isoleucine degradation; and tryptophan metabolism, led to changes in related metabolite levels. These metabolite pathways may be disturbed by sex hormones and are the main mechanism that determines the sex difference in the uropygial glands. Therefore, our research results lay a foundation for future research on whether uropygial gland secretions affect ducks' reproduction and production levels.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ani12040413/s1. Figure S1: heatmap showing the clusters of all annotated metabolites based on their metabolic profiling content in all uropygial glands of ducks; Table S1: nutrient composition for the feed of ducks; Table S2: 139 differential metabolites identified in the uropygial glands of male and female ducks using non-targeted metabolomics technology; Table S3: a total of 77 pathways were enriched; according to the KEGG database, this table shows all the enrichment pathways of the 139 differential metabolites in the uropygial glands of male and female ducks.
Institutional Review Board Statement: The animal-use protocol has been reviewed and approved by the Sichuan Agricultural University Animal Ethical and Welfare Committee (ethics code 20190035).
Blade Rub-Impact Fault Identification Using Autoencoder-Based Nonlinear Function Approximation and a Deep Neural Network
A blade rub-impact fault is one of the complex and frequently appearing faults in turbines. Due to their nonlinear and nonstationary nature, complex signal analysis techniques, which are expensive in terms of computation time, are required to extract valuable fault information from the vibration signals collected from rotor systems. In this work, a novel method for diagnosing blade rub-impact faults of different severity levels is proposed. Specifically, the deep undercomplete denoising autoencoder is first used for estimating the nonlinear function of the system under normal operating conditions. Next, the residual signals obtained as the difference between the original signals and their estimates by the autoencoder are computed. Finally, these residual signals are used as inputs to a deep neural network to determine the current state of the rotor system. The experimental results demonstrate that the amplitudes of the residual signals reflect the changes in the states of the rotor system and the fault severity levels. Furthermore, these residual signals in combination with the deep neural network demonstrated promising fault identification results when applied to a complex nonlinear fault, such as a blade-rubbing fault. To test the effectiveness of the proposed nonlinear-based fault diagnosis algorithm, this technique is compared with the autoregressive with external input Laguerre proportional-integral observer, which is a linear-based fault diagnosis observation technique.
Introduction
A blade rub-impact fault is a severe type of mechanical fault frequently occurring in rotating machinery, especially in various turbines. The interactions between the blades of the rotor and the stationary parts of rotating machines can be recognized as a separate mechanical fault that can be caused by rotor blade extension due to high operating temperatures, or as a coupling fault where the rub-impact is a consequence (or evidence) of a different mechanical fault. The faults leading to blade rub are usually understood to include shaft imbalance, misalignments, excessive self-excited vibrations, or bearing failures [1]. If not detected and identified at the early stages, a blade rub-impact fault may cause the failure of the system and severe economic loss. Vibration signal analysis [2] is most frequently applied for diagnosing blade rub-impact faults in comparison with other methods, such as acoustic [3], pressure [4], and temperature analysis [5]. The main reason for its application is that performing the vibration signal acquisition in the field is relatively easy compared to other techniques. However, it is known that for proper vibration analysis, the signal processing methods play an essential role. In the approach proposed in this work, the nonlinear function approximation of the system and the signal estimation are performed with one block, which is represented by the deep learning technique DUDAE. First, the DUDAE is trained using the vibration signals corresponding to the healthy state of the rotor system. During this step, the DUDAE learns the latent coding in its bottleneck layer that represents the nonlinear function of the rotor system under normal operating conditions.
Next, the vibration signal corresponding to the unknown state of the rotor system is pushed at the input layer of the DUDAE, where it estimates the signal of the current state using the latent coding learned on signals of normal operating conditions. Then, the residual signal (i.e., error signal) is generated as the difference between the real vibration signal of an unknown system state and the one estimated by the DUDAE. Residue generation is used for enhancing the dissimilarities of the signals corresponding to different classes using the anomaly detection properties of the autoencoder, and hence, it generates sequences (residual signals) that are treated as discriminative features capable of improving fault diagnosis performance. At the final step, these residual signals are used as inputs to the DNN to accomplish the task of fault identification of rotating machinery. The specific contributions of this study can be summarized as below: 1. The novel data-driven method for diagnosing coupling rotor imbalance and blade rub-impact faults in nonlinear rotor systems is presented. 2. The deep learning-based system identification approach for approximating the nonlinear function of the system and state estimation has been introduced as a part of the proposed fault diagnosis solution. The remainder of this manuscript is organized as follows. Section 2 introduces the proposed methodology for diagnosing the coupling blade rub-impact faults of different severity levels. Section 3 provides an experimental validation of the introduced framework and discussion. Finally, Section 4 contains the concluding remarks. Proposed Methodology A block diagram of the proposed approach for identifying the coupling blade rub-impact faults of various intensities is depicted in Figure 1, and it consists of three important steps. First, the collected vibration acceleration signals corresponding to the normal operating state when no faults are observed in the system are used to train the DUDAE to create a nonlinear function approximation of a system under normal operating conditions. Then, the autoencoder's property of anomaly detection is used to represent the deviations in the state of the system by generating the residual signal. This residual signal represents the difference (error) between the current vibration signal approximated by the DUDAE using the learned nonlinear function of the normal state of the system and the actual current vibration signal. At the final step, this residual signal is considered a discriminative representation containing fault feature information and describing the current state of the system that is employed as an input to the DNN to accomplish the problem coupling blade rub-impact fault identification. Data Collection To investigate the capabilities of the proposed framework for blade rubbing fault identification, an experimental dataset was acquired in this study. The test rig used to collect rub-impact fault data of various severity levels is presented in Figure 2. Two vibration sensors installed at the different ends of the shaft were used to collect the data during the experiment. Each of the sensors has two channels for recording the displacements of the rotor in vertical and horizontal directions. In this work, the coupling shaft imbalance and periodical local blade rub-impact fault has been simulated. 
Specifically, to create the interactions between the rotor disk blades (the number of blades of the rotor disk is equal to 16) and the rotor cage (i.e., a blade rub-impact fault), a shaft imbalance fault (<45°) was first simulated by attaching additional weights to the rotor disk. The detection and evaluation of the severity levels of the fault being investigated were done by a thermal camera mounted on the non-drive end of the rotor. Overall, 10 classes of signals were observed during data collection. Specifically, class #1 corresponds to 0 g of extra weight added to the shaft, which represents the normal operating state of the system. Classes #2, #3, and #4 correspond to 0.5, 1, and 1.5 g of extra weight added to the shaft. At this time, a shaft imbalance fault appeared in the testbed; however, despite the rotor imbalance, no contact between the blades and the stator was detected through the thermal camera. Classes #5 to #7 describe the first evidence that a coupling fault appeared in the system, when shaft imbalance caused a slight blade rub fault with 1.6, 1.7, and 1.8 g of additional weight added to the shaft, respectively. Classes #8 and #9, with 2 and 2.4 g of additional weight, correspond to the coupling fault of shaft imbalance with an intensive blade rub, while class #10, with 2.8 g of extra weight attached to the rotor disk, represented the coupling fault when the shaft imbalance led to a severe blade rub-impact fault condition. The experiment was conducted under a constant rotational speed equal to 2500 revolutions per minute (RPM). The vibration signals were collected at a sampling rate of 65.5 kHz. The duration of signal recordings for each signal class was 30 s. Then, for signal processing purposes, these signals were split into samples of 1 s in length. Thus, the total number of data instances acquired during the experiment was equal to 300 samples before cutting them into windows during signal resampling. The main properties of the collected signal classes are summarized and presented in Table 1.
Signal Resampling
In general, deep learning-based approaches require datasets with a huge number of samples for efficient representation learning. However, it is not always possible, and even expensive, to collect huge datasets with samples corresponding to faulty conditions of the system. Furthermore, when artificial intelligence algorithms are applied to one-dimensional signals, the size of these input signals affects the architecture of the network (i.e., depth of the network, number of nodes, shape) as well as the time needed for learning these representations. To address these issues prior to creating the autoencoder-based nonlinear observer, in this work, we perform a resampling of the collected vibration signals corresponding to different states of the system into a series of windows such that each window has a length equal to the number of data points collected during one revolution of the shaft. For vibration signal resampling, first, the number of revolutions completed in one second (RPS) should be computed by the formula below:
RPS = RPM/60 (1)
where RPM is the rotational speed used during data recording. Next, the time needed for one revolution (2) and the number of data points (3) collected during one revolution of the shaft can be obtained as shown below:
TFOR = 1/RPS (2)
w_length = TFOR × f_sampling (3)
Here, TFOR stands for the time for one revolution expressed in seconds, w_length corresponds to the length of each window of the resampled signal expressed in a number of data points, and f_sampling is the sampling frequency used during the data acquisition. The computed parameters for resampling the signal into windows are as follows: RPS ≈ 41.6, TFOR = 0.024, and w_length ≈ 1598, respectively. An example of signal resampling using the achieved resampling parameters is depicted in Figure 3.
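A short sketch of this resampling step, following Equations (1)-(3), is given below. The rounding of the window length is an assumption; with the stated rotational speed and sampling rate the computed value comes out slightly below the reported w_length ≈ 1598, so the authors' exact figure may reflect different rounding.

```python
import numpy as np

# Sketch of the resampling step in Eqs. (1)-(3): compute how many samples one
# shaft revolution spans and split a recording into one-revolution windows.
RPM = 2500                 # rotational speed used during acquisition
F_SAMPLING = 65_500        # sampling rate in Hz

rps = RPM / 60.0                           # Eq. (1): revolutions per second
tfor = 1.0 / rps                           # Eq. (2): time for one revolution, s
w_length = int(round(tfor * F_SAMPLING))   # Eq. (3): points per revolution
# ~41.7 rev/s, ~0.024 s and ~1572 points with these inputs; the text reports
# w_length ~ 1598, so the exact rounding used by the authors may differ.

# Split a 1 s signal (random data standing in for a real recording) into
# non-overlapping one-revolution windows; the trailing remainder is dropped.
signal = np.random.randn(F_SAMPLING)
n_windows = len(signal) // w_length
windows = signal[: n_windows * w_length].reshape(n_windows, w_length)
print(rps, tfor, w_length, windows.shape)
```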
Deep Undercomplete Denoising Autoencoder (DUDAE)-Based Nonlinear Function Approximation of the System and Residual Signal Generation
Autoencoders are a form of deep neural network that are widely used for problems where manifold learning is required. The most common tasks that are solved by autoencoders are feature learning [42], feature extraction [43], and feature selection [44]. However, since autoencoders are deep neural networks with a symmetric structure, they can successfully utilize the properties of neural networks to learn and discover complex nonlinear relations of the input data (i.e., nonlinear function approximation) and successfully utilize them for input data reconstruction, which is the purpose of the autoencoder in this paper. The simple undercomplete autoencoder consists mainly of three layers that are trained in an unsupervised manner. The first layer is called the input layer. It receives the input data and pushes it to the further layers. The hidden layer after the input layer with lower dimensionality is called a bottleneck layer. It is used to extract the latent coding, i.e., the high-level representative features of the input data. The dimensionality of the latent codes is equal to the number of nodes in the bottleneck layer. The last layer, called the output layer, is used to decode the obtained latent codes and reconstruct the original input data. In summary, the autoencoder performs two tasks: (1) it encodes the input data into the latent coding, and (2) it decodes the latent coding to reconstruct the original data. The operation of the autoencoder can be summarized as follows:
φ: x → F, ψ: F → x̂, (φ, ψ) = arg min_(φ,ψ) ‖x − x̂‖^2 (4)
As mentioned above, the simple undercomplete autoencoder has only one hidden layer: the bottleneck layer. During the encoding stage, the autoencoder receives the input data x of the dimensions R^m and nonlinearly maps the input data to the latent coding F with the dimensions R^n. The encoding process can be presented as below:
F = f(Wx + b) (5)
where F is the latent coding, W represents the weight matrix, b stands for the bias, and f is a nonlinear activation function. The decoding process of the autoencoder is described by:
x̂ = f′(W′F + b′) (6)
Here, x̂ is the reconstructed output that resembles the input data, W′ is the weight matrix, b′ stands for the bias, and f′ represents an activation function of the decoder, respectively.
To perform the training of the autoencoder, the mean squared error (MSE) loss function should be calculated between the original input data and the reconstructed data using the following equation:
L(θ) = (1/N) Σ_(i=1)^(N) (x_i − x̂_i)^2 (7)
where L stands for the MSE loss function, θ is a set of model parameters, and N is the dimensionality of the input data, i.e., the number of nodes in the input layer of the autoencoder. In this paper, the DUDAE is utilized to approximate the nonlinear function of the normal state of the rotor system. The detailed architecture of this autoencoder is presented in Table 2. Unlike the simple three-layer undercomplete autoencoder, the proposed DUDAE is a deep autoencoder (emphasized by the first 'D' in the abbreviation) that has more than one hidden layer, as can be seen in the table. However, the basic idea described in Equations (4)-(7) pertains to the DUDAE, with the only difference being that during the encoding and decoding phases, more nonlinear data transformations are performed, corresponding to the increased number of hidden layers. From the same table, it can be seen that the size of the encoding layers is smaller than that of the input layer, which means that the structure of the proposed autoencoder is "undercomplete" (highlighted by the "U" in the abbreviation). This is needed to force the autoencoder to learn a more compact representation (i.e., nonlinear function) from the input data. To increase the tolerance to noise of the autoencoder used for approximating the nonlinear function of the normal operating state of the system, dropout [45], with a rate equal to 0.1, is added to the input layer in which the input signals are received. This makes the proposed autoencoder a type of denoising autoencoder (this property is expressed as the second 'D' in the abbreviation). As the activation function for the hidden and output layers of the DUDAE, the scaled exponential linear units (SELU) function is chosen in this paper. There are a few main reasons for employing this activation function: (1) the input vibration signals collected by the sensors contain both positive and negative values, hence a non-saturating activation function that supports these types of inputs is needed; (2) the specific formulation of the SELU activation prevents the vanishing gradient problem that may be faced in deep architectures, as well as avoids situations in which a neuron can die during training; and (3) the SELU activation function speeds up the training process and convergence of the deep neural network due to its normalization properties [46]. The formulation of the SELU activation function is shown in Equation (8):
SELU(x) = λx, if x > 0; λα(e^x − 1), if x ≤ 0 (8)
where λ ≈ 1.05 and α ≈ 1.6731 are the coefficients predetermined by the inventors of SELU activation [42]. Glorot uniform weight initialization [47] was chosen as the initialization strategy of the weights in the hidden layers of the proposed deep undercomplete denoising autoencoder. As the optimization algorithm for training the deep denoising undercomplete autoencoder to estimate the nonlinear function of the normal system state using backpropagation, the RMSProp optimizer [48], a widely used variant of stochastic gradient descent for training autoencoders, is applied in this paper. The equation of this optimization algorithm can be presented as follows:
s ← γs + (1 − γ) ∇_θ L(θ) ⊗ ∇_θ L(θ), θ ← θ − ξ ∇_θ L(θ) ⊘ √(s + ε) (9)
Here, s is a vector containing the squares of the loss function gradients; γ stands for the rate of decay (γ = 0.9); ∇_θ L(θ) represents the gradient of the loss function (MSE in this case) with respect to the parameters of the deep learning model, θ; ξ is the notation for the learning rate that was assigned to be equal to 0.001; ε is the coefficient needed to prevent zero division (ε = 10^−7 in this paper); and ⊗ and ⊘ are the operators of element-wise multiplication and division, respectively.
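A minimal Keras sketch of an autoencoder along these lines is shown below. The input size follows the one-revolution window length discussed earlier, while the hidden-layer widths are assumptions, since Table 2 is not reproduced here; the sketch is illustrative rather than the authors' exact implementation.

```python
import tensorflow as tf

# Sketch of a deep undercomplete denoising autoencoder: input dropout for
# denoising, SELU activations, Glorot uniform initialization, RMSProp and
# MSE loss. Hidden-layer widths below are assumed, not taken from Table 2.
INPUT_DIM = 1598                      # points per one-revolution window
widths = [1024, 512, 256, 128]        # assumed encoder widths (bottleneck last)

inputs = tf.keras.Input(shape=(INPUT_DIM,))
x = tf.keras.layers.Dropout(0.1)(inputs)          # denoising via input dropout
for w in widths:                                   # encoder
    x = tf.keras.layers.Dense(w, activation="selu",
                              kernel_initializer="glorot_uniform")(x)
for w in reversed(widths[:-1]):                    # decoder mirrors the encoder
    x = tf.keras.layers.Dense(w, activation="selu",
                              kernel_initializer="glorot_uniform")(x)
outputs = tf.keras.layers.Dense(INPUT_DIM, activation="selu",
                                kernel_initializer="glorot_uniform")(x)

dudae = tf.keras.Model(inputs, outputs)
dudae.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3,
                                                    rho=0.9, epsilon=1e-7),
              loss="mse")
# Training would use only normal-state windows, e.g.:
# dudae.fit(x_normal, x_normal, epochs=100, batch_size=64)
```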
The equations of this optimization algorithm can be presented as follows:

s ← γs + (1 − γ)∇θL(θ) ⊗ ∇θL(θ)
θ ← θ − ξ∇θL(θ) ⊘ √(s + ε) (9)

Here, s is a vector containing the squares of the loss function gradients; γ stands for the rate of decay (γ = 0.9); ∇θL(θ) represents the gradient of the loss function (MSE in this case) with respect to the parameters of the deep learning model, θ; ξ is the notation for the learning rate, which was assigned to be equal to 0.001; ε is the coefficient needed to prevent division by zero (ε = 10^−7 in this paper); and ⊗ and ⊘ are the operators of element-wise multiplication and division, respectively.

Residual Signal Generation

The main purpose of the autoencoder (DUDAE) in this paper is to learn the nonlinear function of the system under normal operating conditions. Once the training is completed, this trained model is used to give its estimate of the current system state by attempting to reconstruct a signal previously unseen during the training (i.e., a signal corresponding to the unknown state of the system). Next, the residual signals are generated as the difference between the real unknown vibration signal and the estimate of this signal delivered by the DUDAE. These residual signals are used at the next step as the input for the DNN to perform fault identification, and can be computed as below:

rx(n) = x(n) − x̂(n) (10)

where rx(n) is the residual signal, x(n) stands for the original vibration signal in the time domain, and x̂(n) is the signal reconstructed by the autoencoder using the latent coding learned while training on signals corresponding to the normal operating state of the system (i.e., when no imbalance and no blade rub fault are observed). The purpose of computing the residual signals is as follows. Since the DUDAE is trained using only the data collected under normal system operating conditions, it learns how to reconstruct these data by using the learned nonlinear function, i.e., the latent coding. However, it cannot accurately reconstruct data that have not been used during the training process. That is, if the DUDAE is applied to reconstruct signals not observed during training that significantly deviate from the signals corresponding to the normal system state, it will inevitably lead to a reconstruction error. Furthermore, when a shaft imbalance or a coupling imbalance and blade rub fault appears in the system, the values of the statistical parameters of the vibration signals increase with the increase of their amplitude. This means that errors between the real signals corresponding to abnormal conditions of the system and the ones estimated by the DUDAE will increase, too. This allows for detection of the current state of the system, and the residual signals computed by Equation (10) can be used as discriminative features to perform identification of coupling blade rub faults of various intensity levels.
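A minimal sketch of the residual generation step of Equation (10), assuming a trained dudae model (e.g., the sketch above) and an array signals of resampled time-domain vibration signals; both names are hypothetical.

```python
import numpy as np

def generate_residuals(dudae, signals):
    """Compute residual signals rx(n) = x(n) - x_hat(n) for a batch of signals.

    dudae   -- autoencoder trained only on normal-condition signals
    signals -- array of shape (num_signals, signal_length)
    """
    x_hat = dudae.predict(signals)   # DUDAE estimate of each signal
    return signals - x_hat           # small for normal signals, large for faulty ones
```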
Fault Identification Using Residual Signals and the DNN

Although the DNN is a variation of the conventional ANN, which was introduced long ago, its higher dimensionality and larger number of hidden layers have made it one of the most powerful and widely applied decision-making algorithms for a huge variety of problems. Furthermore, DNNs have become the main core of recent trends in the field of artificial intelligence algorithms, such as deep representation learning. The general DNN architecture resembles that of an ANN and consists of an input layer, an output layer, and a sequence of hidden layers. The generalized formula of the m-th hidden layer operation can be summarized as follows:

x_m = f(W_m x_{m−1} + b_m)

where x_m is the output of the m-th hidden layer after applying the nonlinear activation function f; x_{m−1} is the output of the previous hidden layer after application of the activation function; and W_m and b_m are the weight matrix and bias vector of the m-th hidden layer, respectively. In this paper, the DNN is used to perform the task of blade rub-impact fault identification using the residual signals computed by Equation (10). The exact architecture of the DNN used for fault identification is presented in Table 3. As can be seen from the table, the architecture of the proposed DNN is similar to the encoder part of the DUDAE described in Section 2.3. However, there are two differences, which are discussed below. The first is the way the dropout regularization has been applied. Unlike the autoencoder, where the dropout was applied only to the input layer to increase its robustness to the noise in the data, in the DNN used for fault identification, a dropout rate of 0.1 is applied to hidden layers #2, #3, #4, and #5 to avoid overfitting of the data. If the DNN overfits the training data, it might fail to generalize to the validation and testing data (the data unseen during the training process), which would lead to a decrease in the fault classification performance. It cannot be seen from the table, but along with the dropout regularization, an early stopping procedure is applied during the training of the DNN to reduce the chance of overfitting. The idea of early stopping is to interrupt the training process once the validation error stops decreasing, or starts increasing beyond some tolerance level, during a defined number of epochs. The second difference is the activation function of the output layer. To solve the multiclass classification problem, the SoftMax activation is employed in the output layer of the DNN. The SoftMax activation function is given as follows:

P̂_k = σ(s(x))_k = exp(s_k(x)) / Σ_{j=1}^{K} exp(s_j(x))

where K is the total number of classes and s(x) is a vector containing the scores of each class for the specific data instance x. The input data instance is assigned to the class with the highest estimated probability P̂_k (i.e., the class with the highest computed score for this sample). To train the DNN to perform blade rub fault identification using the residual signals, the categorical cross-entropy loss function is used with the outputs of the SoftMax activation of the output layer to perform decision making about the state of the system. The categorical cross-entropy loss can be represented as below:

L(θ) = −(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} y_k^(i) log(P̂_k^(i))

where θ is the set of model parameters, N is the number of data instances, and y_k^(i) and P̂_k^(i) are the target and estimated probabilities that the i-th data instance belongs to class k, respectively. The same optimization algorithm used for training the autoencoder, RMSProp (Section 2.3, Equation (9)), is used for training the DNN by computing the gradients of the categorical cross-entropy loss function with respect to the model parameters θ. The remaining parameters of the machine learning model, such as the weight initialization algorithm, the learning rate of the optimization algorithm, and other parameters of the network, remain the same as described in Section 2.3.
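For illustration, a hedged Keras sketch of such a DNN classifier is given below; the hidden-layer widths are placeholders (the actual architecture is the one in Table 3), with a dropout of 0.1 on hidden layers #2-#5 and a SoftMax output trained with the categorical cross-entropy loss.

```python
from tensorflow.keras import layers, Model, optimizers

signal_length = 1024   # placeholder length of a residual signal
num_classes = 10       # ten system conditions considered in this study

r_in = layers.Input(shape=(signal_length,))
h = layers.Dense(512, activation="selu",
                 kernel_initializer="glorot_uniform")(r_in)   # hidden layer #1
# Hidden layers #2-#5 with dropout of 0.1 to reduce overfitting
for size in [256, 128, 64, 32]:
    h = layers.Dense(size, activation="selu",
                     kernel_initializer="glorot_uniform")(h)
    h = layers.Dropout(0.1)(h)
# SoftMax output producing the class probabilities
y_out = layers.Dense(num_classes, activation="softmax")(h)

dnn = Model(r_in, y_out)
dnn.compile(optimizer=optimizers.RMSprop(learning_rate=1e-3),
            loss="categorical_crossentropy", metrics=["accuracy"])
```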
Training, Validation, and Testing Data Configuration

After the signal resampling described in Section 2.2, the new dataset consisted of 12,300 time-domain resampled vibration signals in total (1230 resampled signals for each system condition observed during data collection). To investigate the fault identification capabilities of the proposed approach, two experimental datasets were constructed. The first dataset consisted of all the resampled time-domain vibration signals corresponding to the normal state of the system (1230 resampled signals), i.e., when neither imbalance nor coupling imbalance and blade rub faults were observed (this dataset is further referred to as dataset #1). This dataset is needed to train the DUDAE to reconstruct the input data using the learned latent coding and to derive the residual signals that are further used for fault identification by the DNN. For training the DUDAE, dataset #1 was randomly divided into training and validation subsets at a ratio of 8:2. Thus, 984 resampled signals corresponding to normal system conditions were used as the training subset for the DUDAE, whereas the remaining 246 signals comprised the validation subset used to measure the validation error. Once the autoencoder was trained, it was used to generate the residual signals from all 12,300 original resampled vibration signals. These data were used further to accomplish the task of fault diagnosis. For this, the dataset of residual signals (further referred to as dataset #2) was first randomly split into training and testing subsets at a ratio of 8:2. Then, the obtained training subset was randomly split again at a ratio of 8:2 to obtain a validation subset. Thus, the obtained training subset from dataset #2 consisted of 7872 residual signals, the validation subset contained 1968 samples, and the remaining 2460 residual signals previously unseen by the DNN were used as a testing subset for evaluating the fault diagnosis capabilities of the proposed framework. To ensure the reliability of the proposed methodology and exclude the effect of randomness, the experiments for the proposed and referenced methods were performed 10 times, with different training, validation, and testing subsets randomly sampled at each trial.
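The nested 8:2 splits described above can be reproduced, for example, with scikit-learn; a sketch, assuming residuals and labels hold the signals and class labels of dataset #2 (hypothetical names). No stratification is applied, matching the purely random sampling described in the text.

```python
from sklearn.model_selection import train_test_split

# First split: 80% training / 20% testing (9840 / 2460 of the 12,300 signals)
X_train, X_test, y_train, y_test = train_test_split(
    residuals, labels, test_size=0.2)

# Second split: 80/20 of the training part -> 7872 training / 1968 validation
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.2)
```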
Training the DUDAE-DNN Model

Before validating the capabilities of the proposed framework to identify blade rub-impact faults of various intensity levels, the modules of the proposed framework, the DUDAE and the DNN, have to be trained. Furthermore, they have to be trained as a pipeline (i.e., in sequential order). Thus, first, the training and validation subsets of dataset #1, containing the time-domain resampled vibration signals corresponding to the normal condition, are used to train the DUDAE. Next, the training and validation subsets of dataset #2 (consisting of the residual signals obtained after training the DUDAE) are utilized to train the DNN to perform fault diagnosis. For training both parts of the model, data batches with 64 data samples each were utilized. Initially, the number of training epochs for the DNN model was assigned to be equal to 1000. However, the early stopping algorithm was applied during the training, which stops the learning process once the validation accuracy stops improving and restores the model parameters that demonstrated the highest fault classification accuracy on the validation subset.
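This early stopping behavior corresponds to a standard Keras callback; a sketch reusing the hypothetical dnn model and split arrays from the earlier sketches, with the patience value being an assumption, since the exact tolerance level is not stated in the text.

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_accuracy",
                           patience=20,               # assumed tolerance, not from the paper
                           restore_best_weights=True) # keep the best validation weights

history = dnn.fit(X_train, y_train,
                  validation_data=(X_val, y_val),
                  epochs=1000, batch_size=64,         # values stated in the text
                  callbacks=[early_stop])
```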
In Figure 4, the analysis of the dependence between the number of training epochs of the DUDAE model and its influence on the validation classification accuracy of the DNN is presented. From this figure, it can be seen that, in general, an increase in the number of training epochs for the DUDAE model leads to a decrease in the number of training epochs for the DNN model (the early stopping technique stops the training earlier, since no improvement on the validation subset has been observed for a certain number of epochs). At the same time, for all of the cases presented in Figure 4, it can be observed that at the moment when the training was stopped, the validation accuracy curve saturates and slightly oscillates around an accuracy of 94-95%. However, when we applied the best model parameters saved during the DNN training in each experiment, it appeared that the model trained on the residual signals obtained after training the DUDAE for 600 epochs reached 95.5% accuracy on the validation subset. The other models demonstrated slightly worse performance: 95.33%, 95.2%, 95.4%, 95.4%, and 95.1% for 100, 200, 300, 400, and 500 training epochs of the DUDAE, respectively. Overall, the number of training epochs of the DUDAE did not significantly affect the fault classification performance of the DNN. However, considering that the validation accuracy of the DNN was slightly higher when the DUDAE was trained over 600 epochs, it was decided to keep this number of training epochs for the DUDAE, while the number of training epochs of the DNN was left under the full control of the early stopping algorithm. Once the training epoch number of the DUDAE and the training scenario for the decision maker (the DNN) were fixed, we repeated the training-validation procedure 10 times to observe the behavior of the training-validation loss curves and to generalize the conclusions on the convergence of the proposed methodology. The training and validation loss curves obtained during the 10 experiments are presented in Figure 5.

The training and validation curves corresponding to the DUDAE are demonstrated in Figure 5a,b. From these figures, it can be seen that the values of the loss functions during the ten experiments first demonstrated a sharp descent during the first 40 epochs of training and then continued decreasing steadily toward zero. Although in all experimental trials the DUDAE was trained for 600 epochs, from Figure 5c,d we can observe that the training process of the DNN was stopped by the early stopping algorithm at different moments before 400 epochs in all trials except experiment #7, where the training of the DNN lasted for 488 epochs (the longest result). From Figure 5d and its color bar, it can be concluded that in all experimental trials the validation loss curves of the DNN demonstrated similar descending patterns and, at the moment when the training procedure was stopped, were oscillating around the value of 0.2. Overall, it can be concluded that the proposed methodology demonstrates repeatable results in terms of convergence under various training and validation subset permutations. However, it can also be seen that there is an open direction for improving the decision-making part of the proposed framework, because despite the good convergence of the DUDAE under various data permutations, the loss functions of the DNN saturated at a certain level without moving closer to zero.

Residual Signal Analysis

In this subsection, the analysis of the residual signals obtained after the DUDAE was trained on signals corresponding to the normal system state is provided. From the previous subsection, it was concluded that when the number of training epochs of the DUDAE is equal to 600, the obtained residual signals that are used as the input to the DNN for decision making on the state of the system lead to the highest classification accuracy on the validation dataset. The main point is that the well-trained DUDAE delivers residual signals of small magnitude oscillating around zero (i.e., a small reconstruction error) for the signals that correspond to the normal state of the system or that resemble those signals. On the other hand, when the imbalance and blade rub-impact fault appear in the system, the vibration signals start deviating from the ones corresponding to a normal operating state. Hence, with the increase of the rub-impact fault intensity, the reconstruction error increases as well, which leads to residual signals of higher magnitudes and higher deviations from zero. Examples of the residual signals computed by the trained DUDAE for different states of the system are depicted in Figure 6.
As can be seen from this figure, the magnitudes of the residual signals and their shapes change with the progression of the fault. Furthermore, it can be seen that the MSE values computed between the original and reconstructed signals also increase when the signals at the input of the trained DUDAE deviate significantly from the signals corresponding to the normal system condition, when neither shaft imbalance nor blade rub faults were observed. Figure 7 illustrates the energy of the residual signals generated by the proposed methodology for five signal classes, namely the normal system condition (class #1), shaft imbalance fault (class #4), shaft imbalance + slight rubbing fault (class #6), shaft imbalance + intensive rubbing fault (class #9), and shaft imbalance + severe rubbing fault (class #10), respectively. The signal groups presented in Figure 7 are the same as those demonstrated in Figure 6 for the sake of consistency. In the proposed methodology, the DUDAE extracts the function of the dynamic behavior of the normal signal (the rotor system under the normal operating condition, when no faults are observed) during its training. However, in abnormal conditions of the system, the behavior of the signal is utterly different from its behavior in the normal state of the system. Regarding Figure 7, it can be seen that the accuracy of the dynamic behavior estimation for the signals belonging to different classes is satisfactory, especially for class #1. The reason for this observation is that the residual signal itself is a type of error signal computed between the actual vibration signal and the one estimated by the DUDAE. That is, since the DUDAE has been trained on signals belonging to normal conditions, it is capable of accurately estimating unknown signals when their dynamic behavior is close to that of the signals it has learned on. Furthermore, it can be seen from Figures 6 and 7 that when we use the DUDAE to estimate unknown signal dynamic behavior that drastically differs from the behavior of signals collected under normal operating conditions, the estimation error (residual signal) between the actual and estimated signals increases. In Figure 6, this difference can be observed as a deviation of the residual signals from the zero mean, along with an increasing value of the MSE metric, while in Figure 7, this difference is characterized by growing values of the energy features extracted from those residual signals. Based on the energies of the residual signals presented in Figure 7, it can be concluded that the obtained residual signals are sensitive to the degradation of the system, which means that these residual signals can be used as discriminative features themselves for fault classification, or for feature extraction in conjunction with feature-based machine learning classifiers for diagnosing faults. Thus, the more discriminative the residual signals are, the easier it is for the classifier to perform fault identification. However, some overlap can be observed when the intensity of the rub fault increases, as in classes #9 and #10. Therefore, to improve the potential fault classification accuracy, the DNN with the residual signals as the input features is recommended in this work instead of conventional amplitude-based statistical feature extraction and fault classification schemes, the performance of which can be affected by the overlap of the extracted feature parameters.
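The energy feature referenced here (and reused by the RS+EN baselines in the next subsection) is assumed to be the usual sum of squared samples of the residual signal; a minimal sketch under that assumption:

```python
import numpy as np

def residual_energy(residuals):
    """Energy feature per residual signal, assumed as E = sum_n rx(n)^2.

    residuals -- array of shape (num_signals, signal_length)
    Returns one scalar per signal; the values grow with fault severity.
    """
    return np.sum(residuals ** 2, axis=1)
```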
Fault Identification Performance

To evaluate the fault identification capabilities of the proposed framework, we compare it with two counterpart methods. Since the proposed model is a pipeline containing two steps, nonlinear function approximation of the system state and decision making, for a fair comparison it is reasonable to fix the decision-making approach (i.e., the DNN) and vary the methods at the first step to observe whether the proposed pipeline influences the fault identification abilities or not. The first method used for the comparison is directly applying the DNN to the resampled signals in the time domain (further referred to as RAW+DNN). This approach allows us to investigate the improvement in classification performance of the proposed method, in which the nonlinear function approximation by the DUDAE is utilized, in comparison to when no function approximation is used. In the second approach used for the comparison, we utilize a widely used state-of-the-art linear observation method from the field of control theory, the autoregressive with external input (ARX)-Laguerre proportional-integral observer (ARXLPIO) [21], for estimating the nonlinear blade rub-impact fault signals.
The residual signals computed as the difference between the original raw signals and those estimated by the ARXLPIO are used as the inputs to the DNN to accomplish the task of fault diagnosis. This method will be further referred to as ARXLPIO+DNN. The architecture of the DNN employed in the comparison approaches matches the one used in the proposed DUDAE+DNN model. Note that in this comparison we are not using modern control theory algorithms, such as nonlinear observation techniques. The main reason for this, as was discussed in the introduction of this manuscript, is the complexity of the design process of these approaches in a real industrial environment, as well as the need to re-design the nonlinear observation technique whenever the system changes. Additionally, to investigate the quality of the obtained residual signals and to compare the performance of different types of techniques for residual signal classification (i.e., fault identification), two additional techniques that characterize the residual signals with a feature parameter before classification are included in this experiment. The first method characterizes the residual signals with the energy feature parameter and classifies them with a decision tree machine learning algorithm, as proposed in [21] (further referred to as RS+EN+DT). In the second approach used for residual signal characterization and classification, the SVM classifier, one of the most popular classification algorithms, is applied to the energy features extracted from the residual signals [49] (further referred to as RS+EN+SVM). The fault classification performance of the methods mentioned above is evaluated using the micro-averaged forms of widely used metrics [50]: micro-averaged recall (Rec_µ), micro-averaged precision (Prec_µ), micro-averaged F1-score (F1_µ), and total fault identification accuracy (FIA). The micro-averaged versions of these metrics are used to address the possible deviations in the numbers of data samples presented in each class of the testing subsets due to the random sampling procedure applied at each trial of the experiment. These metrics are expressed as follows:

Rec_µ = Σ_{k=1}^{K} TP_k / Σ_{k=1}^{K} (TP_k + FN_k)
Prec_µ = Σ_{k=1}^{K} TP_k / Σ_{k=1}^{K} (TP_k + FP_k)
F1_µ = 2 · Prec_µ · Rec_µ / (Prec_µ + Rec_µ)
FIA = (Σ_{k=1}^{K} TP_k / N) × 100%

Here, TP_k, FP_k, and FN_k are the true-positive, false-positive, and false-negative values computed for the data instances of class k, respectively; N is the total number of data samples in the datasets used for the experiment, and K is the total number of signal classes presented in the datasets. The experimental results expressed in these metrics, averaged over 10 experiments, are tabulated in Table 4.
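The micro-averaged metrics above pool the TP/FP/FN counts over all classes before computing each score; scikit-learn's average="micro" option implements exactly this pooling, so a sketch of the evaluation step can look as follows (y_true and y_pred are integer class labels).

```python
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

def micro_metrics(y_true, y_pred):
    """Micro-averaged metrics: TP/FP/FN are pooled globally over all K classes."""
    prec = precision_score(y_true, y_pred, average="micro")
    rec = recall_score(y_true, y_pred, average="micro")
    f1 = f1_score(y_true, y_pred, average="micro")
    fia = accuracy_score(y_true, y_pred) * 100   # total fault identification accuracy, %
    return prec, rec, f1, fia
```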
These results are also presented in Figure 8, where boxplots with the distributions of the accuracy values obtained during the 10 experiments are shown. The black cross in the boxes belonging to the different methods in Figure 8 corresponds to the average classification accuracy values presented in Table 4. As can be seen from the boxplots demonstrated in Figure 8, the classification accuracy values did not deviate significantly from the mean and median values during the experiments for the proposed method, which ensures the repeatability of the results. For ARXLPIO+DNN, it can be seen that the deviation of the accuracy values is also not very significant, with outliers not lying far from the box; however, all the accuracy values are distributed lower than the results of the proposed technique. Unlike the proposed method, where artificial intelligence-based system identification has been used, and ARXLPIO+DNN, where the linear observer has been utilized, we can see that the box corresponding to the RAW+DNN method is wider, with a long whisker extending toward the outlier of 68.1%. From this figure, it can be concluded that both the proposed DUDAE used for nonlinear function approximation and the ARXLPIO observation technique can improve the fault diagnosis stability; however, the DUDAE helps to increase the average classification performance when applied to a nonlinear rubbing signal in comparison with a linear observation technique. Additionally, both the RS+EN+DT and RS+EN+SVM techniques, in which the residual signals provided by the DUDAE have been characterized by an energy feature parameter, provided relatively high results in terms of average FIA. Furthermore, these methods demonstrated small deviations of the FIA metric from its mean value during the 10 experiments, which also shows that even less powerful classifiers in comparison with the DNN are capable of yielding stable results under different training-testing data permutations when applied to the features extracted from the residual signals delivered by the proposed technique. The results of RS+EN+DT and RS+EN+SVM also speak for the advantage of the proposed technique and highlight the importance of the quality of the residual signals.
If the residuals are of high quality, various algorithms for classifying these residual signals can be used without significant performance degradation. Figure 9 presents the confusion matrices averaged over the 10 experiments to provide more details on the fault diagnosis performance. From this figure, it can be seen that the proposed technique demonstrated the lowest numbers of misclassifications in the conditions where the nonlinearity of the rotor system increases, in comparison with the referenced techniques, especially with RAW+DNN, where the DNN has been applied directly to the vibration signals. These conditions correspond to the appearance of coupling imbalance with slight blade rub-impact faults (classes #5-7) and imbalance with intensive blade rubbing faults (classes #8 and #9). From Figure 9c,d, it can be observed that despite the relatively good fault classification results obtained when different feature-based machine learning classifiers are applied to the energy feature parameters extracted from the residual signals provided by the proposed technique, those classification results can still be improved by using more powerful approaches for decision making that extract discriminative features autonomously or utilize the residual signal itself as a feature, such as the DNN. The main reason that the ARXLPIO+DNN method shows degraded performance in comparison with the proposed methodology lies in the nature of the ARXLPIO method. When a linear observation technique, such as ARXLPIO, is applied to approximating the nonlinear function of the system state when dealing with nonlinear and nonstationary signals, it inevitably leads to the appearance of estimation errors. This is because the uncertainty term of the nonlinear and nonstationary signal (i.e., the blade rub-impact fault signal) model cannot be estimated properly by the linear observation technique.
Despite the fact that the residual signals obtained after the ARXLPIO observation technique appear to be more discriminative as features in comparison with the original raw time-domain signals, the degradation of classification performance in comparison with nonlinear observation techniques from the field of control theory or artificial intelligence-based techniques is expected. The RAW+DNN method demonstrated the lowest FIA in comparison to the other techniques presented in Table 4. In this approach, the DNN utilized the raw resampled time-domain vibration signals as the inputs to perform the task of fault identification. It demonstrated lower accuracy in comparison with the proposed approach mainly due to the complexity of the blade rub-impact fault signal. Due to its non-stationarity, the statistical properties of time-domain samples may vary over time even when they belong to the same signal class, which means that the time-domain vibration signal patterns are not discriminative enough and the DNN may fail to adjust its weights during training to reach a good level of generalization. Overall, it can be concluded that the proposed data-driven framework consisting of the DUDAE-DNN model is suitable for diagnosing blade rub-impact faults of various intensity levels with high fault classification accuracy in comparison with the other referenced methods. From the experimental results, it can be seen that the application of the DUDAE for approximating the nonlinear function of the nonlinear rotor system state improves the fault diagnosis capabilities of the DNN in comparison with the state-of-the-art linear observation techniques frequently used in industry and with the situations when no signal observation is used. Another important advantage of the proposed methodology is that its structure is pipeline-shaped, which supports modifications of the current architecture as well as applicability to other systems, since the residual signals used as features in this study are generated based on the ideas of system identification. However, from the results, it can also be seen that the proposed methodology for diagnosing coupling blade rub-impact faults should still be improved to increase its classification performance when dealing with vibration signals corresponding to the increasing nonlinearity of the rotor system.
Furthermore, to accomplish a comprehensive investigation of robustness and reliability, it is important to test the proposed methodology on datasets with varying operating conditions, such as varying rotating speed and varying load.

Conclusions

In this paper, a novel method for diagnosing complex coupling faults consisting of shaft imbalance and blade rub-impact faults of different severity levels is introduced. In the proposed fault diagnosis technique, the input time-domain vibration signals are first resampled with respect to the fundamental frequency of the rotating machine. Then, the nonlinear function approximation of the system state under normal operating conditions is accomplished by training the deep undercomplete denoising autoencoder on the resampled signals corresponding to the state of the system when neither imbalance nor blade rub-impact faults were observed. Next, the residual signals are computed as the difference between the original resampled time-domain signals and their estimates by the autoencoder. Finally, these residual signals are used as the inputs to the deep neural network to perform decision making about the current state of the rotor system. The series of experiments showed that the proposed fault diagnosis model demonstrates stable convergence behavior under different training-testing data permutations and outperforms the other methods used for the comparison in terms of the micro-averaged performance metrics. In future work, we will focus on improving the robustness and reliability of the proposed methodology. Possible directions for improvement are the discovery and application of more complex autoencoder architectures to improve the quality of the nonlinear function approximation, and of deep neural network architectures to improve the decision-making procedure, which will lead to better classification of vibration signals with induced nonstationarity. Furthermore, the problem of varying operating conditions should be considered, and the proposed technique should be validated using datasets containing other mechanical faults under changing operating conditions.
Paroxysmal kinesigenic dyskinesia associated with a novel POLG variant

Abstract

Introduction: Paroxysmal kinesigenic dyskinesia (PKD) is a rare neurological disease characterized by recurrent dyskinesia or choreoathetosis triggered by sudden movements. Pathogenic variants in PRRT2 are the main cause of PKD. However, only about half of clinically diagnosed PKD patients have PRRT2 mutations, indicating that additional undiscovered causative genes could be implicated. PKD associated with a POLG variant has not been reported. Patient concerns: A 14-year-old boy presented with a 2-month history of involuntary dystonic movements triggered by sudden activities. He was conscious during the attacks. Neurological examination, laboratory tests, brain magnetic resonance imaging (MRI), and electroencephalogram (EEG) were all normal. Genetic analysis showed a novel variant of POLG (c.440G>T, p.Ser147Ile), which was considered to be a likely pathogenic variant in this case. Diagnoses: The patient was diagnosed with PKD. Interventions: Low-dose carbamazepine was used orally for treatment. Outcomes: The patient achieved complete resolution of symptoms, without any dyskinesia during the 6-month follow-up. Conclusion: Our study identified the novel POLG variant (c.440G>T, p.Ser147Ile) as a likely pathogenic variant in PKD.

Introduction

Paroxysmal kinesigenic dyskinesia (PKD, OMIM #128200) is a rare neurological disease characterized by recurrent attacks of transient involuntary dystonia or choreoathetosis movements triggered by sudden activities. [1,2] The prevalence of PKD is about 1 in 150,000, and the average time to obtain a correct diagnosis is almost 5 years due to lack of recognition. [2,3] The commonly used diagnostic criteria for PKD were proposed by Bruno et al [3] in 2004 and are based on history, clinical observation, imaging, and laboratory test results. Genetic advances have led to greater diagnostic certainty. In 2011, researchers identified the PRRT2 gene as a primary causative gene of PKD with an autosomal dominant inheritance pattern. [4] However, only about 50% of primary PKD patients have PRRT2 variants, [5,6] suggesting that some other genes may be responsible for PKD. Therefore, finding new pathogenic genes and variants may provide an unequivocal diagnosis of the disease. In the current study, we describe a case of a clinically diagnosed PKD patient with a novel heterozygous POLG variant, which may broaden the genetic spectrum of PKD.

Case report

A 14-year-old boy presented with a 2-month history of involuntary dystonic movements triggered by sudden activities after a period of physical rest. The involuntary movements lasted approximately 10 seconds. The attack frequency varied from about 20 times per day to once in a couple of weeks. Both sides of his body could be involved, accompanied by occasional spasmodic torticollis. Stress and anxiety increased the likelihood of episodes. He was unable to control the attacks, and his consciousness was unaffected during them. He was previously healthy, without known significant abnormalities during his birth and growth. His family history was not notable for involuntary movements, epilepsy, or other related diseases. On physical examination, the vital signs were unremarkable and the neurological examination was normal. Laboratory test results were within normal limits, including routine serum tests, a standard biochemistry profile, serum lactate concentration, ceruloplasmin, etc. Brain magnetic resonance imaging (MRI) was normal (Fig. 1).
Electroencephalogram (EEG) was normal. A diagnosis of PKD was made, and genetic analysis was carried out with written informed consent obtained from his parents. High-throughput sequencing and Sanger sequencing were performed. Heterozygous variants were found in POLG (NM_002693.2, Exon2, c.440G>T, p.Ser147Ile) and PLA2G6 (NM_003560.3, Exon7, c.991G>T, p.Asp331Tyr) (Fig. 2). Both mutant alleles were inherited from his asymptomatic mother. The POLG variant (c.440G>T, p.Ser147Ile) is located in the exonuclease domain, one of the important functional domains of the POLG1 protein, and may lead to infidelity of mitochondrial DNA (mtDNA) replication and proofreading errors. [7,8] This variant was novel and not registered in the following public genetic databases: Human Gene Mutation Database (HGMD), ESP6500, 1000 Genomes Project, ClinVar, and dbSNP. A deleterious effect was predicted by multiple in silico programs, including PolyPhen-2, SIFT, MutationTaster, MutationAssessor, FATHMM, GERP, PhyloP, and SiPhy. The POLG variant was thus assigned as likely pathogenic according to the guidelines of the American College of Medical Genetics and Genomics. [9] The missense PLA2G6 variant (c.991G>T, p.Asp331Tyr) has been reported in Parkinsonism patients with an autosomal recessive inheritance pattern. [10] Thus, the PLA2G6 variant was considered less likely to be causal in this case. Oral carbamazepine (CBZ) 200 mg/d was used for treatment, and the dosage was gradually reduced to 50 mg/d within 2 weeks. He achieved complete resolution of symptoms within 24 hours after he took the medicine and reported no involuntary movement attacks when the dosage decreased to 50 mg/d. During the 6-month follow-up, the PKD episodes vanished entirely, without a single attack since the start of medication. This study was approved by the Ethics Committee of the First Hospital of China Medical University and adhered to the tenets of the Declaration of Helsinki.

Discussion

The present study described a clinically diagnosed PKD patient harboring heterozygous variants in POLG and PLA2G6. No variant in PRRT2 was detected. The heterozygous POLG variant was novel and considered to be likely pathogenic in this case according to the guidelines of the American College of Medical Genetics and Genomics. [9] This patient meets the criteria for PKD proposed by Bruno et al, [3] including onset of symptoms at 14 years of age, an identified kinesigenic trigger for attacks, short duration of attacks (<1 minute), unaffected consciousness during attacks, and absence of other organic disease or abnormal neurological examination. Moreover, low-dose CBZ resolved the PKD episodes entirely, which is in accordance with previous studies. [11] PKD patients can manifest choreoathetosis movements, dystonic movements, or a mixed type, and most patients primarily present with dystonic movements. [3,12] Our case mainly presented with dystonic movements. The POLG gene, located on chromosome 15q25, encodes polymerase gamma 1 (POLG1), an enzyme responsible for the repair and replication of mtDNA. [16] The POLG1 enzyme comprises 3 main functional domains: the exonuclease domain (amino acid residues 26-417), the linker domain (amino acid residues 418-755), and the polymerase domain (amino acid residues 756-1239). [8,13-15] In our study, the variant (c.440G>T, p.Ser147Ile) is located in the exonuclease domain and may lead to infidelity of mtDNA replication and errors in proofreading. [7,8]
POLG-related disease has a broad spectrum and significant heterogeneity, including progressive external ophthalmoplegia (PEO), [16] autosomal recessive and dominant PEO, [16,17] sensory ataxic neuropathy with dysarthria and ophthalmoparesis, [18] autosomal recessive ataxia, [19,20] spinocerebellar ataxia with epilepsy, [21] etc. Individual presentations vary widely and are influenced by multiple factors, including POLG genotype, genetic background, epigenetic effects, environmental factors, and the age of onset. [8] Dystonia and movement disorders are common presentations in POLG-related diseases. [22,23] The pathophysiological mechanisms of the POLG variant in PKD are unclear. Although multiple studies have been carried out on PKD, knowledge about its pathogenic mechanisms is limited. [24] Although the channelopathy hypothesis is prevailing in PKD, it is insufficient to fully explain the pathophysiology. [25] A recent study reported a complicated case of PKD with a SACS mutation. [26] The SACS gene encodes the mitochondrial protein sacsin, and variants of SACS result in defects in mitochondrial dynamics. This study indicated that mitochondria might play a role in the pathophysiology of PKD. Our report could be the second study of PKD associated with genes that may affect mitochondrial function. The identification of additional PKD cases associated with POLG variants and further functional studies are warranted. The association between mitochondrial disorders and paroxysmal dyskinesias has also been reported in several other studies. Mutations in ECHS1 (enoyl CoA hydratase, short chain, 1, mitochondrial), encoding the short-chain enoyl-CoA hydratase protein (SCEH), have been reported as a novel cause of paroxysmal exercise-induced dyskinesia (PED). [27,28] Paroxysmal non-kinesigenic dyskinesia (PNKD) can also occur in patients carrying variants in the BCKD complex, which functions as mitochondrial branched-chain alpha-ketoacid dehydrogenase kinase. [25] Although most POLG-related disorders are autosomal dominant or autosomal recessive, variably penetrant and incompletely penetrant dominant variants have also been reported. [8,18,29-31] Burusnukul and de los Reyes [29] reported a case of 2 half-siblings with a heterozygous POLG variant (p.Gly517Val). The variant site is located in the linker region between the exonuclease and polymerase domains of POLG1. The patients showed multiple symptoms, including early-onset seizures, myoclonus, hypotonia, and developmental delay, but their carrier mother was unaffected, [29] much as was the case in our study. Unfortunately, since only limited family members of the patient were available for genetic analysis in our research, the exact inheritance pattern remains unclear at this stage. These limitations also existed in previous similar studies. [26-28] Further clinical and biological research is needed.

Conclusion

In conclusion, our study suggests a novel heterozygous POLG variant in a patient with PKD, which may expand the gene spectrum of PKD and help to establish a precise diagnosis in PKD patients without PRRT2 mutations. The mitochondrial pathway may be a possible pathophysiological mechanism of PKD, and further functional analyses are needed.