Black economic empowerment, development and seeking justice at South African municipalities: A closer look at two case studies
Although South African and international research has been enriched by a wide variety of empirical findings regarding supply chain management (SCM) corruption in South Africa, there is a significant gap in the literature, particularly in terms of the direct and indirect connections between black economic empowerment (BEE) entrepreneurs, local government and the processes of SCM at South African municipalities. This study is based on an inductive, qualitative and interpretative methodology aimed at analysing and dissecting relationships in the context of BEE entrepreneurs engaging in corruption. Within this realm of corruption, the article also looks at the role of supply chain and procurement at two South African municipalities. The municipalities selected were situated in both urban and rural areas in KwaZulu-Natal and the Eastern Cape. The eight interviewees represented the political, administrative and workers' sections of the municipalities. The findings pinpoint the realities of aspects of BEE associated with the nature of corruption in public procurement in the municipalities and the influence of BEE entrepreneurs in processes of corruption, particularly in SCM functions and processes.
Contribution: Corruption remains a key threat to South Africa’s young democracy. This is particularly true at the local level, the central pivot in our society building exercise. A multidisciplinary journal of this nature will benefit from the focus of this article, particularly in light of the fact that it also hopes to ignite thought around the moral ramifications of rampant corruption in South Africa.
Introduction and context
Supply chain management (SCM) and procurement in the South African public sector is an institutional system based on clear functional and operational imperatives that are described in detail in the country's legal system. It comprises a plethora of rules and regulations emanating from the country's National Treasury (Republic of South Africa [RSA] 1999, 2003a). As a system, SCM depends on the interconnection and relationships between its structures and functions, in line with municipal value and financial performance. This means that supply chain strategies and tactics can only succeed when they cohere with supply chain partners, financial leadership and organisational personnel. Academic discourse and contributions on the subject remain vague and fragmented but cannot be relegated or underestimated (Vousinas 2019; Wieland, Handfield & Durach 2016:206).
For supply chain management to be successful, comprehensive planning, design and implementation of the system's functions and processes are needed. Furthermore, effectiveness and efficiency can only become a reality when the fundamentals of risk, supply and demand, logistics, sourcing and acquisitions are in place. The tactical and strategic aims and objectives of a public service institution, particularly a municipality, can only be achieved when its leadership and management guarantee effective, functional and solid SCM operationalisation founded on well-planned external and internal controls. These controls need to be rooted in honesty, accountability, efficiency and transparency (RSA 2000b).
Public institutions need to be centrally monitored through the utilisation of technological systems that are regularly updated. The existence and maintenance of such processes and functions are central in the efforts of leaders and managers to nullify existing human resource weaknesses evident in the system and, above all, to fight against fraud, theft and corruption. For such a structure and process to succeed, a digital system that is technologically advanced at all operational levels is indispensable, because the provision of direct access to a wide variety of financial reports across the sections and/or departments of public institutions is the foundation of future success.
Such processes can enable the consistent updating of departmental reports and further to this, the relevant state authorities can perform their evaluation and monitoring responsibilities in their effort to assess the progress of designated programmes.
These SCM and procurement laws and the regular treasury reports and regulations are considered the foundations of the ethical behaviour of state institutions. Included here, one needs to consider the behaviour of potential tender clients and the beneficiaries of black economic empowerment (BEE) dynamics.
This article falls in line with the new production of knowledge regarding corruption in procurement and SCM, particularly in terms of the direct and indirect connections of BEE entrepreneurs, local government officials and the processes of SCM at South African municipalities.
The municipal supply chain legislation in South Africa is the first dynamic to be considered in this regard.
Municipal legislation in South Africa
According to Section 155 in Chapter 7 of the Constitution of the Republic of South Africa, 1996, local government in South Africa consists of metropolitan, district and local municipalities. The foundations upon which municipalities must base their functioning and governing principles, as included in Section 195 of Chapter 10 of the Constitution, are: a development-based orientation; professional ethics of high standards; efficient, effective and economic utilisation of resources; equitable, impartial, fair and unbiased services; citizens' public participation in policy-making; responding to people's needs; and providing the public with timely, accurate and accessible information with transparency and accountability (RSA 1996, s. 195, ch. 10).
South Africa's Constitution sets out the country's municipal framework in Section 155 of Chapter 7. The three main categories are the metropolitan municipalities (also known as metros), the district municipalities and the local municipalities, which fall under the control of the district municipalities.
The key principles underpinning the fundamental priorities of municipalities include professional adherence to the honest and efficient utilisation of municipal resources, accountability and equity. They also include strong and active community participation in policymaking, a well-planned and implemented response to the needs of communities and staff accountability in public administration. In addition, municipalities are to provide accurate, important and timely information to communities and should foster transparency, as spelt out in Section 195 of Chapter 10 of the Constitution (RSA 1996).
The authority of the Municipal Council lies primarily in its function to plan and implement debt collection and credit control initiatives in respect of municipal rates and charges for services rendered. These crucial initiatives are detailed in Sections 95 and 96 of the Municipal Systems Act, 32 of 2000 (RSA 2000a). The act identifies important steps directly related to debt collection and credit control policies that need to be adopted as an integral part of municipal by-laws. Such initiatives are considered critically important (RSA 2000a).
The act points to the multi-dimensional relationships between municipal financial realities and SCM and procurement functions and responsibilities. The relationship between financial realities and actions on the one hand, and SCM and procurement on the other, is rooted in connected and collaborative planning, operational readiness and implementation leading to growth.
Inevitably, all these responsibilities are in the hands of the municipalities' political and administrative leadership and staff, the main institutional stakeholders, who together with their communities should follow what has been described as the foundation of achieving a transparent and efficient municipality: the Municipal Finance Management Act, 56 of 2003 (RSA 2003a) and other related legislation and its application (Kanyane 2011:935).
The act (RSA 2003a) was established: '[T]o secure sound and sustainable management of the financial affairs of municipalities and other institutions in the local sphere of government; to establish treasury norms and standards for the local sphere of government; and provide for matters connected herewith.' (p. 23) The budget process of any municipality must be transparent and requires the involvement of the community in terms of Section 21A of the Municipal Systems Act, 32 of 2000.
All information and decision-making pertaining to the budget should be undertaken in a transparent and consultative manner. The adopted budget should then be made available as a public document (RSA 2000a).
Furthermore, it is the responsibility of each municipality to employ well-planned financial measures to ensure that the municipal council collectively approves sustainable strategies and steps leading to the implementation of credible budgets. A sustainable budget is defined (Sheehan 2005) as: '[A] fiscal strategy which can continue to exist for the foreseeable future without any substantial change, and in particular without any sharp changes in tax rates or spending to prevent a substantial deterioration in fiscal position.' (p. 65) The additional legislative framework governing SCM, including the Municipal Finance Management Act, 56 of 2003 (RSA 2003a), can be classified as 'reform procurement legislation of the public sector', as it was based on the principle of economic and social transformation emphasising the crucial importance of BEE. Such a brave initiative was rooted in the policy directions of the new democratic government, which expected the development of a 'mixed economy' founded on the Reconstruction and Development Programme (RDP) (RSA 1994).
The RDP lasted no more than one and a half years and was followed by the Growth, Employment and Redistribution Strategy (GEAR) (RSA National Treasury 1995).
These initiatives were accompanied by a continuous effort on the part of government to elevate the country's skills development pool. The initiative was supplemented by a nation-wide public campaign based on the perpetual promotion of, and adherence to, the highest levels of good and honest governance, the achievement of the highest international standards, transparency, accountability, economic and social development and active public participation (RSA 1994, 1996).
Within a carefully planned SCM based on the above laws, rules and regulations, procurement was to be the centre and foundation of organisational excellence and good governance. This, together with a joint effort of all state organisations and private sector entrepreneurs, would connect supply chain and procurement systems rooted in excellence in a highly competitive environment.
An ethical, well-planned and implemented SCM process would be at the forefront of transparent, effective and efficient governance and the foundation of social and economic development, transformation and empowerment. The transformation effort of the new democratic government began with the Preferential Procurement Policy Framework Act (PPPFA), 5 of 2000 (RSA 2000b), which is based on the determination of the democratic government to promote BEE and economic and social development. The act is rooted in the planning and implementation of a preferential procurement system and was founded on a well-planned elaboration and expansion of Sections 217(2) and 217(3) of the country's Constitution. These sections indicate that when an organ of state contracts for services or goods, it must act in an equitable and transparent way.
It was founded on a number of goals that were to benefit groups and individuals who were historically disadvantaged during the period of the apartheid government. The major groups destined to benefit from this pioneering new policy were black people, women and the 'differently abled'. According to the law, it was the responsibility of a state organ to determine and decide on the particularities of its own procurement policy planning and subsequent implementation according to the dictates of Section 2(1) of the PPPFA. The Broad-Based Black Economic Empowerment Act (BBBEEA), 53 of 2003 (RSA 2003b), is principally aimed at strengthening the application and particularities of the codes of good practice for BEE. The act was promulgated to establish a framework for the promotion of BEE, empowering the Minister to issue codes of good practice and publish transformation charters, and establishing the BEE Advisory Council.
The legislation was promulgated with a view to expanding opportunities for the black population through the introduction of several new qualification criteria directly related to assets that belong to the state, the provision of new licences and the expansion of equity and transformation criteria in terms of private-public partnerships. Its main aim was founded on the belief that the success of such initiatives would be instrumental in shaping a new progressive reality, cementing the key objectives, dynamics, functions and processes associated with the key aims of the BBBEEA at all societal and economic levels (Van der Waldt et al. 2002:38).
The new law used a 'balanced scorecard' to measure and set the parameters that point to the failure or success of the guidelines, rules and regulations directly associated with it. According to the strategy for BEE, the state's strategic goals set in terms of the BBBEEA are visionary because of the strategic utilisation of the new policy.
The aspiration of the African National Congress (ANC) government leadership of the time and its Department of Labour, associated with the planning and implementation of the new legislation, was to produce, develop and cement the BBBEE targets. It was hoped that proper implementation of the laws and regulations would build a new future in which the economic and social inequalities of the apartheid regime would disappear because of a developmental transformation agenda. Sharma, Sengupta and Panja (2019:950) have shown that internationally SCM is a process encompassing a wide variety of functions such as buying, renting, contracting, purchasing, leasing and acquisition. Its emphasis rests on a number of principles including integrity, equity, efficiency and economy. These are accompanied by the equally important values of honesty, fairness, cost-effectiveness, competitiveness and transparency (Simangunsong, Hendry & Stevenson 2016). The process's fundamental principles are ethics, accountability, open competition, value for money, fair dealing and equity (Enderle 2015:60).
Literature review
The SCM model used in South Africa, according to the existing laws such as the PPPFA and BBBEEA identified earlier, provides for a number of elements including 'demand management', followed by 'acquisition management'. It also includes 'performance management', as outlined in detail by Manzini et al. (2019:119-120). It utilises three bid committees responsible for the processes and functions of specification, evaluation and adjudication for services and goods above the R200 000 threshold.
The municipal SCM systems and operations, as found in the Municipal Finance Management Act (MFMA), were structured as the foundations of a new path to development and transformation, meaning that the SCM systems and processes were expected to ensure and promote effectiveness and accountability at all levels (Van der Waldt 2016:301-302). Within these processes, internal controls guarantee the efficiency of the organisation's financial reporting and compliance with SCM laws, rules and regulations. The new initiatives were described as a step towards a finance-based reform strategy for municipalities in the effort to achieve transparent, quality service delivery (Van der Waldt 2016:299).
The existing hopes associated with a rigorous implementation of the laws and regulations by municipalities did not last long. In an important empirical study, Mhelembe and Mafini (2019) showed that the South African public sector, including the municipalities, has over the years faced a large number of external and internal risks that were crucial in limiting the performance of the existing supply chain. The objective of their study was to empirically test the relationship amongst flexibility, supply chain risks and performance in the South African public sector. The researchers administered a survey questionnaire to 307 supply chain practitioners based in the Gauteng public sector, and a structural equation modelling procedure was used to test the proposed relationships. The findings indicated that six supply chain risk factors, including government laws, rules, regulations and policies; supply-based complexity; security information; continuous performance monitoring of suppliers; and efficiency of processes, have significant influences on supply chain flexibility (Mhelembe & Mafini 2019).
The South African Auditor-General's (AG) 2021 report described the situation in the municipalities as extremely negative, and both political and administrative leaders were called on to commit themselves with strong determination to tackling the high levels of corruption and irregular expenditure in all municipalities (Auditor General South Africa [AGSA] 2021:11). The root of the unacceptable results was that most municipalities did not follow, plan and implement the guidance and recommendations of the Auditor General's office. Furthermore, municipalities failed to master the foundations of financial reporting, a fact that led to only 28% of municipalities submitting quality financial statements for auditing. The report indicated that inadequate financial reporting has cost municipalities over R5 billion. Of this cost, 18% was based on the continuous employment of financial reporting consultants, even though only 2% of municipalities needed consultants, who have been used to 'bridge the vacancy gap'.
Despite this, a number of municipalities employed and paid consultants although they had capable finance units. This once again highlighted the realities associated with the appointment of consultants as it was noted that 64% of municipalities failed to provide detailed records on consultants, many of whom were appointed very late, or did not manage the consultants' performance rigorously and effectively.
Another significant problem as highlighted by the Auditor General, was the inability to audit contracts because municipalities failed to produce and present evidence and documents to support supply chain and procurement processes. The lack of proper and complete records led to unreliable financial reporting during the year. This resulted in serious harm to municipalities and compromised their ability to deliver an effective, honest and efficient service mandate to the people.
Continuous poor supply chain, procurement and budgeting practices led to ineffective financial management. Nonetheless, several municipalities received conditional grants, as the municipal leadership and the AGSA were of the view that such an initiative would ensure better operations in their efforts to improve service delivery for their communities. During the operations of the new effort and processes, the Auditor General's Office was unable to find substantial proof of how the money was utilised, resulting in five municipalities being kept under administration for another two years because of their poor performance. Although measures were planned and implemented, these municipalities continued to perform poorly. According to AGSA, these municipalities lacked effective and efficient administration processes. Because of these realities, AGSA advised the municipal leaderships to avoid implementing short-term solutions that lead to the draining of money. On the other hand, the expected monitoring and evaluation of municipalities under administration, as well as the support and knowledge of consultants, proved to be unworkable strategies at various operational levels. This meant that the foundation of future success was in the hands of municipal political and administrative leadership to ensure a culture of accountability and good governance. Municipal transformation and success would be the only solution in the process of improving performance and financial management through a well-planned implementation of new strategies and tactics in the fight against accountability failures.
Furthermore, the Auditor General's Office indicated that the financial circumstances facing just over a quarter of municipalities suggest that they would be unable to meet their obligations. In addition, half of the municipalities have shown indications of serious financial strain, including operational deficits, low debt recovery and an inability to pay creditors. These issues were confirmed by the fact that local government loses billions of rands every year through penalties and interest, which form a significant portion of the R3.47 billion of wasteful expenditure reported during the financial year. This demonstrates that municipalities fail because they do not pay attention to consequence management, a problem that can only be solved when municipal managers act swiftly to prevent further deterioration of their municipalities and enforce the relevant consequences for those responsible. If this is not achieved, the Auditor General's Office is obligated to implement further remedial action (Auditor General South Africa 2021).
Research methods
This study utilised an inductive, qualitative and interpretative paradigm aimed at researching the relationship between BEE entrepreneurs, the role of supply chain and procurement realities, and corruption at two South African municipalities. The selected municipalities were situated in KwaZulu-Natal and the Eastern Cape. The interviewees were municipal councillors and administrators with direct knowledge and understanding of the financial and political realities of their organisations. The group of eight interviewees comprised politicians and administrators.
They all have experience of the existing relationships amongst key players in the municipalities and of the functions and processes of SCM and procurement in their institutions, including the gaps, strengths and weaknesses. Two trade unionists were amongst the interviewees. The interviews took place electronically because of the prevailing COVID-19 circumstances.
The selection of the interviewees was based on the knowledge of the researcher pertaining to previous dynamics around data gathering in this area of intellectual enquiry. In addition, it was based on the wide-ranging knowledge of the participants and their understanding and first-hand experience of the realities within their organisations. The number of officials interviewed accords with the widely acknowledged positions of Marshall et al. (2013) and Dworkin (2012), who have argued that a smaller number of interviewees can be sufficient in qualitative research. The group included six male and two female interviewees.
Semi-structured interviews were conducted to elicit information from the relevant interviewees (Gill & Baillie 2018) while the open-ended interviews were more flexible and were based on an in-depth interviewing approach (Tod 2006).
An audio recording was made during the interviews and the data were transcribed by the researchers. The following questions were posed to the interviewees:
• What is the nature of corruption in public procurement in your municipality and the influence of BEE businesspeople?
• In which sections of the SCM is corruption taking place in most cases?
• Who are the key allies of the BEE businesspeople in the corruption process?
• What is the relationship of councillors and administrators with the BEE businesspeople in SCM functions and processes?
A flexible thematic analysis was applied in the study. According to Clarke and Braun (2013), thematic analysis is a widely used form of analysis within qualitative research and emphasises interpreting patterns of meaning (or themes) within qualitative data.
The analysis was performed by the researcher, who possesses the required disciplinary knowledge and the necessary research skills and experience. The study's validity was tested in two ways: through the use of an external expert in this field of study and through a peer review process. The external expert assessed the quality of the collected data, and the entire study process was reviewed by another researcher who was not directly involved in the process.
Analysis of data
Even though there have been several empirical studies and well-researched academic articles on municipal corruption and the importance of supply chain and procurement, there is a significant gap in the literature on the role of 'representatives of the BEE group/s' (including black African, Indian and 'coloured' entrepreneurs and companies) associated with corrupt acts. It is believed that this article will bring a new understanding of these important relationships.
The corruption process in public procurement in a municipality and the influence of black economic empowerment businesspeople
There was a common belief amongst all interviewees that most entrepreneurs who 'scored' in the tenders were members of a 'BEE group'. This reality included sophisticated and expensive projects, mainly in the infrastructure sector. On a number of occasions, the Indian BEE group were beneficiaries, especially in the KwaZulu-Natal municipalities, as it is the province with the largest Indian population in the country (Interviewees 1, 2, 3, 4, 5, 6, 7 and 8). It was believed by most respondents that after the advertisement of a tender, the interested companies came into direct and immediate contact with the senior or middle management staff with whom they operated. In this process, detailed inside information was crucial for the prospective tenderers. Such information from knowledgeable employees, particularly senior or SCM employees, was considered 'costly' or 'very costly' depending on the monetary value of the tender (Interviewees 3, 5, 7 and 8). Given the expected competition in each tender, other 'costly' information covered the possible 'competitors' in the process, the reserve price, the final value of the contract and anything else which the tenderers thought could help them win. In most cases, the competition is so strong and diversified that when the information reaches the entrepreneur, the immediate step forward is to contract an auditor, accountant, or technical or infrastructure-based professionals to structure the proposal in preparation for the final step (Interviewees 1, 4, 6 and 8).
It was strongly felt that such processes ultimately led to the manipulation of the contracting processes in such a way as to ensure that public contracts go to specific companies. In the vast majority of cases, the successful tenderers had close relations with politicians of the ruling parties dominating the municipality, or were businesspeople who had provided financial support to politicians, administrators and the political parties dominating the municipality. Those who win tenders are basically financiers of the rich lifestyles and personal and professional careers of politicians and administrators. These are corrupt relationships of a reciprocal nature, as the successful tenderers are themselves corrupt and exponents of clientelism and nepotism. These relationships have been considered key reasons for the erosion of the quality of service to the communities they ought to serve (Interviewees 1, 3, 6, 7 and 8).
The sections of the supply chain management where corruption takes place in most cases
The majority of respondents indicated that, besides the issues outlined above, the key sections where corruption is rooted were the internal control and audit sections, which ought to review all checks-and-balances methods and procedures to ensure that processes take place in an efficient and orderly manner. It was said that these were the roots of safeguarding assets and resources: deterring and detecting errors, fraud and theft; ensuring the accuracy and completeness of accounting data; producing reliable and timely financial and management information; and ensuring adherence to policies and plans. It was believed that such controls and audits were weak either because of a lack of knowledge on the part of the leadership, management and employees or because of the existence of corrupt relations with outsiders during the process (Interviewees 2, 3, 5, 6, 7 and 8).
The lack of monitoring of data against indicators of fraud and corruption was also mentioned as a key weakness in the system. This process covers the collection of data from a wide variety of sources and its input into appropriate data management systems for interrogation, enabling the identification of indicators of fraud and corruption (Interviewees 3, 4 and 7). It was stated that expected outcomes were often not achieved in the relations that were analysed, for several reasons. The interrogation of questionable data was often not completed, meaning that it did not reach the responsible management personnel of the entity. Many of the reasons cited were associated with personal, professional and political relations existing within the municipality and the relevant sector. Such realities led to possible fraud and corruption situations going unidentified, and to a lack of internal consistency and compliance with the existing structures, authorities and rules (Interviewees 1, 3, 6, 7 and 8).
Another key issue mentioned was employees' lack of knowledge, or outright weakness, in the technological and digital aspects of the job. Such weakness allows the paid consultants of tenderers or potential tenderers, such as technology experts, accountants and auditors, to exploit the process: these groups use the employees' lack of knowledge to their advantage, as such employees lack the technical knowledge and capacity to investigate and detect corruption. What makes the situation more difficult is that many municipalities cannot upgrade their technological systems because of a lack of funds from the provincial and local governments. This restricts the scope for serious investigations and the prosecution of such high-level corruption (Interviewees 1, 3, 5, 7 and 8).
Key allies of the black economic empowerment businesspeople in the corruption process and results
This question opened several different dimensions in the empirical terrain, because a wide social group of people emerged as the category of key allies. The connections mentioned were family ties, group connections, political alliances, administrative friendships, and collegiality rooted in high school and university networks. Political, administrative and family connections involving key municipal management and employees were the most common. In KwaZulu-Natal, the close relations of a small number of families with both political and administrative connections were described as 'very well known' throughout the province, the ruling party and beyond. Furthermore, it was stated that these key allies had historical and political roots and that their expansion was known nationally. Their supply chain 'achievements' were known at local universities and in provincial government (Interviewees 2, 4, 6, 7 and 8).
It was believed that the allies of BEE businesspersons controlled the systems, processes and dynamics of supply chain and procurement because of their friendships and alliances within and outside the municipality, directing key people and pinpointing their 'next step' forward. Such a reality highlights the control, manipulation and weaknesses of the system (Interviewees 1, 3, 5, 7 and 8).
It must be noted, however, that there have been cases of major BEE tender winners who were described as key manipulators of procurement. Their efforts at perpetuating corruption would not have succeeded had SCM and procurement processes not been weak. In other cases, however, BEE tenderers had shown exceptional knowledge of, and strategies for exploiting, the functions and processes of the SCM and procurement systems. Some tenderers manipulated the system by presenting bogus securities, winning tenders even though their firms had no capacity to deliver the required infrastructure. A number of tender manipulations through the presentation of fake documentation have been recorded (Interviewees 1, 3, 5 and 7).
The relationship of councillors and administrators with the black economic empowerment businesspeople in supply chain management functions and processes
The question attracted a wide variety of responses using different words, phrases and sentences, but most shared a common denominator. An interviewee summarised the common beliefs as follows: 'Municipal tenders throughout the country have become a machine that leads to the road of wealth firstly for politicians and those who support them. It is a system of politics and money, and politicians need money, not only to build big house and farms, but also to support their next national and municipal elections. In these situations, they know they can do very little about the tender's situation because they know very little about what is really happening. This means that they need to be in top terms with the municipal managers, the audit and SCM committees and administrators. Politicians mostly lead a good and rich life, but they have been used to luxury lives, they need the employees and their leadership to recover their expenses because they live a rich life, they get loans and must repay them. However, they can get money from companies they help to win big or medium tenders, the politicians and administrators can recoup the expenses, while the communities live in the outskirts of poverty and homelessness.' (Interviewee 7, trade unionist, Eastern Cape municipality)

Such a statement was enriched by several additional comments on the relationships between politicians and administrators. There was general agreement that many municipal councillors, as well as SCM management and staff, have insufficient knowledge and understanding of the country's procurement laws, regulations and systems. In such cases, many administration officials follow politicians' instructions and employ accounting or auditing firms, as well as legal practitioners, who operate as mediators.
However, the reality outlined by the interviewees was that, in most cases, the limited knowledge of such firms or individuals, themselves ignorant of SCM systems, resulted in improper procurement to the benefit of politicians and/or administrators. The communities they represented paid the price: lacking knowledge of even simple SCM and procurement opportunities, community members have been unable to win tenders for many years (Interviewees 2, 5, 7 and 8).
Politicians, administrative professionals and trade union leaders and activists opened a debate based on everyday and long-term knowledge acquired in specific municipal terrains. Many corrupt administrators and politicians escape punishment because of corrupt, pre-agreed and perpetual alliances with police investigators and leaders who either ignore their duties, belong to politically based factions, or manipulate and/or never undertake the expected investigations. In such situations, documents and other evidence disappear or are forged and manipulated. With the evidence gone, no chance of prosecution exists, and the courts drop the cases (Interviewees 3, 5, 6, 7 and 8).
Conclusions and recommendations
The combination of weak SCM and procurement audit and internal control environments and the corrupt alliances between BEE forces and groups, municipal politicians and administrators has led the vast majority of South African municipalities down a road of no return. The perpetual neglect of key functions such as monitoring and evaluation leads to deviations from SCM rules, regulations and policy imperatives, a reality perpetuated by the lack of annual reviews and of a regulatory framework. The lack of enforcement of the code of ethical standards leads directly and indirectly to irregular practices, as the SCM system is manipulated chiefly through the 'loss' of specific bidder documents and mysterious disqualifications, with continuous negative impacts on service delivery in the municipality.
Black economic empowerment groups, politicians and administrators communicate and join forces in planning and implementing corrupt practices, leading to corrupt deliveries assisted by incompetent administrators and poor contract management. Conversely, capacitation, empowerment and the remedying of weak internal and audit controls can defeat the nepotism and favouritism through which corrupt contracts, manipulated bids and tenders find their way to the families and/or business associates of municipal councillors and administrators.
Training and the correct equipment for all municipal managers and accounting officers at all levels are a 'must', building anti-corruption financial management skills, and senior politicians and public administrators need to commit themselves to accountability and transparency at operational and institutional levels. This will lead to the immediate appointment of oversight committees of the municipal council, which will be instrumental in advancing SCM accountability and dealing directly and indirectly with corrupt BEE groups and individuals. Municipal leadership is politically, ethically and financially responsible for the monthly monitoring of contractors' performance. By implementing such an initiative, it follows the country's legislation and responds decisively to delays in contract work through penalties and other forms of consequence management.

The article can be considered an empirical contribution to public service administration and management, especially in light of the dynamics around the realities of, and relationship between, BEE and SCM. The self-reflection here revolves around the need to consolidate the black middle class in a post-apartheid order in a way that does not pander to the toxicity of corruption and the manipulation of state resources.
The Role of Strategic Emotional Intelligence in Predicting Adolescents' Academic Achievement: Possible Interplays with Verbal Intelligence and Personality
As recent meta-analyses confirmed that emotional intelligence (EI), particularly strategic EI, adjoins intelligence and personality in predicting academic achievement, we explored possible arrangements in which these predictors affect the given outcome in adolescents. Three models, with versions including either overall strategic EI or its branches, were considered: (a) a mediation model, whereby strategic EI partially mediates the effects of verbal intelligence (VI) and personality on achievement; the branch-level version assumed that emotion understanding affects achievement in a cascade via emotion management; (b) a direct effects model, with strategic EI/branches placed alongside VI and personality as another independent predictor of achievement; and (c) a moderation model, whereby personality moderates the effects of VI and strategic EI/branches on achievement. We tested these models in a sample of 227 students (M = 16.50 years) and found that both the mediation and the direct effects model with overall strategic EI fit the data; there was no support for a cascade within strategic EI, nor for the assumption that personality merely moderates the effects of abilities on achievement. Principally, strategic EI both mediated the effects of VI and openness, and independently predicted academic achievement, and it did so through emotion understanding directly, “skipping” emotion management.
Predicting Academic Achievement from Individual Dispositions
Academic achievement typically refers to performance outcomes in intellectual domains covered within instructional environments at different academic levels [1,2]. From a psychological perspective, it is essential not only because it determines prospective educational and vocational opportunities (e.g., university enrolment) [1,2] but also because it is a central indicator of positive psychological functioning in children and adolescents [3]. Within this group, higher academic achievement is positively related to subjective wellbeing [4,5], life satisfaction [6,7], and happiness [8]. In addition, academic success can profoundly affect one's self-perception, as it contributes to higher self-efficacy [9] and the formation of a positive academic self-concept [10]. Considering the role of academic achievement in children's and adolescents' optimal development and functioning, understanding its correlates and predictors becomes "important both theoretically and practically, warranting continued scholarly pursuit" [11] (p. 33).
Academic achievement is a complex outcome variable, known to depend on various factors. While situational variables certainly play a role, they leave much of the variance in this outcome unexplained, implying that students' individual dispositions may greatly decide how well they fare at school [1,12]. Traditionally, intelligence and personality have been considered as the most influential dispositional predictors of academic achievement [1]. More recently, researchers have realized that students' emotions at school also strongly affect their motivation, learning strategies, and ultimately their performance [13]; thus, emotional intelligence (EI), i.e., the ability to cognitively process and deal with these emotions, has come into the foreground as another individual disposition implicated in academic success [14].
Intelligence
That intelligence should play an essential role in school attainments has been a general premise since the beginning of intelligence testing; in fact, these tests were initially designed to assess children's capacity to master the curriculum. To this day, intelligence measures build their validity on the successful prediction of academic achievement [15]. There is a common understanding among intelligence researchers that this individual trait captures one's "ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought" [16] (p.77). Thus, intelligence is bound to predict achievements in various fields, but particularly those in the academic realm, which draw heavily on abstract reasoning and learning.
Indeed, early narrative reviews of the association between scholastic achievement and intelligence found them to be substantially related, with a mean correlation of 0.50 [16][17][18]. The hitherto most comprehensive meta-analytic study [19], which included 240 independent samples of students from elementary to high school, reported a corrected correlation of ρ = 0.54 between g and school grades, thus corroborating earlier findings. In addition, the respective meta-analysis also yielded important insights into the moderators of this relationship. First, out of the three types of intelligence tests considered in moderator analyses, a higher population correlation was observed for verbal and mixed measures (ρ = 0.53 and 0.60, respectively) than for nonverbal ones (ρ = 0.44). This finding was interpreted with regard to the importance of verbal skills for successful classroom interaction, as well as for taking oral and written exams. Second, moderator analyses confirmed that the population correlation increased from elementary (ρ = 0.45) to middle (ρ = 0.54) to high school (ρ = 0.58), suggesting that lack of ability could more easily be compensated for by hard work in lower grades, but that the importance of intelligence increases as students face more challenging study contents [19]. From this point, it becomes crucial to understand how different dispositional traits combine and interact to produce positive academic outcomes.
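Corrected (population) correlations such as the ρ values above are obtained by adjusting observed correlations for measurement artifacts. As a minimal sketch of the simplest such adjustment, Spearman's correction for attenuation divides the observed correlation by the square root of the product of the two measures' reliabilities (the observed r and reliability values below are invented for illustration; the meta-analysis's actual artifact corrections may differ in detail):

```python
def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: the correlation two measures
    would show if both were measured without error."""
    return r_obs / (rel_x * rel_y) ** 0.5

# Illustrative numbers only: an observed r of .45 with test reliabilities
# of .85 and .80 is corrected upward to roughly .55, i.e., into the range
# of the meta-analytic estimate quoted above.
rho = disattenuate(0.45, 0.85, 0.80)
```

Note that the correction can only raise the estimate, which is why corrected ρ values in meta-analyses typically exceed the raw correlations reported in primary studies.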
Personality
Interestingly, it was the very leaders of the intelligence-testing movement who were among the first to acknowledge that performance depends on other personal factors besides intelligence [20,21]. That is why much of the research on academic achievement has also been devoted to establishing how students' characteristic patterns of experiencing and acting in certain situations-in short, their personality traits-relate to performance in school. Nevertheless, the findings of these studies remained quite scattered and inconsistent until the appearance of broad factorial models of personality that provided a common framework for such studies [22]. The widely accepted Five-Factor Model of personality [23] was particularly potent in this regard. Within this model, personality is described in terms of five broad dimensions, commonly referred to as the Big Five: neuroticism (N), extraversion (E), openness (O), agreeableness (A), and conscientiousness (C).
To explore the relationship between the Big Five and academic performance, Poropat [22] performed a large meta-analysis on 47 to 138 samples (depending on the trait) of students from elementary school to university. To assess the meaningfulness of the obtained correlations, which in such large samples (N = 58,522-70,926) were statistically significant even when smaller than 0.10, Cohen's d was calculated, thus revealing a medium-sized effect of C (d = 0.46) and small effects of O (d = 0.24) and A (d = 0.14) on academic achievement; the effects of N and E were minor. Additional analyses corroborated the conclusion that, among the Big Five, C is the most stable predictor of academic performance. Its correlation with the criterion increased after controlling for intelligence, and remained unaffected by education level and student age, leading to the conclusion that "future considerations of individual differences with respect to academic performance will need to consider not only the g factor of intelligence, but also the w [willingness] factor of Conscientiousness" [22] (p. 334).
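The d values above are conversions from correlations. One standard formula for two equal-sized groups is d = 2r/√(1 − r²), with inverse r = d/√(d² + 4); the sketch below assumes this conversion (Poropat's exact procedure may differ) and recovers the correlations implied by the quoted effect sizes:

```python
import math

def r_to_d(r):
    # Cohen's d implied by a correlation (equal-sized groups assumed)
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    # inverse conversion: correlation implied by a Cohen's d
    return d / math.sqrt(d ** 2 + 4)

# Correlations implied by the effect sizes quoted above:
# d = 0.46 (C), 0.24 (O), 0.14 (A)
implied_r = {d: round(d_to_r(d), 3) for d in (0.46, 0.24, 0.14)}
```

Under this conversion, d = 0.46 corresponds to r ≈ 0.22, illustrating why effects that look small as correlations can still be practically meaningful.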
Emotional Intelligence
The concept of EI was introduced by Salovey and Mayer [24] to acknowledge individual differences in the ability to reason about emotions and to use them to enhance thought. More precisely, EI is the "ability to perceive emotions, to access and generate emotions so as to assist thought, to understand emotions and emotional knowledge, and to reflectively regulate emotions so as to promote emotional and intellectual growth" [25] (p. 5). According to this definition, EI comprises four "branches," which are thought to be hierarchically ordered, ranging from emotion perception (EP), as the most basic branch, through using emotions, to the more complex branches of emotion understanding (EU) and management (EM) [25,26]. The two lower branches constitute the experiential area, while the two higher ones form the strategic area of EI.
Soon after EI was proposed and established as a meaningful construct (see [26] for an overview of corroborating findings), research attention also turned to its role in promoting achievement at school. While it is acknowledged that EI's primary domain of relevance is in predicting (inter)personal rather than performance outcomes, it was also convincingly argued that EI abilities might be implicated in succeeding at school. For example, Ivcevic and Brackett [11] hypothesized that school outcomes are influenced by two distinct types of self-regulation dispositions: typical performance traits, such as C, and maximum performance attributes, such as the ability to understand and manage one's emotions (as defined within the EI construct). Both were presumed to aid achievement-related behaviors and to do so independently of each other. For example, a tendency to work hard should promote the completion of challenging and long-term tasks, but so should the ability to modulate unpleasant emotions that often accompany such tasks (e.g., frustration or anxiety) and bring about maladaptive reactions (e.g., procrastination or rumination). Similarly, Lopes et al. argued that the ability to regulate emotions contributes to academic achievement by "sustaining the motivation to pursue learning or mastery goals in the face of frustration or self-doubt" [27] (p. 716). Another possible mechanism through which EI could aid school performance is by facilitating positive peer and teacher interactions, and the expression of school-appropriate behaviors, hence providing the necessary social conditions for successful learning [27,28]. Finally, as a third possibility, it was also suggested that EI, particularly the EU branch, might be directly involved in mastering academic content in the language arts and humanities, i.e., in subjects that require an understanding of people and their emotions [14].
In line with these theoretical proposals, numerous studies have indeed found EI to be positively associated with school performance. The results of these studies were recently meta-analyzed by two independent research groups, leading to the common conclusion that EI significantly correlates with academic achievement, with estimates of the population correlation being ρ = 0.24 [14] and Z = 0.31 [29]. MacCann et al.'s [14] study further found strategic EI-i.e., emotion understanding and management-to have a significantly larger effect on academic achievement than the two lower EI branches. Moreover, EI and its strategic branches incrementally predicted academic performance over intelligence and personality, and a relative weights analysis showed EU and EM to be the second-best predictors of academic achievement, coming after intelligence but before C.
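The relative weights analysis mentioned above can be illustrated with a minimal implementation of Johnson's (2000) procedure on synthetic data; the predictors, effect sizes, and sample below are all invented, and this is a sketch of the general technique rather than the meta-analysis's actual computation:

```python
import numpy as np

def relative_weights(X, y):
    """Johnson's (2000) relative weights: partitions the regression R^2
    among correlated predictors using the symmetric square root of their
    correlation matrix; the weights sum exactly to R^2."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    R = np.corrcoef(Xz, rowvar=False)             # predictor intercorrelations
    rxy = Xz.T @ yz / len(yz)                     # predictor-criterion correlations
    vals, vecs = np.linalg.eigh(R)
    lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T  # symmetric square root R^(1/2)
    beta = np.linalg.solve(lam, rxy)              # weights of orthogonalized predictors
    return (lam ** 2) @ (beta ** 2)

# Synthetic, intercorrelated predictors (a shared factor g) and a criterion
# loading mainly on the first two; all names and effect sizes are invented.
rng = np.random.default_rng(0)
n = 2000
g = rng.normal(size=n)
X = np.column_stack([g + rng.normal(size=n) for _ in range(3)])
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

eps = relative_weights(X, y)

# Cross-check: the weights should sum to the OLS R^2 on standardized data.
Xz = (X - X.mean(0)) / X.std(0)
yz = (y - y.mean()) / y.std()
b = np.linalg.lstsq(Xz, yz, rcond=None)[0]
R2 = 1 - np.sum((yz - Xz @ b) ** 2) / len(yz)
```

The appeal of the method is visible here: unlike raw regression coefficients, the weights remain interpretable when predictors are substantially intercorrelated, which is exactly the situation with intelligence, personality, and EI.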
The Interplay between Intelligence, Personality, and EI as Predictors of Academic Achievement
Hitherto, the main research question concerning the role of EI in academic outcomes has been whether EI can indeed predict school performance, and if so, whether its contribution goes beyond what is already explained by individual differences in intelligence and personality. Now that this question has been resolved, a new one comes to the foreground: How does EI interact with intelligence and personality to predict academic achievement? In the following passages, we consider several possible models of interplay.
Mediation and "Cascading" Models
First, it is conceivable that EI mediates the relationship between intelligence and personality, on the one side, and academic performance on the other (Figure 1a). This model is reminiscent of the one proposed by Joseph and Newman [30] in their attempt to establish how the same trio of predictors relates to job performance. More precisely, Joseph and Newman's model also assumes a partial mediation effect, with intelligence and personality predicting the criterion both directly and through EI. As both intelligence and personality were more recently shown to explain nontrivial variance in EI [31], there is further justification for the hypothesized mediation.

A distinctive feature of Joseph and Newman's model, lending it the name "the cascading model of EI," is that the domain of EI is represented by three branches-EP, EU, and EM-forming a causal sequence from EP as the most distal to EM as the most proximal predictor of job performance (i.e., EP→EU→EM) [30]. The results of both Joseph and Newman's meta-analysis and of a more recent study by Nguyen et al. [32] yielded support for the proposed cascading effects within EI, showing EU to largely [30] or fully [32] mediate the relationship between EP and EM. In view of this, but given the current focus on predicting academic achievement (rather than job performance), we considered an analogous yet somewhat more parsimonious model: here, the EI variable would be narrowed down to its strategic area, for which a robust association with academic achievement was established [14]; the cascading feature of the original model would still be retained but reduced to the two elements of strategic EI (i.e., EU→EM). We borrow the label "cascading" to refer to this version of the above-presented mediation model.
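The single-mediator core of this partial-mediation logic can be sketched with the classic product-of-coefficients approach on synthetic data; the variable names and all path coefficients below are invented for illustration, not taken from the study:

```python
import numpy as np

def slopes(y, *xs):
    # least-squares slopes of y on the given predictors (intercept included)
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Synthetic data for a single-mediator partial mediation:
# VI -> strategic EI -> GPA, plus a direct VI -> GPA path.
rng = np.random.default_rng(42)
n = 5000
vi = rng.normal(size=n)                        # "verbal intelligence"
sei = 0.4 * vi + rng.normal(size=n)            # "strategic EI", partly driven by VI
gpa = 0.3 * vi + 0.5 * sei + rng.normal(size=n)

a, = slopes(sei, vi)                           # path a: VI -> strategic EI
b, c_prime = slopes(gpa, sei, vi)              # path b and direct effect c'
c, = slopes(gpa, vi)                           # total effect of VI on GPA
indirect = a * b
# For OLS with a single mediator, the decomposition is exact:
# total effect c == direct effect c' + indirect effect a*b.
```

In practice the indirect effect would be tested with bootstrap confidence intervals rather than read off directly, but the decomposition above is what the partial-mediation paths in Figure 1a amount to.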
Adapting the original cascading model to suit the prediction of academic achievement, our next step was to consider which aspects of intelligence and personality to incorporate, as well as which paths from them to the criterion. Concerning intelligence, Joseph and Newman argued that it is the "knowledge-related component of cognitive ability" that primarily affects job performance [30] (p. 59). In the case of school achievement, meta-analytic findings point to verbal ability as a particularly strong predictor [16]; thus, the "cognitive ability" variable from Joseph and Newman's model would here translate to verbal intelligence (VI). The original cascading model further postulates that intelligence works through the EU branch to indirectly affect job performance (besides having a direct effect on the criterion). The same path may arguably be incorporated in the "cascading" model predicting school achievement: EU involves labeling emotions and propositional thinking with emotional information [33], which may indeed be influenced by overall VI (i.e., VI→EU). This would also be empirically backed by the fact that, of the four EI branches, EU is the strongest correlate of verbal/crystallized intelligence [26,34].
As for personality, Joseph and Newman's model includes only two of the Big Five, namely C and N. Certainly, C should feature in any model predicting academic achievement, as it is the trait that shows the largest and most robust associations with school performance [22]. Beyond C, at least two other traits should be incorporated in the model, namely O and A [22]. Admittedly, their associations with academic achievement are generally weaker than for C and tend to diminish when other variables (e.g., intelligence) are brought into play. Nevertheless, as both O and A commonly exhibit nontrivial correlations with strategic EI [26,34], this would support the assumption that their effects on academic achievement are at least partially mediated by EI abilities. Regarding these indirect effects in the "cascading" version of the model, we speculated that O is most likely to work through EU. This is because O entails traits such as intellectual curiosity, liberalism, imagination, and an interest in culture [35], which is likely to bring about learning opportunities that, apart from promoting achievement directly, may also spur the development of cognitive abilities [36], including EU. Concretely, greater exposure to emotionally diverse experiences (O) would lead to gains in emotion-related knowledge and reasoning (EU); this, in turn, would also widen one's spectrum of emotion management strategies (i.e., O→EU→EM), and ultimately enhance academic performance. Concerning the indirect effects of A, we assumed that this trait is most likely to act through the EM branch. More specifically, the tendencies implicated by A, such as altruism, compliance, truthfulness, modesty, tendermindedness, and trustworthiness [35], might orient a person toward envisioning and mentally exploring more constructive and socially desirable emotion regulation strategies, thus enhancing one's EM skills (i.e., A→EM); these could then be applied to achieve better outcomes at school.
Finally, for C, we only assumed a direct effect on academic achievement, but not one that would be mediated by EI. The reason for this is, first, that C has generally exhibited only trivial or non-significant correlations with EI [34], and second, that a prior study found it to predict academic outcomes independently of EM [11]. Incidentally, in Joseph and Newman's model, C is assumed to have a direct effect on job performance and on EP, but not on either of the two strategic EI branches included in the present model.
In sum, the herein proposed mediation model assumes that VI, C, O, and A directly contribute to academic achievement, with all of these predictors except C also working to enhance school performance through strategic EI. The somewhat more complex "cascading" version of this model proposes that the indirect effects occur along the following paths: from VI to EU to EM, from O to EU to EM, from A to EM, and-in all three cases-ultimately to academic achievement.
Direct Effects Model
The second plausible model of the interaction between EI, intelligence, and personality in predicting academic achievement challenges the two main assumptions of Joseph and Newman's cascading model and the mediation model in general. First, recall that separate mechanisms were proposed by which EU and EM may affect academic achievement (see Section 1.3). In light of this, it is conceivable that EU does not predict school performance via EM, but that it does so directly. While Joseph and Newman's and Nguyen et al.'s [30,32] studies clearly showed that the effect of EP on EM is largely/fully mediated by EU (by also estimating the direct path from EP to EM), they did not provide the same type of evidence for the proposed mediation by EM of the relationship between EU and job performance (i.e., a direct path from EU to the criterion was not checked). Judging from MacCann et al.'s study [14], which demonstrated that both EU and EM "are active ingredients in the prediction of academic performance" [14] (p. 169), a direct effect of EU on academic performance might even be more probable than an EU→EM cascade.
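The incremental-validity logic behind the direct effects model can be sketched as a hierarchical regression on synthetic data: if EU sits alongside VI and C as an independent predictor, adding it to a baseline model should raise R². All variables and effect sizes below are invented for illustration:

```python
import numpy as np

def r2(y, X):
    # coefficient of determination from an OLS fit with intercept
    X1 = np.column_stack([np.ones(len(y)), X])
    b = np.linalg.lstsq(X1, y, rcond=None)[0]
    resid = y - X1 @ b
    tot = y - y.mean()
    return 1 - resid @ resid / (tot @ tot)

# Synthetic data: EU overlaps with VI but carries unique predictive variance.
rng = np.random.default_rng(1)
n = 3000
vi = rng.normal(size=n)
consc = rng.normal(size=n)
eu = 0.3 * vi + rng.normal(size=n)
gpa = 0.4 * vi + 0.3 * consc + 0.3 * eu + rng.normal(size=n)

base = r2(gpa, np.column_stack([vi, consc]))        # VI and C only
full = r2(gpa, np.column_stack([vi, consc, eu]))    # EU added alongside them
delta = full - base                                  # incremental validity of EU
```

A nonzero ΔR² of this kind is the empirical signature that distinguishes the direct effects model from a pure mediation account, in which EU's contribution would be absorbed by the downstream variables.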
Second, regarding Joseph and Newman's assumption that intelligence and personality are "important antecedents of the EI processes" (p. 69) in predicting job performance [30], and the extension of this idea to the prediction of academic achievement as proposed in the mediation model, one might argue that EI's relationship to intelligence and personality is rather a bidirectional one. For instance, it is conceivable that those who are better at understanding emotions will also be more open to new experiences and the feelings related to them (O) because they have the cognitive tools to process these experiences. Furthermore, those who have at their disposal a large repertoire of emotion regulation strategies and can judge the effectiveness of particular actions in a given situation (high EM) will probably tend to resort to these strategies to avoid or resolve conflicts and thus come across as more affable (A). Moreover, evidence has accumulated in support of the notion that EI is a second-stratum ability, roughly equivalent to other broad intelligence factors appearing at this level of the Cattell-Horn-Carroll (CHC) structure of cognitive abilities [37,38].
All in all, it seems reasonable to assume that a model in which EI is not placed "after" personality and (verbal) intelligence but "alongside" them could equally well explain the targeted outcome and give a more appropriate estimate of the contribution of strategic EI to academic achievement ( Figure 1b). After all, MacCann et al.'s [14] meta-analysis showed that overall EI does feature as one of the three most important predictors of academic achievement (along with intelligence and C) and that its strategic branches (entered separately as predictors) even outweigh C in predicting the given criterion.
Moderation Model
Finally, although this possibility has hitherto received little attention in the EI literature, it is quite plausible that EI, intelligence, and personality could be positioned in the same way as are the ability and personality factors of performance in so-called models of talent development, e.g., [39,40]. Specifically, these models propose that ability variables, such as intellectual or social abilities, are directly invested in the process of competence development and ultimately reflected in the level of achievement; personality variables, on the other hand, are assumed to act as moderators, which do not directly affect learning outcomes, but may enhance or stifle the learning process, i.e., the transformation of abilities into competencies and achievement. In our case, this would mean that VI and strategic EI (or each strategic branch) would be the actual predictors of academic achievement, whereas personality would simply moderate their effects on the criterion (Figure 1c). This model would be able to capture the commonly encountered situation in which a student has high VI, but her lack of self-discipline or achievement motivation (low C) is interfering with the learning process such that she ends up underachieving at school [41]. The proposed influence of C on the relationship between intelligence and GPA has already received some empirical support in previous research [42].
Moreover, the "moderation model" would also represent the possible scenario in which certain personality traits decide whether EI will indeed contribute to academic achievement in the ways proposed in the literature and described in Section 1.3. For example, a student might possess the relevant EI abilities to analyze and grasp the emotional meaning of content taught at school (high EU), or to select an effective approach to get the teacher to explain the content to her (high EM), but may also be quite defiant and rebellious (low A), making her reluctant to use her EI skills for the sake of achievement. That personality traits can act as moderators when it comes to the expression of EI was empirically demonstrated by Freudenthaler and Neubauer [43]. Specifically, these authors found the typical emotion management behavior of individuals with higher A and C to be more aligned with their cognitive ability to deal with their own and other people's emotions (EM). Transferred to the educational context, this finding would imply that students' A and C are likely to influence whether their EM abilities will translate into successful emotion regulation at school and during learning, i.e., whether the mechanism will be activated by which EM is supposed to promote academic achievement [27,28]. Further supporting the notion that students "must not only have emotional intelligence, but also must be motivated to use it" (p. 39), a study with undergraduates showed that EI incrementally predicted GPA (over general intelligence), but that its effect on the criterion was moderated by C [44].
The Present Study
Accumulated findings from three decades of research have confirmed that when considered alongside intelligence and personality, EI can still add to the prediction of academic achievement, particularly through its strategic branches. However, to our knowledge, no previous studies have directly investigated how (strategic) EI interacts with intelligence and personality to predict academic achievement. As argued above, several models involving these three clusters of predictors are theoretically and empirically defensible. The present study thus examined the relationship between strategic EI and academic achievement while also considering intelligence and personality, with the main aim of testing the above-proposed models of interaction between the given predictor variables.
We set out with several clear expectations regarding the linear associations between our study variables. First, we expected that strategic EI and its branches would be positively associated with academic achievement, and that, consistent with previous findings, the same would be true for VI and the Big Five traits C, O, and A. Second, we expected to replicate previous findings regarding the pattern and size of associations between EI, VI, and the Big Five, meaning that strategic EI and its branches should have a moderate positive association with VI, small-to-moderate associations with O and A, and no significant relation to the remaining three Big Five dimensions. Moreover, we expected that strategic EI would incrementally predict academic performance over VI and the Big Five.
With respect to our main research goal, we hope to have provided a clear rationale for why each of the models described in the previous sections might be expected to receive some empirical support. To sum up, the mediation model assumed that strategic EI would partially mediate the effects of VI, O, and A on academic achievement. A more elaborate, "cascading," version of this model further proposed that the indirect paths from VI and O would go through EU, cascading to EM, and ultimately to academic achievement, while the effect of A on the criterion would only be partially mediated by EM. In the direct effects model, strategic EI or its two branches were positioned at the same level as VI and the three personality traits (C, O, A), and it was assumed that the EI variables would have a significant independent effect on academic achievement. Finally, the moderator model assumed that the Big Five personality traits, or some of them, might moderate the effects of VI and strategic EI on academic performance. As all of the proposed models are theoretically plausible, they were regarded as competing and the study was designed so as to empirically test their appropriateness.
Numerous studies, including two recent meta-analyses [14,45], suggest that the observed associations between EI and academic achievement can be affected by students' age and gender. Thus, in examining the proposed models, we also took these variables into account.
Participants
The study sample included 227 students (146 female), whose ages ranged from 13 to 19 years (M = 16.50 years; SD = 1.42 years). At the time of the study, 108 (47.6%) participants attended compulsory secondary education (from 13 to 16 years old), while 119 (52.4%) received vocational training or baccalaureate (from 17 to 19 years old) in 64 different educational centers in the province of Cádiz (Spain).
Verbal Intelligence
Verbal intelligence was assessed using the 39-item verbal analogies test ALF7 [46]. The ALF7 is a standard test of verbal analogical reasoning in which respondents must identify the one option among four alternatives that correctly completes the analogy established by the first word pair, e.g., "Day is to night, as white is to . . . (red, black, clear, clean)." Each item has a single correct answer that receives one point, resulting in an overall score ranging from 0 to 39. Results of prior studies confirmed that the ALF7 is a highly reliable and valid measure of intelligence [46]. For this study, test items were adapted from Serbian to Spanish using the double-translation procedure [47].
Big Five Personality Traits
The Ten-Item Personality Inventory (TIPI) [48] was used to assess the Big Five personality dimensions. Each Big Five dimension is represented by two items, one of which is phrased to present the dimension's positive pole, and the other its negative pole. Items are scored on a Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree). Validation of the Spanish version of the inventory resulted in the conclusion that, apart from shortcomings related to internal consistency, the TIPI is a promising instrument when assessment brevity is a priority [49].
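For concreteness, TIPI-style scoring (averaging a positive-pole item with a reverse-keyed negative-pole item) can be sketched as follows; the item pairing and ratings below are illustrative only, not actual TIPI content:

```python
def score_tipi_dimension(positive_item: int, negative_item: int) -> float:
    """Score one Big Five dimension from a positive-pole item and a
    reverse-keyed negative-pole item, both rated on a 1-7 scale."""
    reversed_negative = 8 - negative_item  # reverse-keying on a 1-7 scale
    return (positive_item + reversed_negative) / 2

# e.g., a respondent rating the positive-pole item 6
# and the negative-pole item 3:
print(score_tipi_dimension(6, 3))  # 5.5
```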
Emotional Intelligence
Emotional intelligence was assessed using the Spanish adaptation of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) [50]. In this study, four tasks examining the strategic EI area were administered: Blends, Changes, Emotional Management, and Emotional Relations. Depending on the task, participants are asked to respond to questions by either rating the appropriateness of several alternatives on a 5-point Likert-type scale or by choosing the most appropriate alternative among those presented. Responses were scored using the consensus scoring procedure by the publisher (TEA Ediciones), thus yielding EU and EM scores, as well as an overall strategic EI score for each participant. Prior studies showed that the Spanish version of the test mirrors the excellent reliability and validity features of the original [51,52].
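Consensus scoring can be illustrated with a minimal sketch: each chosen response is credited with the proportion of a normative sample that endorsed it, and the credits are averaged. The items and proportions below are invented for illustration; the actual norms used by TEA Ediciones are proprietary:

```python
def consensus_score(responses, norm_proportions):
    """Average the normative endorsement proportion of each chosen option.

    responses        : list of chosen options, one per item
    norm_proportions : list of dicts mapping each option to the proportion
                       of the normative sample that chose it
    """
    credits = [norm_proportions[i][choice] for i, choice in enumerate(responses)]
    return sum(credits) / len(credits)

# Two illustrative items with hypothetical normative proportions:
norms = [
    {"a": 0.10, "b": 0.60, "c": 0.20, "d": 0.10},
    {"a": 0.05, "b": 0.15, "c": 0.70, "d": 0.10},
]
print(consensus_score(["b", "c"], norms))  # 0.65
```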
Academic Achievement
Academic achievement was operationalized using grade point average (GPA) as reported by the participants. In Spain, positive grades are coded from 5, indicating sufficient, to 10, indicating excellent achievement. For the purpose of this study, grades were converted to a 0-4 scale, with the higher score indicating better school performance. Information on GPA provided by the students was cross-checked with reports made by their parents.
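The exact conversion rule is not reported in the text; one plausible linear rescaling of the Spanish 5-10 passing range onto the 0-4 scale would be:

```python
def rescale_gpa(spanish_grade: float) -> float:
    """Hypothetical linear map of a Spanish grade (5 = sufficient,
    10 = excellent) onto a 0-4 scale; the paper does not specify
    the exact rule that was used."""
    if not 5 <= spanish_grade <= 10:
        raise ValueError("expected a passing grade between 5 and 10")
    return (spanish_grade - 5) * 4 / 5

print(rescale_gpa(5))    # 0.0
print(rescale_gpa(10))   # 4.0
print(rescale_gpa(7.5))  # 2.0
```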
Procedure
Data were collected during the spring quarter of 2020 as part of a larger study relating parental styles to the cognitive, emotional, and social development of adolescents. All participants were recruited through a call that was passed on from the Andalusian Delegation of Education in Cádiz (Spain) to associations of mothers and fathers of students within different educational centers in the province. Due to the pandemic confinement, the response rate of students and parents within a single educational center was rather low, which is why all those who had accepted the call were included in the study without any additional selection procedure. As a result, participating students and parents were dispersed across 64 educational centers within the province of Cádiz. Students who agreed to participate were asked to provide an informed consent form signed by a parent. In return for participation, parents received a detailed report on their child's test results. All questionnaires were completed online, except for the intelligence tests, which trained collaborators administered directly to the adolescents via Google Meet. For this purpose, six Ph.D. students were trained and coached to administer the tests. The average total administration time for all tests was about 40 min.
Data Analyses
Descriptive statistics, internal consistencies, correlations between study variables, and gender differences were examined using SPSS v20 (IBM, Armonk, NY, USA). The same software was used to perform a hierarchical regression analysis that inspected the incremental validity of strategic EI. Using the maximum likelihood estimation method, structural equation modeling (SEM) was performed in AMOS 16 to test the proposed models of interaction between VI, personality, and strategic EI in predicting school achievement. The goodness of model fit was evaluated based on the following indices and thresholds for good model fit [53,54]: (a) chi-square statistic (χ²) and its probability > 0.05, (b) CFI (comparative fit index) ≥ 0.95, (c) TLI (Tucker-Lewis index) ≥ 0.95, (d) RMSEA (root mean square error of approximation) < 0.05, and (e) SRMR (standardized root mean square residual) < 0.05. Comparison of the models was based on the AIC (Akaike information criterion) [55] and ECVI (expected cross-validation index) [56], with smaller values indicating a better fit and greater potential for replication. The significance of indirect effects was examined using the bootstrap estimation procedure, with 2000 cases and a 95% confidence interval.
Descriptive Statistics, Relationships between Variables, and Gender Differences
Table 1 presents the means and standard deviations, as well as score ranges, for all examined variables. Variability indices (ranges and SDs) and absolute skewness values suggested that all scores followed an approximately normal distribution. Alphas were generally adequate and comparable to those previously established for the same instruments, or even higher in the case of three TIPI subscales (Table 1). A negative exception was the poor internal consistency for A and O.
The correlations between study variables were in the expected direction and of the expected size (Table 1). School achievement was positively related to VI, A, C, O, and to strategic EI and its two branches, with most of these correlations being of medium size. Statistically significant positive correlations were also established between VI and strategic EI (overall and at the branch level), as well as between these variables and O. In addition, EM had a small positive correlation with A and a small negative one with N. Finally, age was positively related to VI, strategic EI (and its branches), and O.
The results of the ANOVA for gender differences demonstrated that girls scored higher on C (F(1,225) = 17.16).
Hierarchical Regression Analyses
To test the incremental validity of strategic EI in the prediction of GPA, a three-step hierarchical regression analysis was performed: age and gender were entered in step 1, followed by VI, C, O, and A in step 2, and strategic EI in step 3. The overall model was statistically significant (F(7,219) = 12.12, p < 0.001), explaining 26% of the variance in GPA. Entered in the last step, strategic EI added a significant 2% (p < 0.05) to the overall variance explained. In order of magnitude, the significant independent predictors of GPA were C (β = 0.32, p < 0.001), VI (β = 0.23, p < 0.001), O (β = 0.19, p < 0.01), age (β = −0.16, p < 0.05), and strategic EI (β = 0.15, p < 0.05). When strategic EI was replaced by EU and EM in the last step, the results for the overall model were practically the same (F(8,218) = 10.83, p < 0.001; 26% of the variance explained), but of the two branches, only EU emerged as a significant predictor (β = 0.16, p < 0.05), accounting for an additional 2% (p < 0.05) of the variance in GPA.
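The logic of this incremental-validity test (the gain in R² between nested steps) can be sketched on synthetic data. The predictor set is simplified and the data are simulated, so the numbers do not correspond to the study's results:

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit with intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

# Simulated data (n matches the study sample; predictor set simplified):
rng = np.random.default_rng(0)
n = 227
age, vi, c, sei = rng.standard_normal((4, n))
gpa = 0.2 * vi + 0.3 * c + 0.15 * sei + rng.standard_normal(n)

# Model without vs. with the strategic EI score added in the last step:
r2_without = r_squared(np.column_stack([age, vi, c]), gpa)
r2_with = r_squared(np.column_stack([age, vi, c, sei]), gpa)
print(round(r2_with - r2_without, 3))  # incremental R^2 of strategic EI
```

Since the models are nested, the R² gain is never negative; its statistical significance would be judged with an F test on the change, as SPSS does.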
Structural Models
In terms of the predictor variables included and their (direct and indirect) effects on GPA, the models tested were as proposed in the Introduction based on the rationale provided there (see Sections 2 and 3). The correlations between the exogenous variables were modeled according to the results of correlational analyses in the current data set. Furthermore, age and gender were entered as controls in all models to help minimize unrelated effects and improve the robustness and validity of the results.
Mediation and "Cascading" Models
The mediation structural model hypothesized that VI, O, and A have both direct and indirect effects (via strategic EI) on GPA, while C was modeled to have only a direct effect on the dependent variable. Figure 2a displays the standardized parameters for this model. Paths modeled from age and gender were omitted from this and all subsequent figures to ease interpretation. The results showed that the direct paths from VI, C, O, and strategic EI to GPA were all significant. The outcome of the bootstrap estimation procedure confirmed that the effects of VI and O on GPA were also partially mediated by strategic EI (Table 2). As for A, both its direct and indirect pathways to GPA were nonsignificant. Finally, while VI had a significant direct effect on strategic EI, this was not the case for O and A. Fit indices suggested that the mediation model had an excellent fit, with χ²(3) = 1.452, p = 0.694; GFI = 0.998; CFI = 1.00; TLI = 1.093; SRMR = 0.016; and RMSEA = 0.000.
Table 2. Standardized indirect effects and 95% confidence intervals for the mediation and "cascading" models.
Next, we tested the "cascading" model in which strategic EI was replaced by the EU→EM sequence. As in the initial mediation model, direct pathways were entered from all exogenous variables to GPA, with the additional mediation paths via EU for VI and O, and via EM for A.
Again, the direct structural path coefficients from VI, C, and O to GPA were statistically significant (Figure 2b). However, the hypothesized indirect effects on GPA were nonsignificant (Table 2), except for the indirect effect of O on EM via EU. As in the mediation model, both direct and indirect pathways modeled from A to GPA were nonsignificant; however, A had a significant direct effect on EM. Standardized coefficients were also significant for the direct effects of VI on EU and of EU on EM, while the pathway from EM to GPA was nonsignificant. Fit indices for the model were as follows: χ²(8) = 9.759, p = 0.282; GFI = 0.990; CFI = 0.992; TLI = 0.964; SRMR = 0.026; and RMSEA = 0.031. Since the indirect effect of EU on GPA via EM was nonsignificant, the "cascading" model was modified to include a direct pathway from EU to GPA as well. This direct effect was significant (β = 0.16, p < 0.05), and moreover, a significant effect on GPA mediated through EU was established for both VI (β = 0.02, 95% CI [0.001, 0.057]) and O (β = 0.02, 95% CI [0.001, 0.056]). The fit was again excellent, as for the two previous models: χ²(7) = 4.063, p = 0.773; GFI = 0.996; CFI = 1.00; TLI = 1.070; SRMR = 0.020; and RMSEA = 0.000.
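The bootstrap logic used to judge these indirect effects can be sketched for a simple X→M→Y mediation with simulated data (the study's actual models are, of course, multivariate SEMs estimated in AMOS):

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b in a
    simple X -> M -> Y mediation."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]               # slope of M ~ X
        b = np.linalg.lstsq(np.column_stack([np.ones(n), xs, ms]),
                            ys, rcond=None)[0][2]  # slope of M in Y ~ X + M
        estimates.append(a * b)
    return np.percentile(estimates, [2.5, 97.5])

# Simulated mediation with true a = 0.5 and b = 0.4 (indirect effect 0.2):
rng = np.random.default_rng(1)
x = rng.standard_normal(227)
m = 0.5 * x + rng.standard_normal(227)
y = 0.4 * m + 0.2 * x + rng.standard_normal(227)
lo, hi = boot_indirect(x, m, y)
print(lo, hi)  # a CI excluding 0 indicates a significant indirect effect
```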
A comparison of the three tested models based on the AIC (67.451, 83.759, and 80.063, respectively) and ECVI (0.298, 0.371, and 0.354, respectively) identified the initial mediation model as the best-fitting and most replicable one. When the two variations of the "cascading" model were compared, a significant χ² difference (Δχ² = 5.696, p < 0.05) indicated that the modified model, in which EU influenced GPA not only via EM but also directly, was the better-fitting one.
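Several of the reported quantities can be reproduced from the χ², df, and N values given above. The sketch below assumes the AMOS convention AIC = χ² + 2q; the value q = 33 free parameters for the mediation model is our inference from its reported AIC, not a figure stated in the text:

```python
from math import sqrt, erfc

def rmsea(chi_sq, df, n):
    """Steiger-Lind RMSEA computed from the model chi-square."""
    return sqrt(max(chi_sq - df, 0) / (df * (n - 1)))

def amos_aic(chi_sq, q):
    """AMOS-style AIC: model chi-square plus twice the number of
    free parameters q."""
    return chi_sq + 2 * q

def chi2_sf_df1(x):
    """Survival function (p-value) of a chi-square with df = 1."""
    return erfc(sqrt(x / 2))

# Mediation model: chi2(3) = 1.452, N = 227
print(rmsea(1.452, 3, 227))            # 0.0 (matches the reported RMSEA)
# "Cascading" model: chi2(8) = 9.759
print(round(rmsea(9.759, 8, 227), 3))  # 0.031 (matches the reported RMSEA)
# q = 33 is inferred from the reported AIC of 67.451, not stated in the text:
print(amos_aic(1.452, 33))
# Chi-square difference test between the two "cascading" variants (ddf = 1):
print(chi2_sf_df1(9.759 - 4.063))      # ~0.017, i.e., p < 0.05
```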
Direct Effects Model
The first version of the direct effects model hypothesized that the same predictor variables (VI, strategic EI, O, A, and C) all affect GPA directly; correlations between the predictors were modeled as bidirectional when that was suggested by the results of correlational analyses in the current dataset.
The results showed that all entered variables, excluding A, had significant independent effects on GPA (Figure 3a); the 95% confidence intervals for the standardized coefficients of the significant predictors were also compared. In the second version of the direct effects model, strategic EI was replaced by its branches. Again, A had no significant effect on GPA (Figure 3b). Another important result was that the standardized regression coefficient for the EM→GPA path also failed to reach a statistically significant value. On the other hand, the effect of EU on GPA was significant and independent of VI, C, and O. When A and EM were omitted from the model, its predictive power was slightly improved (R² = 0.27), as were the model fit indices: χ²(2) = 1.436, p = 0.488; GFI = 0.998; CFI = 1.000; TLI = 1.041; SRMR = 0.017; and RMSEA = 0.000.
Inspection of the AIC and ECVI values indicated a better fit of the model with strategic EI as a predictor (AIC = 68.968, ECVI = 0.305) compared to the model that included its branches (AIC = 90.456, ECVI = 0.400). However, based on the AIC (53.436) and ECVI (0.236), the best-fitting direct effects model was the one in which nonsignificant predictors, namely, EM and A, were omitted.
Moderation Model
Finally, a series of two-way interactions in regression analyses was performed to test the moderating effects of personality traits on the relationship between VI and strategic EI, on the one side, and GPA, on the other. The moderating effects of C, O, and A were tested in three separate models. Before the analyses, all independent variables were standardized and product terms were created.
At the start, each model included VI and strategic EI, one of the Big Five traits (C, O, or A), and the interaction terms of that particular trait with the other two predictors. Since all exogenous variables were allowed to covary, all models were initially saturated and model fit could not be evaluated. Regardless of that, the moderating effects were nonsignificant for all three inspected personality traits, which is why the moderation models were not further tested with EU and EM instead of overall strategic EI.
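The moderation test described above (standardize the predictors, form product terms, estimate the interaction coefficient) can be sketched with simulated data in which, unlike in the present sample, a VI × C interaction is deliberately built in:

```python
import numpy as np

# Simulated data with a deliberate VI x C interaction (true beta = 0.2);
# the study itself found no such moderation effects.
rng = np.random.default_rng(2)
n = 227
vi = rng.standard_normal(n)   # verbal intelligence
c = rng.standard_normal(n)    # Conscientiousness
gpa = 0.3 * vi + 0.3 * c + 0.2 * vi * c + rng.standard_normal(n)

# Standardize the predictors, then form the product (moderation) term:
z = lambda v: (v - v.mean()) / v.std()
zvi, zc = z(vi), z(c)
X = np.column_stack([np.ones(n), zvi, zc, zvi * zc])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
print(beta[3])  # estimate of the interaction coefficient (true value: 0.2)
```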
Discussion
The current study built on recent meta-analytic findings showing that EI, particularly its strategic area, plays an important role in predicting academic achievement, next to intelligence and the Big Five (most notably C). However, our study took this strand of research a step further by considering and empirically testing several plausible models of the interplay between strategic EI and relevant aspects of intelligence and personality in predicting this outcome. Specifically, we proposed three different models whereby (a) strategic EI partially mediates the effects of VI and personality on academic performance (the mediation model), (b) strategic EI directly predicts academic performance alongside VI and personality (the direct effects model), or (c) the effects of strategic EI and VI on academic performance are moderated by personality (the moderation model). Each model also had a version in which strategic EI was unpacked into its branches, namely, EU and EM.
By testing the proposed models, the current study shed light on several important issues. The first one concerns the position of EI in relation to intelligence and personality as well-established predictors of academic achievement: Is EI a partial mediator of their effects or another independent predictor of this outcome? The second one pertains to the order of the two strategic branches, namely, EU and EM, in predicting (academic) performance: Is there a causal sequence (EU→EM→GPA), as proposed in Joseph and Newman's cascading model [30,32], or rather two parallel paths (EU→GPA, EM→GPA), as recently suggested by MacCann et al. [14]? The final question refers to the possible interactions between ability variables and personality in explaining academic achievement: Are the effects of VI and strategic EI moderated by personality, as would be implied by talent development models [39,40]?
Before turning to these issues, we shall briefly comment on the linear associations between the study variables and on the predictor variables' contribution to explaining the criterion in question. In this regard, it may generally be noted that the current results largely mirror those from previous studies and meta-analyses, thus supporting the expectations stated in Section 3.
First, strategic EI and both of its branches were positively related to GPA. The strength of the association for EU (r = 0.20, 95% CI [0.08, 0.32]) was slightly lower than would be expected from meta-analytical findings (95% CI [0.28, 0.43]), while for EM (r = 0.17, 95% CI [0.04, 0.29]), it was right within the expected range (95% CI [0.16, 0.35]) [14]. As expected, VI, O, A, and C also emerged as positive correlates of GPA, with the association with GPA being largest for C, in accordance with some meta-analytic findings [22].
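Confidence intervals of this kind can be approximated with the standard Fisher z transformation; since the method behind the reported intervals is not stated, small rounding discrepancies may be expected:

```python
from math import atanh, tanh, sqrt

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson correlation via the
    Fisher z transformation."""
    z = atanh(r)
    se = 1 / sqrt(n - 3)
    return tanh(z - z_crit * se), tanh(z + z_crit * se)

# EU-GPA correlation from the present sample (r = 0.20, n = 227):
lo, hi = fisher_ci(0.20, 227)
print(round(lo, 2), round(hi, 2))
```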
Second, the pattern of associations between EI, VI, and the Big Five was fully in line with those previously reported. Strategic EI (or at least one of its branches) had a positive correlation with VI, O, and A, and a negative one with N. The correlations were expectedly largest for VI and O, while A was significantly associated only with EM (cf. [34]).
In addition, in testing for gender and age differences in strategic EI, we were able to replicate the two most common findings within younger age cohorts (for a recent discussion of age-specific demographic differences in EI, see [57]). The first one was that EI increased with age; the second was that girls tended to outperform boys, although, in this study, they did so only on EM.
Finally, in a hierarchical regression analysis, strategic EI (more precisely, its EU branch) accounted for a modest but statistically significant 2% of the variance in GPA above VI and personality after considering age and gender. Thus, our results provide further evidence of the incremental validity of EI above traditional dispositional predictors of academic achievement [14,28,58] and align with previous findings pointing to EU as the EI branch that best predicts school performance [58,59].
Is Strategic EI a Partial Mediator for VI and Personality or an Independent Predictor of Academic Achievement?
Turning to the first of the above-enumerated issues (EI as a mediator or another independent predictor), it is pertinent to discuss the SEM results for the mediation model vs. the direct effects model. The primary thing to notice here is that the first and best-fitting versions of each model, i.e., those including overall strategic EI (Figures 2a and 3a), both had excellent fit indices. This would seem to suggest that either of these two models could validly represent the interaction of strategic EI, VI, and personality (O, A, and C) in predicting academic achievement.
Looking at the results for the mediation model, we find confirmation for the assumption that VI, C, O, and strategic EI directly affect academic achievement, but that the effects of VI and O on GPA are also partially mediated by strategic EI. The present results thus provide initial empirical support for the proposed mechanisms by which verbal ability and intellectual traits (O) work to promote school performance (see Section 2.1). More precisely, apart from contributing directly to learning and achievement (by awakening interest in and enhancing the processing of academic contents), both VI and O seem to be implicated in the development of abilities to understand and manage emotions, which are then exercised in attaining favorable outcomes at school.
On the other hand, the strong fit obtained for the direct effects model implies that strategic EI does not merely act as a mediator through which VI and O affect academic achievement but is a relevant ingredient of school success in its own right. In fact, a comparison of the 95% CIs for the direct and total (i.e., direct + indirect) effects of VI (β VIdir 95% CI [0.12, 0.33]; β VItot 95% CI [0.14, 0.35]) and O (β Odir 95% CI [0.06, 0.33]; β Otot 95% CI [0.08, 0.34]) in the mediation model revealed a full overlap, which indicates that the introduction of indirect pathways via strategic EI to GPA did not substantially improve the explanatory power of VI and O. Thus, in terms of parsimony, and given that some direct (O→strategic EI) and indirect (A→strategic EI→GPA) pathways in the mediation model were nonsignificant, the direct effects model seems preferable to the former. In addition, the direct effects model is more compatible with the empirically supported proposition that EI is a broad intelligence factor, placed at the same level of the CHC hierarchy as other broad abilities, such as verbal/crystallized intelligence [37,38]. Finally, it also accords with MacCann et al.'s [14] remark that the effect of EI, and particularly of EU, on academic performance "cannot be explained solely by its overlap with intelligence" (p. 167).
Do EU and EM Work in Sequence or in Parallel to Predict Academic Achievement?
We now turn to a consideration of the order of the two strategic branches, namely, EU and EM, in predicting academic achievement. A necessary starting point is to evaluate the model versions (of both the mediation and the direct effects model) that resulted from substituting overall strategic EI with its branches (Figures 2b and 3b). In both cases, the resulting models had good fit indices but were nevertheless outperformed by the corresponding simpler version that included only the aggregate strategic EI score. The most probable reason for this is the lack of specific criterion variance explained by EM, as shown by its nonsignificant contribution to GPA when combined with other predictor variables. Thus, the effect of strategic EI in the first versions of both models practically comes down to the effect of EU, leaving EM as a redundant variable in the second model versions. What does this mean for the proposed sequential order of the two EI branches in predicting academic achievement?
First of all, the current data do not support the proposition that EU works through EM to enhance school performance. In fact, the introduction of a direct pathway from EU to GPA within the "cascading" model significantly improved the model fit indices and confirmed that the effect of EU on the criterion was exclusively a direct one. This finding contradicts one of the basic assumptions of Joseph and Newman's cascading model, namely, that there is a causal chain in which EU precedes EM, and the latter serves as the direct link to (job) performance [30]. Two reasons may explain why our study reached a different finding than that presented by Joseph and Newman [30] and Nguyen et al. [32]. The first and most obvious explanation draws on the difference in the criteria predicted: while the cascading model of EI seems to fit when predicting job performance, it evidently fits less well when the criterion is academic achievement. This might ultimately be related to the fact that, unlike for job performance, a broad and solid mechanism is conceivable by which EU can directly affect achievement in any academic program involving the language arts and humanities. As for the other possible explanation, recall that neither Joseph and Newman nor Nguyen et al. reported the fit indices and standardized estimates for any alternative models that included a direct path from EU (or EP, for that matter) to job performance. Consequently, it remains dubious whether the cascading model of EI really provided the best fit to the data in the two respective studies. In other words, although the present findings apparently contradict those reported in the referred-to studies, we found upon closer inspection that the results taken to support the cascading model were not conclusive enough in the first place.
Overall, then, it is not surprising that we found EU to have a direct effect on GPA rather than one that is mediated by EM, especially since EU was recently judged to play a "critical role" in academic performance [14] (p. 169). The question remains, however, why EM did not contribute to predicting the criterion in the present study, while it did so in MacCann et al.'s meta-analysis [14]. A tentative answer is provided below, as we consider the mechanisms by which EU and EM affect academic achievement.
While the present study clearly speaks against a cascade from EU to EM to academic achievement and in favor of EU having a direct effect on the outcome (comparable to the effects of VI and O), it does not provide any direct insights into the mechanisms by which this effect was exerted. Nevertheless, the fact that EU incrementally predicted GPA and that its effect on the latter was not mediated by EM resonates with the proposition that EU contributes to school achievement in its own right, namely, as a resource for mastering those aspects of the curriculum that require an understanding of human emotions and emotionally driven (inter)actions, as is the case in the language arts and humanities [14,60].
As for EM, it should be noted that the mechanisms by which this EI branch was hypothesized to affect school achievement (see Section 1.3) are somewhat less direct: Unlike emotional vocabulary and understanding, which may be directly invested in mastering certain academic contents, knowledge about effective ways to regulate emotions (i.e., EM as operationalized by EI tests) first has to be translated into actual emotion management, and then there might be other intermediate steps (e.g., establishing good social relations at school [27,28]) before EM abilities can be reflected in learning and, finally, achievement. Taking this truly strategic approach to learning, or even just turning one's emotion management knowledge into actual control over emotions in the academic context, might be particularly challenging for adolescents, who, more than other age groups, are likely to act on their impulses despite knowing better [61]. In any case, it seems to us that, compared to EU, EM might be the more distal and thus also less stable predictor of school achievement, which also explains its non-significant contribution in the present sample.
Do Personality Traits Moderate the Effects of Abilities or Contribute Alongside Them?
Finally, regarding the third issue, i.e., the role of personality traits in a model that serves to predict academic achievement, the results of the present study were quite clear: The proposed moderating effects of personality on the relationship between VI and strategic EI, on the one side, and academic performance, on the other, were not borne out by our data. Given the excellent fit of the other two models (i.e., mediation and direct effects), whereby personality traits were assumed to directly and/or indirectly contribute to academic achievement, it seems fair to conclude that their role is not merely one of "catalysts" that enhance or inhibit the learning process, the outcomes of which are basically influenced by students' level of abilities [39], be they verbal, emotional, or of another type. Certainly, some studies have provided empirical support for the proposition that personality traits moderate the expression of abilities (in this case, EI [43]), and we are not suggesting that this is not one viable path by which personality may affect academic performance. However, it is obvious from our data that a model assuming that this is the only way in which personality is implicated in academic achievement is overly simplistic.
According to the present results, two personality traits, namely, C and O, contributed independently to adolescents' school achievement, with C also surfacing as the strongest in this particular set of predictors. It is thus clear that personality, and especially C, remains a crucial ingredient of academic performance, even when increasing content complexity is expected to put more weight on students' intellectual capacities (cf. [19]). In fact, personality may affect achievement via paths beyond those proposed and tested in the present study, which remains to be tackled by future research.
Practical Implications
Providing further evidence of the importance of strategic EI and the "critical role" of EU [14] for adolescents' academic achievement, the present study also justifies the efforts placed into developing theory-driven approaches to enhance students' abilities to label and understand emotions (e.g., RULER) [62]. More specifically, it suggests that interventions designed to promote students' emotional vocabulary and understanding of "feeling words" might be an effective means to improve their academic performance in areas that require the use and understanding of emotional language (i.e., the language arts and humanities).
While EM did not contribute to academic performance in the present sample, we do not take this result as undermining the importance of this EI branch; rather, we speculate that it might be challenging, particularly for adolescents, to draw upon their emotion regulation knowledge to actually manage their own and others' emotions, and ultimately perform better in terms of grades. From this point of view, it would also make sense to employ interventions that teach students how to handle and respond to school situations in an emotionally intelligent rather than impulsive manner. Accumulating evidence testifies to the usefulness of such tools for improving individual student outcomes [63] and the quality of learning environments [64]. Although important at all educational levels, these systematic interventions might be especially fruitful for adolescents, supplying them with the skills necessary to deal with the overwhelming emotional experiences typical of that age and preparing them for the complex social environments of adulthood.
Limitations and Future Directions
Before concluding, we would like to acknowledge several methodological limitations of the present study.
First, our findings are based on cross-sectional data and, thus, all causal relations tested and confirmed via SEM should be taken with the necessary caution required by virtue of the study design. As is usually the case with modeling the presumably complex interplay of factors that determine an outcome such as academic achievement, only (multiple) longitudinal studies would allow for more definite conclusions about the dynamics of all relevant dispositions.
Second, our study included participants of a limited age range and, thus, the current findings may not fully generalize to other age groups. The relative contribution of different dispositional traits in the prediction of academic achievement can change with age, and the same could be the case for EI. Apart from that, it should also be acknowledged that the current study did not control for the possible effects that other demographic variables, particularly SES, could have on the established associations.
Next, certain limitations were related to the measures used to assess some of the main study variables. Although we employed a well-validated test of VI, a more reliable assessment of this, or any other, broad ability would require at least two to three separate test indicators, which is something that we certainly recommend for any future replications of the present study. Moreover, for the sake of brevity, we opted for a very brief, 10-item measure of the Big Five, which compromised the internal consistencies for some traits. Even with such low reliabilities, the Big Five exhibited the expected pattern of relationships with other study variables, which speaks in favor of the robustness of these associations. Nevertheless, any future investigation of the same research question should preferably rely on a longer and more reliable Big Five inventory. Finally, given the known legal difficulties in obtaining official school records, we had participants report their GPA, which was then cross-checked only via parental reports on the same variable. Certainly, it would have been optimal to collect the data on GPA from the schools directly, though meta-analytic findings reveal a rather strong correlation (ρ = 0.82) between self-reported and official high school GPA [65].
Not a limitation of the present study, but rather an incentive for future ones, would be the need to chart out how different educational outcomes are predicted by particular EI branches, i.e., wherein the predictive power of each branch lies. A possibility suggested by the present study, in conjunction with some previous research, is that EU might be the branch that is most directly involved in predicting school performance in terms of grades [14], while EM might play a more important role vis-à-vis outcomes such as social adjustment and students' well-being at school (cf. [27,28,66]).
Conclusions
To our knowledge, this was the first study to propose and empirically test several plausible models of the interplay between strategic EI, VI, and personality in predicting academic achievement. To begin with, our results confirmed that strategic EI is positively associated with and incrementally predicts academic achievement: in this case, it was overall strategic EI and its EU branch that predicted GPA over VI and the Big Five traits of O, A, and C. Moreover, by using SEM to explore several possible arrangements in which the above predictors might affect academic achievement, we were able to draw the following conclusions: First, while they certainly allow for the possibility that the effects of VI and O on academic achievement are partially mediated by strategic EI, overall, our results make a better case for a parsimonious "direct effects model," whereby strategic EI is placed alongside VI and the relevant personality traits (i.e., O, A, and C) as another independent predictor of GPA. Second, from the present results, it seems unlikely that there is a cascading effect of EU on EM and thus on academic achievement; rather, EU appears to affect the criterion independently of EM (which, in this sample, did not even contribute at a statistically significant level to predicting GPA). Assuming that in other instances, both EU and EM can contribute to academic performance (as suggested by prior research), the present findings would imply that they are likely to do so in a parallel rather than a sequential order. In fact, our results indicate that, among the two strategic EI branches, it is EU, not EM, that is the more direct and "proximal" predictor of academic achievement.
Finally, our study suggests that personality traits do not merely moderate the effects of abilities, i.e., VI and strategic EI, on academic achievement, but that some of them, specifically C and O, substantially affect this outcome independently of VI and EI and, in the case of O, partially through EU abilities.
Data Availability Statement:
The database is available from the corresponding authors upon reasonable request.
Digitalization of agricultural industrial complex, application of robotics in agriculture
The article deals with the issues of digitalization of the agro-industrial complex. It is noted that one of the important areas of digitalization is the development of information technologies and the robotization of agriculture. High efficiency is achieved by those industries that effectively use robotic means (RMS): drones and unmanned agricultural machines (tractors, seeders, cultivators, etc.). The advantages of using RMS in agriculture are shown. Kazakhstan produces various seeders and cultivators, combines for picking berries, self-driving machines for spraying in fields and gardens, and other RMS. Work is underway to create quadrocopters intended for use in agriculture, and the structure and mathematical model of the quadrocopter are described in this article.
Introduction
Currently, robotics and robotic tools are being introduced into various spheres of human activity, including work in uncertain and extreme conditions [1]. Today, an important direction for the priority development of the republic's agro-industrial complex, for addressing food supply issues, and for improving the competitiveness of agricultural production is the digitalization of agro-industrial production based on automation, robotics and the development of information technologies.
Information and communication technologies are now widely used in various fields of agriculture: in crop production, animal husbandry, animal nutrition, in calculation of fertilizer doses, land management, regulation of plant nutrition and microclimate in greenhouses, etc.
High efficiency is achieved precisely by those industries that use technological complexes with mass and serial production and that effectively use robotic means (RMS). These means include drones and unmanned agricultural machines (tractors, seeders, cultivators, etc.). Automation and robotization of agricultural production increase the reliability and service life of technological equipment, ease working conditions, and improve safety; agricultural work becomes more prestigious, and the cost per unit of production is reduced while output increases and product quality improves.
Literature review: World practice is replete with examples of the use of RMS in agriculture. For example, the companies ISO Group, Flier Systems (the Netherlands) have robotized the processes of growing flowers in greenhouses with a system for planting them.
The product of the Autonomous Tractor Corporation (USA) was a robotic tractor without a cab. The robot of the company Blue River Technology (USA) works as a cultivator [2]. Agrobot (Spain) produces a hydroponic system for growing and harvesting strawberries: the Agrobot SW6010 and AGS Hydro.
Agribotix leases drones to cooperatives, agronomists, consultants, farm managers, and large industrial agricultural corporations. These devices capture high-resolution images using a variety of sensors; the imagery is processed into maps that identify which areas of a field are most in need of fertilizer application. The company offers image processing services, charged per unit of area, for the development of maps under an annual contract [3].
These are just a few examples of the use of robotic tools in agriculture. RMSs are increasingly used in precision farming technologies, which have become widespread in the last 10-15 years. These technologies have made it possible to consider the numerous factors affecting plant growth in a completely different way and at a different level, and to reduce the consumption of seeds, fertilizers, pesticides, and water, thereby lowering the cost of production [4].
Methodology
In recent years, another wave of transformation of business and social models has been unfolding in our republic, caused by the emergence of a new generation of digital technologies which, due to the scale and depth of their influence, have been called "end-to-end" technologies: artificial intelligence, robotics, the Internet of Things, wireless communication technologies, and a number of others.
One of the promising areas of development of the digital economy today is robotization, namely, the creation of robotic systems used in the management of economic systems: firms, enterprises, and business structures. The state program "Digital Kazakhstan" provides for the use of RMS in all areas of public production and mainly in the field of agriculture.
The advantages of using RMS in agriculture are as follows:
- robots are able to perform various operations, i.e., tillage, fertilizing, sowing, planting, milking livestock, shearing wool, feeding, butchering meat and fish, etc.;
- improving business efficiency through planning and drawing up a field passport;
- increasing yield by monitoring the infestation of fields and sifting, with rapid response;
- excluding unauthorized downtime of equipment and controlling field work;
- high accuracy and speed of technological operations;
- operation in aggressive, harmful and dangerous places inaccessible to humans;
- robots monitor the cultivation of plants, track the movement of harmful insects, and allow electronic maps to be made for agriculture.
In Kazakhstan, self-driving machines for spraying in fields and orchards, various seeders and cultivators, harvesters for collecting berries, and other machines are produced today (see Figures 1 and 2). In the robotics laboratory of the L.N. Gumilyov Eurasian National University, research and development work is carried out to create RMS and, in particular, quadrocopters intended for use in various fields, including agriculture.
Photo 3 shows a model of a quadcopter that represents a compromise between weight, enhancements, the installation of additional modules, payload lifting capacity, cost, complexity of assembly and repair, and flight time.
A quadcopter is a symmetrical aircraft with four equally sized rotors; it is controlled by adjusting the speeds of its propeller motors. The use of multiple rotors provides great maneuverability and speed as well as the ability to hover in the air.
Such vehicles have become known as unmanned aerial vehicles (UAVs). They offer significant advantages when used for aerial surveillance, reconnaissance and inspection in difficult and hazardous environments.
Mathematical model of a quadrocopter
The structure of a quadcopter is a rigid fixed link between four motors. The rotation of the four blades directs the air flow downward, thereby producing thrust in the opposite direction. To describe a quadrocopter model, two reference systems are usually used: the G-system (ground), an inertial reference system fixed to the Earth, and the B-system (body), a reference system fixed to the vehicle (Fig. 4) [5]. The G-system (OG, XG, YG, ZG) is defined with XG, YG, ZG pointing north, west and up, respectively. The B-system (OB, XB, YB, ZB) is defined with XB, YB, ZB pointing forward, to the right and downward relative to the quadcopter, respectively; the origin of this system is the center of mass of the quadcopter.

E3S Web of Conferences 282, 07006 (2021) EFSC2021 https://doi.org/10.1051/e3sconf/202128207006

Since the two systems are in a fixed relation to each other, the transformation between them can be expressed through the rotation matrix R, composed of the Euler angle rotations (Z-Y-X):

R = Rz(ψ) Ry(θ) Rx(φ)   (1)

In an inertial coordinate system, the acceleration of a quadrocopter is due to thrust, gravity and linear friction. The thrust vector in the inertial coordinate system is obtained by using the matrix R to map the thrust vector from the quadrocopter frame. Thus, the linear motion is described by

m ẍ = R T_B + F_D − m g e₃   (2)

where x is the position of the quadrocopter, g is the acceleration of gravity, e₃ = (0, 0, 1)ᵀ, F_D is the drag force, and T_B is the thrust vector in the quadcopter frame. Taking the center of mass of the quadrocopter as the reference point of the inertial system, the equation of rotation can be obtained from the Euler equation for the dynamics of rigid bodies, expressed in vector form as

I ω̇ + ω × (I ω) = τ   (3)

where ω is the angular velocity vector, I is the inertia matrix, and τ is the vector of moments of external forces.
Equation (3) can be rewritten as

ω̇ = I⁻¹ (τ − ω × (I ω))   (4)

The quadrocopter can be modeled as two thin homogeneous rods intersecting at the origin, with a point mass (a motor) at the end of each. With this in mind, the symmetry leads to a diagonal inertia matrix of the form

I = diag(Ixx, Iyy, Izz)   (5)

Thus, the final form of the mathematical model of the quadrocopter is obtained by combining (2) and (4):

m ẍ = R T_B + F_D − m g e₃,
ω̇ = I⁻¹ (τ − ω × (I ω))   (6)

This mathematical model can be used in the design of quadrocopters of various modifications.
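A minimal numerical sketch of this rigid-body model can be written in a few lines. This is our illustration rather than code from the article, and all parameter values (mass, inertia, drag coefficient) are invented; it integrates the translational and rotational equations with a forward-Euler step and checks the simplest sanity case, namely that a thrust equal to the vehicle's weight holds it in hover.

```python
import numpy as np

# Illustrative parameters (not from the article): mass, gravity,
# diagonal inertia for a symmetric quadcopter, linear drag coefficient.
m, g = 1.0, 9.81
I = np.diag([5e-3, 5e-3, 9e-3])          # kg*m^2
I_inv = np.linalg.inv(I)
k_d = 0.1                                 # linear drag coefficient

def step(x, v, omega, R, thrust, tau, dt=1e-3):
    """One forward-Euler step of the rigid-body model:
    m*x'' = R*T_B + F_D - m*g*e3,  I*omega' = tau - omega x (I*omega)."""
    T_B = np.array([0.0, 0.0, thrust])    # thrust along the body z-axis
    F_D = -k_d * v                        # linear drag opposes velocity
    a = (R @ T_B + F_D) / m + np.array([0.0, 0.0, -g])
    omega_dot = I_inv @ (tau - np.cross(omega, I @ omega))
    # Propagate the rotation matrix with the body angular velocity
    # (skew-symmetric matrix form of R' = R*W).
    W = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    return x + dt * v, v + dt * a, omega + dt * omega_dot, R + dt * (R @ W)

# Hover check: thrust equal to the weight keeps the vehicle stationary.
x, v, omega, R = np.zeros(3), np.zeros(3), np.zeros(3), np.eye(3)
for _ in range(1000):
    x, v, omega, R = step(x, v, omega, R, thrust=m * g, tau=np.zeros(3))
print(np.allclose(x, 0.0, atol=1e-9))
```

A real simulation would use a higher-order integrator and re-orthonormalize R, but the structure above maps one-to-one onto equations (2) and (4).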
Conclusion
The use of unmanned aerial vehicles in agriculture can become the main tool for precision farming. The desire to introduce precision farming technologies in modern agricultural enterprises leads to an increase in the efficiency of all processes. Using spectral sensors on unmanned aerial vehicles, farmers can receive information not only in the visual spectrum but also in various spectral ranges for calculating vegetation indices or compiling soil distribution maps. All data are provided with precise coordinates, with the possibility of detailed study and laboratory analysis. Unmanned aerial vehicles used in agriculture monitor plant growth, track the movement of harmful insects, and make electronic maps for agriculture. They can be roughly divided into two groups: quadrocopters and small aircraft-type UAVs (drones). They differ in size, functionality, flight range, level of autonomy and other characteristics.
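The vegetation indices mentioned above are simple per-pixel functions of multispectral reflectance. As a minimal illustration (our sketch, not code from any of the systems described), the most widely used index, NDVI, is computed from red and near-infrared reflectance as follows:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 suggest dense, healthy vegetation; values near 0,
    bare soil; negative values, water or clouds. eps avoids division
    by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patches: healthy crop, stressed crop, bare soil, water.
nir = np.array([[0.50, 0.40], [0.30, 0.02]])
red = np.array([[0.08, 0.20], [0.25, 0.05]])
print(np.round(ndvi(nir, red), 2))
```

In practice, drone imagery is first orthorectified and radiometrically calibrated before such indices are mapped over a field.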
The main advantage of an aircraft-type UAV is its flight range. Such devices can provide a flight duration of up to three hours, while the maximum flight duration of quadrocopters used in Kazakhstani agricultural enterprises does not exceed 20-30 minutes.
Unmanned aerial vehicles and, separately, UAV services in Kazakhstan are offered by the «TerraPoint» company. The multipurpose drones offered by the company are designed for monitoring, aerial photography and mapping in agriculture and allow aerial photography from a height of 100 to 3000 meters. When acquiring ownership of a UAV, TerraPoint provides a free training service for the user of the aircraft. When an agricultural enterprise orders UAV services, TerraPoint provides services in accordance with the agreed terms of reference for the season: analysis of the NDVI vegetation index, monitoring of weediness, sifting, heterogeneity, disease development, mapping, etc.
Reducing uncertainty in local temperature projections
Planning for adaptation to climate change requires accurate climate projections. Recent studies have shown that the uncertainty in global mean surface temperature projections can be considerably reduced using historical observations. However, the transposition of these new results to the local scale is not yet available. Here, we adapt an innovative statistical method that combines the latest generation of climate model simulations, global observations, and local observations to reduce uncertainty in local temperature projections. By taking advantage of the tight links between local and global temperature, we can derive the local implications of global constraints. Model uncertainty is reduced by 30% to 70% at any location worldwide, substantially improving the quantification of risks associated with future climate change. A rigorous evaluation of these results within a perfect model framework indicates robust skill, leading to high confidence in our constrained climate projections.
INTRODUCTION
As the global mean temperature keeps rising and climate change intensifies, there is a growing demand for local-scale monitoring of current and future climate change. Assessing and planning the adaptation to the expected unprecedented impacts of climate change on human activities, ecosystems, and the biosphere as a whole require an accurate local information with well-calibrated uncertainties. This need relates to estimates of warming to date and the future warming in response to set of scenarios of future greenhouse gas emissions.
It is unequivocal that human influence has warmed the atmosphere, ocean, and land since preindustrial times (1). Concurrently, the anthropogenic influence is not detected everywhere at the local scale (2,3). Natural climate variability can blur the emergence of the anthropogenic signal for the next few years at high latitudes, while a substantial warming is already reported in several tropical regions (4,5). Regarding climate projections, the Intergovernmental Panel on Climate Change (IPCC) concluded in its fifth assessment report (AR5) (6) that "Future [human-induced] warming trends cannot be predicted precisely, especially at local scales".
In the IPCC AR6 (7), a new generation of climate models (8) has been used to provide a range of projections in response to different socioeconomic scenarios (9). On the basis of this new dataset, various studies have recently shown that uncertainty in global mean warming can be considerably reduced using the information provided by recent observed warming trends via so-called "constraint" methods (10)(11)(12)(13). These studies consistently point toward a downward revision of the expected warming in all emission scenarios (10,12), with a decrease in model uncertainty of nearly 40% for end of century projections (11), and even more at shorter lead times. This is an important result as, until then, observations have failed to provide clear evidence in reducing the range of climate projections (14).
The next challenge is to transpose these new findings on global warming to regional and local scales. At the regional scale, a few studies have adopted the partitioning from the Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX) (15) and have attempted to narrow model uncertainty with sophisticated techniques with promising results (16,17). However, the SREX regions are typically continental-wide and do not provide relevant information for local adaptation. At the local scale (defined as the size of a global climate model grid box of about 200 km) and to the best of our knowledge, only a few studies have attempted to narrow climate model uncertainty, by using weighting methods to account for interdependencies between models (18,19) or by focusing on specific and limited areas (20)(21)(22). In particular, although constrained projections of global mean temperature are now used in the IPCC AR6 (7), local climate projections are still solely based on a raw ensemble of available climate models (https:// interactive-atlas.ipcc.ch/), derived from global warming levels.
Here, we assess how much uncertainty in local temperature projections can be reduced. We first take advantage of the tight links that exist between local climate and global mean temperature (23,24). Specifically, we describe the local implications of the recent advances in the reduction of the uncertainties in global mean temperature projections. We then provide a set of local-scale temperature projections, which encapsulate another source of information: the observed local warming to date. If compared to the global mean temperature record, internal variability is larger in local observations. However, they still provide a useful source of information about both past and future trends, particularly over some specific regions. We discuss how much these two types of observations (global and local) narrow uncertainty on future warming ranges. Such a reduction is expected to provide more accurate information that becomes critical for policy-makers in the local climate risk management (25), as well as for the climate science community.
RESULTS
The Kriging for Climate Change (KCC) method used by Ribes et al. (11,16) is one of the statistical techniques that have led to a significant reduction of uncertainty in probabilistic projections of global mean temperature by combining observations and models (7,11). This Bayesian method involves three steps. First, the response to all external forcings for each climate model considered is estimated over the period 1850-2100, after filtering out internal variability as much as possible. Second, the sample of forced responses from available climate models is used as a prior of the real-world forced response for each grid point. This is done assuming that "models are statistically indistinguishable from the truth." Third, observations are used to derive a posterior distribution, i.e., a constrained temperature response, of the past and future forced response given observations, in a Bayesian way.
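The three steps can be illustrated with a toy Gaussian analogue. This is our sketch, not the actual KCC implementation, and all numbers are invented: the model ensemble supplies a joint prior over past and future warming (steps 1-2), and conditioning on an observation of past warming yields a constrained posterior for future warming (step 3).

```python
import numpy as np

# Steps 1-2: forced responses from a (toy) model ensemble, summarized as
# a Gaussian prior over the vector (past warming, future warming).
ensemble = np.array([[0.9, 3.5], [1.1, 4.6], [1.0, 4.0],
                     [1.3, 5.2], [0.8, 3.1], [1.2, 4.8]])
mu = ensemble.mean(axis=0)               # prior mean (past, future)
S = np.cov(ensemble, rowvar=False)       # prior covariance across models

# Step 3: condition on an observation of past warming with obs error.
y_obs, sigma_obs = 0.95, 0.05
gain = S[0, 1] / (S[0, 0] + sigma_obs**2)
post_mean = mu[1] + gain * (y_obs - mu[0])
post_var = S[1, 1] - S[0, 1] * gain

print(f"prior future warming:       {mu[1]:.2f} +/- {np.sqrt(S[1, 1]):.2f}")
print(f"constrained future warming: {post_mean:.2f} +/- {np.sqrt(post_var):.2f}")
```

Because the invented observation sits below the ensemble-mean past warming and past and future warming are positively correlated across the toy ensemble, the posterior is revised downward and its variance shrinks, which is the same qualitative behavior as the GMST constraint described in the text.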
Compared to many emergent constraint approaches in which observations are often summarized into one single variable (26, 27), the KCC method is able to take the full observed time series into account. Here, we further extend this technique to account for multiple time series and potential dependencies between them (see Materials and Methods). When applied to the global surface air temperature (GSAT) time series simulated by the models from the Coupled Models Intercomparison Project phase 6 (CMIP6) (8) under the Shared Socioeconomic Pathway SSP5-8.5 scenario, the amplitude of the best estimate of the projected GSAT changes constrained by the observations is revised downward by 0.5°C by 2100, with a reduction in model uncertainty of nearly 40% (11). Here, we consider global mean surface temperature (GMST; a blending of land air and ocean sea surface temperatures; see Materials and Methods) instead of GSAT, as GMST is more consistent with the local observational record. Figure 1A offers an update of the GMST constraint: the warming of 5.3°C projected by CMIP6 is in this case revised downward by 0.4°C (best estimate) in 2100. Minor differences with Ribes et al. (11) are explained by 2 years of additional observations, by the addition of several CMIP6 models, which affect the prior distribution, and by the lower warming observed in GMST compared to GSAT (28, 29).
Constrain local climate projections with global observations
Climate models exhibit a strong correlation between future GMST and local warming over most regions of the globe (Fig. 1B). To take such a relationship into account, we extend the KCC method to constrain local temperature projections. Beyond the simple correlation shown in Fig. 1B, this method uses all the information contained in the entire observed GSAT time series to derive local warming (considering that the annual time series provides useful additional information, e.g., to distinguish between greenhouse gases (GHG) and aerosol forcings). This is done by deriving the local warming conditional on the observed GMST record (hereafter the GMST-only case; see Materials and Methods). As an example, we consider the North American city of Dallas for which the simulated local temperature over the 2022-2100 period is significantly correlated with future GMST (see the corresponding point in Fig. 1B). Consistent with GMST results, the local temperature range constrained by GMST observations indicates a decrease in uncertainty of about 20% over the 2021-2040 period, up to 30% over the 2081-2100 period ( Fig. 2A) in the GMSTonly case. The best estimate of local warming is revised downward by 0.4°C by 2100 compared to the unconstrained projections (hereafter the unconstrained case).
When the method is applied to any location worldwide, the results in the projected mean temperature and in model uncertainty show a clear relationship with the level of correlation between the local temperature and GMST (Figs. 3, B and D, and 1B). Note that the method is applied for each location separately, i.e., that it does not take advantage of spatial correlations and does not provide constrained projections at a larger regional scale. The reduction of uncertainty in local projections is the highest at the locations where the correlation with GMST is the strongest. For these locations, e.g., over several continental regions, the North Pacific and the Indian Ocean, a reduction of the ensemble spread of about 45% is obtained over the 2021-2040 period, with a downward revision of the best-estimate warming between 0.2° and 1°C (Fig. 3, A and C). Conversely, for locations where the correlation is low, such as in the southern Indian Ocean and the Barents Sea, the local temperature response is weakly constrained, with a reduction of the model uncertainty of 10% and a revision of the best estimate by 0.5°C or less. These revised ranges lead to a warming pattern at +2°C of global warming, considerably different from that assessed in the AR6 (Fig. 4, A and B, and fig. S1C). For example, the constrained local temperatures over North America are expected to be 0.4°C warmer compared to the unconstrained case. Note that, by construction, only the spatial pattern of warming is affected by the observational constraint in this case, since the +2°C warming level itself is left unchanged; as a result, the map of differences necessarily mixes patches of positive and negative values, even though warming ranges at a given date are all revised downward (see caption of Fig. 4).
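The dependence of the achievable uncertainty reduction on the local-global correlation can be made explicit with a back-of-envelope bivariate-Gaussian approximation. This is our illustration, not a calculation from the paper: if r is the inter-model correlation between local and future global warming and the global response is constrained with observational-error variance expressed as a fraction of the global-response variance, the fractional reduction in local spread is 1 − sqrt(1 − r²·shrink).

```python
import numpy as np

def spread_reduction(r, obs_noise_frac=0.0):
    """Fractional reduction in local-warming spread after conditioning on
    the global response, for inter-model correlation r between local and
    global warming. obs_noise_frac is the observational-error variance as
    a fraction of the global-response variance (0 = perfect constraint).
    Bivariate-Gaussian approximation, for illustration only."""
    shrink = 1.0 / (1.0 + obs_noise_frac)
    return 1.0 - np.sqrt(1.0 - r**2 * shrink)

for r in (0.3, 0.6, 0.9):
    print(f"r = {r:.1f}: spread reduced by {100 * spread_reduction(r):.0f}%")
```

Even under a perfect global constraint, a weakly correlated location (r ≈ 0.3) gains only a few percent, while a strongly correlated one (r ≈ 0.9) gains more than half of its spread, which is qualitatively consistent with the contrast between the ~10% and ~45% reductions reported above.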
Added value of local observations to the constraints
Beyond the useful information provided by the historical GMST time series, it is insightful to assess the consistency between the expected local response (regardless of whether observed GMST is accounted for) and local historical observations. Current and past warming is spatially heterogeneous, and some regions such as the Arctic are warming faster than others (Fig. 3A) (30). Therefore, it is relevant to account for both GMST and local observations to provide local projections consistent with all available observations. Using recent local observations could particularly affect short-term projections (typically over the 2021-2040 period) and could provide a different picture of the constrained temperature ranges.
To make such a calculation, we derive a posterior of the expected local warming given local historical observations in addition to the GMST observations (hereafter the Local + GMST case; see Materials and Methods). Following the example of Dallas considered in Fig. 2A, the constrained local temperature ranges typically become closer to long-term local observed variations (Fig. 2B compared to Fig. 2A). The constraint by local observations is found to influence the best estimate of future warming even at longer lead times. Regarding the uncertainty, the added value of local observations in the reduction of the model spread is limited in this example, with a decrease of about 10% of the confidence range width compared to the GMST-only case. Two reasons contribute to this limited impact and must be considered for any location. First, the local signal-to-noise ratio can be small. This may happen if local internal variability or measurement uncertainty is large (i.e., local observations provide little insight into the externally forced response). Second, the global and local responses can be highly correlated with each other, so that they partly provide the same information, leading to a limited impact of local observations on the uncertainty ranges. In both cases, the model uncertainty will be only marginally reduced by local historical data (see Materials and Methods).
The application of the Local + GMST constraint to all grid points worldwide results in a projected warming pattern, which remains quite close to the GMST-only case (Fig. 3, C and E) but with regional differences. On the one hand, for several regions over the Arctic, the warming is revised downward compared to the unconstrained case, making the projections more consistent with recent observations and implying a reduced warming compared to the GMST-only case. On the other hand, an upward revision is obtained over Eastern Asia, Greenland, the East Siberian region, and Southern oceans. The added value of local observations in the reduction of model uncertainty is largest over these regions where the correlation in Fig. 1 is low (Fig. 3F). The estimated projections for the 2081-2100 period can also be derived and are consistent with these results ( fig. S2). Note that even if each grid point is treated independently from the others (see Materials and Methods), the global mean of the constrained local ranges (for both the GMST-only and the Local + GMST cases) is very close to the constrained GMST ranges shown in Fig. 1A, suggesting a consistency between the different types of constraints.
The addition of local information can clearly modify the warming pattern at +2°C of global warming (Fig. 4, B and C; and fig. S1, B and C). For example, while a downward revision of the temperature change of −0.2°C is obtained over Europe in the GMST-only case, an upward revision of 0.3°C is obtained in the Local + GMST case. This change of sign is widespread over Eurasia. In the context of an urgent need for adaptation to the threat of climate change, our constrained warming pattern provides revised and more relevant information for local adaptation planning.
Evaluation of the constrained projections
The robustness of these promising results is quantified within a so-called perfect model framework, using a leave-one-out cross-validation (see Materials and Methods). Each member of each model is considered as pseudo-observations over the 1850-2021 period. These are subsequently used to constrain the temperature projections, using all other models as a prior. The constrained temperature range is then compared to the projected warming simulated by the model from which the pseudo-observations were taken. As making this evaluation for all grid points is computationally expensive, the procedure is applied to several locations considered representative of the diversity of the worldwide climate. As for the real observations, we assess both the GMST-only and Local + GMST constraints. The continuous ranked probability skill score (CRPSS) (31) is used to measure the accuracy of the method, taking the unconstrained projections as a baseline in a first step (see Materials and Methods). Figure 5A shows that the average of the CRPSS distribution based on all pseudo-observations is positive for every location in the GMST-only case, with an improvement of about 30% over the 2021-2040 period. Depending on the location, the skill is improved by 10 to 50% (for the average) (Fig. 5 and figs. S5 to S9). In the Local + GMST case, the average skill is also positive (Fig. 5C), and for most locations, the CRPSS lies between 10 and 60% relative to the unconstrained case.
These results clearly demonstrate the performance of the method over short lead times (2021-2040). Moreover, the comparison between the Local + GMST and the GMST-only constraints indicates that the skill is slightly improved when adding local observations to constrain projections. The average of the CRPSS distribution is positive for 52 of 55 locations when comparing the Local + GMST and the GMST-only constraints (Fig. 5D and see boxplots in magenta in figs. S5 to S9) and is slightly negative for the remaining points. The significance of this result is assessed with a binomial test. Under the null hypothesis that adding local observations has no impact on the skill (i.e., that the GMST-only and Local + GMST cases are equally likely to score better, as in a coin toss), the probability of getting this result by pure chance is less than 0.1%. This suggests that there is a clear added value in considering the constrained ranges derived from the Local + GMST case relative to the GMST-only case. A third case, in which we use only local observations to constrain projections (Local-only case), yields lower scores than the Local + GMST case (Fig. 5B) and confirms that combining global and local observations enhances the accuracy of the method.
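The binomial significance test above can be reproduced directly; a minimal sketch, taking the counts 52 and 55 from the text and the null probability p = 0.5 from the coin-toss analogy:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """One-sided tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of 52 or more "wins" out of 55 locations under a fair-coin null
p_value = binom_sf(52, 55)
```

The resulting p-value is many orders of magnitude below the 0.1% threshold quoted in the text.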
Similar results are obtained when we consider the 2081-2100 period (fig. S11).
The rare cases where the CRPSS is negative are due to the large low-frequency variability in a few CMIP6 models. This is a topic of concern, as several CMIP6 models are characterized by clear multidecadal and even centennial internal variability in GMST (32,33). Figures S12 to S16 show that models associated with strong decadal variability (e.g., CNRM-CM6-1, EC-Earth3, and IPSL-CM6A-LR) are those contributing negatively to the CRPSS in most cases. The assumptions used in the KCC method to statistically model internal variability might explain the failure to capture such slow internal variability, resulting in overconfidence. Excluding those specific models from the evaluation process leads to a clear increase in the CRPSS, by 15%, in both the GMST-only and Local + GMST cases (fig. S17). Discussing the realism of these particular models requires further analyses of internal variability (34,35) and climate sensitivity (36), which are beyond the scope of this study. Still, regardless of whether the real system contains such low-frequency variability, the CRPSS distributions remain mostly positive across locations for the three types of constraint, even if models with large internal variability are included. Both distributions in fig. S17 (excluding models with large internal variability) are above the distributions (including all models) in fig. S10, which themselves show mostly positive values (see the median value). Last, the relevance and reliability of our method seem robust to including all CMIP6 models and would be even strengthened if the models with large internal variability were proven less realistic. This is supported by a second evaluation criterion based on coverage probabilities, which leads to similar conclusions (see Supplementary Discussion). From all of these evaluation results, we retain the Local + GMST case to provide guidance in constraining local projections.
The evaluation of the KCC method suggests that the constrained temperature ranges are reliable and demonstrates that relying on unconstrained projections to assess the local future climate is no longer the best approach.
DISCUSSION
We have shown, using a statistical method combining the entire temperature observation record with model simulations, that uncertainty in local temperature projections can be substantially narrowed. Local projections constrained by both global and local observations exhibit a reduction of the uncertainty of 40% on average by 2100. This demonstrates the benefits of merging model simulations with observations to provide the best picture of future climate change. Figure 6 offers a complementary perspective to the AR6 (7) conclusions, which were solely based on raw (unconstrained) projections. For each location, a temporal evolution from 1850 to 2100 of the constrained temperature and its uncertainty can be derived, with revised projections for the near- and long-term time scales. In particular, the KCC method provides an alternative to the concept of global warming levels by estimating the uncertainty for a given date. This fills a gap in the IPCC atlas (only based on unconstrained projections) and provides a considerable revision of the local exposure to the consequences of ongoing climate change (37). An online tool that implements the method and illustrates the constrained temperature ranges for every point on a horizontal grid of 2.5° resolution is available via the following demonstrator: https://saidqasmi.shinyapps.io/KCC-shinyapp/.
Promising prospects exist to improve the constrained projections. The CMIP5 (38) and CMIP6 (8) ensembles sample model uncertainty in a probabilistic way, using all climate models as an "ensemble of opportunities" (39,40). This approach has several limitations that can bias the estimation of climate uncertainty (41). First, these ensembles are not designed to sample uncertainty comprehensively (42); e.g., no models are intentionally built to have low or high climate sensitivities. Forcing uncertainty is also poorly sampled in the CMIP6 ensemble, e.g., the magnitude of the aerosol forcing in 2014 (43). Second, each model output is considered independent and contributes equally to the multimodel ensemble, although it is known that many CMIP6 models share common components and parametrizations (44). This "model democracy" paradigm has been largely used to summarize projection information in IPCC assessment reports (6), although it can be criticized (45). Therefore, using a subset of models qualified as independent a priori, or weighting the models accordingly (46,47), before applying the observational constraint may provide even more reliable results.
Our results demonstrate that available observations offer valuable information to reduce uncertainty in climate projections over the next decades. Even with the continuous improvement of climate models, the intermodel spread is not necessarily reduced for several variables from one CMIP generation to the next (1). In this sense, the contribution of observations and constraint methods is expected to improve the reliability of the projections. Therefore, it is critical to account for this new source of information and to regularly bridge the gap between monitoring recent changes and predicting future changes.
The KCC method itself can also be improved. Although it can be used on larger areas to easily derive constrained projections, e.g., on the SREX regions (16), the current implementation does not take into account the spatial dependence of climate variability between locations. Taking the spatial dimension fully into account could bring additional useful information and would result in consistent uncertainties at all spatial scales. This requires dealing with much larger covariance matrices. Estimating those matrices is a key challenge, as the number of models is much smaller than the number of grid points and years. Reducing the dimension of the problem with methods used in data assimilation, or considering spatiotemporal principal components, may help to regularize these high-dimensional matrices.
Generalizing the method to other variables of high societal impact, e.g., extreme precipitation, droughts, and snow cover, some of which are also tightly related to GMST, would also be very relevant. For some of these variables, the observed data are sparse, which requires finding well-sampled covariables over the historical period. In addition, variables such as sea ice or precipitation do not necessarily follow Gaussian distributions, which makes it necessary to adapt the KCC method to other types of distributions. In this way, the climate science community could take a step forward toward a more accurate assessment of past and future human-induced climate change.
MATERIALS AND METHODS
Observational dataset and models
The temperature observations are from the HadCRUT5 (48) dataset over the 1850-2021 period. The temperature field comes from a blending of near-surface air temperature and sea surface temperature using a land-sea mask and sea ice concentration. The measurement uncertainty of the HadCRUT5 dataset is estimated from an ensemble of 200 equiprobable realizations. Most of the other observational products are included in the temperature range estimated by this ensemble, which supports our choice of the HadCRUT5 dataset as a reference.
CMIP6 models are selected according to the availability of the following data: at least 200 years of a preindustrial control (piControl) simulation, and at least one member of a historical simulation and one member of a projection simulation for the SSP5-8.5 scenario. To constrain the simulated temperatures at the grid point scale in a consistent way, a blended temperature field T_blend is computed in each CMIP6 model based on the formulation of Morice et al. (48), where T_air, T_ocean, f_ice, and f_ocean are, for each grid point, the near-surface air temperature, sea surface temperature, sea ice concentration, and sea area fraction. The 27 models for which these variables are available and which satisfy the above criteria are listed in Table 1.
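As a sketch of this blending step, the following assumes the common weighting in which SST is used over open ocean and air temperature over land and sea ice; the exact weights of the Morice et al. (48) formulation are not reproduced in the text, so this is an illustrative assumption:

```python
def blend_temperature(t_air, t_ocean, f_ice, f_ocean):
    """Blend air and sea surface temperature for one grid point.

    Assumed weighting (a common choice for HadCRUT-style blending, not
    necessarily the exact formulation of Morice et al.): SST over open
    ocean, near-surface air temperature over land and sea ice.
    """
    w_sst = f_ocean * (1.0 - f_ice)   # open-ocean fraction of the cell
    return w_sst * t_ocean + (1.0 - w_sst) * t_air
```

With this weighting, a land cell returns the air temperature, an ice-free ocean cell returns the SST, and a fully ice-covered ocean cell falls back to the air temperature.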
We define the GSAT as the global mean of T air and the GMST as the global mean of T blend . Several studies have shown that GMST and GSAT clearly differ as GMST warms less than GSAT (28,29).
Models are interpolated on a common horizontal grid of 2.5° resolution before calculating blended temperatures and applying the constraining method. This choice is motivated by a compromise between the different resolutions of the CMIP6 models (between 1.5° and 2.5°). Note that the KCC method can be applied at finer resolutions if observations are available at this scale. For temperature, for which the spatial autocorrelation is high, the reduction in the uncertainty is expected to be similar to that at the 2.5° resolution.
Statistical method
The statistical method is the same as that used by Ribes et al. (11), whose formulation and principle are similar to kriging, a method originally developed to interpolate geophysical data based on prior covariances. In Ribes et al. (11), this method is applied to the analysis of time series from climate simulations of CMIP5 and CMIP6 models and is used for several purposes: (i) reducing model uncertainty on past and future global warming estimated by CMIP and ScenarioMIP (9) simulations, (ii) reducing uncertainty on warming attributed to several external forcings via the Detection and Attribution Model Intercomparison Project (DAMIP) (49) models, and (iii) completing or statistically reconstructing missing simulations from other physically relevant simulations (e.g., using the so-called 1% CO2 simulations, in which the CO2 concentration increases by 1% each year, to reconstruct DAMIP historical simulations, in which greenhouse gases follow their historical concentrations while other forcings are kept constant). Here, we apply this KCC method to reduce the model uncertainty in the past and future temperature changes simulated by CMIP6 models at each grid point.

Table 1. List of the available CMIP6 models and the associated number of members in the historical and SSP5-8.5 simulations used to constrain temperature projections.

Note that confusion can arise with techniques based on so-called emergent constraint methods (26,27). Emergent constraints usually consider only the observed global warming trend (a single scalar), e.g., over the 1980-2021 period, to constrain the simulated temperature changes in the future. The KCC method has several advantages over this approach. Instead of simply constraining a trend over a subperiod, it uses the entire observed time series of temperature, which avoids ignoring useful information. In addition, the method takes into account the uncertainty in the model temporal pattern and provides confidence ranges specifically for the forced response, while many other studies also include internal variability. For a given grid point, we define y_loc* as the yearly time series of the real (and unknown) temperature response to external forcings over the 1850-2021 period and y_loc as the observed yearly temperature time series over the same period. Similarly, for the GMST, we define the vectors y_glo* and y_glo as the unknown response to external forcings and the observed time series, respectively. They constitute the vectors y* = (y_loc*, y_glo*) and y = (y_loc, y_glo), both of size 2n_y, where n_y = 170.
Assuming that the observed total temperature variability can be decomposed as the sum of a forced-variability term and a term including both internal variability and measurement errors, y takes the following form:

y = y* + ϵ (3)

where ϵ = (ϵ_loc, ϵ_glo) is a vector of size 2n_y corresponding to the local and global terms of measurement errors and observed internal variability. Further assuming that models are indistinguishable from the truth, i.e., that observations and models are exchangeable (50-52), the observations y can be rewritten as

y = Hx + ϵ (4)

where x = (x_loc, x_glo), and x_loc and x_glo are the yearly time series over the 1850-2100 period of the local and global temperature responses to external forcings estimated in CMIP6 models, respectively, i.e., vectors of size n_x = 251. H is an observation operator of size 2n_y × 2n_x, which extracts the part of x that is observed in y, i.e., the forced response from 1850 to 2021, and whose form depends on the type of constraint applied (using only GMST observations or both GMST and local observations; see eq. S21). Note that the assumption of exchangeability between observations and models has been suggested to be well supported by observations, especially for temperature (50,53).
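The operator H can be sketched as a selection matrix; since eq. S21 is not reproduced in the text, the block layout below is an assumption consistent with the stated sizes, using small toy dimensions:

```python
import numpy as np

n_y, n_x = 5, 8   # toy sizes standing in for the observed and simulated periods

# S extracts the first n_y (observed) entries of an n_x-long forced response
S = np.eye(n_y, n_x)

# Local + GMST case: both blocks of x = (x_loc, x_glo) are observed
H_local_gmst = np.block([[S, np.zeros((n_y, n_x))],
                         [np.zeros((n_y, n_x)), S]])

# GMST-only case: only the global block of x is observed
H_gmst_only = np.hstack([np.zeros((n_y, n_x)), S])
```

Applying H_local_gmst to a concatenated vector x returns the first n_y entries of each block, i.e., the historical part of the local and global forced responses.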
For a given CMIP6 model m listed in Table 1, we estimate the simulated response to all external forcings, x_m,glo, by decomposing the simulated GMST over 1850-2100 into an anthropogenic response x_m,ant,glo and a natural response x_m,nat,glo. Therefore, after averaging all available members of the model m, the simulated GMST time series over 1850-2100 writes

x_m,glo = x_m,ant,glo + x_m,nat,glo + ϵ_m (5)

where ϵ_m is a random term for internal variability.
To estimate x_m,nat,glo and x_m,ant,glo in the model m, we use a generalized additive model (GAM) to compute the response to all external forcings, x_m,glo (recall that x_m,glo follows Eq. 5):

x_m,glo = f(t) + β_m e + ϵ_m (6)

where β_m is an unknown scaling factor. e is a vector of size n_x and is the temperature response to the natural forcings computed from a two-layer (atmosphere-ocean) energy balance model (EBM), following equations 1 and 2 of Geoffroy et al. (54), using as a radiative term the historical and SSP5-8.5 natural forcings between 1850 and 2100 estimated by the Priestley Center (55). Here, e is calculated using an average of the EBM parameters fitted to the CMIP6 ensemble and aims at estimating rapid year-to-year variations due to natural forcings. f(t) is a time series [with t = (1850, …, 2100)] and refers to an assumed smooth response of GMST to the anthropogenic forcings (i.e., a smoothed response of x_m,glo). The function f corresponds to smoothing splines used to filter out part of internal variability, with 6 degrees of freedom (a value selected as a bias-variance trade-off). The total forced responses estimated by this procedure are illustrated in fig. S18. We apply the exact same procedure to estimate the local forced responses as simulated by each CMIP6 model. For each grid point of the model m, we consider x_m,loc, the average of all available members, to estimate the local forced response x_m,all,loc. We assume that the local natural response scales linearly with the globally averaged natural forcing time series: the EBM response e used to calculate x_m,nat,glo is also used when we fit the GAM to compute the local natural response x_m,nat,loc. Thus, x_m,nat,glo and x_m,nat,loc differ only by their scaling factors β_m. We believe that our results are not sensitive to this choice, given the reduced strength of, and uncertainty in, the natural response compared to the anthropogenic response.
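A least-squares stand-in for this decomposition can illustrate the fit; here a low-order polynomial basis replaces the smoothing splines, and the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_forced_response(x, e, n_dof=6):
    """Least-squares stand-in for the GAM x = f(t) + beta * e.

    f(t) is represented by a polynomial basis with `n_dof` columns
    (a simplification of the smoothing splines used in the paper).
    Returns the smooth anthropogenic part f_hat and the natural-forcing
    scaling factor beta_hat.
    """
    t = np.linspace(-1.0, 1.0, len(x))
    basis = np.vander(t, n_dof, increasing=True)      # columns 1, t, ..., t^5
    design = np.column_stack([basis, e])
    coefs, *_ = np.linalg.lstsq(design, x, rcond=None)
    f_hat = basis @ coefs[:-1]
    beta_hat = coefs[-1]
    return f_hat, beta_hat
```

On noise-free synthetic data in the span of the design matrix, the fit recovers the smooth trend and the scaling factor exactly.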
The multimodel ensemble of the local and global simulated responses to all external forcings is used to derive a distribution of x, noted π(x) ∼ N(μ, Σ_mod), built from all the x_m,glo and x_m,loc. μ = (μ_loc, μ_glo) is a vector of size 2n_x and is the multimodel ensemble mean of the concatenated local and global forced responses. Σ_mod is a variance-covariance matrix of size 2n_x × 2n_x that describes the model spread, with the following form:

Σ_mod = ( Σ_mod,loc    Σ_mod,dep
          Σ_mod,dep^T  Σ_mod,glo )   (7)

where Σ_mod,loc and Σ_mod,glo are the sample covariance matrices of size n_x × n_x modeling the local and global model spread within x_loc and x_glo, respectively, and Σ_mod,dep is the covariance matrix modeling the dependence between x_loc and x_glo. In our Bayesian framework, π(x) is a first (probabilistic) estimate of x, which makes no use of observations and is only based on climate models. We want to update this estimate by incorporating the observational evidence provided by y. Following Bayesian theory, this requires the calculation of the posterior distribution p(x|y). A prerequisite is to define the observational uncertainty, i.e., the covariance matrix associated with y.
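Building μ and Σ_mod from an ensemble of model responses can be sketched as follows (toy shapes; np.cov with rowvar=False gives the sample covariance across models, so the block structure of Eq. 7 emerges directly):

```python
import numpy as np

def build_prior(x_loc_models, x_glo_models):
    """Multimodel prior pi(x) ~ N(mu, Sigma_mod) from stacked forced responses.

    x_loc_models, x_glo_models: arrays of shape (n_models, n_x).
    The top-left block of Sigma_mod is the local spread, the bottom-right
    block the global spread, and the off-diagonal blocks their dependence.
    """
    X = np.hstack([x_loc_models, x_glo_models])   # shape (n_models, 2 n_x)
    mu = X.mean(axis=0)
    sigma_mod = np.cov(X, rowvar=False)           # shape (2 n_x, 2 n_x)
    return mu, sigma_mod
```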
Modeling of observational uncertainty
Given Eq. 4, we assume that ϵ ∼ N(0, Σ_obs), where Σ_obs = Σ_meas. + Σ_iv is the observation error covariance matrix. Σ_meas. and Σ_iv are both of size 2n_y × 2n_y and describe the measurement error and internal variability, respectively. Σ_meas. is estimated as the sample covariance matrix over the 200-member ensemble of the HadCRUT5 dataset (48). Σ_iv is estimated using the observed annual time series of global and local temperature over the 1850-2021 period. First, we compute the global observational residuals by subtracting the CMIP6 response to all external forcings, μ_glo(1,…,n_y), from the observations y_glo. Similarly, we derive local residuals by subtracting μ_loc(1,…,n_y) from y_loc. These residuals constitute a first estimate of global and local internal variability, noted ϵ̂_iv,glo,1 and ϵ̂_iv,loc,1, respectively.
We define Σ_iv as a matrix of size 2n_y × 2n_y of the following form:

Σ_iv = ( Σ_iv,loc    Σ_iv,dep
         Σ_iv,dep^T  Σ_iv,glo )   (8)

where Σ_iv,loc and Σ_iv,glo are the covariance matrices of size n_y × n_y modeling local and global internal variability within y_loc and y_glo, respectively, and Σ_iv,dep is the covariance matrix modeling the dependence between local and global internal variability, i.e., between ϵ_iv,loc and ϵ_iv,glo.
To compute Σ_iv, we take into account the decadal internal variability that exists in the global (56), regional (57), and even local (58) observations, using a mixture of two autoregressive processes of order 1 (AR1), hereafter mixture of autoregressive processes (MAR), as done by Ribes et al. (11). The MAR formulation includes a fast (f) and a slow (s) component, such that the global internal variability ϵ_iv,glo within the GMST residuals writes, at a time t:

ϵ_iv,glo(t) = ϵ_iv,f,glo(t) + ϵ_iv,s,glo(t),
ϵ_iv,f,glo(t) = φ_f,glo ϵ_iv,f,glo(t − 1) + Z_f,glo(t),
ϵ_iv,s,glo(t) = φ_s,glo ϵ_iv,s,glo(t − 1) + Z_s,glo(t)   (9)

where the parameters φ_s,glo and φ_f,glo are the lag-1 coefficients of the AR1 processes, with φ_s,glo ≥ φ_f,glo by convention. Z_s,glo(t) ∼ N(0, σ²_s,glo) and Z_f,glo(t) ∼ N(0, σ²_f,glo) are the white noises associated with the two AR1 processes. The slow component is able to generate dependence on time scales of typically one decade, while the fast component accounts for interannual variability. Following the principle of parsimony, only four coefficients (σ²_f,glo, φ_f,glo, σ²_s,glo, and φ_s,glo) are thus needed to characterize internal variability at the global scale and to make Σ_iv,glo invertible. We fill the covariance matrix Σ_iv,glo from these coefficients, as detailed in eq. S8. In practice, we apply a maximum likelihood procedure to the local and global residuals according to the statistical model of Eq. 9. Uncertainty in these coefficients is not taken into account. We then make the same assumptions and estimate four other parameters (σ²_f,loc, φ_f,loc, σ²_s,loc, and φ_s,loc) to characterize the fast and slow components of local internal variability ϵ_iv,loc and to compute Σ_iv,loc.
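Because the two AR1 components are independent, each entry of Σ_iv,glo follows from the standard stationary AR(1) autocovariance γ(k) = σ² φ^|k| / (1 − φ²), summed over the fast and slow components; a sketch with illustrative parameter values:

```python
import numpy as np

def mar_covariance(n, var_f, phi_f, var_s, phi_s):
    """Covariance matrix of the sum of two independent AR(1) processes.

    Uses the stationary AR(1) autocovariance gamma(k) = var * phi**|k| / (1 - phi**2)
    for each component (fast + slow) and sums them.
    """
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    gamma_f = var_f / (1.0 - phi_f**2) * phi_f**lags
    gamma_s = var_s / (1.0 - phi_s**2) * phi_s**lags
    return gamma_f + gamma_s
```

The resulting matrix is symmetric and positive definite, so it is invertible as required by the constraint.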
The initial estimate of Σ_iv, noted Σ̂_iv,1, is solely based on the residuals ϵ̂_iv,loc,1 and ϵ̂_iv,glo,1 derived from the unconstrained forced response. This first estimate is likely flawed, as the real (and unknown) forced response y* is not necessarily consistent with the unconstrained forced response estimated by μ. In addition, as μ can by construction differ from the best estimate of the constrained forced response μ̂_1 (the mean of the posterior distribution p(x|y); see section below), the residuals ϵ̂_iv,loc,1 and ϵ̂_iv,glo,1 before the constraint are not always coherent with the residuals ϵ̂_iv,loc,2 and ϵ̂_iv,glo,2 computed as the difference y − μ̂_1.
Hence, to ensure an accurate estimation of internal variability in the constraint procedure, an iterative algorithm is applied to find the MAR parameters that fit the residuals from the constrained forced response, where, for each iteration n, μ̂_n and ϵ̂_iv,n are the estimates of the forced response and internal variability, respectively. The termination criterion is based on the Frobenius norm ‖·‖_F. Hence, we consider that the algorithm has converged at iteration n, i.e., that Σ̂_iv,n → Σ_iv, when the relative difference between ‖Σ̂_iv,n‖_F and ‖Σ̂_iv,n−1‖_F is below 1%, meaning that the MAR parameter values have also converged. In practice, n varies between 2 and 4 depending on the location. The autocorrelations from this MAR model suggest that our statistical representation of internal variability effectively captures decadal variability (typically between lag 5 and lag 10) in the GMST and local temperature time series, e.g., for the Atlantic, African, and South American regions (Fig. 7). We are aware that initial-condition large ensembles and long piControl simulations provide an extensive sampling of internal variability and could also be used to estimate this variability. However, we choose not to rely on them directly because of the huge discrepancies between models in terms of their simulated internal variability (56). Figures S20 to S30 illustrate this aspect with the piControl simulations from the CMIP6 models, including those used to build large ensembles. In all cases, the models do not converge to a consistent estimate of internal variability. For instance, over the Atlantic Ocean, many models exhibit clear pseudo-periodic low-frequency variability, while other models do not simulate decadal variability.
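The iteration can be sketched as follows; `estimate_forced` and `fit_mar_cov` stand in for the constraining step and the maximum likelihood MAR fit, and both interfaces are assumptions for illustration rather than the authors' implementation:

```python
import numpy as np

def iterate_internal_variability(y, estimate_forced, fit_mar_cov, max_iter=10, tol=0.01):
    """Skeleton of the iterative internal-variability estimation.

    estimate_forced(y, sigma_iv) -> constrained forced response mu_hat
    fit_mar_cov(residuals)       -> covariance matrix Sigma_iv
    Iterates until the relative change of the Frobenius norm of the
    covariance falls below `tol` (1% in the text).
    """
    sigma = fit_mar_cov(y - y.mean())            # crude initial residuals
    mu_hat = None
    for _ in range(max_iter):
        mu_hat = estimate_forced(y, sigma)       # constrained forced response
        sigma_new = fit_mar_cov(y - mu_hat)      # refit MAR on new residuals
        rel = abs(np.linalg.norm(sigma_new) - np.linalg.norm(sigma)) / np.linalg.norm(sigma)
        sigma = sigma_new
        if rel < tol:                            # Frobenius-norm criterion
            break
    return mu_hat, sigma
```

np.linalg.norm on a matrix defaults to the Frobenius norm, matching the termination criterion described above.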
Modeling of the dependence between local and global internal variability
As impacts of Pacific and Atlantic decadal variability (and potentially other modes of variability) on GMST have been reported over the historical period (59,60), we need to allow a potential dependence between global and local internal variability in Σ_iv,dep. Therefore, a simple and parsimonious dependence model compatible with the MAR structure is required. Allowing the covariances Cov[ϵ_s,glo(t), ϵ_s,loc(t)] and Cov[ϵ_f,glo(t), ϵ_f,loc(t)] to be nonzero is not trivial, and these terms need to be quantified to fill the covariance matrix Σ_iv,dep. Note that the fast and slow components always remain independent of each other and that Σ_iv is computed for each location separately, as the spatial dependence among various locations is not considered in the method. To compute Σ_iv,dep, we introduce a ninth parameter accounting for the correlation between the local and global components in the MAR modeling. The formulation of the covariances is slightly different in this case, and the calculations are detailed in the Supplementary Materials.
Calculation of p(x|y)
As π(x) and ϵ are assumed to follow normal distributions, the Gaussian conditioning theorem is applicable to derive the posterior, or "constrained," distribution p(x|y). Its formulation (detailed in eq. S23) indicates that the method is conservative: the uncertainty in p(x|y) is never larger than that in π(x). Therefore, if observed internal variability is very large, the model uncertainty in p(x|y) will remain very close to that in π(x).
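The Gaussian conditioning (kriging) update can be written compactly with the standard formulas; since eq. S23 is not reproduced in the text, this is a generic sketch of the update, not the authors' exact implementation:

```python
import numpy as np

def gaussian_conditioning(mu, sigma, H, y, sigma_obs):
    """Posterior of x ~ N(mu, sigma) given y = H x + eps, eps ~ N(0, sigma_obs)."""
    S = H @ sigma @ H.T + sigma_obs          # innovation covariance
    K = sigma @ H.T @ np.linalg.inv(S)       # gain matrix
    mu_post = mu + K @ (y - H @ mu)
    sigma_post = sigma - K @ H @ sigma       # never exceeds the prior covariance
    return mu_post, sigma_post
```

Consistent with the conservativeness noted above, the posterior variances are bounded by the prior variances, and a very large observation-error covariance leaves the prior essentially unchanged.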
Perfect model evaluation
We evaluate the performance of the KCC method within a perfect model framework, following a leave-one-out cross-validation: 1) For a given model, we consider a single member as pseudo-observations y over the 1850-2021 period (the historical simulation is extended by the SSP5-8.5 simulation over the 2015-2021 period).
2) We use the other 26 models to derive the prior π(x) ∼ N(μ, Σ_mod).
3) As there is no measurement uncertainty in models, Σ_meas. is null; therefore, Σ_obs = Σ_iv. As done with the real observations, internal variability within the pseudo-observations is estimated from the difference between the pseudo-observation time series and the forced temperature response estimated by the ensemble mean of the 26 other models. Σ_iv is then derived from the MAR fitted on the obtained residuals.
4) We apply the KCC method using the inputs y, Σ_obs, μ, and Σ_mod to calculate projected changes constrained by the pseudo-observations.
5) These four steps are repeated for each available member of the considered model and for all available models.
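The loop structure of steps 1 to 5 can be illustrated with a scalar toy version, in which each model value in turn plays the pseudo-observation; the values, the noise variance, and the simple scalar update are illustrative (the real method acts on full time series via the Gaussian conditioning):

```python
import numpy as np

def leave_one_out_scores(model_values, obs_noise_var=0.05):
    """Toy scalar version of the perfect-model evaluation loop.

    Each value in turn is the pseudo-observation; the remaining values
    define the prior N(mean, var); a scalar Gaussian update with the
    assumed observation noise gives the constrained estimate. Returns
    squared errors of the constrained and unconstrained best estimates.
    """
    errs = []
    for i, y in enumerate(model_values):
        others = np.delete(model_values, i)
        mu, var = others.mean(), others.var(ddof=1)
        k = var / (var + obs_noise_var)                 # scalar gain
        mu_cons = mu + k * (y - mu)                     # constrained estimate
        errs.append(((mu_cons - y)**2, (mu - y)**2))    # constrained, unconstrained
    return np.array(errs)
```

In this toy setup the constrained estimate is pulled toward the pseudo-observation, so its error never exceeds the unconstrained one; the real evaluation instead scores the full constrained distribution against the left-out model's projection.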
Continuous ranked probability score
We use the CRPS (31) to quantify the performance of the KCC method. It is defined as the quadratic measure of discrepancy between (i) 1(x ≥ y_pobs), the empirical cumulative distribution function (CDF) of a scalar pseudo-observation y_pobs simulated by one model and averaged over the future period, and (ii) the projected CDF G_cons of p(x|y) (derived from all of the other models) over the same period:

CRPS_cons(G_cons, y_pobs) = ∫_ℝ [G_cons(x) − 1(x ≥ y_pobs)]² dx   (11)

where 1 is the indicator function (note that x is here a bound variable in the integral, different from the vector x in Eq. 4). Similarly, we define a reference score, CRPS_ref, based on G_ref, the CDF of the unconstrained distribution π(x), and y_pobs. We can then compute the CRPSS, which quantifies the performance of the KCC method compared to the reference:

CRPSS = 1 − CRPS_cons / CRPS_ref   (12)

The CRPSS is computed over all available pseudo-observations (121 values; see Table 1). CRPS_cons is calculated in both the GMST-only and Local + GMST cases. Therefore, the quantity 1 − CRPS_cons(Local + GMST) / CRPS_cons(GMST-only) quantifies the added value of local observations compared to the sole use of GMST observations. A positive (negative) value indicates an improvement (deterioration). The higher the CRPSS (bounded at 1), the better the performance.
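For a Gaussian forecast, the integral in Eq. 11 has a well-known closed form (due to Gneiting and Raftery), which gives a quick way to sanity-check the skill score; a minimal sketch:

```python
from math import erf, exp, pi, sqrt

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against a scalar y,
    equivalent to the integral definition in Eq. 11 for a Gaussian CDF."""
    z = (y - mu) / sigma
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))           # standard normal CDF at z
    pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)         # standard normal PDF at z
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / sqrt(pi))

def crpss(crps_cons, crps_ref):
    """Skill score: positive values mean the constrained forecast beats the reference."""
    return 1.0 - crps_cons / crps_ref
```

A sharper forecast centered on the verifying value scores a lower CRPS and hence a positive CRPSS against a broader reference, mirroring the uncertainty reductions reported above.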
Concept of using composite multiferroic structure magnonic crystal - ferroelectric slab as memory unit
In the present work, we propose to use as a random-access memory cell a new type of memristor: a multiferroic wave memristor based on structures of the magnonic crystal-ferroelectric type. The main feature of excitations propagating through such a structure is band-gap formation: a Bragg band gap whose position in the spectrum depends on the polarization of the ferroelectric. Thus, information stored in the ferroelectric is read out from the transmission or attenuation of a signal at the band-gap frequency. A theoretical model of spin waves propagating in the magnonic crystal-ferroelectric structure is developed, and results of an experimental study of the propagation of spin-wave excitations in such a structure are given.
Introduction
Currently, a promising approach to creating a neuromorphic architecture is the use of memristors as electronic synapses. It is now known that ferroelectric memory has memristive properties [1] and can be considered for a new architecture of neuromorphic computation. When an electric field is applied to a ferroelectric, its polarization changes, moving to the opposite branch of the hysteresis loop, and the ferroelectric can then maintain its state. Due to this, it is possible to obtain two states that are well separated in energy, which is enough to create non-volatile memristor memory based on such a cell.
Currently, there are several approaches to creating storage devices based on ferroelectrics. The first is FRAM [2], created by analogy with DRAM, whose memory cell consists of a transistor and a ferroelectric capacitor. The potential applied to the capacitor plates polarizes the ferroelectric, thus recording a 0 or a 1. For reading, the potential is re-applied, and the memory cell state is determined by the presence or absence of a recharge current. One drawback of this type of memory is that reading destroys the data stored in the cell: after any read operation, a data-recovery operation must follow. As a result, the memory access cycle takes more time, a significant part of the energy is spent on regeneration, and the lifetime of the ferroelectric is reduced. In addition, there are limits to miniaturizing capacitor sizes.
Another type of ferroelectric memory, FeFET, uses a different type of cell [3]. In this case, a thin layer of ferroelectric is deposited onto the gate of a field-effect transistor. When the polarization of the ferroelectric changes, the voltage required to open the transistor changes. The disadvantage of this type of memory is that placing ferroelectric materials on the semiconductor in the transistor is accompanied by interdiffusion of the layers and leads to a charge-injection effect, which can change the direction of traversal along the hysteresis loop. The newest type of memory is the ferroelectric tunnel junction (FTJ), based on the ferroelectric tunnel effect [4]. This is a quantum effect consisting in a change of resistance of the metal-ferroelectric-metal structure when the polarization changes under an applied coercive voltage. For reading, it is necessary to measure the resistance with a control current. The ratio of resistances of the "on" and "off" states reaches 100 in structures based on barium titanate and 10 in structures based on hafnium oxide, which is not enough to implement a multi-level memristor.
Another approach to creating memristor devices lies in the field of spintronics and is based on spin-transfer torque (STT) [5]. The basis of a spintronic memristor is a magnetic structure of two ferromagnetic layers with different magnetizations. Due to the magnetic tunnel junction effect, the resistance can be changed discretely when an electric pulse is applied. The disadvantage of such structures is that switching between the parallel and antiparallel states requires different current values, which is inconvenient for controlling the memristor and can lead to errors. The ratio of resistances for the parallel/antiparallel states in such memristors does not exceed a few units. A single line for both reading and writing can also lead to errors, when, for example, reading causes switching.
The development of spin-wave electronics technology, scalable down to several nanometers, opens up the possibility of using its advantages to improve the performance of ferroelectric memory. The use of spin waves for reading the state of the ferroelectric would solve a number of problems of ferroelectric memory, such as destructive readout, scaling, and speed, and would also make such memory insensitive to external radiation. The basis of this type of memory, combining the advantages of ferroelectrics and spintronics, can be a multiferroic spin-wave memristor [6]. This memristor consists of a ferroelectric layer and a ferromagnetic film with a periodic variation of parameters (a magnonic crystal).
Main part
In the multiferroic spin-wave memristor, in contrast to the previously known principles of ferroelectric memory, a different principle of operation is used, based on the interaction of waves of different physical nature: spin-wave excitations in the magnonic crystal and electromagnetic waves in the ferroelectric film. With this approach, the polarization of the ferroelectric affects the characteristics of the spin-wave excitations in the magnonic crystal. Information recorded in the ferroelectric is read by analyzing the spin-wave signal at the output of the structure.
A ferromagnetic layer is proposed as the readout layer because a spin wave propagating in such a layer has characteristics sensitive to the state of the ferroelectric and requires little energy to excite. With this approach, reading does not change the polarization state of the ferroelectric, and the bit is not erased after each read. Thus, the problem of destructive readout can be solved.
As the reading layer in such a structure, it is proposed to use a periodic ferromagnetic layer: a magnonic crystal (MC). The choice of a periodic layer is explained by the presence of band gaps (non-transmission bands) in the wave spectrum of such a structure. Bit 0 (or 1) corresponds to the control frequency falling (or not falling) into the band gap.
Consider the features of band-gap formation for waves propagating in the magnonic crystal-ferroelectric structure.
The simplest multiferroic structure based on ferromagnetic films is the ferromagnetic film-ferroelectric structure. With a tangential external magnetic field, a surface magnetostatic wave (MSW) propagates in the ferromagnetic film. The electromagnetic field distributions of the surface MSW and of the electromagnetic wave (EMW) are the same, and at the point of phase synchronism of the surface MSW and the EMW the dispersion characteristics repel each other: the magnetostatic and electromagnetic waves hybridize and a hybrid spin-electromagnetic wave (HSEW) is formed.
Figure 1a shows the theoretical dispersion characteristics of the HSEW in the magnonic crystal-ferroelectric structure, calculated using the dispersion equation obtained from solving Maxwell's equations. In the magnonic crystal-ferroelectric structure, the periodicity leads to the formation of waves reflected from the inhomogeneities. Figure 1a also shows the dispersion characteristics of the following wave types in the isolated layers: the direct and reflected EMW in the ferroelectric layer in the absence of coupling between these waves (dashed curves), and the direct and reflected MSW in the magnonic crystal in the absence of coupling between these waves (dashed curves).
The dispersion characteristics in figure 1a show five intersection points of the presented curves (points A, C, A', C', B). Hybridization at point A (the intersection of the dispersion characteristics of the direct MSW and the direct EMW) occurs due to the interaction of these waves and is similar to the hybridization in the ferromagnetic film-ferroelectric structure. As a result, two HSEW branches are formed (branches 3 and 4). Hybridization at point A' (the intersection of the dispersion characteristics of the reflected MSW and the reflected EMW) has a similar nature; as a result, two further HSEW branches are formed (branches 3' and 4'). A hybrid band gap is formed near points C and C' due to the formation of hybrid branches 5 and 6 and branches 5' and 6' (area c). This band gap appears only in the presence of the ferroelectric layer. The hybrid band gap lies higher in frequency than the band gap of the single magnonic crystal (f2 > fB, where f2 is the central frequency of the hybrid band gap) and has a smaller width, Δf2 < Δf1.
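The hybridization at these intersection points can be illustrated with a generic two-coupled-mode model: near a crossing of the uncoupled MSW and EMW branches, the coupled eigenfrequencies repel by twice the coupling strength. The coupling value and frequencies below are arbitrary illustrations, not fitted parameters of the structure.

```python
import math

def hybrid_branches(f_msw, f_emw, kappa):
    """Eigenfrequencies of two linearly coupled modes with uncoupled
    frequencies f_msw, f_emw and coupling kappa (avoided crossing)."""
    mean = 0.5 * (f_msw + f_emw)
    half = math.sqrt((0.5 * (f_msw - f_emw)) ** 2 + kappa ** 2)
    return mean - half, mean + half  # lower and upper hybrid branches

# Exactly at the crossing point the two HSEW branches are split by 2*kappa:
lo, hi = hybrid_branches(4.45, 4.45, 0.02)  # GHz, illustrative numbers
print(round(hi - lo, 3))  # 0.04
```

Far from the crossing, the same formula smoothly recovers the two uncoupled branches, which is why branches 3/4 and 3'/4' merge into the dashed curves away from points A and A'.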
With an increase of the dielectric constant ε of the ferroelectric, the center frequency of the main band gap shifts toward the lower edge of the MSW band. In turn, with increasing ε the hybrid band gap broadens and shifts down in frequency, its central frequency tending to the central frequency of the main band gap (to point C). The width and position of the main band gap do not depend on ε.
In figure 1b, black curves show the experimental frequency response of the surface MSW in a single magnonic crystal. The magnonic crystal was a yttrium iron garnet film with a thickness of 12 μm, a length of 7 mm, a width of 2 mm, and a saturation magnetization of 1750 G, on whose surface a periodic system of grooves with a period of 200 μm, a groove width of 100 μm, and a groove depth of 1 μm was created. The magnitude of the external magnetic field was 860 Oe. Two minima are clearly seen, corresponding to the band gaps of the first and second Bragg resonances in the magnonic crystal, at frequencies f1 = 4.4 GHz (main band gap 1) and f12 = 4.52 GHz (main band gap 2). To form the magnonic crystal-ferroelectric structure, a ferroelectric plate of barium strontium titanate with a thickness of 500 μm and a dielectric constant of 4000 was placed on the surface of the magnonic crystal. The frequency response of the HSEW in the magnonic crystal-ferroelectric structure is shown in orange in figure 1b. In this case, between the first and second Bragg band gaps (marked with blue ellipses), an additional band gap is formed at the frequency f2 = 4.49 GHz (marked with a red ellipse); it can be interpreted as a hybrid band gap. The dispersion characteristic of the HSEW in the magnonic crystal-ferroelectric structure with the experimental parameters is shown in figure 1a.
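As a rough consistency check on the measured main band gap, one can evaluate the standard surface-MSW (Damon-Eshbach) dispersion of a uniform tangentially magnetized film at the first Bragg wavenumber k = π/a of the 200 μm period. This sketch neglects the groove-induced perturbation of the dispersion, so it is only an estimate.

```python
import math

# Damon-Eshbach dispersion for a tangentially magnetized film:
#   f(k)^2 = fH*(fH + fM) + (fM^2 / 4) * (1 - exp(-2*k*d))
gamma = 2.8e6            # gyromagnetic ratio, Hz/Oe
H = 860.0                # external field, Oe (from the experiment)
four_pi_Ms = 1750.0      # saturation magnetization 4*pi*Ms, G
d = 12e-6                # film thickness, m
a = 200e-6               # magnonic-crystal period, m

fH = gamma * H
fM = gamma * four_pi_Ms
k_bragg = math.pi / a    # first Bragg resonance: k = pi/a
f = math.sqrt(fH * (fH + fM) + fM**2 / 4 * (1 - math.exp(-2 * k_bragg * d)))
print(round(f / 1e9, 2))  # ~4.41 GHz, close to the measured f1 = 4.4 GHz
```

The agreement with the observed f1 supports identifying the minima in figure 1b with the Bragg resonances of the groove lattice.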
When a voltage is applied to the ferroelectric layer, its dielectric constant changes and the hybrid band gap shifts, as shown in figure 1c.
This effect can be used as a basis for the operation of a spin-wave memristor based on the magnonic crystal-ferroelectric structure (figure 2a). To read information from the ferroelectric layer, a signal is fed at a reference frequency f0 lying in the band gap. Because the reference frequency lies in the band gap, the signal is attenuated (figure 2b) and does not pass through the structure, which corresponds to state 0 (figure 2c). In a different state of the ferroelectric, i.e. at a different dielectric constant, the band gap sits at a different frequency (figure 2d). The signal at the reference frequency f0 then passes without attenuation, which corresponds to state 1 (figure 2e). The principle of such a read-out is given in the form of a table in figure 2f. To read an n-level state of the ferroelectric, it is necessary to set n control frequencies, each corresponding to the band-gap position for one polarization state. With this approach, the contrast (i.e., the ratio of the signal power at a frequency inside the band gap and at a frequency outside it) does not depend on the number of intermediate levels of the ferroelectric.
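The read-out rule summarized in figure 2f amounts to a frequency comparison; a toy sketch (the gap frequencies and widths are invented for illustration, not taken from the experiment):

```python
def read_bit(f0, gap_center, gap_width):
    """Toy read-out: bit 0 if the reference frequency f0 falls inside the
    band gap (signal attenuated), bit 1 otherwise (signal transmitted)."""
    return 0 if abs(f0 - gap_center) <= gap_width / 2 else 1

f0 = 4.49  # GHz, reference frequency set at the band gap of one state
print(read_bit(f0, gap_center=4.49, gap_width=0.02))  # 0: gap at f0, attenuated
print(read_bit(f0, gap_center=4.46, gap_width=0.02))  # 1: gap shifted, transmitted
```

An n-level cell would simply repeat this comparison at n reference frequencies, one per polarization state.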
Using a magnonic crystal in the memory cell allows reading information from two ferroelectric layers, located above and below the magnonic crystal (figure 3a). In this structure, two band gaps are formed. The position of the first band gap (BG-1) at frequency f1 is determined by the state of the top ferroelectric layer, and the position of the second band gap (BG-2) at frequency f2 by the state of the bottom ferroelectric layer. In this case, a signal at frequency f1 reads information from the top ferroelectric layer (figure 3b, d), and a signal at f2 from the bottom ferroelectric layer (figure 3c, e). This method doubles the density of stored information.
Conclusions
Thus, the demonstrated features of spin-wave and electromagnetic excitations in periodic layered structures based on magnonic crystals and ferroelectrics allow us to consider such structures as non-volatile memory cells that can function as linear and nonlinear synapses in the architecture of neuromorphic networks. This memory cell is non-destructive and allows multi-bit reading of information, and the expected transmission/attenuation ratio does not depend on the number of read bits. When such a cell is scaled into the nanometer range, its parameters can be comparable to the corresponding characteristics of known types of ferroelectric memory.
Active maturation-promoting factor is present in mature mouse oocytes.
Cytoplasmic extracts of meiotically mature mouse oocytes were injected into immature Xenopus laevis oocytes, which underwent germinal vesicle breakdown within 2 h. Germinal vesicle breakdown was not inhibited by incubation of the Xenopus oocytes in cycloheximide (20 micrograms/ml). Identically prepared extracts of meiotically immature mouse oocytes, arrested at the germinal vesicle stage by dibutyryl cyclic AMP (100 micrograms/ml), did not induce germinal vesicle breakdown in Xenopus oocytes. The results show that maturation-promoting factor activity appears during the course of oocyte maturation in the mouse.
Oocytes arrest their meiotic progression during late G2 phase of the first meiotic division. It is in this arrested stage that oocytes grow in preparation for fertilization and embryogenesis. Fully grown oocytes reinitiate meiosis in response to a hormonal signal at or near the time of ovulation. In amphibians, pituitary gonadotropin triggers the production of a maturation-inducing substance (presumably progesterone) by follicle cells encasing the oocyte. In turn, progesterone acts upon the surface of the oocyte to initiate meiotic maturation, which involves dissolution of the nuclear membrane (germinal vesicle breakdown [GVBD]), condensation of the interphase chromatin, spindle formation and completion of the first meiotic division, and arrest at metaphase of the second meiotic division (13,20).
The action of progesterone in inducing GVBD is mediated by a cytoplasmic maturation-promoting factor (MPF). Cytoplasm from mature frog oocytes, injected into immature germinal-vesicle stage oocytes, will induce maturation in the absence of progesterone (15). MPF from Xenopus laevis oocytes has been purified >20-fold and has been characterized as a protein (or proteins) with a molecular weight of ~100,000 (28).
MPF appears to be ubiquitous. MPF activity has been demonstrated in the amphibians Rana pipiens (15), X. laevis (19), and Ambystoma mexicanum (17), and in the starfish Asterina pectinifera (6). MPF activity has also been demonstrated in amphibian blastomeres undergoing mitosis (15,26), in HeLa cells and Chinese hamster ovary cells arrested in metaphase of mitosis (16,23), and in yeast (27). Insofar as MPF controls the transition from G2 to M phase in both meiosis and mitosis, Gerhart et al. (4) propose that the acronym MPF refers to "M-phase promoting factor." MPF also appears to lack species specificity, which suggests that it has been highly conserved in evolution. MPF from each of three different species of starfish was effective in inducing maturation after injection into oocytes of the other two species (7). Similarly, MPF from R. pipiens, X. laevis, and A. mexicanum induced maturation in one another's oocytes (17). MPF from mammalian mitotic cells induced meiotic maturation in X. laevis (16,23) and A. pectinifera oocytes (8), and MPF from Bufo bufo japonicus oocytes induced meiotic maturation in starfish and sea cucumber oocytes (8).
Recently, Kishimoto et al. (9) demonstrated maturation of starfish oocytes injected with cytoplasm from maturing mouse oocytes. The purpose of the study reported here was to test for the appearance of MPF during mouse oocyte maturation, using X. laevis oocytes as the test system.

MATERIALS AND METHODS

14 h after injection of human chorionic gonadotropin, oviducts were dissected, swollen ampullae were nicked, and cumulus masses were expelled into modified Hanks' balanced salt solution (flushing medium [FM I] [22]). Hyaluronidase (Type IV, Sigma Chemical Co., St. Louis, MO) was added to a final concentration of 0.05%. Cumulus masses were incubated at room temperature until mature oocytes were freed (~15 min). Oocytes were washed in FM I several times to remove cumulus cells.

Immature (germinal-vesicle stage) oocytes were freed from their follicles by teasing apart ovaries in FM I. Cumulus cells were removed by repeated pipetting, and oocytes were washed in FM I several times. Oocytes were arrested at the germinal vesicle stage by the addition of 100 μg/ml dibutyryl cyclic AMP (Sigma Chemical Co.) to the FM I (3) from the moment the oocytes were freed until the final wash (~2 h).
Zonae pellucidae were digested by incubating oocytes in 0.5% pronase (Type B, Calbiochem-Behring Corp., La Jolla, CA) at 37°C for 3 min. Oocytes were quickly washed in several changes of FM I. Just before extraction, oocytes were washed several times in calcium- and magnesium-free FM I and transferred to miniature centrifuge tubes constructed by flaming closed one end of a capillary tube (Kimax-51, 0.8-mm i.d., Kimble Products, Toledo, OH). Oocytes were washed through two changes of ice-cold extraction buffer (80 mM sodium β-glycerophosphate, 20 mM sodium EGTA, 15 mM MgCl2, pH 7.3, plus 0.1 mM 8-thiolated ATP, 1 mM dithiothreitol, and the following protease inhibitors: 0.1 μg/ml pepstatin, 5 μM phenylmethylsulfonyl fluoride, 0.25 μg/ml leupeptin, 0.25 μg/ml aprotinin, 10 μM benzamidine HCl [28]) by low-speed centrifugation. After removal of all but ~1 μl of extraction buffer, oocytes were disrupted by repeated pipetting through a mouth-controlled pipette with an inner diameter ~1/3 that of the oocyte. The homogenate was centrifuged at 10,000 g for 10 min at 4°C, and the supernatant (extract) was used to assay MPF activity (Fig. 1).
Adult X. laevis were obtained from Xenopus I (Ann Arbor, MI). Females were injected via the dorsal lymph sac with 20-50 IU of pregnant mare's serum gonadotropin 3 d before the surgical removal of ovaries. Stage 6 oocytes were dissected manually for use in the injection assay of MPF. Cycloheximide was obtained from the Sigma Chemical Co.
Mouse oocyte extracts were injected in 50-nl aliquots into the animal hemispheres of Xenopus oocytes. The total number of 50-nl aliquots recovered from the centrifuge tube was used to estimate the total volume of extract.
Xenopus oocytes were incubated at room temperature after injection. 2 h after injection, GVBD was assessed by the appearance of a characteristic white spot in the pigmented animal hemisphere. Assessment was confirmed by fixing oocytes for 10 min in 10% trichloroacetic acid and dissecting them to observe the presence or absence of a germinal vesicle.
RESULTS
Extracts from mature ovulated mouse oocytes induced GVBD in immature X. laevis oocytes within 2 h after injection (Table I). Buffer-injected control Xenopus oocytes gave no response.

FIGURE 1 Mature oocytes before (left) and after (right) extraction. Whole oocytes were disrupted and centrifuged in capillary tubes. A common straight pin is included to demonstrate scale. Bar, 1 mm. × 11.4.

THE JOURNAL OF CELL BIOLOGY • VOLUME 100, 1985
The MPF activity was estimated by calculating the number of mouse oocytes in the volume of extract injected. The minimum number of mouse oocytes that gave a positive response in the injected recipient oocytes was 23. Mature oocytes extracted by freezing and thawing were completely ineffective in inducing maturation. Two samples of Xenopus oocytes incubated in 20 μg/ml cycloheximide and injected with extracts from mature mouse oocytes also underwent GVBD (Table II). Treatment of Xenopus oocytes with this concentration of cycloheximide inhibits >95% of 35S-[methionine] incorporation into protein (Gerhart, J., personal communication).
In contrast to the mature ovulated oocytes, extracts of immature germinal-vesicle stage ovarian oocytes arrested by incubation in dibutyryl cyclic AMP did not induce GVBD in Xenopus oocytes even though the number of mouse oocytes injected per recipient oocyte (34-64) was well above the number of mature oocytes demonstrated to cause GVBD (Table III). Xenopus oocytes co-injected with extract from immature mouse oocytes and with active Xenopus MPF did undergo GVBD (data not shown).
DISCUSSION
The results reported allow the mouse oocyte to be included among the known meiotic and mitotic cell types in which the transition from G2 to M phase is regulated by a cytoplasmic factor. The appearance of a cytoplasmic MPF in maturing mouse oocytes was first suggested by Balakier and Czolowska (2), who observed that anucleate fragments of immature mouse oocytes, when fused to interphase blastomeres from two-celled mouse embryos, induced premature dissolution of the nuclear membrane and premature chromatin condensation in the blastomeres when the nucleate sister fragment underwent GVBD.
The precocious induction of GVBD (<2 h) compared with the time of progesterone-induced maturation (>3 h) supports the conclusion that the active agent in extracts of mature mouse oocytes is MPF. According to the evolving model of progesterone control of amphibian oocyte maturation (12,14), it is the phosphorylation of preexisting MPF that triggers GVBD. Progesterone, the maturation-inducing substance in amphibians, is proposed to release calcium ions from the cell surface that activate a cytoplasmic phosphodiesterase, which in turn decreases the concentration of cyclic AMP. A lowered cyclic AMP concentration leads to inactivation of protein kinase, resulting in dephosphorylation of a hypothetical initiator protein, changing it from an inactive to an active state. The initiator protein is proposed to be a cyclic AMP-independent protein kinase responsible for the phosphorylation and activation of inactive MPF. MPF also may be a protein kinase capable of phosphorylating itself, thus explaining its observed autocatalytic behavior on serial transfer (15). The distal position of MPF in this elaborate protein phosphorylation cascade explains the more rapid response of oocytes to MPF injection compared with the time of oocyte response to progesterone treatment.
The induction of GVBD in the presence of cycloheximide, an inhibitor of protein synthesis, can be considered definitive evidence for MPF activity in mature mouse oocyte extract.
Progesterone-treated Xenopus oocytes do not mature in cycloheximide, but become increasingly resistant to the inhibitor as the maturation process progresses (25). Cycloheximide does not prevent maturation of oocytes injected with MPF (25,28) and does not prevent the appearance of high concentrations of MPF when oocytes are injected with low concentrations of MPF (4). (In the cycloheximide experiments of Table II, Xenopus oocytes were incubated for 1 h in 20 μg/ml cycloheximide before injection and incubated continuously in 20 μg/ml cycloheximide after injection.) These data suggest that MPF is present in the oocyte before GVBD and is activated post-translationally, most likely by phosphorylation. The hypothesis that the phosphorylation cascade depends upon the synthesis of an initiator protein nominates it as a target of cycloheximide action. Since MPF acts "downstream" from the initiator protein, injected MPF would not be sensitive to the inhibition of protein synthesis. Other substances capable of inducing GVBD upon injection, such as the R and I proteins of cyclic AMP metabolism (11), are sensitive to cycloheximide owing to their "upstream" position in this hypothetical cascade. Consistent with the cascade model for progesterone action, the maintenance of a high cyclic AMP concentration would prevent MPF activation. The failure of an extract of dibutyryl cyclic AMP-arrested mouse oocytes to induce GVBD after injection into Xenopus oocytes is therefore predicted. Dibutyryl cyclic AMP-arrested mouse oocytes, however, do undergo initial chromosome condensation and extensive convolution of the germinal vesicle membrane (24), two changes associated with incipient GVBD. These initial maturational events, therefore, do not appear to be regulated by MPF.
Sunkara et al. (23) calculated that a minimum of 1,000 mitotic HeLa cells (with a mean diameter of 17 μm) was capable of causing 100% maturation after injection into Xenopus oocytes. With a diameter of 72 μm (10), mouse oocytes have ~75 times the volume of a HeLa cell. Accordingly, 13 mature mouse oocytes should be the minimum number capable of causing a response in Xenopus oocytes, assuming an equivalent MPF concentration in both HeLa cells and mouse oocytes. This value is close to the estimate obtained in this study.
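The scaling estimate in this paragraph is simple cube-law arithmetic on the figures quoted in the Discussion:

```python
# Minimum injected cell number should scale inversely with cell volume,
# i.e. with the cube of the diameter (values from the Discussion).
hela_diameter = 17.0    # um (Sunkara et al.)
oocyte_diameter = 72.0  # um
min_hela = 1000         # minimum mitotic HeLa cells for 100% maturation

volume_ratio = (oocyte_diameter / hela_diameter) ** 3   # ~76x, quoted as ~75x
min_oocytes = min_hela / volume_ratio
print(round(min_oocytes))  # 13 mouse oocytes
```

The predicted minimum of 13 oocytes, compared with the observed 23, is consistent within the precision of the assay.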
The question of the relationship of MPF to the development of meiotic competence in mouse oocytes is intriguing. Sorensen and Wassarman (21) observed that small oocytes recovered from mice younger than 15 d post partum were incompetent to mature in culture. Balakier (1) reported that incompetent young oocytes that were fused with maturing oocytes or four-cell blastomeres underwent GVBD and progressed to metaphase of the first meiotic division. The injection of MPF into small Xenopus oocytes incapable of responding to progesterone treatment induced GVBD (5,18). Our preliminary attempts to microinject either concentrated Xenopus MPF (28) or freshly prepared extract of mature mouse oocytes into incompetent mouse oocytes have failed to induce GVBD (Sorensen, R. A., and R. A. Pedersen, unpublished results). Insofar as the microinjected volume is considerably less than that introduced by fusion of a whole oocyte, either incompetent mouse oocytes are completely deficient in MPF, which can only be provided by the massive transfusion of cytoplasm afforded by cell fusion, or they do not possess sufficient MPF to be activated by the injection of a small volume of active MPF.
Endodontic treatment of a maxillary second molar with developmental anomaly: a case report.
Fusion is a rare occurrence in molar teeth. The purpose of this case presentation is to describe the nonsurgical endodontic treatment of a maxillary molar. A 28-year-old patient was referred for endodontic treatment of a chronic apical abscess of the right maxillary second molar. Clinical examination revealed a sinus tract adjacent to the involved tooth and a small crown of a supernumerary tooth fused to the buccal surface of the molar at the gingival margin. Endodontic treatment of the involved molar was chosen for functional reasons. At the recall examination one year after completion of endodontic and restorative treatment, the tooth was clinically asymptomatic and there was no radiolucency around the apical region.
INTRODUCTION
Dental fusion is defined as the merging of two or more teeth at the enamel and/or dentinal level (1)(2). The etiology of dental fusion is still unclear. One possible cause is pressure or physical force bringing two adjacent tooth germs into close contact, so that the enamel organ and the dental papilla unite (3). Local metabolic interferences occurring during differentiation of the tooth germ may be another cause (4). Genetic determination may also be evident in some of the cases presented in the literature (5). In addition, other types of dental anomalies have been described (2,6): a) Dehiscence: laceration resulting from trauma and affecting the crown of a tooth germ; b) Concrescence: the merging of two or more teeth at the root cementum; c) Gemination (twinning): attempted division of a tooth germ in two; d) Schizodontia: complete division of a tooth germ in two; and e) Dens in dente: enamel penetration into the pulp chamber.
Dental fusion occurs particularly in anterior teeth, with an apparently equal distribution between the two jaws, and is more common in the deciduous dentition. It is very rare in molars. The prevalence of this dental anomaly is estimated at 0.1-2.5% of individuals (7)(8)(9). Fused teeth may present two independent endodontic systems, one pulp chamber dividing into two root canals or, less often, a single root and one or two pulp chambers (1,10). The fusion may be partial or total depending on the stage of tooth development at the time of union, involving only the tooth crowns or both the crowns and roots, respectively (11). Although the most common situation is the fusion of a supernumerary with a normal tooth, the fusion of two normal teeth may occur, thus reducing the number of teeth in the arch (12); the fused tooth may be of normal size or larger than normal (13). Dental fusion is generally asymptomatic and does not require any treatment. However, it can cause poor aesthetics, periodontal damage, or caries leading to pulp necrosis (4). This case report describes the endodontic management of a maxillary second molar fused with a supernumerary.
CASE REPORT
A 28-year-old woman was referred for endodontic treatment of an anomalous tooth (the maxillary right second molar). The patient complained of a draining sinus tract in the gingiva of the involved tooth (Figure 1), (Figure 2). Her medical history was noncontributory. Clinical examination revealed that the maxillary right second molar was fused with a supernumerary: an extra small crown of abnormal appearance was present adjacent to the normal tooth. A draining sinus tract was present at the distobuccal aspect of the fused tooth. No caries could be detected. The tooth displayed physiological mobility but was sensitive to percussion. Thermal and electrical pulp testing did not give a normal response, while probing revealed no periodontal pocketing around the tooth. Radiographic examination demonstrated mesio- (Figure 7). A postoperative radiograph revealed densely condensed root canal fillings in the two canals. The crown was then restored permanently with amalgam. The recall examination after 1 year revealed that the periodontal condition was healthy, the tooth was asymptomatic, and complete bone regeneration was seen on the radiograph (Figure 8).
DISCUSSION
Clinical differentiation between dental fusion and gemination is generally considered difficult. To better distinguish these anomalies, it has been suggested that the teeth in the arch be counted with the anomalous crown counted as one: the presence of all teeth indicates gemination, whilst one tooth fewer than normal indicates fusion (14). Unfortunately, this rule is compromised when dental fusion or gemination is associated with dental agenesis or supernumerary teeth (15). As an illustrative example of such difficulty, a geminated second molar can be observed together with agenesis of the wisdom tooth. However, the differential diagnosis of dental fusion and gemination has no clinical importance (16); many authors prefer the terms "double tooth" or "fused teeth" in view of the uncertainty regarding the embryological cause underlying the junction defect or teeth joined together by dentine, respectively (8,17). In this case, the history of wisdom tooth extraction, the clinical observation of a small abnormal fused crown of a supernumerary tooth, and the normal number of remaining teeth led us to establish a diagnosis of either fusion of the maxillary second molar with a supernumerary tooth or gemination of the second molar. A prerequisite for nonsurgical endodontic treatment of anomalous teeth is a careful clinical and radiographic examination. Many clinicians have reported very complex internal anatomy in double teeth and stressed the importance of knowledge of the root canal morphology before starting treatment (18)(19). During endodontic treatment of a double tooth, the clinician must be prepared for abnormal root canal anatomy and an irregular outline of the access cavity. Sometimes a multidisciplinary approach is required to restore function and aesthetic appearance.
It is important to stress that higher magnification helps to find and negotiate root canals more easily in complex cases (19). However, in the present case the canals of the supernumerary, mesiobuccal and distobuccal roots were joined together and positioned like the mesial root of a mandibular molar, while the palatal canal and root were in a normal position, so the internal anatomy of the treated case was very simple. Successful endodontic treatment depends on careful cleaning, shaping and three-dimensional obturation of the root canal system. Although mechanical debridement of the root canals in a double tooth is difficult, the combination of chemo-mechanical instrumentation and the use of sodium hypochlorite was sufficient in the present case.
CONCLUSION
In our opinion, an individualized treatment plan is required for double teeth, since these cases may or may not require treatment modalities different from those used for normal teeth.
An unusual Spigelian hernia involving the appendix: a case report
Introduction: Spigelian herniae are uncommon and frequently pose a diagnostic challenge.
Case presentation: We report the case of a 71-year-old man in whom an ischaemic appendix was found within the sac of a Spigelian hernia during emergency repair.
Conclusions: There are very few reported cases in which an appendix has been found within a Spigelian hernia in the absence of inflammatory bowel disease. An awareness of the range of viscera which may be encountered in Spigelian herniae is important for safe repair.
Introduction
Spigelian herniae occur through slit-like defects in the anterior abdominal wall adjacent to the semilunar line. They are uncommon and frequently pose a diagnostic challenge. We report a case in which an appendix was discovered within the sac of a Spigelian hernia during repair. The range of viscera which have been found within Spigelian herniae is discussed. Awareness of possible contents of these herniae is crucial for safe repair.
Case Presentation
A 71-year-old man presented with a 24-hour history of nausea, vomiting and a painful abdominal swelling located in the right lower quadrant. There was no history of a change in bowel habit and he was passing flatus. His past medical history consisted of open repair of an abdominal aortic aneurysm, cerebrovascular disease, diabetes, a myocardial infarct and atrial fibrillation. He was a non-smoker and consumed only a small amount of alcohol.
On examination the patient was comfortable and neither tachycardic nor febrile. Abdominal examination revealed a tender, irreducible mass located between the umbilicus and the right anterior superior iliac spine. It measured approximately 8 × 8 cm and had a positive cough impulse. A clinical diagnosis of an incarcerated right Spigelian hernia was made. Abdominal x-rays revealed dilated loops of small bowel.
The patient underwent emergency repair of the Spigelian hernia. A transverse skin incision was made over the hernia and the external oblique aponeurosis was divided to reveal the hernial sac which was opened. It contained an ischaemic appendix and a knuckle of small bowel ( Figure 1). The appendix was resected and the small bowel, which was viable, was reduced. The small muscular defect was approximated without tension in two layers using polydioxanone (PDS). Postoperative recovery was unremarkable. Histological examination of the appendix revealed features characteristic of ischaemia but no evidence of inflammation or neoplasia.
Discussion
Spigelian herniae were initially described by Josef Klinkosch in 1764 and named after the Belgian anatomist Adriaan van den Spieghel, who had previously described the semilunar line. They account for 1-2% of all herniae and occur through slit-like defects in the anterior abdominal wall adjacent to the semilunar line. Approximately 90% are located in a 6 cm zone limited superiorly by the transumbilical plane and inferiorly by the interspinal plane. A particularly weak area is the intersection between the semilunar line and the arcuate line of Douglas. The majority of Spigelian herniae are intramural and remain deep to the external oblique aponeurosis. The usual contents are omentum or small bowel; however, large bowel, stomach, gallbladder, ovary, testis, bladder, a Meckel's diverticulum and uterine leiomyoma, although rare, have all been described [1]. There are few reported cases in which an appendix has been found within a Spigelian hernia [2,3], some occurring in the presence of Crohn's disease [4,5].
Pain which is exacerbated by contraction of the abdominal musculature is the most common symptom associated with Spigelian herniae and is described by over 60% of patients. The second most common clinical feature is a palpable abdominal mass which is present in approximately 35% of cases [2]. Spigelian herniae characteristically possess a narrow neck (0.5-2 cm in diameter) and at presentation approximately 20% of hernias are incarcerated and 14% are strangulated [6]. Clinical diagnosis is often complicated by the intramural position of Spigelian herniae and the fact that obesity is a predisposing factor. Ultrasonography and computer tomography (CT) may confirm the presence of a hernia in cases of clinical uncertainty.
Treatment of a Spigelian hernia involves primary fascial closure, with synthetic mesh reinforcement if a large defect is identified. Recently, laparoscopic Spigelian hernia repair, using both intra-abdominal and extraperitoneal approaches, has been described. A randomised controlled trial, albeit involving a small number of patients, has compared outcomes following Spigelian hernia repair. There were no differences in recurrence rates between open and laparoscopic hernia repair; however, laparoscopic repair conferred benefits in terms of hospital stay and morbidity. An extraperitoneal approach was recommended for uncomplicated elective repair and an intra-abdominal approach if co-existent pathology requires surgery during the same intervention. In the case of emergency Spigelian hernia repair an open approach, as performed here, was advocated [7].
Conclusion
Spigelian herniae are uncommon and frequently pose a diagnostic challenge. A range of viscera may be found within a Spigelian hernia; caution is therefore required when the hernial sac is opened to prevent damage to its contents.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the journal's Editor-in-Chief.
Apple polyphenols delay postharvest senescence and quality deterioration of ‘Jinshayou’ pummelo fruit during storage
Introduction: Apple polyphenols (AP), derived from the peel of mature-green apples, are widely used as natural plant-derived preservatives in the postharvest preservation of numerous horticultural products.
Methods: The goal of this research was to investigate how AP (at 0.5% and 1.0%) influences senescence-related physiological parameters and the antioxidant capacity of ‘Jinshayou’ pummelo fruit stored at 20°C for 90 d.
Results: Treating pummelo fruit with AP effectively retarded the loss of green color and internal nutritional quality, resulting in higher total soluble solid (TSS) content, titratable acidity (TA) content and pericarp firmness, thus maintaining overall quality. Concurrently, AP treatment promoted increases in ascorbic acid, reduced glutathione, total phenol (TP) and total flavonoid (TF) contents, increased the scavenging rates of 2,2-diphenyl-1-picryl-hydrazyl-hydrate (DPPH) and hydroxyl radical (•OH), and enhanced the activities of superoxide dismutase (SOD), catalase, peroxidase, ascorbate peroxidase (APX), and glutathione reductase (GR) as well as the expression of their encoding genes (CmSOD, CmCAT, CmPOD, CmAPX, and CmGR), while reducing the increases in electrolyte leakage, malondialdehyde content and hydrogen peroxide level, resulting in lower fruit decay and weight loss rates. Storage quality of ‘Jinshayou’ pummelo fruit was best maintained at the 1.0% AP concentration.
Conclusion: AP treatment can be regarded as a promising and effective preservative for delaying quality deterioration and improving the antioxidant capacity of ‘Jinshayou’ pummelo fruit during storage at room temperature.
Introduction
Pummelo (Citrus maxima Merr.) fruit, the largest known citrus fruit, is a non-climacteric subtropical fruit belonging to the Citrus family and is widely cultivated in southeastern China (e.g., Fujian, Jiangxi, Zhejiang, Guangdong, and other adjoining provinces) (Chen et al., 2022). Owing to its rich nutrients (e.g., dietary fiber, organic acids, vitamins, pectins, flavonoids and minerals), full succulence, attractive appearance, pleasant flavor and auspicious connotations, pummelo fruit is a well-liked citrus fruit among consumers (Nie et al., 2020; Ding et al., 2022). However, after harvest pummelo fruit is severed from the tree's supply of water and nutrients yet must still sustain respiration and other physiological metabolism, consuming its own reserves and resulting in an acute deterioration in nutritional quality during postharvest storage (Nie et al., 2020). Owing to its large size and heavy weight, preservation of 'Jinshayou' pummelo is a crucial issue that has to be addressed. In recent years, several preservation methods have been employed to maintain the postharvest storability of harvested pummelo fruit, such as 1-methylcyclopropene (Lacerna et al., 2018), CaCl2 (Chen et al., 2005), chitosan (Nie et al., 2020; Chen et al., 2021), and gibberellic acid (Porat et al., 2001). As these methods are inconvenient or give unsatisfactory results, there is a need for the development of effective postharvest technologies for pummelo fruit preservation.
One of the molecular mechanisms underlying fruit postharvest senescence is the induction of oxidative stress (Lum et al., 2016; Wang et al., 2021; Zhang et al., 2021; Ackah et al., 2022). Oxidative damage to horticultural fruits, such as apple, litchi, ponkan, and tomato, is reported to contribute to an imbalance of reactive oxygen species (ROS) homeostasis, potentially reducing their resistance to postharvest senescence stress (Zhang et al., 2018; Chen et al., 2021; Huang et al., 2021). As a kind of plant polyphenol, apple polyphenols (AP), extracted from the peel and pomace of green-ripened apples, are rich in antioxidants including phenolic acids, procyanidins, anthocyanins, flavonols and dihydrochalcones (Rana and Bhushan, 2016; Su et al., 2019), and have gained much attention in recent years for their potential to provide a variety of health benefits, including reducing oxidative stress, inhibiting α-glucosidase, preventing neural injury induced by chronic ethanol exposure, and decreasing the risk of type II diabetes and cardiovascular and intestinal diseases (Riaz et al., 2018; Li et al., 2019; Gong et al., 2020; Wang et al., 2020). Beyond health benefits, AP as a natural antioxidant has been widely used to reduce oxidative stress by elevating antioxidant ability, improving the ROS-scavenging system, or inhibiting lipid and protein oxidation. In earlier studies, pre-storage application of 5.0 g/L AP by immersion was found to be effective in alleviating postharvest pericarp browning and improving antioxidant activity, thereby helping to maintain edible quality and extend shelf-life in harvested litchi fruit (Zhang et al., 2015; Su et al., 2019; Bai et al., 2022). In addition, Xiang et al.
(2022) reported that AP remarkably inhibited the mycelial growth of Peronophythora litchii as well as spore germination in vitro, and effectively reinforced disease resistance against postharvest downy blight rot through elicitation of a defense response in harvested 'Baitangying' litchi fruit. Fan et al. (2018) reported that AP offers a safe approach to preserving fresh-cut red pitaya fruit for 4 d, delaying pulp color change, softening, nutrient deterioration and microbial growth and thereby extending shelf-life; pitaya fruit has a high commodity value and is mostly consumed as a fresh-cut product. These studies suggest that AP could be an effective and promising preservative for delaying postharvest senescence of horticultural fruits or vegetables. However, there is currently no experimental evidence regarding the definite mechanism by which AP alleviates postharvest senescence and modulates the antioxidant response in citrus fruit. Therefore, the potential preservative effects of AP warrant further investigation to elucidate the underlying mechanism responsible for enhancing the ROS-scavenging system in 'Jinshayou' pummelo fruit.
Though it has been shown that AP exhibits its strong antioxidant ability, there is limited research on its efficacy as a postharvest anti-senescence treatment. This study was undertaken to determine the consequences of AP treatment on postharvest senescence and nutritional quality of 'Jinshayou' pummelo fruit following storage at room temperature, with a focus on fruit senescence development, oxidative stress, antioxidant capacity, and the ROS-scavenging system. This study contributes to our understanding of how AP treatment can be effectively applied for pummelo fruit preservation, and may also have implications for its wider application in other horticultural fruit.
Pummelo fruit and treatments
Mature-green 'Jinshayou' pummelo (C. maxima Merr.) fruit were harvested from the Jinggang honeydew base in Ji'shui County, Jiangxi Province (27°21′26″N and 115°9′1″E), and transferred to the Jiangxi key laboratory for postharvest technology and nondestructive testing of fruits & vegetables. A total of 900 pummelo fruits of uniform size and shape and free of mechanical damage or disease were selected, washed with tap water, and then air-dried at room temperature overnight. The selected pummelo fruits were randomly divided into three groups, with three replicates consisting of 300 (3×100) fruits per group, and subjected to the following treatments: immersion in AP (food grade, purity greater than 75%, Yuanye Biotechnology Co., Ltd., Shanghai, China) solution at 0, 0.5, or 1.0% for 5 min at 20°C. After air-drying, the control and AP-treated fruits were individually packaged in polyethylene film bags (0.02 mm thickness) and stored at 20°C with a relative humidity of 80-90% for 90 d. Fruit decay index, weight loss, peel color, albedo firmness, and other biochemical quality parameters were measured at 15 d intervals. For each sample, the juice sac tissues of 15 fruits (5 fruits per replicate) randomly taken from each group were ground into powder and then stored at -80°C.
Assessment of peel color and albedo firmness
The CIE parameters of L* (dark to light), a* (green to red), and b* (blue to yellow) on two opposite equatorial sites of 'Jinshayou' pummelo peel were measured directly using a CR-400 colorimeter (Minolta Co., Osaka, Japan), following a recognized method described by Mitalo et al. (2022). Citrus color index (CCI) was calculated by the following Hunter lab equation: CCI = 1000 × a*/(L* × b*).
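The CCI computation above is simple arithmetic; as a minimal Python sketch (the function name and example readings are illustrative, not taken from the study):

```python
def citrus_color_index(L_star, a_star, b_star):
    """Hunter-lab citrus color index: CCI = 1000 * a* / (L* * b*).

    Negative values indicate a green peel, values near zero a
    yellow-green peel, and positive values an orange peel.
    """
    return 1000.0 * a_star / (L_star * b_star)

# Hypothetical readings from the two equatorial measurement sites:
green_site = citrus_color_index(50.0, -10.0, 40.0)   # -5.0 (green-mature peel)
yellow_site = citrus_color_index(60.0, 12.0, 50.0)   # 4.0 (degreened peel)
```

In practice the two readings per fruit would be averaged before comparing treatment groups.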
The firmness of the albedo was assessed by a GY-4 handheld fruit hardness tester with a 7 mm probe after removal of a 3 mm flavedo section, and the result was recorded in Newton (N).
Measurement of fruit decay index and weight loss
The detailed method for evaluation of the fruit decay index has been described previously by Huang et al. (2021). The decay index was evaluated as the number of decayed pummelo fruit (those with visible symptoms of pitted peel or pathogen growth) relative to the total number of pummelo fruit, and was expressed as a percentage (%) at each storage period.
At 15-day intervals during storage at 20 ± 2°C, the weight loss (WL) rate of 'Jinshayou' pummelo fruit was recorded according to the method of Baswal et al. (2020). The percentage (%) of WL rate was calculated compared to the initial weight.
Determination of biochemical quality parameters
Total soluble solid (TSS) content in pummelo juice sac was determined with a digital saccharometer (model: RA-250WE, Atago, Japan), calibrating with deionized water before each reading, and the result was expressed as a percentage (%). Titratable acidity (TA) content was analyzed in terms of citric acid by adding 4.0 g extracted juice with two drops of 1% phenolphthalein in 40 mL of distilled water, and then titrating with 0.1 M NaOH solution, and the result was calculated based on the NaOH consumption and expressed as %.
The permeability of the cell membrane was estimated by determining electrolyte leakage with a DDS-307A conductivity meter (Shanghai Rex, China) following the method of Huang et al. (2021), reported as the percentage (%) of the initial conductivity value relative to the final value. Malondialdehyde (MDA) content was assayed using the TBA method described by Bakpa et al. (2022) with a slight modification. Briefly, 2.0 g of frozen juice sac was extracted with 5 mL of 10% (m/v) TCA solution and centrifuged (10 000 × g at 4°C for 20 min). Afterwards, 2.0 mL of the supernatant was mixed with the same volume of 0.67% TBA (dissolved in 50 mM NaOH) solution, heated in a boiling water bath for 20 min, and then quickly cooled in an ice bath. Finally, the absorbance of the supernatant was recorded at three wavelengths (450 nm, 532 nm, and 600 nm) using a UV-Vis spectrophotometer (model: TU-1950, Persee General Instrument Co., Ltd., Beijing, China), with the results reported as millimole per gram (mmol/g) FW.
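The text records absorbances at three wavelengths but does not give the conversion formula. A commonly used three-wavelength correction from the plant-physiology literature is sketched below; the coefficients 6.45 and 0.56 are the standard literature values and are an assumption here, not taken from this study:

```python
def mda_conc_umol_per_L(a450, a532, a600):
    # Three-wavelength correction widely used with the TBA assay
    # (assumed formula; verify against the protocol actually used):
    # MDA (umol/L) = 6.45 * (A532 - A600) - 0.56 * A450
    return 6.45 * (a532 - a600) - 0.56 * a450

def mda_per_g_fw(a450, a532, a600, extract_volume_ml, sample_mass_g):
    # Convert the concentration in the extract into an amount per
    # gram of fresh weight (umol/g FW).
    umol_in_extract = mda_conc_umol_per_L(a450, a532, a600) * extract_volume_ml / 1000.0
    return umol_in_extract / sample_mass_g
```

With the 5 mL extract and 2.0 g sample described above, `mda_per_g_fw(a450, a532, a600, 5.0, 2.0)` would give the per-gram value.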
For the hydrogen peroxide (H2O2) content assay, a total of 2.0 g of frozen sample was extracted with 5 mL of pre-cooled acetone, and the homogenate was centrifuged (10 000 × g at 4°C for 20 min) to discard the residue. H2O2 content was determined using a specific detection kit (No: BC3590, Solarbio, Beijing, China) by monitoring the absorbance at 412 nm (Ackah et al., 2022), with the results reported as micromole per gram (µmol/g) on a fresh weight (FW) basis.
Quantitative determination of ascorbic acid (AsA) content in pummelo fruit was carried out on juice sac samples according to the 2,6-dichlorophenol-indophenol (DPIP) dye titration method described by Huang et al. (2021), with L-AsA as the standard, where the AsA content was expressed as mg of AsA equivalent per 100 g of juice sac FW.
The glutathione (GSH) content of pummelo juice sac was determined using the 5,5'-dithiobis-(2-nitrobenzoic acid) reaction method, as described by Nie et al. (2020). A total of 5.0 g juice sac was homogenized with 5 mL of 5% trichloroacetic acid (TCA) solution (containing 5 mM ethylene diamine tetracetic acid) under an ice bath condition, and then centrifuged at 12,000 × g for 20 min at 4°C. The reaction solution containing the supernatant (0.4 mL), 100 mM phosphate buffer (1.0 mL, pH 8.0), 4 mM 5,5'-dithiobis-(2nitrobenzoic acid) (0.6 mL) was incubated at 25°C for 10 min. The GSH content reported as milligram per kilogram juice sac FW was calculated from the absorbance value measured at 412 n m a c c o r d i n g t o a s t a n d a r d c u r v e o f 1 0 0 m M reduced glutathione.
A total of 2.0 g of pummelo juice sac was homogenized with 8 mL of 1% HCl-methanol solution, extracted at 4°C in the dark for 20 min, and vacuum filtered to remove the residue. Following the Folin-Ciocalteu method and the AlCl3 colorimetric method outlined by Nxumalo et al. (2022), total phenolics (TP) and total flavonoids (TF) contents were measured at 760 nm and 510 nm, with gallic acid (GA) and rutin as the standards, respectively; both TP and TF contents were expressed as mg equivalent per 100 g (mg/100 g) of juice sac FW.
Two different assays were applied to assess the total antioxidant capacity of the juice sacs of pummelo fruit: the DPPH and hydroxyl radical (·OH) scavenging capacity assays. Determination of DPPH scavenging capacity was performed as described by Chen et al. (2022) with minor modifications. Briefly, 100 µL of the extracted juice sample was mixed with 1.0 mL of 0.1 mM DPPH solution, and the mixture was kept in darkness for 30 min at 25°C before recording the absorbance at 517 nm. The control consisted of 100 µL of deionized water added to 1.0 mL of 0.1 mM DPPH solution. The DPPH scavenging capacity was expressed as a percentage (%) and calculated by the following formula: (control OD517 − sample OD517)/control OD517 × 100.
The ·OH scavenging capacity was assayed by the salicylic acid-Fenton method, as described by Chen et al. (2022) with minor modifications. Briefly, the juice supernatant was extracted from 5.0 g of pummelo juice sac with 5 mL of 50% (v/v) ethanol. The reaction system consisted of 1.0 mL of juice supernatant, 1.0 mL of 9 mM ferrous sulfate solution, 1.0 mL of 8.8 mM H2O2 solution, and 1.0 mL of 9 mM salicylic acid solution (dissolved in ethanol). The absorbance of the mixture was measured at 410 nm after a 20-min water bath at 37°C. The ·OH scavenging capacity was expressed as a percentage (%) and calculated by the following formula: (control OD410 − sample OD410)/control OD410 × 100.
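Both radical-scavenging assays use the same percentage expression; as a one-function sketch (example absorbances are hypothetical):

```python
def radical_scavenging_pct(control_od, sample_od):
    # Scavenging rate (%) = (OD_control - OD_sample) / OD_control * 100.
    # The same expression applies to the DPPH assay (OD at 517 nm)
    # and the hydroxyl-radical assay (OD at 410 nm).
    return (control_od - sample_od) / control_od * 100.0

# Hypothetical example: control absorbance 0.8, sample 0.2 -> ~75 %
dpph_scavenging = radical_scavenging_pct(0.8, 0.2)
```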
Extraction of ROS-scavenging enzymes and the activities determination
For the enzyme extraction and activity assays, all steps were carried out at 4°C. Crude enzyme was extracted by homogenizing 2.0 g of frozen juice sac powder with 8 mL of pre-cooled 100 mM phosphate buffer (pH 7.5, containing 5 mM DTT and 5% PVP). Centrifugation at 12 000 × g for 30 min removed the sediment, and the supernatant was collected for assaying the activities of the ROS-scavenging enzymes [superoxide dismutase (SOD), catalase (CAT), peroxidase (POD), ascorbate peroxidase (APX), and glutathione reductase (GR)]. SOD (EC 1.15.1.1) activity was determined with a specific SOD test kit (No: BC0170, Solarbio, Beijing, China) by detecting the absorbance of the reaction system at 560 nm. SOD activity was reported as U/g, where one unit (U) was defined as the amount of enzyme inhibiting the photochemical reduction of nitroblue tetrazolium by 50% per minute.
CAT (EC 1.11.1.6) activity was determined according to the method of Carrión-Antolí et al. (2022) in the presence of H2O2. The reaction mixture contained 2.9 mL of 15 mM H2O2 and 0.1 mL of crude enzyme, and the absorbance was recorded at 240 nm. One unit of CAT activity corresponded to a 0.01 decrease in absorbance at 240 nm per minute, and the results were expressed in U/g.
The guaiacol oxidation method of Nxumalo et al. (2022) was used to monitor POD (EC 1.11.1.7) activity, with a few modifications. The reaction for POD activity was initiated by adding 0.2 mL of 0.5 M H2O2 (diluted with 50 mM phosphate buffer) to 3 mL of 25 mM guaiacol solution and 0.3 mL of crude enzyme. POD activity was given in U/g, where one unit (U) was defined as an increase in absorbance of 1 per minute at 470 nm.
APX (EC 1.11.1.11) activity in pummelo juice sac was measured by our previous method, as described by Nie et al. (2020). The total volume of the reaction mixture was 3 mL, made up of 2.6 mL of 50 mM phosphate buffer (pH 7.5), 0.3 mL of 20 mM H 2 O 2 (diluted with 50 mM phosphate buffer) and 0.1 mL of enzyme crude extract. APX activity was expressed in U/g, with one unit being defined as the decrease of 0.01 in absorbance at 290 nm over one minute.
GR (EC 1.8.1.7) activity was determined following the method of Peng et al. (2022). The reaction mixture (3.3 mL) for GR activity was initiated by adding 30 µL of nicotinamide adenine dinucleotide phosphate to 3.0 mL of 100 mM phosphate buffer (pH 7.5), 0.2 mL of crude enzyme extract, and 0.1 mL of 10 mM oxidized GSH. One unit of GR activity corresponded to a decline of 0.01 in absorbance at 340 nm per minute, and the results were expressed as U/g.
RNA extraction and RT-qPCR analysis
The expression levels of CmSOD (Cg7g011780), CmCAT (Cg3g025260), CmPOD (Cg2g001370), CmAPX (Cg6g002810), and CmGR (Cg5g018970) were measured according to our modified methodology (Nie et al., 2020). Total RNA was extracted from 0.5 g of frozen juice sac sampled on 0, 15, 30, 45, 60, 75, and 90 d from the control and AP-treated pummelo fruit according to the cetyltrimethyl ammonium bromide (CTAB) method described by Landi et al. (2021). First-strand cDNA synthesis and RT-qPCR analysis of the genes encoding ROS-scavenging enzymes were performed following the procedures described by Chen et al. (2022). The reaction conditions were pre-denaturation at 95°C for 30 s; 39 cycles of 95°C for 5 s and 60°C for 30 s; followed by a melting-curve stage of 95°C for 15 s, 60°C for 30 s, and a ramp to 95°C. The Actin (Cg8g022300) gene was used as the internal control (Nie et al., 2020). The 2^−ΔΔCt method was used to quantify the relative expression of CmSOD, CmCAT, CmPOD, CmAPX, and CmGR (Livak and Schmittgen, 2001). All primers used for RT-qPCR analysis are listed in Supplementary Table 1.
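The 2^−ΔΔCt calculation (Livak and Schmittgen, 2001) can be sketched as follows; the Ct values in the example are hypothetical, with Actin as the reference gene as stated above:

```python
def relative_expression_2_ddct(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    # Relative expression by the 2^-ddCt method:
    #   dCt (sample)     = Ct(target) - Ct(Actin) for the treated sample
    #   dCt (calibrator) = Ct(target) - Ct(Actin) for the calibrator
    #                      (e.g. the day-0 control)
    #   expression       = 2 ** -(dCt_sample - dCt_calibrator)
    d_ct_sample = ct_target - ct_actin
    d_ct_cal = ct_target_cal - ct_actin_cal
    return 2.0 ** -(d_ct_sample - d_ct_cal)

# Hypothetical Ct values: target crosses threshold 2 cycles earlier
# relative to Actin than in the calibrator -> 4-fold up-regulation.
fold_change = relative_expression_2_ddct(24.0, 20.0, 26.0, 20.0)  # 4.0
```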
Statistical analysis
The data were analyzed by one-way ANOVA using SPSS Statistics software (version 20.0, IBM, NY, USA). Significant differences among the AP treatment means were separated at the 5% level of probability at each storage time.
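For reference, the F statistic underlying a one-way ANOVA can be computed directly; this dependency-free sketch only illustrates the calculation (it does not reproduce SPSS output or the post-hoc mean separation, and the group values are made up):

```python
def one_way_anova_f(*groups):
    # F = MS_between / MS_within across k treatment groups
    # (e.g. control, 0.5% AP, 1.0% AP), each holding replicate values.
    values = [x for g in groups for x in g]
    grand_mean = sum(values) / len(values)
    k, n = len(groups), len(values)
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    # Within-group (residual) sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate measurements for three treatment groups:
f_stat = one_way_anova_f([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0])
```

The resulting F would then be compared against the F distribution with (k−1, n−k) degrees of freedom at the 5% level.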
Results
3.1 Effects of AP on peel color, albedo firmness, decay index, weight loss, TSS content and TA level
The color of harvested fruit is commonly used as an indicator of its appearance quality and serves as a purchase criterion for consumers. The surface color of 'Jinshayou' pummelo fruit turned from green to yellow during storage at room temperature after harvest (Figure 1A). The CCI value of 'Jinshayou' pummelo fruit showed a gradual increase over the storage period in the non-treated (control) and the two AP-treated groups (Figure 1B). Nevertheless, the increase in CCI value was markedly reduced by pre-storage AP treatment in comparison with the control fruit. The inhibitory effect of pre-storage AP treatment was positively correlated with the AP concentration, and the 1.0% AP treatment delayed the increase in CCI value significantly more than the 0.5% concentration (Figures 1A, B).
The firmness of fruit is a key aspect that affects the fruit quality during postharvest storage. The albedo firmness of the control and AP-treated 'Jinshayou' pummelo fruit decreased throughout the storage period, reaching the lowest values at the end of storage. AP treatment at 0.5% and 1.0% effectively postponed the decline in albedo firmness, being 16.3% and 25.9% higher (P < 0.05) than that in the control pummelo fruit at 90 d of storage, respectively ( Figure 1C).
As depicted in Figure 1D, the decay index in the three groups increased continuously over the entire storage period. Decayed fruit began to appear after 15 d of room temperature storage, and the decay index in the control pummelo fruit reached 14.0% at 90 d of storage, while the decay indices in the 0.5% and 1.0% AP-treated groups were only 11.0% and 8.5%, respectively, suggesting that the efficacy of AP treatment against postharvest senescence in 'Jinshayou' pummelo fruit may be due to an AP-elicited defense response toward abiotic stress or pathogen infection.
In general, fruit weight loss increased gradually with the extension of the storage period. The weight loss of 'Jinshayou' pummelo fruit increased during room temperature storage, but the control group showed a significantly faster increase (P < 0.05; Figure 1E). At the end of the storage period (90 d), the weight loss in the control group was 10.27%, while that in the groups treated with 0.5% and 1.0% AP was 7.32% and 7.21%, respectively.
Both TSS and TA are important indicators of fruit maturity that largely determine the storability and overall flavor of horticultural fruits, particularly citrus fruit. TSS and TA contents in pummelo juice sacs decreased in all three groups throughout the storage period, and pre-storage AP treatment notably delayed this reduction during room temperature storage (Figures 1F, G). The contents of TSS and TA in pummelo fruit treated with 1.0% AP were 13.0% and 26.7% higher than those in the control fruit (P < 0.05), while both contents in the 0.5% AP-treated pummelo fruit were 10.3% and 16.7% lower than those in the control fruit at the end of the storage period (P < 0.05). Meanwhile, there was no significant difference in TSS content between the two AP-treated groups (Figure 1F). Compared to the 0.5% AP treatment, the 1.0% AP treatment better mitigated TA degradation in 'Jinshayou' pummelo fruit (Figure 1G).
Figure 1: The inhibitory effects of pre-storage AP treatment on color development (A), CCI (B), firmness (C), decay index (D), weight loss (E), TSS content (F), and TA content (G) of harvested 'Jinshayou' pummelo fruit. Different letters at the same sampling point indicate significant differences at P < 0.05 according to Duncan's multiple range test.
Effects of AP on electrolyte leakage, MDA content and H2O2 level
Electrolyte leakage, a physiological marker of membrane permeability, is widely used to evaluate cell membrane integrity. As illustrated in Figure 2A, the electrolyte leakage of pummelo juice sacs increased gradually during storage. Treatment with 1.0% AP remarkably suppressed this increase and maintained the lowest level compared with the control and 0.5% AP-treated groups (P < 0.05). At the end of the storage period, the electrolyte leakage of the control fruit was 26.4%, whereas that of the 0.5% and 1.0% AP-treated fruit had reached only 18.1% and 15.5%, respectively.
Similarly, AP treatment delayed the accumulation of MDA compared with the control pummelo juice sacs (Figure 2B). The MDA content in the control juice sacs increased continuously during room temperature storage, while MDA accumulated at a slower rate in both the 0.5% and 1.0% AP-treated fruit. The MDA content of the 1.0% AP-treated juice sacs was much lower than that of the control fruit over the whole storage period (P < 0.05), suggesting that 1.0% AP treatment may help delay membrane lipid peroxidation in 'Jinshayou' pummelo fruit under unfavorable conditions, including postharvest senescence stress.
The H2O2 content of room temperature stored pummelo fruit increased with storage time in every group (Figure 2C). The H2O2 content in the juice sacs of the control group rose rapidly during storage, while both AP-treated groups showed a remarkably slower increase during the first 30 d of storage (P < 0.05), followed by a rapid rise. Compared with the control juice sacs, the H2O2 content at the end of the storage period was 10.8% and 16.9% lower in the 0.5% and 1.0% AP-treated fruit, respectively.
Effects of AP on antioxidant (AsA, GSH, phenolics and flavonoids) contents and antioxidant capacity
AsA is not only a key primary component affecting citrus quality, but also one of the endogenous non-enzymatic antioxidants involved in clearing over-accumulated ROS. As illustrated in Figure 3A, the AsA content in juice sacs from the control group decreased over the storage period, while the AsA content in 0.5% and 1.0% AP-treated fruit increased slightly during the first 30 d of storage and then declined until the end of storage. Furthermore, the overall AsA content in the 1.0% AP-treated juice sacs was higher than in the control and 0.5% AP-treated fruit during the middle to late stages of storage (P < 0.05), indicating that pre-storage 1.0% AP treatment effectively prevented the decline of AsA content, likely owing to the antioxidative effect of AP.
In addition to AsA, GSH is another representative substrate of the AsA-GSH system (Halliwell-Asada cycle); together they play a pivotal role, alongside the enzymatic antioxidant systems, in maintaining redox homeostasis in postharvest fruits, and their amounts directly reflect the fruit's ability to scavenge ROS. As shown in Figure 3B, the GSH content in the control pummelo fruit increased slightly during the first 30 d of storage and then decreased. By contrast, the GSH content in juice sacs treated with 0.5% or 1.0% AP increased markedly and peaked at 60 d of room temperature storage. The GSH content in the 1.0% AP-treated fruit was higher than that of the 0.5% AP-treated fruit (P < 0.05); meanwhile, compared with the control fruit, both 0.5% and 1.0% AP treatments led to a significant increase in GSH content, indicating that pre-storage AP treatment had a positive effect on GSH accumulation.

FIGURE 2
Variation in electrolyte leakage (A), MDA content (B), and H2O2 content (C) in the juice sacs of 'Jinshayou' pummelo fruit treated without or with AP (0.5% and 1.0%) during room temperature storage. Different letters at the same sampling point indicate significant differences at P < 0.05 according to Duncan's multiple range test.

Phenolic compounds are a class of plant secondary metabolites widely found in fruits, with a variety of biological activities, particularly antioxidant activity. The TP content in each group peaked at 45 d of room temperature storage (Figure 3C). The maximum TP content in the control fruit was 26.5% above the initial value (35.6 ± 2.1 mg/100 g), while those in the 0.5% and 1.0% AP-treated juice sacs were 69.1% and 83.2% higher, respectively (P < 0.05); moreover, 'Jinshayou' pummelo fruit treated with 0.5% or 1.0% AP had significantly higher TP contents than the control fruit throughout the storage period (P < 0.05). This evidence suggests that pre-storage AP treatment may promote the accumulation of phenolic compounds and delay their degradation in the later stages of storage.
The TF content is an important index for evaluating the antioxidant capacity of harvested fruits. The control pummelo fruit reached its maximum TF content at 30 d of postharvest storage and then declined sharply (Figure 3D). Interestingly, the TF content of the 0.5% and 1.0% AP-treated fruit did not peak until 60 d of storage and decreased gradually afterward. The TF content of the 1.0% AP-treated juice sacs was significantly higher than that of the control and 0.5% AP-treated groups after 60, 75, and 90 d of storage (P < 0.05; Figure 3D), suggesting that the 1.0% AP treatment was the most effective at preserving TF content.

FIGURE 3
Variation in AsA content (A), GSH content (B), TP content (C), TF content (D), DPPH scavenging capacity (E), and •OH scavenging capacity (F) in the juice sacs of 'Jinshayou' pummelo fruit treated without or with AP (0.5% and 1.0%) during room temperature storage. Different letters at the same sampling point indicate significant differences at P < 0.05 according to Duncan's multiple range test.
The DPPH scavenging capacity is a key element in evaluating fruit antioxidant activity. Similar to the overall variation in TP content, the DPPH scavenging capacity of pummelo fruit peaked at 60 d of storage and declined rapidly thereafter (Figure 3E). Compared with the initial value (50.3 ± 2.0%) at 0 d, the peak DPPH scavenging capacities of the control, 0.5% and 1.0% AP-treated juice sacs were 13.9%, 18.2% and 22.3% higher, respectively, showing that pre-storage AP treatment could reduce oxidative damage by enhancing the ability to scavenge ROS radicals in 'Jinshayou' pummelo fruit.
The •OH scavenging capacity in the control juice sacs decreased throughout the storage period, similar to the AsA content. In the AP-treated fruit, the •OH scavenging capacity in the juice sacs increased slightly and peaked at 30 d of room temperature storage, followed by a progressive decline (Figure 3F). It is worth noting that the control fruit showed the lowest •OH scavenging capacity over the entire storage period (P < 0.05); overall, the •OH scavenging capacity of juice sacs treated with 0.5% and 1.0% AP was 1.14 and 1.23 times that of the control fruit, respectively.
Effects of AP on ROS-scavenging enzyme activities and the expression of their encoding genes
During room temperature storage, the accumulation of excess ROS can cause oxidative stress and a decline in storage quality, which the enzymatic antioxidant system helps to counteract. The activities of the antioxidant enzymes SOD, CAT, POD, APX, and GR were therefore measured in this study. SOD activity in pummelo juice sacs increased sharply in the first 30 d of storage in all three groups, and both AP-treated fruits showed higher SOD activity than the control fruit (P < 0.05; Figure 4A). The expression level of the CmSOD gene was strongly stimulated in the 1.0% AP-treated juice sacs during the last 30 d of storage (Figure 4B).
Compared with the control, 1.0% AP treatment significantly increased CAT activity throughout storage; at its peak at 45 d, CAT activity was approximately 17.8% higher than in the control. Although 0.5% AP treatment also increased CAT activity, there was no noticeable difference between the two treated groups except at 30 d (Figure 4C). Concurrently, both 0.5% and 1.0% AP treatments significantly increased the expression level of the CmCAT gene after 30 d of storage, which was overall 27.0% and 29.6% higher, respectively, than in the control group (P < 0.05; Figure 4D).
The POD activity in the control, 0.5% AP- and 1.0% AP-treated juice sacs increased with storage duration, reaching peak values of 13.53 ± 0.75, 17.67 ± 1.10 and 19.01 ± 0.55 U/g at 75 d, respectively, and then declined until the end of storage (Figure 4E). Compared with the control group, both AP-treated juice sacs exhibited significantly higher POD activity throughout room temperature storage (P < 0.05). Additionally, the expression of the CmPOD gene was up-regulated in juice sacs treated with 1.0% AP, with a significant difference over the whole storage period (P < 0.05; Figure 4F).
A gradual decrease in APX activity was observed in the control group (Figure 4G). In contrast, juice sacs treated with 0.5% and 1.0% AP exhibited higher APX activity throughout the entire storage period (P < 0.05), corresponding to overall increases of 11.6% and 18.4%, respectively. The expression level of the CmAPX gene followed a trend similar to that of APX activity. During the last 60 d of the storage period, both 0.5% and 1.0% AP treatments significantly increased CmAPX expression in juice sacs (P < 0.05), by an overall 40.9% and 67.2%, respectively (Figure 4H).
The GR activity in the control juice sacs decreased consistently over the storage period, while that in both AP-treated fruit increased gradually, peaking at 60 d of storage before declining for the remainder of storage (Figure 4I). The 1.0% AP-treated juice sacs displayed higher GR activity than the control and 0.5% AP-treated groups, with a significant difference over the whole storage period (P < 0.05). As shown in Figure 4J, the expression level of the CmGR gene in the control fruit decreased continuously throughout room temperature storage, while pre-storage AP treatment resulted in a remarkable up-regulation of this gene throughout the storage period (P < 0.05).
Correlation analysis
To understand the impact of AP treatment on postharvest senescence and quality deterioration of 'Jinshayou' pummelo fruit, both principal component analysis (PCA) and correlation analysis were applied to the ROS metabolism-related parameters measured above. All 29 parameters associated with postharvest ROS metabolism clustered into two principal components (PC1: 70.50%; PC2: 20.59%; Figure 5A). PC1 comprised 11 parameters, including CCI, fruit decay rate, weight loss, electrolyte leakage, MDA content, H2O2 content, TP content, DPPH scavenging capacity, POD activity, and CmSOD and CmPOD expression (Figure 5A). PC2 showed higher loadings for the contents of five nutritional/functional components (TSS, TA, AsA, GSH, and TF), firmness, •OH scavenging capacity, the activities of four ROS-scavenging enzymes, and the expression levels of the CmCAT, CmAPX and CmGR genes, which exhibited strong negative correlations with postharvest quality deterioration (Figure 5B; P < 0.05 or 0.01).
The development of postharvest senescence results from an imbalance in ROS homeostasis, which increases the proportion of defective fruit and degrades fruit quality. In the present study, AP treatments reduced senescence-related indicators such as CCI, loss of firmness, electrolyte leakage, MDA content, and H2O2 content, while enhancing the non-enzymatic (AsA, GSH, TP, and TF) and enzymatic (SOD, CAT, APX, and GR) ROS-scavenging systems (Figure 5C). These findings demonstrate that postharvest senescence and quality deterioration in 'Jinshayou' pummelo fruit are accompanied by a decline in the antioxidant system and overall quality, suggesting that ROS metabolism is closely linked to postharvest storability in 'Jinshayou' pummelo fruit stored at room temperature.

FIGURE 4
The activities and expression levels of SOD (A, B), CAT (C, D), POD (E, F), APX (G, H), and GR (I, J) in the juice sacs of 'Jinshayou' pummelo fruit treated without or with AP (0.5% and 1.0%) during room temperature storage. β-actin (Cg8g022300) was used as the internal control gene. Different letters at the same sampling point indicate significant differences at P < 0.05 according to Duncan's multiple range test.
Discussion
Pummelo is the largest known citrus fruit and is extremely prone to water loss and decay after harvest, which seriously reduces its edible quality and commercial value. Polyphenols are important plant secondary metabolites with strong antioxidant capacity and potential for preserving fruit under postharvest conditions (Zhang et al., 2015; Su et al., 2019; Yu et al., 2021; Bai et al., 2022; Li et al., 2022; Xiang et al., 2022). AP, a natural polyphenol found in green-ripened apple, has been shown to be beneficial in inhibiting α-glucosidase, enhancing human immunity, deferring organism aging, preventing chronic ethanol exposure-induced neural injury, and reducing the risk of type II diabetes and cardiovascular and intestinal diseases (Riaz et al., 2018; Li et al., 2019; Gong et al., 2020; Wang et al., 2020). It also has excellent antioxidant ability to scavenge free radicals and protect plant cells from damage caused by oxidative stress (Su et al., 2019; Wang et al., 2020). Numerous studies have demonstrated that pre-storage AP treatment can retard postharvest senescence and quality deterioration of various fruits and improve their resistance to abiotic stress (Fan et al., 2018; Su et al., 2019; Riaz et al., 2021; Bai et al., 2022). In this study, pre-storage application of AP effectively delayed fruit color change and softening, and reduced decay and weight loss of harvested 'Jinshayou' pummelo fruit during storage at room temperature (Figures 1A-E). Pre-storage treatment with 1.0% AP was the more effective in prolonging the storage duration to 90 d at 20°C. Nie et al. (2020) found that a 1.5% chitosan-based coating could reduce postharvest loss of 'Majiayou' pummelo fruit, maintaining storability and nutritional quality for 120 d.

FIGURE 5
The PCA (A) and correlation analysis (B) of the ROS metabolism-related parameters in 'Jinshayou' pummelo fruit stored at 20°C for 90 d. A proposed model of the potential mechanism by which pre-storage AP treatment delays postharvest senescence and quality deterioration by regulating the ROS-scavenging system in harvested 'Jinshayou' pummelo fruit (C).

The levels of TSS and TA in fresh fruits correspond to flavor and nutritional quality. As harvested fruits undergo senescence, soluble sugars and organic acids gradually degrade; delaying this process helps to maintain flavor quality and extend storage life (Chen et al., 2005; Baswal et al., 2020; Serna-Escolano et al., 2021). Our observation of declining soluble sugars and organic acids in pummelo juice sacs during room temperature storage is consistent with a previous finding of Huang et al. (2021). The data in Figures 1F, G reveal that pre-storage treatment with 0.5% and 1.0% AP delayed the declines of TSS and TA content in 'Jinshayou' pummelo fruit after 30 d of storage, with both levels remaining higher than in the control fruit, in accordance with pre-storage AP treatment of 'Dingxiang' litchi (Su et al., 2019). Moreover, 1.0% AP treatment delayed TA degradation more effectively, yielding the highest TA content throughout the storage period. Therefore, 'Jinshayou' pummelo fruit treated with 1.0% AP is likely to retain better flavor quality during storage than the control.
Postharvest senescence and quality deterioration of fresh fruit may be partially due to oxidative stress induced by excess ROS, leading to a loss of cell membrane integrity (Fan et al., 2018; Huang et al., 2021; Hanaei et al., 2022). Prolonged exposure to senescence-elicited oxidative stress results in membrane lipid peroxidation, and electrolyte leakage and MDA content are widely used to assess the integrity of cell membranes (Bakpa et al., 2022; Chen et al., 2022). In the present study, the development of postharvest loss in pummelo fruit stored at 20°C for 90 d was likely due to loss of cell membrane integrity and membrane lipid peroxidation, as revealed by the increases in electrolyte leakage, MDA accumulation and H2O2 content. Pre-storage treatment with 0.5% and 1.0% AP resulted in lower levels of electrolyte leakage, MDA and H2O2, protecting the integrity of cell membranes in 'Jinshayou' pummelo fruit from ROS-induced oxidative damage. The antioxidant potential of pre-storage AP treatment has likewise been shown to reduce ROS-induced oxidative stress, as evidenced by lower electrolyte leakage, MDA accumulation and H2O2 content in AP-treated 'Dingxiang' litchi fruit (Su et al., 2019), 'Dongzao' winter jujube fruit (Zhang et al., 2016), and 'Hongyan' strawberry fruit (Riaz et al., 2021); a similar alleviation of postharvest senescence and quality deterioration was observed here in 'Jinshayou' pummelo fruit.
To reduce oxidative damage caused by accumulated ROS and maintain ROS homeostasis in plant cells, it is extremely important to activate the ROS-scavenging system, comprising both non-enzymatic and enzymatic components (Lum et al., 2016; Chen et al., 2022; Peng et al., 2022). AsA is not only a key primary component affecting citrus quality, but also one of the endogenous non-enzymatic antioxidants of the AsA-GSH system (Halliwell-Asada cycle) that scavenges over-accumulated ROS in fruit (Nie et al., 2020; Serna-Escolano et al., 2021; Ackah et al., 2022). Additionally, GSH is another representative substrate of the AsA-GSH system involved in clearing over-accumulated ROS, which maintains ROS homeostasis and delays cell senescence caused by oxidative stress (Hanaei et al., 2022; Peng et al., 2022). Phenolic compounds are a class of plant secondary metabolites that, together with flavonoids, play a pivotal role in color conversion, flavor formation, and stress resistance of harvested fruits, protecting them from oxidative damage caused by over-production of ROS (Ackah et al., 2022; Jiang et al., 2022). Therefore, high contents of antioxidants (AsA, GSH, phenolics, and flavonoids) are closely related to a fruit's resistance to postharvest senescence stress. Our findings demonstrate that 1.0% AP treatment produced remarkable elevations of AsA, GSH, TP and TF contents in pummelo juice sacs compared with the control (Figures 3A-D). Correspondingly, both DPPH and •OH scavenging capacities were maintained at higher levels than in the control fruit (Figures 3E, F).
Similar improvements in antioxidant contents and antioxidant capacity were also observed in 'Majiayou' pummelo treated with 1.5% chitosan (Nie et al., 2020), 'Kinnow' mandarin treated with 2.0% gum Arabic enriched with (0.5-1.0%) ZnO-NPs (Nxumalo et al., 2022), 'Fino' lemon treated with 0.5 mM salicylic acid (Serna-Escolano et al., 2021), 'Rio Red' grapefruit treated with 10 g/L pectic oligosaccharides (Vera-Guzmán et al., 2019), 'Satsuma' orange treated with hot (40°C) electrolyzed functional water (Shi et al., 2020), and kumquat fruit treated with 300 mg/L ellagic acid (Liu et al., 2018). Zhang et al. (2015) and Su et al. (2019) found that litchi fruit dipped in 0.5% AP exhibited higher AsA, GSH, and TP contents, as well as superior DPPH scavenging capacity, which may help delay pericarp browning and maintain postharvest quality during storage at 25°C. Our results, in combination with these previous reports, demonstrate that pre-storage AP treatment can maintain the antioxidant capacity of harvested 'Jinshayou' pummelo fruit by decreasing the consumption of endogenous antioxidants in the juice sacs, which in turn reduces ROS-induced oxidative damage and delays quality deterioration.
In addition to the non-enzymatic ROS-scavenging system, the enzymatic ROS-scavenging system plays an irreplaceable role in reducing oxidative senescence in harvested fruits (Zhang et al., 2015; Lum et al., 2016; Nie et al., 2020). During long-term storage at 20°C, the triggering of ROS accumulation in pummelo fruit may be responsible for oxidative damage and thereby an unacceptable deterioration in fruit quality. SOD, CAT, POD, APX, and GR are the key antioxidant enzymes of the enzymatic ROS-scavenging system that delay senescence in harvested pummelo fruit. For instance, SOD is the only enzyme in plant cells able to dismutate superoxide anions into H2O2 and O2 (Serna-Escolano et al., 2021; Mitalo et al., 2022). CAT and POD are two important antioxidant enzymes that act synergistically to decompose the H2O2 produced into H2O and O2, reducing the harmful effect of over-produced H2O2 in plant cells (Carrión-Antolí et al., 2022; Hanaei et al., 2022; Nxumalo et al., 2022). Meanwhile, APX and GR, the key enzymes of the AsA-GSH system, play a crucial part in maintaining redox homeostasis and protecting plant cells from oxidative damage incited by excess ROS. It has been well demonstrated that delayed postharvest senescence and quality deterioration are associated with an enhanced enzymatic ROS-scavenging system (Nie et al., 2020; Ackah et al., 2022; Peng et al., 2022). In this study, higher activities of SOD, CAT, POD, APX, and GR were observed in the AP-treated pummelo fruit; furthermore, the expression levels of the encoding genes CmCAT, CmPOD, CmAPX and CmGR were up-regulated by 1.0% AP treatment (Figure 4), accompanied by lower H2O2 content (Figure 2C), suggesting that the elimination of H2O2 in pummelo juice sacs depends on the improvement of SOD, CAT, POD, APX, and GR activities and the up-regulation of their encoding genes.
Numerous earlier studies have illustrated that postharvest treatment with AP or other plant polyphenols (e.g., tea polyphenols, Prunus mume polyphenols, p-coumaric acid and procyanidins) can prompt the enzymatic ROS-scavenging system to postpone postharvest deterioration and enhance the storability of litchi (Zhang et al., 2015; Su et al., 2019; Bai et al., 2022; Xiang et al., 2022), winter jujube (Zhang et al., 2016; Yu et al., 2021), pitaya (Fan et al., 2018), strawberry (Riaz et al., 2021), banana (Chen et al., 2019), and blueberry (Mannozzi et al., 2018). All these studies suggest that activation of the enzymatic ROS-scavenging system is crucial in delaying postharvest senescence of horticultural fruits.
Conclusion
The findings of this study demonstrate that pre-storage AP treatment is effective in reducing postharvest loss of 'Jinshayou' pummelo fruit. Treatment with 1.0% AP enhanced the activities of ROS-scavenging enzymes and up-regulated their corresponding genes, and maintained higher antioxidant contents and free radical scavenging capacity, thereby reducing oxidative stress and stabilizing ROS homeostasis in pummelo juice sacs (Figure 5C). Accordingly, 1.0% AP-treated 'Jinshayou' pummelo fruit stored at room temperature for 90 d exhibited reduced peel color change, fruit softening, decay and weight loss. Considering its strong antioxidant attributes and acceptable cost, AP could be an effective and safe strategy for delaying postharvest deterioration and extending the storability of harvested citrus fruit.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Origin of the Hydrophobic Behaviour of Hydrophilic CeO2
The nature of the hydrophobicity found in rare-earth oxides is intriguing. The CeO2(100) surface, despite its strongly hydrophilic nature, exhibits hydrophobic behaviour when immersed in water. In order to understand this puzzling and counter-intuitive effect we performed a detailed analysis of the confined water structure and dynamics. We report here an ab-initio molecular dynamics (AIMD) simulation study which demonstrates that the first adsorbed water layer, in immediate contact with the hydroxylated CeO2 surface, generates a hydrophobic interface with respect to the rest of the liquid water. The hydrophobicity is manifested in several ways: a considerable diffusion enhancement of the confined liquid water as compared with bulk water at the same thermodynamic condition, a weak adhesion energy, and few H-bonds above the hydrophobic water layer, which may also sustain a water droplet. These findings introduce a new concept in water/rare-earth oxide interfaces: hydrophobicity mediated by specific water patterns on a hydrophilic surface.
Rare-earth oxides hold a special place among the metal oxides as they have been found experimentally to exhibit particularly high degrees of hydrophobicity. [1] Normally, metal oxide surfaces are hydrophilic due to the exposed under-coordinated metal and oxygen atoms, which create local dipoles where water molecules adsorb strongly. [2,3] The origin of the apparently intrinsic and unexpected hydrophobic behaviour of rare-earth oxides is unclear, and many different and contradicting scenarios have been proposed in the literature. Azimi et al. [1] proposed that the inaccessibility of the 4f electrons of the metal ions for interacting with the adsorbing water oxygen could explain an intrinsic hydrophobicity. Carchini et al. [4] instead proposed that the mismatch between the rare-earth lattice oxygens and the adsorbed water network would overrule a natural hydrophilic character of these metal oxide surfaces, making them hydrophobic. They also proposed that a change in the protonation state (induced by oxygen vacancies, surface reduction and/or the degree of water splitting) of a rare-earth oxide surface could switch its hydrophobic character to becoming a completely wet surface, in accordance with previous studies [5] that also discussed the role of surface hydroxylation.
Among the rare-earth oxides, ceria (CeO2) has been seen as a reference material for the study of wetting properties due to its catalytic power for water splitting [6] and its high degree of apparent hydrophobicity, with water contact angles between 90° and 120° [1,7,8] (note: a large water contact angle is the definition of a hydrophobic surface). [11,12] The latter experimental results originate from X-ray Photoelectron Spectroscopy (XPS) analysis of the ceria surfaces upon removal of excess water in order to measure the presence of adsorbed carbon species. Furthermore, the same studies reported a rapid increase of the water contact angle immediately after exposure of the ceria surface to atmospheric moisture. Hence, a direct in situ measurement of the hydrophilic-hydrophobic transition at operando conditions during water exposure is missing, which makes it difficult to decipher the microscopic-level reasons behind this phenomenon.
All in all, it is fair to conclude that the current understanding of, and information about, the hydrophobicity/hydrophilicity of ceria is ambiguous. One can further note that the degree of CeO2 hydrophobicity may depend on the specific crystal facets exposed (and their degree of hydroxylation), as suggested by density functional theory (DFT) based theoretical estimates of contact angles by Fronzi et al.; [13] in fact, only the (100) facet has so far been experimentally demonstrated to behave hydrophobically. [14,15]

In this Communication we report a first-principles molecular dynamics (AIMD) simulation addressing the origin of hydrophobicity of a hydrophilic CeO2 surface. We demonstrate that the investigated (100) facet of CeO2, although intrinsically hydrophilic, exhibits an effective hydrophobicity which is induced by the first adsorbed water layer rather than by the ceria surface itself. This water-induced hydrophobicity is tested in an AIMD droplet-on-a-fixed-layer simulation, which is found to retain a sustainable non-wetting character. Finally, the molecular diffusion rate of water confined between the CeO2 surfaces is found to be enhanced with respect to that in bulk water at the same thermodynamic condition. [18][19][20][21]

Theoretical studies of the clean CeO2(111) surface [4] and later also of clean (110) and (100) surfaces [13] have reported these to be hydrophobic. This conclusion was derived by estimating the water contact angle from expressions involving the interaction energy of a double ice layer with the CeO2 surfaces. Such a computational procedure does not consider the contribution of liquid bulk water and disregards the water interaction beyond the double ice layer. Fully hydrated CeO2 facets have indeed been investigated by computations, for example in the AIMD simulations by Camellone et al. [22] and by Ren et al. [23] (Figure 1). This structure was suggested by Kropp et al.
[24] and it was later confirmed experimentally by a Nuclear Magnetic Resonance (NMR) study showing that the (100) surface is dominated by the existence of two OH populations with different chemical environments, one weakly interacting and one strongly interacting. [15] In our study, the vacuum region above the surface (about 37 Å thick) was filled with H2O molecules corresponding to the water density at 1 atm and 310 K (see methods for further details).
The first important result is the formation of an almost rigid, well structured, flat layer on top of the hydroxylated CeO2(100) surface. This water layer is visible in Figure 1 (upper panel) as the peak labeled L1 in the water oxygen density profile at 4 Å and in the representative snapshots displayed (Figure 1, lower panel, and Figure 2a). The L1 layer interacts with the surface hydroxyl groups (which we label L0). After room-temperature MD relaxation, in the absence of the water slab above them, the hydroxide ions in L0 exhibit two conformations, orthogonal and parallel to the surface (OH⊥ and OH∥). These OH configurations are maintained in the presence of liquid water and can be identified by the hydrogen density peaks at 1.5 and 2.4 Å in Figure 1. Our resulting MD structure, which features an L0 layer interacting strongly with a water layer (L1) above it, is consistent with the structures obtained by Kropp et al. [24] in their static DFT structure optimizations.
Also in our MD simulations, L1 displays quite a rigid structure which is maintained along the full MD trajectory (see Figure S1 in Supporting Information), suggesting significant adhesion to L0. This is also confirmed by a few separate single-point calculations from the AIMD trajectory, in which one single isolated water molecule from L1 was left to interact with L0 (all other water molecules were removed). This was done for two typical structures: the L1 water donating a hydrogen bond to an OH∥ in L0, or the L1 water accepting a hydrogen bond from an OH⊥ in L0. The former has an interaction energy of 0.5 eV and the latter of 0.3 eV.
We furthermore computed the interaction energy (adhesion energy) between the layers L1 and L0 (see Table 1 and Supp. Info.), expressed per L1 water molecule. The resulting energy is 0.53 eV per water molecule in L1. This value is larger than our interaction energy per water molecule in bulk water, 0.45 eV (see Table 1). The extra energy contribution is due to the strong interaction with the hydroxylated CeO2(100) surface, confirming its hydrophilic nature. In the same way we computed the adhesion energy between L1 and L2, which instead resulted in 0.28 eV per water molecule, considerably lower than the respective value in bulk water. This implies an energetic penalty for a water molecule in L2 interacting with L1 (it instead prefers the liquid water surroundings), which suggests a hydrophobic behaviour.
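The adhesion energies quoted here follow the usual bookkeeping of subtracting the energy of the assembled system from the sum of its separated parts, divided by the number of L1 molecules. A minimal sketch with placeholder energies (the Hartree-to-eV conversion is our own assumption about code units; the actual expressions are given in the Supp. Info.):

```python
# Minimal sketch of the adhesion-energy bookkeeping: energy of the
# separated parts minus energy of the assembled system, per L1 water
# molecule. The input energies below are placeholders, not the paper's
# DFT values; the Hartree-to-eV factor is an assumption about units.
EV_PER_HARTREE = 27.211386

def adhesion_energy_per_molecule(e_total, e_layer, e_rest, n_l1):
    """Positive result means the L1 layer binds to the rest of the system."""
    return ((e_layer + e_rest) - e_total) * EV_PER_HARTREE / n_l1
```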
Returning now to the structure, inspection of the second water layer (L2, peak maximum at 6.8 Å in the density profile) is also informative. We note in particular the low H density between the first and second layers (i.e. between L1 and L2). The hydrogen atoms belonging to L1 water molecules lie almost entirely within this layer, or they point towards the CeO2 surface (peak at 3.1 Å). Just a minority of the H atoms are oriented towards L2. At the same time, the H atoms of the L2 water molecules either lie completely in L2 or, on average, point slightly away from L1. This suggests only weak interaction between the L1 and L2 layers. [25] Together, L0 and L1 form a bi-layer with a distorted square lattice. A top view snapshot of L1 is presented in Figure 2a. Classical MD studies [26-28] have demonstrated that distorted and ordered FCC and hexagonal patterns of water molecules can exhibit hydrophobic character. Interesting cases are the bi-layer hexagonal ice (BHI) structure observed experimentally for water on graphene at low temperature, [29] water within hydrophobic nanopores, [30] water on clay mineral surfaces [31] and water on gold surfaces [32-34] at low temperature. The BHI pattern has been hypothesized to behave like a hydrophobic surface due to the locking pattern of hydrogen bonds, which impedes the interaction with the surrounding water molecules. This was recently confirmed experimentally for polytetrafluoroethylene surfaces, where the first hexagonal adsorbed water layer was shown to behave hydrophobically. [35] The square lattice of the water bi-layer that we observe in our simulation resembles the hydrogen-bond-locking pattern observed in the BHI structure.
As we mentioned, the existence of an ordered single water monolayer above the hydroxylated (100) ceria surface was predicted theoretically for a low water coverage scenario. [24] Here we confirm that such a structure (the L1 layer) is stable also at room temperature, and we furthermore explore the nature of its interactions with excess water added above it, which has not been discussed in the literature before. We propose that the locking mechanism and the interface structures may have significant consequences for the understanding of ceria hydrophobicity. This might be the case also for the (111) facet of ceria, for which the formation of an ordered water layer was also predicted. [24]

Table 1: Adhesion energies involving the L1 water layer, expressed as energy per L1 water molecule, compared with the average interaction energy in bulk water. "L0-L1" stands for the adhesion energy between L1 and the "L0 + ceria" slab; "L1-L2" stands for the adhesion energy between L1 and the water film above it; "bulk water" is the intermolecular interaction energy per water molecule from a separate BLYP-D3 bulk simulation. The L1-L2 interaction is seen to be considerably weaker than the other two cases. See Supp. Info. for the mathematical expressions used in the calculations.

Interaction type   Energy [eV]
L0-L1              0.53
L1-L2              0.28
bulk water         0.45

To corroborate our hypothesis regarding the hydrophobicity of L1, we performed an ab initio MD simulation of a water droplet placed on the surface of an extended L1 water layer (see methods). Figure 2b shows a characteristic simulation snapshot of the droplet interacting with the layer after 40 ps. It is clearly seen that the water droplet does not wet the underlying water layer but forms the half-spherical shape typical of water droplets on hydrophobic surfaces. Although the simulation time may be short compared with the characteristic relaxation time of the droplet interface, we note that in a simulation with the same droplet placed instead on the bare hydroxylated CeO2(100) surface, it completely spreads over the available surface within a few picoseconds. This supports the notion of the hydrophobic behaviour of L1 compared to that of the bare hydroxylated CeO2(100) surface, in agreement with the interaction energy differences reported in Table 1. Similar computational experiments were performed using classical MD simulations for water droplets on TiO2, [36] on Al2O3 [37] and on regular water patterns, [26] providing support for the notion that certain structures of water in the first adsorbed water layer can increase the water contact angle.
In the following section we present results for water dynamics above the L1 layer in our system, which provide compelling evidence in support of our conjecture that the structure of the first adsorbed water layer plays a key role in the hydrophobicity of ceria (100).
The effects of the proximity of a confining wall on liquid dynamics have been explored in a number of studies. [16,39,40] It was found that a wide range of liquids demonstrate enhanced dynamics close to non-interacting walls. [44,45] On the other hand, a strong enhancement of translational diffusion was observed in a similar simulation of water confined between two flat walls of graphene, [18] which is well known to exhibit a pronounced hydrophobicity. Thus, the comparison of the diffusion rate of water close to confining surfaces with that in the bulk can be regarded as a robust test of surface hydrophobicity.
Following these arguments, we investigated both the translational and rotational diffusion of the water in our CeO2 model system in order to find supporting evidence for our conjecture that the water bilayer described previously behaves as a hydrophobic surface with respect to the rest of the water. Our first important observation is a significant enhancement of the translational diffusion in the water confined between the CeO2(100) surfaces. Figure 3a shows the three-dimensional (3D) Mean Square Displacement (MSD) of water molecules confined between CeO2(100) and TiO2 anatase (101) surfaces, compared with the MSD calculated for bulk water at the same thermodynamic conditions. It is evident that the confinement between the supposedly hydrophilic CeO2(100) surfaces enhances the water diffusion, while the opposite behaviour is observed for the TiO2 case.
We also calculated the 2D MSD for L1 and for the water above L1. The results are presented in Figure 3b, where the lateral MSD parallel to the CeO2 and TiO2 surfaces are compared with the 2D MSD of bulk water at the same temperature and water density. Here we note that practically no diffusion is observed in L1 for either CeO2 or TiO2, confirming their rigid structures. The water above L1 for CeO2 instead displays a significant lateral diffusion enhancement relative to the bulk water and to the TiO2 surface. [19] A similar effect was also reported for a water layer next to a repulsive model surface [20] and for a water layer on hydrophobic Lennard-Jones surfaces. [21] We note that Cicero et al., [18] using ab initio MD simulations, demonstrated that the dipole-dipole correlation in water confined between hydrophobic graphene sheets decayed at the same rate as that of bulk water when the graphene surface separation is 25 Å. In our simulation the dipole correlation function calculated for water between L1 layers on CeO2 separated by 28 Å (between z = 6 Å and z = 34 Å in Figure 1) closely agrees with the dipole correlation of bulk water, supporting the hydrophobic effect of L1 (see Figure S2 in Supp. Info.).
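The MSD diagnostics discussed above reduce to averaging squared displacements from a reference frame, either in 3D or restricted to the lateral (xy) plane. A minimal numpy sketch (illustrative shapes and the single-origin convention are our simplifications, not the analysis code used here):

```python
import numpy as np

# Illustrative sketch of the MSD diagnostics: full 3D MSD and the lateral
# (xy) 2D MSD for a trajectory of shape (n_frames, n_molecules, 3).
def msd(traj, lateral=False):
    disp = traj - traj[0]            # displacement from the first frame
    if lateral:
        disp = disp[:, :, :2]        # keep only the x and y components
    return (disp ** 2).sum(axis=-1).mean(axis=-1)
```

A production analysis would average over many time origins; a single origin keeps the sketch short.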
These results show that the translational and rotational dynamics of water confined between the CeO2(100) surfaces match the behaviour of water confined between hydrophobic graphene surfaces. On the other hand, the observed dynamics is quite different from that of the water confined between TiO2 surfaces. We regard this observation as strong evidence of the hydrophobic nature of the L1 layer.
The main result of this work is our deciphering of the origin of the hydrophobicity of the hydrophilic hydroxylated ceria (100) facet. A first-principles AIMD simulation of liquid water on top of this facet was performed, and the following key observations and conclusions were made. (1) An ordered water layer was found to form on the hydroxylated surface, the two together (L0 + L1) forming an H-bonded key-lock pattern which persists at room temperature. (2) The ordered water layer (L1) interacts only weakly with the water film above it (L2), as evidenced both by the scarcity of H-bonds between the layer and the film above and by the modest interaction energy between them. (3) The ordered water layer formed on the ceria (100) facet thus makes it hydrophobic. This observation is consistent with the results of a separate AIMD simulation of a water droplet deposited on the ordered water layer. (4) The hydrophobicity of such an ordered layer manifests itself in the enhancement of water diffusion as compared to bulk water. [18-21] Altogether our results reveal an atomistic mechanism which explains the origin of the natural hydrophobicity of the hydrophilic ceria (100), and possibly of other rare-earth oxides as well: hydrophobicity triggered by specific water patterns on hydrophilic surfaces.
Figure 1. Contents of our periodic AIMD simulation box for a hydroxylated CeO2(100) surface in liquid water. The water oxygen and hydrogen density profiles (along the direction perpendicular to the surface), averaged along the MD trajectory, are shown above the box. The CeO2(100) surface is stabilized by hydroxyl groups (layer L0) which arrange themselves alternately parallel and perpendicular to the ceria surface and form an intricate key-lock pattern with the first water layer (L1).
Figure 2. a) A snapshot from the AIMD simulation of the system in Figure 1, but with only the L1 water layer displayed. For visualization purposes it has been expanded periodically 3 × 3 times in the xy plane. b) Simulation of a water droplet (at 310 K) on a fixed L1-structured water layer formed on the hydroxylated CeO2(100) surface. The droplet persists, with a contact angle of circa 90°, after 40 ps.
Figure 3. MSD of water confined between CeO2(100) surfaces and between TiO2 anatase (101) surfaces, respectively, compared with bulk water at the same temperature and density. a) The total 3D MSD, and b) the lateral 2D MSD. The overall 3D diffusion enhancement is mainly attributed to the lateral component. The water data for the TiO2 anatase (101) surface are taken from Ref. [38].
We present a weak lensing analysis of a sample of SDSS Compact Groups (CGs). Using the measured radial density contrast profile, we derive the average masses under the assumption of spherical symmetry, obtaining a velocity dispersion for the Singular Isothermal Sphere model, $\sigma_V = 270 \pm 40 \rm ~km~s^{-1}$, and for the NFW model, $R_{200}=0.53\pm0.10\,h_{70}^{-1}\,\rm Mpc$. We test three different definitions of the CG centre to identify which best traces the true dark matter halo centre, concluding that a luminosity-weighted centre is the most suitable choice. We also study the dependence of the lensing signal on CG physical radius, group surface brightness, and morphological mixing. We find that groups with more concentrated galaxy members show steeper mass profiles and larger velocity dispersions. We argue that both a possible lower fraction of interlopers and a truly steeper profile could be playing a role in this effect. Straightforward velocity dispersion estimates from member spectroscopy yield $\sigma_V \approx 230 \rm ~km~s^{-1}$, in agreement with our lensing results.
INTRODUCTION
The largest concentrations of mass and visible matter in the Universe reside in galaxy clusters. However, a significant fraction of galaxies are located in groups of different mass and morphology content (Karachentsev 2005). Studying the physical properties of these systems is of prime importance to understand galaxy formation and evolution.
Compact groups of galaxies (CGs) are a special class of galaxy systems, containing generally 4 to 6 members within a region of just a few galaxy radii, and with low radial velocity dispersions (∼ 200 km s^-1, e.g. McConnachie et al. 2009). This particular combination implies that CGs have short crossing times (∼ 0.2 Gyr), providing an ideal scenario to study galaxy merging and the impact of environment on galaxy evolution. However, the effects of such an extreme environment and the short time-scales in which these systems would collapse are not completely understood, setting an ongoing debate about the nature of these systems. Numerical simulations have shown that member galaxies can eventually merge and so groups may disappear (Barnes 1985, 1989; Mamon 1987) in a time scale comparable to the observed crossing times (Hickson et al. 1992). Other simulations present an alternative picture, where the CG lifetime is much longer than the crossing time (Governato et al. 1991; Athanassoula et al. 1997), which would explain the relatively high number density of these systems in the observations. Nevertheless, there is a strong debate regarding the genuineness of these systems, since it has been suggested that most of them could be spurious line-of-sight alignments rather than truly bound systems (Mamon 1986).
E-mail: mchalela@oac.unc.edu.ar
In a widely accepted scenario, CGs are gravitationally bound but unstable systems. X-ray observations showing strong emission from the hot intragroup gas (Ponman et al. 1996) suggest that strong interactions between member galaxies could have provided a significant intragroup medium. Orbital decay due to dynamical friction should strip galaxies from their haloes, resulting in eventual mergers on short timescales and leading to a morphological evolution. Therefore, the fraction of early-type galaxies would pinpoint the evolutionary state of the groups as a whole. Although group members can merge, CGs may increase their number of members by acquiring them from the surroundings, extending their lifetime (Diaferio et al. 1994). Many studies support this scenario, showing that most of these galaxy systems reside within larger structures such as loose groups and rich clusters (e.g. Rood & Struble 1994; de Carvalho et al. 2005; Mendel et al. 2011).
arXiv:1702.00402v1 [astro-ph.GA] 1 Feb 2017
The Hickson CG (hereafter HCG; Hickson 1982) sample has been widely analysed, providing several studies of these systems at low redshift (z ∼ 0.03). High mass-to-light ratio determinations of 50 h Υ_⊙ and typical line-of-sight velocity dispersions of 200 km s^-1 (Hickson et al. 1992) suggest the presence of substantial amounts of dark matter. Furthermore, a recent study by Pompei & Iovino (2012), based on spectroscopically confirmed CGs at higher redshift (z ∼ 0.12), reports remarkably higher average values of M/L_B = 190 Υ_⊙ and σ_LOS = 273 km s^-1. The authors suggest these high values could be due to the proximity of large-scale structures, which may affect mass estimates. Despite differences with other authors, these results are consistent with predictions of the hierarchical model of structure formation. Results from hydrodynamical and N-body simulations show that the individual dark matter haloes of CG members merge first, creating a common massive halo that dominates the galaxy dynamics (Barnes 1984; Bode et al. 1993).
Until now, CG masses have been determined through a dynamical approach, either by measuring velocity dispersions or through X-ray observations. Ponman et al. (1996) showed that these systems slightly deviate from the known L_X - T relation for clusters (being fainter than predicted) but are still consistent with the L_X - σ_LOS relation. Gravitational lensing provides an alternative approach to measure the mass of galaxy systems. Mendes de Oliveira & Giraud (1994) analysed the possibility that a CG could act as a lensing system. Based on the HCGs, the authors quantified the lensing efficiency, concluding that these groups would be too weak to be detected as lenses since this sample is quite nearby. However, their calculations show that CGs at higher redshifts (z ∼ 0.1), such as those available in modern catalogues, could produce a detectable lensing signal.
Weak lensing techniques have been applied almost exclusively to clusters of galaxies, providing precise determinations consistent with values derived from dynamical analyses and X-ray observations (Hoekstra et al. 1998; Fischer 1999; Clowe et al. 2000). In recent years, several studies have analysed the lensing effects produced by groups of galaxies (e.g. George et al. 2012; Spinelli et al. 2012; Foëx et al. 2013, 2014); nevertheless, none of them has focused on CGs. In order to apply weak lensing techniques to low-mass galaxy systems, such as groups with masses ∼ 10^13 M_⊙, stacking techniques have proven to be a powerful tool to increase the signal-to-noise ratio and are thus suitable to derive the statistical properties of groups (e.g. Rykoff et al. 2008; Leauthaud et al. 2010; Foëx et al. 2014).
In this work we present the first statistical weak lensing analysis of a sample of CGs using stacking techniques. Our systems were extracted from the catalogue of CGs of McConnachie et al. (2009). Images for the analysis were obtained from Sloan Digital Sky Survey data (York et al. 2000). This survey has the largest imaging coverage available at present, providing a statistically significant database suitable for stacking techniques. These data have been successfully used in previous weak lensing studies to analyse the density profile and determine total masses of galaxies and galaxy systems (e.g., Mandelbaum et al. 2006; Sheldon et al. 2009; Clampitt & Jain 2016; Gonzalez et al. 2016). From our lensing analysis, we derive the average mass under the assumption of spherical symmetry. We probe three different definitions of the CG centre to identify which one best traces the dark matter halo. Furthermore, we compare our results with dynamical estimates and we analyse the observed lensing signal as a function of various CG properties. The paper is organized as follows. In Section 2 we describe the selection of groups used throughout the study. In Section 3 we briefly describe the weak lensing analysis, as this was extensively discussed in previous works, along with the formalism of miscentred density profiles. In Section 4 we present the obtained masses and finally, in Section 5, we summarise our results and compare them with other studies. We adopt, when necessary, a standard cosmological model with H_0 = 70 km s^-1 Mpc^-1, Ω_m = 0.3 and Ω_Λ = 0.7.
COMPACT GROUPS: SAMPLE DESCRIPTION AND SOURCE GALAXIES
McConnachie Compact Groups
There are several catalogues of compact groups in the literature. In general, the identification of these data sets follows Hickson's original selection criteria, or variations thereof designed to identify similar systems. Some are based on spectroscopic information, like Barton et al. (1996) and Allam & Tucker (2000), while others follow photometric criteria such as Hickson (1982), Prandoni et al. (1994) and Iovino (2002). McConnachie et al. (2009) applied Hickson-like criteria to SDSS photometry, requiring N(∆m = 3) ≥ 4, θ_N ≥ 3θ_G and µ ≤ 26.0 mag arcsec^-2, where N(∆m = 3) is the number of member galaxies within 3 magnitudes of the brightest galaxy, θ_G is the angular diameter of the smallest circle that encloses the centres of these galaxies, θ_N is the angular diameter of the largest concentric circle with no additional galaxy in this magnitude range or brighter, and µ is the effective surface brightness of the member galaxies (the total flux averaged over the circle of angular diameter θ_G). These criteria were applied in two ranges of limiting magnitude, resulting in two data sets, Catalogue A and Catalogue B. Catalogue A includes 2297 CGs identified from galaxies with r magnitude in the range 14.5 ≤ r ≤ 18.0. Catalogue B contains 74791 CGs with member galaxies in the wider magnitude range 14.5 ≤ r ≤ 21.0. An individual visual inspection of all groups in Catalogue A was carried out, minimizing the contamination of the sample due to photometric errors in the automatic SDSS pipelines. This procedure was not applied to Catalogue B given the large number of objects, with an estimated contamination by false sources of about 14%. Both catalogues provide detailed information about the CGs and their member galaxies, such as group surface brightness, radius and number of members, as well as each galaxy's r and g magnitudes and spectroscopic redshift (when available). Given that the Hickson criterion relies only on photometric information, not all CG members may have spectroscopic data.
Final sample and image data
For statistical reasons we extracted our sample from Catalogue B. Redshifts of all galaxies in this catalogue were updated with information from SDSS Data Release 12, and we recalculated CG redshifts as the mean value of the group members. The redshift distribution of the updated Catalogue B peaks at z ≈ 0.1, extending up to z ≈ 0.6.
Given that the lensing efficiency depends on the lens distance, and considering that the redshift distribution peaks at z ∼ 0.1, we discard groups with z < 0.06, which contribute little weight. We also discard systems with z > 0.2, since the density of background galaxies is insufficient to extract a reliable signal. We analyse only objects with µ ≤ 25 mag arcsec^-2, where µ is defined as the r-band surface brightness. This cut is made to increase the fraction of CGs without interlopers in the sample; members of brighter groups are more likely to be part of a real bound system and not a visual alignment in the sky. According to McConnachie et al. (2008), the sample purity improves from about 30%, for CGs with µ ≤ 26 mag arcsec^-2, to 43%, for groups with µ ≤ 25 mag arcsec^-2.
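For concreteness, the sample cuts described above amount to a simple boolean selection. A hypothetical sketch (the argument names are our own, not fields of the McConnachie et al. catalogue):

```python
import numpy as np

# Illustrative boolean selection reproducing the cuts stated in the text:
# 0.06 <= z <= 0.2 and mu <= 25 mag/arcsec^2.
def select_cgs(z, mu):
    z, mu = np.asarray(z), np.asarray(mu)
    return (z >= 0.06) & (z <= 0.2) & (mu <= 25.0)
```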
The final sample consists of 6257 CGs. In Figure 1 we show the distribution of CG properties for Catalogue B and for our final sample. It can be noticed that, with the mentioned cuts, we exclude the more extended (R ≳ 80 h_70^-1 kpc) CGs. Image data were obtained from the SDSS. This survey provides the largest photometric and spectroscopic public database available at present. It was constructed using a 2.5 m telescope at Apache Point Observatory in New Mexico. The tenth data release (SDSS-DR10, Ahn et al. 2014) covers 14555 square degrees of sky imaged in five bands (u, g, r, i and z) and has a limiting magnitude r = 22.2. For the lensing analysis we use images in the r and i bands, obtained from DR10 as it includes all prior SDSS imaging data. This allows us to select the frame with the best seeing conditions in the field of a given CG. Each SDSS image is 9.8′ × 13.5′, corresponding to 1489 × 2048 pixels, with a pixel size of 0.396″. The average seeing is about 1″ in the i-band.
Photometry, source classification and shape measurements
In this subsection we describe the details regarding detection, classification and shape measurements of background galaxies. The implemented pipeline has been successfully applied to SDSS data in order to estimate total masses of galaxy systems (Gonzalez et al. 2016).
We conduct a search of frames in order to analyse the most adequate images for our lensing analysis. Thus, for each CG we sequentially search and retrieve the best-centred i-band frames within 50 pixels of the borders and select the first frame in the search with seeing lower than 0.9″. If no frame satisfies this seeing condition, we choose the one with the lowest seeing, up to 1.3″. CGs in frames not satisfying seeing < 1.3″ are discarded. This results in 5568 CGs suitable for the analysis (i.e. ∼90% of the selected 6257 systems). After the i-band frame is selected, we also retrieve the corresponding r-band frame. Notice that, given the low lensing signal expected at large radii from the lens centre, it is not necessary to use a frame mosaic, but rather a single frame for each system.
To perform the detection and photometry of the sources we implement SExtractor (Bertin & Arnouts 1996) as described in Gonzalez et al. (2015), in a two-pass mode. The first run is made with a detection level of 5σ above the background to detect bright objects and estimate the seeing. A second run is made with a detection level of 1.5σ in dual mode to detect objects on the i frame, while photometric parameters are measured on both i and r-band frames.
Sources are classified into stars, galaxies and false detections according to their full width at half maximum (FWHM), stellarity index and position in the magnitude versus peak surface brightness (µ_max) plot, where these parameters are obtained from the SExtractor output. In Figure 2 we show an example of the source classification for a single frame with seeing = 1.0″. Objects that are more sharply peaked than the point spread function (PSF), thus with FWHM < seeing - 0.5 pixel, and with SExtractor FLAG parameter > 4, are considered false detections. As the light distribution of a point source scales with magnitude, objects on the magnitude-µ_max line ± 0.4 mag and with FWHM < seeing + 0.8 pixel are considered stars. The rest of the sources, with stellarity index < 0.8, are classified as galaxies.
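A hedged sketch of these classification rules follows, with the stellar locus in the magnitude-µ_max plane supplied by the caller as a fitted line. The thresholds are those quoted above, but the function and its interface are illustrative, not the pipeline's code:

```python
import numpy as np

# Hedged sketch of the star/galaxy/false-detection rules quoted in the
# text. `locus(mag)` must return the expected mu_max of a point source
# at that magnitude (the fitted stellar locus). Seeing and FWHM are in
# pixels, as in the text's cuts.
def classify(fwhm, flags, mag, mu_max, stellarity, seeing, locus):
    fwhm = np.asarray(fwhm, float)
    stellarity = np.asarray(stellarity, float)
    # default split by stellarity index
    labels = np.where(stellarity < 0.8, 'galaxy', 'star').astype(object)
    # stars: on the magnitude-mu_max locus (+/- 0.4 mag) and compact enough
    on_locus = np.abs(np.asarray(mu_max) - locus(np.asarray(mag))) <= 0.4
    labels[on_locus & (fwhm < seeing + 0.8)] = 'star'
    # false detections: sharper than the PSF and flagged by SExtractor
    labels[(fwhm < seeing - 0.5) & (np.asarray(flags) > 4)] = 'false'
    return labels
```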
For the shape measurements we use Im2Shape (Bridle et al. 2002), which computes the shape parameters by modelling the object as a sum of Gaussians convolved with a PSF, which is also modelled as a sum of Gaussians. For simplicity, we modelled the sources and the PSF using only one Gaussian each. The PSF map across the image is estimated from the shapes of stars, since they are intrinsically point-like objects. We only used objects with a measured ellipticity smaller than 0.2 to remove most of the remaining false detections and faint galaxies. Looking at the five nearest neighbours of each star, we also removed those that differ by more than 2σ from the local average shape. Finally, the local PSF at each galaxy position is linearly interpolated by averaging the shapes of the five nearest stars. Once the PSF is determined, we run Im2Shape on the galaxies to measure their intrinsic shape parameters. In order to test our PSF treatment, we apply the PSF correction to stars to check that it recovers point-like shapes. In Figure 3 we show the semi-major axis distribution of stars for two frames, before and after taking the PSF into account in the shape measurement. After the PSF correction, the semi-major axes are considerably smaller and the orientations are randomly distributed, consistent with point-like sources.
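The PSF interpolation step, averaging the ellipticities of the five nearest stars at each galaxy position, can be sketched as follows (a brute-force illustration of that averaging; a tree-based neighbour search would replace the full distance matrix in a real pipeline):

```python
import numpy as np

# Brute-force sketch of the PSF interpolation step: the local PSF
# ellipticity at each galaxy position is the average over the k nearest
# stars (k = 5 in the text). The interface is illustrative.
def local_psf(gal_xy, star_xy, star_e, k=5):
    gal_xy, star_xy = np.atleast_2d(gal_xy), np.atleast_2d(star_xy)
    d2 = ((gal_xy[:, None, :] - star_xy[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return np.asarray(star_e)[nearest].mean(axis=1)
```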
To perform the lensing analysis, background galaxies are selected as those with r magnitudes between m_P and 21 mag. m_P is defined as the faintest magnitude at which the probability that a galaxy is behind the group is higher than 0.7. This value is computed according to the redshift of each CG using a catalogue of photometric redshifts (see Gonzalez et al. 2015, for details about the m_P estimation). Discarding galaxies fainter than 21 mag ensures that we do not include faint galaxies with high uncertainties in their shape measurements. We also restrict the selection to objects with good pixel sampling by using only galaxies with FWHM > 5 pixels. In Figure 4 we show the colour-magnitude diagram of all selected galaxies with the photometric cuts used for the background galaxy selection. The average number of background galaxies obtained is 60 per frame, which corresponds to a density of ∼ 0.46 galaxies/arcmin², making a total of ∼ 2600 galaxies/arcmin² for the catalogue used in the stacking analysis.
Stacking technique
We briefly describe the lensing analysis and the stacking technique, as these were described in detail in Gonzalez et al. (2015, 2016). Gravitational lensing effects are characterized by an isotropic stretching called convergence, κ, and an anisotropic distortion called shear, γ. Using the second derivatives of the projected gravitational potential to express the shear and convergence, one can show that, for a lens with a circularly symmetric projected mass distribution, the tangential component of γ is related to the convergence through (Bartelmann 1995):

γ_T(r) = κ̄(< r) - κ̄(r),

where κ̄(< r) and κ̄(r) are the convergence averaged over the disk and over the circle of radius r, respectively. On the other hand, the cross component of the shear, γ_×, defined as the component tilted at π/4 relative to the tangential component, should be exactly zero.
Since the convergence is defined as the surface mass density Σ(r) normalized by the critical density Σ_crit, we can rewrite the previous equation by defining the density contrast, ∆Σ, which is redshift independent:

∆Σ(r) = Σ̄(< r) - Σ̄(r) = Σ_crit γ_T(r).

The tangential shear component is directly estimated as γ̃_T = ⟨e_T⟩, where the tangential ellipticity of background galaxies is averaged over annular bins. The averaged cross ellipticity component, in turn, should be zero and corresponds to the cross shear component.

Figure 3. PSF correction applied to stars of two frames: semi-major axes before (left panels) and after (right panels) the deconvolution. Notice that after taking into account the PSF correction, semi-major axis orientations are randomly distributed and with significantly smaller moduli.

For the composite lens, the density contrast is obtained as the weighted average of the tangential ellipticity of background galaxies:

⟨∆Σ(r)⟩ = [ Σ_{j=1}^{N_Lens} Σ_{i=1}^{N_Sources,j} ω_ij Σ_crit,j e_T,ij ] / [ Σ_{j=1}^{N_Lens} Σ_{i=1}^{N_Sources,j} ω_ij ],

where ω_ij is the weight associated with each background galaxy, as described in Gonzalez et al. (2016), N_Lens is the number of lensing systems and N_Sources,j the number of background galaxies located at a distance r ± δr from the j-th lens. Σ_crit,j is the critical density for all the sources of the j-th lens, defined as:

Σ_crit,j = c² / (4πG ⟨β_j⟩ D_OL,j),

where D_OL,j is the angular diameter distance from the observer to the j-th lens, G is the gravitational constant, c is the speed of light and β_j is the geometrical factor, defined as the average ratio between the angular diameter distance from source galaxy i to lensing system j and the angular diameter distance between the observer and the source, ⟨β_j⟩ = ⟨D_LS,j / D_OS,i⟩_i. Given the lack of redshift information for the individual background galaxies, it is not possible to estimate the geometrical factor β directly. Therefore, we estimated this value using the Coupon et al. (2009) catalogue of photometric redshifts. This catalogue is based on the public Deep Field 1 release of the Canada-France-Hawaii Telescope Legacy Survey, which is complete down to m_r = 26.
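The estimator above can be sketched in a few lines: project each source ellipticity onto the tangential direction about the lens centre, then take the weighted average scaled by Σ_crit. The following is an illustrative single-bin version (inputs are offsets from the lens centre, ellipticity components, weights and critical densities assumed given; this is not the pipeline code):

```python
import numpy as np

# Illustrative single-bin sketch of the stacked density-contrast
# estimator. dx, dy are source offsets from the lens centre; e1, e2 are
# the ellipticity components; w are the per-source weights.
def tangential_ellipticity(dx, dy, e1, e2):
    phi = np.arctan2(dy, dx)
    return -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))

def stacked_delta_sigma(e_t, w, sigma_crit):
    """Weighted average of Sigma_crit * e_T over one radial bin."""
    return np.sum(w * sigma_crit * e_t) / np.sum(w)
```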
We computed β_j after applying the same photometric cut used in the selection of background galaxies. This value is fairly insensitive to the detailed redshift distribution, as long as the mean redshift of background galaxies is considerably larger than the lens redshift (Meylan et al. 2006). This is the case for our sample, which has a mean lens redshift of 0.1, while the mean redshift of background galaxies is 0.32. We account for the contamination due to foreground galaxies by setting β(z_phot < z_lens) = 0, which compensates for the dilution of the shear signal by these unlensed galaxies. The average β_j value is ≈ 0.50.
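The geometrical factor can be sketched numerically. The snippet below assumes a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3 (our assumption, for illustration only) and sets β = 0 for foreground galaxies, as described above:

```python
import math

H0 = 70.0          # km/s/Mpc (assumed)
OM, OL = 0.3, 0.7  # assumed flat LCDM
C_KMS = 299792.458

def comoving_distance(z, n=1000):
    """Comoving distance in Mpc via trapezoidal integration of c/H(z)."""
    if z <= 0:
        return 0.0
    dz = z / n
    zs = [i * dz for i in range(n + 1)]
    integrand = [1.0 / math.sqrt(OM * (1 + zz) ** 3 + OL) for zz in zs]
    integral = dz * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return C_KMS / H0 * integral

def beta_factor(z_lens, z_sources):
    """<D_LS / D_OS>, setting beta = 0 for foreground (z_s <= z_lens)
    galaxies.  In a flat universe D_LS/D_OS reduces to a ratio of
    comoving distances, so the (1+z_s) factors cancel."""
    dc_l = comoving_distance(z_lens)
    vals = []
    for z_s in z_sources:
        if z_s <= z_lens:
            vals.append(0.0)  # foreground: contributes no shear
        else:
            dc_s = comoving_distance(z_s)
            vals.append((dc_s - dc_l) / dc_s)
    return sum(vals) / len(vals)
```

Feeding in the photometric-redshift distribution of the selected background sample then yields the average β used to build Σ_crit.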
The misidentification of faint group members as background galaxies weakens the lensing signal, since these members are not sheared. Although CGs have few members, numerical simulations suggest that fainter satellite galaxies could surround the group. To overcome this problem, ∆Σ(r) is multiplied by a factor 1 + f_cg(r), following Hoekstra (2007), where f_cg(r) is the fraction of group members that remains in the catalogue of background galaxies. To estimate f_cg(r) we fit a 1/r profile to the galaxy excess relative to the background level and correct the measured shear according to the distance to the lensing system centre.
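The boost correction can be illustrated with a toy implementation: fit f_cg(r) = A/r to the measured excess fraction of galaxies in radial bins by least squares, then multiply the profile by 1 + f_cg(r) (function and variable names are ours):

```python
def fit_one_over_r(radii, excess_fraction):
    """Least-squares fit of f(r) = A/r to the measured member-galaxy
    excess fraction in radial bins: A = sum(f_i/r_i) / sum(1/r_i^2)."""
    num = sum(f / r for r, f in zip(radii, excess_fraction))
    den = sum(1.0 / r ** 2 for r in radii)
    return num / den

def boosted(radii, delta_sigma, A):
    """Multiply the measured profile by the boost factor 1 + f_cg(r)
    (Hoekstra 2007) to undo the dilution by unsheared members."""
    return [ds * (1.0 + A / r) for r, ds in zip(radii, delta_sigma)]
```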
The statistical uncertainties associated with the estimator ∆Σ(r) are computed taking into account the noise due to the intrinsic ellipticity of the galaxies:

σ_∆Σ(r) = σ √( ∑_ij (ω_ij Σ_crit,j)² ) / ∑_ij ω_ij,

where σ is the dispersion of the intrinsic ellipticity distribution. We adopt σ = 0.32, the value considered by Clampitt & Jain (2016) for a sample of background galaxies measured on SDSS imaging data. These quantities allow us to compute the total signal-to-noise ratio (S/N) as follows:

(S/N)² = ∑_i ∆Σ(r_i)² / σ_∆Σ²(r_i),

where the sum runs over all the bins used to fit the profile.
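A sketch of the per-bin shape-noise error and the total S/N (the exact weighting form is our assumption, chosen to match the weighted estimator described above):

```python
import math

SIGMA_E = 0.32  # intrinsic ellipticity dispersion (Clampitt & Jain 2016)

def bin_error(pairs):
    """Shape-noise error of the weighted estimator in one radial bin:
    sigma_e * sqrt(sum (w * Sigma_crit)^2) / sum(w),
    for an iterable of (w, Sigma_crit) pairs."""
    num = math.sqrt(sum((w * sc) ** 2 for w, sc in pairs))
    den = sum(w for w, _ in pairs)
    return SIGMA_E * num / den

def total_snr(profile, errors):
    """Total S/N: quadrature sum of the per-bin S/N values."""
    return math.sqrt(sum((d / e) ** 2 for d, e in zip(profile, errors)))
```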
Miscentred density contrast profile
McConnachie et al. (2009) defines the centre of a CG as the centre of the smallest circle that contains the geometrical centre of its member galaxies. This position could be displaced from the true dark matter halo centre, leading to a flattening of the average density contrast profile and a mass underestimation.
If r_s is the projected offset of the adopted centre in the lens plane, the azimuthally averaged Σ(r) profile is given by the convolution (Yang et al. 2006):

Σ(r | r_s) = (1/2π) ∫₀^{2π} Σ( √(r² + r_s² − 2 r r_s cosθ) ) dθ.

Since the actual offsets are not known, we adopt the Johnston et al. (2007) approximation, in which a two-dimensional Gaussian distribution describes the miscentring:

P(r_s) = (r_s / σ_s²) exp(−r_s² / 2σ_s²),

where σ_s is the width of the distribution. This value has been obtained in previous analyses of groups and clusters of galaxies that consider the BCG (Brightest Cluster Galaxy) as the system centre. George et al. (2012) reported σ_s = 24.8 ± 12 kpc for X-ray selected groups. On the other hand, other works estimate higher values, ranging from 0.2 h^-1 Mpc to 0.42 h^-1 Mpc, being higher for massive clusters (Johnston et al. 2007; van Uitert et al. 2016). The discrepancy between these results could stem from the sample properties, since X-ray selected groups may contain more relaxed systems. Taking into account the above considerations and the fact that CGs are much smaller than clusters, with typical radii of ∼ 40 h_70^-1 kpc, we assume σ_s = 40 h_70^-1 kpc. The resulting projected surface mass density for the sample can be written as

Σ_s(r) = ∫₀^∞ P(r_s) Σ(r | r_s) dr_s,

and ∆Σ_s(r) can then be calculated with (2) considering that:

Σ̄_s(<r) = (2/r²) ∫₀^r r′ Σ_s(r′) dr′.
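The miscentring convolution can be evaluated numerically. The toy code below azimuthally averages a given Σ(r) around an offset centre and then integrates over the Rayleigh distribution of offsets (midpoint quadrature; a sketch, not the paper's implementation):

```python
import math

def sigma_offcentre(r, r_s, sigma_profile, n=256):
    """Azimuthal average of Sigma at radius r for a lens offset by r_s."""
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * (k + 0.5) / n
        rr = math.sqrt(r * r + r_s * r_s - 2 * r * r_s * math.cos(theta))
        total += sigma_profile(rr)
    return total / n

def sigma_smeared(r, sigma_s, sigma_profile, n_rs=200):
    """Convolve with the offset distribution
    P(r_s) = (r_s / sigma_s^2) exp(-r_s^2 / (2 sigma_s^2)),
    truncated at 5 sigma_s where P is negligible."""
    r_max = 5.0 * sigma_s
    dr = r_max / n_rs
    total = 0.0
    for k in range(n_rs):
        rs = (k + 0.5) * dr
        p = rs / sigma_s ** 2 * math.exp(-rs ** 2 / (2 * sigma_s ** 2))
        total += p * sigma_offcentre(r, rs, sigma_profile) * dr
    return total
```

For any profile peaked at the centre, the smeared value at r = 0 is lower than the true central value, which is the signal suppression discussed below.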
The effect of this miscentring on ∆Σ(r) is a suppression of the lensing signal at scales of the order of σ_s. In the outer region, however, the signal remains almost unaffected.
Fitting mass density profiles
The density contrast profile ∆Σ(r_i) is computed using non-overlapping concentric logarithmic annuli, to preserve the signal-to-noise ratio of the outer region, from r_in = 50 h_70^-1 kpc up to r_out ≈ 900 h_70^-1 kpc, where the signal weakens. We fit this profile using two models, the singular isothermal sphere (SIS) and the Navarro et al. (NFW, 1997) profile. The SIS profile describes a relaxed spherical distribution with a constant one-dimensional velocity dispersion, σ_V. In this model, the shear γ(θ) at an angular distance θ from the lens centre is directly related to σ_V by

γ(θ) = θ_E / (2θ),

where θ_E is the critical Einstein radius, defined as:

θ_E = 4π (σ_V² / c²) β.

From this model we can compute the characteristic mass M_200 ≡ M(<R_200), defined as the mass within the radius that encloses a mean density equal to 200 times the critical density of the universe, as in Leonard & King (2010):

M_200 = 2 σ_V³ / (√50 G H(z)).

The NFW is a radial profile constructed by fitting the average halo density profile in cold dark matter numerical simulations. It depends on two parameters, R_200 and a dimensionless concentration parameter, c_200, as follows:

ρ(r) = ρ_crit δ_c / [ (r/r_s)(1 + r/r_s)² ],

where r_s is the scale radius, r_s = R_200/c_200, and δ_c is the characteristic overdensity of the halo,

δ_c = (200/3) c_200³ / [ ln(1 + c_200) − c_200/(1 + c_200) ].

In order to fit this profile, we use the gravitational lensing expressions formulated by Wright & Brainerd (2000). There is a well-known degeneracy between the two parameters R_200 and c_200 that can be broken by combining weak and strong lensing information. Since we lack strong lensing information for CGs, we estimate the concentration parameter with the relation c_200(M_200, z) given by Duffy et al. (2011), using the M_200 value obtained in the SIS fit and the average redshift of CGs weighted by their number of background galaxies. We use this approximation considering that the derived NFW masses are not very sensitive to this parameter, given the uncertainties in the shear profile. Once the concentration is estimated, we fit the NFW profile with only one free parameter, R_200, and calculate M_200.
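As a numerical check of the SIS mass formula quoted above, the snippet below converts a velocity dispersion into M_200 and also evaluates the NFW characteristic overdensity (assuming a flat ΛCDM cosmology with H0 = 70 km/s/Mpc, our choice for illustration). For σ_V = 270 km/s at z ≈ 0.1 it yields ≈ 1.8 × 10^13 M_⊙, consistent with the value reported later in the paper:

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
MSUN = 1.989e30          # kg
MPC = 3.0857e22          # m
H0 = 70.0                # km/s/Mpc (assumed)
OM, OL = 0.3, 0.7        # assumed flat LCDM

def hubble(z):
    """H(z) in s^-1."""
    return H0 * math.sqrt(OM * (1 + z) ** 3 + OL) * 1e3 / MPC

def sis_m200(sigma_v_kms, z):
    """M200 of a singular isothermal sphere (Leonard & King 2010):
    M200 = 2 sigma_v^3 / (sqrt(50) G H(z)), in solar masses."""
    s = sigma_v_kms * 1e3
    return 2 * s ** 3 / (math.sqrt(50) * G * hubble(z)) / MSUN

def nfw_delta_c(c200):
    """Characteristic overdensity of an NFW halo."""
    return (200.0 / 3.0) * c200 ** 3 / (math.log(1 + c200)
                                        - c200 / (1 + c200))
```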
We derived the parameters of each mass model by performing a standard χ² minimization:

χ² = ∑_i [∆Σ(r_i) − ∆Σ(r_i, p)]² / σ_∆Σ²(r_i),

where the sum runs over the N radial bins of the profile and p is the fitted parameter (σ_V in the case of the SIS profile, and R_200 for the NFW model). Errors in the fitted parameters were computed according to the χ² dispersion. The optimal bin steps were chosen to minimize χ² values. Other lensing studies consider the average density contrast profile taking into account the contribution from neighboring mass concentrations by introducing an additional halo term (e.g., Johnston et al. 2007; Leauthaud et al. 2010; Oguri & Takada 2011). In order to test our results derived up to r_out = 900 h_70^-1 kpc, we have also fitted the profiles within a significantly smaller radius (r_out = 500 h_70^-1 kpc). We find that the derived CG density contrast profiles are in good agreement within uncertainties, showing the reliability of our results.
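A one-parameter χ² fit of the kind used here can be sketched as a simple grid search (toy version; in practice one would use a proper minimizer and derive errors from the χ² curvature):

```python
def chi2(model, radii, data, errors):
    """Standard chi^2 between data and a model callable model(r)."""
    return sum(((d - model(r)) / e) ** 2
               for r, d, e in zip(radii, data, errors))

def fit_amplitude(radii, data, errors, model_shape, grid):
    """Grid search for the amplitude p of model(r) = p * shape(r).
    Returns (best_p, best_chi2)."""
    best_p, best_c = None, float("inf")
    for p in grid:
        c = chi2(lambda r, p=p: p * model_shape(r), radii, data, errors)
        if c < best_c:
            best_p, best_c = p, c
    return best_p, best_c
```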
Systematic errors and control test
Here we present the results of a control test to check the reliability of our lensing analysis. We also discuss the uncertainties regarding the redshift estimation of background galaxies and the dispersion among stacked groups. We do not take into account errors due to background sky obscuration, given that this effect is negligible for SDSS (Simet & Mandelbaum 2014). The effects of miscentring are discussed in detail in Section 4.
In order to test the reliability of our measured lensing signal, we compute radial profiles using the background galaxy catalogue centred at random positions within the field of each frame. We carried out 200 realisations to look for any systematics in the density contrast profiles. In Figure 5 we show the averaged profiles together with the dispersion of the 200 realisations. The profiles obtained using the tangential and cross ellipticity components are both consistent with a null signal.
Given that the geometrical factor was estimated using a catalogue of photometric redshifts based on Deep Field 1, which covers 1 square degree, we estimate the impact of cosmic variance on ⟨β⟩. We divided this field into 25 non-overlapping areas of ∼ 144 arcmin², assuming the average CG redshift of 0.12, and computed ⟨β⟩ for each area. The uncertainty in this parameter was estimated from the dispersion of the 25 regions, obtaining a typical value of 10%, which implies a 15% error in the mass (the estimated ∆Σ scales as 1/β, and M_200 ∝ σ_V³ ∝ ∆Σ^{3/2}).
In order to test the stability of our results, we performed a bootstrap analysis by fitting both the SIS and NFW centred models to 1000 samples of identical size, randomly selected with replacement. The distributions of the best-fit parameters, σ_V and R_200, are approximately Gaussian, with dispersions lower than 10%.
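The bootstrap procedure is easily sketched: resample with replacement, re-evaluate the estimator, and take the dispersion of the resulting estimates (generic illustration; the estimator below is a stand-in for the profile fit):

```python
import random

def bootstrap_dispersion(values, estimator, n_boot=1000, seed=1):
    """Dispersion of `estimator` over n_boot resamples drawn with
    replacement, each of the same size as the original sample."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(values) for _ in values]
        estimates.append(estimator(resample))
    mean = sum(estimates) / n_boot
    var = sum((e - mean) ** 2 for e in estimates) / n_boot
    return var ** 0.5
```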
The uncertainties introduced by the issues discussed here are considerably lower than the errors obtained from the χ² dispersion. Nevertheless, they were considered in the final error estimation.
Centre definition analysis
In order to analyse the centre offsets with respect to those of the true dark matter haloes, we consider three different centre choices: the geometrical centre (GC, included in Catalogue B), the coordinates of the brightest member (BC, also in Catalogue B) and a geometrical centre weighted by luminosity (LC), i.e.:

r_L = ∑_i L_i r_i / ∑_i L_i,

where r_i = (α, δ) are the celestial coordinates of the group members and L_i their corresponding r-band luminosities. The L_i were computed using the CGs' redshifts and r-band magnitudes corrected for galactic extinction. We applied k-corrections to the magnitudes using the Chilingarian et al. (2010) public code calc_kcor.py. In Figure 6 we show the distributions of the centre differences, normalized and in physical units: |r_G − r_L| (where r_G are the coordinates of the geometrical centre), |r_G − r_B| (where r_B are the coordinates of the brightest galaxy member) and |r_B − r_L|. As can be noticed, the distribution for the brightest galaxy shows a peak at the group radius, given the characteristics of the CG identification algorithm. The measured density profiles for the three centre choices are shown in Figure 7. We include in this figure the fitted centred (SIS and NFW) and miscentred (SISs and NFWs) models, with their corresponding parameters and the reduced χ² values of each fit. Points and crosses represent the tangential and cross density contrast components averaged in annular bins, respectively. As can be seen, there are differences in the inner region of the derived profiles. The slope of the LC centred profile presents no signs of flattening inwards (r ≲ 100 h_70^-1 kpc), contrary to the GC and BC centred profiles. Nevertheless, according to χ²_red, both profiles are well described by a miscentred model as well as by a centred one. In general, the masses derived from centred and miscentred profiles mutually agree within the uncertainties, while larger differences are observed for SIS masses.
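The luminosity-weighted centre is a straightforward weighted mean of the member coordinates (a small-field approximation that ignores spherical geometry):

```python
def weighted_centre(positions, luminosities):
    """Luminosity-weighted centre sum(L_i * r_i) / sum(L_i), with
    positions given as (ra, dec) tuples in a small field."""
    lsum = sum(luminosities)
    ra = sum(L * p[0] for p, L in zip(positions, luminosities)) / lsum
    dec = sum(L * p[1] for p, L in zip(positions, luminosities)) / lsum
    return ra, dec
```

With equal luminosities this reduces to the geometrical centre; a dominant bright member pulls the centre towards itself, which is the intended behaviour.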
Given that the SIS profile is more sensitive to the centre definition, we compared the χ² values obtained for the centred and miscentred SIS fitted profiles (see Figure 7), and on this basis we choose the LC as the gravitational potential centre. In Table 1 we summarise our results, adding the errors discussed in subsection 3.4.
The model that best describes the LC centred profile is the centred SIS, yielding an average velocity dispersion of σ_V = 270 ± 40 km s^-1, which corresponds to M_200 = (17 ± 8) × 10^12 h_70^-1 M_⊙. Since the haloes of CGs are expected to have undergone significant contraction due to baryonic cooling and collapse, a SIS profile can be a suitable alternative to the NFW model for describing the mass distribution of these low-mass systems. It should be noted, however, that the estimated SIS and NFW masses agree within a ∼ 10% factor, as in previous works (Gonzalez et al. 2015, 2016). For the rest of the analysis we use these fitted parameters to compare them with dynamical estimates and to study variations across the total sample.
Dependence of the lensing signal on CGs physical properties
We studied how the average lensing mass of CGs varies with respect to three parameters: physical radius, R, surface brightness, µ, and average concentration index weighted by luminosity, C_L. We defined C_L for a group as:

C_L = ∑_i L_i c_i / ∑_i L_i,

where c_i is the individual concentration index of the member galaxies, defined as the ratio of the radii enclosing 90% and 50% of the Petrosian flux, i.e. c_i = r90/r50. For each parameter we divided our sample into two equal-sized subsamples according to the median value of the parameter distribution.

Figure 8: Columns: (1) radius, R; (2) surface brightness, µ; (3) weighted concentration index, C_L (see text for definition). Rows: (1) R subsamples; (2) µ subsamples; (3) C_L subsamples. All distributions were normalized to have the same area. The solid black lines correspond to the complete sample; the dashed and gray lines correspond to the higher and lower subsamples, respectively. Below each panel, we show the residuals between the complete sample distribution and each subsample.
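Computing C_L and performing the median split can be sketched as follows (toy code; names are ours):

```python
def concentration_index(r90, r50):
    """Petrosian concentration index c_i = r90 / r50."""
    return r90 / r50

def group_cl(luminosities, c_indices):
    """Luminosity-weighted concentration index of a group."""
    return (sum(L * c for L, c in zip(luminosities, c_indices))
            / sum(luminosities))

def median_split(groups, key):
    """Split a sample into low/high subsamples at the median of `key`."""
    vals = sorted(key(g) for g in groups)
    n = len(vals)
    med = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    low = [g for g in groups if key(g) <= med]
    high = [g for g in groups if key(g) > med]
    return low, high
```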
In Figure 8 we plot the distributions of these parameters together with those of their respective subsamples.
In Table 2 we summarize the results of this analysis. To test their significance, we performed a jackknife-style resampling by randomly choosing 1000 subsamples containing 50% of the groups. From this analysis we obtained Gaussian distributions for the fitted parameters, with dispersions of 30 km s^-1 and 0.08 Mpc for σ_V and R_200, respectively. We find no significant variation of the fitted parameters for the R and µ subsamples, as they agree well within the errors. However, for the C_L subsamples, the resulting parameters differ by ∼ 2σ in terms of the jackknife dispersion.
The concentration index is an indicator of galaxy morphology: late-type galaxies tend to have lower c_i values than early-type ones. Thus, groups with lower and higher C_L are expected to be dominated by late- and early-type galaxies, respectively. The detection of a higher lensing signal for groups with higher C_L values could be influenced by a lower fraction of interlopers. Given that CGs are expected to have a greater fraction of early-type members, by selecting CGs with low C_L we could be including more systems with interlopers and thus reducing the lensing signal. As a matter of fact, this cut in concentration modifies the distribution of surface brightness: higher C_L groups tend to be brighter than lower C_L groups (see Figure 8). As mentioned before, the fraction of interlopers declines as brighter groups are considered (McConnachie et al. 2008), and since the estimated parameters agree for both µ subsamples, this result suggests that a cut in C_L may be more efficient than a cut in µ in reducing the contamination of the CGs sample. This is also evident from the observed relations between C_L and N_z, and between C_L and N_z/N_members, where N_z is the number of member galaxies with available spectroscopy and N_members is the total number of members (we restrict the sample to groups with a maximum line-of-sight velocity difference between pairs of members of max(∆v) < 1000 km s^-1, a usual criterion to minimize interlopers; Hickson et al. 1992; McConnachie et al. 2009). As can be seen in Figure 9, groups with higher C_L tend to have higher N_z and N_z/N_members values, making them more reliable. In Figure 10 we show images for both subsamples together with their respective average density contrast profiles. By selecting CGs dominated by early-type galaxies, the systems tend to be more massive and show a more evolved structure.
Comparison with dynamical estimates
Given that the σ_V parameter derived from the weak lensing analysis can be directly compared with dynamical estimates, we analysed the redshift distribution of the CGs' member galaxies in order to estimate the dynamical velocity dispersion, σ_V,dyn. To this aim, we consider only CGs having 3 or more members with redshift information and, as before, discard those with max(∆v) > 1000 km s^-1. From our sample of 5568 CGs, only 61 satisfy these requirements. We find a median dynamical velocity dispersion σ_V,dyn = 224 ± 13 km s^-1, where the uncertainty corresponds to the 1σ standard deviation derived from 1000 bootstrap resamplings. This value is in good agreement with other dynamical estimates for CGs of 200 km s^-1 (Hickson et al. 1992).

Figure 10. Example of CGs present in both C_L subsamples, accompanied by their respective density contrast profiles. As in Figure 7, parameter errors consider only the fitting uncertainties and do not include those discussed in subsection 3.4. The four images on the left, and the profile below them, correspond to the sample with lower C_L values. The remaining figures on the right side correspond to systems with higher C_L values. Images were obtained from the SDSS Navigate Tool.

Since gravitational lensing allows the measurement of the mass distribution at large angular distances from the centre, one would expect CG lensing-inferred velocity dispersions to be higher than those derived from the dynamics of their cores. It should also be taken into account that the presence of dynamical friction among highly interacting group members could further reduce their velocity dispersion. Nevertheless, the weak lensing estimate of σ_V = 270 ± 40 km s^-1, although slightly higher, agrees with the dynamical determinations within 1σ.
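The dynamical estimate can be sketched as follows: compute line-of-sight velocities from member redshifts, apply the max(∆v) < 1000 km/s cut, and take σ = c · std(z) / (1 + ⟨z⟩) (a standard estimator; the paper may use a variant):

```python
import math

C_KMS = 299792.458

def max_pairwise_dv(redshifts):
    """Maximum line-of-sight velocity difference between member pairs;
    since v is monotonic in z, max - min over members suffices."""
    zbar = sum(redshifts) / len(redshifts)
    v = [C_KMS * (z - zbar) / (1 + zbar) for z in redshifts]
    return max(v) - min(v)

def velocity_dispersion(redshifts):
    """Line-of-sight velocity dispersion sigma = c * std(z) / (1 + <z>),
    using the unbiased (n-1) estimator; requires >= 3 members."""
    n = len(redshifts)
    if n < 3:
        raise ValueError("need at least 3 members with redshifts")
    zbar = sum(redshifts) / n
    var = sum((z - zbar) ** 2 for z in redshifts) / (n - 1)
    return C_KMS * math.sqrt(var) / (1 + zbar)
```

A group would be kept in the dynamical sample only if `max_pairwise_dv` is below 1000 km/s and at least 3 members have spectroscopic redshifts.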
Using the same criteria, we also estimated the dynamical velocity dispersion for both CL subsamples. For groups with higher CL values we find σ V,dyn = 238 ± 15 km s −1 , while for groups with lower CL we find σ V,dyn = 190 ± 22 km s −1 . These results show the same tendency as the aforementioned weak lensing estimates, reinforcing their interpretation.
SUMMARY
In this work we analysed a sample of Compact Groups from the McConnachie et al. (2009) Catalogue B using weak lensing stacking techniques. We derived the average density contrast profile of the composite system for three centre definitions: the geometrical centre, the brightest galaxy member and a luminosity-weighted centre. The measured profiles were fitted using centred and miscentred SIS and NFW density models. Luminosity-weighted centres were selected as the best description of the true dark matter halo centres. We also studied the dependence of the lensing signal on physical parameters of the CGs (radius, surface brightness and concentration index of the galaxy members). We did not observe a significant difference between the fitted parameters for subsamples defined according to group radius and surface brightness cuts. Nevertheless, CGs composed of galaxies with larger c_i show a stronger lensing signal. This could be explained by a lower number of interlopers, as well as by a trend to include more massive and evolved systems. We argue that considering groups with a higher luminosity-weighted concentration index could be an efficient way to increase the fraction of genuine CGs in the sample.
The resulting velocity dispersion derived from the SIS profile was compared to the dynamical estimate obtained from spectroscopic information of member galaxies. Although the lensing estimate is slightly higher, both results are in good agreement within uncertainties.
This work provides the first lensing analysis of a sample of CGs based on SDSS images. Our results, in agreement with other dynamical estimates, give hints on the mass distribution and its dependence on CG properties. In a forthcoming paper we will consider the mass-to-light ratio in detail, together with a comparison to simulations.
"year": 2017,
"sha1": "989025e00d09c6035c12068bce3a3d673daeefbf",
"oa_license": "CCBYNCSA",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/64104/2/CONICET_Digital_Nro.97658fd9-9be6-420d-a8fc-d8b2ccf9b8a1_A.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "989025e00d09c6035c12068bce3a3d673daeefbf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Cooperation in a Company: A Large-Scale Experiment
We analyze cooperation within a company setting in order to study the relationship between cooperative attitudes and financial as well as non-financial rewards. In total, 910 employees of a large software company participate in an incentivized online experiment. We observe high levels of cooperation and the typical conditional contribution patterns in a modified public goods game. When linking experiment and company record data, we observe that cooperative attitudes of employees do not pay off in terms of financial rewards within the company. Rather, cooperative employees receive non-financial benefits such as recognition or friendship as the main reward medium. In contrast to most studies in the experimental laboratory, sustained levels of cooperation in our company setting relate to non-financial values of cooperation rather than solely to financial incentives. JEL-Codes: C930, D230, H410, J310, J320, M520.
Introduction
Within organizations most processes and production steps entail voluntary cooperation among employees to realize optimal output. This is particularly true for teamwork, but also for other daily interactions like helping or knowledge sharing (see Gittell, 2000;Fehr, 2018), where cooperation requires solving a social dilemma: those involved are better off if everybody provided high levels of effort or lots of time, but due to the individual incentive to contribute the enforceable minimum, the joint product is provided on a suboptimal scale, or not at all.
Social dilemmas have been studied extensively in the experimental laboratory (for reviews see Ledyard, 1995;Chaudhuri, 2011;Fehr and Schurtenberger, 2018) as well as in the context of governing the commons in the field (e.g., Ostrom, 1990;Rustagi et al., 2010;Fehr and Leibbrandt, 2011;Gneezy et al., 2016). Interestingly, there is much less empirical evidence on cooperation within organizations, and, in particular, companies. 1 They often have to solve a general tradeoff between creating a cooperative culture in order to provide internal public goods on an efficient level and securing a competitive environment in order to induce innovation and to be able to select the best employees for promotion. Striking the balance, given the tension between cooperation and competition, is probably one of the most difficult management tasks (Fehr and Fischbacher, 2002).
A key aspect of cooperation within organizations is that employees and teams often interact repeatedly. While reputation concerns and informal peer sanctioning can reduce the free-rider problem, they are often unable to solve social dilemmas fully (e.g., Fischbacher and Gächter, 2010). 2 Thus, even in repeated interaction and with peer sanctioning mechanisms in place, it is essential for companies to establish a cooperative culture in order to sustain high levels of cooperation over time, avoiding the often observed decay in cooperation.
In this paper, we exploit a unique setting for studying how financial and non-financial reward instruments within organizations relate to the cooperative culture among employees. Understanding this relationship entails relevant implications for many organizations. Our analysis is based on incentivized online experiments with 910 employees of a large software company. 3 We link data on the level of the employee from these experiments, which measure cooperative attitudes in variants of the public goods game (see Fischbacher et al., 2001; Fischbacher and Gächter, 2010), with reward and context variables from company records. Our setup allows for three main contributions. Firstly, we can systematically provide evidence on the association between cooperative attitudes and financial rewards within the company, while being able to control for determinants of cooperation whose relevance is suggested by economic theory. Secondly, we can assess potential non-financial reasons for cooperation in a natural environment that have so far almost exclusively been studied in the experimental laboratory. Thirdly, our study fulfills a methodological purpose by assessing the external validity for a business context of one of the most frequently applied laboratory measures of cooperation. 4

1 Notable exceptions are Charness and Villeval (2009) and Burks et al. (2009).
2 Among other reasons, decreasing cooperation levels in repeated interaction result from contractual incompleteness of cooperative behavior (e.g., Holmstrom, 1982; Itoh, 1991), the existence of imperfectly conditional cooperators (see Fischbacher and Gächter, 2010; Ambrus and Pathak, 2011) or imperfect sanctioning mechanisms (Kandel and Lazear, 1992; Fehr and Rockenbach, 2003; Houser et al., 2008; Nikiforakis, 2008).
3 Following the typology of Harrison and List (2004), our experiments can be referred to as an "artefactual field experiment". Alternatively, one could call it a "lab-in-the-field experiment" (Gneezy and Imas, 2017).
4 There is an active methodological discussion about the generalizability/external validity of standard laboratory measures (see Levitt and List, 2007; Falk and Heckman, 2009; Burks et al., 2016; Gneezy and Imas, 2017).

With respect to our first contribution, we find that cooperativeness of employees does not lead to higher individual financial rewards. In stark contrast, our estimates show that within our study period from 2016 to 2018, cooperative employees received on average 29% lower annual wage increases and 15% lower financial award payments than their more selfish colleagues. Being cooperative is not rewarded but rather punished in terms of remuneration.

Regarding our second contribution, we observe that a large fraction of employees exhibits comparatively high levels of cooperation, despite the financial disincentives and the existence of selfish employees. Hence, in contrast to laboratory experiments, in which selfish players typically cooperate only opportunistically in repeated interaction, leading to a quick decay of contributions over time, we observe a potentially stable pattern of cooperation in the company. Consequently, behavior in the field experiment and observational data from the company together suggest that there must be substantial non-financial rewards of cooperation for the cooperators. Otherwise, cooperation should break down over time. While our online experiment features a one-shot interaction and thus cannot observe contribution dynamics, the high share of perfect conditional cooperators and the substantial number of unconditional cooperators provide the basis for stable cooperation.

We find supportive evidence for this interpretation when linking experimental data with record data from a non-financial recognition tool that employees can access via the company's intranet. Cooperative employees receive 51% more recognition awards from their colleagues. In a similar vein, we find that cooperative employees, and teams comprised of a larger share of cooperative employees, report stronger team cohesion and higher work satisfaction in our post-experimental survey, which is again a sign of non-financial reward components of a cooperative environment.
Regarding our third contribution, we document that cooperative employees send more than twice as many recognition awards as selfish employees. This correlation corroborates the external validity of the cooperative attitudes measured in our experiments, as sending an award requires some individual cost to write a justification and induces a positive externality on a co-worker.
Overall, our data is indicative of the idea that the company positively affects levels of cooperation by supplying non-financial compensating differentials to cooperative employees. This is our preferred interpretation of the data, because it provides a joint mechanism for (i) high levels of cooperation, (ii) a negative nexus between financial rewards and cooperativeness, and (iii) a positive nexus between non-financial rewards and cooperativeness. We also investigate three other mechanisms that are likely to be present in our setting, but that are unlikely to be the sole driver of our three findings: an omitted variable bias related to performance or skills that are specific to cooperative attitudes (Bowles et al., 2001; Barr et al., 2009; Leibbrandt, 2012), selection based on cooperative attitudes (Falk and Heckman, 2009; Dohmen and Falk, 2011), and context-dependent preferences (Bowles, 1998; Levitt and List, 2007; Cohn et al., 2014).
The remainder of this paper is structured as follows. We first relate our study to the literature on artefactual field experiments to study cooperation in the field. In Section 3, we outline the company setting at hand. In Section 4, we describe our experimental setup and the data for our analysis. Then, we report the correlation between cooperative attitudes and relevant outcome variables from the company context in Section 5. Section 6 discusses the main findings and potential underlying mechanisms. Section 7 concludes the paper.
Related Literature
It is impossible to do justice to the large experimental literature on cooperation, even if one restricts attention to (artefactual) field experiments and lab experiments predicting prosocial behavior outside the laboratory (for a survey see Galizzi and Navarro-Martínez, 2019). Examples of field experiments on cooperation are List and Lucking-Reiley (2002), Cardenas (2003), Frey and Meier (2004), Alpizar et al. (2008), Benz and Meier (2008), Burks et al. (2009), Carpenter and Seki (2011), Croson and Shang (2008), Rustagi et al. (2010), Fehr and Leibbrandt (2011), Voors et al. (2011, 2012), Stoop et al. (2012), or Gneezy et al. (2014, 2016). People have studied charitable giving, fishermen, truck drivers, visitors of national parks and many more. However, there is very little evidence on company settings. Regarding our main research interest, the financial and non-financial rewards of cooperative attitudes of employees in a company, empirical evidence from the field is particularly scarce. This is despite an abundance of case studies and anecdotal evidence on firms that must balance cooperative and competitive elements in their incentive schemes or that must foster cooperation within teams to be successful (e.g., Dirks, 1999; Dirks and Ferrin, 2001; Gratton, 2009, 2011; Grant, 2013). Beersma et al. (2003) discuss the relevant management literature and provide a study on the cooperation/competition tradeoff, including personality differences and task characteristics.
In the following, we provide an upshot of the existing literature on our three main contributions. Our first contribution is on the association between cooperative attitudes and financial rewards within the company. Burks et al. (2009) use a naturally occurring social dilemma among bicycle messengers in Switzerland and the United States. Their focus is on the selection of messengers into companies based on incentive schemes. Workers in companies that pay for performance show less cooperation than workers in companies that pay fixed hourly wages or that are members of cooperatives.
There is more closely related literature in domains other than the standard workplace (or using paradigms other than the standard public goods game) that can still inform our setup. Leibbrandt (2012) compares the behavior of professional shrimp sellers in a laboratory public goods game with natural market outcomes. He finds a positive relationship between cooperativeness and market success, as measured by achieving higher prices for shrimps and establishing longer lasting trade relations. He argues that the detected correlation is driven by cooperative employees being able to signal trustworthiness. Similarly, Essl et al. (2018) study the trustworthiness of sales employees of an Austrian retail chain using a modified trust game and relate behavior in the game to individual sales performance data. The authors find that higher trustworthiness is associated with lower sales per day, but with higher revenue per customer. Cardenas and Carpenter (2005) look at experimental measures of cooperation and link them to household expenditures in Vietnam and Thailand, showing that more cooperative individuals are better off. Likewise, Barr and Serneels (2009) provide evidence that experimentally elicited trustworthiness is positively related to wages of manufacturing workers in Ghana.
Regarding the relationship between cooperative attitudes and non-financial rewards, our second contribution, there again exists only limited evidence. Ruff and Fehr (2014) summarize evidence from laboratory fMRI studies that indicate "[...] an experienced value of cooperation per se that might bias individuals to display cooperative behavior" (p. 557). In the field, Hamilton et al. (2003) show that workers at a garment plant voluntarily select into a team-based work organization despite financial losses as compared to performing sewing tasks individually. They argue that such selection behavior is likely driven by non-financial reasons such as hedonic benefits from team work. In a similar vein, Bandiera et al. (2005, 2011, 2013) find that UK fruit pickers increase efforts or forgo financial benefits due to social ties to co-workers.
Our third contribution relates to the external validity of experimentally elicited cooperative attitudes. While we know quite a lot about the external validity of different measures of uncertainty preferences (risk and ambiguity) and time preferences, we know much less about the external validity of standard measures of cooperative attitudes. Existing studies that provide evidence of the external validity of the standard linear public goods game, i.e., the voluntary contribution mechanism (VCM), are mainly linked to the problem of the commons. Rustagi et al. (2010) elicit cooperative attitudes of members of 49 forest user groups in Ethiopia in an artefactual field experiment setting. They link cooperative attitudes to natural forest commons outcomes and find that groups comprising a larger number of conditional cooperators do a better job in managing the forest commons. In a similar vein, Gneezy et al. (2016) study Brazilian fishermen who are organized differently in different places regarding the need for team work. Fishermen at the sea who are forced to work in teams cooperate and trust more than their counterparts at lakes who mostly work individually (see also, for instance, Carpenter and Seki, 2011; Fehr and Leibbrandt, 2011; Stoop et al., 2012; Voors et al., 2011, 2012). Evidence for contexts apart from common pool management is provided by Burks et al. (2016), who conduct prisoner's dilemma experiments with truck drivers. More cooperative truck drivers are found to send satellite uplink messages from their trucks more frequently (messages are costly but benefit an anonymous colleague). Englmaier and Gebhardt (2016) perform a lab-field comparison by inviting student participants to a laboratory public goods game and to a natural work setting (registering books in a library database) in which incentives condition on team outcomes.
From the positive correlation of behavior in the laboratory and in the natural work task, the authors conclude that the laboratory public goods game captures important aspects of structurally equivalent situations outside the laboratory. 6 Our study adds to the existing literature in economics and management the novel elements of a work-team and company setting and the link between financial as well as non-financial outcome data and behavioral data from incentivized experiments.
The Company Setting
We conduct our study in partnership with a large, multinational software company. About 40% of the employees work as software developers, 40% work in the sales and consulting area, and the remaining 20% work in more general service areas like Human Resources, Accounting & Finance or Marketing. Several institutional features are important to understand how the company and its reward systems operate.
Business Models. Most individual and teamwork tasks in the company take place in either a customer business model or a cloud business model. The customer model uses servers that are on the premises of the client and that are serviced by company employees, whereas the cloud business model uses internet cloud solutions that concurrently apply to many clients. According to our discussions with managers of the company before conducting the study, the latter model requires more cooperation among workers at the software producer than the former; in other words, it entails a production function with much more pronounced complementarities (for instance, between software development and consulting). 7 Interestingly, due to the cloud model connecting several software products on an interface, sales employees also have sales bags comprising items that, if sold, positively affect the performance of their sales team, i.e., other team members.

Pay Schemes. Employees are enrolled in one of two co-existing pay schemes: either the company performance or the individual performance pay scheme. Both schemes involve a fixed component and a variable pay component. They differ in how the variable pay component is determined. In the company performance pay scheme, employees receive bonus payments that are determined by the overall company performance. Under the individual performance pay scheme, bonuses depend on individual performance assessments. Enrollment in either of these schemes is tied to job roles such that selection is only possible via job choice. While all developers and employees in the service areas work under the company performance pay, most sales employees work under individual performance pay. Consultants are equally likely to work in either of the schemes, depending on whether they are in-house or outgoing consultants.

6 As in our study, Charness and Villeval (2009) deploy a linear public goods game in actual companies, but they focus on the difference in behavior of junior and senior employees. The main finding is that senior employees are more cooperative than junior employees. Von Bieberstein et al. (2020) analyze student performance in math exams and partner work assignments at university using public goods game measures; however, they find no correlation (but free-riders perform better in the exam).

7 For validation, we ask all participants in an online survey how important cooperation is to successfully fulfill their individual and teamwork tasks, on a standard Likert scale. We detect a strong correlation between the business models and responses to the survey question (Spearman correlation: −0.214, p < 0.001). While 42% of employees state that teamwork is of high importance in the cloud business model, only 24% do so in the customer business model (t-test, p < 0.001).
Wages. The employees' target wage consists of a base wage and the bonus conditional on full target achievement (either the company or the individual target). This means that the target wage does not necessarily correspond to the actual wage paid. However, analyses by the company show that the target wage is a good proxy for actual wages; hence, we refer to them as wages. 8 Cross-sectional variation in wages is mainly due to jobs at different career levels or in different functions. Variations in wage levels over time reflect job trajectories. For example, this includes promotions or other internal job changes that relate to a different pay mix. In addition, managers have a budget for merit increases paid to their employees, to be decided upon on a yearly basis.

Financial Awards. Another important reward instrument of managers is the conferral of financial awards. At the end of a year, every manager can allocate financial awards that consist of shares of the company among the employees in his/her team. An award conferred in a particular year is paid out in three tranches in the subsequent years. The budget is fixed for each year for the whole company and on the team level. The award guidelines handed out to the managers specify the idea of a financial award as recognizing employees who are important for the success of the company and as an instrument for employee retention. The guidelines apply to all departments, job positions and both pay schemes.

Recognition Awards. Furthermore, there exists a non-financial recognition system that every employee can easily access via the company's intranet. The program is an institutionalized way to thank a colleague for several desired behaviors including, for example, cooperation, promise keeping, or embracing diversity. If an employee receives an award, he/she is notified via e-mail. The e-mail prominently shows a slogan such as "Thank you for being cooperative!" (or the relevant other award justification).
It also contains a message from the sending employee and his/her name. The receiving employee's manager can see every award and the total number of awards received for each team member. The role of the manager is also to prevent employees from sending awards back and forth. There are no direct financial consequences related to a recognition award, neither for the sending nor for the receiving employee. However, sending an award requires some effort as it must be justified in a text of at least 150 characters.
Experimental Setup and Data
Our analyses are based on data from three different sources. First, we collect data from an incentivized online experiment. Second, in a subsequent survey module, we elicit a variety of control variables such as socio-demographic characteristics or behavioral measures that relate to cooperation. The gathered data is then merged with reward and context variables from the company records on the individual level. An overview of all collected variables can be found in Appendix A.
Behavioral Measure of Cooperative Attitudes
The first part is a public goods experiment according to the "ABC-framework of cooperation" (Gächter et al., 2017). 9 It uses the design of Fischbacher et al. (2001), including the elicitation of beliefs. In a VCM setting, we elicit an unconditional contribution to a public good, a full contribution schedule contingent on the average contributions of the other group members, and subjects' beliefs about others' average unconditional contributions. Participants are randomly assigned to groups of three. Every participant knows that all other participants are randomly selected employees of the company. Each group member receives an initial endowment of 10 Tokens to be allocated to a private account or to be contributed to a public account. One Token equals €1. The invested amount c_i ∈ {0, 1, ..., 10} is referred to as the unconditional contribution. The sum of all contributions to the public good is multiplied by 1.5 in our case, and divided equally among all n = 3 group members. This leads to the following payoff function for subject i:

π_i = 10 − c_i + γ · Σ_{j=1}^{n} c_j,

which is linear in the public good contribution and where c_j denotes the contribution of group member j. The marginal per capita return (MPCR) from investing in the public good is 1/n < γ = 0.5 < 1. From an individual perspective, free-riding (i.e., c_i = 0) is a dominant strategy. Since the sum of marginal returns is larger than 1, however, contributing the entire endowment (i.e., c_i = 10) is the optimal choice from a collective perspective. The decision is made only once and anonymously. Thus, there are no incentives and no possibilities to build a reputation.
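As an illustration of this payoff rule, the dilemma structure can be verified with a minimal Python sketch (function and variable names are ours, not the study's):

```python
def payoff(i, contributions, endowment=10, multiplier=1.5):
    """Linear VCM payoff: keep whatever is not contributed, plus an
    equal share of the multiplied public account (MPCR = 1.5/3 = 0.5)."""
    n = len(contributions)
    public_share = multiplier * sum(contributions) / n
    return endowment - contributions[i] + public_share

# Full contribution by everyone beats universal free-riding...
assert payoff(0, [10, 10, 10]) == 15.0
assert payoff(0, [0, 0, 0]) == 10.0
# ...but unilateral free-riding pays even more: the dominant strategy.
assert payoff(0, [0, 10, 10]) == 20.0
```

The assertions reproduce the logic in the text: collective full contribution is efficient, yet each individual gains by contributing zero.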
Participants do not receive any feedback after indicating an unconditional contribution. Subsequently, participants are asked to fill in a contribution table indicating their contribution for each possible average contribution of the other group members, rounded up to integers. The conditional contributions from the contribution table allow us to classify three distinct cooperative attitudes. We depart from the existing literature for expositional reasons; the interpretation of our analysis is simplified when using the three categories. Fischbacher et al. (2001) and many follow-up papers classify free riders (zero contributions, regardless of the average contributions of others), conditional cooperators (increasing contributions with increasing average contributions of others), and hump-shaped contributors (increasing contributions with increasing average contributions of others up to a certain contribution level, and above it decreasing contributions with increasing average contributions of others). Since we additionally observe a significant number of perfect conditional contributors (those who match the average contributions of others perfectly) and even some unconditional full contributors (contributing the maximum amount of ten Tokens regardless of the average contribution of others), we use the following classification:

• Net-Taker: We classify an employee whose average conditional contribution is smaller than five Tokens as a Net-Taker. This means that the employee, on average, free-rides (at least partially) on the contributions of others to the public good (mainly free riders and conditional cooperators with a self-serving bias 10 ).
• Net-Giver: An employee whose average conditional contribution is more than five Tokens is defined as a Net-Giver. The employee, on average, contributes more than the two others (mainly conditional cooperators with an other-serving bias and unconditional full contributors).
• Matcher: An employee who, on average, exactly matches the average contribution of the two other members is considered a Matcher (almost equivalent to perfect conditional cooperators). 11

To make both the unconditional and the conditional contributions incentive-compatible, we use the mechanisms described in Fischbacher et al. (2001). That is, for one randomly selected subject the conditional contributions are payoff-relevant, whereas for the two remaining subjects the unconditional contributions are used to determine the average contribution of the other group members. We also elicit expected contributions of others in an incentivized way. Following Gächter and Renner (2010), participants are asked to guess the average unconditional contribution of the other group members and receive €5 if they are correct; otherwise they receive €0.
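The three-way classification above can be sketched in a few lines of Python (illustrative code of ours, not the study's implementation; we assume a schedule is the list of 11 conditional contributions, one for each possible average contribution 0 to 10 of the others):

```python
def classify(schedule):
    """Classify a conditional contribution schedule by its average:
    below five Tokens -> Net-Taker, above five -> Net-Giver,
    exactly five -> Matcher."""
    avg = sum(schedule) / len(schedule)
    if avg < 5:
        return "Net-Taker"   # on average free-rides on the others
    if avg > 5:
        return "Net-Giver"   # on average contributes more than the others
    return "Matcher"         # on average exactly matches the others

free_rider = [0] * 11                    # always contributes nothing
perfect_conditional = list(range(11))    # matches others one-for-one
full_contributor = [10] * 11             # always contributes everything

assert classify(free_rider) == "Net-Taker"
assert classify(perfect_conditional) == "Matcher"
assert classify(full_contributor) == "Net-Giver"
```

Note that a perfect conditional cooperator averages exactly five Tokens over the 11 schedule entries, which is why Matchers are almost equivalent to perfect conditional cooperators.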
Survey and Company Variables
After the incentivized parts, we elicit additional variables that are relevant for the analysis of the determinants and the context of cooperation, without using monetary incentives. Importantly, we ask whether an employee's individual and teamwork tasks are mainly related to the customer or the cloud business model. In addition, we capture personality traits (a short form of the Big Five; Rammstedt et al., 2013) and survey measures of related social preference concepts like negative/positive reciprocity (Falk et al., 2018) and trust (Anderson et al., 2004). We also elicit a measure of individual competitive attitude (i.e., the competitiveness index as introduced by Newby and Klein, 2014) and basic socio-demographic variables (such as nationality, education, and number of kids and friends). Furthermore, variables with respect to perceived team cohesion (Ashforth and Mael, 1989), team stability, and work-related stress (Schulz and Schlotz, 1999) are elicited. 12

We combine the elicited data with a rich data set from the company. On the employee level, this includes age, gender, seniority (years employed at the company), career levels, and personal leadership responsibilities. Using a work team identifier, we can also infer information about team compositions (for example, with respect to gender and age). Regarding reward institutions, we have individual-level information on the employee's pay scheme, his/her wage level, and the value of financial award payments. Observing employees' wage levels over time allows us to calculate annual wage increases. We additionally observe the number of recognition awards received and sent for each employee.
Procedures
We conducted the described experimental and survey modules online. 13 Eligible employees received a personalized participation link. Every respondent knew that he/she could complete the experiment within a two-week period. There were two roll-out phases with different employees, the first in November 2017 and the second in February 2019. Employees could participate during regular work hours. The total completion of the experiment and the survey took about 30 minutes and could be interrupted at any time.
The online experiment did not require participants to make decisions simultaneously. Participants were informed that groups were assembled randomly ex post. Since nobody received feedback during the experiment, such a procedure is equivalent to simultaneously entered decisions. Participants could use their personal ID code to log in after the roll-out phase had ended to get feedback on the results. We asked participants to perform the online experiment individually. The random and anonymous allocation to groups made sure that coalition formation among group members when filling in the online experiment was impossible. 14 Before a participant could decide about the public good contributions, he/she needed to answer comprehension questions on the game. If an answer was wrong, the participant was notified and shown the correct answer, to be re-entered in the respective input box. We set up a telephone hotline and an e-mail address for potential questions during the experiment. We received very few calls and messages.
In the first roll-out phase in 2017, we implemented an unexpected donation option at the end of the experiment as a control for social desirability concerns. In 2019, we included an additional public goods game (administered in a within-subject fashion) that varied the MPCR (either very high, 1.2, or very low, 0.3) to check whether participants would react to changes in the social dilemma characteristics. Notice that an MPCR of 1.2 makes it individually optimal to contribute, whereas an MPCR of 0.3 makes it both individually and socially optimal not to contribute.
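The comparative statics behind these two MPCR treatments follow from the linearity of the payoff function: one contributed token returns γ privately (so the private net return is γ − 1) and nγ to the group (joint net return nγ − 1). A small sketch (our own illustrative code) makes the all-or-nothing best responses explicit:

```python
def individually_optimal_contribution(mpcr, endowment=10):
    """With a linear payoff, the private net return of one contributed
    token is mpcr - 1, so the best response is all-or-nothing."""
    return endowment if mpcr > 1 else 0

def socially_optimal_contribution(mpcr, n=3, endowment=10):
    """The group's joint net return of one contributed token is
    n * mpcr - 1."""
    return endowment if n * mpcr > 1 else 0

# Baseline game (MPCR 0.5): a genuine social dilemma.
assert (individually_optimal_contribution(0.5),
        socially_optimal_contribution(0.5)) == (0, 10)
# MPCR 1.2: contributing is individually optimal, no dilemma.
assert individually_optimal_contribution(1.2) == 10
# MPCR 0.3: contributing is neither individually nor socially optimal.
assert (individually_optimal_contribution(0.3),
        socially_optimal_contribution(0.3)) == (0, 0)
```

This is why the two treatment MPCRs serve as a comprehension check: a payoff-maximizing participant should switch to full contribution at 1.2 and to zero at 0.3.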
Individual data from the company was de-identified before linking it to our elicited data. The data collection and storage were facilitated through Qualtrics. There exists a data protection agreement between the company and Qualtrics, and a research agreement (including data protection) between the company and the research team. Data protection units at the company, at the University of Munich and at the University of Heidelberg supervised the study. The company did not receive individual-level data, and all participants were informed about the full pseudonymization of their responses before the experiment. The data protection unit at the company was only involved in determining the exact procedures, not in handling the linked data. We made sure that the pseudonymized final data set was only stored on the computers of the researchers involved in this project, behind university firewalls.

13 Our study represents one of many studies and surveys that employees fill out at the company. The company even has its own survey team. Hence, asking employees to participate in an online study while being at their workplaces is nothing unusual, although the incentivized experimental part was of course somewhat special to most employees.

14 It was extremely unlikely that (matched) participants would be sitting in a shared office. Analyses of the participants' start and end times suggest that there was no communication or coordination among employees of a work team (for the analysis see Appendix C).
Employees were aware of the data protection procedures and provided informed consent before participating in the study. Ethics approval by the University of Munich was granted in September 2017. The study was pre-registered at the AEA registry (AEARCTR-0002596). The respective pre-analysis plan was slightly updated and re-submitted before the second round of experiments took place in 2019.
Sample and Selection
We invited 2,799 employees from 371 work teams to participate in our study. 15 This includes 1,297 employees invited in 2017 and 1,502 employees invited in 2019. We randomly selected teams that had between 8 and 20 team members, of which more than 70% were based in the German-speaking area.
Overall, 910 employees from 299 teams participated. 16 This corresponds to a participation rate of about 32.5%. The characteristics of the participating employees are mostly representative of the employee population at the company (conditional on the invitation requirements), as can be seen in Table 1. There does not seem to be any selection bias into the experiment based on observable characteristics. However, compared to non-participating employees, participating employees less frequently work under the individual performance pay scheme (26% versus 22%). Almost all participating employees are placed in the German-speaking area (99% versus 98% in the invited sample). We did not receive wage data for 57 participating employees. These data were either secret, from working students or external employees, or were not available to the company's German human resources department that we worked with to retrieve the data from the records.

More generally, one might expect sample selection according to the unobserved level of cooperativeness of employees. Cooperative employees could more frequently volunteer to participate in surveys/experiments, which could bias our results and interpretations. First, this is not much of a concern, given that we are not interested in the level of cooperation but in the link between cooperation and company outcomes. Second, as a robustness check, we show in Section 5.2 that, given a high correlation between, for example, recognition award sending and cooperativeness, we do not find any evidence for systematic selection into our experiments based on cooperative attitudes. The significant correlates of cooperativeness are statistically indistinguishable between participating and non-participating employees.
Cooperative Attitudes
About 24% (N=201) of the employees can be classified as Net-Takers, i.e., they contribute on average less than five Tokens (mean of 2.51 Tokens) in the conditional contribution decisions. We classify 35% (N=345) as Net-Givers, who contribute on average more than five Tokens (mean of 7.23 Tokens). Around 41% (N=364) of the employees exhibit a contribution pattern best described by Matcher behavior, which implies an average contribution of exactly five Tokens. 17

Table 2 presents an overview of the collected public goods measures for each of the three cooperative types. Overall, the unconditional contribution decisions reveal very high cooperation levels (79% of the endowment), despite the existence of Net-Takers. Net-Takers contribute significantly less unconditionally than Matchers and Net-Givers (5.44 versus 8.41 and 8.77, respectively; Mann-Whitney-U (MWU) tests, both p-values < 0.001). They also expect lower unconditional contributions from their colleagues (4.54 versus 7.30 and 7.32, respectively; MWU tests, both p-values < 0.001). Differences between Matchers and Net-Givers are not statistically significant (MWU tests; unconditional contributions, p = 0.876; beliefs, p = 0.436). Following Fischbacher and Gächter (2010), we estimate each employee's slope parameter from a linear regression of the conditional contributions on the others' average contributions in the contribution schedule. 18 The average slope parameter is 0.71, which reflects a tendency to condition own contributions on others' contributions. The Net-Takers' average slope parameter equals 0.46 and is lower than the parameters of the other two attitude types (MWU tests, both p-values < 0.001). While the Matchers' slope parameter is almost 1 (mean of 0.95), reflecting that most of these employees are perfectly conditionally cooperative, the Net-Givers have a slope parameter of 0.59, which lies between the other two attitude types (MWU tests, all p-values < 0.001).
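The slope parameter can be computed with a short self-contained sketch (our own code; it implements a plain OLS slope of the 11 conditional contributions on the others' possible average contributions 0 to 10):

```python
def schedule_slope(schedule):
    """OLS slope of conditional contributions on the others' average
    contribution (0..10), in the spirit of Fischbacher and Gaechter (2010)."""
    xs = range(len(schedule))
    n = len(schedule)
    x_bar = sum(xs) / n
    y_bar = sum(schedule) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, schedule))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

assert schedule_slope(list(range(11))) == 1.0  # perfect conditional cooperator
assert schedule_slope([0] * 11) == 0.0         # free rider: flat schedule
```

A slope near 1 thus corresponds to perfect conditional cooperation, while a flat schedule (free riding or unconditional full contribution) yields a slope of 0.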
Recognition Awards and Cooperative Attitudes

Figure 1 relates the number of received (left) and sent (right) recognition awards per employee to cooperative attitudes. We observe that Net-Givers act more cooperatively and are also recognized as such. They send more than 2.5 times as many recognition awards and receive about 40% more than their colleagues (MWU tests, pooling Net-Takers and Matchers, p = 0.057 and p = 0.039, respectively). The difference between Net-Givers and Matchers in sending behavior is statistically significant (MWU test, p = 0.012), and so is the difference between Net-Givers and Net-Takers in reception levels (MWU test).
We model the number of received (R^r) and sent (R^s) recognition awards with Poisson regressions of the form

log E[R_i] = β_0 + β_1' C_i + β_2' X_i + β_3 · year_i,

where C is the vector of dummies for Matchers and Net-Givers using Net-Takers as the base category. The covariate vector X consists of socio-demographics and company controls, including the career level and the job role as defined by the department (e.g., software development). The variable year absorbs differences between 2017 and 2018. The respective multivariate Poisson regression estimations presented in Table 3 are in line with the preceding non-parametric analyses. Net-Givers receive 51% more awards and send more than twice as many awards as Net-Takers when including socio-demographics and company controls (see columns (3) and (6)). Due to the relatively low number of employees sending awards (about 11% sent at least one award), these estimates are less precise than the estimations for the reception patterns.

19 In Appendix D, we show in more detail how our cooperative attitudes are related to the cooperation types proposed by Fischbacher et al. (2001) and Fischbacher and Gächter (2010). In Appendix E, we provide an extensive multivariate analysis that characterizes cooperative attitudes in terms of the employees' personal, behavioral and work-related characteristics. We observe a positive relationship between age and cooperativeness and, interestingly, that female employees are less cooperative than male employees. In terms of the behavioral survey measures, we document that employees are more likely to be Net-Takers the more competitive, distrusting, negatively reciprocal, extroverted and neurotic they are. Besides, we find that employees in the individual performance pay scheme are more likely to be Net-Takers than Matchers as compared to employees in the company performance scheme. There are no significant differences in the distribution of cooperative attitudes with respect to career levels, leadership responsibility, seniority, or business model.
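In a Poisson model with a log link, a type dummy's coefficient translates into a multiplicative rate ratio via exp(β). As a purely illustrative back-of-the-envelope check (the coefficient value below is ours, chosen to match the roughly 51% higher reception rate reported above, not a number from the paper):

```python
import math

# Hypothetical Poisson coefficient on the Net-Giver dummy (illustration only).
beta_net_giver = 0.41

# Under a log link, exp(beta) is the incidence rate ratio relative to
# the base category (Net-Takers).
rate_ratio = math.exp(beta_net_giver)
percent_more_awards = (rate_ratio - 1) * 100

assert abs(percent_more_awards - 51) < 1  # roughly 51% more received awards
```

This is why Poisson coefficients are conveniently read as percentage differences in award counts between attitude types.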
Notably, we also observe that Matchers receive and send significantly fewer awards than Net-Givers. We take the comparatively high number of sent recognition awards by Net-Givers as evidence for the external validity of experimentally elicited cooperation levels. Sending an award induces a positive externality on a co-worker and requires writing a justification for the award, i.e., it represents a costly pro-social act similar to public goods game contributions. The externality may involve positive emotions on the recipient's side, but also potentially some indirect monetary value. Remember that managers observe awards; hence, monetary consequences could include financial awards and merit increases, respectively. Moreover, recognition awards seem to be unrelated to a strong reciprocity concern, as Matchers, who exhibit strong reciprocity through their contribution schedule, receive and send significantly fewer awards than Net-Givers.

Notes (Table 3): Standard errors clustered on the team level in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01. Asterisks for the control variables show the test result from an F-test, testing the joint difference from zero. Alternative estimations using zero-inflated Poisson models yield qualitatively very similar results.

Financial Rewards and Cooperative Attitudes

Figure 2 shows the mean annual wage increases and the financial award allocation by cooperative attitudes and pay schemes. A similar pattern arises for both variables: 20 when pooling data from both pay schemes, Net-Takers receive a higher financial appreciation than their colleagues (MWU tests; wage increases, p < 0.001; financial awards, p = 0.077). We model the financial appreciation variables using linear regressions. Wage increases (w_t / w_{t−1}) are measured in percent of the base year (either 2016 or 2017, depending on the year of participation). Financial award payments (f) are measured in percent of the wage in 2017. In model (4), we use the same covariates as described in model (2). 21 In model (5), we drop year dummies as we include data on financial award payments from 2017 only.

Table 4 shows estimated coefficients from OLS regression models. Columns (2) and (6) contain estimated differences between cooperative attitudes, while controlling for socio-demographic and company covariates. We observe that Net-Givers' wage increases are 29% (1.5%-points) and their financial award payments are 15% (1%-point) lower than Net-Takers' appreciation, respectively. As already suggested by Figure 2, this difference is only relevant in the individual performance pay scheme. Here, Net-Givers receive about 48% (4.4%-points) lower wage increases and 32% (2.7%-points) lower financial award payments than Net-Takers (see columns (4) and (8), respectively). We observe no differential financial appreciation between Matchers and Net-Givers and no differences in the company performance pay scheme. One can also look at whether cooperative attitudes and observables determine wage levels instead of wage increases. The results of the analysis are provided in Appendix H.
Controlling for the relevant Career and Department Dummies as well as socio-demographic and company control variables, only age is a significant determinant of overall wage levels. There is no significant interaction effect with the incentive scheme either. Obviously, short-term changes in wage levels are much more responsive to cooperative attitudes. We know that these variations might change with age, with incentive schemes, and with other influences. Together with potential long-term selection effects into different areas or jobs within and outside the company, and leveling effects of collective bargaining agreements over time that matter for the overall wage levels, regressions that use wage levels as the dependent variable are probably not that informative for our setup. Hence, the results based on wage levels should be interpreted carefully; we would have needed a much more flexible wage determination environment (e.g., top-level management) to detect a potential relationship between cooperative attitudes and wage levels.
Analysis of Potential Mechanisms
How can a company achieve high levels of cooperation despite financial disincentives to cooperate? According to Rosen (1986), teamwork at the workplace (and cooperation) involves other, non-financial returns for employees such as less boring work or hedonic benefits from social interaction. In the context of our study, such non-financial returns (e.g. measured by the number of received recognition awards) are likely to act as equalizing or compensating differentials against the financial disincentives that may arise from cooperation when wage increases or merit-based awards are lower than for those who cooperate less.
In the following, we consider this mechanism and further plausible mechanisms that may be prima facie in line with our main results. In discussing alternative mechanisms, we do not necessarily assume that our three main results, (i) high cooperation levels, (ii) a negative nexus between cooperative attitudes and financial rewards, and (iii) a positive nexus between cooperative attitudes and non-financial outcomes, are connected. Obviously, only exogenous variation in some variables can provide a final answer on the sole driver of our results. However, some variables will never be varied exogenously in a meaningful way, such as wage levels or wage increases. There is always a tradeoff between searching under the lamppost (and accepting that one studies very special setups that allow for exogenous variation) and using real-world environments that limit opportunities for exogenous variation. Nonetheless, we can provide heterogeneity analyses and robustness checks to shed light on the potential relevance of various mechanisms for our setting and on whether they are in line with our main results.
High Levels of Cooperation
Our measures of cooperation are qualitatively comparable to the standard conditional contribution patterns documented in the behavioral economics literature, yet they appear higher (e.g., Fischbacher et al., 2001; Fischbacher and Gächter, 2010; Kocher et al., 2015).22 To what extent cooperation rates in our setting reflect a generally high level of cooperativeness of employees, rather than a stronger role of a potential social desirability bias in our setup, is a question that deserves further attention.
Within the framework of our study, we implemented an unexpected option to donate the experimental income at the end of the experiment in 2017. Participants could choose between receiving their income from the experiments in their personal bank account and donating it to one of five charities of their choice. At this point, participants did not know their income yet. We find a positive but insignificant relationship between donations and our public goods game variables (contributions and more cooperative types). This holds regardless of whether we use unconditional contributions, conditional contributions, or cooperative attitudes as regressors (see Table 5). Thus, donations seem to draw on a concept distinct from cooperative attitudes and cooperation.

Footnote 22: With respect to other non-student samples, Charness and Villeval (2009) observe that employees in the manufacturing industry contributed between 32% and 38% of their endowment to a three-person public good. Another example is Burks et al. (2016), who classify 24% of truck drivers in the same company as free-riders using a Prisoner's Dilemma game. Algan et al. (2013, 2014) conducted public goods games with programmers at Sourceforge.net (an open source software platform) and with users that contribute to Wikipedia, respectively. In both samples, subjects have already self-selected into a voluntary contribution platform; still, they are less cooperative, on average, than employees in our company (the 850 Sourceforge.net users unconditionally contribute 64% of their 10 tokens; the 1,194 Wikipedia users are less likely to be unconditional contributors and more likely to be free-riders than employees in our setting).

We consider this suggestive evidence that social desirability is not too much of an issue in our setup. Donation behavior (or dictator game giving more generally) is often thought of as being heavily affected by social desirability concerns. If cooperation in the public goods game were affected by social desirability concerns as well, we would observe a significantly positive correlation between the two sets of decisions. In 2019, we implemented an additional public goods game after the main experiment in which the MPCR was set to either 0.3 or 1.2. Participants that are driven by social desirability concerns should be less likely to adjust their unconditional contribution to the reduction in the MPCR from 0.5 to 0.3, because they might want to signal cooperativeness. Responses to the increase of the MPCR to 1.2 should mainly reflect a sound understanding of the game's incentives. We elicited unconditional contributions, beliefs, and conditional contribution schedules for both alternative MPCRs, using the strategy method. We observe strong reactions to the two variations. Subjects significantly decrease unconditional contributions, beliefs, and conditional contributions when the MPCR decreases to 0.3 (means: 3.71, 2.91, 3.82, respectively, using Wilcoxon signed-ranks tests in comparison to the standard MPCR of 0.5; all p-values < 0.001). The reverse happens when the MPCR increases to 1.2 (8.82, 8.53, 8.37, respectively, using Wilcoxon signed-ranks tests in comparison to the standard MPCR of 0.5; all p-values < 0.001). We conclude that neither social desirability nor confusion are convincing explanations for the high levels of cooperation that we observe.
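The incentive logic behind these MPCR variations can be made concrete with a small numerical sketch (our own illustration, not part of the experimental materials): with three group members and a 10-token endowment, the private marginal return of contributing one token is MPCR − 1, so keeping all tokens maximizes own earnings for MPCR 0.3 and 0.5, while full contribution does for MPCR 1.2.

```python
def payoffs(contributions, mpcr):
    """Per-member payoff in a 3-person public goods game with a 10-token endowment."""
    pot = sum(contributions)
    return [10 - c + mpcr * pot for c in contributions]

# The private return of contributing all 10 tokens instead of none,
# holding the others' contributions fixed, equals 10 * (mpcr - 1):
# negative for 0.3 and 0.5 (free-riding pays), positive for 1.2.
for mpcr in (0.3, 0.5, 1.2):
    gain = payoffs([10, 0, 0], mpcr)[0] - payoffs([0, 0, 0], mpcr)[0]
    print(f"MPCR {mpcr}: private gain from full contribution = {gain:+.1f}")
```

With an MPCR of 1.2 contributing becomes individually profitable, so only confusion, not payoff maximization, would keep contributions low; this is why responses to that variation mainly test comprehension of the incentives.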
Negative Nexus Between Cooperative Attitudes and Financial Rewards
Following Bowles et al. (2001) and Barr and Serneels (2009), the correlation between cooperative attitudes and financial rewards could also be explained by an omitted variable bias with respect to skills that are specific to cooperative attitudes and related to performance differences. For example, Net-Givers could have a comparative advantage in networking or socializing, and Net-Takers could be more strategically sophisticated. Table 6 shows OLS regressions of financial rewards on cooperative attitudes, estimated separately for the two business models that exist in the company. As cooperation is more important in cloud-related jobs, we expect Net-Givers to perform better than Net-Takers in such jobs and thus to receive higher wage increases or financial awards. However, Net-Takers receive significantly higher financial rewards than Net-Givers and Matchers (see columns (1) and (2) and columns (5) and (6), respectively). This relationship does not exist in customer-related jobs (see columns (3) and (4) and columns (7) and (8), respectively). Thus, even if Net-Givers work on tasks with complementarities for which they should have the more appropriate cooperative attitude, Net-Takers get 2.1%-points higher annual wage increases and 2.5%-points higher award payments. This result indicates that there are no strong comparative skill and performance differences between attitudes; it might still be the case, however, that Net-Takers have an absolute skill advantage. This would require Net-Takers, i.e., less pro-social types, to have, in general, higher levels of skill.
Another potential mechanism could be related to selection based on cooperative attitudes. Net-Takers could select into jobs with higher financial rewards, while Net-Givers could select into jobs with higher non-financial rewards (Falk and Heckman, 2009; Dohmen and Falk, 2011). Conversely, along the lines of Bowles (1998) and Levitt and List (2007), financial and non-financial rewards could also shape cooperative attitudes. Pay-scheme-specific norms could render selfish behavior in the individual performance scheme and pro-social behavior in the company performance scheme more appropriate, and hence employees who comply with the norm get financially rewarded.
In line with both explanations, Appendix E shows that employees in the individual performance pay scheme are significantly more likely to be Net-Takers than employees in the company performance pay scheme, which goes with conventional wisdom: either individual performance incentives do not foster pro-social behavior, or they do not seem to attract more pro-social employees. Also, our survey analysis confirms that employees in the individual performance pay scheme consider cooperation to be less important to fulfill their tasks successfully. If this observation is due to the idea that incentives shape preferences, we would expect that employees who have already worked for several years in the company, and presumably in the same pay scheme, exhibit pay-scheme-specific norms more strongly. Thus, we expect that employees get less cooperative the longer they work in the individual performance pay scheme. Table 7 shows results from an OLS regression assessing the effect of seniority on the relationship between cooperative attitudes and pay schemes. While the significant difference in mean conditional contributions between pay schemes remains, we find no significant interaction effect with seniority. This evidence suggests a potentially stronger role for selection.

Notes to Table 6: Standard errors clustered on team level in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01. For wage increases, we use data from 2016/2017 for participants of 2017 and data from 2017/2018 for participants of 2019. We use the value of financial award payments received in 2017 in percent of the wage level in that year. We exclude employees that work in neither the cloud nor the customer business model. Here, we do not observe statistically relevant differences in financial awards with respect to cooperative attitudes. In terms of wage increases, we observe that Matchers receive slightly higher increases than Net-Takers, which is marginally significant (p = 0.071). Asterisks for the control variables show the test result from an F-test, testing the joint difference from zero.
Positive Nexus Between Cooperative Attitudes and Non-Financial Rewards
We observe that the positive relationship between cooperative attitudes and sending awards appears widely insusceptible to context factors like pay schemes and business models. Based on simple regressions similar to those used in Table 3, we observe that Net-Givers send significantly more recognition awards than their colleagues in the cloud (means per employee: 0.32 versus 0.12, p = 0.085) and the customer business model (means per employee: 0.65 versus 0.13, p = 0.002), as well as in the company performance pay scheme (0.41 versus 0.13 per employee, p = 0.004). We also observe a similar pattern in the individual performance pay scheme that is, however, not statistically significant; admittedly, the sample size for this comparison is relatively small (0.27 versus 0.19 per employee, p = 0.492). The existence of the relationship across different company contexts suggests a more general link between recognition awards and cooperative attitudes, corroborating our external validity argument. At the same time, we observe strong differences in reception rates between context factors. We find that reception rates are generally higher in the cloud model than in the customer-based model (0.31 versus 0.21 per employee; p = 0.046) and in the company performance pay scheme than in the individual performance pay scheme (0.30 versus 0.16 per employee; p = 0.023). This indicates that the recognition tool is used more frequently in areas in which teamwork and cooperation are required.
In our post-experimental survey, we elicit further variables that may relate to non-financial rewards or non-financial costs of cooperation. On the individual level, we capture work-related stress and overall work satisfaction. While our stress measure appears to be unrelated to conditional contributions (Spearman correlation = -0.098, p = 0.438), we observe a strong positive correlation between the cooperativeness of employees and work satisfaction (Spearman correlation = 0.916, p = 0.014) that is robust to including personal and company controls. On the team level, we measure perceived team cohesion and team stability. In Appendix I, we show that there exists no statistically relevant relationship between team stability and the share of Net-Givers in a team, but teams that perceive themselves as being more cohesive tend to consist of more Net-Givers.
Conclusion
This paper provides novel evidence on how cooperative attitudes of employees are related to professional behavior and rewards within a large company. We observe high levels of cooperation among employees and evidence on the external validity of our experimental measure of cooperative attitudes for the company setting. In addition, we document a robust negative nexus between cooperative attitudes and financial appreciation, and a positive nexus between cooperative attitudes and non-financial rewards.
In line with a recent literature that emphasizes the intrinsic nature of cooperation (e.g., Hamilton et al., 2003; Bandiera et al., 2005, 2011, 2013; Ruff and Fehr, 2014), our analyses suggest that the company studied here positively affects levels of cooperation, despite financial disincentives for cooperators, by providing cooperative employees with non-financial compensation. We also document a potential role of selection based on cooperative attitudes into pay schemes, similar to Burks et al. (2009).
Our findings have implications for the optimal design of incentives and management practices in companies that want to foster cooperation. A general implication is that companies should create a work context that allows non-monetary forms of rewards as values for cooperation to unfold. This might entail the opportunity for employees to voluntarily select into differently composed teams or work organizations, or the selection into organizational units with different cooperative cultures (Kosfeld and Von Siemens, 2011). At the same time, our findings stress the importance of management practices that operationalize the non-monetary returns of cooperation (like the recognition award systems used in our company).
We see our study as a first step and encourage other researchers to study cooperation in corporations as well. Obviously, we have no way to draw firm conclusions about which results are company-specific and which are more general, given that our focus is on one company. It might well be that the specific interplay between incentives and culture at our company differs from that in other companies, or that the industry our company operates in has specific characteristics in terms of how cooperation is rewarded. Given the importance of cooperation in teamwork, it is astonishing that there is not more research empirically addressing the relationship between corporate culture, financial and non-financial rewards, and cooperation within the company. Although we believe that the gist of our results will hold more generally, given their systematic pattern, our results at the very least provide a proof of concept: the experimentally elicited measures of cooperation are systematically related to outcomes in the company. Our tests for external validity provide promising results.
We have searched for evidence outside the light of a lamppost, in contrast to some other studies that use more artificial designs in the wild to get more powerful inference. Both approaches seem useful. Next to understanding the causal mechanisms underlying our findings, a deeper understanding of the nature of the relationship between financial and non-financial incentives for cooperative behavior in organizations is required. Can financial and non-financial incentives work as substitutes on the individual employee level and, at the same time, work as complements when regarding the company's profits? How can the optimal mix of financial and non-financial incentives be characterized? More research is needed to empirically understand the optimal balance between cooperation-enhancing and competition-enhancing policies within organizations, probably dependent on cooperation culture and workforce composition.
B Instructions
You are a member of a group of three, consisting of anonymous participants in this study. All participants are randomly selected employees of [COMPANY]. The combination into groups of 3 occurs randomly. The payouts for you and the other group members in this section depend on your decisions and the decisions of the other members of your group.
Decision-making situation

Each member of the group must decide on the use of 10 tokens each. You and the other group members can put the 10 tokens into a private account, or you can deposit them, in whole or in part, into a common account. Any tokens that you do not deposit into the common account are automatically added to your private account.
Income from the private account
You earn exactly one euro for each token you put in your private account. For example, if you put 4 tokens into your private account, you will earn exactly €4 from your private account. No one but you receives income from your private account.
Income from the common account
For each token that is added to the common account, you will receive €0.5. The other two group members also each receive €0.5 for each token you contribute. Conversely, you also earn money from the contributions of the other two group members to the common account. The income of each member from the common account is determined as follows:

Individual income from the common account = sum of the contributions of all three group members to the common account × 0.5

For example, if the sum of all three group members' contributions to the common account results in 30 tokens, then you and the other two group members each receive 30 × 0.5 = €15 from the common account. If the three group members pay a total of 10 tokens into the common account, you and the other two group members receive 10 × 0.5 = €5 each from the common account.
Total income

Your total income is the sum of your income from your private account and your income from the common account. So:

Income from the private account (= 10 − contribution to the common account) + income from the common account (= 0.5 × sum of contributions to the common account) = total income

As described above, you can use 10 tokens to fund your private account and the common account. Each group member has to make two types of contribution decisions, which we will refer to below as the contribution and the contribution table. You can find a detailed description of your entries on the entry screens.
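As a compact restatement of the payoff rule (our own sketch, not shown to participants), total income can be computed as follows, reproducing the numerical examples from the instructions:

```python
def total_income(own, others, mpcr=0.5, endowment=10):
    """Income = private account (endowment minus own contribution)
    plus 0.5 euro per token in the common account (pot of all three members)."""
    pot = own + sum(others)
    return (endowment - own) + mpcr * pot

# If all three members contribute 10 tokens, the pot is 30 and each member
# receives 30 * 0.5 = 15 euro from the common account (and 0 privately).
print(total_income(10, [10, 10]))  # 15.0
# If the pot is 10 tokens in total and you contributed none of them,
# you keep 10 euro privately and earn 5 euro from the common account.
print(total_income(0, [5, 5]))     # 15.0
```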
B.1 Comprehension Questions
Please answer the following questions to ensure that you have understood the instructions of the experiment. If you are unsure, you can return to the instructions by clicking on "Back".
1. Assume that none of the group members (even you yourself) pay a contribution into the group account.
• How high is your total income?
• How high is the respective total income of the other two group members?
2. Assume that all three group members (also you yourself) each pay a contribution of 10 tokens into the group account.
• How high is your total income?
• How high is the respective total income of the other two group members?
3. Assume that you deposit 0 tokens into the common account and that the other two members of your group deposit 10 tokens each.
• How high is your total income?
• How high is the respective total income of the other two group members?
4. Assume that you pay 10 tokens into the common account and the other two members of your group each pay 0 tokens.
• How high is your total income?
• How high is the respective total income of the other two group members?
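The intended answers to these comprehension questions follow mechanically from the payoff rule; a small script (our own check, not part of the original materials) computes them:

```python
def total_income(own, others, mpcr=0.5, endowment=10):
    """Total income: private account (10 - own contribution) plus 0.5 per pot token."""
    pot = own + sum(others)
    return (endowment - own) + mpcr * pot

# (label, own contribution, contributions of the other two members)
scenarios = [
    ("1: nobody contributes",       0,  [0, 0]),
    ("2: everyone contributes 10", 10,  [10, 10]),
    ("3: you 0, others 10 each",    0,  [10, 10]),
    ("4: you 10, others 0 each",   10,  [0, 0]),
]
for label, own, others in scenarios:
    you = total_income(own, others)
    other = total_income(others[0], [own, others[1]])
    print(f"{label}: you earn {you}, each other member earns {other}")
```

The answers are €10/€10, €15/€15, €20/€10, and €5/€15, respectively; scenarios 3 and 4 make the free-riding incentive explicit.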
B.2 Contribution Decisions
When choosing the contribution to the common account, you determine how many of the 10 tokens you want to deposit into the common account. The deposit to your private account is automatically the difference between 10 tokens and your contribution to the common account.
• Please enter the amount you would like to pay into the common account (any whole-number value between and including 0 and 10 is possible): ...

Now you will be asked to fill in a contribution table. In the contribution table, you should specify how many tokens you want to pay into the common account for each possible (rounded) average contribution of the other two group members to the common account. So, depending on how much the others contribute on average, you must define your own contribution decision. For each average contribution of the other two group members, please indicate the amount you would like to pay into the common account (any whole-number value between and including 0 and 10 is possible; of course, you can also enter the same amount several times): What is your contribution to the common account if...
• ... the other two group members deposit an average of 0 tokens.
• ... the other two group members deposit an average of 1 token.
• ... the other two group members deposit an average of 2 tokens.
• ... the other two group members deposit an average of 3 tokens.
• ... the other two group members deposit an average of 4 tokens.
• ... the other two group members deposit an average of 5 tokens.
• ... the other two group members deposit an average of 6 tokens.
• ... the other two group members deposit an average of 7 tokens.
• ... the other two group members deposit an average of 8 tokens.
• ... the other two group members deposit an average of 9 tokens.
• ... the other two group members deposit an average of 10 tokens.
Help option: The numbers in the left column are the possible (rounded) average contributions of the other two group members to the common account. You now have to specify how many tokens you want to deposit into the common account for each slider, provided that the others contribute the specified amount on average. You have to make an entry in each field. For example, you are to specify how much you contribute to the common account if the other group members deposit an average of 0 tokens into the common account; how many tokens you contribute if the others contribute an average of 1 token or 2 tokens or 3 tokens, and so on. You can enter any whole-number contribution from 0 tokens to 10 tokens in each field and, of course, the same amount several times.
B.3 Incentive Compatibility

Payout relevance of your decisions
After all study participants have made their decisions, one member is randomly selected in each group of 3. For the randomly selected member, only the contribution table filled in by him/her is relevant for decision making and payout. For the other two group members who have not been selected, only the contribution is relevant for decision-making and payout. The average of the two contributions (rounded to the next whole number) then determines the relevant conditional contribution from the third member's contribution table. Of course, you do not yet know which of your contribution decisions will be randomly selected. You must therefore carefully consider both types of contribution decisions, as both can become relevant to you.
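The selection mechanism can be sketched as follows (our own illustration; the half-up rounding of the average is an assumption based on the phrase "rounded to the next whole number"):

```python
import random

def settle_group(unconditional, tables, selected=None, rng=random):
    """Determine the payoff-relevant contributions for a group of three.

    unconditional: the three members' unconditional contributions (0-10).
    tables: each member's contribution table, indexed 0-10 by the rounded
            average contribution of the other two members.
    """
    if selected is None:
        selected = rng.randrange(3)  # one member is drawn at random
    others = [c for i, c in enumerate(unconditional) if i != selected]
    row = int(sum(others) / 2 + 0.5)  # half-up rounded average of the other two
    final = list(unconditional)
    final[selected] = tables[selected][row]  # the table replaces this member's choice
    return final

# Example: member 0 is drawn; the others contribute 6 and 7 (average 6.5 -> row 7),
# so member 0's conditional contribution for row 7 replaces the unconditional 10.
matcher_table = list(range(11))  # a perfect conditional cooperator
print(settle_group([10, 6, 7], [matcher_table] * 3, selected=0))  # [7, 6, 7]
```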
The following graphic (Figure 1) is intended to visualize the decision-making situation. For the randomly selected person on the right, the conditional contribution from the contribution table is relevant. For the other two group members, the contribution is relevant for payout.
B.4 Belief Elicitation
In addition to your earnings from your private and common account, you will receive a further payout for estimating the average contribution of the other two members of your group to your common account. Your payout will depend on how accurately you estimate the actual average contribution of your two group members. If you are exactly right, you will receive an additional €5. If your estimate differs by 0.5 or more tokens from the actual average contribution, you will receive €0. Please enter a number from 0 to 10. What do you think is the average amount of tokens your two group members contribute to the common account?
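The scoring rule amounts to a simple threshold payout (our own sketch; the instructions specify the exact-hit and the 0.5-or-more cases, so paying €5 for any deviation strictly below 0.5 tokens is the implied reading):

```python
def belief_payout(estimate, actual_average):
    """Pays 5 euro if the estimate is within 0.5 tokens of the actual
    average contribution of the other two members, and 0 euro otherwise."""
    return 5.0 if abs(estimate - actual_average) < 0.5 else 0.0

print(belief_payout(7, 7.0))  # 5.0 (exactly right)
print(belief_payout(7, 7.5))  # 0.0 (differs by exactly 0.5)
```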
• ... Average contribution of the other two members of your group
C Communication and Coordination of Employees
Employees could interrupt the experiments and continue at a later point in time. On average, employees finished the experiment and survey within approximately one and a half days (mean = 1.35 days). While employees in a public goods game group were anonymously selected and matched, one might be concerned about communication and coordination during the experiment, as some teams in the company are seated in shared offices (at most four team members per office). Alleviating this concern, we observe no correlation between the contribution behavior, beliefs, and attitudes of employees and the variance of finishing times within work teams (Spearman correlations: uncond. contribution, ρ = −0.004, p = 0.905; belief about others' uncond. contribution, ρ = −0.006, p = 0.853; mean cond. contribution, ρ = 0.008, p = 0.827).
D Overview of Public Goods Game Measures
Our employee sample appears to be very cooperative, as can be seen from Table 2. In the unconditional contribution decision, employees contribute on average 7.9 tokens (which corresponds to 79% of the endowment) to the public good. The average belief about the public good contributions of the other group members equals 6.7 tokens. The difference between actual contributions and beliefs is statistically significant at the 1% level (Wilcoxon signed-rank test, p < 0.001). Reassuringly, we observe very similar responses in the public goods games when comparing the variables collected from the experiments in 2017 and 2019. This holds for the data presented in Table 4. We observe that cooperative attitudes are highly predictive of unconditional contributions, also when we control for beliefs about others' contributions (see Table 5). Net-Givers contribute more than Matchers, and Matchers contribute more than Net-Takers. Both differences are highly statistically significant.
The scatter plot in Figure 2 shows a significant variation in the average conditional contributions and the reciprocity parameter of employees. The size of the bubbles represents the frequency of the observed combination of mean conditional contribution and reciprocity. There are several mass points that stand out.
Next to our cooperative attitudes, we also classify cooperation types as described by Fischbacher et al. (2001) and Fischbacher and Gächter (2010). These types are also visible in the scatter plot. First, we observe employees that behave like perfect conditional cooperators. Secondly, there are clusters of employees whose contributions are independent of the contribution schedule. They either contribute nothing (free-riders) or they contribute a strictly positive amount (unconditional cooperators). Most of the unconditional cooperators contribute all their endowment. Thirdly, imperfectly conditional cooperators are split into two groups, conditional cooperators with a self-serving bias (mean unconditional contribution below 5) and conditional cooperators with an other-serving bias (mean unconditional contribution above 5). The remaining employees are classified as Others.

Notes: Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
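A simplified version of such a type classification could look as follows (a heuristic sketch: Fischbacher et al. (2001) use statistical monotonicity criteria, so the exact rules and thresholds here are illustrative assumptions, not the paper's procedure):

```python
def classify(schedule):
    """Classify an 11-entry conditional-contribution schedule
    (entries for others' average contributions of 0..10 tokens)."""
    if all(c == 0 for c in schedule):
        return "free-rider"
    if len(set(schedule)) == 1:
        return "unconditional cooperator"
    if schedule == list(range(11)):
        return "perfect conditional cooperator"
    if all(b >= a for a, b in zip(schedule, schedule[1:])):
        # weakly increasing, but not exactly matching the others' average:
        bias = "other-serving" if sum(schedule) / 11 > 5 else "self-serving"
        return f"conditional cooperator ({bias} bias)"
    return "other"

print(classify([0] * 11))         # free-rider
print(classify(list(range(11))))  # perfect conditional cooperator
print(classify([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5]))  # conditional cooperator (self-serving bias)
```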
Figure 3: Distribution of Cooperation Types
Notes: Bars show the fraction of all participating employees that belong to a particular cooperation type. Bars are ordered by mean conditional contributions.

Figure 3 shows an overview of all types, and Figure 4 relates our cooperative attitudes to the cooperation types. Cooperative attitudes subsume the classification types reasonably well. We use cooperative attitudes because they prove handier for the statistical analysis.
Figure 4: Cooperation Types and Cooperative Attitudes
Notes: Bars show the fraction of participating employees that belong to a particular cooperative attitude.

Table 6 shows an inverse U-shape relationship between cooperative attitudes and reciprocity.

Notes: Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.
E Correlates of Cooperative Attitudes
Notes: Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01; Five subjects missing because they did not insert information on their socio-demographic status. High Education is an indicator for higher than median education, subsuming two out of five education categories.
First, we observe an indication that age is positively related to the cooperativeness of employees. Older employees are significantly more likely to be Net-Givers rather than Net-Takers. The share of Matchers is relatively stable across age cohorts. Second, female employees are less frequently Net-Givers than Net-Takers and, again, the share of Matchers is very similar. Marginal effect calculations show that female employees are about 7%-points more likely than male employees to be Net-Takers rather than Net-Givers. Third, the competitiveness index correlates with cooperative attitudes. Intuitively, employees are more likely to be Net-Takers the more competitive they are. Moreover, we find that agreement with the statement "You can't trust strangers anymore" is highly predictive of the cooperative attitude. The likelihood of being a Net-Taker decreases with reported distrust in strangers. Finally, we observe positive correlations between survey measures for positive and negative reciprocity (agreement with "When someone does me a favor, I am willing to return it" and "If I am treated unjustly, I will take revenge at the first occasion, even if there is a cost to do so", respectively) and cooperative attitudes, again in the expected positive direction.23

In Table 8, we present the correlations between cooperative attitudes and structural variables from the company context. Here, the main observation is that the cooperativeness of employees is less pronounced in the individual performance pay scheme. While we classify 20% of participants in the company performance pay scheme as Net-Takers, the respective share increases to 27% in the individual performance pay scheme. This increase in the share of Net-Takers comes along with a decrease in the share of Matchers (from 41% to 35%). The share of Net-Givers is not significantly different between incentive schemes.
We do not observe significant differences in the distribution of cooperative attitudes with respect to career levels, leadership responsibility, seniority, or the team work production function.
Lastly, we use a short form of the big five personality trait questionnaire validated by Rammstedt et al. (2013) from our online survey. The traits consist of extraversion, agreeableness, conscientiousness, neuroticism, and openness. Table 9 shows the correlation between our cooperative attitude classification and the five traits. Net-Takers are significantly more extroverted and neurotic than Net-Givers, and more conscientious than Matchers.
Notes: Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01; 49 subjects are missing because they did not insert information on their socio-demographic status or there was no wage data available. High Education is an indicator for higher than median education, subsuming two out of five education categories. Career levels subsume several categories in each presented category.

Notes: Standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01; Four subjects missing because they did not insert information on their socio-demographic status. High Education is an indicator for higher than median education, subsuming two out of five education categories.
F Description of Outcome Variables
In the following, we provide descriptive analyses of our main outcome variables. Company variables stem from the records as of 12/31/2017 for employees that were invited to participate in the experiments in 2017. For employees invited to the second experiment, we use record data as of 12/31/2018. Table 10 shows the data availability for our main outcome variables. We have data on recognition awards from 2017 for employees that participated in 2017 and data from 2018 for the participants from 2019. Wage data covers 2016/2017 and 2017/2018 for the employees in the two roll-out phases, respectively. This allows us to look at changes in wages over time. We do not have information on financial awards in 2018 for employees from the first experiments due to data restrictions at the company. In addition, the company-wide budget for the financial award allocation differed strongly between 2017 and 2018, such that comparability is low.

Table 11 shows robustness analyses of our main effects with regard to the part-time share of employees.
H Wage Levels and Cooperative Attitudes
Columns (1) to (3) of Table 12 show that we find no significant relationship between wage levels and cooperative attitudes. In columns (4) to (6), we additionally control for an interaction of cooperative attitudes and age. In these regressions, Net-Givers and Matchers earn less than Net-Takers, but this effect decreases with age. This is likely related to the explanations mentioned in the main text, such as the in-/outflux of employees and the leveling effects of collective bargaining agreements.
Notes: Asterisks for the control variables show the test result from an F-test, testing the joint difference from zero.
I Survey Outcomes and Cooperative Attitudes
In Table 13, we show OLS regression models of both survey variables (perceived team stability and team cohesion) on the shares of Net-Takers, Matchers, and Net-Givers in a work team, estimated using regressions with analytical weights to account for team-specific participation rates. We detect no statistically relevant relationship with the perception of team stability, as shown in column (1). However, in column (2), we find that members of teams that perceive themselves as being more cohesive tend to be more cooperative in the experiment.
Notes: Robust standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01. We use OLS with analytical weights that emphasize averages of teams that participated with a higher share of team members. We control for gender and age composition as well as average seniority. We do not control for career levels or function compositions because of the large number of different categories.
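To make the weighting scheme concrete, the analytical-weights regression can be sketched as a small weighted least-squares fit; the data, variable names, and coefficients below are synthetic stand-ins, not values from the study:

```python
import numpy as np

# Synthetic team-level data (hypothetical): perceived cohesion regressed on
# the share of Net-Takers, weighting each team by its participation rate so
# that averages built from more respondents count more (analytical weights).
rng = np.random.default_rng(0)
n_teams = 40
share_net_takers = rng.uniform(0.0, 1.0, n_teams)
participation = rng.uniform(0.3, 1.0, n_teams)   # share of team members who took part
cohesion = 3.0 + 0.8 * share_net_takers + rng.normal(0.0, 0.3, n_teams)

# Weighted least squares via sqrt-weight rescaling of an ordinary LS problem
X = np.column_stack([np.ones(n_teams), share_net_takers])
w = participation / participation.mean()
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], cohesion * sw, rcond=None)
print(beta)  # [intercept, slope], close to the synthetic (3.0, 0.8)
```

The sqrt-weight trick is the standard reduction of WLS to OLS: minimizing Σ wᵢ(yᵢ − Xβ)² is equivalent to an unweighted fit on √wᵢ-scaled rows.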
"year": 2020,
"sha1": "73eaf344c31a3fd32464e4e0b59d938ffb0cea6d",
"oa_license": "CCBY",
"oa_url": "http://www.ihs.ac.at/publications/eco/ihswps-15.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "d3f2f27f19597f1900fedf2a6f33e1fc74ef4626",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Indicators of Intrinsic AGN Luminosity: a Multi-Wavelength Approach
We consider five indicators for intrinsic AGN luminosity: the luminosities of the [OIII]$\lambda$5007 line, the [OIV]25.89$\mu$m line, the mid-infrared (MIR) continuum emission by the torus, and the radio and hard X-ray (E $>$ 10keV) continuum emission. We compare these different proxies using two complete samples of low-redshift type 2 AGN selected in a homogeneous way based on different indicators: an optically selected [OIII] sample and a mid-infrared selected 12$\mu$m sample. We examine the correlations between all five different proxies, and find better agreement for the [OIV], MIR, and [OIII] luminosities than for the hard X-ray and radio luminosities. Next, we compare the ratios of the fluxes of the different proxies to their values in unobscured Type 1 AGN. The agreement is best for the ratio of the [OIV] and MIR fluxes, while the ratios of the hard X-ray to [OIII], [OIV], and MIR fluxes are systematically low by about an order-of-magnitude in the Type 2 AGN, indicating that hard X-ray selected samples do not represent the full Type 2 AGN population. In a similar spirit, we compare different optical and MIR diagnostics of the relative energetic contributions of AGN and star formation processes in our samples of Type 2 AGN. We find good agreement between the various diagnostic parameters, such as the equivalent width of the MIR polycyclic aromatic hydrocarbon features, the ratio of the MIR [OIV]/[NeII] emission-lines, the spectral index of the MIR continuum, and the commonly used optical emission-line ratios. Finally, we test whether the presence of cold gas associated with star-formation leads to an enhanced conversion efficiency of AGN ionizing radiation into [OIII] or [OIV] emission. We find that no compelling evidence exists for this scenario for the luminosities represented in this sample (L$_{bol}$ $\approx$ 10$^{9}$ - 8 $\times$ 10$^{11}$ L$_{\sun}$). (abridged)
Introduction
Active galactic nuclei (AGN) are powered by accretion onto a central supermassive black hole. According to the unified model (e.g. Antonucci 1993), an optically-thick torus of dust and gas surrounds this central engine. Hence the orientation of the system plays a central role in determining the observable features of AGN. In Type 1 AGN, the system is oriented face-on, leaving an unobstructed view of the central engine and the broad line region. In contrast, Type 2 AGN are oriented edge-on, blocking the accretion disk and the broad line region. These obscured AGN can be identified by their narrow optical and mid-infrared emission lines, which originate in gas photoionized by accretion disk photons. This narrow line region (NLR) extends hundreds of parsecs away from the central source and is therefore not significantly affected by torus obscuration.
The luminosity of emission lines formed in the narrow line region can therefore be used as an isotropic indicator of intrinsic AGN luminosity. The flux of the [OIII]λ5007 line is commonly used as such a diagnostic (e.g. Bassani et al. 1999, Heckman et al. 2005), as it is one of the most prominent lines and suffers little contamination from star formation processes in the host galaxy. This line can be attenuated by dust in the host galaxy, though this effect can be somewhat remedied by applying a reddening correction using the observed Balmer decrement (i.e. the observed ratio of the narrow Hα/Hβ emission lines compared to the intrinsic ratio) and the extinction curve for galactic dust (Osterbrock & Ferland 2006).
Isotropic indicators of AGN luminosity also exist in the infrared band and are much less affected by dust extinction than the optical [OIII] line. Recently, the luminosity of the [OIV] 25.89µm line has been shown to be a robust proxy of AGN power (e.g. Meléndez et al. 2008): it is formed in the NLR, so it is not affected by torus obscuration, and with an ionization potential of 54.9 eV, starburst activity does not significantly contribute to this line. AGN also emit over 20% of their bolometric flux in the mid-infrared (MIR), where photons produced by the continuum are absorbed by the torus and re-radiated (e.g. Spinoglio & Malkan 1989). This MIR emission from the dusty torus can also be a proxy for the intrinsic AGN luminosity. Two potential issues with the MIR continuum are contamination by emission from dust heated by stars (e.g. Buchanan et al. 2006; Deo et al. 2009) and possible anisotropy in the torus emission (e.g. Pier & Krolik 1992, Buchanan et al. 2006, Elitzur & Shlosman 2006, Nenkova et al. 2008). Radio and hard X-ray (E > 10 keV) fluxes can serve as proxies of the intrinsic AGN continuum. Radio emission has been shown to be similar between type 1 and type 2 AGN (e.g. Giuricin et al. 1990, Meléndez et al. 2010) and correlated with optical luminosity, in particular the [OIII] flux (Xu et al. 1999), making high resolution radio observations that isolate emission from the nucleus another diagnostic of intrinsic AGN power. Hard X-rays can pierce through the obscuring torus, provided that the object is not heavily Compton thick (N_H < 10^25 cm^−2), and have therefore been used as a method to select AGN samples (e.g. Winter et al. 2008, Treister et al. 2009).
Connections have been observed between star formation activity in the host galaxy and the central AGN (e.g. Kauffmann et al. 2003). Starburst activity can be parametrized by various IR features, such as the equivalent width (EW) of polycyclic aromatic hydrocarbons (PAHs). IR and optical data, such as the ratio of fine-structure lines and the shape of the spectral slope, can also reveal the relative amounts of AGN and starburst activity. Samples of Seyfert 2 galaxies (the predominant local class of type 2 AGN) are useful for examining the relationships among these star-formation and AGN-to-starburst indicators, as the obscuration of the central engine allows detailed study of the host galaxy.
To address these issues, we will use two complete and homogeneous Sy2 samples selected on the basis of isotropic indicators of AGN luminosity (one [OIII]-selected sample and one MIR-selected sample). We will compare the various diagnostics of intrinsic AGN luminosity and probe for biases resulting from sample selection criteria, starburst contamination, errors introduced by extinction corrections, and scatter due to the various physical mechanisms producing these emission features. Such biases are likely minimized in the diagnostic ratios with the smallest dispersion. Where available, we compare these ratios with the Sy1 values to probe for differences due to the inclination of the system, thus testing to what extent these indicators of intrinsic AGN luminosity are truly "isotropic." We will also test the agreement among mid-infrared and optical star formation indicators. Finally, we will examine the possibility that the fraction of the AGN ionizing luminosity that is converted into [OIII] and [OIV] emission is systematically higher in systems in which there is a copious supply of dense gas associated with starburst activity.
Sample Selection
The selection of the SDSS [OIII] sample is discussed in detail in LaMassa et al. (2009). In brief, Type 2 AGN were drawn from a parent sample of approximately 480,000 galaxies in SDSS Data Release 4 (DR4; Adelman-McCarthy et al. 2006). The Type 2 AGN with an observed [OIII] flux greater than 4 × 10^−14 erg s^−1 cm^−2 were selected, providing a complete sample of 20 Sy2s (hereafter the "[OIII] sample," listed in Table 1).
The mid-IR sample comprises the Seyfert 2 galaxies from the original IRAS 12µm survey (Spinoglio & Malkan 1989). This represents a complete sample of Sy2s down to a flux-density limit of 0.3 Jy at 12µm, drawn from the IRAS Point Source Catalog (Version 2), with galactic latitude |b| > 25° to avoid contamination from the Milky Way. We have dropped NGC 1097 from this original sample as it has since been reclassified as a Type 1 Seyfert (Storchi-Bergmann et al. 1993), leaving 31 mid-IR selected Sy2s (hereafter the "12µm sample," listed in Table 2).
Optical Data
The optical data for the [OIII]-sample were drawn from SDSS DR4, whereas the optical data for the 12µm sample were collected from the literature or from SDSS Data Release 7 (DR7) where available. The reddening-corrected [OIII] flux (F_[OIII],corr) was calculated using the observed Hα/Hβ ratio and an intrinsic ratio of 3.1, with the R = 3.1 extinction curve for galactic dust (Osterbrock & Ferland 2006). Tables 3 and 4 list the optical emission line fluxes and ratios utilized for this study, as well as the relevant literature sources for the 12µm sample. The black hole masses (M_BH) were derived for the [OIII] sample from the SDSS velocity dispersion (σ) and the M-σ relation (M_BH = 10^8.13 (σ/200 km s^−1)^4.02 M_⊙; Tremaine et al. 2002). We used literature values for M_BH for the 12µm sources, with most of the masses derived using the M-σ relation cited above. For F04385-0828, F05189-2524 and TOLOLO 1238-364, the full width at half maximum (FWHM) of the [OIII] line was used as a proxy for the velocity dispersion (Wang & Zhang 2007; Greene & Ho 2005), and photometry of the host galaxy was used to estimate M_BH for F08572+3915 (see Veilleux et al. 2009 for details).
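The two conversions just quoted can be sketched as below. The exponent of 2.94 in the extinction correction follows the commonly used Bassani et al. (1999) prescription and is an assumption here, since the text quotes only the intrinsic Balmer ratio of 3.1:

```python
import numpy as np

def deredden_oiii(f_obs, balmer_obs, intrinsic=3.1, exponent=2.94):
    # Extinction-correct an observed [OIII]5007 flux using the Balmer
    # decrement; the exponent 2.94 (Bassani et al. 1999 style) is an
    # assumption, as the text specifies only the intrinsic ratio of 3.1.
    return f_obs * (balmer_obs / intrinsic) ** exponent

def mbh_from_sigma(sigma_kms):
    # M-sigma relation of Tremaine et al. (2002), in solar masses
    return 10.0 ** 8.13 * (sigma_kms / 200.0) ** 4.02

# A Balmer decrement of 4.5 implies roughly a factor-of-3 flux correction
print(deredden_oiii(4.0e-14, 4.5), mbh_from_sigma(150.0))
```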
Infrared Data
The infrared data presented here were obtained from the Infrared Spectrograph (IRS, Houck et al. 2004) on board the Spitzer Space Telescope. Low-resolution spectra were obtained using the Short-Low (SL, 3.6"×57" aperture size) and the Long-Low (LL, 10.5"×168" aperture size) modules, and high-resolution spectra were provided by the Short-High (SH, 4.7"×11.3" aperture size) and the Long-High (LH, 11.1"×22.3" aperture size) modules.
The Sy2s in the [OIII]-sample were observed in IRS staring mode in both high and low resolution under Program ID 30773. For the 12µm sample, high resolution data existed for all 31 Sy2s but low resolution data were only available for 30 galaxies (IRAS 00198-7926 lacked low resolution data). The high resolution data were obtained in IRS staring mode for 30 of the Sy2s (NGC 5194 had only IRS spectral mapping mode high-resolution data in the archive). Several galaxies had multiple IRS observations: we analyzed these observations independently and compared our results between the two observations. For the low-resolution data, IRS staring mode was used when available, with the remainder observed in IRS spectral mapping mode. The Spectral Modeling Analysis and Reduction Tool (SMART, Higdon et al. 2004) was used to reduce the staring mode observations for the 12µm sample, to be consistent with previous IRS analysis of the 12µm sample (e.g. Tommasin et al. 2008, Wu et al. 2009, Buchanan et al. 2006); Spitzer IRS Custom Extraction (SPICE) was used to analyze the staring mode observations for the [OIII]-sample 1 , and the Cube Builder for IRS Spectra Maps (CUBISM, Smith et al. 2007a) was utilized to analyze the spectral mapping observations. Table 5 lists the Program ID(s) for each galaxy, the IRS mode used, and the spectral extraction area for low-resolution spectral mapping mode data (discussed below).
High-Resolution IRS Staring Spectra
We used the basic calibrated data (BCD) pipeline products as the starting point for our analysis. Rogue pixels were removed using the IDL routine IRSCLEAN MASK and the rogue pixel mask matching the campaign number of the observation. Dedicated off-source background observations were taken for all sources in the [OIII]-sample and for most of the Sy2s in the 12µm sample. Multiple background observations, if present, were coadded within each nod and subsequently subtracted from the source image. The background-subtracted source images were then coadded between the two nods. The galaxies in the 12µm sample that had dedicated background observations and thus were background subtracted are marked with a "b" in Table 5. For sources in the 12µm sample without dedicated off-source observations, no background subtraction was performed.
Spectra were extracted from these combined observations, using the full aperture extraction mode. The edges of each order were then inspected, removing any data points that fell outside of the calibrated range for that order (IRS Data Handbook, Version 3.1, Table 5.1). The orders were then combined using a 2.5-σ clipping mean, resulting in a final cleaned spectrum.
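The order-combination step can be illustrated with a minimal sigma-clipped mean (a simplified stand-in for the clipping performed by the reduction software; the flux values are made up):

```python
import numpy as np

def sigma_clipped_mean(values, nsigma=2.5, max_iter=10):
    # Iteratively reject points more than nsigma standard deviations from the
    # running mean, then average the survivors -- a toy version of the
    # 2.5-sigma clipping used when combining spectral orders.
    vals = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        mean, std = vals.mean(), vals.std()
        keep = np.abs(vals - mean) <= nsigma * std
        if keep.all():
            break
        vals = vals[keep]
    return vals.mean()

# A single wild point in an order-overlap region is rejected before averaging
fluxes = np.array([1.00, 1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00, 5.00])
print(sigma_clipped_mean(fluxes))  # ~1.0 (the outlier at 5.00 is clipped)
```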
Low-Resolution IRS Staring Spectra
The low-resolution data were processed in a similar fashion as the high-resolution data, i.e. we started with the BCD products and removed the rogue pixels with IRSCLEAN MASK. However, for these observations a background data set was built for each nod and order by coadding the off-source order and nod position. The background-subtracted nods (following the same procedure as above) were combined for each order and the spectra then extracted using tapered column extraction. The orders were combined using a 2.5-σ clipping average. This procedure was executed separately for the SL and LL module. Fourteen of our galaxies had low resolution IRS staring mode data; the rest were acquired in spectral mapping mode. We note that IRAS 00198-7926 did not have archival low resolution spectral data.
Spectral Mapping Spectra
The IRS spectral mapping observations were analyzed with CUBISM (Smith et al. 2007a), which uses the BCD data to create 3-D spectral cubes (one spectral dimension and 2 spatial dimensions). For the low resolution data, background observations were built from the other order of the on-source module (e.g. SL 2 was used as the background for SL 1, etc.). After the rogue pixels were removed, using the default "autogen bad pixels" option in CUBISM, a spectral cube was built. Spectra were then extracted using matched apertures among the detectors and centered on the nucleus. The aperture extraction sizes for these low-resolution spectral mapping observations are listed in Table 5. The low resolution spectral mapping data for NGC 1068 were saturated near the nucleus and consequently not included in this analysis.
For the IRS spectral mapping high resolution observation of NGC 5194, no background subtraction was performed. The spectrum was extracted over the full cube, corresponding to a size of 31.5"×45" in the LH module and 13.8"×27.6" in the SH module.
Radio and Hard X-ray Data
The radio and hard X-ray data were drawn from the literature; VLA radio data at 8.4 GHz were only available for the 12µm sample (Thean et al. 2000). In several cases, multiple radio components were analyzed; we included only the flux for the component nearest the published center of the galaxy. Twenty-six of the 31 12µm sources had radio data, with 3 additional sources having upper limits. The hard X-ray fluxes originated from the 22-month Swift-BAT Sky Survey (Tueller et al. 2010) and from BeppoSAX (Dadina 2007). Only 11 out of the 31 12µm sources and one of the 20 [OIII] sources (IC 0486) have X-ray detections in the 14-195 keV range. We adopted an upper limit of 3.1 × 10^−11 erg cm^−2 s^−1, the flux limit of BAT, for the remainder of the sample when an upper limit was not quoted in either Tueller et al. (2010) or Dadina (2007).
IR Emission Line Fluxes
The high resolution spectra were utilized to measure the emission line fluxes: a Gaussian profile was fit to the emission line feature, with the local continuum, centered on the line's rest-frame wavelength, fit by a zero- or first-order polynomial. The errors were estimated by calculating the root-mean-square (RMS) around this local continuum and measuring the flux values with the continuum shifted by ± the RMS. In the cases where an emission line was not present, a 3-σ upper limit was estimated from the RMS around the best-fit local continuum (where the RMS is assumed to be the 1-σ error). In the cases with multiple observations per galaxy, we measured the emission line fluxes independently and averaged the resulting values; these flux measurements agreed within several percent between most of the individual observations, with at most a factor of ∼1.5 discrepancy, which was only present in one of the sources. 2 Tables 6 and 7 list the emission line flux values for the [OIII] and 12µm samples, respectively. Comparing our line flux values with Tommasin et al. (2008, 2010), we find that our [OIV] flux values largely agree within a factor of 1.5 (with the exception of NGC 1667 and NGC 7582, where their values are greater than a factor of 2 higher than ours). However, their [NeII] flux values are generally systematically higher by a factor of ∼2.5-4.5, though we do obtain consistent values for NGC 424, NGC 5135 and NGC 5506. Despite these differences in the measured [NeII] line strength, we obtain similar results to Tommasin et al. (2010), namely that as the relative contribution of the AGN to the ionization field increases (parameterized by [OIV]/[NeII]), the starburst strength (parameterized by the PAH equivalent width) decreases.
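A minimal version of this fitting procedure might look as follows; the wavelength grid, noise level, and line parameters are synthetic and purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_line(x, amp, mu, sigma, c0, c1):
    # Gaussian emission line on a first-order polynomial local continuum
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c0 + c1 * x

# Synthetic spectrum around a hypothetical 25.89 um line
rng = np.random.default_rng(1)
wave = np.linspace(25.4, 26.4, 200)
flux = gauss_plus_line(wave, 2.0, 25.89, 0.03, 0.5, 0.1)
flux += rng.normal(0.0, 0.02, wave.size)

popt, _ = curve_fit(gauss_plus_line, wave, flux, p0=[1.0, 25.89, 0.05, 0.5, 0.0])
line_flux = popt[0] * abs(popt[2]) * np.sqrt(2.0 * np.pi)  # integrated Gaussian

# Error in the spirit of the text: the RMS of the residuals away from the
# line, i.e. the scatter around the best-fit local continuum
off_line = np.abs(wave - popt[1]) > 3.0 * abs(popt[2])
rms = np.std((flux - gauss_plus_line(wave, *popt))[off_line])
print(line_flux, rms)
```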
IR Continuum Flux and PAHs
The MIR continuum flux values (F_MIR) and PAH equivalent widths (EWs) were measured using the low resolution spectra. For the galaxies that had multiple observations, we utilized the observations that had consistent flux values in the overlap region between the SL and LL modules: Program ID 30572 for NGC 1386, NGC 4388, NGC 5506 and NGC 7130; Program ID 0086 for NGC 5135; and Program IDs 00086 and 30572 for NGC 5347 (for this source, the analysis was done separately for each observation and the results averaged together). The MIR continuum flux was measured at 13.5 µm (rest-frame), averaged over a 3µm window; these flux values are listed in Table 6 for the [OIII]-selected sample and Table 7 for the 12µm sample. This window was chosen as it is free from strong emission line and PAH features. 3 In LaMassa et al. (2009), we included the flux centered at 30µm as part of the MIR flux diagnostic. However, emission at this longer wavelength can be strongly affected by star formation processes in the host galaxy (e.g. Deo et al. 2009, Baum et al. 2010), so here we use F_13.5µm as F_MIR. We used PAHFIT (Smith et al. 2007b) to measure the PAH EWs, a program which uses a model consisting of several components: a starlight component represented by blackbody emission at T = 5000 K, a thermal dust continuum constrained to default temperature bins (35, 40, 50, 65, 135, 200 and 300 K), IR emission lines, PAH (dust) features and extinction (we used a foreground extinction screen). As PAHFIT requires a single stitched spectrum, the SL spectrum was scaled to match the LL spectrum, with typical adjustments under 20% (though several galaxies were adjusted by ∼40% and NGC 7582 by greater than a factor of 6, indicating the presence of extended IR emission in this object). Here we utilize the EWs of the PAH features at 11.3µm and 17µm, which consist of the features within the wavelength ranges 11.2-11.4µm and 16.4-17.9µm, respectively (Tables 6 and 7).
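The F_MIR measurement itself reduces to averaging the flux density in a fixed rest-frame window; a sketch with a hypothetical featureless continuum standing in for a real IRS spectrum:

```python
import numpy as np

def mir_continuum_flux(wave_obs_um, flux_density, z, center=13.5, width=3.0):
    # Average flux density in a rest-frame window of the given width centered
    # at 13.5 um, as described in the text; observed wavelengths are shifted
    # to the rest frame before the window is applied.
    wave_rest = np.asarray(wave_obs_um) / (1.0 + z)
    sel = np.abs(wave_rest - center) <= width / 2.0
    return float(np.mean(np.asarray(flux_density)[sel]))

wave = np.linspace(5.0, 38.0, 1000)   # roughly the IRS low-resolution range
flux = 0.1 + 0.02 * wave              # hypothetical smooth continuum
mir_flux = mir_continuum_flux(wave, flux, z=0.02)
print(mir_flux)
```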
However, we note that the current version of PAHFIT has a bug that models the PAH features as Gaussian rather than Drude profiles, which could underestimate the PAH EW by a factor of 1.4. We report the EWs as output by PAHFIT, with the caveat that these may be lower limits.
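The size of that bias is easy to check numerically: for the same peak value and FWHM, a Drude profile carries roughly 1.5 times the area (and hence EW) of a Gaussian, consistent with the quoted factor. The feature parameters below are illustrative:

```python
import numpy as np

# Compare the integrated area of a Drude profile and a Gaussian with the same
# peak and FWHM (a hypothetical 11.3 um PAH feature); analytically the ratio
# is (pi/2) / sqrt(pi / (4 ln 2)) ~ 1.48.
lam_r, fwhm, peak = 11.3, 0.3, 1.0
gamma = fwhm / lam_r

lam = np.linspace(2.0, 200.0, 400000)
dlam = lam[1] - lam[0]
drude = peak * gamma**2 / ((lam / lam_r - lam_r / lam) ** 2 + gamma**2)
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
gauss = peak * np.exp(-0.5 * ((lam - lam_r) / sigma) ** 2)

ratio = (drude.sum() * dlam) / (gauss.sum() * dlam)
print(ratio)  # ~1.47: a Gaussian fit underestimates the PAH EW accordingly
```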
We compared our results with the 11.2µm feature from Wu et al. (2009) and the 11.3µm and 17µm features measured by Gallimore et al. (2010), where in the latter, we added their published 11.2µm and 11.3µm EW values. Wu et al. employed a spline fit between 10.8µm and 11.8µm to measure the EW. With this method, the results are strongly influenced by the choice of anchor points for fitting the pseudo-continuum, which can result in an underestimate of the EW compared to a method that utilizes spectral decomposition, such as PAHFIT (Smith et al. 2007b). Of the 28 sources we have in common with Wu et al. (2009), 12 of them had consistent 11.3µm EW values (within a factor of 2), 6 had lower values than we obtained (as would be expected from the disagreement between the spline and decomposition methods mentioned above) and 10 had higher values, where for 6 of these, PAHFIT had obtained an EW value of zero, yet the spline method yielded a measurement. Comparing our results with Gallimore et al. (2010) gave better agreement, though a discrepancy did still exist: of the 23 sources in common, we obtained consistent EW values (within a factor of 2) for 11 sources at 11.3µm and for 12 sources at 17µm. Though Gallimore et al. used PAHFIT to measure these features, they modified the code to include more fine-structure lines, fit silicate emission features, and use the cold dust model from Ossenkopf et al. (1992); they also generated their own software to build spectral data cubes, whereas we employed CUBISM. Such differences could account for the inconsistencies in our PAH EW measurements. Though the derived EWs are different from those reported by Wu et al. and Gallimore et al. for at least half the sources we have in common, our main conclusions based on PAH EWs agree qualitatively with theirs: PAH features are associated with other star formation activity indicators (Gallimore et al. 2010; Wu et al. 2009) and the EWs are inversely correlated with the strength of the ionization field (Wu et al. 2009, who use the IRAS colors to parameterize AGN strength).
In the discussion that follows, we divide the 12µm sample into two classes: those with weak PAH emission ("PAH-weak" sources) and those with strong PAH emission ("PAH-strong" sources, galaxies with EW > 1 µm in either the 11.3µm or 17µm band and with PAH EWs detected in both bands); the strong PAH emission is likely due to starburst activity in the host galaxy (see §5.2).
Diagnostics of Intrinsic AGN Luminosity
Our goal is to evaluate the relative efficacy of the five different proxies for the intrinsic AGN luminosity under consideration in this paper. We expect that these different proxies will not agree perfectly, due to the different physical mechanisms that produce and affect the emission features, as well as biases resulting from sample selection, starburst contamination, statistical errors and, in some cases, uncertain application of extinction corrections. To address this, we will undertake two kinds of comparisons.
First, we will use our two Sy2 samples to inter-compare these proxies in a pair-wise fashion and measure the amount of scatter in the corresponding flux ratios. Which proxies agree best with one another? Second, we will compare these pairs of flux ratios to the corresponding values for unobscured Type 1 AGN to test which proxies are more "isotropic," i.e. suffer the least AGN-viewing-angle dependence.
Figures 1-10 show the histograms of a subset of ratios for the five proxies. In each plot, the solid black line represents both samples combined, the red dashed line and green dotted-dashed line delineate the 12µm sample ("PAH-weak" and "PAH-strong" sources respectively) and the cyan filled histogram reflects the [OIII]-sample. Adjacent to these histograms are the luminosity vs. luminosity plots, showing the correlation between these indicators: the cyan asterisks represent the [OIII] sample, the red diamonds (green triangles) depict the "PAH-weak" ("PAH-strong") 12µm sources, and the dashed black line represents the best fit from multiple linear regression analysis (i.e. the REGRESS routine in IDL), where in the figure captions, ρ is the linear regression coefficient and P_uncorr is the probability that the two quantities are uncorrelated. Though the distance dependence in luminosity vs. luminosity plots enhances the correlation compared to flux vs. flux plots, we employed this method as the 12µm sample lies at systematically lower redshift, and its sources thus have higher flux values, than the [OIII] sample. One of our main goals is to examine the dispersion in the flux ratios, where this distance dependence cancels out. In §4.3, we test whether these ratios are affected by luminosity.
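Outside IDL, the same fit-plus-correlation step can be reproduced with standard tools; the sketch below uses scipy.stats.linregress on synthetic log-luminosities (the slope, scatter, and sample size are invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical log-luminosities for two proxies with intrinsic scatter.
# linregress plays the role of IDL's REGRESS here, returning the correlation
# coefficient (rho in the figure captions) and the probability that the two
# quantities are uncorrelated (P_uncorr).
rng = np.random.default_rng(2)
log_L_oiv = rng.uniform(40.0, 44.0, 50)
log_L_mir = 0.9 * log_L_oiv + 4.0 + rng.normal(0.0, 0.3, 50)

fit = stats.linregress(log_L_oiv, log_L_mir)
print(fit.slope, fit.rvalue, fit.pvalue)
```

As the text notes, the shared distance dependence inflates luminosity-luminosity correlations, which is why the dispersion analysis works with flux ratios instead.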
Where available, the values for Sy1s are included in these plots. The results are summarized in Tables 8 and 9, which list the mean and sigma of each ratio for the combined sample and the sub-samples separately. In the histograms and luminosity plots, the upper limits are plotted but not included in the analysis of the mean and sigma (except for the ratios involving the hard X-ray flux). Since only 12 of the 51 AGN were detected in hard X-rays, we have employed survival analysis to quantify the correlations among the proxies and to calculate the means of the ratios. This approach takes the upper limits into account (ASURV Rev 1.2, Isobe and Feigelson 1990; LaValley, Isobe and Feigelson 1992; for univariate problems using the Kaplan-Meier estimator, Feigelson and Nelson 1985; for bivariate problems, Isobe et al. 1986).
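The standard ASURV-style trick for upper limits is to flip the data, x → M − x, so that the limits become right-censored "survival times", apply the Kaplan-Meier estimator, and flip the resulting mean back. A simplified, hand-rolled illustration (not the ASURV code; it ignores tie-handling subtleties):

```python
import numpy as np

def km_mean_with_upper_limits(values, detected, flip_at=None):
    # Kaplan-Meier mean for data with upper limits (left-censoring): flip
    # x -> flip_at - x so the limits become right-censored, build the KM
    # survival curve, integrate it to get the mean, and flip back.
    values = np.asarray(values, dtype=float)
    detected = np.asarray(detected, dtype=bool)
    if flip_at is None:
        flip_at = values.max() + 1.0
    t = flip_at - values                     # flipped "survival times"
    order = np.argsort(t)
    t, d = t[order], detected[order]
    n = len(t)
    surv, mean, prev_t = 1.0, 0.0, 0.0
    for i in range(n):
        mean += surv * (t[i] - prev_t)       # integrate S(t) up to t_i
        prev_t = t[i]
        if d[i]:                             # an "event" = a detection
            surv *= 1.0 - 1.0 / (n - i)
    return flip_at - mean

# With no upper limits the KM mean reduces to the arithmetic mean; marking
# one point as an upper limit pulls the estimate downward, as it should.
x = [1.0, 2.0, 4.0, 7.0]
print(km_mean_with_upper_limits(x, [True, True, True, True]))   # → 3.5
print(km_mean_with_upper_limits(x, [True, True, False, True]))  # below 3.5
```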
Inter-Comparison of Proxies
The isotropic luminosity diagnostics that agree best, and therefore may be least subject to the uncertainties and errors discussed above, are F_[OIII],obs, F_[OIV] and F_MIR. A wider spread is present between the radio and hard X-ray fluxes compared with the optical and MIR values.
In all cases, a wider dispersion is present among the flux ratios in the 12µm sample as compared to the [OIII] sample. Below, we examine whether such scatter could be due to aperture effects, extinction corrections applied to the [OIII] flux, or starburst contamination of the MIR flux, or whether it represents a real difference between AGN selected on the basis of [OIII] flux versus MIR flux.
Since the 12µm Sy2s are typically more nearby than the [OIII]-selected galaxies, aperture effects can potentially play a significant role when comparing flux values, by either missing NLR flux or picking up too much host galaxy contamination. However, we find no evidence in our data for such an effect (see Appendix). Another possible explanation for this wider dispersion is that the optical data for the 12µm sample are drawn from the literature, which can introduce scatter into the optical diagnostics as such data are not taken and reduced in a uniform manner. The most striking example of this is the comparison of the [OIV] flux with the observed and extinction-corrected [OIII] flux: de-reddening the [OIII] flux widens the dispersion in the 12µm sample (see Figure 1). This can be an artifact of using literature Hα and Hβ values and, to a lesser extent, can be due to large amounts of dust in the host galaxies of the 12µm sample, as evidenced by the wide range of Balmer decrements. Goulding and Alexander (2009) note that galaxies that would not be identified optically as AGN (i.e. have a low "D" value, see §5.1) tend to have similar Balmer decrements yet higher F_[OIV]/F_[OIII],obs ratios than optically identified AGN. This result suggests that applying a reddening correction using the Balmer decrement still under-represents the intrinsic [OIII] flux. However, our 12µm sources with lower "D" values (<1.2) do not show systematically higher F_[OIV]/F_[OIII],obs ratios (see Figure 11), indicating that the "extra" extinction that Goulding and Alexander observe in their sources is not present in ours. As expected, the ratio F_[OIV]/F_[OIII],obs increases with Hα/Hβ, denoting that both quantities trace host galaxy extinction, though with wide scatter.
Comparison with the locus of points for the [OIII] sample shows that the Balmer decrement is systematically higher for similar 12µm F_[OIV]/F_[OIII],obs values, suggesting that the 12µm Balmer decrements are over-estimating the amount of dust present rather than under-estimating it. This result indicates that the literature Balmer decrements, which are not analyzed in a systematic and homogeneous way, are introducing uncertainties that bias the results and do not better recover the truly intrinsic AGN luminosity.
However, this cannot be the only cause of the greater scatter in the 12µm sample, since the ratio of MIR/[OIV] fluxes shows more scatter in the 12µm sample (Figure 3), though these data were analyzed homogeneously. Could the presence of Sy2s in the 12µm sample that have significant contributions from starburst activity create the wider dispersion in these diagnostics? To address this issue, we isolated the "PAH-strong" sources, which have greater amounts of star formation activity (discussed in detail below). The distributions of the "PAH-strong" and "PAH-weak" sub-samples are similar, suggesting that starburst processes are not responsible for the wider dispersion. We also focused on sources with a limited "D" value, which as noted above indicates the relative contribution of AGN to starburst activity. Repeating the calculation of means and standard deviations of the flux ratios for the 12µm sources with 1.2 ≤ D ≤ 1.7 did not result in a significant decrease (a factor of 2 or more) in the dispersion, with the exception of log(F_MIR/F_[OIII],obs) (σ = 0.42 dex), log(F_[OIII],obs/F_8.4GHz) (σ = 0.60 dex) and log(F_[OIII],corr/F_8.4GHz) (σ = 0.51 dex). For the first two ratios, this is largely due to the removal of the 3 outliers (F04385-0828, F08572+3915 and Arp 220) with systematically higher (lower) F_MIR/F_[OIII],obs (F_[OIII],obs/F_8.4GHz) values from the full sample. The dispersions for the other ratios were still systematically higher than in the [OIII]-sample. We conclude that there is a real difference between the AGN selected on the basis of [OIII] emission lines and those selected on the MIR continuum.
We also compared our results with two other samples of Seyfert 2 galaxies: one a complete sample down to a flux limit of (1-3) × 10^−11 erg cm^−2 s^−1 at 14-195 keV drawn from the 3 and 9 month Swift-BAT surveys (Meléndez et al. 2008 and references therein), and the other drawn from the revised Shapley-Ames catalog (Shapley & Ames 1932; Sandage & Tammann 1987), consisting of galaxies with B_T ≤ 13. Here we include those radio-quiet Seyfert types 1.8-2 that have measured [OIII] and [OIV] fluxes, giving 12 and 56 Sy2s, respectively. The log of the ratio of [OIV] to [OIII],obs for both samples is higher than for our combined sample, 0.60 ± 0.74 dex (Meléndez et al. 2008) and 0.57 ± 0.67 dex vs. 0.08 ± 0.41 dex, but the differences are not statistically significant. A wider dispersion is present in these comparison samples than in the [OIII]-selected sample (as was the case for the 12µm sample). This may indicate that selection based on [OIII] leads to better agreement between the [OIII] and [OIV] flux than selection based on other methods. This effect could be due to the [OIII]-bright sources having less extinction in the NLR than Sy2s selected in other ways. 2010) for Sy2s is 2.38 ± 0.45 dex. All three values are systematically higher than what we obtain for our samples of Sy2 galaxies by roughly an order of magnitude (see Table 9). This is depicted graphically in Figure 12. Employing survival analysis, we compared these ratios between the BAT-selected Sy2s and the [OIII] and 12µm samples separately and find that they differ significantly (i.e. p < 0.05 that they are drawn from the same parent population, corresponding to the 2σ level), with the caveat that, with only one [OIII] Sy2 detected by BAT, the comparison between BAT- and [OIII]-selected Sy2s may be less robust.
These differences suggest that samples selected in hard X-rays do not fairly sample the population of Type 2 AGN selected in the MIR, and possibly the optical; however, comparisons with BAT-selected Sy1s reveal mixed results (see §4.2).
Comparison with Sy1s
In order to determine whether the proxies we are considering are affected by the orientation of the AGN, and to evaluate the extent to which they may be considered truly "isotropic," we compared our results for Sy2s with the corresponding values for Sy1s, using data taken from the literature. The Sy1 MIR fluxes were calculated from the 14.7µm flux densities reported in Deo et al. (2009), who analyzed a heterogeneous sample of Sy1 and Sy2 galaxies available in the Spitzer archive, ranging in redshift over 0.002 < z < 0.82. The radio flux values were derived from the high-resolution 8.4-GHz flux density values of Thean et al. (2000), who presented an analysis of the extended 12µm sample. The hard X-ray data (14-195 keV) are drawn from Meléndez et al. (2008, sample selection described above), the 22-month Swift-BAT Catalog (Tueller et al. 2010) and Rigby et al. (2009, same parent sample as Diamond-Stanic et al. 2009, with X-ray data derived from the 22-month Swift-BAT Catalog, BeppoSAX (Dadina 2007) and Integral (Krivonos et al. 2007)). The comparison Sy1 [OIII] and [OIV] flux values are derived from Meléndez et al. (2008) and Tommasin et al. (2008, 2010), who present high-resolution Spitzer spectroscopy of the extended 12µm sample. As only Winter et al. (2010) quote Balmer decrements, we only have comparison Sy1 F_[OIII],corr values for the samples selected from the BAT catalog (i.e. F_14−195keV and F_[OIV] from Weaver et al. 2010). We utilize both the Kolmogorov-Smirnov ("K-S") test and Kuiper's test on the detected data points (excluding the three [OIV] and three radio upper limits in the 12µm data) to determine to what extent the flux ratios are inconsistent between the Sy1 and Sy2 galaxies: a lower Kuiper "D" value indicates that the two populations are drawn from the same parent population, suggesting that such fluxes are independent of viewing angle.
The Kuiper test is similar to the more often used K-S test but with the following modification: the "D" statistic of the K-S test represents the maximum deviation between the cumulative distribution functions (CDFs) of the two samples, whereas the "D" statistic in Kuiper's test is the sum of the maximum and minimum deviations between the CDFs of the two samples, so that this statistic is as sensitive to the tails as to the median of the distribution. The results of the K-S test and Kuiper test agree in that they do not lead us to reject the null hypothesis that the two samples are drawn from the same parent population, with the exception of the F_[OIV]/F_[OIII],obs ratio, where the tests lead to conflicting results. We note that two-sample tests work better for larger data sets, so the probabilities quoted in Table 10 should be interpreted as approximate. Baum et al. (2010) suggest that such [OIII] obscuration results from the AGN torus: using the 12µm sample, they find a correlation between the F_[OIV]/F_[OIII],obs ratio and the 10µm silicate (Sil) feature, which probes torus obscuration. In Sy1s, this silicate feature is in emission, whereas Sy2s exhibit Sil absorption, making the Sil strength a probe of system orientation. The ratio of F_[OIV] to F_[OIII],obs increases with Sil absorption (parameterized by negative values of the Sil strength), which could suggest that the torus is obscuring part of the [OIII] emission. Our results may confirm these previous studies, as we find that Sy2s tend to have lower observed [OIII] emission than Sy1s, which may be due to NLR extinction. We note, however, that such extinction affects the [OIII] line only by a factor of ∼2 on average between our Sy2 and comparison Sy1 samples, albeit with a wide dispersion, and this difference between the two populations may not be significant.
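The distinction between the two statistics can be made concrete with a short numerical sketch (our own illustration, not the paper's code; it evaluates right-continuous empirical CDFs at every point of the pooled sample):

```python
import numpy as np

def empirical_cdf_diff(a, b):
    """Difference of the two empirical CDFs, F_a - F_b, evaluated at
    every point of the pooled sample."""
    a, b = np.sort(a), np.sort(b)
    pooled = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pooled, side="right") / len(a)
    cdf_b = np.searchsorted(b, pooled, side="right") / len(b)
    return cdf_a - cdf_b

def ks_statistic(a, b):
    """K-S 'D': maximum absolute deviation between the two CDFs."""
    return np.abs(empirical_cdf_diff(a, b)).max()

def kuiper_statistic(a, b):
    """Kuiper 'D': sum of the maximum deviations above and below zero,
    making the statistic as sensitive to the tails as to the median."""
    d = empirical_cdf_diff(a, b)
    return max(d.max(), 0.0) + max((-d).max(), 0.0)
```

By construction the Kuiper statistic is never smaller than the K-S statistic, and it picks up tail differences (e.g. one sample shifted in both tails) that the K-S test can miss.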
The average log(F_MIR/F_[OIII],obs) ratio is consistent between Sy1s (2.56 ± 0.50 dex) and Sy2s (2.62 ± 0.62 dex), which could seemingly contradict the results cited above, where NLR extinction attenuates the [OIII] flux in Sy2s but not in Sy1s. The clumpy torus model of Nenkova et al. (2008) and the smooth torus model of Pier & Krolik (1992) predict a slight anisotropy in emission at 12µm depending on viewing angle: as the viewing angle increases from 0° (Sy1) to 90° (Sy2), the torus flux decreases by a factor of ∼2. The combined effects of depressed MIR emission in Sy2s and enhanced MIR emission in Sy1s, assuming [OIII] is more extincted in the former than the latter, would therefore result in F_MIR/F_[OIII],obs ratios that are more consistent than F_[OIV]/F_[OIII],obs, which is indeed what we observe. However, the average differences between F_MIR/F_[OIII],obs and F_[OIV]/F_[OIII],obs are within the scatter of these ratio values, and we are unable to rule out this scatter as the main driver of the disagreement, rather than invoking anisotropies in torus emission.
Interestingly, the relationship between L_MIR and L_[OIV] is nearly identical for Sy1 and Sy2 galaxies (Figure 3). Though the MIR flux is not corrected for starburst contamination (see Appendix), and the Sy2s in the 12µm sample are thought to harbor more star formation activity than Sy1s (e.g. Buchanan et al. 2006), we see no evidence that star formation activity contributes significantly to the MIR emission. As the F_MIR/F_[OIV] diagnostic ratio shows the smallest dispersion for the combined sample, a relationship similar to that of the Sy1 galaxies, and no evidence for luminosity bias (see §4.3), the MIR and [OIV] fluxes may be the most robust proxies for the intrinsic AGN luminosity in Type 2 AGN. However, the K-S test and Kuiper's test indicate a lower probability that the Sy1 and Sy2 samples agree than for the F_MIR/F_[OIII],obs ratio, though this could be driven by the lower N_eff value for F_MIR/F_[OIII],obs rather than a robust statistical agreement.
The different slopes between Sy1s and Sy2s in the luminosity plots of the radio data against other intrinsic AGN flux proxies (Figures 4, 5 and 6) suggest disagreements between these samples. However, the F_[OIV]/F_8.4GHz and F_MIR/F_8.4GHz flux ratios are consistent between Sy1 and Sy2 galaxies, indicating that the disparate slopes are perhaps influenced by scatter due to the wide range of radio loudness in AGN. Results of the K-S test and Kuiper's test (Table 10) also indicate that the differences in the radio flux ratios between Sy1s and Sy2s are not statistically significant. Diamond-Stanic et al. (2009) compared the 6 cm radio data between Sy1s and Sy2s and found that, for the Sy2s with a measured X-ray column density, these two samples show no statistically significant differences, though they find a higher probability that they are drawn from the same distribution (∼68-78%) than we do (∼55%). Meléndez et al. (2010) also found that the 8.4 GHz and [OIV] fluxes of Sy1 and Sy2 galaxies are not significantly different, though sources dominated by star formation (i.e. less than 50% of the [NeII] line attributable to AGN ionization) had statistically different F_[OIV]/F_8.4GHz values from AGN-dominated sources, indicating that radio emission may not accurately trace intrinsic AGN power. This latter result may agree qualitatively with our Figure 5, where the "PAH-strong" sources lie at or below the best-fit line between L_[OIV] and L_8.4GHz.
The hard X-ray proxy performs much more poorly (Figures 7-10), based on both the wider dispersion in the diagnostic flux ratios and the larger disagreement between the Sy1 and Sy2 flux ratios. The mean hard X-ray emission (normalized by other isotropic indicators) in Sy2s tends to be about an order of magnitude weaker than in Sy1s, though this is driven largely by the 12µm sample, as only one source was detected by BAT in the [OIII] sample. This disagreement agrees with the results of Rigby et al. (2008) and Weaver et al. (2010), where the X-ray flux was normalized by the [OIV] emission. Indeed, using survival analysis, we find that F_14−195keV/F_[OIV] disagrees significantly between BAT-selected Sy1s and both the [OIII] and 12µm sub-samples. Such a large disagreement is not found between the Sy1s and Sy2s in the Winter et al. (2010) sample (see Table 9), which is driven by the lower [OIII] flux observed in their Sy2s as compared to their Sy1s. The hard X-ray to [OIII] flux ratios, both observed and reddening corrected, do not differ significantly between the BAT-selected Sy1s and the [OIII]-selected Sy2s, but do for the 12µm sample. According to the Logrank and Peto & Prentice Generalized Wilcoxon tests, the hard X-ray flux normalized by the MIR flux differs significantly between both Sy2 sub-samples and the BAT-selected Sy1s. Consistent with the results from §4.1, hard X-ray selected AGN do not represent the population of those selected in the MIR, and there may be some evidence that they do not fully sample the optically selected sources. Compton scattering may be responsible for weakening the observed hard X-ray emission in Sy2s, as suggested by Weaver et al. (2010), which indicates that the 14-195 keV emission is not truly isotropic.
Luminosity Dependence
As we have seen above, there is significant scatter in the flux ratios of the different proxies for AGN intrinsic luminosity. Here we examine the possibility that some of this scatter is caused by systematic differences that correlate with the accretion rate of the black hole (in units of the Eddington limit).
To make this test, for any pair of luminosity proxies we parameterized L_AGN/L_Edd by the square root of the product of the luminosities of the two proxies, divided by the mass of the central black hole (M_BH, listed in Tables 1 and 2). Linear regression fits were performed, with the correlation coefficients and probabilities of no correlation listed in Table 11.
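In code, this parameterization is a one-liner. This is a sketch under our own conventions: luminosities in erg s⁻¹, black-hole masses in solar masses, and the standard Eddington luminosity of ≈1.26×10³⁸ erg s⁻¹ per solar mass; no bolometric correction is applied, so the quantity is only a relative proxy, as in the text:

```python
import math

L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass, erg/s

def eddington_proxy(lum_a, lum_b, m_bh_msun):
    """Proxy for L_AGN/L_Edd: the geometric mean of two luminosity
    proxies, divided by the Eddington luminosity of the black hole."""
    l_agn = math.sqrt(lum_a * lum_b)
    return l_agn / (L_EDD_PER_MSUN * m_bh_msun)
```

For example, two proxies of 10⁴² and 10⁴⁴ erg s⁻¹ around a 10⁷ M_⊙ black hole give a proxy Eddington ratio of ∼0.008.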
We find three statistically significant anti-correlations (Figure 13). The anti-correlations involving F_[OIII],obs are largely driven by those galaxies with a high Balmer decrement. When we exclude the 6 sources with Hα/Hβ ≥ 9, which may be the systems with the most NLR extinction, these anti-correlations are no longer statistically significant. This may indicate that the bolometric correction to the observed [OIII] luminosity has a weak dependence on the Eddington ratio. However, the observed [OIII] luminosity, which partly parameterizes the Eddington ratio, does not accurately trace the intrinsic AGN flux for these more dust-obscured sources. If the Eddington ratio is defined as just L_[OIV]/M_BH or L_MIR/M_BH in these relationships, the anti-correlations are no longer significant. Hence, the weak trends in Figure 13 a) and b) are likely driven more by the NLR extinction bias on the [OIII] flux than by the accretion rate of the black hole. This latter mechanism could also be affecting the F_MIR/F_8.4GHz ratio, albeit with wide scatter.
Comparison of Different Diagnostics
Given the strong connection between Type 2 AGN and star-formation (e.g. Kauffmann et al. 2003) we expect that the signature of both processes will be present in optical and MIR spectra of AGN. By analogy to the previous section (where we compared different proxies for the intrinsic AGN luminosity) we now undertake a comparison of different diagnostics of the relative energetic significance of a starburst vs. the AGN.
One such diagnostic involves the use of the MIR polycyclic aromatic hydrocarbon (PAH) features. These have been shown to be correlated with star formation activity (e.g. Smith et al. 2007) and possibly anti-correlated with the presence of an AGN (O'Dowd et al. 2009; Voit 1992). More specifically, we used the equivalent width (EW) of the PAH features to assess the relative amount of starburst activity in the host galaxy (e.g. Genzel et al. 1998). Another empirical diagnostic of the relative contribution of the starburst in the MIR can be parametrized by the MIR spectral index, α_20−30µm. Larger values of α_20−30µm indicate the presence of cold dust from starburst activity (Deo et al. 2009 and references therein). The ratio of the [OIV] to [NeII] 12.81µm MIR emission lines probes the hardness of the ionizing spectrum and hence the relative importance of the AGN and starburst. A larger ratio (∼1) implies the dominance of AGN activity whereas a lower ratio (∼0.02) indicates pure starburst activity (Genzel et al. 1998). The analogous diagnostic from optical spectra is the distance a galaxy spectrum lies from the locus of star-forming galaxies in the Baldwin, Phillips & Terlevich (1981; BPT) diagram, D = √[(log [NII]/Hα + 0.45)² + (log [OIII]/Hβ + 0.5)²] (Kauffmann et al. 2003). A larger "D" parameter indicates pure AGN activity while a smaller value implies a mixture of starburst and AGN processes in the host galaxy. Figures 14 and 15 illustrate the relationship between these AGN and star formation activity indicators for the Sy2s in our combined sample. The color coding is the same as in Figures 1-10. According to linear regression analysis, α_20−30µm and the PAH EWs are correlated, and [OIV]/[NeII] and the PAH EWs are anti-correlated, at greater than the 3.5σ level: PAH EW 11.3µm vs. α_20−30µm has ρ=0.609, P_uncorr=1.47×10^−5; PAH EW 17µm vs. α_20−30µm has ρ=0.600, P_uncorr=5.26×10^−6; PAH EW 11.3µm vs.
[OIV]/[NeII] has ρ=−0.515, P_uncorr=4.89×10^−4. We also note that the Sy2s with strong PAH emission mostly lie at systematically higher PAH EW values than the relation found between α_20−30µm and the PAH EWs. The majority of the Sy2s have D values between ∼1.2-2.0, with five of the strong-PAH sources at systematically lower D values, ∼0.5-1.1 (see Figure 16). The Sy2s with lower D values have higher PAH EW values and lower F_[OIV]/F_[NeII] ratios, though they exhibit a weaker trend with the IR spectral index.
These results indicate that the various MIR and optical indicators of starburst activity agree both qualitatively and quantitatively.
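The optical "D" diagnostic defined above reduces to a one-line function. This is our own sketch of the Kauffmann et al. (2003) construction; the inputs are assumed to be the logarithmic line ratios log10([NII]/Hα) and log10([OIII]/Hβ):

```python
import math

def bpt_distance(log_nii_ha, log_oiii_hb):
    """Distance from the locus of star-forming galaxies on the BPT diagram:
    D = sqrt((log [NII]/Ha + 0.45)^2 + (log [OIII]/Hb + 0.5)^2)."""
    return math.hypot(log_nii_ha + 0.45, log_oiii_hb + 0.5)
```

A point at the star-forming locus reference (−0.45, −0.5) gives D = 0, while Seyfert-like line ratios such as (0.0, 0.5) give D ≈ 1.1, consistent with the ∼1.2-2.0 range quoted above for most of the Sy2s.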
The Spatial Scale of the MIR Emission and the Role of the Host Galaxy
The results above refer to measurements of the central region enclosed by the IRS aperture. However, the 12µm sample was drawn from the IRAS Point Source Catalog, where the aperture size (0.75' × 4.5' at 12µm) is much larger and will encompass contributions from the host galaxy. To quantify the extendedness of the MIR emission in the 12µm sample, we calculate the ratio of the IRAS flux (at 12µm, Spinoglio & Malkan 1989) to the IRS flux. A ratio of ∼1 indicates that the MIR emission is dominated by the galactic center, whereas higher ratios imply a greater amount of extended emission. In Figure 17, we plot this ratio against the PAH EW at 11.3µm and F_[OIV]/F_[NeII]. As expected, the relative amount of extended MIR emission decreases as the relative energetic importance of the AGN increases.
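As a sketch, the "extendedness" measure is simply a flux ratio between the two apertures (the function and variable names here are ours, for illustration only):

```python
def mir_extendedness(f_iras_12um, f_irs):
    """Ratio of the large-aperture IRAS 12µm flux to the small-aperture
    IRS flux. Values near 1 mean the MIR emission is dominated by the
    nucleus; larger values imply more extended (host-galaxy) emission."""
    return f_iras_12um / f_irs
```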
Comparison of the ratio by which the SL module was rescaled to match the LL module (see § 4.2) reveals the presence of extended emission on smaller spatial scales. Inspection of this extendedness factor does not show any significant differences between the "PAH-weak" and "PAH-strong" sources (with the exception of NGC 7582).
Are [OIII] and [OIV] Biased by Star Formation?
In this section, we investigate whether the relative fraction of the AGN bolometric luminosity that emerges in [OIII] and [OIV] line emission is preferentially higher in galaxies with more star formation. This might be expected if the gas clouds in the NLR that are photoionized by the AGN and produce these lines are directly related to the gas clouds responsible for star-formation. If true, this would imply that AGN selected using [OIII] or [OIV] would be biased towards galaxies with higher star formation rates.
To test this, we have plotted the ratios of both F_[OIII],obs and F_[OIV] to F_MIR versus the star formation indicators analyzed above (PAH EWs, IR spectral index and the "D" parameter). We find no strong trends between the star formation indicators and the [OIV] and [OIII] emission, as illustrated in Figures 18 and 19.
We conclude that there is no convincing evidence that host galaxies with a large star formation rate have preferentially higher relative luminosities of [OIII] and [OIV] at the luminosities represented in this sample, where the bolometric luminosity (L_bol) ranges from L_bol ≈ 10^9 to 8×10^11 L_⊙, corresponding to ∼3×10^−5 to 0.5 of the Eddington luminosity (L_Edd). Thus, these proxies of intrinsic AGN power are not biased by star formation activity at these Eddington ratios.
Conclusions
We have taken an empirical approach in analyzing the agreement among the various indicators of isotropic AGN luminosity for two complete and homogeneously selected samples of Sy2s, one selected based on observed [OIII] flux and the other on MIR flux. The diagnostic ratios with the smallest spread are likely those where such biases from sample selection, starburst contamination, statistical errors, and scatter due to the various physical mechanisms that produce these emission features, are minimized. Such indicators, as well as those that agree the most with Sy1 relations, may be the most robust tracers of AGN activity. Our results on these indicators are summarized below.
-Sample Selection The optically selected sample, picked on the basis of high [OIII] flux, shows tighter correlations among these diagnostics than the MIR-selected sample. We investigated whether the inclusion of actively star-forming galaxies in the 12µm sample introduced the spread in these ratios by dividing the sample into galaxies that have large amounts of starburst activity ("PAH-strong") and those that do not ("PAH-weak"). The distributions of the diagnostic ratios for the two sub-samples are similar. Isolating the 12µm sources within a limited D range (1.2 ≤ D ≤ 1.7) also results in large scatter that is systematically higher than that observed in the [OIII] sample for all but three of the flux ratios. A similarly wide spread in F_[OIV]/F_[OIII],obs is present in other samples (i.e. Meléndez et al. 2008 and Diamond-Stanic et al. 2009), indicating that sample selection based on [OIII] is primarily responsible for the tighter correlations we observe. This may be due to less extinction in the NLR, which would be expected in sources that have high observed [OIII] flux.
-Extinction Correction Applying an extinction correction to the [OIII] flux tightens the correlations with the other luminosity indicators for the [OIII]-selected sample, yet broadens the dispersion for the 12µm sample. It is not clear whether this difference is primarily due to the different sources of the emission-line data (homogeneous SDSS data for the [OIII] sample and heterogeneous data for the 12µm sample), or whether it simply points out the limitations of extinction corrections in very dusty AGN. Comparison of the optical vs. MIR properties of the dustiest AGN in the SDSS suggests that the former effect is important (Wild et al. 2010).
-Agreement Among Sy2s The observed [OIII], [OIV] and MIR luminosities agree the best in the combined Sy2 sample. The widest spread among the various proxies is seen in the radio emission. The X-ray data are dominated by upper limits, but also show a significantly larger dispersion than the optical and IR isotropic flux indicators.
-Comparison with Sy1s The mean ratio of the observed [OIII] flux to the [OIV] flux is lower in Sy2s than in Sy1s by a factor of 1.5-2, while the mean ratio of the observed [OIII] flux to the MIR flux is consistent between Sy2s and Sy1s. The former result, which represents a statistically significant difference between Sy1s and Sy2s according to the K-S test but not Kuiper's test, agrees with previous findings (e.g. Haas et al. 2005; Meléndez et al. 2008) and has been interpreted as extinction affecting [OIII] in the NLR, or even torus obscuration attenuating the [OIII] emission (Baum et al. 2010). However, the latter result cannot be simply explained by a larger amount of dust extinction of [OIII] in the Sy2s; it could instead be due to a slight anisotropy in the MIR emission, as predicted by Pier & Krolik (1992) and Nenkova et al. (2008), whereby Sy1s could have up to a factor of two higher MIR flux than Sy2s. The wide scatter in these ratios may also be responsible for the discrepancy between the F_[OIV]/F_[OIII],obs and F_MIR/F_[OIII],obs values, and may be the main driver of the mild disagreement rather than torus emission anisotropy. The F_[OIV]/F_8.4GHz and F_MIR/F_8.4GHz mean values are consistent between Sy1 and Sy2 galaxies (in agreement with Diamond-Stanic et al. 2009 and Meléndez et al. 2010 for the [OIV] and radio comparison), though the slopes of the luminosity plots show disagreements and there is wide scatter, which is expected given the wide range of radio loudness observed in AGN.
-Hard X-ray Selected Samples We find that the hard X-ray flux (relative to the [OIII], [OIV], and MIR fluxes) is suppressed by about an order of magnitude in MIR-selected Sy2s compared to both Sy1s (consistent with Rigby et al. 2008) and hard X-ray selected Type 2 AGN (Winter et al. 2010 and Weaver et al. 2010). The comparison with the [OIII] sample is mixed, with statistically significant differences between the Sy2s and Sy1s when the X-ray flux is normalized by the [OIV] and MIR fluxes, but not when normalized by the [OIII] flux. However, hard X-ray selected Sy2s differ significantly from [OIII]-selected Sy2s (though with only one [OIII] Sy2 detected by BAT, this analysis may be less robust than the 12µm comparison). These results indicate that hard X-ray emission (E > 10 keV) is anisotropic and that hard X-ray selected samples are biased against the more heavily obscured Type 2 AGN that are present in MIR and possibly [OIII] selected samples. As Weaver et al. (2010) note for sources detected in hard X-rays, Compton scattering could be responsible for the hard X-ray attenuation observed in Type 2 AGN as compared to Type 1. In more obscured sources, Compton scattering may push them below the flux sensitivity of BAT.
F_MIR and F_[OIV] agree the best, both in comparison with the other indicators in the combined Sy2 sample (having the least scatter) and in having a nearly identical flux ratio in Sy2s and Sy1s, as well as not suffering from a luminosity bias. Among the indicators we have considered, they are the best proxies for truly isotropic AGN emission.
We tested the level of agreement of various optical and infrared indicators of star formation activity compared with proxies of AGN activity. Similar to previous works, we find statistically significant relations among the various indicators of the relative energetic significance of star formation and an AGN. These include the MIR spectral slope (parametrized by α_20−30µm), the PAH EWs at 11.3µm and 17µm, the ratio of [OIV]/[NeII] fluxes, and the location of the galaxy in the commonly used diagnostic diagram based on optical emission lines. We note that the last two diagnostics are a measure of the relative contribution of AGN vs. starburst activity to the incident ionizing radiation. As the incident radiation field becomes more dominated by AGN activity, the PAHs become weaker relative to the MIR continuum. In part this is expected because of the "dilution" of the MIR continuum by AGN-heated dust, but the AGN may also be directly affecting the PAH emission (e.g. O'Dowd et al. 2009).
We also found that the Sy2s that were clearly starburst/AGN composites based on the above indicators were systematically those cases in which large-scale MIR emission from the host galaxy greatly exceeded that from the AGN. We quantified this by comparing the ratio of the 12µm flux from the large IRAS aperture with the 15.5µm flux from the small IRS aperture. A smaller aperture is therefore necessary to isolate AGN emission in systems with high rates of star formation on large scales and/or low AGN luminosities.
The ratios of the [OIII]/MIR and [OIV]/MIR fluxes show little if any evidence for a correlation with any of the above measures of the relative amount of star formation. This lack of a relationship suggests that the [OIII] and [OIV] lines are not biased to be a preferentially higher fraction of the AGN bolometric luminosity in host galaxies with more star formation activity (more dense gas).
This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. The authors thank the anonymous referee whose insightful comments improved the quality of this manuscript.
[Displaced figure and table notes: in all cases, the BAT-selected Sy2s have systematically higher hard X-ray emission when normalized by other intrinsic AGN proxies as compared to the optical and IR selected Sy2s, with the mean values differing by almost an order of magnitude or more; the color coding is the same as in Figure 13. Footnotes: low-resolution spectral mapping observations; galaxies that had dedicated off-source high-resolution observations were background subtracted (see Section 3.2). Table 9 fragment for F_14−195keV/F_[OIV]: 2.73, 0.57 for Sy2s (Winter et al. 2010, 24) and 2.02, 0.54 for Sy1s (Winter et al. 2010, 29). N_eff = n_1 × n_2/(n_1 + n_2), where n_1 and n_2 are the number of data points in each sample.]
A. Aperture Bias
As the optical data for the 12µm sample were culled from the literature, we examined the data to see if an aperture bias was evident. The aperture sizes used for the optical data (θ_[OIII]) ranged from 3 to 14" for the 12µm sources and were 3" for all the SDSS Sy2s. For several of our sources, we found the size of the NLR (θ_NLR) from Schmitt et al. (2003a) and estimated the NLR size for the remainder using log R_maj = (0.31 ± 0.04) × log L_[OIII] − (10.08 ± 1.80) (Schmitt et al. 2003b). As shown in Figure A.1 (same color coding as Figure 1 in the main text), the aperture was large enough to encompass the full NLR for all sources. Hence, we are not "missing" any of the NLR optical flux.
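The NLR-size estimate used here reduces to a one-line scaling relation. This sketch uses the central values only (the quoted ±0.04 and ±1.80 uncertainties are ignored) and assumes R_maj in pc with L_[OIII] in erg s⁻¹, following the Schmitt et al. (2003b) form quoted above:

```python
def nlr_major_axis_pc(log_l_oiii):
    """Estimated NLR major-axis radius from the scaling
    log R_maj = 0.31 * log L_[OIII] - 10.08 (R in pc, L in erg/s)."""
    return 10.0 ** (0.31 * log_l_oiii - 10.08)
```

For example, a source with log L_[OIII] = 41 has an estimated NLR radius of a few hundred pc, well within the smallest (3") optical apertures for nearby Seyferts.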
But are we observing too much [OIII] flux, perhaps from starburst contamination in the host galaxy, which would affect [OIII] emission more than [OIV]? If this were the case, we would expect the F_[OIV]/F_[OIII],obs ratio to decrease as the projected size of the aperture increases, and the "PAH-strong" sources to lie at systematically larger aperture sizes. However, neither of these trends is apparent (Figure A.2), reaffirming the results in the main text, where we find no evidence for starburst bias on the [OIII] emission. We also note that the opposite effect is absent, namely an increase of F_[OIV]/F_[OIII],obs with aperture size. This indicates that, though the [OIV] flux is collected from the Long-High module, which has a larger aperture than the [OIII] data, the [OIV] flux is not produced in regions outside of the NLR. Hence the dispersion present in the flux ratios is not due to sampling the galaxies at different scales between the separate IR and optical observations.
B. Starburst Contribution to the MIR
As discussed in the main text, in many cases the Spitzer IRS data show evidence for the presence of both an AGN and active star formation. This implies that the MIR continuum will include emission from warm dust heated by both the AGN and massive stars. Since we are interested in using the MIR luminosity as an indicator of the intrinsic luminosity of the AGN, we would need a way to subtract off the contribution from the dust heated by stars.
We attempted to make this correction by using a simple dilution model based on the EWs of the PAH features bracketing the continuum around 15.5µm (the PAH complexes at 11.3µm and 17µm; cf. Genzel et al. 1998, Wu et al. 2009). This approach assumes that for starbursts there is a simple linear relationship between the PAH luminosity and that of the MIR continuum, and that the only effect of the AGN is to produce an additional source of MIR continuum emission, while not affecting the PAH luminosity.
As we have shown, the PAH EWs are indeed correlated with other indicators of star formation activity. We therefore compared the average PAH EWs for starburst galaxies (⟨EW_11.3µm,SB⟩ = 2.87 and ⟨EW_17µm,SB⟩ = 1.74; O'Dowd et al. 2009; Treyer, private communication) to the measured EWs for our Sy2s and used this to derive the fraction of the MIR emission that is attributable to AGN heating of the torus (i.e. f_AGN = 1 − EW_Sy2/⟨EW_SB⟩). In most cases, this correction was negligible. However, in the composite (strong-PAH) AGN, this procedure seemed to subtract too much MIR flux, leading to poorer agreement with the other proxies and strong systematic trends with AGN luminosity (e.g. Figure A.3 as compared with Figure 3).
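The dilution correction can be written compactly. This is a sketch of the procedure as described; the clipping to [0, 1] is our own safeguard for composite sources whose EWs exceed the starburst mean, and the default EW value is the 11.3µm starburst mean quoted above:

```python
def agn_mir_fraction(ew_sy2, ew_sb_mean):
    """Dilution estimate of the AGN fraction of the MIR continuum,
    f_AGN = 1 - EW_Sy2 / <EW_SB>, clipped to the physical range [0, 1]."""
    f = 1.0 - ew_sy2 / ew_sb_mean
    return min(max(f, 0.0), 1.0)

def corrected_mir_flux(f_mir, ew_sy2, ew_sb_mean=2.87):
    """MIR flux attributed to the AGN under the simple dilution model."""
    return f_mir * agn_mir_fraction(ew_sy2, ew_sb_mean)
```

A "PAH-weak" source (EW ≪ ⟨EW_SB⟩) keeps essentially all of its MIR flux, while a strongly composite source with EW near the starburst mean has nearly all of its MIR flux subtracted, illustrating why the correction over-subtracts for the "PAH-strong" sources.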
Such results indicate that accurately isolating the amount of MIR flux due to the AGN in strongly composite systems requires more detailed modeling that takes account of the possible influence of the AGN on the PAHs themselves. In the tight linear correlation we see between the uncorrected MIR and [OIV] luminosities (Figure 3) there is no evidence that the strong-PAH sources have systematically higher MIR luminosities. Taken at face value, this would imply that in these sources the starburst does not dominate the MIR continuum. This would in turn imply that the EW of these long-wavelength PAHs with respect to the pure starburst MIR continuum is unusually high. This cannot be understood as resulting from the destruction of PAHs by the AGN (e.g. Treyer et al. 2010), as the effect is in the opposite direction. [Figure A.3 caption: corrected MIR flux vs. log(F_[OIV]). Compared with the raw MIR flux, this corrected flux leads to poorer agreement with the [OIV] flux and seems to over-subtract the AGN contribution to the MIR flux for the "PAH-strong" sources.]
Genome-Wide Survey of Gut Fungi (Harpellales) Reveals the First Horizontally Transferred Ubiquitin Gene from a Mosquito Host
Harpellales, an early-diverging fungal lineage, is associated with the digestive tracts of aquatic arthropod hosts. Concurrent with the production and annotation of the first four Harpellales genomes, we discovered that Zancudomyces culisetae, one of the most widely distributed Harpellales species, encodes an insect-like polyubiquitin chain. Ubiquitin and ubiquitin-like proteins are universally involved in protein degradation and regulation of immune response in eukaryotic organisms. Phylogenetic analyses inferred that this polyubiquitin variant has a mosquito origin. In addition, its amino acid composition, animal-like secondary structure, as well as the fungal nature of flanking genes all further support this as a horizontal gene transfer event. The single-copy polyubiquitin gene from Z. culisetae has lower GC ratio compared with homologs of insect taxa, which implies homogenization of the gene since its putatively ancient transfer. The acquired polyubiquitin gene may have served to improve important functions within Z. culisetae, by perhaps exploiting the insect hosts’ ubiquitin-proteasome systems in the gut environment. Preliminary comparisons among the four Harpellales genomes highlight the reduced genome size of Z. culisetae, which corroborates its distinguishable symbiotic lifestyle. This is the first record of a horizontally transferred ubiquitin gene from disease-bearing insects to the gut-dwelling fungal endobiont and should invite further exploration in an evolutionary context.
Introduction
Insects are hosts to various symbionts, including bacteria, fungi, and viruses (Moran 2007; Hedges et al. 2008), and these symbiotic interactions have spurred the interest of both ecologists and evolutionary biologists. As they evolve, reciprocal responses between hosts and symbionts may have reshaped both associates, possibly invoking morphological changes that accompany genetic signatures (Mandel et al. 2009; Moran and Jarvik 2010). With obligate symbiotic associations, there may be irreversible gene gain or loss events that correspond to functional changes as both insects and endobionts adapt over evolutionary timescales (Moran 2007; Mandel et al. 2009; Moran and Jarvik 2010; Selman et al. 2011).
Harpellales is an order of early-diverging fungi (James et al. 2006; White 2006; Hibbett et al. 2007), which commonly attach to the chitinous hindgut linings of immature stages of aquatic insects (lower Diptera, including black flies, midges, and mosquitoes, as well as mayflies and stoneflies), and are thus known as "gut fungi" (White and Lichtwardt 2004; Strongman et al. 2010; Valle and Cafaro 2010; Tretter et al. 2014; Wang et al. 2014). Members of the Harpellales are usually considered commensals, although at least one species has been reported to be fatal to its mosquito host (Sweeney 1981). Zancudomyces was recently established to accommodate Z. culisetae based on both molecular phylogenetic and morphological analyses. Formerly recognized as Smittium culisetae, the species has been shown to benefit the in vivo development of infested mosquito larvae under specific conditions (Horn and Lichtwardt 1981). In contrast, Z. culisetae can also lead to the death of mosquito larvae, in situations where the host's hindgut becomes overgrown (Williams 2001).
Genome-wide data are providing the opportunity to critically assess symbiotic ontogenetic stages, from surface adhesion, host invasion, and molecular interactions to genomic modifications (Moya et al. 2008). This also includes possibilities of investigating horizontal gene transfer (HGT) events. For example, whole-genome sequencing enabled the identification of several independent purine nucleotide phosphorylase HGT events between Encephalitozoon (Microsporidia) and arthropod donors (Selman et al. 2011). As an example of a fungus-donated gene, the carotenoid coding gene within the pea aphid genome has been shown to be laterally transferred from fungi (Moran and Jarvik 2010).
Ubiquitin is universally present in eukaryotes, where it is widely known as a posttranslational tag for the hydrolytic destruction of proteins (Goldstein et al. 1975; Welchman et al. 2005). Ubiquitin and ubiquitin-like proteins have also been found to play crucial roles in DNA transcription, autophagy, and inflammatory responses during pathogen defense by the host (Jiang and Chen 2012; Severo et al. 2013). For example, ubiquitination is involved in regulation of immune responses in mosquitoes, which are notorious vectors for spreading diseases like dengue, malaria, Zika fever, and West Nile encephalitis. Simultaneously, some pathogens seem to have countered with similar ubiquitin-dependent processes to facilitate entrance into the host (Collins and Brown 2010; Haldar et al. 2015). Ubiquitin may function in separate ways depending on how monoubiquitins are linked. Generally, K48-linked polyubiquitin chains target proteins destined for proteolysis, whereas K63-linked chains are involved in inflammatory response, protein trafficking, and ribosomal protein synthesis (Zhao and Ulrich 2010). Within gut linings of arthropods, the ubiquitin-proteasome system (UPS) is believed to function in food particle degradation and nutrient absorption, but may also simultaneously affect their immune responses. As gut-dwelling symbionts, Harpellales occupy an interface that presumably exposes them intimately and intensively to the hosts' ubiquitination machinery.
In light of the aforementioned cases of HGT between insects and fungal symbionts, we investigated the existence of such HGT elements within Harpellales, which present an excellent study system due to their inflexible association with insect hosts. Four Harpellales genomes (Z. culisetae, S. mucronatum, and two strains of S. culicis) were sequenced and annotated for the present study. Through phylogenetic reconstruction and a series of comparative analyses of amino acid compositions, predicted secondary structures, and synteny across eukaryotic clades, we authenticate the first case of a polyubiquitin gene transfer event from mosquito hosts to the gut fungus, Z. culisetae.
Harpellales Genome Features
Genome assembly statistics and annotation features are presented in table 1. The Core Eukaryotic Genes Mapping Approach (CEGMA) recovered above 90% of core eukaryotic genes in all four assemblies. The genome size of Z. culisetae (28.7 Mb) is much smaller than those of Smittium (71.1-102.4 Mb), and the genome-wide GC ratios of Smittium representatives were below 30%, whereas that of Z. culisetae was 35.5%. The ab initio gene predictions discovered approximately 12,000 genes in each strain of S. culicis, 8,410 genes in S. mucronatum, and 8,252 genes in Z. culisetae. On average, the Smittium genomes had more than two exons per gene, whereas Z. culisetae had fewer than two. Gene ontology analyses also indicated that Z. culisetae possesses several genes with unique (not found in Smittium) annotations for biological processes (rhythmic process) and cellular components (cell junction, symplast, and synapse) (fig. 1), suggesting that the genomic compositions may differ significantly between Z. culisetae and Smittium.
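As a hedged illustration of the genome-wide GC ratio compared above (this is not the authors' code; the function name and toy scaffolds are hypothetical), the statistic can be computed directly from a FASTA assembly:

```python
# Illustrative sketch: genome-wide GC ratio from a FASTA assembly string.
def gc_ratio(fasta_text):
    """Return the GC proportion over all sequence lines of a FASTA string."""
    seq = "".join(line.strip().upper()
                  for line in fasta_text.splitlines()
                  if line and not line.startswith(">"))
    acgt = sum(seq.count(base) for base in "ACGT")  # ignore Ns and gaps
    if acgt == 0:
        return 0.0
    return (seq.count("G") + seq.count("C")) / acgt

assembly = ">scaffold_1\nATGCGC\n>scaffold_2\nATAT\n"
print(gc_ratio(assembly))  # 0.4
```

Run over each of the four assemblies, this kind of calculation yields the AT-enrichment contrast reported in table 1.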
HGT Detection and Syntenic Analyses
A set of similarity searches (using BLASTp) (Altschul et al. 1990), as well as a customized Python script ("HGTfilter.py", available from GitHub), was employed to recover putative HGT elements from the four Harpellales genomes. The analyses recovered 59, 60, 27, and 33 potential HGT events from S. culicis strain GSMNP, S. culicis strain ID-206-W2, S. mucronatum, and Z. culisetae, respectively (table 1). Among the total pool of 179 candidates, only five, all from Z. culisetae, could be adequately mapped back to the host genomes. One of these (supplementary table S1, Supplementary Material online), a triple-ubiquitin gene, demonstrated a conserved match to the host genome. Our results suggest that this gene occurs as a single copy in the Z. culisetae genome (although monoubiquitins occur on other scaffolds in the genome), and that the original fungal copy may have been lost at some point during evolution and interaction with the insect hosts.
Multiple polyubiquitin candidates (with E-values < 1E−100) with varying repeat motifs were discovered in the examined eukaryotic genomes (supplementary table S2, Supplementary Material online). In order to infer homology, corresponding flanking genes were recovered for each included taxon by scanning the genomes and annotations manually. The rationale for this strategy is twofold. First, it will aid in revealing the HGT element within the Z. culisetae genome, where the insert should be flanked by genes of fungal origin. Second, it would minimally allow for inference of homology on a clade-by-clade basis if the upstream and downstream genes were conserved throughout the clades. We found high levels of conservation of adjacent genes within clades (fig. 2c and e; supplementary table S2, Supplementary Material online), but rather high disparity among clades. The number of repeats in the polyubiquitin genes also varies across the diversity, with animals having more repeats than fungi in general (fig. 2d). The single-copy polyubiquitin gene of Z. culisetae is flanked by two protein-coding genes of fungal origin, which contain conserved domains. The flanking upstream gene codes for "laminin globular (LamG)"; putative homologs of this gene were found to be conserved in the Smittium genomes but were located in different parts of the genome (i.e., gene order was not conserved). However, LamG was found to be lacking from all animal taxa (top BLASTp hit against a closely related zygomycotan fungus, Mortierella verticillata, and no hits for animal taxa). The flanking downstream gene contains "Serine/Threonine protein kinases, catalytic domain (S_TKc)" and is again conserved regarding amino acid structure but not regarding its genomic position in both Smittium and animal species (top BLASTp hit against Rhizophagus irregularis, a closely related glomeromycotan fungal species) (fig. 3); phylogenetic analysis of both fungal- and animal-derived S_TKc confirmed the fungal origin of the Z. culisetae-derived gene (supplementary fig. S1, Supplementary Material online). Interestingly, S_TKc is associated with apoptosis, focal adhesion, and metabolic pathways of ubiquitin-mediated proteolysis (Sanjo et al. 1998).
Phylogenetic Analyses of Polyubiquitin Sequences
The Bayesian inference analyses reached congruence after 1 million and 0.5 million generations, respectively, for the amino acid and nucleotide sequences. Trees resulting from the maximum likelihood (ML) and Bayesian analyses fully agree on the topology, with the exception of the unresolved placement of Umbelopsis rammaniana (placed as sister to Smittium in the ML tree, but with negligible support). Both ML bootstrap proportions (MLBP 100%) and Bayesian posterior probabilities (BPP 1.00) support the recovered major clades (fig. 2a). Consistent with the amino acid tree, the animal origin of the Z. culisetae polyubiquitin gene is confirmed (MLBP 77%, BPP 1.00) and its position as the sister group to that of Anopheles gambiae is recovered with relatively strong support (MLBP 80%, BPP 0.95). The three Smittium species form a monophyletic group (MLBP 100%, BPP 1.00) together with representatives of Zygomycota (MLBP 99%, BPP 1.00). Consistent with the amino acid tree (fig. 2a), the phylogenetic analysis of the polyubiquitin nucleotide sequences failed to recover the Dikarya clade of higher fungi (Ascomycota + Basidiomycota).
Secondary Structures, Selection Analyses, and GC Ratio Variation
Secondary structure analyses predicted that ubiquitins show different structures between animals (including Z. culisetae) and fungi (fig. 4c). Specifically, the animal ubiquitins share a coil structure immediately adjacent to the first set of helices, in contrast to all other fungal members, which instead show an additional helix structure. The polyubiquitin genes were under strong purifying selection, with more than 94% of the codons showing negative selection, according to codon-specific analyses (table 2). Among the taxa examined, GC ratios of polyubiquitin genes (fig. 2b) were much elevated for animals (49.45-58.77%) compared with zygomycotan fungi (39.31-47.26%). The GC ratio of Z. culisetae (44.15%) falls within the range of other Harpellales and zygomycotan representatives (fig. 2b), despite its animal origin (fig. 2a). Interestingly, the GC ratio range of the Zygomycota clade is also lower than that of "Dikarya" (47.59-54.82%). The GC ratios of the first and second codon positions are rather consistent across both animals and fungi, yet the ratios of the third codon position vary greatly (fig. 5). Two major categories emerged according to the third codon position GC ratios: one (Animalia and Dikarya except Amanita thiersii) shows higher third codon position GC ratios than either the first or second codon position; the other category (Zygomycota and Am. thiersii) shows lower third codon position GC ratios than the first codon position, but higher than the second codon position.
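The per-codon-position GC comparison underlying fig. 5 can be sketched as follows; this is an illustrative reconstruction (not the authors' code), assuming an in-frame coding sequence, and the example sequence is hypothetical:

```python
def positional_gc(cds):
    """GC ratio at the 1st, 2nd, and 3rd codon positions of an in-frame CDS."""
    cds = cds.upper()
    assert len(cds) % 3 == 0, "sequence must be a whole number of codons"
    ratios = []
    for offset in range(3):
        column = cds[offset::3]  # every base at this codon position
        ratios.append(sum(base in "GC" for base in column) / len(column))
    return ratios

# Toy CDS of three codons: ATG GCG TTC
print([round(r, 3) for r in positional_gc("ATGGCGTTC")])  # [0.333, 0.333, 1.0]
```

Because the third position tolerates synonymous change, it is the position where clade-specific GC differences of the kind described above accumulate.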
Discussion
With support provided by phylogenetic analyses, amino acid compositions, secondary structure prediction, and a variety of BLAST and BLAT analyses, our results all converge on the indication that the gut fungal symbiont, Z. culisetae, has acquired a single-copy polyubiquitin gene through horizontal transfer from an insect host, possibly from the ancestral lineage of A. gambiae. This represents the first report of HGT within Harpellales, notwithstanding that their symbiotic lifestyle presents an intimacy similar to that of other systems that have experienced such events. It is reasonable to expect that genetic modifications have occurred within Harpellales genomes as they adapted to the gut-dwelling lifestyle. All four Harpellales genomes present AT enrichment, and Z. culisetae shows a much reduced genome size when compared with the other representatives.
Homolog Detection and Phylogenetic Analyses
Representatives of Ascomycota and most Zygomycota (with the exception of Backusella, Hesseltinella, and Umbelopsis) possess a single-copy polyubiquitin gene (supplementary table S2, Supplementary Material online).
Selection Analyses and Potential Functions of the Horizontally Transferred Polyubiquitin Gene
The lower GC ratio of Z. culisetae, compared with other members of the animal clade (figs. 2b and 5), implies that the HGT event was followed by substantial homogenization of the gene region. Given the putative functional importance of the animal-like polyubiquitin gene found in Z. culisetae, it was not surprising to find high levels of negative selection acting across the gene, although other evolutionary forces continue to act at the nucleotide level, leading to synonymous substitutions. This is reflected both in the GC ratio of the Z. culisetae polyubiquitin gene, which is consistent with other parts of the fungal genome (fig. 5), and in the higher resolution of the phylogenetic tree derived from the nucleotide alignment (fig. 2a). Ubiquitin is critical in controlling the fate of eukaryotic proteins, and previous studies have revealed other complex functions of polyubiquitin chains in eukaryotic systems (Collins and Brown 2010; Hagai and Levy 2010; Zhao and Ulrich 2010; Severo et al. 2013). A potential benefit of an insect-originated triple-ubiquitin gene might be to label and degrade certain insect proteins by hijacking their own UPS machinery. However, why only three out of the 14 A. gambiae ubiquitin repeats were found to be acquired by Z. culisetae remains to be answered. The significance and function of the repeat number is still unclear, although it has been found that doubling the polyubiquitin repeat units from four to eight did not accelerate the degradation of proteins (Zhao and Ulrich 2010). This suggests either that the number of repeats bears no burden for the functionality of the protein (this characteristic should then be under neutral selection and may present itself in random constellations across clades, as seems to be the case) or that the 14-repeat ubiquitin genes in some animal taxa may serve other functions.
The proteasome regulatory pathway is capable of degrading many kinds of proteins, though its efficiency varies greatly depending on the biophysical properties of the substrate (Baugh et al. 2009). The polyubiquitin chains (especially the K48-linked variant) alter the thermodynamic stability of the substrate, unwind its local structures, and help initiate its degradation (Hagai and Levy 2010). The proline residue found here for animals (including Z. culisetae) versus the serine residue of other lineages may be associated with specific substrate binding and unfolding, following signal transductions (fig. 4b and c).
The LamG and S_TKc domains on adjacent flanking genes (fig. 3) may amplify the potential of the triple-ubiquitin gene in serving its function. Laminin is a family of glycoproteins that are important parts of the basal lamina, involved in cell differentiation, migration, adhesion, and survival (Timpl et al. 1979), and are secreted and incorporated into cell-associated extracellular matrices. Laminins and laminin-binding domains are involved in adhesion of Aspergillus fumigatus conidia to host cell surfaces (Upadhyay et al. 2009). The exact function of the LamG domain remains unknown, although binding functions and disease progression have been ascribed to different LamG modules (Schéele et al. 2007). The LamG adjacent to the triple-ubiquitin of Z. culisetae is a transmembrane protein according to the TMHMM prediction (supplementary fig. S3, Supplementary Material online). The S_TKc domain serves in protein phosphorylation and ATP-binding processes (Hanks et al. 1988). The highly conserved catalytic domain is essential for catalyzing numerous related enzymes, several of which play important roles in ubiquitin-mediated proteolysis, apoptosis, and differentiation (Sanjo et al. 1998).
Based on our current knowledge, the mosquito-originated polyubiquitin gene in Z. culisetae may be useful during the invasive processes of the fungus, to induce the host's UPS by labeling and degrading host cell membrane proteins. The upstream and downstream genes may also assist this process in differentiating, adhering, and catalyzing. An alternative use of the acquired mosquito-originated polyubiquitin gene could be that Z. culisetae uses it as a defense against bacteria, viruses, or other microbes that coexist in the insect guts, whether for its own competitive advantage or as an ally of the host. Recent research has shown that polyubiquitin has important roles in regulating the hosts' immune and inflammatory responses (Jiang and Chen 2012; Severo et al. 2013) and is able to target non-self entities (i.e., microbial pathogens) and assist selective autophagy (Collins and Brown 2010; Jiang and Chen 2012). In addition, Haldar et al. (2015) reported that ubiquitin-centered mechanisms were involved in immune-response attacks on pathogen-containing vacuoles by the host.
Zancudomyces and Harpellales Genomic Studies
Zancudomyces culisetae was one of the first gut fungi to be isolated axenically and it is one of the most frequently encountered species of Harpellales from various regions globally (Lichtwardt et al. 1999;Valle and Santamaria 2004;White et al. 2006;Wang et al. 2013). Zancudomyces culisetae has been used in pioneering numerous research avenues (Williams 1983;Horn 1989;Grigg and Lichtwardt 1996;Gottlieb and Lichtwardt 2001;Tretter et al. 2013) as it demonstrates an intimate relationship with larval mosquitoes (Horn and Lichtwardt 1981;Williams 2001); the fungal spores present a delicate response mechanism, which is triggered by pH and ion changes along the digestive tract, and corresponding with germling release and development (Horn 1989). In this study, novel genome-level comparisons revealed that Z. culisetae has a considerably smaller genome size, greater gene density, and more unique gene ontology annotations compared with Smittium (table 1 and fig. 1). These genomic insights could indicate that Z. culisetae has evolved a tighter relationship with its hosts, either as a mutualistic symbiont or perhaps even with parasitic potential. The horizontally transferred triple-ubiquitin gene may also help secure the symbiotic relationship between Z. culisetae and mosquitoes, and to some extent, related aquatic Diptera, which may explain the exceptional success of Z. culisetae in light of its global distribution. Smittium mucronatum presents the largest genome size among the four, which may be related to its host specificity to Psectrocladius (Chironomidae). A similar result was recently recovered for an Aedes aegypti-specialized fungal pathogen, Edhazardia aedis, which shows a notably larger genome size (51 Mb) when compared with other Edhazardia species (2-9 Mb) (Desjardins et al. 2015). 
While such considerations are in their infancy, further studies relating to host specificity, secondary metabolites, gene specialization, and molecular interactions should shed light on these questions, as well as on how Harpellales have maintained their obligate gut-dwelling lifestyle in such an effective manner (Galagan et al. 2005; Staats et al. 2014).
Fungal Strains and DNA Extraction
Four Harpellales taxa were included in this study (table 1). Two Smittium culicis strains (GSMNP and ID-206-W2) were selected to represent divergent clades within the species complex. The type species S. mucronatum, S. culicis strain GSMNP, and Z. culisetae were obtained from the USDA-ARS Collection of Entomopathogenic Fungal Cultures (ARSEF). Smittium culicis strain ID-206-W2 was recently isolated from the hindgut of a mosquito larva in Boise, ID, USA (MMW's lab at Boise State University). Strains were cultured in broth tubes of Brain Heart Infusion Glucose Tryptone (BHIGTv) medium at room temperature, and the DNA extraction followed a standard CTAB protocol.
Genome Sequencing, Assembly, and Annotation
Paired-end libraries (with 500 bp insert size) were prepared and sequenced for all four strains at the Centre for Applied Genomics (Hospital for Sick Children, Toronto, Canada) using one lane of the Illumina HiSeq 2500 platform (2 × 150 bp read length). Raw sequence reads were quality trimmed and assembled with RAY v2.3.1 (Boisvert et al. 2010). Potential contamination was examined and characterized using the Blobology pipeline. Satellites, simple repeats, and low-complexity sequences were annotated with RepeatMasker v4.0.5 (http://www.repeatmasker.org, last accessed September 18, 2015) and Tandem Repeat Finder v4.07b (Benson 1999), corresponding to the "Fungi" taxon. Gene prediction employed AUGUSTUS v3.1 (Keller et al. 2011) using the genome profiles of Conidiobolus coronatus (Entomophthoramycotina, Zygomycota) (Chang et al. 2015). As a corollary, TransDecoder (Haas et al. 2013) was used to predict open reading frames and enable a conservative comparison to estimate gene numbers. Gene functions of the AUGUSTUS prediction set were inferred from Blast2GO v3.0 (Conesa et al. 2005) and InterProScan v5.8-49.0 (Jones et al. 2014) against the nonredundant database in NCBI and protein signature databases in EBI. Secreted proteins were predicted by SignalP v4.1 without truncation (Petersen et al. 2011), and transmembrane helices were predicted through the TMHMM Server v2.0 (Krogh et al. 2001). CEGMA v2.4.010312 (Parra et al. 2007) was used to identify the presence of core eukaryotic protein-coding genes for subsequent evaluation of genome coverage.
HGT Detection and Homolog Identification
The four Harpellales proteomes were BLASTed (using BLASTp) against a concatenated proteome database, consisting of 512 fungal representatives from the Broad Institute and the Joint Genome Institute (JGI), as well as five proteomes of insect (lower Diptera) hosts of Harpellales (Aedes aegypti, Anopheles gambiae, Culex quinquefasciatus, Chironomus tentans, and Simulium vittatum) from VectorBase, the European Bioinformatics Institute (EBI), and the Human Genome Sequencing Center at Baylor College of Medicine (BCM-HGSC) (Holt et al. 2002; International Human Genome Sequencing Consortium 2004; Nene et al. 2007; Ma et al. 2009; Zimin et al. 2009; Arensburger et al. 2010; Burmester et al. 2011; Hu et al. 2011; Arnaud et al. 2012; Collins et al. 2013; Howe et al. 2013; Hoeppner et al. 2014; Kutsenko et al. 2014; Kohler et al. 2015). A customized Python script ("HGTfilter.py", available from GitHub) was applied in order to identify promising HGT elements in the Harpellales genomes. The script works by comparing BLAST-based hits against both the 512 fungal genomes (in this case) and the five host genomes and lifting out hits that match insect-derived genes at a lower E-value than fungus-derived genes. Due to the employment of fungal genomes across the diversity of the kingdom, an insect-derived match necessarily had to "compete" with a broad swath of fungi in order to be deemed of insect origin. All corresponding nucleotide sequences of the filtered outputs were mapped back as queries to the host genomes using BLAT (Kent 2002), in order to robustly infer HGT events. Homologs among 12 fungi and 8 animals were selected (based on a 1E−50 cutoff) to represent the Ascomycota, Basidiomycota, Zygomycota, and animal clades (table 3 and fig. 2a). To strengthen the inference of homology on a clade-by-clade basis, upstream and downstream genes for each homolog were recovered by manually scanning the genomes and annotations.
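The filtering criterion described above — a query is retained when its best insect-derived hit has a lower E-value than its best fungus-derived hit — can be sketched as follows. This is a simplified, hypothetical reconstruction; the actual HGTfilter.py script may apply additional checks, and the gene names and E-values below are invented for illustration:

```python
from collections import defaultdict

def flag_hgt_candidates(hits):
    """hits: iterable of (query_gene, database, evalue) tuples, where database
    is 'fungi' or 'insect'. Flags queries whose best insect-derived hit has a
    strictly lower E-value than their best fungus-derived hit."""
    best = defaultdict(lambda: {"fungi": float("inf"), "insect": float("inf")})
    for query, database, evalue in hits:
        best[query][database] = min(best[query][database], evalue)
    return sorted(q for q, b in best.items() if b["insect"] < b["fungi"])

hits = [
    ("gene_A", "fungi", 1e-30), ("gene_A", "insect", 1e-110),  # insect hit wins
    ("gene_B", "fungi", 1e-80), ("gene_B", "insect", 1e-20),   # fungal hit wins
]
print(flag_hgt_candidates(hits))  # ['gene_A']
```

Because the fungal database spans 512 genomes across the kingdom, an insect-derived best hit under this rule is a conservative signal of non-fungal origin.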
The polyubiquitin gene for the outgroup taxon, Arabidopsis lyrata, was obtained from GenBank under accession number XM_002872723. The syntenic structure of triple-ubiquitin and adjacent genes within the Z. culisetae genome were plotted using the genoPlotR package in R (Guy et al. 2010).
Multiple Sequence Alignment, Model Test, and Phylogenetic Reconstruction
The polyubiquitin amino acid sequences were aligned using MUSCLE v3.8.31 (Edgar 2004), the result of which served as the guide for the nucleotide alignment. The RtREV+I+G and GTR+I+G models were suggested for the polyubiquitin amino acid and nucleotide sequence alignments by ProtTest v2.4 (Abascal et al. 2005) and JModelTest v2.1.3 (Posada 2008), respectively. ML analyses employed RAxML v8 (Stamatakis 2014) and Bayesian inferences were performed using MrBayes v3.2 (Ronquist et al. 2012) for both amino acid and codon-partitioned nucleotide sequences. The ML search used 1,000 initial addition sequences with 25 initial GAMMA rate categories and final optimization with four GAMMA shape categories. Support values for nodes were acquired through 1,000 pseudoreplicates with random seeds. For the Bayesian inference analysis, a total of eight chains (two runs, each with three hot and one cold chain) were performed for 50 million generations and Tracer v1.5 (http://beast.bio.ed.ac.uk/Tracer, last accessed December 2, 2015) was used to confirm convergence of the Bayesian chains and the sufficiency of the default burn-in value (25%). Regarding the phylogenetic analysis of the upstream flanking S_TKc gene, only RAxML was used to infer the phylogeny, using the same settings as mentioned above.
Secondary Structure Prediction and Selection Analyses
Monoubiquitin secondary structures were predicted using the CFSSP server (Kumar 2013). Selection pressures on the polyubiquitin genes across both animal and fungal lineages were assessed on a molecule-wide basis. Both purifying and positive selection hypotheses were tested using the Z-test in MEGA v6.06 (Tamura et al. 2013) and positive selection was tested with the PARRIS method in HyPhy (Kosakovsky Pond et al. 2005) through the Datamonkey server (Delport et al. 2010). The Z-test was performed allowing both for 0% gaps and 30% gaps, using 1,000 replicates for the bootstraps and the Nei-Gojobori method (Nei and Gojobori 1986). Codon-specific selection was tested using codon-based likelihood ratio tests, including Single-Likelihood Ancestor Count (SLAC), Fixed Effects Likelihood (FEL), and Random Effects Likelihood (REL) on the Datamonkey server following the methods detailed in Kvist et al. (2013). Negative selection pressures dominated throughout the gene (no signatures of positive selection were recovered).
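The molecule-wide Z-test of purifying selection mentioned above can be illustrated with the standard statistic; the numeric inputs below are hypothetical, and the variance of dS − dN is approximated here as Var(dN) + Var(dS) (MEGA instead estimates the standard error by bootstrap, as in the analysis described):

```python
import math

def purifying_selection_z(dN, dS, var_dN, var_dS):
    """Z statistic for the one-tailed purifying-selection test,
    Z = (dS - dN) / sqrt(Var(dN) + Var(dS)); a large positive Z rejects
    neutrality (dN = dS) in favor of dN < dS (purifying selection)."""
    return (dS - dN) / math.sqrt(var_dN + var_dS)

# Hypothetical per-site estimates and bootstrap variances:
z = purifying_selection_z(dN=0.02, dS=0.30, var_dN=0.0001, var_dS=0.0009)
print(round(z, 2))  # 8.85
```

A Z value this large would be consistent with the strong negative selection reported for the polyubiquitin genes.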
Supplementary Material
Supplementary material is available at Molecular Biology and Evolution online (http://www.mbe.oxfordjournals.org/).
Natural occurrence and pathogenicity of Xanthomonas bacteria on selected plants
The bacterial genus Xanthomonas consists of several species of economic importance, among which Xanthomonas campestris pv. musacearum (Xcm), the cause of enset and banana wilt, is the most important in tropical Africa. However, the natural occurrence and host range of this species is yet to be clarified. The objectives of this study were to verify the presence of Xanthomonas bacteria on plants growing in and around enset gardens in South and Southwest Ethiopia, and to elucidate the pathogenicity of Xcm strains to cultivated and wild plants. Several economical and ornamental plants were assessed for wilting in South and Southwest Ethiopia. Wilting was visible on Canna spp. with 9.8% incidence and 30% prevalence, while reddish streak symptoms (typical of Xanthomonas bacteria) were observed on the leaves of sugarcane, sorghum and wild sorghum with disease incidence ranging from 20 to 80%, and prevalence varying from 30 to 100%. The pathogenicity of three Xcm isolates to five plant species was tested in a factorial experiment arranged in CRD with five replications. All the tested Xcm isolates were found to be pathogenic to banana, cultivated and wild enset, Canna indica, Canna orchoides, maize, sorghum and finger millet. The analysis of variance for incubation period and disease incidence revealed significant differences (p<0.05) among test plants and isolates. Results suggest marked variations among test plants' ability to resist the bacterium. Variations were also evident in the aggressiveness of the bacterial isolates. On the other hand, enset and banana did not show any symptom after being inoculated with four Xanthomonas isolates from other crops.
INTRODUCTION
The genus Xanthomonas is composed of several species of economic importance as they affect the production of different crops all over the world. A member of this genus, Xanthomonas campestris pv. musacearum (Xcm), has been implicated in threatening the crop enset (Ensete ventricosum (Welw.) Cheesman) in Ethiopia since the 1960s (Yirgou and Bradbury, 1968;Dereje, 1985;Weldemichael, 2000).
The disease has also been implicated as posing a serious threat to banana production, and thereby to the livelihood of thousands of people throughout the Great Lakes Region of Africa (Wasukira et al., 2012).
The initial symptoms by Xcm on enset and banana occur on the central leaf and spread to all parts. The earliest symptoms are usually loss of turgor and wilting in the spear (youngest emerging leaf) or one or more of the young leaves, sometimes preceded by yellowing and distortion, especially in young plants. Bacterial ooze exudes when a non-dry part of the plant is cut. A cut made through the petioles of newly infected enset plants reveals browning of the vascular bundles and yellowish or grayish masses of bacterial ooze come out from the vascular bundles (Tripathi et al., 2009). Cross sections at the base of the pseudostem and corm show discoloration of the vascular bundles with large bacterial pockets and grayish or yellowish exudates with brownish to black spots, respectively (Wondimagagne, 1981;Ashagari, 1985).
The main known natural host plants of X. campestris pv. musacearum are banana (Musa spp.) and cultivated enset (Ensete ventricosum), both of which belong to the family Musaceae and order Zingiberales (Yirgou and Bradbury, 1968, 1974). However, the host range of this pathogen appears rather controversial. While Ssekiwoko et al. (2006) reported Xcm as being able to infect only monocots that belong to the families Musaceae and Cannaceae, Mwangi et al. (2006) ruled out grasses like maize, sorghum and napier grass along with such crops as common beans, cassava, taro, sweet potato and tobacco as hosts of the pathogen. On the other hand, Xanthomonas species have been reported in sweet potato, sugar cane, maize, common beans and sorghum (Hernandez and Trujillo, 1990; Destefano et al., 2003; Mkandawire et al., 2004; De Cleene, 2008; Todorović et al., 2008). Xcm was also found to be pathogenic to maize and sugarcane (Aritua et al., 2008; Karamura, 2012). Wild Musa zebrina, Musa ornata and Canna indica were also reported as potential alternative hosts for this pathogen (Ssekiwoko et al., 2006). Enset bacterial wilt caused by X. campestris pv. musacearum was first reported in Ethiopia by Yirgou and Bradbury (1968) and has since spread to all the enset growing regions in Ethiopia (Brandt et al., 1997). However, most of the studies conducted in Ethiopia thus far focus on surveying the disease in some areas and characterizing the pathogen based on biochemical tests (Wondimagagne, 1981; Ashagari, 1985; Spring et al., 1996; Bobosha, 2003; Addis et al., 2004).
Screening of some cultivated enset clones for wilt resistance and studies of the survival and dispersal of the pathogen have also been carried out, although not thoroughly (Weldemichael, 2000; Addis et al., 2006; Weldemichael et al., 2008a, b). Studies on the occurrence of the disease on plants other than enset and banana are lacking under Ethiopian conditions and are very limited even throughout Africa. Moreover, the pathogenicity of Xcm isolates to plants growing in and around enset gardens has not been well established. Therefore, this study was designed to: 1) verify the presence of Xanthomonas bacteria on plants growing in and around enset gardens in South and Southwest Ethiopia, and 2) elucidate the pathogenicity of Xcm strains to various plants.
MATERIALS AND METHODS

Assessing plants around enset gardens for Xanthomonas spp. infection
A field survey was carried out to assess selected crops, that is, Canna spp., sugarcane (Saccharum officinarum), cultivated sorghum (Sorghum bicolor) and wild sorghum (Sorghum halepense), by visiting enset- and banana-producing areas in South and Southwest Ethiopia. During the survey, data were collected on the types of plants growing in and around each field and on the incidence of disease on each of the above plants, expressed as the proportion of plants with visible symptoms. In addition, specimens were collected from each plant and brought to the laboratory for verification. The identity of the isolated bacteria was confirmed following colony growth on a semi-selective medium (sucrose peptone agar: 20 g sucrose, 5 g peptone, 0.5 g K2HPO4, 0.25 g MgSO4 and 15 g agar in 1 l of sterilized distilled water) (Mwebaze, 2007; Mwangi et al., 2007), the catalase test (Dickey and Kelman, 1988) and the Gram staining reaction (Schaad, 1988). In addition, physiological tests, namely gelatin liquefaction and starch hydrolysis, were carried out.
Pathogenicity tests
Pathogenicity tests were carried out to determine the possible host range of Xcm and the reaction of various plant species. The experiment was factorial, with test plants as the main factor and isolates as the sub-factor, arranged in a completely randomized design with five replications.
Three Xcm isolates (I1, I2 and I3) were obtained from naturally infected wild enset, banana and cultivated enset, respectively (Table 1), and used for the pathogenicity test on cultivated enset, wild enset, banana, Canna species and cereal crops (maize, sorghum and finger millet) collected from different areas (Table 2). Each isolate was collected by taking bacterial ooze in the field using a toothpick and suspending it in a test tube half-filled with sterilized distilled water, according to Weldemichael (2000). Before inoculation of the test plants, the concentration of each bacterial suspension was adjusted with a spectrophotometer to an optical density of 0.3 at 460 nm, which is equivalent to 10^8 cfu/ml of bacterial cells.
Seedlings of banana, enset and Canna spp. were transplanted into pots (22×22 cm) filled with a sun-dried mixture of soil, sand and manure at a ratio of 3:1:1 (Quimio, 1992) and allowed to establish for three months (four- to seven-leaf stage). Test plants were inoculated with each bacterial isolate by injecting an aliquot of 3 ml of the bacterial suspension into the petiole base of the newly expanding central leaf using a 10 ml sterile hypodermic syringe (Ashagari, 1985). Inoculated plants were then covered with a wet plastic bag for 48 h. For the cereals, seeds of each species were planted in plastic pots (18×18 cm) filled with a sun-dried sterile mixture of soil, sand and manure (3:1:1) plus 324 mg of urea per pot; this amount of urea was re-applied six weeks after planting. About 10 seeds were planted in drills and thinned to five plants per pot two weeks after planting. The cereals were inoculated at one month old (three- to four-leaf stage) by wounding and spraying, that is, their leaves were lightly abraded with very fine sterile sandpaper, sprayed with 3 ml of each bacterial isolate suspension and covered with a transparent plastic bag for 48 h (Hussien, 2001). Negative controls of each plant species were inoculated with the same quantity of sterile distilled water.
Disease assessment
Data were collected on the incubation period (the period between inoculation and the first wilting symptom), and the number of plants showing disease symptoms was recorded weekly, starting one week after inoculation, for four consecutive months. Disease incidence was calculated as:

DI (%) = (NPCW / NPPT) × 100

where DI is disease incidence, NPCW is the number of plants completely wilted and NPPT is the number of plants assessed.
In addition, disease severity was assessed using a standard disease scale of 0 to 5 (Winstead and Kelman, 1952), where 0 = no symptom; 1 = only the inoculated leaf wilted; 2 = two to three leaves wilted; 3 = four leaves wilted; 4 = all leaves wilted; and 5 = plant dead. The severity grades were converted into a percent severity index for analysis (Cooke, 2006).
PSI = (SNR × 100) / (NPR × MSS)

where PSI is the percent severity index, SNR is the sum of the numerical ratings, NPR is the number of plants rated and MSS is the maximum score of the scale. The severity index from each scoring date was converted to the area under the percent severity index progress curve (AUPSIC) using the formula of Jeger and Viljanen-Rollinson (2001):

AUPSIC = Σ from i = 1 to n − 1 of [(xi + xi+1) / 2] × (ti+1 − ti)

where n is the total number of assessments, ti is the time of the i-th assessment in weeks from the first assessment date and xi is the percent severity index (or disease incidence) at the i-th assessment. AUPSIC is expressed in percent-weeks.
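The original analysis was run in SAS (see Data analysis), but the incidence, severity index and area-under-curve calculations above are simple to express in code. The following Python sketch is not part of the original study (function names are illustrative); the area under the curve uses the trapezoidal rule of Jeger and Viljanen-Rollinson (2001):

```python
def disease_incidence(n_wilted, n_assessed):
    """DI (%) = (NPCW / NPPT) x 100."""
    return n_wilted * 100.0 / n_assessed

def percent_severity_index(ratings, max_score=5):
    """PSI = (sum of numerical ratings x 100) / (plants rated x max score),
    using the 0-5 scale of Winstead and Kelman (1952)."""
    return sum(ratings) * 100.0 / (len(ratings) * max_score)

def aupsic(times_weeks, psi_values):
    """Area under the percent severity index progress curve (percent-weeks),
    computed with the trapezoidal rule over successive assessments."""
    return sum((psi_values[i] + psi_values[i + 1]) / 2.0
               * (times_weeks[i + 1] - times_weeks[i])
               for i in range(len(times_weeks) - 1))
```

For example, five plants rated 0, 1, 2, 3 and 4 on the 0-5 scale give a PSI of 40%, and a PSI trajectory of 0, 50 and 100% over weeks 0, 1 and 2 gives an AUPSIC of 100 percent-weeks.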
Data analysis
Analysis of variance was performed on the disease parameters (wilt incidence and incubation period) using the General Linear Model procedure of the SAS package (SAS Institute Inc., 2003). Means were separated by least significant difference (LSD) at the 5% probability level.
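The study used SAS GLM; purely as an illustration of the underlying computation (not the authors' code), the one-way ANOVA F statistic and an LSD threshold for equal group sizes can be sketched in plain Python:

```python
import math

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations.
    Returns (F, df_between, df_within)."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

def lsd(t_critical, mse, n_per_group):
    """Least significant difference for equal group sizes:
    LSD = t_crit * sqrt(2 * MSE / n)."""
    return t_critical * math.sqrt(2.0 * mse / n_per_group)
```

Two treatment means are declared different when their absolute difference exceeds the LSD value at the chosen probability level (5% here).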
RESULTS

Disease incidence on plants around enset gardens and characterization of bacterial isolates
Visible disease symptoms (yellowing at the leaf margins and tip, wilted leaves with blades folded upward and inward, and dried leaves) were evident on diseased plants. Reddish-brown streaks were also recorded on the grasses, that is, cultivated and wild sorghum, and on sugarcane. Disease incidence (the proportion of infected plants in a field) varied from 10% on Canna sp. to 80% on sugarcane (Figure 1). Disease prevalence (the proportion of fields with at least one diseased plant) ranged between 30% for Canna sp. and 100% for wild sorghum.
When samples from diseased (wilted) plants were plated on sucrose peptone agar (a semi-selective medium for Xanthomonas), deep yellow colonies grew from the sugarcane and sorghum samples, and yellow-colored colonies from the Canna samples. All of these isolates were Gram-negative and catalase-positive. Inoculation of the isolates onto enset and banana did not induce any symptoms, and hence the isolates were considered non-pathogenic to enset and banana.
Pathogenicity of Xcm isolates to various plants
The pathogenicity of the Xcm isolates to various plants was tested in two experiments. Because the results of both experiments were consistent, the averages of the two experiments are presented in this report.
Banana and cultivated enset
Disease assessment started a week after inoculation, and the earliest typical external disease symptoms were observed two to four weeks post-inoculation on 'Pisang awak' and the enset clones. These included folding down of the leaf blade along the midrib, followed by scalding and a dull green appearance of the inoculated central leaf. This was followed by yellowing starting at the apex, sequential wilting of the leaves, drying and wilting of the whole plant, and finally rotting and death (Figure 2). Yellowish bacterial ooze was observed when the pseudostem and leaf petiole were cut. In the current experiment, there were significant variations among clones and among isolates in terms of incubation period, disease incidence and area under the percent severity index progress curve (Tables 3 to 5); however, the interaction was not significant (data not shown). Among the tested plants, the banana cultivar 'Pisang awak' had the shortest incubation period, followed by the enset clone 'Mandaluka', while the enset clone 'Mezya' had the longest. 'Mezya' also had the lowest average wilt incidence (27%) across the three isolates, while 'Pisang awak' and 'Geziwot' had the highest (73%), followed by 'Mandaluka' (60%). Furthermore, the highest AUPSIC (1039) was recorded on 'Pisang awak', followed by 'Geziwot' and 'Mandaluka' in that order, while the lowest (534) was recorded on 'Mezya'. Thus, the banana cultivar 'Pisang awak' and the enset clone 'Geziwot' were the most susceptible to Xcm among the clones tested in this experiment. When comparisons were made across isolates, isolate I3 caused wilting the earliest (2.7 weeks after inoculation), while isolates I1 and I2 took about four and three weeks, respectively, to induce symptoms (Table 3).
Most plantlets inoculated with isolates I2 and I3 wilted completely, but most of the enset clones and some 'Pisang awak' plantlets inoculated with isolate I1 did not. Moreover, wilt incidence and AUPSIC were significantly lowest for isolate I1. In contrast, isolate I3 caused the earliest wilting, and the disease parameters after inoculation with this isolate were significantly greater than for the others. As a result, among the three Xcm isolates used in this study, the wild-enset isolate I1 was the weakest pathogen, whereas isolate I3, which was obtained from cultivated enset in the Sidama zone of southern Ethiopia, was the most virulent and aggressive.
Wild enset
The first disease symptoms on wild enset plants were recorded a week after inoculation as yellowing from the apex to the edge of the inoculated leaf, with water-soaked lesions along the inoculated leaf's midrib. Two to five weeks after inoculation, leaf wilting and yellowing were observed on most plantlets (Figure 2). Yellowish bacterial ooze was observed when the pseudostem and leaf petiole were cut. These symptoms are similar to the typical Xanthomonas bacterial wilt symptoms described on banana cultivars and cultivated enset under field and experimental conditions. Like the cultivated enset clones, the wild enset types reacted differentially to the Xcm isolates. Significant variations were observed among the wild enset types and among the Xcm isolates in terms of incubation period, disease incidence and area under the percent severity index progress curve (Table 4).
The mean number of weeks to the appearance of initial symptoms on wild enset clones varied between three and five. The incubation period was shorter on wild enset clones such as 'Epoo5', 'Epoo2', 'Erpha18' and 'Erpha13', while 'Epoo4' had the longest incubation period among the tested wild enset clones. None of the nine wild enset types tested in this experiment was completely resistant to the Xcm isolates used in this study. Among the wild enset types tested, wilt incidence was highest (60%) on 'Erpha18', followed by 'Epoo2', which had the highest AUPSIC (805). These two wild enset types were therefore considered highly susceptible to Xcm. On the other hand, the wild enset 'Epoo4' had the significantly lowest wilt incidence and AUPSIC, making it relatively more tolerant to the pathogen.
In this experiment, too, the incubation period was longest for isolate I1, while isolate I3 had the shortest. Symptom appearance after inoculation with I1 was delayed by one to two weeks compared with the other two isolates. Most of the plantlets inoculated with isolates I2 and I3 had wilted completely 10 weeks after inoculation, whereas only one plantlet of 'Epoo3' inoculated with isolate I1 had completely wilted at the same assessment time. This difference among isolates in inducing symptoms on the tested plants indicates variation in aggressiveness. Disease incidence and severity were also high for most wild enset types after inoculation with isolates I2 and I3. Disease severity indices of 100% were recorded 5 to 9 weeks after inoculation on wild enset with isolate I3 (data not shown). Isolate I2 caused 60 to 100% severity at 7 to 11 weeks after inoculation, while isolate I1 resulted in 40 to 60% severity at 8 to 14 weeks after inoculation. On average, isolate I3 caused 70% disease incidence and an AUPSIC value of 831. In contrast, isolate I1 produced significantly lower disease incidence and AUPSIC. This further confirmed that isolate I3 was the most aggressive of the three.
Canna spp. and cereals
Among the suspected alternative host plants, Canna spp. and maize, sorghum and finger millet varieties were tested for their reaction to the three Xcm isolates. Two to three weeks after inoculation, typical external disease symptoms were observed on some plantlets of these suspected hosts. On Canna plantlets, water-soaked lesions developed along the inoculated leaf's midrib within two weeks after inoculation; after three to four weeks, some inoculated leaves wilted, and the leaf blade folded upward and inward, turned yellow, dried and died. However, new suckers that emerged from the corm of the inoculated plantlets kept growing. This may be related to an inability of the bacteria to colonize the corm of the Canna plants.
In maize, the first symptoms observed on the inoculated leaf were necrosis and discoloration or yellowing, starting from the tip towards the base of the leaf, three to four weeks after inoculation. Gradual wilting along the midrib to the edge of the inoculated leaf was also observed. In the sorghum varieties, lesions or discoloration initially developed at the tip of the inoculated leaf two weeks after inoculation. Thereafter, the lesions gradually elongated to the midrib and then to the leaf blade. Eventually, yellowing appeared on the leaf blade and, in severe cases, a burned appearance at the leaf margin. In addition, leaves withered, turned brown, wilted, dried and dropped off. The symptoms observed on the finger millet varieties were discoloration starting from the tip to the base of the leaf, which finally turned yellow and dried.
The analysis of variance for incubation period, disease incidence and AUPSIC revealed significant differences among varieties and isolates (Table 5). The number of weeks to the appearance of the first disease symptoms varied between two and a half and four among the cereal cultivars. Among the tested plants, initial symptoms appeared earliest on the sorghum cultivar 'RTxTAM' and latest on the finger millet cultivar 'Pandet'. Each of the inoculated plant species reacted differently to the three Xcm isolates. Disease incidence exceeded 70% on C. indica and C. orchoides and reached 67% on the maize variety 'ACV6' (Table 5), but was negligible on the finger millet variety 'Pandet'; this variety showed initial symptoms, but the disease then progressed quite slowly. The second longest incubation period and the lowest AUPSIC were recorded for the other finger millet variety, 'Tadess'. These results may suggest that finger millet is more resistant than the other cereals.
The Xcm isolates differed in their ability to cause disease on Canna spp. and the various cereals. Disease symptoms were induced earliest by isolate I3, followed by isolate I2. The highest disease incidence was induced by isolate I3 on Canna indica and Canna orchoides; the same isolate caused up to 60% disease incidence on maize, cultivated and wild sorghum, and finger millet. On average, the highest AUPSIC value (593) was recorded when plants were inoculated with isolate I3 (Table 5), significantly higher than that of isolate I1. In a trend similar to that on cultivated and wild enset and banana, isolate I3 was the most aggressive on Canna spp. and the cereals.
DISCUSSION
Bacterial wilt caused by X. campestris pv. musacearum is considered one of the major biotic stresses threatening enset and banana (Thwaites et al., 2000). The enset-Xcm pathosystem remains one of the least studied pathosystems to date. The objectives of the current study were to determine the occurrence of enset bacterial wilt on various plants commonly grown in and around enset farms in South and Southwest Ethiopia and to elucidate the pathogenicity of Xcm isolated from different groups of plants.
During the field survey, different plants, that is, Canna sp., cultivated and wild sorghum, and sugarcane, were assessed for symptoms associated with Xanthomonas bacteria. The results reveal a prevalence of symptoms associated with Xanthomonas ranging from 30 to 100%, with disease incidence varying from 10 to 80%. Thus, these plants were considered possible alternate hosts of the Xcm bacterium. Ssekiwoko et al. (2006) also reported 80 to 100% disease incidence on C. indica in a pot experiment, while Ashagari (1985) identified C. orchoides as a host of the Xcm pathogen. In the present study, Xanthomonas isolates from these plants did not induce observable symptoms on enset and banana, and hence were considered non-pathogenic to enset and banana. On the other hand, Canna spp. and all the aforementioned cereal crops were found to be susceptible to Xcm from enset and banana.
The current study reveals the pathogenicity of Xcm to cultivated and wild enset, banana, Canna spp. and several grasses. In contrast, Ssekiwoko et al. (2006) reported that Xcm infects only monocots belonging to the two families Musaceae and Cannaceae, and Mwangi et al. (2006) excluded maize and sorghum from the possible hosts of Xcm. On the other hand, Aritua et al. (2008) and Karamura (2012) reported that maize and sugarcane developed disease after being artificially inoculated with Xcm. Aritua et al. (2008) even reported genetic similarities between Xcm isolates on the one hand, and isolates of Xanthomonas vasicola pv. holcicola from sorghum and Xanthomonas vasicola pv. vasculorum from sugarcane on the other. Inoculation of maize with Xcm resulted in the development of full-blown yellow-brown streaks (Karamura, 2012), a result consistent with the findings of this study.
Significant variations (p < 0.05) existed among the isolates in terms of incubation period, wilt incidence and severity. In general, isolate I3, from cultivated enset in South Ethiopia, was the most aggressive, while I1, from a wild enset plant, was the least aggressive. Variability in pathogenicity among Xcm isolates was also reported by Weldemichael (2000). This contrasts with a report by Aritua et al. (2007) that revealed a low level of genetic variation among pathogen isolates collected from different African countries.
However, the current pathogenicity test findings contradict those of Tripathi et al. (2009), who reported no significant differences in pathogenicity among Xcm isolates. Our findings thus call for more research into the diversity of the pathogen populations. Moreover, the results confirm the need to consider isolate variation in breeding for bacterial wilt resistance. We recommend that the most aggressive isolate be used in future resistance screening trials. We also suggest that molecular studies, including sequencing, be carried out to understand the genetic basis of variation in pathogenicity among the isolates.
The test plants also differed significantly in their degree of susceptibility to Xcm. The banana cultivar 'Pisang awak', C. indica, the enset clones 'Geziwot' and 'Mandaluka', and the wild ensets 'Epoo2' and 'Erpha18' showed high disease incidence and severity and short incubation periods, and hence were considered the most susceptible. The enset clone 'Mezya', the wild enset 'Epoo4' and the finger millet cultivar 'Pandet' had lower disease severity and longer incubation periods, and hence were considered relatively tolerant to the pathogen. While little work has been done to assess the susceptibility of wild enset to Xcm, the enset clone 'Mezya' was also found to be relatively tolerant to Xcm infection by Ashagari (1985) and Weldemichael (2000).
The current work reveals the potential role that various plants, including wild enset, may play in harboring Xcm pathogenic to both cultivated enset and banana. Hence, care must be taken to minimize the risk of the pathogen spreading from the wild to agricultural fields. Further characterization of the X. campestris pv. musacearum strains from wild enset, cultivated enset and banana should be carried out using the available detection methods. In addition, the genetic diversity of both the host and the pathogen should be investigated further. Additional tests of Xcm isolates on different plant species should be carried out to elucidate the potential of wild and cultivated plants to harbor and disseminate the pathogen.
"year": 2016,
"sha1": "155166f1845612643ce57132bfcb6236ab05f6b0",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/B6B571E60675",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "644ef74c197f31ea28da91d4d1686c385f0323d2",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Investigating outbreaks of initially unknown aetiology in complex settings: findings and recommendations from 10 case studies
Abstract Background Outbreaks of unknown aetiology in complex settings pose challenges and there is little information about investigation methods. We reviewed investigations into such outbreaks to identify methods favouring or impeding identification of the cause. Methods We used two approaches: reviewing scientific literature and soliciting key informants. Case studies were developed through interviews with people involved and triangulated with documents available from the time of the investigation. Results Ten outbreaks in African or Asian countries within the period 2007–2017 were selected. The cause was identified in seven, of which two had an unclear mode of transmission, and in three, neither origin nor transmission mode was identified. Four events were caused by infectious agents and three by chemical poisoning. Despite differences in the outbreaks, similar obstacles were noted: incomplete or delayed description of patients, comorbidities confounding clinical pictures and case definitions wrongly attributed. Repeated rounds of data collection and laboratory investigations were common and there was limited capacity to ship samples. Discussion It was not possible to define activities that led to prompt identification of the cause in the case studies selected. Based on the observations, we conclude that basing case definitions on precise medical observations, implementing initial comprehensive data collection, including environmental, social and behavioural information; and involving local informants could save precious time and hasten implementation of control measures.
Introduction
Epidemiologists and public health professionals charged with investigating outbreaks of unknown aetiology face many common challenges, whether the outbreaks occur in well-resourced or low-resourced areas.1 Little is known about outbreaks that occur in remote settings other than what is highlighted in the global scientific literature. A review of ProMed reports of undiagnosed disease events shows that those events mainly occurred within low-resource countries.2 New, emerging infectious diseases also present first as outbreaks of unknown aetiology, as in the first cases of acquired immunodeficiency syndrome, severe acute respiratory syndrome (SARS), Middle East respiratory syndrome and coronavirus disease 2019.3 From our experience and literature review, little information is available to the scientific community about outbreaks where the cause is not identified or where there is controversy around the primary agent involved. Also, detailed information regarding the investigation processes in such outbreaks is scant: what challenges do investigation teams face for data collection, sample collection, storage and shipment? Even less well documented is how to propose control measures while there is still uncertainty about the cause of the outbreak.
In 2012, Goodman et al.1 published a review of 14 historically important syndromic outbreaks, initially of unknown aetiology, investigated in the USA and Cuba. The authors identified lessons learned, from which they derived a series of necessary measures, many of which are not always available in remote settings.
Dalton et al.4 proposed a framework for auditing outbreak investigations in Australia and insisted on the establishment of performance standards in outbreak investigations. However, the resources allocated to investigations vary dramatically from one country or setting to another and make the identification of such criteria difficult. Even so, the need for good practices in investigating these outbreaks supports the work presented here.
As part of the World Health Organization (WHO) Outbreak Toolkit project, we initiated a review of recent investigations into outbreaks of unknown aetiology. The Outbreak Toolkit project is a WHO-coordinated initiative aimed at improving the outcomes of investigations into outbreaks, of known or unknown aetiology, by developing one website with tools and guidance to assist the public health professionals who respond to them (https://www.who.int/emergencies/outbreak-toolkit). Special attention was paid to outbreaks for which investigations did not lead to the determination of the aetiology and those that occurred in remote or complex settings (especially conflict and hard-to-reach areas), in humanitarian crises, or in low- and middle-income countries (LMICs). The aim of this work was to describe the process of investigation of outbreaks of unknown aetiology occurring in complex settings and to determine whether certain methods, techniques or strategies of investigation aided or impeded the rapid identification of the aetiology of the outbreak. In this article we present a summary of 10 case studies and discuss key findings that could assist the development of a framework and recommendations for investigating outbreaks of unknown aetiology.
Definition of an outbreak of unknown aetiology
For this study, an outbreak of unknown aetiology refers to either an outbreak where a definitive causative agent was not identified during an initial investigation or an outbreak where a definitive causative agent was identified during an initial investigation but the source or route of transmission remained unclear. For the purpose of the study, we classified the outcome of the investigations as:

- Complete outcome: the causative agent, its source and mode of transmission were identified.
- Incomplete outcome: a causative agent was identified, but key aspects of the event, such as its source or mode of transmission, remained unexplained.
- Unsatisfactory outcome: an underlying causative agent has been proposed but remains disputed or unconfirmed, and no clear source or mode of transmission could be established.
Selection of the case studies
We began by compiling a list of investigations into public health 'events' of unknown aetiology that occurred within a 10-y period, from 2007 to 2017. We used two approaches: searching the scientific literature, specialized social media and a WHO repository of alerts using the standardized keywords described below; and conducting key informant interviews with representatives of institutions involved in investigations into outbreaks of unknown aetiology.
We conducted the search of the scientific literature in the PubMed database using the keywords: ('Disease Outbreaks'[Mesh] OR 'disease outbreak'[TW] OR 'disease outbreaks'[TW]) AND (unidentified[TW] OR 'unknown causes'[TW] OR 'unknown cause'[TW] OR mysterious[TW]), restricted to the English language. The search of the ProMed (https://www.promedmail.org, accessed 22 December 2019) and FluTrackers.com archives (https://flutrackers.com/forum/, accessed 22 December 2019) was conducted using the keywords 'undiagnosed', 'unknown', 'unknown origin' and 'mysterious'. The records of the WHO Event Management System were also searched for files relating to outbreaks of unknown origin (grey literature). Fig. 1 illustrates the search strategy and selection process for the 10 case studies.
An important criterion for case study selection was the availability of health professionals involved in the investigation to share first-hand information about the process of the investigation into the outbreak.
Source of information
Ten case studies were developed through the collection and review of primary and secondary documents from the investigations, including outbreak reports and outbreak situation reports, line lists, media articles, peer-reviewed journal articles and laboratory reports. The complete list of documents consulted is available in the Annex (Other sources of information). Key informant interviews with health professionals directly involved in the investigations were conducted using a semi-structured interview guide.
Analysis
For each of the case studies we tracked the factors and activities that either promoted or impaired the progress of the investigation towards the outcome.Investigations that identified clear causative agents were compared with those that did not.
Definition
The term 'transnational institution' describes any organization that has a presence in more than one country and routinely draws on material resources and expertise of more than one country.
Description of the ten outbreaks
The 10 case studies reviewed are summarized in Table 1.
Outcome of the investigations
A clear causative agent was identified in 7 of the 10 case studies. In five of these, both the source and the route of transmission were identified (complete outcome) (Table 2). In the remaining two case studies, a compelling causative agent was identified but the event remained unexplained (incomplete outcome). These case studies were classified as incomplete because in Liberia there was no explanation for the sudden clustering of unusual forms of meningococcal disease and septicaemia, and in Cambodia there was no definitive explanation for the unusually high case fatality ratio (CFR) observed in the enterovirus 71 outbreak at the centre of the case study. For the three remaining case studies, Vietnam, Congo and São Tomé and Príncipe, while the outbreaks have at times been attributed to various infectious and non-infectious agents, there is no consensus among the partners involved in the investigations about the agent primarily driving the event (unsatisfactory outcome).
Public health intervention
Five investigations [Angola, Nigeria, Democratic Republic of the Congo (DRC), Liberia and Cambodia] resulted in public health interventions or general awareness raising that contributed to the early conclusion of the acute events.In Cameroon, the investigation lasted for 2 y before a plausible aetiology was identified and a control programme implemented.In São Tomé and Príncipe and Ethiopia, the combination of messages and awareness of the population about hygiene, in conjunction with favourable climatic factors, likely led to the resolution of the event.We were unable to document with precision whether the events in Vietnam and Congo had been resolved by 2022.
Place and magnitude
Eight of the ten case studies occurred in African countries and two in East Asia.All the events occurred in LMICs, with seven occurring in impoverished, remote and difficult-to-access communities.The magnitude of the outbreaks varied significantly: the largest outbreak extended over an entire country, two involved poor suburbs of a capital city, six affected villages in remote areas and the smallest outbreak was restricted to a suburb of a district capital.The number of cases reported varied from 30 to > 3000.
Syndromic categories
We observed three case studies with neurological syndromes, one with a gastrointestinal syndrome associated with neurological manifestations, one with a severe respiratory syndrome with neurological manifestations, three with cutaneous syndromes, one with a fatal febrile syndrome and one with an acute jaundice syndrome. The severity of the events, based on the CFR, varied from an estimated 0 to 98% (Table 1).
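Since the CFR is the severity measure used throughout, a minimal sketch of the calculation may be useful. The counts below are illustrative only and are not taken from the case studies:

```python
def case_fatality_ratio(deaths: int, cases: int) -> float:
    """Case fatality ratio (CFR) as a percentage of reported cases."""
    if cases == 0:
        raise ValueError("CFR is undefined when no cases are reported")
    return 100.0 * deaths / cases

# Illustrative values only (not from the case studies):
print(case_fatality_ratio(49, 50))   # a severe event: 98.0
print(case_fatality_ratio(0, 120))   # a non-fatal event: 0.0
```

Note that, as discussed for the Cambodia case study, a CFR estimated from hospital-admitted patients only will overstate severity when mild cases never reach hospital.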
Investigation time frames
It was not possible to ascertain an exact start date for most of the events, but the available estimates indicated that the delay between onset of the outbreak and the initiation of an investigation ranged from 1 d (in Liberia) to several months, with a mean of 2.7 months and a median of 2.0 months. For the five case studies with a complete outcome, the duration of the investigation varied from 1 to 5 months in four of the case studies, while in the fifth, Cameroon, it lasted > 34 months with iterative periods of field work. For the two case studies with incomplete outcomes, Liberia and Cambodia, the investigations lasted 15 d and 1 month, respectively. Of the three events with unsatisfactory outcomes, two were investigated iteratively over several years (São Tomé and Príncipe, Vietnam) and, to our knowledge, one was not monitored after an initial brief investigation lasting a few days (Congo).
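The summary statistics above are ordinary means and medians over the per-event delays. A short sketch, using hypothetical delays (not the actual study data) chosen to reproduce the reported figures:

```python
from statistics import mean, median

# Hypothetical onset-to-investigation delays in months, one per event.
# These are illustrative values, not the study's data; the review reports
# a mean of 2.7 and a median of 2.0 months over the ten events.
delays_months = [0.03, 0.5, 1.0, 1.5, 2.0, 2.0, 3.0, 4.0, 6.0, 7.0]

print(f"mean delay:   {mean(delays_months):.2f} months")
print(f"median delay: {median(delays_months):.2f} months")
```

With strongly right-skewed delays like these, the median is the more robust summary, which is presumably why both are reported.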
Environmental surveys and case-control studies (CCSs)
Environmental surveys were conducted in five of the case study investigations and were considered a key factor in finding the aetiological agent or its source in three of the case studies, all of which had complete outcomes. In the other two outbreaks where environmental surveys were performed, one with an incomplete outcome and one with an unsatisfactory outcome, they did not play a role in identifying the agent or its source. CCSs were conducted as part of six investigations (Angola, Liberia, São Tomé and Príncipe, DRC, Cameroon, Cambodia) and requested, though not performed, in a further two investigations (Ethiopia, Congo). The CCSs proved to be of limited utility in most of the investigations and potentially may have hindered some of them. In Cameroon, logistical constraints arising from instability in the area meant that only a clustered and small number of suspected cases could be enrolled in the CCS, limiting extrapolation of the findings. In the Angola case study, the CCS failed to correctly identify the causative agent and identified several indirect and erroneous risk factors. In the Liberia case study, the CCS erroneously appeared to indicate that consumption of certain food products was a risk factor for infection. It is likely that these findings were an artefact of the causative agent being spread among a group of people sharing a meal; however, they were initially interpreted as indicative of a foodborne outbreak. In the DRC case study, the CCS was performed only after the correct causative agent had been identified, meaning that it had little impact on the trajectory of the investigation. In Cambodia, the CCS exploring treatment modalities as a possible cause of the high CFR was inconclusive. As it is still unclear what causative agent was responsible for the São Tomé and Príncipe event, it is difficult to assess the relevance of the findings produced by the CCS.
Three investigations benefitted from a cross-sectional door-to-door survey, which helped to describe the geographic distribution of cases and establish baseline numbers (Angola), estimate the rate of increased mortality in households (Nigeria) and define the geographic spread and epidemiological features of the event (Vietnam).
Laboratory investigations
All 10 of the case studies involved laboratory testing, and 8 involved more than two rounds of laboratory testing. In nine of the events, laboratory support was acquired outside the affected country. In eight of these events, environmental samples, including soil, water, pharmaceuticals and food, were tested in addition to human biological samples.
Two distinct approaches to laboratory testing emerged across the case studies. One approach involved the use of targeted testing to confirm the presence of a particular causative agent suspected through clinical assessment, environmental investigation and interpretation of the descriptive epidemiology. The other approach involved identifying the causative agent by subjecting human or environmental samples to either non-specific tests or tests that simultaneously screen for a multitude of possible causative agents. Both approaches proved effective at times. 'Scattershot' testing appeared to offer significant advantages when investigating an outbreak of unknown aetiology where there was an atypical clinical presentation of the causative agent, as occurred in Liberia. In the case of chemical poisonings, laboratory investigations were most effective when the clinical syndrome pointed to a specific substance or group of substances.
In all 10 case studies, diagnostic testing revealed high burdens of other infectious or environmental disease-causing agents. There was also one incident of a false-positive result that nearly ended the investigations prematurely. There were also two false-negative results, the impacts of which were mitigated through the testing of different sample types, the use of a variety of tests and the testing of severely ill patients.
The collection, storage and transport of samples often proved logistically challenging and were hampered by a lack of technical knowledge or adequate material, issues around interinstitutional coordination and significant resource constraints. Recurring issues included uncertainty around which samples to collect, poor quality of the samples, inadequate sample documentation, problems with packaging and storage during transport, difficulties getting official clearance for sample export, difficulties identifying an appropriate laboratory to perform the testing, the cost of specific laboratory tests and samples going missing at all stages of the testing process. A failure to communicate salient medical information and contextual indications that might guide laboratory investigations was reported as hampering testing efforts. Difficulty in linking laboratory test results to individual patients was widely reported as impairing the interpretation of laboratory findings.
Partnership and collaboration
All 10 of the case study investigations involved the WHO (represented by a country office, regional office or headquarters) and at least one other transnational institution in addition to the national ministry of health. Nine of the case studies featured at least three transnational institutions and most involved more. The most common transnational institutions were Médecins Sans Frontières (MSF), the US Centers for Disease Control and Prevention (CDC) and Institut Pasteur. The CDC was involved in seven investigations either directly, via its national offices or via the WHO Global Outbreak Alert and Response Network. Institut Pasteur and MSF played important roles in five of the investigations.
Doctors or toxicologists outside the affected country were consulted in seven case studies through teleconferences and the circulation of photos, videos or case descriptions of patients. This consultation facilitated the identification of the agent or the syndrome in five case studies, in some cases by concluding that a chemical cause was unlikely, thus narrowing the focus. External collaborators assisted with the identification of suitable laboratories to undertake testing. This was particularly important for chemical analyses, as there is no established global network of toxicological laboratories.
Clinical characterization and development of a case definition
In five (half) of the case studies, the event was initially incorrectly attributed to a specific notifiable disease, namely meningitis, malaria, hepatitis or Ebola. These early misdiagnoses were based on the initial clinical assessment, the location of the event, the season and local epidemiology. For example, in Nigeria, meningitis was initially suspected, so fever was included in the first case definition despite many cases having been afebrile, as the true cause was lead poisoning.
Unusual or atypical clinical presentations were a recurring hindrance. For example, the unusually severe clinical pictures with high CFRs in the Ethiopia and Cambodia case studies initially appeared incompatible with the correct diagnoses of hepatitis A and EV-71, respectively. In three further case studies (Angola, Liberia, Cameroon), the agent responsible for the outbreak was suspected early during the investigation but discarded as a result of perceived incompatibilities with the clinical picture or epidemiological features (compared with academic knowledge) or a lack of comprehensive knowledge of local disease incidence. In the Angola case study, bromide compounds were initially considered as a potential causative agent. They were discarded because the clinical picture did not correspond to any known descriptions of acute or chronic bromide poisoning. The unexpected clinical presentations were secondarily attributed to a range of factors, including co-infections (Ethiopia, Cameroon), the presence of comorbidities, malnutrition and poor health status (Ethiopia), alcoholism (São Tomé and Príncipe, Liberia), the use of immunosuppressant therapies before admission (Cambodia), the genetics of the affected population (Cambodia, Vietnam) or the method of estimating the CFR using only hospital-admitted patients in the denominator (Cambodia).
Epidemiological information
For two events with unsatisfactory outcomes, Vietnam and Congo, the absence of systematic reporting of patient histories and thorough descriptive epidemiology hindered the exploration of possible routes of transmission and the generation of an adequate explanation for the event.
Two outbreaks presented with unusual epidemiological features. First, in Liberia, the apparent presentation of a point-source outbreak following a social event triggered the search for an aetiological agent associated with food poisoning. Later, the outbreak was confirmed as a cluster of septicaemia caused by Neisseria meningitidis serogroup C, which is spread through human-to-human transmission. By the time the correct aetiological agent was identified, it was no longer possible to explore the transmission pattern of this unique outbreak of N. meningitidis serogroup C septicaemia. Second, in Cameroon, the results of an entomological investigation published 1 y after the alert identified the agent of visceral leishmaniasis in sand flies captured in the area, a geographic focus of visceral leishmaniasis that was previously unknown. This triggered further epidemiological and laboratory investigations and allowed the identification of the source of the long-lasting outbreak, for which leishmaniasis had already been suspected and discarded.
Laboratory information
Laboratory testing of samples identified the presence of numerous aetiological agents in all the case studies. In the Congo case study, four cases that tested positive for yellow fever at the field level were secondarily identified as false positives after a second round of testing; in the same case study, an incidental finding of a holoendemic infectious agent, Plasmodium falciparum, appears to have truncated the investigation.
Perceived risk and the consequent prioritization of laboratory testing for the highest-risk pathogens hindered investigation of the outbreak in Ethiopia. While hepatitis A virus was the most likely causative agent given the clinical and epidemiological picture, and was subsequently proven to be correct, testing for it was not requested before the second round of samples was sent to the Dakar Pasteur Institute. Instead, initial laboratory investigations focused on testing for severe diseases such as yellow fever, Ebola virus disease and dengue fever.
Process of investigation
The process of identifying the correct causative agent was often iterative, requiring multiple forms of evidence gathering. The sequence of investigative activities varied between case studies and sometimes involved repeating the same investigative activity many times, such as laboratory testing, CCSs and environmental surveys.
Documentation of the investigations and their findings
Published accounts of previous investigations into similar events proved helpful in investigating an outbreak with an unusual clinical or epidemiological presentation in two of the case studies (Ethiopia, Angola). At the time of this writing, five of the case studies have been published in peer-reviewed journals. However, little is said about the challenges faced in the investigations. Accounts of the investigations appear in reports on institutional websites, in ProMED posts or as training or conference materials.
Discussion
We focused our efforts on identifying and describing the transnational responses to outbreaks that do not systematically generate public attention and that suffer from limited resources for investigation. These settings have proven to be possible sources of emerging diseases with pandemic potential, such as SARS in 2003 20 and the H1N1 swine-origin influenza pandemic in 2009. 21 An important criterion for selection of the 10 case studies was the availability of first-hand information on the investigations from public health professionals (most of them are co-authors of this study). We intended to use this information to evaluate the investigatory practices and the difficulties faced, as well as to gain insights into factors that favour or limit identification of aetiologies, sources or transmission patterns in remote settings. To our knowledge, best practices for the investigation of outbreaks of unknown aetiology in remote settings do not yet exist.
As a retrospective analysis of the investigations, this study has some limitations. The accounts of the investigations are not all verified in official reports and might sometimes even contradict the official version. As some outbreaks occurred 10 y ago, there may be some gaps in informants' recall of the investigation process and some descriptions may be incomplete.
It was difficult to clearly identify investigative methods that universally favour or limit outbreak resolution because of the wide variation in the type, severity, time frame and scale of the events. However, some common challenges were identified: difficulty in generating an effective case definition; the iterative nature of the investigative activities, with consequent resource needs; difficulty in integrating the social and epidemiological environments into the analysis; difficulties in obtaining timely and good-quality laboratory investigations; and multiple positive laboratory findings of variable relevance. Factors that facilitated the investigations were the use of remote expert support via video or pictures of clinical findings and collaboration between national and international institutions.
Interviewees reported that a major challenge during the investigation of outbreaks of unknown aetiology was obtaining accurate and complete descriptions of clinical signs and symptoms, patient history and the complete epidemiology of the event. As with Silarug et al., 22 who described in the 1990s the association in one Thai outbreak of influenza A with dengue fever, we highlight that secondary medical conditions, whatever the cause, play an important role in transforming the medical picture into one that is unexpected, atypical or confusing when compared with current knowledge and medical textbooks. One consequence of complex clinical presentations is that investigations are hindered as a result of an imprecise case definition. New disease trackers have been developed and may eventually serve to accelerate the identification of the disease 23 ; however, the quality of the case description remains the key to the success of the disease tracker.
The complexity of the scenarios makes iterative investigations likely, first excluding the more common causes or those easier to assess. However, investigations could have been better targeted in some cases had more precise and complete data been gathered first. Our study shows that when investigating outbreaks in complex settings, it is necessary to take account of the setting and context, including societal factors directly or indirectly affecting health. In investigating a cluster of cancer cases in the USA, Simpson et al. 24 recommended active listening, which seeks out people's perspectives, validates their concerns and engages them in the investigative process. In Cuba, the authors of an investigation into an outbreak of optic and peripheral neuropathies documented the myriad challenges in detecting, investigating and intervening in a syndromic problem in a complex setting of substantial social and economic transition, as was the situation in most of our case studies. 1 Guha-Sapir and Scales 25 declared that understanding the political dynamics of the outbreak setting is important to undertaking meaningful research and getting the results out in time (or at all).
In Ethiopia, Liberia and Cameroon, laboratory investigations focused on a variety of agents that were unlikely given the clinical and epidemiological analysis. Cornejo et al. 26 reported how an unidentified cluster of infection in the Peruvian Amazon region was wrongly attributed using standard microbiological methods; the cause could be identified only by polymerase chain reaction using universal primers. In the DRC, the outbreak of neurological syndrome initially attributed to meningitis and finally linked to falsified drugs emphasizes the importance of investigating atypical clinical presentations, of building precise and objective case definitions and of multidisciplinary approaches. 13 The results of laboratory investigations in the DRC, Ethiopia, Cameroon, Vietnam and São Tomé and Príncipe reveal that more than one agent can be identified in patients' biological tests during investigations, but these are not always pertinent to the outbreak. This highlights the danger of deciding on the cause based on the first positive result. Several factors can contribute during an outbreak, or there may be more than one event occurring concurrently, leading to atypical epidemiology.
Based on analysis of the case studies, we propose the following recommendations to improve the process of investigating outbreaks of unknown aetiology:

- Propose precise, objective, operational and logical case definitions using available information from medical and laboratory findings, and limit these with time and place criteria. Determine the sensitivity and specificity required and re-evaluate as often as needed.
- Be aware of the effects of comorbidities on atypical clinical presentations or the epidemiological picture and of the possibility of false-positive and false-negative laboratory results.
- Review case definitions and hypotheses as new information becomes available.
- Collect, via key informants, information on cultural habits, societal structure and behaviours that can affect the transmission of an agent.
- Actively seek local input into investigations via key informants to draw hypotheses, inform interpretation of the data and findings and discuss hypotheses for the cause, source or transmission.
- Conduct additional studies, such as case-control studies, only if you can guarantee that they will be conducted with enough quality to allow interpretation of the findings with confidence (watch the size of the sample, the selection of controls and the quality and reliability of answers).
Proactive preparation for future outbreaks:

- Have modular questionnaires ready to adapt to the situation, covering medical, epidemiological and environmental issues.
- Establish agreements/memoranda of understanding in advance between institutions usually engaged in such investigations and with ministries of health.
- Establish links with specialist resources such as international clinical experts, environmental experts and reference laboratories, and have clear processes for using them.
- Support (including with funding) outbreak investigation infrastructure, such as field epidemiology training programmes and rapid response teams that can facilitate complex investigations.
Learn from past outbreaks:

- Publish descriptions of investigations of complex outbreaks, even when the cause is not determined, to alert the scientific community so that new knowledge from outbreaks can inform updates to medical texts.
- Strengthen the WHO Emergency Monitoring System recording of follow-ups and outcomes to serve as an open international database of all outbreaks reported.
Conclusions
To support countries in addressing the investigation of outbreaks of unknown aetiology in remote and complex settings, the WHO Outbreak Toolkit project will use the observations gathered in this review to develop guidance and tools to improve the quality and timeliness of initial data collection and investigation of outbreaks of unknown aetiology. The toolkit hosts several tools: a recently published WHO manual for investigating outbreaks of possible chemical aetiology, 27 guidance for investigating clusters of respiratory disease of unknown aetiology and other syndromes 28 and a questionnaire for early investigation of outbreaks of unknown aetiology, 29 among others.
Data were gathered from internal organizational reports and from key informant interviews. The non-publicly available data are available upon request.
Incomplete outcome: the causative agent was identified, but the source or mode of transmission or what triggered the outbreak remained unclear.
Figure 1. Selection process of the 10 case studies for the review of investigations of outbreaks of initially unknown aetiology in complex settings.
Table 1. Main characteristics of the selected case studies.
Table 2. Classification of the 10 case studies according to the outcome of the outbreak investigation, country, syndrome and year of initial investigation.

Be aware of and document the event that triggered the alert. Document what has happened since the alert. Many factors can modify the transmission mode of an agent, which can switch from a unique source to person-to-person transmission, or the reverse.
Association of glial and neuronal degeneration markers with Alzheimer's disease cerebrospinal fluid profile and cognitive functions
Background

Neuroinflammation has gained increasing attention as a potential contributing factor in the onset and progression of Alzheimer's disease (AD). The objective of this study was to examine the association of selected cerebrospinal fluid (CSF) inflammatory and neuronal degeneration markers with the signature CSF AD profile and cognitive functions among subjects at the symptomatic pre- and early dementia stages.

Methods

In this cross-sectional study, 52 subjects were selected from an Icelandic memory clinic cohort. Subjects were classified as having an AD (n = 28, age = 70, 39% female, Mini-Mental State Examination [MMSE] = 27) or non-AD (n = 24, age = 67, 33% female, MMSE = 28) profile based on the ratio between CSF total-tau (T-tau) and amyloid-β1–42 (Aβ42) values (cut-off point chosen as 0.52). Novel CSF biomarkers included neurofilament light (NFL), YKL-40, S100 calcium-binding protein B (S100B) and glial fibrillary acidic protein (GFAP), measured with enzyme-linked immunosorbent assays (ELISAs). Subjects underwent neuropsychological assessment for evaluation of different cognitive domains, including verbal episodic memory, non-verbal episodic memory, language, processing speed, and executive functions.

Results

Accuracy coefficients for distinguishing between the two CSF profiles were calculated for each CSF marker and test. Novel CSF markers performed poorly (area under curve [AUC] coefficients ranging from 0.61 to 0.64) compared to tests reflecting verbal episodic memory, which all showed fair discrimination (AUC > 0.70). LASSO regression with a stability approach was applied for the selection of CSF markers and demographic variables predicting performance on each cognitive domain, both among all subjects and among only those with a CSF AD profile. Relationships between CSF markers and cognitive domains, where the CSF marker reached the stability selection criterion of > 75%, were visualized with scatter plots.
Before calculation of the corresponding Pearson's correlation coefficients, composite scores for cognitive domains were adjusted for age and education. GFAP correlated with executive functions (r = − 0.37, p = 0.01) overall, while GFAP correlated with processing speed (r = − 0.68, p < 0.001) and NFL with verbal episodic memory (r = − 0.43, p = 0.02) among subjects with a CSF AD profile.

Conclusions

The novel CSF markers NFL and GFAP show potential as markers for cognitive decline among individuals with core AD pathology at the symptomatic pre- and early stages of dementia.
Introduction
In recent years, a paradigm shift in the research criteria of Alzheimer's disease (AD) has occurred as the primary focus has shifted from clinical to biological criteria. The emphasis is now on the pathology [1], which is believed to start decades before the appearance of clinical symptoms [2]. The core cerebrospinal fluid (CSF) biomarkers reflecting the hallmarks of AD pathology, extracellular amyloid plaques (Aβ), and neurodegeneration (total tau [T-tau] and phosphorylated tau [P-tau]) have been at the center of this shift and have been extensively studied [3]. Although the diagnostic accuracies of these markers are generally satisfactory [4], their levels are relatively constant in the symptomatic stages of the disease and do not correlate well with the progression of cognitive decline [5][6][7]. This necessitates the need for exploration of novel biomarkers that help in better understanding the different aspects of AD pathology, its progression, and clinical manifestation.
Increasing evidence shows that inflammation is a contributing factor in the pathogenesis and development of AD and other neurodegenerative diseases [8,9]. A number of studies show that Aβ toxicity and plaques induce an immune response, including activation of astrocytes and microglia, the immune cells of the brain [10][11][12]. Furthermore, activation of these cells is also thought to play a role in the formation and progression of neurofibrillary tangles (NFTs), contributing to neuronal dysfunction and loss [13]. Glial activation markers are, therefore, of high interest when it comes to exploring new biomarkers for the diagnosis of dementia.
GFAP is a key intermediate filament protein and marker of reactive astrocytes, whose expression has been associated with amyloid plaque load and, to a lesser extent, the number of NFTs [23][24][25].
Inflammation in the brain and its role in AD can be studied indirectly through the analysis of CSF proteins. Increased levels of CSF YKL-40, S100B, and GFAP have been observed in AD patients compared to healthy controls, although results have not been consistent [26]. The relationship between inflammatory and core AD markers (Aβ, tau) in CSF has also been explored. Previous studies have found a strong positive association between CSF YKL-40 and tau proteins but not between YKL-40 and Aβ 42 [19,[27][28][29]. YKL-40 has also been shown to strongly correlate with neuronal degeneration marker neurofilament light (NFL) in CSF [30], further supporting the association between glial activation and neurodegeneration. NFL is mainly located in myelinated axons. Therefore, its levels also reflect white matter changes, with recent studies indicating a potential for this protein as both a diagnostic and progression marker in AD and other neurodegenerative diseases [26,31]. Few studies have examined the relationship between S100B and GFAP with core AD markers in CSF. Hov et al. [32] found an association between S100B and P-tau but not Aβ 42 among elective surgery patients free from dementia and delirium. Ishiki et al. [33] did not find an association between CSF GFAP and core markers within a dementia cohort.
Loss of memory is typically among the first clinical symptoms of AD, marking the beginning of cognitive decline. The medial temporal lobe is an early site of tau accumulation, and its dysfunction may underlie episodic memory decline [34]. Other cognitive domains are also involved in AD, such as language, non-verbal episodic memory, and executive functions [35].
In the most recent research criteria from the International Working Group for the diagnosis of AD published in 2014 [36], the diagnosis of prodromal AD requires both the presence of cognitive symptoms and an AD signature biomarker profile (increased amyloid positron emission tomography [PET] deposition or the combination of lowered CSF amyloid-β 1-42 and elevated CSF tau). It is essential for the evaluation of novel biomarkers to examine their relationship with both entities separately, independent of diagnosis. That type of approach could enhance both understanding of the underlying pathology of AD and of the sequence of events leading to cognitive impairment. The first aim of this study was to assess the ability of glial (YKL-40, S100B, GFAP) and neurodegeneration (NFL) markers in CSF to discriminate between different CSF profiles (AD and non-AD) among subjects at the symptomatic pre- and early stages of dementia. In addition, the results were compared to the discrimination ability of neuropsychological tests, which are commonly used to aid AD diagnosis. The second aim was to investigate the relationship between the CSF markers and neuropsychological tests reflecting different cognitive domains.
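The discrimination ability referred to here is summarized later as an AUC. For a single continuous marker, the AUC can be computed without fitting any model, via its Mann-Whitney interpretation: the probability that a randomly chosen AD-profile subject has a higher marker value than a randomly chosen non-AD subject, with ties counted as one half. A minimal sketch, with hypothetical marker concentrations that are not the study's data:

```python
def auc_two_groups(positives, negatives):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    scores higher, with ties counted as one half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / (len(positives) * len(negatives))

# Hypothetical CSF marker concentrations (illustrative only):
ad_profile = [900, 1100, 1300, 1500]   # e.g. NFL-like values, pg/mL
non_ad = [700, 800, 1000, 1200]

print(round(auc_two_groups(ad_profile, non_ad), 2))  # 0.81
```

An AUC of 0.5 corresponds to chance-level discrimination, 0.61-0.64 (the novel markers here) to poor discrimination and > 0.70 (the verbal episodic memory tests) to fair discrimination.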
Subjects
Individuals referred to The National University Hospital of Iceland Memory Clinic during a 4-year period who had (1) a score between 24 and 30 on the Mini-Mental State Examination (MMSE) and (2) a score of 4.0 or less on the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) [37] were invited to join a prospective study on mild cognitive impairment (MCI, n = 218). The exclusion criteria were (1) cognitive impairment that, without a doubt, could be explained by a condition other than dementia; (2) difficulties participating due to health or social issues; and (3) residency outside the Reykjavík Capital Area. On entering the study, each subject underwent various assessments, including a standard clinical and neuropsychological assessment and brain magnetic resonance imaging (MRI) for evaluation of medial temporal lobe atrophy (MTA). Lumbar puncture for collection of CSF, which was optional per the requirement of the National Bioethics Committee, was also carried out. For this particular study (Fig. 1), only subjects with CSF samples and a complete neuropsychological assessment were selected from the cohort (n = 56). The final sample included 52 subjects, as four were removed due to an excessively high value of CSF GFAP (n = 1) or blood contamination in the CSF sample (n = 3). Clinical diagnosis of AD was based on the criteria for probable AD dementia defined by the National Institute on Aging-Alzheimer's Association (NIA-AA) [38], with evidence of AD pathophysiological processes (based on MTA score and/or analysis of core CSF markers). Patients with Lewy body dementia (LBD) were diagnosed based on the consensus criteria of McKeith et al. [39]. MCI diagnosis required fulfillment of the Winblad criteria [40], with those not fulfilling the criteria diagnosed as having subjective cognitive impairment (SCI).
CSF collection and analysis
CSF was collected via lumbar puncture with a 22-gauge spinal needle at the L3/4 or L4/5 interspace. Uncentrifuged samples were frozen in 2-ml polypropylene tubes and stored at − 80 °C. Commercially available sandwich enzyme-linked immunosorbent assays (ELISAs) were used for measurements of all proteins. Analyses of the core AD markers T-tau (IBL International, Hamburg, Germany) and Aβ 42 (IBL International, Hamburg, Germany) were carried out in the ISO 15189 accredited medical laboratory MVZ Labor P.D. Dr. Volkmann und Kollegen GbR (Karlsruhe, Germany). Assays for the novel markers NFL (Uman Diagnostics, Umeå, Sweden), YKL-40 (Quantikine ELISA Human Chitinase-3-like 1; R&D Systems, MN, USA), S100B (BioVendor GmbH, Heidelberg, Germany), and GFAP (BioVendor GmbH, Heidelberg, Germany) were performed in technical duplicates according to the manufacturer's instructions in a laboratory at the University of Iceland. The mean intra-assay CV was < 10% and the mean inter-assay CV < 15% for all assays.
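As an illustration of the duplicate-based quality check quoted above, the sketch below computes the mean intra-assay coefficient of variation from technical duplicates. This is a minimal stdlib-only example; the function name and data layout are ours, not from the paper.

```python
from statistics import mean, stdev

def intra_assay_cv(duplicates):
    """Mean intra-assay coefficient of variation (%) from technical
    duplicates: for each sample, CV = SD / mean of its replicate
    measurements; the quality threshold quoted in the text is < 10%."""
    cvs = [100 * stdev(pair) / mean(pair) for pair in duplicates]
    return mean(cvs)
```

The same per-sample CV formula applies to inter-assay CV when the replicates come from different assay runs instead of the same plate.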
Subject grouping based on CSF measures
Each subject was classified independently of clinical diagnosis on the basis of CSF T-tau and Aβ 42 values. A T-tau/Aβ 42 ratio cut-off of 0.52 was chosen based on results from a large memory clinic cohort study [41], giving a sensitivity of 93% for AD and a specificity of 83% for controls. A positive CSF AD profile was defined as a T-tau/Aβ 42 ratio > 0.52. The same ratio was also used as a part of the clinical diagnosis of AD, explaining the full concordance with the CSF AD profile.
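The classification rule above is a simple threshold on the ratio of the two core markers; a minimal sketch (function name is ours, cut-off from the text):

```python
def csf_profile(t_tau, abeta42, cutoff=0.52):
    """Classify a subject's CSF profile from T-tau and Abeta42 values
    (same units). A ratio strictly above the cut-off (0.52 in the text)
    is labelled an AD profile, otherwise non-AD."""
    ratio = t_tau / abeta42
    return "AD" if ratio > cutoff else "non-AD"
```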
Neuropsychological tests
All subjects underwent a detailed neuropsychological assessment performed by licensed psychologists. Five cognitive domains, commonly affected by aging and AD, were assessed using seven tests (Table 1). For the evaluation of verbal episodic memory, two tests were used. The first, the Rey Auditory Verbal Learning Test (RAVLT), consisted of 15 nouns read aloud by the examiner for five consecutive trials, each followed by a free-recall test. After a 30-min delay, subjects were required to recall the words without being reread the list [42]. The second test was composed of a story [43], which included 25 ideas verbally presented by the examiner. Right after the story was presented (immediate recall), the subject was asked to repeat what they remembered without being given any clues (free recall). Thirty minutes later, subjects were asked to recall the story again (delayed recall). The Rey-Osterrieth complex figure test (ROCF) was used to assess nonverbal episodic memory [42]. The subject was asked to reproduce a complicated line drawing, first by copying it free-hand, second by drawing it from memory (immediate recall), and third by drawing it after a 30-min delay (delayed recall). Verbal fluency [44] was evaluated by having subjects produce as many animal names and words starting with the letters H and S as possible in 60 s. Two subtests were used to evaluate processing speed. Part A of the Trail Making Test (TMT-A) [45] required subjects to connect 25 numbered circles positioned randomly on a piece of paper. The first and simplest part of the Stroop test (Word reading) was also used for the evaluation of the same cognitive domain [46]. Subjects were shown a list of color names (red, green, yellow, or blue), each printed in black ink, and told to read them out loud as rapidly as possible. For the evaluation of executive functions, the Digit Symbol Substitution Test (DSST), Trail Making Test B (TMT-B), and the Stroop test 4th/3rd parts were used.
The DSST [47] is a paper-and-pencil test that requires the participant to match symbols to numbers according to a key located at the top of the page. The subject copied each symbol into the space below a row of numbers, and the number of correct symbols completed within 120 s constituted the score. TMT-B includes both numbers (1-13) and letters (A-L), with the subject drawing lines between circles, alternating between numbers and letters (1-A-2-B-3-C, etc.). In Stroop part 4, subjects had to name the color of words when color and meaning were incongruent. Part 3 (naming the colors of printed squares) was used to control for speed by calculating the ratio between the two parts.
Statistical analysis
Composite scores for each cognitive domain were calculated by averaging neuropsychological test z-scores and subsequently converting those averages into z-scores. Before the computation of composite scores, z-scores for tests measuring reaction time (TMT, Stroop test, DSST) were reversed for the purpose of test consistency (higher scores always indicating better performance). Receiver operating characteristic (ROC) curves were constructed for the differentiation between CSF AD and non-AD profiles. The discrimination abilities of each CSF marker and cognitive domain were compared using the area under the curve (AUC) method, according to DeLong et al. [48]. The AUC is the probability that a randomly selected pair of subjects, one from each CSF profile group, is correctly classified. Stability selection was employed in combination with least absolute shrinkage and selection operator (LASSO) regression to identify stable predictors in multivariable models [49]. LASSO is a penalized approach to multiple regression and is especially useful when dealing with multicollinearity (highly correlated predictors): a penalty is introduced, reducing the large variance due to multicollinearity in exchange for a tolerable amount of bias.
It also performs variable selection, as it forces the coefficients of some variables to shrink towards zero. Stability selection is based on resampling the data to avoid overfitting, which can be advantageous when dealing with smaller data sets. Instead of fitting one model on the whole sample, many models are fitted on subsamples drawn from it. Stability selection was performed using the function stabsel in the package stabs, which uses the package glmnet for LASSO model fitting [50,51]. The cut-off value for stability selection was set to 75% (the percentage of times a variable was selected into a model) and the per-family error rate (PFER) to 1 for all analyses. Each subsample was half the size of the original one, with 100 subsamples being drawn. LASSO logistic regression was applied to select the novel CSF markers and composite tests that most accurately distinguish between the two CSF profiles. LASSO linear regression was used to select the variables, out of the CSF markers and demographic variables, predicting with most accuracy the composite z-score for each cognitive domain. Two LASSO regressions with stability selection were performed for each cognitive domain, one including all subjects and the other including only those with a CSF AD profile. Scatter plots were used for visualization of the selected relationships between CSF markers and cognitive domains. Cognitive domain measures were adjusted for age and education before the calculation of the corresponding Pearson's correlation coefficients. For the adjustment, linear regression models were created with each composite test z-score as the dependent variable and age and education as independent variables. The residual for each subject was subsequently calculated (observed minus predicted score). Significance values were not adjusted for multiple comparisons, as this study was viewed as explorative, with emphasis on discovering relationships.
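The composite scoring described earlier in this section (z-score each test, flip the sign of reaction-time tests so higher is always better, average across tests, then re-standardize) can be sketched as follows. This is a stdlib-only illustration under our own function names, not code from the study:

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardize raw scores to z-scores (population SD)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def composite_zscores(domain_tests, reversed_tests=()):
    """Composite z-score per subject for one cognitive domain:
    z-score each test, reverse reaction-time tests (TMT, Stroop, DSST
    in the paper) so higher always means better, average across tests,
    then re-standardize the averages."""
    per_test = []
    for name, scores in domain_tests.items():
        z = zscores(scores)
        per_test.append([-v for v in z] if name in reversed_tests else z)
    averaged = [mean(vals) for vals in zip(*per_test)]
    return zscores(averaged)
```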
All statistical analyses were performed using R (version 3.6.1, The R Foundation for Statistical Computing).
Results
Table 2 shows the demographic, pathophysiological, and clinical characteristics of the cohort by CSF profile. There were no significant differences between the groups in age, length of education, novel CSF protein levels, or gender frequencies. Boxplots comparing the distributions of CSF protein levels (NFL, YKL-40, S100B, GFAP) between the profile groups are presented in Additional file 1, S1a-d. The CSF AD profile group showed significantly worse performance on the MMSE, RAVLT, Story, ROCF immediate recall, and Verbal fluency animal tests compared to the non-AD group (p < 0.05).
Pearson's correlations between CSF markers
Pearson's correlations between the CSF markers, age, and length of education are presented in Fig. 2. The inflammatory markers YKL-40 and S100B and the neurodegeneration markers NFL and T-tau all correlated positively and significantly with each other. The highest correlation was found between NFL and YKL-40 (r = 0.62, p < 0.001). GFAP correlated significantly only with the CSF marker S100B (r = 0.53, p < 0.001). No CSF markers correlated significantly with Aβ 42. All the CSF markers, except for Aβ 42, correlated positively with age. Length of education correlated weakly and negatively with T-tau (r = − 0.29, p = 0.03).
Accuracy of CSF markers and cognitive domains in distinguishing between CSF profiles
Accuracies for distinguishing between CSF AD and non-AD profiles were based on univariable ROC analyses (Table 3). AUCs for the novel CSF markers ranged from 0.61 to 0.64, with the lower limit of each confidence interval below the value of 0.5. In comparison, the neuropsychological tests reflecting verbal episodic memory had the highest accuracy of all measurements, with all AUCs over 0.70, which is considered fair [52]. Results for the verbal episodic memory composite test by gender are presented in Table S1 (Additional file 1); AUC coefficients were overall higher for women (n = 19) than for men (n = 33). LASSO logistic regression with stability selection was performed to select the variables distinguishing between the CSF profile groups with the highest consistency. Nine possible predictors could be selected: the four novel CSF markers and the five composite tests representing each cognitive domain. Only the test reflecting verbal episodic memory was selected as a predictor, with a selection frequency (96%) above the cut-off value. All other possible predictors had a much lower selection frequency (≤ 20%). Figure 3 illustrates the ROC curves for the two cognitive domains and the CSF measure with the highest AUC from Table 3. Verbal episodic memory (AUC = 0.80) was superior in distinguishing between CSF AD and non-AD profiles compared to non-verbal episodic memory (AUC = 0.65) and CSF GFAP (AUC = 0.64).
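The probabilistic reading of the AUC used in the text (the chance that a randomly drawn subject from one CSF profile group scores higher than one from the other) can be computed directly by pairwise comparison. A minimal sketch, not the DeLong implementation used in the study:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly drawn positive (e.g., CSF
    AD profile) scores higher than a randomly drawn negative (non-AD),
    counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

DeLong's method [48], used in the paper to compare AUCs, builds on the same pairwise statistic but additionally estimates its variance so that two ROC curves can be tested against each other.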
Selection of predictors for scores on each cognitive domain
LASSO linear regression with a stability selection was applied for identifying a set of variables (CSF markers and demographic variables) predicting cognitive scores with the highest consistency (Fig. 4). Two analyses were performed for each of the five domains, one including all subjects (n = 52) and the other only among those with a CSF AD profile (n = 28). Variables with stability selection above 75% were considered reliable predictors. GFAP (78%) was selected as a predictor for executive functions (Fig. 4a) and age (95%) as a predictor for non-verbal memory (Fig. 4b) within the whole cohort. Among subjects with a CSF AD profile, GFAP (87%) and age (81%) were selected as predictors for processing speed (Fig. 4c) and NFL (80%) for verbal episodic memory (Fig. 4d). No variables reached the stability selection criteria as predictors of score reflecting language (Fig. 4e).
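The stability selection procedure used above (fit a selector on many half-size subsamples and keep variables chosen in at least 75% of them) can be sketched as follows. This is a toy stdlib-only version: `select_fn` stands in for the LASSO fit performed with stabs/glmnet in the paper, and all names are ours:

```python
import random

def stability_selection(X, y, select_fn, n_subsamples=100, cutoff=0.75, seed=0):
    """Count how often each variable is selected on random half-size
    subsamples. select_fn(X_sub, y_sub) must return a set of selected
    column indices (a LASSO fit in the paper; any selector can be
    plugged in here). Variables selected in >= cutoff of the subsamples
    are returned as 'stable'."""
    rng = random.Random(seed)
    n = len(y)
    counts = {}
    for _ in range(n_subsamples):
        idx = rng.sample(range(n), n // 2)  # half-size subsample, as in the paper
        Xs = [X[i] for i in idx]
        ys = [y[i] for i in idx]
        for j in select_fn(Xs, ys):
            counts[j] = counts.get(j, 0) + 1
    return {j for j, c in counts.items() if c / n_subsamples >= cutoff}
```

The per-family error rate control (PFER = 1 in the paper) additionally bounds the expected number of falsely selected variables; it is omitted from this sketch.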
Pearson's correlations between selected CSF markers and cognitive domains
Relationships between CSF measures and cognitive domains, as selected with the LASSO regression-stability selection analyses (Fig. 4), were visualized using scatter plots. It is well established that normal aging and the level and quality of education can influence cognitive test performance [53]. Composite z-scores were therefore adjusted for age and education prior to the calculation of Pearson's correlations.
Fig. 2 Pearson's correlation matrix between CSF markers, age, and length of education. Colored squares indicate statistical significance (p < 0.05). CSF measures were natural log-transformed.
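The adjustment described above (regress each composite score on covariates and keep the residuals, observed minus predicted) can be sketched for a single covariate; the paper adjusts for both age and education, which extends this to two regressors. Function name is ours:

```python
def residualize(scores, covariate):
    """Remove the linear effect of one covariate (e.g., age) from a list
    of composite z-scores by simple least squares, returning the
    residuals (observed minus predicted), as in the age/education
    adjustment described in the text (single-covariate version)."""
    n = len(scores)
    mx = sum(covariate) / n
    my = sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in covariate)
    sxy = sum((x - mx) * (y - my) for x, y in zip(covariate, scores))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(covariate, scores)]
```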
CSF NFL levels did not significantly correlate with verbal episodic memory among all subjects (r = − 0.26, p = 0.06, Fig. 5a). Analysis by CSF profile (Fig. 5b) revealed a moderate, significant correlation among subjects with a CSF AD profile (r = − 0.43, p = 0.02) but none among those without (r = − 0.05, p = 0.82). Correlations between NFL levels and the individual neuropsychological tests reflecting verbal episodic memory are presented in Additional file 1, S2a-e. T-tau did not reach the selection criteria for any cognitive domain. It is, nonetheless, of interest to compare the results for T-tau to those for NFL, as both proteins are markers of neurodegeneration. The association between T-tau and verbal episodic memory was similar to that of NFL within the whole cohort (r = − 0.28, p < 0.04, Fig. 5c) but did not reach significance within the CSF AD group (r = − 0.15, p = 0.45) when analyzed by CSF profile (Fig. 5d). The correlation between CSF GFAP levels and processing speed did not reach significance within the whole cohort (r = − 0.27, p = 0.06, Fig. 5e) or among those with a CSF non-AD profile (r = 0.02, p = 0.94, Fig. 5f). A moderately strong correlation was, on the other hand, detected among those with a CSF AD profile (r = − 0.68, p < 0.001, Fig. 5f). A weak, negative correlation was found between CSF GFAP levels and executive functions, both within the whole cohort (r = − 0.37, p = 0.01, Fig. 5g) and among subjects with a CSF AD profile (r = − 0.39, p = 0.04, Fig. 5h). The corresponding correlations between CSF GFAP levels and the individual neuropsychological tests reflecting processing speed and executive functions are presented in Additional file 1, Fig. S3a-e. Additional file 1 also includes scatter plots identical to those shown in Fig. 5 without adjustment for age and education (Fig. S4a-h), as well as Pearson's correlations between the CSF markers, age, and education and the composite scores of each cognitive domain, both unadjusted and adjusted for age and education (Table S2).
Discussion
We compared different CSF biomarkers reflecting neurodegeneration (NFL) and inflammation (YKL-40, S100B, and GFAP) in relation to core CSF AD markers and cognitive functions in a cohort of subjects at the pre- and early symptomatic dementia stages. While our results indicated that these CSF markers did not accurately distinguish between AD and non-AD CSF profiles, they exhibited different patterns of association with certain cognitive domains, as evaluated by various neuropsychological tests. This pattern was mainly observed among subjects with a CSF AD profile. Within that group, levels of the neurodegeneration marker NFL were associated with verbal episodic memory, while the inflammatory marker GFAP was associated with processing speed. In addition, GFAP was weakly associated with executive functions within the whole cohort. Overall, these results indicate that CSF NFL and GFAP levels do relate to cognitive functions, specifically among those with a CSF AD profile.
Both CSF NFL and YKL-40 levels correlated with T-tau but not with Aβ 42, in accordance with previous studies [54][55][56]; thus, NFL and YKL-40 levels most likely reflect processes that are independent of Aβ pathology [55,57,58]. The putative inflammatory marker S100B showed a similar trend to YKL-40 within the whole cohort, correlating strongly with the CSF neurodegeneration markers (NFL and T-tau) but not with Aβ 42 levels. In contrast, GFAP correlated neither with the CSF neurodegeneration markers nor with CSF Aβ 42 levels. Neither CSF S100B nor GFAP has been much studied in terms of correlation with the core CSF AD markers. Hov et al. [32] found similar results among elective surgery patients free from dementia and delirium, with S100B positively correlating with P-tau but not with Aβ 42 in CSF. Ishiki et al. [33] did not find an association between GFAP and the core AD markers within a sample of healthy subjects and dementia patients. Here we found that CSF NFL, YKL-40, S100B, and GFAP all performed poorly in differentiating between the CSF AD and non-AD profiles. In summary, these results are in accordance with previous findings suggesting that the markers NFL, YKL-40, S100B, and GFAP are not AD specific.
The neuropsychological tests reflecting verbal episodic memory showed the best accuracy in differentiating between the CSF profiles of all the evaluated cognitive measures and novel CSF markers. The accuracy was good for the composite score of verbal episodic memory and the RAVLT delayed recall test (80%), but fair for all the other verbal episodic memory tests (between 70 and 80%). A recent meta-analysis [59] based on 47 studies has shown that immediate and delayed memory tests consistently show good accuracy (above 80%) for differentiating between AD and healthy controls, especially those involving list recall. Importantly, those studies are based on the clinical diagnosis of AD, while our focus was on the signature CSF AD biomarker profile.
The CSF markers related in different ways to the cognitive measures. Both CSF NFL [56,60] and YKL-40 [58] have previously been reported to associate with cognitive decline, with correlations found between CSF levels and global cognition assessed by MMSE test scores among AD patients. In the same studies, the correlation did not hold for patients with MCI. Thus, NFL and YKL-40 might not be as sensitive to changes in cognition in the earliest symptomatic stages of dementia (SCI, MCI) as in more advanced stages. In this study, the relationship between NFL and YKL-40 and the different cognitive domains within the whole cohort could not be confirmed. A possible explanation could be that a majority of subjects (n = 34) were at the SCI or MCI stages, with 23 of those without a CSF AD profile.
Knowledge regarding the relationship between the core CSF biomarkers and cognition remains incomplete. Overall, Aβ 42 and T-tau appear to associate with memory and executive functions in some studies [61,62], although results have not been consistent in terms of which cognitive domains they are associated with, which particular tests are most suitable, and the strength of the relationships at different clinical stages [61,63,64]. However, the levels of the core CSF markers have shown evidence of reaching a plateau early in the clinical course of the disease and are therefore not considered ideal for tracking the progression of disease at later stages [65].
Increased CSF levels of the inflammatory marker GFAP were found to be weakly associated with worse performance on tests reflecting executive functions, both within the whole cohort and among subjects with a CSF AD profile. Few studies have examined the relationship between CSF GFAP levels and cognitive functions. Ishiki et al. [33] did not find an association between CSF GFAP levels and MMSE scores in a sample of healthy subjects and dementia patients. Darreh-Shori et al. [66] also reported no correlation between CSF GFAP levels and MMSE scores among AD patients. As with CSF GFAP, little research has been conducted on the association between CSF S100B levels and cognition. In the same study [66], a weak, positive relationship was found between levels of CSF S100B and MMSE scores within the same patient group.
Associations between the selected CSF markers and cognitive domains were also examined within each CSF profile. CSF NFL levels related moderately to verbal episodic memory among those with a CSF AD profile but not among those without. Higher levels of CSF GFAP were also moderately associated with worse performance on processing speed only within the CSF AD profile group. This is of interest because the CSF markers did not directly relate to the CSF AD profile (their ability to discriminate between CSF profiles was poor). This outcome could possibly be explained by the additive effects of distinct processes on cognitive functions. A previous study [67] showed a similar trend, where CSF YKL-40 levels were associated with less preservation of global cognition only in individuals with low Aβ levels (Aβ positive). CSF Aβ levels did not, however, correlate with YKL-40 or with cognitive decline, but did correlate with brain atrophy in Aβ-positive subjects.
This study has several limitations. First, the sample was relatively small, and hence, present findings need to be validated in a larger study. The sample did not include healthy controls, which could underestimate associations between the studied variables. Another limitation of the study is the lack of information about the ApoE genotype. However, it is unlikely that the ApoE genotype affects the outcome as previous studies have suggested that ApoE ε4 status does not influence CSF NFL or YKL-40 levels [19,68,69].
Conclusions
Our findings suggest that levels of the CSF markers NFL and GFAP relate to different cognitive profiles at the symptomatic pre- and early dementia stages. The relationships between NFL levels and verbal episodic memory and between GFAP and processing speed were only observed among those with a CSF AD profile, although the CSF markers did not directly relate to the CSF AD profile. These CSF markers could be of potential use as progression markers, monitoring subtle cognitive changes at the earliest symptomatic stages of dementia among those with AD pathology. Further studies with larger sample sizes are needed to validate these results and to evaluate their potential in tracking changes in the more advanced stages of AD and other types of dementia.
Fig. 5 Scatter plots presenting Pearson's correlations between CSF levels of NFL and verbal episodic memory (a, b), T-tau and verbal episodic memory (c, d), GFAP and processing speed (e, f), and GFAP and executive functions (g, h) within the whole cohort and by CSF profile. *Cognitive domains were adjusted for covariates (age and education). Without the bottom-corner GFAP outlier in the CSF AD profile group, Pearson's correlations were slightly lower for f processing speed (r = − 0.58, p = 0.001) and h executive functions (r = − 0.28, p = 0.15)
Renewable Energy Support Policy based on Contracts for Difference and Bilateral Negotiation
The European Union has been one of the major drivers of the development of renewable energy. The energy policies of most European countries have involved subsidized tariffs, such as the feed-in tariff in Portugal, the regulated tariff and the market price plus premium in Spain, and the Renewables Obligation in the UK, which came into effect in 2002. Recently, the UK has made some reforms and started to consider contracts for difference (CfDs) as a key element of its energy policy. This paper presents a support policy based on CfDs and bilateral negotiation. The first phase consists of a CfD auction and the second phase involves a bilateral negotiation between a Government and each of the selected investors. The paper also presents a case study to analyze the potential benefits of the support policy, performed with the help of the MATREM system. The preliminary results indicate some advantages for the Government (and, in some cases, for the investors as well).
Introduction
The energy industry has evolved into a liberalized industry in which market forces drive the price of electricity. Three market models have been distinguished [1,2]: pools, bilateral contracts, and hybrid models. A pool is defined as a centralized marketplace that clears the market for buyers and sellers. Market entities submit offers for the quantities of power that they are willing to trade. These offers are handled by a market operator, whose function is to coordinate and manage the transactions between market entities. Bilateral contracts are agreements between two parties to trade electrical energy. The hybrid model combines several features of the previous two models.
There are various types of contractual arrangements that fall under the broad heading of bilateral contracts, notably contracts for difference (CfDs). Such contracts are bilateral agreements to provide a specific amount of energy for a fixed price, referred to as the strike price. Also, CfDs are typically indexed to a reference price, which is often the centralized market-clearing price. In some cases, CfDs can be one-way contracts, when the difference payments are made by one of the parties only [3,4].
The deployment of renewable energy has increased substantially over the past decade. Europe has been at the forefront of renewable energy policy design and deployment [5,6]. In particular, Portugal has benefited from a feed-in tariff [7]. In Spain, two different types of retribution have been considered: the regulated or feed-in tariff and the market price plus premium. In the UK, there have been two main policy instruments: the Non-Fossil Fuel Order, a centralized bidding system that ran from 1990 to 1998, and the Renewables Obligation, which came into effect in 2002. Recently, the UK has made some reforms to meet the challenges of the electricity sector. Contracts for difference are a key element of such reforms: they are essentially private law contracts between low carbon electricity generators and a Government-owned company. CfDs provide long-term price stabilisation to low carbon plants, allowing investment to come forward at a lower cost of capital and therefore at a lower cost to consumers [8,9].
Renewable generation is characterized by high fixed capital costs but near-zero variable production costs, and a great dependence on weather conditions. These characteristics may significantly influence the behavior and outcomes of energy markets. In particular, high levels of renewable generation may reduce market prices due to their low bid costs, and increase price volatility because of their increased variability and uncertainty [11].
Against this background, this paper presents a study to investigate the potential benefits of both contracts for difference and bilateral negotiation as the basis of a renewable energy support policy. It considers a particular Government (e.g., the Portuguese or Spanish Government) that makes a public announcement of new investment in wind energy involving two phases. The first phase consists of a CfD auction, like the UK CfD auctions [8,10], in which the investors make their offers. This phase is simulated by considering a contract net protocol. The Government selects all the offers that provide a specific level of benefit and comply with the requirements. In the second phase, there is a bilateral negotiation between the Government and each of the selected investors, where the parties negotiate a mutually acceptable value for the strike price. Negotiation involves an iterative exchange of proposals and counter-proposals. The study is conducted with the help of an agent-based tool, called MATREM [12,13].
The remainder of the paper is structured as follows. Section 2 presents an overview of the markets supported by the MATREM system, placing emphasis on the bilateral marketplace for negotiating long-term contracts (notably contracts for difference). Section 3 discusses some aspects of the formalization of bilateral negotiation involving CfDs. Section 4 presents the case-study and discusses the experimental results. Finally, concluding remarks are presented in section 5.
Overview of the MATREM System
MATREM (for Multi-Agent TRading in Electricity Markets) is an energy management tool for simulating liberalized electricity markets. The tool supports a day-ahead market, a shorter-term market (an intra-day market), a balancing market, a futures market, and a bilateral marketplace. Market entities are modeled as software agents equipped with models of individual and social behaviour, enabling them to be pro-active (i.e., capable of exhibiting goal-directed behaviour) and social (i.e., able to communicate and negotiate with other agents in order to complete their design objectives). Currently, the tool supports generating companies, retailers, aggregators, coalitions of consumers, traditional consumers, market operators and system operators. A detailed description of the tool is presented in [12]. A classification of the tool according to various dimensions associated with both energy markets and software agents can be found in [13]. This section gives an overview of the markets supported by the tool, particularly the bilateral marketplace.
The day-ahead market is a central market where generation and demand can be traded on an hourly basis [14]. The intra-day market is a short-term market and involves several auction sessions. Both markets are based on the marginal pricing theory. The balancing market is a market for primary, secondary and tertiary reserve. For the particular case of tertiary reserve, two computer simulations are performed, one for determining the price for up-regulation, and another for computing the price for down-regulation.
The futures market is an organized market for standardized financial and physical contracts. Such contracts may span from days to years and typically hedge against the price risk inherent to day-ahead and intra-day markets. Players enter orders involving either bids to sell or buy energy in an electronic trading platform that supports anonymous operation. The platform automatically and continuously matches the bids likely to interfere with each other.
The bilateral marketplace allows market participants to negotiate all the details of two different types of tailored (or customized) long-term bilateral contracts, namely forward contracts and contracts for difference (see, e.g., [15]). Forward bilateral contracts are agreements between two agents to exchange a specific energy quantity at a certain future time for a particular price. Contracts for difference are agreements in which each agent insures the other against discrepancies between the contract price (or strike price) and the market-clearing price. The terms and conditions of both types of contracts are flexible and can be negotiated privately to meet the objectives of the two parties. To this end, market agents are equipped with a model that handles two-party, multi-issue negotiation [16,17]. Negotiation proceeds by an iterative exchange of offers and counter-offers. An offer is a set of issue-value pairs, such as "energy price = 50 €/MWh", "contract duration = 6 months", and so on. A counter-offer is an offer made in response to a previous offer. The bilateral marketplace and the associated negotiation of long-term contracts represent a novel and powerful feature. Accordingly, some details about the negotiation process follow.
Let A = {a 1 , a 2 } be the set of software agents participating in negotiation. The agents interact according to the rules of an alternating offers protocol [18]. This means that one offer (or proposal) is submitted per time period, with an agent a i ∈ A offering in odd periods {1, 3, . . .} and the other agent a j ∈ A offering in even periods {2, 4, . . .}. The negotiation process starts with a i submitting a proposal p 1 i→j to a j at time period t = 1. The agent a j receives p 1 i→j and can either accept it, reject it and opt out of the negotiation, or reject it and continue bargaining. In the first two cases, negotiation comes to an end. Specifically, if the proposal is accepted, negotiation ends successfully. Conversely, if the proposal is rejected and a j decides to opt out, negotiation terminates with no agreement. In the last case, negotiation proceeds to the next time period t = 2, in which a j makes a counter-proposal p 2 j→i . The agent a i receives the counter-proposal p 2 j→i and the tasks just described are repeated, i.e., a i may either accept the counter-proposal, reject it and opt out of the negotiation, or reject it and continue bargaining. Negotiation may end with either agreement or no agreement.
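The alternating offers loop just described can be sketched as follows. This is a minimal illustration, not the MATREM implementation; the propose/respond agent interface is an assumption of the sketch:

```python
def alternating_offers(agent_i, agent_j, max_periods=20):
    """Minimal alternating-offers loop: agent_i proposes in odd periods,
    agent_j in even ones. Each agent exposes propose(t) and
    respond(offer) -> 'accept' | 'opt_out' | 'reject' (a hypothetical
    interface). Returns (outcome, offer, period)."""
    proposer, responder = agent_i, agent_j
    for t in range(1, max_periods + 1):
        offer = proposer.propose(t)
        answer = responder.respond(offer)
        if answer == "accept":
            return ("agreement", offer, t)
        if answer == "opt_out":
            return ("no_agreement", None, t)
        proposer, responder = responder, proposer  # roles alternate each period
    return ("no_agreement", None, max_periods)
```

With a seller conceding downward and a buyer conceding upward on the strike price, the loop terminates as soon as one side's offer falls inside the other's acceptable range.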
Negotiation strategies are computationally tractable functions that define the negotiation tactics to be used during the course of negotiation. Concession tactics are functions that generate new values for each issue at stake throughout negotiation. Let X designate an issue and denote its value at time t by x. Formally, a concession tactic for X is a function with the following general form:

x' = x + (−1)^m C_f |lim − x|

where m = 0 if an agent a_i wants to minimize X or m = 1 if a_i wants to maximize X, C_f ∈ [0, 1] is the concession factor, and lim is the limit for X (i.e., the point where a_i decides to stop the negotiation rather than to continue, because any agreement beyond this point is not minimally acceptable). The concession factor C_f can be a positive constant independent of any objective criteria. Also, C_f can be modelled as a function of a single criterion. Useful criteria include the total concession made on each issue throughout negotiation as well as the amount or quantity of energy for a specific trading period [19].
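The general form above maps directly to code; a minimal sketch follows, where the opening values, limits, and concession factor in the examples are illustrative choices, not values from the paper:

```python
def concede(x, lim, cf, m):
    """One step of the concession tactic x' = x + (-1)**m * cf * abs(lim - x).
    m = 0: the agent wants to minimize the issue, so conceding moves x up toward lim.
    m = 1: the agent wants to maximize the issue, so conceding moves x down toward lim.
    cf in [0, 1] is the concession factor; lim is the agent's limit for the issue."""
    assert 0.0 <= cf <= 1.0
    return x + (-1) ** m * cf * abs(lim - x)

# Seller maximizing the energy price: opening 80, limit 70, concession factor 0.2
print(concede(80.0, lim=70.0, cf=0.2, m=1))  # 78.0
# Buyer minimizing the energy price: opening 65, limit 75, concession factor 0.2
print(concede(65.0, lim=75.0, cf=0.2, m=0))  # 67.0
```

Applying the function repeatedly yields a sequence of progressively smaller concessions that approaches, but never crosses, the agent's limit.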
Bilateral Negotiation and Contracts for Difference
As noted, contracts for difference are agreements in which the purchaser pays the seller the difference between the contract price (or strike price) and some reference price, usually the market-clearing price. Concretely, CfDs are settled as follows [1]:
• if the strike price is higher than the market-clearing price, the buyer pays the seller the difference between these two prices times the energy quantity agreed;
• conversely, if the strike price is lower than the clearing price, the seller pays the buyer the difference between these two prices times the quantity of energy agreed.
In this section, we consider that CfDs may specify the provision of different quantities of energy for different periods of time, at different prices (see also [19]). Thus, we consider that the set of negotiating issues includes n strike prices and n energy quantities. Let P_k be a strike price and Q_k an energy quantity (for k = 1, . . . , n). Let p_k denote the value of P_k for quantity q_k of Q_k. Also, let rp_k be the value of a reference price RP_k associated with a specific period of a day. The financial compensation associated with CfDs can now be formalized. Specifically, when the strike prices are smaller than the reference prices, the amount to pay by sellers to buyers is as follows:

Σ_{k=1}^{n} (rp_k − p_k) q_k

And when the strike prices are higher than the reference prices, the amount to pay by buyers to sellers is as follows:

Σ_{k=1}^{n} (p_k − rp_k) q_k
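The settlement of a multi-period CfD can be computed directly from these sums; a small sketch follows (the prices and quantities in the example are illustrative):

```python
def cfd_settlement(strike, reference, quantity):
    """CfD financial compensation over n trading periods.
    strike[k], reference[k] and quantity[k] hold p_k, rp_k and q_k.
    Sellers pay buyers (rp_k - p_k) * q_k in periods where p_k < rp_k;
    buyers pay sellers (p_k - rp_k) * q_k in periods where p_k > rp_k."""
    seller_pays_buyer = sum((rp - p) * q
                            for p, rp, q in zip(strike, reference, quantity) if p < rp)
    buyer_pays_seller = sum((p - rp) * q
                            for p, rp, q in zip(strike, reference, quantity) if p > rp)
    return seller_pays_buyer, buyer_pays_seller

# Strike fixed at 74 €/MWh; reference prices of 70 and 78 €/MWh; 100 MWh per period
print(cfd_settlement([74.0, 74.0], [70.0, 78.0], [100.0, 100.0]))  # (400.0, 400.0)
```

In the example, the buyer pays 400 € for the period where the strike exceeds the reference price, and receives 400 € for the period where it falls below, illustrating how the CfD hedges both parties against reference-price movements.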
Case-Study
This section analyzes the potential benefits of a renewable energy support policy involving two phases. The agents are the Government of a particular country (e.g., Portugal or Spain) and various investors (or renewable energy producers). The first phase consists of a CfD auction, like the UK CfD auctions, where investors can make offers, and the Government selects the best ones according to pre-defined requirements. The second phase consists of a private bilateral negotiation between the Government and each of the selected investors, to obtain a mutually acceptable value for the strike price. Since each selected investor knows the first proposal of the other investors, each has the opportunity to make a more competitive offer in this phase. Accordingly, this support policy could be advantageous for the Government (and, in some cases, for the investors as well).
First Phase: CfD Auction. The public announcement involves investment in wind energy up to a maximum of 100 MW of installed capacity (per investor). The investors can propose projects involving less than 100 MW, but only in a group with investors in a similar situation. Joint projects involving more than 100 MW of installed capacity can be accepted, in case they are advantageous for the Government. All investors should submit the following: (i) strike price (SP), (ii) average expected power factor (PF) of the project, and (iii) installed capacity (IC). The power factor is the average number of hours that wind farms will be operating at the installed capacity/nominal power. The points (P) attributed to each proposal are computed by considering the relation between the strike price and the average expected power factor, as well as the installed capacity (see Equation 4). The power factor and the strike price have weights k_1 and k_2, respectively (such that k_1 + k_2 = 1). Since SP is often the most important factor, in this work we consider k_1 = 0.3 and k_2 = 0.7.
The CfD auction involved a total of 15 proposals, but only 5 were accepted for the next phase. Specifically, projects 1, 9 and 11 were accepted as individual projects, while projects 2 and 3 were accepted as a joint project. This means that the investors of projects 2 and 3 form a strategic alliance to be awarded a CfD (although their companies remain separate). Table 1 shows the CfD allocation auction round outcome, with the 5 projects delivering 410 MW of renewable energy. Several potential projects for the next delivery years did not get awarded a CfD, which may to some extent raise questions about the viability of the CfD regime for small and medium enterprises. A strike price of around 80 €/MWh is probably acceptable, but anything lower may not be workable. Furthermore, the importance of interest rates in relation to the strike price should not be ignored in future work, given that the project timeline will likely extend into periods of possible interest-rate change, which could affect the viability of any strike price.
Second Phase: Private Bilateral Negotiation. After announcing the results of the CfD auction, the Government starts private bilateral negotiations with the investors. For projects 1, 9 and 11, negotiation involves the Government and each of the corresponding investors. Projects 2 and 3 are a joint project, meaning that negotiation involves the Government and the agent representing the joint project. A detailed description of the negotiation process between the Government and the investor responsible for project 9 follows.
Negotiation proceeds through two main phases: a beginning or initiation phase (pre-negotiation) and a middle or problem-solving phase (actual negotiation). Pre-negotiation involves the creation of a well-conceived plan specifying the issues at stake, the limits and targets, the attitude toward risk and an appropriate strategy. In this work, the negotiating agenda involves mainly the strike price. The limit or resistance point is the point where each party decides to stop the negotiation rather than to continue. The target point is the point where each negotiator realistically expects to achieve an agreement. We consider that the Government adopts a risk-seeking attitude. Thus, assuming the existence of a number of interested investors, the Government can adopt an aggressive position throughout negotiation in searching for a good deal. The investor adopts a risk-averse attitude, acting carefully to achieve a deal and award a CfD. Both parties adopt concession strategies, meaning that they are willing to partially reduce their demands to accommodate the opponent.
Actual negotiation involves an iterative exchange of offers and counter-offers. The investor makes an opening offer involving the strike price shown in Table 1 (74.20 €/MWh). The Government may be pleased with such an offer, but might still believe that there is room for a few concessions. Accordingly, the Government responds with an offer involving a strike price lower than the received price, say 72.50 €/MWh. After these two offers, the parties may argue for what they want, but at a certain point they recognize the importance of moving toward agreement. Thus, despite being reluctant to make any concession, the investor slightly reduces the strike price. The Government decides to respond in kind and mirror the concession of the investor. The agents then enter into a negotiation dance, exchanging proposals and counter-proposals, until they reach the final price of 73.37 €/MWh. This price represents a good deal for the Government and an acceptable deal for the investor, who will be awarded a contract for difference for the next 15 years.
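The "negotiation dance" of mirrored concessions can be simulated. The sketch below uses an invented concession factor and stopping tolerance, so the final price it produces is illustrative and does not reproduce the 73.37 €/MWh reached in the case-study:

```python
def mirror_negotiation(ask, bid, cf=0.5, tol=0.05):
    """Both parties repeatedly concede a fraction cf of the remaining gap,
    mirroring each other; the deal is struck at the midpoint once the gap
    falls below tol. cf and tol are illustrative choices, not the paper's."""
    while ask - bid > tol:
        ask -= cf * (ask - bid)   # investor lowers its strike-price ask
        bid += cf * (ask - bid)   # the Government mirrors the concession
    return round((ask + bid) / 2, 2)

# Opening offers from the case-study: investor 74.20, Government 72.50 €/MWh
print(mirror_negotiation(74.20, 72.50))  # ≈ 73.07 with these illustrative settings
```

Varying the concession factor per party (e.g., a risk-averse investor conceding faster than a risk-seeking Government) shifts the final price within the 72.50–74.20 €/MWh bargaining range.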
Conclusion
This article has presented an energy support policy based on contracts for difference and bilateral negotiation. The support policy involves two phases. The first consists of a CfD auction, like the UK auctions, and the second involves a private bilateral negotiation between each of the selected investors and a particular Government.
Preliminary results from a case-study indicate some advantages for the Government (and, in some cases, for the investors as well). In the future, we intend to perform a number of experiments, using controlled experimentation as the experimental method, to evaluate the effectiveness of the support policy in a number of different situations.
"year": 2020,
"sha1": "2b4412f4f0117e94ca6a868736c07e0ea5e1d5c3",
"oa_license": "CCBY",
"oa_url": "http://repositorio.lneg.pt/bitstream/10400.9/3303/1/C2-Paper-111_FernandoLopes.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "ce4bfdcb4fdaa6a344ca9e5070fc6d18c430cc36",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Editorial: Oxytocin and Social Behaviour in Dogs and Other (Self-)Domesticated Species: Methodological Caveats and Promising Perspectives
1 Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, Budapest, Hungary, 2 Faculty of Medicine, Nursing and Health Sciences, School of Psychological Sciences, Monash University, Clayton, VIC, Australia, Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine, Vienna, Austria, Medical University of Vienna, University of Vienna, Vienna, Austria
Over the past decade the oxytocin system has become a focus of attention for researchers from various fields studying mechanisms underlying different forms of social behavior. Some have even suggested that it is the neurohormone oxytocin that has had the most permissive role in the evolution of the human nervous system (Carter, 2014), implying that Homo sapiens could not have evolved without it, as the success of this species highly depends on social behavior and cognition. Not surprisingly, research into model systems of human social behavior has followed this trend, including several discoveries on the relatedness of numerous forms of domestic species' social behavior and their respective oxytocin systems. This is particularly interesting as domestic species are known to have adapted to the human social environment in evolutionary terms; however, the proximal and distal mechanisms underlying behavioral parallels between humans and domestic animals still remain largely unexplored.
Among domestic species, dogs are the most studied model of human behavior, and their human-analog socio-cognitive skills have been well-established both at the behavioral (Miklósi and Topál, 2013) and at the neural (Bunford et al., 2017) level. This bias in favor of dogs is also present in the number of research papers published about oxytocin and social behavior, as a considerable amount of information has already accumulated about this species over the past few years (reviewed in Kis et al., 2017). The aim of this special issue was to fill in gaps not only for canine oxytocin research, but also for research on other domestic species. An important critical review article by Rault et al. presents literature on dogs, pigs, cattle, and sheep focusing on welfare aspects and outlines both problems and possible solutions for oxytocin research. The other 16 articles in the special issue present original research that keeps up with high methodological standards and presents valuable data that the field had thus far been missing.
These include, first of all, research on non-canine domestic species. Bienboire-Frosini et al. present a reliable immunoassay measure of peripheral (plasma) oxytocin in cat, dog, horse, cow, pig, sheep, and goat. Arahori et al. focus on domestic cats and describe microsatellite polymorphisms adjacent to the oxytocin receptor gene revealing moderate associations with owner-rated personality traits.
The remaining 14 original research papers present data on domestic dogs' oxytocin system and social behavior. These include research using various methodological approaches: gene × behavior associations, intranasal oxytocin administration, and peripheral oxytocin measurements. Among the genetic studies is one (Cimarelli et al.) that presents a considerable methodological advance in the field, introducing an epigenetic study and presenting, for the first time, evidence that oxytocin receptor gene (OXTR) methylation is associated with social behaviors of pet dogs. A similarly important conceptual novelty is the introduction of quantitatively measured environmental factors (OXTR polymorphisms in the owners' gene; Kovács et al.) as well as contextual and individual characteristics (Turcsán et al.) into canine OXTR research. Both of these studies highlight significant additional factors for genetic research into dogs' oxytocin system, although at present these have only been tested on one specific breed, the Border collie. The importance of breed differences, on the other hand, is highlighted by another paper (Kubinyi et al.) that presents incremental research about the relationship between OXTR polymorphisms and greeting behavior. Building on their previous research on Border collies and German shepherds, the authors show a similar relationship in Siberian huskies. Direct comparison with human subjects (infants) is carried out by Oláh et al., investigating gaze-following as a function of OXTR polymorphisms.
The special issue also includes significant new research using intranasal administration of oxytocin (IN-OT) for dogs. While the literature of the field seems to be biased toward reporting of positive results, an important negative finding is highlighted here by Thielke et al., in a study investigating the effects of oxytocin administration on dogs' attachment behavior toward their owners measured via the strange situation test. Results show that contrary to expectations, intranasal administration of oxytocin fails to increase owner-directed proximity and contact seeking, rather it decreases such behaviors (in the baseline phase). A very interesting pair of papers by Somppi et al. and Kis et al. used eye-tracking technology to assess how oxytocin administration modulates dogs' viewing of human faces with different emotional expressions. The two research groups independently carried out studies using the same set of stimuli, and while the general conclusion from both is that there is an effect of oxytocin on the outcome measure, the specifics of the results differ by several points. The special issue also includes a paper presenting important incremental research using IN-OT methodology (Nagasawa et al.), which shows that previous results of the same research group about enhanced gazing behavior following oxytocin treatment can be conceptually replicated in ancient Japanese breeds (Shiba, Kai, and Shikoku).
Studies about canine peripheral oxytocin levels in this special issue include methodological improvements as well as the assessment of different forms of dog-human interaction. Temesi et al. describe the time-course of intranasally and intravenously administered oxytocin on serum and urine oxytocin concentrations by directly comparing these measures in Beagle dogs. MacLean et al. validate their salivary oxytocin measure by comparing it to plasma measures of the same Labrador retriever and Labrador retriever × Golden retriever dogs before and after a free-form social interaction with a human versus control treatment (resting). Two of the papers (Petersson et al.; Rossi et al.) measure both oxytocin and cortisol levels from dogs' blood samples and find that the two hormones are related to behavior during their respective tests. Another paper by MacLean et al. connects to the applied value of oxytocin research focusing on dogs' aggressive behavior: while dogs with and without reported history of aggression only differ in plasma vasopressin levels (and not oxytocin), an interesting difference is found between pet dogs and assistance dogs (that have been bred for affiliative and nonaggressive temperaments), with the latter group having higher oxytocin levels.
In our view the papers of this special issue present a considerable advancement in the field of oxytocin research in domestic species. While the independent research papers collected here use varying methodology and address independent scientific questions, they all nicely tie to the conceptual and methodological gaps that have been highlighted. Studies including non-canine domestic species, as well as different breeds of dogs inform us about both the specificity and the generalizability of oxytocin effects. Conceptual replications and incremental research presented here ensure the robustness of the findings. The novel methodological approaches as well as conceptual innovations described in this issue broaden the scope of the field. Furthermore, the open reporting of negative and controversial findings guarantees transparency of research. The papers of this issue are all good examples for this, and thus together strengthen the view that domesticated animals serve as valuable models for investigating the interrelatedness of social behavior and the oxytocin system.
AUTHOR CONTRIBUTIONS
AK drafted the first version of the manuscript. All authors read and commented on the manuscript, contributed to its content, and approved the final text.
"year": 2019,
"sha1": "3fb2db10c2a0b21f1ac08b69ec2789c663deff42",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00732/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fb2db10c2a0b21f1ac08b69ec2789c663deff42",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Medicinal Herbs Effective Against Atherosclerosis: Classification According to Mechanism of Action
Atherosclerosis is a widespread and chronic progressive arterial disease that has been regarded as one of the major causes of death worldwide. It is caused by the deposition of cholesterol, fats, and other substances in the tunica intima, which leads to narrowing of the blood vessels, loss of elasticity, and arterial wall thickening, thus causing difficulty in blood flow. Natural products have been used as one of the most important strategies for the treatment and prevention of cardiovascular diseases for a long time. In recent decades, as interest in natural products, including medicinal herbs, has increased, many studies regarding natural compounds that are effective against atherosclerosis have been conducted. The purpose of this review is to provide a brief overview of the natural compounds that have been used for the treatment and prevention of atherosclerosis, and their mechanisms of action based on recent research.
INTRODUCTION
Atherosclerosis, the underlying cause of cardiac ischemia, heart failure, heart attack, stroke, and peripheral vascular disease, is known to be one of the major causes of death and morbidity worldwide. Endothelial cell injury, damage, and dysfunction in the heart are characteristic properties of atherosclerosis. Endothelial damage leads to the build-up of plaque in the damaged area and narrowing of the arteries, as well as cholesterol accumulation on the artery wall and monocyte adhesion to the endothelium. This process leads to chronic inflammation and eventually causes stenosis or thrombosis (Insull, 2009).
Natural products have been regarded as important all over the world since the beginning of human civilization for many purposes, including medicinal use. As with many other diseases, medicinal herbs have been used to treat patients with atherosclerosis. However, elucidation of the mechanisms of action (MOAs) of these herbs has just started, via the use of cellular models for anti-atherogenic natural product screening (Orekhov, 2013; Orekhov et al., 2015; Orekhov and Ivanova, 2016) or extensive reviews on specific plants that have been used for the prevention of atherosclerosis (Prasad, 2010; Varshney and Budoff, 2016). In this review, recent studies regarding natural products, and more specifically medicinal herbs, are sorted and described according to their MOAs against atherosclerosis to increase our understanding of these plants.
BLOOD LIPID-LOWERING EFFECTS OF MEDICINAL HERBS
Dyslipidemia is known as one of the main risk factors of atherosclerosis. Numerous studies have demonstrated that hypercholesterolemia and hypertriglyceridemia lead to an increased risk of development and progression of atherosclerosis (Liu et al., 2013; Peng et al., 2017; Roubille et al., 2018). Previous studies have indicated that increased levels of low-density lipoprotein cholesterol (LDL-C) and its major protein, apolipoprotein B-100 (apoB-100), are critical causes of atherosclerosis. Infiltration and retention of apoB-containing lipoproteins in the artery wall can initiate inflammatory responses and promote the development of atherosclerosis (Liu et al., 2013). Many studies have therefore focused on the lipid-lowering effect of natural products (Table 1).
Extract of Tribulus terrestris decreased serum lipids in New Zealand rabbits fed a high-cholesterol diet. The experimental group treated with extract of T. terrestris showed decreased levels of total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), LDL-C, and triglyceride (TG) in serum compared to those in the negative control group (Tuncer et al., 2009). Extract of Ocimum basilicum reduced lipid profiles in Triton WR-1339-induced hyperlipidemic rats. In rats treated with O. basilicum extract, TC, TG, and LDL-C levels decreased, while HDL-C levels were higher than those in rats treated with Triton alone (Amrani et al., 2006).
The dried roots of Salvia miltiorrhiza, commonly called Danshen, have long been used in traditional oriental medicine for the prevention and treatment of cardiovascular diseases such as atherosclerosis. Cluster of differentiation 36 (CD36), a class B scavenger receptor, is known to be important in the pathogenesis of vascular inflammatory diseases. Salvianolic acid B, the most abundant bioactive compound from S. miltiorrhiza, showed inhibition of CD36-mediated lipid uptake. Using surface plasmon resonance analysis, salvianolic acid B was found to bind directly to CD36 with high affinity, thus confirming its physical interaction with this receptor (Bao et al., 2012). Treatment of rats fed high-fat/high-cholesterol diets with Cynanchum wilfordii ethanol extract reduced TG and LDL-C levels while increasing HDL-C levels (Choi et al., 2012). The ethanolic fraction of Terminalia arjuna markedly decreased TC, TG, and LDL levels, increased HDL levels, and furthermore lessened atherosclerotic lesions in the aortas of rabbits fed a high-fat diet (Subramaniam et al., 2011). Polysaccharides from Polygonatum sibiricum displayed hypolipidemic activities on TC, LDL-C, and lipoprotein(a) (Lp(a)), but not on TG or HDL-C, in a high-cholesterol diet-induced atherosclerosis rabbit model. Marrubium vulgare extract containing polar products decreased plasma lipid levels. The lipid-lowering effects of petroleum ether-, chloroform-, ethyl acetate-, and methanol-soluble fractions of M. vulgare extract were investigated. The solvent-soluble fractions showed lipid-lowering effects on plasma TC, and petroleum ether fractions significantly lowered not only LDL-C levels but also TG levels. Elevated atherogenic indexes (AIs) and LDL-/HDL-C ratios were more influenced by the polar fractions (methanol and ethyl acetate), while these atherogenic markers were not significantly inhibited by the chloroform- and petroleum ether-soluble fractions (Ibrahim et al., 2016).
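Several of the studies above report atherogenic indexes (AIs) and LDL-/HDL-C ratios. A minimal sketch of how such markers are computed follows, assuming the common definition AI = (TC − HDL-C)/HDL-C; the formula choice is an assumption, since the reviewed studies do not state which AI they used:

```python
def atherogenic_index(tc, hdl_c):
    """AI = (TC - HDL-C) / HDL-C, with both values in the same units
    (e.g., mg/dL). Assumed definition; the source does not specify one."""
    return (tc - hdl_c) / hdl_c

def ldl_hdl_ratio(ldl_c, hdl_c):
    """LDL-C to HDL-C ratio, the other atherogenic marker mentioned in the text."""
    return ldl_c / hdl_c

print(atherogenic_index(200.0, 50.0))  # 3.0
print(ldl_hdl_ratio(130.0, 50.0))      # 2.6
```

Lower values of both markers correspond to the lipid-lowering outcomes the cited studies describe.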
Saponins from Panax notoginseng also showed lipid-lowering properties in apolipoprotein-E (apo-E)-knockout mice. Ginseng saponins significantly reduced serum lipids, including TC, LDL-C, HDL-C, and TG, in apo-E-knockout mice (Wan et al., 2009). Propolis and thymoquinone, the active constituent of Nigella sativa seed oil, inhibited the formation of early atherosclerotic lesions in hypercholesterolemic rabbits. Administration of propolis or thymoquinone together with a cholesterol-rich diet remarkably decreased TC, LDL-C, and TG while increasing HDL-C levels (Nader et al., 2010). Celastrus orbiculatus decreased TC, non-HDL-C, TG, apoB-100, and apo-E levels, and elevated HDL-C levels. Furthermore, messenger ribonucleic acid (mRNA) levels of the LDL receptor (LDL-R), scavenger receptor class B type 1 (SR-B1), cholesterol 7α-hydroxylase A1 (CYP7A1), and 3-hydroxy-3-methyl-glutaryl-coenzyme A (HMG-CoA) reductase were up-regulated by C. orbiculatus, which also significantly decreased lipid deposition in the arterial wall. Administration of swertiamarin isolated from Enicostemma littorale lowered serum TC, TG, and LDL-C levels while elevating HDL-C levels in poloxamer 407-induced hyperlipidemic rats (Vaidya et al., 2009). Lipid metabolism dysfunction leads to consequential health problems in postmenopausal women and can be a risk factor for the progression of atherosclerosis. Pueraria mirifica remarkably lowered serum apo-B and LDL-C levels in postmenopausal women, and elevated serum apolipoprotein A-I (apo A-I) and HDL-C levels. Moreover, ratios of LDL-C to HDL-C and apo-B to apo A-I were significantly reduced in the P. mirifica-treated group (Okamura et al., 2008).
Administration of medium-dose (75 mg/kg body weight (BW)/day) and high-dose (150 mg/kg BW/day) flavonoid-rich extract of Hypericum perforatum significantly reduced serum levels of TC, LDL-C, and TG, while it increased HDL-C levels in rats fed a cholesterol-rich diet (Zou et al., 2005). Astragaloside IV, the major effective component from Astragalus membranaceus, down-regulated TC, TG, and LDL-C levels while elevating HDL-C levels in the blood of apo-E-knockout mice fed a high-fat diet.
INHIBITORY EFFECTS OF MEDICINAL HERBS AGAINST MONOCYTE RECRUITMENT AND ACTIVATION
Monocyte-endothelial cell interactions are reported to induce the expression of adhesion molecules such as vascular cell adhesion molecule-1 (VCAM-1), endothelial leukocyte adhesion molecule-1 (E-selectin), and intercellular cell adhesion molecule-1 (ICAM-1), which may cause the accumulation and migration of monocytes into the subendothelial space. Very low-density lipoprotein (VLDL), modified LDL, and APs act on monocyte-derived macrophages, which accelerates the transition of monocytes into foam cells. Foam cells are fat-laden macrophages that serve as the hallmark of early stage atherosclerotic lesion formation.
Corilagin from Phyllanthus emblica and its analogue Dgg16 are reported to have anti-atherogenic effects. Human umbilical vein endothelial cells (HUVECs) incubated with oxidized LDL (oxLDL) were treated with corilagin or Dgg16, followed by incubation with monocytes. OxLDL up-regulated adhesion of monocytes to endothelial cells, although co-treatment of oxLDL with corilagin or Dgg16 quickly decreased adhesion at a dose of 0.001 mmol/L or higher (Duan et al., 2005). Danshenol A from Salvia miltiorrhiza suppressed ICAM-1 expression induced by tumor necrosis factor-α (TNF-α) and relevant monocyte adhesion to endothelial cells through the NADPH oxidase subunit 4 (NOX4)-dependent inhibitor of kappa B (IκB) kinase β (IKKβ)/nuclear factor-kappa B (NF-κB) pathway (Zhao et al., 2017). The anti-atherogenic activity of cryptotanshinone, a constituent of S. miltiorrhiza, was evaluated using apo-E-deficient mice fed an atherogenic diet as well as oxLDL-stimulated HUVECs. Cryptotanshinone reduced lectin-like oxidized low-density lipoprotein receptor-1 (LOX-1) mRNA and protein expression induced by oxLDL, and suppressed subsequent LOX-1-induced adhesion of monocytes to HUVECs by lowering the expression of ICAM-1 and VCAM-1. In addition, cryptotanshinone attenuated monocyte adhesion to endothelial cells by inhibiting expression of adhesion molecules (Ang et al., 2011). The ethanol extract of Prunella vulgaris suppressed adhesion of monocyte-/macrophage-like human macrophage cells (THP-1 cells). P. vulgaris also decreased expression of ICAM-1, VCAM-1, reactive oxygen species (ROS), E-selectin, and NO production in TNF-α-induced human aortic smooth muscle cells (HASMCs) and decreased NF-κB activation (Park et al., 2013). Paeonol, the active compound of Paeonia lactiflora, dose-dependently reduced ICAM-1 expression through inhibition of NF-κB p65 translocation into the nucleus and phosphorylation of IκBα.
Paeonol also blocked the phosphorylation of p38 and extracellular signal-regulated kinase (ERK) induced by TNF-α, which is involved in regulating ICAM-1 production (Nizamutdinova et al., 2007). P. notoginseng saponins decreased monocyte adhesion to the endothelium in a concentration-dependent manner and suppressed the expression of TNF-α-induced endothelial adhesion molecules such as ICAM-1 and VCAM-1 (Wan et al., 2009). Curcumin, isolated from Curcuma longa, showed a sonodynamic effect on THP-1-derived macrophages. Commercial drugs that have sonodynamic effects become cytotoxic upon exposure to ultrasound, which can be useful when treating a localized part of the body, thus reducing the risk of systemic side effects. Fibronectin is one of the most important extracellular matrix proteins as it plays a critical role in leukocyte recruitment to the endothelium and initiates the process of atherosclerosis. The effects of protocatechualdehyde, an aqueous ingredient of S. miltiorrhiza, were evaluated on the expression of fibronectin in HUVECs stimulated with TNF-α via enzyme-linked immunosorbent assay (ELISA) and western blot analysis. Protocatechualdehyde treatment remarkably attenuated TNF-α-stimulated fibronectin surface expression and secretion in a dose-dependent manner. TNF-α-induced ROS generation and c-Jun NH2-terminal kinase (JNK) activation were also inhibited by protocatechualdehyde (Tong et al., 2015). Aqueous extract of Buddleja officinalis reduced the up-regulation of cellular adhesion molecules. Pretreatment of HUVECs with B. officinalis extract (1-10 µg/mL) dose-dependently decreased TNF-α-induced adhesion of U937 monocytic cells. Furthermore, mRNA and protein expression of VCAM-1 and ICAM-1 were suppressed by this extract via inhibition of NF-κB and ROS. In addition, TNF-α-induced degradation of IκBα was inhibited by blocking phosphorylation of IκBα in HUVECs (Lee et al., 2010).
Ziziphus nummularia extract suppressed TNF-α-induced adhesion of THP-1 monocytes to HASMCs and endothelial cells in a concentration-dependent manner (Fardoun et al., 2017). Purple perilla extract and its major compound α-asarone inhibited oxLDL-induced foam cell formation by inhibiting SR-B1 expression. However, purple perilla extract promoted the upregulation of the adenosine triphosphate (ATP)-binding cassette transporter A1 (ABCA1) and ABCG1, and subsequently promoted cholesterol efflux from macrophages by activating interactions between peroxisome proliferator-activated receptor γ (PPARγ), liver X receptor α (LXRα), and ABC transporters (Park et al., 2015). These results are summarized in Table 2.
ANTI-INFLAMMATORY EFFECTS OF MEDICINAL HERBS
Many studies have revealed the association between the initiation and progression of atherosclerosis and vascular inflammation mechanisms. Inflammation participates in all stages of atherogenesis, from lesion initiation to the thrombotic complications of the disease. Arterial endothelial cells begin to express adhesion molecules that bind leukocytes. Leukocytes adhere to the endothelium and penetrate into the intima at the lesion formation site in response to chemoattractants. Next, blood-derived inflammatory cells participate in and trigger inflammatory responses.
Signal transducer and activator of transcription protein 3 (STAT3), a transcription factor involved in inflammatory responses and the cell cycle, is activated by cytokines such as interleukin (IL)-6 and IL-8. Pretreatment of endothelial cells with magnolol isolated from Magnolia officinalis suppressed IL-6-induced phosphorylation of Tyr705 and Ser727 on STAT3 in a concentration-dependent manner. However, it did not affect the phosphorylation of Janus kinase 1 (JAK1), JAK2, or ERK1/2. An electrophoretic mobility shift assay (EMSA) revealed that magnolol treatment significantly decreased STAT3 binding to the IL-6 response elements region, and ICAM-1 expression was significantly reduced on the endothelial surface (Chen et al., 2006). Emodin from rhubarb stabilized the vulnerable atherosclerotic plaque in the aortic root of apo-E-knockout mice by exerting anti-inflammatory effects. It also significantly inhibited the expression of matrix metalloproteinase-9 (MMP-9) and granulocyte-macrophage colony-stimulating factor (GM-CSF), while inducing PPAR-γ expression in plaque (Zhou et al., 2008). Astragaloside IV significantly downregulated CD40 ligand and C-X-C chemokine receptor type 4 (CXCR4) expression on the platelet surface, and also reduced stromal cell-derived factor-1 (SDF-1) and CXCR4 expression in the aorta. Western blotting and real-time polymerase chain reaction (PCR) demonstrated that astragaloside IV significantly down-regulated the mRNA and protein expression of SDF-1 and CXCR4 in apo-E-knockout mice fed a high-fat diet. Cryptotanshinone remarkably suppressed endothelial permeability, monocyte-endothelial cell adhesion, and expression of ICAM-1 and VCAM-1 in HUVECs (Ang et al., 2011). Cryptotanshinone significantly suppressed formation of atherosclerotic plaque and increased plaque stability in apo-E-knockout mice by reducing the expression of LOX-1 and MMP-9, and NF-κB activation. 
In addition, it reduced the expression of serum pro-inflammatory mediators without altering the serum lipid profile. The ethanolic extract of P. vulgaris suppressed adhesion of THP-1 cells to HASMCs, and inhibited TNF-α-induced phosphorylation of p38 mitogen-activated protein kinase (MAPK) and ERK (Park et al., 2013). Salvianolic acid B decreased interferon-γ (IFN-γ)-induced phosphorylation of JAK2 (Tyr1007/1008) and STAT1 (Tyr701 and Ser727). Monocyte adhesion to IFN-γ-treated endothelial cells was decreased by pretreatment with salvianolic acid B. This compound also increased the expression of protein inhibitor of activated STAT 1 (PIAS1) and suppressor of cytokine signaling 1 (SOCS1) in endothelial cells. Salvianolic acid B pretreatment also reduced adhesion of adenosine diphosphate (ADP)-activated platelets to EA.hy926 cells (a human endothelial cell line) and inhibited activation of NF-κB. In addition, salvianolic acid B significantly inhibited mRNA expression of platelet-induced pro-inflammatory mediators (monocyte chemoattractant protein 1 (MCP-1), ICAM-1, IL-1β, IL-6, and IL-8) and the release of their corresponding proteins in EA.hy926 cells (Xu et al., 2014). Honokiol, an active component isolated from M. officinalis, markedly suppressed the overexpression of pentraxin 3 in palmitic acid (PA)-induced HUVECs by reducing IκB phosphorylation and expression of NF-κB subunits p50 and p65 in the IKK/IκB/NF-κB signaling pathway. In addition, honokiol markedly inhibited the production of IL-6, IL-8, and MCP-1 in PA-induced HUVECs (Qiu et al., 2015). Eleven ingredients of the herb Folium Eriobotryae were shown to possess anti-inflammatory properties. Using systematic network analyses, their targets were determined to be 43 inflammation-associated proteins including cyclooxygenase 1 (COX1), 5-lipoxygenase (5-LO), PPAR-γ, TNF, and transcription factor p65 (RELA), which are mainly involved in the MAPK and NF-κB signaling pathways. 
Artesunate, a derivative of artemisinin isolated from sweet wormwood, attenuated the progression of atherosclerotic lesion formation alone or in combination with rosuvastatin in western-type diet-fed apo-E-knockout mice. No differences in food uptake, body weight, and plasma lipid levels were observed in any of the groups, but a significant reduction in the expression of pro-inflammatory mediators such as TNF-α and IL-6 was noted in the treated groups. Furthermore, artesunate suppressed expression of pro-inflammatory chemokines such as IL-8 and MCP-1 in the aortas of mice. Rosuvastatin combined with artesunate delayed the progression of atherosclerotic lesions more effectively than artesunate alone (Jiang et al., 2016). β-Elemene isolated from Curcuma wenyujin reduced the size of atherosclerotic lesions and increased plaque stability in apo-E-knockout mice by inhibiting the production of pro-inflammatory cytokines and cell adhesion molecules such as IL-1β, TNF-α, IFN-γ, MCP-1, and ICAM-1. Because 5-LO is a key enzyme in inflammatory disorders such as atherosclerosis, 5-LO inhibition by extract of Plectranthus zeylanicus, a medicinal plant extensively used in Sri Lanka and South India to treat inflammatory disorders, was evaluated (Napagoda et al., 2014). P. zeylanicus extracted with the non-polar solvents n-hexane and dichloromethane significantly inhibited 5-LO activity in stimulated human neutrophils with 50% inhibitory concentrations (IC50) of 6.6 and 12 µg/mL, respectively, and suppressed human recombinant 5-LO with IC50 values of 0.7 and 1.2 µg/mL, respectively (Napagoda et al., 2014). A cell-free assay using isolated human recombinant 5-LO was employed in this study in order to investigate whether the extract directly inhibited 5-LO activity. Z. nummularia extract decreased expression of MMP-2, MMP-9, NF-κB, ICAM-1, and VCAM-1 induced by TNF-α in a concentration- and time-dependent manner, as revealed via reverse transcription (RT)-PCR and western blot analysis. C. 
orbiculatus reduced C-reactive protein (CRP), IL-6, and TNF-α levels in plasma. Immunohistochemistry and western blot analysis showed that CD68 up-regulation and NF-κB p65 protein activation in the arterial wall were reduced by C. orbiculatus treatment as well. Celastrol, a triterpenoid isolated from Tripterygium wilfordii, inhibited the phosphorylation and degradation of IκB and decreased the production of inducible nitric oxide synthase (iNOS), NO, and pro-inflammatory cytokines including TNF-α and IL-6 (Gu et al., 2013). Bisacurone isolated from C. longa concentration-dependently suppressed VCAM-1 expression and inhibited NF-κB p65 translocation into the nucleus and phosphorylation of IκBα, protein kinase B (Akt), and protein kinase C (PKC; Sun et al., 2008). Patchouli alcohol, a tricyclic sesquiterpene isolated from Pogostemonis Herba, blocked aortic mRNA expression of inflammatory cytokines such as iNOS, MCP-1, IL-1β, IL-6, CXCL9, and CXCL11 (Wang et al., 2016). Proteomic analysis of the relationship between atherosclerosis and 2,3,5,4′-tetrahydroxystilbene-2-O-β-D-glucoside revealed that five proteins were mainly involved in cholesterol transport, inflammation, cell apoptosis, and cell adhesion. 2,3,5,4′-Tetrahydroxystilbene-2-O-β-D-glucoside elevated the expression of heat shock protein 70 (HSP70), lipocortin 1, and apo A-1 but reduced the expression of calreticulin and vimentin. Do In Seung Gi-Tang, a traditional herbal preparation composed of Rheum undulatum, Prunus persica, Conyza canadensis, Cinnamomum cassia, and Glycyrrhiza uralensis (ratio, 8:6:4:4:4), shows anti-inflammatory activities by regulating the 5′ AMP-activated protein kinase (AMPK) pathway. Treatment with this herbal preparation reduced the size of atherosclerotic lesions, suppressed ICAM-1, VCAM-1, and E-selectin expression, and reduced lipid accumulation, progression of inflammation, and fatty acid synthase (FAS) levels. 
Furthermore, Do In Seung Gi-Tang promoted AMPK and inhibited acetyl-CoA carboxylase (ACC) expression in liver tissues (Park et al., 2016). These results are summarized in Table 3.
ANTI-OXIDATIVE EFFECTS OF MEDICINAL HERBS
Oxidative stress induced by the excessive generation of ROS and macrophage inflammation has emerged as a crucial mechanism for the initiation and progression of endothelial dysfunction and atherosclerosis (Kattoor et al., 2017). OxLDL is a harmful type of cholesterol that is formed when LDL-C is damaged by free radicals. Malondialdehyde (MDA), which is formed during oxidation of LDL, is used as an oxidative stress marker.
Corilagin and its analogue Dgg16 decreased the formation of MDA and inhibited the proliferation of vascular smooth muscle cells (VSMCs) activated by oxLDL (Duan et al., 2005). Danshenol A inhibited ROS generation and NOX4 expression (Zhao et al., 2017). The aqueous extract of O. basilicum displayed very high antioxidant power, indicating that 1 L of the extract possessed antioxidant capacity equal to that of 32.8 g ascorbic acid (Amrani et al., 2006). Cryptotanshinone reduced LOX-1 mRNA and protein expression, and suppressed NOX4-induced ROS production and the accompanying activation of NF-κB in HUVECs. Tanshinone IIA, which was also isolated from S. miltiorrhiza, showed protective effects against H2O2-induced apoptosis and protected HUVECs from inflammatory mediators induced by H2O2 via pregnane X receptor (PXR) activation. Pretreatment with tanshinone IIA reduced H2O2-induced ROS formation and H2O2-triggered cell apoptosis in EA.hy926 cells. RT-PCR and western blotting results indicated that it remarkably suppressed the expression of pro-apoptotic proteins such as B-cell lymphoma (Bcl)-2-associated X protein (Bax) and caspase-3, while increasing the expression of the anti-apoptotic protein Bcl-2 (Jia et al., 2012). Tanshinone IIA also increased glutathione peroxidase 1 (GPx-1) mRNA levels and GPx activities, and protected cultured macrophages from H2O2-induced cell death (Li et al., 2008). Cymbopogon citratus extract reduced the formation of ROS by D-glucose, hydrogen peroxide, and oxLDL in HUVECs (Campos et al., 2014). The protective effects of Danshen aqueous extract and its active compounds were studied on HUVECs using an in vitro tube formation assay. The Danshen extract and its pure compounds showed effectiveness in protecting HUVECs against homocysteine-induced injury, providing evidence of its beneficial effects on cardiovascular disease. Treatment with B. officinalis inhibited TNF-α-induced ROS formation in HUVECs (Lee et al., 2010). 
Farrerol, a flavonoid considered to be the major component in the dried leaves of Rhododendron dauricum, significantly increased cell viability and enhanced superoxide dismutase (SOD) and GPx activity in H2O2-induced EA.hy926 cells. Farrerol also reduced elevation of intracellular MDA, ROS, and apoptosis, and significantly reduced the expression of Bax mRNA and protein, cleaved caspase-3, and phospho-p38 MAPK, while increasing the expression of Bcl-2 mRNA and protein in H2O2-induced EA.hy926 cells, as determined via real-time PCR and western blot analysis. Salvianolic acid B reduced oxidative stress, LDL oxidation, and oxLDL-induced cytotoxicity. Salvianolic acid B inhibited cupric ion-mediated LDL oxidation in vitro and attenuated human aortic endothelial cell-mediated LDL oxidation as well as ROS elevation (Yang et al., 2011). Treatment with β-elemene up-regulated the activities of antioxidant enzymes such as catalase, GPx, and glutathione in the aorta, while lowering the oxidative damage biomarker MDA. β-Elemene also elevated the generation of NO and up-regulated phosphorylation of eNOS (Ser1177) and Akt in vitro. Low-molecular weight compounds from white ginseng, mostly phenolic compounds, decreased the extent of atherosclerosis by attenuating oxidative stress (Lee et al., 2013). Protocatechualdehyde suppressed ROS generation induced by platelet-derived growth factor-BB (PDGF-BB) in VSMCs, as well as the increased phosphorylation of Akt and ERK 1/2 upon PDGF stimulation. These results suggest that protocatechualdehyde inhibits PDGF signaling by acting upstream of Akt and ERK 1/2, which indicates that its antioxidant effect might be related to PDGF signal transduction inhibition. Ethanolic propolis extract or thymoquinone treatment could reverse the oxidative damage resulting from a high-cholesterol diet in rabbits. 
Ethanolic propolis extract and thymoquinone decreased serum thiobarbituric acid reactive substances (TBARS) levels while enhancing glutathione levels in high-cholesterol diet-fed rabbits (Nadar et al., 2010). C. orbiculatus decreased MDA levels and increased SOD activity in the plasma of guinea pigs fed a high-fat diet. These results indicate that C. orbiculatus inhibited oxidative stress. Isorhamnetin, a flavonoid isolated from Hippophae rhamnoides, significantly inhibited oxLDL-induced THP-1-derived macrophage impairment by decreasing ROS levels, lipid accumulation, and caspase-3 activation. Isorhamnetin also induced phosphatidylinositol 3-kinase (PI3K)/AKT activation and heme oxygenase-1 (HO-1) induction, which inhibited atherosclerotic plaque progression in apo-E-knockout mice (Luo et al., 2015). Celastrol significantly suppressed oxLDL-induced excessive expression of LOX-1 and production of ROS in RAW264.7 mouse macrophages. Furthermore, celastrol remarkably reduced the expression of LOX-1 and generation of superoxide in mouse aortas (Gu et al., 2013). The aqueous extract of Chlorophytum borivilianum showed high antioxidant capacity through powerful NO, superoxide, hydroxyl, 2,2-diphenyl-1-picrylhydrazyl (DPPH), and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS) radical-scavenging activity. Furthermore, this extract showed ferric ion reducing capacity, metal chelating ability, and reduced lipid peroxidation in mitochondrial fractions significantly more than ethanolic extracts. In addition, the extract remarkably inhibited LDL oxidation (Visavadiya et al., 2010). The ethanolic extract of Glossogyne tenuifolia and its main compound luteolin-7-glucoside were revealed to be scavengers of superoxide, DPPH, and hydroxyl radicals (Wu et al., 2005). Copper-mediated LDL oxidation was also reduced by treatment with G. 
tenuifolia extract and luteolin-7-glucoside, and this was evaluated by measuring the formation of conjugated dienes and MDA, as well as electrophoretic mobility. Oral administration of the flavonoid-rich extract of H. perforatum reduced MDA levels in the sera and livers of rats. It also elevated SOD activity in the serum and liver, although catalase activity was significantly increased only in the liver (Zou et al., 2005). These results are summarized in Table 4.
INHIBITORY EFFECTS OF MEDICINAL HERBS AGAINST THE INFILTRATION AND PROLIFERATION OF VASCULAR SMOOTH MUSCLE CELLS
VSMC proliferation and migration, which contribute to the pathogenesis of atherosclerosis, are known to be associated with other cellular processes such as apoptosis, senescence, inflammation, and matrix alterations. Therefore, understanding VSMC behavior in atherosclerosis is critical in identifying therapeutic targets to both prevent and treat atherosclerosis.
Pre-treatment with corynoxeine (5-50 µM) significantly reduced VSMC numbers and inhibited PDGF-BB-induced DNA synthesis and ERK1/2 activation by VSMCs without inducing cytotoxicity. Corilagin and its analogue Dgg16 inhibited oxLDL-induced proliferation of VSMCs (Duan et al., 2005). Sparstolonin B, isolated from Sparganium stoloniferum, suppressed endothelial cell tube formation and cell migration in a concentration-dependent manner. Treatment of HUVECs with sparstolonin B caused an increase of cells in the G1 phase and decreased the number of cells in the S phase. Cyclin E2 (CCNE2) and cell division cycle 6 (CDC6), cell division regulatory proteins, were down-regulated after sparstolonin B exposure. In addition, sparstolonin B significantly reduced capillary length and branching number (Bateman et al., 2013). Hibiscus sabdariffa is also known to show hypolipidemic activity in cholesterol-fed rabbits. In addition, it suppressed the formation of foam cells and inhibited the migration of smooth muscle cells and calcification in blood vessels (Chen et al., 2003). Nelumbo nucifera leaf extract treatment induced apoptosis and altered the JNK and p38 MAPK pathways in VSMCs. Non-cytotoxic doses of this extract also inhibited the secretion of MMP-2/9 and cell migration by suppressing the focal adhesion kinase (FAK)/PI3K/small G protein pathway. Histopathological results revealed that 1.0% of the extract reduced formation of neointima, restrained the proliferation of smooth muscle cells, and reduced MMP-2 secretion in the blood vessels of rabbits (Ho et al., 2010). The ethanolic extract of Gleditsia sinensis up-regulated p21WAF1 levels and suppressed cyclinB1, cyclin-dependent kinase 1 (Cdc2), cell division cycle 25c (Cdc25c), and G2/M cell cycle regulators.
In addition, treatment with this extract activated ERK1/2, p38 MAPK, and JNK, and inhibited expression of MMP-9 induced by TNF-α in VSMCs. This extract also reduced the expression of NF-κB and activator protein 1 (AP-1), which are essential cis-elements for the MMP-9 promoter. Salvianolic acid B remarkably suppressed LPS-induced cell migration via the inhibition of MMP-2 and MMP-9 synthesis and the reduction of JNK and ERK1/2 (Lin et al., 2007). Protocatechualdehyde especially inhibited PDGF-induced migration and proliferation of VSMCs. It also down-regulated the PI3K/Akt and MAPK pathways, both of which regulate major enzymes associated with proliferation and migration. In addition, it promoted S-phase arrest of the VSMC cell cycle and inhibited cyclin D2 expression. The ethanolic extract of Z. nummularia decreased HASMC proliferation, adhesion to fibronectin, migration, and invasion (Fardoun et al., 2017). Esculetin significantly suppressed the proliferation of VSMCs through a lipoxygenase-dependent pathway. Three predominant signaling pathways are inhibited by esculetin: the first is the activation of p42/44 MAPK and the immediate early genes of the downstream effectors c-fos and c-jun, the second is the activation of NF-κB and AP-1, and the third is PI3-kinase activation and cell cycle progression. Furthermore, esculetin also reduced activation of RAS, a shared upstream event of the above signaling cascades (Pan et al., 2003). Honokiol inhibited the TNF-α-induced proliferation and migration of rat aortic smooth muscle cells in a dose-dependent manner. Pretreatment with honokiol blocked expression of MMP-2 and MMP-9, activation of NF-κB, and phosphorylation of ERK1/2 induced by TNF-α (Zhu et al., 2014). These results are summarized in Table 5.
INHIBITORY EFFECTS OF MEDICINAL HERBS ON PLAQUE FORMATION
Atherosclerosis is characterized by the narrowing and hardening of arteries following the buildup of plaque, which is composed of substances found in the blood, such as fat, cholesterol, and calcium. Plaque blocks the artery and disrupts blood flow around the body, leading to life-threatening conditions.
The modulatory effects of salvianolic acid B, the most abundant bioactive compound from S. miltiorrhiza, were evaluated on activated platelet-induced inflammation in endothelial cells (Xu et al., 2014). This compound inhibited ADP- or α-thrombin-induced human platelet aggregation in platelet-rich plasma samples in a dose-dependent manner in a platelet aggregation assay, and significantly reduced the release of soluble P-selectin. In addition, adhesion of ADP-activated platelets to EA.hy926 cells and NF-κB activation were reduced by pre-treatment with this compound (Xu et al., 2014). Cryptotanshinone, another bioactive compound from S. miltiorrhiza, significantly suppressed the formation of atherosclerotic plaque and increased plaque stability in apo-E-knockout mice by suppressing the expression of LOX-1 and MMP-9. The effects of atractylenolides on platelet function were investigated in vitro and in vivo. Atractylenolides I, II, and III are the major components of the medicinal plant Atractylodes macrocephala. Atractylenolides II and III attenuated agonist-induced platelet aggregation and ATP release from dense granules, whereas atractylenolide I did not show such effects. Atractylenolides II and III showed suppressive effects similar to those of acetylsalicylic acid on platelet activation in response to agonists. Plasminogen activator inhibitor-1 (PAI-1) is associated with fibrin deposition, which develops into organ fibrosis and atherosclerosis. The ethanolic extract of Zanthoxylum nitidum var. tomentosum and its main compound, toddalolactone, showed PAI-1 inhibitory effects. Toddalolactone suppressed binding of PAI-1 with urokinase-type plasminogen activators (uPA), and therefore attenuated formation of the PAI-1/uPA complex (Yu et al., 2017). Compounds isolated from Callicarpa nudiflora, including 1,6-di-O-caffeoyl-β-D-glucopyranoside, suppressed platelet aggregation induced by ADP, U46619, and arachidonic acid. 
1,6-Di-O-caffeoyl-β-D-glucopyranoside also revealed obvious competitive effects on thromboxane prostanoid (TP) and P2Y12 receptors, and inhibited RhoA and PI3K/Akt/glycogen synthase kinase 3 beta (GSK3β) signal transduction (Fu et al., 2017). Using an aggregometer, protocatechualdehyde was found to show anti-thrombotic effects associated with inhibition of platelet aggregation. These results are summarized in Table 6.
CONCLUSION
This review highlighted recent studies of herbs effective in the treatment and prevention of atherosclerosis. Herbs have long been used for medicinal purposes and are still widely used today, although elucidation of their therapeutic efficacies and mechanisms has only recently begun. We reviewed most articles concerning herbs that are effective for the treatment of atherosclerosis and classified them into six categories according to their MOAs. The experiments reviewed in this article were conducted with either herbal extracts or pure compounds isolated from herbs. The mechanisms of the herbal compounds were diverse: blood lipid-lowering activity, inhibition of monocyte recruitment and activation, anti-inflammatory effects, anti-oxidative effects, inhibition of the infiltration and proliferation of vascular smooth muscle cells, and inhibition of plaque formation. Certain medicinal herb-derived compounds, such as salvianolic acid B, cryptotanshinone, and protocatechualdehyde, did not show a single specific MOA, suggesting that they exert anti-atherosclerotic activities via multiple mechanisms; indeed, most of the compounds act not through one mechanism alone but in several ways, and more specific and detailed mechanisms may yet be uncovered. In conclusion, many reports suggest that herbal compounds are effective in the treatment of atherosclerosis. However, one should note that because most in vivo studies have been conducted in laboratory animals such as rabbits and rats, the results and efficacies may not be the same in humans. Moreover, in studies using plant extracts rather than pure compounds, the proportion of active compounds may differ even within the same herb species, as the production environment affects the content of herbal constituents. Further studies with more subjects are needed for a better understanding of herbal compounds, and we hope that this review will be helpful for future studies. 
This paper re-considers a recent analysis on the so-called Couplet–Heyman problem of least-thickness circular masonry arch structural form optimization and provides complementary and novel information and perspectives, specifically in terms of the optimization problem, and its implications in the general understanding of the Mechanics (statics) of masonry arches. First, typical underlying solutions are independently re-derived, by a static upper/lower horizontal thrust and a kinematic work balance, stationary approaches, based on a complete analytical treatment; then, illustrated and commented. Subsequently, a separate numerical validation treatment is developed, by the deployment of an original recursive solution strategy, the adoption of a discontinuous deformation analysis simulation tool and the operation of a new self-implemented Complementarity Problem/Mathematical Programming formulation, with a full matching of the achieved results, on all the arch characteristics in the critical condition of minimum thickness.
Introduction
This work further investigates the issue of (symmetric) circular masonry arch form optimization (Couplet-Heyman problem), in the quest for a least-thickness evaluation under uniform self-weight (Figs. 1 and 2). The modern framing of such a problem relies on the contemporary contributions by Jacques Heyman [1][2][3][4][5][6] and the recent revisitation in earlier companion work [7], with therein extensive references, also to various historical and development perspectives on the subject.
The present investigation belongs to a research project by the authors on the statics of masonry arches [7][8][9][10][11][12][13][14], where the following treatments have been attempted: analytical [7,14]; analytical-numerical, accompanied by a Discrete Element Method (DEM) investigation, through an available Discontinuous Deformation Analysis (DDA) tool [8,9,[15][16][17], and including reducing friction effects and resulting mixed collapse modes [10,11]; analytical-numerical, by an innovative Complementarity Problem/Mathematical Programming formulation, truly accounting for finite friction implications [12,13]. This shall be framed within the relevant, updated literature, specifically considering the issue of minimum thickness evaluation, and dedicated attempts to reveal further and independent/complementary information on the analytical treatment, with additional separate validation in terms of numerical results.
Specifically, in the least-thickness collapse evaluation, classical Heyman solution [3] is shown to constitute a sort of approximation of the true solution (here labelled as "CCR" [7]), in Heyman assumption of self-weight distribution along the geometrical centreline of the arch, while Milankovitch solution [43][44][45], as a cornerstone of thrust-line-like analysis, in view of form optimization [46][47][48][49][50][51][52], may as well be derived, in the consideration of the real self-weight distribution along the arch, though at the price of a recorded increasing complexity in the explicit analytical handling of the governing equations (now analytically resolved to a very end in [14]).
The analysis makes reference to the classical three Heyman hypotheses of masonry structure behaviour (no tensile strength; infinite compressive strength; no sliding failure) and refers just to the potential development of a purely rotational collapse mode (as at infinite friction). Given the value of half-opening angle α of the (symmetric) circular masonry arch, the following three basic arch characteristics are sought, in the least-thickness condition of incipient collapse: angular inner-hinge position β from the crown; thickness-to-radius ratio η = t/r; and non-dimensional horizontal thrust within the arch h = H/(w r), where H is the horizontal thrust, w = γ t d the specific weight per unit length of the geometrical centreline of the arch, and γ and d the constant specific weight per unit volume and out-of-plane depth of the arch. Alternatively [7,14], one may also make reference to intrinsic non-dimensional horizontal thrust ĥ = η h = H/(γ d r²), defined at given material (γ) and geometrical properties (d, r) of the circular masonry arch to be optimized (critical η still to be sought).
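Since w = γ t d, the two non-dimensional thrusts are tied by the identity ĥ = η h, as H/(γ d r²) = [H/(w r)]·(t/r). A minimal numeric check of these definitions (all values below are arbitrary, assumed only for illustration):

```python
gamma, d, r = 20e3, 1.0, 5.0     # specific weight (N/m^3), depth (m), radius (m) -- assumed values
t = 0.53                         # arch thickness (m) -- assumed value
eta = t / r                      # non-dimensional thickness, eta = t/r
w = gamma * t * d                # self-weight per unit centreline length, w = gamma*t*d
H = 0.62 * w * r                 # a horizontal thrust chosen so that h = 0.62
h = H / (w * r)                  # non-dimensional thrust, h = H/(w r)
h_hat = H / (gamma * d * r**2)   # intrinsic non-dimensional thrust
assert abs(h_hat - eta * h) < 1e-12   # identity h_hat = eta*h holds
```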
The paper is organized as follows. First, Sect. 2 provides a basic framing, with all the main governing equations, specifically concerning equilibrium and tangency conditions, for the line of thrust (locus of pressure points) within the least-thickness arch. Then, Sect. 3 derives alternative analytical solution approaches to deliver "correct" CCR solution [7] vs. "approximate" Heyman solution [3], through a "Coulomb's static approach" based on the so-called upper and lower horizontal thrusts, and a "Mascheroni's kinematic approach", based on the work/power balance at incipient collapse, with the least-thickness condition consistently stated within that. A representation of "accurate" Milankovitch solution [43][44][45] is then also recalled, and the response of the mechanical system illustrated, in terms of the three achieved solutions. Subsequently, Sect. 4 develops a further validation part by a separate numerical treatment, where: first, a recursive determination of angular inner-hinge position is developed; second, a final DDA validation is deployed, completing that earlier presented in [8,9]; third, an original self-implemented Complementarity Problem/Mathematical Programming computational strategy is adapted and operated, to deliver the final arch characteristics in terms of all kinematical (β), geometrical (η) and statical (h, ĥ) quantities. A full matching between numerical results and analytical outcomes is recorded, showing the validity of the three solution instances in terms of least-thickness masonry arch optimization.
Equilibrium Condition
At incipient (assumed) purely rotational collapse, in the least-thickness condition, the equilibrium of the (symmetric) circular masonry arch (thus, of the half-arch) shall be imposed. From the rotational equilibrium of upper portion AB around inner-hinge B, one has (Fig. 3):

H [(r + t/2) − (r − t/2) cos β] = W1 [(r − t/2) sin β − x1] (1)

where, in Heyman assumption of a uniformly distributed self-weight along the geometrical centreline of the arch, weight W1 of the upper portion of the half-arch is obtained as

W1 = w r β (2)

and the centre of gravity of the upper portion of the half-arch is located at following horizontal distance x1 from the vertical axis of symmetry (and vertical distance y1 from centre O):

x1 = r (1 − cos β)/β, y1 = r sin β/β (3)

By substituting Eqs. (2-3) into rotational equilibrium Eq. (1), and shifting to non-dimensional quantities η = t/r and h = H/(w r), one obtains a first equilibrium relation, (for instance) in terms of h:

h [(1 + η/2) − (1 − η/2) cos β] = β (1 − η/2) sin β − (1 − cos β) (4)

i.e.

h = h1(β, η) = [β (1 − η/2) sin β − (1 − cos β)] / [(1 + η/2) − (1 − η/2) cos β] (5)

Now, from the rotational equilibrium of total half-arch AC around hinge C at the shoulder extrados, one gets a second equilibrium condition (Fig. 3):

H (r + t/2)(1 − cos α) = W [(r + t/2) sin α − xW] (6)

where W (W = W1 + W2) is the total weight of the half-arch, with half-angle of embrace α, acting at horizontal distance xW from crown A. Given Eqs. (2)-(3), one has:

W = w r α, xW = r (1 − cos α)/α (7)

and, by substituting Eq. (7) into rotational equilibrium Eq. (6), one gets a second equilibrium relation in terms of h and η:

h (1 + η/2)(1 − cos α) = α (1 + η/2) sin α − (1 − cos α) (8)

This equilibrium relation is again linear in h (and η), and also linear in group A(α) = α cot(α/2), inserting the explicit dependence on half-opening angle α (see plot in Fig. 4). It may linearly be solved with respect to h (or η, or even A = α cot(α/2)), by obtaining

h = h2(η) = A − 1/(1 + η/2) (9)

Fig. 4 Functional dependence of A = α cot(α/2) on α, with indication of stationary (on β) and limit parameters of present CCR solution

Equilibrium Eqs. (5) and (9) constitute a system of two static equations, which may be condensed in a single one, by eliminating h, as h1 = h2, to get the following single limit equilibrium equation:

h1(β, η) = h2(η) (10)
Tangency Condition
Now, beside equilibrium, an optimality condition shall also be set for the masonry arch in the least-thickness condition, namely the tangency condition of the line of thrust at haunch intrados B, according to the textual description in Heyman's words [1][2][3][4][5]. However, Heyman [3] seems to have actually stated this tangency condition analytically in terms of the resultant thrust force [6], a simplification of the analysis leading to a beautiful "linear algebraic problem" [7], as shown below.
Since the angle of inclination of the thrust force at B to the horizontal is such that its tangent is given by ratio W1/H, which shall then coincide with the local inclination of the inner circle of the intrados profile, from Eq. (2) and relation H = wrh one has tan β = W1/H = β/h, Eq. (11), leading to the tangency equation for Heyman solution. To get instead the true tangency condition of the line of thrust (locus of pressure points) at intrados B, one shall first derive its analytical representation, e.g. in terms of eccentricity e(β) = M/N of the centres of pressure with respect to the centreline of the arch (e taken positive from the centreline towards centre O of the circular arch).
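From the stated ratio tan β = W1/H, with centreline weight W1 = wrβ and H = wrh, the Heyman tangency condition reduces to h = β/tan β. A small sketch (Python, illustration only) checks this relation numerically:

```python
import math

def heyman_tangency_thrust(beta: float) -> float:
    """Non-dimensional thrust h = H/(w r) satisfying Heyman's tangency
    condition tan(beta) = W1/H = beta/h at the haunch hinge B."""
    return beta / math.tan(beta)

# inclination of the resultant thrust at B matches the intrados slope:
beta = 1.0262  # Heyman's hinge position for the semicircular arch (~58.8 deg)
h = heyman_tangency_thrust(beta)
print(h, beta / h, math.tan(beta))  # beta/h equals tan(beta) by construction
```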
Towards that, one first gets a slightly modified and more general version of equilibrium relation (1), by the rotational equilibrium of any upper portion of the half-arch with respect to the centre of pressure at eccentricity e, at a general position β along the half-arch, Eq. (13). By solving this equilibrium relation with respect to eccentricity e (or to non-dimensional eccentricity −1 ≤ ê = 2e/t ≤ 1), again in terms of non-dimensional variables η = t/r and h = H/(wr), one derives the equation expressing the line of thrust as the locus of the centres of pressure of the resultant thrust force, Eq. (14). Eccentricity function ê(β) in fractional form in Eq. (14), displaying built-in property ê(0) = −1 at the crown (β = 0), depends on both η and h. By further posing h = h2, Eq. (9), which automatically sets ê(α) = −1 also at the arch shoulder (β = α), one obtains the final expression of the eccentricity of the line of thrust passing through crown A and shoulder C, at any given value of α (thus of A = α cot(α/2)) and η. In the critical, least-thickness condition, the line of thrust touches intrados B, where the hinge at the haunch forms, and, at the same time, becomes tangent to the intrados of the arch. Thus, function e(β) has to display a stationary point at the haunch section (where e = t/2), i.e. its first-order derivative has to vanish there, since the tangency condition has to hold at haunch B, where ê = 2e/t = 1; one then gets the tangency condition expressed in terms of h, Eq. (20), where term f(β) = sin β + β cos β is involved (see Fig. 5 and later discussion).
This shows that assuming he = hH, as taken by Heyman, amounts to an approximation of the true tangency condition h = he. This may look reasonable as long as η remains small. All this makes the correct solution slightly more involved than the former one, shifting from a "linear" to a "quadratic algebraic problem" [7].
Finally, the governing system of the least-thickness masonry arch optimization problem, for a self-weight distribution along the geometrical centreline, may be stated in terms of h as in system (21), where A = α cot(α/2) (Fig. 4) and δCCR is an on/off control flag allowing one to shift from Heyman (δCCR = 0) to CCR solution (δCCR = 1). The first two equations, eliminating h, as h1 = h2, set the equilibrium relation, Eq. (10); the third equation, i.e. Eq. (20), sets the tangency (optimality) condition, in a shift between Heyman and CCR solutions. These equations will be the subject of a further separate analytical treatment in the following section.
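To make the Heyman branch of the governing system (δCCR = 0) concrete, the sketch below solves it numerically for the semicircular arch (α = π/2). The lever arms in h1 and h2 are reconstructed here from the stated hinge layout (crown extrados A, haunch intrados B, shoulder extrados C) and the centreline weight data W1 = wrβ, x1 = r(1 − cos β)/β; they are an assumption of this illustration, not the paper's printed formulas. Heyman tangency is h = β/tan β.

```python
import math

ALPHA = math.pi / 2  # complete semicircular arch

def h2(eta):
    # reconstructed half-arch equilibrium about shoulder extrados hinge C;
    # linear in A = alpha*cot(alpha/2), as stated in the text: h2 = A - 2/(2+eta)
    return ALPHA / math.tan(ALPHA / 2) - 2.0 / (2.0 + eta)

def h1(beta, eta):
    # reconstructed upper-portion equilibrium about haunch intrados hinge B
    # (thrust applied at crown extrados; centreline weight W1 = w*r*beta at x1)
    num = beta * ((1 - eta / 2) * math.sin(beta) - (1 - math.cos(beta)) / beta)
    den = (1 + eta / 2) - (1 - eta / 2) * math.cos(beta)
    return num / den

def bisect(f, a, b):
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def beta_from_tangency(eta):
    # Heyman tangency beta/tan(beta) = h2(eta); beta*cot(beta) is decreasing
    return bisect(lambda b: b / math.tan(b) - h2(eta), 0.5, 1.5)

# residual of equilibrium h1 = h2 along the tangency curve
eta = bisect(lambda e: h1(beta_from_tangency(e), e) - h2(e), 0.09, 0.13)
beta = beta_from_tangency(eta)
print(f"beta ~ {math.degrees(beta):.2f} deg, eta = t/r ~ {eta:.4f}")
# classical Heyman estimates: beta ~ 58.8 deg, t/r ~ 0.106
```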
Alternative Analytical Derivation and Interpretation
Now, further, independent and reconciling ways to derive the least-thickness condition, leading to the same results, are here presented, namely: a "Coulomb's static approach", based on the so-called upper and lower horizontal thrusts, and a "Mascheroni's kinematic approach", based on the balance of virtual work (or power).
Coulomb's Static Approach
An alternative way of deriving the least-thickness condition could be based on a "Coulomb's static approach", according to the terminology adopted in Sinopoli et al. [22]. Specifically, reference is here made to the original derivations presented in Blasi and Foraboschi [19], which account for the explicit determination of the "upper" and "lower" horizontal thrusts. In practice, according to Coulomb's view, it is stated that, to warrant equilibrium, the horizontal thrust should take values that are in between the minimum and maximum values that the horizontal thrust could assume. Such two limit values can be obtained as described below.
Lower horizontal thrust Hmin(β) = HL = H1 is the minimum value of the horizontal thrust, applied to the extrados at crown A, which corresponds to imposing the rotational equilibrium of any upper portion of the half-arch, of variable half-opening β, with respect to intrados inner-hinge B. In practice, this coincides with earlier-mentioned value H1 in Sect. 2. Upper horizontal thrust Hmax(β) = HU is the maximum value of the horizontal thrust, applied to the extrados at shoulder C, which corresponds to imposing the rotational equilibrium of any lower portion of the half-arch, of variable (α − β) opening, with respect to intrados inner-hinge B.
With reference to Fig. 3, and following the same type of earlier-explained derivations, the lower and upper horizontal thrusts may be determined by the above-described equations of rotational equilibrium, Eqs. (22)-(23), with the involved geometrical quantities defined in Eq. (24). Thus, in usual non-dimensional form (η = t/r, h = H/(wr)), one obtains Eqs. (25)-(26), where the quantities at the numerators and denominators in the fractional forms have been introduced, and use has been made of Eq. (9), stating the rotational equilibrium of the total half-arch with respect to shoulder hinge C, also re-written below in fractional form, Eq. (27). Equation (25) confirms that hL corresponds to the same value h1 earlier derived in Eq. (5); thus hL = h1 and, through h1, use was already made of the lower thrust. Equation (26) represents an additional expression of the non-dimensional horizontal thrust, which is alternative to that of h2, Eq. (27), earlier accounted for. This could be used to write the second ruling equilibrium equation in an alternative way.
Notice that, unlike what happens for h2, the dependence on α only through group A is not apparent in Eq. (26). However, it is straightforward to show, based on the last relation written in Eq. (26), that such a dependence holds as well. In practice, out of the three equilibrium-linked horizontal thrusts hL = h1, h2, hU, any of the three equivalent equilibrium conditions below, based on two of them, could be employed to state the equilibrium equation, Eqs. (28). Notice that, here, all three thrusts hL, h2, hU depend on η. Thrust hL = h1 depends further only on angular position β; h2 only on angular position α; hU on both angular positions α and β.
As debated by Blasi and Foraboschi [19], see Figures 7 and 8 in their paper, hL(β) and hU(β), as functions of β, respectively provide lower and upper bounds for h. Since such two curves depend on η, at a given value of α in hU, they may turn out as follows: detached, providing, with a positive minimum relative distance (clearance), a measure of the "margin of safety" for the arch; intersecting, with a negative minimum relative distance, at the same level hL = hU = h2, revealing a sub-critical condition; tangent to each other, at a zero relative distance, at a point where hL = hU = h2, locating the true critical least-thickness condition.
Also, since the three curves hU(β), hL(β), h2, the latter corresponding to a horizontal line at constant level h2, intersect, if this happens, at the same level h2, regardless of the values of β at which hU = hL occurs, the intersection between hU and hL occurs at constant h. Thus, in the limit of the tangency condition at the minimum thickness, curves hU(β) and hL(β) are both stationary at the point where hL = hU, i.e. their local tangent is horizontal, as that of constant h2. This states the critical condition as a zero first-order derivative with respect to β, h′L = 0 or h′U = 0, where equilibrium Eqs. (28) hold. At such stationary points of hU(β) and hL(β), one has hmin = hL = h1 and hmax = hU, with positive clearance hmax − hmin > 0 in the over-safe condition hmax > hmin, negative clearance hmax − hmin < 0 in the sub-critical condition hmax < hmin and no clearance hmax − hmin = 0 in the critical condition hmax = hmin. Now, it is quite straightforward to show, from the fractional forms reported in Eqs. (25)-(26), that the stationary conditions on either hL or hU (h′L = 0 or h′U = 0), or even the condition of mutual tangency h′L = h′U at hL = hU, as adopted by Blasi and Foraboschi [19], lead to the same tangency condition as analysed earlier for the line of thrust, which was stated in terms of h = he, Eq. (20), through the definition of the line of thrust. Indeed, Eqs. (29)-(30) hold at the stationary points of hL and hU; also, alternative condition h′L = h′U, at hL = hU, from Eqs. (29)-(30), leads to the same result. Thus, all the tangency conditions above are equivalent to h = he, as earlier directly stated on the line of thrust. This actually signals the easiest way to account for the tangency condition of the line of thrust at the intrados hinge.
Once the simpler expression of lower thrust hL = h1 (dependent just on β) is determined from equilibrium, its stationary condition h′L = 0, at h = hL, immediately leads to tangency condition h = he, without even the need of passing through the definition of the line of thrust and of its eccentricity e(β) (Sect. 2.2).
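This shortcut (stationarity of the lower thrust alone) lends itself to a compact numerical sketch. Under the same reconstructed lever arms as used above for the Heyman case (an assumption of this illustration, not the paper's explicit formulas), the critical CCR state for the semicircular arch is found by seeking η such that the maximum of hL(β; η) over β equals the half-arch equilibrium thrust h2(η):

```python
import math

ALPHA = math.pi / 2  # complete semicircular arch

def h2(eta):
    # reconstructed half-arch equilibrium about shoulder hinge C:
    # linear in A = alpha*cot(alpha/2): h2 = A - 2/(2 + eta)
    return ALPHA / math.tan(ALPHA / 2) - 2.0 / (2.0 + eta)

def hL(beta, eta):
    # lower thrust h_L = h_1 (upper-portion equilibrium about intrados hinge B)
    num = beta * ((1 - eta / 2) * math.sin(beta) - (1 - math.cos(beta)) / beta)
    den = (1 + eta / 2) - (1 - eta / 2) * math.cos(beta)
    return num / den

def argmax_hL(eta):
    # golden-section search for the stationary (maximum) point of hL(beta)
    a, b = 0.3, 1.5
    g = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(120):
        c, d = b - g * (b - a), a + g * (b - a)
        if hL(c, eta) > hL(d, eta):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# critical eta: the maximum of h_L over beta just equals h_2 (tangency)
lo, hi = 0.09, 0.13
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if hL(argmax_hL(mid), mid) - h2(mid) > 0:
        lo = mid
    else:
        hi = mid
eta = 0.5 * (lo + hi)
beta = argmax_hL(eta)
print(f"beta ~ {math.degrees(beta):.2f} deg, eta = t/r ~ {eta:.4f}")
# literature values for the Couplet-Heyman problem: beta ~ 54.5 deg, t/r ~ 0.1075
```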
In conclusion, given the four thrust equations h = hL = h1, h = hU, h = h2, h = he, any of the four possible systems formed with three of them would equivalently lead to a correct least-thickness solution representation. These systems could be taken for a numerical solution of critical parameters β, η, h at given half-angle of embrace α. The solutions of the first three systems, involving tangency condition h = he, are expected to be more efficient than that of the last system, based just on equating the three thrusts hL = h1, hU, h2. However, this latter system, despite not explicitly accounting for the tangency condition, could be used as well for the final solution. The first system is probably still the simplest, also conceptually. As stated above, it corresponds to setting the equilibrium relation as equilibrium condition hL = h2 and the stationary condition as either tangency condition e′ = 0 or h′L = 0. All the above-outlined considerations can be inspected in the plots of horizontal thrusts hL = h1, hU, h2 depicted in Figs. 6, 7, 8, and 9, which respectively show the thrust functions depending on β, for the two reference cases of α = 90° and α = 140°, in comparison for Heyman and CCR solutions. As commented in [7], for the arch with α = 7π/9 according to present CCR solution (Fig. 9, plotting the non-dimensional thrusts hU, hL, h2), the three lines are truly tangent within a fork of two values of β that surround the true value of β determined by CCR solution (which is almost in the middle), correctly leading to a true tangency condition. In Figs. 6, 7, 8, and 9, curve he(β) is reported as well, which is useful to represent the stationary, thus tangency, condition, by locating the stationary points of curves hU(β) and hL(β).
Also, the plots of CCR solution show that the direct use of tangency condition h = he shall numerically become more effective, since its crossing of hU(β), hL(β), h2 is sharper than that going through the quite flat stationary points. Indeed, it is confirmed that curves hL(β) and hU(β) turn out quite flat near the common stationary point in the critical condition, which locates the correct angular position β of the inner-hinge. Thus, as noted by Heyman in quoting Coulomb's observations, it is clear that even approximate estimations of angular inner-hinge position β might lead to fairly correct values of h (and also of η). For this reason, despite missing the correct estimate of β, as instead obtained by CCR solution, Heyman solution still appears quite acceptable in engineering terms (at least for under-complete arches).
Mascheroni's Kinematic Approach
An additional, independent way of deriving the least-thickness condition could be based on a "Mascheroni's kinematic approach", following again the terminology adopted in Sinopoli et al. [22]. Indeed, the below reported solution is derived from an alternative kinematic approach, based on the principle of virtual work (or power), which is written with reference to the purely rotational rigid-body five-hinge collapse mechanism of the arch (see Figs. 2 and 3).
Referring to the potential three-hinge rigid-body kinematic chain of the half-arch in Fig. 3, one takes the external virtual work (or power) equation, to state equilibrium at incipient collapse: L̇e = 0, Eq. (34), i.e., explicitly, Eq. (35), holding for any nonzero angular rotations ψ and φ (or velocities ψ̇ and φ̇), where, given Eqs. (2)-(3) and (24), W1 = wrβ and W2 = wr(α − β) are the weights of the two portions of the half-arch separated by inner haunch hinge B (thus, with total weight of the half-arch W = W1 + W2 = wrα), acting at abscissas x1 = r(1 − cos β)/β and x2 = r(cos β − cos α)/(α − β) = x2B + (r − t/2) sin β from the vertical axis of symmetry at the crown; x̄1 is the horizontal distance of the centre of rotation Ω1 of the upper portion of the half-arch with respect to crown A and x2B is the horizontal distance from the line of action of W2 to point B. Equation (34), i.e. explicitly (35), is evidently a way to state equilibrium at the virtual rotational collapse mechanism that may develop.
The kinematic chain is a one-degree-of-freedom system. The relation between the two angular velocities ψ̇ and φ̇ in Fig. 3 can be obtained by imposing that the horizontal velocity of inner-hinge B is the same for the two portions of the arch; thus, one obtains the kinematic link in Eq. (36). Moreover, abscissa x̄1 can as well be determined, by imposing that the vertical velocity of inner-hinge B is the same for the two portions of the arch, obtaining Eqs. (37)-(38). By substituting Eq. (38) into virtual work (or power) Eq. (35) and by making use of relation (36) and explicitly of W = W1 + W2, x2 = x2B + (r − t/2) sin β, one obtains, in view of relations (22)-(23), Eq. (39) for lower and upper thrusts HL and HU. Thus, stating L̇e = 0 as an equilibrium condition is fully equivalent to stating it as HL = HU, namely the same equilibrium condition found from the previous Coulomb's static approach. In other words, and in non-dimensional terms, L̇e = 0 is equivalent to assuming an equilibrium condition in the form hL = hU. A second relation, which indirectly expresses the tangency condition of the line of thrust at intrados B, can be obtained by setting to zero the derivative with respect to β of the external virtual work (or power), ∂L̇e/∂β = 0, Eq. (41). This comes from the following consideration: if the line of thrust has to turn out tangent to the intrados at haunch B, it should also not deviate much from the circular intrados curve in the surroundings of B. As a consequence, equilibrium should be warranted also for small variations of point B, thus of angular position β, and this holds true if the angular derivative of the external virtual work (or power), at intrados B, vanishes as well.
Given the obtained Eq. (39), alternative stationary condition (41) becomes equivalent to a condition on the derivatives of the two thrusts: at L̇e = 0, i.e. at HL = HU, ∂L̇e/∂β = 0 is equivalent to H′L = H′U, i.e. one of the equivalent ways to set the tangency condition according to earlier Coulomb's static approach. In other words, and in non-dimensional terms, system {L̇e = 0, ∂L̇e/∂β = 0}, formed by Eqs. (34) and (41), is wholly equivalent to system {hL = hU, h′L = h′U} earlier obtained by Coulomb's static approach. Thus, the two approaches are fully equivalent, and both lead to the same solution (as earlier derived), as could be checked by independent numerical evaluations of the various solving systems that have been here derived.
Different ways to rewrite the first of Eqs. (45) similarly to Heyman formula (44)a, with term A isolated on the right-hand side, could be the following, Eqs. (47), though obviously there now appears a dependence on A, thus on α, also on the left-hand side of the equation (thus inspiring possible recursive evaluations, as treated in the next section). Similarly, for the sake of completeness, corresponding expressions could as well be written for η, Eqs. (48), for h, Eqs. (49), or for ĥ, Eqs. (50). The solution of CCR system (21), for δCCR = 1, or of quadratic system (45), can then be obtained in compact form, by explicitly solving for triplet (A, η, h), where the signs in front of the square root terms are sorted out in triplet (+, −, +) or in triplet (−, +, −), i.e. with sorting (A+, η−, h+) and (A−, η+, h−). Non-dimensional horizontal thrust ĥ can then be determined as well by the product of η and h (and sorted out in the same way as that of h). Notice that A(β), thus α(β), η(β) and h(β), or ĥ(β), are two-valued functions of β, i.e. there are two values of A, thus of α, η and h, or ĥ, that correspond to the same inner-hinge position β. Term f² − 2gS + S² is ≥ 0 for 0 ≤ β ≤ β_sβ^CCR (Fig. 5), where β_sβ^CCR is the root of f² − 2gS + S² = 0, assuring that the solution turns out real-valued in that range of β.
A direct graphical comparison between Heyman and CCR solutions can be appreciated in Fig. 10, where triplet A, η, h is represented by analytical plots as a function of angular inner-hinge position β. This representation in terms of β allows highlighting the differences between the two solutions, which are mainly due to the dissimilar trends experienced on the estimated hinge position. Indeed, the trends of A, η, h are monotonic (single-valued) for Heyman solution and non-monotonic (double-valued) for CCR solution, with a very appreciable deviation in terms of β, especially at increasing half-opening angle α (decreasing A), already when approaching the complete semicircular arch case α = A = π/2 and more and more when α goes beyond that (thus the greatest differences are revealed for over-complete arches with half-angles of embrace larger than 90°). Despite that, since, as stated by Heyman, the hinge position is somehow an internal ingredient in the solution, in engineering terms the final differences on η and h at variable angle of embrace are rather limited [7]. In the plots in Fig. 10, the trends for β small, corresponding to small angles of embrace α, which turn out the same for Heyman and CCR solutions, are represented as well.
A further detailed representation of CCR solution is provided in Fig. 11, where triplet β, η, h is depicted as a function of A, by analytical parametric plots. The true appearance of the line of thrust that develops within the arch in the critical least-thickness condition is also analytically represented for CCR solution by the polar plots reported in Figs. 12, 13, 14 and 15, for some characteristic values of the half-angle of embrace, including: a reference case for α < 90°, i.e. α = 70°; the characteristic case of α = α_sβ^CCR that corresponds to the stationary condition of curve β^CCR(α) [7,14]; another taken over-complete reference case, i.e. α = 140°; limit case α = α_l^CCR of CCR solution, leading to a vanishing horizontal thrust [7]. The plots make apparent the increasing thickness that the arch shall display to warrant self-standing equilibrium, at increasing opening angle, with a corresponding decrease of non-dimensional horizontal thrust h (and actually a bell-shaped trend of intrinsic non-dimensional horizontal thrust ĥ, going through a maximum [7,14]). The lines of thrust start to "bend" for an α that is around that leading to the stationary condition on β and reach, in the limit configuration, a theoretical profile that gets to the intrados on the vertical axis of symmetry at the crown. This corresponds to a precarious, inverted-pendulum equilibrium configuration, achieved by a resultant self-weight of the half-arch that is exactly vertically aligned on the underlying bearing hinge at the shoulder, with a zero transverse horizontal thrust (h = 0) and an inner-hinge that disappears, by pulling back to zero (β = 0), with a released section at the crown (giving rise to an overturning mechanism, at infinite friction [13]). In such a limit case, according to CCR solution, the thickness of the arch becomes equal to its radius (η = 1).
Generalization to Milankovitch Solution
So far, the assumption of Heyman uniform self-weight distribution along the geometrical centreline of the arch was considered. Instead, due to the curvature of the circular arch and to the resulting wedged shape of each ideal infinitesimal chunk of the continuous arch, its centre of gravity turns out slightly radially displaced, at radial distance rG = r (1 + η²/12), a bit larger than radius r, where the Milankovitch [43][44][45] multiplicative correction factor (1 + η²/12) appears and then comes to affect the various governing equations (at growing resulting η). Specifically [7], the correction modifies the equilibrium relations, the eccentricity relations and the tangency condition, leading to the final Milankovitch governing system, then configuring a more involved "cubic algebraic problem" [7]. Explicit analytical closed-form representations of the solution of such a cubic algebraic problem are newly provided in [14]. Minimal differences between Milankovitch and CCR solutions may be appreciated at increasing opening angle of the arch, mainly for over-complete arches (Fig. 16).
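The radial shift of the centroid can be checked directly: for a thin circular wedge with outer and inner radii r(1 ± η/2), the exact sector-centroid formula rG = (2/3)(ro² + ro ri + ri²)/(ro + ri) reduces exactly to the Milankovitch factor r(1 + η²/12). A minimal sketch (Python, illustration only):

```python
import math

def rg_exact(r: float, eta: float) -> float:
    """Centroid radius of a thin circular wedge of mean radius r and
    thickness t = eta*r, from the exact sector-centroid formula."""
    ro, ri = r * (1 + eta / 2), r * (1 - eta / 2)
    return (2.0 / 3.0) * (ro**2 + ro * ri + ri**2) / (ro + ri)

def rg_milankovitch(r: float, eta: float) -> float:
    """Milankovitch multiplicative correction r_G = r (1 + eta^2/12)."""
    return r * (1.0 + eta**2 / 12.0)

for eta in (0.1075, 0.3):
    print(eta, rg_exact(1.0, eta), rg_milankovitch(1.0, eta))
# the two coincide; the correction stays tiny (~0.1%) for realistic
# thickness ratios eta ~ 0.1, which is why CCR remains a close approximation
```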
Finally, similarly to previous Figs. 12, 13, 14 and 15, Fig. 17 represents a resuming analytical representation of the line of thrust on the true arch profile in the critical least-thickness condition, for a taken reference case of over-complete arches (α = 140°). Thereby, the salient differences between Heyman, CCR and Milankovitch solutions may be appreciated, all together: line of thrust actually going out of the arch profile for Heyman solution; line of thrust truly tangent to the intrados at the haunch for CCR and Milankovitch solutions; quite close representations for CCR and Milankovitch solutions; different angular inner-hinge positions, drastically dissimilar for Heyman solution; different resulting thickness estimates, with Heyman visibly being sub-critical and CCR just slightly under-conservative with respect to Milankovitch solution; positioning of the resulting self-weight resultant for CCR (and Heyman) solution, as opposed to the true location completely accounted for by Milankovitch solution.
Further illustration of the characteristics of the mechanical system, for the three solutions, is available in [7,14], including the returning and bell-shaped trends of intrinsic non-dimensional horizontal thrust ĥ = ηh, with diverging differences for Heyman solution in the dependencies on β and minimal differences in the dependencies on A and α. Specific aspects of the stationarity of these curves are analytically treated in [14], by explicit closed-form solutions, referring to the cases of the symmetric circular masonry arch of "maximum horizontal thrust" and of "widest angular inner-hinge position".
Numerical Validation
Independent numerical-validation approaches are here outlined, for comparison with the analytical outcomes and for a further illustration of the symmetric circular masonry arch characteristics, in the optimality condition of minimum thickness.
Heyman Solution
The numerical solution of transcendental Eq. (44)a for angular inner-hinge position β is quite straightforward (especially in inverse form A(β), at given β). However, a recursive procedure could promptly be devised, e.g. for a further root refinement of a guess that could be taken, for instance, from a proposed root fit of the solution [7]. Table 1 reports a possible convenient way of doing that, which requires origin solutions that are not much on the left of the correct one or, more precisely, that are on the right of a singularity point that may arise when the denominator term in the recursive form, which is a function of β and A, becomes zero. Despite this little limitation (which is promptly overcome by the very good estimate provided by the above-commented fit), convergence turns out quite fast and, most importantly, holds over all the range of the admissible values of A. Indeed, the recursive proposal reported in Table 1 allows for a fast convergence on both extremes of the values of A, being actually slower in the usually considered range of half-angles of embrace lower than π/2. Other proposals might work better in this range but would display a slower convergence on the opposite side, or might have no singular points but achieve a much slower convergence for the different values of A. So, the presented option constitutes a reasonable compromise to handle all the possible cases.
Briefly, the adopted recursive formula reported on top of Table 1 has been obtained as follows. Take Eq. (44)a, isolate a pivoting βp term and solve with respect to that. Different possibilities arise, which can be checked right away for a possible recursive convergence. The chosen one has originated from a specific splitting choice (for compactness, again, S = sin β, C = cos β). Also, to further improve the convergence rate, additional splits of the pivoting term have been attempted and optimized through numerical trials; solving with respect to βp then leads to the expression finally useful towards a recursive evaluation of root β in Heyman solution. This expression has been used to generate the recursive estimations reported in Table 1, for five given values of A (A = α = π/2 and two values on the right and on the left of that). Starting from the simple fitting guess of [7], a few iterations turn out enough to recover the correct recursive estimate of root β.
CCR Solution
Expressions (47) on A could numerically be used for a recursive determination of the value of A at given pre-peak 0 ≤ β ≤ β_sβ. Specifically, the first two relations in the first line of Eq. (47) may run the recursive iteration; in all four cases reported in Eqs. (47), initial roots A0 should be chosen as defect estimates for the A+ root and as excess estimates for the A− root. Similarly, Eqs. (48) and (49) could be used as well for a recursive determination of η and h, with the above comments applying in the same way. However, recall that the role of η is inverted, as opposed to that of A and h, since the triplets have to be sorted out in orders (A+, η−, h+) and (A−, η+, h−). The estimate of ĥ = ηh might either go through similar resolutions of Eqs. (50) or directly by the product of the found η and h estimates.
A root recursive estimate of angular inner-hinge position β that works quite well on all sides of the solution branches is provided in Table 2, similarly to what is reported in Table 1 for Heyman solution. It is based on the first expression in the second line of Eq. (47). Starting from the easy-to-remember CCR fitting guess of [7], which one could derive for CCR solution based on the trends experienced for α small (A near 2) and α near α_l^CCR (A near 2/3), see Fig. 4, a good refinement is achieved in not many iterations (in general terms, a bit more than for the previous Heyman recursive evaluation).
Milankovitch Solution
As above, Table 3 presents a recursive strategy for refining angular hinge position β in Milankovitch solution, according to similar expressions, starting from an appropriate fitting guess, such as that proposed in [7], leading to a reasonable convergence on all branches of the solution characteristics.
Overall, the number of iterations may become the highest among the three solution instances, due to the increasing complexity in dealing, respectively, with Heyman, CCR and Milankovitch solutions (as reflected in the global increasing height of Tables 1, 2 and 3).
DDA Least-Thickness Results
A least-thickness self-standing evaluation of the masonry arch may be elaborated by using Discrete Element Method (DEM) quasi-static simulations of discrete voussoir arches [36][37][38]. To provide an independent numerical validation of the achieved analytical results, an available Discontinuous Deformation Analysis (DDA) tool was adopted in [8,9], to deliver estimates of the critical thickness and the appearance of the corresponding collapse mode (notice that the five-hinge purely rotational collapse mode is assumed from scratch in the analytical analysis, while here it is numerically evaluated, as an outcome of the analysis). Here, further complementary and completing results are reported. On the description of the employed methodology, and the framing in the competent literature, see [9]. The adopted DDA computational tool was freely taken from the web (sourceforge.net, "DDA for Windows", Limerick version 1.6), as developed by researchers at the University of California, Berkeley (see [15][16][17]).
Symmetric discretized arches with four blocks (with radial joints), at variable half-opening angles α between 60° and 140°, in steps of 10° (thus encompassing under-complete, semicircular complete and over-complete arches), are DDA analysed, at given inner-joint position, for which the minimum thickness is numerically estimated (alias, the minimum thickness still preventing, or the maximum thickness still inducing, arch collapse). At each given value of α, a target value of angular inner-joint position β is assumed, on the basis of the derived analytical solution (CCR solution is here taken for reference). For each arch, three analyses have been performed, for that value of β and for two fork (inferior/superior) values of β, at ± 0.5° around that value, so as to reveal and confirm a possible maximum trend of η at variable β, as it should be, revealing the critical least-thickness condition [9]. A summary of such DDA results, with a direct comparison to CCR and Milankovitch solutions, is provided in Table 4 and compactly illustrated in Figs. 18 and 19. Table 4 reports the recorded values of critical η = t/r, at the given values of α and β, determined with the procedure explained in [9]. On the side, the target discrete η values for CCR and Milankovitch solutions [7] are reported as well, for comparison purposes. As may be appreciated, differences look really minimal, and results show that the features of the analytical treatment are correctly reproduced and that the numerical DDA tool is able to provide consistent estimates of the critical thickness (having reasonably guessed the position of the inner-joint). Clearly, little dislocations of the inner-joint position do not alter much the numerically recorded values of critical thickness.
Similar comments may be drawn from the reading of Figs. 18 and 19. Figure 18 globally shows in grid-view the array of the various masonry-arch openings, with inner-joint set from CCR solution, with the corresponding recorded value of critical thickness-to-radius ratio and the attached apparent rotational collapse mode. The sequence of plots clearly illustrates the monotonically increasing trend of the thickness necessary for the arch to withstand, at increasing arch opening, over-complete arches becoming really much thicker to stand up. The purely rotational collapse mode is anyway correctly reproduced (a high value of friction coefficient is set in the simulations, to avoid possible manifestations of any form of sliding [9], see also next subsection). This looks thus consistent with source Heyman hypothesis of no sliding failure, at the basis of the present theoretical treatment (finite friction effects are separately analysed in [10][11][12][13]). Figure 19 further gathers, on the top plot, the imposed values of angular inner-joint position β at variable half-opening values α of the arch, with fork values around CCR solution (Milankovitch solution is never much dissimilar, if not a bit, for the last case of α = 140°), going through a maximum of β for an α at around 120°-130° [14].
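The ± 0.5° fork check used in the DDA runs can be mimicked analytically: at fixed inner-joint position β, solve the equilibrium-only condition h1(β, η) = h2(η) for η, and verify that η(β) goes through a maximum at the critical joint position. The sketch below (Python; lever arms reconstructed from the stated hinge layout, as an assumption of this illustration) does this for the semicircular arch:

```python
import math

ALPHA = math.pi / 2

def h2(eta):
    # reconstructed half-arch equilibrium: h2 = A - 2/(2 + eta)
    return ALPHA / math.tan(ALPHA / 2) - 2.0 / (2.0 + eta)

def h1(beta, eta):
    # reconstructed upper-portion equilibrium about the intrados joint at beta
    num = beta * ((1 - eta / 2) * math.sin(beta) - (1 - math.cos(beta)) / beta)
    den = (1 + eta / 2) - (1 - eta / 2) * math.cos(beta)
    return num / den

def eta_at(beta):
    # equilibrium-only thickness with the inner joint imposed at beta:
    # h1 - h2 decreases in eta, so plain bisection applies
    a, b = 0.01, 0.3
    for _ in range(100):
        m = 0.5 * (a + b)
        if h1(beta, m) - h2(m) > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

def argmax(f, a, b):
    # golden-section search for the maximum of a unimodal function
    g = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(120):
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) > f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

beta_c = argmax(eta_at, 0.8, 1.1)   # critical joint position
dz = math.radians(0.5)              # the +/- 0.5 deg fork of the DDA runs
etas = [eta_at(b) for b in (beta_c - dz, beta_c, beta_c + dz)]
print(math.degrees(beta_c), etas)   # middle (critical) value is the largest
```

The flatness of η(β) near its maximum, noted in the text, is also visible here: the three fork values differ only marginally.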
Least-Thickness Optimization by a Complementarity Problem/Mathematical Programming Formulation
A Complementarity Problem/Mathematical Programming (CP/MP) numerical formulation and self-made implementation, recently developed by the authors [12,13] within a MATLAB environment, with the target of specifically enquiring finite-friction effects on masonry arches [22,[53][54][55][56][57][58][59][60][61][62], is here adapted to the numerical analysis of symmetric circular masonry arches relying on infinite (say high) friction, and employed for a further independent validation and interpretation of the arch characteristics in the least-thickness condition, as by the solution of a numerical optimization problem. The general formulation is first briefly introduced, in its main traits, and described in the adaptations of the computational implementation needed towards the present numerical optimization analysis. Then, it is run, and salient results are selected and presented, in view of complementing and confronting the previous analytical and numerical outcomes.
The 3n velocities ruling the rigid-body movements of the n rigid blocks are collected in vector $\dot{\mathbf{u}}_G$, gathering the linear and angular velocities of each i-th block, i = 1, ..., n, while the 4(n + 1) relative velocities relevant to the (n + 1) joints are gathered in vector $\dot{\boldsymbol{\lambda}}$. The two sets of kinematic variables are linearly related by compatibility matrix $\mathbf{B}_G$:

$$\dot{\mathbf{u}}_G = \mathbf{B}_G \, \dot{\boldsymbol{\lambda}}$$

The physical constraints on the masonry arch require the fulfilment of the kinematic relations $\mathbf{B} \, \dot{\boldsymbol{\lambda}} = \mathbf{0}$, where $\mathbf{B}$ is a (kinematic) constraint matrix [13], while, at each joint, the internal actions must fulfil a set of static inequalities. Once the internal actions are computed by equilibrium, relationships (72), in terms of activation functions $\phi_k$, define an "admissible static configuration" within the masonry arch, the independent variables being the three redundant reactions H, V and W at the right built-in shoulder of the arch (Fig. 20):

$$\boldsymbol{\phi} = \mathbf{A} \, [H, V, W]^{\mathrm{T}} + \mathbf{t}_w \le \mathbf{0} \qquad (72)$$

where $\mathbf{A}$ and $\mathbf{t}_w$ are, respectively, the matrix and the vector governing equilibrium and joint activation [13]. Each relative velocity shall result orthogonal to the corresponding static activation function:

$$\phi_k \, \dot{\lambda}_k = 0$$

Such relations also include complete detachment (see [13]), for which, in order to eliminate a numerical multiplicity (not corresponding to a physical one), a (weak) orthogonality condition among non-negative variables $\dot{\mathbf{s}}^{+}$ and $\dot{\mathbf{s}}^{-}$ shall be added to the problem statement:

$$\dot{\mathbf{s}}^{+\mathrm{T}} \, \dot{\mathbf{s}}^{-} = 0$$

The external power is then readily computed from the applied loads and the block velocities, and the velocity field can arbitrarily be normalized, e.g. by setting the external power to a prescribed unit value. In general terms, in a limit equilibrium configuration at incipient collapse, the kinematic ($\dot{\boldsymbol{\lambda}}$) and static (H, V, W) variables shall fulfil the following linear Complementarity Problem (see, e.g. [77]):

$$\boldsymbol{\phi} \le \mathbf{0}, \qquad \dot{\boldsymbol{\lambda}} \ge \mathbf{0}, \qquad \boldsymbol{\phi}^{\mathrm{T}} \dot{\boldsymbol{\lambda}} = 0 \qquad (78)$$

Additionally, a convenient solution to complementarity system (78) may be obtained by the following non-linear Mathematical Programming problem, in which the (non-linear) orthogonality condition on variables $\boldsymbol{\phi}$ and $\dot{\boldsymbol{\lambda}}$ is used as an objective function (see, e.g. [78]), to be led to a zero value:

$$\min \; f = -\boldsymbol{\phi}^{\mathrm{T}} \dot{\boldsymbol{\lambda}}, \quad \text{subject to the linear constraints above} \qquad (79)$$

Further, also the orthogonality condition on variables $\dot{\mathbf{s}}^{+}$ and $\dot{\mathbf{s}}^{-}$ may conveniently be transferred to the objective function, by adding the (non-negative) scalar product $\dot{\mathbf{s}}^{+\mathrm{T}} \dot{\mathbf{s}}^{-} \ge 0$ to the (non-negative) term $-\boldsymbol{\phi}^{\mathrm{T}} \dot{\boldsymbol{\lambda}}$, in this way leading to a non-linear Mathematical Programming problem with linear constraints only [13].
Notice that, in Mathematical Programming problem (79), thickness t and friction coefficient μ are assumed as free parameters that may arbitrarily be changed, so as to reach a zero value as the optimum value of objective function f = −ϕᵀλ̇. Multiple solutions to (non-convex) problem (78), and thus to programming problem (79), may generally be expected (see, e.g. [79] and [82][83][84][85]), as linked to the context of non-standard Limit Analysis, in the realm of masonry arches with finite friction effects.
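To make the structure of complementarity system (78) and its Mathematical Programming restatement (79) concrete, the following self-contained Python sketch solves a deliberately tiny, generic linear Complementarity Problem. The matrices M and q are toy values of ours, not the arch matrices A and t_w of the paper, and the solution method (direct enumeration of complementary active sets) is chosen for transparency at small scale, not for resemblance to the authors' solver. At the solution, the orthogonality term that the MP formulation drives to zero indeed vanishes.

```python
import itertools
import numpy as np

# Toy LCP illustrating the complementarity structure of Eq. (78):
# find z >= 0 with w = M z + q >= 0 and the orthogonality w^T z = 0,
# i.e. the quantity that the MP formulation (79) drives to zero.
# M and q are arbitrary illustrative values, not the paper's data.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-5.0, -6.0])

def solve_lcp_by_enumeration(M, q):
    """Enumerate complementary active sets: on set S, w_i = 0 (the joint
    'activated'); off S, z_i = 0.  Return the feasible complementary pair."""
    n = len(q)
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        S = list(S)
        z = np.zeros(n)
        if S:
            # impose w_S = 0, i.e. M_SS z_S = -q_S, with z off S kept at zero
            z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
        w = M @ z + q
        if (z >= -1e-12).all() and (w >= -1e-12).all():
            return z, w
    return None

z, w = solve_lcp_by_enumeration(M, q)
print(z, w, z @ w)   # z ≈ [4/3, 7/3], w ≈ [0, 0], so z·w ≈ 0
```

In the authors' implementation the same orthogonality term is instead minimized as the objective of a constrained non-linear program (via MATLAB's fmincon); the enumeration above merely illustrates, at toy scale, what a zero objective value certifies.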
Additional details on the original CP/MP formulation are delivered in [13]. After an appropriate normalization of variables, the solution of the optimization problem has been obtained within MATLAB®, by either the "interior point" or the "active set" minimization algorithm in (built-in) optimization function "fmincon", with tolerances kept at 10⁻¹⁰. The following dedicated strategies have here been considered and implemented, to handle the specific problem at hand, leading to the outcomes gathered in Figs. 22 and 23. • Issue of thrust determination. The horizontal thrust is directly available among the redundant static reactions (Fig. 20) at the achieved solution instance. The accuracy in the determination of such a static variable has been monitored and found to be almost comparable to that recorded for the geometrical variables in the optimization problem. This appears as a rather consistent feature of the present adapted implementation, since it proves able to correctly reproduce all the characteristic features of the least-thickness arch (geometrical, kinematical and statical). Specifically, the quest was posed in terms of evaluating the maximum horizontal thrust that the arch is able to transfer in the minimum-thickness condition (Fig. 23), as explicitly and analytically addressed in [14]. The recorded matching, in the estimation of such a maximum value of thrust, turns out to be remarkably good. • Issue of symmetry. As briefly introduced, the present formulation for circular masonry arches is rather general and allows for the analysis of non-symmetric arches (Fig. 20). Moreover, the collapse mechanism to be plotted is obtained out of the numerical calculation on the kinematic variables and, for symmetric arches, is by no means pre-imposed to be symmetric [13]. In other words, what becomes uniquely known is the geometrical position of the failure joints (geometrical variables), while the mechanism retains an arbitrariness linked to an exuberant number of degrees of freedom.
By instead implicitly imposing symmetry, the resulting mechanism becomes a single-degree-of-freedom mode, fully respecting symmetry, with zero transverse displacement at the crown of the symmetric masonry arch. This has been set in the present adaptation through the insertion of a specific control flag within the implementation, so as to pre-set the symmetry condition in the treatment of the geometrical reference configuration and of the kinematic variables of the formulation. For the definition of the symmetric collapse mode, some equations are added to set the relations for the centres of gravity of the i-th and j-th homologous chunks (mirrored by the axis of vertical symmetry at the crown), namely u_i + u_j = 0, v_i − v_j = 0, ϕ_i + ϕ_j = 0. If the number of chunks is odd, for the central chunk r including the crown, the following symmetry constraints are considered, for a consistent symmetric arch kinematics: u_r = 0, ϕ_r = 0. • Considering, still, the representation of the (symmetric) collapse mode (see Figs. 22 and 23), the following dedicated strategies have been implemented, in view of achieving a convincing and realistic representation out of the act of motion within linear kinematics (displacements standing for velocities). The positions of the vertices of the masonry blocks are determined through linear kinematics, thus conserving compatibility among the chunks and with the external constraints. Moreover, the relative angle among the joints of each voussoir is preserved, as the rigid-body act of motion leaves it unaltered, still within an arbitrarily amplified, linear kinematics (i.e. it represents a conformal mapping). Intrados and extrados vertices are, respectively, joined by circular segments, with a slightly varied radius, given by the prolongation of the facing joints of each chunk.
• Since finite-friction effects are not a target here, the information from the "line of friction" [12,13] has been eliminated, and a high value of the friction coefficient μ among the blocks has been set, so as to prevent any form of sliding and thereby warrant a purely rotational collapse mode, as indeed recorded in all the analysed cases with 60° ≤ α ≤ 140° (Figs. 22 and 23), where a high value of μ = 10 (friction angle ϕ = 84.3°) has been imposed. Indeed, any form of sliding, for α ≤ α_lm = 2.48716 rad = 142.504°, shall be prevented by μ ≥ μ_lm = 1.41527 (ϕ_lm = 54.7558°) [11], as also confirmed by the numerical analyses in [13].
• Given that the amount of friction is here immaterial, with respect to what was advanced in [13], a single plot has been set with the line of thrust, turning out to be strictly contained within the arch in the critical, optimality configuration of least thickness, with the resulting purely rotational collapse mode superimposed, also in the perspective of facilitating the unitary grid-view representation in Fig. 22. Thus, the algorithm has been "specialized" to conveniently represent the features of symmetry-constrained arches, in the hypothesis of "infinite" friction, for inquiring and representing purely rotational collapse, consistently with the mathematical derivation in the first part of the paper, for the continuous masonry arch.
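As a quick numerical cross-check of the friction values quoted above (our own sketch, independent of the authors' code), the friction coefficient/friction angle conversion ϕ = arctan μ can be verified directly:

```python
import math

# Illustrative check of the friction values quoted in the text:
# a friction coefficient mu corresponds to a friction angle phi = atan(mu).

mu_set = 10.0                              # value imposed in the simulations
phi_set = math.degrees(math.atan(mu_set))  # high angle, sliding inhibited

mu_lm = 1.41527                            # limit friction coefficient from [11]
phi_lm = math.degrees(math.atan(mu_lm))    # corresponding limit friction angle

print(round(phi_set, 1), round(phi_lm, 4))  # 84.3 54.7558
```

Both values reproduce the figures reported in the text (ϕ = 84.3° for μ = 10, and ϕ_lm = 54.7558° for μ_lm = 1.41527).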
Numerical results for symmetric arches with four blocks are condensed in Figs. 22 and 23 (here, reference for comparison should be made to the CCR solution, since a uniform self-weight distribution is taken along the geometrical centreline). Figure 22 shows, in compact grid-view, as earlier done in Fig. 18 for the DDA treatment, the results for the same half-angles of embrace of the arch, from 60° to 140°, every 10°. Results are perfectly matching, also in the numerical values to be compared with the analytical formulation (CCR solution). Figure 23 further shows two specific cases of stationarity interest, namely the arch with "maximum horizontal thrust" and that with "widest angular inner-hinge position", made the subject of a specific analytical investigation in [14], with wholly consistent results.
Conclusions
In the first part of the paper, different, equivalent analytical derivation approaches of the ruling equations for the statics of circular masonry arches in the least-thickness, optimality condition have been provided, specifically those based on the so-called upper and lower horizontal thrusts ("Coulomb's static approach") and on the balance of virtual work or power ("Mascheroni's kinematic approach"), both leading to the same optimization outcomes: • The proposed formulations reconcile previous treatments and bring in a unifying, additional interpretation and understanding of the conditions that rule the least-thickness solution. • Specifically, besides established equilibrium at incipient collapse, the tangency condition of the line of thrust (locus of pressure points) at the intrados of the arch can be handled as a stationarity condition on the horizontal thrust itself, without the need of going through the definition of the line of thrust, namely of the equation of its eccentricity with respect to the geometrical centreline of the arch. • The generalization to the Milankovitch solution for the real self-weight distribution is also reconsidered, and the features of the three arising solutions further described, with an additional illustration. Thus, the analytical features and results herein presented complement those earlier provided in the previous companion work [7].
In the second part of the manuscript, an independent numerical validation investigation has then been developed, by three separate approaches, with truly consistent results: • First, a recursive determination of the angular inner-hinge position of a continuous arch is devised, to avoid the direct numerical solution of the governing system of equations. This could conveniently be handled even by hand computations, or by a simple programmable calculator or a spreadsheet, leading to the determination of the angular inner-hinge position, after which the main features of the arch characteristics, in the least-thickness critical condition, directly flow down, by substitution into the provided explicit analytical representations. • Second, a further numerical Discrete Element Method (DEM), Discontinuous Deformation Analysis (DDA) investigation of discrete voussoir arches has been developed, to deliver a confirmation of the assumed purely rotational collapse mechanism and a relevant estimation of the critical thickness. The achieved results complement and complete those earlier presented in [9]. • Third, an innovative Complementarity Problem/Mathematical Programming (CP/MP) formulation has been adapted and operated on symmetric circular masonry arches at high friction, for a consistent, final confirmation of all the characteristics of the circular masonry arch in the least-thickness condition. The presented results complement those delivered in [12,13].
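As an illustration of how such least-thickness computations may be reproduced with elementary numerical tools, the following Python sketch treats the classical semicircular case (half-opening α = 90°, uniform self-weight along the geometrical centreline, i.e. the CCR setting). It is our own discretization, not the authors' recursive scheme: for a trial thickness, the horizontal thrust transmitted through the crown-extrados hinge is maximized over the intrados-hinge position β, and the critical thickness is found by bisection where this thrust matches the equilibrium value through the springing-extrados hinge.

```python
import math

# Illustrative sketch (our own discretization, not the authors' code):
# least thickness of a semicircular arch (half-opening alpha = 90 deg) with
# self-weight uniform along the geometrical centreline (CCR setting).
# Centreline radius R = 1, unit weight per unit centreline length; h = t/2.

def thrust_crown_to_hinge(beta, h):
    """Thrust from moment equilibrium of the segment between the
    crown-extrados hinge (0, 1+h) and an intrados hinge at angle beta."""
    W = beta                                   # weight of the centreline arc
    x_b = (1.0 - h) * math.sin(beta)           # intrados hinge coordinates
    y_b = (1.0 - h) * math.cos(beta)
    moment_w = x_b * W - (1.0 - math.cos(beta))  # weight moment about hinge
    return moment_w / ((1.0 + h) - y_b)

def thrust_full_half(h):
    """Thrust from equilibrium of the whole half-arch about the
    springing-extrados hinge at (1+h, 0)."""
    return ((1.0 + h) * math.pi / 2.0 - 1.0) / (1.0 + h)

def max_thrust(h, steps=4000):
    """Maximize the crown-to-hinge thrust over the hinge position beta."""
    best, best_beta = -1.0, 0.0
    for k in range(1, steps):
        beta = (math.pi / 2.0) * k / steps
        H = thrust_crown_to_hinge(beta, h)
        if H > best:
            best, best_beta = H, beta
    return best, best_beta

# Bisection on thickness: critical where the two thrust values coincide.
lo, hi = 0.05, 0.20
for _ in range(50):
    t = 0.5 * (lo + hi)
    H_max, beta = max_thrust(t / 2.0)
    if H_max < thrust_full_half(t / 2.0):
        hi = t       # admissible thrust line exists: arch thick enough
    else:
        lo = t       # required thrust exceeded: arch too thin
t_cr, (H_cr, beta_cr) = t, max_thrust(t / 2.0)
print(round(t_cr, 4), round(math.degrees(beta_cr), 1))  # critical t/R, beta (deg)
```

With R = 1 the sketch returns t/R ≈ 0.107 and β ≈ 54.5°, in line with the CCR-type values for α = 90° discussed around the grid-views of Figs. 18 and 22.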
The current attempt has considered the so-called Couplet-Heyman problem in the statement of a classical form-optimization problem in the mechanics (statics) of (symmetric) circular masonry arches, as pertinent to the present editorial collocation. The optimization process has been set as follows: • Initially, by analytical procedures, through the appropriate statement of the underlying stationarity conditions, as those corresponding to Heyman's literal description, which has allowed the derivation of closed-form explicit analytical representations for the problem at hand. • Additionally, by numerical strategies, where the optimization problem has been scheduled as a direct numerical target (objective function), which opens up the way to further perspectives towards a general interpretation and practical application, for cases going beyond the codified ones that may conveniently be handled by analytical derivations.
Indeed, further subsequent investigations may consider different shapes of the masonry arch and possible true implications of the presence of finite friction, in terms of non-standard Limit Analysis, on the bearing capacity of the masonry arch and the resulting collapse mode, which may include sliding and other potential effects linked to the arising non-normality kinematics of the arch collapse, as initially investigated, analytically, in [10,11], and, numerically, in [12,13].
Micrografting for fruit crop improvement
Micrografting is an in vitro grafting technique which involves the placement of a meristem or shoot tip explant onto a decapitated rootstock that has been grown aseptically from seed or micropropagated cultures. Following early micrografting experiments in ivy and chrysanthemum, the technique has been used in woody species, especially fruit trees. Major work was carried out in different Citrus species for the elimination of various viral diseases. In vitro micrografting has been used for the improvement and multiplication of fruit trees, as the technique has the potential to combine the advantages of rapid in vitro multiplication with the increased productivity that results from grafting superior rootstock and scion combinations. Successful micrografting protocols have been developed for various fruit crops including almond, apple, cherry, chestnut, Citrus, grapes, mulberry, olive, peach, pear, pistachio, walnut, etc. Special techniques have been used for increasing the percentage of successful micrografts, including the use of growth regulators, etiolation treatments, antioxidants, higher sucrose levels, silicone tubes, etc. The technique has great potential for the improvement and large-scale multiplication of fruit plants. It has been used on a commercial scale for the production of virus-free plants in fruit crops and viroid-free plants in Citrus. Micrografting has also been used in the prediction of incompatibility between the grafting partners, histological studies, disease indexing, production of disease-free plants particularly resistant to soil-borne pathogens, and multiplication of difficult-to-root plants.
INTRODUCTION
Micrografting is a relatively new technique for the propagation of plants. According to Hartmann et al. (2002), micrografting is an in vitro grafting technique which involves the placement of a meristem or shoot tip explant onto a decapitated rootstock that has been grown aseptically from seed or micropropagated cultures.
Following early experiments by Doorenbos in 1953 in ivy and later by Holmes (1956) on chrysanthemum, the micrografting technique has been used in particular on woody species and especially on fruit trees, where work was carried out on different species of citrus with a view to eliminating viral diseases. The technique was modified and improved for increasing graft success by Murashige et al. (1972) and Navarro et al. (1975). The technique has great potential for the improvement and large-scale multiplication of fruit plants. It has been used for the production of virus- and viroid-free plants in fruit crops. Micrografting has also been used in the prediction of incompatibility between the grafting partners, histological studies, virus indexing, production of disease-free plants particularly resistant to soil-borne pathogens, safe germplasm exchange between countries and multiplication of difficult-to-root plants. Reviews on micrografting have been published by Jonard et al. (1983), Roistacher et al. (1976), Parkinson et al. (1990) and Monteuuis (2012). The present review aims at examining the published literature related to micrografting, so as to increase the application of this technique at a commercial level for the improvement of fruit crops.
STAGES OF MICROGRAFTING
Micropropagation protocols for the scion as well as the rootstock need to be standardized separately before performing the micrografting operation under in vitro conditions. Thus, micrografting can be divided into three main stages:
Establishment and multiplication of scion
Shoot or meristem tips intended for grafting can be taken from actively growing shoots in the greenhouse, growth chamber or field, or in vitro. Generally, apical shoot tips or nodal cuttings are used as explants for the establishment of in vitro cultures. Following establishment, microshoots are transferred to a shoot proliferation medium, where the shoot number increases by the development of new axillary shoots. Microshoots of the desired thickness, age and length are used as scions for in vitro grafting operations.
Establishment and multiplication of rootstock
Rootstocks used for micrografting are in vitro or in vivo germinated seedlings and rooted or unrooted micropropagated shoots. When seedling rootstocks are used and all stages of grafting are conducted in vitro, seeds are surface-sterilized and germinated aseptically in vessels containing nutrient salts. The seedlings may be supported on agar medium. Seedlings can also be grown on a porous substrate, such as sterile vermiculite, which allows the growth of a branched root system.
Preparation of rootstock and scion for micrografting
Micrografting is effected by cutting off the top of the seedling rootstock, usually just above the cotyledons, or the top of the micropropagated shoot, and placing a small shoot apex of the scion onto the exposed surface of the decapitated rootstock in such a way that the cambium layers or vascular rings of the cut surfaces coincide with each other. This is called the surface placement method. Wedge or cleft grafting is performed in case the thickness of the rootstock and scion material is large enough to allow making a wedge on the scion material. Firm contact between rootstock and scion is extremely important at the graft junction for proper union of the partners and callus formation (Canan et al., 2006). Several techniques have been developed for holding grafts together until fusion takes place, such as translucent silicone tubing (Gebhardt and Goldbach, 1988), elastic strips (Jonard et al., 1983), filter paper bridges (Huang and Millikan, 1980), and glass tubing, nylon bands, aluminium foil tubes and a dual-layer apparatus of aluminium foil and absorbent paper (Obeidy and Smith, 1991). When grafts are successful, rootstock and scion grow together to produce a plant. It is usually necessary to examine freshly grafted seedlings on a regular basis and remove any adventitious shoots arising on or below the graft union.
GRAFTING SUCCESS DURING MICROGRAFTING
Grafting is a traditional method for the production of composite plants but is season-dependent. Failure of grafting means the loss of one year for the production of grafted plants. This problem has been overcome through the use of micrografting, which is done under controlled environmental conditions throughout the year, so that production can be planned according to market demand. Micrografting has particular utility in fruit tree production, and protocols have been developed for many fruit crops including almond (Yıldırım et al., 2013; Isıkalan et al., 2011), apple (Huang and Millikan, 1980), apricot (Piagnani et al., 2006), avocado (Simon and Richard, 2005), cashew (Mneney and Mantell, 2001), cherry (Amiri, 2006, 2007), grapes (Tangolar et al., 2003; Aazami and Bagher, 2010), pear (Faggioli et al., 1997), pistachio (Abousalim and Mantell, 1992), walnut (Wang et al., 2010), etc. For the commercialization of micrografting, the protocol should be reliable enough to give a high percentage of successful micrografts. Good micrografting protocols have been developed for the large-scale production of micrografted plants in many fruit trees with a high percentage of graft success (Table 1).
IMPROVEMENT IN MICROGRAFTING TECHNIQUE
Micrografting procedures are difficult and generally result in a low rate of successful grafts, which makes micrografting an expensive and time-consuming production technique. This is because greater technical expertise is required in preparing successful grafts on small-scale material, together with the handling difficulties associated with preserving the delicate graft unions. In many experiments, the failure rate of micrografts was higher than desired. In vitro grafts of fruit plants often fail due to incompatibility reactions, poor contact between stock and scion and phenolic browning of the cut surfaces (Ramanayake and Kovoor, 1999). In order to alleviate some of these limitations, different techniques have been developed to make micrografting a successful and superior technology for the benefit of technicians, researchers, nursery operators and commercial tissue culture laboratories.
Browning and tissue blackening
Exudation of phenolic compounds from the cut surfaces and their oxidation by polyphenol oxidase and peroxidase enzymes cause discolouration of the tissues, which results in poor micrografting (Martinez et al., 1979). Browning of the cut surfaces inhibits the growth and development of new cells and results in a poor graft union.
To block the oxidation phenomena and prevent tissue browning, various substances have been used, including thiourea, cysteine chlorhydrate, citric and ascorbic acid (He et al., 2005), Phytagel (Zhang and Luo, 2006), PVP (Rather et al., 2011) and DIECA (Martinez et al., 1979). Tissue blackening, which commonly results in the death of very small scions, has been reduced by soaking explants in an antioxidant solution and/or placing a drop of the solution onto the severed rootstock immediately before inserting the scion (Jonard et al., 1990; Ramanayake and Kovoor, 1999).
Sucrose concentration of the medium
The sucrose concentration of the nutrient medium has a significant effect on the percentage of successful grafts. Navarro et al. (1975) reported that the sucrose concentration of the medium of grafted plants played a significant role and that the highest rate of successful grafts in citrus species was obtained with 7.5% sucrose. Generally, in vitro growth and development increase with increased sugar concentration (Pierik, 1987). Naz et al. (2007) used 14-day-old seedlings of rough lemon (Citrus jambhiri Lush.) grown under in vitro etiolated conditions as rootstock and microshoots of Kinnow mandarin/Succari sweet orange as scion. Micrograft success improved with an increase in sugar levels in both cultivars, from 20-22% with 3% sucrose to 36-38% with 7% sucrose. Hamaraie et al. (2003) also reported improvement in micrograft success from 30 to 60% and in scion growth from 8.7 to 13.8 mm with an increase in sucrose concentration from 2.5 to 7.5%, respectively, in studies on micrografting of grapefruit (Citrus paradisi) on sour orange seedlings germinated in vitro.
Light/dark incubation treatments
Significant variations have been reported in the percentage of successful grafts according to the exposure of seedlings to light. Hamaraie et al. (2003) reported a higher frequency of successful grafts (50%) in grapefruit (Citrus paradisi) cv. 'Miami' when rootstock seedlings (sour orange) were obtained from seeds germinated under continuous darkness for two weeks, as compared to only 5% successful grafts with seedlings which developed under light. Navarro et al. (1975) reported a very low frequency of successful grafts using Troyer citrange seedlings grown under continuous light as compared to seedlings grown in continuous darkness. Ewa and Monika (2006) found a high percentage of successful micrografts in cherry under dark conditions.
Use of growth regulators
Usually, growth regulators are not used in traditional grafting for increasing graft success. However, under in vitro conditions, growth regulators, particularly cytokinins and auxins, have been found effective in improving the graft success rate. These growth regulators increase the rate of cell division and improve callus formation, which in turn help in increasing the percentage of successful graft unions. At the time of performing the micrografting operation, the prepared micro-scion is given a quick dip (5-10 s) in a sterilized growth regulator solution of the desired concentration and then inserted into or placed on the rootstock. Wang et al. (2010) found NAA effective in improving micrograft success in walnut. Rafail and Mosleh (2010) reported an increase in micrograft success from 30 to 90% in pear (cv. Aly-sur on Calleryana pear) and from 40 to 90% in apple (cv. Anna on MM106) with increasing BAP concentration from 0 to 2.0 mg/L. Triatrniningsih et al. (1989) obtained a 24% increase in the frequency of successful grafts over untreated controls in Citrus by the use of BAP at 0.5 mg/L.
Nature of the supporting medium
Agar-solidified medium and liquid medium have been used for the growth of micrografted plants. Rafail and Mosleh (2010) observed that the number of successful micrografts increased from 10% on agar-solidified medium to 60% in apple and 70% in pear with the use of liquid medium. There is usually greater uptake of nutrients and growth regulators by the microshoots in liquid media, which makes them more effective than solidified medium for micrografting success. MS liquid medium with vermiculite was found best for the further development of the micrografts, because liquid medium alone or with agar creates asphyxic conditions, which prevent the formation of lateral roots (Mosella-Chancel et al., 1979).
Preventing desiccation of the graft
Desiccation of the graft or of the surfaces of the grafting partners is one of the major causes of graft union failure (Pliego and Murashige, 1987; George et al., 2008). To prevent this phenomenon, Pliego and Murashige (1987) applied a layer of moist nutrient agar gel to connect the grafting partners and obtained better graft success. Different chemicals have been tried to prevent graft desiccation so as to enhance the graft union. Rafail and Mosleh (2010) took an agar drop from the solidified culture medium and placed it on the cut area of the rootstock. Micrografts to whose grafted area an agar drop was added were highly successful (70% in apple and 60% in pear) as compared to those without an agar drop (10%). Adding an agar drop usually prevents scion drying, makes the transport of different materials possible and holds the graft units together until fusion takes place. The addition of an agar drop supplemented with minerals and/or phytohormones further improved graft success. Amiri (2007) obtained 65% successful grafts in cherry using a homoplastic grafting method (adding two drops of agar solution around the fitting site of the micrograft) as compared to 41% through the heteroplastic method (without application of agar drops).
Pretreatment of shoot apex
A technique which pretreats the apex, allowing the selection of viable apices and helping their development, greatly improves micrografting success. This is particularly effective when very small shoot apices are used. The excised apex is placed in a haemolysis tube on filter paper moistened with the mineral solution of Murashige and Skoog (1962), supplemented with auxins and cytokinins. This treatment modifies the physiological state of the excised apex and leads to the rapid development of leafy shoots even from the smallest apices of 0.1-0.2 mm, the direct grafting of which is generally difficult and ineffective (Jonard et al., 1983). Following proper development, the apex is micrografted onto the rootstock. Mosella-Chancel et al. (1979) reported 64% successful micrografts in peach when pretreated with zeatin (0.1 mg/L) for 48 h, as compared to 21.7% without any pretreatment.
Suitability of rootstock
Micrograft success varies with the rootstock because of the compatibility reactions between the grafting partners. Evaluating rootstocks for higher graft success with a particular scion will definitely help in commercializing the micrografting technique for mass multiplication of fruit crops. Tangolar et al. (2003)
APPLICATIONS OF MICROGRAFTING
Micrografting has been used for the improvement and multiplication of various fruit crops and several papers have been published (Jonard et al., 1983;Bhat et al., 2010). Some of the main applications of micrografting in fruit crops are discussed below:
Virus and viroid elimination
The production of high-quality plants which can be certified as genetically true and virus-free is considered problematic and very challenging. An innovative technique of micrografting was developed by Murashige et al. (1972) for the production of uniform virus-free plants on a commercial scale in a controlled environment. They grafted a small apical shoot of citrus onto the top of a decapitated seedling grown in vitro. A few of these grafted plants, when indexed, were found to be free of exocortis and stubborn pathogens. Navarro et al. (1975) subsequently improved the technique, and shoot tip grafting has since been widely applied (Jonard et al., 1983; Burger, 1985; Navarro et al., 1976, 1980, 1982; Navarro and Juarez, 1977; Deogratias et al., 1986; Navarro, 1988; Jarausch et al., 2000; Zilka et al., 2002). In vitro grafting was used in Spain to produce virus-free plants of citrus and is considered a major factor in improving the Spanish citrus industry (Navarro et al., 1975). The technique has been used since 1998 for the elimination of such pathogens. Micrografting exploits two concepts: meristems are relatively virus-free, and meristems from mature plants retain the mature phase. Meristematic tissues in the shoot tips and axillary buds normally remain virus-free because the growth of the meristem is quicker than the systemic spread of the virus within the plant. Using micro shoot tips (less than 0.5 mm) as scions, shoot tip grafting (STG) produces plants that are virus-free and reproductively mature. Production of virus-free plants from nucellar seedlings or by thermotherapy has certain limitations. Although nucellar seedlings of citrus are both clonal and virus-free, the seedlings are juvenile and take many years to flower. In the case of thermotherapy, many viruses and viroids, such as exocortis viroid and stubborn virus, are difficult to clean up with this process (Roistacher, 2004). Thermotherapy has failed to eliminate citrus exocortis viroid, yellow vein virus (YVV), cachexia virus and Dweet mottle virus (Calavan et al., 1972; Roistacher and Calavan, 1972).
These problems have been overcome through the STG technique. High temperatures inactivate many viruses; thus, in vitro propagation can be used in combination with heat treatment to produce virus-free material. A massive project was launched in Morocco and Israel to develop virus-free plants of commercial almond cultivars through in vitro micrografting in combination with thermotherapy during 1997-2001. The project resulted in the successful sanitation of almonds and permitted the recovery of virus-free plants from various varieties infected with PNRSV (Prunus necrotic ringspot virus), PDV (prune dwarf virus) and CLSV (chlorotic leafspot virus). A thermotherapy treatment at 30-35°C for 14 days was applied to in vitro shoot cultures prior to excising shoot tips for shoot tip grafting. The size of the shoot apex had a paramount influence on the elimination of virus from the plants. Singh et al. (2008) reported low recovery of ICRSV-free plants (20%) from an infected plant of Kinnow mandarin with a shoot tip size of 0.3 mm through STG, which increased to 100% with a shoot tip size of 0.2 mm. Manganaris et al. (2003) developed an efficient micrografting protocol for the production of nectarine plants free from PPV and PNRSV. Conejero et al. (2013) successfully used micrografting in stone fruits for the elimination not only of graft-transmissible viruses but also of viroids, like PLMVd, affecting Prunus species worldwide. Once a bud was obtained, micrografted plants were placed in a cold chamber at 4°C and then forced for 15 days at 35°C. This resulted in the elimination not only of viroids but also of viruses in a higher percentage than with previous protocols. In short, micrografting is the only technique to free horticultural crops from viral diseases without the spraying of harmful pesticides.
Production of plants resistant to pests and diseases
Micrografting can be used as a means of eliminating pathogens in fruit crops. It has been successfully used in a wide range of horticultural plants as an effective method for obtaining plants resistant to soil-borne pathogens. Grape phylloxera (Daktulosphaira vitifoliae) is considered the most destructive insect pest of cultivated grapes worldwide; it feeds on the sap of grape roots, causing damage and often death of vines (Makee et al., 2004). An efficient and robust micrografting system for the production of phylloxera-resistant grape plants was developed by Kim et al. (2005) using pest-resistant cultivars as rootstocks (Millardet et de Grasset 101-14, Couderc 3309, Rupestris du Lot and Kober 5 BB) and commercially favored table grapes as scions (Kyoho, Campbell Early, Tamnara and Schuyler).
Assessment of graft incompatibility
The inability of two different plants, when grafted together, to produce a successful union and to develop satisfactorily into one composite plant is termed graft incompatibility. Graft incompatibility in fruit trees has been classified by Mosse (1962) into translocated incompatibility and localized incompatibility. Translocated incompatibility is often associated with the movement of some labile factor between the grafting partners and is not overcome by the insertion of a mutually compatible interstock. An example of this category is the combination of 'Nonpareil' almond on 'Mariana 2624' plum. Localized incompatibility depends upon actual contact between stock and scion; separation of the components by insertion of a mutually compatible interstock overcomes the incompatibility. Bartlett pear grafted directly on quince rootstock shows this type of incompatibility. Another example is the grafting of certain apricot cultivars on peach, which is associated with a clean break of the trunk at the point of the graft following strong winds, even after several years of normal growth (Jonard et al., 1990).
Prediction of incompatible graft combinations is an important area of study for preventing the economic losses caused by graft incompatibility. Signs of graft incompatibility are often detected only after several years in the field, but they can be identified early using micrografting and in vitro callus fusion techniques (Jonard et al., 1990; Errea et al., 2001). Micrografting has been used to assess graft compatibility/incompatibility between grafting partners (Burger, 1985; Navarro, 1988). The technique facilitates early diagnosis of grafting incompatibilities and may provide a model for in-depth analysis of the incompatibility phenomenon (Chimot-Schall et al., 1986; Jonard et al., 1990; Hossein et al., 2008; Errea et al., 1994; Espen et al., 2005). It has been used for studying histological, histochemical and physiological aspects of graft incompatibility between scions and rootstocks (Richardson et al., 1996; Ermel, 1999). Histological examination of the graft union revealed callus formation, cytodifferentiation and xylogenesis leading to the formation of vascular connections in successful micrografts (Gebhardt and Goldbach, 1988). Anatomical studies of incompatible grafts demonstrated poor vascular connection, vascular discontinuity and phloem degeneration at the union area, which might be detected as early as a few weeks after graft establishment (Darikova et al., 2011).
In the case of incompatible associations, Martinez et al. (1979, 1981) used in vitro micrografting to analyse the localized incompatibility of apricot/myrobalan and the translocated incompatibilities of peach/apricot and peach/myrobalan. In the case of localized incompatibility, the percentage of success was very good during the first three weeks, but from the 14th day signs of incompatibility appeared around the graft. After 60 days, all the grafts had perished, leaving no viable plants. Translocated incompatibility, also called delayed incompatibility, resulted in the development of whole plants in vitro, but early symptoms of incompatibility still appeared on the young plants in pots 2 months after grafting (Martinez et al., 1981). In this experiment, 80% of the homografts of peach/peach and apricot/apricot provided viable plants, whereas the percentage of surviving plants after 60 days was very low for the incompatible combinations Prunus persica/Prunus armeniaca (6.0%) and Prunus persica/Prunus cerasifera (1.25%). Although in vitro grafting did not yield viable plants for these combinations, it did allow incompatibility to be predicted. Signs of this type of incompatibility often develop only 5-10 years after branch grafts are made in the orchard (Rodgers and Beakbane, 1957).
Micrografting has also been used to study compatible and incompatible combinations of grape varieties, using survival rate as an index. High survival rates (>85%) were achieved in the compatible combinations RizamatV/Baixiangjiao and Canepubu/Muscat Hamburg, whereas in the incompatible combinations Canepubu/Baixiangjiao and Carignane/Baixiangjiao, the survival rates were only 3.33 and 13.33%, respectively. Both translocated and localized incompatibility exist in Canepubu/Baixiangjiao, while Carignane/Baixiangjiao shows only translocated incompatibility. At a late stage of graft union formation, the necrotic layer (isolation layer) of compatible combinations became thinner and finally disappeared, the conducting tissues of rootstock and scion connected, and the grafted plants survived. In incompatible combinations, the necrotic layer persisted or disappeared only partially, and the grafting failed; vascular disconnection contributed to the failure of grafting (JiLing, 2001).
Improvement of plant regeneration
Micrografting provides an alternative technique for the mass multiplication of plants that are difficult to root (Preece et al., 1989) and for the propagation of difficult-to-root novel plants created in tissue culture (Barros et al., 2005). This is done by micrografting micro shoots of difficult-to-root plants/cultivars onto seedling rootstocks grown in vitro. Micrografting has been used to rejuvenate cashew cultivars that were found difficult to root (Thimmappaiah et al., 2002). The technique has been successfully used to multiply difficult-to-root plants including walnut (Pei et al., 1998; Wang et al., 2010), pistachio (Onay et al., 2007; Abousalim and Mantell, 1992), cashew (Ramanayake and Kovoor, 1999) and almond (Martinez-Gomez and Gradziel, 2001; Ghorbel et al., 1998; Channuntapipat et al., 2003).
Mass multiplication
Micrografting is a technique that can potentially combine the advantages of rapid in vitro multiplication with the increased productivity that results from grafting superior rootstock and scion combinations (Gebhardt and Goldbach, 1988). Mass production of superior plants through micrografting, by grafting elite scions onto desirable rootstocks, can be achieved throughout the year under controlled conditions in tissue culture laboratories. Generally, micropropagation of woody trees is difficult because of the low regeneration capacity, especially of mature plant tissues; a major limitation is root regeneration rather than shoot multiplication (Hartmann et al., 2002). In vitro micrografting is therefore often used where the rooting capacity of micro-cuttings is poor. It has been used in the propagation of novel plants created in tissue culture through transformation, or of plants that are difficult to root (Barros et al., 2005). Genetically transformed shoots of avocado derived from somatic embryos were rescued by micrografting them onto in vitro-germinated seedling rootstocks with 70% success (Simon and Richard, 2003).
Indexing viral diseases
Grafting is used to determine the presence of latent (unseen) viral diseases in plants. An indicator plant known to be susceptible to the disease of interest may be grafted onto the suspect plant. If the plant in question is infected, typical symptoms induced by the specific virus are expressed on the indicator after the virus has moved into it. This type of testing is regularly carried out on plants imported into the country, including grapes and roses, and it does not require the formation of a permanent, compatible graft union. Tanne et al. (1993) used a micrografting system that increased the speed of viral detection; they reported the detection of corky bark virus 8-12 weeks after grafting. Pathirana and McKenzie (2005) reported that micrografting of leafroll-infected scion material onto a virus-free indicator rootstock (Cabernet Sauvignon) resulted in the development of symptoms within 2-3 weeks. Valat et al. (2003) demonstrated that grapevine fanleaf virus is transmitted from an infected rootstock to the uninfected indicator variety 41B, used as scion, within 45 days. Kapari-Isaia et al. (2002) used 'Madam Vinous' or 'Pineapple' sweet orange as indicator plants for indexing CPsV in local Arakapas mandarin in Cyprus. This type of micro-indexing can be used for post-entry quarantine of imported materials (Sivapalan et al., 2001; MAF, 2004).
Safe germplasm exchange
Small micrografted trees are a convenient way to exchange germplasm between countries (Navarro et al., 1975). The exchange of fruit tree propagation material between countries is a major cause of the spread of new pests and pathogens, particularly graft-transmissible viruses and viroids. The expansion of Prunus breeding worldwide, mainly in peach, has resulted in more than 20 new breeding programs producing hundreds of new varieties yearly; the associated exchange of plant material has notably increased the risk of introduction of new pathogens and pests (Llacer, 2009; Llacer et al., 2009). Imports of fruit budwood lacking effective phytosanitary control measures present the highest risks. More than 100 virus or virus-like diseases have been reported to affect Prunus species worldwide, and for approximately half of these diseases, nothing is known about the causal agent except that it is graft-transmissible (Cambra et al., 2008). Moreover, traditional quarantine procedures are often ineffective, prompting the search for alternative procedures, including those based on tissue culture techniques. An improved STG procedure based on the protocol described by Navarro et al. (1982), which is effective for virus and viroid elimination, is a prerequisite for safe peach and Japanese plum budwood exchange (Conejero et al., 2013). It is a minimum-risk method for importing plant material through quarantine.
CONCLUSION
Micrografting has great potential for the improvement of fruit plants and has been used for the production of virus- and viroid-free plants in horticultural crops without the application of harmful pesticides. It has also been used for the prediction of incompatibility between grafting partners, histological studies, virus indexing, production of disease-free plants, particularly those resistant to soil-borne pathogens, safe germplasm exchange between countries, and multiplication of difficult-to-root plants. It is a safe in vitro technique that can be utilized for the commercial production of virus-free grafted plants combining desired cultivars with suitable rootstocks throughout the year under controlled conditions.
Preliminary Trial of Rebamipide for Prevention of Low-Dose Aspirin-Induced Gastric Injury in Healthy Subjects: A Randomized, Double-Blind, Placebo-Controlled, Cross-Over Study
Although low-dose aspirin is widely used as a cheap and effective means of preventing cardiovascular events, it can cause hemorrhagic gastrointestinal complications. The aim of this study was to evaluate the efficacy of rebamipide in preventing low-dose aspirin-induced gastric injury. A randomized, double-blind, placebo-controlled, crossover trial was performed in twenty healthy volunteers. Aspirin 81 mg was administered with placebo or rebamipide 300 mg three times daily for 7 consecutive days. The rebamipide group exhibited significant prevention of erythema in the antrum compared with the placebo group (p = 0.0393). Results for the body and fornix did not differ significantly between the placebo and rebamipide groups. In conclusion, short-term administration of low-dose aspirin induced slight gastric mucosal injury in the antrum, but not in the body or fornix. Rebamipide may be useful for preventing low-dose aspirin-induced gastric mucosal injury, especially that confined to the antrum.
Introduction
Helicobacter pylori (H. pylori) infection [1] and the use of non-steroidal anti-inflammatory drugs (NSAIDs) [2][3][4] can cause gastrointestinal disease. We have reported that H. pylori-negative, non-NSAID-induced gastrointestinal ulcers are extremely rare [5]. The population of Japan has recently been aging, and long-term users of low-dose aspirin are correspondingly increasing in number. Low-dose aspirin is used for primary prevention of cardiovascular events such as myocardial infarction, and randomized controlled trials have demonstrated the preventive effect of low-dose aspirin administration in patients with prior cardiovascular events [6][7]. Low-dose aspirin is widely used, since it is a cheap and effective means of preventing cardiovascular events, and over 5 million Japanese now undergo this treatment. However, aspirin use can have hemorrhagic gastrointestinal complications. The recommended dose of aspirin for the prevention of vascular events is in the range of 75-300 mg/day, though when benefits and risks are taken into account the optimal dose is considered to be no more than 100 mg/day [8][9][10]. Few findings from randomized, placebo-controlled trials are available to clearly determine the risk of upper gastrointestinal ulcer with low-dose aspirin. Several small trials, usually in healthy subjects, have been performed using a mucosal injury grading system as an outcome measure [11][12][13][14]. The clinical utility of such systems is uncertain, and these studies have not provided specific findings on ulcer development.
Recently, a large 12-week endoscopic double-blind study in osteoarthritis patients randomly assigned to placebo (n = 381) or 81 mg enteric-coated aspirin (n = 387) found no significant difference in rate of development of ulcer [15]. However, significant increases in mean number of erosions (mean change: 0.2 vs 0.9) and in the proportion of patients with increase in number of erosions (20% vs 32%) were observed with low-dose aspirin. The increase in rate of clinical events with low-dose aspirin is in general small. It may also be that, though aspirin causes only a very slight increase in rate of development of ulcer, the proportion of patients with complications is much higher due to the antiplatelet effects of aspirin. The increased risk of erosions does indicate that some damage to the upper gastrointestinal tract is occurring. In any case, the real issue in clinical practice is determining the incidence of clinical gastrointestinal events, such as gastrointestinal bleeding, associated with the long-term use of low-dose aspirin.
The aim of this study was to evaluate the efficacy of rebamipide in preventing low-dose aspirin-induced gastrointestinal complications in healthy subjects.
Methods
A randomized, double-blind, placebo-controlled, crossover trial was performed in twenty healthy volunteers. The study protocol was approved by the Ethics Committees of Hokkaido University Hospital, and written informed consent was obtained from all subjects.
Inclusion and exclusion criteria
Inclusion criteria were i) absence of gastric lesions such as erosions, erythema, bleeding, and ulcer on endoscopy, and ii) H. pylori negativity on the 13C urea breath test. Subjects who habitually smoked or drank alcohol were excluded.
Study design
The mean age of the 20 subjects was 24 ± 2 years; fifteen were male and five were female. The 20 healthy subjects were divided into two groups, Group I and Group II. In the first period of the study, aspirin 81 mg with placebo (Group I) or rebamipide 300 mg (Group II) was administered three times daily for 7 consecutive days, while in the second period the two groups were crossed over. The washout period between treatments was longer than two weeks (Fig. 1), a duration determined with reference to the life span of platelets and the half-lives of rebamipide and aspirin.
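The two-period crossover allocation described above can be sketched in code. The paper does not describe its randomization procedure, so the subject IDs, the fixed seed, and the helper name below are illustrative assumptions, not the study's method.

```python
import random

def assign_crossover(subject_ids, seed=42):
    """Randomly split subjects into the two sequence groups of a
    two-period crossover trial (illustrative sketch only):
    Group I takes placebo in period 1 and rebamipide in period 2;
    Group II takes the treatments in the reverse order."""
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "Group I (placebo -> rebamipide)": sorted(ids[:half]),
        "Group II (rebamipide -> placebo)": sorted(ids[half:]),
    }

# 20 hypothetical subject IDs, matching the study's sample size.
groups = assign_crossover(range(1, 21))
```

Because every subject receives both treatments, each subject serves as their own control, which is what makes the within-subject comparison of gastric findings possible.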
Evaluation criteria
Endoscopic gastric mucosal injury was determined in three gastric areas: the antrum, body, and fornix. The categories of injury were erythema, erosions, and petechiae. Erythema was defined as an area clearly redder than the surrounding mucosa, petechia as a bleeding area without mucosal deficit, and erosion as an area with mucosal deficit (Fig. 2). Gastric mucosal injuries detected on endoscopy were evaluated by the Lanza score (Table 1).
Evaluation of adverse events
The following gastrointestinal symptoms and complications were to be recorded in the symptom diary by all subjects over the entire study period.
Statistical analysis
Endoscopic gastric mucosal injuries were determined in each gastric area and analyzed by Fisher's exact test. Values of p < 0.05 were considered significant. All statistical analyses were performed using SAS® version 8.2 (SAS Institute, Cary, NC).
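Fisher's exact test for a 2×2 table (treatment group × presence/absence of a lesion) can be computed directly from binomial coefficients. The sketch below is illustrative only: the counts in the usage line are hypothetical and are not the study's data, which the paper reports only as p-values.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Enumerates every table with the same row/column totals and sums the
    hypergeometric probabilities of tables no more likely than the
    observed one (the 'sum of small p-values' definition)."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):
        # Hypergeometric probability that the top-left cell equals x.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    eps = 1e-12  # guard against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + eps)

# Hypothetical counts: lesion present/absent under placebo vs rebamipide.
p = fisher_exact_two_sided(9, 11, 3, 17)
```

The test is preferred over a chi-squared test here because the expected cell counts in a 20-subject study are small, making the exact enumeration of tables more appropriate.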
Effects of treatment on endoscopically detectable gastric mucosal injury
The effects of low-dose aspirin on endoscopically detectable gastric mucosal injury in the placebo and rebamipide groups are shown in Tables 2 and 3. Lanza scores in the antrum are shown in Table 2. The rebamipide group exhibited significant prevention of erythema in the antrum compared with the placebo group (p = 0.0393). The incidence of erosions and petechiae did not differ significantly between the placebo and rebamipide groups. The results of the evaluation of gastric mucosal injury in the body and fornix are shown in Table 3; there were no significant differences between the placebo and rebamipide groups.
Adverse events
No gastrointestinal symptoms or complications were recorded in the symptom diary by any subjects during the study period.
Conclusion
Recently, the number of individuals using low-dose aspirin to prevent cardiovascular events has sharply increased in Japan. Although this treatment should decrease the rate of cardiovascular events, an increase in the rate of lethal bleeding has been predicted with its use. The frequencies of aspirin-induced gastrointestinal bleeding and ulcer are lower than those of NSAID-induced gastrointestinal bleeding and ulcer, and their severity is generally mild. Aspirin produces its anti-thrombotic effects through irreversible acetylation of a serine in cyclooxygenase-1 in platelets [21], which abolishes the production of thromboxane A2 for the life of the platelet. Although the half-life of aspirin is short, at 0.4 h, it suppresses platelet aggregation for almost 1 week [22,23]; this 1-week period reflects the life span of platelets. The effects of aspirin on platelets are a significant problem for chronic aspirin users compared with NSAID users. Patients who have a history of gastric bleeding, take corticosteroids, or are elderly readily develop gastrointestinal bleeding [24]. Prevention of aspirin-induced gastric bleeding, especially in high-risk groups, may thus be necessary. Only for proton pump inhibitors is there evidence of prevention of aspirin-induced gastric bleeding [25]. However, long-term administration of acid-suppressive agents carries the risk of infection, such as ventilator-associated pneumonia [26]. Candidate drugs other than acid-suppressing agents are thus needed.
Two reports are available on the use of rebamipide in healthy subjects with drug-induced gastric mucosal injury. The efficacy of rebamipide in reducing NSAID-induced gastric injury has been reported in healthy volunteers on indomethacin treatment [27]. In addition, Dammann et al. reported that rebamipide reduces gastric injury in individuals taking high-dose aspirin (1,500 mg/day) [20]. However, the efficacy of rebamipide in preventing low-dose aspirininduced gastrointestinal complications has remained unexplored. We therefore tested it as a candidate drug for the prevention of low-dose aspirin-induced gastric injury.
In the present study, rebamipide significantly prevented low-dose aspirin-induced erythema in the antrum compared with placebo (p < 0.05) (Table 2). Naito et al. demonstrated that lipid peroxidation mediated by oxygen radicals and activated neutrophils plays a crucial role in the pathogenesis of NSAID-induced gastropathy, and rebamipide inhibited these processes [28]. On the other hand, it is well known that reduction of gastric mucosal blood flow (GMBF) induces gastrointestinal disorders [29], and rebamipide has been shown to prevent NSAID-induced gastric mucosal injury by maintaining GMBF [30]. Administration of low-dose aspirin frequently induced gastric mucosal injury in the antrum in the control group, though the degree of injury was slight (Table 2). On the other hand, administration of low-dose aspirin did not induce injury in the body or fornix (Table 3). This finding suggests that low-dose aspirin-induced gastric mucosal injury occurs to a slight extent in the antrum but not in the body or fornix. Asaki et al. demonstrated that NSAID-induced gastric ulcers are concentrated in the region from the pylorus to the antrum, which accounts for three-fourths of all cases of ulcer in the stomach [31].
On the other hand, Daniel et al. reported that administration of aspirin (900 mg/day) induced more mucosal erosions and submucosal hemorrhages in the antrum than in the fundus (p < 0.05) [32]. In addition, low-dose aspirin has been shown to down-regulate Gastrokine 1 expression specifically in the antral mucosa and to induce gastrointestinal injuries in the antrum [33]. A strategy to prevent low-dose aspirin-induced gastric injuries in the antrum will thus be needed. In addition, many patients undergoing low-dose aspirin treatment are asymptomatic, and sudden hospitalization due to bleeding is a serious problem. In our study, all subjects were asymptomatic (data not shown).
Aging of society is progressing in Japan, and the number of low-dose aspirin users will correspondingly increase. In addition, periodic endoscopic examination will be needed to detect serious gastrointestinal diseases other than bleeding and ulcer, such as gastric cancer, because of the high prevalence of gastric malignancy in Japan. Our study has several limitations: it was performed with only short-term administration of low-dose aspirin and was small in size. Although long-term administration of low-dose aspirin is associated with an increased risk of lethal upper gastrointestinal bleeding, the cumulative risk of upper gastrointestinal bleeding is still very low. A large 5-year observational Danish cohort study assessed hospitalization for gastrointestinal bleeding with low-dose aspirin [34]; the incidence of hospitalization for gastrointestinal bleeding was 0.6% per year for individuals taking aspirin alone. It was thus difficult to detect serious cases, such as bleeding, in our small, short-term study. Furthermore, our study used a mucosal injury grading system (erosion, erythema, and petechia), and evaluation with this grading system is not clinically sufficient. High-dose aspirin frequently induces submucosal hemorrhages and gastrointestinal ulcers, whereas most low-dose aspirin-induced gastrointestinal injuries are slight, such as erythema, erosions, and petechiae [31]. These slight injuries may progress to serious injuries, such as ulcer, bleeding, and perforation, under chronic aspirin use and additional risk factors for gastropathy, although there is no direct evidence concerning this process. Since findings on patients with long-term use of low-dose aspirin are quite limited, larger long-term clinical research on patients taking low-dose aspirin is needed.
In conclusion, short-term administration of low-dose aspirin induced slight gastric mucosal injury in the antrum, but not in the body or fornix. Rebamipide may be useful for the prevention of low-dose aspirin-induced gastric mucosal injury, especially that confined to the antrum.
Maria Manturzewska's model of the lifespan development of professional musicians in the light of recent research and cultural changes
The lifespan trajectory of musical achievement in the field of classical music and the factors that promote or hinder the development of talent in music are the focus of Manturzewska’s model of the lifespan development of professional musicians. This article aims to describe Manturzewska’s model briefly, relate it to recent research, and explore the extent to which it fits the diversity of musical careers of contemporary professional musicians. A brief depiction of the developmental model provides insight into the six developmental phases of the model and the factors that influence them. Recent research investigating talent development and career research in music suggests that Manturzewska’s theoretical model is largely consistent with findings from expertise research, research examining the determinants of musical development, and current models of talent development. The development of musicians with traditional careers as orchestral musicians in permanent, full-time employment or as successful soloists in contemporary musical culture is likely to be represented largely accurately in Manturzewska’s model. However, the diversity of typical portfolio careers, characterized by simultaneous irregular musical and non-musical activities in musical and non-musical fields, cannot be depicted in a single, more-or-less linear development model. Appropriate research, especially long-term studies that model the lifetime development of professional musicians (not only) in classical music, is lacking. Furthermore, Manturzewska’s model addresses important aspects that have been insufficiently studied by others, such as the development of professional musicians in the second half of life, and provides a sound and inspiring basis for future research.
The development of musical talent in childhood and adolescence has been an important area of research since the emergence of music psychology in the late 19th century. In contrast, the study of musical development throughout the lifespan has received significantly less research attention (cf. Brodsky, 2011). Until the 1970s-1980s (and even decades later), very few authors studied the lifetime achievement of professional musicians (e.g., Bühler, 1933; Dennis, 1966; Lehman, 1953; Lehman & Ingerham, 1939). More recently, Simonton (1977, 1991, 1999) published fundamental research on the development of the creative productivity of classical music composers, and Sosniak (1985) investigated the musical learning and development processes of professional pianists.
In 1975, Manturzewska implemented a unique, comprehensive, and multifaceted "long-term longitudinal research project" (Manturzewska, 1990, p. 113) at the Institute of Music Education of the Frederic Chopin Academy of Music in Warsaw, Poland, which focused on the lifelong development of the professional careers of contemporary musicians in Poland. Based on the data from this research program, she created a model describing the lifelong careers of professional musicians in six stages (Manturzewska, 1986, 1990, 2006).
This article aims to (1) outline the theoretical and empirical underpinnings of Manturzewska's research on the lifespan development of professional musicians, (2) give a brief account of her model of the lifespan development of professional musicians, (3) contextualize some key aspects of the model in recent research on the lifelong development of musical talent, and (4) consider Manturzewska's model in relation to the diversification of professional musicians' careers today.
Theoretical and empirical underpinnings of Manturzewska's research on the lifespan development of professional musicians
Manturzewska studied the nature of musical talent, its development, and the factors that influence the development of musical talent for more than 10 years before she began her research project on the lifelong development of professional musicians (Manturzewska, 1991). In 1960, she conducted a ground-breaking study on the personality, musical development, and biographical-sociocultural background of the participants of the 6th International Chopin Competition in Warsaw, considered one of the most prestigious and demanding piano competitions worldwide (Manturzewska, 1986, 2011). 1 To investigate the role of personality, biography, and environment in the development of talent and musical achievement, Manturzewska gathered biographical and quantitative data from competition participants. The results opened up a new perspective on the components of musical talent, which she describes as follows:

As a result of this research we have formulated the concept of musical talent as a dynamic constellation of interacting characteristics consisting of five independent sets of factors: specifically musical abilities, general intelligence, personality characteristics, biographical and environmental factors, and practical qualifications acquired in the process of formal training and individual experience. Our assumption was that none of the five sets can alone determine the extent or artistic value of musical achievements. The part played by each of them is relative and depends on the context of the remaining four sets (Manturzewska, 1986, p. 88).

This concept of musical talent was far ahead of the research of its time but shows remarkable conformity with the findings of today's research. Examples include the Differentiated Model of Giftedness and Talent in Music by McPherson and Williamon (2016; adapted from Gagné's, 2009, Differentiated Model of Giftedness and Talent), the Multifactorial Gene-Environment Interaction Model (MGIM) by Ullén et al.
(2016), and the Talent Development in Achievement Domains Framework (Preckel et al., 2020).
Manturzewska's multidimensional, dynamic-interaction model of musical talent provided the basis for the subsequent study of the lifetime development of professional musicians.As she says: In order to obtain additional information about the determinants of life-long development and the professional achievement of musicians and the musically talented, we launched in 1976 biographical research into the careers of contemporary Polish musicians . . .[designed as] a preliminary and explorative field study (Manturzewska, 1986, p. 88). 2 Manturzewska aimed to investigate the course of professional achievement at different ages and explored the process of becoming and functioning as a performing musician in its various stages of development.She examined "the psycho-social determinants of musical achievement" to gain insights into the "emotional 'costs' and 'payoffs' of artistic careers" (Manturzewska, 1986, p. 112).This approach differs from most previous approaches to research in this field as it focused on the course of artistic achievement and was broadened to reflect the musician in their professional-biographical context and sociocultural environment.
The participants in Manturzewska's lifespan study included 165 professional Polish musicians (21-89 years, 35 female) representing seven fields of musical activity (composers, conductors, violinists, pianists, woodwind/brass players, and singers). The sample comprised 35 internationally outstanding musicians from Poland and a control group of 130 ordinary Polish musicians. Biographical interviews and quantitative questionnaires were used to collect data. These data were supplemented by concert diaries and programs, published reviews, photographs, demographic data, biographies, and archive data (cf. Manturzewska, 1990, pp. 114-115). Manturzewska applied qualitative and quantitative methods in longitudinal and cross-sectional designs for data analysis.
Manturzewska's model of the lifespan development of professional musicians
The study's main result was a model of the lifespan development of professional musicians (Figure 1). This model describes the general course of development and suggests six overlapping stages, each connected by critical transitions between the stages.
Manturzewska gives a detailed description of the stages, which are characterized briefly in the following sections. Stage I (0-6 years) is labeled "the stage of development of sensory-emotional sensitivity and spontaneous musical expression and activity" (Manturzewska, 1990, p. 131). Within this stage, Manturzewska distinguishes three sub-stages. In the first sub-stage, during the first 15 months, the development of sensory-emotional sensitivity to sounds and music is the main focus of the development. In the second sub-stage (up to three years of age), cognitive sensitivity to sounds and music and the categorical perception of pitches develop. In the third sub-stage (from about three years of age), the development of musical memory and imagination and spontaneous vocal and instrumental activities play a significant role. The young musician shows a strong attraction to music, which manifests in intense enjoyment of music, extended listening, and concentration on it. Manturzewska emphasizes the key role of a supportive family environment with strong musical interests. The presence "of at least one person with a strong interest in music, emotionally related to the child . . . [who engages in a] musical dialogue" (Manturzewska, 1990, pp. 132-133) with the child is considered to be a fundamental factor influencing the learning of musical language as a natural means of expression and communication.
Stage II (6-14 years) is the period of intentional musical development guided by a teacher. At about 6 years, future musicians often begin their first instrumental lessons. In this period, "basic technical and performance capacities and musical knowledge are gained" (Manturzewska, 2006, p. 40). In the following years, the structure of motivation changes from the need to play with music to the need to learn music. Starting adequate instrumental instruction early and engaging in systematic practice behavior between 6 and 9 years of age seem to be necessary antecedents for later achievement levels. Young musicians often give their first public performances between the ages of 10 and 14.
Stage III (15-24 years) is a "stage of formation and development of artistic personality . . . [characterised by] emergent artistic and professional awareness, philosophical and personal reflection" (Manturzewska, 1990, p. 135), the development of the individual's own interpretative conceptions, and an artistic personality and identity. The development of outstanding artistic achievements is crucial during this stage. To manage these demanding developmental processes successfully, the musical and social competence of the teacher, their support, and the quality of the student-teacher relationship are of paramount importance. The introduction into the professional community, participating in competitions, and beginning the search for employment opportunities are also important challenges in this stage.
Stage IV (24-44 years) is characterized as "the stage of the first professional stabilisation" (Manturzewska, 1990, p. 136) and starts with entry into professional life around the age of 25 years. The highest artistic performance is achieved between the ages of about 30 and 45 to 50 years, when most concerts are given and professional success is usually at its peak. Interests and motivation are directed toward performing, expanding the repertoire, and the musical career. Between the ages of 45 and 55, there is usually a critical period when the first signs of physical and psychological fatigue and a decline in performance become noticeable.

[Figure 1 note: A more detailed description of the model is provided in the text (Manturzewska, 2006, p. 47). Copyright 2006 by Peter Lang GmbH. Reprinted with permission.]
The musician may experience physical and mental fatigue and decreasing mental energy and performance, which may be associated with declining self-esteem and depression. This period is highlighted as "especially critical in creative, introvert types without sufficient support in the professional community. Extroverts and those with good social relationships can get through the period of crisis relatively mildly and almost imperceptibly" (Manturzewska, 1990, p. 136). The careful use of physical and psychological resources is especially important in this physically and mentally demanding phase.
Stage V (44-60 years), referred to as the "teaching phase" (Manturzewska, 1990, p. 127), is characterized by increased teaching activity, a more pronounced sense of social responsibility, and a growing involvement in the musical-organizational field. Violinists and singers often give their last concert around the age of 60. Pianists and orchestral musicians usually remain in this phase much longer. Orchestral musicians who play a wind instrument often retire earlier than violinists.
In Stage VI (from around the age of 60), a gradual withdrawal from professional activities occurs. However, many musicians continue to play and teach until the end of their lives, although not as actively as before. Outstanding musicians often reorient themselves in this last phase of life by taking on representative roles such as jurors, honorary chairpersons, or committee members.
Development processes and influencing factors
Manturzewska describes the transition from one phase to the next as a time of emotional crisis. The more creative and differentiated the musician's personality, in intellectual, emotional, and cultural terms, the more threatening these transitions can be. Under favorable social and emotional conditions, no crisis may be noticed during the transition period. However, if the circumstances are unfavorable, the musician may face a dramatic crisis, psychological breakdown, or even death (cf. Manturzewska, 1990, pp. 137-138).
An essential element of Manturzewska's developmental model is that it does not focus solely on the acquisition or development of expertise and professional achievement. Instead, it adopts a holistic, multidimensional perspective and sheds light on the development of the artistic personality and phase-specific influencing factors that determine the development of musical performance. Other essential aspects include:

• the "master-student" relationship (Stage III), which "steers the development of the entire personality" (Manturzewska, 2006, p. 42)
• the critical use of physical and psychological resources and the role of health and prevention, especially in the phase of greatest achievement (Stage IV)
• age-related performance losses and their effect on self-esteem, emotions, and health (Stage IV)
• the shift of musical interests, especially in the later stages of professional life.
A fundamental condition for the successful development of musical-artistic talent and creativity at any age "seems to be the 'musical dialogue' with someone who believes in the talented individual's potential, who understands his/her musical ideas and accepts them, who supports the musician emotionally in his/her endeavours and helps to overcome the stresses of life" (Manturzewska, 1990, p. 138).
Differences between times and durations of individuals' periods of greatest achievement
Manturzewska's model suggests general periods across the lifespan; however, differences may be observed between individuals' experiences: "The particular stages have different durations in the lives of different musicians, although apparently each has its optimal time and duration" (Manturzewska, 2006, p. 38). Furthermore, there are considerable age-related differences between individuals in terms of their greatest achievements.
The period of optimal performance depends on the instrument, the musical domain, the personality, when the career began, and other possible factors. Figure 2 illustrates differences between periods of optimal artistic achievement experienced by musicians working in different musical domains (conductors, singers, violinists, and pianists) and individual differences between musicians working in the same musical domain (see the different graphs in the same domain), based on musicians' self-assessments (Manturzewska, 2006, p. 36). Each bar in Figure 2 represents a single case. The positioning and length of the bars mark the chronological positioning and duration of the best performances.
For instance, singers usually give their (self-assessed) best performances before the age of 50, and the periods in which they give them are shorter than in other domains. Within the respective domains (e.g., for violinists), there are distinct differences in when the best performances first appear and how long the period of best performances lasts: some appear between the ages of 20 and 40, some around the age of 40; some last only a few years, and others extend over several decades, beyond the age of 60.
Manturzewska's model in the context of recent research on the lifelong development of musical talent
In the several decades since Manturzewska first presented her model, considerable research has confirmed her theory in important aspects. Fundamental aspects and details of the development of musical achievement, as described in Manturzewska's stages, align with recent research. Examples include the role of family and teachers (Creech, 2018; McPherson, 2009; Reeves, 2015), the process of acquiring expertise on an instrument (Platz & Lehmann, 2018; Preckel et al., 2020), the development of musical identity (Evans & McPherson, 2017; Hargreaves et al., 2018; Spychiger, 2017; Tafuri, 2017), the occurrence of age-related decline in performance at middle age and associated implications for identity and self-image (Gembris & Heye, 2014), and the shift of interests and activities into teaching in the later years of a career (Bennett & Hennekam, 2018).
Trajectories of professional performance from a lifetime perspective
Since the beginnings of research investigating lifelong development in music, its main topic has concerned the study of changes in the musical achievement of professional musicians at different stages of their lives (e.g. Lehman & Ingerham, 1939). This features in Manturzewska's developmental model, which describes a relatively rapid increase in musical performance in the first three decades of life. The greatest period of achievement (Stage IV) is reached between the ages of 30 and 45. After this phase, musical performance declines more or less markedly but steadily, both subjectively and objectively. This course is consistent with the lifetime trajectory of musical performance described in other studies and is generally regarded as an ideal-type pattern, despite many individual deviations (see Figure 2 for an example); methodological criticisms of data collection and interpretations (Lindauer, 2003); possible variations in trajectory due to different timing of career onset; domain-specific differences; and a general lack of consensus between theoretical interpretations (Simonton, 2018). Manturzewska's findings on the development of musical achievement also fit very well with the results of a large-scale study on growing older in a symphony orchestra (Gembris & Heye, 2012, 2014), which included 2,536 professional musicians from 133 orchestras in Germany. This study aimed to obtain insights into age-related changes, for example, in self-perception, performance, music-related experiences and attitudes, health, well-being, and perspectives for the future. In this context, the participants indicated at what age they believe musicians generally perform at the highest level on their instrument. While there were slight variations between instrument groups, participants were clear that in general the level of musical performance rises sharply at younger ages, peaks in the years between 35 and 40, and declines relatively quickly after that (see Figure 3). Participants indicated that
the period in which musicians were most likely to give their best performances was between the ages of 30 and 45 years, as in Manturzewska's model.
According to Manturzewska's model, a critical period usually occurs between the ages of 45 and 55 years. The data of Gembris and Heye (2012, 2014) show that the self-perceptions of musicians between 40 and 50 years of age change significantly; when asked whether they felt they belonged to the younger or older group of musicians, almost all the musicians younger than 40 (97%) felt they belonged to the younger group. This ratio was reversed between the ages of 40 and 50. Almost all the musicians over 50 considered themselves to belong to the group of older musicians. These data confirm Manturzewska's description of a critical period of transition between 45 and 55 years.
The biggest change in the assessment of the ability to achieve musical excellence occurs between 40-45 and 50-54 years of age. While 36% of the musicians surveyed still considered musical excellence possible between the ages of 40 and 45, only 11% expected musical peak performance between the ages of 50 and 55. It can be assumed that these changes, in combination with the increasing incidence of physical complaints affecting instrumental playing (cf. Gembris et al., 2018), may be associated with negative changes in musical self-concept, lower self-confidence, and psycho-social crises. However, these data support Manturzewska's assumption of a critical period between 45 and 55 years of age, which may be individually shorter or longer and more or less pronounced (cf. Manturzewska, 1990, p. 136).

[Figure 3 note: Professional orchestral musicians estimated the age phases when the highest performance using their instrument was generally to be expected. The percentages indicate the proportion of musicians who expected the highest performance for the respective 5-year period (from Gembris & Heye, 2014, p. 379).]

Manturzewska's model and the Talent Development in Achievement Domains framework

Preckel et al. (2020) recently presented the Talent Development in Achievement Domains (TAD) framework, in which the authors explicitly refer to Manturzewska's (1990) model of development. The TAD framework is a general, cross-domain model representing talent development as a cumulative process structured according to four developmental levels involving increasing specialization. It was developed on the basis of the Megamodel of talent development suggested by Subotnik et al. (2011). As a general, cross-domain developmental model, the TAD framework corresponds to other models, including Manturzewska's. Since the TAD framework is a lifespan-oriented talent development model that integrates the latest research, it is particularly suitable for comparison with Manturzewska's model, to examine the extent to which the latter remains valid in light of recent theory development; in addition, Preckel et al. (2020) directly refer to Manturzewska's model (see below).
The starting point of talent development in the TAD framework is Aptitude, which is defined as the initial developmental potential for achievement in a particular domain (e.g. music). "It reflects individual differences in psychological variables (e.g. musicality, mathematical cast of mind, spatial ability) that would predispose a person to becoming interested in or engaging in activities relevant to a particular kind of achievement domain" (Preckel et al., 2020, p. 696).
The second level of development is Competence, which "refers to a cluster of related and systematically developed abilities, knowledge, and skills that enable a person to act effectively in a situation and that result from systematic learning" (Preckel et al., 2020, p. 696). The third level is Expertise, which "refers to a high level of consistently superior achievement" (Preckel et al., 2020, p. 698) that is recognized by experts as being equivalent to professional-level achievement in the relevant domain. The fourth level of development, Transformational Achievement, goes beyond expertise and refers to extraordinary achievements that have a lasting impact on a field or domain (cf. Preckel et al., 2020, p. 698) and are only accomplished by a few.
The TAD framework and Manturzewska's model share some key points but exhibit some important differences. They describe development from initial potential (aptitude or giftedness) to professional and exceptional excellence through several distinguishable developmental stages. Preckel et al. (2020, p. 708) state that "the four developmental levels of the TAD can be closely linked to Levels I to IV in Manturzewska's (1990) model of lifespan development of professional musicians" (see also Müllensiefen et al., 2022, p. 90). Both models are based on a similar, dynamic, and multidimensional understanding of giftedness and talent. According to this, specific musical potentials, general intelligence, personality traits, practice, and environmental factors interact dynamically (depending on the level of development) and contribute to the development of musical achievement. Alongside these similarities, there are also some differences.
One important difference is that the TAD is a general, non-age-related framework for achievement development that does not refer to any specific domain or age.In contrast, Manturzewska's model specifically models the development of high-achieving musicians in classical music and assigns the developmental stages to age-related periods.Since the acquisition of outstanding expertise in classical music usually requires an early start on the instrument, appropriate teachers, long-term practice, and institutionalized training at a music academy, it is possible to give rough information on the typical ages at which different stages of development take place, as Manturzewska suggested.
The achievement levels of the TAD framework start with the gradual development from initial potential and end with transformational achievement.Manturzewska's model goes beyond the peak of achievement (which is always more or less extended in any transitory phase) and describes two additional stages of development, including age-related decline in performance, shifts of interest to teaching and generative activities, and the withdrawal from professional activity.
Unlike the TAD framework, Manturzewska's model also addresses the costs of achievement-oriented development, such as crisis-like transitions, health risks, and age-related performance losses and their effects on development and emotions. Nevertheless, overall, there are remarkable similarities between the essential elements of Manturzewska's model and the TAD framework. In taking a domain-specific developmental perspective, Manturzewska's model goes beyond the TAD framework, representing the development of professional musical performance as an interplay of gains and losses across the entire lifespan. Yet the two models have the potential to complement each other.
To draw an interim conclusion, the TAD framework is a good starting point for viewing Manturzewska's model from the perspective of current research: both models deal with the development of talent from potential to excellence, although the TAD framework provides only a general framework requiring domain-specific elaboration, while Manturzewska's model provides both a framework and content-specific elaboration. Comparison of the current TAD framework with Manturzewska's older model reveals remarkable consistencies in the basic multifactorial-dynamic understanding of giftedness; in the indicators of potential and predictors of performance development; in the sequence of developmental phases leading to peak performance; and in the factors that play an essential role in performance development. In other words, Manturzewska's model, conceived almost 40 years ago, is a valid and theoretically compatible model of musical talent development into adulthood. In accordance with earlier research, it suggests that (musical) peak performance and achievement is a temporary phase in life and that talent development changes direction in the second half of life. It can therefore be asked to what extent the developmental phase(s) after the achievement of transformational achievement can be modeled by extensions of the TAD framework. Müllensiefen et al. (2022) have proposed several ideas as to how a model of musical talent development could be implemented in the TAD framework. Future studies could, for example, investigate the extent to which Manturzewska's model could be a starting point for modeling developmental stages beyond the stage of transformational achievement.
Manturzewska's model in relation to the diversification of professional musicians' careers today
One of the aims of the project that Manturzewska initiated in 1975 was to investigate "functioning as a professional musician and artist in the contemporary society" (Manturzewska, 1990, p. 113). Almost 50 years have passed since then, and society and musical cultures have changed significantly. A major change affecting the careers of professional musicians is the decrease in permanent positions in orchestras, while portfolio careers encompassing various musical and non-musical roles in alternating employment have simultaneously increased significantly (e.g. Gembris & Langner, 2006; Mills & Smith, 2006). New technologies and digital media have led to profound transformations in the production, reproduction, and reception of music, including in the field of classical music. They have a significant impact on employment opportunities, musicians' activities, and the development of musical careers (Kavanagh, 2018). Burland and Bennett (2022) argue that top musicians who can base their careers exclusively on performance activities have always been the exception:

[The] performance career in music – and this is the case across all musical genres – is more typically represented as a portfolio of project-based, self-generated activities that feature diverse performance roles alongside teaching, composition, community work, both music- and non-music roles, and the organisational capacity to bring it all together. (Burland & Bennett, 2022, p. 135)

Although this may be true, potential differences in the labor market for musicians in different countries, for example, between the United States, the United Kingdom, and countries where permanent full-time positions in orchestras continue to exist and play an important role in musicians' careers, should be taken into account. Despite the dissolution of many orchestras and a significant loss of permanent orchestral positions in Germany, 129 professional, publicly funded orchestras (theater orchestras, concert orchestras, chamber orchestras, radio orchestras) with 9,746 permanent positions still existed in 2018 (Deutsches Musikinformationszentrum, 2020).
Data from a study of graduates from various music universities in Germany showed that 38% of players of string instruments and 42% of players of wind instruments had permanent positions in an orchestra at the time of the survey (Gembris & Langner, 2006, pp. 149-150). Thus, although the number of permanent orchestral positions has decreased while freelance work has continuously increased, permanent positions still represent a substantial part of the music labor market and culture. Both forms of employment and the different career paths associated with them coexist simultaneously and contribute to the diversity of musical careers.
Against this background, the question arises as to the extent to which the courses of professional musicians' careers today are still represented in Manturzewska's developmental model. After all, the model refers to the talent development and careers of outstanding musicians who, at around 30 years of age (at the beginning of Stage IV in the model), have reached a "relatively stable professional position" (Manturzewska, 1990, p. 136) and can focus mainly on their careers as soloists or traditional orchestral musicians, as depicted in Stages IV, V, and VI. In the terminology of the TAD framework (Preckel et al., 2020), the participants in Manturzewska's study may be considered as musicians who have reached the highest level of achievement (transformational achievement), which relatively few musicians accomplish (cf. Preckel et al., 2020, p. 698).
Manturzewska's developmental model is tailored to this type of career trajectory for professional musicians in classical music, which, although now a minority path, still represents a substantial component of contemporary musical culture. The occupational profile of the professional musician in permanent employment (e.g., as an orchestral musician or tenured teacher at a conservatoire), or of the soloist earning a living by performing, has not fundamentally changed. It can therefore be assumed that Manturzewska's model is still valid for this particular type of career in classical music, even in contemporary music culture. The model does not claim to represent other professional career paths, such as portfolio careers.
Nevertheless, Manturzewska's model appears to be largely transferable to the early development of professional musicians until the end of their professional training at a conservatoire. Professional classical musicians usually start their training on the instrument around the age of six, investing a lot of time and resources in practicing and acquiring expertise using their instrument and music-related knowledge. They go through rigorous training at the music academy before facing the challenge of finding professional jobs to earn a living. This rigorous process of expertise acquisition (represented in Manturzewska's model from Stage I to the end of Stage III and the beginning of Stage IV) is similar for all musicians in classical music, regardless of whether freelance portfolio work is later pursued or if they find employment in a permanent position. This assumption is in line with models of expertise acquisition (Platz & Lehmann, 2018) and talent development according to the TAD framework, as pointed out by Preckel et al. (2020).
After the stages in which expertise is acquired, however, the developmental paths of musicians do diverge, such that portfolio careers, with their changing combinations of different occupational activities, can take courses deviating from Manturzewska's model. However, Manturzewska's model includes some key aspects of development during adulthood and old age that are generally relevant for musicians' careers, such as health risks, age-related performance losses, the ongoing development of self-image and identity, the growth of a sense of social responsibility and generative commitment, and the intensification of teaching activities. To find out which courses portfolio careers can take from a lifetime perspective and how they can be represented in a theoretical model, longitudinal studies, biographical studies, and case studies are particularly necessary, as Preckel et al. (2020) suggested.
In the last 20 years, an increasing number of studies of musicians' professional development and the characteristics and challenges of portfolio careers have been carried out (e.g., Bennett, 2012; Bennett & Hennekam, 2018; Burland & Bennett, 2022; Mills & Smith, 2006; Smilde, 2009). Bennett and Hennekam (2018) conducted case studies with 108 professional musicians, including 10 who were classically trained, and examined their working practices in the early, middle, and late stages of their careers. The authors investigated musicians' uses of the adaptive strategies of selection, optimization, and compensation (SOC theory) at different stages of their careers to adapt to changing working conditions and maintain their performance. Selection means the selection of goals according to available needs and opportunities, optimization describes focusing and increasing efforts on a specific goal, and compensation refers to strategies to maintain a desired level of performance (Bennett & Hennekam, 2018). The results show, for instance, that early in their careers, participants in the research reconsidered performance as the primary criterion for success to optimize their potential. In the mid-career phase, many participants realized that they "had underestimated the fierce competition for performance work" and "the need to engage in teaching and administration" (Bennett & Hennekam, 2018, p. 114). They changed their career goals accordingly (selection) to optimize their potential. These changes in the evaluation of the field of activity, the adjustment of goals, and the shift in the focus of activity, viewed from the perspective of SOC theory, can also be related to the corresponding changes described in Stages V and IV of Manturzewska's model. The lifespan perspective method of "using a retrospectively longitudinal approach to look back in time within individual accounts and to analyse snapshots of practice at different career phases" (Bennett & Hennekam, 2018, p. 122), also used by Manturzewska, has proved to be a robust method that is promising for future studies.
Conclusion
Since Manturzewska conducted her biographical study of the lifespan development of professional musicians in the 1970s, interdisciplinary research investigating talent and the factors influencing talent development has made considerable progress. The extent to which Manturzewska's model aligns with current knowledge is remarkable (e.g., in relation to the dynamic, multidimensional, and interactive concept of musical talent; the description of the acquisition of musical expertise and the later stages of development; and the factors and processes underlying the development of musical performance over the lifespan).
Compared to other models, Manturzewska's model is outstanding in that it not only considers achievement, achievement maximization, and their determinants from a lifetime perspective (as, for example, expertise research does), but also takes into account the musician as a human being: the development of their personality and identity, changes in performance, and social relationships (family, teachers, partners in "musical dialogue"). Moreover, it integrates critical phases and transitions, health risks, weaknesses, and emotional aspects from the musician's perspective. For this reason, Manturzewska's model can be described as a holistic, integrative, and humanistic model of the lifespan development of professional musicians that stays close to musical practice.
All in all, it remains unique to this day, representing the most differentiated and multifaceted model of the musical development of a traditional professional career in classical music. It has been the starting point for many researchers since the 1980s and provides a solid basis for further research. The research of Manturzewska and other authors converges on the goal of achieving a better understanding of the lifelong development of musicians in all its possible facets and the diversity of its course. This provides an empirical basis for more practice-oriented education and counseling of future musicians, which supports employability and the creation of sustainable careers.
Figure 1. Manturzewska's model of lifespan development of professional musicians.
Figure 2. Individual differences between periods of optimal achievement in different musical areas. Source: From Manturzewska (2006, p. 36). Copyright 2006 by Peter Lang GmbH. Reprinted with permission.
Figure 3. The estimated periods of the highest performance of professional musicians. | 2023-11-17T16:07:20.821Z | 2023-11-15T00:00:00.000 | {
"year": 2023,
"sha1": "5374b3c600b5aca8d369532a0e1598cdadc04b0a",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/10298649231191430",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "be9680115a5152ff143d7c7deff52f352371f206",
"s2fieldsofstudy": [
"Art",
"Education",
"Psychology"
],
"extfieldsofstudy": []
} |
55304198 | pes2o/s2orc | v3-fos-license | Design and development of driving simulator scenarios for road validation studies
In recent years, the number of road studies using driving simulators has grown significantly. Simulators allow the evaluation of controlled situations that would otherwise require observations disproportionate in time and/or cost. The Institute of Design and Manufacturing (IDF) of the Polytechnic University of Valencia (UPV) has developed, in collaboration with the Highway Engineering Research Group (GIIC) of the UPV, a low-cost simulator that allows the rapid and effective implementation of a new methodology for validation studies of different roads, by implementing simulator scenarios of existing roads. This methodology allows the development of new scenarios based on the analysis of a layered file system. Each layer includes different information about the road, such as mapping, geometry, signaling, aerial photos, etc. The creation of the simulated scenario is very fast, being based on geometric design software, which makes it easier for consulting firms to use the system to evaluate and audit a particular route, obtaining reliable conclusions at minimal cost, even if the road has not actually been built. This paper describes the basic structure of the layers generated for developing scenarios and guidelines for their implementation. Finally, the application of this methodology to a successful case is described. CIT2016 – XII Congreso de Ingeniería del Transporte, València, Universitat Politècnica de València, 2016. DOI: http://dx.doi.org/10.4995/CIT2016.2016.4088 This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
Introduction
Simulation can be described as a method of reproducing a situation similar to reality, yet controllable. To achieve this, it is necessary to reproduce an environment with stimuli identical to a real situation. This feature makes a simulator a flexible scientific research tool, assisting laboratory experiments that could be expensive, dangerous, or unrepeatable in the real world. In the field of driving, simulations are performed with driving simulators. These devices generate situations requiring the same responses as real-life driving, but without the risks of being on the road. The basis for assessing any simulator is applicability: simulators must be designed and equipped according to their intended use. Driving simulators have developed enormously over the past years; versions now range from PC-based models to high-level models such as the Daimler-Benz simulator (Nordmark, 1990; Weir and Clark, 1995). By using driving simulators, a wide variety of studies can be undertaken regarding the driver, the vehicle, and the road.
Sometimes it is useful to categorize driving simulator research according to experimental variables instead of study objectives. These variables concern the driver, the vehicle, the environment, and the road. Examples of these types of studies can be seen in Table 1. Basically, the advantages of using simulators as a research tool can be summarized as follows: experimental work can be closely defined and easily repeated; parameters and experimental variables can be easily modified and stored; the effects of driver fatigue can be safely studied; accidental or unpredictable situations can be analyzed safely; and prototype and series design decisions can be made at an earlier phase of development. In accordance with more modern thinking, driving simulation is regarded as a tool for promoting risk awareness and a way of allowing the driver candidate to try out various driving situations which cannot be planned in regular traffic or which would involve excessive danger on the road (Verwey, 1995). However, the purpose of the simulator in many human factors studies is to detect differences in performance produced by changes in the subject's capabilities (e.g. under the influence of alcohol, or with reduced capabilities due to illness or disability) or differences in secondary-task loading (e.g. use of an in-vehicle route-guidance system) (Read and Green, 1999).
A simulator with good fidelity should be able to train the basic psychomotor aspects (control of steering and speed) under complex conditions. Even more importantly, a simulator with good control over the scenarios can teach a wide range of cognitive abilities required to deal with complex roads and difficult traffic conditions, including appropriate situation awareness, hazard perception, decision making and defensive driving techniques (Allen et al. 2000).
The use of driving simulators in road design studies, mainly to analyse the influence of geometry on driver behaviour, has been widely reported by different researchers, covering the coordination of horizontal and vertical alignment, cross-section analysis, overtaking manoeuvres, access to acceleration and deceleration lanes, intersection design and signalling. In addition, driving simulators constitute a very useful tool to study road safety taking human factors into account, as they can generate virtual scenarios where the driver acts similarly to how they would on a real road. The road to be analysed may already have been built or may still be under design, to be evaluated from a safety perspective. In this case, driving simulators enable a better analysis, since the research allows data to be obtained at a lower cost, with less risk and greater control over the variables under study, mainly speed.
Analysing in a driving simulator a virtual road that exists only in the early design phase provides an accurate risk assessment; in this case, the virtual scenario representing the road geometric design is crucial. But depending on the objectives pursued in the research, the scenario modelling procedure requires very careful control of the environmental conditions to be reproduced. This design can be considered one of the most critical steps, constituting a real bottleneck in the implementation of experiments (Bhatti et al., 2012). The definition of where and how a series of events will occur in the simulated environment, and the characteristics of objects -road profile and geometry, road signs and markings, geographical environment, textures and colours, lighting, shadows, etc.- will determine the success or failure of the scheduled experiments. Scenario modelling and specification requires knowledge of both the traffic characteristics and the simulation conditions, so as to be as close as possible to the real world. In addition, virtual scenarios are becoming increasingly complex and sophisticated, in order to assess, for example, the use of advanced driver assistance systems (ADAS), or even vehicle-to-infrastructure or vehicle-to-vehicle communication (V2X) (Hiblot et al., 2010).
The creation of databases with information on the road network to be used for scenario design has traditionally required the use of different, non-standardized tools of different commercial origin. This leads to an incomplete development process with excessive time and cost, and results that cannot be exported to other simulation tools. Thus, developers of driving scenarios on the market have been looking for ways to implement newer advanced design tools for simulation scenarios that would allow interoperability between databases and experimental simulation tools, both for training and research applications. In practice, this means that databases with an interchangeable format and basic information could be used by different users and experimental tools.
An example of this trend is the RoadXML software, developed by Oktal, whose main objective is to use a single data format for all modules of the simulator. The format is based on XML files and is designed to be flexible and extensible, so that any user can improve the characteristics of the road network used, even with their own data. It is therefore an open database format for road design, whose structure is composed of different layers of data, each of which represents a type of differentiated information (Chaplier et al., 2010). These layers are divided into 4 levels of information: Topology -location of components and connections in the road network-, Logic -information on elements in the road environment-, Physics -characteristics of elements- and Visual -3D representation of elements-. Nevertheless, it is difficult to find scenarios in RoadXML format. There are open programming libraries to read data in this format, but there is only one editing software (the SCANER Studio™ software released by Oktal).
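As an illustration of the layered idea, the following sketch parses a minimal layered road file with Python's standard library. The element names and sample content are hypothetical, loosely inspired by the four RoadXML information levels rather than the actual RoadXML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal layered road file, loosely inspired by the four
# RoadXML information levels (not the real RoadXML schema).
SAMPLE = """<road name="CV-35">
  <topology><segment id="s1" start="53500" end="54100"/></topology>
  <logic><sign type="speed_limit" value="80" station="53750"/></logic>
  <physics><surface friction="0.8"/></physics>
  <visual><texture file="asphalt.png"/></visual>
</road>"""

def load_layers(xml_text):
    """Return a dict mapping each layer name to its child elements."""
    root = ET.fromstring(xml_text)
    return {layer.tag: list(layer) for layer in root}

layers = load_layers(SAMPLE)
# Each information level becomes one entry, queryable independently.
print(sorted(layers))                    # layer names
print(layers["logic"][0].get("value"))   # speed limit at one station
```

Because each level lives in its own subtree, a tool that only needs, say, the logic layer can ignore the rest of the file, which is the property that makes layered formats attractive for interoperability.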
Another example of this methodology is the open format OpenDRIVE, developed by VIRES for the multinational Daimler AG. The architecture of this software is also based on the use of different layers of data that are exchanged between several collaborating car manufacturers. Such layers describe the position of other vehicles and pedestrians relative to the simulated vehicle, road geometry characteristics, the position of traffic signals, etc. (Dupuis et al., 2010).
Both solutions offer the possibility of working with open-access software that theoretically enables the exchange of formats from databases for the design of simulation environments. However, both in the RoadXML format developed by Oktal and in OpenDRIVE from VIRES, the participation and collaboration of an intermediate agent that manages access to simulation scenario file sharing is needed. This fact can be seen as a barrier and a real drawback for the interoperability of scenario file databases between market agents, users and developers, as the developed scenarios would always need to be filtered through private market companies.
Taking this methodology for developing road simulation scenarios into account, and based on the analysis of a multiple-layer file format system, the Institute of Design and Manufacturing (IDF) of the Polytechnic University of Valencia (UPV), in collaboration with the Highway Engineering Research Group (GIIC) of the UPV, has created a new virtual scenario design procedure that has been applied to validation studies of different road geometric designs, using a low-cost simulator. This paper describes the basic structure of the layers generated for the scenarios developed and guidelines for their implementation. The creation of the simulated scenario is very fast, being based on geometric design software, and will make it easier for consulting firms to use the system for evaluating and auditing a particular route, obtaining reliable conclusions at minimal cost, even if the road has not yet been built.
Simulation software requirements
The programming environment developed in this research is based on Visual C++ 2008 Express to run in real time. To generate meshes of objects in the environment, such as signs, walls, trees, etc., the 3D modeling tool Blender 2.70 and later is used. Python 3 is used to process data and perform calculations offline. To support vehicle dynamics and their behavior with the collision system, the NVIDIA PhysX 3.2 library is used. The audiovisual part relies on the Microsoft DirectX 8.1 libraries, specifically Direct3D for graphics rendering and DirectSound for sound reproduction.
Simulation scenario development
To build virtual reality urban and interurban environments that already exist -or may be in the design phase-, commercial 3D editing and modeling tools such as 3ds Max or Blender are often used. Alternatively, game engines such as Unity or UDK could be used to simplify the design tasks. However, it is not possible to use these methods for environments of more than 100 km², as the cost in development time and people involved is too high. Another problem that arises when using this type of modeling software is that it usually suffers from technical limitations, such as an excessive amount of required memory or limited floating-point precision.
For this reason, in this research another strategy has been employed to generate scenarios with familiar low-cost tools and minimal human supervision, using as a basis the horizontal and vertical alignment of the road section, cartography and ortho-photography of the area, an inventory of road and environmental elements, etc. Given these data, it is possible to procedurally generate the geometry and texture of the ground.
Some solutions for designing large scenarios consist of working with the viewing distance of details, so that nearby areas are perceived with a lot of detail, while resolution decreases with distance. For this purpose, a hierarchical subdivision of space in a quad-tree is performed to implement a CLOD (Chunked Level Of Detail) system. Quad-tree nodes are updated in real time, loading and unloading meshes and reducing detail depending on the distance to the vehicle. The union of all the meshes of different resolutions generates an anisotropic surface with a higher density of polygons around the vehicle. The CLOD methodology allows us to see areas several kilometers away while maintaining a controlled and acceptable frame rate and mesh quality. When using a less powerful graphics card, the workload can be reduced by simply lowering the resolution index of the meshes.
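The distance-based subdivision described above can be sketched as follows. The tile sizes, distance thresholds and function names are illustrative assumptions, not taken from the simulator's actual implementation:

```python
def dist_to_tile(cx, cy, half, vx, vy):
    """Distance from the vehicle (vx, vy) to the nearest point of a
    square tile centred at (cx, cy) with half-width `half`."""
    dx = max(abs(vx - cx) - half, 0.0)
    dy = max(abs(vy - cy) - half, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def lod_for_distance(d, base=250.0, max_lod=5):
    """Finest LOD near the vehicle; one level is lost each time the
    distance doubles past the base threshold (values illustrative)."""
    lod, threshold = max_lod, base
    while d > threshold and lod > 0:
        lod -= 1
        threshold *= 2.0
    return lod

def subdivide(cx, cy, half, vehicle, depth=0, max_depth=5, out=None):
    """Recursively split a tile while the vehicle is close enough to
    justify more detail; collect the resulting leaf tiles."""
    if out is None:
        out = []
    d = dist_to_tile(cx, cy, half, *vehicle)
    if depth >= max_depth or lod_for_distance(d) <= depth:
        out.append((cx, cy, half, depth))
        return out
    h = half / 2.0
    for ox in (-h, h):
        for oy in (-h, h):
            subdivide(cx + ox, cy + oy, h, vehicle, depth + 1, max_depth, out)
    return out

# A 8.2 km square of terrain around the origin, vehicle near the centre:
tiles = subdivide(0.0, 0.0, 4096.0, vehicle=(100.0, 0.0))
```

The union of these leaves is the anisotropic surface the text describes: small, deep tiles around the vehicle and large, shallow ones far away; in the real-time system the loop re-runs as the vehicle moves, loading and unloading the corresponding meshes.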
Input data files
To generate an interurban environment that meets our needs, we need a height map to generate the terrain, information about the axis alignment and camber of the road to be evaluated, an ortho-photo of the environment to generate terrain textures, and the positions of objects such as signs, trees, safety fences, walls, etc. All these data can be extracted from various sources and in different formats. If the road network already exists, these sources are usually published free of charge and frequently updated by public administrations. If the road network on which the simulation scenario will be based is under development, this information would be obtained from commercial road geometry design software, such as Civil 3D or similar packages.
To obtain the terrain geometry, a height map in ASC format is needed. Road information is encoded in two XLSX files. The first stores the path of the road centre axis, encoded in three fields: station, X and Y coordinates. Elevations are encoded in four fields of the second file: station, height on the axis, left height and right height. The camber is calculated from these three heights.
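A minimal sketch of the camber computation from the second file's fields (station, height on the axis, left and right heights). The slope-over-lane-width formula and the sample rows are illustrative assumptions; only the field names and the 3.25 m lane width come from the paper:

```python
# Cross-slope (camber) on each side from the three heights stored per
# station. The 3.25 m lane width is from the CV-35 section described
# later; the slope formula itself is an illustrative assumption.
LANE_WIDTH = 3.25  # metres, edge of lane relative to the axis

def camber(axis_h, left_h, right_h, width=LANE_WIDTH):
    """Return (left, right) cross-slopes as signed fractions."""
    return ((left_h - axis_h) / width, (right_h - axis_h) / width)

# Made-up rows: station, axis, left, right heights (metres).
rows = [
    (53500.0, 412.00, 411.935, 411.935),   # crowned: both sides fall
    (53520.0, 412.40, 412.530, 412.270),   # superelevated curve
]
for station, axis_h, left_h, right_h in rows:
    cl, cr = camber(axis_h, left_h, right_h)
    print(f"{station:.0f}: left {cl:+.1%}, right {cr:+.1%}")
```

A crowned section gives two negative slopes, while a superelevated curve gives one positive and one negative, which is how the physics engine later receives the banking of each station.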
Around the road, all kinds of signs and vegetation are located. These relatively small objects have a shape, position and orientation that must be specified when generating the complete environment of the scenario. To simplify the process and use familiar tools, in our case the AutoCAD DWG format was used to separate data layers. Each object is encoded in a different layer, using points, lines and polylines to specify its appearance.
Input files processing
The simulation software requires its own files and formats, so that it is independent of the original file formats of the environment data. Since the original formats are not prepared to be processed together in real time, we pre-process them to generate more optimized files prepared for the CLOD system. These optimizations reduce the number and size of files, in order not to saturate the resources of the simulator computer and to accelerate loading and unloading of scenario chunks in the background. The pre-processing is carried out in several stages, which are executed as code scripts. Each script takes some input files and transforms them into other files which, in turn, can be read by other scripts. All this pre-processing is performed only once, taking several minutes to complete.
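The staged script chain described above can be pictured with a small driver function; the stage functions and artefact names below are toy stand-ins for the real pre-processing scripts:

```python
def run_pipeline(stages, artefacts):
    """Run pre-processing stages in order; each stage consumes named
    artefacts produced earlier and contributes one new artefact."""
    artefacts = dict(artefacts)
    for fn, inputs, output in stages:
        artefacts[output] = fn(*(artefacts[k] for k in inputs))
    return artefacts

# Toy stages standing in for the real scripts (all names are made up):
stages = [
    # parse an ASC-style height map into a grid of floats
    (lambda asc: [[float(v) for v in row.split()] for row in asc],
     ("heightmap_asc",), "grid"),
    # flatten the grid into a vertex list for mesh generation
    (lambda grid: [v for row in grid for v in row],
     ("grid",), "mesh_vertices"),
    # split vertices into fixed-size chunks for the CLOD loader
    (lambda verts: [verts[i:i + 2] for i in range(0, len(verts), 2)],
     ("mesh_vertices",), "chunks"),
]
out = run_pipeline(stages, {"heightmap_asc": ["1 2", "3 4"]})
```

The point of the chain is that each intermediate artefact is written once, offline, so the real-time loader only ever touches the final optimized chunk files.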
Other input data files
The vehicles are configured in an XLSX document, with several sheets parameterizing the various vehicle parts and components. The physics library supports all of these parameters, so that an unlimited number of vehicle models can be set up. Together with the geometric description of the body, the physics engine can determine the physical and dynamic behavior of vehicles in the simulation. To model the vehicle chassis and convex collision volumes, Blender can be used. This program can also indicate the position of the wheels and the centre of gravity of the vehicle. In addition, the rendering system also needs a graphical model of the vehicle.
Road geometry data to validate the scenario design
The road section selected for validating the simulator scenario developed was road CV-35, from Losa del Obispo (station 53+500) to Titaguas (station 83+700), located in the Valencian Community (Spain). Lane width is 3.25 m, while shoulder width is 0.25 m. The AADT for this road is 2012 vehicles per day. This road was chosen because the section has three segments with different geometric characteristics which cover a wide range of curves. The total length is 30.185 km, but the effective length is 28.877 km, because there are 1.308 km of urban road in the section. In our validation study, cartography and ortho-photography were downloaded free of charge from the website of the National Plan of Aerial Ortho-photography (PNOA). The cartographic files were based on LIDAR data with a 5 m mesh density, which is suitable for this project. The horizontal alignment was obtained according to the methodology proposed by Camacho-Torregrosa et al. (2013). In addition, the vertical alignment was extracted from GPS data of the tests. The fitting of the vertical alignment was carried out with the same program used for fitting the horizontal alignment, although this module of the program was designed specifically for this research. With both alignments, the studied road section was restituted in AutoCAD Civil 3D.
In addition, a CAD file was created in order to collect the different road and environmental elements, such as trees, traffic signs (speed limit, recommended speed, dangerous curve, etc.) or safety barriers.
Driving simulator tool
The experimental tool used for validating the designed scenario was the SE2RCO driving simulator, an interactive fixed-base driving simulator (Llopis-Castelló et al., 2016). It is composed of a simulation computer, which provides the graphics performance required to run the simulation software and collect data in real time; a wireless router; a three-screen display of 1.80 x 0.34 m with a 120º field of view; a Matrox TripleHead2Go graphics card; a stereo sound system; the steering wheel, pedals and gear shift of a Citroën Saxo; and a generic adjustable seat. All this provides a view of the road and the environment very close to real conditions. The equipment of the simulator allows many variables to be collected, such as speed, location, azimuth or lateral speed, at a frequency of 10 Hz. In addition, the simulator has been fitted with sensors: a load cell to measure forces on the brake pedal; potentiometers to measure displacement of the three pedals; a micro-switch to detect the gear lever position; an encoder to measure the steering wheel angle; and a torque sensor to measure torques on the steering wheel.
Validation experiment scheduled

Data collection
The main variable considered for the validation of the SE2RCO driving simulator is speed. The methodology applied in this study was presented by García et al. (2015). The data collection was carried out with three VIRB Elite cameras, which allow continuous speed profiles to be obtained. These cameras were placed in the car of each participant. The experiment was scheduled to compare the continuous speed profiles obtained in the field study and in the simulator. In this regard, the average and operating speeds were sought on 79 curves with radii from 40 to 520 m and 52 tangents with lengths ranging from 120 to 1500 m. The test consisted of driving forward and backward along the entire road segment. Before performing the test, each driver was informed about the aim of the research and what they had to do. Then, the driver had to sign an agreement.
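The operating speed mentioned above is conventionally the 85th-percentile speed (V85); a minimal computation from spot speeds could look like the following (the speed values are hypothetical, not data from the study):

```python
import statistics

def operating_speed(speeds):
    """V85 operating speed: the 85th percentile of observed speeds.
    quantiles(n=20) yields the 5%, 10%, ..., 95% cut points, so
    index 16 is the 85th percentile (linear interpolation)."""
    return statistics.quantiles(speeds, n=20, method="inclusive")[16]

# Hypothetical spot speeds (km/h) at one curve, field test vs simulator:
field = [62, 65, 67, 68, 70, 71, 73, 74, 76, 80]
simulator = [63, 66, 68, 70, 71, 72, 74, 76, 78, 82]
v85_field = operating_speed(field)
v85_sim = operating_speed(simulator)
```

Repeating this per curve and tangent, for both the field and simulator data sets, yields the paired V85 and average-speed values that the validity analysis correlates.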
This study was performed by 28 drivers over 7 days between March and April 2014, under favorable weather conditions and in daylight. In addition to the data collected by the cameras, information about driving experience, road familiarity, dizziness and workload was also collected through a survey. The goal of this survey was to characterize the sample of drivers based on different parameters and to check the drivers' consistency and the naturalness of the test. All drivers answered that they drove in a natural (or quasi-natural) way, with a low workload increase due to the experiment. Otherwise, any non-natural driver would have been removed from the analysis.
This testing process was repeated in the simulation scenario developed in the project, which reproduced the same route. The methodology applied for developing the scenario was the one presented in this paper. Finally, the virtual environment was designed without oncoming traffic, under favorable weather conditions and with dry pavement.
Results
The results obtained in this experiment were presented by Llopis-Castelló et al. (2016). From the perspective of the driving simulation scenario, the graphical results of the multi-layer methodology applied can be observed in Figure 1. This scenario was designed to reproduce the same road as the field test, approximately 30 km in length along the CV-35 from Losa del Obispo to Titaguas (Valencia, Spain). The level of detail in the plan view of some sections is captured in Figure 1a. The integration of the different objects in successive layers that shaped the virtual scenario -safety fences, barriers, signaling, road markings, vegetation, etc.- is shown in Figures 1b and 1c. The degree of realism of the virtual scenario was assessed through surveys of all participants in the experiment. These surveys, completed after participation in the driving simulator test, sought to determine symptoms of adaptation to the simulation and the perception of the simulated reality -naturalness of the participants' driving, workload, familiarity with the road, environment, etc.
The results showed that 62.5% of drivers assessed the quality of the virtual environment created as medium, while 33.3% considered it high. Approximately 80% of participants felt that the degree of similarity between the real driving task and the driving simulator could be considered medium or high. Regarding the assessment of workload and ease of driving, most participants rated them as average, very close to the results obtained in the field study.
a) Aerial view of a section of the road. The road and terrain overlap with fair realism despite the data coming from different sources.
b) Driver's view of a straight section of the road. The vegetation adapts to the terrain and the environment.
c) Driver's view of a high-safety section with walls aligned with the axis of the road.
Discussion
The possibility of working with software packages that theoretically enable the exchange of data formats between different databases for the design of simulation environments has been analyzed in past years. Some cases, such as the RoadXML format developed by Oktal or OpenDRIVE from VIRES, have demonstrated that a methodology for developing road simulation scenarios based on a multiple-layer file format system is a fairly good procedure. In these cases, data often come from different sources but in known formats. Nevertheless, the participation and collaboration of private intermediate agents that manage and control access to simulation scenario file sharing can be seen as a barrier and a drawback for real interoperability.
The same multi-layer methodology for developing virtual driving simulation scenarios has been used in this research program by the IDF and GIIC teams of the UPV. In our case, however, we have used common formats (XLSX, DWG, etc.) that can be edited using different programming libraries and software packages, both free and proprietary.
By using scripts to process these formats, we can quickly adapt them to other formats, opening the door to different free data sources. This allows us to generate scenarios of different areas -urban, suburban, rural, highway, etc.- in an automated procedure. After collecting the necessary data and running the scripts, we obtain a scenario ready to be executed by the simulation software. Since the data come from updated measurements, the scenario obtained is a quite faithful virtual reality representation.
With this methodology, the user can interact freely and easily with the virtual simulation cab, and visualize the road and surroundings several kilometers away thanks to the implemented CLOD system. The physics engine integrated in the simulation software allows the vehicle to behave dynamically and realistically, taking into consideration the unevenness, camber and frictional forces on the wheels.
Relative validity of the simulator tool has been achieved with respect to the operating speed and the average speed, since the speed measured in the simulator presents a high correlation with the speed identified in the field test. Furthermore, the statistical analysis showed the absolute validity of the simulator regarding average speeds, except in those configurations located near important intersections of the real road, where the simulated speed was much greater than the real speed. The quality of the virtual environment developed and the similarity of the driving task between the simulator and the real world helped the research fulfill the objectives pursued.
Conclusion
This new methodology for developing scenarios has been applied by the Institute of Design and Manufacturing (IDF) of the Polytechnic University of Valencia (UPV), in collaboration with the Highway Engineering Research Group (GIIC) of the UPV, to validation studies of different road geometric designs, using a low-cost simulator. This paper has described the development of a real simulated scenario. The procedure used is very fast, being based on geometric design software, and will make it easier for consulting firms to use the system for evaluating and auditing a particular route, obtaining reliable conclusions at minimal cost, even if the road has not actually been built.
Drivers' perception supported the validity derived from the speed analysis, as most of the volunteers assessed the quality of the virtual environment and the similarity of the driving task between the simulator and the real world as medium or high. Only drivers who suffered dizziness evaluated the simulator features negatively. | 2018-12-07T17:05:18.140Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "a1f210bcf7207f2bb51241675725543635fe136a",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.trpro.2016.12.038",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1826d1ed337c3391856b4ab070821f69c1a2dc5d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
218491778 | pes2o/s2orc | v3-fos-license | Immune environment modulation in pneumonia patients caused by coronavirus: SARS-CoV, MERS-CoV and SARS-CoV-2.
Currently, we are in a global pandemic of Coronavirus disease 2019 (COVID-19), which causes fever, dry cough, fatigue and acute respiratory distress syndrome (ARDS) that may ultimately lead to the death of the infected. Current research on COVID-19 continues to highlight the necessity of further understanding the virus-host interactions. In this study, we have highlighted the key cytokines induced by coronavirus infections. We have demonstrated that genes coding for interleukins (Il-1α, Il-1β, Il-6, Il-10), chemokines (Ccl2, Ccl3, Ccl5, Ccl10) and interferons (Ifn-α2, Ifn-β1, Ifn2) are significantly upregulated, in line with the elevated infiltration of T cells, NK cells and monocytes in the SARS-CoV-treated group at 24 hours. Also, interleukins (IL-6, IL-23α, IL-10, IL-7, IL-1α, IL-1β) and interferons (IFN-α2, IFN2, IFN-γ) increased dramatically in MERS-CoV at 24 hours. The similar cytokine profiles showed that the cytokine storm serves a critical role in the infection process. A subsequent investigation of 463 patients with COVID-19 revealed decreased amounts of total lymphocytes and of CD3+, CD4+ and CD8+ T lymphocytes in patients with the severe type, indicating that COVID-19 can impose hard blows on human lymphocytes, resulting in lethal pneumonia. Thus, taking control of changes in immune factors could be critical in the treatment of COVID-19.
INTRODUCTION
The family of coronaviruses (CoV) are enveloped RNA viruses which can be highly pathogenic to human beings [1]. Not long ago, the epidemics of two highly infectious coronaviruses, severe acute respiratory syndrome coronavirus (SARS-CoV) [2] and Middle East respiratory syndrome coronavirus (MERS-CoV) [3], had disastrous effects on human beings globally. The outbreak of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and Coronavirus disease 2019 (COVID-19), which originated in Wuhan, China at the end of 2019, has caused thousands of deaths [4]. Phylogenetic analysis of SARS-CoV-2 indicated that it is closely related to SARS-CoV (~79% sequence identity) and more distantly to MERS-CoV (~50%) [5]. The pathological changes found in puncture specimens from COVID-19 deaths suggest that its pathological characteristics are very similar to SARS-CoV- and MERS-CoV-induced viral pneumonia [6]. Thus, it is critical to identify common patterns between these lethal pathogens and the immune response.
Hence, identifying the key cytokines induced by coronavirus infection and the cells involved in the regulation of cytokine storms, and blocking their signal transduction, will greatly reduce the inflammatory response and the damage to the lung tissue and multiple organs of patients.
Invasion process and immune response of SARS-CoV, MERS-CoV and SARS-CoV-2
SARS-CoV-2 shows 88% identity to the sequences of SARS-like coronaviruses and about 50% to the sequence of MERS-CoV. Due to their similar structure, their pathogenesis is similar. SARS-CoV-2, just like SARS-CoV, requires ACE-2 for cell entry. MERS-CoV enters target cells not via ACE-2, but by binding to DPP-4. Both ACE-2 and DPP-4 are expressed in several human tissues. Once the virus enters the cells, antigen presentation subsequently stimulates the body's humoral and cellular immunity, which are mediated by virus-specific immune cells. The immune response causes many symptoms, and the main cause of death in coronavirus infection is the cytokine storm, a deadly, uncontrolled systemic inflammatory response. The strong immune response induced by COVID-19 results from the release of large amounts of pro-inflammatory cytokines and chemokines, similar to the symptoms of SARS-CoV and MERS-CoV infections. Hence, although the pathogenesis of COVID-19 is poorly understood, the similar mechanisms of SARS-CoV and MERS-CoV can still give us a lot of information on the pathogenesis of SARS-CoV-2 infection to facilitate our understanding of COVID-19 (Figure 1).
SARS-CoV-induced immune responses
To explore SARS-CoV-induced immune responses, we analyzed an infected mouse group. Lungs were harvested at 12, 24, and 48 hours post-infection, with at least three biological replicates per time point. Because elderly individuals are more susceptible to pneumonia and develop more severe symptoms, we analyzed the changes in inflammatory factors at 12, 24, and 48 hours after SARS-CoV infection in aged mice and found alterations in multiple factors. IL-1α, IL-1β, IL-6, and IL-10 were expressed at significantly higher levels, most markedly at 24 hours, while IL-7 showed moderate fluctuation and IL-23α a decreasing trend (Figure 2). These results indicate that SARS-CoV infection induced a cytokine storm. Regarding the interferon system, which protects mammals against viral infections, we analyzed interferon changes at 12, 24, and 48 hours after SARS-CoV infection in aged mice. IFN-α2, IFN-β1, and IFN2 all showed higher expression levels, especially at 24 hours (Figure 3), suggesting the activation of interferon-producing cell types such as plasmacytoid dendritic cells (pDCs) and pro-inflammatory monocytes. As for chemokines, which synergistically induce pro-inflammatory cell recruitment, the levels of CCL2, CCL3, CCL5, and CCL10 were all drastically elevated at 24 hours and remained high at 48 hours. Meanwhile, CXCL3 expression increased at 24 hours but decreased at 48 hours, and CXCL5 expression showed a decreasing trend at 24 and 48 hours compared to 12 hours (Figure 4). Taken together, these rising molecules reflect the host's antiviral response in the early phase.
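The per-timepoint comparisons above boil down to fold changes of infected versus mock expression. A minimal sketch of that calculation, using hypothetical expression values (the cytokine names and the pseudocount of 1, which avoids division by zero, are our assumptions, not part of the original pipeline):

```python
import math

def log2_fold_changes(mock, infected):
    # Per-cytokine log2 fold change of infected vs. mock expression.
    # A pseudocount of 1 guards against zero counts (an assumption here).
    return {g: math.log2((infected[g] + 1) / (mock[g] + 1)) for g in mock}

# Hypothetical values for two cytokines at one timepoint:
fc = log2_fold_changes({"IL6": 1, "IL7": 7}, {"IL6": 7, "IL7": 7})
```

A positive value indicates induction after infection, zero indicates no change.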
MERS-CoV-induced immune responses
To explore the common pattern of immune responses after coronavirus infection, we analyzed MERS-CoV-infected human microvascular endothelial cells, examining the expression of interleukin and interferon genes at 24 hours post-infection. The interleukins (IL-6, IL-23α, IL-10, IL-7, IL-1α, IL-1β) and interferons (IFN-α2, IFN2, IFN-γ) all increased dramatically (Figure 5), indicating an elevated antiviral immune response.
Differences in immune responses in young and aged mice
To explore the immune differences between young and aged mice, we analyzed cytokine variation at 12 and 24 hours after SARS-CoV infection. Several cytokines increased more markedly in aged mice than in young mice (Figure 6), indicating that coronaviruses may cause more severe cytokine storms in elderly patients. To quantify the immune response at the cellular level, we applied the ssGSEA method to compare the variation of different immune cell populations in aged and young mice after SARS-CoV infection. The levels of T cells, NK cells, and monocytes increased significantly in both aged and young mice. Lymphoid cells showed elevated levels in young mice but remained comparatively stable in aged mice, and granulocytes tended to decrease in both groups after infection. Interestingly, monocytes in aged mice increased more quickly (by 24 h) than in young mice (by 48 h) (Figure 7). These results show that coronavirus infection causes a strong immune response in both young and aged mice: lymphocyte-mediated immune responses are more pronounced in young mice, whereas monocyte-mediated responses are more rapid in aged mice.
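The ssGSEA scoring itself was done with the GSVA R package (see Methods). As a rough illustration of the idea, the sketch below scores one sample by how far a cell-type gene set is shifted toward the top of the expression ranking; it is a simplified version of the weighted-ECDF formulation, and the gene names are hypothetical:

```python
import numpy as np

def ssgsea_score(expression, gene_set, alpha=0.25):
    # Rank genes by expression; the most expressed gene gets the largest rank n.
    genes = sorted(expression, key=expression.get, reverse=True)
    n = len(genes)
    ranks = np.arange(n, 0, -1, dtype=float)
    in_set = np.array([g in gene_set for g in genes])
    # Weighted ECDF of in-set genes vs. uniform ECDF of the remaining genes;
    # the score is the (normalized) sum of their differences.
    w = np.where(in_set, ranks ** alpha, 0.0)
    p_in = np.cumsum(w) / w.sum()
    p_out = np.cumsum(~in_set) / (~in_set).sum()
    return float(np.sum(p_in - p_out) / n)

expr = {f"g{i}": float(100 - i) for i in range(10)}   # g0 is most expressed
score_up = ssgsea_score(expr, {"g0", "g1", "g2"})     # set enriched at the top
score_dn = ssgsea_score(expr, {"g7", "g8", "g9"})     # set enriched at the bottom
```

A positive score means the cell type's fingerprint genes sit near the top of the sample's ranking (suggesting higher infiltration), a negative score the opposite.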
Clinical immunoassay of COVID-19 patients
For further study, we analyzed immune cells in the peripheral blood of 463 patients with COVID-19 (Table 1). Total lymphocytes and CD3+, CD4+, and CD8+ T lymphocytes were significantly lower in severe-type patients than in common-type patients (Figure 8), indicating that SARS-CoV-2 can severely deplete human lymphocytes, resulting in lethal pneumonia. Moreover, total lymphocyte and CD8+ T lymphocyte counts decreased more severely in patients ≥50 years old than in those under 50, suggesting that young patients are more likely to recover, whereas CD3+ and CD4+ lymphocyte counts showed no significant difference between age groups.
DISCUSSION
Pathological manifestations of COVID-19 greatly resemble those seen in SARS and MERS infections, in which massive interstitial inflammatory infiltrates diffuse through the lung [6]. The cellular fibromyxoid exudate causing severe alveolar impairment observed at postmortem autopsy indicates that the cytokine storm may play a critical role in patients' rapid deaths. In this study, we found that genes encoding interleukins (Il-1α, Il-1β, Il-6, Il-10), chemokines (Ccl2, Ccl3, Ccl5, Ccl10), and interferons (Ifn-α2, Ifn-β1, and Ifn2) rose significantly in SARS-CoV-treated mice within 24 h, in line with the elevated infiltration of T cells, NK cells, and monocytes, and a similar pattern of cytokine induction was found in the MERS-CoV-infected group.
Investigating the inflammatory profiles in SARS and MERS may advance our knowledge of the immunopathological processes relevant to COVID-19 treatment. In this study, we reviewed SARS-CoV-infected mice and MERS-CoV-treated human microvascular endothelial cells to clarify the association between temporal changes in cytokine/chemokine profiles and the infiltration patterns of six immune cell types. We also retrospectively reviewed the clinical data of 463 cases of common- and severe-type COVID-19 discharged before February 6, 2020. Severe-type patients suffered more serious symptoms, such as higher fever, and took longer to recover, suggesting that the fluctuation of immune indices has predictive value.
To explore the specific mechanism of these immune-environment changes, we analyzed potential influencing factors. Cytokines not only aid in antimicrobial immunity but are also liable for immunopathological damage to host cells, causing significant morbidity or even fatality in multiple respiratory disorders [17,18]. Chemokines such as CXCL10 (IP-10) and CCL2 (MCP-1) have been shown to be up-regulated in monocytes/macrophages by SARS-CoV, consistent with our results [19]. The clinical progression of MERS cases shows that the secretion of monocyte chemoattractant protein-1 (MCP-1) and CXCL10 goes out of control [20]. Pro-inflammatory cytokines (IL-6, CCL5) and interferon-stimulated genes (CXCL10) are involved in Toll-like receptor (TLR) signaling [21]. These molecules are effectors in the progression of respiratory virus infections toward acute respiratory distress syndrome (ARDS), which is lethal to COVID-19 patients [22]. IL-12 is the main cytokine secreted by DCs that manages the differentiation of CD4+ T cells into Th1 cells and serves an essential role in cell-mediated immunity, while IL-23, a member of the IL-12 family, is a predominantly pro-inflammatory cytokine that plays a critical role in the growth of Th17 cells [23,24]. Increased expression of IL-12 and IL-23 in SARS-infected mouse lung tissue may indicate the activation of Th1 and Th17 responses, which has also been observed in MERS victims [10]. Interestingly, in SARS-CoV-infected cells, ACE-2 expression was significantly correlated with neutrophils, NK cells, Th17 cells, Th2 cells, Th1 cells, and DCs, which may call for further investigation [25].
IFN-α/β is regarded as one of the body's primary antiviral defenses. IFN-β exerts its effects through intercellular communication, inducing IFN-α/β and interferon-stimulated genes (ISGs), which constitute an important aspect of host antiviral defense [26]. Notwithstanding, particular cell types, such as pDCs and monocytes, have been confirmed to produce more IFN than other cell types upon viral infection [27], and the elevated IFN levels and monocyte infiltration in our analysis validate this. The innate immune response based on pDCs and monocytes may play a substantial role in the formation of the cytokine storm that severely damages the lung.
Lymphopenia is common in COVID-19 patients. Severe lymphocyte reduction occurred in about 10% of patients, especially in the severe group, consistent with recently reported results [28]. Flow cytometry showed that CD3+, CD4+, and CD8+ T lymphocytes decreased to varying degrees, and aged patients suffered a more severe decrease in total lymphocytes and CD8+ T lymphocytes. About 40% of patients had a decrease in CD4+ T lymphocytes, and the incidence was higher in the severe group than in the common group. This suggests that SARS-CoV-2 may mainly attack lymphocytes in the body, reducing CD4+ T lymphocyte counts, impairing immune function, predisposing patients to infection, and, in severe cases, leading to severe pneumonia.
CONCLUSIONS
In summary, we analyzed the cytokine profiles of SARS-CoV-infected mice and MERS-CoV-infected human microvascular cells. Interleukins (Il-1α, Il-1β, Il-6, Il-10), chemokines (Ccl2, Ccl3, Ccl5, Ccl10), and interferons (Ifn-α2, Ifn-β1, and Ifn2) increased dramatically in SARS-CoV-treated mice within 24 h. In MERS-CoV-treated cells, interleukins (IL-6, IL-23α, IL-10, IL-7, IL-1α, IL-1β) and interferons (IFN-α2, IFN2, IFN-γ) showed a significant upward trend at 24 h. Subsequent analysis revealed an elevated abundance of T cells, NK cells, and monocytes in both young and aged mouse groups treated with SARS-CoV. The impaired lymphocyte compartment in severe and aged COVID-19 patients indicates that the disease is more likely to progress when cytokines are exhausted and functional lymphocytes are suppressed. Thus, catching the treatment window for COVID-19 according to these immune molecules may be critical.
Microarray analysis
Microarray datasets related to gene expression were obtained from the GEO database. For the SARS-CoV dataset (GSE36969), young (8-week-old) and aged (1-year-old) female BALB/c mice were intranasally infected with 10^5 PFU of MA15 epsilon (a pathogenic SARS-CoV strain). For the MERS-CoV dataset (GSE79218), human microvascular endothelial cells were infected with MERSCOV002 (a pathogenic MERS-CoV strain) or mock, and the 24 h post-infection time point was selected for analysis. All gene expression datasets were independently log2 transformed and quantile normalized using the linear models for microarray data (LIMMA) package in the R language environment.
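The log2 transform and quantile normalization can be sketched in a few lines of NumPy. This is a naive version that ignores ties, unlike limma's `normalizeBetweenArrays`, and the pseudocount of 1 is our assumption:

```python
import numpy as np

def log2_quantile_normalize(mat):
    # mat: genes x samples expression matrix.
    logged = np.log2(mat + 1.0)                 # pseudocount of 1 (assumption)
    order = np.argsort(logged, axis=0)
    ranks = np.argsort(order, axis=0)           # per-sample gene ranks
    mean_sorted = np.sort(logged, axis=0).mean(axis=1)  # mean of each quantile
    return mean_sorted[ranks]                   # every sample gets the same distribution

mat = np.array([[31.0, 15.0], [3.0, 0.0], [7.0, 63.0]])
norm = log2_quantile_normalize(mat)
```

After normalization, every sample (column) shares the same set of values, so downstream comparisons are not distorted by per-array distributional differences.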
Clinical data
Patients diagnosed with COVID-19 at Wuhan Jinyintan Hospital from January 1 to February 6, 2020 were enrolled. This study was approved by the Ethics Review Committee of Wuhan Jinyintan Hospital. Diagnostic criteria followed the "Diagnosis and Treatment of New Coronavirus Pneumonia" guideline issued by the General Office of the National Health Commission [13]. We classified patients into two types: (1) Common: fever, respiratory tract and other symptoms, with or without pneumonia manifestations on imaging; (2) Severe: meeting any of the following: ① respiratory distress, RR ≥ 30 breaths/min; ② resting oxygen saturation ≤ 93%; ③ ratio of arterial oxygen partial pressure (PaO2) to fraction of inspired oxygen (FiO2) ≤ 300 mmHg.
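The two-type classification reduces to a simple rule: a patient is "severe" if any one of the three criteria is met. A sketch (the function name and argument units are illustrative):

```python
def classify_covid19(resp_rate, spo2_pct, pao2_fio2_mmhg):
    # Severe if ANY of the three criteria from the text is met:
    #   RR >= 30 breaths/min, resting SpO2 <= 93%, PaO2/FiO2 <= 300 mmHg.
    if resp_rate >= 30 or spo2_pct <= 93 or pao2_fio2_mmhg <= 300:
        return "severe"
    return "common"
```

For example, a patient with normal respiratory rate and oxygen saturation but PaO2/FiO2 of 250 mmHg is still classified as severe.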
Statistical analysis
The Wilcoxon rank-sum test was used to determine differences between two groups for continuous variables, and the Kruskal-Wallis rank-sum test for more than two groups. We applied single-sample Gene Set Enrichment Analysis (ssGSEA) to estimate the infiltration of immune cells [14] using the GSVA R package [15]. Fingerprint genes for granulocytes, monocytes, NK cells, activated and naive T cells, B cells, and lymphoid cells were extracted from a previous study [16]. Statistical analyses were performed in the R (version 3.6.1) language environment, and a two-sided P value < 0.05 was considered significant.
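For intuition, the Wilcoxon rank-sum comparison can be sketched with a normal approximation. The analyses in the text used R's built-in tests; this pure-Python version handles ties by average ranks and is only an approximation for small samples:

```python
import numpy as np
from math import erf, sqrt

def rank_sum_test(x, y):
    # Two-sided Wilcoxon rank-sum (Mann-Whitney U) test via the normal
    # approximation; returns (U, p). Tied values receive average ranks.
    data = np.concatenate([x, y])
    order = data.argsort()
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)
    for v in np.unique(data):               # average ranks over ties
        mask = data == v
        ranks[mask] = ranks[mask].mean()
    n1, n2 = len(x), len(y)
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2
    mu, sd = n1 * n2 / 2, sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sd
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return u, p

# Clearly separated groups give a small p; interleaved groups do not.
u0, p0 = rank_sum_test(list(range(1, 11)), list(range(11, 21)))
u1, p1 = rank_sum_test([1, 3, 5], [2, 4, 6])
```

The same ranking machinery underlies the Kruskal-Wallis test for more than two groups.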
AUTHOR CONTRIBUTIONS
JHZ conceived the initial concept and designed the study; KW, ZZ, and ZXY participated in designing the study and in the data extraction; ZZ and ZXY wrote the manuscript. All authors read and approved the final manuscript.
A Comparative Full-Length Transcriptome Analysis Using Oxford Nanopore Technologies (ONT) in Four Tissues of Bovine Origin
Simple Summary: A comparative transcriptomic analysis using Oxford Nanopore Technologies (ONT) was conducted in bovine testes (TESTs), ovaries (OVAs), muscles (MUSCs), and livers (LIVs). Across the 18 samples analyzed, the TESTs exhibited the most alternative polyadenylation (APA) events, related to male reproductive processes. Abstract: The transcriptome complexity and splicing patterns in male and female cattle are ambiguous, presenting a substantial obstacle to genomic selection programs that seek to improve productivity, disease resistance, and reproduction in cattle. A comparative transcriptomic analysis using Oxford Nanopore Technologies (ONT) was conducted in bovine testes (TESTs), ovaries (OVAs), muscles (MUSCs), and livers (LIVs). An average of 5,144,769 full-length reads was obtained from each sample. The TESTs were found to have the greatest number of alternative polyadenylation (APA) events, involved in male reproductive processes such as sperm flagellum development and fertilization. In total, 438 differentially expressed transcripts (DETs) were identified in the LIVs and 214 in the MUSCs in comparisons of females vs. males. Additionally, 14,735, 36,347, and 33,885 DETs were detected in the MUSC vs. LIV, MUSC vs. TEST, and OVA vs. TEST comparisons, respectively, revealing the complexity of the TEST. Gene Set Enrichment Analysis (GSEA) showed that these DETs were mainly involved in the "spermatogenesis", "flagellated sperm motility", "spermatid development", "reproduction", "reproductive process", and "microtubule-based movement" KEGG pathways. Additional studies are necessary to further characterize the transcriptome in different cell types, developmental stages, and physiological conditions in bovines and to ascertain the functions of the novel transcripts.
Introduction
Cattle (Bos taurus) hold significant economic importance worldwide, and many studies have been conducted to improve productivity and reduce disease susceptibility [1]. The testis (TEST) is an essential and complex reproductive organ composed of various somatic and germ cells, which interact to facilitate the development of the TEST and functional spermatogenesis. Compared with somatic organs such as the liver (LIV) and muscles (MUSCs), the TEST carries out, besides basic cellular life processes, highly specific and complex physiological processes closely related to male fertility, including spermatogenesis. The complexity of cellular composition and function in the TEST is reflected in the diversity of the transcripts expressed in this tissue. These transcripts encode a wide range of proteins and non-coding RNAs, including those involved in spermatogenesis, steroidogenesis, and cell signaling. The complexity of the TEST transcriptome is further reflected in the fact that many transcripts are unique to the TEST [2,3]. These TEST-expressed genes (TEX) are likely to play essential roles in the specialized functions of the TEST [4,5], such as cell differentiation, germ cell development, and hormone-based regulation. The exploration of new transcripts and the alternative splicing of highly expressed genes with important physiological functions in the TEST [6-10] is of great significance for further understanding the complex regulation of male reproductive function.
Some of the most studied TEST-specific genes (TSGs) belong to the Y-linked gene family, which includes sex-determining region Y (SRY) and several other Y chromosome-specific genes. SRY plays a vital role in determining male sex by initiating the differentiation of the gonadal primordium into a TEST [6]. Other Y-linked genes, such as DAZ (deleted in azoospermia), are implicated in spermatogenesis [7]. Another significant family of TSGs comprises those encoding protamines, small Arg-rich proteins that replace histones during the later stages of spermatogenesis, allowing the extreme condensation of the DNA in sperm [8]; protamine levels are associated with male infertility. The TEST-specific protein Y-encoded (TSPY) is another TSG implicated in the manifestation of testicular germ cell tumors; TSPY is thought to promote cell proliferation and inhibit apoptosis, contributing to tumorigenesis [9]. TEST-specific serine kinases represent yet another group of TSGs; the proteins they encode are involved in spermatid differentiation, and their mutants are associated with male infertility [10].
Oxford Nanopore Technologies (ONT) sequencing has revolutionized transcriptomic studies by offering longer read lengths, real-time data generation, and direct RNA sequencing capabilities [11]. ONT sequencing works by detecting the changes in electrical conductivity generated as DNA or RNA molecules pass through a protein nanopore; the resulting signal is decoded into a sequence of nucleotides [12]. ONT sequencing has been used for transcriptomics-based investigation of various aspects of domesticated animals. For example, it has been used to study transcriptomic complexity in pigs (Sus scrofa) [13] and to identify the genes involved in follicle selection in chickens [14]. Until recently, transcriptome annotations, including those of the bovine genome, were primarily based on short-read RNA-seq data obtained using next-generation sequencing platforms. Only a few studies have focused on ONT-based sequencing of bovine transcriptomes, such as a recent study that used ONT to characterize the poll allele in Brahman cattle [15]. Another study demonstrated the power of long-read sequencing for transcriptome annotation by coupling ONT with large-scale multiplexing of 93 samples comprising 32 tissues collected from adult male and female Hereford cattle [14]. Of these, >7000 transcript isoforms were extremely tissue specific, and 61% of these were attributed to the TEST, which exhibited the most complex transcriptome of all the analyzed tissues [16].
However, a comprehensive annotation of transcript isoforms in Chinese cattle is lacking, and the potential of ONT sequencing in cattle transcriptomics is vast. Studying the TEST-related transcriptome is a vital area of research that can provide a better understanding of the biology of this organ in cattle. According to data from the United States Department of Agriculture (USDA), in 2020 a total of eight countries and regions worldwide had beef production exceeding one million tons: the United States produced 12.4 million tons of beef, Brazil 10.1 million tons, the European Union 7.8 million tons, and China 6.8 million tons, ranking fourth. China is also the only Asian country among the top eight producers, so beef production is crucial for China. Bashan cattle are mainly distributed in the mountainous area of China's Daba Mountains. They are highly suitable for breeding, with a tolerance of rough feeding and strong disease resistance, traits that were strictly selected for in local breeding [17]. Bashan cattle originally served as draft animals for plowing but are gradually transitioning to meat production, playing a significant role in increasing income for mountainous farmers. Due to their relatively small size and lower meat yield, the population of Bashan cattle has been declining. They exhibit non-seasonal reproduction. Through the analysis of multi-tissue full-length transcriptomes, we aim to gain a better understanding of the genetic potential of this breed and to obtain comprehensive transcriptome sequencing data from the LIV, MUSC, TEST, and ovary (OVA) tissues of Bashan cattle. We hypothesize that transcriptome complexity varies across the different tissues of Bashan cattle. Compared to the LIV (primarily involved in metabolism) and MUSCs (primarily relevant to meat production), the TEST (which contains both somatic and germ cells) is likely to exhibit numerous specifically expressed genes and transcripts. These differentially expressed genes and transcripts may be closely related to male reproductive processes, such as androgen metabolism and spermatogenesis. The prediction of highly expressed testicular transcripts and alternative splicing in this study may facilitate selection programs seeking to improve productivity traits, fertility, and environmental adaptation, factors of considerable scientific and economic interest for cattle. The findings of this study regarding TEXs could also help in developing new methods for treating male infertility and other testicular disorders.
Animals and Sample Collection
LIV, MUSC, OVA, and TEST tissue samples were collected from three male and three female Bashan cattle, all aged 20-30 months. For males, LIV, MUSC, and TEST samples were obtained from each animal (n = 3; 9 samples in total); for females, LIV, MUSC, and OVA samples were obtained from each animal (n = 3; 9 samples in total). Sampling took place in January 2023. The females were heifers, and the complete ovary was sampled for RNA extraction. The MUSC sample was taken from the longissimus dorsi muscle, while the TEST was cut along the vertical axis and a small piece of tissue from the middle was taken. All samples were collected from a local slaughterhouse. All experimental procedures were approved under the guidelines established by the Institutional Animal Care and Use Ethics Committee of Southwest University (IACUC-20240506-01). Samples were collected within 30 min post-euthanasia, flash-frozen in liquid nitrogen, and stored at −80 °C until processing.
RNA Extraction and cDNA Library Construction
RNA extraction and cDNA library construction adhered to the standard protocol provided by ONT and were supported by Biomarker Technologies Co., Ltd. (Beijing, China). Total RNA was extracted using TRIzol kits from Solarbio LIFE SCIENCE (Beijing, China). A Nanodrop 2000 (Waltham, MA, USA) was employed to determine nucleic acid concentrations and purity, while integrity was confirmed using an Agilent 2100 Bioanalyzer (Palo Alto, CA, USA) and a LabChip GX (Waltham, MA, USA). For library construction, poly(A) mRNA was first purified from total RNA using mRNA capture beads from Vazyme (Nanjing, China). Reverse transcription and double-stranded cDNA amplification were conducted sequentially with reverse transcription and amplification primers. The NEBNext FFPE DNA Repair Mix and the NEBNext Ultra II End Repair/dA-Tailing Module were then used to perform damage repair and end repair on the nucleic acid fragments. Finally, sequencing adapter ligation was carried out using the Ligation Sequencing Kit 1D (PM) (SQK-LSK109) from Nanopore (Oxford, UK).
Following a quality inspection of the library (concentration > 2 ng/µL), the Flow Cell Priming Mix was prepared using a sequencing chip preparation kit (EXP-FLP001 PRO.6) (Oxford, UK), and the library was prepared using the Ligation Sequencing Kit 1D (PM) (SQK-LSK109) (Oxford, UK). Finally, the MinKnow software (Version 2.2) was run on a PromethION 48 sequencer (Oxford, UK) to initiate sequencing on a PromethION Flow Cell (FLO-PRO002) (Oxford, UK), with a default run time of 72 h.
ONT-Based Long-Read Processing
Raw reads underwent initial filtering, retaining reads with a minimum average quality score of six and a minimum length of 350 bp. Ribosomal RNA sequences were discarded after mapping to the rRNA database. Full-length non-chimeric (FLNC) transcripts were identified by searching for primers at both ends of the reads. Clusters of FLNC transcripts were obtained after mapping to the reference genome, ARS-UCD1.2, using Minimap2 (Version 2.16) [18]. Consensus isoforms were derived by filtering within each cluster using Pinfish (Version 0.1.0). Mapped reads were further collapsed using the cDNA_Cupcake package (Version 5.80), requiring a minimum coverage of 85% and a minimum identity of 90%; redundant transcripts were collapsed without considering 5′ differences.
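The initial read filter (mean quality ≥ 6, length ≥ 350 bp) can be sketched as below. This takes the naive arithmetic mean of per-base Phred scores under Sanger +33 encoding (an assumption; ONT tools typically average the underlying error probabilities instead):

```python
def passes_filter(seq, qual_str, min_len=350, min_mean_q=6.0):
    # Keep a read only if it is long enough and the naive mean of its
    # per-base Phred scores (Sanger +33 encoding) meets the threshold.
    if len(seq) < min_len:
        return False
    mean_q = sum(ord(c) - 33 for c in qual_str) / len(qual_str)
    return mean_q >= min_mean_q
```

Applied over a FASTQ stream, this yields the set of reads that enter FLNC identification.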
Identification of Fusion Transcripts
First, full-length transcriptomes were obtained through sequencing and analyses. The analytic process mainly comprises three stages: full-length sequence recognition, polishing of full-length sequences into consensus sequences, and redundancy removal among the consensus sequences. The detailed steps were as follows: (1) filter out low-quality sequences (length below 200 bp or Q score below 6) and ribosomal RNA sequences from the raw reads, and identify full-length sequences based on the presence of primers at both ends; (2) polish the full-length sequences to obtain consensus sequences; (3) screen for fusion transcripts in each sample using the consensus sequences before redundancy removal. Candidate fusion transcripts were identified based on the following criteria: (1) mapping to ≤2 loci, (2) minimum coverage of 5% and ≥1 bp for each locus, (3) total coverage ≥ 95%, and (4) distance between loci ≥ 10 kb.
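The four screening criteria can be combined into a single predicate like the following sketch, which assumes exactly two aligned loci per candidate read (the minimum for a fusion) and a simple same-chromosome gap check between consecutive loci:

```python
def is_fusion_candidate(loci, read_len, min_locus_frac=0.05,
                        min_total_frac=0.95, min_distance=10_000):
    # loci: [(chrom, start, end, covered_bp)] alignments of one read,
    # assumed sorted along the read. A sketch of the four criteria.
    if len(loci) != 2:                                 # criterion 1
        return False
    for *_, cov in loci:
        if cov < max(min_locus_frac * read_len, 1):    # criterion 2
            return False
    if sum(cov for *_, cov in loci) < min_total_frac * read_len:  # criterion 3
        return False
    (c1, _s1, e1, _), (c2, s2, _e2, _) = loci
    if c1 == c2 and abs(s2 - e1) < min_distance:       # criterion 4
        return False
    return True
```

Reads mapping to two distant (or different-chromosome) loci that together cover nearly the whole read pass the filter; short or nearby secondary alignments are rejected.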
Prediction of Transcription Factors (TFs)
Animal TFs were identified using AnimalTFDB (Version 1.6).Putative protein-coding RNAs were filtered based on minimum length and exon number thresholds.
Gene Functional Annotation
First, sequence alignment against the reference genome was performed to identify known transcripts/genes. The remaining sequences were subjected to alternative splicing analyses to identify novel transcripts/genes, whose sequences were then compared against the NR [20], Swiss-Prot [21], GO [22], COG [23], KOG [24], Pfam [25], and KEGG [26] databases to obtain annotation information.
Quantification of Gene/Transcript Expression Levels and Differential Expression Analyses
In the current study, we utilized full-length sequencing transcriptomes aligned against the known transcriptome from the genome as the reference for sequence alignment and subsequent analyses. We employed Minimap2 to align full-length sequences with the known transcripts of the reference genome (Bos taurus, version ARS-UCD1.3; https://www.ncbi.nlm.nih.gov/datasets/genome/GCF_002263795.2/, accessed on 10 July 2023), obtaining transcript correspondence information. Statistics comparing the full-length reads and the known transcriptome are presented in Table S6. Full-length reads were mapped to the known transcriptome sequences, and reads with a mapping quality > 5 were used for quantification. To ensure that read counts accurately reflected transcript expression levels, the numbers of mapped reads in the samples were normalized, with counts per million (CPM) used as the indicator of transcript or gene expression level. The CPM was calculated with the following formula, where "reads mapped to transcript" is the number of reads aligned to a specific transcript and "total reads aligned in sample" is the total number of reads aligned to the known transcriptome from the genome: CPM = (reads mapped to transcript/total reads aligned in sample) × 1,000,000.
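The CPM formula translates directly into code (transcript names and counts below are hypothetical):

```python
def cpm(read_counts):
    # read_counts: {transcript: reads mapped to it in one sample}.
    # CPM = reads mapped to transcript / total aligned reads * 1e6.
    total = sum(read_counts.values())
    return {t: c / total * 1_000_000 for t, c in read_counts.items()}

values = cpm({"tx1": 10, "tx2": 90})
```

Because every sample is rescaled to a total of one million, CPM values are comparable across libraries of different sequencing depths.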
Differential expression analyses between two groups were performed using the DESeq2 R package (Version 1.6.3). DESeq2 [27] employs statistical routines based on the negative binomial distribution to determine differential expression in digital gene expression data. The resulting p values were adjusted using Benjamini and Hochberg's approach to control the false discovery rate. For detecting differentially expressed transcripts, fold changes ≥ 1.5 and p values < 0.01 were used as screening criteria; the fold change represents the ratio of expression levels between the two sample groups, and the p value serves as the significance indicator. Genes showing significant differences in expression levels under different conditions are called differentially expressed genes (DEGs). Different transcripts are mRNA variants transcribed from the same gene, and transcripts with significant differences in expression levels are called differentially expressed transcripts (DETs).
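The screening step (Benjamini-Hochberg adjustment followed by fold-change and p-value cutoffs) can be sketched as below. DESeq2 does considerably more (dispersion shrinkage, Wald tests), and treating a fold change below 1/1.5 as passing the cutoff for down-regulated transcripts is our assumption:

```python
def benjamini_hochberg(pvals):
    # Step-up BH adjustment; returns FDR-adjusted p values in input order.
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj, running_min = [0.0] * n, 1.0
    for k, i in enumerate(reversed(order)):      # largest p value first
        rank = n - k
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min
    return adj

def screen_dets(results, fc_cut=1.5, p_cut=0.01):
    # results: [(transcript, fold_change, p_value)]; a fold change below
    # 1/fc_cut counts as passing for down-regulation (our assumption).
    return [t for t, fc, p in results
            if (fc >= fc_cut or fc <= 1 / fc_cut) and p < p_cut]

adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.005])
hits = screen_dets([("t1", 2.0, 0.001), ("t2", 0.5, 0.001),
                    ("t3", 2.0, 0.05), ("t4", 1.1, 0.001)])
```

Here "t3" fails the p-value cutoff and "t4" fails the fold-change cutoff, so only "t1" and "t2" survive screening.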
Functional Enrichment Analyses
GO enrichment analyses of DEGs were performed using the GOseq R package (Version 1.24.0), which adjusts for gene length bias in DEGs using the Wallenius non-central hypergeometric distribution [28]. KEGG pathway enrichment analyses were conducted using the KOBAS (Version 3.0) [29] software, testing the statistical enrichment of DEGs in KEGG pathways.
Protein-Protein Interaction (PPI) Analyses
DEG sequences from the different tissues were aligned with the genome using BLASTx. Predicted protein-protein interactions (PPIs) among the proteins encoded by these DEGs were obtained from the STRING database (http://string-db.org/, accessed on 12 July 2023) and visualized using Cytoscape (Version 3.10.1).
TF-Gene Interaction Network Analyses
TF-gene interaction network analyses were performed using the Network Analyst tool and the JASPAR database. Briefly, we first uploaded the gene list to the Network Analyst website (https://www.networkanalyst.ca/, accessed on 15 July 2023), then selected the corresponding species and set the analytic category to TF-gene interactions, and finally selected the JASPAR database (https://jaspar.elixir.no/, accessed on 15 July 2023) for transcription factor prediction.
Alternative Splicing Structural Analyses of Bovine Tissues
The transcriptomes of 18 samples collected from four tissue types were obtained by ONT sequencing (Table S1). Valid sequencing data yielded 4,669,243-6,931,917 clean reads per sample, with an average read quality score > 7 and a length > 50 bp, from which 4,198,897-6,416,610 full-length sequences were identified (Table S2). The full-length sequences were filtered to obtain consensus isoforms, and after all consensus isoforms were compared to the reference genome, redundancy removal yielded a final set of 114,316 transcripts. Principal component analyses (Figure 1A) and correlation analyses of gene expression for the 18 samples (Figure 1B) indicated similar expression patterns within the same tissue. In addition, 43,325 novel transcripts were functionally annotated according to nine databases (Table S3).
Among the five alternative splicing types prevalent in the tissues, exon skipping accounted for the largest share (54.13-67.55%) and mutually exclusive exons the smallest (1.88-4.54%) across the 18 samples (Table S4).
Differential splicing events were present in various organs. Regardless of sex, differential splicing between reproductive and non-reproductive organs was significantly higher than that observed among non-reproductive organs, and the largest number of differential alternative splicing events was observed between the TEST and OVA samples (Figure 1C). This reflects the complexity and diversity of ovarian and testicular functions: both organs are responsible for crucial reproductive functions and gamete production, and their splicing variation mirrors the intricacy and diversity of the underlying gene functions. Moreover, compared to TESTs, male LIVs and MUSCs contained 2182 and 2286 differential splicing genes, respectively, with a common set of 1536 differential splicing genes; these splicing variations may be associated with specific functions of the TESTs (Figure 1D). In contrast, compared to OVAs, female LIVs and MUSCs contained 937 and 1264 differential splicing genes, respectively, with a common set of 429 differential splicing genes likely related to specific functions of the OVAs (Figure 1D). There were 269 differential splicing genes common to the TEST vs. MUSC/LIV and OVA vs. MUSC/LIV comparisons (Figure 1D). The splicing variation in these genes between TESTs and OVAs suggests potential key roles in the unique reproductive functions of the TEST and the OVA, possibly involving meiosis, a physiological process shared by sperm and oocyte generation. The biological processes implicated include RNA splicing and cytoplasmic translation, and the molecular functions include structural constituents of ribosomes, cadherin binding, and electron transfer activity, among others (Figure 1E). Additionally, fusion transcripts were predicted from the consistent transcripts obtained; the 18 samples yielded between 21 and 144 fusion transcripts each. SSR prediction was performed on the non-redundant transcripts of all samples. Six nucleotide repeat types (mono-, di-, tri-, tetra-, penta-, and hexa-nucleotide) plus compound SSRs were identified, yielding 65,661 SSRs in total (Table S5). The AnimalTFDB 3.0 database [30] was used to identify animal-specific transcription factors (TFs), and a total of 6044 TFs were predicted from the new transcripts identified in this study.
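An SSR scan of the kind described above (perfect mono- to hexa-nucleotide repeats) can be approximated with a regular-expression search. This is a minimal sketch: the minimum repeat counts below are illustrative defaults, not the thresholds used in this study, and the sequence is invented.

```python
import re

# Minimum repeat counts per motif length (assumed, MISA-style defaults;
# not taken from the paper).
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    """Return (start, motif, n_repeats) for perfect SSRs of unit size 1-6."""
    ssrs = []
    for unit in range(1, 7):
        # capture one unit, then require it to repeat MIN_REPEATS-1 more times
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, MIN_REPEATS[unit] - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            # skip motifs that are themselves repeats of a shorter unit (e.g. "AA")
            if any(motif == motif[:k] * (unit // k)
                   for k in range(1, unit) if unit % k == 0):
                continue
            ssrs.append((m.start(), motif, len(m.group(0)) // unit))
    return ssrs

seq = "GGC" + "AT" * 8 + "CCG" + "A" * 12 + "TTACG"
print(find_ssrs(seq))   # a mono-nucleotide run and a di-nucleotide repeat
```

Real SSR finders also handle compound SSRs (adjacent repeats of different motifs); this sketch reports only perfect single-motif repeats.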
APA Analyses of Bovine Tissues
Polyadenylation refers to the covalent addition of a polyadenylate tail to a messenger RNA (mRNA) molecule. In protein biosynthesis, it is part of the process that produces mature mRNA ready for translation. In eukaryotes, polyadenylation follows cleavage of the pre-mRNA at its 3′ end. The polyadenylate (polyA) tail protects mRNA from exonuclease attack and is important for transcription termination, mRNA export from the nucleus, and translation. Alternative polyadenylation (APA) of precursor mRNAs may contribute to transcriptomic diversity, genomic coding capacity, and gene regulatory mechanisms. We used the TAPIS pipeline to identify APA [19]. The results showed that the TESTs had the greatest number of genes with APA. The numbers of APA genes in the OVAs and TESTs were higher than those in the LIVs and MUSCs, while the numbers of APA genes in the LIVs and MUSCs were similar between females and males (Figure 2A,D). The top ten APA motifs are shown in Figure 2B. Signals in the pre-mRNA are recognized by core polyadenylation proteins; several such signals direct the cleavage and polyadenylation (hereinafter, "polyadenylation") machinery to a site. The sequence AATAAA (or a close variant), called the polyadenylation signal, is generally found 15-30 bases upstream of the cleavage site [31,32]. To explore the function of genes with APA in each tissue, we focused on the top 200 genes with the highest numbers of APA-aligned reads per tissue. Functional annotation showed that the genes in TESTs were mainly involved in "reproductive process", "metabolic process", and "cellular process", among others (Figure 2C). Genes with high APA in OVAs were mainly related to "response to stimulus", "metabolic process", "positive regulation of biological process", and "homeostatic process".
Furthermore, we investigated the APA genes among different tissues. In males, there were 2801 TEST-specific APA genes compared to the LIV and MUSC tissues (Figure 2E, left). In females, there were 1421 OVA-specific APA genes compared to the LIV and MUSC tissues (Figure 2E, middle). Comparing TESTs and OVAs, there were 493 common APA genes, while the TEST- and OVA-specific APA genes numbered 2308 and 928, respectively (Figure 2E, right). Pathway analyses revealed that the 2308 TEST-specific genes were significantly enriched in reproductive processes, with associated cellular components including the motile cilium, the acrosome vesicle, the "9 + 2" motile cilium, the sperm flagellum, and the cilium, which are related to sperm flagellum assembly and fertilization (Figure 2F). Overall, the TESTs showed the highest complexity in terms of APA.
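The signal placement described above (AATAAA roughly 15-30 bases upstream of the cleavage site) can be sketched as a simple window scan. This is an illustrative toy, not the TAPIS implementation; the variant list, window bounds, and sequence are assumptions.

```python
# Canonical polyadenylation signal plus one common variant (assumed list).
SIGNALS = ("AATAAA", "ATTAAA")

def find_polya_signal(seq, cleavage_pos, window=(15, 30)):
    """Return (offset_upstream, motif) of the closest signal, or None."""
    lo, hi = window
    hits = []
    for off in range(lo, hi + 1):
        start = cleavage_pos - off
        if start < 0:
            continue
        hexamer = seq[start:start + 6]
        if hexamer in SIGNALS:
            hits.append((off, hexamer))
    # min() picks the hit closest to the cleavage site
    return min(hits) if hits else None

seq = "G" * 40 + "AATAAA" + "C" * 14 + "T" * 10   # signal placed at index 40
print(find_polya_signal(seq, cleavage_pos=60))     # signal 20 nt upstream
```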
DETs and DEGs among the Various Tissues Indicated the Complexity of Testicular Expression
The full-length sequencing transcriptome was mapped to the known transcriptome of the genome and used for subsequent analyses (Table S6). The full-length sequences were compared with the known transcriptome of the genome using Minimap2 to obtain the corresponding transcriptomic information, and DESeq2 was then used to detect the DETs between each pair of groups [18]. The same tissues in females and males showed the fewest DETs (FDR ≤ 0.01; FC ≥ 2), whereas the highest numbers of DETs were always identified in the comparisons between each tissue type and the TESTs (Table S7), indicating the complexity of the TESTs. Moreover, 26,053 overlapping DETs were identified among the following comparisons: LIVs vs. TESTs, OVAs vs. TESTs, and MUSCs vs. TESTs (Figure 3A).
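The overlap logic above (transcripts called differential in all three tissue-vs-TEST comparisons) reduces to a threshold filter followed by a set intersection. The sketch below uses invented transcript IDs and statistics; it is not the DESeq2 output format.

```python
def call_dets(stats, max_fdr=0.01, min_fc=2.0):
    """Call a transcript differential when FDR <= max_fdr and |FC| >= min_fc."""
    return {tx for tx, (fc, fdr) in stats.items()
            if fdr <= max_fdr and abs(fc) >= min_fc}

# (fold change, FDR) per transcript for each tissue-vs-TEST comparison (invented)
liv  = {"tx1": (3.1, 0.001), "tx2": (-2.5, 0.005), "tx3": (1.2, 0.0001)}
ova  = {"tx1": (2.2, 0.008), "tx2": (-4.0, 0.0003), "tx4": (5.0, 0.02)}
musc = {"tx1": (-2.1, 0.002), "tx2": (-3.3, 0.004)}

# transcripts differential against TEST in every comparison (the Venn overlap)
overlap = call_dets(liv) & call_dets(ova) & call_dets(musc)
print(sorted(overlap))
```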
The molecular functions of the GO class annotation indicated that these transcripts were involved in "binding", "catalytic activity", "molecular function regulator", etc. (Figure S2A). The functional annotation of the KEGG class indicated that the DETs mainly participated in "Cellular Processes", "Environmental Information Processing", "Genetic Information Processing", "Human Diseases", "Metabolism", and "Organismal Systems" (Figure S2B). The metabolic pathways especially included "Fatty acid degradation", "Fructose and mannose metabolism", "Starch and sucrose metabolism", "Butanoate metabolism", and "Valine, leucine, and isoleucine degradation" (Figure 3B). In addition, within the "Organismal Systems" pathways, 569 DETs (3.11%) were involved in "Thermogenesis", which is related to a vital function of the TEST. The enriched GO terms of the DETs were mainly associated with "Structural molecule activity", "Transporter activity", and "Binding", especially "Structural constituent of ribosome" (Figure S3).
Additionally, 291 DEGs in total, 110 upregulated and 181 downregulated, were identified between the LIVs of the females and the males under the significance criteria of |fold change| > 1.5 and p < 0.01 (Table S8), while 121 DEGs, 67 upregulated and 54 downregulated, were identified between the MUSCs of the females and the males, indicating a certain level of differential gene expression in the same tissues between the sexes. Meanwhile, many DEGs (14,861) were detected in the comparison of the OVAs and TESTs. Additionally, 10,854 overlapping DEGs were identified in the comparisons of LIVs vs. TESTs, OVAs vs. TESTs, and MUSCs vs. TESTs (Figure 3C).
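The up/down split under the DEG criteria (|fold change| > 1.5 and p < 0.01) can be sketched as below; gene names, fold changes, and p-values are invented for illustration.

```python
# (fold change, p-value) per gene for a female-vs-male comparison (invented)
degs = {
    "geneA": (2.0, 0.001),
    "geneB": (-1.8, 0.004),
    "geneC": (1.2, 0.0005),   # fails |FC| > 1.5
    "geneD": (-3.0, 0.02),    # fails p < 0.01
}

up   = [g for g, (fc, p) in degs.items() if fc >  1.5 and p < 0.01]
down = [g for g, (fc, p) in degs.items() if fc < -1.5 and p < 0.01]
print(up, down)
```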
Annotation of these DEGs was also carried out based on the CGO, eggNOG, GO, KOG, and KEGG databases. Highly similar to the annotation results for the DETs, the DEG annotation indicated that most genes, 450 in number (11.95%), were involved in "General function prediction only" and that 422 genes (~11.21%) were involved in "Posttranslational modification, protein turnover, chaperones" according to the CGO database (Figure S4A); a majority of the genes, numbering 721 (~38.97%), were enriched in "Function unknown" according to the eggNOG database, and the large number of genes with unexplored functions again indicated the complexity of the TEST (Figure S4B); and 1978 genes (~17.68%) were enriched in "General function prediction" according to the KOG database (Figure S4C). The annotation of TFs revealed that most DEGs belonged to the C2H2, HB-other, and bHLH TF families (Figure 3D).
Molecular functional annotation of the GO class indicated that these DEGs were also involved in "binding" terms, etc. (Figure S5A). Functional annotation of the KEGG class revealed that the DEGs were mainly involved in "Herpes simplex virus 1 infection" within the "Human Diseases" pathways (Figure S5B). Meanwhile, the "PI3K-Akt signaling pathway" was the most enriched "Environmental Information Processing" pathway. Notably, the DEGs were also enriched in the "HIF-1 signaling pathway", revealing the high frequency of hypoxic responses in the TESTs (Figure 3D). The enriched GO terms of the DEGs were mainly related to "Structural molecule activity", "Transporter activity", and "Binding", especially "Catalytic activity" (Figure S6).
Based on the known functions of genes, pathway enrichment of the DETs and DEGs was conducted using Gene Set Enrichment Analysis (GSEA) to predict their potential roles in the TEST. The DETs and DEGs were mainly involved in GO terms and pathways including "spermatogenesis", "flagellated sperm motility", "spermatid development", "reproduction", "reproductive process", and "microtubule-based movement" (Figure 4).
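At its core, GSEA scores a gene set by a running sum over a ranked gene list. The toy below computes an unweighted enrichment score (increment on gene-set hits, decrement on misses, take the maximum deviation from zero); real GSEA, as applied to the DETs/DEGs here, additionally weights hits by the ranking statistic. Gene names and set membership are invented.

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted GSEA-style enrichment score (classic KS running sum)."""
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    step_hit, step_miss = 1.0 / n_hit, 1.0 / n_miss  # assumes 0 < n_hit < N
    running, best = 0.0, 0.0
    for h in hits:
        running += step_hit if h else -step_miss
        if abs(running) > abs(best):
            best = running
    return best

ranked = ["g1", "g2", "g3", "g4", "g5", "g6"]   # most to least differential
toy_set = {"g1", "g2", "g5"}                     # invented membership
print(round(enrichment_score(ranked, toy_set), 3))
```

A set whose members cluster near the top of the ranking yields a large positive score; one clustered at the bottom yields a negative score.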
Testicular-Specific High-Expression Genes/Transcripts and Their Functional Annotation
The preceding data revealed the complexity of testicular gene expression and alternative splicing. To further elucidate the functional regulation and molecular pathways involved in TEST-specific gene expression and transcripts, we screened for genes and transcripts highly expressed in TESTs and conducted pathway enrichment analyses (fold change > 1.5, p < 0.01). Figure 5A depicts a Venn diagram of gene expression across the four tissues, with 834, 431, 1665, and 6781 genes specifically expressed in the LIVs, MUSCs, OVAs, and TESTs, respectively. Meanwhile, in Figure 5B, the numbers of transcripts specifically expressed in the LIVs, MUSCs, OVAs, and TESTs are 2493, 1832, 4146, and 22,266, respectively. The TESTs evidently exhibited the highest numbers of specific genes and transcripts, further indicating the complexity of testicular function. GO pathway enrichment analyses revealed the 15 most significantly different pathways; the genes (Figure 5C-E) and transcripts (Figure 5F-H) showed remarkably high similarity, being significantly enriched in biological processes closely related to spermatogenesis, such as reproduction, gamete generation, microtubule-based movement, cilium assembly, ciliary movement, and sperm development. Moreover, the cellular components were enriched in sperm flagella, "9 + 2" motile cilia, axonemes, microtubule organizing centers, centrioles, and the microtubule cytoskeleton, indicating the importance of sperm development in testicular function. The molecular functions involved RNA binding, purine nucleotide binding, ATP binding, ATP-dependent activity, and microtubule binding, all closely associated with sperm energy metabolism. Additionally, the results of the KEGG and Reactome pathway enrichment analyses are presented in Figure S7. The identified TEST-specific genes included members of the CEP, CFAP, CCDC, DNAH, IFT, and TEKT families, among others. Some of these genes have been proven to play key roles in mouse and human spermatogenesis through functional experiments (AKAP4, CCDC38, CFAP58, SPACA9, etc.), but there are still many genes whose functions have not been fully validated. Our study provides an expression profile of TEST-specific genes in cattle, and the study of these uncharacterized genes is of great significance for deepening our understanding of male reproduction.
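The tissue-specific sets behind a Venn diagram like Figure 5A can be derived by subtracting the union of the other tissues' expressed genes. The gene IDs and expression calls below are invented placeholders.

```python
# Sets of genes called "expressed" per tissue (invented toy data)
expressed = {
    "LIV":  {"g1", "g2", "g3"},
    "MUSC": {"g2", "g4"},
    "OVA":  {"g2", "g5"},
    "TEST": {"g2", "g6", "g7"},
}

def specific(tissue):
    """Genes expressed in `tissue` but in none of the other tissues."""
    others = set().union(*(s for t, s in expressed.items() if t != tissue))
    return expressed[tissue] - others

print(sorted(specific("TEST")))   # expressed only in TEST
```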
Expression of TEST-Specific Genes
To explore TEST-specific expressed genes, we next focused on the transcripts expressed only in TESTs (CPM > 0 in all 3 TEST samples, and CPM = 0 in the other 15 samples) compared with the other three tissues (LIV, MUSC, and OVA). As a result, 17,356 transcripts were detected, 15,577 (89.75%) of which were novel, corresponding to 8935 genes (5502 of them novel).
In addition, among these genes, we focused on the TEX gene family. ONT sequencing detected the expression of 32 TEXs alongside 224 transcripts, 123 (54.91%) of which were novel. Of these genes, TEX51 had the largest number of transcripts (33); the expression patterns of these transcripts (Figure 6A) and genes (Figure 6B) are shown. TSSK1B, TSSK6, DAZL, TSPYL6, TSSK4, and POLL, which were highly expressed in the TESTs, were differentially expressed between the TESTs and the three other tissues (Figure 6C). TSSK1B was significantly involved in the two GO terms "reproduction" and "reproductive process", and TSSK6 in seven GO terms, including "cell", "cell part", "reproduction", "reproductive process", "response to stimulus", "multi-organism process", and "biological regulation". DAZL was involved in the terms "reproductive process", "developmental process", "growth", and "multi-organism process". TSPYL6 was related to the terms "organelle" and "cell part". TSSK4 participated in the term "cellular process". POLL was involved in the GO terms "organelle" and "cellular process".
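The TEST-only criterion above (CPM > 0 in all three TEST samples and CPM = 0 in the remaining fifteen) is a straightforward per-row filter over the CPM matrix. The tiny matrix below is invented; columns 0-2 stand in for the TEST samples.

```python
# Per-transcript CPM values: first 3 columns = TEST samples, next 15 = the
# LIV/MUSC/OVA samples (all values invented for illustration).
cpm = {
    "tx_a": [5.2, 3.1, 8.0] + [0.0] * 15,            # TEST-only
    "tx_b": [5.2, 0.0, 8.0] + [0.0] * 15,            # absent in one TEST sample
    "tx_c": [5.2, 3.1, 8.0] + [0.0] * 14 + [0.4],    # leaks into another tissue
}

test_only = [tx for tx, vals in cpm.items()
             if all(v > 0 for v in vals[:3]) and all(v == 0 for v in vals[3:])]
print(test_only)
```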
Identification of TEST Specifically Expressed Transcription Factors
Additionally, according to previous studies which collectively identified 1253 human transcription factors [33,34], we identified 1023 of them by ONT in our study. To further explore TEST-specific transcription factors, we intersected the transcription factors in the TESTs with those in the LIVs, MUSCs, and OVAs and obtained a total of 146 transcription factors considered highly reliable with respect to TEST-specific expression (Figure 7A). The pathway enrichment analyses for these transcription factors are shown in Figure S8. We further illustrated the interactions among these transcription factors; as shown in Figure S9, they had complex interactions with each other. Considering the high expression of these proteins in the TEST, their mutual interactions are crucial for the regulation of their respective functions. Moreover, the proteins involved in these interactions might form protein complexes, playing pivotal roles in spermatogenesis and male reproductive regulation; this aspect is poised to become a key focus for future research on such proteins. Furthermore, to delve deeper into the transcription factors involved in regulating TEX gene expression in the TEST, we employed the NetworkAnalyst tool "https://www.networkanalyst.ca/ (accessed on 15 July 2023)" and the JASPAR database to predict the transcription factors (Homo sapiens) for these 32 genes. We then constructed a TF-gene interaction network, revealing that 82 transcription factors were implicated in the transcriptional regulation of TEX genes (Figure 7B). Following this, we intersected these 146 transcription factors with the 82 predicted ones, resulting in 4 transcription factors highly expressed in cattle TESTs: CREB1, RUNX2, KLF5, and SOX5 (Figure 7C).

Before the commencement of this study, we hypothesized that, compared to the LIV and MUSC, the TEST would exhibit numerous specifically expressed genes and transcripts closely related to male reproductive processes, such as androgen metabolism and spermatogenesis. Furthermore, we speculated that these differentially expressed genes and transcripts might be closely associated with TEST-specific transcription factors and APA events. Excitingly, our findings strongly supported this hypothesis. Over a billion cattle are raised for meat and dairy production worldwide. Although selection programs have benefited significantly from genomics tools in the past decade, a comprehensive characterization of the bovine transcriptome is essential to improve our understanding of the biological processes that underpin complex traits such as productivity, efficiency, and disease resistance, especially through analyzing the transcriptome using ONT sequencing [35-39]. TESTs are the male reproductive organs and play a critical role in sexual reproduction; testicular growth and development are complex and strictly regulated processes [40]. Tissue-specific transcripts are fundamental for understanding the basis of the differences between tissues. They could serve as valuable biomarkers for exploring economically important traits, as they are often implicated in tissue-specific functions, development, and disease resistance [41-43].
This study comprehensively elucidated alternative splicing in bovine LIV, MUSC, TEST, and OVA tissues, providing valuable insights into the molecular complexity of bovine tissues. The prevalence of alternative exon skipping and differential splicing events across various organs underscores the importance of alternative splicing in regulating tissue-specific gene expression and function. The observed differences in splicing patterns between reproductive and non-reproductive organs highlight the role of alternative splicing in shaping reproductive processes. The higher number of differential splicing genes in the TESTs compared to the other tissues suggests the significance of alternative splicing in regulating male reproductive functions, such as androgen metabolism and spermatogenesis. Conversely, differences between male and female tissues, particularly those observed in the OVA, suggest the presence of sex-specific splicing patterns crucial for female reproductive processes.
In addition, we found that TESTs had the greatest number of genes with APA events, further confirming the transcriptional complexity of testicular tissue. A previous study found that TEST-expressed APA displayed a lower incidence of AAUAAA, contained unique upstream and downstream sequence elements, and had shorter 3′ UTRs [44,45]. APA was also affected by alternative splicing in male germ cells, especially during the transition through meiosis [46,47]. Cytoplasmic chromatoid bodies are centers of multiple RNA metabolic processes in male germ cells [48]. In addition, greater numbers of DETs and DEGs were identified between the TESTs and the other three tissue types, all indicating the complexity of this tissue. The complexity of TEST functions is underpinned by a unique and specific gene expression profile, as indicated by the TEX gene family members, all of which were transcribed in multiple isoforms and highly expressed in the TESTs. The complexity of TSGs is regulated by a multitude of factors and mechanisms, as indicated by their annotation with numerous CGO, eggNOG, GO, KOG, and KEGG terms or pathways.
Exploring Testicular-Specific Expression Genes by Comparing the Gene Expression of Four Tissues in Bovines
Among the identified transcripts, 52.20% were novel. A recent study revealed that 61% of transcript isoforms were extremely TEST-specific and that TESTs exhibited the most complex transcriptome compared with the other tissues examined [16]. These novel transcripts might regulate spermatogenesis, as adult cattle were used in this study. The regulation of spermatogenesis involves the expression of numerous genes in a precise cell- and stage-specific program [49]. More than 2300 genes are predominantly expressed in the mouse TEST, hundreds of which might facilitate the normal functioning of the male reproductive system or contribute to male infertility [50,51].
Our study similarly unveils the complexity and significance of the TEST, with a significantly higher abundance of tissue-specific genes and transcripts in the TEST compared to the other tissue types, highlighting its pivotal role in reproductive processes. The enrichment of TEST-specific genes and transcripts in biological processes related to spermatogenesis, cellular scaffolding, and energy metabolism underscores the importance of the TEST in normal reproductive function. Interestingly, the study identified numerous novel genes and transcripts, some of which may play crucial roles in regulating testicular function and spermatogenesis. The discovery of these new genes offers new avenues for further research into male reproduction and may help address challenges in bovine herd reproduction. While many TEST-specific genes have been functionally validated in murine and human spermatogenesis, such studies are relatively scarce in livestock. Furthermore, many genes and transcripts remain whose functions have yet to be fully validated. The identification of novel TEST-specific genes and transcripts provides an opportunity for further functional studies, deepening our understanding of testicular function in cattle.
Identification of TEX Gene Transcripts and Prediction of Their Testicular-Specific Transcription Factors
Furthermore, we focused on certain highly expressed TSGs, such as the TEXs. The term "TEX" was coined by Wang et al. (2001) after they used cDNA suppression subtractive hybridization to identify new transcripts detected only in purified mouse spermatogonia [52]. However, little is known regarding their function [53]. TEX orthologs have also been found in other vertebrates (mammals, birds, and reptiles), invertebrates, and yeast [4,54]. To date, 69 TEXs (61 human and 61 mouse) have been identified in various species and tissues [4]. Herein, the expression of each TEX gene in cattle was also checked, with 224 TEX transcripts and 32 DEGs being detected. Additionally, 33 transcripts were identified for TEX51, the highest transcript count recorded for any TEX, indicating the potentially critical function of this gene in male cattle reproduction; hence, in-depth functional studies are warranted. A study of the genetic association between obesity and TEX51 [55,56], a neuroendocrine disorder-related candidate gene, suggested a lack of any correlation between TEX51 and maternal obesity. The expression of TEX14 was also detected in other tissues. The TEST-enriched genes were ascertained to have novel functions and to be indispensable for male reproduction using an in vivo approach [57]. For instance, TEX14 was essential for the formation of intercellular bridges and fertility in male mice [58] and might function similarly in cattle. In addition, the potential TEX transcription factors we predicted that are highly expressed in the TEST, including CREB1, RUNX2, KLF5, and SOX5, will be of great value for verifying their key regulatory roles and mechanisms in testicular spermatogenesis and male reproductive function in future functional experiments.
TSSK1B, TSSK6, DAZL, TSPYL6, TSSK4, and POLL were highly expressed in the TEST and were determined to be involved in various pathways, including "reproduction" and "reproductive process". For example, TSSK1B plays a pivotal role during spermatogenesis and is associated with male fertility; it is specifically expressed in yak TESTs and highly expressed upon sexual maturity [59]. The regulatory mechanisms of TSSK1B in male yaks require further study. Future research could focus on further improving the accuracy of the sequencing technology, developing more sophisticated bioinformatics tools for analyzing long-read data, and expanding their applicability to functional genomics. In addition, the functions of the novel transcripts are largely unexplored; hence, further studies are needed, especially in the TEST. Moreover, we identified the TEST-specific highly expressed transcription factors CREB1, RUNX2, KLF5, and SOX5 as candidate transcription factors for the testis-expressed (TEX) genes. This identification holds significant reference value for further research into TEST-specific transcriptional regulation aiming to elucidate the unique functions of the TEST.
Conclusions
Our findings revealed that the TEST exhibited the highest number of alternative splicing events, indicating a significant complexity in gene expression that is closely associated with its diverse functions. The identification of novel genes and transcripts in the TEST is crucial for future functional studies aimed at elucidating the molecular mechanisms of spermatogenesis. Our analysis of the TEX gene family confirmed their specific expression in the TEST and predicted TEST-specific transcription factors. This study underscores the complexity of gene expression and alternative splicing in cattle TESTs. Future research into the molecular regulatory mechanisms of TEST-specific gene expression and alternative splicing could provide valuable insights for improving cattle fertility, ultimately promoting cattle reproduction, safeguarding reproductive health, and increasing cattle productivity.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani14111646/s1. Figure S1: Functional annotation of differentially expressed transcripts (DETs) between each tissue and the testis in cattle using the CGO, eggNOG, and KOG databases; Table S1: Clean data of the 18 samples; Table S2: Full-length sequence data statistics; Table S3: Functional annotation of novel transcripts; Table S4: Statistics of alternative splicing events; Table S5: SSR statistics; Table S6: Comparison between full-length reads and known transcripts of the reference genome; Table S7: Numbers of differentially expressed transcripts (FDR = 0.01, FC = 1.5); Table S8: Numbers of differentially expressed mRNAs.
Author Contributions: Conceptualization, methodology, writing-review and editing, and funding acquisition, G.Z.; software, validation, writing-original draft preparation, and funding acquisition, X.L.; formal analyses, J.W.; investigation, M.L.; resources, F.Z. X.L. mainly analyzed the data and wrote the manuscript; J.W. assisted in data processing and analyses; G.Z. and F.Z. provided the research ideas and designed the entire study. All authors have read and agreed to the published version of the manuscript.
Figure 1. Alternative splicing events identified by ONT. (A) PCA analysis of the 18 samples. (B) Spearman cluster analysis of transcript expression in the 18 samples. (C) The differential alternative splicing events for each tissue type in the cattle. (D) A Venn diagram showing genes with alternative splicing events for each tissue type in the cattle. (E) The differential alternative splicing genes in TEST and OVA samples were enriched in biological processes, including RNA splicing and cytoplasmic translation, and in the molecular functions of structural constituents of ribosomes, cadherin binding, and electron transfer activity, among others. MM: male MUSC; ML: male LIV; FM: female MUSC; FL: female LIV; O: OVA; T: TEST.
Figure 2. Schematic features of APA. (A) Number of genes with APA. (B) Top ten motifs of APA. (C) Functional analyses of the top 200 genes with the highest numbers of APA events. (D) The numbers of APA genes in the LIVs and MUSCs were similar between the females and males. (E) Differential APA gene analyses for each tissue in cattle. LM, male liver; MM, male muscle; LF, female liver; MF, female muscle; OV, ovary; T, testis. (F) Pathway analyses revealed that the 2308 TEST-specific genes were significantly enriched in male reproductive processes, with associated cellular components including the motile cilium, acrosome vesicle, "9 + 2" motile cilium, sperm flagellum, and cilium, which are related to sperm flagellum assembly and fertilization function.
Figure 3. Analyses of differentially expressed transcripts (DETs) and genes (DEGs) for each tissue type vs. TESTs in cattle. (A) Venn diagram indicating the numbers of DETs identified through the comparisons of LIVs and TESTs, MUSCs and TESTs, and OVAs and TESTs in cattle, respectively. (B) KEGG analysis diagram of the DETs that overlapped in the three comparisons. (C) Venn diagram indicating the numbers of DEGs identified through the comparisons of LIVs and TESTs, MUSCs and TESTs, and OVAs and TESTs in cattle. (D) KEGG analysis diagram of the overlapping DEGs identified in the three comparisons.
Figure 4. Gene Set Enrichment Analysis (GSEA) enrichment plots. (A) GSEA enrichment plots of GO terms and KEGG pathways comparing the DETs in the TEST vs. the other three tissues of the LIV, MUSC, and OVA. (B) GSEA enrichment plots of GO terms and KEGG pathways comparing the DEGs in the TEST vs. the other three tissues of the LIV, MUSC, and OVA.
Figure 5. Pathway enrichment analyses of TEST specifically expressed genes/transcripts. (A) Venn diagram of genes expressed in different tissues. (B) Venn diagram of transcripts expressed in different tissues. (C-E) Biological process (C), cellular component (D), and molecular function (E) enrichment of TEST specifically expressed genes. (F-H) Biological process (F), cellular component (G), and molecular function (H) enrichment of TEST specifically expressed transcripts.
Figure 7. Identification of testicular-specific transcription factors. (A) Compared to the LIVs, MUSCs, and OVAs, 146 TEST specifically expressed transcription factors were predicted by ONT. (B) Predicted transcription factors of TEXs using the Network Analyst tool; 82 potential transcriptional regulatory factors were identified. (C) Transcription factors of CREB1, RUNX2, KLF5, and SOX5 that might be involved in the regulation of TEXs were identified.

4. Discussion

4.1. TESTs Have the Greatest Number of Differential Alternative Splicing Events and Genes with APA Events

Figure S2: Analysis of differentially expressed transcripts (DETs) between each tissue and testis in cattle; Figure S3: The GO analysis diagram of the DETs between each tissue vs. testis in cattle and molecular functions are indicated; Figure S4: Function annotation of differentially expressed genes (DEGs) between each tissue and testis in cattle using CGO, eggNOG and KOG databases; Figure S5: Analysis of differentially expressed genes (DEGs) between each tissue and testis in cattle; Figure S6: The GO analysis diagram of the DEGs between each tissue vs. testis in cattle and molecular functions are indicated; Figure S7: Pathway enrichment analysis of testis specifically expressed genes/transcripts; Figure S8: Pathway enrichment analysis of transcription factors specifically expressed in bovine testes; Figure S9: Protein interaction diagram of bovine transcription factors. Table
Funding:
This study was supported financially by the Technological Innovation and Application Development Project of Chongqing (grant no. cstc2021jscx-gksbX0012), the National Training Program of Innovation and Entrepreneurship for Undergraduates (grant no. 202310635023), and the Chongqing Modern Agricultural Industry Technology System (CQMAITS202313).

Institutional Review Board Statement: All animals were handled and treated in accordance with the guidelines established by the Institutional Animal Care and Use Ethics Committee of Southwest University. This study adhered to the Guidelines for Experimental Animals set forth by the Ministry of Science and Technology of China.

Informed Consent Statement: Written informed consent has been obtained from the owner of the animals involved in this study.
Introduction
In the atmosphere, the water vapour content is highly variable in space and time. Conventional methods like the radiosonde balloon, which carries weather sensors for measuring air temperature, pressure and relative humidity, reach from the earth's surface to 20-30 km in the atmosphere, resulting in measured vertical profiles. The path of a radiosonde is affected by the wind, which often varies with height; it is measured only twice a day and is expensive. A space-based geodetic technique like GPS (Bevis et al., 1992) can obtain data continuously with high temporal resolution. Radio signals transmitted from different sources in space are refracted and delayed while propagating through the atmosphere. The refraction effects in the upper atmosphere, the dispersive ionosphere, are frequency dependent and can be removed by using a linear combination of dual frequency data. This is, however, not the case for the troposphere, which is a non-dispersive region. A number of studies on the retrieval of IPWV using ground-based GPS observations at the same level of accuracy as radiosondes have been presented (Rocken et al., 1995; Duan et al., 1996 and Tregoning et al., 1998). A comparative study of GPS-derived IPWV data with MODIS, NCEP and radiosonde data was done by Giri et al. (2007), and validation of GPS-retrieved IPWV with radiosonde data for the winter season during 2003 using different mean temperature predictors by Giri et al. (2006) over the Indian region. The processing is done in two ways: one is near real time, in which we use the rapid or broadcast orbit available daily at 2300 UTC from SOPAC, USA; the second is post-processing, in which precise orbits are available after approximately 10-12 days from the IGS. The accuracy of the GPS satellite orbits is critical for GPS IPWV estimates (Dodson and Baker 1998). The current accuracy level of precise GPS orbits from the IGS is sufficient to provide IPWV estimates on the order of 1 mm, but these orbits are available only after approximately 10-12 days.
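The dual-frequency removal of the dispersive ionospheric delay mentioned above is conventionally done with the ionosphere-free linear combination of the L1 and L2 observables. A minimal sketch (the GPS carrier frequencies are standard values; the observable values in any usage are illustrative, not from this study):

```python
# Ionosphere-free (L3) linear combination of dual-frequency GPS observables.
# The first-order ionospheric delay scales as 1/f^2, so combining the two
# frequencies with these coefficients cancels it; the non-dispersive
# tropospheric delay remains in the combined observable.
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def ionosphere_free(obs_l1: float, obs_l2: float) -> float:
    """Ionosphere-free combination of two range observables (same units in/out)."""
    g1, g2 = F1 ** 2, F2 ** 2
    return (g1 * obs_l1 - g2 * obs_l2) / (g1 - g2)
```

Because the ionospheric term on L2 is (F1/F2)^2 times the term on L1, the combination returns the ionosphere-free range while leaving the common (tropospheric plus geometric) part untouched.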
SOPAC rapid or broadcast near real time orbits approach the same level of accuracy, but they are limited to 0.1 to 1.0 m compared to the IGS final orbits (Kouba and Mireault, 1998). Their accuracy decreases with time because of unpredictable non-conservative forces, reaching an average of 0.4 m after 15 to 39 hours (Rocken et al., 1997). In addition, when satellites are in maneuver, the accuracy of their predicted orbits degrades to between a few and a hundred meters; when they are in eclipse, it degrades by 1-2 meters. Marong et al. (2000) implemented a new strategy for predicting the orbits with minimum degradation of the ZTD estimates by estimating three Keplerian parameters, i.e., semi-major axis, inclination and argument of perigee. They showed that this implementation yields negligible bias and RMSE less than 6 mm. In this paper the authors observed that the bias for ZTD is of the order of less than 1 mm in most cases and the RMSE is less than 6 mm. Similarly, the bias observed in the case of derived IPWV is almost negligible and the RMSE is less than 1 mm for the current operational system at the India Meteorological Department, Lodi Road, New Delhi.
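The ZWD-to-IPWV conversion underlying these estimates follows Bevis et al. (1992): PWV = Pi * ZWD, where the dimensionless factor Pi depends on the weighted mean temperature Tm of the atmosphere. A sketch using the commonly published Bevis constants (these standard values are an assumption; the paper does not state which constants it used):

```python
# Convert zenith wet delay (ZWD) to integrated precipitable water vapour (IPWV),
# following Bevis et al. (1992): PWV = Pi * ZWD, Pi ~ 0.15.
RHO_W = 1000.0     # density of liquid water, kg/m^3
R_V = 461.5        # specific gas constant of water vapour, J/(kg K)
K2_PRIME = 0.221   # refractivity constant k2' in K/Pa (= 22.1 K/hPa)
K3 = 3.739e3       # refractivity constant k3 in K^2/Pa (= 3.739e5 K^2/hPa)

def mean_temperature(ts_kelvin: float) -> float:
    """Bevis linear model: weighted mean temperature from surface temperature."""
    return 70.2 + 0.72 * ts_kelvin

def ipwv_mm(zwd_mm: float, tm_kelvin: float) -> float:
    """IPWV in mm from ZWD in mm and weighted mean temperature Tm in K."""
    pi_factor = 1.0 / (1e-6 * RHO_W * R_V * (K3 / tm_kelvin + K2_PRIME))
    return pi_factor * zwd_mm
```

For a typical surface temperature of 288 K, Tm is about 278 K and Pi comes out near 0.158, so a 150 mm wet delay maps to roughly 24 mm of precipitable water.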
Data and methodology
The GPS observation data in Receiver Independent Exchange (RINEX) format for five stations, namely New Delhi, Mumbai, Chennai, Kolkata and Guwahati, have been processed using the GAMIT 10.3.2.1 processing software (King and Bock 1999). The near real time GPS data and radiosonde data for the year 2007 have been taken from the India Meteorological Department, Lodi Road, New Delhi. The data are processed in two modes, near real time and post-processing, using the rapid and precise satellite orbit files respectively, for the period Julian day 260 to 263 (16-19 October, 2008).
Results and discussion
The retrieval of ZTD and IPWV from the ground-based receivers using the precise IGS satellite orbit files and the rapid or broadcast near real time orbit files has been computed for the five stations. The difference of the two, for both ZTD and IPWV, is shown graphically in Figs. (1-10) for Julian day 260 (16 October, 2008). The RMSE and bias in mm of all five stations for the four days are tabulated; the biases were -0.12, 0.47, 0.35, -0.09 and -0.21 respectively. During the processing, some abnormal values of the data, which gave abnormal peaks, have been omitted. These spikes occur systematically at the end of the hour in the sliding-window GAMIT processing strategy. This may be due to data gaps at the end of the window or cycle slips in the signal during periods of high tropospheric variability. In post-processing mode the same data sets are used as in near real time mode, so that the same variance is communicated to the final solution in both cases. Near real time processing is essential for operational forecasting and for Numerical Weather Prediction (NWP) model ZTD or PWV data assimilation, so knowledge of the satellite orbit accuracy is important. This accuracy is further improved when hourly-predicted GPS orbits become available (Fang et al., 1998). The comparison of derived IPWV from GPS and radiosonde (RS) for the year 2007 is shown graphically, along with the statistics indicated on the graphs. A season-wise comparison is also given in Table 2. The RMSE and bias are larger in the monsoon season for each station, possibly due to the variability of moisture during the season. For Guwahati and Kolkata, pre-monsoon data were not available due to the delayed installation of the GPS receivers. Another possible source of error is the site location, because the GPS and radiosonde observations are not at the same place and the local environment can modify the moisture content.
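The bias and RMSE statistics quoted here are simple difference statistics between the near real time (rapid/broadcast) and post-processed (precise) series. A sketch, with a hypothetical pair of ZTD series in mm (the values are illustrative, not from the paper's tables):

```python
import math

def bias_and_rmse(series_a, series_b):
    """Bias (mean of a - b) and RMSE between two equal-length series."""
    diffs = [a - b for a, b in zip(series_a, series_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmse

# Hypothetical ZTD values in mm: rapid-orbit vs precise-orbit solutions.
ztd_rapid = [2401.0, 2403.5, 2399.8, 2402.2]
ztd_precise = [2400.5, 2403.0, 2400.3, 2401.7]
```

With these illustrative values, `bias_and_rmse(ztd_rapid, ztd_precise)` gives a bias of 0.25 mm and an RMSE of 0.5 mm, comfortably within the sub-millimetre bias and sub-6-mm RMSE bounds reported in the paper.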
Conclusions
(i) The RMSE values in ZTD and IPWV estimation using precise and broadcast or rapid orbits are less than 6 mm and 1 mm respectively.
(ii) Similarly, the observed biases between the precise and broadcast or rapid orbit solutions are less than ±1 mm in most cases for ZTD estimation and almost negligible for IPWV estimates.
(iii) This study is useful in deciding a quality index for orbits, to reject bad satellites or satellites in manoeuvre and eclipse conditions. Later it can be applied operationally on a near real time basis, so that the data become a usable source for NWP models.
(iv) The GPS-derived IPWV matches the RS observations fairly well (~6.7 mm) for all the stations, with more variability in the monsoon season.
Unmanned aerial vehicles (UAVs) play an important role in numerous technical and scientific fields, especially in wilderness rescue. This paper carries out work on real-time UAV human detection and recognition of body and hand rescue gestures. We use body-featuring solutions to establish biometric communications, like yolo3-tiny for human detection. When the presence of a person is detected, the system will enter the gesture recognition phase, where the user and the drone can communicate briefly and effectively, avoiding the drawbacks of speech communication. A data-set of ten body rescue gestures (i.e., Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction, and PhoneCall) has been created by a UAV on-board camera. The two most important gestures are the novel dynamic Attention and Cancel which represent the set and reset functions respectively. When the rescue gesture of the human body is recognized as Attention, the drone will gradually approach the user with a larger resolution for hand gesture recognition. The system achieves 99.80% accuracy on testing data in body gesture data-set and 94.71% accuracy on testing data in hand gesture data-set by using the deep learning method. Experiments conducted on real-time UAV cameras confirm our solution can achieve our expected UAV rescue purpose.
Introduction
With the development of science and technology, especially computer vision technology, the application of unmanned aerial vehicles (UAVs) in various fields is becoming more and more widespread, such as photogrammetry [1], agriculture [2], forestry [3], remote sensing [4], monitoring [5], and search and rescue [6,7]. Drones are more mobile and versatile, and therefore more efficient, than surveillance cameras with fixed angles, proportions, and views. With these advantages, combined with state-of-the-art computer vision technology, drones are finding important applications in a wide range of fields. Increasingly, researchers have made numerous significant research outcomes in these two intersecting areas. For example, vision-based methods for UAV navigation [8], UAV-based computer vision for airboat navigation in paddy fields [9], deep learning techniques for estimation of the yield and size of citrus fruits using a UAV [10], drone pedestrian detection [11], and hand gesture recognition for UAV control [12]. It is also essential to apply the latest computer vision technology to the field of drone wilderness rescue. The layered search and rescue (LSAR) algorithm was carried out for multi-UAV search and rescue missions [13]. An embedded system was implemented with the capability of detecting open-water swimmers by deep learning techniques [14]. The detection and monitoring of forest fires have been achieved using unmanned aerial vehicles to reduce the number of false alarms of forest fires [15]. The use of a drone with an on-board voice
• A limited and well-oriented dictionary of gestures can force humans to communicate with the UAV briefly during the rescue, so gesture recognition is a good way to avoid some communication drawbacks for UAV rescue.
• A dataset of ten basic body rescue gestures (i.e., Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction, and PhoneCall) has been created by a UAV's camera, which is used to describe some of the body gestures of humans in a wilderness environment.
• The two most important dynamic gestures are the novel dynamic Attention and Cancel, which represent the set and reset functions respectively, well separated from the static gesture patterns.
• The combination of whole-body gesture recognition at a distance and local hand gesture recognition at close range makes drone rescue more comprehensive and effective. At the same time, the creation and application of these datasets provide the basis for future research.
In the subsequent sections, Section 2 presents technical background and related work, including machine specifications, UAV connectivity, and gesture data collection strategies. In Section 3, the proposed methodology is presented, followed by human detection, pose extraction, human tracking and counting, body rescue gesture recognition, and proximity hand gesture recognition, along with a description of the relevant models, training and system information. Finally, Section 4 discusses the training results of the models and the experimental results. Conclusions and future work are drawn in Section 5.

Machine Specification and UAV Connection

Figure 1 shows that the practical implementation of this work was done on an on-board UAV with a Jetson Xavier GPU. A stand-alone on-board system is crucial, since in the wilderness there is no network to rely on. From Sabir Hossain's experiments [33] on different GPU systems, it was evident that the Jetson AGX Xavier was powerful enough to work as a replacement for a ground-station GPU system; this is why the Jetson Xavier GPU was chosen. For the experimental session in Chapter 4 of this paper, to ensure reliable testing conditions, we could not go to the field to fly the UAV for external reasons, so we simulated the field environment in the lab and prepared for UAV motion control. The system for the testing part was changed, as shown in Figure 2: the testing was done on a 3DR SOLO drone based on the Raspberry Pi system [34], which relies on a desktop ground station with a GTX Titan GPU. The drone communicates with the computer through a local network. The comparison of the two GPUs is given in Table 1 [35]. In Chapter 4 we also tested the real running time of the program on the proposed architecture.

For the testing part in the lab, the ground station computer is equipped with an NVIDIA GeForce GTX Titan GPU and an Intel(R) Core(TM) i7-5930K CPU, which is also used for model training. The UAV is a Raspberry Pi drone, a single-board computer with a camera module and a 64-bit quad-core ARMv8 CPU. The camera is a 1080P 5MP 160° fish-eye surveillance camera module for Raspberry Pi with IR night vision. Table 2 presents the specifications of the 3DR SOLO drone and the camera.
The resolution of the camera is changed according to the operating step of the system. The resolution of the drone camera is set to 640 × 480 for human detection and body gesture recognition and 1280 × 960 for hand gesture recognition. In the test, the drone flies at an altitude of about 3 m in the laboratory with the camera resolution set as above. The higher the resolution of the drone's camera, the higher the altitude at which the drone can fly, given the minimal resolution needed for recognition. Therefore, the system can also work well at heights of more than ten meters with a high-resolution sensor on the UAV camera.
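The per-phase resolution switching described above can be expressed as a small mapping from pipeline phase to camera mode. A sketch of that logic (the phase names are our own labels, not identifiers from the authors' flight code):

```python
# Camera resolution per pipeline phase, as described in the text:
# 640x480 for human detection and body gesture recognition,
# 1280x960 for close-range hand gesture recognition.
PHASE_RESOLUTION = {
    "human_detection": (640, 480),
    "body_gesture": (640, 480),
    "hand_gesture": (1280, 960),
}

def camera_resolution(phase: str) -> tuple:
    """Return the (width, height) the on-board camera should use for a phase."""
    try:
        return PHASE_RESOLUTION[phase]
    except KeyError:
        raise ValueError(f"unknown pipeline phase: {phase}")
```

The actual camera reconfiguration call depends on the camera driver (e.g. the Raspberry Pi camera API) and is omitted here.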
Body Gesture Data-Set Collection
OpenPose [36] is a real-time multi-person framework developed by the Perceptual Computing Lab of Carnegie Mellon University (CMU) to identify human body, hand, facial, and foot key points together on single images. Based on the robustness of the OpenPose algorithm and its flexibility in extracting key points, we use it to detect the human skeleton and obtain skeletal data for different body gestures, thus laying the data foundation for subsequent recognition. The key idea of OpenPose is to employ a convolutional neural network to produce two heat-maps, one for predicting joint positions, and the other for assembling the joints into human skeletons. In brief, the input to OpenPose is an image and the output is the skeletons of all the people the algorithm detects. Each skeleton has 18 joints, including head, neck, arms, and legs, as shown in Figure 3. Each joint position is represented in image coordinates by x and y values, so there is a total of 36 values for each skeleton. Figure 3 shows the skeleton data and key points information. There are no publicly available and relevant datasets in the field of the wilderness rescue of humans by drones. To deal with this problem, based on our preliminary work [37], we create a new dataset dedicated to describing brief and meaningful body rescue gestures made by humans in different situations. Considering that people in different countries have different cultural backgrounds, some gestures may represent different meanings. Therefore, we have selected and defined ten representative rescue gestures that convey clear and concrete messages, without ambiguity, that humans make in different scenarios. These gestures include Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction and PhoneCall. This dataset can of course be extended to a larger dataset.
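The 18-joint OpenPose skeletons described above yield a fixed 36-value feature vector per person. For gesture classification it is common to translate the coordinates relative to a reference joint such as the neck, so the features do not depend on where the person stands in the frame; a sketch of that preprocessing (the normalization scheme is an assumption for illustration, not necessarily the one the authors used):

```python
def skeleton_to_features(joints):
    """Flatten 18 (x, y) joint positions into a 36-value feature vector,
    translated so the neck (index 1 in OpenPose's 18-point layout) is the
    origin.  Input: list of 18 (x, y) tuples in image coordinates."""
    if len(joints) != 18:
        raise ValueError("expected 18 joints from OpenPose")
    neck_x, neck_y = joints[1]
    feats = []
    for x, y in joints:
        feats.extend([x - neck_x, y - neck_y])
    return feats
```

The resulting 36-dimensional vectors are what a downstream classifier (e.g. a small fully connected network) would consume for body gesture recognition.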
In our dataset, the emphasis is on two dynamic gestures, Attention and Cancel, well separated from the static gesture patterns, as these represent the set and reset functions of the system. The system will only alert when these two gestures are recognized. Attention represents the need for the user to establish communication with the drone, which will fly toward the user for further hand gesture recognition. Conversely, Cancel sends an alert that the user does not need to establish contact, and the system is automatically switched off. The system will not sound an alarm when other rescue gestures are recognized. Except for Attention and Cancel, the remaining eight body gestures are considered as signs of normal human activity and therefore do not interact further with the drone. However, this is not absolute; for example, we can also set the PhoneCall gesture as an alarming gesture according to the actual demand, and when the user is recognized to be in the PhoneCall gesture, the drone quickly issues an alarm and later recognizes the phone number given by the user through hand gestures, which can also achieve the rescue purpose. However, this specific function will not be discussed in this paper, because the hand gesture dataset we collected is limited and no recognition of numbers is included. From the gesture signs of usual human activity, we can build up a limited but effective clear vocabulary set for communicating simple semantics. It is not the task of the present paper, but it will be developed in the future; for now the emphasis is on gesture-based communication.
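The set/reset semantics of Attention and Cancel can be sketched as a tiny state machine that fires an alert only on those two gestures and treats the remaining eight as normal activity (this is a sketch of the described logic, not the authors' code):

```python
class RescueAlertState:
    """Attention arms the system (the drone approaches for hand gesture
    recognition); Cancel resets it and switches the system off; all other
    recognized body gestures leave the state unchanged."""

    def __init__(self):
        self.armed = False

    def observe(self, gesture: str) -> bool:
        """Process one recognized body gesture; return True if an alert fires."""
        if gesture == "Attention":
            self.armed = True
            return True   # alert: user requests contact with the drone
        if gesture == "Cancel":
            self.armed = False
            return True   # alert: user declines contact, system switches off
        return False      # Walk, Sit, Stand, ...: normal activity, no alarm
```

Only an armed state would trigger the transition to the close-range hand gesture phase.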
The datasets are collected using a 1080P 160° fish-eye surveillance camera module for Raspberry Pi on the 3DR SOLO UAV system. Six people from our lab participated in UAV body rescue gesture data collection and real-time prediction: four males and two females, aged between 22 and 32 years old. They performed each rescue gesture with all the possible variations. The proposed system recognizes the ten very common body rescue gestures listed above in real time. Among these ten body gestures, we collected as many samples as possible of the two gestures Attention and Cancel, to make the system's set and reset functions more robust. It is important to note that this dataset describes the gesture signs of usual human activity that humans would make in a wilderness environment. Not all gestures will sound an alarm for rescue. Table 3 describes the details of each UAV body rescue gesture. Table 4 describes the details of the UAV body rescue dataset.
Hand Gesture Data-Set Collection
Based on the description in Section 2.2, when the user sends a distress signal to the drone, in other words, when the drone recognizes the human gesture as Attention, the system enters the final stage for hand gesture recognition and the drone automatically adjusts the resolution to 1280 × 960, while slowly approaching the user needs assistance. For hand gesture recognition, there are already many widely used datasets, the dataset for hand gesture recognition in this work is partly derived from GitHub [38] and partly defined by ourselves. We have adapted some of the gesture meanings to the needs. An outstretched palm means Help is needed, an OK gesture is made when the rescue is over, the gestures Peace and Punch are also invoked. Finally, we added Nothing for completeness of hand gesture recognition. In addition to the above four hand gestures, we also added the category Nothing, and in the dataset we collected some blank images, arm images, partial arm images, head images, and partial head images to represent the specific gesture of Nothing. Table 5 shows the details of hand gesture dataset. outstretched palm means Help is needed, an OK gesture is made when the rescue is over, the gestures Peace and Punch are also invoked. Finally, we added Nothing for completeness of hand gesture recognition. In addition to the above four hand gestures, we also added the category Nothing, and in the dataset we collected some blank images, arm images, partial arm images, head images, and partial head images to represent the specific gesture of Nothing. Table 5 shows the details of hand gesture dataset. We use hand gesture recognition to allow the drone to go further in discovering the needs of the user. Whole-body gesture recognition is distant, while hand gesture recognition is close. The combination of the whole body gesture and the partial hand gesture can make the rescue work more adequate and meaningful. 
We chose the limited gesture vocabularies for rescue gesture recognition to force the user to communicate briefly and effectively with the drone under certain conditions. Compared to speech recognition rescue, gesture recognition is a better way to get rid of the interference of the external environment.
Here we have selected only five hand gestures for recognition, but of course, we could also include more gestures such as numbers. As we discussed in Section 2.2, when the user's body gesture recognition in the previous phase resulted in a PhoneCall, then 803 2 Ok the gestures Peace and Punch are also invoked. Finally, we added Nothing for completeness of hand gesture recognition. In addition to the above four hand gestures, we also added the category Nothing, and in the dataset we collected some blank images, arm images, partial arm images, head images, and partial head images to represent the specific gesture of Nothing. Table 5 shows the details of hand gesture dataset. We use hand gesture recognition to allow the drone to go further in discovering the needs of the user. Whole-body gesture recognition is distant, while hand gesture recognition is close. The combination of the whole body gesture and the partial hand gesture can make the rescue work more adequate and meaningful. We chose the limited gesture vocabularies for rescue gesture recognition to force the user to communicate briefly and effectively with the drone under certain conditions. Compared to speech recognition rescue, gesture recognition is a better way to get rid of the interference of the external environment.
In addition to the four hand gestures above, we also added the category Nothing; for this category the dataset contains blank images, arm images, partial arm images, head images, and partial head images to represent the specific gesture of Nothing. Table 5 shows the details of the hand gesture dataset (803 samples per category: Help, Ok, Nothing, Peace, Punch). We use hand gesture recognition to allow the drone to go further in discovering the needs of the user. Whole-body gesture recognition works at a distance, while hand gesture recognition works at close range. The combination of the whole-body gesture and the close-range hand gesture makes the rescue work more adequate and meaningful. We chose limited gesture vocabularies for rescue gesture recognition to force the user to communicate briefly and effectively with the drone under certain conditions. Compared to speech recognition, gesture recognition copes better with interference from the external environment during rescue.
Here we have selected only five hand gestures for recognition, but of course, we could also include more gestures such as numbers. As discussed in Section 2.2, when the user's body gesture recognition in the previous phase results in a PhoneCall, the user can also give the phone number to the drone by using hand number gesture recognition. We refer here to the result of Chen [39], which will be developed in a future phase. From the gesture signs of usual human activity, we can build up a limited but effective and clear vocabulary set for communicating simple semantics. In the hand gesture recognition phase, we can also capture new hand gestures given by the user and later enter the names of the new gestures. By retraining the network, we obtain a new gesture recognition model that includes the user's new hand gesture. Hand gesture prediction is shown in Section 4 by a real-time prediction bar chart.
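As an illustration of how the planned number-gesture phase could feed a phone number to the drone, recognized digit gestures might be accumulated until a confirming gesture arrives (entirely hypothetical: the digit labels and the Ok confirm token are our assumptions, not the system's implemented behavior):

```python
def assemble_phone_number(gesture_stream, confirm="Ok"):
    """Collect recognized digit gestures until a confirm gesture arrives.

    Hypothetical sketch: gesture labels are assumed to be digit strings
    plus a confirm token; this is not the paper's implemented pipeline.
    """
    digits = []
    for g in gesture_stream:
        if g == confirm:
            break
        if g.isdigit():
            digits.append(g)
    return "".join(digits)

number = assemble_phone_number(["1", "1", "2", "Ok"])
print(number)  # 112
```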
Methodology
The system framework proposed in this paper is based on rescue gesture recognition for UAV and human communication. In this section, human detection, counting, and tracking are described, and body gesture recognition with set and reset functions and close-range hand gesture recognition are explained in detail. Figure 4 shows the framework of the whole system. First, the server on the onboard action unit on the drone side is switched on and the initial resolution of the drone camera is set. The input to the system is the live video captured by the drone's camera, and the process is as follows. In the first step, human detection is performed; when a person is detected by the drone, the system proceeds to the next step of rescue gesture recognition. In the second step, pose estimation is performed by OpenPose and the human is tracked and counted. The third step is the recognition of the body rescue gestures. Feedback from the human is crucial to the UAV rescue. The cancellation gesture idea comes from our user-adaptive hand gesture recognition system with interactive training [31]. When the user's body gesture recognition results in Attention, the system proceeds to the final step of hand gesture recognition. If the user's body gesture recognition is a cancellation, the system switches off directly and automatically. The system uses gesture recognition technology to force the user to communicate briefly, quickly, and effectively with the drone in specific environments.
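The detection-recognition flow just described, with Attention acting as set and Cancel as reset, can be sketched as a tiny state machine (illustrative only; the state and event names are ours, not the system's actual interface):

```python
# Simplified sketch of the rescue-interaction flow described above.
# State and event names are illustrative, not the system's actual API.

SEARCH, BODY_GESTURE, HAND_GESTURE, OFF = "search", "body", "hand", "off"

def step(state, event):
    """Advance the interaction state machine by one observation."""
    if state == SEARCH:
        # Stay in search until human detection succeeds.
        return BODY_GESTURE if event == "human_detected" else SEARCH
    if state == BODY_GESTURE:
        if event == "Attention":   # set: enter close-range help mode
            return HAND_GESTURE
        if event == "Cancel":      # reset: switch off automatically
            return OFF
        return BODY_GESTURE
    return state

state = SEARCH
for event in ["nothing", "human_detected", "Walk", "Attention"]:
    state = step(state, event)
print(state)  # hand
```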
Human Detection
YOLO [40,41] is an open-source state-of-the-art object detection framework for real-time handling. Using a completely different approach, YOLO has a few advantages over earlier region-based object detection and classification systems in the way it performs detection and prediction. Region proposal classification systems perform detection by applying the model to an image with multiple predictions in different image regions and scales, and high-scoring regions are considered detections. YOLO, however, uses a one-stage detector methodology and its design is similar to a fully convolutional neural network. The advantage of YOLO for real-time object detection is the improvement of the deep learning-based localization method, and in our system high speed is required. Previous YOLO versions apply a softmax function to convert scores into probabilities that sum to 1.0. Instead, YOLOv3 [42] uses multi-label classification, replacing the softmax function with independent logistic classifiers to calculate the probability of an input belonging to a specific label. Hence, the model makes multiple predictions over different scales with higher accuracy, regardless of the predicted object's size.
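The difference between softmax scoring and YOLOv3's independent logistic classifiers can be seen on a toy score vector (illustrative numbers; softmax probabilities are mutually exclusive and sum to 1.0, while independent sigmoids need not):

```python
import math

def softmax(scores):
    # Mutually exclusive class probabilities: always sums to 1.0.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid_each(scores):
    # Independent per-label probabilities: each in (0, 1), no sum constraint.
    return [1.0 / (1.0 + math.exp(-s)) for s in scores]

scores = [2.0, 1.0, 0.5]              # raw class scores for one box
probs_softmax = softmax(scores)
probs_sigmoid = sigmoid_each(scores)  # labels may overlap, as in YOLOv3
print(round(sum(probs_softmax), 6))   # 1.0
```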
Considering the real-time requirements of our proposed system, this paper selects yolo3-tiny [42] for human detection. The dataset used in this method is the widely used COCO dataset [43], which contains a total of 80 categories of objects. As a variant of YOLO, yolo3-tiny treats detection somewhat differently by predicting boxes on two different scales, while features are extracted from the base network. Its higher performance compared to YOLO was the most important reason for its selection. The model's architecture consists of thirteen convolutional layers with an input size of 416 × 416 images. Although it can detect the 80 objects provided by the COCO dataset very well, in our system we only need to detect people. When the object category detected by the UAV is a person, the system will issue an alarm and then proceed to the next step, human gesture recognition. The main aim of the first stage is to find the human; if no human is detected, the system will remain in this stage until the drone detects a human.

Body Gesture Recognition

Figure 5 shows the flowchart for the human body gesture recognition. The OpenPose algorithm is adopted to detect the human skeleton from the video frames. These skeleton data are used for feature extraction and then fed into a classifier to obtain the final recognition result. We make the real-time pose estimation by OpenPose through a pretrained model as the estimator [44]. OpenPose is followed by a Deep Neural Network (DNN) model to predict the user's rescue gesture. The Deep SORT algorithm [45] is used for human tracking in the multiple-people scenario. The main reasons for choosing this recent method are as follows. Human tracking is based not only on distance and velocity but also on the appearance features of a person. The main difference from the original SORT algorithm [46] is the integration of appearance information based on a deep appearance descriptor. The Deep SORT algorithm allows us to add this feature by computing deep features for every bounding box and using the similarity between deep features to factor into the tracking logic.
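The appearance term that Deep SORT adds on top of motion cues can be illustrated by matching a track to candidate boxes via cosine similarity of their deep feature vectors (a minimal sketch with toy numbers; real Deep SORT also uses Mahalanobis motion distance and keeps a gallery of past features per track):

```python
import math

def cosine_similarity(a, b):
    # Similarity between two appearance feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "deep appearance descriptors" for a tracked person and two new boxes.
track_feat = [0.9, 0.1, 0.3]
box_feats = {"box_a": [0.8, 0.2, 0.4], "box_b": [0.1, 0.9, 0.2]}

# Assign the track to the most appearance-similar detection.
best = max(box_feats, key=lambda k: cosine_similarity(track_feat, box_feats[k]))
print(best)  # box_a
```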
After OpenPose skeleton extraction and Deep SORT human tracking, we can obtain information about the humans in view. By counting the number of people, we distinguish the following three scenarios: nobody, an individual, and multiple people. If the drone does not detect anyone, communication between the drone and the user cannot be established and gesture recognition is fruitless. If the drone detects one or more people, it enters the gesture recognition phase for those people and shows different recognition results based on the user's body gesture, achieving communication between the user and the drone to assist humans. We are mainly concerned with the two gestures Attention and Cancel, which represent the set and reset functions respectively, so when either of these gestures appears, the system will show a warning and turn on help mode or cancel the interaction.

Compared to other gesture recognition methods, such as 3D convolutional neural networks [47], we chose the skeleton as the basic feature for human gesture recognition. The reason is that the features of the human skeleton are concise, intuitive, and make it easy to distinguish between different human gestures. In contrast, a 3DCNN is time-consuming, and such large neural networks are difficult to train. As for the classifiers, we experimented with four different classifiers: kNN [48], SVM [49], a deep neural network [50], and random forest [51]. The implementation of these classifiers comes from the Python library "sklearn". After testing the different classifiers, the DNN was chosen because it showed the best results.
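This kind of classifier comparison can be sketched with sklearn, which the text names as the implementation source. The data below are synthetic stand-ins for real skeleton features, and sklearn's MLPClassifier plays the role of the deep neural network, so the scores will differ from the paper's:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 36))        # stand-in for 18 joints x (x, y)
y = rng.integers(0, 10, size=300)     # 10 body-gesture classes
X[np.arange(300), y] += 3.0           # make the toy classes separable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "DNN": MLPClassifier(max_iter=1000, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
print(scores)
```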
The DNN model has been programmed using the Keras Sequential API in Python. There are four dense layers, each followed by batch normalization, with 128, 64, 16, and 10 units sequentially. The last layer of the model uses Softmax activation and has 10 outputs. The model is applied to the recognition of body rescue gestures. Based on the DNN model established above, the next step is training. The model is compiled using Keras with the TensorFlow backend. The categorical cross-entropy loss function is utilized because of its suitability for measuring the performance of the fully connected layer's output with Softmax activation. The Adam optimizer [52] with an initial learning rate of 0.0001 is utilized to control the learning rate. The model has been trained for 100 epochs on a system with an Intel i7-5930K CPU and an NVIDIA GeForce GTX TITAN X GPU. The total training dataset is split into two sets: 90% for training and 10% for testing. Specific information such as the final body gesture recognition model accuracy and loss is described in Section 4.
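Based on this description, the body-gesture DNN could be reconstructed in Keras roughly as follows (a sketch: the layer widths, loss, optimizer, and learning rate follow the text, while the input dimension, hidden activations, and placing batch normalization only after the hidden layers are our assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_body_gesture_dnn(input_dim=36):
    # input_dim (flattened skeleton features) is an assumption.
    model = models.Sequential([layers.Input(shape=(input_dim,))])
    for units in (128, 64, 16):        # hidden dense layers from the text
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
    model.add(layers.Dense(10, activation="softmax"))  # 10 gesture outputs
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_body_gesture_dnn()
```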
Hand Gesture Recognition
Further interaction with the drone is established by the user through an Attention body gesture. Whether it is a single person or a group of people, the drone enters help mode whenever a user is recognized by the drone in a body gesture of Attention. The camera resolution is automatically adjusted to 1280 × 960 as the drone slowly approaches the user. This is the final stage of the system: hand gesture recognition. Figure 6 shows the flowchart for this section. Hand gesture recognition is implemented using a convolutional neural network (CNN) [53]. The 12-layer convolutional neural network model is compiled using Keras with the TensorFlow backend. The CNN model can recognize 5 pre-trained gestures: Help, Ok, Nothing (i.e., when none of the other gestures is input), Peace, and Punch. The system can guess the user's gesture based on the pre-trained gestures, and a histogram of real-time predictions can also be drawn. The combination of recognition of the overall body gesture at a distance and the hand gesture at close range makes drone rescue more comprehensive and effective. Although the gestures that can be recognized at this stage are limited, the system can also capture and define new gestures given by the user as needed and obtain a new model by retraining the CNN. As an example, we can add the recognition of numbers from human hand gestures as described in Section 2.3: when the body gesture recognition in the previous section results in a PhoneCall, the two can be used in combination, and the user can provide the drone with the phone number to be dialed via hand gesture recognition, thus also serving the rescue purpose.
The dataset has a total of 4015 gesture images in 5 categories, with 803 image samples in each category. The total dataset is split into two sets: 80% for training, and 20% for testing. After training for 20 epochs, the model achieves 99.77% precision on training data and 94.71% accuracy on testing data.
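The dataset figures above are self-consistent and can be checked directly: 5 × 803 = 4015 images, and an 80/20 split gives 3212 training and 803 test images:

```python
# Check the hand-gesture dataset arithmetic stated in the text.
categories, per_class = 5, 803
total = categories * per_class          # 4015 images in total
train = int(total * 0.8)                # 80% for training
test = total - train                    # 20% for testing
print(total, train, test)  # 4015 3212 803
```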
Experiment
In this section, the model and performance of the proposed human detection and rescue gesture recognition system for UAVs are described. Based on the introduction in Section 2, the testing phase of the designed system was done in the laboratory in a simulated field environment, and Table 6 shows the real running time required for each phase of the program on the proposed Jetson AGX Xavier GPU-based UAV. It should be noted that the results below are cropped images; the original image is in a 4:3 ratio. As we tried to recreate the field environment without clutter such as tables and chairs that we did not want included, we cropped a fixed area of the output video.

Figure 7 shows the results of human detection via yolo3-tiny. It is worth pointing out that we simulated wild forest scenarios in the lab, but of course the system can detect humans in other scenarios as well. We can see that, based on the COCO dataset, plants, squatting persons, and standing persons can be detected. If no person is detected, the system will not display a warning. Immediately after the warning appears, the system goes into the recognition phase of the human rescue body gestures.

Based on the body rescue gesture dataset created in Table 3, we trained the model through a deep neural network to finally obtain the accuracy and loss of the body gesture recognition model. The changes in accuracy and loss over the course of training are shown in Figure 8. At first, the training and testing accuracies increase quickly. Afterward, there is slow growth between 10 and 20 epochs, and convergence happens after 25 epochs. The accuracy and loss approach their asymptotic values after 40 epochs with minor noise in between. The weights of the best-fitting model with the highest test accuracy are preserved. Both training and testing loss diminished consistently and converged, showing a well-fitting model.
After training for 100 epochs, the model achieves 99.79% precision on the training data and 99.80% accuracy on the testing data. In Figure 9, the diagram on the left presents the confusion matrix, with predicted labels on the X-axis and true labels on the Y-axis, for predictions of our model on the training dataset; the diagram on the right presents the corresponding confusion matrix on the testing dataset. The high density on the diagonal shows that most of the body rescue gestures were predicted correctly, and the performance is close to perfect for most of the gestures. In the confusion matrix, we can see that the amount of data for Attention and Cancel is relatively large. This is because, in the data collection part, we collected the largest amount of data for Attention and Cancel: these two gestures are dynamic body gestures, well separated from the static gesture patterns, and represent the set and reset functions respectively. In Figure 10, the diagram on the left presents the standard (normalized) matrix, with predicted labels on the X-axis and true labels on the Y-axis, for predictions of our model on the training dataset; the diagram on the right presents the corresponding matrix on the testing dataset. The standard matrix is a scale for correctly identified gestures and mistakes: it shows that all body gestures in the training set reach 1.00, and in the test set all body gestures reach 1.00 except Punch (0.98), Attention (0.99), and Walk (0.99). The sum of each row in a balanced and normalized confusion matrix is 1.00, because each row represents 100% of the elements of a particular gesture. In addition to using the confusion matrix as an evaluation metric, we also analyzed the performance of the model using other standard metrics: we use the equations below to calculate the macro-average. Based on the true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) of the samples, we calculate the P value (precision) and R value (recall) for each class; the resulting macro F1 value is close to 1.00.
Precision_macro = (1/n) Σ_{i=1}^{n} P_i,  Recall_macro = (1/n) Σ_{i=1}^{n} R_i,

F1_macro = 2 × Precision_macro × Recall_macro / (Precision_macro + Recall_macro),

where P_i = TP_i / (TP_i + FP_i), R_i = TP_i / (TP_i + FN_i), and n is the number of gesture classes.

Figure 10. Standard matrix with predicted labels on X-axis and true labels on the Y-axis tested on body gesture training set (a) and in testing dataset (b).
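A minimal pure-Python implementation of these macro-averaged metrics (per-class TP/FP/FN counted from label lists; a small illustrative example, not the paper's evaluation code):

```python
def macro_prf(y_true, y_pred, classes):
    """Macro-averaged precision, recall, and F1 over the given classes."""
    ps, rs = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        ps.append(tp / (tp + fp) if tp + fp else 0.0)
        rs.append(tp / (tp + fn) if tp + fn else 0.0)
    p = sum(ps) / len(classes)          # Precision_macro
    r = sum(rs) / len(classes)          # Recall_macro
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

p, r, f1 = macro_prf(["a", "a", "b", "b"], ["a", "a", "b", "a"], ["a", "b"])
```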
As communication between the drone and the GPU-based ground station in the lab depends on the local network, requests sent from the client side and accepted by the server directly reduce the FPS value, causing the system to run very slowly: the system reaches only approximately 5 FPS in real-time operation. Running directly on a drone equipped with a Jetson Xavier GPU would solve this problem in a practical application scenario, as shown in Figure 1; the Jetson Xavier GPU is as powerful as the ground station (GTX Titan GPU) and does not need to communicate over the local network, so it will be fast enough to meet practical needs. In the laboratory tests, the drone was always flown at an oblique position above the person, approximately 2 to 3 m away from the user in the hand-gesture recognition (close) position. The oblique position ensures that the entire human body can be recognized with a higher probability than flying directly above the user's head and pointing downwards vertically.
Because the work is based on the human skeleton, the flying position of the drone imposes some limitations on the recognition results. Figure 11 shows the recognition of the Cancel gesture and the Attention gesture with warning messages in real time; it also gives information about the number of people, the time, the frame, and the FPS. Next are the recognition display and a detailed description of two basic gestures that we randomly selected from the dataset. In Figure 12, the diagram on the left shows that when a user points in a specific direction, the purpose is to alert the drone to look in the direction the person is pointing to. For example, when someone is lying on the ground in the indicated direction, this gesture solves the problem that the UAV cannot recognize the skeleton information of a lying person well due to the flight position of the drone. The Direction gesture is also helpful for fainted or unconscious people: when there is a group of people, those who can move can use the Direction gesture to direct the drone to those who cannot. Practically, our proposed system is intended to help people in a bad situation, but we do not want to disturb persons who do not want to, or cannot, interact. The on-board system may send messages to the control center about non-moving people, but we leave them in peace if they are simply resting. In Figure 12, the diagram on the right shows the user's gesture for making a phone call, which can be linked to hand-gesture number recognition at a later stage: when the user poses to make a call, we can perform hand number recognition to obtain the phone number the user wants to dial.
During human body gesture recognition, Attention and Cancel are dynamic gestures that function as set and reset, respectively, and could otherwise confuse the on-board recognition during the frame-by-frame check; when either of these two gestures is detected, the system immediately gives an alert. Figure 13 shows that when there are multiple people, one of them sends an Attention gesture to the drone. At this point, the drone sends a warning to inform that someone needs help. We can also see in Figure 12 that other people's gestures are well recognized in addition to the person making the Attention gesture. In our recognition system, about 10 people can be recognized at the same time during human body gesture recognition. Figure 13 also shows the basic gesture recognition of multiple people without a warning: we can see some people standing, some walking, and some kicking. The number of people, the time, the frame, and the FPS are also displayed. It should be noted that if a person is not fully visible to the drone camera, he or she will not be recognized. People's movements are generated continuously in real time, and Figure 13 is a frame taken from the video, so there may be some inaccurate skeleton information. Of course, if a person's gesture is not in our dataset, that gesture will not be recognized and the recognition result above that person will be blank.
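The set–reset behaviour of the Attention and Cancel gestures, together with the resolution switch for close hand-gesture recognition, can be sketched as a small state machine. This is an illustrative reconstruction, not the authors' code; the gesture labels and the two resolutions follow the paper, while the class and method names are assumptions.

```python
# Illustrative state machine for the Attention (set) / Cancel (reset) logic
# described in the paper; not the authors' implementation.

FAR_RES, CLOSE_RES = (640, 480), (1280, 960)

class RescueSession:
    def __init__(self):
        self.state = "monitor"          # monitor -> alerted -> hand_stage
        self.resolution = FAR_RES

    def on_gesture(self, label):
        if label == "Attention":        # set: someone asks for help
            self.state = "alerted"
            return "warning: someone needs help"
        if label == "Cancel":           # reset: end communication, stand down
            self.state = "monitor"
            self.resolution = FAR_RES
            return "session cancelled"
        return None                     # basic gestures are only displayed

    def approach_for_hand_stage(self):
        # after Attention, the drone raises resolution and approaches the user
        if self.state == "alerted":
            self.resolution = CLOSE_RES
            self.state = "hand_stage"

s = RescueSession()
print(s.on_gesture("Walk"))             # basic gesture: no alert
print(s.on_gesture("Attention"))        # set -> warning issued
s.approach_for_hand_stage()
print(s.resolution)                     # close-range resolution for hand stage
print(s.on_gesture("Cancel"))           # reset -> back to monitoring
```

A frame-by-frame loop would feed each recognized gesture label into `on_gesture`, so only the two dynamic gestures change the session state while the basic gestures are merely displayed.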
When the result given by the user in the previous stage is the Attention body gesture, the drone adjusts the resolution to 1280 × 960 and slowly approaches the user to perform hand gesture recognition. We selected two representative hand-gesture recognition results to show, a Help gesture and an Ok gesture, where the user establishes further communication with the drone through the Attention body gesture in the previous stage. In the final close hand-gesture recognition stage, the user can inform the drone that he or she needs help through the Help hand gesture, and when the drone is done helping, the user can inform it through the Ok hand gesture. Figure 14 shows the results of the recognition of the Help and Ok gestures. From the displayed results we can see that the user's hand-gesture recognition results are well predicted by the histogram. Of course, we can also capture and define new gestures for the user on a case-by-case basis and add them to the gesture dataset by retraining the network.
In Figure 15, the diagram on the left presents the confusion matrix with predicted labels on the X-axis and true labels on the Y-axis for predictions using our model on the training dataset, and the diagram on the right presents the same for the testing dataset. The high density at the diagonal shows that most of the body rescue gestures were predicted correctly; the performance is close to perfect for most of the gestures. In Figure 16, the diagram on the left presents the standard matrix with predicted labels on the X-axis and true labels on the Y-axis for the training dataset, and the diagram on the right presents the same for the testing dataset. The standard matrix shows that the corresponding values for the five categories of hand gestures reach 0.99 or 1.0 on the training set and 0.9 or more on the test set.
Figure 15. Confusion matrix with predicted labels on the X-axis and true labels on the Y-axis tested on the hand gesture training set (a) and testing set (b).
Figure 16. Standard matrix with predicted labels on the X-axis and true labels on the Y-axis tested on the hand gesture training set (a) and testing set (b).
Conclusions and Future Work
In this paper, we propose a real-time human detection and gesture recognition system for on-board UAV rescue. Practical application and laboratory testing are two different systems. The system not only detects people, tracks them, and counts the number of people, but also recognizes human rescue gestures dynamically. First, the drone detects the human at a longer distance with a resolution of 640 × 480, and the system issues an alarm and enters the recognition stage when a person is detected. A dataset of ten basic body rescue gestures (i.e., Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction, and PhoneCall) has been created with a UAV's camera. The two most important dynamic gestures are the novel Attention and Cancel, which represent the set and reset functions, respectively, and through which users can establish communication with the drone. After the Cancel gesture is recognized, the system automatically shuts down, and after the Attention gesture is recognized, the user can establish further communication with the drone. A person coming to the foreground and making the dynamic Attention gesture is flagged for further investigation, and the system enters the final hand-gesture recognition stage to assist the user. At this point, the drone automatically adjusts the resolution to 1280 × 960 and gradually approaches the user for close hand-gesture recognition. From a drone rescue perspective, the system obtains useful feedback from users, and this work lays some groundwork for subsequent user rescue route design.
The detection of the human body is achieved through YOLOv3-tiny. A rescue dataset of 10 gestures was collected using a fisheye surveillance camera with 6 different individuals in our lab. The OpenPose algorithm is used to capture the user's skeleton and detect the joints. We built a deep neural network (DNN) to train and test the model; after training for 100 epochs, the framework achieves 99.79% precision on training data and 99.80% accuracy on testing data. For the final stage of hand gesture recognition, we use data collected online combined with our own definitions to obtain a relevant dataset, which is trained with a convolutional neural network to obtain a model for hand gesture recognition. Gestures can also be added or removed as required. The drone flies at an altitude of approximately 3 m, diagonally above the user rather than directly overhead. However, there are some difficulties and limitations when the system is applied in the real wilderness. In practice, the proposed system is subject to extreme weather conditions and resolution issues. Another limitation is the flying position of the UAV: the system proposed in this paper requires drones to fly over people at an angle in order to detect body gestures more accurately, rather than from a position vertically above the user's head. Gathering enough experimental data takes time, and battery life limits real-life data gathering; for this reason, real-life data are only used for the demonstration in Figure 1, while the exhaustive testing required a laboratory-based environment.
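As a sketch of how per-frame skeleton joints can be turned into a classifier input, the snippet below normalizes 2-D keypoints (such as those produced by OpenPose) so the feature vector is invariant to the person's position and scale in the image. The joint indices and the torso-length normalizer are assumptions for illustration, not the authors' exact pre-processing.

```python
# Hedged sketch: turn 2-D skeleton keypoints into a translation- and
# scale-invariant feature vector for a gesture classifier (e.g., a DNN).
# Joint indices and the torso-length normalizer are illustrative assumptions.
import math

def skeleton_features(joints, neck=1, mid_hip=8):
    # joints: list of (x, y) pixel coordinates, one entry per keypoint
    nx, ny = joints[neck]
    hx, hy = joints[mid_hip]
    torso = math.hypot(hx - nx, hy - ny) or 1.0   # avoid division by zero
    feats = []
    for x, y in joints:
        # center on the neck and scale by the neck-to-hip distance
        feats.extend([(x - nx) / torso, (y - ny) / torso])
    return feats

# Two detections of the same pose at different positions and scales should
# map to (nearly) the same feature vector.
pose = [(100, 50), (100, 100), (80, 140), (120, 140), (100, 200),
        (90, 260), (110, 260), (100, 300), (100, 150)]
shifted = [(2 * x + 40, 2 * y - 10) for (x, y) in pose]
a, b = skeleton_features(pose), skeleton_features(shifted)
print(max(abs(p - q) for p, q in zip(a, b)) < 1e-9)
```

A vector like this, one per frame (or stacked over a short window for the dynamic gestures), is the kind of fixed-length input a small dense network can classify into the ten rescue gestures.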
The main innovations and contributions of this paper are as follows. First, gesture recognition for wilderness rescue can avoid interference from the external environment, which is its biggest advantage over voice recognition for rescue; a limited and well-oriented dictionary of gestures forces humans to communicate briefly, so gesture recognition is a good way to avoid some communication drawbacks. Second, a dataset of ten basic body rescue gestures (i.e., Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction, and PhoneCall) has been created with a UAV's camera, which is used to describe some of the body gestures of humans in the wild. For the gesture recognition dataset, not only whole-body gestures but also local hand gestures were combined to make the recognition more comprehensive. Finally, the two most important dynamic gestures are the novel Attention and Cancel, which represent the set and reset functions, respectively; these could otherwise confuse the on-board UAV recognition during the frame-by-frame check, so they are paired with a system warning. The system switches to a warning help mode when the user shows Attention to the UAV, and the user can also cancel the communication with the UAV at any time as needed.
In future work, more generic rescue gestures and larger hand gesture datasets could be included. The framework can be executed in real-time recognition with self-training: the system can automatically retrain the model on new data in a very short time to obtain a new model with new rescue gestures. Last but not least, we also need to conduct outdoor tests on a drone carrying a Jetson Xavier GPU.
The interpretation of gesture-based communication without a predetermined vocabulary and with unknown users will be a great challenge for linguistic research. The Attention and Cancel dynamic gestures will play a main role in generating such dynamic linguistic communication.
Author Contributions: C.L. designed and implemented the whole human detection and rescue gesture recognition system and wrote and edited the paper. She also created the body rescue gesture dataset with her colleagues. T.S. had the initial idea for this project, worked on the paper, and supervised and guided the whole project. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the author.
A FINANCIAL ANALYSIS AND CREDIT GAP ASSESSMENT OF MADURAI MALLI FLOWER IN MADURAI DISTRICT OF TAMIL NADU
The global competitive environment has recently shifted towards traditional quality products with a strong cultural link to a particular geographical origin, offering the opportunity to move away from commodity markets into more profitable markets through differentiation. Geographical indications (GI) thus act as a protection measure for both consumers and producers, apart from solving problems that arise from information asymmetry and free riding on reputation. One such GI-tagged product from Tamil Nadu is Madurai malli. This paper assesses the financial feasibility of Madurai malli cultivation in Madurai district of Tamil Nadu. Primary data were collected with the aid of a well-structured and pre-tested schedule from a random sample of 120 farmers. The findings showed that Madurai malli cultivation was economically viable and financially feasible in the study area. The average cost of establishing a Madurai malli farm was found to be Rs. 204224.00/acre for the first year. The costs and returns incurred in cultivation for the subsequent years after establishment were calculated on an annual basis. The financial feasibility measures show that the Net Present Value at a 12 per cent discount rate at the end of seven years was positive, the Benefit-Cost ratio was more than one, and the Internal Rate of Return for Madurai malli cultivation was very high, with a payback period of around 1 year and 3 months. Credit facilities for the crop are provided solely by the National Bank for Agriculture and Rural Development (NABARD) through cooperatives functioning in the study area, and the prevailing credit gap was assessed against the total variable costs incurred annually. The credit gap was found to be Rs. 144734.00/acre/year over the variable costs on average. The study recommends that while farmers' investment in Madurai malli cultivation is feasible, the credit needs of the farmers have to be fulfilled by the financial institutions.
INTRODUCTION
Floriculture is an important component of agriculture which has gained a commercial tone in recent years and has a very significant share in the economy of the country. India is the largest producer of jasmine and marigold. Jasmine (Jasminum sambac), also called the "Queen of fragrance", is one of the leading exported flowers, and Madurai malli is the first flower crop to gain a Geographical Indication (GI) in Tamil Nadu. The specialty of Madurai malli is its unique fragrance, which is absent in jasmine flowers from other parts of the state. This is mainly due to the soil, which carries aromatic compounds such as "jasmone" and "alpha-terpineol". Geographical indications tend to enhance the market price of a product, thereby ensuring the livelihood of the producers. Similarly, consumers are provided with products of guaranteed quality, which can have a significant influence on the economic performance of the product. Modernisation of agriculture has increased the use of inputs, especially seed, fertilizers, irrigation water, machinery and implements, which has further increased the demand for agricultural credit. There is a positive association between agricultural credit and agricultural production in India, and the agricultural sector deserves continued policy support in credit in order to move onto a sustainable and higher growth path (Pallavi Chavan et al.). The World Bank has also stated that "Training farmers combined with increasing access to finance or the inputs required for agriculture acts as an impetus for improving agricultural productivity. Also providing cooperatives with resources improves linkages between agribusinesses along the value chain." (World Development Report, 2019). In this context, the present study was taken up to analyse the financial feasibility of Madurai malli farms and to assess the credit gap prevailing at the farm level.
METHODOLOGY
The study was carried out in Madurai district, as it is one of the major jasmine-producing areas of Tamil Nadu and occupies about 12.6 per cent of the total jasmine-cultivated area. Primary data were mainly cross-sectional, collected from 120 Madurai malli growing farmers randomly selected from a list of malli growers in the Thiruparankundram and Thirumangalam blocks for the 2017-18 production season. In each block, 60 malli growing farmers were randomly selected. A structured questionnaire was the main instrument used to collect the primary data, administered as a pre-tested interview schedule through personal interviews. The discount rate was assumed to be 12 per cent for the analysis, since the prevailing rate of interest for long-term commercial bank loans is around 12 per cent. The credit gap prevailing among the jasmine farmers in the study area was calculated from the details obtained from the cooperative banks.
Pay Back Period: The Pay Back Period (PBP) is the duration of time, in years, taken to recover the investment.
The payback period was estimated by summing the undiscounted net benefits over the years until they equal the initial investment incurred for establishment.
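The feasibility measures used in the study (NPV, benefit-cost ratio, payback period and IRR) can be sketched as follows. The cash-flow figures below are hypothetical placeholders for illustration, not the study's survey data; only the 12 per cent discount rate follows the paper.

```python
# Sketch of the financial feasibility measures used in the study:
# NPV, benefit-cost ratio, payback period and IRR (found by bisection).
# The cash flows below are hypothetical, not the study's survey data.

def npv(rate, flows):
    # flows[t] is the net cash flow at the end of year t+1
    return sum(f / (1 + rate) ** (t + 1) for t, f in enumerate(flows))

def bcr(rate, benefits, costs):
    # ratio of discounted benefits to discounted costs
    return npv(rate, benefits) / npv(rate, costs)

def payback_years(flows):
    # first year in which cumulative undiscounted net benefits turn non-negative
    total = 0.0
    for t, f in enumerate(flows):
        total += f
        if total >= 0:
            return t + 1
    return None

def irr(flows, lo=0.0, hi=10.0, tol=1e-6):
    # bisection on the discount rate; assumes NPV changes sign once in [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical per-acre flows: year-1 establishment outlay, then net returns.
net = [-200000, 180000, 220000, 240000, 230000, 210000, 180000]
print(round(npv(0.12, net)))      # positive NPV -> financially feasible
print(payback_years(net))         # years to recover the initial outlay
print(round(irr(net), 3))         # IRR well above the 12% opportunity cost
```

With the study's actual year-wise cost and return series in place of the placeholder flows, the same four functions reproduce the criteria reported in Table 5.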
Credit Gap Analysis
Total variable cost of a farm is dependent on production output. As the volume of production and output increases, variable costs will also increase. Credit provided should be able to cover the variable expenses of the farm.
Hence the difference between the variable cost and the credit availed is calculated every year to find the credit gap prevailing on the farm.
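The credit gap computation described above is a simple difference, sketched below. The farm figures are hypothetical placeholders; the Rs. 144,734/acre/year figure reported in the paper is an average over the sample farms and is not reproduced here.

```python
# Credit gap = annual total variable cost minus the credit actually provided.
# The figures below are hypothetical placeholders for illustration only.

def credit_gap(total_variable_cost, credit_availed):
    # a positive gap means the loan does not cover the farm's variable expenses
    return max(total_variable_cost - credit_availed, 0)

farms = [
    {"tvc": 190000, "credit": 50000},
    {"tvc": 210000, "credit": 60000},
]
gaps = [credit_gap(f["tvc"], f["credit"]) for f in farms]
print(gaps)                      # per-farm gaps
print(sum(gaps) / len(gaps))     # average gap across the sample farms
```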
RESULTS AND DISCUSSIONS

Establishment Cost of Jasmine Garden
The establishment cost per acre of the jasmine garden in Madurai district of Tamil Nadu was estimated for the year 2017-18 with the quantity of inputs and labour used at current market prices, as presented in Table 1. The cost of weeding decreases with the increasing age of the crop. This is due to the fact that as the crop grows, it establishes a smothering effect on weeds, which naturally reduces the weed incidence in the field.
Cost and Returns Structure of Madurai Malli
The costs, and returns structure of jasmine in different age plants have been presented in table 4. It is evident that the yield per hectare was increasing year by year and will be reaching its maximum
Financial Feasibility of Investment in Madurai Malli Gardens
Net present value, benefit-cost ratio, payback period and internal rate of return were employed to evaluate the feasibility of investment in Madurai malli orchards, as presented in Table 5. The net present worth was Rs. 1063636.00 per acre at a 12 per cent discount rate. Thus it could be concluded that investment in Madurai malli cultivation is economically feasible. The higher magnitude of the net present value may be attributed to the fact that the initial investment and maintenance costs in Madurai malli cultivation were low compared to the returns realised.
The benefit-cost ratio at a 12 per cent discount rate was found to be 1.95, which is more than unity, indicating that investment in Madurai malli cultivation is financially viable. Similar findings were reported by Kumar et al. (2013), who observed that the benefit-cost ratio for jasmine was 2. This could be because of the low initial investment for establishment of a jasmine orchard. The payback period in the present study was found to be 1 year and 3 months, clearly indicating that it would take a little more than a year to recover the entire investment. This could be attributed to the fact that the initial investment itself was lower, besides the higher rate of returns. However, this criterion neglects the net returns realized by the farmers in the subsequent years, which may be more significant in the case of a long-term enterprise like Madurai malli. The internal rate of return was found to be very high compared to the opportunity cost of capital, indicating that the investment in Madurai malli cultivation was highly profitable, economically feasible and financially viable.
Credit Gap Analysis
The sample Madurai malli growers in the study area avail credit facilities only from the cooperative society.
CONCLUSIONS
From the results of the study, it can be observed that the total costs incurred in cultivating Madurai malli increased gradually over the total period of the crop, whereas the gross returns from the crop increased until year 6 and decreased thereafter.
Net returns of the crop increased until year four and then declined due to diminishing marginal returns, implying that Madurai malli growers should take up replanting after the sixth year of cultivation to acquire better profits from the enterprise.
The results of the multiple linear regression emphasize that the size of farm holdings, the institutional credit, labour usage, fertilizer used, non-farm income and net returns were the major determinants of total farm investment in Madurai malli farms.
The ability to invest largely depended upon farm income surplus and the extent of credit availed.
On the basis of above study it can be concluded that increasing the investment in floriculture helps in the growth of this sector in various aspects. But sufficient credit facilities are required in order to increase the investment since it is evident that the credit amount provided was not enough for the Madurai malli growers in the study area. Hence the following measures can be taken up to solve these problems. Steps should be initiated by the financial institutions to provide credit facilities to floriculture crops. The base of agricultural credit should be enhanced to the large proportion of rural population. Financial institutions however should simplify procedures so that the institutional agencies can become small farmers friendly.
The role of credit can be further enhanced by much greater financial inclusion by involving of region-specific market participants, and credit suppliers ranging from public sector banks, co-operative banks, the new private sector banks and micro-credit suppliers, especially self-help groups. The cost of cultivation of the crop may be used by the financial institutions as guidelines for fixing the scale of finance so as to bridge the credit gap prevailing in cultivation. Initiatives can be taken in upgrading the standard of flowers through modern technologies and practices to improve the export of Jasmine in the international market. The beneficiaries should recognize the practice and advantages of accumulated savings, which is often allowed to group when existing facilities are not fully adjusted. This can help the banks to hope that the loan will be paid and usher sustainability of bank and customer friendly relationship. To make the loan more productive special instructions and supervision should be carried out by loan issuing authorities. | 2019-06-14T15:32:17.501Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "dae62e940a12702f1e36570cd0df5e0bcb48b8ae",
"oa_license": null,
"oa_url": "https://doi.org/10.24247/ijasrjun201944",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "df17e56d0e9d2ea0d9d008963b63d473d1a4f678",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
EVALUATION OF STUDENTS' PERCEPTIONS OF ONLINE-BASED SCIENCE LEARNING MATERIALS DURING THE COVID-19 PANDEMIC AT SMP IT KHAIRUNNAS, BENGKULU CITY
This study aims to determine students' perceptions of their understanding of science material at SMP IT KHAIRUNNAS Bengkulu City during the COVID-19 pandemic compared with face-to-face learning. The method used in this research is qualitative, and the data analysis technique used is descriptive qualitative. The results showed that, on average, students understood the material better during face-to-face learning than during online learning, because online learning was considered difficult: the science material presented by the teacher was less clear, and students had difficulty understanding formulas from an explanation alone, without the teacher writing them directly on the blackboard. Face-to-face learning makes it easier for the teacher to explain material and for students to answer the questions the teacher gives.
Introduction
The 21st century is marked as the century of openness, or the century of globalization, meaning that human life in the 21st century undergoes fundamental changes compared with the way of life in the previous century. As is well known, in the 21st century there has been a total change in both society and the world of education. The schools we know today took shape in the 19th century in the context of developing children's education and encouraging industrialization, and they now face the COVID-19 pandemic. To prevent the spread of COVID-19, it is recommended to stop events that draw crowds, and face-to-face learning gathers many students in a class during the teaching process.
Therefore, a scenario is needed that can prevent physical contact between students and teachers (Firman, F & Rahayu, S. 2020), and the solution during the COVID-19 pandemic is online learning. According to Moore, Dickson-Deane & Galyen (2011), online learning is learning that uses the internet network with accessibility, connectivity, flexibility, and the ability to bring up various types of learning interactions. Online learning is not only about the internet; an important aspect is familiarity with Learning Management Systems (LMS) (Ali, Gunay Balim. 2009). Evaluation (assessment) is a process of collecting, reporting and using information about student learning outcomes obtained through measurement, with the aim of analysing or explaining student achievement in carrying out related tasks and making effective use of this information to achieve educational goals (Puskur, 2002). The impact of the global pandemic began to penetrate the world of education in Indonesia, until the central government finally issued a policy to close all educational institutions in Indonesia. This was done as an effort to prevent the spread of the COVID-19 virus.
Research questions
How are students' perceptions of online science learning in the era of the covid-19 pandemic at SMP IT KHAIRUNNAS Bengkulu City?
Significance of the study
With government policies that moved educational activities out of schools, the government and related institutions had to present alternatives for students who were unable to carry out the educational process at their institutions. The distance learning or online method (e-learning classes) was therefore adopted as a way to continue the learning process by utilizing existing online learning applications such as Edmodo, Google Classroom, Zoom and so on (Kemendikbud, 2020). In the current study, we want to examine whether students of SMP IT KHAIRUNNAS Bengkulu City understand learning, especially science lessons, better online through existing media or better offline, and also whether online science learning is effective.
Research Design
The research used is descriptive qualitative research, collecting data through direct surveys at SMP IT KHAIRUNNAS Bengkulu City as well as online, considering that the spread of COVID-19 was still rampant in Indonesia at the time. The supporting data used in this study are articles, documents, books and news related to the evaluation of learning during the COVID-19 pandemic.
Population and Sample
The population in this study consisted of 3 classes: class A with 14 students, class B with 12 students, and class C with 12 students at SMP IT KHAIRUNNAS Bengkulu City. A question-and-answer interview technique was used for data collection. In this case, 38 students in total volunteered for the survey. In addition, teachers were also involved in data collection; this decision was taken to build on existing findings regarding the impact of online learning at the school during the COVID-19 pandemic.
Data Collection Instrumen
The data collection techniques used are: 1. observation, 2. interviews, 3. analysing material. Data collection is the method used to gather information or facts in the field based on the survey conducted.
Data Analysis
Qualitative research collects data that are then analysed; this approach is considered relevant to conditions in online learning during the pandemic. The types of data collected are the results of observations and interviews. The subjects and objects in this study were students and science teachers who interacted with each other through questions and answers (Sugiyono, 2009).
Findings
For approximately 2 years of the spread of the COVID-19 virus, schools conducted online learning, including SMP IT KHAIRUNNAS Bengkulu City. Based on the results of our interviews at the school, the media most often used were the WhatsApp (WA), Zoom and Google Classroom applications. However, this online learning was considered less effective by students and teachers because of the many obstacles involved, such as an inadequate network, material conveyed by the teacher that was not clear, assignments that piled up because they were not done directly by students, and the number of students who did not take part in learning during Zoom sessions or during discussions on WA or Google Classroom because of oversleeping and so on.
During the pandemic, many students at SMP IT KHAIRUNNAS Bengkulu City hoped to return to face-to-face learning. Based on our research at the school, students considered face-to-face learning more effective because they understood the material delivered by the science teacher better; students who grasp science formulas and material when they are taught face-to-face find online delivery very different. Students also preferred face-to-face mid-term (UTS) and final (UAS) examinations, because in that setting students cannot look up answers on the internet, whereas during online UTS and UAS many students answered the questions by searching for answers online. Another reason we frequently encountered was that students disliked studying online because they could not meet their classmates.
One of the steps taken by the science teachers so that the learning process could continue without burdening students was to use Zoom, PowerPoint and video media. Zoom was chosen because it allows teachers and students to meet face to face even though they are far apart, and teachers usually used the WhatsApp application to send learning materials and assignments, making the process easier for both students and teachers. The science teachers hoped that, when teaching online, they could freely write out science material such as formulas, and that students would listen to lessons more actively.
Based on the research and interviews we conducted at SMP IT KHAIRUNNAS Bengkulu City, we obtained interview results from three class VIII rooms, totaling 36 students from classes VIII A, VIII B and VIII C. We also interviewed science teachers, both online and face-to-face, to determine how effectively students learned and understood the science material delivered by teachers during online learning compared with face-to-face learning.
Discussion
The data above show that, on average, students perceive their understanding to be more effective in face-to-face learning than in online learning. Online learning is considered difficult because the science material presented by the teacher is not clear, and students struggle to understand formulas from an explanation that is not written out directly by the teacher on the blackboard; in face-to-face learning it is easier for the teacher to explain the material and for students to answer the questions given.
Online and face-to-face learning run smoothly only if their supporting factors are met. Based on the interviews we conducted with students and teachers at SMP IT KHAIRUNNAS Bengkulu City, the supporting factors for student success in online learning are a stable internet connection, an adequate internet quota, communication tools such as cellphones and laptops, student interest and motivation, and the role of teachers and parents.
Based on the interviews above, supporting factors were identified for the implementation of online and face-to-face learning in science subjects at SMP IT KHAIRUNNAS Bengkulu City. Factors that support online learning include learning facilities (smartphones) and internet networks; online learning can be described as learning in an environment that uses devices such as mobile phones and laptops with internet access as support (Singh & Thurman, 2019). The supporting factors for face-to-face learning are blackboards, books, teaching aids, and direct interaction between students and teachers. Students consider face-to-face learning to make it easier for the teacher to deliver the material, and they understand science material better when it is explained directly.
According to Gikas & Grant (2013), online learning requires the support of mobile devices such as smartphones, tablets and laptops to access information anywhere and anytime. Purwanto et al. (2020) stated the importance of facilities such as laptops, computers or mobile phones for a smooth online teaching and learning process; these facilities make it easier for teachers to provide learning materials. To maximize these supporting factors, teachers can prepare learning media in the form of videos, follow students' progress in online learning as reported by parents via WhatsApp groups, and respond to the information requests and questions students raise during learning.
We conducted the evaluation at SMP IT KHAIRUNNAS Bengkulu City when the teacher was not in class and when the classroom was empty, so that the interviews could be conducted effectively, and we carried out the research daily for approximately one week, since by that time face-to-face learning had resumed smoothly.
Currently, students and teachers are required to be vaccinated to reduce the spread of the COVID-19 virus, which allowed us to conduct this research at SMP IT KHAIRUNNAS, Bengkulu City.
Conclusion
Based on the research conducted on the effectiveness of online learning during the COVID-19 pandemic in class VIII of SMP IT KHAIRUNNAS Bengkulu City, it was found that during online lessons many students were less active in the learning process and most tended to feel bored. Besides feeling that they did not understand the explanation of the material, they also neglected the tasks given by the teacher and began behaving badly, for example skipping class hours, falling asleep, and in some cases ignoring the online lessons given by the teacher altogether.
In addition to unclear material, students at SMP IT KHAIRUNNAS Bengkulu City were often hampered in their online learning by obstacles such as loss of signal during lessons, falling asleep in class, and, in some cases, not participating in learning at all. One of the steps taken by the science teachers so that the learning process could continue without burdening students was to use Zoom, PowerPoint and video media: Zoom allows teachers and students to meet face to face even though they are far apart, and WhatsApp was usually used to send learning materials and assignments, easing the process for students and teachers alike. In essence, then, students learn more effectively face-to-face than online, because online the science material presented by the teacher is unclear and students struggle to understand formulas from an explanation that is not written out, whereas face-to-face it is easier to explain the material and answer the questions given. We and the students hope that online learning activities will be stopped as soon as possible and face-to-face learning resumed, so that no more children neglect their assignments, the school, and the learning process.
Suggestion
In writing this article the researchers realize that there are still many shortcomings, both in theory and in other internal factors. Therefore, the researchers welcome constructive criticism.
Cardioprotective Effect of Paeonol and Danshensu Combination on Isoproterenol-Induced Myocardial Injury in Rats
Background Traditional Chinese medicinal herbs Cortex Moutan and Radix Salviae Milthiorrhizae are prescribed together for their putative cardioprotective effects in clinical practice. However, the rationale of the combined use remains unclear. The present study was designed to investigate the cardioprotective effects of paeonol and danshensu (representative active ingredients of Cortex Moutan and Radix Salviae Milthiorrhizae, respectively) on isoproterenol-induced myocardial infarction in rats and its underlying mechanisms. Methodology Paeonol (80 mg kg−1) and danshensu (160 mg kg−1) were administered orally to Sprague Dawley rats individually or in combination for 21 days. At the end of this period, rats were administered isoproterenol (85 mg kg−1) subcutaneously to induce myocardial injury. After induction, rats were anaesthetized with pentobarbital sodium (35 mg kg−1) to record the electrocardiogram, then sacrificed, and biochemical assays of the heart tissues were performed. Principal Findings Induction of rats with isoproterenol resulted in a marked (P<0.001) elevation in ST-segment, infarct size, levels of serum marker enzymes (CK-MB, LDH, AST and ALT), cTnI, TBARS, and protein expression of Bax and Caspase-3, and a significant decrease in the activities of endogenous antioxidants (SOD, CAT, GPx, GR, and GST) and protein expression of Bcl-2. Pretreatment with the paeonol and danshensu combination showed a significant (P<0.001) decrease in ST-segment elevation, infarct size, cTnI, TBARS, and protein expression of Bax and Caspase-3, and a significant increase in the activities of endogenous antioxidants and protein expression of Bcl-2 and Nrf2 when compared with the individually treated groups. Conclusions/Significance This study demonstrates the cardioprotective effect of the paeonol and danshensu combination on isoproterenol-induced myocardial infarction in rats.
The mechanism might be associated with the enhancement of antioxidant defense system through activating of Nrf2 signaling and anti-apoptosis through regulating Bax, Bcl-2 and Caspase-3. It could provide experimental evidence to support the rationality of combinatorial use of traditional Chinese medicine in clinical practice.
Introduction
Ischemic heart disease (IHD) is one of the most serious threats to human life, claiming 17 million lives per year [1]. Epidemiological studies indicate that IHD will constitute the major disease burden worldwide by the year 2020 [2]. Myocardial infarction (MI) is a common and life-threatening manifestation of IHD. It occurs when myocardial ischemia surpasses the critical threshold level for an extended time, resulting in irreversible necrosis of the myocardium [3]. Myocardial infarction is invariably followed by numerous pathophysiological and biochemical alterations, including hyperlipidemia, thrombosis, lipid peroxidation (LPO) and free radical damage, leading to qualitative and quantitative changes of the myocardium [4]. It has also been suggested that oxidative stress produced by free radicals or reactive oxygen species (ROS), as evidenced by a marked increase in the production of lipid peroxidative products associated with decreased levels of antioxidants such as superoxide dismutase (SOD), catalase (CAT) and reduced glutathione (GSH), plays a major role in myocardial damage during MI [5].
Isoproterenol (ISO), a synthetic catecholamine and beta-adrenergic agonist, has been found to produce MI in large doses due to the generation of highly cytotoxic free radicals, causing cardiac dysfunction, increased lipid peroxidation, and altered activities of cardiac enzymes and antioxidants, resulting in infarct-like necrosis of the heart muscle [6]. The pathophysiological and morphological alterations of the myocardium following ISO administration have been observed to be similar to those taking place in human MI [7]. Therefore, ISO-induced myocardial injury serves as a well standardized model to study cardiac functions and the beneficial effects of many drugs.
Radix Salviae Milthiorrhizae (root and rhizome of Salvia miltiorrhiza Bunge) and Cortex Moutan (root bark of Paeonia suffruticosa Andrew) are two traditional Chinese medicinal herbs widely used in China, Japan, Korea and India for their putative cardioprotective and cerebroprotective effects [8,9]. In clinical practice, Radix Salviae Milthiorrhizae and Cortex Moutan are commonly used in combination, known as the ShuangDan prescription (SD), for the treatment of angina pectoris, myocardial infarction, and other cardiac symptoms [10]. It has been reported that SD significantly decreased the infarct area of the heart and CK and CK-MB levels in serum, and modulated the expression of 23 oxidative-stress-related proteins in rats with coronary occlusion [11]. In order to evaluate the efficacy and to understand the mechanisms of the ShuangDan prescription, two representative active principles of the two herbs, paeonol (Pae) from Cortex Moutan and danshensu (DSS) from Radix Salviae Milthiorrhizae, were selected for study.
Paeonol (Pae, Fig. 1) is a phenolic acid compound extracted from Cortex Moutan. A number of studies have revealed that paeonol possesses many physiological activities, including vascular dilation [12], prevention of cardiovascular diseases [13] and improvement of arrhythmia [14]. According to several phytochemical studies, danshensu (DSS, Fig. 1) is abundant and structurally representative of the water-soluble active components of Radix Salviae Milthiorrhizae [15]. It has been demonstrated that danshensu reduces lipid peroxidation on mitochondrial membrane by scavenging free radicals, and inhibits permeability and transmission of mitochondrial membrane by reducing thiol oxidation [16,17].
We previously reported the interesting pharmacokinetic phenomenon that the combined use of paeonol plus danshensu dramatically increased the concentration of danshensu in rat plasma and the concentration of paeonol in rat heart and brain [18]. Pharmacological studies also revealed that paeonol and danshensu in combination have a synergistic protective effect on focal cerebral ischemia-reperfusion injury in rats, likely through improving blood hemorheology, decreasing oxidative injury and repairing the function of the blood vessel endothelium [19].
However, there is little information on the cardioprotective effects of danshensu and paeonol combination in vivo. The present study was designed to evaluate the cardioprotective effects of danshensu and paeonol combination on ISO-induced myocardial injury and to understand their underlying mechanism.
Animals and Ethics
Male Sprague-Dawley rats (220±20 g) were purchased from the Experimental Animal Research Center, the Fourth Military Medical University (Xi'an, China). The animals were maintained in an individually ventilated cage (IVC) system (12 h light/dark cycle, 20.3–23.1 °C and 40–50% humidity) during the experiment and fed with standard laboratory food and water ad libitum. There were no significant differences in the body weights of the treated rats when compared with control at the beginning of the study period. The treated rats did not offer any abnormal resistance to drug administration. The treatment schedule did not cause any change in food and water intake patterns. The experimental protocol (2011-0922-R) involving animals was reviewed and approved by the Institutional Animal Care and Use Committee of the Fourth Military Medical University.
Induction of myocardial injury
Myocardial injury was induced in experimental rats by injection of 85 mg kg−1 of ISO daily for 2 consecutive days.
Pilot study for dose fixation
Paeonol at doses of 20, 40, 80 and 120 mg kg−1 day−1 and danshensu at doses of 40, 80, 160 and 240 mg kg−1 day−1 were screened to determine the dose-dependent effect of the paeonol and danshensu combination in ISO-induced myocardial infarction in rats. The optimum dose exhibiting the maximum cardioprotective effect over 21 days was evaluated by estimating electrocardiograph abnormalities (determined as ST-segment), serum lactate dehydrogenase (LDH), creatine kinase (CK-MB), tissue lipid peroxidation and reduced glutathione content. Paeonol (80 mg kg−1 day−1, i.g.) and danshensu (160 mg kg−1 day−1, i.g.) were found to be most effective in functional recovery of biochemical alterations (data not shown). Hence these doses were selected for further evaluation (alone as well as in combination) in the present studies. The dose of isoproterenol was selected on the basis of reported literature [20].
Experimental Design and Protocol
The experimental animals were divided into five groups of eight rats each.
Group II (ISO group): rats received 0.3% CMC-Na solution for a period of 21 days and ISO (85 mg kg−1, s.c.) in normal saline on the 20th and 21st day at an interval of 24 h.
Group III (Pae+ISO group): rats received paeonol (80 mg kg−1 day−1, i.g.) for a period of 21 days and ISO on the 20th and 21st day.
Group IV (DSS+ISO group): rats received danshensu (160 mg kg−1 day−1, i.g.) for a period of 21 days and ISO on the 20th and 21st day.
Paeonol and danshensu were suspended in 0.3% CMC-Na solution. Control and ISO treated group received equal quantity of vehicle.
At the end of the experimental period, rats were anesthetized with pentobarbital sodium (35 mg kg−1, i.p.), and needle electrodes were inserted under the skin of the animals in the lead II position. Electrocardiograph recordings were made using a BL-420S Biologic Function Experiment system (Technology & Market Co., Ltd., Chengdu, China) and ST-segment elevation or depression (expressed in mV) in normal and experimental animals was considered.
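Conceptually, the ST-segment deviation reported above is the voltage difference between a short window after the QRS complex and the isoelectric baseline before it. Purely as a hypothetical illustration (not the BL-420S system's algorithm), the measurement on a digitized single-lead trace could be sketched as follows; the fixed window offsets are illustrative assumptions:

```python
import numpy as np

def st_deviation(ecg_mv, fs, r_peak, pr_offset=0.08, st_offset=0.08, win=0.02):
    """ST deviation (mV) for one beat: mean voltage in a short window
    starting `st_offset` s after the R peak, minus the isoelectric baseline
    taken from a window ending `pr_offset` s before the R peak.
    Offsets are illustrative defaults, not validated fiducial points."""
    w = int(win * fs)
    st_start = r_peak + int(st_offset * fs)
    base_start = r_peak - int(pr_offset * fs) - w
    st = ecg_mv[st_start:st_start + w].mean()
    baseline = ecg_mv[base_start:base_start + w].mean()
    return st - baseline

# Synthetic beat sampled at 500 Hz: flat 0 mV baseline, with the
# post-R segment raised by 0.3 mV to mimic ST elevation
fs = 500
ecg = np.zeros(fs)                      # one second of signal
r_peak = 250
ecg[r_peak + int(0.08 * fs):] = 0.3
print(st_deviation(ecg, fs, r_peak))    # ≈ 0.3 mV of ST elevation
```

In practice the J-point and baseline segment would be located per beat and averaged over many beats; the fixed offsets here only serve to show the arithmetic.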
After recording the ECG, blood was collected from the abdominal aorta and allowed to clot for 1 h at room temperature. Serum was subsequently separated by centrifugation at 3500×g for 15 min and stored at −80 °C for biochemical assays. After blood collection, rats were sacrificed by cervical decapitation. Heart tissue was excised immediately and rinsed in ice-cold normal saline, then homogenized with an IKA T10 Basic homogenizer (Staufen, Germany) in 0.05 M ice-cold phosphate buffer (pH 7.4, 1:10 w/v). The homogenate was centrifuged at 12000×g for 10 min at 4 °C and the supernatant was stored at −80 °C for the estimation of various biochemical parameters.
Measurement of Myocardial Infarct Size
The direct triphenyl tetrazolium chloride (TTC) assay according to the method of Lie et al. [21] was used to determine myocardial infarct size. In brief, the heart was transversely cut across the left ventricle, and sections 2 mm to 3 mm thick were incubated in 1% TTC solution prepared in phosphate buffer (pH 7.4) for 30 min at 37 °C, following which they were fixed with 10% formalin. The non-ischemic myocardium and viable ischemic myocardium were stained red, while the infarcted myocardium appeared pale grey or white. The slices were photographed using a digital camera, and the % infarction was analyzed using the computerized Image-Pro Plus 6.0 software (Media Cybernetics Inc, Silver Spring, MD, USA).
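The planimetric quantification was performed with Image-Pro Plus 6.0. Purely as a hypothetical sketch of the underlying calculation, the unstained (pale) fraction of a slice photograph can be estimated by classifying tissue pixels with a crude colour threshold; the thresholds and the channel-ratio rule below are illustrative assumptions, not the software's method:

```python
import numpy as np

def infarct_percentage(rgb, red_dominance=1.2):
    """Estimate % infarction from an RGB slice photo (uint8, H x W x 3).

    Pixels where the red channel clearly dominates green are treated as
    TTC-stained viable tissue; the remaining tissue pixels (pale grey/white)
    are counted as infarcted. Both thresholds are illustrative, not validated.
    """
    rgb = rgb.astype(float)
    r, g = rgb[..., 0], rgb[..., 1]
    tissue = rgb.sum(axis=-1) > 60             # crude background exclusion
    viable = tissue & (r > red_dominance * g)  # red-stained (viable) pixels
    infarct = tissue & ~viable                 # pale (infarcted) pixels
    return 100.0 * infarct.sum() / max(tissue.sum(), 1)

# Tiny synthetic example: half red tissue, half pale tissue
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0] = [200, 60, 60]      # red-stained row -> viable
img[1] = [200, 200, 200]    # pale row -> infarcted
print(infarct_percentage(img))  # -> 50.0
```

A real pipeline would also exclude the ventricular cavity and calibrate the thresholds against manually traced slices.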
Assay of cardiac marker enzymes
Activities of creatine kinase-MB (CK-MB), lactate dehydrogenase (LDH), aspartate aminotransferase (AST) and alanine aminotransferase (ALT) in the serum were measured using commercial kits (Biosino bio-technology and science Inc., Beijing, China).
Estimation of cardiac troponin I
The level of cardiac troponin I (cTnI) in serum was estimated using a standard kit by enzyme-linked immunosorbent assay (Jiancheng bio-technology and science Inc., Nanjing, China).
Estimation of lipid peroxidation products and antioxidants
Tissue lipid peroxide level in the heart was determined as thiobarbituric acid reactive substances (TBARS) by the method of Ohkawa et al. [22] using a reagent kit (BioAssay Systems, CA, USA). The activities of superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), glutathione reductase (GR) and glutathione-S-transferase (GST) were assayed by the methods of Oyanagui [23], Goth [24], Rotruck et al. [25], Pinto and Bartley [26] and Habig et al. [27], respectively. The level of reduced glutathione (GSH) was estimated by the method of Ellman [28]. The protein content of the heart homogenate was determined by the method of Bradford [29].
Histopathological examination
After sacrifice, the cardiac apex was rapidly dissected out, washed immediately with ice-cold normal saline and fixed in 10% buffered formalin. The fixed tissues were embedded in paraffin, sectioned at 5 µm and stained with hematoxylin and eosin (H&E). The sections were examined under a light microscope and photomicrographs were taken with a Zeiss Axioskop 40 photomicroscope at ×200 magnification.
Immunohistochemistry
The paraffin sections, 5 µm thick, were deparaffinized in xylene, dehydrated in a graded series of ethanol, and subjected to antigen retrieval in phosphate-buffered saline (PBS) at 92–98 °C for 5 min. The sections were allowed to cool to room temperature, washed in PBS, then incubated in 3% H2O2 in 0.01 M PBS for 10 min at room temperature, and in 5% BSA for 20 min at 37 °C. Next, sections were incubated overnight at 4 °C with the primary antibodies anti-Nrf2 (1:100, Bioworld Technology, Inc., USA), anti-Bax (1:200, Epitomics, Inc., USA), anti-Bcl-2 (1:250, Cell Signaling Technology, Inc., USA) and anti-Caspase-3 (1:200, Santa Cruz Biotechnology, Inc., USA). Negative controls included omission of the primary antibody and use of PBS. Sections were then rinsed (PBS) and incubated for 1 h with a species-matched peroxidase-conjugated secondary antibody (Boster Biotechnology, China). The reaction was visualized with a solution of diaminobenzidine (DAB). For quantification, the integral optical density (IOD) of Nrf2, Bax, Bcl-2 and Caspase-3 staining was calculated with the computerized Image-Pro Plus 6.0 software (Media Cybernetics Inc, Silver Spring, MD, USA).
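The integral optical density was computed here with Image-Pro Plus 6.0. For orientation only, a simplified sketch of what an IOD measurement amounts to: per-pixel optical density OD = log10(I_background / I), summed over stained pixels. The background intensity and OD threshold below are illustrative assumptions, not the software's settings:

```python
import numpy as np

def integral_optical_density(gray, background=255.0, threshold=0.1):
    """Rough IOD of a stained grayscale image: per-pixel optical density
    OD = log10(background / intensity), summed over pixels whose OD exceeds
    a small threshold. Threshold and background value are illustrative."""
    gray = np.clip(np.asarray(gray, dtype=float), 1.0, background)
    od = np.log10(background / gray)
    return od[od > threshold].sum()

# Synthetic 8-bit patch: mostly background (255) with one darker stained spot
img = np.full((10, 10), 255.0)
img[4:6, 4:6] = 64.0   # 4 stained pixels, OD = log10(255/64) each
print(integral_optical_density(img))
```

Real DAB quantification would first separate the DAB signal from hematoxylin counterstain (e.g. by colour deconvolution) before summing optical densities.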
Statistical analysis
Results are shown as mean ± S.D. One-way ANOVA was carried out, and statistical comparisons among the groups were performed with Tukey's test using the GraphPad Prism 5.0 statistical package (GraphPad Software, Inc., La Jolla, CA, USA). P-values less than 0.05 were considered statistically significant.
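The first stage of this pipeline (one-way ANOVA across groups) can be reproduced outside GraphPad Prism; a minimal sketch with SciPy, using fabricated illustrative values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Illustrative marker-enzyme-like values, n = 8 rats per group as in the
# study design; these numbers are fabricated for demonstration only.
rng = np.random.default_rng(0)
control = rng.normal(100, 10, 8)
iso = rng.normal(180, 15, 8)
pae_dss_iso = rng.normal(120, 12, 8)

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, iso, pae_dss_iso)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")
```

The pairwise Tukey comparisons used in the paper can then be run on the pooled values and group labels, for example with statsmodels' `pairwise_tukeyhsd` at alpha = 0.05.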
Results
Effect of paeonol and danshensu on heart weight, body weight and electrocardiograph parameters
The mean body weight of rats at the end of the experimental period showed no significant change in any experimental group (Table 1), although ISO-treated rats showed a slight, non-significant reduction in body weight. The heart weight and the ratio of heart weight to body weight were increased significantly (P<0.001) in ISO-administered groups when compared with normal control groups. Rats pre-co-treated with the combination of paeonol and danshensu showed a significant (P<0.001) reduction in the heart weight and the ratio as compared to ISO-treated groups. ISO-administered rats showed marked ST-segment elevation (Fig. 2). Pretreatment with paeonol or danshensu in ISO-administered rats showed a significant (P<0.001) decrease in ST-segment as compared to ISO-administered rats. Pretreatment with the paeonol and danshensu combination in ISO-administered rats showed a significant (P<0.001) decrease in ST-segment as compared to the ISO, Pae+ISO or DSS+ISO treated groups.
Effect of paeonol and danshensu on myocardial tissue damage
Representative illustrations of infarct size as stained by TTC are shown in Fig. 3. While ISO administration produced a large unstained area, the heart slices of the Pae+DSS+ISO-treated rats exhibited a major portion stained positively, showing tissue viability. Fig. 3 also indicates the increased infarct area in the ISO-administered group (33.63%), which was significantly (P<0.001) reduced to 18.64% with the combined pretreatment of paeonol and danshensu.
Effect of paeonol and danshensu on serum marker enzymes
As shown in Table 2, there was a significant (P<0.001) rise in the levels of diagnostic marker enzymes (CK-MB, LDH, AST and ALT) in the serum of ISO-administered rats. During myocardial infarction, these enzymes are released into the blood stream. Pretreatment with paeonol and danshensu in combination showed a significant (P<0.001) reduction in the levels of all serum diagnostic marker enzymes compared to the ISO, Pae+ISO and DSS+ISO groups.
Effect of paeonol and danshensu on cTnI
Fig. 4 shows the level of cardiac troponin I (cTnI) in the serum of normal and isoproterenol-induced rats. Rats induced with isoproterenol showed a significant (P<0.001) elevation in the level of cTnI in serum compared to normal control rats. Pretreatment with paeonol and danshensu in combination showed a significant (P<0.001) decrease in the level of serum cTnI when compared to the ISO, Pae+ISO and DSS+ISO groups.
Effect of paeonol and danshensu on lipid peroxidation and antioxidant enzymes
The myocardial TBARS and GSH levels and the activities of enzymic antioxidants such as SOD, CAT, GPx, GR and GST in the hearts of normal and ISO-administered rats are shown in Fig. 5 and Table 3, respectively. Rats administered ISO showed a significant (P<0.001) increase in the level of TBARS, while there was a significant (P<0.001) decrease in GSH, SOD, CAT, GPx, GR and GST as compared to control groups. The combination of paeonol and danshensu along with ISO showed significantly (P<0.001) better protection than the individual treatment groups in mitigating the parameters of oxidative stress.
Effect of paeonol and danshensu on histological changes
Histopathological observations of control rat heart showed a normal myofibrillar structure with striations, branched appearance and continuity with adjacent myofibrils (Fig. 6 A). ISO-induced rats revealed marked myofibrillar degeneration with infiltration of neutrophil granulocytes and interstitial edema (Fig. 6 B). The tissue sections from all treatment groups [Pae+ISO (Fig. 6 C), DSS+ISO (Fig. 6 D) and Pae+DSS+ISO (Fig. 6 E)] showed some infiltration with neutrophil granulocytes, interstitial edema and some discontinuity with adjacent myofibrils but the morphology of cardiac muscle fibers was relatively well preserved with no evidence of focal necrosis when compared to ISO-induced group. Combination of paeonol and danshensu showed a better morphology than individual treatment groups.
Effect of paeonol and danshensu on protein expression of Nrf2, Bax, Bcl-2 and Caspase-3
The protein expression of Nrf2, Bax, Bcl-2 and Caspase-3 in the hearts of normal and ISO-administered rats is shown in Fig. 7. Immunohistochemical analysis showed that ISO injection significantly (P<0.05 or P<0.001) increased the expression of Nrf2, Bax and Caspase-3 protein and decreased the expression of Bcl-2 protein in the myocardium when compared to control rats. However, with treatment with the combination of paeonol and danshensu, the expression of Bax and Caspase-3 was down-regulated and that of Bcl-2 and Nrf2 up-regulated (P<0.001) when compared to the ISO group.
Discussion and Conclusions
ISO in supramaximal doses induces morphological and functional alterations in the heart leading to subendocardial myocardial ischemia, hypoxia, necrosis, and finally fibroblastic hyperplasia with decreased myocardial compliance and inhibition of diastolic and systolic function, which closely resembles the local myocardial infarction-like pathological changes seen in human myocardial infarction [30]. Generation of highly cytotoxic free radicals through auto-oxidation of catecholamines has been implicated as one of the important causative factors in isoproterenol-induced myocardial damage [31]. It has been reported that auto-oxidation of excess catecholamines results in free radical-mediated peroxidation of membrane phospholipids, consequently leading to permeability changes in the myocardial membrane, intracellular calcium overload and irreversible damage [6,32].
In the present study, we observed a significant increase in the heart weight and the ratio of heart weight to body weight in ISO-induced rats. Patel et al. [33] have reported that the observed increase in the heart weight in ISO-induced rats might be due to increased water content, oedematous intramuscular space and extensive necrosis of cardiac muscle fibres followed by the invasion of damaged tissues by inflammatory cells. Combined pretreatment with paeonol and danshensu significantly decreased the heart weight in ISO-induced rats.
Table 1. Effect of paeonol and danshensu on heart weight, body weight and heart weight/body weight ratio in rats.
The main criterion generally used for the definitive diagnosis of MI is the evolving pattern of electrocardiograph (ECG) abnormalities. ST-segment elevation reflects the potential difference at the boundary between ischemic and non-ischemic zones and the consequent loss of cell membrane function. It has been observed both in patients with acute myocardial ischemia [34] and in isoproterenol-induced myocardial infarction in rats [33]. In the present study, we noted a significant elevation of ST-segments in ISO-induced rats. The observed ST-segment elevation might be due to myocardial necrosis accelerated by ISO, which is consistent with the observations of earlier reports [33]. Pretreatment with paeonol and danshensu in combination markedly inhibited isoproterenol-induced ST-segment elevation, suggestive of its cell-membrane-protecting effects.
The extent of myocardial infarction is detected by direct staining with TTC dye, which forms a red formazan precipitate in the presence of intact dehydrogenase enzyme systems, whereas the infarcted myocardium lacks dehydrogenase activity and therefore fails to stain. The area of infarction may relate to leakage of dehydrogenases and loss of membrane integrity [35]. ISO-induced rats showed an increase in myocardial infarct size with less TTC-absorbing capacity, indicating significant leakage of dehydrogenases from the myocardium. This ISO-induced loss of dehydrogenases was counteracted by the co-administration of paeonol and danshensu, as a significantly decreased infarct size was observed in the Pae+DSS+ISO group, further supporting the better protection from cardiac damage afforded by the combination.
Cytosolic enzymes, namely CK-MB, LDH, AST and ALT, serve as sensitive indices to assess the severity of myocardial infarction. Increased activities of these marker enzymes in the serum are indicative of cellular damage and loss of functional integrity and/or permeability of the cell membrane [36]. Moreover, recent data have indicated that measurement of cardiac troponin I (cTnI), a low-molecular-weight contractile protein normally not found in serum but released when myocardial necrosis occurs, may be even more significant in diagnosing acute MI and for risk prediction in subsequent infarction [37]. In the present study, the significant increase observed in the activities of CK-MB, LDH, AST and ALT and the level of cTnI in the serum of ISO-induced rats may be due to their leakage from the heart as a result of necrosis induced by ISO. That is, the cardiac membrane becomes permeable or may rupture, due to deficient oxygen or glucose supply, thereby resulting in the leakage of enzymes and/or cTnI [38]. The combination of paeonol and danshensu seems to preserve the structural and functional integrity and/or permeability of the cardiac membrane and thus restrict the leakage of these indicative enzymes and cTnI from the myocardium, as evident from the markedly blunted levels of these enzymes and cTnI in the Pae+DSS+ISO group when compared to the individual treatment groups, thereby establishing the cardioprotective effect of the combination of paeonol and danshensu.
Histopathological examination of myocardial tissue in the control group illustrated clear integrity of the myocardial cell membrane.
Table 2. Effect of paeonol and danshensu on serum marker enzymes in rats.
Lipid peroxidation has been defined as the oxidative deterioration of polyunsaturated lipids. It occurs constantly at a low level in most cellular biological systems. Oxygen-derived free radicals can react with lipids, if not blocked by sufficient antioxidant molecules, to form lipid peroxides, which do extensive damage [39]. Since the major constituents of biological membranes are lipids, their peroxidation can lead to cell damage and death. The significant increase in the levels of lipid peroxidation products in ISO-induced rats appears to be the initial insult that renders the tissue more susceptible to oxidative damage. Increased production of free radicals may be responsible for the observed membrane damage, as evidenced by the elevated lipid peroxidation in terms of TBARS in the present study.
GSH is a tripeptide with a direct antioxidant function: it reacts with superoxide radicals, peroxy radicals and singlet oxygen, forming oxidized GSH and other disulfides. It plays an important role in the regulation of a variety of cell functions and in cell protection against free-radical-mediated injury [5,6,40]. Thus, a reduction in cellular GSH content could impair recovery after a short period of ischemia, and depressed GSH levels may compromise the protective mechanism against oxidative stress in MI. In this study, ISO administration was found to reduce the level of GSH in cardiac tissue, an observation that concurs with several earlier findings [20,32,33]. The decreased GSH level might be due to its increased utilization in protecting 'SH'-containing proteins from lipid peroxides. Pre-co-treatment with the combination of paeonol and danshensu decreased the levels of lipid peroxides (in terms of TBARS) while increasing the level of GSH in the hearts of ISO-induced cardiotoxic rats compared with the individual treatment groups. This shows the anti-lipid-peroxidative effect of the combination of paeonol and danshensu against injury caused by free radicals. In this context, previous studies have shown that danshensu and paeonol exerted significant GSH-elevating and MDA-scavenging effects in a PK-PD model of MI [41] and in a D-gal-induced cognitive impairment model, respectively [42].
Auto-oxidation of ISO produces quinones, which react with oxygen to produce superoxide anions and hydrogen peroxide, leading to oxidative stress and depletion of the endogenous antioxidant system. Free-radical-scavenging enzymes such as SOD, CAT, GPx, GR and GST are the first line of cellular defense against oxidative injury, decomposing O2− and H2O2 before they can interact to form the more reactive hydroxyl radical [5,40]. The equilibrium between the enzymatic antioxidants and free radicals is an important process for the effective removal of oxidative stress in intracellular organelles [43]. However, in pathological conditions such as MI, the generation of ROS can dramatically upset this balance and place an increased demand on the antioxidant defense system [5]. In this study, significantly lower activities of the enzymes SOD and CAT were observed in the hearts of ISO-administered rats compared with control rats, which is consistent with findings in a number of earlier studies [20,32]. The decrease in the activities of SOD and CAT might be due to their increased utilization for scavenging ROS and to their inactivation by excessive ISO-derived oxidants. The activities of the GSH-dependent antioxidant enzymes (GPx, GST and GR) also declined in the hearts of ISO-administered rats. GPx protects cellular and subcellular membranes from peroxidative damage by eliminating hydrogen peroxide and lipid peroxides. The lowered activities of GPx and GST in the hearts of ISO-administered rats may be due to the reduced availability of GSH. Decreased activities of these enzymes lead to the accumulation of oxidants and make myocardial cell membranes more susceptible to oxidative damage. Inactivation of GR in the heart leads to accumulation of oxidized glutathione (GSSG), the oxidized product of GSH. GSSG inactivates enzymes containing SH groups and inhibits protein synthesis [44].
Pretreatment with the combination of paeonol and danshensu significantly increased the activities of SOD, CAT, GPx, GR and GST compared with the individual treatment groups.
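Group comparisons of this kind (reported in the figures as mean ± S.D. and tested with one-way ANOVA) rest on the F statistic, which can be sketched in a few lines of plain Python. The group values below are synthetic placeholders, not the study's data.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand_mean = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic enzyme-activity-like values for three hypothetical groups
control = [10.1, 9.8, 10.4, 10.0, 9.9]
iso = [18.2, 17.5, 19.0, 18.8, 17.9]
pae_dss_iso = [12.0, 11.5, 12.4, 11.8, 12.1]

f_stat = one_way_anova_f(control, iso, pae_dss_iso)
print(f"F = {f_stat:.1f}")  # a large F indicates clear between-group separation
```

The p-value is then obtained from the F distribution with (k − 1, n − k) degrees of freedom; statistical packages handle that step directly.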
Some recent studies have reported changes in the transcriptional regulation of antioxidant genes in humans and in mouse models of disease [45,46]. Evidence indicates that nuclear factor erythroid 2-related factor 2 (Nrf2) is the primary transcriptional regulator of the majority of these antioxidants, including SOD, GPx, GST, GR and catalase [47]. Suh et al. recently reported that a decrease in nuclear Nrf2 expression could be linked with down-regulation of the transcription and translation products of GCLC and GCLM (the rate-limiting enzyme for biosynthesis of GSH) [48]. Moreover, emerging evidence has revealed that Nrf2 signaling plays a key role in preventing oxidative cardiac cell injury in vitro [49]. In the present study, we observed elevated Nrf2 levels in the infarct border zone in the MI group. Significantly, the Nrf2 levels were further enhanced by paeonol and danshensu treatment. Thus, we presume that activation of Nrf2-related signaling is likely to be, at least partly, responsible for the protective effect of danshensu and paeonol against ISO-induced MI.
It is well established that both necrosis and apoptosis are involved in the acute stage of MI [4]. The interaction between pro- and anti-apoptotic proteins of the Bcl-2 family integrates different death and survival signals to decide the fate of the cells [50]. During acute MI, the formation of ROS in infarction areas can exceed the anti-oxidative capacity and hence initiate mitochondrial apoptotic signaling and activate Bax, a pro-apoptotic member of the Bcl-2 family of proteins [51][52][53]. In healthy cells, the majority of Bax is localized in the cytosol, but upon initiation of apoptotic signaling, activated Bax rapidly translocates to the mitochondria and undergoes a conformational shift, as well as interacting with Bak to form protein-permeable pores in mitochondrial membranes [52,53]. This results in the release of cytochrome C and other pro-apoptotic factors from the mitochondria to the cytosol. Cytochrome C in turn stimulates Apaf-1 and Caspase-9 to form apoptosomes, and then activates Caspase-3, leading to cell death [52,54].
Figure 7. Graph A showing the representative immunohistochemical staining of Nrf2 (a1, a2, a3, a4 and a5), Bax (b1, b2, b3, b4 and b5), Bcl-2 (c1, c2, c3, c4 and c5) and Caspase-3 (d1, d2, d3, d4 and d5) expression in myocardium. Control group (a1, b1, c1 and d1), ISO group (a2, b2, c2 and d2), Pae+ISO group (a3, b3, c3 and d3), DSS+ISO group (a4, b4, c4 and d4) and Pae+DSS+ISO group (a5, b5, c5 and d5). Bar graphs B, C, D and E showing the levels of Nrf2, Bax, Bcl-2 and Caspase-3 protein in myocardium, respectively, expressed as the integral optical density (IOD). Values are expressed as mean ± S.D. (n = 5). ??? P<0.05 vs. Control, * P<0.001 vs. Control; & P<0.05 vs. ISO, # P<0.001 vs. ISO; % P<0.01 vs. Pae+ISO, @ P<0.001 vs. Pae+ISO; ¥ P<0.01 vs. DSS+ISO, $ P<0.001 vs. DSS+ISO (one-way ANOVA). doi:10.1371/journal.pone.0048872.g007
On the contrary, over-expression of the anti-apoptotic Bcl-2 family protein has been shown to intercept the release of cytochrome C in response to a variety of apoptotic signals and therefore inhibits apoptosis. In our study, the Bcl-2 expression level was markedly decreased during the process of MI; however, the paeonol and danshensu combination-treated group showed inhibition of the degradation of Bcl-2 protein in infarct regions. In contrast, increased levels of Bax and Caspase-3 protein were observed in the MI groups. The upregulation of Bax and Caspase-3 was inhibited by treatment with paeonol and danshensu in combination. Our data suggest that early treatment with danshensu and paeonol in combination might inhibit cardiomyocyte apoptosis, thus providing an anti-apoptotic strategy against MI damage.
In conclusion, our study reveals that the combination of paeonol and danshensu exerts significant cardioprotective effects against ISO-induced myocardial infarction in rats. This myocardial protective effect could be associated with enhancement of the antioxidant defense system through activation of the Nrf2-related pathway and with anti-apoptotic activity through regulation of Bax, Bcl-2 and Caspase-3. Furthermore, this protective effect provides experimental evidence supporting the rationality of the combinatorial use of traditional Chinese medicine in clinical practice.
Higher Education in Public Health as a Tool to Reduce Disparities: Findings from an Exploratory Study among the Bedouin Community in Israel
The Bedouin community is a disadvantaged minority population in Israel that suffers from a variety of health and socioeconomic disparities and limited access to higher education. The current study aimed to examine perceptions, successes, and challenges experienced by Bedouin students during their studies and to assess an internship program developed on the principles of a community-based participatory research approach to public health. In-depth interviews were conducted with 34 Bedouin students studying in the public health academic track between January and April 2023. Grounded Theory was used to analyze the data. Three main themes emerged from the analysis: (1) facilitators for the decision to pursue higher education in public health, (2) challenges and coping strategies, and (3) experiences of success. The internship program included eleven Bedouin students who conducted six community intervention projects covering a range of topics with different target Bedouin populations. Higher education is crucial for empowering minorities, producing leadership, and reducing socioeconomic and health gaps. The field internship enabled the necessary alignment between academia and public health practice. It is important to further reflect on the integration of minority groups in public health studies and its role in decreasing health inequity.
Introduction
The Bedouin population in southern Israel is one of the largest minority groups in Israeli society, constituting about 3% of the total population and 14% of the Arab population in Israel [1,2]. Residing in traditional and tribal villages, located in the periphery and characterized by a unique patriarchal social structure, the Bedouin community suffers from a variety of disparities as compared to other minority groups in Israel. The low socioeconomic and educational levels, limited availability of medical services, limitations of geographic mobility due to limited public transportation in the Bedouin villages, and language and socio-cultural barriers lead to difficulties in obtaining appropriate health services and thus contribute to significant health inequities [3][4][5][6].
There is an integral relationship between education and health within the structural and contextual frameworks of society, and it plays a fundamental role in the general wellbeing of individuals and societies [7]. Education affects health through various mediated pathways, including improving employment and economic status [8], enhancing health behaviors [9], developing better social-psychological supportive circles in life [10], and improving access to healthcare [7].
Health Disparities among the Bedouin Population in Israel
It is widely acknowledged that health disparities are highly prevalent in Bedouin villages due to poor living conditions as well as cultural and socioeconomic factors. About half of the Bedouin population lives in unrecognized villages with poor sanitation and limited basic infrastructure such as electricity, water supply, and access to healthcare services [11]. As a result, poor health outcomes are common among the Bedouin population, including a high rate of congenital disorders due to consanguineous marriages [12,13], a high rate of infant mortality [14], an increasing rate of type 2 diabetes mellitus causing lower life expectancy compared to Israeli Jews [15], a higher level of psychological distress [16], a predominance of injuries among Bedouin males [17], and more.
There are various efforts to decrease health disparities and increase health equity for the Bedouin population in Israel, primarily employed by the Ministry of Health, health maintenance organizations, and non-governmental organizations in Israel. Among these efforts are the expansion of mother and child clinics in the Bedouin villages, health promotion workshops for Bedouin women, and the training of nurses and other health professionals in the Bedouin population, such as nurse case managers for Bedouin diabetes patients [18]. Despite these efforts, additional steps are necessary to significantly improve health outcomes, including using culturally sensitive, health-focused continuous interventions.
Higher Education among the Bedouin Population in Israel
Higher education is important for empowering minority development, expanding the leadership required for socioeconomic development, and reducing health gaps [19]. However, evidence shows that the Bedouin population's access to higher education is limited and challenging. There are several barriers to higher education among the Bedouin population; of significance are poor education services and a lack of resources at the pre-university stage. This includes, among other things, insufficient guidance for choosing an academic track. Due to this, many young Bedouins tend to choose the same study tracks, primarily academic programs in teaching and social sciences, although many of them prefer health sciences and medical programs. Therefore, they are less likely to complete their university degrees [19,20]. Additional barriers to higher education among the Bedouin population include the admissions procedures in Israeli universities, which are challenging for young Bedouins [21], language barriers, the lack of sufficient financial resources for university studies, cultural and traditional gender roles, and physical access barriers [22].
One of the latest intervention programs to decrease inequity in higher education among the Bedouin population is called "Gateway to Academia", developed and implemented by the Council of Higher Education in Israel as of 2016. The program assists Bedouins in integrating into higher education studies, even if they do not meet admissions requirements, by providing a one-year pre-undergraduate program taught in small group classes focusing on extra English and Hebrew language training and academic literacy, alongside financial and social support [23,24]. Recent assessments of the program outcomes reveal high cognitive, economic, emotional, and social satisfaction among Bedouin students participating in the program [25].
Integrating Bedouins in Public Health
As part of the "Gateway to Academia" program, as of 2019, young Bedouins were integrated into an undergraduate degree in public health at the Ashkelon Academic College (AAC). The undergraduate public health track was established in 2014 at AAC, located in southern Israel. The process, challenges, and achievements of this unique public health track are described in a previous article [26].
The Bedouin students enrolled in the "Gateway to Academia" program at AAC are provided with academic skills, social and financial support, and a few basic introductory courses in health sciences. At the end of the pre-undergraduate one-year program, the students choose one of the three academic tracks included in the AAC School of Health Sciences: Nutrition, Nursing, and Public Health. To date, 34 Bedouin students study in the Public Health undergraduate track. To assist the Bedouin students in successfully integrating into the undergraduate public health track, we implemented several tools, including periodic personal follow-up meetings of academic staff with Bedouin students, strengthening the social relationships between Jewish and Bedouin students through various activities, mentoring programs, personal tutoring by graduate students, and providing additional practice hours for Bedouin students. Moreover, in 2022-23, an internship program was developed and implemented with the participation of Bedouin students to integrate public health students in the field and reduce health disparities in their communities.
After four years of participation in the public health track, an understanding of the integration of Bedouin students is important for future planning and expansion. Therefore, the current study aimed to examine perceptions, needs, successes, and challenges experienced by Bedouin students during their studies. In addition, we describe the implementation of an internship program developed on the principles of a community-based participatory research approach to public health to mitigate health and education disparities among the Bedouin community. Both the interviews and the internship program lay the groundwork for the future development of interventions to alleviate gaps in education and health among minority groups.
Materials and Methods
This exploratory study was conducted to gain in-depth insights into how Bedouin students perceive their experiences in the public health academic track. The research includes qualitative in-depth interviews and a review of intervention projects implemented by Bedouin students in their communities. The Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist was used to report the study [27]. The study was approved by the Ashkelon Academic College Ethics Committee (Approval # 57-2023).
Interview Participants
Interview participants were recruited using purposive sampling. In purposive sampling, researchers select individuals who meet specific prescribed criteria [28]. The criterion was being a Bedouin student in the Public Health undergraduate track. All 34 Bedouin students studying in the public health academic track at the AAC were recruited for the study and were interviewed between January and April 2023. Interviewees included 10 men and 24 women; 9 of them were first-year students, 10 were second-year students, and 15 were third-year students. The Bedouin students completed matriculation in local high schools. While the high schools in the Bedouin communities often have poorer education services, the students all had the opportunity to participate in the "Gateway to Academia" program at AAC in order to reduce gaps and allow them to qualify according to the same admission standards as students from other backgrounds. In the public health track, all of the Bedouin students are at the 35th percentile of scores. The students' ages ranged between 19 and 23. Participants gave informed consent for inclusion in the study and were informed about the methods used to protect the data for anonymity and privacy.
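The two breakdowns of the sample given above (10 men and 24 women; 9 first-year, 10 second-year and 15 third-year students) can be checked for consistency with a few lines of Python; the tallies are taken directly from this section.

```python
# Participant counts as reported in this section
by_gender = {"men": 10, "women": 24}
by_year = {"first year": 9, "second year": 10, "third year": 15}

total = sum(by_gender.values())
# Both breakdowns should cover all 34 interviewed students
assert total == sum(by_year.values()) == 34

for group, count in by_year.items():
    print(f"{group}: {count} students ({count / total:.0%} of the sample)")
```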
Qualitative Data Collection
Data were collected using a semi-structured interview format. The topics considered in the development of the interview guide included the decision to pursue higher education and to study public health, challenges and unmet needs during their studies, sources of support, and successful experiences.
The interview guide comprised non-directive and open-ended questions about perceptions and experiences during academic studies (Appendix A). The wording and order of the questions were adapted according to the interview dynamics to maintain continuity and flow and to encourage openness of interviewees. The content validation method was used to ensure that the questions in the guide were relevant to the study goals. The guide was pilot tested with two students to ensure a smooth interview flow and verify comprehension of the questions.
The interviewer was a third-year student in public health, trained in qualitative research methods and supervised by the corresponding authors. All interviews were conducted face to face, individually, at the college, and lasted 30-60 min. Interviews were audiotaped and transcribed verbatim in Hebrew in a standardized format. It was emphasized to all interviewees that their details would remain confidential, that they did not have to answer all the questions, and that they could stop the interview at any time. In addition, all interviewees approved the recording and transcription of their interview.
Qualitative Data Analysis
Interview transcripts were analyzed by the authors using a thematic analysis method based on Grounded Theory [29]. Interpretive analysis was performed soon after the interviews were conducted. The analysis incorporated deductive and inductive themes that arose from the research topics based on the literature and from the research data [30]. The analysis stages included: (1) a literal reading of all the interviews to gain a comprehensive picture of the data; (2) identifying categories and themes related to the research objectives; (3) redefining central themes to include encoded quotes and examples based on re-reading the transcripts; (4) translating and documenting the themes and quotes in English. An ongoing internal quality audit was conducted to determine whether the data were collected, analyzed, and reported consistently following the study protocol [30].
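The coding stages described above can be illustrated with a minimal sketch of the categorization step, grouping coded interview excerpts into candidate themes. The codes, theme names, and interview IDs below are hypothetical illustrations, not the study's actual codebook.

```python
from collections import defaultdict

# Hypothetical code -> theme mapping (illustrative only, not the study's codebook)
codebook = {
    "family_encouragement": "facilitators",
    "teacher_encouragement": "facilitators",
    "language_barrier": "challenges",
    "transportation": "challenges",
    "good_grades": "successes",
}

# Hypothetical coded interview excerpts: (interview ID, assigned code)
coded_quotes = [
    ("I-03", "family_encouragement"),
    ("I-07", "language_barrier"),
    ("I-07", "good_grades"),
    ("I-12", "transportation"),
]

# Group each coded excerpt under its theme
themes = defaultdict(list)
for interview_id, code in coded_quotes:
    themes[codebook[code]].append((interview_id, code))

for theme, quotes in sorted(themes.items()):
    print(f"{theme}: {len(quotes)} coded excerpt(s)")
```

In practice this step is done with qualitative analysis software or by hand, but the structure is the same: a codebook mapping codes to themes, applied across all transcripts.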
Community Intervention Projects
In parallel to the qualitative interviews, a special internship program was piloted for third-year students. The purpose of the internship was to utilize academic skills acquired during public health track studies to reduce health disparities in the community. The internship was coordinated and guided by the authors and lasted eight months. The internship program was developed on the principles of the community-based participatory research (CBPR) approach to public health, a collaborative research approach that involves active participation of community members, researchers, and other stakeholders [31]. The CBPR method has long been recognized as a method to cope with health disparities [32]. In CBPR, the partnership between researchers and community members is emphasized, with the goal of addressing issues and concerns that are relevant and meaningful to the community being studied, as intended when involving Bedouin students in actions that facilitate a change in the lifestyle and behaviors within their communities. As action researchers and public health representatives within their community, the Bedouin students are better able to question and grapple with issues that other researchers or professionals may not be able to access, as well as translate knowledge for the target population [33].
The internship program included three phases:
1. The first stage included identifying a public health issue in the Bedouin community that needs to be addressed and conducting a literature review.
2. The second stage included the development of small-scale health-promotion interventions aimed at addressing the defined problem in their community using models, tools, and competencies taught and acquired during their studies. The students developed materials, mentored by the course instructors. The students were required to work with partners in the community and to seek and gain collaboration from different organizations to finalize and conduct the intervention project that they developed.
3. The third stage included the implementation of the intervention. The students were responsible for presenting the sessions and leading the discussions in the community. The students conducted a process and outcomes assessment, which was presented at the end of the year at the college, as well as community and personal reflection.
The internship program included 11 students who conducted 6 community intervention projects. The program was assessed for its contribution to the integration of Bedouin students in the public health field, feedback on the effects of the projects in the community, and the influence of the projects on the students' perceptions and experiences.
Results
The interviews revealed common perceptions and experiences across the Bedouin student population. Analysis of the data from the interviews revealed three main themes: (1) facilitators for the decision to pursue higher education in public health, (2) challenges and coping strategies, and (3) experiences of success.
Theme 1: Facilitators for the Decision to Pursue Higher Education in Public Health
Most of the interviewees described the decision to pursue higher education as one that was made with significant encouragement from family members or high school teachers. Many students described the poor employment of their parents as a major factor affecting their decision to participate in higher education and the "Gateway to Academia" program. Most of the interviewees see the program as a significant tool helping them integrate into academic studies and appreciate the opportunity they have received to be better prepared for academia. They mentioned the challenge of meeting the requirements needed for enrolling in universities in Israel, in particular the psychometric test, which is mandatory for most of the academic tracks.
Many indicated their desire to study a field that could improve the health of their community as a main factor in the decision to study public health. Some students have relatives or other important role models who work in the health field or inspired them to enroll in the public health track.
Female student, 20 years old, second year: The decision to study public health was not my decision but my teacher's. He told me that there is a field that he thought would suit me. I wasn't familiar with this profession and the teacher just pushed me into it. I discovered that I really like this field, the studies and the field are very interesting, and I was really attracted to it. I decided that I wanted to do a master's degree in public health because it is more interesting to me and important in my community.
It is important to note that many of the interviewees described their former plan to study nursing, but since they did not meet the admissions requirements, they decided to study in the public health track and attend an accelerated track for academics to transition to nursing in the future.
Male student, 19 years old, first year: I came here at first because you can do an accelerated transition to nursing at the college after graduating in public health. My matriculation grades were high, and I wanted to study nursing because that's what I relate to the most. I took psychometrics several times, but I could not get a satisfactory score. But now things have changed; I want to do a master's degree in public health, I like the field.
Female student, 23 years old, second year: From a young age, I wanted and aspired to get into a higher education institution to study something related to medicine. I chose to study public health because it was the only way to get into nursing without the psychometric exam. I focus on moving on, telling myself that I will succeed because it won't help me to complain and stand still. I need to move forward. Everyone helps me, especially my parents.
Theme 2: Challenges and Coping Strategies
The geographical distance and mobility barriers were mentioned by most of the students. As mentioned above, the Bedouin community suffers from insufficient internal and external public transportation networks, which affect students' mobility and access to the campus, located about 60 km from the main Bedouin town and 80 km from most of the Bedouin villages. The support they receive from parents was mentioned as an important factor in coping with this barrier.
Male student, 19 years old, first year: Coming here from my home is the most challenging aspect for me. Despite the distance, I wake up at five in the morning and make my way here. My father is one of the people who helps me out; every day he expresses his pride in me and wishes to see me succeed.
Female student, 22 years old, third year: My most significant difficulty all these years is that I come from far away to study here, and I must take so many bus trips that are hard for me. I leave the house really early in the morning and ask Allah to help me stay strong and finish this degree.
Many of the participants stated that the language barrier significantly affects their ability to succeed. The language barrier often forced the Bedouin students to translate the learning material and lectures into Bedouin Arabic, causing them excessive workload and stress during their studies. Some referred to the willingness of relatives and family members to assist in translation as a significant factor in their academic success. Others mentioned the assistance of their classmates and stated that forming close relationships with Hebrew-speaking classmates helped them better understand the language and the learning material.
Female student, 23 years old, third year: The first year was very difficult for me; I did not know how to speak Hebrew well, and the language was very difficult for me. The hardest thing for me was the difficulty in expressing myself in class and also doing presentations in front of the class; it really challenged me. Over the years I tried to explain myself to the Jewish students, the lecturers tried to understand me, and I tried to understand them, and that's how it works out. I got used to it and learned to accept it.
The students mentioned limited cultural competency in the academic system and among some of the academic staff as a factor that poses a challenge for them, especially in situations of asking for help from the academic staff. Female Bedouin students emphasized the continuous need to explain gender-based cultural challenges they face during their studies. For example, due to the socio-cultural structure of Bedouin society and gender-based norms, Bedouin women feel embarrassment that prevents them from speaking in class during presentations or catching a ride from the village to the campus with a male student. In addition, several of the female Bedouin students are married and pregnant, as expected of a woman their age in Bedouin society, which adds difficulties to maintaining a study routine and academic continuity. Nevertheless, the feeling of being a role model for women in the community and a sense of mission to promote the health of the Bedouin population were common among female Bedouin interviewees.
Female student, 22 years old, second year: Socially it's not easy for me. Academically, in the beginning, it was hard, but the lecturers provided us with extra classes, and I see how I'm improving little by little. Without a degree, I am worth nothing, so it is important for me to succeed. The hardest part is presenting in front of the class; it scares me the most. I practice at home in front of my family, they correct me when necessary and advise me how to speak. They give me a lot of confidence and tell me not to be afraid and that I will succeed in the end. I want to study for an advanced degree in physical activity because it is lacking in the Bedouin community.
Female student, 23 years old, second year: I decided to go to higher education because I think that Bedouin society needs to progress; Bedouin women are not sufficiently educated, and I decided to be one of those who are. Public health is a field of knowledge that I really like. I am less connected to the practice of medicine on an individual level, but more on a community level, and I think that this thing is lacking in our community; that's why I chose public health.
Female student, 23 years old, third year: According to my perception, every woman should have a bachelor's degree because I see that when a woman studies, her thinking changes. But it was difficult for me that people here at the academy did not understand our culture and our difficulties. Many times, I wanted to quit my studies and then I saw how far I had come, how much I had achieved. I continued my studies, and it makes me the happiest. This is my greatest success.
Theme 3: Experiences of Success
Overall, the students voiced experiences of success alongside the different challenges they face, in particular as they progress through their academic studies. All the interviewees mentioned good grades and the ability to continue to advance academic studies as the main success. Some of the first- and second-year students stated the ability to form good relationships with their Jewish peers and speak Hebrew fluently as an important accomplishment.
Female student, 20 years old, second year: Some Jewish students like to talk to us, they mainly want to know about our culture, about our religion, so they ask, and we answer. It connects us and helps us to feel comfortable in class. In addition, it is very helpful to get assistance from Bedouin students in advanced years who give smart tips and advice. My most successful and significant experience in this degree is that I manage to connect with people and cope successfully with challenges in my studies.
Third-year students specifically felt that they succeeded in the challenging journey of academic studies and described feelings of satisfaction, pride, and empowerment. Most of them stated that they feel self-confident in their academic and social abilities and their capability to cope with the academic challenges they faced during their studies. All of the third-year students stated that they feel confident in their ability to contribute to and promote Bedouin community health.
Male student, 24 years old, third year: I started academic studies on a different track at first. After a while, I decided to switch to the public health track. I realized that the issues of public health really interest me. I have an attention disorder, so it was very difficult for me. I searched and found methods to cope with the difficulty. I recorded the lectures and used other Jewish students' summaries in class. At home, I invested time in understanding the learning material and I succeeded. My academic achievements are high, and I am proud of myself for that. This is my biggest success, and it encourages me to pursue a master's degree in public health.
Community Intervention Experiences in the Bedouin Community
During the internship, students experienced challenges and successes, implementing academic competencies, public health knowledge, and leadership as well as accountability skills in practice. The internship community projects covered a range of topics with different target populations (see Table 1). These topics were all selected based on the identification of an issue demanding action research and intervention within the Bedouin community.
All of the projects were coordinated with organizations in the community, including schools, health organizations, municipal agencies, and NGOs. The projects were conducted in accordance with cultural norms as regards language, dress, and settings; for example, male public health students conducted meetings with men at community mosques, while female public health students conducted physical activity promotion with adolescent girls in school.
• Early screening for congenital disorders (women of childbearing age): two sessions conducted with the welfare department (N = 30); information sharing and discussion of the causes of congenital disorders and the importance of early screening. Participant: "I always thought that disorders/diseases were my fault, but I learned in this meeting that the responsibility is shared between my husband and myself."
• Prevention of early childbirth (women with a high-risk pregnancy): qualitative focus-group sessions (N = 7), which included discussions and provision of information. Participant: "There is a lack of workshops on childbirth… This is the first time the clinic invited us to participate in this type of program."
• Promotion of healthy nutrition (adolescent girls, 15-16 years old): four sessions in a high-school setting (N = 36); provision of information on healthy nutrition and body image; short pre-post survey. Owing to the program's success, additional meetings were requested during its run, and adolescents reported higher rates of healthy nutrition.
• Promotion of physical activity (PA) (adolescent girls, 15 years old): three sessions in two schools, including provision of information, PA sessions with an instructor, and a short pre-post survey; one school had a designated PA room for girls (N = 15) and one did not (N = 15). The school with a PA room showed higher in-school PA rates; following the sessions, both groups reported higher PA rates outside school hours, and in-school PA rates also rose in the school without a designated PA room. A joint WhatsApp group was developed to share positive feedback and promote social norms.
• Smoking prevention (adolescent boys and girls, 13-14 years): three sessions in a middle-school setting (N = 30), including provision of information, a demonstration of the effects of smoking, and a short pre-post survey. Adolescents spoke about problematic norms in the family, among peers, and in the community. Participant: "My father prepares at least 10 nargila (hookah water pipe) heads for guests that visit over the weekend. We always buy cigarettes at the neighborhood shop and the owner doesn't even ask us who they are for (despite regulations)."
• Prevention of unintentional child injuries (backover crashes) (men with children aged 0-4): two sessions held a month apart in the mosque (in multiple groups, total N = 50); provision of information, a demonstration of the driver's field of vision while backing up, and a short pre-post survey. Significant increases were reported in knowledge of the field-of-view distance, in checking behind the vehicle prior to backup, and in setting up a separation between the vehicle and play areas.
In a review of the internship program and the community projects, several key findings emerge at the community level:
• There is a lack of programs engaged in knowledge transfer in the field of health for members of the Bedouin community;
• In several cases, partners in the community projects requested additional programming and recommended that the projects continue in some format;
• Program participants provided positive feedback, and evaluation results were promising.
A review of the internship program also sheds light on the influence of the projects on the students' perceptions and experiences, including:
• Feelings of success, empowerment, and recognition of their vocation in increasing awareness and promotion of healthy behaviors within the community;
• Identification of knowledge gaps and cultural norms that may prevent the adoption of healthy behaviors within their community, particularly regarding gender disparities in the Bedouin community; for example, during recruitment for workshops with women, public health students met resistance from Bedouin husbands who would not permit their wives to attend;
• A desire to work within their community to continue to promote public health.
The internship program provided the Bedouin public health students with deeper insights into health determinants within their community and practical exposure to health promotion.
Discussion
The current study examined perceptions, needs, successes, and challenges experienced by Bedouin students during their studies and assessed an academic framework for implementing a field internship, including community intervention experiences, in the Bedouin community, as a tool to decrease academic and health disparities. Our findings help lay the groundwork for the future development of community-based interventions to alleviate education and health gaps among minority groups.
The interviews revealed common perceptions and experiences across the Bedouin student population. The main findings indicate that young Bedouins experience a major challenge in meeting the requirements needed for enrolling in universities in Israel and see the "Gateway to Academia" program as a valuable opportunity to help them integrate into higher education. This finding is in line with previous studies describing the admission procedures to higher education institutions as a significant obstacle to minority and disadvantaged populations acquiring higher education [21]. Admission procedures including psychometric exams are not good predictors of academic abilities among minority students, causing a culturally based disadvantage for students who wish to enroll in higher education institutions [34]. In this context, a structured population-tailored solution such as that offered by the "Gateway to Academia" program is valuable in assisting minority students to cross this deep socio-cultural barrier and improving the access and integration of the Bedouin community to higher education.
Several major challenges were mentioned by the Bedouin students: geographical distance and mobility barriers, language barriers, and cultural and gender-based barriers. These challenges, mentioned in previous studies conducted among minorities and the Bedouin population, need to be carefully addressed [19,22]. Since educational attainment is one of the key factors affecting the Bedouin community's socioeconomic status, it is important to develop innovative plans to promote education and professional development while addressing these social and structural barriers through necessary changes in government policy. Further consideration of appropriate additional support for Bedouin students, in particular during the first and most challenging year of academia, is recommended. Moreover, additional training on cultural competency for academic staff is recommended to improve their understanding and recognition of these aforementioned gaps. Cultural competency training of faculty and trainers in the health field is recognized as imperative to support a culturally diverse patient population [35]. Furthermore, in public health, cultural competency training is recognized as essential to target population-based health and reduce health disparities, including culturally sensitive research, programming, and evaluation [36].
Along with the barriers mentioned, our students noted experiences of success in the challenging journey of academic studies and described feelings of satisfaction, pride, and empowerment. Considering that most of them indicated their initial desire to study a field that could improve the health of their community as a main factor in the decision to study public health, it was encouraging to see that all of the third-year students stated that they feel confident in their ability to contribute to and promote Bedouin community health. These findings were further supported by their positive experiences conducting small-scale intervention projects in the community during the internship program. The beliefs and attitudes regarding increased self-efficacy that the Bedouin students developed over the course of their studies have the potential to contribute significantly to their future performance in the fields of public health practice and research [37,38].
Our experience implementing field internships in the Bedouin communities suggests that higher education is crucial for empowering minorities, producing the leadership necessary for social and economic progress, and reducing socioeconomic and health gaps among minority groups. Moreover, the field internship enabled the necessary alignment between academia and public health practice, which is known to improve public health actions, by conducting population-adjusted and collaborative interventions for complex public health issues [39]. Involving Bedouin public health students in community-based participatory research and programming within their community adds the element of culture-centeredness, which is an important strategy in confronting social determinants of health and reducing health disparities [40]. Accordingly, the Bedouin students who participated in the internship had the opportunity to explore and gain an in-depth understanding of the ways to deal with the variety of health determinants in their community, initiating collaborations and obtaining leadership competencies that impact the lived experiences and realities of their community [41,42]. Finally, the community-tailored field internship offered undergraduate students exposure to potential employment opportunities after graduation, which may fulfill the need for highly qualified and skilled public health professionals with the appropriate foundation, practical experience, and knowledge of specific community structures [43,44].
This study has several limitations. First, the study was conducted in Israel, among a single minority group, the Bedouin community; therefore, it may be difficult to generalize the findings to other countries. In addition, we did not compare the experiences reported by the Bedouin students to those of other cultural groups of students. It is possible that such a comparison would have revealed additional organizational and social factors that may influence the students' experiences, difficulties, and successes. Nevertheless, we believe that our methodology, which included both qualitative interviews and the internship development and assessment, enables replication in other academic programs enrolling minorities aiming to decrease disparities among minority groups.
Conclusions
The integration of Bedouin students in academic studies is complex and challenging. It is important to reflect on the complete picture regarding the integration of Bedouin students in public health studies to adapt and meet the unique needs that were identified in this study. The findings of the research reveal the specific challenges and barriers students face in public health studies, as well as their achievements and successes, and enable a deeper understanding of the Bedouin students' perceptions and experiences. We believe that our experience with the integration strategies and the community internship model may be replicated in other academic programs aimed at decreasing disparities among minority groups.
Table 1. Overview of internship community projects in the Bedouin community.
"year": 2023,
"sha1": "e9b79b8c87f22086e976325298038fd92344c1be",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2254-9625/13/10/147/pdf?version=1696042794",
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f3fe11815713bf8f6937cf5e05869cd7fe24edd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
The Performance of Seven Molecular Methods for the Detection of PRRSV
Abstract: Porcine Reproductive and Respiratory Syndrome is a viral disease of swine characterized by reproductive failure of breeding animals and respiratory disorders in all categories. The first PRRS case in Serbia was recorded in 2001 after an illegal import of boar semen. PRRS is economically the most important disease due to significant direct and indirect losses. Today, serological (ELISA) and molecular methods are used for the routine diagnosis of PRRS in infected herds. Although modern diagnostic techniques are very robust, the exceptional diversity of the viral strains is often an obstacle to an accurate diagnosis. To estimate the performance of seven different methods for PRRSV genome detection, twenty samples were used. However, none of the methods was able to detect all PRRSV strains. The best sensitivity was obtained by combining two methods. To date, there is no absolutely accurate test which enables the detection of all circulating strains.
INTRODUCTION
Porcine Reproductive and Respiratory Syndrome (PRRS) is a viral disease of pigs characterized by reproductive disorders in breeding animals and respiratory symptoms in all categories. Retrospective examination revealed that the disease occurred in 1979 in Canada [1]. In Serbia, the first case of PRRS was detected in 2001 after the illegal import of boar semen [2]. PRRS today is considered the most economically significant pig disease due to its large direct and indirect losses. The causal agent is an RNA virus from the Arteriviridae family, order Nidovirales [3]. There are two genotypes of this virus: genotype 1, formerly known as European, and genotype 2, known as the North American type [4]. In May 2006, a highly pathogenic strain of PRRS virus, genotype 2, appeared in China, leading to the death of more than 2 million pigs [5]. There are four subtypes within genotype 1, whereas type 2 strains in Europe are genetically homogenous [6]. In Central and Western Europe, only Lelystad-like viruses circulate, but in Eastern Europe, all subtypes are present [6]. The PRRS virus genome is not segmented, having a positive polarity and a size of about 15 kb [7]. The genome contains 8 open reading frames (ORFs) encoding the structural and non-structural proteins of the virus. A characteristic of the PRRS virus genome, like other RNA viruses, is a high mutation rate of 1.4×10⁻² to 7.7 ± 2.1×10⁻³ nucleotide replacements per year, resulting in a virus divergence of 0.5% per year [8]. Furthermore, ORF7 is a highly conserved part of the genome at the genotype level, but the similarity between the two genotypes in this part is only 57-59% [9]. ORF6 is almost 100% conserved in genotype 2, while similarity with genotype 1 is 78-81% [9]. Balka et al. [10] suggest that, even though local evolution is continuously happening, the transboundary movement of infected animals is more important for virus diversity.
Furthermore, the highly divergent subtypes of genotype 1 compromise accurate diagnosis, owing to high rates of false-negative RT-PCR results [6].
PRRS is spread worldwide. The most important epizootiological factor is persistent infection, which allows the virus to be excreted for up to 157 days. PRRS is now considered the disease with the greatest impact on pig production. Disease control nowadays generally involves vaccination. However, the load-close-expose strategy, which consists of closing the herd and exposing the pigs to the virus, is still in use. In addition to the individual, farm level, there are regional approaches to eradication [11], but also at the national level, as has been done in Chile [12]. The diagnosis of PRRS is based on the detection either of the virus itself, its genome or antigen, or indirectly on antibody detection, i.e. the immune response to the infection. Due to the significant genetic and antigenic diversity of field isolates, the laboratory diagnosis of PRRS is sometimes very complex [6]. Polymerase chain reaction (PCR) is the most commonly used method for the diagnosis of PRRS, known for its extremely high analytical sensitivity and specificity. However, diagnostic sensitivity is often not at a satisfactory level. Therefore, the choice of diagnostic methods must be based on the diagnostic characteristics of the assay as determined by examination of local isolates, which this paper aims to demonstrate.
MATERIAL AND METHODS
Twenty samples of pig tissues, collected during the period 2015-2019 and stored at -80 °C, were used for the investigation, regardless of the initial results of PRRS testing, as it was suspected that not all methods had the same performance. Generally, the samples were taken from animals suspected of PRRS based on clinical symptoms and pathomorphological alterations. The investigation included 13 commercial farms, one of which was PRRS-free. Lung tissue and associated lymph nodes were prepared as 10% suspensions in phosphate-buffered saline (PBS). After centrifugation at 2000 rpm for 10 minutes, the supernatant was used for RNA isolation, following the recommendations of the commercial kit manufacturer (Viral RNA + DNA Preparation Kit, Jena Bioscience).
Seven different protocols were used for PRRS virus genome detection: 5 gel-based [13][14][15][16] (OneStep RT-PCR Kit, Qiagen) and 2 real-time RT-PCR [17,18] (Verso 1-Step qRT-PCR ROX Kit, Thermo Scientific). Positive and negative controls of both RNA extraction and amplification, as well as a non-template control, were used for validation purposes. External RNA (VetMAX™ Xeno™ Internal Positive Control RNA, Thermo Fisher Scientific) was added to each sample, enabling monitoring of the complete process (VetMAX™ Xeno™ Internal Positive Control - VIC™ Assay, Thermo Fisher Scientific). The results of the gel-based RT-PCR were read after electrophoresis at 120 V for 1 h in a 2% agarose gel. Samples were considered positive if the Ct value was below 35 (real-time PCR) or if a band of the expected length was amplified (gel-based PCR).
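The positivity rule above (a real-time sample is called positive when its Ct value is below 35) can be written out as a small helper. This is a sketch of one common interpretation, not the authors' exact algorithm: the cut-off of 35 comes from the text, while the handling of the internal positive control is an assumption (a target-negative well is reported as valid only when the internal control amplified, since its failure may indicate inhibition); the Ct values shown are invented.

```python
CT_CUTOFF = 35.0  # samples with Ct below this value are called positive (per the text)

def call_well(target_ct, ipc_ct):
    """Interpret one real-time RT-PCR well.

    target_ct: Ct of the PRRSV assay, or None if there was no amplification.
    ipc_ct:    Ct of the exogenous internal positive control (IPC), or None.
    """
    if target_ct is not None and target_ct < CT_CUTOFF:
        return "positive"
    # No target signal: trust the negative call only if the IPC amplified;
    # otherwise the reaction may have been inhibited (assumed rule, not from the paper).
    if ipc_ct is not None:
        return "negative"
    return "invalid: possible inhibition, repeat the test"

# Invented example wells: (target Ct, IPC Ct)
for target, ipc in [(22.4, 28.1), (36.2, 27.9), (None, 28.5), (None, None)]:
    print(target, ipc, "->", call_well(target, ipc))
```

Treating an IPC failure as "invalid" rather than "negative" is the design point here: it separates true negatives from reactions that simply failed.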
RESULTS
By testing 20 tissue samples, the PRRS virus genome was detected in 5% to 70% of the samples, depending on the protocol used (Table 1).

Table 1. Results of the seven protocols (columns A-G, in the order used; the letters follow those cited in the Discussion). For sample 1, only six calls could be recovered from the source text; they are shown in columns B-G, and n/r marks the unrecovered call.

Sample | A | B | C | D | E | F | G
1 | n/r | neg | neg | pos | neg | neg | neg
2 | neg | neg | neg | neg | neg | neg | neg
3 | neg | pos | neg | pos | pos | neg | pos
4 | pos | neg | neg | pos | neg | neg | pos
5 | neg | neg | neg | neg | neg | neg | neg
6 | neg | neg | neg | neg | neg | neg | neg
7 | neg | neg | neg | neg | neg | neg | neg
8 | neg | neg | neg | neg | neg | neg | pos
9 | neg | neg | neg | neg | neg | pos | pos
10 | neg | pos | neg | pos | neg | neg | neg
11 | pos | neg | pos | neg | neg | neg | pos
12 | pos | neg | pos | neg | neg | neg | pos
13 | neg | neg | neg | neg | neg | neg | pos
14 | neg | neg | neg | pos | neg | neg | pos
15 | neg | neg | neg | neg | neg | neg | pos
16 | neg | neg | neg | neg | neg | neg | pos
17 | neg | pos | neg | pos | neg | pos | pos
18 | neg | pos | neg | pos | neg | neg | pos
19 | neg | neg | neg | neg | neg | neg | pos
20 | neg | pos | neg | neg | neg | neg | pos

The PRRS virus genome was not detected in 4 samples by any of the applied protocols. In none of the remaining 16 samples was the genome of the virus detected by all 7 protocols; at most, a sample was detected by 4 of them. The most sensitive method [16] was the real-time RT-PCR intended for the detection of EU strains, which amplifies a short ORF7 sequence; by protocol G, only 2 strains could not be detected.
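The summary statistics quoted here and in the Discussion can be re-derived from the sample-by-protocol calls with a short script. This is a sketch: the 0/1 matrix transcribes the Table 1 calls, the column labels A-G follow the protocol letters used in the Discussion, and sample 1's one unrecovered call is placed in column A and assumed negative (the six recovered calls are taken to be columns B-G).

```python
# Sample-by-protocol calls from Table 1 (1 = positive, 0 = negative), columns A-G.
# Sample 1: one call was lost in extraction; the six recovered calls are placed in
# columns B-G (assumption) and the missing column-A call is assumed negative.
calls = {
    1:  "0001000",  2: "0000000",  3: "0101101",  4: "1001001",
    5:  "0000000",  6: "0000000",  7: "0000000",  8: "0000001",
    9:  "0000011", 10: "0101000", 11: "1010001", 12: "1010001",
    13: "0000001", 14: "0001001", 15: "0000001", 16: "0000001",
    17: "0101011", 18: "0101001", 19: "0000001", 20: "0100001",
}
protocols = "ABCDEFG"

# Detection rate of each protocol over all 20 samples
for i, p in enumerate(protocols):
    n = sum(int(row[i]) for row in calls.values())
    print(f"protocol {p}: {n}/20 detected ({100 * n / 20:.0f}%)")

# Samples positive by at least one protocol (presumed true positives)
positives = {s for s, row in calls.items() if "1" in row}
print(f"positive by any protocol: {len(positives)}/20")

# Diagnostic sensitivity of G alone and of the D + G combination
g_pos = {s for s, row in calls.items() if row[6] == "1"}
dg_pos = {s for s, row in calls.items() if row[3] == "1" or row[6] == "1"}
print(f"G alone: {len(g_pos)}/{len(positives)} = {100 * len(g_pos) / len(positives):.1f}%")
print(f"D or G:  {len(dg_pos)}/{len(positives)} = {100 * len(dg_pos) / len(positives):.1f}%")
```

Running this reproduces the figures in the text: detection rates from 5% (one protocol) to 70% (protocol G, 14/20), 16 samples positive by at least one protocol, a diagnostic sensitivity of 87.5% for G alone, and 100% for the two-protocol combination.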
DISCUSSION
Considering the economic significance of the disease, many authors have studied the diagnostic performance of laboratory methods for the diagnosis of PRRS. The common conclusion of the majority is that, before selection for routine diagnostics, it is necessary to evaluate the diagnostic performance of the method using local PRRS virus isolates. The results of this limited study showed that it is often not possible to detect all strains of a virus by one protocol; it is necessary to combine two or more. By protocol G, 14 positive samples (70% of all samples) were detected, which classifies it as the protocol with the highest diagnostic sensitivity (87.5%). However, since two positive samples were not detected by protocol G, combining it with protocol D raises the diagnostic sensitivity to 100% and ensures that most isolates are detected. Since the genome of the PRRS virus in 4 samples was not detected by any protocol, it was assumed that these samples were negative for the presence of the PRRS virus and that the manifested clinical signs were due to another infection or disease. Analyzing the results with respect to the part of the genome that was amplified, it was not possible to identify the part that is most specific for the detection of local isolates from Serbia. Similar results were obtained by comparing gel-based and real-time PCR, where it was not possible to establish a pattern, even though real-time PCR is considered to be an extremely sensitive method.
However, it has been shown that the protocols intended for the detection of only one genotype have a higher sensitivity. To overcome the protocols' imperfections, sequencing multiple local isolates and constructing primers and probes for the most conserved part of the genome may be a potential solution. However, with this approach, the risk of false-negative results in the case of imported and emerging strains remains quite high. Sampling time and sample type also influence the PCR results. Blood serum, sperm, blood, and oral fluid are commonly used for active monitoring [19]. The virus genome can be detected in serum, oral fluid, and blood as early as 24 to 48 hours after infection [19]. Furthermore, PRRS virus isolates also differ in their degree of replication and excretion. To eliminate these factors that may affect the test results, samples of the same type were used in this study, taken from animals with a clinical presentation at approximately the same stage of infection. In addition, to rule out inhibition of the reaction as a cause of false-negative results, an internal amplification control was used. However, to determine the real reason for the extremely high discrepancy between the protocols used, it is necessary to sequence the local isolates and estimate the degree of disagreement between the primer/probe and template sequences [20]. The results of this study, besides showing that not all PRRS isolates can be detected by a single assay, indicate that negative laboratory results do not necessarily mean the absence of infection, particularly if random sampling was applied. Since laboratories are expected to employ reliable and accurate methods, continuous updating of protocols for molecular detection of the viral genome is required.
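The suggested check on primer/template disagreement can be prototyped in a few lines: slide a primer along a candidate template and report the fewest mismatches over all ungapped placements, honouring IUPAC ambiguity codes. This is only a sketch of the idea; the sequences in the example are invented placeholders, not real PRRSV primers or Serbian isolate sequences.

```python
# IUPAC nucleotide ambiguity codes: each primer base maps to the template bases it matches.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def mismatches(primer: str, window: str) -> int:
    """Count positions where the template base is not allowed by the primer base."""
    return sum(t not in IUPAC[p] for p, t in zip(primer, window))

def best_alignment(primer: str, template: str) -> tuple[int, int]:
    """Return (fewest mismatches, 0-based offset) over all ungapped placements."""
    return min(
        (mismatches(primer, template[i:i + len(primer)]), i)
        for i in range(len(template) - len(primer) + 1)
    )

primer = "GATNACR"          # hypothetical primer with ambiguity codes
template = "CCGATTACGAAGT"  # hypothetical local-isolate sequence
print(best_alignment(primer, template))  # → (0, 2): a perfect match at offset 2
```

A real evaluation would of course align full primer/probe sets against sequenced local isolates (e.g. with an alignment tool), but even this naive scan makes the degree of primer/template disagreement quantifiable per isolate.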
Collaboration between laboratories and veterinarians is paramount to achieving this goal: veterinarians are expected to report any change in clinical manifestation and to submit samples for sequencing, allowing changes in the genome of the virus to be monitored and protocols to be adapted in a timely manner.
"year": 2020,
"sha1": "e85b8ef57ffdedc22d5aeb903db8f51d1d88fc08",
"oa_license": "CCBY",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/acve/70/1/article-p51.pdf",
"oa_status": "GOLD",
"pdf_src": "DeGruyter",
"pdf_hash": "e85b8ef57ffdedc22d5aeb903db8f51d1d88fc08",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Exploring Citizens Perception of the Police Role and Function in a Post-Colonial Nation
Abstract: Before attempting to develop productive and harmonious working relationships between citizens and the police in a post-colonial society such as Trinidad and Tobago (T&T), it is imperative to first gain a more precise understanding of the role and function of the police. This qualitative study suggested that the current role and function of officers parallel the colonial model of policing, in which officers operated in a paramilitary manner. This model of policing was concerned with law enforcement and public order duties, which was highlighted as counterproductive for police and public relations. The model was also known for police treatment that varied with citizens' socioeconomic status. The results of this study suggest that police officers should implement a Service Oriented Policing (SOP) approach, which could allow police officers to become proactively involved with communities and citizens, build stronger and more productive relationships, and be more effective and efficient as an institution.
Factors such as corruption, coercive tactics and quasi-militarisation were previously highlighted in studies by Deosaran (2002), Townsend (2009), King (2009) and Pino and Johnson (2011) as being instrumental to policing in T&T, but these studies did not specifically explore the role and function of the police. To the author's knowledge, therefore, there is no previous empirical study which has attempted to explore the role and function of the T&T police.
Aim for the Study
This study was aimed at exploring citizens' perception of what they consider to be the key role and function of the police in improving their lives and communities. To achieve this, the following research questions needed to be answered:
1. Does the present role and function of police officers in T&T satisfy the needs of citizens (for example, safety, security, problem solving, minimising the fear of crime, and crime prevention and resolution)?
2. Are the role and function of police officers aligned to a "force" or a "service"?
3. What are citizens' expectations of change in the role and function of the police in T&T?
The aim of this study is not to generalise its findings and conclusions, but to develop a foundation of policing research on the topic within a T&T context on which further research can build. The conclusions and recommendations can be used to enhance policing in T&T and in other societies seeking to develop a modern institution.
Justification for this Study
Previous research on public perception of the police has focused on citizens' demographic characteristics, such as race/ethnicity, socioeconomic status, age, and gender, in developed countries, for example Australia, Canada, the USA, and the UK (Brown and Benedict 2002; O'Connor 2008). In T&T, there is a nascent body of literature on citizen perceptions of the police, and at the time of this research, no specific study was found which attempted to examine citizens' perception of the role and function of police officers. As a result, this study was aimed at using citizens' (adults') views, opinions and expectations, tested in a T&T context, to generate an understanding of the topic, with the intention of expanding the present literature to include a post-colonial nation. Due to ethical principles, this study did not involve citizens under 18 years of age.
Earlier Perceptions of the Role and Function of the Police
To understand the role and function of the police in society, it is imperative to examine Banton's (1964) The Policeman in the Community, which suggested that police officers are primarily keepers of the peace and not law enforcers. He argued that officers monitor community activities and respond to citizen calls for assistance. His work suggested that discretion was imperative to the role and function of the police, as it permits officers to exercise moral judgement on when the law should be enforced, because officers were persuaders rather than prosecutors. However, prior to Banton (1964), La Fave (1962) and Williams (1954) argued that the role and function of the police should not provide officers with discretion because it was an opportunity for bias and delinquency.
A study by Cain (1973) suggested that the role and function of the police was best defined as order maintenance and law enforcement. However, Bayley (1985) and Uglow (1988) argued that the police role and function are aligned to controlling some level of political authority and are not exclusively about maintaining law and order. According to Mawby (2003), the role and function of the police vary in accordance with systems, cultures, and countries. He highlighted that the most basic role and function of the police are law enforcement and order maintenance, sometimes incorporating other activities and tasks. The suggestions made by Mawby (2003) were supported by Reiner (2010), who stated that in the UK, a quarter of police time was spent assisting people with mental health conditions and attending to other vulnerable groups. Reiner (2010) suggested that the role and function of the police in liberal societies have often been controversial, with debate over how best to define them: as "a force", where officers' primary function is law enforcement and order maintenance, or as "a service", whereby officers' primary function is to assist citizens and pacify social problems. Manning (2014) argued that the role and function of the police depend on a given society and its history, culture, and social construction. He suggested that policing a stable society differs significantly from policing a problematic society. Therefore, the role and function of the police are difficult to contextualise.
An integral aspect of the police role and function is to account for decisions made and the process used in making such decisions (MacVean and Neyroud 2012). Decisions are important to the outcome of a situation; morals and ethics are therefore paramount (Delattre 2011), and officers are often torn between personal and professional ethical decisions. Consequently, before officers make any specific decision, they should rely on professional ethical standards, because they are acting on behalf of the police institution and the population, and should not act on personal attributes (Delattre 2011).
Service Oriented Policing Model
Over the past decades, police institutions have undergone several waves of reform because the traditional model of policing was impersonal, ineffective and costly (Greene 2000; Hill 2020). Police institutions carried increased liabilities over the use of force and lack of training, which were scrutinised by citizens and policymakers (Hill 2020). During this time, Community Oriented Policing (COP), Problem Oriented Policing (POP), Evidence Based Policing (EBP) and Body Worn Video cameras (BWV) were established, all designed to reduce the use of lethal force (Hill 2020). However, these advancements were soon overtaken by a new shift in policing, which became known as the Service Oriented Policing (SOP) model. This model relied on the motto "to protect and serve", which promoted a foundation of transparency, approachability and accountability within police institutions and among their officers (Gill et al. 2014; Hill 2020). It focused on reducing crime, the fear of crime and community disorder whilst increasing citizens' satisfaction with, and the legitimacy of, the police (Hill 2020).
The business of policing is traditionally law enforcement, with communities and citizens as the customers. In recent times the police have provided a plethora of services to communities, and these often vary according to the demographic composition of individual communities (Hill 2020). In the United States, police services are usually classified into four categories. The first is service to individual citizens: attending to, assisting and resolving their problems. The second is service to violators; since the police have a duty to "protect and serve", violators, being citizens, should be treated in a courteous and professional manner. The third is service to stakeholders such as courts, children's services and mental health institutions; it is important for police institutions to form a multiagency approach with these stakeholders to provide a more diverse range of support and services to citizens and communities. The fourth is police officers providing a support service to each other; it is paramount that officers support one another and establish an environment of well-being, since their well-being is likely to affect their performance and service to citizens and communities (Hill 2020).
A study on Service Oriented Policing by Scheider et al. (2003) found that service-based policing increased satisfaction with the role and function of the police. Another study by Pope and Pope (2012) suggested that SOP enhanced citizens' lives, which in turn promoted safer communities and tranquil, developed environments, all of which contributed to dramatic increases in property prices. According to Gill et al. (2014), SOP reduces citizens' negative perceptions of crime in their communities and simultaneously increases the legitimacy of the police (Hill 2020). Whilst SOP appeared to be successful on many occasions, Hill (2020) argued that this model of policing did have some barriers. SOP needed the participation of police officers, and this sometimes became contentious, as some officers believed their job was becoming that of a social worker. Hill (2020) argued that these officers did not believe citizens were the customers of policing and thus that SOP was not a legitimate concept of policing. It was further established that the budgets of police institutions often created financial constraints on policing resources such as training and the availability of officers (Hill 2020).
Colonial Policing Model
The Colonial Policing model was based on the ethos of the Royal Irish Constabulary (RIC), which was designed to suppress political disorder in Ireland (Sinclair 2006; Mathura 2019). The RIC mirrored the Imperial Army, since many officers were ex-soldiers and maintained a military persona through foot drills, firearms training and public order duties (Tobias 1977; Anderson and Killingray 1991). When the British Empire began to expand, this model of policing was transferred to new colonies but was often met with hostility and rejection from local citizens due to cultural differences. According to Anderson and Killingray (1991), all senior (gazetted) officers and most inspectors were white and were recruited from the army because of their military training and persona (Tobias 1977; Cole 2003). Mawby (2003) suggested that the recruitment of white senior officers was intended solely to create a divide between the police and local citizens, whilst junior officers (constable to sergeant) were recruited locally or from other colonies (Sinclair 2006; Bell 2013). This ensured that indigenous officers never attained a management rank that could have jeopardised the objectives of colonisation (Tobias 1977; Sinclair 2006). Stanislas (2014) stated that most colonial officers were recruited with poor educational attainment, and Sinclair (2006) highlighted that officers in the colonies were expected to favour sports, be no more than 35 years of age, and be physically fit and well built. Bell (2013) suggested that colonial officers were responsible for protecting the colony from internal and external threats, ensuring that the local workforce was law compliant and protecting foreign traders. Arnold (1986) highlighted that colonial police officers were not restricted to law enforcement duties but often performed the role of judge and jury by meting out summary punishment to local citizens.
According to Mars (1998), policing in the colonies often relied on coercive and violent tactics against local citizens (Jefferies 1952; Brewer 1994). Arnold (1986) argued that officers used coercion to prevent local citizens from challenging colonial power and authority (Das and Verma 1998; Bell 2013). As a result, colonial officers developed a reputation for being rude, unhelpful, violent and unsympathetic towards local people, causing a lack of cooperation and strained relationships (Anderson and Killingray 1991; Cole 2003). In the colonies, officers' behaviour frequently became a concern: corruption, intimidation tactics, coercion and acting as agents of the state led citizens to view officers as enemies and oppressors, and caused local communities and citizens to distance themselves from the police (Arnold 1986; King 2009; Bell 2013). The colonial policing system became known as a state apparatus of authority, serving the aims and objectives of the colonial government whilst ensuring that local citizens (subjects) were law compliant and did not resist (Deosaran 2002; King 2009; Mathura 2019).
Trinidad and Tobago Police Service
The colonial model of policing was introduced to T&T when the country was ceded to the British in 1797 (King 2009; Wallace 2011). The senior officers were all from England and Ireland with military backgrounds, and junior officers were recruited locally or brought from other colonies (Johnson 1991; Sinclair 2006). By 1843, the T&T police force had 12 police stations and approximately 100 officers across the country (De Verteuil 1986; Pino 2009). Prior to the country's independence in 1962, policing duties were mainly focused on political tensions and state affairs (Anderson and Killingray 1991; Brereton 1996). By the late 1960s the police force was officially recognised as a service and adopted the name Trinidad and Tobago Police Service (TTPS).
There have been several efforts to reform T&T's police institution, dating back to 1959, with more than 200 recommendations for improvement (Job 2004; Mathura 2019). The Lee committee in 1959 recommended changes to the rank structure; in 1964 the Derby committee recommended administrative upgrades, accountability procedures, higher education and training, and advanced investigation techniques. The Carr committee in 1972 recommended changes for effectiveness and efficiency; in 1984 the Bruce committee recommended a comprehensive restructuring of the TTPS; and the O'Dowd (1991) committee recommended improved resource management, advanced training and revised duties for all officers (Job 2004; Mathura 2019). However, the majority of these recommendations were ignored by government (Job 2004; Mathura 2019). Apart from reform committees, a nascent body of policing research in T&T has recommended various institutional changes. Authors such as Johnson et al. (2008), King (2009), Wallace (2011), Pino and Johnson (2011), Ryan et al. (2013), Seepersad (2016) and Adams (2019) have recommended several policy and practice changes within the TTPS. However, many of these recommendations were ignored by governments and police executives (Job 2004; Watson and Kerrigan 2018).
A community policing study by Deosaran (2002) highlighted that citizens in T&T were dissatisfied with the TTPS because of some officers' delinquent behaviour, such as the use of brutal and excessive force towards citizens, accepting money from criminals to destroy evidence, renting service firearms to criminals and giving special treatment to friends and family. Another study by Wells and Katz (2008) showed officers' inability to prevent and solve crimes. A study by Townsend (2009) highlighted that some officers were instrumental in the illegal drug trade (Scott 1984; Griffith 2000) and gang activities (Pawelz 2020). Some experiments with foreign police strategies were tried in T&T due to the escalating crime rates. However, they mainly adopted an American and European "one size fits all" approach (Pino 2009; Watson 2016; Watson and Kerrigan 2018). These ideological assumptions and approaches failed because North American and European societal dysfunctions differed from those of T&T; the strategies did not acknowledge the differences and implications between the societies, cultures and crime patterns (Harkness et al. 2015; Watson 2016; Watson and Kerrigan 2018). According to Watson and Kerrigan (2018), policing and crime-fighting strategies in T&T have historically been influenced by political affiliation, poor police management and neglect of citizens' involvement. These factors collectively reduced community support for police initiatives, and reform efforts remained challenged (Watson and Kerrigan 2018; Mathura 2019).
A major problem within the TTPS was the imbalance in the ratio of male to female officers (Job 2004; Mathura 2019). The institution continued to be influenced by its colonial heritage of male domination, because men were perceived to be stronger than women (Sinclair 2006). Owing to this perception, women in the TTPS were mainly confined to administrative duties, and the recruitment of female officers has been much lower than that of their male counterparts (Deosaran 2002).
A fundamental gap that previous studies and reports on the TTPS failed to address was a more precise definition of the role and function of police officers. Whilst the deficiencies of the institution and its officers are important for reform efforts, a more precise account of the role and function is paramount, as it can guide reforms in areas such as recruitment, training and the needs of communities and citizens. This study is aimed at addressing that gap in the current literature.
Research Design
Considering the lack of research on policing in T&T, this research design was aimed at exploring citizens' perceptions of the police role and function in a post-colonial nation, namely Trinidad and Tobago. This study used a qualitative design, which provided descriptive information from participants. This type of data was best obtained from one-to-one in-depth interviews, in which participants were given an opportunity to express their views and opinions fully. According to Bryman et al. (2022), qualitative data provide a better understanding and evaluation of the various aspects and social dimensions of people's lives, attitudes and experiences.
Research Population and Sample
Trinidad and Tobago is a twin-island nation comprising the southernmost Caribbean islands, located north-east of the South American continent. The islands have a combined population of approximately 1.3 million people from diverse backgrounds, the two major groups being of African and Indian ethnicity (Brereton 1996; Mathura 2019).
The general population of T&T was likely to hold individual perceptions about the TTPS, officers' role and function in the communities, and citizens' expectations of the institution. This study consisted of 45 participants who were members of the public and represented the various geographical locations of T&T: for example, Point Fortin and San Fernando in the south; Mayaro and Sangre Grande in the east; Caroni and Chaguanas in central Trinidad; Port of Spain and Curepe in the north; Maraval and Chaguaramas in the west; and Crown Point and Charlotteville in Tobago. The selected areas consisted of rural and urban communities, which provided a balanced representation in the data collected. Participants were also selected on demographic characteristics (see Table 1). The interviews were conducted using qualitative open-ended questions and recorded using a Dictaphone. The snowball sampling technique was used to recruit participants because the author was not based in T&T. Snowball sampling provided a link between citizens who had an interest in the study, generated a large pool from which the final participants were chosen, and promoted diversity (Parker et al. 2019; Bryman et al. 2022). Participants were not offered financial incentives, were told that their participation was voluntary, and could discontinue at any time. No personal information was recorded, and all data would be destroyed after analysis and publication. For ethical reasons, citizens under the age of 18 years were not recruited for this study.
Data Analysis Method
This study used the thematic analysis approach of Braun and Clarke (2006). According to Maguire and Delahunt (2017), Thematic Analysis (TA) is a useful analytical approach for finding patterns and themes in qualitative data. Braun and Clarke (2006) proposed six stages of TA: becoming familiar with the data, generating initial codes, searching for themes, reviewing the themes, defining and naming the themes, and writing up. For this study, TA was considered most applicable because of the lack of studies on policing in T&T, and more specifically on citizens' perceptions of the role and function of the police in a post-colonial nation. TA allowed themes to be extracted from the data collected, which produced variables for further exploration and constructed a foundation for future studies.
Findings
Research Question 1: Does the present role and function of police officers in T&T satisfy the needs of citizens, for example safety, security, problem solving, minimising the fear of crime, and crime prevention and resolution?
Most participants (n = 41) stated that police officers in T&T had a distant and fragile relationship with citizens and communities. Participants explained that many police officers were lazy and unhelpful towards citizens. It was highlighted that T&T has been experiencing an increase in violent crime, especially involving firearms; however, the police were merely reactive to these problems, and on some occasions officers were renting their service firearms to criminals. Participants indicated that they did not feel safe in their communities, especially at night, and were fearful of criminals. It was stated that because of the poor citizen-police relationship and officers' poor attitude towards their job, the TTPS was not able to prevent and solve crimes in T&T. Most participants had first-hand experience, and others had acquaintances, of approaching the police for assistance and being told that there was a lack of resources and that assistance was therefore not guaranteed. Participants further stated that when the police did respond, officers were often rude and aggressive towards victims, which often led to confrontational situations and the use of coercive tactics by officers.
According to these participants, most police officers in T&T did not have the skills and knowledge to resolve problems and provide advice. Male officers were reported to be more interested in sexual relationships with women in the communities, whereas female officers were identified as being more helpful and sympathetic. Participants were of the view that officers should receive training in problem solving and communication techniques applicable to citizens and communities. A 22-year-old female participant stated:

"These police officer in T&T only want to harass women for sex. If you have a husband or man, they try to come to your house when he not at home. But violent crime is out of control in T&T, and they cannot fix that. We have teenagers running around with guns and shooting innocent people and the police have no idea who they are. Most people do not want to speak with the police because they do not care and some of them even involved with the criminals. The women officer a bit better because they will listen and try to help but you do not see them often."

The minority group of participants (n = 4) stated that they had a good relationship with the police and, based on prior experience or that of acquaintances, found officers to be helpful. These participants explained that there was some violent crime in T&T and that citizens needed to consider their own actions rather than accuse the police of being ineffective. When asked about police corruption, participants highlighted that only a small number of police officers were involved in delinquent behaviour and that all should not be labelled equally. These participants stated that their encounters and experiences with the police, and those of acquaintances, were positive, and that they felt safe in their communities because the police conducted regular welfare checks and patrols.
It was further highlighted by these participants that if they had a problem, they felt confident to approach the police for assistance and have it resolved. A 57-year-old male stated:

"Some of these T&T people not easy, they don't want to work hard and live an honest life. They always stealing, robbing, selling drugs, prostituting themselves and killing innocent people. When the police arrest them, they quick to saying the police wicked and not helping them"

Participants' demographics: The participants from the majority group were from disadvantaged and middle-class communities located on the outskirts of urban areas and, to a smaller extent, some rural areas. These participants were between the ages of 18 and 50 and were of Afrotrinidadian and mixed ethnicity, with some Indotrinidadians to a lesser extent. There was a balance of males and females, and of employed and self-employed participants, with a small number unemployed. Most held a primary or secondary school education. The minority group of participants were from affluent communities, were above the age of 40 years, held a university qualification and were employed, with a small number being self-employed business owners. They were of White and Indotrinidadian ethnicity, with a balance of males and females, and were all married.
Research Question 2: Are the role and function of police officers aligned to a "force" or a "service"?
The majority of participants (n = 39) were of the view that the TTPS and its officers were comparable to a "force". These participants highlighted that the institution (including its officers) provided only a minimal service to citizens and communities. It was further explained that the priority of the police in T&T appeared to be law enforcement, with officers focused on arresting minor offenders. These participants used the word minimal because the institution did provide services such as liquor licences, firearms licences and criminal record checks. However, police officers seldom patrolled their communities or offered guidance, support and advice to citizens. Additionally, these participants believed that many officers were aggressive and forceful in their demeanour and attitude towards citizens. It was stated that citizens sometimes had problems and needed assistance, but officers were not able to assist. A 77-year-old male stated:

"I am from the old colonial day when Trinidad and Tobago were a British colony. The policemen back in those times was wicked and heartless. The used to beat people up for no reason and the women officers did not have a voice because the male officers were in control. Those officers did not care about helping people, they cared about themselves. These young police officers now have a bit more education but they just driving around in fancy cars and vans but not really helping people. People often have problems and not sure who to talk to or how to resolve it, but these police nowadays have no time for helping people or resolving problems, therefore crime is out of control in this small nation. These officers just want to make a quick arrest to look good and say they are working hard."

The minority group of participants (n = 6) stated that they viewed the TTPS and its officers as a "service".
These participants explained that their contact with the police was positive, that officers were able to resolve their problems, and that officers were always professional. They stated that the police provided a service of security and safety for their communities by conducting regular patrols, were friendly with citizens and would assist when required. A 44-year-old female stated:

"I never had a problem with the police. They are always friendly and professional in doing their job. If I had a problem and called them, I know they will help to resolve it"

Participants' demographics: participants from the majority group were from disadvantaged and middle-class communities located on the outskirts of urban areas, with a small number from rural communities, and were between the ages of 18 and 79 years. There was a balance of ethnicities [Afrotrinidadian, Indotrinidadian and mixed], of males and females, and of employed, self-employed and unemployed participants. Most held a primary or secondary school education, with a small number having a university qualification. The minority group of participants were from affluent and middle-class communities, were between the ages of 35 and 55 years, held university qualifications and were employed, with a small number self-employed. They were of White and Indotrinidadian ethnicity, with a balance of males and females, and most were married, with a small number being single.
Research Question 3: What are citizens' expectations of change in the role and function of the police in T&T?
The majority of participants (n = 40) stated that the TTPS and its officers needed a modern approach to policing. It was indicated that citizens often experienced modern problems [financial and social] and officers were not able to assist. Participants explained that in recent times, after the COVID-19 pandemic, the T&T economy became strained and citizens found life difficult due to unemployment and poor mental health; however, police officers were not trained to offer advice and simultaneously steer citizens (especially younger ones) away from crime. A further problem identified by these participants was technology (social media), which had created a platform for delinquency amongst younger citizens. Most police officers were not aware of the criminal activities on these platforms or how to address them effectively. These participants highlighted that many young people from the criminal fraternity often used social media platforms to communicate via smartphones; however, most police officers were not familiar with this technology and were helpless in detecting such crimes and keeping the communities safe.
According to these participants, it would be beneficial for officers and citizens to work together via technological platforms to share information more effectively and efficiently, so that the police could be notified of important information. Participants also highlighted that police officer training needed a more contemporary approach, in line with the problems that citizens and communities experience. It was stated that different communities in T&T experience different problems, so departmental training and continuous professional development are paramount to serving the citizens of T&T. Participants further stated that officers' attitude towards citizens was a major concern for change. They explained that officers should take a more understanding and helpful approach towards citizens, with less rudeness and aggression. A 27-year-old male stated:

"In T&T we need a more professional police service, too much of these officers have that old mentality. Times has changed, people have changed, and the police need to change also. Long ago police did what they want, when they want, and no questions asked. But now they need to be accountable for their actions. They need better training, more advanced skills to help people, not apply force on people. I would personally say they need communication, problems solving and some social welfare training. Another big issue is the young generation now is into technology and with a smart phone they are doing everything, but most officers don't know how to use a smartphone. So, these young criminals make officers look old fashion and outdated, the youths ahead of the police."

The minority group of participants (n = 5) stated that they viewed the police in T&T as professional, knowledgeable, skilled and helpful. These participants highlighted that their experiences with the police were positive and that officers did everything possible to assist them.
As a result, they did not feel that the TTPS needed any immediate change. However, these participants acknowledged that there had been some media coverage of minor cases of unprofessional behaviour and activities by a small number of officers; therefore, some future improvements to accountability could be useful. A 33-year-old male stated:

"As far as I am concerned the police do a good job and have the right skills to do it. I don't think they need any immediate change but as times and things change, they might need to keep updated. However, that's not an emergency or something for immediate actions."

Participants' demographics: participants from the majority group were from disadvantaged and middle-class communities located on the outskirts of urban areas, with a small number from rural communities and a small number from urban areas. These participants were between the ages of 18 and 75 years. There was a balance of ethnicities [Afrotrinidadian, Indotrinidadian and mixed], of males and females, and of employed, self-employed and unemployed participants. Most held a primary or secondary school education, with a small number having a university qualification. The minority group of participants were from affluent and middle-class communities, were between the ages of 35 and 60 years, held university qualifications and were employed, with a small number self-employed [business owners]. They were of White and Indotrinidadian ethnicity, with a balance of males and females, and most were married, with a small number being single.
Discussion
The police in society are a representation of the state's apparatus for social and moral compliance. Police institutions were established to prevent crime, the associated fear of crime and victimisation, and to provide community safety whilst simultaneously promoting the development of sustainable communities (Mathura 2019; Sani et al. 2022). Whilst the police role and function are important to all societies, it was imperative to gain a better understanding of a post-colonial context such as T&T in order to determine the needs of citizens and the institutional changes required in a country that has been undergoing constant modernisation. Understanding citizens' perceptions of the role and function of the police in a specific setting is likely to inform and educate practitioners about the problems and needs of those citizens and, equally importantly, about the services that the police institution can develop to serve the communities. Briefly, Service Oriented Policing (SOP) seeks to provide a service to citizens, resolve personal and community problems, increase citizens' satisfaction with the police, enrich the quality of life of citizens and maintain the moral fabric of communities (Scheider et al. 2003; Pope and Pope 2012; Hill 2020). Citizens' satisfaction with the police is likely to develop trust and confidence between citizens and the police, promote information sharing on safety concerns and demonstrate that the police care about citizens and community problems (Merenda et al. 2021; Sani et al. 2022).
In this study, most participants described police officers as lazy, rude, unhelpful, corrupt and unskilled. These factors were important for understanding the need for future development of the TTPS and paralleled participants' negative and less favourable perceptions of the police and their role and function in the community. According to Bolger et al. (2021), positive perceptions of the police are important for compliance with the law and for citizens networking with the police to foster safer communities (Sani et al. 2022). The findings of this study appear similar to those of Deosaran (2002) and King (2009), who suggested that the TTPS needed to become a modern institution in order to serve the needs of citizens and minimise the use of coercive tactics by officers. Those studies also made recommendations to improve police accountability, requiring officers to take responsibility for their actions and minimising any form of malpractice that might affect the TTPS's relationship with citizens. Whilst several studies have previously identified the need for a modern TTPS, there were minimal recommendations for transforming it into a service-oriented institution.
A notable observation in this study was the relationship between participants' responses and their socioeconomic status. Participants who formed the majority group represented the disadvantaged and middle-class communities to a greater extent and the affluent communities to a lesser extent. By contrast, participants who formed the minority group represented the affluent communities to a larger extent and the middle class to a lesser extent, with no representation from the disadvantaged communities. From these findings, it could be suggested that in T&T, participants from disadvantaged and middle-class communities held negative or less favourable perceptions of the police role and function, were less satisfied with the help and assistance from officers and were less likely to approach the police. It could further be suggested that participants from the affluent communities, and the middle class to a lesser extent, held positive or favourable perceptions of the police and their roles and functions; as a result, these participants were more likely to be satisfied and to approach officers for assistance. These findings are similar to those of Brown and Benedict (2002) and Mathura (2019), who suggested that citizens' perceptions of the police often vary according to their position in the socioeconomic hierarchy.
Participants' demographic variables were instrumental in obtaining a spectrum of responses that promoted inclusiveness and robustness in this study. According to the findings, there was a balance between those who identified as male and female, with a minimal number identifying as other sexualities. Ethnicities varied according to group characteristics: the majority group comprised predominantly Afrotrinidadian, Indotrinidadian and mixed-race participants, whilst the minority group comprised predominantly White participants and a few Indotrinidadians. There was a good balance between married and single participants, and between employed and self-employed participants, with a minimal number unemployed. However, the majority group predominantly held a secondary school education or lower, with only a small number holding a degree qualification, whereas the minority group predominantly held degree qualifications, and a small number stated that they were self-employed (business owners). Throughout this study there was a good variety of ages amongst the participants. However, the environment variable fluctuated, since most of the participants from the majority group were from semi-urban communities and a smaller number from rural areas; the minority group of participants were all from semi-urban communities, but these communities were private housing complexes. According to Webb and Marshall (1995), citizens' demographic characteristics play an important role in how they perceive the police, because people are different and their needs and problems vary. The demographic variables found in this study are aligned with those found in the study by Webb and Marshall (1995) and later echoed by Brown and Benedict (2002).
Summary of Discussion
The aim of this study was to explore citizens' perception of the role and function of the police in a post-colonial nation. To achieve this, three research questions were used. 1. Does the present role and function of police officers in T&T satisfy the needs of citizens, for example, safety, security, problem solving, minimising the fear of crime, and crime prevention and resolution? 2. Are the role and function of police officers aligned to a "force" or a "service"? 3. What are citizens' expectations of change in the role and function of the police in T&T?
The findings from this study suggested that the TTPS needed to implement a Service Oriented style of Policing (SOP) capable of preventing crime, the fear of crime and victimisation while simultaneously promoting citizen and community development. If implemented, this would foster trust and confidence in the police and better communication and relationships between citizens and the police, and officers would be able to serve the communities better by providing guidance and support to citizens.
According to the findings from this study, there is a need for revised training of police officers in T&T. It could be suggested that advanced training focused on officers being understanding and helpful, resolving problems in a professional manner and developing skills in social welfare and community relations would better serve the needs of citizens. It could also be suggested that officers in the TTPS be given Continuous Professional Development (CPD) in accordance with the problems being experienced in the specific communities they serve, since problems often vary according to environmental context.
It was also identified that there was a need for improved accountability in the TTPS so that officers can be held accountable for their actions and omissions. The findings from this study suggested that there was police delinquency in the TTPS and that officers were not being held accountable. As a result, improved accountability would facilitate officers taking responsibility for their actions, becoming less reliant on the use of coercive tactics and using their persuasion skills and abilities instead, stopping sexual harassment towards females in T&T and approaching citizens in a professional manner.
Theoretical Implications
When studying citizens' perception of the police role and function in any society, it is important to consider demographic variables such as age, socioeconomic status, community, race/ethnicity, gender, and education. These variables showed significant importance and fluctuation throughout the findings of this study and could be applicable to other societies. It is also imperative to conduct research on citizens' perception of the police role and function using environmental context (urban and rural), since these variables played an important role in the findings of this study. Additionally, an important aspect of this study was the model of policing used and the type of training associated with the model. When researching citizens' perception of the police role and function, it is important to examine the model of policing and style of training to gain a better understanding.
Future Research
The findings of this study highlighted that environmental context (urban and rural) was crucial to participants' perception of the police role and function. However, there was no previous research (at the time of this study) that attempted to examine this variable independently. As a result, it would be productive to conduct future research on this variable. This study used a qualitative method with 45 participants, which cannot be used for generalisation. Therefore, it would be beneficial to conduct a quantitative study to obtain a greater volume of responses that could be used for generalisation.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not applicable.
"year": 2022,
"sha1": "7e173f7a7cec4d87101dbfe6c453dac7569288ad",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0760/11/10/465/pdf?version=1665409647",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c1c471b1eb2bea495052e2d64329cb435c4901b1",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
Targeted Phenotypic Screening in Plasmodium falciparum and Toxoplasma gondii Reveals Novel Modes of Action of Medicines for Malaria Venture Malaria Box Molecules
The phylum Apicomplexa includes many human and animal pathogens, such as Plasmodium falciparum (human malaria) and Toxoplasma gondii (human and animal toxoplasmosis). Widespread resistance to current antimalarials and the lack of a commercial vaccine necessitate novel pharmacological interventions with distinct modes of action against malaria. For toxoplasmosis, new drugs to effectively eliminate tissue-dwelling latent cysts of the parasite are needed. The Malaria Box antimalarial collection, managed and distributed by the Medicines for Malaria Venture, includes molecules of novel chemical classes with proven antimalarial efficacy. Using targeted phenotypic assays of P. falciparum and T. gondii, we have identified a subset of the Malaria Box molecules as potent inhibitors of plastid segregation and parasite invasion and egress, thereby providing early insights into their probable mode of action. Five molecules that inhibit the egress of both parasites have been identified for further mechanistic studies. Thus, the approach we have used to identify novel molecules with defined modes of action in multiple parasites can expedite the development of pan-active antiparasitic agents.
IMPORTANCE The phylum Apicomplexa includes many human and animal pathogens, such as Plasmodium falciparum (human malaria) and Toxoplasma gondii (human and animal toxoplasmosis). Widespread resistance to current antimalarials and the lack of a commercial vaccine necessitate novel pharmacological interventions with distinct modes of action against malaria. For toxoplasmosis, new drugs to effectively eliminate tissue-dwelling latent cysts of the parasite are needed. The Malaria Box antimalarial collection, managed and distributed by the Medicines for Malaria Venture, includes molecules of novel chemical classes with proven antimalarial efficacy. Using targeted phenotypic assays of P. falciparum and T. gondii, we have identified a subset of the Malaria Box molecules as potent inhibitors of plastid segregation and parasite invasion and egress, thereby providing early insights into their probable mode of action. A substantial fraction of P. falciparum and T. gondii genes (~30% of the total) are orthologous (28, 29). Many of these conserved genes make up core components of indispensable cellular processes required by both parasites for development, replication, egress, or invasion. Hence, it is reasonable to expect comparable phenotypic responses upon exposure to a molecule that might target orthologous proteins. As a precedent, for example, antibiotics affecting plastid housekeeping functions by targeting orthologous proteins were shown to have similar phenotypic effects on P. falciparum and T. gondii (30–34). Furthermore, the apical organelles, such as the rhoptries and micronemes, and many of the resident proteins in these compartments involved in invasion are orthologous and conserved in these parasites (35–37). Inhibitors targeting these proteins are likely to be effective against both parasites. For example, perforins and proteases (38, 39), which help in active egress of the parasite from infected host cells, are conserved in both P. falciparum and T. gondii.
Moreover, despite differences in the host cell type specificity of these parasites, they appear to co-opt the same host factors to induce host cytolysis and egress (40), highlighting deeply conserved similarities at the molecular level.
We present here results of a systematic study on phenotypic screening of the MMV Malaria Box compounds against T. gondii and P. falciparum. Of a total of 390 molecules, 136 were found to significantly affect T. gondii growth and 24 of these hits had nanomolar potency against both parasites. Of a subset of 30 molecules that caused "delayed death" of T. gondii, 3 were found to act through a mechanism involving plastid missegregation during daughter cell formation. Using flow cytometry assays designed to monitor the schizont-to-ring transition of intraerythrocytic stages of P. falciparum, 26 molecules were shortlisted on the basis of their ability to either block schizont maturation and host cell rupture or inhibit host cell invasion by extracellular merozoites. These molecules were further validated in secondary assays to ascertain their specificity as egress or invasion inhibitors. None of the egress or invasion blockers identified in the Malaria Box collection had any effect on P. falciparum digestive vacuole (DV) physiology. Cross-species testing of P. falciparum egress inhibitors identified five molecules as potent blockers of ionophore-mediated egress of T. gondii tachyzoites. Taken together, our findings provide compelling evidence of how conserved phenotypic effects on two related parasites can help in prioritizing molecules from the MMV Malaria Box collection for further mechanistic studies and the development of novel antiparasitic compounds.
RESULTS
Overview of targeted phenotype screening of Malaria Box molecules against P. falciparum and T. gondii. In this study, we characterized 390 molecules belonging to the MMV Malaria Box collection on the basis of the phenotypic effects they induce in two evolutionarily related apicomplexan parasites, P. falciparum and T. gondii. We reasoned that such an approach would help to identify parasite-specific, as well as cross-species, effects of these molecules during life cycle stages that are relevant to disease progression and clinical outcome, such as the asexual intraerythrocytic stage of P. falciparum and the tachyzoite stage of T. gondii. To do this, we undertook a multipronged yet targeted phenotype screening approach, as schematically depicted in Fig. 1. Before embarking on the phenotypic studies, it was important to derive the bioactivity of all Malaria Box compounds in both species. For this, we first determined the in-house EC50s of all MMV Malaria Box molecules against P. falciparum and T. gondii by using the compound library received from MMV.
Comparing the efficacies of Malaria Box library compounds against P. falciparum and T. gondii. For EC50 determination in Plasmodium, sorbitol-synchronized ring stage parasites (strain 3D7) were treated with 2-fold serial dilutions of the molecules, ranging from 10 µM to ~5 nM. Following this, parasitemia was scored at ~60 h postinfection (hpi) by SYBR green I staining of parasitic DNA. As expected, nearly 50% of the molecules showed nanomolar efficacy against P. falciparum, in good agreement with previous results (24). In parallel, 48-h killing assays were carried out with transgenic T. gondii tachyzoites (type I RH strain) expressing firefly luciferase. The antitoxoplasma activity of Malaria Box molecules was estimated by comparing the parasitemia levels of 1% dimethyl sulfoxide (DMSO) control-treated and inhibitor-treated parasites with a standardized luminescence readout assay (see Fig. S1 in the supplemental material). Of the total of 390 molecules tested, 136 inhibited T. gondii growth by >75% at 10 µM. Importantly, 49 of these compounds showed nanomolar potency (Table S1), in contrast to a previous report (41). We then compared the EC50s for P. falciparum and T. gondii and identified 24 molecules with nanomolar potency against both parasites (Fig. 2; Table S1).
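EC50 values of this kind are conventionally obtained by fitting a four-parameter logistic (Hill) model to the normalized growth readout across the dilution series. The sketch below is illustrative only: the concentration ladder mirrors the 2-fold series from 10 µM to ~5 nM described above, but the readout values are entirely hypothetical stand-ins for the SYBR green I / luciferase signals, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter logistic dose-response curve (conc and ec50 in nM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

# 2-fold serial dilution from 10 uM (10000 nM) down to ~10 nM
conc = 10000.0 / (2.0 ** np.arange(11))

# Hypothetical % growth relative to DMSO control, with simulated assay noise
growth = hill(conc, 2.0, 98.0, 150.0, 1.2)
rng = np.random.default_rng(0)
growth = growth + rng.normal(0.0, 1.5, size=growth.size)

# Fit with plausible initial guesses and loose physical bounds
p0 = [0.0, 100.0, float(np.median(conc)), 1.0]
popt, _ = curve_fit(hill, conc, growth, p0=p0,
                    bounds=([-10.0, 50.0, conc.min(), 0.1],
                            [20.0, 120.0, conc.max(), 5.0]))
bottom, top, ec50, slope = popt
print(f"EC50 ~ {ec50:.0f} nM (Hill slope {slope:.2f})")
```

With the simulated data above the fit recovers an EC50 close to the 150 nM used to generate it; in practice the same fit would be run per compound per parasite on the plate readouts.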
Identification of molecules causing delayed death in T. gondii. In the 48-h killing assays, 191 molecules from the Malaria Box collection showed only moderate inhibition of T. gondii tachyzoite growth (<50% inhibition at 10 µM; Fig. S2; Table S1). Of these, 134 molecules had virtually no effect on parasite growth (<20% inhibition at 10 µM). We were interested in evaluating whether any of these molecules induce delayed death in T. gondii tachyzoites. Delayed death of apicomplexan parasites is a well-studied phenotype for antibiotics such as chloramphenicol, which targets the ribosomal machinery in the apicoplast (42–44). In this case, parasites continue to replicate within the first parasitophorous vacuole (PV) following inhibitor treatment and exhibit no apparent growth inhibition until they egress and reinvade a naive host cell, where they fail to replicate and die in the second vacuole. The molecular mechanism behind this delayed-death phenomenon appears to be associated with inhibition of housekeeping functions in the apicoplast (32, 45–47).
To evaluate possible delayed-death effects of the 134 Malaria Box molecules that showed no significant effects on T. gondii in 48-h killing assays, we allowed tachyzoite stage parasites to form plaques (zones of host cell lysis formed by multiple rounds of invasion and egress by parasites) on confluent host cell monolayers in the presence of inhibitors. Confluent human foreskin fibroblast (HFF) monolayers in six-well plates were inoculated with 50 parasites per well for plaque formation. Molecules were plated at 10 µM, and 1% DMSO-treated cells were used as controls. The infected cultures were left undisturbed for 10 days, after which they were processed for visualization of plaque formation. The total number of plaques formed per well and the average plaque area were quantified for each treatment. A spectrum of growth phenotypes with respect to plaque number and size was obtained in the plaque assays. Most of the molecules tested did not significantly affect the number of plaques formed with respect to DMSO-treated control cultures, which averaged between 20 and 35 plaques per well (Fig. 3A). In the case of two molecules (MMV666597 and MMV665882), we observed only three plaques each, and these were dramatically reduced in size (~0.1 U compared to a control plaque size of ~1.58 U). For many other inhibitors, even though the plaque counts were similar to those of control cultures, the plaque size was reduced to at least half of the control size (Fig. 3A; Table S1). This can be due to either altered intracellular growth and replication of the parasite in the presence of the inhibitor or a delay in parasite egress from infected host cells, resulting in delayed progress in plaque formation.
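The plaque-assay readout described above reduces to two numbers per treatment: plaque count and mean plaque area relative to the DMSO control. A minimal sketch of that comparison follows; all individual plaque measurements are hypothetical (only the ~1.58 U control mean and the half-of-control size cut-off come from the text).

```python
import statistics

# Hypothetical per-plaque areas (arbitrary units, "U") for each treatment
plaques = {
    "DMSO":      [1.4, 1.6, 1.7, 1.5, 1.7],  # control, mean ~1.58 U
    "MMV020548": [0.6, 0.7, 0.5, 0.8, 0.6],  # similar count, smaller plaques
    "MMV666597": [0.1, 0.1, 0.1],            # few, dramatically reduced plaques
}

control_mean = statistics.mean(plaques["DMSO"])
for drug, areas in plaques.items():
    mean_area = statistics.mean(areas)
    # Flag treatments whose plaques are at most half the control size
    reduced = mean_area <= 0.5 * control_mean
    print(f"{drug}: n={len(areas)}, mean area={mean_area:.2f} U, "
          f"reduced size={reduced}")
```

The same two summary statistics per well are what Fig. 3A and Table S1 report; anything flagged as reduced would then move on to the egress assay.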
We selected 21 molecules that caused a reduction in plaque size (for example, MMV020548, as shown in Fig. 3A, right panel) to test their inhibitory effects on T. gondii egress. For this, we adopted the widely used assay in which the calcium ionophore A23187 is used to induce intracellular tachyzoite egress (48) and monitored the time delay before the rupture of individual vacuoles and parasite release. Parasites were allowed to infect and replicate in HFF monolayers in the presence of either 10 µM inhibitor (or 1% DMSO as a negative control) for 24 h, followed by ionophore addition to induce egress. The time required for parasite egress from individual vacuoles was recorded and compared between DMSO- and inhibitor-treated parasites. The average egress time obtained for 1% DMSO-treated control parasites was ~1.8 min. In the case of inhibitor-treated parasites, we looked for an average egress time of at least 4 min to identify potential egress inhibitors. Of the 21 molecules tested, 20 had no significant inhibitory effect on ionophore-induced parasite egress (Fig. S3), suggesting that the reduced plaque size observed in the presence of these molecules is primarily due to diminished parasite fitness rather than a delay in parasite egress. However, one molecule, MMV001239, significantly inhibited parasite egress (Fig. 3B and C). It is interesting that MMV001239 exhibited time-dependent egress-blocking activity, as the delay in egress was much less pronounced upon treatment with 10 µM compound for 6 h (average egress time, ~3.3 min) than upon treatment for 24 h, where parasites failed to egress from most of the vacuoles even after 10 min following ionophore treatment (Fig. 3B).
The egress inhibitory effect of MMV001239 on T. gondii appears to be very specific, as this molecule has no effect on parasite growth, and plaque assays revealed the formation of plaques slightly smaller than those of the untreated control (Fig. 3C). Reduced plaque size may result from delayed egress of parasites following each round of infection during the course of plaque formation. Thus, it appears that MMV001239 is a genuine inhibitor of egress in T. gondii. Interestingly, in a recent study, it was shown that MMV001239 targets the lanosterol-14-α-demethylase enzyme in T. cruzi (49). Even though T. gondii does not encode an orthologue of this enzyme, it is likely that MMV001239 acts by interfering with membrane lipid dynamics, which is known to facilitate parasite egress from host cells. Although MMV001239 may not be useful as an antitoxoplasma agent because of its inability to abrogate parasite growth, we expect that further mechanistic studies with this molecule can help in dissecting the parasite egress pathway and identifying novel targets.
Effect of delayed-death molecules on apicoplast segregation in T. gondii. Interestingly, 30 Malaria Box molecules that had no effect on T. gondii growth in the killing assays were found to completely abolish plaque formation because of delayed death of the parasites. A well-documented phenotypic consequence associated with the delayed-death phenomenon is a failure to properly segregate the apicoplast organelle to newly forming daughter cells during cell division. This results in a PV containing a mixture of daughter cells with and without the apicoplast organelle. This phenotype has also been validated by genetic intervention (45). The plastidless parasites survive until they egress and invade a new host cell, where they die. This phenotype can be tracked microscopically by identifying PVs containing plastidless parasites.
We tested whether plastid missegregation occurs in T. gondii upon treatment with the 30 molecules that were found to completely abolish plaque formation. For this, we first allowed the parasites to invade and replicate within HFF monolayers in the presence of 10 µM inhibitor for 24 h. Parasites were then physically egressed, and host cell-free parasites were allowed to invade a naive confluent monolayer of HFF cells grown on glass coverslips in the continued presence of 10 µM inhibitor. Twenty-four hours later, the glass coverslips were fixed with 4% paraformaldehyde and processed for microscopy. In this experiment, a transgenic parasite line expressing the apicoplast-located triose phosphate isomerase II gene tagged with yellow fluorescent protein (YFP; TgtpiII-yfp) (Fig. S4) was used to visualize the apicoplast by the associated fluorescence. Among the 30 Malaria Box molecules tested in this assay, we observed an unambiguous plastid segregation defect with 3, MMV008455, MMV020885, and MMV019199, and a partial effect with 1, MMV666109 (Fig. 4). Chemically, these molecules are distinct from antibiotics such as chloramphenicol and clindamycin (Fig. S5), which are known to cause plastid missegregation and delayed death in apicomplexan parasites, and provide new tools to dissect this phenomenon in greater detail.
Effects of Malaria Box molecules on P. falciparum egress. Using a complementary approach combining flow cytometry and microscopy as previously reported (50, 51), we monitored the effects of Malaria Box molecules on late intraerythrocytic stages of P. falciparum (i.e., >40-hpi schizonts) with the goal of identifying inhibitors of egress and/or invasion. In the primary screen, schizont stage parasites (~40 to 42 hpi) were treated with 1, 3, and 10 µM inhibitors and allowed to develop until the appearance of ring stage parasites in DMSO-treated controls (~60 hpi, approximately 12 h postrupture). Parasitemia was scored by flow cytometry as described previously (50, 52). In parallel, microscopic examination of Giemsa-stained thin smears was performed following inhibitor treatment. E-64, cycloheximide, trichostatin A, and heparin were included as positive controls in all egress and invasion studies. At 10 µM, a large fraction (~62%) of the Malaria Box compounds inhibited the transition of schizonts to rings with ≥50% efficiency compared to that of DMSO-treated control samples. However, the observed effects of some of the molecules could be attributed to overall toxicity owing to the high drug concentration (10 µM) used for preliminary screening. At the lower concentrations of 3 and 1 µM, the number of molecules that blocked the transition of schizonts to rings with ≥50% efficacy dropped to 109 and 35, respectively (Fig. 5A). Results of three independent flow cytometry experiments in combination with analysis of microscopic phenotypes allowed us to shortlist 26 molecules that appeared to affect the schizont-to-ring transition with a minimum of 50% inhibition efficacy (Table S1).
On the basis of these results, we shortlisted 26 molecules for comparison of their effects on the schizont-to-ring transition in three strains of malaria parasites, 3D7, K1, and KH004-003; K1 and KH004-003 are resistant to the frontline antimalarials chloroquine and artemisinin, respectively. In these parasites, we estimated the 50% rupture (R50) values, determined as the concentrations at which a 50% reduction in the newly formed ring population (compared to that of DMSO-treated controls) was observed. Twenty hours after inhibitor treatment, the fraction of rings that remained in the culture was estimated by flow cytometry and used to calculate the R50 value. Of the 26 molecules, MMV019127, MMV000642, MMV396715, MMV006429, MMV396749, MMV006427, MMV007181, MMV665785, MMV665878, MMV665874, and MMV665831 inhibited the schizont-to-ring transition with nanomolar potency (R50 ≤ 500 nM) across all three of the strains tested (Fig. 5B; Fig. S6 to S8). Notably, the EC50 and R50 values were not always in concordance, and therefore there appears to be no direct correlation between overall killing potency and inhibition of the schizont-to-ring transition. For instance, MMV665831 showed comparable potency in both parasite killing and inhibition of the schizont-to-ring transition. In contrast, molecules with nanomolar EC50s, i.e., MMV000653, MMV007617, and MMV665857, blocked the schizont-to-ring transition by ≥50% only at severalfold higher concentrations. A third group of hits, MMV000642 and MMV006427, appeared to be more potent in blocking the schizont-to-ring transition than in parasite killing, exhibiting nanomolar R50 values despite overall EC50s in the micromolar range. Collectively, these observations point toward distinct mechanisms by which the molecules of interest seem to affect the egress/invasion of P. falciparum and subsequent progression of the lytic cycle.
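An R50 of this kind can be read off from the flow cytometry data by interpolating, on a log-concentration scale, the concentration at which ring formation falls to 50% of the DMSO control. A sketch with hypothetical ring counts (the concentration ladder matches the 0.3 to 10 µM range used in these assays; the percentages are invented for illustration):

```python
import numpy as np

# Concentrations tested (uM) and hypothetical ring formation as % of DMSO control
conc = np.array([0.3, 1.0, 3.0, 10.0])
rings_pct = np.array([85.0, 60.0, 30.0, 8.0])  # declines with dose

# R50: concentration at which ring formation drops to 50% of control.
# np.interp needs increasing x, so reverse the (declining) response curve
# and interpolate on log10(concentration).
log_r50 = np.interp(50.0, rings_pct[::-1], np.log10(conc)[::-1])
r50 = 10.0 ** log_r50
print(f"R50 ~ {r50:.2f} uM")
```

With these invented numbers the 50% crossing falls between the 1 and 3 µM points; sub-500 nM crossings, as seen for the most potent hits, would land below the lowest tested concentration and require finer dilution series.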
Next, we carried out a more detailed analysis of the egress/invasion inhibition effect of the 26 Malaria Box molecules selected from the initial screen. Magnetically purified schizonts (~40 to 42 hpi) were added to fresh red blood cells (RBCs) and incubated in the presence of inhibitors at 0.3, 1, 3, and 10 µM. Samples were collected 20 h later to score unruptured schizonts and newly invaded ring stage infections by flow cytometry. DMSO- and E-64-treated parasites served as negative and positive controls, respectively. Several molecules, i.e., MMV000642, MMV396715, MMV396719, MMV006429, MMV665878, and MMV665831, were able to block infected RBC (iRBC) rupture effectively at concentrations as low as 0.3 µM (Fig. 5C). The inhibitory potential of MMV665831 was profound, with an almost 90% reduction of ring formation across all of the concentrations tested. A >60% reduction of ring formation was observed at 0.3 µM with the molecules MMV000642, MMV396715, MMV006429, and MMV665878.
Distinguishing egress and invasion inhibition activities. Since the flow cytometry experiments used to identify potential egress/invasion inhibitors cannot distinguish between the two types of mechanistically distinct molecules, we resorted to microscopic examination following treatment with selected inhibitors. This was done to segregate the molecules as specifically egress or invasion blockers. We examined representative Giemsa-stained smears postrupture (timing of rupture estimated from DMSO-treated control parasites) to distinguish the phenotypes associated with impaired late-stage parasite development. We found that following treatment with MMV019881 and MMV665785, merozoites were trapped within iRBCs, even though the PV appeared to have been dissolved, at 7 h after the rupture time point (~55 hpi) (Fig. 6A, top). In contrast, MMV007617 and MMV396719 induced a phenotype wherein merozoites appeared to be arrested within the PV (Fig. 6A, middle) inside intact iRBCs. Interestingly, when schizonts were incubated with MMV665878 or MMV006429, merozoite release was unaffected but merozoites were found attached to the surface of RBCs (Fig. 6A, bottom). This phenotype is typical of invasion inhibitors and is reminiscent of that previously reported for heparin, a proven invasion inhibitor (53, 54). Five of the molecules that appeared to interfere with invasion (MMV006429, MMV665878, MMV666025, MMV665874, and MMV665831) were chosen for invasion studies with isolated merozoites. Late-stage schizonts (~46 to 48 hpi) purified by magnet-activated cell sorting (MACS) were physically ruptured by passage through a 1.2-µm filter as previously reported (55). After removal of the debris, noninfected RBCs and unruptured iRBCs were separated by differential centrifugation and merozoites were collected and added to fresh RBCs in the presence of serial dilutions of inhibitors.
Heparin and DMSO served as positive and negative controls for invasion inhibition, respectively. Cultures were sampled for microscopy and flow cytometry after 4 and 15 h of incubation, respectively. MMV006429 and MMV665878 showed profound merozoite invasion inhibitory activity (~50%) at concentrations as low as 0.3 µM (Fig. 6B). The reduction in the number of ring stage-infected cells (relative to DMSO-treated control cells) confirms the invasion inhibitory activity of these two molecules. This was again validated by microscopy, where merozoites were found attached to the RBC surface, as typically observed when invasion is impaired.
Merozoite egress is mediated by a variety of factors, including proteolytic activities (56, 57) that compromise host membrane architecture and changes in iRBC permeability facilitated through pore-forming proteins (38) in addition to progressive biophysical and ultrastructural changes (58). Proteases involved in egress are triggered in a time-dependent fashion, with their active forms appearing only at the right time to facilitate membrane rupture and release of merozoites. In contrast, other molecular effectors of egress either remain throughout or progressively build up during schizogony, leading to iRBC rupture. It is apparent that the molecules affecting different stages of egress act through distinct mechanisms and therefore egress blockage at distinct stages will manifest as distinct phenotypes.
To evaluate the timing and specificity of egress inhibition in P. falciparum, we selected the top nine egress inhibitors from our screens and treated the parasites with these molecules for a brief period before washing them away and allowing the parasites to egress and invade again. Two sets of magnetically purified schizonts (~40 to 42 hpi) were resuspended with fresh RBCs and incubated for 2 h in the presence of 10 µM inhibitors. In one set of cultures, the inhibitors were removed after 2 h by washing with RPMI and resuspending the cells in normal culture medium, while for the other set, the inhibitor was maintained for another 11 to 13 h (until 55 hpi). Following this, both sets of cultures were harvested and analyzed by flow cytometry. These experiments revealed remarkable differences among the molecules tested. For example, removal of drug pressure after initial exposure allowed parasites to egress normally in the case of MMV019127, MMV000642, MMV007181, MMV666061, MMV008956, and MMV665785. This was observed from the decreased presence of schizonts (>2-fold decrease) (Fig. 6C) and the corresponding increase in ring stage infection (>3-fold increase) (Fig. 6D) in washed cultures at 55 hpi. In contrast, the inhibition of the schizont-to-ring transition seen with MMV019266, MMV667490, and MMV665831 was not reversed when the inhibitors were washed away. The most significant reversal of the inhibitory effect was observed in the case of MMV007181, where inhibitor removal resulted in a >3-fold decrease and a >5-fold increase in the schizont and ring populations, respectively. On the contrary, the least reversal of inhibition was seen in the case of MMV665831, reconfirming that this molecule interferes primarily with egress and not invasion.
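The washout experiment reduces to comparing stage fractions between the washed and continuously treated cultures at 55 hpi. A sketch of that scoring follows, with hypothetical flow cytometry values and the >2-fold schizont / >3-fold ring cut-offs mentioned above serving as the reversibility criterion:

```python
# Hypothetical stage fractions (% of cells) at 55 hpi, scored by flow cytometry
continuous = {"schizonts": 6.0, "rings": 0.8}  # inhibitor kept until 55 hpi
washed     = {"schizonts": 1.5, "rings": 4.5}  # inhibitor removed after 2 h

# Fold changes upon washout: fewer residual schizonts, more new rings
schizont_drop = continuous["schizonts"] / washed["schizonts"]
ring_gain     = washed["rings"] / continuous["rings"]

# Reversible inhibition: schizonts fall >2-fold AND rings rise >3-fold
reversible = schizont_drop > 2.0 and ring_gain > 3.0
print(f"schizont decrease: {schizont_drop:.1f}x, "
      f"ring increase: {ring_gain:.1f}x, reversible: {reversible}")
```

A compound such as MMV007181 would score as reversible under this rule, whereas MMV665831, whose inhibition persists after washout, would not.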
Observation of egress inhibition even after the inhibitor is washed away could indicate that either the inhibitor has irreversibly modified its target or the target activity was inhibited at precisely the time when it was required for facilitation of egress (coinciding with the 2-h inhibitor incubation period between 40 and 44 hpi). Thus, these results indicate distinct modes of action and target selectivity for these compounds on P. falciparum development and maturation during late schizogony. P. falciparum egress is known to be mediated by cascading effects of several proteases (56), and protease inhibitors are potent blockers of parasite egress. To test whether any of the Malaria Box molecules identified as egress inhibitors have broadly specific protease-inhibiting activity, we tested their effect on P. falciparum DV physiology, which requires coordinated action by many proteases working in concert. We reasoned that this would also allow us to test the specificity of the Malaria Box egress inhibitors. We adopted a previously reported method to identify P. falciparum DV physiology disrupters (59, 60), using the calcium reporter dye Fluo-4 AM to monitor alterations in DV permeability and staining with the JC-1 probe to determine mitochondrial membrane integrity and cell viability. The results of this assay indicated that only two molecules, MMV006087 and MMV665879, affect DV integrity at 1 µM. None of the 26 egress inhibitors showed DV disruption activity. This was further verified through microscopic examination of trophozoites treated with the inhibitors at their estimated R50 values (Fig. S9).
Effects of P. falciparum egress inhibitors on T. gondii egress. Host cell cytolysis pathways of T. gondii and P. falciparum are known to have conserved aspects (61–65). Therefore, we wanted to test whether the Malaria Box molecules affecting the egress of P. falciparum are also capable of inhibiting the egress of T. gondii tachyzoites. The 26 molecules for which a detailed examination of P. falciparum egress inhibition was carried out were tested against T. gondii in the ionophore-induced egress assay. At 10 µM, 9 of the 26 molecules had a cytotoxic effect on HFF cells, which disqualified them from further T. gondii egress studies. Of the 17 remaining molecules, 5 (MMV396719, MMV019127, MMV007617, MMV006429, and MMV006427) were found to have a significant inhibitory effect on T. gondii egress. In particular, MMV396719 and MMV019127 were very potent. Even at 10 min after ionophore treatment, ≥80% of the T. gondii vacuoles remained intact (Fig. 7A and B). Notably, all of these molecules, except MMV019127, are known to disrupt sodium and pH homeostasis in P. falciparum (66).
The fact that none of these molecules were subjects of egress studies until now in the broader context of antiparasitic target discovery highlights the utility of comparative screening efforts such as those undertaken in this study. The identification, within the potent and proven library compiled by MMV, of five molecules with comparable phenotypic effects on two parasites with distinct host cell preferences establishes an avenue to probe conserved targets that could be explored for biochemical and genetic validation in future work.
DISCUSSION
The MMV Malaria Box molecules (4) have been extensively studied for their inhibitory potential against asexual and sexual stages of P. falciparum (24). This valuable antimalarial collection has also been screened against other parasitic species, such as kinetoplastids (19-21), helminths (17), Babesia (67), Theileria (68), Cryptosporidium (18), Toxoplasma (41), Giardia (69), and Entamoeba (41). These screens primarily focused on determining killing efficacies in whole-organism assays. However, little information is available on their mode of action in these parasites and only a small subset of molecules in this library have been successfully mapped to their targets.
Chemically induced phenotypes can facilitate downstream mechanistic studies, as they often serve as reliable indicators of the cellular pathways perturbed by the molecule of interest. This requires customized screening campaigns focused on the phenotype(s) of interest and is only possible when morphologically distinct cellular phenotypes can be linked to specific cellular pathways and targets. P. falciparum and T. gondii, two well-studied parasites, offer a good choice for carrying out such phenotypic screens, especially since experimental tools and reagents are readily available to dissect chemically induced cellular phenotypes. For instance, phenotypic features associated with impaired growth kinetics (i.e., fast versus delayed killing), host cytolysis (the endpoint of an intracellular replicative cycle of the parasite), and host invasion (to establish a new infectious cycle) are well characterized in both of these parasites. Furthermore, because of their shared evolutionary history, orthologous proteins in P. falciparum and T. gondii are often associated with similar cellular processes and, importantly, are likely to share sensitivity to inhibitors that affect these unique life stage events. This reasoning motivated us to undertake targeted phenotypic screening of Malaria Box molecules to identify delayed death, egress, and invasion inhibitors in T. gondii and P. falciparum, respectively. Establishing the linkage between unique chemical scaffolds and their resultant cellular phenotypes on two evolutionarily related yet distinct parasites will provide an avenue for conducting detailed mechanistic studies with the organism of choice by biochemical, cell biological, and/or genetic approaches, as appropriate.
In this study, we first determined the EC50s of the 390 Malaria Box molecules procured from the MMV for P. falciparum and T. gondii to confirm that the activities of the molecules obtained are in agreement with previously published data (24). For P. falciparum, we found that 168 molecules had nanomolar EC50s and 99 molecules had EC50s between 1 and 3 µM, indicating overall agreement with the data previously released by MMV (Table S1). However, in the case of T. gondii, we obtained results markedly different from previously published work (41). The reported study used a short-term (24-h) killing assay performed with T. gondii tachyzoites and identified only seven molecules with significant antitoxoplasma activity, of which only one, MMV007791, was reported to have nanomolar activity. In our assays, parasites were incubated for 48 h with the Malaria Box molecules before they were checked for growth inhibition. We used a transgenic strain of T. gondii expressing the luciferase gene as a reporter, where the luminescence readout is more sensitive and reproducible than the previously used method of monitoring parasite killing. Overall, we found that 199 Malaria Box molecules exhibited >50% growth inhibition and 119 had a >80% growth inhibition effect on T. gondii tachyzoites at a 10 µM concentration. Of the 49 molecules that exhibited nanomolar EC50s against T. gondii, 24 had nanomolar potency against P. falciparum as well (Fig. 2; Table S1). Not much is known from the available literature regarding the mechanism of action of these 24 molecules. Of these molecules, a few, MMV665941, MMV006389, and MMV665977, were reported to inhibit P. falciparum gametocyte development (10) and MMV665941 was found to have transmission-blocking activity (70). In a separate study, MMV665886 was found to have translation inhibition activity against Plasmodium (71).
Metabolomic profiling has revealed that MMV666596, MMV665977, MMV006309, and MMV665940 induce metabolic changes consistent with inhibition of pyrimidine biosynthesis, while MMV396669 and MMV000963 induce metabolic changes consistent with inhibition of hemoglobin metabolism (25,26). Further mechanistic studies of these potent molecules, especially in T. gondii, have yet to be performed.
Next, among the molecules that showed <20% growth inhibition in T. gondii killing assays, we identified 30 that completely inhibited plaque formation, suggesting that they cause delayed death of the parasite. Notably, about half of these molecules have potent and immediate-killing antiplasmodial activity, indicating that these molecules may be acting on distinct targets in T. gondii and P. falciparum. The mechanism of delayed killing of apicomplexan parasites has been extensively studied and is linked to the inhibition of apicoplast housekeeping functions (43,44). Examples of delayed-death inhibitors include antibiotics such as azithromycin, clindamycin, and doxycycline, which target the 70S ribosome, and ciprofloxacin, which targets the apicoplast-associated DNA gyrase (42). These antibiotics are equally effective in causing delayed death in P. falciparum and T. gondii, although phenotypic differences exist (43).
We identified four molecules, MMV008455, MMV020885, MMV019199, and MMV666109, that induced apicoplast missegregation during daughter cell formation in T. gondii. A previous attempt to identify Malaria Box delayed-death inhibitors of P. falciparum identified MMV008138, which was found to target the 2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase (IspD) enzyme required for isoprenoid biosynthesis in the apicoplast (72). MMV008138, however, exhibits no immediate-killing or delayed-death effects on T. gondii. It is worth noting that although P. falciparum undergoes delayed death, its association with a defective plastid segregation phenotype can vary markedly (72,73). The four MMV Malaria Box molecules that target the apicoplast are chemically distinct from the macrolide, tetracycline, lincosamide, and fluoroquinolone class of compounds (Fig. 4B) that are known to cause plastid missegregation and delayed death in T. gondii. Thus, it will be interesting to study the mechanism by which these molecules cause delayed death in T. gondii. For the remaining 26 molecules, which cause delayed death but have no apparent phenotypic effect on the plastids in our assays, it needs to be investigated whether they act by targeting plastid-associated housekeeping functions.
Parasite release is accompanied by host cell cytolysis, involving the concerted action of pore-forming proteins, kinases, proteases, and osmotic factors to compromise the host membrane and trigger catabolic enzymes to eventually dismantle the infected cell (56). In the case of P. falciparum, molecules targeting the cysteine and serine type proteases have differential inhibitory effects on the host RBC membrane and PV membrane, respectively (39,74), resulting in readily distinguishable microscopic features. Some of these effects are phenocopied in T. gondii as well (38,75). We identified 26 molecules from the Malaria Box collection that inhibited the egress of P. falciparum merozoites from iRBCs with low micromolar to nanomolar potency. Many of these molecules were found to have a comparable inhibitory effect on drug-resistant strains of P. falciparum as well (Fig. 5B). Of the 26 molecules tested here, 10 (MMV000653, MMV000642, MMV007617, MMV396715, MMV396719, MMV006429, MMV396749, MMV006427, MMV665878, and MMV666025) have been previously shown to affect parasite Na+ and pH homeostasis in a manner similar to that of the PfATP4 inhibitor (66) spiroindolone KAE609 (also known as NITD609) (76,77). While it is possible for these molecules to target PfATP4 and thereby indirectly affect the egress-related processes, the likelihood of their acting via novel targets and pathways cannot be ruled out. Intriguingly, MMV396749, a benzimidazole, has significant structural similarity to spiroindolone KAE609 and was reported to inhibit the liver stage development of malaria parasites (78). In our egress/invasion assays, MMV000642, MMV396715, MMV396719, MMV006429, MMV665878, and MMV665831 showed >50% inhibition of ring formation at 0.3 µM. Microscopic analysis revealed three mechanistically distinct classes of molecules: RBC membrane rupture blockers, PV rupture blockers, and invasion blockers (Fig. 6).
By washing away the drug, we were able to reverse the inhibitory effect of selected molecules. This is interesting, as it indicates that the timing of inhibition with respect to the egress phenomenon is critical for effective inhibition. This is also in agreement with previous findings showing that critical factors facilitating parasite egress from host cells, such as proteases, are present in their active forms only for a short duration when egress is in process (39,58). Moreover, the Plasmodium egress/invasion inhibitors did not affect the physiology of the DV, ruling out the possibility that they are broadly specific protease inhibitors.
It has been shown that sequential steps during invasion, such as host recognition, binding, local proteolysis, or junction formation, can be manipulated using small molecules (or antibodies), generating phenotypes that can be indicative of the specific step being affected. Among the 26 compounds that affected the transition of P. falciparum schizonts to rings, we were able to identify 2 (MMV006429 and MMV665878) that specifically affected invasion (Fig. 6B). Most importantly, we found that five Malaria Box molecules that were identified as potent blockers of P. falciparum egress were also highly effective in blocking calcium ionophore-induced egress of T. gondii tachyzoites. Two of these molecules, MMV396719 and MMV019127, that strongly inhibit parasite egress are potential candidates for further mechanistic studies.
Taken together, our results provide comprehensive documentation of selective phenotypic effects of MMV Malaria Box molecules against P. falciparum and T. gondii. By employing complementary techniques, we have prioritized at least a dozen inhibitors that selectively impair (i) apicoplast segregation, (ii) host cell invasion, and/or (iii) parasite egress. A majority of these hits are "drug-like" molecules with not only proven antimalarial efficacy but also chemical characteristics that make them amenable to subsequent medicinal chemistry work. Further investigations dissecting their mode of action by combining biochemical and genetic means would allow the community to exploit these molecules for therapeutic use against malaria and toxoplasmosis.
MATERIALS AND METHODS
Ethics statement. All experimental procedures were conducted in accordance with approved guidelines from the respective institutions. The blood used to culture malaria parasites was purchased from the Interstate Blood Bank in the United States (for work conducted in Singapore) or the Poona Serological Trust Blood Bank in Pune, India, and was collected from anonymous donors.
Blood collection and storage. Blood collected in EDTA tubes (VACUETTE EDTA Tubes; Greiner Bio-One) was washed thrice with RPMI 1640 (Sigma-Aldrich) to remove the buffy coat. Washed RBCs were stored for a maximum of 4 weeks at 4°C in complete malaria culture medium (see below) at 50% hematocrit and washed once prior to every use.
Preparation of stock and working concentrations of drugs. The Malaria Box libraries received from MMV, Geneva, Switzerland (two separate shipments to the collaborating labs in Singapore and India), contained 390 compounds as 10 mM stock solutions in a 96-well plate format and were stored at −80°C until use. Ninety microliters of cell culture grade DMSO (Sigma-Aldrich, United States) was added to the 10 mM stocks to make 1 mM working stocks that were eventually used to set up the various assays. For both P. falciparum and T. gondii, the top concentration used in either killing or phenotype assays was 10 µM. For EC50 determination, 2-fold serial dilutions of the compounds were made by starting at 10 µM. Atovaquone (P. falciparum and T. gondii), chloroquine (P. falciparum), clindamycin (T. gondii), and chloramphenicol (T. gondii) were used as standard positive controls in the killing and phenotypic assays. The concentrations of the standard drugs used in various assays are given in the relevant sections.

P. falciparum culture protocols. P. falciparum (strain 3D7) was used for all experiments, unless stated otherwise. Chloroquine-resistant (K1) and artemisinin-resistant (KH004-003) parasites were obtained from MR4 and Peter Preiser's lab, Nanyang Technological University, Singapore, respectively. Parasites were cultured at 2.5% hematocrit in RPMI-HEPES medium at pH 7.4 supplemented with hypoxanthine at 50 µg·ml⁻¹, NaHCO3 at 25 mM, gentamicin at 2.5 µg·ml⁻¹, and AlbuMAX II (Gibco) at 0.5% (wt/vol). Late-stage parasites were enriched with a MACS device (Miltenyi Biotech, Bergisch Gladbach, Germany) as reported elsewhere (79). To obtain enriched ring stage parasites, a standard sorbitol synchronization method was used.
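The drug-dilution arithmetic above (the 1 mM working stocks and the 2-fold EC50 dose series) can be sketched in a few lines of Python. Note the 10-µl per-well stock volume is an assumption for illustration; the protocol only states that 90 µl of DMSO was added to the 10 mM stocks.

```python
def diluted_conc(c_stock_mm, v_stock_ul, v_diluent_ul):
    """Concentration (mM) after adding diluent to a stock (C1*V1 = C2*V2)."""
    return c_stock_mm * v_stock_ul / (v_stock_ul + v_diluent_ul)

def twofold_series(top_um, n_points):
    """Concentrations (µM) for a 2-fold serial dilution starting at top_um."""
    return [top_um / 2 ** i for i in range(n_points)]

# Assuming 10 µl of 10 mM stock per well, adding 90 µl of DMSO yields a
# 1 mM working stock, consistent with the protocol above.
print(diluted_conc(10.0, 10.0, 90.0))  # 1.0
print(twofold_series(10.0, 4))         # [10.0, 5.0, 2.5, 1.25]
```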
T. gondii culture protocols. Tachyzoite stage T. gondii (type I RH strain) parasites were routinely maintained in HFF (ATCC) monolayers by standard protocols (80). Briefly, HFF cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% heat-inactivated fetal bovine serum (FBS; U.S. origin, Gibco), 25 mM HEPES, 2 mM GlutaMAX, and gentamicin at 50 µg/ml in a 37°C humidified incubator maintaining 5% CO2. Confluent monolayers of HFF cells were infected with 10⁵ freshly isolated, host cell-free T. gondii tachyzoites. The culture medium used for parasite growth was the same as that used for HFF cell maintenance, except that FBS was used at 1%. To isolate parasites residing within host cells, the infected HFF monolayer was scraped and passed through a 25-gauge needle to lyse the host cells and release the tachyzoites. The parasites were then separated from HFF cell debris by filtration through a membrane with a 3-µm pore size. Ten microliters of the 1:10-diluted filtrate was placed in a hemocytometer, and cell counts were obtained with the Countess machine (Thermo).
Determination of EC50s for P. falciparum. The assays used to determine EC50s for P. falciparum were carried out in a 96-well plate format by SYBR Green staining-based parasitemia determination as previously reported (81,82). The plates were first seeded with complete RPMI medium containing each MMV Malaria Box molecule at the required concentrations of 10 to 0.01 µM, followed by the addition of an iRBC suspension. The final hematocrit and parasitemia were maintained at 2.5% and ~2%, respectively. Each 96-well plate also included a standard antimalarial compound as a positive control (usually atovaquone or chloroquine at 1 µM) and 1% DMSO as a negative control. Inhibitor treatment was done in duplicate, while controls were set up as four replicates. Assay plates were incubated for 60 h under optimal growth conditions, followed by staining with 25 µl of 10× SYBR Green staining solution consisting of 0.5% Triton X-100 and 10× SYBR Green I dye (Invitrogen) in phosphate-buffered saline at pH 7.4. Fluorescence readings were obtained after 15 min of incubation with a GloMax plate reader (Promega). The raw fluorescence readings were then processed using Microsoft Excel spreadsheets for dose-response calculation and EC50 estimation.
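The spreadsheet dose-response processing is not described in detail; one simple way to estimate an EC50 from normalized percent-growth values is log-linear interpolation between the two doses that bracket 50% growth. This is a sketch of that idea, not the authors' exact calculation.

```python
import math

def ec50_loglinear(concs_um, growth_pct):
    """Estimate an EC50 (µM) by log-linear interpolation between the two
    doses that bracket 50% growth. growth_pct values are percent growth
    relative to the DMSO control (100 = uninhibited)."""
    pairs = sorted(zip(concs_um, growth_pct))  # ascending concentration
    for (c_lo, g_lo), (c_hi, g_hi) in zip(pairs, pairs[1:]):
        if g_lo >= 50.0 >= g_hi:
            frac = (g_lo - 50.0) / (g_lo - g_hi)
            log_ec50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10.0 ** log_ec50
    return None  # 50% growth was not bracketed by the tested range
```

A full 4-parameter logistic fit (as performed by dedicated curve-fitting software) gives smoother estimates; the interpolation above is only a transparent approximation.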
Determination of T. gondii growth inhibition and EC50s. T. gondii RH strain tachyzoites expressing the firefly luciferase gene were used to identify Malaria Box molecules with antitoxoplasma activity and determine their EC50s. Details regarding the generation of transgenic parasites expressing luciferase and standardization of the luminescence assay are provided in the supplemental material. The assays were set up in a 96-well plate format, and 100 µl of culture medium containing 5 × 10³ parasites was inoculated into each well containing a confluent monolayer of HFF cells, preseeded with 100 µl of culture medium containing Malaria Box compounds either at 10 µM (for growth inhibition studies) or in serial 2-fold dilutions ranging from 10 to 0.01 µM (for EC50 determination). Each plate also included a standard drug as a positive control (usually atovaquone at 1 µM) and 1% DMSO as a negative control. Inhibitor treatment was done in triplicate for growth inhibition and in duplicate for EC50 determination. The controls were set up as four replicates each. After 48 h of growth under optimal conditions, 150 µl of culture medium was removed and 50 µl of 2× luciferase assay reagent (Promega) was added and mixed well. The plates were immediately read with a VarioScan plate reader (Thermo Fisher, United States). The raw luminescence readings were then processed by using Microsoft Excel spreadsheets for calculation of percent growth inhibition and EC50 estimation.
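The percent-growth-inhibition calculation implied here is a straightforward normalization against the 1% DMSO (vehicle) wells; a minimal sketch, including the >50% and >80% hit thresholds used in the Discussion:

```python
def percent_inhibition(lum, dmso_mean):
    """Growth inhibition (%) of a treated well relative to the mean
    luminescence of the 1% DMSO negative-control wells."""
    return 100.0 * (1.0 - lum / dmso_mean)

def hit_class(inhibition):
    """Bin a compound by the inhibition thresholds used in this study."""
    if inhibition > 80.0:
        return ">80%"
    if inhibition > 50.0:
        return ">50%"
    return "inactive"
```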
T. gondii plaque formation and delayed-death assays. The plaque-forming ability of T. gondii (RH strain) tachyzoites was tested to detect a delayed-death effect in Malaria Box molecules having <20% growth inhibition of tachyzoite stage T. gondii at a 10 µM inhibitor concentration in 48-h killing assays. The plaque assays were initiated by inoculating 50 tachyzoites into each well of a six-well plate containing confluent monolayers of HFF cells preseeded with selected Malaria Box compounds at 10 µM. The plates were left undisturbed for 8 to 10 days under optimal growth conditions, after which the infected monolayers were fixed with methanol and stained with crystal violet to visualize plaque formation. The plaques were then imaged, and all images were processed with ImageJ software to determine the plaque area and count in each well. The plaque area of inhibitor-treated cultures was compared with that of untreated and 1% DMSO-treated controls. Clindamycin (known to cause delayed death of T. gondii) at 10 µM and pyrimethamine or atovaquone at 1 µM were used as positive controls for parasite killing.
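The ImageJ step reduces each well image to a plaque count and a total plaque area. The core of that measurement — labeling connected cleared regions in a thresholded (binary) image — can be sketched as a flood fill; this is a simplified stand-in for ImageJ's Analyze Particles, for illustration only.

```python
def plaque_stats(mask):
    """Count plaques and total plaque area (in pixels) in a binary image,
    where 1 marks cleared (plaque) pixels. Uses 4-connectivity flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count, area = 0, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new connected plaque found
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                    # flood-fill this plaque
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count, area
```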
Assay of apicoplast missegregation in T. gondii. Transgenic RH-Tgtpi-II-yfp T. gondii parasites, in which the endogenous triosephosphate isomerase II gene is tagged with YFP, were used to track the apicoplast phenotype by microscopy. First, tachyzoite stage parasites (5 × 10³) were allowed to invade a confluent monolayer of HFF cells (first vacuole) in a 96-well plate in the presence of selected Malaria Box molecules (10 µM) identified as having a delayed-death effect on the parasite. After 48 h of incubation under optimal growth conditions, parasites were harvested by trypsin treatment, followed by passage of the infected cells by syringe through a 25-gauge needle to release free extracellular parasites. These parasites were then added to a fresh monolayer of HFF cells grown on coverslips in 24-well plates to initiate a second round of invasion (second vacuole), again in the presence of a 10 µM inhibitor concentration. After ~20 h of growth under optimal conditions, the coverslips were fixed with 3.5% paraformaldehyde, stained with 4',6-diamidino-2-phenylindole (DAPI) to visualize cell nuclei, and mounted on a glass slide with Fluoromount (Sigma). The coverslips were then imaged with a 63× objective fitted to an inverted fluorescence microscope (Carl Zeiss, Inc.). Apicoplast-associated YFP and nuclear DAPI were imaged with excitation/emission wavelength filter combinations of 514/527 nm and 350/470 nm, respectively.
Calcium ionophore-induced egress of intracellular T. gondii tachyzoites. The calcium ionophore A23187 was used to induce the egress of intracellular tachyzoites as previously reported (83). Briefly, at 24 hpi, when the vacuoles contained between 8 and 16 parasites, the plates were removed from the incubator and allowed to equilibrate to room temperature for 5 min before appropriate vacuoles were located for imaging with a 40× objective fitted to an inverted bright-field microscope (Primo Vert; Zeiss). After ionophore addition (final concentration, 1 µM) to the culture, the vacuoles were imaged continuously for 10 min. The timing of parasite egress following ionophore addition was monitored.
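Egress timing data of this kind are typically summarized as the percentage of vacuoles still intact at each time point after ionophore addition (as plotted in Fig. 7). A minimal helper, assuming the egress time of each egressed vacuole was recorded during the 10-min imaging window:

```python
def percent_intact(egress_times_min, t_min, n_vacuoles):
    """Percent of the n_vacuoles monitored that are still intact at t_min
    minutes after ionophore addition. egress_times_min lists the egress
    time of each vacuole that egressed; vacuoles that never egressed
    within the imaging window simply contribute no entry."""
    egressed = sum(1 for et in egress_times_min if et <= t_min)
    return 100.0 * (n_vacuoles - egressed) / n_vacuoles
```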
Flow cytometry analysis of P. falciparum cultures. Flow cytometry analysis was carried out to quantify parasitemia and determine the fractions of parasites in the ring and schizont stages. A benchtop flow cytometer (Accuri C6; BD Biosciences) was used, and at least 100,000 events were recorded in each sample. To determine parasitemia, a 50-µl culture aliquot was fixed with phosphate-buffered saline (PBS) containing 0.1% glutaraldehyde (Sigma-Aldrich) at 4°C overnight. Cells were then washed in PBS, permeabilized with PBS containing 0.25% Triton X-100 (Sigma-Aldrich) for 10 min at room temperature, and finally washed again in PBS. After being washed, samples were incubated with Hoechst 33342 (Thermo Fisher) at 25 µg/ml for 30 min in the dark and counted by flow cytometry as previously reported (50). Data analysis and statistical calculations were performed with GraphPad Prism (GraphPad Software, Inc.) in accordance with the recommended protocol for nonlinear regression of a log(inhibitor)-versus-response curve.
Microscopic examination of P. falciparum. Thin smears of P. falciparum cultures were prepared on glass slides, fixed with 100% methanol, and stained with fresh Giemsa (Sigma) solution made in filtered distilled water. Smears were examined with a 100× oil immersion objective and a standard phase-contrast microscope (Leica). Images from the smears were captured with an Olympus digital camera and processed with Adobe Photoshop.
Monitoring of the P. falciparum egress phenotype and determination of R50 values. Tightly synchronized schizont stage parasites (~40 to 42 hpi) at 1% parasitemia and 2.5% hematocrit were incubated with selected drugs at 10, 3, 1, 0.3, and 0.1 µM for 12 h and harvested for flow cytometry analysis. At the time of analysis, in DMSO-treated controls, schizont maturation and ring stage formation were almost 100%. Heparin (invasion inhibitor)-treated parasites served as negative controls for egress inhibition, while E-64 (egress inhibitor) was included as a positive control for egress inhibition. In parallel, Giemsa-stained smears were made at 52 hpi to monitor and confirm the rupture phenotypes. For a more detailed analysis of egress and/or invasion inhibition by the compounds, the above protocol was used, except with a higher parasitemia level of 2.5%.
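The flow cytometry endpoint — how many schizonts remain and how many rings form relative to the DMSO control — is what separates the inhibitor classes described above (RBC membrane and PV rupture blockers retain schizonts; invasion blockers let schizonts rupture but prevent ring formation). A hypothetical decision rule, purely for illustration and not the authors' scoring scheme:

```python
def classify_block(rings, schizonts, rings_ctrl, thr=0.5):
    """Heuristic classification of a compound's block point from end-point
    flow counts taken over equal culture volumes. rings_ctrl is the ring
    count in the DMSO control; thr is an assumed cutoff for 'normal'
    ring formation. Returns 'egress block', 'invasion block', or 'no block'."""
    ring_frac = rings / rings_ctrl          # fraction of normal ring formation
    if ring_frac >= thr:
        return "no block"
    # Rings suppressed: retained schizonts point to an egress block,
    # their absence to a post-egress (invasion) block.
    return "egress block" if schizonts > rings else "invasion block"
```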
Molecular protocols. The transgenic T. gondii parasite (RH) strains used in this study are RH-Luc and RH-Tgtpi-II-yfp. RH-Luc, which constitutively expresses the firefly luciferase gene, was used for killing assays. The firefly luciferase gene (PCR amplified from plasmid pGL3 [Promega]) was cloned into a modified pBluescript plasmid backbone as a BglII and NheI fragment for expression under the constitutively active T. gondii α-tubulin 5′ untranslated region (UTR) and the TgSAG 3′ UTR. This plasmid includes the dihydrofolate reductase (DHFR) expression cassette for the selection of stable lines of transfected parasites with pyrimethamine.
RH-Tgtpi-II-yfp, which constitutively expresses the triose phosphate isomerase II gene tagged with YFP, was used for apicoplast missegregation studies. Since the TPI-II protein is naturally targeted to the apicoplast, in this transgenic parasite, the organelle is marked by YFP fluorescence. The genomic locus tagging plasmid construct used to generate transgenic parasites expressing RH-Tgtpi-II-yfp was made as follows. Genomic DNA was isolated from tachyzoite stage T. gondii with a commercial kit (Qiagen) in accordance with the manufacturer's protocol. The Topo 2.1 cloning vector (Invitrogen) was modified to make a 3′ YFP tagging plasmid. PCR-amplified 1.7-kb T. gondii genomic DNA corresponding to the 3′ region of the Tgtpi-II gene locus [TGME49_233500 and TGME49_chrVIII, 2,682,525 and 2,688,704 (−)], which includes the codon for the last amino acid but excludes the stop codon, was cloned as a HindIII and AvrII fragment. The YFP coding sequence, along with the Tgdhfr 3′ UTR region, was then cloned downstream of the Tgtpi-II genomic region. A DHFR selection cassette, used to obtain a stable line by pyrimethamine selection, was cloned into the tagging plasmid as a NotI fragment. Prior to parasite transfection, the tagging plasmid was linearized with the BstXI enzyme, which cuts in the middle of the 1.7-kb Tgtpi-II genomic fragment (Fig. S4).
Generation of T. gondii transgenic lines. Freshly isolated T. gondii tachyzoites were washed and resuspended in complete parasite culture medium. A 50-µg sample of linearized plasmid DNA dissolved in 100 µl of medium was mixed with 300 µl of a parasite suspension (~1 × 10⁷ cells), and the mixture was electroporated with a Gene Pulser electroporation system (Bio-Rad) with capacitance and voltage settings of 10 µF and 1.5 kV, respectively. Transfected parasites were then allowed to infect a fresh layer of HFF cells and incubated for 12 h under optimal growth conditions. Infected HFF monolayers were then washed once, and 10 µM pyrimethamine was added to the culture medium and incubated for 48 h to obtain stable transgenic lines, which were then cloned by the limiting-dilution method. A clonal isolate of the firefly luciferase-expressing parasite was tested for linearity in luminescence detection over a 2-fold dilution series of parasite numbers. The results (Fig. S1) indicate good linearity between parasite counts and luminescence readouts in the range of 10⁴ to 10⁸ parasites. We tested the linearity of the luminescence detection assay after inoculating HFF monolayers into 96-well plates with various numbers of parasites and allowing them to proliferate for 48 h under standard growth conditions. On the basis of the results obtained from this experiment (Fig. S1B), we decided to inoculate 5 × 10³ tachyzoite stage parasites per well for the drug inhibition assays.
"year": 2018,
"sha1": "90cefe57f95d4e8f6eb916fed7dd3078aa0cea22",
"oa_license": "CCBY",
"oa_url": "https://msphere.asm.org/content/msph/3/1/e00534-17.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c0e96bdfcf57f4627da470a69a13f87c62884eae",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Railroad Bailouts in the Great Depression
The Reconstruction Finance Corporation and Public Works Administration loaned 50 U.S. railroads over $1.1 billion between 1932 and 1939. The government goal was to decrease the likelihood of bond defaults and increase employment. Bailouts had little effect on employment, instead they increased the average wage of their employees. Bailouts reduced leverage, but did not significantly impact bond default. Overall, bailing out railroads had little effect on their stock prices, but resulted in an increase in their bond prices and reduced the likelihood of ratings downgrades. We find some evidence that manufacturing firms located close to railroads benefited from bailout spillovers.
Introduction
The Reconstruction Finance Corporation (RFC) was created by President Hoover in early 1932 during the depths of the Great Depression. The objective of the RFC was to "make temporary advances upon proper securities to established industries, railways, and financial institutions which cannot otherwise secure credit, and where such advances will protect the credit structure and stimulate employment" (hereafter referred to as "the twin objectives"). 1 The Corporation approved $3.9 billion in loans from 1932 until 1939. 2 We call such loans 'bailouts' because they were provided at below-market interest rates to companies that struggled to access credit from commercial sources. 3 Although most of these loans went to the financial sector (e.g., Mason (2001), Calomiris, Mason, Weidenmier, and Bobroff (2013), and Butkiewicz (1995, 1997)), railroads were, by far, the largest non-financial sector recipient (i.e., $1.17 billion, including rollovers) during this eight-year period. In this paper, we explore how the RFC's bailouts for railroads, along with limited assistance from the Public Works Administration (PWA), impacted the economy.
First, we study the impact of the bailouts on the assisted railroads themselves, given the RFC's twin objectives. There is little impact of bailouts on railroad employment, although the average wage paid by bailed-out railroads increased. Bailouts allowed loan recipients to reduce leverage, although we find no evidence that bond defaults declined. Interestingly, bailouts were associated with a reduced probability of a bond rating downgrade. Overall, there was a significant increase in bond prices following the announcement of a loan application or approval. Indeed, in the nine days surrounding news of an application, bond prices experienced an abnormal return of 2.2%. An approval is associated with a 1.8% abnormal return for bonds, but little effect on equity prices. Bailouts appear to have benefited existing employees and bondholders rather than aiding firm performance, such as railroad profits.

1 RFC Final Report (1959), page 1.
2 Over the entire period of the RFC's existence, the agency recovered 97.99% of the nominal value of the loans (see RFC Final Report, p. 163). We halt our examination in 1939 since the Great Depression is usually considered to have been over by the end of the 1930s, and World War Two began soon afterwards.
3 In 1932 and 1933, RFC loans were extended at the same interest rate as Federal Reserve loans to member banks. See https://www.federalreservehistory.org/essays/reconstruction-finance-corporation (accessed April 26, 2022).
Second, we study the impact of railroad bailouts on the wider economy. Although New Deal railroad assistance was not explicitly aimed at the railroads' customers, we find that firms located in the same city or town as a bailed-out railroad experienced positive spillovers from news of a forthcoming railroad loan. Manufacturing firms whose operations had a high geographical overlap with the assisted railroad experienced a 0.9% abnormal return upon announcement of the bailout, compared to a 0.4% abnormal return for manufacturers with low levels of overlap with that railroad.
Railroads were a vital part of the U.S. economy in this era. The transportation sector employed over 3 million people in 1929 (6.9% of total employment), and generated $6.6 billion of national income (8.2% of total income), see Kuznets (1934). Railroads comprised roughly 10%, by number, of all NYSE stocks in CRSP in January 1929. By value, the railroad industry was about 13% of the market.
There is extremely granular data on railroads, which allows us to know where railroads operated, the products they transported, and their employment levels. We also have annual financial statements and monthly revenue reports for all railroads.
Details of government railroad loans were quickly made public by the railroad regulator, the Interstate Commerce Commission (ICC), and widely reported in the media. It was, by contrast, impossible to observe the immediate impact of government loans in most sectors during the Great Depression.
Loans to banks, farms, and industrial firms were largely kept secret, and financial claims on these firms were not usually traded in liquid financial markets. Anbil (2018) shows that banks that, unexpectedly, had their loans from the government revealed in August 1932 experienced drops in deposits of 18 to 25% compared to control banks. These banks were also 78% more likely to fail after their loans were publicly disclosed. Government loans to railroads required approval by both the RFC and the ICC. The New York Times reported on May 21, 1932 (p. 21) that, "although the corporation (RFC) cannot approve loans unless they are sanctioned by the (ICC) commission, it is not required to make the advances sanctioned." A loan approval required eight conditions to be certified by the RFC board of directors: (i) that the applicant could not obtain funds on reasonable terms from banks or the public, (ii) approval of the ICC, (iii) a maturity of less than three years, (iv) the borrower existed prior to 1932, (v) total government loans to the applicant were less than $100 million, (vi) no fees or commissions were paid by the applicant, (vii) the applicant consents to examinations by the ICC or other authorities, and (viii) collateral must meet the terms of the Finance Corporation Act. 4 In total, 191 railroad applications were successful, 32 were rejected, and 14 were subsequently withdrawn. Railroads that obtained a government loan likely differed from railroads that did not receive government aid. Although we condition our results on the publicly available characteristics of railroads, it is likely that railroads also differed along unobservable dimensions. To address this issue, we take advantage of the political process that was inherent in RFC decision-making. That is, RFC directors were appointed by the President and confirmed by the U.S.
Senate. Political considerations were important in the 'New Deal' decision-making process, see e.g., Wright (1974), Wallis (1987), and Fishback, Kantor, and Wallis (2003). We find that railroad bailouts were more likely to be granted to railroads that operated in the home states of RFC directors. Using the composition of the RFC board as an instrument for government assistance, we fail to find any beneficial effects on railroad employment or the likelihood of bond defaults.
Direct government aid to the real economy has been rarely attempted during a crisis, although direct aid was a big part of many governments' COVID-19 responses (see e.g., Cirera et al. (2021), Elenev, Landvoigt, and Van Nieuwerburgh (2022), and Granja, Makridis, Yannelis, and Zwick (2020)). 5 As such, government aid to non-financial firms during a crisis has received little attention in the academic literature. Faccio, Masulis, and McConnell (2006) find that politically connected firms were more likely to be bailed out around the Asian financial crisis, especially if the national government had received an IMF or World Bank aid package. The authors conclude that bailed-out firms that were politically connected continued to underperform non-bailed-out firms in the same industry following the bailout, as measured by the return on assets (ROA). However, the ROA for non-connected firms improved relative to same-industry peers after a bailout. The study does not, therefore, fully determine whether real-sector bailouts are good public policy in a crisis. Goolsbee and Krueger (2015) argue that the bailouts of General Motors and Chrysler in 2008 helped to reduce the economic downturn in the U.S. They conclude, "The rescue has been more successful than almost anyone predicted at the time." Their study is necessarily restricted to two firms since the remaining Troubled Asset Relief Program (TARP) funds went to the financial sector. Berger and Roman (2017) investigate economic spillovers following TARP bailouts of U.S. banks. They find that in the counties in which banks received more TARP funds, there was better net job creation and faster wage growth, perhaps because TARP recipients passed on more generous loan terms to their customers (see Berger, Makaew, and Roman (2019)). Assistance first went to Wall Street before going to Main Street.
4 See New York Times, March 5, 1932, page 23.
While Berger and Roman study indirect assistance to the real sector, we study direct loans (at preferential interest rates) from the government to industry.
Non-financial firms experience a crisis differently from banks. There can be no 'run' on a railroad's assets. U.S. railroads, for instance, often issued 50-year bonds to finance their operations, so there could be no 'run' on the railroad's debt unless their bonds were close to maturity (see Benmelech, Frydman, and Papanikolaou (2019)). Railroads cannot quickly change their operations to 'gamble on resurrection' (see e.g., Hellmann, Murdock, and Stiglitz (2000) and Dewatripont and Tirole (2012)).
Tracks, and other real assets, are fixed and costly to divert in the search for new business.
We discuss the economic environment that led to the creation of the RFC and the Corporation's structure in section 2. We describe our data and sources in section 3. We present our main results in section 4, with robustness checks in section 5. We conclude in section 6.
The Great Depression and the Reconstruction Finance Corporation
The Great Depression was an unprecedented period of economic and financial collapse worldwide.
It struck the U.S. particularly severely with peak-to-trough industrial output falling 40% by late 1931 and GDP still 25% below trend six years after the recovery began (see Cole and Ohanian (2004) and Ohanian (2009)). There were several waves of banking crises in the early 1930s (see Bernanke (1983) and Friedman and Schwartz (1963)). In response to the weak economy and runs on troubled banks, President Hoover reluctantly created the Reconstruction Finance Corporation in January 1932, which was a component of what came to be known as the 'New Deal.' The RFC was initially permitted to lend to financial firms and railroads; loans were later permitted for farms, state and local government, infrastructure projects, and industrial firms. The RFC's initial capital stock came from a $500 million appropriation from the Treasury. While it obtained the bulk of its additional funding by issuing notes to the Secretary of the Treasury, a tiny part of its operations was provided for by direct borrowing from the public.
The New York Times reported on December 19, 1931, that Hoover believed that, "the plight of the American railroads is only temporary and that they will be able to work themselves out of the depression." 6 The United States had experienced severe railroad defaults during crises in 1873, 1884, and 1893, in which multiple large lines defaulted, resulting in significant drops in railroad employment (see Schiffman (2003), Giesecke, Longstaff, Schaefer, and Strebulaev (2011), and Cotter (2021)).
Part of the rationale for providing aid to railroads was that many railroads were not capable of repaying their maturing bonds, and it would be exorbitantly expensive for them to obtain alternative funding from the banking sector. In late 1931, Daniel Willard testified in the Senate that railroads "cannot borrow money from banks at less than 8 or 9 percent interest" when most maturing bonds had coupon rates of around 4 percent. 7 The bond market for railroads virtually closed during the worst years of the depression, 1932 to 1934 (see Figure 1), and the government acted to fill that gap.
Treasury Secretary Andrew Mellon saw the role of the RFC as providing "a stimulating influence on the resumption of the normal flow of credit into channels of trade and commerce." 8 The Reconstruction Finance Corporation Act became law on January 22, 1932. The initial board of directors was appointed on February 2, and the first applications were received on February 5, 1932. Mason and Schiffman (2004) calculate that, in 1931, 31% of railways' debt was held by insurance firms, 17% by banks, 4% by foundations and educational institutions, and 7% by other railroads, with the remainder held by "other" investors. Helping railroads avoid bond defaults would have been expected to reduce negative spillovers to the financial sector.
RFC loans to railroads were limited to three years duration, had to be 'adequately secured' by collateral, and were restricted to railroads that could not obtain funds on "reasonable terms".
Collateral usually consisted of additional issuance of a railroad's bonds to the RFC, although occasionally direct liens on tracks or rolling stock were used. Most loans were for a three-year duration, and 83.5% of loans in our sample were rolled over at maturity. Railroads that were reorganizing under bankruptcy protection were also eligible for RFC loans. Overall, the RFC granted loans of, on average, $8.89 million.
Although disclosure of RFC loans to banks was very sporadic, the ICC had a policy of full and timely disclosure of railroad loans. All railroad loans were publicly disclosed at or near the time of loan application and approval. Loan information was sometimes, however, delayed slightly. The Baltimore and Ohio Railroad's loan application, for example, was kept secret for 10 days in August 1932. We show the distribution of railroad bailout loans over time in Figure 2. The biggest concentration of loans was in 1932 and early 1933, followed by another wave of new loans and rollovers in 1935 and again in 1938 and 1939. We depict the geographical distribution of loans by state in Figure 3. The large number of railroads that operated between New York and Chicago shows up as a heavy concentration of approvals in the Northeast and Midwest.
The initial board's ex-officio members were the Secretary of the Treasury, the Chairman of the Federal Reserve Board of Governors, and the Farm Loan Commissioner. Directorships were balanced by party affiliation, and care was taken that the directors came from different regions of the U.S. We read press reports, Final Report on the Reconstruction Finance Corporation (1959), and online biographies of the RFC directors to assign the directors' 'home' states. For example, the New York Times reported that two members of the initial RFC board would be "two Democrats from the Southwest, Harvey C. Couch of Arkansas and Jesse H. Jones of Texas." 9 We document the composition of the RFC board in Table 1, panel A. Most of the appointed RFC directors were businessmen, and four were former U.S. senators.
New Deal funding decisions are generally considered to have been at least partly politically motivated (see e.g., Wright (1974), Wallis (1987), and Fishback, Kantor, and Wallis (2003)). The RFC's decisions were similarly criticized. In April 1932, Representative La Guardia claimed, "everybody in the country knows a private wire from J. P. Morgan to the headquarters in Washington dictates the [RFC's] policy." 10 The RFC's initial president, former Vice President Charles G. Dawes, was heavily criticized by Senator Brookhart of Iowa for having loaned over $80 million to Dawes' own Chicago bank. 11 In our analysis, we demonstrate that RFC railroad loans were also partly determined by the geographical origins of the RFC board. We use the composition of the RFC board at the time loans were made as an instrument for loans.
9 New York Times, January 26, 1932, page 1.
The Public Works Administration also made government loans to railroads from late 1933 until early 1936. PWA loans tended to be smaller than the RFC's disbursements, and they were often used for capital expenditure rather than to service the railroad's debt. In total, PWA loans only comprise around 10% of our sample by value, and 15% of our sample by number. Since money is fungible, we consider both RFC and PWA loans in our analysis.
Data sources
We study the Class I railroads of the continental United States. Class I railroads owned over 90% of the nation's tracks by length, they employed roughly 98% of railroad employees (3.4% of the United States' total labor force), and they carried over 99% of the revenue-ton-miles of all U.S. railroads in 1929. We collect balance sheet, profit and loss, track network, and employment data for these railroads from the annual reports of the ICC, Statistics of Railways in the United States.
We compile annual statistics for each railroad's freight revenue sources (i.e., agricultural, animal, mining, forestry, merchandise, or manufacturing items) and monthly revenue from Moody's Manual of Investments – Railroad Securities. Data on freight revenue is important because some railroads concentrated on transporting a narrow range of products. As a result of the railroads' varied exposure to product markets, the Great Depression could affect railroads unevenly. In addition, we use the ICC's classification to place railroads into eight geographical regions.
If a railroad had publicly traded equity, we gather stock prices from the Center for Research in Security Prices (CRSP). Many railroads did not have publicly traded equity, usually because they were fully owned by another railway or a related industrial firm. We compile price data on the most liquid bond per railroad. 12 We obtain bond prices from the section 'Bond Sales on the New York Stock Exchange' in the New York Times. We classify a railroad as being in default if it had failed to meet a coupon or principal repayment or if it in any way changed the terms of the issue. 13 Data on bond ratings, coupons, amounts outstanding, and maturity comes from Moody's Manual of Investments – Railroad Securities. We use Moody's index of daily corporate bond prices, reported in the Commercial and Financial Chronicle, as a proxy for the market return on railroad bonds.
We combine the track network of each railroad with two city-level data sources. First, we hand-collect data on factories operated by NYSE-listed manufacturing firms from Moody's Manual of Investments – Industrial Securities. In total, there were 471 manufacturing firms that have data on factories reported in Moody's. Second, we obtain city-level building permit data from Cortes and Weidenmier (2019). The value of these permits is based on the costs of new commercial and residential buildings for 215 cities across the U.S., taken from Dun & Bradstreet's Review. We collect national bank capital in default from the Annual Report of the Comptroller of the Currency. National bank capital per state comes from Flood (1998).
Bailouts
We identify railroad bailouts from the Reconstruction Finance Corporation's records at the National Archives in Washington. 14 For each railroad, we observe their application, approval, and refusal documents and the dates on which each document was sent. Loan decisions were made quickly, usually taking a couple of weeks to a month or two. The median (mean) value of Decision time is 27 (45) days.
To identify the public announcement date of railroad bailouts, we search the New York Times for the phrases "Reconstruction Finance Corporation" or "Public Works Administration" from January 1932 until December 1939. We identify loan applications, approvals, rejections, and details thereof in the archives. 89.5% of all applications, approvals, and refusals from the archives also appear in the New York Times.
Occasionally a railroad would revise the size of their loan request while the application was under consideration. For example, the RFC might agree to loan a railroad a smaller amount than initially requested and would invite the railroad to modify its application. Alternatively, a railroad might have been successful in obtaining some funds from other sources and reduce their loan amount. For every loan we calculate the difference between the application and approval sizes (Loan size difference).
RFC Board composition
In Table 1, panel A, we present data on RFC directors. The composition was supposed to be balanced by party affiliation and geographically diverse. However, it is possible that the appointment of RFC directors was partly determined by economic conditions in the home state of the director or even by financial conditions in the railway sector in their home states. In Panel B, we document that larger states were more likely to have RFC directors, and we find evidence that directors were less likely to be appointed if there was already a director from the same state. Our results show that the appointment of directors is not robustly associated with economic conditions in the directors' home states.
14 Records RG 234.
This alleviates concerns that causality runs from the poor economic conditions of railroads to the appointment of RFC directors and thence to more railroad bailouts.
Summary statistics
In Table 2, we present our summary statistics on railroads. We divide railroads into those that were "bailed-out"-which we define as having received at least one loan from the RFC or the PWA between 1932 and 1939-and those that were not bailed-out. In Panel A, we show that there are large differences between the bailout recipients and others. Bailed-out railroads had less cash to total assets (a mean of 1.5% of assets vs. 2.3% for non-bailed-out railroads), were slightly more levered (mean long-term debt to total assets of 42.5% vs. 40.7%), were less profitable (a mean net income to total assets of 0.9% of total assets vs. 1.3%), and had more volatile operations (monthly volatility of 12.3% vs. 9.2%). Bailed-out railroads were much larger (mean total assets of $195.1 million vs. $29.2 million), employed more people (a median of 9,145 vs. 1,214), had higher average wages ($1,634 vs. $1,568), and had more connections to the RFC board. On average, bailed-out railroads operated in 0.8 states with an RFC director vs. 0.4 for non-bailed-out railroads. 15 In Panel B, we subdivide the bailed-out railroads into two groups: those that received a single loan from the RFC or PWA and those that received multiple bailouts between 1932 and 1939. The railroads that received multiple bailouts tended to have less cash (mean 1.4% of total assets vs. 1.9%) and were better connected to the RFC (connections in 0.9 states vs. 0.4 states). The companies that obtained multiple bailouts tended to be larger (mean size of $247.0 million vs. $65.1 million), employed more people (a mean of 10,076 people vs. 2,609 people), and focused more on passengers (12.5% vs. 7.6% of total revenue at the mean).
Bailout recipients
We use a two-step model to investigate which railroads received the government bailouts, in the spirit of Vossmeyer (2016). In the first step, we run an OLS model of railroads' applications on their lagged characteristics. Application equals one if the firm applied for at least one bailout in that year, and zero otherwise. We include year, bond rating, and region-year fixed effects to control for unobserved, time-varying shocks to economic conditions and RFC board processes.
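The first-step regression can be sketched as a linear probability model with year fixed effects. The data below are simulated, and every number in the data-generating process is hypothetical except the 0.029 marginal effect of connections, which is the paper's column 1 estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical railroad-year panel (all values made up for illustration).
connections = rng.integers(0, 4, size=n).astype(float)  # states with an RFC director
leverage = rng.normal(0.42, 0.16, size=n)               # long-term debt / assets
year = rng.integers(1932, 1940, size=n)

# Linear probability model: Pr(apply) depends on connections but not leverage,
# mirroring the finding that balance-sheet traits did not predict applying.
p_apply = 0.03 + 0.029 * connections
application = (rng.random(n) < p_apply).astype(float)

# Design matrix: intercept, regressors, and year fixed effects (first year dropped).
years = np.unique(year)
year_fe = (year[:, None] == years[1:]).astype(float)
X = np.column_stack([np.ones(n), connections, leverage, year_fe])

beta, *_ = np.linalg.lstsq(X, application, rcond=None)
print(round(float(beta[1]), 3))  # estimated effect of one extra connection, ~0.029
```

With enough observations the OLS coefficient on connections recovers the planted 0.029 marginal effect, while the coefficient on leverage stays near zero.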
In column 1, we highlight that railroads were more likely to apply for a bailout when they had more connections to the RFC. An increase of one in the number of connections leads to an increase in the probability of applying of 0.029 (significant at the 1% confidence level). Given that the average probability of applying equals 0.067, this effect is economically large. Railroad characteristics, such as cash holdings, leverage, and profitability, do not change the likelihood of a railroad applying for a bailout, once we condition on a railroad's bond rating. In column 2, we allow for unobserved, time-varying shocks at the regional level. Allowing for these shocks does not change the coefficient estimates.
In the second step, we study which characteristics correlated with a loan approval, among railroads that applied for (at least) one loan. The dependent variable, Approval, equals one if a railroad received at least one bailout in a year in which it applied at least once. In columns 3 and 4, we find that successful applicants were, on average, larger, more levered, focused more on passengers, and had more bonds that were close to maturity. 16 Applicants that had received more loans in the past were less likely to be approved. When we take the unobserved, time-varying regional effects into account (column 4), we see little change.
In columns 5 and 6, we examine the size of loan approvals (scaled by total assets). We see that more levered railroads tended to receive larger loans, and applicants who had received more prior loans from the government tended to receive smaller loans. Railroads whose leverage was one standard deviation higher (15.9 percentage points of total assets) received a bailout that was, on average, 38.92% higher.
Market reactions to bailouts
We examine the reactions of a railroad's stock and bond prices to New York Times' reports of bailout applications and approvals. In Table 4, we compute abnormal returns (AR) and cumulative abnormal returns (CAR) on railroad debt and equity. We compute abnormal returns as the return on the stock or the bond less the CRSP market return or Moody's corporate bond index return, respectively. 17 We find statistically significant abnormal returns of 0.9% for bonds on the day a loan application was announced and a statistically insignificant abnormal return of 0.3% on the day an approval was announced (see Panel A). There is no statistically significant AR on refusal announcement dates, although there is a significant abnormal return of -5.1% the next day. In the window around the news release (t-4 to t+4), we find CARs of 2.2% (applications), 1.8% (approvals), and -6.6% (refusals), although the refusal return is statistically insignificant. Since many railroads applied for (and were granted) multiple loans, we investigate the differences between the initial loan and subsequent loans. Substantially more private information is likely to have been conveyed to the market by a firm's initial revelation that it desired federal government financial assistance. An application announcement for the first bailout is associated with a 4.1% bond CAR from t-4 to t+4, and a 2.1% AR on day zero (see Panel B).
16 We include a railroad's debt maturing in the depths of the depression following the papers of Benmelech, Frydman, and Papanikolaou (2019) and Duchin, Ozbas, and Sensoy (2010).
17 We only hand-collect bond prices in a narrow window around RFC announcements. Therefore, we are unable to estimate a market model for railroad bonds. To maintain consistency in our measurement of abnormal returns between bonds and stocks, we compute abnormal returns for railroad stocks in the same manner. Effectively, we assume that alpha equals zero and beta equals one in the market model.
An approval announcement for the first bailout has a 1.2% bond AR on day zero (with a 5.9% CAR from t-4 to t+4), all statistically significant. Subsequent bailouts are reflected in more subdued bond responses. A second or subsequent application has a 0.4% bond AR on announcement day (1.4% from t-4 to t+4), both statistically significant. A second or subsequent approval has no movement on day zero, but a 0.6% AR (significant) on day one and a statistically insignificant 0.9% CAR from t-4 to t+4.
In contrast to the response of bond prices, there is little statistically significant movement of stock prices in response to bailout news events (see Panels A and B). The exception is that news of subsequent RFC approvals resulted in a 1.5% equity AR on the announcement date and a 2.3% CAR (both statistically significant) from t-4 to t+4.
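The market-adjusted event-study arithmetic above is straightforward to reproduce. The sketch below uses made-up prices for one bond and the bond-market index over an eleven-day window (all numbers hypothetical) and applies the alpha = 0, beta = 1 convention described in the text:

```python
import numpy as np

# Hypothetical daily prices for a railroad bond and a bond-market index,
# days t-5 .. t+5 around a loan announcement at t = 0 (illustrative values).
bond = np.array([60.0, 60.2, 60.1, 60.5, 60.4, 61.0, 61.3, 61.5, 61.4, 61.8, 61.9])
index = np.array([80.0, 80.1, 80.0, 80.2, 80.1, 80.2, 80.3, 80.2, 80.3, 80.4, 80.4])

r_bond = np.diff(bond) / bond[:-1]
r_index = np.diff(index) / index[:-1]

# Abnormal return: bond return minus index return (alpha = 0, beta = 1),
# matching the simple market-adjusted approach in the text.
ar = r_bond - r_index

# CAR over the event window t-4 .. t+4: after differencing, returns 0..8
# cover exactly those nine days.
car = ar[0:9].sum()
print(round(float(car), 4))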
Determinants of announcement returns
We examine the association between a railroad's characteristics at the time of the bailout and its announcement returns. In Table 5, we regress the CAR of the railroads' bonds and equity (from t-4 to t+4) on firm and bailout variables. We find few railroad characteristics that are robustly associated with security returns. Most characteristics are insignificant and/or change signs depending on whether we examine applications versus approvals or bonds versus shares. Railroads with more leverage experience substantially worse returns on their equity upon announcements of loan applications, perhaps because a loan application indicated the railroad would struggle to service its debt, and, therefore, that equity was next to worthless. A one standard deviation increase in leverage corresponds to a 9.60% more negative CAR. Railroads with more employees tended to have lower announcement returns, perhaps because market participants expected government pressure on the railroad to maintain employment. 18 A one standard deviation increase in employment is associated with a 2.03% more negative CAR.
Effectiveness of government bailouts
We now examine if railroad bailouts achieved the RFC's twin objectives. We first investigate if the bailouts helped the railroads avoid defaulting on their bonds. Second, we look at how the bailouts affected railroad employment, wages, and operating performance. Finally, we search for evidence that these bailouts provided positive spillovers for firms that relied on the railroads to provide transport services.
Credit structure
Did the RFC protect the "credit structure" of the financial system? All else equal, an RFC loan should have made a railroad less likely to default on its debt. Jones (1951) claims that RFC funding reduced railroad defaults by half, whereas Schiffman (2003) and Mason and Schiffman (2004) claim that bailouts at least delayed defaults. 19 However, defaulting on debt is partly a choice, and Mason and Schiffman argue that bankruptcy "brought relief from high fixed charges that were often a principal cause of financial distress" (p. 61). In Figure 4, we plot Kaplan-Meier (1958) graphs with cumulative probabilities of failure for the bailed-out vs. non-bailed-out railroads. We observe that railroads that received a bailout are associated with a higher hazard rate of bond defaults, and that this difference increases with time from the bailout. In Panel B, we show that the higher default rate for bailed-out railroads survives the inclusion of control variables. The granting of a below-market-rate loan, ceteris paribus, is a good event. Hence, higher default rates for bailed-out railroads suggest that unobservable factors are likely influencing both a railroad's performance and the bailout decision.
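The cumulative failure probabilities in Figure 4 come from the Kaplan-Meier estimator, which can be sketched by hand. A minimal sketch on hypothetical times-to-default, ignoring tied event times for simplicity:

```python
# Kaplan-Meier estimator of the cumulative probability of bond default,
# applied to hypothetical (time-to-default, observed) data.
def km_cumulative_failure(times, events):
    """times: years since bailout; events: 1 = default observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []  # (time, cumulative failure probability)
    for i in order:
        if events[i] == 1:
            survival *= (at_risk - 1) / at_risk
            curve.append((times[i], 1.0 - survival))
        at_risk -= 1  # defaulted or censored units both leave the risk set
    return curve

# Hypothetical bailed-out group: defaults at years 2, 3, 5; censored at 4 and 6.
curve = km_cumulative_failure([2, 3, 4, 5, 6], [1, 1, 0, 1, 0])
print(curve)
```

Censored railroads shrink the risk set without registering a failure, which is why the estimator does not simply count defaults divided by the sample size.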
We assess the effects of bailouts on bond defaults in Table 6. In column 1, we run a probit model of defaults in which the dependent variable equals one if a railroad defaulted on its bonds in that year.
We examine if loan Approval in the previous year is associated with the railroad defaulting upon its debt. We attempt to capture railroad unobservables by including bond rating fixed effects from Moody's. In this era, Moody's only released ratings once per year in its annual investors' compendium. Most railroads had multiple bonds and bond ratings so we use the rating of the bond closest to maturity. Overall, we show that lower net income, lower cash to assets, and younger railroads are associated with a higher likelihood of default. Government loan Approval does not have a statistically significant relation with defaults.
In column 4, we again run a probit model of defaults, but the dependent variable now equals one if a railroad defaulted on its bond in that year or the following two years. This offers a longer-run investigation of how a government loan Approval is associated with a railroad's likelihood of default.
We find that Approval has a significantly positive correlation with the probability of railroad default (at the 5% confidence level). Indeed, receiving an RFC or PWA bailout is associated with a default rate of 6.39%, holding all other characteristics at sample means, relative to a non-bailed-out railroad's default rate of 1.51%. This finding is in line with the Kaplan-Meier graphs in terms of magnitude. 20 However, railroad bailouts are unlikely to be awarded at random, and a selection effect is likely to be present.
Hence, we turn to an instrumental variable approach to determine if bailouts have a causal effect on railroad defaults.
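The probit magnitudes above amount to evaluating the standard normal CDF at the fitted index with and without the Approval coefficient. A sketch with hypothetical index values chosen to approximately reproduce the 1.51% and 6.39% rates (the paper's actual fitted index is not reported here):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical probit index at sample means: Phi(-2.17) is roughly 0.015,
# and adding an assumed Approval coefficient of 0.646 gives roughly 0.064.
xb_base = -2.17
beta_approval = 0.646

p_no_bailout = norm_cdf(xb_base)
p_bailout = norm_cdf(xb_base + beta_approval)
print(round(100 * p_no_bailout, 2), round(100 * p_bailout, 2))
```

The implied difference in default rates is driven entirely by where the Approval coefficient shifts the index along the normal CDF, which is why marginal effects must be reported at specific covariate values such as sample means.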
Instrumenting for bailouts
Our concern in determining if bailouts aided railroads is that there are likely to be variables omitted from our econometric specification that partly affected a railroad's default behavior. Railroad management and the RFC board had access to non-public information. The RFC required all railroads that applied for loans to disclose the "possibility of securing a loan from other sources" (item 3 of the application). For example, the Chicago and NorthWestern Railroad, in applying for $11,127,700 on January 16, 1933, reported to the RFC that "for many years Applicant ordinarily has been financed through Kuhn, Loeb and Company, Investment Brokers of New York City … on different occasions during the latter part of 1932 Applicant discussed … the possibility of securing a loan … but Kuhn, Loeb and Company declined to commit itself to any future loans." Railroads that had tried, but failed, to obtain bank or Wall Street assistance to raise additional funds would be more likely to default than their observable characteristics would suggest. In such a situation, the error from the regression of bond defaults on bailouts is likely to be correlated with the independent variable Approval. Therefore, the coefficient estimates on Approval, which measure the effectiveness of government aid, will be biased.
We would like to use an instrumental variable that is correlated with a railroad receiving a government bailout but only affects a railroad's financial performance via the granting of RFC loans. We take advantage of the prior literature (see e.g., Wright (1974), Wallis (1987), and Fishback, Kantor, and Wallis (2003)) that claims New Deal grants were influenced by politics. Fishback (2017), for example, concludes, "nearly every study finds that political considerations were important to the Roosevelt administration." There are, however, a few investigations of New Deal funding, such as Mason (2003), that find little political influence on the process. We use the composition of the RFC board as our instrumental variable. Specifically, we use the number of states a railroad passed through that were the home states of RFC directors in that particular year. We call this the number of a railroad's Connections to the RFC. For example, on February 5, 1932, the Chicago and Eastern Illinois railroad applied for an RFC loan for $3.629 million. This railroad passed through Illinois, Indiana, and Missouri. H. Paul Bestor (Missouri) and Charles G. Dawes (Illinois) sat on the board of the RFC at the time of the application. Therefore, our instrument takes a value of two.
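Constructing the instrument is a simple set intersection. The sketch below reproduces the Chicago and Eastern Illinois example from the text; the board set is partial and for illustration only:

```python
# Connections instrument: the number of states a railroad's track network
# passes through that are home states of sitting RFC directors.
def connections(railroad_states, director_states):
    return len(set(railroad_states) & set(director_states))

# The text's example: the Chicago and Eastern Illinois passed through
# Illinois, Indiana, and Missouri; in February 1932 the board included
# H. Paul Bestor (Missouri) and Charles G. Dawes (Illinois), among others.
board_feb_1932 = {"Missouri", "Illinois", "Texas", "Arkansas"}  # partial list
print(connections(["Illinois", "Indiana", "Missouri"], board_feb_1932))  # 2
```

Because board membership changed over time, the instrument is recomputed for the board composition in force in the year of each loan decision.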
In our first stage regression (Table 6, columns 2 and 5), we regress Approval on a railroad's lagged characteristics and our instrument, Connections. We see that Approval is positively and statistically significantly related to a railroad's Connections, even with region, year, and bond-rating fixed effects.
The F-statistic in the first stage regression is 75.791, which indicates that we have a strong instrument.
To have a valid instrument, we also require that the exclusion restriction is satisfied. The exclusion restriction requires that Connections are uncorrelated with the error term, the unobservable part of a railroad's financial position that partly determines default behavior. There was no realistic possibility that a railroad that was doing poorly, based on unobservable factors, could increase its number of Connections by altering its operations. Total track mileage in the U.S. declined from 1930 onwards, and it would be expensive and take years of construction for an existing railroad to begin operations in the home state of an RFC director. 21 It also would invalidate our instrument if railroads that were in worse financial shape than their observable characteristics would suggest were able to influence the president to alter the RFC board's composition, such that a new director was appointed from a state in which the railroad operated. RFC directors were responsible for approving all loans that the Corporation made, railroad and nonrailroad alike. Total railroad loans comprised a fraction of the RFC's disbursements, and loans to an individual railroad were a tiny percentage of total RFC expenditure. There were only five to seven directors at any one time, and the composition was balanced by political affiliation and by the need to have directors come from different parts of the country. Given these constraints on the composition of the RFC board, we believe it is extremely unlikely that certain railroads could have increased their Connections by lobbying. Therefore, we use Connections as our instrument for RFC bailouts.
In columns 3 and 6 of Table 6, we replace Approval with its predicted level from our first-stage regression. We see that bailouts do not have a statistically significant relation with a railroad's defaults. Government aid did not seem to help railroads avoid default. Other coefficients show that railroads that were more profitable, had more cash, and were older were less likely to default.
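The two-stage procedure, and the first-stage F-statistic for instrument strength, can be sketched as follows. This is an illustrative NumPy implementation on synthetic data: all values are simulated, the variable names are ours, and the true causal effect of a bailout on default is set to zero by construction.

```python
# Illustrative two-stage least squares (2SLS) on synthetic data.
# "connections" plays the role of the instrument, "approval" the
# endogenous bailout indicator, "default" the outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

connections = rng.poisson(1.0, n).astype(float)   # instrument
health = rng.normal(0, 1, n)                      # unobserved financial health
# Approval depends on the instrument, not on unobserved health.
approval = (0.8 * connections + rng.normal(0, 1, n) > 1.0).astype(float)
# True causal effect of a bailout on default is zero; defaults are
# driven only by unobserved health.
default = ((-1.5 * health + rng.normal(0, 1, n)) > 1.0).astype(float)

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# First stage: regress approval on a constant and the instrument.
X1 = np.column_stack([np.ones(n), connections])
b1 = ols(X1, approval)
fitted = X1 @ b1

# First-stage F-statistic for the single excluded instrument.
resid = approval - fitted
rss_u = resid @ resid
rss_r = ((approval - approval.mean()) ** 2).sum()
F = (rss_r - rss_u) / (rss_u / (n - 2))
print(f"first-stage F = {F:.1f}")  # large => strong instrument

# Second stage: regress default on the fitted level of approval.
X2 = np.column_stack([np.ones(n), fitted])
b2 = ols(X2, default)
print(f"2SLS effect of bailout on default = {b2[1]:.3f}")  # near zero by construction
# Note: naive second-stage standard errors are wrong; real 2SLS
# software corrects them for the generated regressor.
```

In practice one would add the paper's controls and fixed effects to both stages and use proper 2SLS standard errors; the sketch only shows the mechanics.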
Bond ratings
Bailed-out railroads may have been viewed as "too big to fail" or perhaps investors anticipated that a bailout indicated that the government would share the financial losses with bondholders. 22 In this case, the bond market may have perceived the railroad's debt as being safer, despite our evidence in Table 6 showing that bailouts did not help a railroad to avoid default. To investigate perceptions of default, we examine a railroad's bond ratings.
In Table 7 we study the determinants of ratings changes. We examine the effect of receiving a bailout (Approval) last year, conditioning on last year's observable firm characteristics. A bailout in the previous year reduced the probability of being downgraded in the current year from 0.258 to 0.178 (column 1), and reduced the probability of receiving multiple downgrades, from year t to t+2, from 0.350 to 0.186 (column 4). Railroads with more cash and lower employment were less likely to be downgraded.
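A minimal version of the downgrade probit can be sketched with Newton-Raphson (Fisher scoring). The data below are synthetic: the group probabilities are chosen by us to mimic the reported 0.258 to 0.178 drop, and the single-dummy design is a deliberate simplification of the paper's specification.

```python
# Minimal probit fit by Fisher scoring, showing how a bailout dummy
# shifts the fitted probability of a ratings downgrade.
import numpy as np
from math import erf

def Phi(z):
    """Standard normal CDF, elementwise (no SciPy needed)."""
    z = np.asarray(z, dtype=float)
    out = np.array([0.5 * (1.0 + erf(v / np.sqrt(2.0))) for v in np.ravel(z)])
    return out.reshape(z.shape)

def phi(z):
    """Standard normal PDF."""
    return np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(1)
n = 4000
bailout = rng.integers(0, 2, n).astype(float)
p_true = np.where(bailout == 1, 0.178, 0.258)      # target downgrade probabilities
downgrade = (rng.random(n) < p_true).astype(float)

X = np.column_stack([np.ones(n), bailout])
beta = np.zeros(2)
for _ in range(25):                                 # Fisher scoring iterations
    xb = X @ beta
    F_ = np.clip(Phi(xb), 1e-9, 1 - 1e-9)
    f_ = phi(xb)
    score = X.T @ (f_ * (downgrade - F_) / (F_ * (1 - F_)))
    w = f_ ** 2 / (F_ * (1 - F_))
    info = X.T @ (X * w[:, None])                   # expected information
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

p0 = Phi(np.array([beta[0]]))[0]                    # P(downgrade | no bailout)
p1 = Phi(np.array([beta[0] + beta[1]]))[0]          # P(downgrade | bailout)
print(f"fitted P(downgrade | no bailout) = {p0:.3f}")
print(f"fitted P(downgrade | bailout)    = {p1:.3f}")
print(f"marginal effect of a bailout     = {p1 - p0:.3f}")
```

With a single binary regressor, the probit MLE reproduces the sample downgrade frequencies of the two groups, which makes the marginal effect easy to read off.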
Since concerns about selection effects remain, we again run instrumental variable probit regressions with Connections as our instrument. The first-stage regressions appear in columns 2 and 5, and the second-stage results are in columns 3 and 6. The IV results confirm the probit results. A bailout in the previous year decreases the likelihood of a single downgrade by 52.7% (column 3) and the likelihood of multiple downgrades over the subsequent three years by 92.2% (column 6). Railroads that had more employees, held less cash relative to assets, or were younger were more likely to be downgraded. This result suggests that Moody's perceived increased railroad employment during the Great Depression to be in conflict with protecting bondholders' interests. Overall, our results show that government bailouts did not protect railroads against default, although they did alleviate bond ratings downgrades. 23
Difference-in-differences
We now investigate if the RFC succeeded in its objective to "stimulate employment." As such, we determine the ability of government bailout recipients to improve their economic performance, including employment numbers. We first conduct a difference-in-differences analysis of RFC loan recipients' leverage, employment, average wage, and profitability. Since there is variation in treatment timing, we use the technique of Callaway and Sant'Anna (2021). The treatment group is railroads that received at least one bailout in the sample period; the control group is railroads that never received a bailout. 24 In Panel A, we present average treatment effects (ATT). We document a significant negative impact of a bailout on leverage (column 1). Indeed, the estimated treatment effect on leverage is a reduction of 2.6 percentage points for the bailed-out railroad. The starting leverage for the average railroad was a little over 40 percent of total assets (see Table 2), so this coefficient is economically meaningful.
This result is also in line with the theoretical contribution of Elenev, Landvoigt, and Van Nieuwerburgh (2022), who study the effect of COVID-19 bailouts on the real economy.
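The group-time logic of the Callaway and Sant'Anna (2021) estimator can be sketched in a few lines. This is a stripped-down illustration on synthetic data: the cohort years, effect size, and noise are ours, and the sketch uses simple unconditional comparisons against never-treated units rather than the doubly robust estimation the paper applies.

```python
# Stripped-down Callaway & Sant'Anna-style estimator: for each
# treatment cohort g and post period t, compare the outcome change
# from g-1 to t for cohort g against never-treated units, then
# average the group-time effects ATT(g, t).
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1930, 1940)
n_units = 300
# 0 = never treated; otherwise the year of first treatment (cohort)
cohort = rng.choice([0, 1933, 1935], size=n_units, p=[0.5, 0.25, 0.25])

unit_fe = rng.normal(0, 0.05, n_units)             # unit fixed effects
true_att = -0.026                                   # true treatment effect
Y = np.empty((n_units, len(years)))
for j, t in enumerate(years):
    treated = (cohort > 0) & (t >= cohort)
    Y[:, j] = (0.4 + unit_fe + 0.01 * (t - 1930)
               + true_att * treated + rng.normal(0, 0.02, n_units))

def att_gt(g, t):
    """ATT(g, t): cohort-g change from g-1 to t minus never-treated change."""
    jb, jt = list(years).index(g - 1), list(years).index(t)
    d_treat = Y[cohort == g, jt] - Y[cohort == g, jb]
    d_never = Y[cohort == 0, jt] - Y[cohort == 0, jb]
    return d_treat.mean() - d_never.mean()

effects = [att_gt(g, t) for g in (1933, 1935) for t in years if t >= g]
att = np.mean(effects)
print(f"aggregated ATT = {att:.4f}")  # close to the true -0.026
```

A weighted aggregation (by cohort size) and doubly robust adjustment for covariates would bring this closer to the estimator actually used; the sketch only conveys the group-time comparison.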
We find positive but statistically insignificant effects of a bailout on employment (Column 2). There are statistically significant increases in the average wage of around 4.5% (Column 3). Our results of weak employment and generous wages align with the findings of Ohanian (2009) and Cole and Ohanian (2004) that New Deal policies deepened the Great Depression. 25 Railroads appear to have used (some) government funds to inflate the average wages they pay their employees. Finally, we find statistically insignificant effects of a bailout on profitability (Column 4).
23 In Table 6, Moody's ratings add information to understand bond default dynamics, even after controlling for railroad characteristics. However, bond ratings mostly help to explain the default behavior of non-investment-grade bonds. The only bond-rating fixed effects that are significantly different from zero are the lowest rated (e.g., C, Ca, and Caa). Most government bailouts went to railroads with investment-grade bonds. Government bailouts helped such bonds preserve their (high) credit rating (Table 7), and since investment-grade bonds are unlikely to default, a bailout did not greatly change their default risk (Table 6).
24 Using a chi-squared test, we verify that the parallel trends assumption is never violated.
In Panel B, we present estimates of the treatment effect by year. We find that leverage decreases in years t+1 through t+4, although it is barely affected in the year of the bailout. We find employment of bailed-out railroads is statistically significantly higher than the control group in years t+1 and t+2 (by 4.5% and 6.9%, respectively), although by years t+3 and t+4 the point estimates are lower and no longer statistically significant. Bailout recipients' average wages were higher in the year of the bailout and remained higher, by a little under 5%, in the following years. There are no statistically significant impacts on profitability after the treatment.
In Panel C, we run a placebo test in which we counterfactually assume that RFC bailouts took place in 1929, while still comparing bailed-out and non-bailed-out railroads (as in Berger and Roman (2017)). We use 1927 and 1928 as the "pre-RFC period" and 1930 to 1932 as the "post-RFC period".
We apply the doubly-robust difference-in-differences approach of Sant'Anna and Zhao (2020). In Panel C, we fail to find significant results for leverage, employment, average salaries, or profitability.
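The placebo logic reduces to a simple 2x2 difference-in-differences comparison. Here is a sketch on synthetic data in which, by construction, no treatment effect exists in the pre-1933 window; the group shares, trend, and noise are our own choices, not the paper's data.

```python
# Placebo test sketch: pretend the bailouts happened in 1929, use
# 1927-28 as "pre" and 1930-32 as "post", and run a 2x2 DiD of
# eventually-bailed-out vs. never-bailed-out railroads. With no real
# treatment in this window, the estimate should be near zero.
import numpy as np

rng = np.random.default_rng(3)
n = 200
ever_bailed = rng.random(n) < 0.4                  # later RFC recipients
pre_years, post_years = [1927, 1928], [1930, 1931, 1932]

def outcome(t):
    # common downturn trend plus unit noise, and NO treatment effect
    return 0.4 - 0.02 * (t - 1927) + rng.normal(0, 0.03, n)

pre = np.mean([outcome(t) for t in pre_years], axis=0)
post = np.mean([outcome(t) for t in post_years], axis=0)

did = ((post[ever_bailed].mean() - pre[ever_bailed].mean())
       - (post[~ever_bailed].mean() - pre[~ever_bailed].mean()))
print(f"placebo DiD estimate = {did:.4f}")  # statistically indistinguishable from zero
```

A near-zero placebo estimate is what supports the parallel-trends interpretation of the real treatment-period results.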
Instrumental variables
Although the difference-in-differences approach should give us a good idea of the impact of a railroad bailout, there remain concerns that the comparison group of non-bailed-out railroads does not provide an accurate counterfactual for the bailed-out carriers. In Table 9, we again make use of the board composition of the RFC and our measure, Connections, as an instrument for railroad bailouts. In the second stage, we regress railroad leverage, employment, wages, and profitability on the fitted level of bailouts after conditioning on railroad characteristics.
We find that a bailout causes a 10.2 percentage point decrease in leverage (column 2). We find no statistically significant impact of bailouts on employment (column 3) or profitability (column 5). However, there was an increase of 15.4 percentage points in the average wage (column 4). Therefore, we conclude that the RFC failed in its second objective, which was to promote railroad employment. All regressions use firm, year, and region fixed effects and condition on lagged characteristics. Railroads appear to have used bailouts to reduce their leverage and increase wages, with little beneficial impact on employment.
Economic spillovers
Bailouts do not seem to have provided any direct benefits for the recipients, aside from a jump in the value of their debt. They may, however, have provided spillover benefits for the regions in which they operated. For example, railroads may have been able to keep operating routes that would otherwise have been closed, or the carriers may have run a more frequent schedule for local businesses than if government support had not been made available.
Building permits
We examine whether there were positive economic spillovers from the bailouts of railways that passed through a city. We create an explanatory variable, City RFC Approvals, which equals the fraction of all railroads that passed through a city that received an RFC or PWA loan in the previous year. We again use our instrumental variable, Connections, which is measured at the state level, as an instrument. Our focus is on spillovers to one of the few measures of local economic conditions available in this era, building permits (see Cortes and Weidenmier (2019)).
We regress the natural logarithm of city building approvals per capita in a year on fitted City RFC Approvals. 26 In Table 10, we see that RFC board connections are very strong instruments for city-level loan approvals. We find a negative relationship between railroad city-level loan approvals and new building approvals (columns 2 and 4). Once we add both year- and city-fixed effects, however, the estimated impact of City RFC Approvals on building approvals becomes statistically insignificant and close to zero in magnitude (column 6).
Manufacturing firms
In Table 11, we study if news of a railroad's bailout affected other railroads and manufacturing firms listed on the NYSE. 27 We calculate the abnormal returns and cumulative abnormal returns of other firms' equity. We focus on the more interesting cross-sectional evidence: which manufacturing firms and which railroads benefited most from news of one railroad's bailout?
We cross-sectionally split firms into two dimensions. First, did the other railroad overlap at all with the bailed-out railroad, meaning did both railroads service at least one common city (Yes) or not (No)?
Second, was the level of overlap (the fraction of the bailed-out railroad's cities also serviced by the other railroad) above the sample mean (High), or was the overlap positive but below the sample mean (Low)? We construct similar overlap measures for manufacturing firms, but we consider the joint presence of manufacturing establishments (from Moody's Manual of Investments - Industrial Securities) and railroad tracks.
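The AR/CAR construction and the overlap split can be sketched as follows. The returns are simulated, and the small negative daily drift for overlapping competitors is an assumption chosen by us to echo the direction of the Panel B differences, not an estimate from the paper's data.

```python
# Abnormal returns (AR) and cumulative abnormal returns (CAR) over a
# [-4, +4] day event window around a bailout announcement, then a
# comparison of mean CARs for overlapping vs. non-overlapping firms.
import numpy as np

rng = np.random.default_rng(4)
n_firms = 400
window = np.arange(-4, 5)                          # event days -4..+4
market = rng.normal(0.0, 0.01, len(window))        # market index return per day

overlap = rng.random(n_firms) < 0.5                # shares a city with the recipient?
drift = np.where(overlap, -0.001, 0.0)             # assumed daily drag on competitors
raw = market + drift[:, None] + rng.normal(0, 0.005, (n_firms, len(window)))

ar = raw - market                                  # abnormal return vs. the index
car = ar.sum(axis=1)                               # CAR over the full window

diff = car[overlap].mean() - car[~overlap].mean()
print(f"mean CAR, overlap:    {car[overlap].mean():+.4f}")
print(f"mean CAR, no overlap: {car[~overlap].mean():+.4f}")
print(f"difference:           {diff:+.4f}")
```

The cross-sectional difference in mean CARs is what the Yes-less-No and High-less-Low comparisons in Table 11 report, together with a t-test of that difference.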
In Panel A we see that there was no statistical difference between the mean CARs of railroads that overlapped with the bailed-out railroad and non-overlapping railroads upon news of an RFC application. Nor was there a statistically significant difference between high-overlap railroads and low-overlap railroads. In Panel B, we observe slightly smaller, -0.5% (Yes less No) to -0.6% (High less Low), and statistically significant differences in CARs for other railroads upon RFC approvals.
26 Our thanks to Gustavo Cortes for sharing his data on building approvals.
27 We exclude all stocks that have a zero return on all days of the event study.
We interpret this result to mean that competing railroads (i.e., those with some overlap with the bailout recipient) suffered slightly from a bailout announcement, relative to railroads that had little or no overlap. As this is a cross-sectional test, we are conditioning on any economy-wide railroad shocks, such as changes in government railroad policy, input costs, or demand changes.
We compare manufacturing firms that were co-located in the same city as the bailed-out railroad (an overlap of Yes or High) to manufacturing firms that were not located in cities through which the bailed-out railroad ran (No or Low). We see that a railroad bailout benefited co-located manufacturing firms relative to manufacturers that were not located near the bailout recipient's tracks.
Conclusion
The RFC distributed much of the U.S. government's New Deal assistance to the economy as it struggled with the Great Depression. Around 10% of the RFC's loans were given to private firms in the railroad sector, combined with a limited amount stemming from the Public Works Administration.
In our study, we ask if such bailouts aided railroads to avoid debt defaults and stimulate employment, which were the RFC's twin objectives.
First, we find scant evidence that government assistance was beneficial for railroad employment, although it led to an increase in wages for existing employees. Second, there is no evidence that government aid prevented bond defaults. However, we show that bailouts helped railroads to reduce leverage and assisted them in avoiding ratings downgrades. This is also reflected by a jump in bond prices on the days around loan announcements or approvals. As such, we argue that bailouts benefited employees and bondholders rather than aiding firm performance (e.g., profitability).
Non-bailed-out railroads that competed with the recipient seem to have suffered some harm from the bailout, presumably because one of their competitors was supported financially. We find evidence that government railroad support was beneficial for the manufacturing firms that were co-located near the railroad's tracks. As such, although RFC and PWA assistance proved of little benefit to the railroad itself, there were positive economic spillovers to manufacturers from this New Deal program.
Figure 3: The number of RFC or PWA loans to railroads that operated in each state, 1927-1939. For railroads that operated in more than one state, we count each state in which that railroad operated as having received a loan.
Figure 4: Kaplan-Meier failure graphs
We show the hazard rates of bond defaults in the years after a bailout. Panel A shows the hazard rates for bailed-out vs. non-bailed-out railroads. Panel B shows hazard rates after controlling for lagged railroad characteristics, such as (log) total assets, net income to total assets, cash to total assets, leverage, (log) employment, (log) firm age, bonds due between 1930 and 1934 to total assets, passenger revenue to total revenue, and the freight composition.

Connections is the number of states the railroad operated in that were homes to RFC directors in that year. Leverage is the ratio of long-term debt to total assets. Bonds Due / T.A. is the value of all bonds due between 1930 and 1934 to total assets in 1929. Passenger / Total Revenue equals passenger revenue divided by total revenue. Volatility is the standard deviation of the previous 12 months' earnings (if earnings were missing, the 12-month standard deviation of stock returns). Average Wage is the total expense on employees divided by the number of employees. We report tests of differences in means (t-test) and medians (Wilcoxon) between the groups. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We regress bailouts on railroad characteristics in a two-step regression. In step 1, Columns 1 and 2 present a regression where the dependent variable, Application, equals one if the railroad applied for at least one loan in that year, and zero otherwise. In step 2, we exclude all railroads that never applied for a bailout in Columns 3 to 6. Columns 3 and 4 present a regression where the dependent variable, Approval, equals one if the railroad had at least one loan approved in that year, and zero otherwise. Columns 5 and 6 present a regression where the dependent variable, Approval Size, is the railroad's approved bailout size divided by total assets.
Approval (in last 3 Years) is a dummy variable that equals one if the railroad had an RFC or PWA loan approved in the last three years, and zero otherwise. Cum. Loan Size / T.A. equals the cumulative bailout loan amount a railroad has received since 1932 divided by its total assets. Other variables are defined in Table 2. All regressions include year- and region-fixed effects. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We calculate the abnormal returns (AR) and cumulative abnormal returns (CAR) of a security from four days before to four days after the announcement of an application, approval, or refusal. We measure AR as the security's returns less the Moody's bond index / CRSP market index on the same day. We average the AR (CAR) across securities. Standard errors are clustered by railroads. p-values appear in parentheses. Panel A presents the results for all bailouts. Panel B presents the results for the initial and subsequent bailouts. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We regress a railroad's cumulative abnormal bond or equity return from four days before to four days after an application or approval of an RFC or PWA loan. Variables are as defined in Table 2. We add region, year and rating fixed effects and cluster standard errors at the railroad level. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We regress bond defaults on lagged railroad characteristics. Default equals one if the railroad failed to meet a coupon or principal repayment, or in any way changed the terms of the issue in the current year. We drop all observations of the respective railroads the year after Default equals one. Approval equals one if the railroad obtained an RFC or PWA loan in the previous year. In columns 1 and 4, we run a probit regression model. In columns 2 and 5, we present our first-stage regression for the instrumental-variable (IV) approach.
We regress an indicator variable equal to one in the year the railroad received an Approval, and zero otherwise, on railroad controls. In columns 3 and 6, we present the second-stage instrumental variable (IV) regression. p-values, in parentheses, are adjusted for heteroskedasticity and clustered at the railroad level. We include region, year and (in columns 4-6) bond rating fixed effects. For a railroad with multiple outstanding bonds, we use the rating of the bond closest to maturity. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

Variables are as defined in Tables 2 and 6. We include Time to maturity, the log of the number of years to maturity for the respective bond, and the Nominal outstanding amount of the bond to total long-term debt. In columns 2 and 5, we report first-stage regressions. We regress an indicator variable equal to one the year the railroad received an Approval, and zero otherwise, on railroad and bond controls. We present the second-stage instrumental variable (IV) probit regressions for one downgrade (column 3) and multiple downgrades (column 6). p-values, in parentheses, are adjusted for heteroskedasticity and clustered at the railroad level. We include region, firm, and rating fixed effects. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We present difference-in-differences regressions for leverage, log(employment), log(average wage), and profitability, with variation in treatment timing and multiple periods following Callaway and Sant'Anna (2021). In Panel A, ATT is defined as the average treatment effect for the treated population in the years following the treatment. Panel B presents the results of an event study analysis. Panel C presents results if we assume that bailed-out railroads (counterfactually) received an RFC or PWA loan in 1929, and the post-bailout period was 1929-32. p-values in parentheses use doubly robust standard errors, following Sant'Anna and Zhao (2020).
Chi-squared is the chi-squared pretrend test. All regressions use year- and region-fixed effects. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

In the first stage, we regress Approval on Connections and lagged railroad characteristics. In the second stage, we regress contemporaneous railroad leverage, log(employment), log(average wage), and profitability on the fitted level of lagged Approval and lagged characteristics. Variables are as defined in Tables 2 and 6. p-values, in parentheses, are adjusted for heteroskedasticity and clustered at the firm level. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We regress the logarithm of building permits per city on City RFC Approvals (the fraction of all railroads that pass through the city that received an RFC/PWA railroad loan approval the previous year). We condition on state-level bank characteristics: the logarithm of bank loans per capita; the logarithm of bank deposits per capita; the logarithm of the number of all banks; and the capital of nationally chartered banks that operated in the state that were liquidated in year t divided by the capital of all nationally chartered banks in that state in year t. We instrument City RFC Approvals with Connections. p-values, in parentheses, are adjusted for heteroskedasticity and clustered at the city level. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively.

We calculate the abnormal return (AR) and cumulative abnormal return (CAR) for related firms' equity after the announcement of a railroad's bailout application (Panel A) and approval (Panel B). We measure the AR as a firm's equity return less the CRSP market index. An overlap of Yes indicates the railroad/manufacturing firm operates in at least one city with the bailed-out railroad. An overlap of No indicates the railroad/manufacturing firm does not operate in any cities in which the bailed-out railroad operates.
High indicates that the percentage overlap is above the mean level across all firms. Low indicates that the percentage overlap is non-zero and below the mean overlap across all firms. For railroads, the percentage overlap is defined as the number of cities that both railroads serve divided by the total number of cities of the bailed-out railroad. For manufacturing firms, the percentage overlap is defined as the number of cities that the railroad and manufacturing firm both operate in divided by the total number of cities the manufacturer operates in. p-values appear in parentheses. We report the p-values of t-test differences between the groups (Yes - No or High - Low) in Diff. *, **, and *** denote significance at the 10%, 5% and 1% levels, respectively. The returns are winsorized at the 2.5% level.
Fluoride-coated high-purity magnesium cage promotes bone fusion in goat models
Background Cervical fusion devices made of polyether ether ketone (PEEK), which are used to decompress the spinal cord and nerve roots, have concomitant drawbacks. Magnesium has good biocompatibility and bioactivity as a biodegradable orthopedic implant material; however, its fusion rate is low. In this paper, we aimed to improve the interbody fusion rate of high-purity magnesium (HP-Mg) by coating it with fluoride. Methods Fluoride-coated HP-Mg (F-HP-Mg) cages were prepared, and HP-Mg cages served as controls. We tested hydrogen release in phosphate-buffered saline (PBS) and weight loss in chromic acid. Anterior cervical discectomy and bone graft fusion (ACDF) was performed at the C2-C3 segment in goats with F-HP-Mg and HP-Mg cages to evaluate fusion score. Results Hydrogen release from F-HP-Mg cages was significantly lower than that from HP-Mg cages. Weight was significantly decreased in both types of cages after rinsing with chromic acid, while F-HP-Mg cages were more resistant to corrosion than HP-Mg cages. There were no significant differences in disc space height (DSH) or remaining cage volume between the two groups in computed tomography (CT) images of the goat cervical spine, while cavities were found at 12 weeks postoperatively and confirmed by histological staining. No complications were found, while the serum aspartate aminotransferase (AST) level was significantly higher in the HP-Mg group than in the F-HP-Mg group. The fusion rate at 24 weeks after ACDF was significantly higher with F-HP-Mg cages. Conclusions The use of F-HP-Mg improved histological fusion in the cervical intervertebral space of goats compared to HP-Mg and showed good biosafety.
The autologous iliac crest graft remains the "gold standard" for clinical interbody fusion; however, common donor-site complications, such as pain, bleeding, fracture, and nerve injury, have limited widespread use of the iliac crest as an intervertebral bone graft material. The cervical fusion device made of polyether ether ketone (PEEK), which is widely used in clinical practice, can cause a strong non-specific inflammatory response in the surrounding bone tissue, triggering fibroblast and macrophage infiltration as well as bone resorption and osteolysis, mimicking the local reaction caused by ultra-high-molecular-weight polyethylene powder after joint replacement.
Magnesium (Mg) has attracted attention in biodegradable orthopedic implant material research, displaying good biocompatibility, bioactivity, and an elastic modulus similar to that of bone (3). Success has been reported with high-purity Mg (HP-Mg, 99.99 wt.%) screws in in vivo studies of rabbit femoral intracondylar fracture models (4), and HP-Mg interference screws can promote fibrocartilaginous enthesis regeneration in a rabbit anterior cruciate ligament reconstruction model (5). However, the results of in vivo ACDF experiments with Mg-based cervical cages in animal models have been unsatisfactory (6,7). Potential reasons include that co-implantation of titanium and Mg accelerates degradation of Mg cages, and that the hydrogen gas and excessive rise in pH produced by rapid Mg corrosion impair healing of the surrounding tissues. Recently, Guo et al. claimed to have achieved histological fusion of HP-Mg cages in a goat model, although the total fusion area was less than 30% (8).
Coating is an effective way to regulate degradation rate of Mg fusion cages (9). Some studies have suggested that fluoride-coated Mg alloy could improve corrosion resistance and promote osteogenic differentiation (10,11). In this study, we tried to improve interbody fusion of HP-Mg cages by coating fluoride on HP-Mg (F-HP-Mg). We present the following article in accordance with the ARRIVE reporting checklist (available at https://atm.amegroups.com/article/ view/10.21037/atm-22-2098/rc).
Sample and materials
The HP-Mg cages were built with 99.982 wt.% Mg (Suzhou Origin Medical Technology, Jiangsu, China), 0.0178 wt.% silicon (Si), <0.001 wt.% iron (Fe), and <0.001 wt.% aluminum (Al) in the State Key Laboratory of Metal Matrix Composites of Shanghai Jiao Tong University. Cages were designed and fabricated for ACDF with dimensions similar to a PEEK cage (Medtronic Cornerstone-SR PEEK; Medtronic, Parkway Minneapolis, MN, USA), modified to 14 mm × 11 mm × 4 mm with a 4-degree wedge angle.
The F-HP-Mg cages (mainly composed of MgO and MgF2) were made by immersing HP-Mg cages in a hydrofluoric acid solution (9). Untreated HP-Mg cages of the same size were used as controls in the goat models. An Mg sheet (12 mm × 10 mm × 1 mm) was prepared for in vitro testing.
Surface modification
The fluoridation process was performed with hydrofluoric acid in a fume hood. The Mg sheet samples, which had been polished, ultrasonically cleaned, and dried, were placed into plastic beaker reaction vessels, each with a plastic agitator at the bottom and a suspended yarn net. The samples were placed on the yarn net in each beaker, and 250 mL of 40% hydrofluoric acid reagent was poured in so that all Mg samples were immersed in the liquid. The beaker was then sealed with plastic wrap secured with a rubber band and placed on a magnetic stirrer at low speed, with the samples fully immersed in the flowing acid at room temperature for 96 hours. Finally, all samples were rinsed in ethanol and dried. Untreated pure Mg served as the control.
Immersion test
The immersion test was performed as described in our previous study, in accordance with ASTM G31-72 (12). The F-HP-Mg and HP-Mg sheets were incubated separately in PBS (pH = 7.40, 1 cm2/20 mL) at 37 ℃ for 2 weeks, with 3 samples in each group. The pH value of the PBS was measured over time with a pH meter (FE20; Mettler Toledo, Columbus, OH, USA). After 14 days, one side of the sheets was ultrasonically cleaned with 100% ethanol, and the other side was rinsed with 180 g/L chromic acid solution and distilled water, air dried, and weighed.
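The weight loss measured after chromic acid cleaning can be converted into a corrosion rate using the standard ASTM G31 mass-loss formula, CR = (K × W) / (A × T × D). The sketch below uses hypothetical mass-loss values, not the study's data; the sheet area is approximated from the two large faces only.

```python
# Corrosion rate from immersion weight loss, per the ASTM G31 formula
# CR = (K * W) / (A * T * D), with K = 8.76e4 giving mm/year when
# W is in grams, A in cm^2, T in hours, and D in g/cm^3.

K = 8.76e4                 # unit constant for a rate in mm/year
A = 2 * (1.2 * 1.0)        # 12 mm x 10 mm sheet, both faces (cm^2); edges ignored
T = 14 * 24                # immersion time in hours (2 weeks)
D = 1.738                  # density of magnesium (g/cm^3)

rates = {}
for label, W in [("HP-Mg", 0.0150), ("F-HP-Mg", 0.0060)]:  # assumed mass loss (g)
    rates[label] = (K * W) / (A * T * D)
    print(f"{label}: corrosion rate = {rates[label]:.3f} mm/year")
```

With these assumed inputs, the fluoride-coated sheet shows the lower corrosion rate, which is the qualitative pattern the weight-loss experiment is designed to detect.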
Animals and surgery
Experiments were performed under a project license (No. SYXK2011-0128) granted by the ethics board of Shanghai Jiao Tong University Affiliated Sixth People's Hospital, in compliance with institutional guidelines for the care and use of animals. A protocol was prepared before the study without registration. Goats have cervical spines similar to those of humans, especially at the C2-C3 level, which makes them ideal experimental models for testing cervical spine fusion (13,14). A total of 10 healthy 1.5-year-old goats weighing 38.78±3.72 kg (range, 33 to 46 kg) were purchased from Shanghai Jiagan Biotechnology Co., Ltd. (Shanghai, China) and given adequate access to sterilized water and food. All animals were randomly assigned to either the experimental or control group for assessment at 4, 12, and 24 weeks postoperatively.
Goats were anesthetized with 2.5% pentobarbital sodium, and ACDF surgery was performed at the C2-C3 segment, implanting an HP-Mg or F-HP-Mg cage with a titanium plate fixed with screws. The cage holes were filled with autogenous bone from the anterior spinous process of the cervical vertebra.
Radiographical analysis
Anteroposterior and lateral radiographs were taken between surgery and sacrifice to determine implant location, gas accumulation, disc space height (DSH), and implant settlement. The DSH is the distance between the upper and lower anterior edges of the vertebral bodies fixed by the screws of the titanium plate.
Micro-CT scanning and three-dimensional (3D) reconstruction were performed on the sacrificed specimens, and quantitative analysis of the fusion device was carried out. Interbody fusion was scored from CT images (15) as follows: level 0, no new bone or even vertebral endplate destruction; level 1, new bone formation but not continuous between C2/3; and levels 2 and 3, continuous bridging new bone comprising <30% and >30% of the fusion area, respectively.
Histological analysis
Sacrificed goat vertebrae were fixed in paraformaldehyde, dehydrated in acetone, embedded in methyl methacrylate, cut into 300 µm sections and ground to 20 µm. Staining was performed on 4 sections with hematoxylin-eosin (HE), toluidine blue, Van Gieson, and Masson to determine osteogenesis and inflammation.
Statistical analysis
No data were excluded. Data are presented as mean ± standard deviation (SD). A two-tailed t-test was used to compare CT fusion scores between the HP-Mg cage group and the F-HP-Mg cage group. Two-way analysis of variance (ANOVA) was used to compare cage weight, Mg concentrations, alanine aminotransferase (ALT), aspartate aminotransferase (AST), and creatinine (CREA). Statistical analysis was conducted with SPSS 20 (IBM Corp., Armonk, NY, USA), and P<0.05 was considered statistically significant.
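As an illustration of the two-tailed t-test on CT fusion scores, here is a NumPy-only sketch. The per-goat scores are hypothetical values chosen to reproduce the reported group summaries of 0.2±0.45 and 2.8±0.45 (n = 5 per group is our assumption), and the p-value is obtained by numerical integration of the t density rather than from SPSS or SciPy.

```python
# Pooled-variance two-sample t-test on hypothetical CT fusion scores.
import numpy as np
from math import gamma

hp_mg   = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # hypothetical per-goat scores
f_hp_mg = np.array([3.0, 3.0, 3.0, 2.0, 3.0])   # chosen to give 0.2+/-0.45 and 2.8+/-0.45

n1, n2 = len(hp_mg), len(f_hp_mg)
m1, m2 = hp_mg.mean(), f_hp_mg.mean()
v1, v2 = hp_mg.var(ddof=1), f_hp_mg.var(ddof=1)

# pooled-variance two-sample t statistic
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m2 - m1) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (np.sqrt(df * np.pi) * gamma(df / 2))
    return c * (1 + x ** 2 / df) ** (-(df + 1) / 2)

# two-tailed p-value = 2 * P(T > |t|), by trapezoidal integration
grid = np.linspace(abs(t), abs(t) + 60, 60001)
y = t_pdf(grid, df)
tail = np.sum((y[1:] + y[:-1]) / 2 * np.diff(grid))
p = 2 * tail
print(f"t = {t:.2f}, df = {df}, two-tailed p = {p:.5f}")
```

With score variances this small relative to the group difference, the test comfortably rejects equality of means, matching the P<0.01 reported in the fusion results.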
Gross characteristics
The MgF2 films formed in 40% hydrofluoric acid after 96 hours appeared dark black. The MgF2 layer did not change the macroscopic dimensions of the samples (Figure 1A). No significant difference in weight was detected between the two groups.
Hydrogen release
Hydrogen release into the buffer was measured for 336 hours. As immersion time increased, hydrogen (H2) release increased from both HP-Mg and F-HP-Mg cages (Figure 1B). The H2 concentration was significantly higher for HP-Mg cages than for F-HP-Mg cages.
Weight loss
The weights of the HP-Mg and F-HP-Mg cages were similar before the weight loss experiment. After rinsing with 180 g/L chromic acid, both the HP-Mg and F-HP-Mg cages lost a significant amount of weight (Figure 1C). The F-HP-Mg cages were significantly heavier than the HP-Mg cages after chromic acid treatment, indicating that the former were more resistant to corrosion than the latter.
Radiological findings
The interbody fusion was evaluated with 64-slice spiral CT. The DSH measurement is shown in Figure 2A. At 4 weeks postoperatively, there was no significant difference in DSH, but a significant difference could be observed in CT images between the two groups, with a small amount of gas observed in the tissue anterior to the cervical spine in the HP-Mg group (Figure 2B). We found that HP-Mg cages caused damage to the upper and lower endplates and surrounding bone, forming cavities at 12 weeks postoperatively. The cavities were further enlarged at 24 weeks, and the autogenous bone filling the middle hole of the cage was completely lost (Figure 2C). No significant difference in DSH or implant settlement was found between HP-Mg and F-HP-Mg cages at 4, 12, and 24 weeks postoperatively (Figure 2D). The remaining volume of both kinds of cages was significantly lower after 24 weeks of implantation, but did not differ significantly between HP-Mg and F-HP-Mg cages (Figure 2D).
Histological results
Bone fusion was observed in F-HP-Mg cages, with continuous bone found between the endplates (Figure 3A-3E). In contrast, an osteolytic phenomenon with destroyed bone tissue was found in HP-Mg cages.
Mg concentrations, ALT, AST, and CREA in serum
Serum Mg2+ concentration did not change markedly in either group (Figure 4A). No complications related to hypermagnesemia were found. Serum ALT levels were similar between the two goat groups (Figure 4B), while AST levels were statistically higher in the HP-Mg group than in the F-HP-Mg group at postoperative weeks 12 and 24 (Figure 4C). Serum CREA did not differ statistically (Figure 4D).
Fusion results
During the experiment, the CT fusion score of F-HP-Mg cages increased over time, while the score of HP-Mg cages decreased at weeks 12 and 24. The fusion score of segments with F-HP-Mg cages was remarkably higher than that of HP-Mg cages at 24 weeks (HP-Mg: 0.2±0.45; F-HP-Mg: 2.8±0.45; P<0.01) (Figure 5A). After 24 weeks, new bone tissue between the endplates of grafted segments was found in 3D CT reconstruction images (Figure 5B), while no fusion with the cage was observed in the HP-Mg group (Figure 5C).
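The reported significance can be sanity-checked from the summary statistics alone with a pooled two-sample t statistic. The group size n = 5 per group is an assumption for illustration (the text does not state it); the critical value 3.355 is the standard two-tailed threshold for alpha = 0.01 at 8 degrees of freedom:

```python
import math

def pooled_t_from_stats(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample Student t statistic (equal-variance pooling) from summary stats."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean2 - mean1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Reported 24-week CT fusion scores: HP-Mg 0.2 +/- 0.45, F-HP-Mg 2.8 +/- 0.45.
# Group size n = 5 per group is an assumption, not reported in the text.
t = pooled_t_from_stats(0.2, 0.45, 5, 2.8, 0.45, 5)  # ~9.14
df = 5 + 5 - 2
T_CRIT_001_DF8 = 3.355  # two-tailed critical value, alpha = 0.01, df = 8
significant = abs(t) > T_CRIT_001_DF8
```

Even under this modest assumed sample size, the statistic comfortably exceeds the 1% critical value, consistent with the reported P<0.01.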
Discussion
Guo et al. reported the use of an HP-Mg cage to perform ACDF surgery in a goat model and achieved the first successful interbody fusion, indicating the possibility of Mg-based cage application in ACDF (8). However, that study also reported that the total area of interbody fusion was less than 30%. There was a 300-400 µm gap between the cage and the bone tissue, which was filled by hyperplastic fibrous tissue, and the bone was not tightly bound to the cage interface.
The results of this experiment revealed that in the group using an HP-Mg cage, the cage caused osteolysis of the surrounding bone tissue: the CT sagittal view showed the formation of a cavity around the cage, and pathological sections showed a space between the cage and the bone tissue with infiltration of fibrous tissue, so the vertebral fusion effect was not ideal. However, after surface modification of the HP-Mg with hydrofluoric acid, the degradation rate of the HP-Mg slowed, continuous bone tissue formation was seen on the CT sagittal view, the gap between the cage and the bone tissue was reduced, and new trabecular bone formation was seen surrounding the cage.
For the cage material, we chose HP-Mg (99.98 wt.%) for two reasons. First, Mg-based implants have been shown to be osteoinductive in in vivo studies in unstressed bones, leading to successful fusion. The mechanism may be related to the release of Mg2+ from Mg-based implants in neuronal calcitonin gene-related peptide (CGRP)-mediated osteogenic differentiation, which plays an important role in the induction of osteogenesis (16). Second, compared with Mg alloys, HP-Mg has better corrosion resistance: preliminary research shows that the higher the purity, the stronger the corrosion resistance and the slower the degradation rate. This experiment required the Mg-based cages to provide support in the vertebral column for about 6 months, so the purity of the Mg needed to be as high as possible to obtain a sufficient support time.
It has been shown that Mg-based materials immersed in hydrofluoric acid form a fluoride coating on the metal surface (10), which can effectively reduce the degradation rate of Mg, thereby slowing the release of Mg2+ and increasing localized deposition of calcium and phosphate. That study showed that fluoride-coated Mg alloy had good osteogenic activity and biocompatibility, and that the fluoride coating significantly upregulated expression of type I collagen and bone morphogenetic protein 2 (BMP-2). Therefore, in this experiment, we modified the surface of the HP-Mg cage in hydrofluoric acid to form a MgF2 coating; the thickness of the MgF2 coating depends on the reaction time of the HP-Mg cage in hydrofluoric acid, which we continued for 96 hours.
The results of this experiment showed that after the HP-Mg cage was implanted into the goat cervical spine, galvanic corrosion occurred with the fixed anterior cervical titanium plate, which accelerated degradation of the HP-Mg cage and damaged the upper and lower vertebral body endplates. As a result, the fusion failed. Despite this, the prevailing view is that locally high Mg ion concentrations do not produce significant cytotoxicity and are safe for topical application as a degradable material (17). However, an accelerated corrosion rate of Mg can lead to loss of mechanical properties and failure of orthopedic implants (18). If Mg cages are coated with MgF2, the degradation rate can be reduced, galvanic corrosion can be isolated, and osteogenic activity and biocompatibility can be improved.
In this study, serological tests were performed at 4, 12, and 24 weeks after the operation. Serum levels of Mg2+, AST, ALT, and CREA were examined, and no obvious liver or kidney function damage was found. In addition, pathological sectioning of the animals' organs revealed no obvious lesions, which supports the good in vivo biosafety of the HP-Mg cage. Excess Mg ions can be excreted in urine rather than absorbed by the body.
The results of this study suggest that an HP-Mg interbody fusion cage with a MgF2 coating can achieve cervical vertebral body fusion and thus may be usable in ACDF surgery. However, more in vivo and in vitro experiments and longer observation times are needed. In addition, the fusion effect in the HP-Mg cage group in our study was not satisfactory: the rapid degradation of the Mg destroyed the bone of the upper and lower vertebral body endplates and formed a cavity in the intervertebral space, although it was possible to form a bone bridge connection at the anterior and posterior borders of the vertebral body. Partial fusion was achieved, which differed from related results obtained with cages made of the same material (8), and might reflect differences in surgical technique, animal breeding environment, and other factors.
In vivo experiments on degradable cervical cages are time consuming, require complicated operations, and show large individual differences. Previous researchers used cages with a β-tricalcium phosphate core and a polylactic acid shell to conduct a phase III clinical trial of cervical interbody fusion (19). After an average follow-up of 27 months, the fusion rate was 96% (26/27). However, in another study, the results for cages of the same specification were disappointing: although the trial planned to include 50 participants, it had to be terminated early due to high dislocation and complication rates after recruiting only 33 participants (20). Therefore, animal experiments should be carried out to evaluate the safety of degradable cages before clinical trials are implemented.
Conclusions
The HP-Mg cage with fluoride coating can successfully achieve histological fusion in the cervical intervertebral space of goats: new bone connects the upper and lower vertebral bodies through the central hole of the cage. It is worth noting that the fusion area of the fluoride-coated HP-Mg cage was greater than 30%, the fusion results of the fluoride-coated group at 24 weeks were significantly better than those of the uncoated group, and the degradation rate was also significantly lower than that of the uncoated group. The F-HP-Mg cage can play a stable supporting role in the early post-implantation period and then degrade steadily during the observation period; the cage provides sufficient mechanical support for intervertebral applications. In addition, the F-HP-Mg cage has good biocompatibility, with no adverse reactions in vital organs. More studies are needed to evaluate the long-term fusion effect and degradation behavior of this cage, which is critical for its potential use in ACDF in the future.
Pharmacokinetics of chewed vs. swallowed raltegravir in a patient with AIDS and MAI infection: some new conflicting data
Background While HIV, AIDS and atypical Mycobacterium infections are closely linked, the use of integrase-inhibitor based cART, notably raltegravir-based regimens, is increasingly widespread. RAL should be double-dosed to 800 mg twice daily when rifampicin is co-medicated, because RAL is more rapidly metabolized due to rifampicin-induced uridine 5'-diphospho-glucuronosyltransferase (UGT1A1). Recently, it was speculated that chewed RAL might lead to increased absorption, which might compensate for the inductive effect of rifampicin on RAL metabolism and thereby have cost-saving effects in countries with high tuberculosis prevalence and less economic power. Methods We report measurement of raltegravir pharmacokinetics in a 34-year-old AIDS patient suffering from disseminated Mycobacterium avium infection requiring parenteral rifampicin treatment. RAL levels were measured with HPLC (internal standard: carbamazepine, LLQ 11 ng/ml, validated with the Valistat 2.0 program (Arvecon, Germany)). For statistical analysis, a two-sided Wilcoxon signed rank test for paired samples was used. Results High intra-personal variability in raltegravir serum levels was seen. Comparable Cmax concentrations were found for 800 mg chewed and swallowed RAL, as well as for 400 mg chewed and swallowed RAL. While Cmax seems to depend more on the overall RAL dose than on whether tablets are swallowed or chewed, increased AUC12 is clearly linked to higher RAL dosages per administration. Moreover, chewed raltegravir showed a rapid decrease in serum levels. Conclusions We found no evidence that chewed 400 mg twice-daily raltegravir under rifampicin co-medication leads to optimized pharmacokinetics. More data from randomized trials are needed for further recommendations.
Background
While Human Immunodeficiency Virus (HIV), Acquired Immunodeficiency Syndrome (AIDS) and atypical mycobacterial infections are closely linked, the use of integrase-inhibitor (INI) based antiretroviral regimens (ART), notably raltegravir (RAL)-based regimens, has become more widespread during the HAART era. Raltegravir's benefit is its favorable drug-interaction profile while treating AIDS-defining events in HIV patients [1]. Rifampicin is known to be active against atypical mycobacterial infections, including Mycobacterium avium infections [2,3]. Nevertheless, RAL is more rapidly metabolized in case of rifampicin co-administration due to rifampicin-induced uridine 5'-diphospho-glucuronosyltransferase (UGT1A1) [1,4]. Therefore, it is recommended to double-dose RAL to 800 mg twice daily (BID) in case of rifampicin use, which causes high additional costs in these patients [4]. Recently, there has been conflicting data suggesting that a standard dose of 400 mg RAL BID, taken chewed as part of an ART, might, in the event of rifampicin co-administration, lead to higher drug absorption and lower intersubject pharmacokinetic variability [5,6]. As reported recently, it was speculated that chewed RAL might lead to increased absorption, which might thus compensate for the inductive effect of rifampicin on RAL metabolism and therefore have cost-saving effects in countries with high tuberculosis prevalence and less economic power [5]. Beyond that aspect, high inter- and intra-personal variability in pharmacokinetics has been reported for HIV patients in general [7]. A lack of sufficient RAL levels was not clearly associated with virological failure of antiretroviral therapy, but overall plasma exposure of the drug might have some influence [7,8]. Therefore, conflicting data on differential RAL dosages play a major role in decisions on clinical HIV patient care and overall antiretroviral therapy success.
Clinical objectives
We report our experience with a recent admission of a 34-year-old Caucasian male HIV patient, who had been diagnosed as HIV/HBV co-infected 5 years prior to the actual presentation. The CDC stage was C3, WHO stage IV, due to candidal esophagitis and systemic cytomegalovirus reactivation. His CD4 nadir was 8 cells/μl, and he belonged to the risk group of men who have sex with men. Four months before the current presentation at our center, the patient had been diagnosed with Mycobacterium avium infection on a lymph node biopsy while suffering from fever, weight loss and generalized lymphadenopathy in a peripheral hospital. Antiretroviral therapy with co-formulated tenofovir disoproxil fumarate and emtricitabine (TVD) in combination with the integrase inhibitor RAL was initiated. Oral antimicrobial chemotherapy with rifabutin, ethambutol and clarithromycin was started without delay, and the patient was discharged from the peripheral hospital. Subsequently, persistent fever and weight loss appeared within the first months.
The clinical situation worsened despite medical therapy. He was admitted to our emergency department with shock and severe hypoglycemia 4 months after initiation of the first antimycobacterial treatment. The marasmic patient (body mass index of 15) suffered from 5 liters of watery diarrhea per day, severe hypoalbuminemia (albumin 1.4 g/dl), protein deficiency (serum protein 4.4 g/dl), and iron deficiency anemia (hemoglobin 7.0 g/dl). Subsequently, Mycobacterium avium was diagnosed by microscopy and culture in samples of pleural effusion, stool, sigmoid biopsy and peripheral blood. No other microorganisms could be identified in blood culture diagnostics. Due to the lack of intestinal absorption, antimicrobial chemotherapy was escalated to parenteral rifampicin (600 mg once daily), ethambutol (400 mg three times daily), clarithromycin (500 mg twice daily) and amikacin (750 mg once daily).
Before adjusting ART to the recommended dose of 800 mg RAL BID (adult formulation) in combination with tenofovir disoproxil fumarate co-formulated with emtricitabine, pharmacokinetic studies were performed after informed consent of the patient. RAL concentrations were measured in serum before and 1, 2, 4, 8 and 12 hours after a BID oral administration of RAL at dosages of 400 mg and 800 mg. Before each new measurement after a dose adjustment, a lead-in phase of more than 24 hours was observed. For optimized reliability, the chewed RAL dosage studies were performed twice daily at different points in time. At all times, a standard meal was provided and pantoprazole was co-administered at a dosage of 40 mg once daily. Usage of pH-altering agents like pantoprazole was associated with a possible increase in AUC and Cmax in recent studies [9]. However, the clinical impact of pH-altering agents on RAL levels was less important in HIV/AIDS patients compared to healthy volunteers [10].
Figure 1 Pharmacokinetic data of raltegravir after administration of different dosages (400 mg or 800 mg, BID; either chewed or swallowed).
Methods
RAL levels were measured with high performance liquid chromatography (HPLC, internal standard: carbamazepine, LLQ 11 ng/ml, validation with Valistat 2.0 program (Arvecon, Germany)). For statistical analysis, a two-sided Wilcoxon signed rank test for paired samples was used.
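The Wilcoxon signed-rank statistic used for the paired comparisons can be computed by hand. A minimal pure-Python sketch with the usual conventions (zero differences dropped, tied absolute differences given average ranks); the input data are hypothetical, not the patient's measurements:

```python
def wilcoxon_statistic(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):  # assign average ranks to tied absolute differences
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired concentrations (e.g., chewed vs. swallowed at matched times):
w = wilcoxon_statistic([5, 7, 9, 2, 11], [4, 5, 6, 6, 6])
```

The resulting statistic is compared against tabulated critical values (or a normal approximation for larger n) to obtain the two-sided P value.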
Results
Overall, comparable RAL C2h concentrations were found with either chewed or swallowed 400 mg or 800 mg of RAL BID in the case of rifampicin co-administration. Similar Cmax concentrations were found for 800 mg chewed and swallowed RAL BID, as well as for 400 mg chewed and swallowed RAL BID. While Cmax seems to depend more on the overall RAL dose than on whether tablets are swallowed or chewed, increased AUC12 is clearly linked to higher RAL dosages per administration. Chewed or swallowed RAL at a dosage of 800 mg BID led to acceptable AUC12 levels (8767 ng·h/ml; 10965 ng·h/ml), whereas 400 mg RAL BID showed significantly lower AUC12 levels when chewed or swallowed (3783 ng·h/ml; 4778 ng·h/ml). Interestingly, lower AUC12 levels were seen for chewed dosages of RAL compared to similar swallowed doses at both 400 mg and 800 mg. For chewed or swallowed 400 mg RAL dosages, a rapid decrease in serum levels was seen, beginning 2-4 hours after administration. These findings were confirmed by low serum levels at the start of the next dosing interval at C0 or C12. This results in dangerously low serum concentrations 8-12 hours after administration of chewed or swallowed 400 mg RAL BID, as well as in low C0 concentrations (<45 ng/ml; 66 ng/ml). Notably lower C0 RAL levels were also seen for the chewed 800 mg RAL BID dosage (47 ng/ml) compared to the equivalent swallowed 800 mg RAL BID dosage. Although relevant intra-personal variability in raltegravir serum levels in this HIV patient with watery diarrhea and intestinal malabsorption due to systemic Mycobacterium avium infection has to be taken into account, only swallowed 800 mg RAL led to a lasting RAL serum concentration of 439 ng/ml at C0 and 885 ng/ml at C12 with an acceptable AUC12 of 10965 ng·h/ml. Detailed pharmacokinetic data are shown in Table 1, and a graph of RAL pharmacokinetics is shown in Figure 1.
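AUC12 values such as those quoted above are conventionally computed with the linear trapezoidal rule over the 0-12 h sampling grid. The sketch uses the study's sampling times (0, 1, 2, 4, 8, 12 h); apart from the reported C0 = 439 ng/ml and C12 = 885 ng/ml for swallowed 800 mg RAL, the concentrations are purely illustrative:

```python
def auc_trapezoid(times_h, conc_ng_ml):
    """Linear trapezoidal AUC (ng*h/ml) over the sampling interval."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for (t1, c1), (t2, c2) in zip(
            zip(times_h, conc_ng_ml), zip(times_h[1:], conc_ng_ml[1:])
        )
    )

# Sampling grid used in the study: pre-dose and 1, 2, 4, 8, 12 h after intake.
times = [0, 1, 2, 4, 8, 12]
# Hypothetical 800 mg swallowed profile; only C0 = 439 and C12 = 885 ng/ml
# are values reported in the text, the rest are illustrative placeholders.
conc = [439, 1400, 1600, 1100, 700, 885]
auc12 = auc_trapezoid(times, conc)
```

Note that wide sampling intervals (4-8 h, 8-12 h) dominate the sum, which is why sparse late sampling makes trapezoidal AUC estimates sensitive to the shape of the terminal phase.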
However, a comparison of RAL concentrations after chewed and swallowed administration of RAL showed no statistically significant difference.
Conclusion
Based on these findings, we could not demonstrate optimized RAL levels after taking chewed compared to swallowed RAL tablets, in contrast to previously reported data [5]. Even though pantoprazole co-administration might have altered RAL pharmacokinetics in our data, we would in that case expect a 3-4-fold higher Cmax and AUC of RAL. It can thus be speculated that RAL levels might be even lower than reported here in the absence of gastric proton pump inhibitors, which might be another risk factor for low RAL levels in case of rifampicin co-administration. In any case, while potential cost-saving effects of reduced RAL dosing may be important, the risk of virological failure due to resistance evolution plays a crucial role in INI-based antiretroviral therapy. Therefore, we keep using double-dosed swallowed RAL tablets in rifampicin co-medicated patients, as recommended by the official prescribing information, until further data from controlled trials are available. Dose reduction can cause relevant and potentially dangerous decreases in RAL serum levels and AUC12, which can seriously harm HIV patients. Potential cost-saving effects should not endanger effective antiretroviral therapy until there is further evidence of a superior strategy.
The Qualitative Report
Abstract Increasingly, researchers are conducting studies within a diversity of cultural contexts. This paper discusses whether and how the researcher's own cultural otherness plays a role in academic interview situations. The argument is based on Goffman's theory of interaction under conditions of otherness and on empirical data from 118 interviews and field notes recorded during the years 2007 and 2010 and between 2013 and 2014. The empirical data presented in this paper illustrate how a lack of education, socialisation, and cultivation within the fieldwork context—one's own cultural otherness—assumes ceremonial and substantial meaning in academic interview situations and merits being the subject of methodological considerations.
Understanding the object of research, however, requires direct interaction. Therefore, in the context of culturally challenged researchers, calls for methodological cosmopolitanism were issued some time ago (Beck, 2006; Livingstone, 2003) but have since diminished. The cosmopolitan researcher, in this case, is a type of academic who is interested in geographically diverse or unbounded objects and therefore frequently works at cultural intersections. Cosmopolitan researchers learn to explore the previously described tensions between emic and etic approaches.
My "own cultural otherness" led me to experience the tensions between emic and etic approaches in my research and to reassess the value of each for my own work. The interviewees' articulated perceptions presented in this paper are an important manifestation of this analytical process of reassessment. Therefore, as a culturally different researcher, it is important to respond to these articulations with due regard by exploring the experienced tensions. The "ideal" image of a researcher might differ according to the field's reactions. In this paper, I underline the necessity of actively reflecting on the merits and challenges of the cosmopolitan researcher, who relates inside and outside perspectives on the pathways of knowledge making.
The argument leans on Goffman's (1959, 1967, 1969, 1971, 1974) theory of interaction under a condition of otherness, which outlines the idea of being a technician of reality, whereby each participant in the interaction can shape the interaction. In the paper, I explore whether and how the researcher's own cultural otherness matters in interview situations. Throughout the course of the paper, it becomes apparent that the researcher's cultural otherness is a recurring theme used by the interviewee. Based on this observation, as well as on the qualitative analysis and the use of emblematic examples from the data corpus, I have established that there is general merit in the strategic use of the researcher's own cultural otherness in interview situations. The outlining of this argument implies that more analytical contributions towards the re-thinking of cosmopolitanism in social science research are needed. The paper, therefore, by thematising the cultural otherness of the researcher in elite interview situations, addresses Beck's (2006) call to deal with otherness in the social sciences.
Literature Review: Dimensions of Otherness in Social Science Research
Otherness, the condition of being different from the object researched or from the context in which the research is being conducted, is reflected in social science literature in the context of the wider fieldwork experience (Gurney, 1985; Wolf, 1996) and of interview situations in particular. Social science research addresses otherness in terms of gender identities (Gurney, 1985; Ostrander, 1993; Wolf, 1996), thematic identities (Mikecz, 2012; Rakow, 2009; Welch et al., 2002), and the cultural identities of the interviewees (Herod, 1999). The conceptualisation of otherness thereby starts from the role it plays in academic research as part of underlying power relations.
Scholarly literature particularly emphasises the importance of power gaps in interview situations. Therefore, strategies for balancing power gaps are important to reach a comfortable situation in which communication is liberated, and the trust and credibility of the researcher, as well as the ability to access the information given, are enriched (Thapar-Björkert & Henry, 2004; Thuesen, 2011). Feminist literature, for example, discusses otherness in fieldwork. Differences in gender have been argued to be critical for accessing data and for enhancing the richness of the provided information (Gurney, 1985; Ostrander, 1993; Wolf, 1996). Methodological work on elite interviewing also emphasises differences in status as being critical to the quality of elite interviews (Mikecz, 2012).
Scholarly accounts, however, hardly consider otherness by cultural experience. One exception is Herod's (1999) study on foreign elites. In this work, the author relativizes the "cult of the insider," a conceptual thought which is close to the concept of otherness, as it separates the insiders from the outsiders, the others. Otherness in this case, however, refers to the object of research, defining the interviewees as the other within a certain group of people. The particularity of this definition of the other is, however, that the researcher and her selection of interviewees create a construct of otherness that the researcher acts upon to understand the particularities of expatriate culture. Other accounts thematise the role of otherness less explicitly, as they establish terms such as "informed outsider" (Welch et al., 2002) or "concerned foreign friend" (Mikecz, 2012). These concepts describe, in fact, a form of otherness that has undergone transformation and a certain approximation. The interviewee acknowledges both the informed outsider and the concerned foreign friend. The interviewers understand and describe themselves in similar ways while undergoing the research process and increasingly transcending the object. Both terms testify to an interview interaction after which the otherness of the researcher takes a positive twist, and each thus implicitly assumes that cultural otherness matters in interview situations.
In this paper, the discussion opens to the field of media and communication studies. Hereby, the argument considers the otherness of the researcher and his or her recognition and attendance by the interviewee. Interviewing media and communication policy elites is per se an endeavour that is challenging in terms of access, time, and availability of information (Herzog & Ali, 2015). Thus, conducting interviews within culturally foreign contexts poses multi-layered challenges.
Goffman's Theory of Interaction under Conditions of Otherness
The social figure of the other is a central theme in Goffman's work (1967, 1969, 1971, 1974). The theme of the other as opposed to the normal is elaborated upon particularly in his work on stigmatic interaction (1963) but constitutes a constant concern in his writing on various kinds of ritual interactions (everyday, institutional, and public). Goffman (1959) understands interactions as being a part of and taking place within certain frames and defines frames as a social establishment which "is any place surrounded by fixed barriers to perception in which a particular kind of activity regularly takes place" (p. 231). These can be social, cultural, or institutional frames that comprise the rules accepted throughout the interaction. Being new to a frame, or constituting a frame in which the engaging persons stem from different backgrounds, is thus critical in terms of negotiating and agreeing on a common set of rules that emerges from the interactive frame. Goffman's work was criticised for over-emphasising personal aspects of the interaction and for ignoring the power constituencies that might shape them. Rawls (1987) assessed these points of critique by explaining that the understanding of interaction includes the idea of an interactive order as a concept; Rawls argued that Goffman undertheorized this idea in his work, which leads to frequent assumptions about the failure to include power relations in his work on face-to-face interactions. Goffman himself believed that there is an order within any kind of face-to-face interaction we encounter and, as Rawls (1987) continued, Goffman considered the individual and structure not as competing entities but as "joint products of an interaction order sui generis" (p. 138). The notion of order, which is comparable to the process of establishing rules through negotiation, occurs in this context as the result of the set framework in which the individuals emerge as interacting
agents. In fact, Goffman referred to the role of power in interpersonal interactions, for example when introducing the notion of situational control (1959), as follows: Regardless of the particular objective which the individual has in mind and of his motive for having this objective, it will be in his interest to control the conduct of others, especially their responsive treatment of him. This control is achieved largely by influencing the definition of the situation which the others come to formulate, and he can influence this definition by expressing himself in such a way as to give them the kind of impression that will lead them to act voluntarily in accordance with his own plan. (p. 15) This example shows how Goffman included different characteristics of power relations, which are relevant when looking at communicative interactions such as academic interviews, in which both power constellations and maintaining control over the situation play an important role. In interview situations, particularly those with elites, the creation of images in the interview situation is a dominant theme. Therefore, what the other person perceives is important to the definition of the cultural otherness one conveys. The other is, in this case, understood as opposing the known of both the interviewer and the interviewee. Across his work, Goffman (1967) established in his theory of interaction two dimensions of central interactions, which can assume (a) substantial and (b) ceremonial meanings. While substantial meanings carry their meanings within their articulations, ceremonial meanings assume mostly trust-building functions. In the interaction, Goffman identified five moments that emerge and are reflected throughout both dimensions. These moments consist of the field of surprise, irritations, indirections, expectations, and disappointments within the interaction. Goffman used these key moments to describe the interaction as a general field of surprise,
difficult to calculate, but possible to understand through so-called keyings. These keyings need to be observed in detail, are an element of learning, and thus help one to understand the situations. Understanding the keyings is critical for interactions, which is why Goffman believed that the individual has the ability to work as a technician of reality if the keyings are considered. This means that Goffman's theory of interaction under conditions of otherness departs from the idea that otherness generally matters in interactions and that experiencing the five moments helps to establish a situation in which otherness is negotiated and mutual codes and rules, the order of interaction, are established through the interaction. Such an order of interaction is vital for academic interviews, particularly for elite interviews, in which power relations are of particular relevance.
This is the point from which the analysis for this paper departs. It is based on my fieldwork experience and starts from three core questions: (1) In which contexts does cultural otherness play a concrete role in the interaction as a result of thematization by the interviewee? (2) How does the interviewee react to the interviewer's cultural otherness in these concrete situations? (3) What is the trajectory of the interaction in these cases?
Methods: Analytical Foundation and Data Corpus
This paper explores the question of whether and how the researcher's cultural otherness plays a role in interview situations. At the centre of this piece of research is the observation that, when conducting fieldwork, interviewees often react in similar ways to the cultural difference of a researcher. In the research process, I developed a particular interest in the related patterns that my interviewees shared in their articulated reactions towards me. The interviewees are therefore, for the purpose of this specific analysis, approached as a group which is analysed from an ethnographic perspective (Harris, 1968; Wolcott, 2010). The departing questions, as well as the presented results of the analysis, derive from extended observations made during the diverse fieldwork phases, which served as the source of data. These observations, their description, analysis and interpretation help us to understand how to turn one's own cultural otherness into something functional when interviewing elites in foreign contexts.
The analysis is based on 118 interview encounters and fieldwork notes recorded between 2007 and 2010 and between 2013 and 2014 in Argentina, Brazil, and Uruguay. The material derives from several research projects on developments within the media systems of the three countries and stems from interviews with media (policy) elites. The research design made it necessary to focus on local elites, including political, economic, and social elites represented by the leaders of organisations that work with audiovisual policies. The sample comprised individuals who were actively and significantly involved in the design of audiovisual-media policy processes, as well as journalists, media company representatives, and film-makers. In most cases, middle-range officials who were involved in the formulation and design of audiovisual policies were interviewed; in some cases, access to high-level officials, such as national deputies, senators, or state secretaries, was obtained. The sample also included actors who are not part of the governmental mechanisms but who would engage in the discourse at some point. The aim was to obtain a sample of interviewees from the representative countries whose work with audiovisual policies was as similar as possible.
The interview settings varied: some interviews were held in cafés, but most were conducted in the interviewees' offices. Some participants arrived for the interviews in pairs or in threes, but most interviews were conducted on a one-to-one basis. Overall, 118 interviews, each lasting between 30 minutes and 2 hours, were conducted by the author of this paper.
The home institution did not require a particular review for the protection of human subjects. Nevertheless, a declaration of ethical standards was sent to the participants in preparation for the interviews. The declaration stated that the research was conducted in full awareness of, and agreement with, the ethical standards in research as proposed by the Ethics Commission of the home university, the Economic and Social Research Council (ESRC) Framework for Research Ethics, and the Nuremberg Code. Questions of confidentiality were discussed with the participants of the study, and the data were used exclusively in the way, and within the margins, consented to by the person interviewed before, during or after the interview. Subjects' agreement to participate was given on a voluntary and informed basis.
The interviews were transcribed with the help of native speakers and analysed using NVivo software. The data corpus was complemented and juxtaposed with notes from the fieldwork and discussions with colleagues from within the field. Based on the initial observation that my cultural otherness is a recurring theme throughout the data, the first round of coding helped to identify the interviewees' articulated perceptions regarding my cultural otherness. In the search for patterned regularities (Wolcott, 1994), the material was organised in a second round of analysis in relation to Goffman's five critical moments of interaction under conditions of otherness as defining moments of interpersonal interaction. This construct of interaction is used in the paper to describe, reconstruct and interpret (Wolcott, 1994) how interviewees perceive and articulate the researcher's difference in cultural experience.
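The second round of coding described above can be sketched as a simple bookkeeping step. The data structure and excerpts below are hypothetical (the paper's actual NVivo project is not available); the sketch only illustrates how coded transcript segments might be tallied against Goffman's five moments in the search for patterned regularities.

```python
from collections import Counter

# Goffman's five critical moments of interaction under conditions of
# otherness, used here as the second-round coding scheme.
MOMENTS = {"surprise", "irritation", "indirection", "expectation", "disappointment"}

# Hypothetical coded segments, as they might be exported from NVivo:
# (interview id, assigned code, excerpt).
segments = [
    ("AR-03", "surprise", "Your Spanish is very good."),
    ("BR-11", "indirection", "In which language will the interview be conducted?"),
    ("UY-07", "expectation", "You, as European, maybe you can help us."),
    ("AR-03", "irritation", "We do not say this the way you express it."),
    ("BR-11", "surprise", "How come you know Spanish?"),
]

def pattern_counts(segments):
    """Tally how often each of the five moments was coded across segments."""
    return Counter(code for _, code, _ in segments if code in MOMENTS)

counts = pattern_counts(segments)
print(counts.most_common())
```

In an analysis of 118 encounters, such tallies would only be a starting point; the interpretive work of relating each coded excerpt back to its interactional context remains qualitative.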
The reconstruction of the interaction hereby focuses consciously on the "cultural otherness" that the interview partners both perceive and articulate. Throughout the analysis, it is shown that this articulation of "otherness" is a recurring theme used by the interviewee (1) strategically, (2) to trigger a momentum of learning, and (3) for explanatory reasons, across the five moments that Goffman identified as being key in each interpersonal interaction in which otherness is brought to the fore.
The analysis establishes the ways in which interactions with the cultural other are constituted throughout the interview interactions during the five critical moments that constitute a field: surprise, irritations, indirections, expectations and disappointments. Linking the fieldwork material to the theory of interaction conveys underlying ceremonial and substantial meanings of a researcher's cultural otherness. These are manifested as the interviewee perceives, articulates, and uses this particular encounter in the interaction.
Based on the analysis, I argue that cosmopolitan researchers need to be aware of, and to consider, their cultural difference in both concrete and subtle ways, self-reflexively, to ensure the validity of their research (Maxwell, 1992; May & Perry, 2014). More importantly, researchers should consider this issue as a source for establishing richer repertoires for gaining access to interviewees and for handling interview situations. Speaking in Goffman's terms, the cosmopolitan researcher can be the "technician of reality" within this interaction, as this self-conscious reflection evolves and helps to strengthen the mutual exchange of (cultural) information in academic interviews as communicative interactions under conditions of cultural otherness.
Elite Interviews as a Case of Academic Communicative Interaction Under Conditions of Cultural Otherness
The term elite is one that qualitative researchers have treated critically. An accurate determination of who can be claimed to have elite status is necessary when conducting elite interviews (Harvey, 2009; Peabody et al., 1990; Richards, 1996). As Harvey (2011) outlined, the term elite can mean many things in different contexts and has thus been used very broadly in the literature. Lipset and Solari (1967) defined Latin American elites as "these positions in society, which are at the summits of key social structures; i.e. the higher positions in economy, government, military, politics, religion, mass organizations, education and the professions" (vii). The understanding of elites, as applied in this work, is based upon this definition. In the context of collecting the data used for this analysis, the term elite was defined in a broader sense to describe those who engage actively with policies to carry them forward and to shape them actively.
When integrating elite interviews as academic communicative interaction into Goffman's (1959, 1967, 1969, 1971, 1974) terminology, academic interview interactions can be seen as a fine-tuned act with performative properties. In this performance, the participants pursue diverse sets of interests and hereby depend on the perceptions and related reactions of the interviewer and the interviewee. The challenges of performativity tend to be particularly high in the communicative interaction with elite actors, as status, image, and resources are particularly tangible issues (Dexter, 1970; Hammer & Wildavsky, 1989; Harvey, 2011). Performativity can therefore hinder the success of the interview, its transparency and reliability, and its purpose of enabling sense-making based on the produced data (Dexter, 1970). Herzog and Ali (2015) advocated the integration of more ethnographic elements into elite interview interactions and particularly underlined the aspect of methodological reflexivity. A part of this methodological reflexivity is the need to understand the challenges of producing data that are reliable and sound for the successful understanding of the researched object. The literature on interview interactions understands cultural otherness and its implications as an additional obstacle to successfully accomplishing that task (Kruse et al., 2012; Mikecz, 2012).
Indeed, from the perspective of a researcher who faces his or her own cultural otherness, conducting interviews with elites entails a variety of challenges. It is often difficult to gain access to elites, and sampling for elite interviews is therefore more complex and challenging than sampling for other qualitative interviews. Access to the elite interview, and to good-quality information from within the interview, depends on practical issues, such as time, availability, and sensibility to power structures and reputation. The general challenges of conducting interviews with elites have been examined widely in the literature (Dexter, 1970; Herzog & Ali, 2015; Hunter, 1995; Mikecz, 2012). For the most part, those interviewed for the data corpus used here were people situated in places where audiovisual policy is designed (cities in which the relevant policy bodies are located in each country, i.e., Buenos Aires in Argentina, Brasília and Rio de Janeiro in Brazil, and Montevideo in Uruguay). The particularities of the fields required strategies for adaptation regarding the establishment of contact, language, and the negotiation of meaning, cultural codes, and interaction.
As for the cultural background of the author of this paper, it is important to mention that my own cultural otherness is a recurring theme in the interviews, even though I have enjoyed intercultural university training on the pertinent cultural areas and have lived in the studied region for a few years. The majority of interviews were conducted in Spanish or Portuguese, although some were conducted in English. As a German native speaker, the language challenge had to be tackled through preparation and planning: the questions were prepared in Spanish, Portuguese, and English, with help from native speakers, before the field phase and were amended accordingly. The choice of language was agreed upon before the interviews were conducted; most of the interviewees would choose their native language, except in those cases in which, due to their professional position, it was natural for them to express their thoughts in Spanish or in English. The interviews were recorded with the consent of the interviewees and transcribed with the support of native speakers, using Listen N Write software.
The sample was derived from three studies conducted in Argentina, Brazil, and Uruguay between 2007 and 2014. As Maxwell (2008) pointed out, an "intensive, long-term involvement" is favourable to the quality of research, as it "rules out spurious associations and premature theories" (p. 244). Repeatedly entering the field is favourable to prolonging the overall contact time with the field, as the memory of networks can be reactivated when getting back into the field. Even though this long-term engagement with the "foreign" region might reduce or change the impression of one's own cultural otherness over time and lead to a cosmopolitan rather than a culturally foreign researcher, the general perception of the researcher as not having been born, educated, and socialised in this context prevails. In fact, across the sample, the researcher's cultural otherness prevails as a recurring theme.
Results: Meanings of Perceived and Articulated Cultural Otherness in Academic Interview Settings
In this section, the analysis comprises two dimensions, which reflect (a) ceremonial and (b) substantial meanings. The cultural otherness of the researcher is reflected in ceremonial meanings, which, for the most part, assume trust-building functions and generate interest and trust, whilst the substantial meanings are based on the interviewees' articulations.
Ceremonial Meanings
The ceremonial level starts long before the first face-to-face encounter, via emails and telephone calls to the office staff or to the interviewees themselves. In the typical interview situation, the first 5 to 10 minutes are dedicated to helping the interviewee understand the interviewer's (cultural) background. In this phase, expectations about the interview and the interviewer are established. These expectations relate, on the cultural level, mainly to the researcher's language and cultural knowledge. Interviewees also establish expectations for the basic codes of conduct, such as punctuality, reliability, and forms of greeting.
The language topic emerges frequently during the email exchange and triggers both irritation and indirection, as interviewees ask questions that, when answered, reveal more cultural properties of the interviewer ("In which language will the interview be conducted?" Brazilian interviewee, email). Moments of surprise are also articulated at this first stage of contact ("Your Spanish is very good […]." Argentinian interviewee, email) and directly indicate how this positive surprise is important for the established interviewer-interviewee relationship ("[…] it will be my pleasure talking to you"; ibid.). This first information frequently generates irritation, which often takes the form of curiosity on the side of the interviewee. That this irritation leads to increased interest can be derived directly from the following emails ("Are you Argentinian, but you study in Europe?" Brazilian interviewee, email) or within the first minutes of the interview ("So, tell me, NAME, you are Brazilian and do your PhD in Austria, or where are you from originally?" Uruguayan interviewee, notes from the interview). The answers to these questions by email ("I am German, […], and I research South America.") very often lead to another follow-up email announcing surprise and often including the confirmation of availability for an interview, if not previously given ("How come you know Spanish? I look forward to talking to you." Uruguayan interviewee, email).
Interviewees with this ceremonial history would frequently come back to this topic at the beginning of the interview by adding more questions. Sometimes interviewees use the opportunity to link these questions to the substantial dimension of the interview ("So, NAME, tell me, what brings you to South America? […] Why are you interested in South American media?" Uruguayan interviewee, notes from the interview). Summing up, an interviewee who articulates interest in the cultural characteristics of the interviewer tends to make this a recurring topic during the first phase of email contact. This facilitates the desire to keep in touch, triggers compromise, and establishes the possibility of breaking the ice within the first minutes of the face-to-face interview situation. In some cases, interviewees also possess knowledge about the researcher's cultural context and use the opportunity to share little anecdotes, to showcase their own language knowledge, or to ask about recent developments in Europe ("I travel there once every couple of years. I always go to the city of Stuttgart and Pforzheim; we have some contacts in the industry there." Brazilian interviewee, notes from the interview).
The other recurring cultural themes that assume ceremonial meaning are basic codes of conduct, such as punctuality, reliability, and forms of greeting. In the data used for this paper, the latter is the prevailing expression of cultural differences other than language. Rooted in the expectation of my cultural otherness, it was common for me to greet interviewees with a handshake; however, they would say goodbye with a hug. This was often accompanied by the expression of insecurity or irritation through a reassuring comment ("I understand you know our habits quite well, so we can do this how we do it here." Brazilian interviewee, notes from the interview; "You know that this is how we say goodbye here, right?" Argentinian interviewee, notes from the interview). Sometimes, when meeting face-to-face, interviewees would raise the issue of otherness in greeting habits among the different cultures, thereby leading to indirections ("How do you say hi and bye where you come from?" Argentinian interviewee, notes from the interview). The explanations following these questions and affirmations opened the possibility of continued asking and thus triggered other indirections towards the substantial level. Intermediate articulations happened frequently when interviewees asked about the reason for the researcher's interest in the purpose of the research ("Why are you doing interviews here? Are there no similar issues in Europe?" Co-worker of an Argentinian deputy, notes from the preparatory interaction).
Overall, cultural otherness attained ceremonial meaning and mattered in the interview when the interviewees articulated it with the purpose of (1) learning about and displaying cultural experience and knowledge; (2) establishing common grounds; and (3) understanding, negotiating, and applying codes of interaction correctly. Ceremonial reactions additionally manifest the ways in which cultural otherness is relevant during the interaction. These ceremonial reactions include laughter, silence, agreement, and changes in behaviour, for example when the welcoming greeting differs from the goodbye. These (positive) reactions are favourable to the overall communicative interaction and can increase activity while triggering the recurrent agreement of codes and meanings throughout the entire interaction, as the mentioned examples show.
The acknowledgement of cultural otherness during the interaction establishes common grounds for further interactions. Therefore, it is important to provide sufficient time for the interviewees to articulate their expectations, irritations, disappointments, and indirections that are grounded in the researcher's cultural otherness. Interestingly, in the data corpus used for this paper, cultural otherness still emerged as a theme when shifting to the substantial dimension of the interaction. The interviewees applied the knowledge gathered on the cultural otherness of the researcher during the ceremonial interactions to the substantial interaction in purposeful ways. Three main purposes underlying the articulation of perceived cultural otherness were identified within the substantial meaning of cultural otherness established by the interviewees: strategic; learning-oriented; and explanatory.
Substantial Meanings
The researcher's cultural otherness was articulated by the interviewees in the context of substantial questions and answers and was therefore given substantial meaning as well. The knowledge about the interviewer's cultural difference frequently triggered expectations on the substantial level. These expectations ranged from facing an interviewer with little expert knowledge regarding the studied context ("Well, probably you do not know anything about this, as you are not from here, and in Europe things might work differently." Government consultant 1) to expecting a highly eloquent expert ("You, as European, maybe you can help us…counsel us so we understand what we have to improve." Communications Enterprise Workers Union representative 1, interview) or colleague ("We are starting this […] initiative. Maybe you would like to participate; we want to give it an international characteristic." Government consultant 2, notes from the interview).
Other forms of expectations were linked to taking sides and to understanding the position of the interviewee: "Well, I am sure you, as European, understand that this law is effectively a violation of the right of freedom of the press. […] I am talking to you, because I think it is important that people in Europe learn about what is going on here." (Interview with representative of media company 1). These expectations led to irritations when interviewees understood that the role of the researcher was primarily bound to objectivity and that the foreign friend could not help with their agenda ("So, what is your opinion on the law, are you in favour or against it?" - "Well, I think that, as a researcher, I am not in the right place to answer that question." - "But if you had such a discussion in Germany or Austria or in the European Union, whose side would you take?" - "Well, it is very unlikely for such a discussion to appear in these contexts in the next years. Anyway, it is a very interesting debate you are having here." Interview with government representative 3).
In such a situation, the articulated perception tends to shift rapidly from foreign friend to foreigner ("I do not understand what you mean; this is not Spanish." - the attempt to reformulate is interrupted - "We do not say this the way you express it." Interview with government representative 3). In this example, the irritation transforms into disappointment, and the researcher's cultural otherness becomes a power strategy of the interviewee. In other situations, in which language was an obstacle, interviewees would try to help in a positive manner to overcome those issues, for example through code-switching to a second language ("We have to …. [is thinking about which word to use] it's juntar in Portuguese…like bridge or unite in English, I think - do you understand what I mean - did I explain that okay?" Government consultant 2, interview in English).
Irritation also transformed into a positive form of indirection when the interviewee tried to link local processes to European situations: "Well, in Europe you have the Television Without Frontiers Directive, right? And then you have the Audiovisual Media Directive, correct? You see, here, things are just evolving, but this is where we want to be; we need a better institutional framework." (Government representative 4). Interviewees also used indirection in the form of questions to find out how much background information needed to be given so that the interviewer could understand the answer provided ("So, even though you're not from here, you know more or less what the debate is about…the main conflicts, right? So I do not have to start from scratch, right?" Representative of NGO 1, interview).
Based on the cultural otherness of the researcher, substantial reactions related to consultations about meanings, clarifying questions from the interviewee, and, in rare cases, a refusal to answer a question. These reactions resulted in increased background information, the reframing of questions, and the initiation of new topics through the constant negotiation of the meaning and relevance of the questions posed. However, all these reactions were linked to a set of purposes and manifested the framework of the interview, in which the researcher's own cultural otherness and the interviewee's perception of this circumstance influenced the interaction.
These examples show that one's own cultural otherness in academic interview interactions can assume strategic relevance and is frequently integrated into the strategy of the interviewee. The interviewees' approaches to the researcher's cultural otherness become manifested in their articulated, related perceptions. These articulations can result in both positive and distanced communicative interactions before, during, and after the interview situation. Throughout the analysis, it was shown that this articulation of cultural otherness was a recurring theme used by the interviewee in a purposeful manner. The cultural otherness of the cosmopolitan researcher thus carries both ceremonial and substantial meanings (Table 1).

[Table 1: Ceremonial meanings of cultural otherness serve to learn about and display cultural experience and knowledge; to establish common grounds; and to understand, negotiate, and apply codes of interaction correctly. Substantial meanings take three forms: strategic, learning-oriented, and explanatory.]

Source: Presentation based on the qualitative content analysis of 118 interview encounters and notes from the interviews between the author of this paper and media and media policy elites in Argentina, Uruguay, and Brazil between 2007 and 2014.
Discussion: Challenged Researcher or Technician of Reality?
The analysis of interview data with interviewees from South America during the last eight years suggests that the researcher's cultural otherness matters not only in terms of preparation for the interviews but also in the concrete interview situations themselves. Interviewees perceive, refer to, and use the cultural otherness of the researcher in a purposeful way throughout the interaction. The data suggest that ceremonial meanings of cultural otherness are raised with the purpose of learning about and displaying cultural experience and knowledge; establishing common grounds; and understanding, negotiating, and applying codes of interaction correctly. The perceived cultural otherness, in the ceremonial sense, offers the opportunity to create moments of identification and cultural bridges with which to break the ice and establish an interview situation in which both the interviewer and the interviewee enter into a positive communicative interaction. However, it is clear that the latter aspect has to be assessed according to the cultural context, as the data corpus relied on cultural exchanges between Western European and South American interview partners and thus bears little potential for cultural conflict.
Substantial meaning is reflected in three forms of acknowledgement: strategic; learning-oriented; and explanatory. The strategic use implies a strategy of persuasion, distraction, or belittlement. The momentum of learning was frequently created in situations in which the interviewer was perceived as a foreign friend or an informed outsider and was often attached to explanations that used cases from the interviewer's context to underline commonalities or differences between the contexts. This means that the interviewees acknowledged the cultural otherness of the interviewer in ways that were both favourable and less favourable to the communicative interactions. In both cases, the particular situation of the researcher mattered.
In his theory of interaction, Goffman suggested that otherness is reflected throughout five key moments: surprise, irritations, indirections, expectations and disappointments. The strategic use of the interviewer's own cultural otherness by interviewees might account for some of the more disadvantageous interview experiences. However, Goffman also argued that the participants of the interactions can be technicians of the reality of the interaction. The interview data used for this analysis show that cultural otherness also carries opportunities for enriching the repertoire of interaction.
The scholarly literature on conducting interviews in culturally foreign contexts primarily offers warnings about how to do it right and how to become the "ideal" researcher (Hammer & Wildavsky, 1989), regardless of the challenges that cultural otherness clearly carries. In contrast to these accounts, this paper suggests that cultural otherness can be used as an additional tool by academic researchers. The interaction does not necessarily have to end on the "down note" of disappointment in Goffman's circle of communicative interaction. However, it remains clear that preparation and observation are key to converting one's own otherness into a successful strategy.
The present analysis shows that, indeed, cultural otherness can raise new thematic issues and unexpected links and can increase interaction through follow-up questions, thus contributing to engagement in the interview situation from both sides. The understanding, as well as the positioning, of the other is thereby put into a place where it facilitates informational exchange and access to data. In accordance with Lefebvre (1991), it is important to recognise the creative capacity of being an outsider; in his conceptualisation of space, he argues that, in the conditions of the modern world, the marginal and the peripheral have a creative capacity, as they are both inside and outside, included and excluded.
In line with this, the cultural otherness of a researcher clearly carries challenges, but also potential for increased and mutual engagement in the interaction. To conclude, the argument developed on the basis of the analysis presented in this paper is that the capacity of the cosmopolitan researcher opens new repertoires for actively mastering academic research situations in culturally foreign contexts. This so-far under-studied aspect should be addressed when preparing and conducting interviews in culturally foreign contexts.
Oxidative Stress and Pro-Inflammatory Status in Patients with Non-Alcoholic Fatty Liver Disease
Background: Nonalcoholic fatty liver disease (NAFLD) is characterized by excessive fat accumulation, especially triglycerides, in hepatocytes. If the pathology is not properly treated, it can progress to nonalcoholic steatohepatitis (NASH) and continue to fibrosis, cirrhosis or hepatocarcinoma. Objective: The aim of the current research was to identify plasma biomarkers of liver damage, oxidative stress and inflammation that facilitate the early diagnosis of the disease and allow its progression to be monitored. Methods: Antioxidant and inflammatory biomarkers were measured in the plasma of patients diagnosed with NAFLD (n = 100 adults; 40–60 years old) living in the Balearic Islands, Spain. Patients were classified according to the intrahepatic fat content (IFC) measured by magnetic resonance imaging (MRI). Results: Circulating glucose, glycosylated haemoglobin, triglycerides, low-density lipoprotein-cholesterol, aspartate aminotransferase and alanine aminotransferase were higher in patients with an IFC ≥ 2 than in patients with an IFC of 0 or 1. The plasma levels of catalase, irisin, interleukin-6, malondialdehyde, and cytokeratin 18 were higher in stage ≥2 subjects, whereas the resolvin D1 levels were lower. No differences were observed in xanthine oxidase, myeloperoxidase, protein carbonyl and fibroblast growth factor 21 depending on liver status. Conclusion: The available data show that the severity of NAFLD is associated with an increase in oxidative stress and proinflammatory status; these biomarkers may also be useful for diagnostic purposes in clinical practice.
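The group comparison described in the abstract can be illustrated with a minimal sketch: patients are split by IFC stage (< 2 versus ≥ 2) and the mean level of a plasma biomarker is compared between the two groups. The patient records and ALT values below are invented for illustration only; the real study used n = 100 patients and MRI-based staging, and formal statistical testing rather than a bare comparison of means.

```python
from statistics import mean

# Hypothetical patient records: IFC stage (0-3) and plasma ALT (U/L).
patients = [
    {"ifc_stage": 0, "alt": 22.0},
    {"ifc_stage": 1, "alt": 28.0},
    {"ifc_stage": 1, "alt": 25.0},
    {"ifc_stage": 2, "alt": 41.0},
    {"ifc_stage": 2, "alt": 47.0},
    {"ifc_stage": 3, "alt": 55.0},
]

def mean_by_severity(patients, marker):
    """Mean marker level in the IFC < 2 group versus the IFC >= 2 group."""
    low = [p[marker] for p in patients if p["ifc_stage"] < 2]
    high = [p[marker] for p in patients if p["ifc_stage"] >= 2]
    return mean(low), mean(high)

low_mean, high_mean = mean_by_severity(patients, "alt")
print(f"IFC < 2: {low_mean:.1f} U/L; IFC >= 2: {high_mean:.1f} U/L")
```

In this toy data, the mean ALT is higher in the IFC ≥ 2 group, mirroring the direction of the differences reported in the abstract.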
Introduction
The most common chronic liver disease in western societies is nonalcoholic fatty liver disease (NAFLD), which affects up to 25% of the population and is emerging as a serious and growing clinical problem, with a 90% prevalence among obese individuals [1]. The prevalence of the more progressive form of NAFLD, nonalcoholic steatohepatitis (NASH), ranges from approximately 25-70% among obese patients [2]. NAFLD is characterized by an excessive fat accumulation, especially of triglycerides, in hepatocytes and, consequently, is strongly linked to overweight, obesity, and insulin resistance [3]. NAFLD is accompanied by a broad spectrum of clinical and pathological manifestations hardly distinguishable from those seen in alcoholic patients [3]. If this pathological situation is not properly treated, it can progress to NASH and continue to fibrosis, cirrhosis or even hepatocarcinoma [4]. This disease is not directly associated with age, since it can affect people younger than 40 years old [5]. NAFLD could be an additional risk factor for cardiovascular disease (CVD), chronic kidney disease, endocrinopathies (including type 2 diabetes mellitus (T2DM) and thyroid dysfunction) and osteoporosis [6][7][8][9]. NAFLD is a disease with few well-defined signs or symptoms; when present, these include an enlarged liver, fatigue, pain in the right upper abdomen, and a slight increase in circulating transaminases [10]. To date, there are no effective pharmacological therapies against NAFLD; the therapeutic approaches to fight this disease are basically dietary and lifestyle modifications [11]. Concretely, exercise and nutritional interventions are the first line of therapy, mainly aimed at controlling body weight, metabolic syndrome and cardio-metabolic risk factors [12].
Nowadays, the most reliable method of diagnosing NAFLD is a liver biopsy but, since NAFLD is a long-term disease and biopsy an invasive method, it is complex to follow large groups of people through serial biopsies [13]. Other methods of diagnosis include a complete ultrasound, which is usually the first test when liver disease is suspected; magnetic resonance imaging (MRI), which allows for a good diagnosis; and elastography, an improved form of ultrasound that measures liver stiffness, indicative of fibrosis or scarring [14]. Therefore, many people suffering from NAFLD are not diagnosed until the disease has progressed to a more serious stage. In fact, in a significant number of cases, the diagnosis is made when there is already severe liver disease or cirrhosis and the patient may require a liver transplant [15]. Indeed, about 20-25% of adults with NAFLD develop cirrhosis within 10 years, and 11.3% of cirrhotic patients with NAFLD develop hepatocellular carcinoma within 5 years [11].
Oxidative stress and inflammation are significant features involved in NAFLD. Reactive oxygen species (ROS) overproduction can initiate lipid peroxidation processes by damaging both membrane structure and function, may be responsible for the oxidation of proteins key to cell metabolism and function, and may cause nucleic acid oxidation [16,17]. All these actions can trigger apoptotic processes by affecting the mechanisms involved in the regulation of the cell life cycle [18][19][20]. Since the liver has a limited capacity for triglyceride accumulation, lipid deposition under overfeeding conditions, as in the case of NAFLD, determines the accumulation of high levels of fatty acids, generally saturated ones, which are associated with cell dysfunction [21]. Indeed, the excess of fatty acids induces high rates of β-oxidation, increasing ROS production in the mitochondrial respiratory chain, which can cause cellular damage and oxidative stress [22]. This situation is directly associated with an increase in oxidative damage markers, an activation of Kupffer cells and pro-inflammatory pathways, and the recruitment of circulating immune cells [23,24]. Chronic inflammation derives from an incorrect resolution of acute inflammation, which can occur when the source stimulus persists over time. This pro-inflammatory condition is most often associated with metabolic diseases, such as diabetes, obesity, metabolic syndrome and nonalcoholic fatty liver disease, and even with cancers, which are characterized by a subclinical chronic inflammatory state [25]. The presence of NASH-associated inflammation identifies NAFLD patients at a higher risk of fibrosis and disease progression [26].
Since the diagnosis of this pathology requires invasive or expensive methods, it is important to find additional markers that allow for the evaluation of the degree of liver steatosis. The aim of this study was to identify plasma biomarkers of liver damage, oxidative stress and inflammation that facilitate an early diagnosis of the disease and allow its progression to be monitored.
Design and Participants
One hundred adults aged 40-60 years, recruited in the Balearic Islands, Spain, were selected considering the following inclusion criteria: (1) BMI (body mass index) 27-30 kg/m² or an increased waist circumference of ≥94 cm in men and ≥80 cm in women; (2) triglyceride levels ≥150 mg/dL; (3) reduced HDL-cholesterol, <40 mg/dL in men and <50 mg/dL in women; (4) increased blood pressure (BP), systolic BP ≥ 130 mmHg or diastolic BP ≥ 85 mmHg; (5) fasting serum glucose level ≥100 mg/dL. The following exclusion criteria were applied: previous cardiovascular disease; liver diseases (other than NAFLD); viral, autoimmune and genetic causes of liver disease; active cancer or a history of malignancy in the previous 5 years; previous bariatric surgery; nonmedicated depression or anxiety; alcohol (>21 and >14 units of alcohol a week for men and women, respectively) and drug abuse; pregnancy; primary endocrinological diseases (other than hypothyroidism); weight loss medications in the past 6 months; concomitant therapy with steroids; inability or unwillingness to give informed consent or communicate with study staff.
The study protocols followed the ethical standards of the Declaration of Helsinki, and all procedures were approved by the Ethics Committee of the Balearic Islands (ref. IB 2251/14 PI). All participants were informed of the purpose and implications of the study, and all provided written consent to participate. The study is registered at ClinicalTrials.gov (ref. NCT04442620) [27].
General Data
Information on patients' socioeconomic status, medical history, current use of drugs, previous diseases, smoking status and alcohol consumption was obtained from all patients during an initial interview with the study dietician and study nurse.
Diagnosis of NAFLD
The fatty liver analysis was performed with a 1.5-T magnetic resonance imaging (MRI) scanner (Signa Explorer 1.5T, General Electric Healthcare, Chicago, IL, USA) using a 12-channel phased-array coil [28].
The upper abdominal MRI protocol included the IDEAL IQ sequence, which provides volumetric whole-liver coverage in a single breath-hold and noninvasively generates estimated T2* and triglyceride fat fraction maps [29]. Breath-held abdominal imaging can evaluate diffuse liver diseases such as hepatic steatosis and corrects for challenging confounding factors such as T2* decay. The technique is designed for water-triglyceride fat separation with simultaneous T2* correction and estimation based on the IDEAL technique. Six gradient echoes are typically collected using the 3D Fast SPGR sequence in one or two repetitions. The IDEAL IQ reconstruction produces water and triglyceride fat images, a relative triglyceride fat fraction map and R2* maps from the six-echo source data.
The patients who met the inclusion criteria and agreed to participate in the study were classified into three groups according to the intrahepatic fat content (IFC) determined by MRI. The first group included patients without evidence of NAFLD (IFC 0); the second group included patients with IFC 1 NAFLD; and the third group included patients with IFC 2-3 NAFLD. Cross-validated estimates of the diagnostic accuracy of MRI according to the liver proton density fat fraction (PDFF) thresholds for grading hepatic steatosis were: IFC 0 (<6.4%), IFC 1 (6.4-17.39%), IFC 2-3 (≥17.4%) [30]. Patients were classified as "IFC 0" (n = 44), "IFC 1" (n = 40) and "IFC ≥ 2" (n = 27) as described above and following the recognized clinical criteria [31,32].
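For illustration only (this is our own sketch, not part of the study's analysis software), the PDFF cut-offs above can be encoded as a small grading function; the function name and the half-open boundary at 17.4% are our assumptions:

```python
def ifc_group(pdff_percent: float) -> int:
    """Grade intrahepatic fat content (IFC) from the MRI proton density
    fat fraction (%), using the cross-validated thresholds cited in the
    text: IFC 0 (<6.4%), IFC 1 (6.4-17.39%), IFC 2-3 (>=17.4%)."""
    if pdff_percent < 6.4:
        return 0  # no evidence of NAFLD
    if pdff_percent < 17.4:
        return 1  # grade 1 steatosis
    return 2      # grade 2-3 group ("IFC >= 2")
```

Applied to the cohort, this grading yielded the three groups of n = 44, 40 and 27 participants reported above.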
Anthropometric Characterization
Weight (kg) was measured with calibrated scales with the subjects in bare feet and light clothes, and 0.6 kg was subtracted to account for clothing. Height (m) was determined to the nearest millimetre with a wall-mounted stadiometer (Seca 213, SECA Deutschland, Hamburg, Germany) with the participant's head in the Frankfurt plane. Body mass index (BMI) was calculated as weight divided by height squared (kg/m²). Blood pressure was measured in triplicate in a seated position with a validated semi-automatic oscillometer (Omron HEM, 750CP, Hoofddorp, The Netherlands).
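As a minimal sketch of the anthropometric calculation (our own illustration; only the 0.6 kg clothing allowance comes from the protocol above):

```python
def bmi(measured_weight_kg: float, height_m: float, clothing_kg: float = 0.6) -> float:
    """Body mass index (kg/m^2), subtracting a fixed allowance for light
    clothing from the measured weight before dividing by height squared."""
    return (measured_weight_kg - clothing_kg) / height_m ** 2
```

For example, a scale reading of 80.6 kg at a height of 1.70 m gives a corrected BMI of about 27.7 kg/m², inside the 27-30 kg/m² inclusion band.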
Protein Carbonyl Determination
Protein carbonyl derivatives were measured using an OxiSelect™ Protein Carbonyl Immunoblot Kit (Cell Biolabs®, San Jose, CA, USA) following the manufacturer's instructions. The total protein concentration was determined by the Bradford method [33] using the Sigma-Aldrich Bradford reagent (Sigma-Aldrich, St. Louis, MO, USA). First, 10 µg of plasma protein was transferred onto a nitrocellulose membrane by the dot blot method (Bio-Rad, CA, USA). The membrane was then incubated with 2,4-dinitrophenylhydrazine (DNPH) for carbonyl derivatization. This step was followed by incubation with a primary antibody specific to DNPH (1:1000) and subsequently with goat antirabbit IgG (1:1000). An enhanced chemiluminescence kit (Immun-Star® Western C® Kit reagent, Bio-Rad Laboratories, Hercules, CA, USA) was used to develop the immunoblots. An image analysis program, Quantity One (Bio-Rad Laboratories, CA, USA), was used to visualize and quantify the protein carbonyl bands.
Enzymatic Determinations
The activities of catalase (CAT) and superoxide dismutase (SOD) were both determined in plasma as described elsewhere [34,35]. Both enzyme activities were measured with a Shimadzu UV-2100 spectrophotometer (Shimadzu Corporation, Kyoto, Japan) at 37 °C. Plasma CAT activity was measured using Aebi's spectrophotometric method, based on the decomposition of H2O2 [34]. Plasma SOD activity was measured by an adaptation of McCord and Fridovich's method [35].
Malondialdehyde Assay
A marker of lipid peroxidation in plasma (malondialdehyde; MDA) was measured using a specific colorimetric assay kit (Sigma-Aldrich Merck®, St. Louis, MO, USA), whose method is based on the reaction of MDA with a chromogenic reagent to generate a stable chromophore. Plasma samples and standards were placed in glass tubes containing n-methyl-2-phenylindole in acetonitrile:methanol (3:1); HCl (12 N) was then added, the samples were incubated at 45 °C for 1 h, and the absorbance was measured at 586 nm. A standard curve of known concentrations was used to calculate the MDA concentration.
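A standard-curve calculation of this kind can be sketched as follows (an illustration with invented calibration values, not the kit's actual data):

```python
def fit_line(conc, absorbance):
    """Ordinary least-squares fit of absorbance = slope * conc + intercept
    for a linear calibration (standard) curve."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(absorbance) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, absorbance))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

def mda_concentration(a586, slope, intercept):
    """Invert the calibration line: sample concentration from A586."""
    return (a586 - intercept) / slope

# Hypothetical MDA standards (uM) and their A586 readings
slope, intercept = fit_line([0, 5, 10, 20], [0.02, 0.17, 0.32, 0.62])
```

With this fit, an unknown sample reading A586 = 0.32 maps back to 10 µM, as expected from the standards.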
Statistics
Statistical Package for Social Sciences (SPSS v.25 for Windows, IBM Software Group, Chicago, IL, USA) was used to carry out the statistical analysis. Results are expressed as the mean ± standard error of the mean (SEM). The level of significance was set at p < 0.05 for all statistics. A Kolmogorov-Smirnov test was first applied to assess the distribution of the data. The statistical significance of the data was assessed by a one-way analysis of variance (ANOVA), with a Bonferroni post-hoc test for multiple comparisons. The biomarker results were analysed by receiver operating characteristic (ROC) curves and the area under the curve (AUC), and by a multivariate logistic regression according to intrahepatic fat content (IFC, dependent variable) after adjustment for sex, smoking and alcohol consumption (categorical variables) and age (continuous variable) to control for potential confounding.
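The ROC analysis reduces, in its simplest form, to the Mann-Whitney formulation of the AUC: the probability that a randomly chosen pathological sample scores higher than a nonpathological one. A dependency-free sketch of that computation (our own illustration; the study itself used SPSS):

```python
def auc(pathological_scores, nonpathological_scores):
    """Empirical AUC via the Mann-Whitney statistic:
    P(pathological > nonpathological) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in pathological_scores:
        for q in nonpathological_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pathological_scores) * len(nonpathological_scores))
```

Note that inverting a marker (multiplying its scores by -1) maps an AUC of a to 1 - a, which is the sense in which a below-0.5 predictor such as resolvin D1 becomes informative when inverted.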
Anthropometric and Haematological Parameters
The anthropometric characteristics of participants stratified by IFC are shown in Table 1. Participants with IFC ≥ 2 showed significantly higher values of weight, glucose, HbA1c, TG, AST and ALT with respect to IFC = 0. Systolic blood pressure and ALT also showed significant differences when compared with IFC = 1. HDL-cholesterol was lower at IFC ≥ 2 than at stage 0, and LDL-cholesterol also showed significant differences. No differences were found in the haematological variables of the participants, except for leukocytes, which differed significantly between IFC = 1 and IFC = 0.
Oxidative Stress and Inflammatory Biomarkers
The enzymatic activities of CAT and SOD, the ELISA results for MPO, XOD, irisin, IL-6 and resolvin D1 (RvD1), and plasma damage biomarkers such as MDA and protein carbonyls are shown in Table 2. CAT, SOD and irisin were significantly higher in subjects with IFC ≥ 2 than at IFC = 0 and 1, while no differences were found in MPO, XOD or protein carbonyls between groups. IL-6 was significantly higher at IFC ≥ 2 than at IFC = 0. RvD1 differed between IFC = 0 and IFC ≥ 2, with significantly lower levels at IFC ≥ 2. As a lipid peroxidation marker, MDA levels were significantly higher at IFC = 1 and IFC ≥ 2 than at IFC = 0. Differences between sexes were observed in CAT (IFC = 1 and IFC ≥ 2), IL-6 (IFC = 0) and MDA (IFC = 0 and IFC ≥ 2).
CK-18 and FGF21 Levels
The plasma levels of CK-18 and FGF21 stratified by NAFLD stage are shown in Figure 1. The levels of CK-18 were significantly higher at IFC = 1 and at IFC ≥ 2 with respect to IFC = 0 (Figure 1A), whereas the levels of FGF21 showed no differences (Figure 1B). Figure 2 shows the accuracy of the biomarkers in the assessment of IFC by means of a ROC curve (pathological IFC, IFC ≥ 1, vs. nonpathological IFC, IFC = 0). The best area under the curve (AUC) results were found for CAT, MDA, CK-18, SOD, irisin and IL-6, representing 88%, 82%, 80%, 76%, 65% and 63% of the AUC, respectively, all above the reference line. Resolvin D1 was a poor predictor according to its AUC value (44%); however, its inverted value could be a good IFC predictor. Table 3 shows the association between biomarkers and IFC, by means of a multivariate adjusted logistic regression (odds ratio and 95% CI) considering nonpathological IFC (IFC = 0) as the reference value. After adjusting for possible confounders, catalase (CAT), malondialdehyde (MDA), cytokeratin 18 (CK-18), superoxide dismutase (SOD), interleukin 6 (IL-6), irisin, and the inverse of resolvin D1 were significantly associated with pathological IFC (IFC ≥ 1).
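The odds ratios and confidence intervals in Table 3 are obtained, in any logistic regression, by back-transforming the fitted coefficients; a generic sketch of that step (illustrative values only, not the study's actual numbers):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
```

A coefficient of 0 maps to OR = 1 (no association), and the association is significant at the 5% level when the confidence interval excludes 1.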
Discussion
The main findings of this study are that oxidative stress and proinflammatory biomarkers are clearly related to the intrahepatic fat content, which may be useful for diagnostic purposes in clinical practice.
Moreover, the current findings also confirmed previous results [36][37][38] on blood biochemical markers (higher glycaemia, HbA1c, triglycerides, AST and ALT, and lower HDL-cholesterol), which progressively worsen with increasing IFC, as well as altered levels of systolic and diastolic blood pressure. These outcomes are in accordance with previous studies that evidenced significant increases in specific liver enzymes (ALT, GGT, and an AST/ALT ratio <1) and in HbA1c in NAFLD subjects compared to healthy subjects [36][37][38]. It has also been suggested that bilirubin and CRP could be good predictors of NAFLD [38][39][40]. The absence of differences in the levels of bilirubin, GGT and CRP in the present study may derive from the fact that all the patients suffered from metabolic syndrome and no healthy participants were included.
Participants with a high IFC showed higher levels of oxidative damage markers (MDA), plasma antioxidant enzymatic activities (CAT, SOD), proinflammatory markers (IL-6, CK-18) and cytokines (irisin), but lower resolvin D1 levels and no changes in the protein hepatokine FGF21. Previous studies showed increases in antioxidant enzymatic levels in serum/plasma samples [37] as an adaptive mechanism to cope with the increase in ROS production associated with NAFLD [41]. This higher production of ROS, mainly derived from the respiratory chain [42], can cause cell damage, activate inflammatory cells, and induce cytokine production [37,43]. The accumulation of lipids in liver cells also induces lipotoxicity associated with endoplasmic reticulum stress [44,45], contributing to the induction of inflammatory responses and the development of chronic metabolic diseases such as NAFLD [46]. The current findings are in accordance with previous results showing high levels of MDA in NAFLD and chronic viral hepatitis patients [47].
The MPO and XOD levels, as biomarkers of pro-inflammatory states, did not differ significantly between IFC groups. Previous studies suggested that MPO could be a good noninvasive biomarker to distinguish NASH from steatosis [48,49]. XOD is an essential enzyme in the metabolism of nucleic acids, which is released into the circulation when liver damage occurs [50]; it mediates the peroxidation of lipids and is involved in the occurrence and progression of liver damage [51,52]. The absence of differences may be because the participants in the current study had a high BMI and a low-grade subclinical inflammatory state, but no additional inflammation associated with steatohepatitis. To our knowledge, there are no previous studies analysing plasma XOD levels in humans with NAFLD in comparison with healthy subjects; an increase in its activity has only been found in the serum of rats with NAFLD [53].
However, patients with a high IFC showed higher levels of the proinflammatory cytokine IL-6, which is in accordance with a previous study that evidenced higher pro-inflammatory cytokine levels (TNFα and IL-6) in NAFLD patients [54]. A progressive increase in IL-6 was also found in steatosis and NASH patients [55], as well as in patients with metabolic syndrome [56]. IL-6 has also been related to hepatocellular carcinoma (HCC), the most common liver cancer, mainly in males, both in humans and in mice, probably due to the inhibitory effects of oestrogens on IL-6 production in females [57,58]. However, the current findings showed higher IL-6 plasma levels in women only at IFC = 0 (nonpathological), with no differences from males at pathological IFC levels. This lack of differences may be explained by the menopausal stage of the female participants, which places them in an endocrinological situation similar to that of men, without the oestrogenic inhibition of IL-6 production.
CK-18, an intermediate filament protein of hepatocytes, is released into the circulation when hepatocyte damage occurs, making it a biomarker of disease progression in NAFLD and liver injury [59]. Since CK-18 is cleaved by caspases, its serum levels can be indicative of hepatocyte apoptosis, a typical feature of liver injury [60]. The current study revealed that CK-18 increased significantly with liver steatosis. Previous studies described CK-18 as a noninvasive marker that could allow for the identification of patients with NAFLD, and a relationship between CK-18 levels and the evolution of NAFLD has also been reported [59,60].
Irisin, a cytokine secreted by muscle after physical exercise and a marker of insulin resistance and metabolic disease [61], showed high levels when the IFC was high. Similar results were previously observed, showing a direct association between plasma irisin concentration and BMI in obese and NAFLD patients [61][62][63]. A recent study showed that fibronectin type III domain-containing 5 (FNDC5), whose proteolytic cleavage produces soluble irisin, is elevated in NAFLD [64], suggesting that the FNDC5 increase in hepatocytes may be a mechanism to cushion the development of NAFLD by reducing hepatocyte steatogenesis and damage.
Resolvin D1, a lipid mediator involved in the restoration of normal cellular function following inflammation after tissue injury, as in obesity [65], showed lower levels in participants with a high IFC. These resolvin levels may reflect a loss of the ability to respond to chronic subclinical inflammation, which would favour the progression of NAFLD. A multivariate logistic regression analysis applied in the current study showed a direct association between the inverse of resolvin D1 levels and IFC, indicating that it may be a good IFC predictor.
FGF21 is a protein hepatokine mainly released from hepatocytes [66], and previous studies reported that FGF21 levels increase in NAFLD patients [67,68]. In a 3-year prospective population-based cohort, FGF21 levels were elevated in patients who progressed to NAFLD compared with patients who did not [69]. Accordingly, it has been suggested that the serum FGF21 level is a good biomarker for NAFLD diagnosis [66]. However, the current findings did not evidence any changes in FGF21 levels as the IFC increased. This may be explained by the fact that all participants were obese and had metabolic syndrome features, and therefore may have had elevated levels in all the groups studied. It has also been pointed out that increased circulating FGF21 levels in over-nutrition could reflect compensatory responses by FGF21 to the underlying metabolic stress [70]. Additionally, FGF21 has direct anti-inflammatory and antifibrotic effects on the liver that are not associated with insulin resistance and obesity [71,72]. Thus, the absence of differences in this marker could also be due to the fact that the patients only present liver steatosis and not steatohepatitis, which implies inflammation and fibrosis.
Finally, the current findings from the ROC curve and area under the curve, as well as the multivariate logistic regression, showed that the plasma levels of CAT, SOD, CK-18, irisin, IL-6 and MDA, as well as the inverse of resolvin D1 levels, may be good IFC markers useful in clinical practice.
It has also been pointed out that NAFLD by itself is associated with cardiovascular events [73], and may precede and/or promote the development of T2DM, hypertension, and atherosclerosis/CVD in a bi-directional relationship between NAFLD and metabolic syndrome components, in particular T2DM and hypertension [74]. All these results indicate that NAFLD is a systemic disease, and not just a hepatic disease [75]. The latest review also pointed out that NAFLD is linked to chronic kidney disease, as well as to endocrine, pulmonary, dermatological, gynaecological and haematological disorders, and to several cancers [76], emphasizing that NAFLD is more than just a liver disease.
Our previous findings showed that a higher dietary inflammatory index was associated with a high degree of liver damage in obese subjects, with relevant noninvasive liver markers (ALT, AST, GGT) and with the fatty liver index (FLI) [32]. BMI and metabolic syndrome have also been associated with oxidative stress (MDA, MPO, CAT) and pro-inflammatory markers (IL-6, high-sensitivity C-reactive protein) [54,77,78]. Plasma antioxidant enzymatic activities were low and oxidative damage markers were high in patients at high cardiovascular risk [79].
It can therefore be hypothesized that the increased oxidative stress and proinflammatory environment that appears in NAFLD, CVD, T2DM and metabolic syndrome perhaps follows the same initial stimulus (high dietary intake, low physical activity and high fat deposition in adipocytes [80]), with these conditions reinforcing each other through combined local and systemic responses. In this way, obesity, metabolic syndrome, hyperlipidaemia, atherosclerosis and thrombosis have previously been related to oxidative stress and a low-grade inflammation status [81][82][83][84]. In any case, this hypothesis needs further research.
Strengths and Limitations
The main strength of the current study is the association between plasma levels of oxidative stress and proinflammatory biomarkers and IFC, which may also be useful for diagnostic purposes in clinical practice. A limitation of this study is that no liver biopsies were taken. However, IFC assessments were made using MRI, a well-accepted, reliable and noninvasive technique, significantly reducing the risk for the patients. A second limitation is that the sample size was relatively small; however, it was sufficient to demonstrate differences in biomarker levels between IFC groups. A third limitation may be inter-observer variation in anthropometric measurements; to minimize this, personnel received thorough training.
Conclusions
The current study has shown that, as the intrahepatic fat content diagnosed by MRI progresses, markers of oxidative stress, plasma proinflammatory status, and CK-18 significantly increase. Because diagnostic tests such as MRI are not routinely performed in clinical practice to diagnose or monitor fatty liver disease, combining various noninvasive markers would allow for monitoring the evolution of NAFLD.
Dual RNA-seq reveals a type 6 secretion system-dependent blockage of TNF-α signaling and BicA as a Burkholderia pseudomallei virulence factor important during gastrointestinal infection
ABSTRACT Melioidosis is a disease caused by the Gram-negative bacillus Burkholderia pseudomallei (Bpm), commonly found in the soil and water of endemic areas. Naturally acquired human melioidosis infections can result from percutaneous inoculation, inhalation, or ingestion of soil-contaminated food or water. Our prior studies recognized Bpm as an effective enteric pathogen, capable of establishing acute or chronic gastrointestinal infections following oral inoculation. However, the specific mechanisms and virulence factors involved in the pathogenesis of Bpm during intestinal infection are unknown. In our current study, we standardized an in vitro intestinal infection model using primary intestinal epithelial cells (IECs) and demonstrated that Bpm requires a functional T6SS for full virulence. Further, we performed dual RNA-seq analysis on Bpm-infected IECs to evaluate differentially expressed host and bacterial genes in the presence or absence of a T6SS. Our results showed a dysregulation of TNF-α signaling via the NF-κB pathway in the absence of the T6SS, with some of the genes involved in inflammatory processes and cell death also affected. Analysis of the bacterial transcriptome identified virulence factors and regulatory proteins playing a role during infection, with association to the T6SS. By using a Bpm transposon mutant library and isogenic mutants, we showed that deletion of the bicA gene, encoding a putative T3SS/T6SS regulator, ablated intracellular survival and plaque formation by Bpm and impacted survival and virulence in murine models of acute and chronic gastrointestinal infection. Overall, these results highlight the importance of the type 6 secretion system in the gastrointestinal pathogenesis of Bpm.
Introduction
Burkholderia pseudomallei (Bpm) is a Gram-negative bacterium and the causative agent of human melioidosis. 1,2 Melioidosis is gaining attention as a health issue owing to increased global awareness and its recognition as an underreported neglected tropical disease. 3 For example, human melioidosis in northeast Thailand ranks as the third most common cause of death from infectious diseases (after HIV and tuberculosis), 4,5 while in other parts of the world melioidosis is being recognized as endemic in countries within the tropics. 6-12 Worldwide, human and animal infections by this pathogen are significantly underreported, while the number of cases continues to increase. 7,13 A comprehensive epidemiological study estimated that 165,000 melioidosis cases and 89,000 deaths occur every year around the world. 7 Clinical manifestations of the disease are diverse, including acute sepsis, chronic localized pathology, or a latent infection that can reactivate decades later. Disease surveillance is further hindered by a limited understanding of the disease and by nonspecific symptoms that often mimic other persistent infections. 13,14 Community-acquired cases of melioidosis are likely a consequence of the saprophytic bacterium in contaminated soil or water entering the host through cuts or skin abrasions, ingestion, or inhalation. 2,15 Treatment of the disease is difficult and includes parenterally delivered ceftazidime or a carbapenem for 10-14 days, followed by oral trimethoprim-sulfamethoxazole for 12-20 weeks. 2,16 Bpm can bind and infect phagocytic and nonphagocytic cell types upon contact; 17,18 however, most studies of pathogen-host interactions at the epithelial interface have focused on infections occurring at abraded skin or the respiratory mucosal surface, during percutaneous inoculation or respiratory infection. 2,19
For example, attachment to human respiratory epithelial cells was initially thought to be mediated by capsular polysaccharides 20 and type IV pili. 21 However, the function of the capsule in the internalization process has been questioned, and the role of the type IV pili in adherence has only been demonstrated during infection of the respiratory tract. 2,21 In contrast, the exact mechanisms of entry and dissemination through other ports of entry remain relatively unexplored, and an in-depth analysis of the bacterial mechanisms mediating gastrointestinal (GI) infection has not been undertaken. One prior study investigated whether Bpm is able to cause GI infection in mice and reported that Bpm is capable of infecting intestinal cells following oral inoculation and that chronically colonized GI cells might be the reservoir for dissemination to extra-intestinal sites. 22 This has been further validated in non-human primates, in which it was demonstrated that Bpm ingestion serves as a route for disseminated infection. 23 Given the intracellular nature of this pathogen, it is critical to investigate the molecular mechanisms mediating GI invasion and dissemination to target organs; we have previously found that type 1 fimbriae are involved in initial attachment to intestinal cells, while the type 6 secretion system participates in cell-to-cell spread. 24 Although the exact mechanism of Bpm invasion of epithelial cells, particularly intestinal cells, remains unknown, it has been demonstrated that inhibition of actin polymerization reduces entry and invasion. 2,25 Bpm possesses multiple secretion systems that enable the translocation of proteins into host cells, favoring invasion and cell-to-cell spread. Rearrangement of the host actin cytoskeleton is induced by effector proteins injected by the type 3 secretion system (T3SS) and by autotransporter proteins that mediate actin-based motility. 2,19,26,27
The T3SS is crucial for vesicle escape, while the autotransporter BimA interacts with monomeric actin at one pole of the bacterium, forming actin tails that promote Bpm intracellular motility. 25 Once Bpm moves freely around the host cytosol, it approaches neighboring cells, stimulating cell fusion and the formation of multinucleated giant cells (MNGCs). 28 Bpm is believed to manipulate host cells by utilizing the type 6 secretion system (T6SS), forming MNGCs for intercellular spread and to avoid interaction with the immune system. 29 Interestingly, Bpm possesses six T6SS clusters, which share similarities with the T6SSs of other Gram-negative pathogens. 30 However, only the T6SS-1 cluster (also known as T6SS-5) is utilized by Bpm to manipulate host cells for intracellular spread 29 and to stimulate the formation of MNGCs, cell fusion being the pathogenic hallmark of Bpm and of a small group of intracellular bacterial pathogens. 28,29,31 The remaining five clusters, although their function has not been fully elucidated, are speculated to function in interbacterial competition. 29,30 We have previously demonstrated, using a human intestinal epithelial cell line and mouse primary intestinal epithelial cells (IECs), that Bpm adheres, invades, and forms MNGCs, ultimately leading to cell toxicity. 24 Further, we demonstrated that Bpm requires a functional T6SS for full virulence, bacterial dissemination, and lethality in mice infected by the intragastric route. 24 However, several important questions remain unsolved, including the mechanism underlying T6SS activity and its physiological role during infection, particularly during GI cell-to-cell spread. Therefore, in this paper we performed a dual RNA-seq analysis of IECs infected with wild-type (WT) Bpm or a type 6 secretion mutant (Δhcp1) to determine which host and pathogen genes are differentially expressed during infection.
Several up- and downregulated genes were identified when comparing the Bpm WT and mutant strains, and, further validating our analysis, we showed a dysregulation of TNF-α signaling via the NF-κB pathway in the host cells. Further, we demonstrate a regulatory role for the BicA protein, a putative T3SS/T6SS regulator, as a potential virulence factor during GI infection by Bpm that is associated with bacterial invasion of IECs.
Defining intestinal host IECs-Bpm transcriptomics using Dual RNA-Seq
To start understanding the molecular mechanisms that govern Bpm and the host cell during infection of intestinal epithelial cells, we performed a dual RNA-seq analysis to define the impact of expressing the T6SS in the intracellular host environment, using the transcriptome of the pathogen and that of the host cell as readout. Dual RNA-seq has previously been used successfully to profile gene expression simultaneously in an intracellular pathogen and its infected host cell. 32-35 Therefore, we performed a fully validated dual RNA-seq experiment and then focused our analysis on subsets of bacterial and host proteins differentially expressed during intracellular infection with the Bpm wild type K96243 36 and its Δhcp1 mutant. 24 Briefly, primary intestinal epithelial cells (IECs) derived from rat were selected as host cells, and bacterial infections were performed using Bpm WT and Bpm Δhcp1 at an MOI of 10 for 6 and 12 h (Figure 1, step 1). Total RNA was collected from infected cells (step 2) at both timepoints, and pure RNA samples (bacterial or host rRNA-depleted) were used for subsequent analyses (step 3). The genomic libraries were prepared and sequenced using next-generation sequencing (NGS) technology (step 4), and the sequences were mapped to the rat (step 5) or Bpm K96243 (step 6) reference genomes. Finally, we profiled the different expression patterns between Bpm WT and Bpm Δhcp1 with high resolution after infection of IECs. Differences in expression profiles are presented as a heatmap (step 7) and volcano plots (Figures 2 and 4).
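The read-partitioning step of this workflow (steps 5-6, assigning each sequenced read to the rat or Bpm K96243 reference) can be sketched in a few lines. The function and toy alignment records below are purely illustrative and are not the actual mapping pipeline:

```python
# Toy sketch of dual RNA-seq read partitioning: each read is tallied by
# the genome it aligned to ('rat' host, 'bpm' pathogen) or as unmapped.
# Names and data are hypothetical, for illustration only.
from collections import Counter

def partition_reads(alignments):
    """alignments: iterable of (read_id, reference) pairs, where
    reference is 'rat', 'bpm', or None for an unmapped read."""
    return Counter(ref if ref else "unmapped" for _, ref in alignments)

# Toy example: 6 reads, 3 host, 2 pathogen, 1 unmapped
toy = [("r1", "rat"), ("r2", "rat"), ("r3", "bpm"),
       ("r4", "bpm"), ("r5", "rat"), ("r6", None)]
counts = partition_reads(toy)
print(counts["rat"], counts["bpm"], counts["unmapped"])  # 3 2 1
```

In a real dual RNA-seq experiment this split is done by the aligner against a combined or sequential reference; the tally is what feeds the separate host and pathogen expression analyses.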
Differential host gene expression during Bpm infection and in the absence of a T6SS
The IECs were infected with Bpm WT and Bpm Δhcp1 for 6 and 12 h, and the differential global gene expression was determined (Figure 2(a)). Panel A shows the host transcriptomic differences during infection with Bpm in the presence or absence of a functional T6SS at 6 and 12 h post infection. As depicted in the boxes, genes in box 2 are upregulated in the absence of a functional T6SS (Δhcp1) as compared to the WT strain (box 1), suggesting that transcriptional gene repression occurs during expression of the T6SS. Conversely, genes in box 4 are downregulated in comparison with those in box 3, suggesting that transcriptional activation of those genes occurs during expression of the T6SS. This further suggests that Bpm infection modulates several signaling events upon infection of IECs as early as 6 h post infection, and further implicates the T6SS in driving transcriptional activation or repression of many signaling events.
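The up/down classification between boxes reduces to a log2 fold-change comparison between the Δhcp1 and WT conditions. The sketch below uses a hypothetical threshold and invented toy counts, not our data:

```python
# Illustrative up/down-regulation call from per-gene counts in two
# conditions (mutant vs. WT). Threshold and counts are hypothetical.
import math

def log2_fold_change(mut, wt, pseudocount=1.0):
    # pseudocount avoids division by zero for unexpressed genes
    return math.log2((mut + pseudocount) / (wt + pseudocount))

def classify(mut, wt, threshold=1.0):
    lfc = log2_fold_change(mut, wt)
    if lfc >= threshold:
        return "up in Δhcp1"    # e.g. genes in box 2 vs. box 1
    if lfc <= -threshold:
        return "down in Δhcp1"  # e.g. genes in box 4 vs. box 3
    return "unchanged"

print(classify(mut=400, wt=50))   # up in Δhcp1
print(classify(mut=20, wt=300))   # down in Δhcp1
```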
To elucidate the potential pathways implicated in Bpm infection of IECs, we performed gene ontology (GO) and ingenuity pathway analysis (IPA) on the differentially expressed genes after 6 and 12 h of infection. To further establish the association between expression of the T6SS and transcriptional changes in the host cell, the 400 most differentially expressed genes at 12 h post infection in IECs were further classified using GO (Figure 2(b)). Hallmark gene association by IPA suggests that the most differentially expressed genes are associated with TNF-α signaling via NF-κB, while GO analysis links the differentially expressed genes to cell proliferation, cell death, and other metabolic processes (Figure 2(b)). Further, we found that dysregulation of these genes occurs in a T6SS-dependent manner (Figure 2(c)). Among the most significant differentially expressed genes, we noticed a downregulation of genes associated with cell death during infection, including FosL1, Mt-Co3, and Mt-Co1, among others (Figure 2(c)). On the other hand, we observed an upregulation of genes associated with inflammation, such as STAT2 and GBP4, or with cytoskeleton integrity, such as Stmn1 (Figure 2(c)). Of note, the core responses of Bpm interacting with IECs were dominated by a proinflammatory response, as reported for other cultured epithelial cells. 37,38 The volcano plot (Figure 2(c)) shows the specific differential expression of genes (p < 0.05) during Bpm IEC infection in the presence (Bpm WT) or absence (Δhcp1) of a functional T6SS. We decided to further analyze these genes because they are found in melioidosis patients and are recommended as useful biomarkers of disease progression, 39 but they also implicate the T6SS-dependent induction of an inflammatory response, which could be a critical defense mechanism of IECs against Bpm.
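As an illustration of how genes like those in the volcano plot are selected, the sketch below filters a toy gene list by a p < 0.05 cutoff and a hypothetical log2 fold-change threshold, and computes the -log10 p values that form the y axis of a volcano plot. Thresholds and p-values are invented; only the gene names are taken from the text:

```python
# Toy volcano-plot significance filter. The p-value and fold-change
# cutoffs here are illustrative conventions, not the authors' settings.
import math

def volcano_points(genes, p_cut=0.05, lfc_cut=1.0):
    """genes: list of (name, log2fc, pvalue) tuples. Returns the
    significant points as (name, log2fc, -log10 p), i.e. the x and y
    coordinates plotted on a volcano plot."""
    out = []
    for name, lfc, p in genes:
        if p < p_cut and abs(lfc) >= lfc_cut:
            out.append((name, lfc, -math.log10(p)))
    return out

toy = [("Stmn1", 2.1, 1e-6), ("FosL1", -1.8, 1e-4), ("GeneX", 0.2, 0.5)]
sig = volcano_points(toy)
print([g[0] for g in sig])  # ['Stmn1', 'FosL1']
```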
Establishing differential host cell responses to Bpm WT and Δhcp1 strains
Based on our global differential gene expression analysis, we wanted to further understand specific genes associated with cell signaling pathways affected by the T6SS of Bpm. Our volcano plot analysis further confirmed the involvement of the NF-κB signaling pathway (Figure 2(c)); therefore, we evaluated whether Bpm blocks TNF-α-induced NF-κB activation in a T6SS-dependent manner. We first evaluated the biological relevance of NF-κB signaling during Bpm infection by imaging NF-κB nuclear translocation in the presence or absence of the T6SS (Figure 3(a)). Our results indicated that Bpm sequesters NF-κB in the cytoplasm of IECs and that this sequestration depends on a functional T6SS, without affecting the intracellular survival of Bpm.
We then measured TNF-α-induced phosphorylation of NF-κB by Western blot and observed that Bpm inhibits NF-κB activation despite TNF-α stimulation post infection (Figure 3(b)). We further observed that this inhibition is time dependent and requires a functional T6SS (Figure 3(b)). Further, we quantified phosphorylation of p65 by densitometry, and our results confirmed that phosphorylation of p65 is partially rescued by 6 h post infection with Bpm Δhcp1 in the presence or absence of TNF-α (Figure 3(c)). These results indicate NF-κB blockade by the T6SS of Bpm during late IEC infection. It is well recognized that various virulence factors activated by bacteria during epithelial infection are targets of NF-κB, and often recruit inflammatory cells to facilitate dissemination. 40 In melioidosis infections, it is known that TNF-α and some other interleukins are involved in the early inflammatory response, and NF-κB is the key transcription factor that modulates their expression. 2 These results suggest a role of the T6SS in controlling NF-κB blockade that may depend on additional factors related to early infection events, such as the T3SS. Together, our results, along with other studies, 38 confirm that Bpm prevents the initiation of NF-κB signaling, thus reducing host inflammatory responses by utilizing T6SS proteins.
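The densitometry quantification above is a per-lane ratio of phospho-p65 signal to the actin loading control. A minimal sketch with invented band intensities (not our measurements) is:

```python
# Toy densitometry normalization: phospho-p65 band intensity divided by
# the actin loading-control intensity in the same lane (arbitrary units).
# All numeric values are hypothetical, for illustration only.
def normalized_p65(phospho_p65, actin):
    if actin <= 0:
        raise ValueError("actin signal must be positive")
    return phospho_p65 / actin

# Toy lanes: TNF-α-only control vs. Bpm WT-infected + TNF-α
control = normalized_p65(phospho_p65=900.0, actin=1000.0)
infected = normalized_p65(phospho_p65=180.0, actin=1000.0)
print(round(control, 2), round(infected, 2))  # 0.9 0.18
print(infected < control)  # True: consistent with T6SS-dependent blockade
```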
Defining Bpm transcriptomics during IECs infection
We used principal component analysis (PCA) plots of the dual RNA-Seq data comparing Bpm WT and Bpm Δhcp1 to show that the different conditions cluster together (data not shown). Then, a volcano plot of the 200 most differentially expressed bacterial genes associated with Bpm IEC infection was generated (genes with p-value <0.05; Figure 4(a)). We observed dysregulation in the expression of genes associated with two important virulence mechanisms, including the T3SS-associated genes bopC (BPSS1516) and bicA (Figure 4(a)). [41][42][43] To determine whether these gene products participate in Bpm intracellular survival or cell-to-cell spread, we used a Bpm transposon mutant library (comprising a total of 8,729 unique transposon mutants in the Bpm 1026b strain) and evaluated several of the mutants for their ability to form plaques (Figure 4(b)). We included transposon mutants associated with the T3SS (bpspV, bopC, bopA, bicA), the T6SS (tagL-2), and other genes (bhuR, hfq, etc.) and determined whether they affected plaque formation. We confirmed that a T6SS (Tn::tagL-2) mutant was unable to form plaques, but also found that some T3SS mutants reduced plaque formation (Figure 4(b)). Of note, we identified one protein, encoded by the bicA gene, that impacted cell-to-cell spread. The BicA protein forms a complex with BsaN to regulate the expression of T3SS effectors, but it has also been proposed as a regulator affecting the expression of the T6SS. 41 To corroborate our results, we wanted to further investigate the function of BicA as a virulence factor during IEC infection.
Defining the function of BicA as a virulence factor during IEC infection
To validate our global gene expression analysis, we focused our attention on one protein proposed to act as a chaperone/regulator interacting with BsaN, which is known to participate in the regulation of T3SS expression but has also been associated with the expression of T6SS genes.
41 Therefore, we constructed a Bpm ΔbicA mutant and a ΔbicA::bicA complemented strain and confirmed the plaque phenotype observed with the bicA::Tn (Tn::1629) mutant at two different MOIs (Figure 4(c)). Our results showed that the ΔbicA mutant was unable to form plaques, a phenotype reversible by complementation. We then also evaluated whether the ΔbicA mutant had an impact on intracellular replication. The IECs were infected with the Bpm strains for 1 h, followed by kanamycin treatment to kill extracellular bacteria, and then incubated for different time periods. Our results at 3, 6, 12 and 24 h post-infection demonstrated that the ΔbicA mutant displayed a survival defect in the intracellular compartment and that a functional bicA gene restores the ability of Bpm to survive intracellularly (Figure 4(d)). Finally, we used immunofluorescence analysis to visualize the intracellular replication and dissemination of the different Bpm strains. We showed that the Bpm WT and complemented strains invade, disseminate, and spread from cell to cell, inducing the formation of MNGCs (Figure 4(e)). In contrast, the ΔbicA mutant was unable to disseminate, spread from cell to cell, or mediate MNGC formation (Figure 4(e)). Overall, these results support the dual RNA-Seq analysis by showing a specific role for BicA in the formation of plaques on IECs, suggesting that BicA may play a role in controlling T6SS expression, which would explain the inability of the ΔbicA mutant to disseminate within IECs.
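The kanamycin-protection readout used for the intracellular survival experiments reduces to a percentage of recovered CFU relative to the inoculum. A minimal sketch with invented CFU values (not our measurements) is:

```python
# Toy kanamycin-protection calculation: intracellular CFU recovered at a
# given timepoint expressed as a percentage of the input inoculum.
# All CFU values below are hypothetical, for illustration only.
def percent_intracellular_replication(cfu_timepoint, cfu_input):
    """Percent intracellular replication relative to the input CFU."""
    return 100.0 * cfu_timepoint / cfu_input

# Toy counts: a strain that expands vs. one that declines intracellularly
print(percent_intracellular_replication(5e5, 1e5))  # 500.0
print(percent_intracellular_replication(2e4, 1e5))  # 20.0
```

Values above 100% indicate net intracellular replication; values below 100% indicate a survival defect, as observed for the ΔbicA mutant.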
Bpm ΔbicA mutant can colonize the GI tract but is unable to disseminate to target organs in acute and chronic mouse models of infection
We have previously developed and optimized murine models of acute and chronic melioidosis infection 24 and used them to assess the virulence of the Bpm WT, ΔbicA and ΔbicA::bicA strains when delivered by the intragastric route. To validate the significance of BicA, as well as the importance of the pathways identified by our RNA-seq analysis of rat cells, we performed in vivo mouse infection experiments. The cells used for the RNA sequencing experiment were commercially acquired as mouse primary IECs; however, the sequencing reads aligned more closely to the rat genome. For the acute GI murine model, we infected animals with 2.5 LD 50 of the Bpm WT, ΔbicA or ΔbicA::bicA strains and evaluated survival for 21 days (Figure 5(a)). Three days after infection, animals infected with Bpm WT showed 75% survival and consistent weight loss (Figure 5(b)) that was recovered as the infection progressed. In contrast, 100% survival was observed with the ΔbicA or ΔbicA::bicA strains (Figure 5(b)). To assess colonization of the GI tract and dissemination by the different Bpm strains, we collected segments of the GI tract (stomach, small intestine, colon, cecum) and the spleen to quantify bacterial loads (Figure 5(c)). Animals infected with Bpm WT had recoverable bacteria from the stomach, colon, cecum, and spleen, with a limit of detection of 1 CFU. We were unable to recover the ΔbicA mutant from the stomach or small intestine, and only a few bacteria were recovered from the spleen of infected animals (Figure 5(c)). Interestingly, the ΔbicA mutant, as well as the Bpm WT strain, colonized the colon and cecum. Complementation of the ΔbicA mutant restored colonization to wild-type levels.
We further assessed whether the Bpm strains disseminate to distal organs during a longer infection course using a chronic melioidosis GI infection model. Animals were inoculated with a sublethal dose (~1 LD 50 ) of the Bpm WT, ΔbicA or ΔbicA::bicA strains, and survival was assessed for 35 days (Figure 5(d)). Animals inoculated with the Bpm WT or complemented ΔbicA::bicA strains lost some weight early during infection, which was recovered with time (Figure 5(d)). Mice inoculated with the Bpm WT and ΔbicA::bicA strains exhibited 62.5% or 87.5% survival, respectively, at 35 days post-infection (dpi), while 100% of animals inoculated with the Bpm ΔbicA mutant survived to the end of the study (Figure 5(e)). Bacterial load was evaluated by collecting individual sections of the GI tract as well as the liver and spleen, which were processed for CFU enumeration (Figure 5(f)). Animals inoculated with the Bpm WT or ΔbicA::bicA complemented strain exhibited persistent colonization of the stomach, colon and cecum, while the ΔbicA::bicA strain was also able to persist in the small intestine (Figure 5(f)). We were unable to recover the ΔbicA mutant strain from any of the intestinal locations tested.
Bpm WT was able to disseminate and replicate in the liver, while the ΔbicA::bicA strain was also recovered from the liver and spleen (Figure 5(f)). As with the GI tract segments evaluated, the ΔbicA mutant was recovered from neither the liver nor the spleen of any infected group (Figure 5(f)). Overall, our data demonstrate the ability of Bpm to colonize the GI tract and disseminate to target organs, and that the bacterium requires a functional bicA gene to exert its full virulence phenotype.
Discussion
Although some progress has been made in understanding the pathogenesis of this remarkable intracellular bacterium, much less progress has been made in deciphering the host-pathogen interactions that occur during Bpm infection, and little information is available about the transcriptional changes occurring during the intracellular lifestyle of Bpm in IECs. We have recently shown the importance of gastrointestinal melioidosis as an understudied route of infection and identified some Bpm virulence factors participating in the GI infection and pathogenic process of this intracellular pathogen. 24 Now, using dual RNA-seq, we were able to identify host and bacterial genes that are differentially expressed during Bpm infection and dissemination within gastrointestinal epithelial cells. We identified a handful of host genes that primarily encode proteins involved in the inflammatory response to infection, as well as gene products that mediate Bpm regulatory control of the secretion systems involved in bacterial entry into the host cell, cell-to-cell spread, and, eventually, MNGC formation. Although the commercially available cells purchased for the RNA-seq experiment were acquired as mouse primary IECs, our sequencing results identified closer sequence similarities to the rat genome. However, the cells used in this analysis also demonstrated a consistent phenotype in vitro, permitting rapid intracellular bacterial multiplication and induction of MNGC formation. Therefore, these results allowed us to identify BicA as an important virulence factor in the pathogenesis of Bpm.
As presented, the GI tract remains an underreported and understudied point of entry for Bpm, particularly in cases of unexplained origin of the disease. 7,22 However, recent evidence in nonhuman primates demonstrated that ingestion indeed serves as a route for disseminated infection. 23 Our laboratory has made significant progress in the development of in vitro and in vivo GI pathogenesis murine models of Bpm infection and has identified the type 6 secretion system (T6SS) as one of the virulence factors used during GI infection. 24 We now offer evidence indicating that the T6SS is linked to the host inflammatory response, which is dampened during Bpm intestinal infection. Although it is unclear whether the immune suppressive effect holds across different Bpm strains, our observations suggest that this may be the case, given that these virulence mechanisms are found in different Bpm strains.
A previous study demonstrated that NF-κB activation during Bpm infection occurs in a Toll-like receptor (TLR)- and MyD88-independent manner that requires the T3SS gene cluster 3 (T3SS3) effector proteins. 37 However, that study was not able to demonstrate that the T3SS directly activates NF-κB; rather, the authors found that this system facilitates bacterial escape into the cytosol, where the host is able to sense the presence of the pathogen, leading to NF-κB activation. Further, another study using Bpm-infected primary human macrophages implicated T3SS-3-dependent inflammasome activation and IFN-γ-induced immune mechanisms as defenses against the pathogen; although reduced intracellular Bpm loads were observed, the inflammatory response was not completely abolished. 44 Now, we present evidence that a functional T6SS is required to block NF-κB activation during Bpm infection of IECs and that this system seems to work in conjunction with the T3SS to modulate the inflammatory response observed during infection of host cells. One intriguing future direction would be to understand the role BicA plays in T6SS function or in a regulatory mechanism controlling both the T3SS and T6SS. Furthermore, elucidating this mechanism would provide a deeper understanding of how the pathogen disseminates from cell to cell before activation of an effective inflammatory response, allowing it to reach other target organs.
As demonstrated here and elsewhere, Bpm survives and replicates inside host phagocytic and non-phagocytic cells, 2,24 and the mechanisms by which bacterial virulence factors, particularly the T6SS, interfere with host cell signaling are largely unknown. A significant gap in knowledge exists regarding the bacterial regulatory mechanisms that participate in Bpm intracellular survival, cell-to-cell spread, and MNGC formation in epithelial cells. The completion of the dual RNA-seq analysis provided us with the opportunity to understand the molecular consequences of expressing the T6SS in the intracellular host environment. Further, we were able to identify a mutant Bpm strain in the bicA gene that was defective in plaque formation. The BicA protein has been proposed to form a complex with the regulator BsaN, controlling the expression of genes encoding the T3SS-3, but both have also been proposed as regulators affecting the expression of the T6SS. 41 In this study, we wanted to understand the function of differentially regulated genes during Bpm infection in the context of controlling the T6SS, its role in cell-to-cell spread, and MNGC formation; therefore, we further analyzed BicA and its role in T6SS-mediated intracellular virulence.
As with any other large transcriptomic analysis that tries to elucidate host-bacterial interactions, our study has some limitations. The first difficulty in interpreting our data is that we cannot distinguish between differential host responses due to Bpm disseminating within the cell cytoplasm and those due to bacteria spreading from cell to cell. However, inclusion of the Δhcp1 mutant, and the subsequent construction of a ΔbicA mutant, allowed us to start dissecting the host responses elicited by bacteria that get trapped within the intracellular environment versus the WT strain, which quickly hampers the inflammatory response to escape and disseminate to other organs. A second limitation in the interpretation of our data is that the Bpm WT replicates more effectively in the intracellular space than the Δhcp1 mutant, producing a higher number of reads in the RNA-Seq analysis. As a result of this difference, we spent time normalizing the different samples and validating our findings with other in vitro and in vivo experiments to confirm the differences in the virulence phenotypes. Finally, our Bpm infection of IECs used a relatively high MOI of 10, which is significantly higher than a natural infection, but this level of infection ensures that the differences we observe between the Bpm WT and Δhcp1 mutant are captured by the RNA-seq experiment. As demonstrated in our manuscript, the differential expression of host and bacterial genes was further validated with more in-depth experiments that are helping us build a better picture of the interactions occurring during intracellular survival of Bpm in IECs.
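The normalization problem described above, where the WT sample yields more reads than the Δhcp1 sample simply because the WT replicates better, is the kind of library-size imbalance that median-of-ratios size factors (the approach popularized by DESeq2) are designed to absorb. A minimal Python sketch of the idea, with toy counts rather than our data, is:

```python
# Toy median-of-ratios size-factor estimation (DESeq2-style idea):
# each sample's factor is the median ratio of its counts to the
# per-gene geometric mean, over genes expressed in all samples.
# Counts below are invented, for illustration only.
import math

def size_factors(counts):
    """counts: dict sample -> list of per-gene counts (same gene order)."""
    samples = list(counts)
    n_genes = len(next(iter(counts.values())))
    geo = []  # per-gene geometric mean across samples
    for g in range(n_genes):
        vals = [counts[s][g] for s in samples]
        if all(v > 0 for v in vals):
            geo.append(math.exp(sum(math.log(v) for v in vals) / len(vals)))
        else:
            geo.append(None)  # skip genes with a zero in any sample
    factors = {}
    for s in samples:
        ratios = sorted(counts[s][g] / geo[g] for g in range(n_genes) if geo[g])
        mid = len(ratios) // 2
        med = ratios[mid] if len(ratios) % 2 else (ratios[mid - 1] + ratios[mid]) / 2
        factors[s] = med
    return factors

# Toy: the "WT" sample is sequenced ~2x deeper than the "dhcp1" sample
toy = {"WT": [200, 100, 400], "dhcp1": [100, 50, 200]}
f = size_factors(toy)
print(round(f["WT"] / f["dhcp1"], 2))  # 2.0
```

Dividing each sample's raw counts by its factor puts the WT and mutant libraries on a comparable scale before differential expression testing.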
The advantage of targeting BicA, the putative regulator implicated in control of the T6SS, was that it provided us with a better understanding of the genes identified in the dual RNA-Seq analysis while also letting us establish possible mechanisms by which Bpm controls T3SS and T6SS expression under intracellular conditions in intestinal epithelial cells. To start defining the role of BicA in Bpm virulence, we constructed a ΔbicA mutant in the Bpm K96243 background, and the initial characterization indicated that this putative regulator represses T6SS genes during host cell invasion. The BicA-mediated regulation of the T6SS was rescued by complementation of the ΔbicA mutant to mimic wild-type levels. Future studies will investigate the details of the regulatory cascade controlling T6SS expression and how this secretion system interacts with host immune mechanisms, for instance by screening different Bpm strains and applying genome- or transcriptome-wide association studies on the bacterial side, or through genetic manipulation or the application of immunomodulating agents on the host side. Our results on Bpm pathogenesis in vivo demonstrate the ability of this pathogen to cause disease in animals infected via the oral route. Furthermore, the role of BicA in controlling the T3SS during early stages of infection is recapitulated in our acute infection model. Although we did not see the same effect in the chronic infection model, this suggests that other virulence factors, or a well-orchestrated regulatory mechanism controlling the secretion systems, take over during the different stages of infection. BicA has been demonstrated to activate the T6SS-associated proteins VirA and VirG, which are both important for the polymerization of actin and thus for bacterial intercellular motility.
45 In summary, we have used dual RNA-seq to gain insights into the transcriptome structure and the mechanisms of gene regulation used by the complex intracellular pathogen Bpm during GI infection. We provide evidence for the strict regulatory control of bacterial genes whose products are associated with hampering the host inflammatory response. We confirmed a relationship between inhibition of TNF-α-induced NF-κB activation and T6SS expression. These findings open the door to further studies on the regulation of gene expression in Bpm and its mechanisms of pathogenesis. In general, our study is one of very few characterizing the host-pathogen transcriptome of intracellular bacteria, and it can inform and provide insights into the design of novel therapeutics and vaccines to prevent melioidosis.
Bacterial strain and growth conditions
All bacterial strains used in this study are listed in Table 1. B. pseudomallei wild-type (1026b or K96243), mutant, and complemented strains were routinely grown in LB broth or on LB agar plates at 37°C. E. coli NEB-5α and S17-1 λpir were grown in LB broth or on LB agar plates at 37°C and, when appropriate, kanamycin (50 or 100 μg/ml) was added for plasmid selection. For counterselection, cointegrants were grown in YT medium supplemented with 15% sucrose.
Transposon library
We used transposon mutants constructed in the prototype Bpm strain 1026b. The Bpm transposon library was recently acquired from BEI Resources; this library was originally constructed by Dr. Herbert Schweizer in collaboration with Dr. Brad Borlee and Dr. Colin Manoil. 50 This collection of mutants was generated using comprehensive two-allele sequence-defined transposon mutagenesis of Bpm strain 1026b, and the library has been used in multiple studies. 51,52

Dual RNA-Seq testing and analysis of the data

Bacterial infection, RNA extraction, and DNase treatment. For RNA-seq analysis, primary IECs were infected as described above at an MOI of 10 for 6 and 12 h with either Bpm WT or Bpm Δhcp1. After infection, cells were detached and lysed, and total RNA was extracted from whole-cell lysates using the Direct-zol RNA kit (Zymo Research, USA) following the manufacturer's instructions. To remove contaminating genomic DNA, samples were treated with 0.25 U of DNase I (Fermentas) per 1 μg of RNA for 45 min at 37°C. Where applicable, RNA quality was checked on an Agilent 2100 Bioanalyzer (Agilent Technologies). rRNA depletion. Host (eukaryotic) or bacterial rRNA was removed using the RiboMinus™ Human/Mouse Transcriptome Isolation Kit or the RiboMinus™ Bacterial Transcriptome Isolation Kit, respectively. Following the manufacturer's instructions, approximately 500 ng of total, DNase-I-treated RNA from infection samples was used as input to the ribosomal transcript removal procedure. rRNA-depleted RNA was precipitated in ethanol for 3 h at −20°C.
cDNA library generation and dual RNA Seq analysis
Bacterial and eukaryotic host rRNA was depleted using the RiboMinus™ Human/Mouse and Bacterial Transcriptome Isolation kits as described above. 54 Reads per gene were quantified with FeatureCounts, v2.0.1, from the Subread software suite, 55 using the version 132 annotation file obtained from the Burkholderia Genome Database. 56 Differential gene expression for the host and the pathogen was estimated using the DESeq2 software package, version 1.28.1, following the authors' vignette. 57
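The reads-per-gene step performed by FeatureCounts amounts to assigning each aligned read to the annotated gene interval it overlaps. The toy sketch below illustrates that idea only; the real tool additionally handles strands, paired ends, and multi-mapping reads, and the coordinates here are invented:

```python
# Minimal FeatureCounts-style overlap counter (toy version): a read
# increments a gene's count if its interval overlaps the gene's interval.
# Gene coordinates and reads below are hypothetical.
def count_reads_per_gene(genes, reads):
    """genes: dict name -> (start, end); reads: list of (start, end).
    Intervals are half-open; overlap test is rs < ge and re > gs."""
    counts = {g: 0 for g in genes}
    for rs, re in reads:
        for name, (gs, ge) in genes.items():
            if rs < ge and re > gs:
                counts[name] += 1
    return counts

genes = {"bicA": (100, 200), "bopC": (300, 400)}
reads = [(150, 180), (190, 250), (390, 450), (500, 520)]
print(count_reads_per_gene(genes, reads))  # {'bicA': 2, 'bopC': 1}
```

The resulting count matrix (genes × samples) is the input to the DESeq2 differential expression step.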
Construction of bicA mutant and strain complementation
All cloning was performed using the Gibson assembly system, and the primers used in this study are listed in Table 1. For the mutant construction, the upstream (530 bp) and downstream (530 bp) regions of the Bpm K96243 bicA gene (BPSS1533) were amplified from genomic DNA and purified before assembly into a linearized PCR product of the suicide vector pMo130, following the manufacturer's recommendations. Assembled products were transformed into E. coli NEB-5α (NEB, Massachusetts) for clonal screening, and the plasmids were confirmed by Sanger sequencing at GENEWIZ. The plasmid containing the fusion of the upstream and downstream regions of BPSS1533 was ultimately transformed into the E. coli S17-1 λpir donor strain. The mobilizable vector was introduced into Bpm K96243 by biparental mating. An overnight culture (500 μl) of the donor E. coli S17-1 λpir carrying the upstream-downstream/pMo130 plasmid and a 12-h culture of the recipient Bpm K96243 (500 μl) were centrifuged separately, combined, and the pellet resuspended in 100 μl of PBS. The conjugation mixture was spotted on LB agar and incubated at 37°C for 8 h. Following conjugation, the spots were scraped and resuspended in 1 ml of PBS before plating on LB agar supplemented with 500 μg/ml kanamycin and 30 μg/ml polymyxin B for selection of Bpm transformants. Plates were incubated for 48 h at 37°C, and isolated colonies were exposed to 0.45 M pyrocatechol for merodiploid screening. Selected yellow colonies were sub-cultured in LB containing 100 μg/ml kanamycin and incubated at 37°C with shaking for 12 h. Transformant colonies were grown in YT broth supplemented with 15% sucrose for 4 h at 37°C with shaking, followed by serial dilution and plating onto YT agar with 15% sucrose for selection of resolved Bpm co-integrants. Following 48 h of incubation at 37°C, the resulting white colonies (after exposure to pyrocatechol as mentioned above) were analyzed by PCR and sequenced to confirm the gene deletion.
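In silico, the expected unmarked deletion allele produced by this strategy is simply the two homology arms joined without the intervening gene. The toy sketch below illustrates that check with an invented sequence and coordinates, not the real BPSS1533 locus:

```python
# Toy in-silico construction of a deletion allele: join the upstream and
# downstream homology arms, omitting the gene itself. Coordinates are
# 0-based half-open; sequence and positions below are hypothetical.
def deletion_allele(genome, up_start, gene_start, gene_end, down_end):
    upstream = genome[up_start:gene_start]
    downstream = genome[gene_end:down_end]
    return upstream + downstream

# Toy 'genome': 5-bp arms flanking a 6-bp 'gene'
toy = "AAAAA" + "GGGGGG" + "TTTTT"
allele = deletion_allele(toy, 0, 5, 11, 16)
print(allele)              # AAAAATTTTT
print("GGGGGG" in allele)  # False: gene removed
```

The same joined sequence is what PCR and Sanger sequencing across the deletion junction are expected to recover from a resolved co-integrant.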
The in cis complementation of the Bpm bicA mutant was performed by inserting the bicA gene back into the Bpm ΔbicA strain in a manner similar to the bicA mutant construction. The purified PCR amplicon of upstream-BPSS1533-downstream was assembled into the linearized pMo130 vector PCR product using Gibson assembly, transformed into E. coli NEB-5α, and then sub-cloned into the E. coli S17-1 λpir donor strain. The upstream-BPSS1533-downstream/pMo130 plasmid was introduced into the Bpm ΔbicA strain by biparental mating as described above. Clonal selection of the complemented Bpm bicA mutant was confirmed by PCR and sequencing. Furthermore, the phenotypes of both the bicA mutant and the complemented bicA strain were confirmed by plaque formation assay.
In vitro primary epithelial cells infection, plaque formation assay and immunofluorescence microscopic analysis
C57BL/6 mouse primary intestinal epithelial cells (Cell Biologics, Chicago, IL, product No. C57-6051), identified by the RNA-seq results as rat-derived cells, were grown in complete primary cell culture medium (Cell Biologics, product No. M6621) following the manufacturer's protocol. Cells were seeded at 5 × 10 5 cells/well in gelatin-coated (Cell Biologics, product No. 6950) 24-well culture plates for the survival assay, at 1 × 10 6 cells/well in 12-well culture plates for plaque formation, and at 2 × 10 5 cells/well onto pre-gelatin-coated cover slips in 12-well culture plates, overnight at 37°C in a 5% CO 2 incubator. Cells were infected with Bpm K96243 wild type, ΔbicA, or the complemented ΔbicA strain at a multiplicity of infection (MOI) of 10 for 1 h; infected cells were then washed twice before incubation with media supplemented with 1 mg/ml kanamycin for an additional 1 h to kill extracellular bacteria. After 1 h, cells were washed, media without antibiotic was replaced, and cells were incubated for 3, 6, 12 and 24 h. For determination of intracellular bacterial survival, cells were washed twice and lysed with 0.1% Triton X-100 in PBS prior to serial dilution and plating onto LB agar. Plates were incubated at 37°C for 48 h, CFUs were quantified, and % intracellular replication was calculated by comparison to the input of each bacterial strain. For the plaque formation assay, cells at 24 h of infection were fixed with 4% paraformaldehyde (PFA) for 30 min and stained with 500 μl Giemsa stain (Gibco) for 30 min. After staining, cells were washed with PBS followed by water before imaging. For immunofluorescence microscopic analysis, after infection, media was removed and replaced with 4% PFA for 30 min. Cells were washed twice with PBS, and coverslips were transferred to a new 12-well plate. Fixed cells were permeabilized with 0.25% Triton X-100 in PBS for 7 min and washed twice with PBS.
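The seeding and CFU arithmetic underlying this protocol can be sketched in a few lines. The cell density and MOI match the protocol (5 × 10 5 cells/well at MOI 10), while the serial-dilution example uses invented colony counts:

```python
# Toy inoculum and CFU back-calculation for a gentamicin/kanamycin-
# protection-style assay. Dilution values below are hypothetical.
def inoculum_cfu(cells_per_well, moi):
    """Bacteria to add per well = seeded cells x MOI."""
    return cells_per_well * moi

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Back-calculate CFU/ml from colonies counted on a dilution plate."""
    return colonies * dilution_factor / plated_volume_ml

# Protocol values: 5e5 cells/well at MOI 10 -> 5e6 CFU per well
print(inoculum_cfu(5e5, 10))     # 5000000.0
# Toy plate: 50 colonies at a 10^4 dilution, 0.1 ml plated
print(cfu_per_ml(50, 1e4, 0.1))  # 5000000.0
```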
Bpm bacteria were stained with sera collected from mice previously immunized with the Bpm PBK001 live attenuated vaccine (48) at a 1:1,000 dilution at RT for 1 h, followed by a 1:5,000 dilution of goat anti-mouse IgG, IgM, IgA (H + L) secondary antibody conjugated to Alexa Fluor 488 (Invitrogen) for an additional 1 h. After washing, actin and DNA of cells were stained with 1:10,000 rhodamine phalloidin and DAPI, respectively, for 1 h. For NF-κB visualization, treated cells were fixed, permeabilized with 0.25% Triton X-100, and blocked with 1% bovine serum albumin (BSA); the NF-κB subunit p65 was detected using an anti-p65 antibody (Cell Signaling Technology), followed by an Alexa Fluor 488 goat anti-rabbit IgG antibody (Thermo Fisher Scientific). Coverslips were mounted onto microscope slides using ProLong Diamond antifade mountant (Invitrogen). Slides were visualized with an Olympus BX51 upright fluorescence microscope and further analyzed using ImageJ software (National Institutes of Health) (58).
Ethics statement
All manipulations of Bpm were conducted in a CDC/USDA-approved and registered biosafety level 3 (BSL3) facility at the University of Texas Medical Branch (UTMB) in accordance with approved BSL3 standard operating practices. The animal studies were carried out humanely in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol (IACUC #0503014D) was approved by the Animal Care and Use Committee of UTMB.
Animal studies
Female 6-to-8-week-old C57BL/6 mice (n = 48) were purchased from Jackson Laboratory (Bar Harbor, ME, USA) and maintained in an Animal Biosafety Level 3 (ABSL3) facility. Animals were housed in microisolator cages under pathogen-free conditions with food and water available ad libitum and maintained on a 12 h light cycle.
Acute (high-dose) and chronic (low-dose) infection models
Food was restricted 12 h before infection but was available throughout the remainder of the study. For the acute model of infection (n = 8/group), animals were infected with 2.5 × 10^6 CFU (1 LD50) of the Bpm WT (K96243), Bpm ΔbicA or Bpm ΔbicA::bicA strain using a plastic oral gavage needle. For the chronic infection model, mice (n = 8/group) were infected with 7.5 × 10^6 CFU (3 LD50). Infected mice were monitored for survival and weight for 21 and 35 days post infection. At the study end point, mice were humanely euthanized, and the gastrointestinal tract (stomach, small intestine, colon, and cecum), liver and spleen of surviving animals were collected for bacterial enumeration. All organs were homogenized in 1 mL of PBS, serially diluted, and plated on either LB agar (liver and spleen) or Ashdown selective medium (GI tract) to quantify bacterial loads.
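Organ loads are enumerated the same way as the in vitro counts, scaling the counted plate back up to the whole homogenate; a short illustrative sketch (hypothetical numbers, assuming the 1 mL homogenate volume stated above):

```python
def cfu_per_organ(colonies, dilution_factor, plated_volume_ml=0.05,
                  homogenate_volume_ml=1.0):
    """Total CFU in an organ homogenized in `homogenate_volume_ml` of PBS,
    back-calculated from colonies counted on one dilution plate."""
    per_ml = colonies * dilution_factor / plated_volume_ml
    return per_ml * homogenate_volume_ml

# Hypothetical example: 30 colonies on the 10^2 dilution plate (50 ul plated).
total = cfu_per_organ(30, 1e2)  # 60000.0 CFU per organ
```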
Statistical analysis
All statistical analysis was done using GraphPad Prism software (v8.0). P-values of <0.05 were considered statistically significant. Quantitative data are expressed as the mean ± standard error. All data were analyzed for normality before running the corresponding test. Intracellular replication and percent weight change were analyzed by one-way ANOVA followed by Sidak's multiple comparison test, or by the Kruskal-Wallis test.
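For readers without Prism, the one-way ANOVA F statistic used here is easy to reproduce; the sketch below implements it from scratch in Python on toy data (the post hoc Sidak and Kruskal-Wallis tests are left to a statistics library):

```python
def one_way_anova_f(*groups):
    """One-way ANOVA: F = (between-group MS) / (within-group MS).

    Returns (F, (df_between, df_within))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), (df_b, df_w)

# Toy data (not from the study): three groups of % weight change.
f_stat, dfs = one_way_anova_f([-1.0, -2.0, -1.5], [-4.0, -5.0, -4.5], [0.5, 0.0, 1.0])
```

The F statistic would then be compared against the F distribution with the returned degrees of freedom to obtain a p-value.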
Identification of the Preprotein Binding Domain of SecA*
SecA, the preprotein translocase ATPase, has a helicase DEAD motor. To catalyze protein translocation, SecA possesses two additional flexible domains absent from other helicases. Here we demonstrate that one of these “specificity domains” is a preprotein binding domain (PBD). PBD is essential for viability and protein translocation. PBD mutations do not abrogate the basal enzymatic properties of SecA (nucleotide binding and hydrolysis), nor do they prevent SecA binding to the SecYEG protein conducting channel. However, SecA PBD mutants fail to load preproteins onto SecYEG, and their translocation ATPase activity does not become stimulated by preproteins. Bulb and Stem, the two sterically proximal PBD substructures, are physically separable and have distinct roles. Stem binds signal peptides, whereas the Bulb binds mature preprotein regions as short as 25 amino acids. Binding of signal or mature region peptides or full-length preproteins causes distinct conformational changes to PBD and to the DEAD motor. We propose that (a) PBD is a preprotein receptor and a physical bridge connecting bound preproteins to the DEAD motor, and (b) preproteins control the ATPase cycle via PBD.
, light blue) that form between them a mononucleotide crevice (7,9,10,12). DEAD motors acquire specificity for different catalytic processes through add-on nonhomologous structural appendages (8,11). SecA has two such structures (12)(13)(14) (Fig. 1, A and B): (a) the C-domain (aa 611-832), which is fused C-terminally to the IRA2 domain of the DEAD motor and regulates its mobility and properties by "stapling" together NBD and IRA2 (14,15), and (b) a second appendage (aa 221-377) of unknown function, which forms an independent structural domain (Fig. 1A, magenta) that comprises a bilobate "Bulb" bridged with NBD through a "Stem" formed of two anti-parallel β strands (β1 and β7; Stem out and Stem in, respectively; Fig. 1, B and C). The Stem is physically "rooted" in NBD (Fig. 1, A and C) without disturbing its structural integrity. As it protrudes out of NBD, the aa 221-377 appendage "embraces" loosely, mainly through Bulb-mediated contacts, parts of the C-domain (9,10,15). At least two distinct Bulb conformational states have been identified in crystallographic studies of Bacillus subtilis SecA (10,16).
Numerous studies have demonstrated preprotein binding to SecA (12, 19, 24 -29). However, where precisely on SecA preproteins bind and how they stimulate its ATPase remains unresolved. This central question must be answered before the molecular basis of SecA-dependent preprotein movement through translocase can be understood. The Stem and Bulb substructures have been indirectly implicated in preprotein binding (12,24). Isolated NBD fragments that also include the 1 strand ( Fig. 1C) retain signal peptide binding of high affinity, whereas deletion of the 1 strand in SecA prevents signal peptide binding (12). However, this study could not conclusively determine whether signal peptide binding occurs on the Stem or whether the Stem is indirectly involved in binding. In another study, N-terminal SecA fragments that were refolded from inclusion bodies and included parts of the Bulb (aa 267-340) were cross-linked to a complete preprotein (24). However, the specificity of this reaction is unclear, since (a) cross-linking occurred only when SecA was partially "reconstituted" by mixing N-terminal fragments together with C-terminal peptides, and (b) the cross-linker used was nonspecific. Now we have investigated the functional role of the SecA domain that encompasses residues 221-377. We demonstrate that this domain is essential for cell viability, protein translocation, and preprotein-stimulated translocation ATPase. Moreover, we provide direct evidence that this region is essential for preprotein loading onto SecYEG-containing membranes and retains the ability to bind preproteins in solution even when detached from SecA. This region is therefore a preprotein-binding domain (PBD). Further, we show that PBD has two distinct domains with distinct subsites (Stem and Bulb) with discrete functional subsites. SecA binds signal peptide through the PBD Stem and mature regions through the PBD Bulb. 
These binding events influence PBD and DEAD motor conformation and subsequently DEAD motor ATP hydrolysis. We propose that PBD due to its strategic location acts as a physical and functional bridge between preproteins and the ATPase DEAD motor.
EXPERIMENTAL PROCEDURES
Bacterial Strains and Recombinant DNA Experiments-E. coli strains were grown and manipulated as described (7,14,30).
Membrane Flotation Experiments-Ultracentrifugal sedimentation experiments were carried out in a bench-top ultracentrifuge (TLX120 Optima; TLA120.2 rotor; Beckman), using polypropylene tubes (0.2 ml) as described (34,35). Reactions (15 μl in buffer B (50 mM Tris-Cl, pH 8, 50 mM KCl, 5 mM MgCl2)) containing SecA or SecAΔBulb (0.5 μg) in the presence or absence of inner membrane vesicles (IMVs) (≤3 μg) and proOmpA-His (0.12 μg) (36) were adjusted to 1.74 M final sucrose concentration and deposited at the bottom of the tube. Samples were overlaid with one layer (20 μl) of 1.6 M sucrose and two consecutive layers (75 μl) of 1.4 and 1.25 M sucrose in buffer B, followed by centrifugation (4°C; 436,000 × g; 180 min). Nine fractions of 25 μl were removed, analyzed by SDS-PAGE, and visualized by immunostaining. SecA was visualized with α-SecA antibodies. proOmpA was immunostained with α-hexahistidinyl antibodies to selectively label freshly bound proOmpA and not the abundant indigenous OmpA population in the IMVs. Flotation of IMVs inside the gradient was monitored by immunodetection of the integral membrane protein SecY.
NMR Spectroscopy-Isotopically labeled samples for NMR studies were prepared by growing the cells in M9 media. All NMR spectra were recorded on a Varian 600-MHz spectrometer. Sequential assignment of the 1H, 13C, and 15N protein chemical shifts of N219-379 was achieved by means of through-bond heteronuclear scalar correlations along the backbone using conventional three-dimensional pulse sequences (37).
RESULTS
PBD Is Essential for SecA-dependent Translocation-To analyze the effect of PBD on SecA function, we used the following mutants of oligohistidinyl-tagged SecA (Fig. 1C): SecAΔ219-240, which lacks the Stem out (12,13); SecAΔ351-368, in which a 17-amino acid deletion removes the highly conserved helix 4 of Bulb 2 (13); SecAΔ233-365 (hereafter SecAΔBulb), which is missing the entire Bulb region but retains β1 and β7 of the Stem joined directly with a linker; and SecA(W349A), in which a highly conserved aromatic residue found in all SecA proteins is substituted by alanine.
The ability of SecA PBD mutants carried on a plasmid to rescue the BL21.19 secAts strain (30) at 42°C was examined (Fig. 1D). None of the deletion mutants could complement BL21.19 in vivo, whereas secA(W349A) could, albeit poorly.
It became clear from the above results that PBD is essential for SecA-mediated protein translocation. To understand at which step of the translocation pathway PBD is essential, we proceeded to characterize the effect of the mutations on each step of the reaction.
SecA PBD Mutants Retain Nucleotide Binding and ATP Hydrolysis-To examine the effect of PBD on the ability of SecA to bind nucleotide, we determined the equilibrium binding constants of SecA mutants for nucleotide (TABLE ONE), using a MANT-ADP fluorescence-based assay (15,21). All mutant proteins retain the ability to bind nucleotide, with either similar (SecAΔ351-368 and SecA(W349A)) or somewhat reduced (SecAΔ219-240 and SecAΔBulb) affinities to that of SecA (TABLE ONE).
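Equilibrium dissociation constants of the kind reported in TABLE ONE are typically obtained by fitting a 1:1 binding isotherm to the fluorescence titration; a self-contained sketch follows (a pure-Python grid search on synthetic data, illustrating the idea rather than the paper's actual fitting procedure):

```python
def fit_kd(conc, signal, kd_grid):
    """Least-squares fit of F = Fmax * L / (Kd + L) over a grid of Kd values.

    For a fixed Kd the model is linear in Fmax, so the best Fmax has a
    closed form; we keep the (Kd, Fmax) pair with the smallest SSE."""
    best = None
    for kd in kd_grid:
        x = [c / (kd + c) for c in conc]
        fmax = sum(xi * si for xi, si in zip(x, signal)) / sum(xi * xi for xi in x)
        sse = sum((si - fmax * xi) ** 2 for xi, si in zip(x, signal))
        if best is None or sse < best[0]:
            best = (sse, kd, fmax)
    return best[1], best[2]

# Synthetic MANT-ADP-like titration with true Kd = 2.0 uM, Fmax = 100.
conc = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
signal = [100.0 * c / (2.0 + c) for c in conc]
kd, fmax = fit_kd(conc, signal, [i / 10 for i in range(1, 101)])  # kd ~ 2.0
```

In practice a nonlinear least-squares routine would refine Kd and Fmax simultaneously, but the hyperbolic model is the same.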
Binding of ADP by SecA causes conformational changes that lead to its thermal stabilization (7,10,14,39). To examine whether ADP binding elicits a similar conformational change in SecA PBD mutants, we monitored intrinsic Trp fluorescence upon thermal melting of the mutant proteins in the presence or absence of ADP (15,21). Under these experimental conditions, the Tm(app) of SecA increases upon ADP binding by 5°C (TABLE TWO). All mutants display ADP-driven thermal stabilization, albeit to a different extent than SecA (TABLE TWO).
Finally, to examine the effect of PBD mutations on the ability of SecA to hydrolyze ATP, we determined the Kcat for the basal ATPase activity of all SecA PBD mutants. The basal catalytic properties of SecA remained unaffected by mutations in PBD (TABLE THREE).
Mutations within PBD (including complete Bulb removal) abrogated neither basal ATP catalysis nor ADP association to SecA. Therefore, it seemed unlikely that PBD, which is located outside the ATP cleft ( Fig. 1A), is a physical determinant essential for nucleotide binding to/hydrolysis by the DEAD motor. Instead, PBD could have a regulatory role in DEAD motor catalysis.
SecA PBD Mutants Bind to SecYEG-containing Membranes-To examine the effect of PBD mutations on the ability of SecA to bind to SecYEG and to form the translocase ternary complex, we employed membrane flotation (34,35). Inverted IMVs, treated with urea to remove endogenous SecA (19), were loaded at the bottom of a sucrose gradient (corresponding to lanes 7-9 in Fig. 2) and were floated up during ultracentrifugation. Gradient fractions were subsequently analyzed by SDS-PAGE, and the migration position of IMVs in the gradient (Fig. 2A, fractions 4 and 5) was identified on Western blots by immunostaining for SecY, the integral membrane subunit of translocase. SecA immunostaining revealed that the IMV preparation used in these experiments was not contaminated by endogenous SecA traces (Fig. 2B). SecA alone (Fig. 2C) or mixed with IMVs (Fig. 2D) was loaded at the bottom of identical sucrose gradients. The position of SecA within the gradient was identified on Western blots by immunostaining with α-SecA antiserum. In the presence of IMVs, a significant portion (>50%) of SecA migrated to the less dense parts of the gradient and was present in the same fractions as the SecYEG-containing IMVs (Fig. 2, compare D with A). This indicated that SecA was bound to the membrane vesicles. This SecA-SecYEG association (Fig. 2D) is independent of nucleotide addition or temperature (data not shown) (3,18). In contrast, SecA alone remained at the bottom of the gradient (Fig. 2C, lanes 7 and 8). All SecA PBD mutants exhibited behavior identical to that of wild type SecA in this assay; they all associated stably with SecYEG-IMVs independently of nucleotide addition or temperature. Here only SecAΔBulb is shown as an example (Fig. 2, compare E with D).

Fig. 1, D and E, legend (fragment): ... (30) using secA PBD mutants. BL21.19 cultures carrying pET5 vector alone or its derivative with cloned secA, secAΔ219-240, secAΔ351-368, secAΔBulb, or secA(W349A) genes grown at 30°C were adjusted to the same density. The indicated dilutions (10^n) were spotted on LB/ampicillin plates and incubated (20 h; 42°C). E, PBD is essential for SecA-mediated translocation. In vitro translocation of [35S]proOmpA into SecYEG-proteoliposomes by the SecA PBD mutants. Assays were performed in 50 μl of buffer B containing the indicated SecA proteins (40 μg/ml), SecB (20 μg/ml), bovine serum albumin (400 μg/ml), [35S]proOmpA (20,000 cpm), and SecYEG-proteoliposomes (250 μg/ml). Reactions were initiated by the addition of 2 mM ATP and incubated for 20 min at 37°C. After proteinase K treatment (1 mg/ml, 15 min, 4°C), proteins were precipitated (12.5% (w/v) trichloroacetic acid) and separated on 15% SDS-PAGE. [35S]proOmpA protected from proteinase K treatment was detected by phosphorimaging.
These experiments demonstrated that mutations or deletions within PBD do not prevent SecA binding to SecYEG. Thus, it seems unlikely that PBD is an essential physical determinant for assembly of the SecA-SecYEG holoenzyme.
SecA PBD Mutation Prevents Recruitment of proOmpA to the Translocase-To examine whether mutations in PBD affect SecA interaction with preproteins, we used flotation gradients (Fig. 2, F-J). Untreated IMVs alone (Fig. 2F) or IMVs mixed with proOmpA complexed with its cognate chaperone SecB (Fig. 2H) or mixed with SecA, proOmpA, and SecB (Fig. 2I) were fractionated as in Fig. 2, A-E. As a control, a mixture of SecA, proOmpA, and SecB was fractionated in the absence of IMVs (Fig. 2G). Gradient fractions were analyzed by SDS-PAGE, and the migration position of proOmpA was identified using an α-hexahistidinyl antiserum. This antiserum does not cross-react nonspecifically with the IMVs in the gradient (Fig. 2F). In the presence of SecA, proOmpA co-migrates in the gradient with SecYEG-containing IMVs (Fig. 2I, lanes 4 and 5; compare with Fig. 2A, lanes 4 and 5). In the absence of either SecA (Fig. 2H) or of SecYEG-containing IMVs (Fig. 2G), proOmpA remained at the bottom of the gradient (Fig. 2, lanes 7 and 8). This SecA-dependent proOmpA binding to SecYEG-IMVs was independent of nucleotide or temperature (data not shown). In contrast to SecA, SecAΔBulb failed to recruit proOmpA to the translocase under identical experimental conditions (Fig. 2J). Very low amounts of proOmpA, comparable with the background nonspecific binding of proOmpA to IMVs (Fig. 2H) (18), are seen to co-migrate with the IMVs.
We conclude that the PBD domain of SecA is essential for preprotein recruitment to the membrane-embedded SecYEG-SecA translocase holoenzyme.
PBD Mutations Compromise Preprotein-stimulated Translocation ATPase-To further study the role of PBD in the SecA-preprotein interaction, we monitored changes in the ATPase activity of SecA (19). Soluble SecA ATPase activity is low ("basal ATPase"; Fig. 3A, lane 1) (19) and becomes marginally stimulated upon SecA binding to SecYEG at the membrane ("membrane ATPase"; lane 2). SecA ATPase becomes significantly stimulated (3-6-fold over membrane ATPase) only following preprotein addition ("translocation ATPase"; Fig. 3, A (lane 3) and B). It was anticipated that mutants that fail to interact with the preprotein would fail to acquire translocation ATPase activity. Indeed, only SecA(W349A) retained some translocation ATPase activity, albeit at ~50% of the wild type levels (compare lane 15 with lane 3). No detectable preprotein-stimulated ATPase activity was observed with the SecA PBD deletion mutants (compare lanes 6, 9, and 12 with lane 3). Titration of proOmpA demonstrated that the membrane ATPase of SecA PBD mutants could not be stimulated by preprotein even at elevated concentrations (Fig. 3B).
These data indicated that SecA PBD mutants fail to acquire preprotein-stimulated ATPase activity, suggesting that their interaction with preproteins is compromised.
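The "n-fold stimulation" plotted in Fig. 3B is simply each measured rate normalized to the membrane-ATPase baseline; a trivial sketch with made-up rates (not the paper's data) illustrates the wild-type vs. deletion-mutant contrast described above:

```python
def fold_stimulation(rates, membrane_baseline):
    """Translocation ATPase expressed as n-fold over membrane ATPase."""
    return [r / membrane_baseline for r in rates]

# Hypothetical rates (arbitrary units) across a proOmpA titration:
wild_type = fold_stimulation([10.0, 25.0, 40.0, 45.0], membrane_baseline=10.0)
delta_bulb = fold_stimulation([10.0, 10.5, 9.8, 10.2], membrane_baseline=10.0)
# wild_type climbs to ~4-4.5-fold; delta_bulb stays flat at ~1-fold.
```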
A Model Preprotein with Stable Binding to Soluble SecA-Collectively, our data suggested that PBD is required for preprotein interaction with SecA (Figs. 2 and 3). We next sought to test whether this reflected direct binding of preproteins to PBD. However, widely used and well characterized model SecA-dependent preproteins, such as proOmpA, are not appropriate for such binding studies, since they bind very poorly to soluble SecA (19,40).4 To overcome this limitation, we resorted to using an alternative model preprotein: proM13coatH5EE (hereafter referred to as pCH5EE) (31,32). pCH5EE, like proOmpA (Fig. 2), requires SecA for its membrane binding (supplemental Fig. 2) and for translocation into the membrane (31) and outcompetes the binding of proOmpA to membranes (31). Its short length, which satisfies a minimal size required for SecA-mediated secretion (73 aa) (41), and its tight binding to soluble SecA (see below; TABLE FOUR) made pCH5EE optimally suited for mapping preprotein binding surface(s) on SecA.

4 S. Karamanou and A. Economou, unpublished results.

TABLE TWO legend (fragment): Endothermic transitions of SecA derivatives. The intrinsic tryptophan fluorescence of the proteins was monitored as a function of temperature, as described (6 ...
Mature Regions of Preproteins Bind to N68-Signal peptides bind to the N68 (a polypeptide that encompasses residues 1-610 of SecA and comprises the NBD, IRA2, and PBD domains) (12,28). In contrast, direct physical binding of mature preprotein regions to N68 has never been demonstrated. To determine whether mature preprotein regions can bind to N68, CH5EE was immobilized on an optical biosensor, and the equilibrium dissociation constants for SecA as well as for N68 were determined (TABLE FOUR and Fig. 4B). Both proteins bind to CH5EE with comparable low micromolar affinities. Similar results were obtained using a CH5EE(W49F) biosensor (data not shown).
To corroborate these findings, we used an activity-based binding assay. Binding of the model signal peptide 3K7L suppresses N68 ATPase (12). As anticipated, M13SP had a similar effect in this functional assay (Fig. 4C). The addition of pCH5EE or CH5EE also led to suppression, whereas that of proOmpA did not. This lack of suppression by proOmpA may correlate with its low affinity for soluble SecA (19,40).
These data demonstrated that the three amino-terminal domains of SecA (610 residues) are necessary and sufficient to bind signal peptide (12) and mature preprotein regions of pCH5EE.
Isolated PBD Binds Preproteins-We next sought to identify which SecA domain present in N68 is responsible for preprotein binding. To this end, we used N1-419, a fragment that contains both the NBD and PBD domains, and N420-610, a fragment that encompasses all of IRA2 (12). In addition, the sequence encoding residues 219-379, which fully encompasses PBD (aa 221-377), was cloned as an independent gene. The derived oligohistidinyl-tagged polypeptide (N219-379) was soluble, chromatographically stable, and monomeric (supplemental Figs. 3 and 4). To determine its folding, N219-379 was isotopically labeled, and its fingerprint two-dimensional 1H-15N heteronuclear single quantum coherence spectrum was recorded by NMR (Fig. 5A). Backbone assignment revealed that the Bulb region of N219-379 is well folded in solution, as evidenced by the dispersion of the corresponding peaks. However, the Stem element is clearly unfolded and very flexible even at temperatures as low as 10°C. Most likely, tethering of the Stem to NBD in SecA is important to stabilize the Stem.

Fig. 3 legend (fragment): ... (T, translocation ATPase) were incubated (1 mM ATP, 30 min; 37°C). ATP hydrolysis was measured as described (19). B, translocation ATPase activity of SecA PBD mutants (as in A) as a function of proOmpA concentration (as indicated). Activity is expressed as n-fold stimulation over "membrane ATPase" measured in the absence of proOmpA.

DECEMBER 30, 2005 • VOLUME 280 • NUMBER 52

SecA Preprotein Binding Domain
To analyze binding of N1-419, N420-610, and N219-379 to preproteins, we used an optical biosensor with immobilized pCH5EE. The isolated PBD fragment retains binding to immobilized pCH5EE (TABLE FOUR), albeit with reduced affinity compared with SecA and N68. N1-419 binds to pCH5EE with an affinity indistinguishable from that of SecA and N68 (TABLE FOUR). In contrast, no detectable binding was measured with N420-610, indicating that IRA2 is not involved in preprotein binding. We next sought to test the specific role of NBD in the binding reaction by constructing and using N1-419ΔBulb in the biosensor system. However, all ΔBulb derivatives tested aggregated on the sensor surface and could not be studied in this assay system (data not shown).
We concluded that isolated PBD can bind preproteins, although optimal binding activity may require either its tethering to NBD or the presence of specific NBD residues.
Isolated PBD Binds both Signal Peptide and Mature Regions-Can N219-379 bind to distinct preprotein moieties? To address this question, we monitored changes in the tryptic accessibility of specific Arg/Lys residues (Fig. 6A) upon the addition of either M13SP, CH5EE, or pCH5EE (Fig. 5B). In the absence of any ligand, trypsin cleaved N219-379 at Arg 220 (at the base of the Stem) and at Lys 360 (located at the Bulb; lane 2). The presence of each of the preprotein segments resulted in distinct and characteristic tryptic patterns. In the presence of M13SP, Arg 220 and Lys 360 became less accessible (compare amounts of N219-379 in lanes 3 and 2), whereas Lys 329 became exposed (lane 3; see p221-328). The addition of CH5EE (lane 4) prevented efficient cleavage at Lys 360 (compare p221-360 in lanes 4 and 2). Finally, the addition of pCH5EE exposed Lys 329 (compare p221-328 in lanes 5 and 2). Under identical conditions, no changes to the tryptic profile of the IRA2 subdomain fragment (Fig. 1A) were observed (data not shown).

Fig. 5B legend (fragment): Following SDS-PAGE, polypeptides were immunostained with α-N68-specific antibodies (12,15,21), and their identity was determined by N-terminal sequencing. A representative of six experiments is shown. Identical results were observed when SecA was used in the same assay (data not shown).
We concluded from the tryptic accessibility assays that the isolated PBD polypeptide can bind complete preproteins, signal peptides, and mature regions.
Bulb Deletion Affects Binding of Mature Preprotein Regions-PBD is composed of a Bulb and a Stem (Fig. 1, A and B). Are both elements necessary for preprotein binding? Deletion of the β1 strand of the Stem prevents signal peptide binding to N68 (12). To determine whether the Bulb substructure of PBD is required for preprotein binding, we compared binding of preprotein segments to N68 and to N68ΔBulb. To this end, we developed a novel fluorescence-based assay.
Changes in the intrinsic fluorescence of N68 as a function of temperature were monitored upon the addition of either the nonfluorescent derivative CH5EE(W49F) or M13SP (Fig. 5C). In this assay, N68 displayed a characteristic Tm(app) (44°C) that was reduced when M13SP (by 5°C) or CH5EE(W49F) (by 2.5°C) was added, indicating that binding of either preprotein moiety to N68 caused its destabilization. The addition of either ligand had a similar effect on SecA (data not shown).
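The Tm(app) values and the ligand-induced shifts quoted in this assay can be read off a melting curve as the half-transition temperature; a minimal sketch (synthetic sigmoidal data, not the authors' analysis software):

```python
import math

def tm_app(temps, signal):
    """Apparent melting temperature: temperature at the half-transition
    point, by linear interpolation between the two bracketing points."""
    half = (max(signal) + min(signal)) / 2.0
    for (t0, s0), (t1, s1) in zip(zip(temps, signal), zip(temps[1:], signal[1:])):
        if (s0 - half) * (s1 - half) <= 0:
            return t0 + (half - s0) * (t1 - t0) / (s1 - s0)
    raise ValueError("no half-transition found in the scanned range")

# Synthetic unfolding curve centered at 44 C (the N68 value quoted above):
temps = list(range(30, 61))
curve = [1.0 / (1.0 + math.exp(-(t - 44.0) / 2.0)) for t in temps]
tm = tm_app(temps, curve)  # ~44.0
# A ligand-shifted curve centered at 39 C gives a ~5 C destabilization:
shifted = [1.0 / (1.0 + math.exp(-(t - 39.0) / 2.0)) for t in temps]
delta_tm = tm_app(temps, shifted) - tm  # ~ -5.0
```

A negative delta_tm corresponds to the ligand-induced destabilization reported for M13SP and CH5EE(W49F).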
As observed with N68, the addition of M13SP caused a reduction in the Tm(app) of N68ΔBulb (by 3.5°C; Fig. 5D), suggesting that Bulb deletion does not prevent the N68/M13SP interaction. However, in contrast to what was observed for N68 (Fig. 5C), the addition of CH5EE(W49F) failed to cause any reduction in the Tm(app) of N68ΔBulb (Fig. 5D), suggesting that Bulb deletion compromises binding of the mature region but not that of signal peptide to N68.
Our data suggested that binding of mature regions to SecA may require the Bulb (Fig. 5D), whereas binding of signal peptides to SecA may require the Stem (12).
Signal Peptide and Mature Preprotein Regions Induce Distinct PBD Conformations-We next sought to determine the effect of preprotein binding to PBD within the physiological context of the complete SecA and its fully functional N68 derivative. To this end, we examined the tryptic profile of SecA and N68 upon the addition of M13SP, CH5EE, or pCH5EE. In this experiment, the tryptic profile of SecA (data not shown) and N68 (Fig. 6B) are identical. Of all of the resulting peptides, only the five (indicated in Fig. 6B) that result from cleavage within or near PBD (Fig. 6A) were selected.
In the absence of preprotein fragments, three of the five peptides were seen (lane 1). When CH5EE was added, all five peptides were detected (lane 2). When M13SP was added none of the five peptides was detected (lane 3) due to significant protease protection of N68 (see remaining N68 amounts; top). This proteolytic protection presumably reflects a more compact conformational state. Trypsin activity is not inhibited by signal peptides (12). Finally, when pCH5EE was added three of the peptides were prominent, and the other two were barely detectable (lane 4).
It seemed that when complete preproteins, signal peptides, and mature regions bound to PBD attached to the DEAD motor, they stabilized distinct PBD conformations.
A 25-aa Mature Region Peptide Binds to PBD-We next sought to determine whether the full length of the mature preprotein region is required for binding to PBD. To address this possibility, a peptide that encompasses the first 25 mature region residues (hereafter referred to as CH5EE24-48; Fig. 4A) was chemically synthesized. SecA (Fig. 7A, lane 2) and N68 (lane 3) bind to CH5EE24-48 immobilized on an optical biosensor, whereas a control protein did not (lane 1). Binding of CH5EE24-48 suppresses N68 ATPase (Fig. 7B), although clearly less efficiently than CH5EE, whereas a control peptide of similar length (p13) did not.
These data indicated that the N-terminal 25 aa of CH5EE can bind to PBD albeit inefficiently. This observation identified for the first time a minimal mature domain segment capable of binding to SecA.
DISCUSSION
Using two different preprotein substrates and derivative peptides, SecA mutants, and isolated domains, we determined that the second "substrate specificity" appendage of SecA (residues 221-377) is a PBD. Further biochemical dissection led to a two-subsite model: PBD binds signal peptides with its Stem and mature preprotein regions with its Bulb (Fig. 5). PBD is required for cell viability (Fig. 1D), protein translocation (Fig. 1E), preprotein-stimulated translocation ATPase (Fig. 3), and loading of preproteins to the membrane-embedded translocase (Fig. 2). PBD is therefore an essential element of the bacterial protein translocase catalytic cycle.
Signal peptide binding to the Stem of PBD is supported by several lines of evidence: (a) it is prevented by a deletion mutation in the Stem (12), (b) it can occur minimally within the aa 220-234 region of PBD that encompasses Stem out (12), (c) it does not require the Bulb (Fig. 5D), and (d) it prevents tryptic cleavage at Arg 220 at the base of the Stem (Figs. 5B and 6B). Several hydrophobic residues on the Stem (see hypothetical model in Fig. 8) could provide a binding surface for residues of the signal peptide hydrophobic core shown by NMR studies to face the pocket (29). One of these residues suppresses defective signal peptides when mutated (A373V) (42). An extreme carboxyl-terminal SecA peptide may "shield" these hydrophobic residues in cytoplasmic SecA (10), thus preventing premature preprotein binding.

Fig. 7 legend (fragment): ... Fig. 4B). Refractive index change was measured at equilibrium (t = 750 s). Maltose-binding protein (MBP) is used as a nonbinding control. B, suppression of N68 ATPase in the presence of the indicated peptides (as in Fig. 4C). p13 is a chemically synthesized peptide (25 aa) from the yeast α factor.
The mature preprotein region binds to SecA or N68 (Fig. 4B and TABLE FOUR) or to the isolated PBD (Fig. 5B and TABLE FOUR) but not to N68ΔBulb (Fig. 5D). Binding of the mature preprotein region invariably affects the accessibility of Bulb residue Lys 360 (Figs. 5B and 6, A and B) in tryptic digestion assays. These data indicate that the Bulb binds mature preprotein regions. Such a specific binding function could explain why signalless preproteins still require SecA (43) and could rationalize previous cross-linking data (24). Cavities (10) and a groove formed between Stem, Bulb, NBD, and the IRA1 helical hairpin (Fig. 8) (16) could accommodate mature region peptides. Bulb mutations (44-47) may affect this binding.
Preproteins bind distinctly to the isolated PBD substructures identified here (Fig. 5B and TABLE FOUR). Nevertheless, the measured affinity of preproteins for PBD is reduced compared with that for the N1-419 fragment that contains both NBD and PBD (TABLE FOUR). We therefore anticipate that additional surfaces or particular residues from NBD may optimize preprotein binding. Electrostatic contacts between signal peptides and SecA were proposed to require NBD residues Asp 217 and Glu 218 at the root of the Stem (29). In other Superfamily 1 and 2 helicases, some DEAD motor residues are known to contribute to oligonucleotide substrate binding (48-50). In addition, the possibility cannot be excluded that preprotein molecules longer than the minimal preprotein used here may make additional contacts to other parts of SecA outside of PBD.
The small size of CH5EE24-48 suggests that a limited Bulb surface may be sufficient for a minimal primary binding. This site could reject positive charges in the mature region N terminus (51-54) and could act as a "molecular ruler" dictating the 20-30-aa step size seen in OmpA translocation (23,55). Interestingly, a 30-residue region downstream of the signal peptide was proposed to act as an "export initiation domain" (54). However, binding of CH5EE24-48 is less efficient than that of CH5EE (Fig. 7B), suggesting that longer peptides may be better retained on the Bulb or may even engage additional parts of SecA.
Preprotein signal and mature region peptides can bind to their respective PBD subsites independently of each other in our in vitro assays (Fig. 5). Nevertheless, in bona fide secretory proteins signal peptides and mature regions are covalently adjoined and would therefore be expected to occupy both PBD subsites simultaneously. Preprotein binding to the two physically connected and spatially proximal subsites affects PBD conformation (Figs. 5B and 6B). This involves movement of residues at both the base of the Stem (Arg220/Lys386) and within the Bulb (Arg270, Lys360, and Lys329) (Fig. 6A) and could influence Bulb swiveling around the Stem (16). The binding of signal peptide, mature region, and full-length preprotein affects PBD conformation in distinct ways (Figs. 5B and 6B). At the same time, the conformation of the Bulb is altered by signal peptide binding to the Stem (Fig. 5B) (12). These data raise the possibility that Stem-Bulb allosteric communication could facilitate "latching" of mature regions onto SecA. Tethering of PBD to NBD through a "rooted" Stem could optimize formation of this ternary complex by enhancing the measured affinity of isolated PBD for the preprotein (Table 4). NMR analysis indicated that once detached from NBD, the Stem region of PBD becomes unstructured (Fig. 5A). Binding to SecA promotes the acquisition of α-helical structure by signal peptides (29). This change in the signal peptide may affect the possible conformations that the Stem can acquire in SecA. The availability of NMR-based atomic resolution tools (Fig. 5A) (29) will greatly facilitate the analysis of PBD conformation and dynamics and its interaction with the preproteins in atomic detail.
Our data and those of others (10, 16, 17, 24, 25, 56) lead to a multistep working model in which the mobile PBD domain acts as a flexible "control lever" that controls the mechanical activities of the SecY-bound SecA nanomotor. (a) Preprotein binds to PBD (Figs. 5B and 6B and Table 4) (24). (b) Preprotein binding modulates PBD conformation (Figs. 5B and 6) (12). (c) This change is transmitted through the Stem to the DEAD motor and affects the conformations of NBD (Fig. 6B), where PBD is physically rooted (Fig. 1, A and B), but also that of IRA2 as judged by tryptic accessibility of Arg586/Lys609 (Fig. 6B; peptides p361-610 and p361-586). (d) Altered PBD conformation is also transmitted to the C-domain residues (10, 25), such as the helical hairpin of the IRA1 global regulator to which the Bulb is attached through defined contacts (Fig. 8, yellow) (15). (e) The net result of these conformational changes (Figs. 5C and 6B) is to detach the C-domain "staple" from its NBD and IRA2 binding sites (7, 14, 15), thereby loosening and destabilizing the DEAD motor (Figs. 5C and 6B). PBD mutations that display reduced ADP binding (Table 1) and stabilization (Table 2) may mimic this state. (f) "Loosening" of the DEAD motor affects ADP release (7, 17, 56) and nucleotide cycling (Fig. 4C). (g) ATP binding to the empty DEAD motor reverses the previous conformational events and drives the PBD-attached preprotein into the membrane. PBD could "rotate" between the two distinct conformational states identified crystallographically (10, 16). PBD may be free to acquire additional as yet unidentified conformational states when the C-domain is detached. (h) ATP hydrolysis, triggered via helicase Motif III (12, 21), partially releases the bound preprotein (23). (i) SecA returns to the stable ADP state (7, 17) and is primed for a new catalytic cycle triggered by preprotein binding to PBD.
"year": 2005,
"sha1": "8bc8ab3f5d3b00384fc113acd62e72174428f797",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/52/43209.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "8bc8ab3f5d3b00384fc113acd62e72174428f797",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Subject of Holodomor in the Ukrainian Artistic Space: Historical Projections of the Art of Contemporary Composers
The Great Famine (Holodomor) is a man-made famine that convulsed the Soviet republic of Ukraine in the 1930s. Since 2006, the Holodomor has been recognized as a genocide of the Ukrainian people carried out by the Soviet government. The article aims to highlight the specific historical, cultural, and social conditions that contributed to the dynamics of the Holodomor theme in music. It focuses especially on musical compositions on this historical tragedy performed at the Kyiv Music Fest competition. In these works we trace the linguistic and musical semantics of tragic imagery, along with the ethnic motifs of the Ukrainian cultural space, including musical-rhetorical figures of the Baroque period, the Christian symbolism of suffering and salvation, and infernal stylistics.
Introduction
The Holodomor or Great Famine is a genocide of the Ukrainian people. The term "Holodomor" emphasizes the specially created conditions and deliberate aspects of the starvation, such as isolation and the impossibility of receiving external aid from other countries, as well as the confiscation of all food. The murder of millions of Ukrainians by man-made famine was the result of the deliberately destructive socio-economic policies of totalitarian power over the Ukrainian Soviet Socialist Republic in 1931-1933. In the extensive scientific research, the subject of the Holodomor has been dealt with in detail only in terms of historical and social studies. In historical perspective, we point to the so-called canonical discourse of the Holodomor by Stanislav Kulchytsky. 1 It is based on the basic concepts of communist terror formed by the scholars James Mace and Robert Conquest. In terms of social analysis, Raphael Lemkin's theory of the genocide's impact on the integrity of traditional culture as a nation's mental foundation is of considerable importance. He argued that the Soviet totalitarian system deliberately starved the Ukrainian people while eradicating the traditional foundations of the oppressed culture and imposing the national culture of the oppressors. 2 It should be noted that these ideas dominate in the conceptions of the composers' works on the subject of the Holodomor.
The concept of carrying out genocide against a nation by deliberately creating unfavourable living conditions (famine) was formulated by the General Assembly of the United Nations in 1948. The Convention on the Prevention and Punishment of the Crime of Genocide of 9 December 1948 provides the basis for the introduction of the term Holodomor into the scientific and social sphere. In the thirties of the twentieth century, the British journalist Gareth Jones, who visited Soviet Ukraine three times during the "Great Famine," first used it in the Western press. 3 Raphael Lemkin, the author of the above-mentioned Convention, first used this term in legal documents in 1953 to define it as a crime of a totalitarian state. The murder of millions of Ukrainians by a man-made famine was the result of a deliberately destructive socio-economic policy of totalitarian power over the Ukrainian Soviet Socialist Republic in 1931-1933. In Soviet times, the true cause of the artificial hunger was deliberately hidden from society and distorted in historical research. As an artistic subject, the topic of the Holodomor was heavily censored, surviving in literary and musical folk traditions. 4 It became a subject of open discussion in Soviet-Ukrainian society only after the official ban on its discussion was abolished, and it was taken up by composers such as Oleg Kiva, Yevhen Stankovych, Myroslav Skoryk, Ivan Karabyts, Igor Shcherbakov, Oleksandr Yakovchuk, Bohdana Frolyak, Viktor Stepurko, and many others. Their compositions on the tragic theme of an artificial famine as a result of the crime of a totalitarian state attract the attention of researchers and art critics.

3 Gareth Jones, "The Great Famine-Genocide in Soviet Ukraine (Holodomor)," The Financial Times, April 13, 1933, accessed November 30, 2021, http://www.artukraine.com/famineart/jones4.htm.
4 Tetiana Kononchuk, Zatemnennya ukrayins'koho sontsya, abo Trahediya holodomoru 1932-1933 rokiv u fol'klori Ukrayiny (Kyiv: Tvim inter, 1998).
5 Wendy Ide, "Mr Jones Review -Gripping Stalin-era Thriller with James Norton," The Guardian, February 10, 2020, https://www.theguardian.com/film/2020/feb/09/mr-jones-james-norton-stalin-thriller.
Literature Review
The Holodomor subject in the works of Ukrainian art is an extremely important topic for understanding the processes in the world culture of the twentieth century. The tragic theme of artificial hunger is widely represented in art but has not been systematically studied. Here are just a few important articles about the different types of art that depict the Holodomor. The film critic V. Syvachuk examines the socio-cultural aspects of the interpretation of the Holodomor theme in documentary and fiction cinematography from different angles: as an artistic embodiment of a civic position; as a dynamic process of the search for historical truth; and as an artistic reflection of the epic tragedy of the Ukrainian people. 6 The importance of his research lies in the observation of the peculiarities of artistic method, drama, and cinematography. The art critic D. Darewych 7 explores the problem of the Holodomor in the paintings of Kazimir Malevich and other Ukrainian artists. Philologists have thoroughly studied various artistic aspects of literary works on the Holodomor theme; an exemplary study is the monograph by N. Tymoshchuk. 8 So far, there have been no scientific studies in Ukrainian musicology dedicated to the history of the implementation of the Holodomor theme in music; there are only numerous works of scholars on individual pieces of music. We will note the valuable nonfictional materials by L. Oliinyk 9 and special scientific articles by Ukrainian musicologists between the years 1993 and 2020. In 1993, Halyna Stepanchenko analysed the composers' competition of the Kyiv Music Fest 1992. 10 In 2003, Olha Vilchynska analysed the history of creation and the features of the genre and dramaturgy of Ivan Karabyts' cantata Prayer of Kateryna. 11 In 2013, Zoya Lavrova examined the tragic imagery of Yuri Lanyuk's oratorio about the Holodomor, Skorbna maty (The Sorrowful Mother). 13
14 In 2013, Olha Kushniruk analysed Oleksandr Yakovchuk's symphonies in the context of the postmodernism of Ukrainian music culture. 15 In 2015, Kateryna Babkina examined the dramaturgy and semantic concept of the Spiritual Requiem-Concert "A Dream" by Igor V. Shcherbakov. 16 In 2020-2021, Olha Vasylenko studied the commemorative tradition 17 and its main compositions, among which are Stabat Mater by Anatoly Haydenko and Gennady Sasko's 1993 choral concert Duma about 1933 for mixed choir and soloists, which addresses the theme through the stylistics of the folk duma genre. 18 Our article aims to highlight the specific social, historical, and cultural conditions that contributed to the dynamics of the Holodomor theme in music. Particular emphasis is placed on illuminating the ways in which this theme was integrated into the artistic space by the forces of the worldwide Ukrainian community.
Socio-cultural Context
In the given perspective, it is interesting to consider the socio-cultural context of Ukrainian composers' creativity of the late twentieth century. The emergence of a subject extremely difficult for musical embodiment -the subject of the Holodomor -in the works of Ukrainian composers was initiated by the activities of representatives of the diaspora. Thus, the formation of the renowned academic festival Kyiv Music Fest is linked to the creative drive of the American Ukrainian activists Virko Baley and the family of Marian and Ivanna Kots. A New York University graduate, Marian Kots was born and died in Lviv but lived most of his life outside his native Ukraine. As the Head of the Association of Holodomor Researchers, he did everything possible to reveal the terrible historical facts of the Great Famine, hidden by the totalitarian regime of the Soviet Union. 19 Marian Kots funded a considerable number of scientific and popular science projects devoted to the history of Ukraine. Thousands of memories of the Holodomor were collected by the activists of the Association of Holodomor Researchers in Ukraine during the nineteen years of its existence. The composers' competition with works on the subject of the Holodomor, held within the framework of the Kyiv Music Fest and named after Ivanna and Marian Kots, was first organized on the initiative of its sponsors. According to Marianna Kopytsia-Karabyts, 20 at that time the festival was a large-scale musical action that actually performed the creative tasks of the moribund official Union of Composers of Ukraine.
Since 1990, the festival has presented on a large scale the achievements of contemporary Ukrainian music of the late twentieth and early twenty-first centuries. The first Kyiv Music Fest was initiated with the creative and organizational support of the prominent American musician Virko Baley. The son of the well-known public figure, publicist, political scientist, and writer Peter Baley, he was born in the town of Radekhiv in the Lviv region; in 1949, Virko emigrated to America. Baley was the first to perform the music of prominent Ukrainians as a conductor in the United States. Promoting the creativity of V. Sylvestrov, L. Grabovsky, M. Skoryk, E. Stankovych, I. Karabyts, V. Zagortsev, and others is his outstanding contribution to his native culture. The scholar Oksana Harmel points out:
In V. Baley's compositions one can find clear examples of works in which he turned to the sharply dramatic topics that constitute the traumatic zones of Ukrainian history, the traumas of cultural memory -this is the chamber opera Hunger (1985, 1995-97, 2011) on the libretto of the poet Bohdan Boychuk, in which the tragedy of the nation is conveyed as a deeply lived personal drama. 21

In the 1990s, the paths of the families of Kots, Virko Baley and Ivan Karabyts intersected in Kyiv, and this fact directly influenced the integration of the Holodomor theme into the music of prominent Ukrainian composers.
Implementation of the Holodomor Theme into the Music of Ukrainian Composers
Music festivals of various composers and masters of the performing arts devoted to the Holodomor are regularly held in large cities of Ukraine, primarily in Lviv and Kyiv, with the support of philanthropists from the diaspora. In 1992, the third Kyiv Music Fest competition for composers was directed by its organizers and sponsors to cover the Holodomor theme in music in Kyiv. Ivanna and Marian Kots had initiated a musical panorama in Lviv in memory of the sixtieth anniversary of the tragedy, at which the music of the Lviv composers Myroslav Skoryk, Viktor Kaminsky and Yury Lanyuk was presented.
Thus, with the support of diaspora figures, the festivals focused on the introduction of the subject of Holodomor into musical compositions. These tragic topics have played a special role in consolidating the nation and improving the moral climate of the Ukrainian intellectual elite.
The first competition was held from 3rd to 10th October 1992 and was commissioned by the Association of Researchers of the Holodomor Genocide of 1932-1933. That determined the subject of the academic compositions written for the competition -the Holodomor. The American composer Virko Baley was appointed as the coordinator. A Ukrainian jury, headed by the Odessa composer Oleksandr Krasatov, evaluated the works of the first round. In the second round, the jury was international: Theodor Kuchar (Australia), Walter Zimmerman (Germany), Olgerd Pisarenko (Poland), Lovell Lieberman (USA), Myroslav Skoryk (Ukraine). Twenty compositions were selected for the second round, including: Spectrum by John Lennon (USA), Volodymyr Runchak's Con mesto sereno, Pro memoria by Gennady Lyashenko, Scorched Mallow by Galina Ovcharenko, Threnody by Zbigniew Baginski (Poland), and Crying and Prayer by Valentyn Bibik. Zbigniew Baginski won the third prize, Valentyn Bibik the second, and the first prize was not awarded at all.
The Theme of Holodomor in the Repertoire of the First Concerts of Kyiv Music Fest in the 1990s
The historical truth of the twentieth-century famine tragedies already had a certain tradition of artistic interpretation, such as the Requiems and commemorative works of the composers of the 1920s, Mykola Leontovych and Kyrylo Stetsenko, dedicated to the victims of the First Famine of 1921-22. In the second half of the twentieth century this tradition was continued by Ivan Karabyts at the Kyiv Music Fest. Even in the years of censorship, the tragic events of the Second (Artificial) Famine of 1933 were reflected in his Orchestra Concerto no. 3, "Lamentation". The composition was performed twice -at the first (1990) and the second Kyiv Music Fest (1991). The appeal in symphonic music to lamentoso intonations as a sign of the tragic is conditioned by the composer's desire to comprehend the folklore genre of lamentation as a common musical symbol of the dramatic eras of Ukrainian history. This approach established a new way of grasping the genre of crying in contemporary symphonic composition.
Principles of Interpretation of the Theme of Holodomor in the Works of Ivan Karabyts
At the Third Kyiv Music Fest (1992), Ivan Karabyts presented the cantata Prayer of Kateryna for reader, children's choir, and large symphony orchestra on Kateryna Motrych's poems. Its premiere suffered from an unfortunate performance due to a conflict between the performers (the orchestra director and the choir director); it was therefore performed only at the festival concert and was not nominated as a competition piece. In the three parts of the cantata the images of death and the moral and spiritual catastrophe of the Holodomor, in particular the tragedy of cannibalism, are revealed with incredible power. The author entrusts the choir part to children: it is the image of the child that, for the composer, personifies the purity and innocence of the Ukrainian people. A striking force in the music is the juxtaposition of two worlds: "Ukraine on earth" and "Ukraine in heaven." The first image depicts the tragic realities of a devastated country. Here the chaos of orchestral aleatorics, amplified by the choral glissando, is realized: the slipping of the sounds is like falling into an abyss. The score's powerful autonomous orchestral and vocal layers are unified by an authentic folklore modal organization. The double harmonic minor is associated with Ukrainian ethnic music; it is the main key in the composer Ivan Karabyts' system of thought. The imitative development of the melody of the child's prayer makes the unfolding more dynamic. Of great importance for the music is the symbolism of the theme of the cross, which acquires a particular infernal expressiveness in the low register of various instruments. There is an impression of moaning, crying, and tension.
The culmination of this imaginative sphere is a poignant account of the infanticide and madness of a peasant woman called Hanna, in whose personal destiny the composer sees more profound analogies: "And blind with grief, bruised, gray, half-blissful, She stood, propped up the sky with her torture, glowingly looking around, Mother Ukraine, crucified on a giant cross." 22 The second musical image that emerges from the lyrics of Kateryna Motrych and is reflected in the music is a kind of Paradise, an image of a happy Heavenly Ukraine. The souls of the children turned into cranes, flew into the sky, and became the stars of the Milky Way. White shadows from Heavenly Ukraine sit down to the funeral supper to mourn and sing for the unburied, as Earthly Ukraine has turned into one solid grave. The orchestral music lights up; pastoral singing is concentrated in the high register. The final mourning episode in the sound of the brass quartet is perceived as singing at a memorial service for the starved dead. After the last words of the reader, the music dissolves into the air (the composer again resorts to aleatorics).
Valentyn Bibik's Diptych Cry and Prayer for Symphony Orchestra
Within the framework of the above-mentioned competition, the work of the prominent Kharkiv composer Valentyn Bibik was presented, and he received the second prize (the first prize was not awarded). The author of eleven symphonies and an extraordinary personality of Ukrainian music, Bibik named his composition in memory of the Holodomor: Cry and Prayer for Symphony Orchestra (1992, op. 89). The lamentoso intonation of silent lament consolidates the melodically independent instrumental layers and unfolds throughout the first part of the symphonic diptych, being the meaningful embryo of all the melodic lines of the woodwinds. The instrumental crying reaches its dynamic climax -the peak of emotional tension, the roar, the cry, and the tears. The composer counterbalances this tragic outburst with the acoustic signs of the funeral service (bells are heard in the orchestra, an allusion to the brass instruments' melody from the funeral mass of Orthodox Christians, Rest with the Saints). The second part of the diptych, "Prayer," uses genre signs of psalmody, recitation, and instrumental imitation of choral singing, which are successfully reproduced in the instrumental layer of the composition.
The Theme of Holodomor in the Choral Concerts by Gennady Sasko and Larisa Donnik
The history of the Holodomor of the 1930s has been reflected in other large-scale compositions performed at the Kyiv Music Fest in different years. The theme of the Holodomor in Gennady Sasko's choral concert of 1993 (Duma about 1933 for mixed choir and soloists) is addressed through the stylistics of the folk duma genre. Dumas and historical songs were written and sung in the cities of Ukraine by blind lirnyks (lyre players) and kobzars. These blind folk singers and epic storytellers are the narrators of the Ukrainian epic; they play the kobza, an ancient lute-like plucked string instrument. There is a body of documentary evidence of the violent persecution, in 1932 and 1933, of performers of dumas about those who had died of starvation. All Ukraine knew about the destruction of hundreds of kobzars who wrote and sang of the horrors of the First Holodomor of the 1920s. The kobzars were forcibly taken to a "congress" in Kharkiv and shot, and their musical instruments were destroyed.
The brave singers of the historical tragedies of the Stalinist regime are commemorated in contemporary music on the Holodomor. It is worth mentioning the musical composition by the Kharkiv composer Larisa Donnik, Little Slobid Poems, the second part of which is entitled "On the Dedication of the Kobzars Executed in Kharkiv Oblast in 1929." The allusive title of Sasko's choral concert thus appeals to a well-known fact of history: the violent extermination of the folk singers. In Gennady Sasko's choral concerto the style of the epic genre of Ukrainian music is used: a choral instrumental background for the epic narrative, with crying vocal improvisations in the style of a lamento. Mykola Tkach is the author of the poetic text of Gennady Sasko's choral concert, and the text is full of symbols. The image of a black hook in the sky, typical of the folk duma, symbolizes the Soviet invasion. It is a sign of distress, a sinister symbol of war, famine, and death, and in particular of the destruction of the kobza culture of Ukraine. Gennady Sasko's choral writing also follows the traditions of Ukraine's funeral singing -memorial services for the executed kobzars. The music synthesizes the intonation of a memorial church prayer (baritone solo) with the stylistics of virtuoso kobzar instrumental playing and a bourdon in the bass layer of the score (imitated by the choir). This peculiar Duma-Requiem for those killed in the Holodomor ends with an allusion to the funeral march from Frederic Chopin's Sonata No. 2, voiced with Mykola Tkach's words: "Disturb the memory! Revive the memory with the word!"
Theme of Holodomor in a Spiritual Concert-Requiem "A Dream" by Igor Shcherbakov
The subject of the Holodomor is also treated in the contemporary musical stylistics of the large-scale instrumental and choral works of the early twenty-first century in the modern concert programs of the Kyiv Music Fest. The spiritual Concert-Requiem "A Dream" for tenor, reader, children's and mixed choirs, large symphony orchestra and organ by Igor Shcherbakov was written in 2008 and performed at the 21st International Music Festival Kyiv Music Fest. The composer was himself involved in writing the libretto. He combines the Latin text of the Requiem with the poetry of the contemporary Ukrainian poet Yuriy Plaksyuk, a witness to the terrible days of the Holodomor. Following Britten's example, the librettist-composer inserts the poetic texts into the canonical parts of the Requiem.
The part Dies irae: "Eternal Pain" combines the text of the Latin sequence from the Mass of the Dead with Plaksyuk's poems. Similarly constructed are Lacrimosa: "Hell's Tears," the crying Benedictus, and Agnus Dei: "In a Dream and in a Waking," as well as Crucifixus: "The Atonement of Despair." Vocal and choral sections are intertwined with dramatically important instrumental interludes -"The Ghost of Death" and "Healing." The fifth, central part uses Mykhailo Vorobyov's poem The Snow of Sorrow. The piercing pain of the autobiographical confessions of the Ukrainian poets, who in early childhood lived through the tragic events of the Holodomor, echoes in every line of the poetry. It is this bundle of painful emotions that has been embodied in the composer's expressive music through the use of avant-garde techniques (elements of quasi-dodecaphony, micropolyphony, controlled aleatorics, and sonorism).
In the choral episodes of Igor Shcherbakov's music, motivic reminiscences and micro-quotations of the outstanding requiems of world music culture can be heard, while the solo voices convey the style of free improvisational kobzar singing and recitative. Between the numbers of the composition there are small inserts with recited poems or a children's choir. The organ motifs are based on the symbolic theme of the cross; the musical material is clearly imbued with the rhetorical figure of catabasis, representing in European music images of suffering, hellish anguish, and death. The conciliar musical performance of Igor Shcherbakov's work convincingly conveys, on the one hand, the tradition of Christian singing for the dead; on the other, its expressionist stylistics painstakingly portrays the dreaded dream of oblivion, of death, of the hellish tortures of hunger. Elements of vocal lamentation and the apocalyptic picture of the past dissolve in the finale into the chords of a choral psalm. The cathartic idea of Igor Shcherbakov's composition and his use of musical-rhetorical figures are similar to the dramatic concept of Bach's majestic Masses.
Yevhen Stankovych, Funeral Service for the Dead from Famine as Dedication to the Sixtieth Anniversary of the Tragedy of Holodomor
Since 1993, a massive opus on the Holodomor has been performed every five years as part of the country's official events to commemorate the anniversary at the Shevchenko National Opera and Ballet Theatre. The sixtieth anniversary of the tragedy of the Ukrainian people was marked by the performance of Yevhen Stankovych's composition Funeral for the Dead from Hunger on Dmytro Pavlychko's poem. A monumental composition of fifteen movements for two different choirs (academic and folk), soloists, and a large symphony orchestra, it was created within a month in Vorzel in 1992. Interestingly, the composer first learned about the Holodomor during a two-month stay in Canada, reading the historical materials and memoirs of witnesses; he was greatly impressed by the theme of this historical tragedy. Thus, Yevhen Stankovych's music became the music of Ukraine's twentieth-century way of the cross and a super-emotional imprint of those terrible memories: "The heavy snows of 1933 sank over the expanses of Ukraine, presenting to the world corpse stench, apocalyptic visions, commensurate only with the paintings of the Last Judgment." 23 Another tragic piece by Yevhen Stankovych, Black Elegy, on the topic of the Chernobyl tragedy, was performed by the Canadian orchestra Canadian Sinfonietta and the choir named after Oleksandr Koshytz.
The performers of the 1993 premiere were the National Choir Dumka with artistic director Yevhen Savchuk, the G. Verevka National Folk Choir with artistic director Anatoly Avdiyevsky, the Symphony Orchestra of the Shevchenko National Opera and Ballet Theatre with conductor Volodymyr Kozhuhar, and the soloists Nina Matvienko (folk voice) and Constantine Klein (bass). The Funeral's multi-layered genre is due to the principle of combining the canon of the ritual funeral church service with the artistic images of Dmytro Pavlychko's poetic text. The conflict-driven dramaturgy of Stankovych the symphonist brings together, in the space of the work and with incredible force, the relentless advance of death upon the Soviet people and the prayer for the pardon of the souls of those who were starved. The musicologist Elena Zin'kevych points out: "The dramaturgical unfolding of a memorial service takes place in two simultaneous movements of two 'plots': the church funeral service and the human memory of the terrible tragedy of Holodomor." 24 Both coexist in different temporal and spatial dimensions: in the enclosed space of the temple and in the open space of Ukrainian history. The complex dramaturgy of the work skilfully conveys the colossal scale of the tragedy in Yevhen Stankovych's Funeral music.
The Fourth Symphony-Requiem "Thirty-Third" by Oleksandr Yakovchuk (for the Seventieth Anniversary of Holodomor)
At the beginning of the third millennium, large-scale compositions in the major oratorical and vocal-symphonic genres by Ukrainian composers entered the cultural space of the many events initiated by the Institute of National Memory. The Fourth Symphony-Requiem "Thirty-Third" by Oleksandr Yakovchuk, written in 1990, was performed on the occasion of the seventieth anniversary of the Holodomor in 2003 at the Shevchenko National Opera and Ballet Theatre. The emotionally insightful poetry of Vasyl Yukhimovich, who personally experienced these terrible events at the age of ten, became the literary basis of the musical composition. The composition has six parts, in which the composer combines the artistic principles of the modern symphony with the traditions of the Funeral Mass. The Symphony-Requiem -a commemoration of the millions of lives lost to the artificial famine and a repentance for the crimes of power -is complemented by the idea of exposing the evils and phantom ideals of totalitarian states. Their musical portraits are parodies of fascist and Soviet bravura marches, which in the collage fabric of Yakovchuk's Symphony-Requiem are mixed with lamentoso motives of anguish and pity for the dead, and reveal the special significance of the funeral rite.
The Musical-text Model Stabat mater in the Memorial Compositions of Academic Composers
On a commission from the state, Yuri Lanyuk created a theatrical oratorio to commemorate the seventy-fifth anniversary of the Holodomor, The Sorrowful Mother (poetic text by Pavlo Tychyna). The oratorio's score includes a large symphony orchestra, two choirs (mixed and children's) and two soloists (soprano and baritone). The composition was performed twice under the guidance of conductor Volodymyr Sirenko: in 2008 in Kyiv and in 2009 in Lviv. The director and producer Vasyl Vovkun worked with the composer to create the libretto.
At first the composition was conceived as a requiem, but as the composer came to grasp the literary source through the genre model of the Stabat mater, it became a type of allegorical oratorio. The requiem concept within the oratorio is related to the tragic fate of the artists of the "Executed Renaissance," to whom the poet Pavlo Tychyna also belongs; the timbre of the folk lyre, which sounds at the opening of the oratorio, became a kind of memorial to the Ukrainian kobzars and lirnyks shot in the 1930s. 25 The genre of the music-text model Stabat mater in the memorial compositions of academic composers lifts the indigenous traditions of Ukrainian lamentation and mourning to the level of high philosophical generalization: the tragic fate of a woman, a mother, reflects the fate of the whole of Ukraine and is compared with the suffering of the Mother of God, so that the horrific tragedy of the martyrdom of starving children acquires a Christian dimension. A similar "Marian" theme is clearly resolved in the composition of Anatoly Haydenko (poetic text by Vasyl Zabashtanskyi), who dedicated his requiem for mixed choir a cappella, Stabat mater, to the seventieth anniversary of the tragedy of the 1930s.
Ukrainian Lemkivsky Requiem by Oleksandr Kozarenko - A Synthesis of the Catholic Requiem Model with Ukrainian Ethnic Culture
The subject of the Holodomor is embodied in different confessional genres: the Catholic requiem and the Orthodox memorial service of contemporary Ukrainian composers. Quite often, the mourning of victims takes place in such musical forms. The ethnic space stated in the name of the Lemkivsky Requiem is clearly reflected in the performance of the composition. The requiem was written for a choir, soloists and two different orchestras at once: a symphony orchestra and an orchestra of folk instruments. European and folk instruments are heard at the same time: violins, violas, dart, and cymbals. The Lemkivsky Requiem consists of twelve parts and combines Western European traditions with authentic layers of Ukrainian folklore. At the heart of the composition stands the burial mass in liturgical Latin, whose text is artfully reworked by the Lviv poet Nazar Fedorak. His poems colour the canonical prayers with tragic Ukrainian folk imagery. This allows the Ukrainian composer Kozarenko to create in modern music a kind of equivalent to the Polish Requiem by Krzysztof Penderecki: a composition about the tragic events of the history of European peoples in the twentieth century.
The tragic theme of Holodomor, presented in a large number of multifaceted compositions of large and chamber formats, has today found a worthy place in the commemorative traditions in Ukraine and worldwide.
Conclusions
To summarize, the theme of the Holodomor first appeared in outstanding musical compositions in the 1990s, in the pivotal era of Ukrainian Nezalezhnosti (Independence). At that time the official ban on discussion of the Holodomor was lifted, and the attention of artists turned to the opening of the tragic pages of Ukrainian history. Ivan Karabyts was one of the first artists to embody the tragedy of the Holodomor in symphonies, cantatas, and oratorios. With the support of the diaspora artists Virko Baley and Marian Kots, the academic festival of contemporary music Kyiv Music Fest was founded, with a thematic competition for compositions on the subject of the historical tragedies of Ukraine.
In the period 1992-1995, the subject of the Holodomor was integrated into the academic music of Ukrainian composers. At the same time, the foundations of the corresponding commemorative tradition were laid in the musical space. The educational activities of diaspora figures served as a catalyst for numerous cultural events (festivals, thematic music venues) commemorating the anniversaries of the tragedy. This sustained the interest of Ukrainian artists in the extremely complex and morally traumatic theme of the Holodomor. Internal creative intentions in the development of the theme were directed by the deep mechanisms of cultural memory, which overcomes the tragic spheres of life in this way and, according to Oksana Harmel, "[...] comprehends the traumas of Ukrainian history, the traumas of cultural memory." 26 These tragic topics have played a special role in the consolidation of the nation and the formation of a new generation of the intellectual elite in our country. The striking compositions written by Ukrainian composers on the subject of the Holodomor depicted and exposed the crimes of the communist leaders who created an artificial famine that led to a vast number of victims, the loss of national culture and tradition, and the destruction of natural cycles. The criminal acts of the authorities in the 1930s eradicated the foundations of Ukrainian material and spiritual culture: traditional ethnic space, economy, religion, and the customs that govern the natural cycles of the universe. Ways of overcoming the apocalyptic tragedy in the dramaturgy of musical compositions on the subject of the Holodomor are usually resolved through catharsis, a state of humility, and spiritual purification.
Two topoi run through all of these musical compositions: mournful lamentoso intonations and the expressive energy of Holosinnya (ritual lament). In the artistic concepts of the symphonies, cantatas, and concertos of Ukrainian composers they depict with incredible power the tragic fate of children, women, peasants, and executed kobzars crippled by hunger. The images of the carriers of evil (party leaders, communists, functionaries) are conveyed through parodic marching themes and fragments, or through the expressive means of the masculine military complex in music (aggressive orchestral sound forms). The sound landscapes of the dead earth and devastated nature, that is, the "landscape thematic complexes," have at their core a specific cruciform melodic outline, and they usually sound in a frozen, timeless musical space.
The picture of traditional ideas about the universe and the Ukrainian cosmos has been deformed. In the popular consciousness there are two parallel universes: the Earthly Ukraine, a dead earth that has ceased to exist, and the Heavenly Ukraine, a flowering, picturesque paradise garden to which the souls of all those executed by famine were transported. Similar pictures in programmatic compositions are resolved spatially, as a kind of dialogue between low and high registers, with the opposition of tragic and enlightened themes.
The themes of death and hunger, delusion and devilish assault, suffering and salvation, Marian themes and imperatives of protest are embodied in compositions of different genres: usually requiems, memorial services, choral concertos, symphonic poems, and orchestral concertos. Their distinctive features are the scale of the form, the poster-like directness of expression, and the fresco-like treatment of the interior space of the composition, where each image unfolds along a spreading horizontal and is inscribed in the vertical coordinates "earth" - "heaven". The peculiarity of the genre and style transformation of the iconic commemorative compositions on the subject of the Holodomor is the saturation of European musical canons with Ukrainian ethnic stylistics, while the textual component draws on the poems of Ukrainian poets.
Composer Ivan Karabyts stated in his note on the composition commemorating the victims of the Holodomor: "Between the past and the present, between those who live today and those who have gone to Eternity, our Memory appears -without it, there is no future; there is a continuation of everything in it." 27 Translated by Irene Okner | 2021-12-30T16:12:03.596Z | 2021-12-28T00:00:00.000 | {
"year": 2021,
"sha1": "81671dd131481916cef0712f8bfe4ababe3b1a69",
"oa_license": "CCBYSA",
"oa_url": "https://revije.ff.uni-lj.si/MuzikoloskiZbornik/article/download/9469/9990",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c9d872a379f05d840f5514b38e46dbeea6ab433e",
"s2fieldsofstudy": [
"History",
"Art"
],
"extfieldsofstudy": []
} |
119050494 | pes2o/s2orc | v3-fos-license | Confined states and direction-dependent transmission in graphene quantum wells
We report the existence of confined massless fermion states in a graphene quantum well (QW) by means of analytical and numerical calculations. These states show an unusual quasi-linear dependence on the momentum parallel to the QW: their number depends on the wavevector and is constrained by electron-hole conversion in the barrier regions. An essential difference with non-relativistic electron states is a mixing between free and confined states at the edges of the free-particle continua, demonstrated by the direction-dependent resonant transmission across a potential well.
Recent studies have demonstrated the production of stable, ultrapure, two-dimensional (2D) carbon crystals, also known as graphene [1,2,3]. These 2D crystals possess unusual properties, such as an unconventional quantum Hall effect [4,5,6,7] and a strong electric-field effect [8]. A large part of these new properties is a consequence of the linear (in wavevector) energy spectrum near the Fermi energy and is expected to lead to a new class of carbon- or graphene-based nanoelectronic devices. Previous theoretical studies of relativistic fermions interacting with strong fields have indicated that the quantum behavior of the particles may differ considerably from the nonrelativistic case [9]. In this paper we investigate the nature of electron states in graphene QWs and their quantized spectrum.
Graphene layers consist of a honeycomb lattice of covalently bonded carbon atoms, which can be treated as two interpenetrating triangular sublattices, labelled A and B, and are often discussed in terms of unrolled, single-wall carbon nanotubes. The low-energy band structure of graphene is gapless and the corresponding electronic states are found near two cones located at unequivalent corners of the Brillouin zone [10]. The low-energy carrier dynamics is equivalent to that of a 2D gas of massless charged fermions. Their behavior is governed by the 2D Dirac Hamiltonian [11,12]

H = v_F σ · p̂,    (1)

where the pseudospin matrix σ has components given by Pauli's matrices, p̂ = (p_x, p_y) is the momentum operator, and v_F is the effective speed of light of the system, which in this case corresponds to the Fermi velocity v_F ≈ 1 × 10^6 m/s. The Hamiltonian (1) acts on states represented by the two-component spinors Ψ = [ψ_A, ψ_B]^T, where ψ_A and ψ_B represent the envelope functions associated with the probability amplitudes at the respective sublattice sites of the honeycomb graphene structure. The low-energy spectrum of free carriers is E = ±ℏv_F (k_x² + k_y²)^{1/2}, with k_x and k_y the wavevectors along the x and y axes, in the vicinity of the cones of the Brillouin zone; the + (−) sign refers to the electron (hole) band. Equation (1) also implies that the carriers are chiral particles, with the pseudospin aligned parallel (antiparallel) to the direction of propagation of the electrons (holes).
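As a quick numerical illustration (ours, not from the paper), the linear dispersion above can be evaluated directly; the constant ℏv_F ≈ 658 meV·nm follows from v_F ≈ 1 × 10^6 m/s, and the function name is our own.

```python
import math

HBAR_VF_MEV_NM = 658.2  # ħ·v_F in meV·nm for v_F ≈ 1e6 m/s


def dirac_energy_mev(kx, ky, band=+1):
    """Massless Dirac dispersion E = ±ħ v_F (kx² + ky²)^(1/2).

    Wavevectors are in nm⁻¹; band = +1 (electron) or -1 (hole)."""
    return band * HBAR_VF_MEV_NM * math.hypot(kx, ky)
```

For the wavevector k_y = 0.03 nm⁻¹ used in the paper's Fig. 2 inset, this gives a free-particle energy of about 20 meV, which sets the scale against the 50 meV well depth considered below.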
Representing the effect of an external electrostatic field by an external potential U and including a diagonal effective mass-like term m v_F² leads to the Dirac equation

[v_F σ · p̂ + U(x) + m v_F² σ_z] Ψ = E Ψ.    (2)

The term ∝ m v_F² creates a gap in the dispersion and may arise from spin-orbit interaction or from the coupling between the graphene layer and the substrate [13]. For a circularly symmetric potential with m = 0, the solutions inside the potential well match free-particle solutions outside, therefore ruling out bound states [14]. This is caused by the conservation of chirality in the interaction with the potential and the absence of a gap in the spectrum, and can be understood as a manifestation of a relativistic tunneling effect first discussed by Klein [15,16] for one-dimensional (1D) potentials, in which Dirac fermions can propagate into hole states across a steep potential barrier without any damping. For massless particles this tunneling is expected to occur for any value of U_0. However, as we show below, for a 1D potential a finite value of the momentum parallel to the potential barrier can suppress this tunneling and thus allow the confinement of electrons. Very recent studies have demonstrated the confinement of electrons in a graphene strip [17]. In that case, in order to obtain confinement the authors assumed a position-dependent effective mass for the particles. This assumption does not permit the observation of Klein tunneling or of the momentum-dependent reflection and transmission; the confinement in that case is therefore qualitatively different from the one specified below. In order to demonstrate confinement in an electrostatic quantum well, we consider a zero or constant effective mass throughout the system and first a 1D square-well potential U(x) = U_0 θ(|x| − L/2), U_0 > 0, cf. Fig. 1, which allows an analytical solution for the eigenstates and sheds light on some general features of the problem. Later on, we consider a parabolic confinement.
With momentum conservation in the y direction, we look for solutions in the form ψ_C(x, y) = φ_C(x) e^{i k_y y}, C = A, B, and obtain the coupled equations (3) and (4). Decoupling Eqs. (3) and (4) gives for φ_A the second-order equation (5), where u′ is the derivative of the potential. For a square well, these derivatives are Dirac delta functions. The character of the solutions depends on the value of β, which determines the sign of the last term on the left side of Eq. (5). The solutions are of three types: (i) traveling waves, which describe free electrons, free holes, as well as mixed states that occur due to the Klein tunneling of electrons to holes outside the potential well; (ii) standing waves, which for massless fermions arise only for finite values of β above an energy-dependent cutoff and decay exponentially in the barrier regions; (iii) tunneling waves, which are oscillatory outside the well whereas inside it they are combinations of exponentials with real exponents; these correspond to holes that undergo ordinary tunneling across the potential well. Type (ii) solutions occur in energy and wavevector ranges for which there are no hole states available in the barrier regions. This suppresses the Klein tunneling, since it depends on the electron-hole conversion at the interface.
In this work we focus on the type (ii) solutions, which describe electron states confined across the well and propagating along it. Their energies lie in the region delimited by the edges of the free-particle continua. At smaller wavevectors, tunneling across the barriers introduces a cut-off in the spectrum. For confined states, the spinor components decay exponentially in the region ξ < −1/2, and the A component can then be written accordingly. The solutions φ_A and φ_B for |ξ| ≤ 1/2 are oscillatory, with κ² = ǫ² − β² − ∆². For ξ > 1/2 the solutions are similar to those for ξ < −1/2 but with a negative exponent. It should be stressed that, in contrast with the non-relativistic case, the spinor components are neither even nor odd functions, despite the symmetry of the potential. This symmetry, however, is reflected in the probability density ρ = Ψ†Ψ [14], which is an even function. Moreover, for a step potential the derivatives of the spinor components are not continuous, because u′ in Eq. (5) is a delta function. This can be demonstrated by considering the continuity of the y component of the probability current, j_y = v_F Ψ† σ_y Ψ, across the potential interface: using Eqs. (3) and (4) we obtain Eq. (8), with u_+ = u_0/(ǫ + ∆), where the arrows indicate the limiting values from the left and right of the interface. Notice in Eq. (8) that, even for large values of ∆, a continuous derivative of φ_A may be assumed only for u_0 β << ∆.
Requiring the continuity of φ_A and φ_B at ξ = −1/2 and ξ = 1/2, we obtain the transcendental equation (9) for the energy eigenvalues, where S_±(ǫ, β, s) = β − f_±(ǫ + ∆) − sκδ^{∓s} and δ = tan(κ/2). The non-relativistic limit can be obtained using ǫ = ǫ_c + ∆, where ǫ_c corresponds to the classical energy, and considering the limit ∆ >> ǫ_c; this gives Eq. (11), with α ≈ [β² + 2∆(u_0 − ǫ_c)]^{1/2}, κ ≈ (2∆ǫ_c − β²)^{1/2}, and Γ ≡ u_0/2∆. For Γ << 1 and βu_0 << ∆ we recover the familiar transcendental equation for a non-relativistic QW. In this limit a non-zero value of β is equivalent to a simple shift of the energy scale, ǫ′ → ǫ_c − β²/2∆, and the spectrum of the confined states becomes a set of nested parabolas. On the other hand, Eq. (11) shows that, even for massive particles, the QW spectrum does depend on the y component of the momentum, in contrast with the non-relativistic results. Thus, a significant modification of the parabolic spectrum occurs as β increases. Equation (9) was solved numerically. The results are shown in Fig. 2 for U_0 = 50 meV, L = 200 nm, and ∆ = 0. The dashed lines delimit the continuum region, which corresponds to free electrons (E ≥ ℏv_F k_y + U_0) with energies greater than the barrier height, and free holes (E ≤ −ℏv_F k_y + U_0) that propagate in the system by means of the Klein tunneling mechanism. The cut-off at low wavevectors thus arises due to the conversion of confined electrons into free holes. For large values of k_y the dispersion branches, labelled by an integer n, become approximately linear in k_y. For any given k_y, the accuracy of this approximation improves as L increases. The lower inset shows (a) φ_A (solid curve) and iφ_B (dashed curve) for the confined state, with k_y = 0.03 nm⁻¹, shown by the solid triangle, and (b) the corresponding probability density in arbitrary units. The plot clearly indicates a discontinuity in the derivative of the spinor component functions at the barrier interfaces.
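Equation (9) itself is not reproduced in this extraction, so as a hedged stand-in the sketch below solves the familiar non-relativistic finite-square-well condition that the text says is recovered for Γ << 1 and βu_0 << ∆: the even-parity states satisfy z·tan(z) = √(z_0² − z²) in the usual dimensionless variables. The function name and the bisection approach are ours.

```python
import math


def even_state_energies(z0, tol=1e-10):
    """Even-parity bound states of the 1D non-relativistic finite square well.

    Solves z*tan(z) = sqrt(z0^2 - z^2) by bisection on each branch of tan,
    where z = k*L/2 and z0 = (L/2)*sqrt(2*m*U0)/hbar encodes the well strength.
    Returns the energies as fractions E/U0 = (z/z0)^2."""
    roots = []
    n = 0
    while n * math.pi < z0:
        lo = n * math.pi + 1e-9                       # start of this tan branch
        hi = min(n * math.pi + math.pi / 2 - 1e-9, z0 - 1e-12)
        if lo < hi:
            f = lambda z: z * math.tan(z) - math.sqrt(max(z0 * z0 - z * z, 0.0))
            if f(lo) * f(hi) <= 0:                    # sign change -> one root here
                while hi - lo > tol:
                    mid = 0.5 * (lo + hi)
                    if f(lo) * f(mid) <= 0:
                        hi = mid
                    else:
                        lo = mid
                roots.append(0.5 * (lo + hi))
        n += 1
    return [(z / z0) ** 2 for z in roots]
```

For example, a well with z_0 = 4 supports two even bound states; a finite well always holds at least one, in contrast with the wavevector-dependent cut-off of the relativistic spectrum discussed above.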
The vertical dotted lines indicate the walls of the well. The upper inset shows the effect of the mass, with m v_F² = 10 meV. The dashed lines again represent the limits of the free-particle continua. In this case, confined states are allowed, for k_y = 0, in the range u_0 − ∆ < ǫ < u_0 + ∆. This energy range broadens as k_y increases and remains constant for k_y > (u_0²/4 − ∆²)^{1/2}. At lower energies there is again a cut-off, due to the Klein tunneling at the barriers, which disappears for 2∆ > u_0.
Next, we consider a QW with a parabolic potential profile U(x) = U_0 (2x/L)² for |x| ≤ L/2 and U(x) = U_0 for |x| > L/2. Figure 3 shows the spectrum of confined states obtained from a numerical solution of Eqs. (3) and (4) for U_0 = 50 meV and L = 200 nm. The results are qualitatively similar to those of the previous case, but now with the eigenvalues approximately equally spaced for large wavevectors.
An essential difference with non-relativistic electrons, evident in all cases, is the appearance of new confined states at the edges of the continua, where the quantized electron branches intercept the free-particle regions. Thus, by an adiabatic increase in k_y one can transform a free-electron or a free-hole state into a bound electron state. This occurs because the presence of the barriers allows a mixing of electron and hole states with the same energy and y component of momentum. As a result, there is constructive interference between confined states and unbound electron or hole states that are resonantly transmitted across the QW. We demonstrate this by calculating the transmission coefficient of electrons incident on a square well. Consider the propagating solutions ψ_A(x, y) = φ_A(x) e^{i k_y y}, with α = [(ǫ − u_0)² − β² − ∆²]^{1/2}; the solutions for φ_B are obtained as in the previous calculation. The transmission coefficient is then obtained as T = |A_3|², where g_± = (β ± iα)/(ǫ + ∆) and f_± = (β ± iκ)/(ǫ − u_0 + ∆). A (k_y, α/L) contour plot of the transmission T is shown in Fig. 4 for U_0 = 50 meV and L = 200 nm. As seen, T depends on the direction of propagation and displays an oscillatory behavior. As α → 0, T reaches a series of maxima for values of β that coincide with the wavevectors for which mixing occurs, cf. Fig. 2. Notice that for a significant range of incident angles T is always equal to 1. This includes the case of nearly normal incidence, k_y ≈ 0, and is in sharp contrast with the non-relativistic case, in which T exhibits periodic maxima equal to 1 as a function of k_x. A similar direction-dependent transmission through graphene barriers was reported recently [18]. A direction-dependent transmission is also possible for non-relativistic electrons tunneling through magnetic barriers [19]. The y components of the momentum for which mixing is allowed correspond to confined states for which the asymptotic limit α → 0 applies.
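The amplitude A_3 and the full matching algebra are not reproduced here, so the sketch below instead uses the standard closed-form transmission for massless Dirac fermions crossing a 1D square potential (the Klein-tunneling result popularized by Katsnelson and co-workers), which shares the qualitative features reported in the text: T = 1 at normal incidence for any potential, and resonances when the internal phase q_x·L is a multiple of π. All names, the meV/nm units, and the propagating-regime restriction are our assumptions.

```python
import math

HBAR_VF = 658.2  # ħ·v_F in meV·nm for v_F ≈ 1e6 m/s


def dirac_transmission(energy, u0, width, phi):
    """Transmission of a massless Dirac fermion through a 1D square
    potential of height u0 (meV) and width `width` (nm), for incidence
    angle phi (rad). Covers the propagating regime inside the region only."""
    k = energy / HBAR_VF                  # wavevector magnitude outside
    ky = k * math.sin(phi)                # conserved transverse momentum
    q = abs(energy - u0) / HBAR_VF        # wavevector magnitude inside
    qx2 = q * q - ky * ky
    if qx2 <= 0.0:
        raise ValueError("evanescent inside the region; sketch covers the propagating case")
    qx = math.sqrt(qx2)
    theta = math.asin(ky / q)             # refraction angle inside
    s = math.copysign(1.0, energy)        # electron (+1) or hole (-1) outside
    sp = math.copysign(1.0, energy - u0)  # band character inside
    num = (math.cos(theta) * math.cos(phi)) ** 2
    den = (math.cos(qx * width) * math.cos(theta) * math.cos(phi)) ** 2 \
        + (math.sin(qx * width)) ** 2 * (1.0 - s * sp * math.sin(phi) * math.sin(theta)) ** 2
    return num / den
```

At phi = 0 this expression collapses to T = 1 for any u0 and width (Klein tunneling), matching the k_y ≈ 0 behavior of Figs. 4 and 5; at oblique incidence T = 1 is recovered whenever sin(q_x·width) = 0, consistent with the resonance analysis in the text.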
This yields the condition sin(κ) = 0, or κ = nπ, where n is an integer. Using the definitions of κ and α then gives Eq. (15). Since β² > 0, the values of n can be obtained from the condition ±(n²π²/2u_0 − u_0/2) ≥ ∆, where the + (−) sign is associated with the upper (lower) continuum edge. From this condition we find that for U_0 < 2mv_F² there is no mixing at lower energies, although it persists at the upper continuum edge, and the minimum value of β for the mixing increases with ∆.
A complementary way to see the direction dependence of the transmission T is shown in Fig. 5(a), with T plotted versus the angle of incidence θ = arctan(k_y/α) for different electron energies, as indicated. The QW parameters are L = 200 nm and U_0 = 50 meV. Notice that for θ ≈ 0 we have T ≈ 1, in agreement with the k_y ≈ 0 part of Fig. 4. In Fig. 5(b) we plot T versus the energy E for θ = π/3. As seen, T oscillates with the energy due to the resonance effect caused by the confined states (as in the Ramsauer-Townsend effect). The energies of the transmission maxima can be obtained from Eq. (15) as ǫ = (nπ)²/2u_0 + u_0/2.
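The quoted expression for the transmission maxima is straightforward to tabulate; the sketch below simply evaluates ǫ_n = (nπ)²/2u_0 + u_0/2 in the paper's dimensionless units. The function name is ours, and the sample value of u_0 used below is purely illustrative, since the paper's nondimensionalization is not spelled out in this extraction.

```python
import math


def resonance_energies(u0, n_max):
    """Dimensionless energies of the transmission maxima,
    eps_n = (n*pi)^2/(2*u0) + u0/2, for n = 1 .. n_max (u0 dimensionless)."""
    return [(n * math.pi) ** 2 / (2.0 * u0) + u0 / 2.0 for n in range(1, n_max + 1)]
```

Because of the (nπ)² term, successive maxima spread apart quadratically in n, which is the oscillation pattern visible in Fig. 5(b).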
In summary, we showed that it is possible to confine massless charge carriers by means of electrostatic potentials, owing to the wavevector-dependent suppression of the electron-hole conversion at the potential barriers. We thus obtained the quantized spectrum of confined electron states in graphene quantum wells (QWs) as a function of the y component of the wavevector. The results show a remarkable dependence of the eigenvalues on the momentum, with a cut-off at low wavevectors. The relativistic correction to the classical QW spectrum leads to a wavevector dependence of the number of confined states, due to the electron-hole conversion at the continuum edges. Accordingly, such QWs must be treated as inherently 2D systems. This is further demonstrated by the directional dependence of the transmission shown in Figs. 4 and 5. Studying the resonant transmission of electrons across a QW with energies above the height of the confining walls, E > U_0, can probe the discrete levels, which can be populated by tuning the Fermi energy of the system with the electric-field effect [1]. This work was supported by the Brazilian Council for Research (CNPq), the Flemish Science Foundation (FWO-Vl), the Belgian Science Policy (IUAP) and the Canadian NSERC Grant No. OGP0121756. | 2019-04-14T02:11:37.157Z | 2006-06-21T00:00:00.000 | {
"year": 2006,
"sha1": "f5cc25e8f90ce9089f3f4875c54007eb89f93643",
"oa_license": null,
"oa_url": "https://repository.uantwerpen.be/docman/irua/dda26f/60091.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f5cc25e8f90ce9089f3f4875c54007eb89f93643",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
225051318 | pes2o/s2orc | v3-fos-license | Nubian Levallois reduction strategies in the Tankwa Karoo, South Africa
The Middle Stone Age record in southern Africa is recognising increasing diversity in lithic technologies as research expands beyond the coastal-montane zone. New research in the arid Tankwa Karoo region of the South African interior has revealed a rich surface artefact record including a novel method of point production, recognised as Nubian Levallois technology in Late Pleistocene North Africa, Arabia and the Levant. We analyse 121 Nubian cores and associated points from the surface site Tweefontein against the strict criteria which are used to define Nubian technology elsewhere. The co-occurrence of typically post-Howiesons Poort unifacial points suggests an MIS 3 age. We propose that the occurrence of this distinctive technology at numerous localities in the Tankwa Karoo region reflects an environment-specific adaptation in line with technological regionalisation seen more widely in MIS 3. The arid setting of these assemblages in the Tankwa Karoo compares with the desert context of Nubian technology globally, consistent with convergent evolution in our case. The South African evidence contributes an alternative perspective on Nubian technology removed from the ‘dispersal’ or ‘diffusion’ scenarios of the debate surrounding its origin and spread within and out of Africa.
Introduction
Southern Africa is a critical location for understanding the origins of modern humans in the Middle Stone Age (MSA), about 300 ka to 40 ka. Numerous cave and rock shelter sites in the coastal-montane belt have provided key evidence for complex and innovative behaviour in a succession of distinctive technocomplexes, particularly during late Marine Isotope Stage (MIS) 5 and MIS 4 [1][2][3]. Specifically, the Still Bay and Howiesons Poort have received special attention due to the early evidence of art and symbolism alongside high levels of technological investment in producing characteristic artefacts, bifacial points (Still Bay) and backed artefacts (Howiesons Poort) [4,5]. The Fynbos vegetation biome, where these tend to occur, provides a dense and predictable food supply in its juxtaposition of terrestrial and marine resources, the latter often linked with increasing technological, social and cognitive developments in the MSA [6][7][8]. However, the Fynbos Biome is only one of the nine terrestrial biomes that make up southern African environments [9], providing past hunter-gatherers with a range of environments. The Tankwa Karoo basin lies between two high-elevation mountain ranges, which prevent the westward penetration of summer rain or the easterly movement of winter cold fronts. While both mountain ranges receive comparably high rainfall, the basin itself lies in the rain shadow and, as a result, is one of the most arid areas of South Africa, with mean annual precipitation within the 0-100 mm range. As such, the vegetation is characterised by arid-adapted dwarf shrubland, dominated by leaf succulents and a wide variety of geophytes, chamaephytes and therophytes [22]. Surveys in this previously unstudied area aimed to compare past adaptations to this environment, in terms of lithic technology and landscape use, with the well-resolved record for the neighbouring Cederberg [23,24]. These results are presented in [16] and will be published in full separately.
A significant find in the course of this fieldwork was the first reported occurrence in South Africa of Levallois preferential point cores [25] that use a specific preparation technique, known as Nubian technology [26]. This involves the preparation of a point by creating a steeply-angled distal guiding ridge on a triangular-shaped core, through distal removals (Type 1), lateral removals (Type 2) or a combination of both (Type 1/2). This technology is a feature of the MSA or Middle Palaeolithic of Northeast Africa, Arabia and the Levant [27][28][29][30][31] but recently has been observed in South Africa at further sites in the Doring River area [32]. Retrospectively, descriptions and illustrations of Nubian cores have also been identified in the Karoo region of South Africa, in Sampson's [33] Orange River MSA study and, potentially, at Driekoppen shelter [34]. Surveys in the Tankwa Karoo have recorded 134 cores using the Nubian Levallois technique in 11 survey localities, most notably at Tweefontein (pronounced "Twee-er-font-eyn") where 121 cores were sampled forming the main focus of this paper. The assemblage at Tweefontein is, so far, the largest assemblage of Nubian technology reported in South Africa.
Nubian Levallois technology
The features which distinguish Nubian Levallois technology from other forms of Levallois production were noted by several early studies in Egypt [35][36][37]. The first formal detailed description was by Guichard and Guichard [38,39] based on artefacts identified in rescue surveys in Nubia (lower Nile Valley, southern Egypt/northern Sudan). In contrast with the 'classic' unidirectional convergent method of point production, whereby two convergent proximally struck removals create a triangular guiding scar for the preferential point removal [40], the Nubian method uses a distal platform to create a steep Distal Median Ridge (DMR), shaped either by distal preparation in the case of 'Type 1' cores, or bilateral preparation in the case of 'Type 2' cores ( Fig 2). A third intermediate form, 'Type 1/2' has been acknowledged where a combination of distal and lateral preparation is observed, showing some flexibility in the strategy used to maintain the DMR [29,30,41,42].
The bilateral preparation involved in the Type 2 organisational system shares similarities in approach with centripetally prepared Levallois cores (falling under the category of radial cores in African lithic nomenclature), although the latter have a circular to ovate shape and produce ovate rather than pointed end-products. Chiotti et al. [41] note that the removal of the pointed distal end of a Nubian core would transform a Type 2 Nubian into a radial core, leading them to question whether the two can be regarded as separate reduction strategies or whether they rather represent stages of the same reduction sequence. However, this is not supported by metric analyses [26], and crucially the two differ in the steeper curvature of the Nubian core distal and in the opposed proximal and distal striking platforms. For the same reasons, Goder-Goldberger et al. [31] reject the inclusion of Type 2 cores with a flat-angled flaking surface within the Nubian core definition, instead regarding these as within the classic centripetally prepared Levallois method. Van Peer et al. [43] suggest that the preparation of Type 2 cores grades between Type 1 and classic centripetal Levallois, with the sometimes very short distal ridge on some Type 2 cores installed by distal removals on an otherwise centripetally prepared core. In their initial definition, Guichard and Guichard [38: 69] observe that a Type 2 core without a final preferential removal "might recall a biface" albeit with unequal treatment of each face, but again the DMR is the key distinguishing feature.
Although points produced by Nubian and classic Levallois point cores share features such as their faceted striking platform, they differ in shape, the latter generally being a short nearequilateral triangle and the former more elongated. Additionally, the dorsal scar patterns are characterised by a Y-shaped unidirectional convergent scar on classic Levallois points, but bidirectional scars on Type 1 points with bilateral removals on those from Type 2 [26]. However, dorsal scars can be difficult to distinguish between Nubian and classic Levallois products, such as where the product terminates above the DMR or where the ridge is less pronounced as in classic Levallois flakes, or where the dorsal preserves a previous proximal point removal yet shows limited lateral or distal repreparation, appearing like a classic unidirectional convergent Levallois point.
As its name suggests, the identification of Nubian technology was initially geographically focused on Northeast Africa and became associated with a specific technocomplex, the Nubian Complex [27,44]. This term is applied broadly to a range of assemblages, some of which do not in fact include Nubian cores, making the existence and coherence of the Nubian Complex a point of some controversy [45][46][47]. This is compounded by relatively few absolute dates since many are surface occurrences; while it is generally considered to be an MIS 5 phenomenon [48,49], OSL ages span from the Middle Pleistocene (181-156 ka) [50,51] to some of the youngest MSA ages in Africa (16-15 ka) [52].
More recently, finds reported in the Levant [31,53,54] and in various parts of Arabia [28][29][30]55] have sparked further debate over whether this is a regional technocomplex shared by populations expanding out of Africa during MIS 5, the result of cultural diffusion, or has convergent origins. Cases of Nubian technology also extend to the Horn of Africa with sites in Eritrea [56], Somalia [13], Ethiopia [13,57,58] and Kenya [59,60], and several Nubian cores are noted in the Thar Desert in India [61]. While arguably the occurrence of Nubian technology in these neighbouring regions could be the result of dispersals or diffusion, the substantial spatial and temporal gaps between these and the South African Nubian occurrences strongly suggests the convergent evolution of the technique in South Africa [25,32].
This paper presents the current results of our on-going fieldwork at the site of Tweefontein and tests our identification of Nubian Levallois technology in the assemblage against the rigorous criteria agreed on elsewhere as requirements of Nubian technology. We complement the Tweefontein data with additional occurrences in the Tankwa Karoo region, set this in a wider South African context, and consider the potential drivers behind the convergent evolution of this distinctive method of point production at a global scale. We specifically do not enter any debate about the relationship between the North African, Levantine and Arabian Nubian, or the status of the Nubian Complex, nor do we attempt to challenge or redefine the criteria for evaluating Nubian Levallois technology [47].
Materials and methods
The main study site, Tweefontein, is located on a low, flat-topped ridge (approximately 330 x 180 m) on the Tankwa River floodplain, lying between two channels of the river that were dry at the time of survey (Fig 3). The Tankwa floodplain is formed of unconsolidated Quaternary alluvium with virtually no surface archaeological material observed, representing a very different depositional setting to the archaeologically-rich sediment stacks studied along the nearby Doring River [15,62]. Instead, archaeological evidence is well-preserved on the elevated rocky ridges that flank the floodplain, formed of Dwyka Group (Elandsvlei Formation) geology of the Karoo Supergroup, a glacial tillite containing a wide array of clasts from across the subcontinent in a fine-grained matrix [63]. The Tweefontein ridge is raised approximately 3 m above the surrounding floodplain and the bedrock itself is a highly compacted diamictite and thus relatively erosion-resistant. The site has outcroppings of Dwyka boulders and diamictite bedrock and the land surface is covered with angular clasts of rocks forming a single deflated 'desert pavement' surface of artefacts and rocks overlying sand (Fig 3B).
Desert pavements and site formation processes
Desert pavements are lag-gravels, usually only one or two stones thick, covering finer-grained sediments, and they are a common phenomenon in arid or semi-arid environments [64]. They occur widely on the land surface of the Tankwa Karoo [65]. Typically these surfaces are formed by the aeolian accretion and/or deflation of fine sediment underneath the stone pavement which remains on the surface [65][66][67][68]. These surfaces are regarded as long-lived geomorphic features, producing surface ages up to 1.8-1.5 Ma [69] and many Pleistocene dates in other parts of the world [70][71][72].
Although no geomorphological investigation has yet been carried out at Tweefontein directly, a recent study has assessed pavement formation in a similar setting about 2 km to the east [65]. A small test excavation of the desert pavement on a pediment north of the Tankwa floodplain revealed a thin clast-free vesicular A horizon (5 cm thick), overlying a heavily rubified B horizon, also virtually free of clasts. The light-coloured A horizon is formed of young aeolian sediments, with the rubified sediments below showing considerable pedogenic alteration [65]. The authors confirm that the desert pavement at the sampling location has been established since at least the late Pleistocene and may be several hundred thousand years old. However, it is unlikely that the pavement has survived intact for timescales approaching a million years. This is consistent with the archaeological observations from our surveys in the region where Later Stone Age (LSA) and MSA artefacts are found on the same surface, as is the case at Tweefontein, but no older Earlier Stone Age artefacts have been found in desert pavement contexts.
Although desert pavements are stable and long-lived landforms, they are dynamic entities, and the multiple episodes of formation and burial observed in some regions raise the question of whether clasts (and artefacts) were buried initially and reworked onto the surface to create the palimpsest observed today [64,70,73]. The degree of pavement development can be a proxy for age, assessed through the coverage of clasts, although other factors such as plant cover and animal activity can disturb the surface [74]. Generally, the smaller and more closely interlocked the clasts, the older the pavement surface [75][76][77]. Small-scale processes such as wind or rain splash can cause lateral movement of clasts at the centimetre scale, allowing disturbed pavement surfaces to 'heal' at a relatively fast rate of tens to hundreds of years [71]. In an archaeological context on the Libyan Plateau, Adelsberger et al. [78] observed the presence of small artefacts of 5-25 mm in maximum dimension in assemblage samples on desert pavement surfaces. The proportion of small-fraction artefacts never exceeded 42.4% of the total number of artefacts, with an average of 8.4%. We tested this at Tweefontein and found the small fraction to be well-represented on the surface in the complete sample squares recorded (54-62% of artefacts were 10-25 mm in three 1 m² samples). This suggests that, at least in the context of high-density artefact scatters, desert pavements in the Tankwa Karoo can preserve assemblages with minimal size-sorting.
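As a rough illustration of the size-sorting check described above, the small-fraction share of a sample square can be computed directly from artefact maximum dimensions. The function and the counts below are hypothetical stand-ins for illustration, not the recorded Tweefontein data.

```python
# Illustrative check of small-fraction representation on a deflated surface.
# The measurements below are hypothetical, not the recorded Tweefontein data.
def small_fraction_share(max_dims_mm, lower=10, upper=25):
    """Proportion of artefacts whose maximum dimension falls in [lower, upper) mm."""
    small = sum(1 for d in max_dims_mm if lower <= d < upper)
    return small / len(max_dims_mm)

# One hypothetical 1 m^2 sample square (maximum dimensions in mm):
square = [12, 14, 18, 22, 24, 31, 35, 48, 16, 11, 27, 19]
share = small_fraction_share(square)
print(f"{share:.0%} of artefacts are 10-25 mm")
```

A well-represented small fraction, as in this toy square, is what argues against wind-driven winnowing of the lightest pieces.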
Desert pavements depend on aeolian activity for their formation but the high winds experienced by arid, exposed environments can also impact on artefact taphonomy directly [79][80][81]. In Patagonia, wind speeds of 90 km/h can move lithics of up to 50 mm in size and 13 g in weight [79]. Tweefontein is very exposed to the elements and wind speeds of up to 95 km/h have been recorded nearby, meaning that some lateral displacement of artefacts by the wind cannot be precluded.
A common result of wind-abrasion on artefacts is the distinctive polish or 'desert varnish' that patinates artefact surfaces. In the Tankwa Karoo, this is particularly pronounced on hornfels, producing red-brown dorsal and ventral surfaces, but affecting artefact edges to a much lesser degree. The use of desert varnish as a dating tool has been explored with varied results [82][83][84][85] but it may hold greater potential as a palaeoenvironmental indicator [86,87]. While there is no universal relationship between patination colour and age of artefacts [82], variation in colour within assemblages in the same setting can be a reasonable relative measure of chronology [88]. At Tweefontein artefacts that are technologically and typologically characteristic of the LSA show very little colour alteration from the original dark-grey hornfels or dolerite, whereas MSA artefacts are consistently patinated to red-brown.
While surface sites should always be treated with caution due to their vulnerability to postdepositional disturbance, Tweefontein can be assumed to have experienced relatively stable conditions given: (1) its desert pavement surface, (2) the high representation of small fraction lithics, (3) numerous refits and conjoins observed in the field and (4) temporally distinctive patterning in the site structure with different clusters of MSA and LSA artefacts. The MSA artefact scatter extends across most of the ridge but it is densest at the north-east corner. Two circular dry-stone walled structures along the south-western edge of the ridge are attributed to the LSA, one of which is associated with a small LSA assemblage with a further LSA artefact cluster in the middle of the ridge (Fig 3C). Although no systematic attempt at refitting was made, one preferential product refitted to a Nubian core and six artefacts were found in two conjoining portions including one point, one Nubian core and four elongated flakes, further affirming the spatial integrity of the site.
Sampling
In total we have analysed 3266 artefacts at Tweefontein over two field seasons in 2014 and 2015, using a number of different sampling strategies with different aims (Table 1). Every artefact studied was assigned a unique identification code and its precise spatial location was marked with a corresponding numbered flag (Fig 3B). Non-destructive attribute analysis and detailed photography were carried out at a temporary local recording station, prior to returning the artefacts to their original location. The spatial integrity of the artefacts was retained owing to this careful recording protocol, and the "catch and release" approach preserves the surface archaeological record for future monitoring [32,89]. Since this research was non-destructive and no artefacts were permanently collected or displaced, no permits were required for the fieldwork, which complied with all relevant regulations. Permission to conduct research on the farm Tweefontein was granted by the land-owner.
2014 field season
Tweefontein was initially identified during surveys in August 2014 when its importance was noted for Nubian-like cores, points and silcrete use. Four days of survey time were dedicated to documenting the site, with two analysts walking perpendicular transects across the site recording non-metric attributes (raw material, basic technological features and typology), GPS points and photographs of retouched pieces, points, point fragments, and cores in all raw material types, as well as all silcrete artefacts (Fig 3C). Complete artefact samples were recorded in three one-metre squares, one positioned at the centre of the MSA scatter and one at each of the LSA artefact scatters. Additional semi-systematic surveys recorded Nubian cores and points across the site more widely. The data collected were in line with the strategies employed in the broader survey programme [16,24].
2015 field season
A second fieldwork period of nine days was spent recording the site more systematically, involving a detailed attribute analysis of the artefacts. This specifically aimed to test whether the Tweefontein cores showed the classic features of Nubian technology, principally following Usik et al. [29]. Three sampling methods were employed. Firstly, an arbitrary 9 x 6 m grid was set up at the site (Fig 4) (a total station was not available for use), and the one-metre squares were systematically searched for artefacts of interest: these included Nubian cores, other formal cores (excluding irregular or informal cores and chunks), retouched points, possible point fragments, and other artefacts such as blades, core rejuvenation and preparation flakes. These were marked with numbered flags, temporarily removed for recording and photographing, and replaced in their original location. Each square was photographed with the flagged artefacts and GPS points were taken for each, although good GPS precision at this scale was not guaranteed.
The second method allowed for a more complete sample of the assemblage from the grid squares to be recorded. Three squares (18/-1, 18/+1 and 18/+2) included all artefacts >10 mm and, owing to impractically high numbers of hornfels shatter, a size threshold of >20 mm was introduced for a further three squares (18/+3, 18/+4 and 18/+5). The third method implemented was a systematic survey of the area outside the grid to the north and west, aiming to increase the sample size of Nubian cores and points. Artefact locations were recorded using a Garmin eTrex H GPS device, but lacked the additional spatial anchoring of the grid. This strategy was effective in doubling the number of cores and points recorded.
Attribute recording methods
PLOS ONE
Nubian Levallois reduction strategies in the Tankwa Karoo, South Africa
Nubian cores were explicitly tested against Usik et al.'s [29] criteria, which have been used as a benchmark in a number of other studies [31,32,90]. This recorded the following attributes: a steep distal median ridge (less than 90°), a pointed core shape, distal (Type 1), lateral (Type 2) or a combination of distal and lateral (Type 1/2) preparation, and a prepared proximal striking platform. As stated by Usik et al. [29: 249], "such a rigid definition is necessary to prevent any unwarranted broadening of this particular reduction strategy". Besides these attributes specific to Nubian cores, other attributes that were recorded for all of the artefacts included artefact class, raw material, cortex type and coverage, completeness, morphology, technology (scar patterns), retouch type and degree of patination (Table A in S1 Appendix).
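The four-attribute checklist lends itself to a simple predicate. The sketch below only illustrates the recording logic; the field names and the default 90° threshold are our paraphrase of the criteria, not the authors' actual recording software.

```python
from dataclasses import dataclass

@dataclass
class Core:
    dmr_angle: float        # distal median ridge angle, in degrees
    shape: str              # 'triangular', 'cordiform' or 'pitched'
    preparation: str        # 'distal' (Type 1), 'lateral' (Type 2), 'both' (Type 1/2)
    platform_prepared: bool # prepared proximal striking platform

POINTED_SHAPES = {'triangular', 'cordiform', 'pitched'}
PREPARATION_TYPES = {'distal', 'lateral', 'both'}

def meets_criteria(core: Core, max_dmr: float = 90.0) -> bool:
    """True only if the core satisfies all four recorded attributes."""
    return (core.dmr_angle < max_dmr
            and core.shape in POINTED_SHAPES
            and core.preparation in PREPARATION_TYPES
            and core.platform_prepared)

print(meets_criteria(Core(78.0, 'cordiform', 'lateral', True)))   # True
print(meets_criteria(Core(110.0, 'ovate', 'lateral', True)))      # False
```

Requiring every attribute to hold, rather than a majority, mirrors the deliberately rigid definition quoted above.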
Metric data for lithic artefacts were captured using digital callipers with 0.01 mm precision and electronic scales with 0.01 g precision. Angles were recorded with a goniometer, accurate to 5 degrees. All data were entered directly onto a laptop by a recorder (MS) working with an analyst (EH). All cores and points were photographed comprehensively since they were not collected.
Results
Results from the 2015 field season are the primary focus of this paper, including the 1417 artefacts recorded in the complete grid squares as well as the selective formal core and point samples. The complete grid squares provide raw material proportions and size fractions representative of the overall composition of the site (Tables 2 and 3). Artefact densities for the three squares which included the smallest hornfels fraction, and for the 2014 MSA sample square, are 185, 223, 413 and 297 artefacts/m² respectively, giving a mean value of 280 artefacts/m².
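The quoted mean density follows directly from the four per-square counts:

```python
# Reported per-square artefact densities (artefacts per square metre):
# the three 2015 squares recorded down to the smallest fraction,
# plus the 2014 MSA sample square.
densities = [185, 223, 413, 297]
mean_density = sum(densities) / len(densities)
print(round(mean_density))   # 280
```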
The dominant raw material used at Tweefontein is hornfels, available as tabular cobbles in the Tankwa River cobble beds located directly on either side of the site. There is comparatively more cobble cortex (7.4%) observed on hornfels than outcrop cortex (2.2%), which indicates a preference for secondary raw material sources. Hornfels makes up 80-86% of material in the sampled squares, and while it is the most common raw material used for Nubian cores and points, the proportions are lower at 62% and 44% respectively. Hornfels generates a large amount of undiagnostic small shatter when knapped, with 32% of artefacts measuring 10-20 mm, and 73% smaller than 30 mm (Fig 5, Tables B and C in S1 Appendix), hence the introduction of a 20 mm size cut-off for half of the grid sample squares. Dolerite is also available in the same cobble bed context as hornfels or from primary outcrops 5-10 km away, but it was used less frequently at Tweefontein, representing only 1-3% of the overall raw material composition. The site's location on an outcrop of Dwyka tillite means that a range of other rocks are directly available from the bedrock, as well as the diamictite cementing the clasts together. Generally, the use of these rocks was low (1%), but nodules of fine-grained translucent quartzite from this context were exploited, comprising 5-9% overall, with roughly similar proportions seen in the Nubian core (13%) and point (7%) samples.
Raw materials are considered here to be non-local to the site where primary and secondary sources occur over 10 km away, based on an average hunter-gatherer daily foraging radius [91]. Silcrete outcrops on Cape Supergroup geology, a minimum of 10 km from the site, and is available as secondary cobbles in the Doring River at a similar distance. Although silcrete only comprises 4-5% of the raw materials recorded in the sample squares, much higher proportions are noted among the Nubian cores and points (14% and 29% respectively). The high representation of the small fraction (42-54% of silcrete artefacts measuring 10-20 mm) and the cortex retained on 27% of artefacts attest to the transport of silcrete nodules to Tweefontein for on-site knapping.
CCS (cryptocrystalline silicate) is a heterogeneous raw material category (equivalent to chert in other regions), but the main type used for artefacts is a fine-grained light-grey material with white or orange cortex. This is likely to derive from the Matjiesfontein Member (Ecca Group), entering the Doring River in the southern Tankwa Karoo and transported as cobbles at least 10 km from the Doring's closest point. As observed for silcrete, overall CCS proportions are low (2-3%) but the use of CCS for Nubian cores and points is higher (7% and 11% respectively).
Nubian cores
A total of 121 Nubian cores were recorded in the 2014 and 2015 field seasons (Fig 6). Detailed attributes were recorded on 108 cores, 100 of which were sufficiently complete for full metric evaluation; the remaining 13 cores were recorded during the 2014 sample, so only qualitative attributes are available. A further 18 cores in the 2014 sample have been noted as preferential Levallois cores showing some Nubian characteristics in their morphology and preparation strategy, but there is insufficient information to confidently identify them as Nubian. An important consideration is that cores reflect the final stages of a reductive process which, at discard, may include broken, overshot, re-prepared or exhausted pieces. As such, some cores which did not possess all of the attributes recorded within Usik et al.'s [29] system (e.g. an overshot core distal) could still be considered technologically to fit within the framework of Nubian technology based on the features preserved. Each attribute is considered independently below.
Core morphology. Nubian cores are expected to show a pointed morphology due to the focus on distal and lateral preparation, categorised as triangular (greatest width at the proximal), cordiform (widest one-third above the distal) and pitched (parallel elongated laterals with a convergent distal end) [29]. Core shape was recorded on all cores, although 12 (10%) were incomplete or overpassed so shape could not be determined. Almost all identifiable cores had a pointed distal end (97%, n = 106); 34% were triangular (n = 41), 31% cordiform (n = 38), and 21% pitched (n = 25) (Table 4). Three of the remaining cores were more ovate than pointed due to reworking of the distal platform or being overshot; nevertheless, they had other features consistent with Nubian technology (preparation from a distal platform, preferential point removals).
Two cores are described as foliate, possessing a tapering distal but also an angled proximal platform creating a double-pointed shape (see also [92: 247]). An additional 64 radial cores were recorded at Tweefontein, 12 of which had a preferential flake removal and the rest with recurrent centripetal removals. These all had a circular to ovate morphology. As mentioned previously, there is some discussion over whether the Nubian Type 2 strategy grades into radial cores, with bilateral preparation being an extension of centripetal preparation; thus the presence of a distal ridge and a pointed core would be the key distinguishing factors [41,43]. The removal of the distal pointed end of the core, which happens relatively frequently due to a high rate of overshot removals, would effectively transform a Type 2 core into an ovate-shaped radial core. Shape is one of the weaker attributes within the Usik et al. [29] system since it is difficult to strictly define where one shape ends and another begins: triangular, pitched and cordiform shapes grade into ovate as the pointed shape broadens at the distal end. Since a generally pointed core shape is clearly an important factor in determining the pointed shape of the end-product, and core shape is affected by the preparation strategy, this is an important attribute to be able to quantify.
Organisational system. Two main Nubian core organisation systems are recognised, Type 1 and Type 2, with a combination of distal and lateral preparation acknowledged in a third category, Type 1/2. Core type could be determined on 101 of the cores from 2014 and 2015, but 20 (16.5%) were indeterminate owing to breakage, reworking or overshot removals. The majority of cores showed Type 2 preparation of the DMR from the laterals (n = 64, 52.3%), 35 showed a combination of both lateral and distal (Type 1/2) preparation (28.9%), and two had Type 1 distal preparation (1.7%) (Table 5). The low number of Type 1 cores, both of which are hornfels, and the observation that these are larger than average (see below) may indicate that this strategy was employed in early stages of Nubian core reduction, but the later reduction phase favoured lateral preparation as seen on Type 2 cores. This is difficult to test without more detailed study of the Tweefontein debitage, but numerous elongated products (blades) at the site could derive from Type 1 distal removals.
Distal median ridge and distal platform. The installation of a distal platform and a steep DMR to guide the preferential removal are key features that distinguish Nubian from centripetal Levallois methods. Of the 63 cores (58% of the total) that preserved the DMR, 97% (n = 61) had a DMR of less than 120°, regarded as sufficiently steep to be classified as Nubian under Usik et al.'s [29] scheme (range = 50-140°) (Table 6). More than half of these (59%, n = 37) were less than 90° (within the steep (n = 3) and semi-steep (n = 34) categories), with the mean DMR being 88.3°. The two cores with DMR angles of 125° and 140° had both undergone several phases of re-preparation and were abandoned when the distal convexity became too shallow, resulting in the final removals terminating with hinges or steps very close to the proximal.
Most cores (82%, n = 89) retained the distal platform, of which 14% were acute (n = 12), 63% were semi-acute (n = 56) and 9% were right-angled (n = 8); 15% (n = 13) exceeded a right angle (95-110°) (Table 7). The mean distal platform angle (DPA) was 76.1° (range = 40-110°). Twelve cores were missing their distal portion due to overshooting, where the final removal has extended beyond the distal end of the core. This is a common technological accident associated with Nubian production [38], occurring either because the convexity is not steep enough [26], or because the distal end of the core is too high relative to the rest of the flaking surface [41].
Prepared striking platform and preferential products. All complete cores had a prepared proximal striking platform. For 100 cores the number of discernible preferential removals ranged between one and seven; a further eight cores had been broken or re-prepared and abandoned with no clear preferential removals. A total of 67 had clear preferential point removals, 12 had preferential flake removals, and the remaining 21 cores had aberrant (hinged or stepped) flake scars indicating the early termination of the intended point removal, often due to an insufficiently steep central guiding ridge. The rate of aberrant scars on cores overall was high, observed on 72 cores, 39 of which had more than one aberrant scar. Furthermore, more than half of the preferential removals on 58 of the 75 Nubian cores were aberrant, while all visible preferential removals on the remaining 17 cores were hinged or stepped. This high rate suggests that cores were used to their maximum exhaustion; for example, on six out of ten silcrete cores the final scar had an aberrant termination. It should also be noted here that although points are expected to be the intended end-product, depending on the core convexities, flakes could also be produced [47,93]. In fact, many products, including the one refitting Nubian core and point (Fig 6J), possessed asymmetries or shape variation due to technological accidents that are not adequately captured by the simple category of points described below.
Points
Artefacts were identified as points at Tweefontein if they had convergent lateral edges, but overall this encompassed a wide range of morphologies, including both preferential Levallois pointed products and points whose edge morphology is shaped by retouch [59,94] (Fig 7). We present a preliminary description of the point assemblage here, with more detailed analysis to follow. The distinction between the two point types within the assemblage is not clear-cut since many preferential points (identified on the basis of dorsal scar patterns) have subsequent retouch or edge-damage that modifies the laterals. While debates often focus on the use of points as projectiles [95][96][97], we make no assumptions here about point function in the Tweefontein assemblage. A total of 218 points were recorded; 101 (46%) of these are complete points, 82 (38%) are proximal and medial portions with broken distal thirds, and 35 (16%) are medial or distal fragments (Table 8). All of the fragmentary artefacts are still identifiable as having a convergent morphology. While the proportions of complete and fragmented points are roughly even (between 44% and 56%) across most raw material types, silcrete is notably different, with 71% of points recorded being fragmentary and only 29% complete. This is unlikely to be due to inherent differences in raw material properties since one would expect to see higher breakage rates in more brittle rock types like hornfels. Instead, it may indicate that silcrete points were preferentially used to the point of exhaustion, adding to a number of observations suggesting that silcrete was treated differently from other raw materials at the site.
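The completeness comparison reduces to a per-material tally of complete versus fragmentary points. In the sketch below only the silcrete split (roughly 29% complete, 71% fragmentary) is taken from the text; the absolute counts are hypothetical and serve only to show the calculation.

```python
# (complete, fragmentary) point counts per raw material.
# Hypothetical values chosen to reproduce the proportions in the text.
point_counts = {
    'silcrete': (18, 45),   # ~29% complete, ~71% fragmentary (as reported)
    'hornfels': (50, 46),   # hypothetical roughly even split
}
for material, (complete, fragmentary) in point_counts.items():
    total = complete + fragmentary
    print(f"{material}: {fragmentary / total:.0%} fragmentary")
```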
A total of 169 points (77.5% of all points) had a discernible platform. The data suggest that most of these points originate from prepared cores: 155 points have faceted platforms (91.7%), 11 are plain (7.9%), two are cortical (1.3%), and one is punctiform (Table 9). The spatial association between points and Nubian preferential point cores suggests this was the primary production method, but dorsal scar patterning on the points is variable and the diagnostic distal portion is missing from 38% of points. Dorsal scar patterning could be identified on 84.4% of points: 109 points have unidirectional scars originating from the proximal (50.8%), 59 have crossed or radial scars from one or both laterals, and 15 have bidirectional scars with removals from the proximal and distal (Table 10). This indicates various reduction strategies including Type 2, 1/2 and, to a lesser extent, Type 1 Nubian methods (Figs 8-10). A major limitation in the study of Nubian technology is that it has been defined principally with reference to the cores, with very few studies focusing on the features of the products and how these differ from points made using other Levallois methods.
In accordance with the generally accepted definition of unifacial points in southern Africa [95,98], we use the term here to refer to points with unifacial retouch (n = 81, 37%), either invasive (n = 21, 10%) or marginal (n = 60, 28%) on one or both margins (Table 11). One complete silcrete point could be regarded as parti-bifacial, with some invasive retouch on the dorsal margins and thinning around the bulb on the ventral. A large number of points had edge-damage (n = 127, 58%) representing very informal retouch on part of an edge, potential use-wear or post-depositional damage. Two points had notches formed by single blows along the margins and seven points were unmodified. While other unifacial point assemblages in South Africa show distinct morphologies and clear cycles of reduction [98], the Tweefontein assemblage is highly diverse. Future study of the point assemblage aims to take a more holistic technological approach, using two- and three-dimensional geometric morphometric techniques for the quantification of point shape, alongside detailed study of scar patterns, retouch extent and intensity in order to better understand point variability.
Points at Tweefontein are generally smaller than the Nubian products described elsewhere [41,90], but closer to those less than 80 mm described as 'micro-Nubian' from Dhofar, dubbed the 'Muddayan' industry by Usik et al. [29]. Metric data were available for 70 complete points (Table 13). Although sample sizes were small, hornfels points were the most variable in size (range of 27.9-95.4 mm), with dolerite, CCS and quartzite showing similarly low variation (ranges within 18, 23 and 24 mm respectively) (Figs 11B and 12B, Table E in S1 Appendix). Silcrete lies in between, with the smallest point at 32.9 mm and the largest at 73.3 mm (Fig 7P). The smallest points on average were CCS (mean 41.6 mm, median 40.9 mm) and the largest were quartzite (mean 52.0 mm, median 49.7 mm). Silcrete points have a mean length of 45.3 mm and median of 44.0 mm.
The length of the last preferential point scar could be determined on 58 Nubian cores (Table F in S1 Appendix); on silcrete cores this had a mean of 30.8 mm (n = 6) and on hornfels cores the mean was 41.3 mm (n = 28). Silcrete cores were on average 15 mm smaller than hornfels cores when they were eventually discarded. A further indication that silcrete was used more intensively than hornfels is the relationship between numbers of cores and points: hornfels shows a ratio of 1 core to 1.3 points, and silcrete 1 core to 3.7 points.
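The core:point ratios above amount to a simple points-per-core measure of reduction intensity. The counts below are reconstructed from the reported raw material percentages (62% of 121 cores and 44% of 218 points for hornfels; 14% and 29% for silcrete) and are therefore approximate, not tallies from the database.

```python
def points_per_core(n_cores: int, n_points: int) -> float:
    """How many discarded points each discarded core accounts for."""
    return n_points / n_cores

# Approximate counts reconstructed from the reported percentages:
hornfels = points_per_core(75, 96)   # ~1.3 points per core
silcrete = points_per_core(17, 63)   # ~3.7 points per core
print(f"hornfels {hornfels:.1f}, silcrete {silcrete:.1f}")
```

A higher points-per-core value for silcrete is consistent with the smaller discard size of silcrete cores, both pointing to more intensive reduction of the non-local material.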
Across all raw materials, most cores retained some cortex on the lower surface since preparation of the core convexities focused on the distal and laterals (Fig 13). Similar proportions of silcrete (65%) and hornfels cores (67%) retained cortex despite silcrete cores being smaller at discard. For the point sample, as would be expected for predominantly Levallois products, only 11% of points retained cortex (n = 23). Most of these had less than 20% cortex (74%), although one unusual high-quality silcrete point preserved red cortex on 80% of the dorsal surface with fine regular retouch on both laterals (Fig 7O).
Additional sites with Nubian technology
In addition to the large assemblage at the main Tweefontein site, two smaller sites identified in surveys have Nubian cores, unifacial points and high levels of silcrete use: KOB20 on the Tra-Tra River, 15 km to the east, and TWEE7 on a ridge 1 km north-east of Tweefontein (Fig 14). KOB20 is located at the junction of the Kobaskloof (currently dry) and Tra-Tra Rivers at the foot of a cliff. The site was sampled in three 1 m squares, with recording of additional diagnostic artefacts; only non-metric attributes were recorded. Twenty-nine points, including unretouched, unifacial, parti-bifacial and one bifacial form, were observed (Table 14, Fig 15). One Nubian core was identified and a further four artefacts preserve features of Nubian cores in the form of overshot flakes and a partially reworked core (Fig 15L-15P). Silcrete dominates the assemblage, accounting for 57% (n = 128) and 65% (n = 145) in two one-metre sample squares, with over 1.3 kg of silcrete in the latter. Silcrete nodules were observed in the general area, with an additional source recorded on top of the plateau immediately to the north.
TWEE7 is a small scatter on the western edge of the high ridge to the north of Tweefontein. There are four Nubian cores, nine retouched and unretouched points, and a high localised incidence of silcrete (n = 20) compared to the low-density scatter across the rest of the ridge, where silcrete is otherwise absent. The artefacts are less refined than those observed at Tweefontein, which may be due to poorer-quality raw material, perhaps reflecting local availability on top of the ridge (Fig 16). Two invasively flaked quartzite bifacial points also occur at the site, which is interesting given the occurrence of parti-bifacial and bifacial forms at KOB20.
Evidence of Nubian technology was identified at a further nine locations in the Tankwa Karoo (Table 14, Figs 14 and 17), either as isolated finds or within more substantial MSA assemblages, such as RWF1, a raised ridge overlooking the Tankwa River in a similar setting to Tweefontein. Silcrete is rare in the eastern Tankwa Karoo but seven silcrete artefacts alongside Nubian technology at RWF1 and fifteen at RWF3 are over 50 km from potential sources on Cape Supergroup geology, with cortex retained on 40% of artefacts from the latter. An isolated silcrete point from a Nubian core was found at REN1 (Fig 17E), 40 km from potential sources. The Nubian cores in the wider Tankwa Karoo were predominantly hornfels (n = 8) and Type 2 cores were the most common (46%, n = 6). In four of the locations where isolated Nubian cores were observed, unifacial or unretouched points were also present (Table 14).
Nubian Levallois technology in local context
Our survey results from the Tankwa Karoo show the repeated association between Nubian Levallois technology, unifacial points and silcrete use. In the southern African archaeological sequence, unifacial points are most characteristic of the late MSA post-Howiesons Poort technocomplex, dating to MIS 3 around 58-50 ka [99]. In the Western Cape region, unifacial point-bearing post-Howiesons Poort assemblages occur in stratified excavated assemblages at Klein Kliphuis and Mertenhof in the Doring-Cederberg area, and Diepkloof and Varsche Rivier further afield [23] (Fig 1). Locally, this period is also associated with an emphasis on silcrete use, with heat-treatment noted at Mertenhof [100]. The rock shelter site Mertenhof (50 km from Tweefontein) provides a probable temporal anchor for Nubian Levallois technology in South Africa, with one core and two unretouched points from Nubian cores associated with unifacial points and elevated silcrete use within a stratified sequence [32]. These artefacts occur in the unit Upper BGG/WS, which overlies typically Howiesons Poort layers characterised by backed artefacts. The authors attribute this unit to the post-Howiesons Poort, bracketed above by an OSL age of 51.2 ± 2.2 ka for unit DGS [101]. Based on similarities with Mertenhof, the co-occurrence of Nubian technology, unifacial points and a high incidence of silcrete at the nearby open-air site of Uitspankraal 7 (UPK7) is also described as post-Howiesons Poort [32]. By the same reasoning, we suggest a post-Howiesons Poort MIS 3 age for the assemblage at Tweefontein.
PLOS ONE
UPK7, located 40 km north-east of Tweefontein, is currently the only published site that describes Nubian technology in South Africa besides our Tankwa Karoo evidence. Thirty-six Nubian cores which meet the requirements of Usik et al. [29] are reported alongside 18 unifacial points [32]. The majority of Nubian cores are silcrete (56%), followed by quartzite, which outcrops at the site. Core sizes at Tweefontein are similar to those at UPK7, which has a mean of 44 mm and a range of 33-86 mm [32] (Tweefontein: mean 48.8 mm, range 28-95 mm). A t-test between core lengths and widths in the two assemblages gave p-values of 0.06083 (length) and 0.9619 (width), indicating no statistically significant difference between the two (UPK7 data extracted from graphs in [32] using the online tool WebPlotDigitizer). The authors note that the small size of silcrete cores (mean 39 mm) likely relates to raw material nodules <100 mm from a likely source at Swartvlei, 5 km from the site. A low number of cores are made of chert and hornfels, although both are available from the Doring River adjacent to the site. The mean hornfels core length of 49 mm (n = 2) is close to the mean of 52 mm at Tweefontein. At UPK7, 58% of cores (n = 20) are Type 1/2 and 36% of cores (n = 13) are Type 2, with one Type 1 core identified. This contrasts with Tweefontein where Type 2 cores are most common (53%), followed by Type 1/2 (29%). There is a similarly low number of Type 1 cores (n = 2), and at both sites Type 1 cores are larger than average. Two other isolated Nubian cores have been identified in the region, both in silcrete. The first was found in the Olifants River Valley [24] and the second in surveys of the Bos River [17]
(Fig 17F and 17G). The Olifants River core is the apparent outlier to the geographic and environmental pattern that is emerging for Nubian technology, situated some distance from the Tankwa Karoo (70 km from Tweefontein) to the west of the Cederberg Mountains, and in a Fynbos Biome setting on the banks of a reliable perennial river. However, this distance is small when considered within the context of hunter-gatherer mobility ranges and wider regional technological trends. Thus, current evidence suggests that Nubian technology occurs geographically to the east or inland side of the Cape Fold Mountain belt, in areas with seasonal watercourses receiving overall low annual rainfall (260-160 mm for modern data), and ecologically in the Succulent Karoo Biome.
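The core-dimension comparison with UPK7 described above rests on a two-sample t-test on core lengths and widths. As a minimal illustration of that procedure, the sketch below computes Welch's t statistic in plain Python for two hypothetical core-length samples; the values are invented for the example, not the published Tweefontein or UPK7 measurements, and a |t| well below ~2 corresponds to the reported non-significant p-values.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom
    (unequal variances). p-values would come from the t distribution."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical core lengths (mm) for two surface assemblages.
site_1 = [44, 51, 39, 48, 55, 47, 50, 42]
site_2 = [46, 49, 41, 52, 44, 53, 45, 48]

t, df = welch_t(site_1, site_2)
print(round(t, 3), round(df, 1))  # |t| well below 2: no size difference
```

With real data, the p-value would then be read from the t distribution with the computed degrees of freedom (for example via scipy.stats.ttest_ind with equal_var=False).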
Preferential Levallois technology in regional context
The Nubian Levallois method of point production is a prominent and novel feature of MSA technology in the Tankwa Karoo, but this prompts important questions about its relationship with other methods of preferential point production. In Tankwa Karoo surveys more broadly, only three point cores have preparation directed from the proximal end as expected from the unidirectional convergent point production method [26]. The rest (n = 10) have radial or lateral preparation and three also have at least one distal scar, and all have a triangular to cordiform morphology. The main issue that prevents these cores from fulfilling the criteria of Nubian cores is that they do not all have a prominent DMR, in most cases due to technological accidents (overshot removals) or breakage. In terms of preferential Levallois cores that were not used to produce points, only 27 radially prepared cores with preferential flake scars were recorded in the Tankwa Karoo (in contrast with 293 radial cores without preferential scars), 15
of which were from Tweefontein. When considering preferential Levallois technology overall, even when the Tweefontein sample is excluded, the Nubian Levallois method seems to be the dominant preferential technique observed in the Tankwa Karoo.
In a separate research project surveying the Olifants River Valley which yielded a sample of over 13,000 artefacts, only two preferential Levallois points were recorded and no preferential point cores besides the Nubian core on the Olifants River mentioned above [24]. A total of 209 radial cores were recorded, two of which were noted as having preferential Levallois flake removals and another as a bidirectional Levallois core [102]. In Shaw's [17] surveys of the Bos River to the north of the Tankwa River, 88 radial cores were recorded but only three (non-Nubian) preferential Levallois cores, two of which were unidirectional point cores.
In considering the published information available for excavated MSA sites in the wider region, there is little data that specifically refer to Levallois point cores. This is partly an issue of terminology, with the Levallois concept rarely applied in a way that identifies preferential products. Furthermore, the alternative term 'parallel' [103] and the frequent grouping of prepared, radial and Levallois cores [104] masks variation in Levallois technology. Broadly, radial cores represent a morphological category of circular to ovate cores with two convex hemispheres and centripetally struck removals around the perimeter; the specific technological strategies are only sometimes distinguished as discoidal or recurrent centripetal Levallois [105]. In reality, many cores display flexibility in the role of the hemispheres for preparation or exploitation [106], hence the umbrella term 'radial' is widely used.
At Klein Kliphuis, 9.7% of cores (n = 35) were identified as Levallois (presumed to be preferential after [104]) occurring in greatest numbers in Spit Dvi9 when backed artefacts are the dominant tool form [107]. In spits Dvi6-5 when unifacial points dominate, Levallois cores are rare and cores are mostly radial and platform types. Mertenhof is the only excavated site in the region with Nubian Levallois technology confirmed. Unretouched Levallois points are most common in the upper part of layer BGG/WS, assigned to the post-Howiesons Poort, where 31 occur alongside nine unifacial points and five backed microliths [32]. Two of these points have Nubian characteristics and these occur with the single Nubian core in the lowest stratum of this layer. Silcrete use is high (27.3%), although not as high as in the underlying Howiesons Poort layers (32.2%). At the open-air site of Uitspankraal 7, 14 pointed products were identified with features characteristic of Nubian cores, and three overshot flakes preserve the distal platform of Nubian cores [32].
At Diepkloof, the post-Howiesons Poort is directed towards blade technology, flakes seldom show platform preparation and preferential point production is rare [108]. Rather, the MIS 5d industry 'MSA-type Mike' involves Levallois reduction with preferential points comprising 20% of flakes produced, 47.5% of which have faceted platforms. One point production strategy involves convergent unidirectional or orthogonal preparation resulting in typical Levallois points with a trapezoidal section. The second method employed produces points with a triangular section, central ridge and usually one cortical side, termed 'pointes accourcies' [108]. The emphasis on point production in the MSA-Mike industry is likened to the MSA II at Klasies River where unidirectional convergent point production is common [109,110]. Similarly high levels of triangular blank production (20%) are observed in the MIS 5 assemblage at Blombos, with high levels of platform preparation (54%) [111]. Characteristic unidirectional convergent Levallois cores (n = 4) are rare but associated with 13 points and six pseudo-points. Another site consistent with this pattern is Varsche Rivier where convergent flakes are most common in the lower Layers 06 and 07 [112]. Although this assemblage is attributed to the earlier MSA, the OSL ages for these layers are younger than expected (59-61 ka), as is currently the case for all dates at the site.
When MSA assemblages from across the Western and Southern Cape are considered, the period when preferential point production was most prominent was late MIS 5 or MSA II [109,111,113,114]. In contrast, unifacially retouched points are most common in the post-Howiesons Poort but do not appear to be technologically dependent on preferential point production, using a range of blank forms. The southern African interior may show a different pattern that presents a better fit with the evidence from the Tankwa Karoo. Trimmed (unifacially retouched) and untrimmed (unretouched) points are a common feature in Orange River assemblages at Orangia 1 and Zeekoegat 27a, alongside Levallois point cores which resemble Nubian cores based on the illustrations [33]. The excavated assemblage at Driekoppen also favours prepared point production, although no cores were reported at the site [34]. While the technological similarities with point production in the Tankwa Karoo are suggestive, for the moment, the chronology for the interior Karoo is poorly resolved. However, the stratigraphy for Orangia 1 and thermoluminescence dates from Driekoppen support Nubian-like technology in the later part of the MSA, which is consistent with the timing of the post-Howiesons Poort as it is recognised in the near-coastal Cape Fold Belt mountains.
Even though Levallois point production is characteristic of MIS 5 at certain sites [109,111,113,114], we argue that the other important features that distinguish the Tweefontein assemblage (high retouch rates and use of fine-grained silcrete) are more consistent with MIS 3 patterns [115]. Supported by the post-Howiesons Poort age associated with unifacial points, high silcrete use and Nubian technology at Mertenhof [32], the occurrence of this specific method of point production accords with the wider trend towards technological regionalisation during MIS 3. In contrast with the spatially widespread technologies of the Still Bay and Howiesons Poort in MIS 4, MIS 3 lithic assemblages are notably more heterogeneous [115], although this is further compounded by the various terminologies applied to them (e.g., post-Howiesons Poort, Sibudan, late MSA, final MSA, MSA 3/III) [11,98,116]. Often the only feature shared by these assemblages is that unifacial points are the dominant implement type, but the form of these points shows considerable regional variation. This is consistent with the proposal that populations became geographically fragmented under increasingly diverse environmental conditions [115], with the ~30 kyr span of MIS 3 adding a temporal dimension.
Particular contrasts are seen between the Fynbos Biome/Winter Rainfall Zone regions discussed above, and the KwaZulu-Natal region of eastern South Africa, encompassing the Indian Ocean Coastal Belt and Grassland Biomes in the Summer Rainfall Zone [9,10]. At Sibudu Cave, where 1.2 m-thick post-Howiesons Poort or 'Sibudan' deposits at 58 ka document a short-lived but intense occupation episode, different unifacial point types have been distinguished on techno-functional grounds [98,117]. These include 'Tongati' and 'Ndwedwe' types, the former characterised by a short triangular functional end and the latter emphasising lateral retouch along the length of both edges. These types are also recognised at nearby Holley Shelter, supporting the notion of a regional Sibudan technocomplex [116,118]. While Tongati point forms have been identified in the post-Howiesons Poort at Diepkloof, Ndwedwe points are absent, therefore extending the 'Sibudan' designation to include Diepkloof would be premature [108]. At another site in the Indian Ocean Coastal Belt Biome, Umbeli Belli, broad and narrow points have been distinguished on morphological grounds with possible functional differences implied [119]. A point form restricted to the final MSA of eastern South Africa is the hollow-based point, which occurs in small numbers at Sibudu [95,120], Umhlatuzana [121,122], Umbeli Belli [119,123], with single instances at Border Cave [124] and Kleinmonde [125].
In the Succulent Karoo Biome, an assemblage characterised by the large-scale production of awl-like points at the open-air site Swartkop Hill in Namaqualand also presents a localised point form attributed to MIS 3 [126]. However, it should also be noted that not all MIS 3 assemblages in the arid biomes are characterised by points, as at Varsche Rivier, Spitzkloof A and Apollo 11 [112,127,128]. Additionally, the open-air site of Putslaagte 1, on the Fynbos/Succulent Karoo Biome boundary, has yielded an MSA assemblage that post-dates 61-58 ka with no unifacial points or other MIS 3 features seen in the regional rock shelter record, hinting at more unrecognised variability when open-air assemblages are considered [129]. The Nubian technology seen at Tweefontein, Uitspankraal 7 and other Tankwa Karoo surface localities contributes a further regional technological expression to this broader MIS 3 pattern, also highlighting the importance of incorporating open-air sites and biogeographic diversity into future research.
Nubian Levallois technology in global context
Outside of South Africa, Nubian technology as it occurs in north-eastern Africa (Egypt, Sudan and Libya) is often associated with the Middle Palaeolithic/MSA Nubian Complex [27,44], with controversy surrounding this relationship and its definitions only amplified now that Nubian cores are found in the Levant, Arabia and India (see [47] for discussion). Given that this region is critical in debates surrounding early modern human dispersal routes 'Out of Africa', the recurring presence of Nubian technology has been suggested to show "trails of ... stone breadcrumbs" [28: 18] tracking past human movements along ecological corridors. The similarities that Rose et al. [28] observe between the Nile Valley and southern Arabian Nubian cores are the basis for their argument that populations with Nubian technology dispersed along the 'southern route' through the Horn of Africa (where Nubian cores also occur), across the Red Sea at the Bab-el-Mandeb strait. Conversely, others have proposed a 'northern route' from the Nile Valley across the Sinai Peninsula into the Levant [27,44,130]. Rather than a simple dispersal model along this route, Goder-Goldberger et al. [31] argue that since Nubian Levallois cores are found alongside other similar artefact forms, they represent part of a 'technological package' and thus reflect cultural diffusion between interacting populations ("diffusion with modification") rather than demic diffusion.
A complicating factor in assessing dispersal and diffusion models is that Nubian technology either is not a prominent reduction strategy or does not occur at all in some Middle Palaeolithic assemblages across Northeast Africa [92,131,132], the Levant [94,133,134] and Arabia, including some dated to MIS 5 [135][136][137]. The poorly-resolved chronology is a further hindrance to tracking the relationship between these occurrences and determining any directionality in their spread. Very few secure ages are available and these currently span over 100,000 years, therefore few solid conclusions can be drawn at the moment. A new discovery of Nubian technology in buried, though currently undated, deposits at Dimona in the Negev Desert, Israel, has great potential to contribute to this [138].
Although these regions discussed above (referred to collectively hereafter as 'northern') are spatially contiguous, they span vast distances (Fig 18) and are divided by major biogeographic barriers. The high levels of technological and typological diversity in the assemblages accompanying Nubian cores (or lacking them entirely) at varying spatial scales, set against considerable climatic fluctuation during MIS 5, prompt the question of whether Nubian technology could have arisen independently in these areas through convergence [47,49,53]. The opening up of corridors between different biogeographic zones during humid phases could account for the spread of populations with Nubian technology, but the subsequent isolation of populations in refugia under harsher conditions could also have driven the innovation of this specific strategy multiple times. Foley et al. [139] draw a pertinent distinction here between range expansions into previously arid zones during wetter phases, and what they regard as true arid adaptations that allowed humans to persist in marginal environments.
A key similarity that we observe between almost all locations where Nubian technology is found is the arid desert context, with 90% of the 59 reported Nubian occurrences falling within arid climate zones (Fig 18, Table G in S1 Appendix). Setting the temporal separation and chronological ambiguity aside, when compared with the modelled climatic extremes of glacial and interglacial scenarios (Fig 19), these locations all receive comparatively little rainfall which points to their persistence as arid environments, although occupation is likely to have favoured wetter phases as seen in MIS 5e, MIS 5c and MIS 5a [141,142]. Additionally, like the Tankwa Karoo, most of the northern sites occur in desert pavement settings which form under conditions of aridity and represent a long-lived, stable land surface [28,67,78,80,93,139]. While it is currently difficult to unravel the 'dispersal, diffusion or convergence' debate that surrounds the northern Nubian cores, we propose that the South African evidence-separated substantially in time and space-presents a good independent opportunity to examine whether, and why, Nubian technology might represent an adaptation to arid environments. Central to this is addressing whether it is the system of point production or the features of the points themselves that might confer an advantage on hunter-gatherers foraging in a high-risk environment.
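The 90% figure reflects a simple tally over Köppen-Geiger classes, in which codes beginning with 'B' (BWh, BWk, BSh, BSk) denote arid and semi-arid climates. A sketch of such a tally, using an invented site list (illustrative only; the real occurrences are listed in Table G in S1 Appendix):

```python
# Hypothetical Nubian occurrences mapped to Köppen-Geiger codes
# (illustrative only; see Table G in S1 Appendix for the real list).
occurrences = {
    "Site A": "BWh",  # hot desert
    "Site B": "BWk",  # cold desert
    "Site C": "BSk",  # cold semi-arid
    "Site D": "BWh",
    "Site E": "Csa",  # Mediterranean: the odd one out
}

arid = [s for s, code in occurrences.items() if code.startswith("B")]
share = len(arid) / len(occurrences)
print(f"{share:.0%} of occurrences fall in arid (B) climate zones")  # 80%
```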
At an assemblage level, the similarities between the South African and northern Nubian sites are limited to the reduction method for point production, with no other shared characteristics in terms of retouched tools and other core types [26,28,54,90]. Additionally, Type 1 cores are dominant at northern sites while Type 2 cores are most common in the South African assemblages recorded at Tweefontein and Uitspankraal 7 [32].
(Fig 18 caption: Köppen-Geiger climate data from CliMond.org, after [140]. Open-source spatial data from NaturalEarthData.com. Site numbers refer to Table G.)
It has also been noted that the DMR is much more pronounced on the northern cores than on the southern ones [54], confirmed by current data from Tweefontein. A further difference lies in average core size, with the South African cores being considerably smaller. All Tweefontein cores are below 80 mm, consistent with what is described as 'micro-Nubian' in Dhofar assemblages [28,29], and the even smaller cores present at K'One in Ethiopia [57]. Another noteworthy point is that very few northern assemblages have numbers of cores comparable to Tweefontein; only Nazlet Khater 1 and the western Dhofar sites have over 100 cores [28,29,145], suggesting different patterns of provisioning and mobility [93]. These larger Nubian core assemblages share a similar setting, in close proximity to water on the fringes of otherwise arid regions: the fertile Nile Valley, reliable springs in Dhofar and the perennial Doring River in the Tankwa Karoo.
The greatest constraint on past humans occupying an arid environment is the availability of water, which in turn dictates the abundance and distribution of food resources. To mobile hunter-gatherers, it is not only resource availability which is important, but also predictability and reliability are key to scheduling when and where these resources can be obtained [146][147][148]. In a water-poor environment, the frequency and distribution of food resources is likely to be more limited than in wetter environments, meaning that the risk associated with missing out on these, either temporally or spatially, is very high. It is therefore particularly important that an individual is provisioned with a suitable and functional tool at these critical windows of opportunity that allows them to target the resource successfully [149]. Mobility is one aspect of the strategies hunter-gatherers can employ to buffer against risk. High levels of mobility allow foragers to exploit widely-spaced resources or compensate for resource uncertainty. This is particularly relevant in arid environments and may account for large Nubian core assemblages near water sources, with cores transported away and discarded in more marginal settings-a pattern noted in Egypt, Dhofar and the Tankwa Karoo [28,93].
Levallois technology is often cited as being suited to high levels of mobility, producing flakes with an efficient cutting-edge length to raw material mass ratio and generating a high number of blanks relative to raw material waste [150,151]. A further advantage is producing a preferred and standardised end-product [152].
(Fig 19 caption: Red areas receive less than 300 mm rainfall. Annual precipitation taken from bioclimatic data downloaded from WorldClim.org, after data from [143,144]. Open-source spatial data from NaturalEarthData.com. https://doi.org/10.1371/journal.pone.0241068.g019)
Although Van Peer [26] compares the efficacy of 'classical' (centripetal) Levallois flake production against Nubian point production and concludes there is little difference in productivity, we are unaware of any explicit technological or experimental comparison between the unidirectional convergent and Nubian point reduction methods. As mentioned previously, other researchers have viewed Nubian technology as an extension of Levallois centripetal reduction, with the main difference being increased attention to the distal portion of the core by installing the DMR [30,31,92]. One effect of the DMR appears to be that products are more elongated than the more-or-less equilateral triangles produced by the unidirectional method, often described as pointed flakes rather than "true" points [26,31]. A potential benefit of this elongation is a greater cutting-edge length to mass efficiency, which is favourable in a toolkit geared to high mobility. A further effect of the DMR emphasised by Groucutt [47,94] is that the resultant points are straighter, thicker at the distal end and therefore stronger than those produced using non-Nubian methods. Groucutt [47,94] argues that more robust points would be less prone to breakage and therefore more reliable under the risky conditions of an arid environment; however, this suggestion currently rests only on qualitative observations and remains to be properly tested.
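The claimed efficiency benefit of elongation can be illustrated geometrically. For triangular blanks of equal area and thickness (hence roughly equal mass), a more elongated triangle has a longer perimeter and therefore more cutting edge per unit mass. The sketch below compares an equilateral triangle and an elongated isosceles triangle of the same area; the numbers are purely geometric, not measurements from any assemblage.

```python
import math

def iso_triangle_perimeter(base, area):
    """Perimeter of an isosceles triangle with a given base and area."""
    height = 2 * area / base
    side = math.hypot(base / 2, height)
    return base + 2 * side

AREA = 600.0  # mm^2, same for both blanks -> same mass at equal thickness

# Equilateral triangle of this area: side s where area = sqrt(3)/4 * s^2
s = math.sqrt(4 * AREA / math.sqrt(3))
equilateral = 3 * s

# Elongated point-like blank: narrow base, same area
elongated = iso_triangle_perimeter(base=20.0, area=AREA)

print(round(equilateral, 1), round(elongated, 1))
# The elongated blank carries the greater edge length for identical mass.
```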
Within the context of the South African MIS 3 technological systems that favoured unifacially retouched tools [115], it might be expected that thicker blanks with greater resharpening potential would be preferred over thinner ones. In post-Howiesons Poort assemblages, unifacial points appear to be produced through retouch on a range of blank types (flakes and blades) and predetermined point methods are rarely mentioned in the literature; the focus is on the modification of the blank by retouch [98]. A hypothesis that we propose for the regionally-specific Tankwa Karoo technology is that Nubian Levallois products were an effective way of producing predetermined points that were thick enough to withstand use and multiple resharpening episodes. This would reflect an adaptation of the wider tradition of unifacial point production to incorporate a strategy that conserves raw material and reduces the risk of tool exhaustion or breakage, suited to the higher-risk demands of a marginal environment. While one interpretation of Tweefontein is as a tooling-up site to provision individuals with points that could be transported for use elsewhere on the landscape, alternatively Nubian cores themselves could have served to provision individuals [149]. Current evidence from surveys at a landscape scale shows the presence of both points and heavily reduced Nubian cores in parts of the eastern Tankwa Karoo, which may indicate that both components played a role in transported toolkits.
Although Nubian technology is often described as distinctive and emphasised in numerous key debates surrounding cultural transmission and human adaptations, the current state of affairs means that very little data can be meaningfully compared at regional or wider scales. Until this is rectified, the discussion presented above must be treated as hypotheses to be tested in future research.
Conclusion
The identification of Nubian technology at a number of regional sites in South Africa, in independent studies, can offer a new perspective removed from the 'dispersal' or 'diffusion' scenarios of the debate surrounding its occurrence. The clear chronological (MIS 3 vs. MIS 5) and geographical (~6000 km) separation of the South African samples precludes either of these as explanations for its origin. Rather, the technology is proposed to have arisen through convergence out of existing Levallois technologies [25,32].
Until recently, interest in the MIS 4 Still Bay and Howiesons Poort technocomplexes of the South African MSA has eclipsed the study of the subsequent post-Howiesons Poort and final stages of the MSA in MIS 3 [11,98,[153][154][155][156]. Initial suggestions that human behaviour experienced a devolution, regression or behavioural reversal following the innovative bursts seen in MIS 4 are no longer upheld [157][158][159] and the period is now generally viewed as reflecting shifts in technological organisation and adaptive strategies [4,115,160,161]. The climate of MIS 3 was not uniformly characterised by hyper-aridity as is sometimes stated [162][163][164], instead seeing rapid fluctuations and considerable variability in South Africa's different biomes [11]. While it has been noted that the Still Bay and Howiesons Poort broadly occupy a coastal ecological niche [165], sites attributed to the post-Howiesons Poort and MIS 3 more widely occur in almost all of South Africa's biomes [11,115]. This expansion out of the higher-rainfall Cape Fold Belt Mountains and Lesotho Highlands into more arid parts of the South African interior is accompanied by a diversification of MIS 3 technologies suggesting that populations became more disconnected [115]. In the Tankwa Karoo and potentially the interior Karoo more widely, the use of the Nubian Levallois technique to produce points demonstrates flexibility [166] in adapting existing lithic traditions (unifacial points) to what are interpreted here as environmentally-specific challenges.
Continued research in the Tankwa Karoo as part of the EU-funded 'TANKwA' project intends to further our understanding of Nubian technology and points at Tweefontein and related sites. Specifically, the application of geometric morphometrics to Nubian cores and points will generate quantitative data that allows more rigorous comparison between the South African sample and Nubian technology elsewhere. Three-dimensional geometric morphometric methods will be used to quantify core shape, which plays a key role in defining Nubian technology but is currently insufficiently described by qualitative categories. Two-dimensional methods will be applied to the point assemblage, together with more detailed quantification of the extent and location of retouch, in order to better understand variability in point shape and form. Further insights will come from a detailed study of scar patterning and other associated debitage at the site to determine earlier phases of core reduction and the role of Nubian Levallois methods in the assemblage more broadly. New approaches to the study of Nubian cores that move beyond attribute-based data are necessary if the definition and distribution of Nubian Levallois technology is to be refined within a thorough global comparative framework.
"year": 2020,
"sha1": "7c2e973e2a95b02519827e1d234d3df6a0fcedd3",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0241068&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "661fc25e2c208f59d53a5272651a0d0f853ac44a",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
Rolling Bearing Weak Fault Feature Extraction under Variable Speed Conditions via Joint Sparsity and Low-Rankness in the Cyclic Order-Frequency Domain
Rolling bearings are critical to the normal operation of mechanical systems, which often undergo time-varying working conditions. When local defects appear on a rolling bearing, transient impulses are generated and covered by strong background noise. Therefore, extracting the weak fault feature of a rolling bearing under time-varying speed is critical to mechanical system diagnosis. A weak fault feature extraction strategy for rolling bearings under time-varying working conditions is proposed. Firstly, the order-frequency spectral correlation (OFSC) is computed to transfer the measured signal into a higher-dimensional space. Then, a joint sparsity and low-rankness constraint is imposed on the OFSC to detect the time-varying fault characteristics. An algorithm in the alternating direction method of multipliers (ADMM) framework is derived. Finally, the enhanced envelope order spectrum (EEOS) is applied to further detect the defective features, which makes the fault features more obvious. The feasibility of the proposed method is confirmed by simulations and an experimental case.
Introduction
Rolling bearings are among the most vital rotating elements in mechanical systems (wind turbines [1], centrifuges, helicopters, washing machines and so on [2,3]), and their failure can directly cause machine breakdown [4]. When rolling bearings operate normally, vibration is generated by raceway waviness, radial play, friction and so on [5,6]. Once local defects occur, transient impulses appear in the vibration signals (quite different from the vibration signals in normal operation), covered by strong background noise [7]. Thus, vibration-based analysis can be used for rolling bearing fault diagnosis, and the fault feature extraction task aims to extract the features of weak transient impulses from the noisy signal, which is valuable for fault diagnosis [8][9][10] and equipment maintenance.
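The FCFs referred to below follow from standard bearing kinematics: for n rolling elements of diameter d on a pitch diameter D with contact angle phi, the ball-pass frequency of the outer race is BPFO = (n/2) * fr * (1 - (d/D) cos phi) and of the inner race BPFI = (n/2) * fr * (1 + (d/D) cos phi), where fr is the shaft rotation frequency. A sketch with invented geometry values (not from any bearing in this study):

```python
import math

def bearing_fcfs(n_balls, d, D, phi_deg, fr):
    """Ball-pass frequencies (Hz) for outer/inner race defects at
    shaft rotation frequency fr (Hz), from the standard kinematic formulas."""
    ratio = (d / D) * math.cos(math.radians(phi_deg))
    bpfo = 0.5 * n_balls * fr * (1 - ratio)
    bpfi = 0.5 * n_balls * fr * (1 + ratio)
    return bpfo, bpfi

# Hypothetical geometry: 8 balls, d = 8 mm, D = 40 mm, zero contact angle
bpfo, bpfi = bearing_fcfs(8, 8.0, 40.0, 0.0, fr=25.0)
print(round(bpfo, 1), round(bpfi, 1))  # 80.0 120.0

# Expressed as orders (multiples of shaft speed) the speed cancels out,
# which is why order analysis keeps fault features fixed under varying speed.
print(round(bpfo / 25.0, 2), round(bpfi / 25.0, 2))  # 3.2 4.8
```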
When the rotating speed is stationary, several methods can be used to identify the stable fault characteristic frequency (FCF) and so determine the type of fault (inner race, outer race or rollers). Different techniques have been developed to identify the fault types according to different FCFs, such as envelope analysis [11] (shifting the higher resonance frequency band into a lower fault frequency band to achieve a higher resolution), wavelet analysis [12] (denoising the vibration signal via wavelet decomposition), spectral kurtosis [13,14] (adaptively choosing the resonance frequency band via kurtosis), empirical mode decomposition and its developments [15][16][17] (denoising the vibration signal via mode decomposition), frequency band entropy [18] (adaptively choosing the resonance frequency band via the frequency band entropy) and time-frequency analysis (TFA) [19,20] (extracting the fault feature in both the time domain and the frequency domain). Moreover, time-frequency analysis can transfer the measured vibration signal into a higher-dimensional space, which can highlight fault characteristics. Yang et al. [21] introduced a sparsity constraint in the time-frequency domain to extract the rolling bearing fault feature. Unfortunately, the weak fault feature is often hidden by strong background noise, increasing the difficulty of extracting the fault feature from the time-frequency representation (TFR) [22]. Note that a sparsity constraint alone in the time-frequency domain rarely achieves satisfactory extraction performance. In order to denoise the TFR, Yu et al. [23] exploited the sparsity of the fault feature and the low-rankness of the background noise in the time-frequency domain. The TFR is denoised and the expected fault feature is obtained via robust principal component analysis (RPCA) [24]. However, in this case, the fault feature is also low-rank [25,26], not just sparse.
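The envelope-analysis idea cited above can be sketched without any signal-processing library: rectifying the signal exposes the low-frequency amplitude modulation carried by the high-frequency resonance, and a single-bin DFT probes the modulation frequency. The resonance and fault frequencies below are synthetic choices for illustration, not taken from any particular bearing.

```python
import math

FS = 20_000                      # sample rate (Hz)
N = FS                           # one second of signal
F_RES, F_FAULT = 3000.0, 100.0   # synthetic resonance and fault rates

# A 3 kHz resonance carrier, amplitude-modulated at the fault rate.
x = [(1 + math.cos(2 * math.pi * F_FAULT * t / FS))
     * math.cos(2 * math.pi * F_RES * t / FS) for t in range(N)]

def dft_mag(sig, f):
    """Magnitude of a single DFT bin at frequency f (Goertzel-style probe)."""
    re = sum(s * math.cos(2 * math.pi * f * t / FS) for t, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * f * t / FS) for t, s in enumerate(sig))
    return math.hypot(re, im)

env = [abs(s) for s in x]        # full-wave rectification approximates the envelope

# The fault rate is invisible in the raw spectrum but strong in the envelope.
print(dft_mag(x, F_FAULT) < dft_mag(env, F_FAULT))  # True
```

In practice the envelope is usually obtained via the Hilbert transform after band-pass filtering around the resonance, but the principle is the same.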
If the parameter selection is unreasonable, the obtained fault feature will be confused with the noise component. To tackle this issue, a periodical sparse low-rank model was proposed [27,28] that treats the fault characteristics in the time-frequency domain as a simultaneously sparse and low-rank component [29]. However, most of the abovementioned methods only consider stable rotating speed conditions and are invalid under variable speed conditions.
When the rolling bearing undergoes time-varying working conditions, the FCFs will also be time-varying. The time-varying FCFs further increase the difficulty of the fault feature extraction task [30]. If the shaft rotating speed can be determined, the measurement can be resampled into the angular domain [31]. Then, the order tracking technique [32] enables the FCFs to be transferred into a fixed order. Thus, the system kinematics of rolling bearings (periodical impulses and modulations) are reflected in the angular domain. Moreover, the system dynamics of the rolling bearings (system resonances) are still shown in the time domain [33]. Along these lines of thought, the concept of the angle-time cyclostationary (AT-CS) signal [34] was proposed to describe the rolling bearing fault signal in both the time and angular domains. A useful tool named order-frequency spectral correlation (OFSC) [35] was also proposed to transfer the signal into the cyclic order-frequency domain via a double Fourier transform, which transfers angle to the cyclic order domain and time to the frequency domain. When calculating the OFSC, the shaft rotating speed is needed and the vibration signal is resampled into the angular domain. Thus, the time-varying FCF in the OFSC is transferred into a specific order. By identifying this specific order, the type of fault can be determined. However, as with other TFA methods, the specific cyclic order in the OFSC will also be interrupted by the strong background noise in real scenarios, where the time-varying fault feature may be difficult to identify. Thus, the problem of how to successfully extract the time-varying rolling bearing fault feature under heavy interference should be investigated.
In this paper, a weak fault feature extraction strategy for rolling bearings under variable speed is proposed. The measured vibration signal is first transformed into the cyclic order-frequency domain, where it is analyzed by the OFSC. Then, the fault feature contained in the OFSC is extracted according to the joint sparsity and low-rankness model via an ADMM-based algorithm. Finally, the enhanced envelope order spectrum (EEOS) is used to enhance the extracted fault feature, making the fault feature easier to identify. The following are the primary contributions of this paper:
•
The OFSC is calculated by converting the measurement of a rolling bearing with variable speed into the cyclic order-frequency domain. The joint sparsity and low-rankness of the fault feature in OFSC is firstly revealed in this paper, which can be utilized to extract weak fault features of rolling bearings under variable speed conditions; • A joint sparsity and low-rankness constraint is imposed on the OFSC to model the fault feature. To optimize the proposed model, an algorithm named ADMM-SLRJEM is developed to extract the fault feature in OFSC.
The rest of this paper is structured as follows. The measurement of a faulty rolling bearing with variable speed is considered, and the corresponding concepts of the AT-CS signal are presented in Section 2. Then, the fault feature in the OFSC is revealed to possess joint sparsity and low-rankness, and an ADMM-based algorithm is derived in Section 3. Finally, in Sections 4 and 5, the feasibility of the proposed method is confirmed by simulations and an experimental case, respectively.
Problem Statement
The most prevalent locations of rolling bearing local defects are the inner race, outer race and rollers. The measured vibration signal z(t) can be considered as the superposition of the transient impulses excited by local defects and the background noise, i.e., z(t) = x(t) + n(t), where x(t) denotes the transient impulses and n(t) is the strong background noise. Fault characteristic detection is devoted to separating the features of x(t) from z(t). When the rolling bearing undergoes time-varying working conditions, the feature of x(t) is time-varying as well. Moreover, the background noise n(t) is very strong in real scenarios, i.e., the signal-to-noise ratio (SNR) is very low or even negative, which increases the difficulty of the extraction task. Overall, this paper aims to extract the fault features of rolling bearings with variable speed and heavy interference.
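To make the low-SNR setting above concrete, the sketch below (illustrative only; the function name and the seed are assumptions, not from the paper) scales white Gaussian noise so that the mixture z(t) = x(t) + n(t) reaches a prescribed SNR in dB; negative values give the noise-dominated case described here:

```python
import math
import random

def add_noise_at_snr(signal, snr_db, seed=0):
    """Add white Gaussian noise scaled so that z = x + n has the
    requested signal-to-noise ratio in dB."""
    rng = random.Random(seed)
    p_signal = sum(s * s for s in signal) / len(signal)
    # noise power needed for the target SNR: p_n = p_s / 10^(SNR/10)
    sigma = math.sqrt(p_signal / (10.0 ** (snr_db / 10.0)))
    return [s + rng.gauss(0.0, sigma) for s in signal]
```

For example, `add_noise_at_snr(x, -16.0)` reproduces the SNR = -16 dB condition used later in the simulation study.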
The Angle-Time Cyclostationarity of the Faulty Bearing with Variable Speed
The vibration signal generated by a rolling bearing with constant speed is typically cyclostationary. However, the transient impulses and the carriers related to time are no longer constant when the rotating speed is time-varying. The system kinematics of rolling bearings (periodical impulses and modulations) is reflected in the angular domain. Moreover, the system dynamics of the rolling bearings (system resonances) are still shown in the time domain. Therefore, the measurement should be evaluated by combining the time domain and angular domain, which can employ the concept of angle-time cyclostationary (AT-CS) signal.
An effective tool for AT-CS signal analysis is the order-frequency spectral correlation, which can transfer the analyzed signal into the (cyclic) order-frequency domain. Due to the periodic characteristics of rolling bearings in the angle domain, which are preserved during variable speed rotation, this tool can effectively extract the cyclic orders of the fault characteristics. Let Z denote the measured signal transferred from z(t) into the OFSC domain, which can be decomposed as Z = X + N, where X and N in the OFSC are related to x(t) and n(t), respectively. When the noise N is strong, the expected fault feature may be overwhelmed in Z. Hereafter, the problem can be re-described as how to feasibly extract the time-varying fault feature X from the measurement Z.
The Calculation of the Order-Frequency Spectral Correlation
The angle-time autocorrelation function (ATCF) R_2x(θ, τ) = E{x(θ)x*(θ − τ)} of x(t) can be expanded as a Fourier series at the cyclic orders i/Θ as R_2x(θ, τ) = Σ_i R^i_2x(τ) exp(j2πiθ/Θ), where E{·} is the statistical mean, τ represents the time-lag, (·)* denotes the complex conjugation, R^i_2x(τ) is the angle-time cyclic correlation function and θ(t) = ∫_0^t ω(t)dt. The OFSC is then obtained by the double Fourier transform S_x(α, f) = F_{τ→f}{F_{θ→α}{R_2x(θ, τ)}}, where R_2x(θ, τ) is the angle-time autocorrelation function, F_{θ→α} denotes the Fourier transform from the variable θ to α, f is the spectral frequency and α is the cyclic order. The enhanced envelope order spectrum (EEOS) is also introduced to further highlight the fault characteristics, defined as EEOS(α) = ∫_{f1}^{f2} |S_x(α, f)| df, where f1 and f2 are the limits of the given spectral frequency band.
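As a hedged illustration of the EEOS definition above, the sketch below integrates the OFSC magnitude over the band [f1, f2] with a simple rectangle rule on a uniform frequency grid; the array layout (cyclic orders along rows, spectral frequencies along columns) and the names are assumptions for illustration:

```python
import numpy as np

def eeos(ofsc, freqs, f1, f2):
    """Enhanced envelope order spectrum: integrate |OFSC(alpha, f)| over the
    spectral-frequency band [f1, f2] for every cyclic order alpha.
    `ofsc` has shape (n_orders, n_freqs); `freqs` is a uniform grid."""
    band = (freqs >= f1) & (freqs <= f2)
    df = freqs[1] - freqs[0]
    # rectangle-rule approximation of the integral in the EEOS definition
    return np.abs(ofsc[:, band]).sum(axis=1) * df
```

The result is a one-dimensional spectrum over cyclic order, which is what makes the fault cyclic orders easier to read off than in the full two-dimensional OFSC map.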
The Proposed Rolling Bearing Fault Feature Extraction Method under Variable Speed Conditions
The sparsity and low-rankness of the rolling bearing fault feature in the OFSC are first demonstrated in this section. Then, an algorithm in the ADMM framework is derived to optimize the proposed model.
The Joint Sparsity and Low-Rankness of the Fault Feature in OFSC
In this subsection, a specific example is given to illustrate the sparsity and low-rankness of the faulty rolling bearing characteristics in the OFSC. According to the previous study, the faulty rolling bearing signal with variable speed is x(t) = Σ_{i=1}^{L_m} A(t) e^{−β(t−T_i)} sin(2π f_resonance (t − T_i)) u(t − T_i), where L_m represents the number of impulses, A(t) is the amplitude modulation with µ the amplitude modulation magnitude, β is the damping-related coefficient, T_i is the occurrence time of the i-th impulse, u(t − T_i) denotes the step signal with T_i as the starting point, and f_resonance is the resonance frequency.
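One plausible numerical reading of this impulse model is sketched below; the default values of β, f_resonance and µ are placeholders chosen for illustration, not the paper's simulation parameters (those are introduced in Section 4):

```python
import numpy as np

def simulate_impulses(t, impact_times, beta=800.0, f_res=3000.0, mu=0.3, f_shaft=None):
    """One plausible reading of the impulse model: each defect impact at T_i
    launches an exponentially damped resonance u(t - T_i) * exp(-beta*(t - T_i))
    * sin(2*pi*f_res*(t - T_i)); `mu` sets an optional shaft-frequency amplitude
    modulation of the kind used for inner-race faults."""
    x = np.zeros_like(t)
    for ti in impact_times:
        mask = t >= ti  # the step signal u(t - T_i)
        amp = 1.0 if f_shaft is None else 1.0 + mu * np.cos(2 * np.pi * f_shaft * ti)
        x[mask] += amp * np.exp(-beta * (t[mask] - ti)) * np.sin(2 * np.pi * f_res * (t[mask] - ti))
    return x
```

Under a variable speed, the impact times T_i are no longer equally spaced in time (only in angle), which is exactly why the intervals between impulses in Figure 1a vary with time.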
Here, a simulated signal of the outer race fault in a speeding up condition and its OFSC are displayed in Figure 1a,b. The detailed parameters are introduced in Section 4. It can be seen that the intervals between two fault impulses are variable with time. The feature of the fault is sparse in the obtained OFSC. Meanwhile, the singular value decomposition (SVD) is applied on the simulated OFSC, as seen in Figure 1c. It can be seen that the faulty characteristic is relatively large on only several singular values. Hence, the joint sparsity and low-rankness can be used for the extraction of the fault feature.
Combined with Equation (2), the optimization problem for the jointly sparsity- and low-rankness-enforced X is min_X λ_0||X||_* + λ_1||X||_L1 subject to ||Z − X||_F ≤ δ, where λ_0 and λ_1 adjust the weights of the low-rank and sparse components, ||·||_* is the nuclear norm, ||·||_L1 denotes the L1 norm, ||·||_F is the Frobenius norm and δ is an upper bound on the noise level ||N||_F. Once the optimization problem (8) is solved, the jointly sparse and low-rank fault feature X in the OFSC can be extracted.
The Optimization Algorithm Derivation
Equation (8) can be further expressed as an unconstrained optimization problem, min_X λ_0||X||_* + λ_1||X||_L1 + (1/2)||Z − X||_F^2. The three terms in Equation (9) are all convex, which means the whole optimization problem is convex and has a globally optimal solution. ADMM is a high-efficiency computational framework that allows parallel computing [36], and the convergence of ADMM has been proved in ref. [37]. It can be seen from Equation (9) that the sparsity and low-rankness of the expected fault feature are simultaneously constrained, which makes the optimization more difficult. To decouple the sparsity and low-rankness constraints, an additional variable with the constraint X = Y is introduced, so that the problem can be written as min_{X,Y} λ_1||X||_L1 + λ_0||Y||_* + (1/2)||Z − X||_F^2 subject to X = Y. Then, the augmented Lagrangian function L is L(X, Y, D) = λ_1||X||_L1 + λ_0||Y||_* + (1/2)||Z − X||_F^2 + ⟨D, X − Y⟩ + (ρ/2)||X − Y||_F^2, where D is the Lagrange multiplier and ρ is the penalty parameter related to the convergence speed. Problem (9) can be solved according to Equation (11) by alternately solving three sub-problems by iteration: X^(k+1) = argmin_X L(X, Y^(k), D^(k)) (Equation (12)), Y^(k+1) = argmin_Y L(X^(k+1), Y, D^(k)) (Equation (13)), and a multiplier update on D, where (·)^(k) denotes the k-th iteration. Combining similar terms and ignoring constant terms (the detailed process can be found in Appendix A), the sub-problem of Equation (12) reduces to an L1-regularized quadratic problem in X, which has a closed-form solution.
The Solution of Sub-Problem (12)
To minimize the L1 norm, the soft-threshold function [38] is used, defined as soft(a, T) = sign(a) · max(|a| − T, 0), where a can be regarded as an element of a matrix A and T is the threshold. Thus, X^(k+1) can be calculated by applying the soft-threshold function elementwise. Before minimizing the nuclear norm, the SVD of X^(k+1) − D^(k) should first be computed, which can be written as X^(k+1) − D^(k) = USV^H,
The Solution of Sub-Problem (13)
where U and V are unitary matrices and S is a diagonal matrix composed of the singular values. The nuclear norm can be defined as the sum of the absolute values of all elements of S, i.e., ||S||_L1. Therefore, the iteration step of sub-problem (13) is a soft-threshold applied to the singular values, Y^(k+1) = U soft(S, λ_0/ρ) V^H. Hereafter, Algorithm 1 can be formulated to optimize the sparsity and low-rankness jointly enforced model (SLRJEM).
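The two closed-form steps above amount to elementwise soft-thresholding (for the L1 sub-problem) and singular value thresholding (for the nuclear-norm sub-problem); a minimal sketch, with names chosen for illustration:

```python
import numpy as np

def soft_threshold(A, T):
    """Elementwise soft-threshold soft(a, T) = sign(a) * max(|a| - T, 0),
    the closed-form minimiser of the L1-regularized quadratic sub-problem."""
    return np.sign(A) * np.maximum(np.abs(A) - T, 0.0)

def singular_value_threshold(A, T):
    """Nuclear-norm prox: factor A = U S V^H, soft-threshold the singular
    values in S, and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - T, 0.0)) @ Vt
```

Both operators are separable and cheap apart from the SVD, which is why the split X = Y makes the joint model tractable.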
Algorithm 1 ADMM-SLRJEM.
Input: measured vibration signal in OFSC Z, λ_0, λ_1, ρ, K (the maximum number of iterations), ε (the stopping rule).
1: Initialization: X^(0) = Y^(0) = D^(0) = 0, k = 0.
2: while k < K and the stopping rule is not met do
3: update X^(k+1) by the elementwise soft-threshold step % according to Equation (17)
4: [U, S, V] = SVD(X^(k+1) − D^(k)) % according to Equation (18)
5: update Y^(k+1) by soft-thresholding the singular values in S
6: update the multiplier D^(k+1)
7: end while
Output: the extracted fault feature X.
The overall procedure of the proposed method is as follows. The OFSC of the measured vibration signal is first calculated. Then, the obtained OFSC is constrained to be jointly sparse and low-rank, and the derived algorithm is used to detect the time-varying fault characteristic. Finally, the EEOS is computed to further enhance the fault feature in the OFSC, and the fault characteristic order can be further analyzed. The detailed process is: • Step 1: Resample the obtained vibration measurement into the angular domain and calculate the OFSC via Equation (4). • Step 2: Separate the fault feature X from the obtained OFSC Z via the proposed algorithm. The obtained OFSC is constrained to be jointly sparse and low-rank, as explained in Section 3.1. The algorithm derived in Section 3.2, named ADMM-SLRJEM, is applied to separate the time-varying fault feature in the obtained OFSC. • Step 3: Calculate the EEOS according to Equation (5) to further enhance the separated fault feature in the OFSC. The enhanced fault feature then becomes more obvious for fault diagnosis of rolling bearings with variable speed.
Simulation Study
In this section, several cases are simulated to confirm the performance of the proposed method. Firstly, the three parameters λ_0, λ_1 and ρ of Algorithm 1 should be determined. The parameters λ_0, λ_1 ∈ [0, 1] are related to the ratio of the sparse and low-rank components: the more sparse and low-rank X is, the larger λ_0 and λ_1 are. The parameter ρ > 1 is related to the convergence speed and guarantees convergence, and it has little impact on the extraction performance. Thus, λ_0 = 0.29, λ_1 = 0.2 and ρ = 1.5 are adopted in this paper to achieve a better performance and proper speed for fault feature extraction in the OFSC. The simulations in this section are carried out in MATLAB R2019b on an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz system with 16 GB RAM. Note that there are few relevant studies focusing on fault feature extraction in the cyclic order-frequency domain. Inspired by refs. [21,25] in the time-frequency domain, the proposed method is compared with methods imposing a single sparsity constraint and a single low-rankness constraint on noisy OFSCs. The outer and inner race faults (following Equations (6) and (7)) are simulated in this section as two specific cases. Then, the OFSC is computed to transfer the time-domain signal into the cyclic order-frequency domain. The time costs of the three methods are listed in Table 1, which shows that the computation burdens are similar. The OFSCs of the two simulated signals are shown in Figure 4. The fault cyclic orders (fault feature) can be found clearly in the OFSCs of the pure transient impulses. However, the strong background noise almost completely covers the fault feature, as shown in Figure 4b,d. Then, the EEOSs of the two noisy OFSCs are computed in an attempt to enhance the fault feature. Figure 5 shows that the fault cyclic orders are more pronounced compared with those in the OFSCs, especially at the condition of α_BPOO^Q = 5. The first-order component at the condition of α_BPOO^L = 2.5 is obvious.
However, its multipliers are still covered by the strong background noise and need to be effectively extracted. Table 1. Time costs of the three methods:

Constraint: Time
Joint sparsity and low-rankness: 6.04 s
Single sparsity: 6.23 s
Single low-rankness: 6.11 s

Hereafter, the three methods are applied to the noisy OFSCs. The denoised results can be seen in Figures 6 and 7. The fault feature can be effectively extracted by the proposed method owing to its joint sparsity and low-rankness inducing capability, as shown in Figures 6a and 7a. The proposed method can effectively extract the outer race fault feature in OFSCs under variable speed conditions. As described in Section 3.1, the fault feature in the OFSC is modeled as a jointly sparse and low-rank component; thus, the components that mismatch the joint sparsity and low-rankness (i.e., the strong background noise) are unexpected. The extraction result obtained via the single sparsity constraint (seen in Figures 6c and 7c) cannot extract the fault feature well, because the low-rankness of the fault feature in the OFSC is not fully considered. Moreover, the extraction results obtained via the single low-rankness constraint are the worst among the three methods (seen in Figures 6e and 7e). Similar to the noise component in the time-frequency domain, the noise component in the cyclic order-frequency domain also shows the low-rankness property; thus, the noise component is also contained in the extraction result.
To make the fault cyclic order clearer, the EEOSs of the denoised OFSCs obtained by the proposed method are demonstrated in Figures 6b and 7b. It can be seen that the noise near the fault cyclic order is significantly suppressed, which makes the fault feature more obvious. In the EEOSs obtained by the other two methods, the unseparated noise components in the OFSC interfere with the clear identification of the α_BPOO components, especially in Figures 6f and 7d.
From all the above analyses, the proposed algorithm can effectively extract the outer race fault feature from the strong background noise-contaminated OFSC owing to its superior sparsity and low-rankness inducing capability. The proposed method achieves a better extraction performance than the other two methods in Case 1. Combined with the EEOS, the fault cyclic order can be shown more clearly, which makes the determination of the fault type more convenient.
Case 2: Inner Race Fault
Similar to Section 4.1, an inner race fault is simulated to further validate the proposed method. The ball-pass order on the inner race (BPOI) fault cyclic orders are set to α_BPOI^L = 2.5 and α_BPOI^Q = 5, respectively, with a rotating shaft order of 1. The rest of the parameter settings remain the same as in Case 1. The simulated signals in Case 2 are displayed in Figure 8. One of the most obvious differences from Case 1 (Figure 8a,c) is that the transient impulses in Case 2 are modulated by the rotating frequency. The transient impulses are also covered by strong background noise in the simulated signals.
Then, the OFSCs in Case 2 are calculated and shown in Figure 9. Cyclic order one appears in the OFSCs of Case 2, as shown in Figure 9a,c, which indicates the shaft rotating order. Side cyclic orders with a width of two (BPOI ± 1) also appear and can be viewed as a characteristic of the inner race fault. The OFSCs of the noisy signals are shown in Figure 9b,d, where the fault features are almost completely covered as well. The EEOSs of the noisy OFSCs in Case 2 are shown in Figure 10. The cyclic order of the inner race fault is relatively obvious at the single and double BPOI, while the amplitude at triple BPOI tends to be covered by the noise. Moreover, the side cyclic orders are covered by the strong background noise and are hard to identify. The results after applying the three methods are shown in Figures 11 and 12. At the quadratically changing speed condition (α_BPOI^Q = 5), the proposed algorithm can effectively extract the fault feature (seen in Figure 12a), and the corresponding EEOS in Figure 12b shows it more clearly. At the linearly changing speed condition (α_BPOI^L = 2.5), the fault cyclic orders and their multipliers are effectively extracted. In comparison, the side cyclic orders are not obvious in Figure 11a, which may be due to the relatively lower amplitude at the side cyclic orders. From the EEOS of the denoised OFSC with linearly changing speed (seen in Figure 11b), the amplitude at the BPOI and its side orders is larger than that of the components around them, which is more obvious. For the other two methods, the noise components in the OFSCs are not greatly eliminated due to the inadequate constraints, which makes the time-varying fault features not clear enough to identify. Even in the EEOSs, the noise components interfere with the accurate judgment of the fault feature, especially in Figures 11d and 12f. From the above analysis, the proposed method is suitable not only for the outer race fault but also for the inner race fault.
The fault feature in OFSC can be effectively extracted via the proposed method. With the help of EEOS, the fault type can be further identified.
Experimental Layout
An outer-race fault experiment in a gearbox is carried out at Shanghai Jiao Tong University, and the test layout is demonstrated in Figure 13. The shaft rotating frequency (SRF) is controlled by the inverter motor (YVP80M1) to generate the time-varying rotating speed signal. Four acceleration sensors (manufactured by Wuxi Houde Automation Meter Co., Ltd., type HD-YD-221, with a sensitivity of 100 mV/g) are arranged on the test points as shown in Figure 13a. The sampling frequency is set to 51,200 Hz. There are two shafts in the gearbox, on which three healthy bearings (corresponding to test points 1, 2 and 4) and one faulty bearing (corresponding to test point 3) are installed, as displayed in Figure 13a,b. Thus, the vibration signal measured at test point 3 is used for outer race fault feature extraction. Two standard spur gears with 28 and 39 teeth, respectively, are meshed in the gearbox, which means that the signal measured at the measuring point will contain not only the rolling bearing fault component but also the gear meshing component.
In this experiment, the rolling bearing (type: 6203) with an outer race point corrosion defect (seen in Figure 13c) is processed by electric discharge machining (EDM). The tested rolling bearing parameters are listed in Table 2. Note that the α_BPOO in this experiment can be calculated as α_BPOO = (n/2)(1 − (D/d) cos θ), where n is the number of rolling balls, θ is the contact angle, and d and D denote the pitch diameter and rolling element diameter, respectively.
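The ball-pass order formula can be evaluated as below; the 6203-type geometry used in the usage note (8 balls, 28.5 mm pitch diameter, 6.75 mm ball diameter) consists of placeholder values for illustration, not the values of the paper's Table 2:

```python
import math

def bpoo_order(n_balls, pitch_diameter, ball_diameter, contact_angle_deg=0.0):
    """Ball-pass order on the outer race (fault cyclic order per shaft
    revolution): alpha_BPOO = (n/2) * (1 - (D/d) * cos(theta)), following the
    paper's naming (d: pitch diameter, D: rolling element diameter)."""
    theta = math.radians(contact_angle_deg)
    return 0.5 * n_balls * (1.0 - (ball_diameter / pitch_diameter) * math.cos(theta))
```

With the placeholder geometry above, `bpoo_order(8, 28.5, 6.75)` gives roughly 3.05 orders, so one would look for energy at that cyclic order and its multiples in the OFSC and EEOS.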
Data Preprocessing
As mentioned in Section 5.1, the measurement in this experiment contains the gear meshing component, which may have a great impact on the result of the proposed method. Therefore, band-pass filtering should be carried out for the resonance frequency band. Figure 14b shows that the components of the gear meshing are concentrated at the lower frequencies, and the centers of the resonance frequency bands can be chosen as 6000 Hz, 12,000 Hz, 18,000 Hz and 24,000 Hz. In this paper, the filter band is set as [12,000 − 1000, 12,000 + 1000] Hz. The filtered signal can be seen in Figure 14c. The impulses generated by the outer race fault are relatively clear compared with the unfiltered signal (seen in Figure 14a). The spectrum of the filtered signal is displayed in Figure 14d, which confirms the feasibility of the filtering. The filtered signal, with fewer gear meshing components, can then be used for fault feature extraction.
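As a hedged stand-in for the band-pass filtering step (the paper does not state the filter type, and a practitioner might instead use a Butterworth design), a simple zero-phase FFT-mask band-pass can be sketched as:

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Zero-phase band-pass by zeroing the FFT outside [f_lo, f_hi] Hz;
    a simple illustrative substitute for a designed band-pass filter."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))
```

Applied to the measurement sampled at 51,200 Hz with a band around the chosen 12,000 Hz resonance center, this suppresses the low-frequency gear meshing components before the OFSC is computed.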
Extraction Results of the Outer Race Fault
In this subsection, the filtered signal is used for the outer race fault feature extraction. Figure 15a shows that the BPOO still cannot be identified in the OFSC of the filtered signal, as it remains covered by the strong noise. The EEOS of the filtered signal's OFSC is displayed in Figure 15b. The α_BPOO and 2α_BPOO are clear, while 3α_BPOO, covered by the noise, is hard to identify. The OFSC and EEOS obtained by the proposed method are displayed in Figure 15c,d. Figure 15c illustrates that the fault feature in the OFSC is effectively extracted. Although the component at 3α_BPOO is weak overall, it is still stronger than the nearby components, which can also be confirmed in Figure 15d. Moreover, the noise in Figure 15c is weakened by the joint sparsity and low-rankness constraint, which makes the target fault component more obvious. For the other two methods, the residual noise components in the OFSC are still greater than those in the OFSC denoised by the joint sparsity and low-rankness constraint; these noise components make the 3α_BPOO components hard to identify, as they are covered by nearby noise-generated cyclic orders.
According to the experimental results, it turns out that the proposed method is capable of extracting the rolling bearing fault feature in the OFSC. Moreover, the noise components in not only the OFSC but also the EEOS can be weakened, which is consistent with the results of the simulation studies. [Figure 15 panel captions: (c) the denoised OFSC obtained by the proposed joint sparsity and low-rankness constraint; (d) the EEOS obtained by the proposed joint sparsity and low-rankness constraint; (e) the denoised OFSC obtained by the single sparsity constraint; (f) the EEOS obtained by the single sparsity constraint; (g) the denoised OFSC obtained by the single low-rankness constraint; (h) the EEOS obtained by the single low-rankness constraint.]
Conclusions
This paper proposes a weak fault feature extraction method for rolling bearings under variable speed conditions. In the proposed method, the measured vibration signal is transferred into the (cyclic) order-frequency domain and visualized by the OFSC. The joint sparsity and low-rankness of the fault feature in the OFSC is then revealed, which is utilized to extract the fault feature. Moreover, an algorithm is developed based on the ADMM framework to optimize the sparsity and low-rankness jointly enforced fault feature in the noisy OFSC. Finally, using the EEOS, the fault feature of the rolling bearing is enhanced and the fault type can be more easily identified. In the simulation study, the proposed method can extract the fault feature under strong background noise with SNR = −16 dB. In addition, an experimental case is carried out to validate the practicality of the proposed method. It shows that the proposed method can effectively extract the rolling bearing fault feature under variable speed conditions in a complex gearing system. However, there are some shortcomings of the proposed model, and the algorithm needs to be further improved; for example, the adaptive selection of the parameters λ_0 and λ_1 is still worth studying.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript. | 2022-03-06T16:33:53.544Z | 2022-02-26T00:00:00.000 | {
"year": 2022,
"sha1": "b05db6f3e45f8edaf0ddc587892e570a6a1bd195",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/12/5/2449/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e18774af218aa9d9e6bdea8e34810e486e75501e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
237617244 | pes2o/s2orc | v3-fos-license | Efficacy and secondary infection risk of tocilizumab, sarilumab and anakinra in COVID‐19 patients: A systematic review and meta‐analysis
Summary As the pandemic progresses, the pathophysiology of coronavirus disease 2019 (COVID‐19) is becoming clearer and the potential for immunotherapy is increasing. However, the clinical efficacy and safety of treatment with immunosuppressants (including tocilizumab, sarilumab and anakinra) in COVID‐19 patients are not yet known. We searched PubMed, Embase, Medline, Web of Science and MedRxiv using specific search terms in studies published from 1 January 2020 to 20 December 2020. In total, 33 studies, including 3073 cases and 6502 controls, were selected for meta‐analysis. We found that immunosuppressant therapy significantly decreased mortality in COVID‐19 patients on overall analysis (odds ratio = 0.71, 95% confidence interval = 0.57–0.89, p = 0.004). We also found that tocilizumab and anakinra significantly decreased mortality in patients without any increased risk of secondary infection. In addition, we found similar results in several subgroups. However, we found that tocilizumab therapy significantly increased the risk of fungal co‐infections in COVID‐19 patients. This represents the only systematic review and meta‐analysis to investigate the efficacy and secondary infection risk of immunosuppressant treatment in COVID‐19 patients. Overall, immunosuppressants significantly decreased mortality without increasing the risk of secondary infections. Our analysis of tocilizumab therapy showed a significantly increased risk of fungal co‐infections in these patients.
| Study selection
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist was used to improve the reporting of our meta-analysis. We searched PubMed, EMBASE, MEDLINE, Web of Science and MedRxiv using the search terms immunosuppressants, anakinra, sarilumab, siltuximab, tocilizumab, bacterial/fungal coinfection, coronavirus, severe acute respiratory syndrome coronavirus 2, SARS-CoV-2, 2019-nCoV and COVID-19 for studies published from 1 January 2020 to 20 December 2020, and we manually searched the references of selected articles for additional relevant articles (Figure 1).
| Data extraction and verification
The inclusion criteria for the meta-analysis were as follows: (1) research focus on immunosuppressants (such as tocilizumab, anakinra, sarilumab, siltuximab, sirukumab, etc.) and COVID-19; (2) reporting of the numbers of cases and controls; (3) randomised controlled trial (RCT) or retrospective study, including case-control and cohort studies. Three researchers reviewed the citations that met our inclusion criteria and extracted all data; if at least two of the three researchers agreed, the study was included in the meta-analysis. Each researcher then extracted data while the others cross-checked it. Disagreements were resolved by review and discussion.
| Statistical analysis
The statistical significance of the pooled odds ratio (OR) was determined with the Z-test, with p < 0.05 considered statistically significant. Data were pooled in the meta-analysis with the random-effects model using the DerSimonian and Laird method and the fixed-effects model using the Mantel-Haenszel method. In cases where I^2 < 50% and the p-value for heterogeneity was >0.10, indicating an absence of heterogeneity between the studies being compared, the fixed-effects model was used to evaluate the summary ORs. Conversely, if I^2 ≥ 50% or the p-value for heterogeneity was ≤0.10, indicating a higher degree of heterogeneity between studies that nevertheless met our inclusion criteria, the random-effects model was used to evaluate the summary ORs. To evaluate the influence of individual data sets on the overall pooled ORs, we conducted forest plot analysis to determine the stability of our results. We also carried out sensitivity analyses in which a single study within the overall meta-analysis was deleted one at a time. We applied funnel plots and Egger's linear regression test to assess publication bias. All statistical analyses were carried out using STATA version 11.0 (Stata Corporation, College Station).
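As a toy illustration of the Mantel-Haenszel fixed-effects pooling mentioned above (the table layout and variable names are assumptions for illustration, and real analyses add continuity corrections, confidence intervals and heterogeneity statistics):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel fixed-effect pooled odds ratio.
    Each 2x2 table is given as
    (events_treated, no_event_treated, events_control, no_event_control).
    Pooled OR = sum_i(a_i*d_i/N_i) / sum_i(b_i*c_i/N_i)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

For a single study the formula collapses to the ordinary odds ratio (a*d)/(b*c), and values below 1 correspond to the protective direction reported here for mortality.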
| Study selection and characteristics
The combined search terms yielded all related articles, and the primary review of titles and abstracts identified 157 articles. The characteristics of the included studies are summarised in Table 1.
| Mortality
We found that immunosuppressant therapy significantly decreased mortality in COVID-19 patients (OR = 0.71, 95% CI = 0.57–0.89, p = 0.004). Specific data are summarised in Figure 2a and Table 2.
| Fungal co-infection risk
We found that tocilizumab therapy significantly increased the risk of fungal co-infection in COVID-19 patients.
| Publication bias and sensitivity analysis
Begg's funnel plot and Egger's test were performed to assess publication bias. We additionally conducted sensitivity analyses by omitting one study at a time in the calculation of a summary outcome.
Although the sample sizes for cases in all eligible studies varied, corresponding pooled proportions and 95% CIs were not qualitatively altered between studies with small and large sample sizes. No other single study influenced pooled proportion and 95% CI qualitatively.
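A toy sketch of the Egger regression idea mentioned above (regress the standardized effect on precision; a markedly non-zero intercept suggests small-study or publication bias — real implementations also test the intercept formally):

```python
def egger_test(effects, standard_errors):
    """Egger's linear regression sketch: ordinary least-squares fit of
    z_i = effect_i / se_i against precision_i = 1 / se_i.
    Returns (intercept, slope); bias is suspected when the intercept
    departs clearly from zero."""
    z = [e / s for e, s in zip(effects, standard_errors)]
    prec = [1.0 / s for s in standard_errors]
    n = len(z)
    mx = sum(prec) / n
    my = sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

In practice the effects would be log odds ratios from the included studies, and the funnel plot gives the same information visually.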
| DISCUSSION
The World Health Organization declared COVID-19 a public health emergency of international concern as of 1 February 2020, warning that it may progress to a pandemic associated with substantial morbidity and mortality. 11 In a recent study, anakinra significantly reduced both the need for invasive mechanical ventilation in the ICU and mortality among patients with severe COVID-19, with no serious side-effects. 6,15,16 Kooistra et al. 16 and Cauchois et al. 6 also investigated whether anakinra was effective at reducing clinical signs of hyperinflammation in COVID-19 patients. To the best of our knowledge, this is the only systematic review and meta-analysis conducted to investigate the efficacy and secondary infection risk of immunosuppressant treatment in COVID-19 patients.
F I G U R E 2 Forest plot of the associations between immunosuppressants and mortality and secondary infection risk in COVID-19 patients. Forest plot of association between (a) immunosuppressants and mortality and (b) immunosuppressants and secondary infection risk
In our meta-analysis, we also found that tocilizumab significantly decreased mortality in COVID-19 patients without any increased risk of secondary infection. In addition, we found similar results in several subgroups. However, we found that tocilizumab therapy significantly increased the risk of fungal co-infection in COVID-19 patients. Therefore, our data suggest that clinicians should be aware of the need for antifungal therapy when COVID-19 patients are receiving tocilizumab.
There are several limitations to the current study that need to be addressed. Firstly, only 33 studies were included, and the relatively small total sample size had limited power for the exploration of real associations. Secondly, the subgroup analyses involved relatively small groups, which may not impart sufficient statistical power to explore real associations and are more likely to reveal greater beneficial effects than large-scale trials. Thirdly, clinicians differ in their clinical diagnostic and treatment algorithms, which may introduce confounding that would require adjustment for other factors. 3,[25][26][27] In addition, the inclusion of zero-event trials can sometimes decrease the effect size estimate and narrow the CIs.
| CONCLUSION
Overall, immunosuppressants significantly decreased mortality in COVID-19 patients without any increased risk of secondary infection. Our analyses of tocilizumab therapy showed that there was a significantly increased risk of fungal co-infections.
Effect of the tip speed ratio on the wake characteristics of wind turbines using LBM‐LES
In this paper, the wake characteristics of the Zell 2000 wind turbine under different tip speed ratios are studied using the lattice Boltzmann method and large eddy simulation. The adaptive mesh refinement method is employed to capture the fine flow structure and the development of the wake. We mainly focus on the effect of the tip speed ratio on the flow structure and unsteady characteristics of the wind turbine wake. The three‐dimensional vorticity structures, the sectional vorticity diagrams, the pressure fluctuations of the wake and the lift coefficient of the wind turbine are utilized to explore the effect of the tip speed ratio on the unsteady physical mechanism of the wake. With the increase of the tip speed ratio, the distance between two adjacent vortex rings along the axial direction gradually decreases; as the position of the broken vortex circles gradually approaches the center of the blade, separated vortices are rapidly generated, and the coherent structure appears closer to the wind turbine. A relationship is established between the tip speed ratios and the positions of the broken vortex circles. It is further found that the dominant frequency amplitude gradually increases with the tip speed ratio, and the pressure amplitude spectra of the vortex increase as the distance between the wake and the blade axis decreases. These studies provide significant physical insight into the influence of the tip speed ratio on the wake characteristics of wind turbines.
| INTRODUCTION
Wind energy and other renewable energy sources are expected to grow significantly in the coming decades and play a key role in mitigating climate change and achieving energy sustainability. With the growth of wind energy demand, the wind energy industry is developing toward large-scale designs, which are expected to operate reliably under various environmental conditions. [2-6] Due to the complex aerodynamic characteristics of the wind turbine rotor, the numerical simulation of wind turbines is still a great challenge and a hot topic.
The wake model based on computational fluid dynamics (CFD) solves the Reynolds-averaged Navier-Stokes (RANS) equations 1 or uses large eddy simulation (LES) 2 to resolve the whole flow field. With the rapid growth of computing power, the LES method is widely applied in the CFD of wind energy engineering. Sedaghatizadeh et al. 3 simulated wind turbine wakes using the LES approach and investigated detailed information about the flow field as well as the wake development. In the above-mentioned studies, which considered the full wind turbine, one of the most challenging issues is mesh generation and the treatment of the rotating rotor in the computational domain. In recent years, the adaptive mesh refinement (AMR) method has been proposed for the numerical simulation of wake flow. 4 Zeoli et al. 4 used the AMR method to simulate wind turbine wake. The grid is adaptively refined in regions of strong vortices; the mesh is refined in the region closely related to the vortex structure, so that the wake dynamics can be properly captured.
The wake of wind turbines under different operating conditions, and its impact on wind turbine power generation, is widely studied in engineering applications. 5,6 Yin et al. 5 studied a method of reducing the wake loss of a wind farm through the start-up and shutdown scheduling of wind turbines based on Jensen wake models, and obtained the influence of wake recovery coefficients on unit power generation. Yang et al. 6 conducted user-defined function modeling for normal and extreme wind conditions and studied the influence of dynamic incoming flow and the tower shadow effect on the downstream wake characteristics of the wind turbine. Yang also evaluated the torque and power of the tower under the influence of the tower shadow effect and the service life of the wind turbine. Wen et al. studied the influence of blade number on the near-wake dynamic characteristics of wind turbines and designed an airfoil with a high lift coefficient for an SHAWT model operating at low Reynolds number. 7 The tip speed ratio is a very important parameter to describe the characteristics of wind turbines. 8,9 The tip speed ratio is defined as λ = ΩR/V, where Ω denotes the angular speed of the rotor, R = D/2 is the rotor radius, and V is the incoming flow velocity. 8,9 When the tip speed ratio is low, the ratio of maximum lift to drag coefficient increases exponentially; it decreases linearly when the tip speed ratio is high. Okulov et al.
10 explored the development characteristics of wind turbine wake by controlling the incoming flow velocity to change the tip speed ratio. In this paper, the tip speed ratio is varied by controlling the wind turbine angular velocity. [8-10] However, the tip speed ratio markedly affects the unsteady flow structure and mechanics of the wind turbine wake, which play a dominant role in studying the aerodynamic performance and noise of wind turbines. How do we reveal the effect of tip speed ratios on the wake structure of a wind turbine? In this paper, the AMR method is used to refine the grid, and the lattice Boltzmann method combined with LES (LBM-LES) is implemented to explore the effect of the tip speed ratio on the wake characteristics of wind turbines. The unsteady flow structure and characteristics are accurately captured in the near and far fields. In this work, we extend previous work to deeply explore the effect of the tip speed ratio on the physical mechanism of the three-dimensional (3D) vortex structure of the wake in the near and far fields, and the 2D vorticity characteristics of the tip vortex region, the region behind the wind turbine nacelle and the region behind the tower. The effects of the tip speed ratio on the interaction mechanism among these regions are deeply revealed in the near and far fields. The pressure fluctuation spectrum of the wake in the near and far fields and the lift coefficient of the wind turbine are further explored to deepen understanding of the unsteady mechanism of the wind turbine wake.
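As a small illustration of the definition above, the following sketch (not the authors' code) converts between the tip speed ratio λ = ΩR/V and the rotor angular speed, using the paper's inflow speed (7 m/s) and rotor radius (R = D/2 = 5 m).

```python
# Illustrative sketch: tip speed ratio lambda = Omega * R / V and its inverse.
# The values 7 m/s (inflow) and 5 m (blade length, i.e. rotor radius) are the
# paper's; the function names are hypothetical.

def tip_speed_ratio(omega: float, radius: float, v_inflow: float) -> float:
    """lambda = Omega * R / V (Omega in rad/s, radius in m, V in m/s)."""
    return omega * radius / v_inflow

def rotor_speed_for_lambda(lam: float, radius: float, v_inflow: float) -> float:
    """Invert the definition: Omega = lambda * V / R."""
    return lam * v_inflow / radius

# Angular speed needed to run the lambda = 5 case at V = 7 m/s, R = 5 m.
omega = rotor_speed_for_lambda(5.0, 5.0, 7.0)
```

Fixing V and varying Ω in this way is exactly how the paper sweeps λ = 4 to 8.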
The main framework of this paper is as follows. The LBM-LES and the AMR method are described in Section 2. Section 3 presents the model of the wind turbine and the computational domain. In Section 4, the unsteady 3D and 2D flow characteristics of the wind turbine wake at different tip speed ratios are analyzed. Finally, some constructive conclusions are presented.
| LBM-LES
The lattice Boltzmann method is a mesoscopic approach for simulating fluid flows. [12-14] The D3Q19 scheme is used for the numerical simulations in this paper. Figure 1 represents the schematic diagram of the D3Q19 model.
The evolution equation of the mesoscopic approach is 15

f_i(x + c_i Δt, t + Δt) − f_i(x, t) = −(1/τ)[f_i(x, t) − f_i^eq(x, t)],

where f_i is the particle distribution function, f_i^eq is the equilibrium distribution function, x is the coordinate of the particle, c_i are the direction vectors, and τ is the relaxation time, respectively. The equilibrium distribution function is given as

f_i^eq = w_i ρ [1 + (c_i · u)/c_s^2 + (c_i · u)^2/(2c_s^4) − u^2/(2c_s^2)],

where ρ is the mass density, u is the macroscopic velocity, c_s is the speed of sound, and w_i are the weight coefficients, respectively. The parameters in D3Q19 are given as 15 w_0 = 1/3, w_1–6 = 1/18, w_7–18 = 1/36, with c_s^2 = 1/3 in lattice units. The relation between the relaxation time (τ) and the kinematic viscosity (ν) is expressed as

ν = c_s^2 (τ − 1/2) Δt.

The density and momentum can be calculated as

ρ = Σ_i f_i,  ρu = Σ_i c_i f_i.
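A minimal numerical sketch of the D3Q19 equilibrium above (a generic BGK illustration, not the authors' solver) can make the moment properties concrete: summing f_i^eq over all 19 directions recovers ρ, and the first moment recovers ρu.

```python
import numpy as np

# Sketch of the D3Q19 lattice and the BGK equilibrium distribution
# f_i^eq = w_i * rho * [1 + 3 c_i.u + 4.5 (c_i.u)^2 - 1.5 u^2],
# with c_s^2 = 1/3 in lattice units (so 1/c_s^2 = 3, 1/(2 c_s^4) = 4.5).

# 19 discrete velocities: rest particle, 6 face neighbours, 12 edge neighbours.
C = np.array([[0, 0, 0]] +
             [[s, 0, 0] for s in (1, -1)] +
             [[0, s, 0] for s in (1, -1)] +
             [[0, 0, s] for s in (1, -1)] +
             [[a, b, 0] for a in (1, -1) for b in (1, -1)] +
             [[a, 0, b] for a in (1, -1) for b in (1, -1)] +
             [[0, a, b] for a in (1, -1) for b in (1, -1)], dtype=float)
W = np.array([1/3] + [1/18] * 6 + [1/36] * 12)   # D3Q19 weights

def equilibrium(rho: float, u: np.ndarray) -> np.ndarray:
    """BGK equilibrium distribution for one lattice node."""
    cu = C @ u                   # c_i . u for every direction
    usq = u @ u                  # |u|^2
    return W * rho * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * usq)
```

By construction, Σ_i f_i^eq = ρ and Σ_i c_i f_i^eq = ρu, so the density and momentum relations quoted above hold exactly at equilibrium.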
Using the Chapman-Enskog expansion, the Navier-Stokes equations for incompressible flow can be recovered. 15,16 The LBM-LES is implemented in this paper; the idea of LES is incorporated into the LBM. [18-20] With the increase of Reynolds number (Re), the viscosity decreases, the maximum velocity gradient increasingly affects the convergence, and an overly dense mesh causes compressibility errors. The idea of LES is therefore introduced here. The turbulent viscosity coefficient is calculated by the WALE model. The effective kinematic viscosity is defined as 21

ν = ν_σ + ν_t,

where ν_σ is the laminar viscosity coefficient and ν_t denotes the turbulent viscosity coefficient.
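The paper computes ν_t with the WALE model; as a simpler hedged stand-in that illustrates the same decomposition ν = ν_σ + ν_t, the sketch below uses the classic Smagorinsky closure ν_t = (C_s Δ)² |S| with |S| = sqrt(2 S_ij S_ij). All names and the constant C_s ≈ 0.17 are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch of an eddy-viscosity closure (Smagorinsky, NOT the WALE model
# used in the paper) to illustrate nu = nu_laminar + nu_t.

def smagorinsky_nu_t(grad_u: np.ndarray, delta: float, c_s: float = 0.17) -> float:
    """grad_u[i, j] = du_i/dx_j; delta is the filter (grid) width."""
    s = 0.5 * (grad_u + grad_u.T)            # strain-rate tensor S_ij
    s_mag = np.sqrt(2.0 * np.sum(s * s))     # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * s_mag

def effective_viscosity(nu_laminar: float, grad_u, delta: float) -> float:
    """nu = nu_sigma + nu_t, the decomposition quoted in the text."""
    return nu_laminar + smagorinsky_nu_t(np.asarray(grad_u, dtype=float), delta)
```

For a pure shear du/dy = γ, |S| = γ, so ν_t = (C_s Δ)² γ, which is easy to check by hand.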
The Reynolds number is defined as Re = UD/ν, where U denotes the incoming flow velocity and D is the diameter of the wind turbine, respectively.
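Plugging in the paper's setup gives a feel for the flow regime: with U = 7 m/s and D = 10 m (twice the 5 m blade length), and an assumed air viscosity ν ≈ 1.5 × 10⁻⁵ m²/s (a textbook value, not stated in the paper), Re is of order 10⁶, i.e., fully turbulent.

```python
# Re = U * D / nu. U = 7 m/s and D = 10 m are the paper's values; the air
# kinematic viscosity 1.5e-5 m^2/s is an assumed textbook value.

def reynolds_number(u: float, d: float, nu: float) -> float:
    return u * d / nu

re = reynolds_number(7.0, 10.0, 1.5e-5)   # order 10^6: fully turbulent
```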
| Adaptive mesh refinement
Vortex shedding occurs in the wake of wind turbines.
The unsteady flow and characteristics of the wind turbine wake can be accurately captured by the AMR method. Compared with a uniformly fine mesh, the AMR method requires less manual operation, and it can selectively generate coarse mesh in areas with low turbulence. This method also has lower computational expense. 22 Zeoli et al. have proved that it is feasible to refine the grid of the wind turbine simulation domain by the AMR method. 4 The AMR method can capture the wake and refine the spiral vortex structures at the blade root and tip. 23 The AMR method is implemented here to accurately capture the wake characteristics of the wind turbine, which effectively assures the accuracy of the numerical simulation of the wake development studied in this paper.
Figure 2 shows the blade and grid distribution in the numerical simulation of wind turbines. It can be seen from Figure 2 that the AMR method refines the grid with the development of the wake. The grids are fine in the wake area, while the grids are coarse in the other computational areas. Not only the multi-resolution mesh but also the multi-time-step integration should be considered. Let Δx_f and Δx_c be the grid spacings of the fine and coarse mesh, with Δx_f = Δx_c/2. The corresponding time intervals of the multi-time-step integration, Δt_f and Δt_c, are determined as 23 Δt_f = Δt_c/2. The fine mesh is advanced by two sub-time steps per time step of the coarse mesh.
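The two-to-one refinement hierarchy described above can be sketched in a few lines (illustrative helper names, not the authors' code): each level halves both the grid spacing and the time step, so the finest level of the paper's setup (1.0 m base spacing, 5 refinement levels) reaches 1.0/2⁵ = 0.03125 m.

```python
# Sketch of the 2:1 AMR hierarchy: spacing and time step halve per level.

def spacing_at_level(dx_coarse: float, level: int) -> float:
    """Grid spacing after `level` halvings of the base spacing."""
    return dx_coarse / 2 ** level

def timestep_at_level(dt_coarse: float, level: int) -> float:
    """Time step at a refinement level: two fine sub-steps per coarse step."""
    return dt_coarse / 2 ** level
```

With the paper's base grid of 1.0 m and maximum adaptation level 5, `spacing_at_level(1.0, 5)` reproduces the 0.03125 m fine spacing quoted in Section 3.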
The relaxation parameters ω_αβγ should differ between the time-step levels. It is worth noting that the following relations are discussed for the relaxation parameter of the BGK collision, but the same argument also applies to the cumulant collision model. The relaxation parameters that keep the kinematic viscosity continuous across levels (i.e., ν_c = ν_f) follow from ν = c_s^2(τ − 1/2)Δt: with Δt_f = Δt_c/2, this gives τ_f = 2τ_c − 1/2. Moreover, to satisfy the continuity of the shear stress tensor, the distribution functions interpolated from the coarser mesh to the finer mesh, and vice versa, are modified accordingly, 23 where the superscripts c→f and f→c denote the modification from coarse to fine and vice versa, respectively. The equilibrium function f_ijk^eq remains unchanged regardless of the mesh resolution, and the macroscopic variables are conserved within the modifications.
In these rescaling relations, τ_c and τ_f play the role of the relaxation parameter ω_1. The modifications are performed on the velocity space (f_ijk) even when the collision process is modeled by the cumulant model.
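The viscosity-matching condition between grid levels can be verified numerically. The sketch below encodes the standard BGK rescaling under time-step halving, τ_f = 2τ_c − 1/2, derived from ν = c_s²(τ − 1/2)Δt; this is a generic illustration, and the exact form used in the cited reference may differ for the cumulant collision.

```python
# Hedged sketch: check that tau_f = 2*tau_c - 1/2 keeps the lattice kinematic
# viscosity nu = c_s^2 (tau - 1/2) dt continuous when dt_f = dt_c / 2.

CS2 = 1.0 / 3.0   # c_s^2 in lattice units (D3Q19)

def nu_lattice(tau: float, dt: float) -> float:
    return CS2 * (tau - 0.5) * dt

def fine_tau(tau_coarse: float) -> float:
    """Relaxation time on the fine level that preserves nu under dt -> dt/2."""
    return 2.0 * tau_coarse - 0.5
```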
| MODEL OF WIND TURBINE AND COMPUTATIONAL DOMAIN
The model of the wind turbine mainly includes the rotor (blades and hub), the nacelle and the tower. The height of the wind turbine tower is 12 m and the length of the rotor blade is 5 m. The turbine was tested in the NASA Ames wind tunnel, one of the most comprehensive experiments carried out on a full-scale wind turbine. 24 Figure 3 displays the model of the wind turbine and the computational domain. As shown in Figure 3, the computational domain is 200 m × 50 m × 50 m along the X, Y and Z directions, respectively. In all numerical simulations, the wind turbine is placed 50 m away from the air inlet. The inlet speed is set to 7 m/s, the maximum grid size is set to 1.0 m in the computational domain, and the maximum adaptation level is set to 5, respectively. The adaptive fine grids (1.0 m/2^5 = 0.03125 m) are distributed on the surface of the wind turbine and in the area of the complex wake. The tip speed ratios λ = 4, 5, 6, 7, 8 are implemented to study the complex flow mechanism of the wind turbine wake.
| NUMERICAL RESULTS AND DISCUSSIONS
In this section, the 3D vortex structure of the wake and the 2D vorticity characteristics in the tip vortex region, behind the wind turbine nacelle and behind the tower are explored in the near and far fields. The pressure fluctuation spectrum of the wake and the lift coefficient of the wind turbine are studied with increasing tip speed ratio in the near and far fields.
| 3D wake vorticity structure at different tip speed ratios
In general, the wind turbine wake is divided into the tip vortex, the vortex behind the wind turbine nacelle and the vortex behind the wind turbine tower. In this paper, the wake characteristics are studied at different tip speed ratios. Figure 4 illustrates the comparison of the 3D wake flow structure around the wind turbine between the previous numerical method and the present LBM-LES at λ = 5. As illustrated in Figure 4, the tip vortex breaks up rapidly after five vortex circles in the previous work. 25 The finer wake vortex is well captured here owing to the AMR method of the LBM-LES. The previous result of Hsu et al. 25 is shown in Figure 4C, where the 3D wake flow structure around the wind turbine is not well captured due to numerical dissipation. Nevertheless, the very fine vortex structure is accurately captured by the LBM-LES, which is well consistent with experimental results at the same working condition. 24 Figure 4A displays the vortex structures of the wind turbine obtained by the finite volume method. 24 Figure 4B illustrates the comparison of vortex structures between numerical and experimental results at the same inflow condition (v = 7 m/s). The vortex structures are effectively captured by the LBM-LES, whereas many vortex structures are numerically dissipated by the finite volume method. The eight-circle vortex structures are clearly captured by the LBM-LES, which is well consistent with the experimental flow visualization in the wind tunnel. The LBM-LES has very low numerical dissipation, and the vortex structures are effectively captured.
In the following subsection, the tip speed ratios λ = 4, 5, 6, 7, 8 are used to explore the effect of the tip speed ratio on the flow vortex structure of the wind turbine. In this paper, the isosurface value of the 3D wake vorticity diagram is 7.4. Figure 5 shows the 3D wake flow structure of wind turbines at different tip speed ratios. Figure 5A displays the 3D flow vortex structure of the wind turbine at λ = 4. As displayed in Figure 5A, 11 relatively whole tip vortex circles appear in the near-field wake of the wind turbine. However, due to the influence of the tower, each vortex circle is interrupted along the lower edge region of the vortex near the vertical tower. The degree of interruption of the tip vortex circles increases with the evolution of the flow in the near-field wake. Moreover, all tip vortex circles are broken and gradually disappear in the far field of the wake. It is surprisingly found that the vorticity inside the tip vortex circle is higher than that outside. Figure 5B demonstrates the 3D flow vortex structure at λ = 5. In Figure 5B, 12 relatively whole tip vortex circles can be clearly seen in the near field, and the pitch of the tip vortex decreases. Under the influence of the tower, these vortex circles are interrupted along the lower edge region of the vortex near the vertical tower. In the near-field wake, the influence of the tower on the interruption of the blade tip vortex is gradually enhanced. All tip vortex circles are broken and gradually decompose in the far field of the wake. Figure 5C illustrates the 3D flow vortex structure at λ = 6. In Figure 5C, 13 relatively whole tip vortex circles can be clearly seen in the near field, and a spiral tail vortex
appears behind the nacelle. Due to the influence of the tower, the tip vortex is interrupted along the lower edge region of the vortex. In the far field of the wake, the tip vortex begins to decompose and disappear. Figure 5D illustrates the 3D flow vortex structure at λ = 7. In Figure 5D, 14 relatively whole tip vortex circles can be clearly seen in the near field. A spiral tail vortex appears behind the nacelle and disappears quickly. Figure 5E demonstrates the 3D flow vortex structure at λ = 8. In Figure 5E, 15 relatively whole tip vortex circles can be clearly seen in the near field. By increasing the tip speed ratio, the pitch of the tip vortex gradually decreases. All tip vortex circles begin to mix and gradually break up in the far field of the wake. Because the pitch of the tip vortex is short, the mixing phenomenon occurs in the adjacent vortex circles.
F I G U R E 4 Comparison of the three-dimensional wake flow structure around the wind turbine between the previous numerical method and the present LBM-LES.
In Figure 5, the tip vortex is a spiral continuous curve, and the wake is greatly affected by the blades. In the blade tip area, it is further found that the vortex shedding of the tower is obviously influenced by the blade, which shows that the wake structure of the tower has obvious differences between the upper and lower parts. It is noted in Figure 5 that the wake at the bottom of the tower is not impacted by the blade, and the vortex street is well preserved in the lower part. It is further found that the distance between two adjacent vortex rings along the axial direction gradually decreases with increasing tip speed ratio. In this paper, the ratio L/D is used to study the relationship between tip speed ratios and the positions of the broken vortex circles, where L is the distance from the wind turbine to the position of the broken vortex circles and D is the diameter of the wind turbine rotor, respectively.
The dimensionless ratio between the position of the broken vortex circle and the diameter of the wind turbine rotor is built to quantitatively obtain the influence of the tip speed ratio on the separated vortex. Figure 6 displays the relationship between the tip speed ratios and the positions of the broken vortex circles; a quantitative relation is obtained between them. As displayed in Figure 6, it is clearly observed that the vortex circles break up at about 4D from the wind turbine. Surprisingly, it is further found that the distance to the position of the broken vortex circles gradually decreases with the increase of the tip speed ratio, which is helpful for deeply understanding the influence mechanism of the tip speed ratio on the wake flow characteristics. The secondary vortices generated by the breaking of the vortex circles are further dissipated, which has certain reference significance for the study of the wind farm layout.
The vortex behind the tower exhibits a phenomenon similar to a Kármán vortex street, but the interaction between the rotor and the vortex causes the vortex to shed disorderly. In the near wake region, the tip vortex and the vortex behind the wind turbine nacelle are separated from each other, and stretch and roll up under the action of the axial velocity. In the far wake, the rotor interacts further with the wake, which makes the wake vortex more unstable. The tip vortex interval behind the wind turbine also increases with the increase of the tip speed ratio. In addition, the fluid vortices are mainly concentrated at the blade tip and the wind turbine nacelle with increasing tip speed ratio. The tip vortex is the main part of the wind turbine wake, and the vortex pitch gradually decreases with increasing tip speed ratio.
| Effect of tip-speed ratio on wake at different sections
To further study the wind turbine wake, multiple cross-sections are taken in the flow field to observe the wake changes. Figure 7 shows the sections selected for the wind turbine wake vorticity analysis.
Figure 8 displays the vorticity field on the XOY section plane, used to investigate the velocity evolution in the wake region behind the wind turbine. It can be seen from Figure 8 that the wind turbine wake is obviously divided into the blade tip vortex, the vortex behind the wind turbine nacelle and the vortex behind the tower in the near field. In the laminar flow region, the three vortices are obviously different and develop backward. With the increase of turbulence intensity, the tip vortex intensity and the pitch of the tip vortex gradually decrease. The coherent structure gradually becomes unstable and diffuses downstream under strong disturbance. Besides, the trajectory becomes messy and the radial mixing between vortex systems is enhanced. It is surprisingly captured that with the increase of λ, the tip vortex of the wake begins to fluctuate up and down at 2D, and the vortex behind the wind turbine rotor interacts and integrates at about 4D.
Figure 8A displays the XOY-section flow vortex structure of the wind turbine at λ = 4. As illustrated in Figure 8A, a connected vortex between the tip vortex and the vortex behind the nacelle is clearly seen, which is called the attached vortex. In the near field, the tip vortex, the vortex behind the nacelle and the attached vortex are clear and separated. With the development of the wake, the attached vortex disappears and the vorticity of the vortex behind the nacelle decreases rapidly at around 2D from the wind turbine. Besides, the vortex behind the wind turbine rotor interacts and integrates at about 4D. Figure 8B illustrates the XOY-section flow vortex structure at λ = 5. In the near field, the tip vortex, the vortex behind the nacelle and the vortex behind the tower develop with significantly lower intensity behind the wind turbine. The tip vortex is closely related to the separation vortex from 6D onward. Figure 8C displays the XOY-section flow vortex structure at λ = 6. As shown in Figure 8C, the vortex behind the tower and the vortex behind the nacelle begin to gradually mix at 4D from the wind turbine. Figure 8D illustrates the XOY-section flow vortex structure at λ = 7. The tip vortex begins to fluctuate obviously after 4D. More attached vortices are retained in the near field. The vortex behind the tower and the vortex behind the nacelle begin to mix at 4D and are completely mixed at 6D. Figure 8E demonstrates the XOY-section flow vortex structure at λ = 8.
Figure 8E shows that the blade tip vortex exhibits obvious fragmentation and dissipation from 2D of the wind turbine. The vortex behind the nacelle and the vortex behind the tower also interact and begin to mix at a close distance. Due to the short pitch of the tip vortex, the tip vortex is closely related to the separation vortex from 2D of the wind turbine. In the far field, the tip vortex, the vortex behind the nacelle and the vortex behind the tower mutually mix and gradually develop coherent structures.
Figure 9 illustrates the vorticity field on the XOZ section plane. The velocity evolution is clearly captured in the wake region behind the wind turbine. It can be seen from Figure 9 that the blade tip vortex and the vortex behind the wind turbine nacelle tend to be disordered in the laminar flow region and the transition region, and the blade tip vortex exhibits disturbance. However, the interaction between the tip vortex and the vortex behind the nacelle is not obvious. It is clearly obtained that the tip vortex and the wake vortex behind the nacelle begin to contact and mix with each other, and continue to develop into the coherent structure in the turbulent region.
Figure 9A displays the XOZ-section flow vortex structure of the wind turbine at λ = 4. In Figure 9A, the tip vortex and the vortex behind the nacelle develop backward independently in the near field. The attached vortices are formed between the tip vortices and the vortex behind the nacelle in the near field. 26 It is clearly observed that the attached vortex rapidly disappears at around 2D from the wind turbine with the development of the wake. In the far field, the blade tip vortex is evidently shredded and dissipated; nevertheless, the tip vortex and the vortex behind the nacelle begin to interact and mix. Figure 9B demonstrates the XOZ-section flow vortex structure at λ = 5. In Figure 9B, the tip vortex fluctuates obviously and produces a separation vortex at a closer distance than in the λ = 4 case. The tip vortex begins to fluctuate obviously after 4D. The blade tip vortex and the vortex behind the nacelle completely mix at about 6D behind the wind turbine. Figure 9C illustrates the XOZ-section flow vortex structure at λ = 6. With the increase of λ, the separated vortex is generated rapidly and the coherent structure of the wake appears earlier behind the wind turbine. The destruction and dissipation of the vortices are increasingly enhanced with the increase of λ. Figure 9D displays the XOZ-section flow vortex structure at λ = 7. Increasingly many attached vortices are retained in the near field. The vortex behind the tower and the vortex behind the nacelle begin to mix at 4D and completely mix at 6D from the wind turbine. Figure 9E demonstrates the XOZ-section flow vortex structure at λ = 8. Due to the increase of angular velocity, increasingly many attached vortices are retained in the near field. Meanwhile, the decrease of the tip vortex pitch makes the adjacent vortex circles develop continuously toward the back. The blade tip vortex begins to show a separation vortex at about 2D, and gradually interacts with the vortex behind the nacelle.
F I G U R E 8 Vorticity field on the section plane at the XOY section.
F I G U R E 9 Vorticity field on the section plane at the XOZ section.
The detailed flow characteristics of the vortex behind the wind turbine are accurately captured at different heights under the influence of the rotor. In this paper, the vorticity field is computed on section planes at different heights (l/R = 1.02, 1.04, 1.08, 1.16, 1.24) to investigate the wake evolution process behind the tower, where l is the distance from the section to the rotor center and R is the radius of the wind turbine.
Figure 10A demonstrates the section flow vortex structure of the wind turbine at l/R = 1.02. As illustrated in Figure 10A, transverse separation vortices occur around the tower, which is due to the lateral movement of the fluid caused by the rotating and sweeping of the rotor. A separation vortex is generated at each sweep of the rotor and develops backward with the wake evolution, which also enhances the degree of vortex turbulence behind the tower. Figure 10B displays the section flow vortex structure at l/R = 1.04. In Figure 10B, the wake shows traces of every sweep of the wind turbine, but the lateral separation vortex dissipates outward less than in Figure 10A. The transverse separation vortices around the tower are significantly decreased. Figure 10C illustrates the section flow vortex structure at l/R = 1.08. The horizontal fluctuation of the vortex around the tower is obviously weakened in Figure 10C. The vortex structure behind the tower becomes consistent with that behind the nacelle with the development of the wake. Figure 10D demonstrates the section flow vortex structure at l/R = 1.16. The effect of the rotor gradually decreases, and the transverse separation vortex caused by the transverse force obviously decreases. Figure 10E displays the section flow vortex structure at l/R = 1.24. Compared with the other sections, the transverse separation vortex of the wake is not obvious in Figure 10E. As shown in Figure 10E, the wake is greatly affected by the tower, and the wake behind the tower tends to the flow around a cylinder.
It can also be seen from Figure 10 that the wake near the section of the blade presents an obvious phenomenon of diffusion to both sides due to the transverse velocity brought by the blade rotation. The vorticity of the flow on both sides of the wake behind the tower is increasingly enhanced, and the vorticity in the middle is increasingly suppressed. The wake gradually dissipates in Figure 10A,B, and the vortex behind the tower tends to cylindrical turbulence in Figure 10E. It is further demonstrated that the wake diffusion weakens with increasing distance from the corresponding vertical tip.
As mentioned in the above discussion, the tip vortex and the vortex behind the nacelle increasingly break up and produce separation vortices. For the vortex behind the tower, the influence of the blade sweep on the wake mainly occurs near the adjacent blade. The lateral force caused by the blade sweeping makes the separation vortices develop to both sides. The wake near the ground tends to the flow around a cylinder. The effect of the tower exists up to 4D downstream of the wind turbine, but disappears with increasing distance due to the growth of the wake and the mixing of turbulence. 27
| Development of the vortex system behind the wind turbine
To systematically and accurately capture the wake characteristics of wind turbines, the pressure fluctuations behind the wind turbine are studied at different positions. In this paper, 180 monitoring points are set behind the wind turbine. As shown in Figure 10, 60 monitoring points are set every 1 m from y = 1 m along the Y-direction at z = 5 m. Similarly, 60 monitoring points are set in the same way at z = 0 and z = −5 m. The pressure fluctuations of the tip vortex, the vortex behind the nacelle and the vortex behind the tower are studied in this paper.
To comprehensively study the wake characteristics, six points are selected for comparative analysis. The near field is taken as the flow field within 4.5D of the turbine. The intensity of the pressure fluctuation behind the tower is higher than that behind the nacelle and of the tip vortex. Figure 13B demonstrates the pressure fluctuation of the vortex behind the tower at a distance of 40 m from the wind turbine at λ = 4, 5, 6, 7, 8. The mean value of the pressure decreases with the increase of λ. The separation vortex behind the tower mainly occurs in the near field of the wake at λ = 4. In Figure 13D, the amplitudes decreased. The maximum amplitude mainly appears at the secondary frequency and at λ = 4, which reveals that the separation vortex behind the nacelle appears in the transition region of the wake. Figure 14C demonstrates the pressure amplitude spectra of the vortex behind the nacelle at a radial distance of 45 m from the rotor shaft at λ = 4, 5, 6, 7, 8. It can be seen from Figure 14C that the amplitudes obviously decrease with the development of the wake. The amplitude of the secondary frequency increases at λ = 4.
Figure 14D illustrates the pressure amplitude spectra of the vortex behind the nacelle at a radial distance from the rotor shaft of 50 m at λ = 4, 5, 6, 7, and 8. The amplitude decreases progressively with the development of the wake. In the turbulent region, the weakening of the pressure fluctuation indicates that large dissipation mainly occurs in the separation vortex. Figure 14E displays the corresponding spectra at 55 m. The dominant frequency amplitude increases with the increase of λ, which reveals that many separation vortices emerge behind the wind turbine. Figure 14F demonstrates the spectra at 60 m. The pressure fluctuation in the far field is obviously lower than in the near field, which indicates that as the distance from the center of the wind turbine decreases, the pressure fluctuation intensity increases. The intensity of vortex shedding gradually weakens along the axial direction. It is further found in Figure 15 that the vortices gradually disappear at points far away from the wind turbine. This is mainly because the many separation vortices present in the near field have largely broken up and dissipated.
Figure 16A displays the pressure amplitude spectra of the tip vortex at a distance of 35 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The maximum amplitudes of the characteristic frequencies occur at λ = 5. However, the secondary frequency peak is significantly larger than that at λ = 4. Figure 16B demonstrates the spectra at 40 m. The secondary frequency at λ = 4 has the maximum amplitude, which reveals that the separation vortex behind the wind turbine appears in the near field of the wake. Figure 16C displays the spectra at 45 m; the dominant frequency amplitude gradually increases with the increase of λ. Figure 16E demonstrates the spectra at 55 m, where the amplitudes obviously decrease with the development of the wake.
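The amplitude spectra discussed throughout this section are obtained by Fourier-transforming the pressure time series recorded at each monitoring point and reading off the dominant and secondary frequency peaks. A minimal sketch (assuming a uniformly sampled signal; the sampling step and the test frequencies below are illustrative, not values from the paper):

```python
import math

def amplitude_spectrum(signal, dt):
    """Single-sided amplitude spectrum via a plain DFT (O(n^2), fine for a sketch)."""
    n = len(signal)
    freqs, amps = [], []
    for k in range(n // 2):
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        freqs.append(k / (n * dt))
        amps.append(2 * math.hypot(re, im) / n)  # factor 2: single-sided spectrum
    return freqs, amps

# Illustrative signal: a dominant component at 2 Hz and a weaker
# "secondary" component at 5 Hz, sampled every 0.01 s for 2 s.
dt = 0.01
sig = [1.0 * math.sin(2 * math.pi * 2 * i * dt)
       + 0.3 * math.sin(2 * math.pi * 5 * i * dt) for i in range(200)]
freqs, amps = amplitude_spectrum(sig, dt)
dominant = freqs[amps.index(max(amps))]   # frequency of the largest peak
```

In practice an FFT (e.g. `numpy.fft.rfft`) would replace the explicit DFT loop; the explicit form is kept here only to make the definition of the amplitude at each bin visible.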
Figure 17A displays the pressure amplitude spectra of the vortex behind the tower at a distance of 35 m from the tower at λ = 4, 5, 6, 7, and 8. It can be seen from Figure 17A that the pressure fluctuation of the vortex behind the tower is violent owing to the impact of the tower, indicating that a large number of separation vortices are produced in this region. Figure 17B demonstrates the spectra at 40 m. The dominant frequency amplitude increases to a certain extent with the increase of λ. The secondary frequency has the maximum amplitude at λ = 4, which demonstrates that the separation vortex behind the tower appears in the near field of the wind turbine wake. The amplitude decreases in the far field. Figure 17E demonstrates the spectra at 55 m. In the far field, the weakening of the pressure fluctuation indicates that large dissipation mainly occurs in the region of the separation vortex. Figure 17F illustrates the spectra at 60 m. The amplitudes obviously decrease with the development of the wake, and the pressure fluctuation in the far field is obviously lower than in the near field.
In the near field, the dominant frequency amplitude has a higher pressure fluctuation peak and many separation vortices are produced. The highest dominant frequency amplitude emerges at λ = 5. It is clearly observed that when λ is more than 5, the dominant frequency amplitude increases with the increase of λ. In the far field, the pressure fluctuation is weak and the dominant frequency amplitude gradually increases with the increase of λ. At λ = 5, many vortices are produced in the near field and the separation vortices are well dissipated in the far field. In a wind farm, the incoming flow disturbance caused by the upstream wind turbine is therefore weak at λ = 5, which is well consistent with the work of Okulov.9

The probability density function (PDF) of the pressure fluctuation is an effective method to study characteristics of the turbulent flow. The probability distribution function represents the probability that the instantaneous amplitude falls within a specified range, so the PDF p(x) of random data satisfies Prob[x < X ≤ x + Δx] ≈ p(x)Δx for small Δx. The normal distribution is the important reference distribution here; its PDF is

p(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)),

where μ and σ are the mean and standard deviation. To further study the pressure fluctuation and vorticity dissipation, the PDF of the pressure fluctuations is used to examine the physical mechanism of the wind turbine wake. Figure 18A shows that the range of the pressure fluctuations increases with the increase of λ.
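The PDF analysis above amounts to histogramming the pressure-fluctuation signal at a monitoring point and comparing it with the normal reference distribution. A minimal sketch (the synthetic Gaussian fluctuation series is illustrative, not data from the paper):

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian probability density, the reference distribution for the fluctuations."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def empirical_pdf(samples, nbins=20):
    """Histogram estimate of the PDF: bin counts normalised so the area integrates to 1."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for s in samples:
        k = min(int((s - lo) / width), nbins - 1)   # clamp the max sample into the last bin
        counts[k] += 1
    n = len(samples)
    centers = [lo + (k + 0.5) * width for k in range(nbins)]
    density = [c / (n * width) for c in counts]
    return centers, density

random.seed(0)
fluct = [random.gauss(0.0, 0.1) for _ in range(5000)]   # synthetic pressure fluctuations
centers, density = empirical_pdf(fluct)
area = sum(d * (centers[1] - centers[0]) for d in density)  # should integrate to ~1
```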
| CONCLUSIONS
In this paper, the wake characteristics of a wind turbine are studied by changing the tip speed ratio, and the following conclusions are obtained.
First, the 3D vortex structure around the wind turbine and the 2D sectional vorticity field at different tip speed ratios are investigated. The results show that with the increase of tip speed ratio, the tip vortex increasingly breaks up, dissipates, and mixes with the vortex behind the nacelle. The vortex behind the nacelle develops backward in a spiral shape and finally mixes with the tip vortex and the vortex behind the tower at about 4D from the wind turbine. The attached vortex increases with the tip speed ratio and finally disappears at a distance of about 2D from the wind turbine. The tower effect exists at 4D downstream of the wind turbine; however, with increasing distance it gradually weakens owing to wake growth and turbulent mixing.
In addition, the tip vortex intensity and the pitch of the tip vortex decrease with the increase of λ in the near field, and the position of the broken vortex circles gradually decreases. The destruction and dissipation of the vortices are increasingly enhanced with the increase of λ. In the far field, the mixing time of the tip vortex with the vortex behind the nacelle increases with the increase of λ; the attached vortex disappears and the vorticity of the vortex behind the nacelle rapidly decreases with the development of the wake. The turbulence intensity of the wind turbine wake is increasingly enhanced with the increase of λ, and the interaction between the tip vortex and the vortex behind the nacelle is likewise enhanced.
A large number of tip separation vortices appear at a distance of 2D-4D from the wind turbine, where the vortices dissipate rapidly. The vortices behind the nacelle are dissipated behind the wind turbine. The vortex behind the tower produces transverse separation vortices under the lateral influence of blade rotation. In the mixing zone, the separation vortices are quickly broken up and dissipated. In the turbulence zone, the tip vortex, the vortex behind the nacelle and the vortex behind the tower mix into a coherent structure. The pressure amplitude spectrum of the vortices decreases with increasing distance from the center of the blade axis, which indicates that the strength of vortex shedding gradually decreases along the axial direction.
In the near field, the dominant frequency amplitude has a higher pressure fluctuation peak and many separation vortices emerge in this region. The highest dominant frequency amplitude emerges at λ = 5, and for λ higher than 5 the dominant frequency amplitude increases with the increase of λ. In the far field, the dominant frequency peak increases with the increase of λ, which indicates that more separation vortices are produced behind the wind turbine as λ increases.
The analysis of the PDF of the pressure fluctuations leads to the conclusion that in the near field the wake pressure increases gradually owing to the mixing of wake vortices, while in the far field the pressure fluctuation decreases rapidly owing to the large amount of vortex dissipation. The pressure fluctuation becomes gradually more violent with an increase of λ.
FIGURE 3 Model of wind turbine and computational domain. (A) Wind turbine model. (B) Computational domain.
FIGURE 5 Three-dimensional wake flow structure at different tip speed ratios.
FIGURE 6 Relationship between tip speed ratios and the positions of the broken vortex circles.
FIGURE 7 Section of the wind turbine.
CUI ET AL.
The far field is the flow field beyond 4.5D from the wind turbine. Six monitoring points are measured at distances from the rotor of 35 m (3.5D), 40 m (4D), 45 m (4.5D), 50 m (5D), 55 m (5.5D), and 60 m (6D), which accurately captures the characteristics of the wind turbine wake in the near and far fields.
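The station distances above imply a rotor diameter of D = 10 m (35 m = 3.5D; the radius R = 5 m also matches the monitor rows at z = ±5 m). The tip speed ratio is taken with its standard definition, λ = ωR/U∞ — an assumption here, since the definition itself is not restated in this section:

```python
# Rotor size inferred from 35 m = 3.5D in the text.
D = 10.0      # rotor diameter [m]
R = D / 2.0   # rotor radius [m]

def to_diameters(x_m, diameter=D):
    """Normalise a downstream distance in metres by the rotor diameter."""
    return x_m / diameter

def tip_speed_ratio(omega, u_inf, radius=R):
    """Standard definition lambda = omega * R / U_inf,
    with omega in rad/s and U_inf in m/s (assumed, not stated in this section)."""
    return omega * radius / u_inf

stations = [35.0, 40.0, 45.0, 50.0, 55.0, 60.0]
normalised = [to_diameters(x) for x in stations]   # 3.5D ... 6D
```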
Figure 11A displays the pressure fluctuation of the vortex behind the nacelle at a distance of 35 m from the wind turbine at λ = 4, 5, 6, 7, and 8. It can be seen from Figure 11A that the pressure fluctuation of the vortex behind the nacelle is violent at λ = 4, and a large number of separation vortices are produced in this region. The mean value of the pressure behind the nacelle is highest at λ = 5. Figure 11B demonstrates the pressure fluctuation at 40 m; the mean value of the pressure behind the nacelle gradually increases with the increase of λ.
Figure 11D displays the pressure fluctuation of the vortex behind the nacelle at a distance of 50 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The mean value of the pressure behind the nacelle is lowest at λ = 5. The mean value of the pressure in the far field is lower than in the near field, which indicates that the pressure intensity gradually decreases with increasing distance from the center of the wind turbine.
FIGURE 10 Vorticity field on the section plane at different heights.
Figure 11E demonstrates the pressure fluctuation of the vortex behind the nacelle at a distance of 55 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The pressure obviously decreases with the development of the wake.
Figure 12A displays the pressure fluctuation of the tip vortex at a distance of 35 m from the wind turbine at λ = 4, 5, 6, 7, and 8. It can be seen from Figure 12A that the pressure fluctuation of the tip vortex is violent at λ = 4; at the faster angular velocities, the pressure fluctuation of the separation vortex is less violent.
Figure 12B demonstrates the pressure fluctuation of the tip vortex at a distance of 40 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The mean value of the pressure at λ = 5 is not the highest value; the mean value of the pressure increases with the increase of λ.
FIGURE 11 Pressure fluctuation of the tip vortex at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
Figure 12D displays the pressure fluctuation of the tip vortex at a distance of 50 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The mean values of the pressure are similar, and the pressure fluctuation of the tip vortex is still violent at λ = 4. Figure 12E demonstrates the pressure fluctuation of the tip vortex at 55 m. With the development of the wake, the pressure decreases, and at λ = 4 a large number of separated vortices still cause disturbance here. Figure 13A displays the pressure fluctuation behind the tower at a distance of 35 m from the wind turbine at λ = 4, 5, 6, 7, and 8. It can be seen from Figure 13A that the pressure fluctuation behind the tower is violent at λ = 4, due to the influence of the tower.
FIGURE 12 Pressure fluctuations behind the tower at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
Figure 13D displays the pressure fluctuation of the vortex behind the tower at a distance of 50 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The pressure decreases with the development of the wake. Figure 13E demonstrates the pressure fluctuation of the vortex behind the tower at 55 m at λ = 4, 5, 6, 7, and 8. The mean value of the pressure behind the tower is lowest at λ = 5, and the separation vortex is rapidly dissipated.
FIGURE 13 Amplitude spectra of vortex behind the nacelle at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
Figure 14A displays the pressure amplitude spectra of the vortex behind the nacelle at a radial distance from the rotor shaft of 35 m at λ = 4, 5, 6, 7, and 8. It is clearly observed that the maximum amplitudes of the characteristic frequencies occur at λ = 5. Surprisingly, the value of the secondary frequency peak is
FIGURE 14 Pressure fluctuation of vortex behind the nacelle at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
Figure 17C displays the pressure amplitude spectra of the vortex behind the tower at a distance of 45 m from the wind turbine tower at λ = 4, 5, 6, 7, and 8. The dominant frequency amplitude increases with the increase of λ, and many separation vortices emerge in the near field. Figure 17D illustrates the pressure amplitude spectra of the vortex behind the tower at 50 m at λ = 4, 5, 6, 7, and 8. It is obtained that the amplitude decreases in the far field.
FIGURE 15 Schematic diagram of monitoring point selection.
FIGURE 16 Amplitude spectra of the tip vortex at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
FIGURE 17 Amplitude spectra of vortex behind the tower at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
Figure 18A demonstrates the PDF of the pressure fluctuations behind the nacelle at a distance of 35 m from the wind turbine at λ = 4, 5, 6, 7, and 8. It can be seen from Figure 18A that the absolute value range of the pressure fluctuation increases with the increase of λ: the range is 0.2 at λ = 4 and 0.3 at λ = 8. It can be seen in Figure 18C,D that, compared with Figure 18A,B, the pressure fluctuation behind the nacelle mainly occurs around 4D from the wind turbine. Figure 18E,F displays the PDF of the pressure behind the nacelle at distances of 55 and 60 m at λ = 4, 5, 6, 7, and 8; the fluctuations of pressure gradually decrease, which reveals that the separation vortex has been dissipated in this area.
FIGURE 18 PDF of pressure behind the nacelle at different monitoring points and tip speed ratios. (A) Monitoring point at 35 m. (B) Monitoring point at 40 m. (C) Monitoring point at 45 m. (D) Monitoring point at 50 m. (E) Monitoring point at 55 m. (F) Monitoring point at 60 m.
Figure 19A displays the PDF of the tip vortex pressure at a distance of 35 m from the wind turbine at λ = 4, 5, 6, 7, and 8. The absolute value range of the pressure fluctuation is 0.28 at λ = 4 and 0.3 at λ = 8. The pressure fluctuation of the blade tip vortex is significantly stronger than that of the vortex behind the nacelle, and the sharp pressure fluctuation of the blade tip vortex occurs around 4D from the wind turbine. The pressure fluctuation mainly falls on the negative axis.
"year": 2024,
"sha1": "53f5a10733b4de5e67256e6787c9bb76fc0226ba",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ese3.1693",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "0326947e93f6dfbd52b9ee1a3c61ce6d694f2248",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Elafin is a neutrophil serine protease inhibitor expressed in lung and displaying anti-inflammatory and anti-bacterial properties. Previous studies demonstrated that some innate host defense molecules of the cystic fibrosis (CF) and chronic obstructive pulmonary disease airways are impaired due to increased proteolytic degradation observed during lung inflammation. In light of these findings, we thus focused on the status of elafin in CF lung. We showed in the present study that elafin is cleaved in sputum from individuals with CF. Pseudomonas aeruginosa-positive CF sputum, which was found to contain lower elafin levels and higher neutrophil elastase (NE) activity compared with P. aeruginosa-negative samples, was particularly effective in cleaving recombinant elafin. NE plays a pivotal role in the process as only NE inhibitors are able to inhibit elafin degradation. Further in vitro studies demonstrated that incubation of recombinant elafin with excess of NE leads to the rapid cleavage of the inhibitor. Two cleavage sites were identified at the N-terminal extremity of elafin (Val-5—Lys-6 and Val-9—Ser-10). Interestingly, purified fragments of the inhibitor (Lys-6—Gln-57 and Ser-10—Gln-57) were shown to still be active for inhibiting NE. However, NE in excess was shown to strongly diminish the ability of elafin to bind lipopolysaccharide (LPS) and its capacity to be immobilized by transglutamination. In conclusion, this study provides evidence that elafin is cleaved by its cognate enzyme NE present at excessive concentration in CF sputum and that P. aeruginosa infection promotes this effect. Such cleavage may have repercussions on the innate immune function of elafin.
Elafin is a neutrophil serine protease inhibitor expressed in lung and displaying anti-inflammatory and anti-bacterial properties. Previous studies demonstrated that some innate host defense molecules of the cystic fibrosis (CF) and chronic obstructive pulmonary disease airways are impaired due to increased proteolytic degradation observed during lung inflammation. In light of these findings, we thus focused on the status of elafin in CF lung. We showed in the present study that elafin is cleaved in sputum from individuals with CF. Pseudomonas aeruginosa-positive CF sputum, which was found to contain lower elafin levels and higher neutrophil elastase (NE) activity compared with P. aeruginosa-negative samples, was particularly effective in cleaving recombinant elafin. NE plays a pivotal role in the process as only NE inhibitors are able to inhibit elafin degradation. Further in vitro studies demonstrated that incubation of recombinant elafin with excess of NE leads to the rapid cleavage of the inhibitor. Two cleavage sites were identified at the N-terminal extremity of elafin (Val-5-Lys-6 and Val-9 -Ser-10). Interestingly, purified fragments of the inhibitor (Lys-6 -Gln-57 and Ser-10 -Gln-57) were shown to still be active for inhibiting NE. However, NE in excess was shown to strongly diminish the ability of elafin to bind lipopolysaccharide (LPS) and its capacity to be immobilized by transglutamination. In conclusion, this study provides evidence that elafin is cleaved by its cognate enzyme NE present at excessive concentration in CF sputum and that P. aeruginosa infection promotes this effect. Such cleavage may have repercussions on the innate immune function of elafin.
Elafin is a cationic 6-kDa non-glycosylated serine protease inhibitor belonging to the chelonianins, a distinct family of the canonical inhibitors also including secretory leukoprotease inhibitor (SLPI) 2 . It is also known as SKALP (skin-derived antileukoprotease) or ESI (elastase-specific inhibitor). The molecule displays a compact structure maintained by four conserved disulfide bridges characteristic of WAP (whey acidic protein) family and shares 40% sequence identity with SLPI. In addition to its ability to inhibit porcine pancreatic elastase, elafin is a potent inhibitor of two neutrophil serine proteases, neutrophil elastase (NE) and proteinase 3 (1,2), and is therefore thought to protect tissue from degradation by these enzymes. Elafin is released by proteolytic cleavage from a larger molecule called trappin-2 or pre-elafin, which possesses at the N terminus of the whey acidic protein domain a cementoin domain containing several motifs having the consensus sequence GQDPVK that can act as transglutaminase substrate, allowing the cross-linking of the inhibitor to extracellular matrix proteins (3)(4)(5). It has been shown that tryptase, a mast cell-derived protease, may be involved in the proteolytic processing of trappin-2 into elafin (6). In vivo, elafin is mainly detected under inflammatory conditions, suggesting that the inhibitor is inducible. Several studies have demonstrated that elafin expression can be up-regulated in response to pro-inflammatory stimuli such as lipopolysaccharide (7), NE (8), and proinflammatory cytokines like IL-1 or TNF-␣ (9 -12). Elafin/trappin-2 is mainly expressed by epithelial surfaces such as skin (13)(14)(15) or lung epithelium (10,16) where the inhibitor acts as an antiprotease to protect tissue against proteolytic damages caused during inflammatory events. Inflammatory cells such as alveolar macrophages (17) and neutrophils (18) have also been shown to express the inhibitor. 
In addition to its antiprotease property, it has recently been demonstrated that trappin-2 and elafin possess anti-inflammatory and anti-bacterial activities. The two molecules display anti-bacterial activities against Pseudomonas aeruginosa (Gram-negative) and Staphylococcus aureus (Gram-positive) (11,19), which appear to be independent of their anti-elastase activity or charge properties. In mice, trappin-2 has been shown to dose-dependently reduce LPS-induced neutrophil influx into alveoli in addition to inhibiting LPS-induced production of matrix metalloproteinase-9 and the potent neutrophil attractants Cxcl1 and Cxcl2 (chemokine ligands 1 and 2), suggesting an immunomodulatory role in innate immunity (20). By reducing NF-κB activation, trappin-2 has been demonstrated to attenuate IL-8 secretion by endothelial cells in response to various pro-inflammatory stimuli (TNF-α, LPS, oxidized low density lipoprotein) as well as LPS-induced TNF-α secretion by macrophages (21). Recently, our group demonstrated that elafin inhibits the LPS-induced production of MCP-1 in monocytes by inhibiting AP-1 and NF-κB activation (22).
During lung inflammation, some components of the innate immune response have been shown to be sensitive to the exacerbated host proteolytic activity emanating from dysregulated elastolytic enzymes (23). We have demonstrated that elastolytic cysteine cathepsins present in the lung under inflammatory conditions are involved in the inactivation of several host defense molecules such as SLPI, defensins, and lactoferrin. Cysteine cathepsins were shown to cleave and inactivate SLPI and defensins (human β-defensins 1 and 3), respectively, in epithelial lining fluid from individuals with emphysema (24) and CF (25). Additionally, lactoferrin degradation observed in P. aeruginosa-infected CF sputum was found to be due to an excess of cathepsin activity (26). Furthermore, other elastolytic proteases are potentially involved in the cleavage and inactivation of host defense molecules: Pseudomonas elastase, also referred to as pseudolysin, has been demonstrated to cleave SLPI (27), and high concentrations of pseudolysin, Pseudomonas alkaline protease, and NE were also able to inactivate lactoferrin after prolonged exposure (28).
In the present study we demonstrate that levels of elafin are lower in P. aeruginosa-positive as opposed to P. aeruginosa-negative CF sputa and that recombinant elafin incubated in P. aeruginosa-positive CF sputa is rapidly cleaved. Our data provide evidence that NE is involved in the cleavage of elafin in CF sputum, as only NE inhibitors are able to inhibit this process. Furthermore, NE activity is higher in P. aeruginosa-positive CF sputa compared with P. aeruginosa-negative sputa, confirming that the low levels of elafin observed in P. aeruginosa-positive CF sputa are due to elevated NE levels with subsequent increased cleavage of elafin. We also demonstrate that purified NE in excess can cleave elafin at two distinct sites in its N-terminal extremity and that both fragments of the inhibitor generated upon NE activity still retain inhibitory activity. Although NE preserves the antiprotease activity of elafin, we show on the other hand that the protease considerably affects the capacity of elafin to cross-link with fibronectin by transglutamination and to bind LPS.
CF Sputum Processing—The protocol for sputum processing was modified from that described elsewhere (29). Sputum was weighed and then treated with 4.5 volumes of normal saline per weight. Each sample was briefly vortexed and then rocked for 15 min at room temperature. An additional 4.5 volumes of normal saline were added to each sample, which was rocked for 5 min more. The sample was then filtered through sterile 48-μm nylon gauze, and the soluble sputum phase was obtained by centrifugation of the filtrate for 10 min at 790 × g. The supernatant was stored at −80 °C until required.
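The two saline additions above together dilute the sputum roughly 10-fold on a weight/volume basis (assuming 1 g of sputum occupies about 1 ml, which is an approximation, not a statement from the protocol):

```python
def sputum_dilution_factor(weight_g, vol_per_weight_1=4.5, vol_per_weight_2=4.5):
    """Net dilution of the soluble phase: 4.5 volumes of saline per weight,
    then another 4.5 volumes, assuming 1 g of sputum ~ 1 ml."""
    saline_ml = weight_g * (vol_per_weight_1 + vol_per_weight_2)
    total_ml = weight_g + saline_ml
    return total_ml / weight_g

factor = sputum_dilution_factor(2.0)   # a 2-g sample ends up in ~20 ml
```

Any analyte concentration measured in the soluble phase would therefore underestimate the undiluted sputum concentration by about this factor.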
Determination of Elafin Levels in CF Sputum by Enzyme-linked Immunosorbent Assay (ELISA)—Goat anti-human elafin antibody (R&D Systems, AF1747, 1:500 in Voller's buffer, 100 μl per well) was added to Immunlon-2 plates and left overnight at 4 °C. The plate was washed 3 times with PBS, 0.05% Tween 20 (PBS-T) and blocked in PBS-T containing 0.1% bovine serum albumin for 1 h at room temperature. After washing 3 times in PBS-T, elafin standards (100 μl per well; concentrations ranged from 62.5 to 20,000 pg/ml) and CF sputum samples (100 μl per well) were added to the plate for 2 h at room temperature. The plate was then washed, and biotinylated anti-human elafin antibody (R&D Systems, BAF1747, 1:250 in PBS-T, 100 μl per well) was added to the plate for 2 h at room temperature. After washing, the plate was incubated with horseradish peroxidase-conjugated streptavidin (Zymed Laboratories Inc., 1:2500 in PBS-T, 100 μl per well) for 20 min at room temperature and washed with PBS-T. Peroxidase activity was measured by the addition of ABTS (100 μl per well) and reading the absorbance at 405 nm.
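Reading sample concentrations off the standard curve amounts to interpolating absorbance against the known standards. A minimal sketch using piecewise-linear interpolation on log-concentration (the absorbance values below are invented for illustration; only the 62.5-20,000 pg/ml standard range comes from the text, and a real analysis would more often fit a four-parameter logistic curve):

```python
import math

def interp_concentration(a405, standards):
    """Interpolate concentration from absorbance on a log-concentration scale.
    standards: (concentration_pg_per_ml, absorbance) pairs."""
    pts = sorted(standards, key=lambda p: p[1])
    for (c0, a0), (c1, a1) in zip(pts, pts[1:]):
        if a0 <= a405 <= a1:
            t = (a405 - a0) / (a1 - a0)
            return math.exp(math.log(c0) + t * (math.log(c1) - math.log(c0)))
    raise ValueError("absorbance outside the standard range")

# Hypothetical standard curve: concentrations per the text, absorbances made up.
stds = [(62.5, 0.05), (250, 0.15), (1000, 0.45), (4000, 1.0), (20000, 1.8)]
conc = interp_concentration(0.45, stds)   # pg/ml for a well reading A405 = 0.45
```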
Western Blot Analysis of Recombinant Elafin Incubated with CF Sputum and NE—Recombinant elafin (8.5 × 10⁻⁷ M) was incubated with 10 μl of CF sputum in 30 mM Tris-buffered saline to a final volume of 20 μl for 0, 10 min, 1 h, 6 h, or 24 h at 37 °C. In some cases, CF samples were preincubated for 1 h at 37 °C with protease inhibitors before adding elafin. Reactions were stopped by adding sample treatment buffer containing reducing or non-reducing agent and boiling samples for 5 min. Samples were separated by Tricine SDS-PAGE using a 17.5% polyacrylamide gel and blotted onto a 0.2-μm nitrocellulose membrane (Sigma-Aldrich). The membrane was blocked for 1 h at room temperature with 3% bovine serum albumin in PBS containing 0.1% Tween 20. Elafin was detected by using a biotinylated anti-elafin antibody (R&D Systems, 1:500, overnight at 4 °C) followed by peroxidase-conjugated streptavidin (Zymed Laboratories Inc., 1:2500, 20 min at room temperature). Peroxidase activity was detected with a chemiluminescent substrate (SuperSignal West Femto Maximum Sensitivity Substrate, Pierce).
HPLC Mass Spectrometry—Cleavage of elafin by neutrophil elastase was assessed by incubating recombinant elafin (1 μg; 4.1 × 10⁻⁶ M) with purified NE (10 μg; 8.3 × 10⁻⁶ M) for 1 h in 30 mM Tris-buffered saline, pH 7.5, in a 40-μl final volume at 37 °C. Elastase activity was neutralized with 1 μl of 100 mM phenylmethylsulfonyl fluoride for 30 min at room temperature. Samples were lyophilized until analysis, when they were redissolved in 10 μl of 6 M guanidine HCl, 100 mM Tris, pH 8.5, 1 mM EDTA. 1 μl of 10% trifluoroacetic acid was added to each sample to bring the pH to <3. Samples were then analyzed by reverse phase HPLC coupled to electrospray mass spectrometry as described (30).
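The molarities quoted above can be checked from mass, molecular weight and volume: 1 μg of ~6-kDa elafin in 40 μl gives ~4.2 × 10⁻⁶ M, and 10 μg of NE gives 8.3 × 10⁻⁶ M if NE is taken as ~30 kDa (the 30-kDa figure is an assumption consistent with the quoted concentration, not a value stated here), i.e. a 2:1 molar excess of NE:

```python
def molar_conc(mass_ug, mw_g_per_mol, volume_ul):
    """Molar concentration (mol/L) from mass in ug, MW in g/mol and volume in ul."""
    grams = mass_ug * 1e-6
    litres = volume_ul * 1e-6
    return grams / mw_g_per_mol / litres

# Elafin ~6 kDa (stated earlier in the text); NE ~30 kDa (assumed, see lead-in).
elafin_M = molar_conc(1.0, 6000.0, 40.0)    # ~4.2e-6 M, matching the quoted 4.1e-6 M
ne_M = molar_conc(10.0, 30000.0, 40.0)      # ~8.3e-6 M, matching the quoted value
excess = ne_M / elafin_M                     # ~2-fold molar excess of NE over elafin
```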
Protease Assay—Recombinant elafin (1 μg; 4.1 × 10⁻⁶ M) was incubated with purified NE (10 μg; 8.3 × 10⁻⁶ M) for 2 h in 30 mM Tris-buffered saline, pH 7.5, in a 40-μl final volume at 37 °C. Elafin fragments were purified by reverse phase HPLC, dried, and reconstituted with 50 μl of 0.1 M Hepes, 0.5 M NaCl, pH 7.5, containing 0.1% Brij-35. Various volumes of reconstituted fractions F1 and F2 (from 0 to 5 μl) were incubated for 30 min at 37 °C with 3.3 nM NE in 0.1 M Hepes, 0.5 M NaCl, pH 7.5, containing 0.1% Brij-35. Residual activity of NE was detected by adding the chromogenic substrate MeOSuc-AAPV-pNA at a final concentration of 1 mM and by measuring the absorbance at 405 nm over time at 37 °C in a 96-well microplate reader (Victor2 1420 Multilabel Counter, Wallac). Residual activity was expressed as a percentage, with 100% activity defined by the NE control.
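The residual-activity readout above is simply the initial rate of substrate hydrolysis in the inhibited well expressed as a percentage of the uninhibited NE control (the rates below are illustrative numbers, not data from the assay):

```python
def residual_activity(rate_sample, rate_control):
    """Residual NE activity (%) from initial rates of MeOSuc-AAPV-pNA
    hydrolysis (e.g. dA405/min), relative to the uninhibited control."""
    return 100.0 * rate_sample / rate_control

# Hypothetical rates: control 0.040 dA405/min, elafin-fragment-inhibited 0.010.
pct = residual_activity(0.010, 0.040)   # -> 25% residual activity
```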
Measurement of Elastase Concentration in CF Sputum—10 μl of P. aeruginosa-positive and -negative CF sputum were diluted in 0.1 M Hepes, 0.5 M NaCl, pH 7.5, containing 0.13 mM E-64, 0.11 mM pepstatin A, and 5.4 mM EDTA and treated with or without NE inhibitor (1 mM MeOSuc-AAPV-CMK or 1.6 μM SLPI) for 1 h at 37 °C in a 100-μl volume. 50 μl of the chromogenic substrate MeOSuc-AAPV-pNA was mixed with each sample to a final concentration of 1.4 mM, and the absorbance of samples at 405 nm was measured over time at 37 °C in a 96-well microplate reader (Victor2 1420 Multilabel Counter). The concentration of NE in samples was determined by comparing the elastase activity (given by the rate of hydrolysis of the substrate) with a standard curve of purified NE. All measurements were performed in duplicate.
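Converting a measured hydrolysis rate to an NE concentration via the purified-NE standard curve can be sketched as a proportional fit through the origin (the standard rates below are invented for illustration; simple proportionality between rate and enzyme concentration is an assumption of this sketch):

```python
def ne_concentration(rate, standards):
    """Estimate [NE] from a hydrolysis rate using purified-NE standards.
    standards: (conc_nM, rate) pairs; fits rate = slope * conc through
    the origin by least squares, then inverts it."""
    num = sum(c * r for c, r in standards)
    den = sum(c * c for c, _ in standards)
    slope = num / den            # rate units per nM of NE
    return rate / slope

# Hypothetical standard curve from purified NE (rates made up):
ne_stds = [(1.0, 0.02), (2.0, 0.04), (5.0, 0.10), (10.0, 0.20)]
conc_nM = ne_concentration(0.08, ne_stds)   # sputum well with rate 0.08 -> ~4 nM NE
```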
Analysis of Elafin Cross-linking to Fibronectin by Transglutaminase—Recombinant elafin (100 ng, 3 μM) was incubated with increasing concentrations of neutrophil elastase (0-1000 ng, 0-6.1 μM) in 30 mM Tris-HCl, 150 mM NaCl, pH 7.5 for 1 h at 37 °C. Neutrophil elastase was inactivated with 5 μg of α1-antitrypsin for 30 min. The resulting mix was incubated in 200 mM Tris acetate, pH 6, containing 5 mM CaCl2 and 0.1 mM dithiothreitol for 1 h at 37 °C with 5 μg of plasma fibronectin and 0.38 milliunits of guinea pig liver transglutaminase (one unit catalyzes the formation of 1.0 μmol of hydroxamate per minute from Nα-CBZ-glutaminylglycine and hydroxylamine at pH 6.0 at 37 °C). The reaction was stopped by adding sample treatment buffer without reducing agent and boiling samples for 5 min at 100 °C. Samples were separated by 4-12% Bis-Tris SDS-PAGE and analyzed by Western blotting using a biotinylated anti-elafin antibody according to the method described above.
LPS ELISA-A microtiter plate was coated with 100 ng/well LPS (Sigma, Escherichia coli 055:B5), which had been diluted in serum-free RPMI media, and incubated at 37°C for 3 h. Unbound LPS was washed off the plate with distilled water. Excess water was gently tapped off, and the plate was left to air-dry overnight at room temperature. The next day the plate was blocked with 200 µl/well blocking buffer (PBS with 1% (w/v) bovine serum albumin) for 2 h at 37°C. The plate was washed 3 times with PBS, 0.05% (v/v) Tween, and 100 µl/well of the appropriately diluted proteins (diluted in serum-free RPMI media) were added to the plate at 37°C for 2 h. Control wells, to which serum-free RPMI media alone was added in place of proteins, were included on each ELISA plate. Again, the plate was washed three times before primary antibody (R&D Systems, AF1747) was added at the appropriate dilution (1:100) at 37°C for 2 h. After washing, 100 µl/well of diluted horseradish peroxidase-conjugated secondary antibody was added to the plate at 37°C for 2 h, and the plate was washed 3 times. Substrate, 100 µl/well (ABTS Single Solution, Zymed Laboratories Inc.), was added, and the plate was incubated at room temperature for 20 min. The A405 of the wells was measured on a microtiter plate reader, and results were analyzed using Prism, Version 3.0 (GraphPad Software, San Diego).
Levels of Elafin in Soluble CF Sputum-Levels of elafin in soluble CF sputum from 11 patients with positive P. aeruginosa sputum cultures and 8 patients with negative P. aeruginosa sputum cultures were determined by sandwich ELISA. As shown in Fig. 1, levels of elafin were found to be significantly lower in P. aeruginosa-positive CF sputa compared with P. aeruginosa-negative CF sputa (502 ± 204 versus 2761 ± 876 pg/ml, p < 0.05).
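The reported group comparison can be reproduced from the summary statistics alone. The sketch below computes Welch's t statistic and its approximate degrees of freedom, assuming the quoted ± values are standard errors of the mean (an assumption; if they were SDs the difference would be even more significant) and the group sizes of 11 (Ps+) and 8 (Ps−) stated above:

```python
import math

def welch_t_from_sem(m1, sem1, n1, m2, sem2, n2):
    """Welch's t and Welch-Satterthwaite df from group means and SEMs."""
    t = (m1 - m2) / math.sqrt(sem1 ** 2 + sem2 ** 2)
    df = (sem1 ** 2 + sem2 ** 2) ** 2 / (
        sem1 ** 4 / (n1 - 1) + sem2 ** 4 / (n2 - 1))
    return t, df

# Ps− group: 2761 ± 876 pg/ml (n = 8); Ps+ group: 502 ± 204 pg/ml (n = 11)
t, df = welch_t_from_sem(2761, 876, 8, 502, 204, 11)
print(f"t = {t:.2f}, df = {df:.1f}")  # prints t = 2.51, df = 7.8
```

A t of about 2.51 at roughly 7.8 degrees of freedom exceeds the two-sided 5% critical value, consistent with the reported p < 0.05.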
Cleavage of Elafin by Neutrophil Elastase in CF Sputum
Incubation of Exogenous Elafin with CF Sputum-Effects of negative (Ps−) and positive (Ps+) P. aeruginosa CF sputa on elafin were assessed to determine their ability to potentially cleave or degrade the protease inhibitor. Recombinant elafin was incubated with Ps− and Ps+ CF sputa for various times at 37°C and analyzed by Western blot under reducing or nonreducing conditions using a biotinylated anti-elafin antibody (Fig. 2). As shown in Fig. 2A, no elafin was detectable after an incubation for 24 h with Ps+ CF sputum, whereas a faint band corresponding to intact elafin could still be detected with Ps− CF sputum under the same incubation conditions. Although no fragments of elafin were detectable under reducing conditions (Fig. 2A), Western blot analysis performed under nonreducing conditions resulted in the detection of three distinct bands (Fig. 2B); in addition to the upper band displaying a similar size to intact elafin, two lower bands corresponding to proteolytic fragments of elafin were detected. A time-course experiment was performed by incubating recombinant elafin with Ps− or Ps+ CF sputum for 0, 10 min, 1 h, 6 h, or 24 h at 37°C. As shown in Fig. 2C, levels of recombinant elafin detected by Western blot under reducing conditions decreased in both Ps− and Ps+ CF sputa over time. However, elafin was more rapidly cleaved in Ps+ than in Ps− CF sputum. Most of the elafin was cleaved after incubation for 10 min in Ps+ CF sputum, whereas intact elafin was still clearly detected after 6 h of incubation in Ps− CF sputa (Fig. 2C).
These findings show that the proteolytic activity directed against elafin was higher in Ps+ than in Ps− CF sputum.
Identification of the Protease(s) Involved in the Cleavage of Elafin in CF Sputum-To identify the protease(s) involved in the cleavage of elafin in CF sputa, different protease inhibitors were preincubated with Ps+ CF sputum samples before adding recombinant elafin. After 24 h of incubation at 37°C, samples were analyzed by Western blot under reducing or non-reducing conditions using an anti-elafin antibody. First, the use of nonspecific protease inhibitors targeting each protease family (serine, cysteine, acidic proteases, and metalloproteases) allowed identification of the family of the CF sputum protease(s) involved in the cleavage of elafin. As shown in Fig. 3A, Pefabloc, a nonspecific serine protease inhibitor, inhibited elafin cleavage in Ps+ CF sputa, whereas neither leupeptin, E-64 (cysteine protease inhibitor), pepstatin A (acidic protease inhibitor), nor metalloproteinase inhibitors (EDTA, GM6001, phosphoramidon) had any effect. In addition to Pefabloc, several other serine protease inhibitors were used to identify more precisely the serine protease involved in the cleavage of elafin. As shown in Fig. 3B, no effect was observed with trypsin-like (TLCK, benzamidine, soybean trypsin inhibitor) and chymotrypsin-like (TPCK) inhibitors, suggesting that neither trypsin-like nor chymotrypsin-like proteases are involved in elafin cleavage in Ps+ CF sputa. However, a slight inhibition of elafin cleavage by the nonspecific serine protease inhibitor aprotinin was observed (Fig. 3B). Because no trypsin-like or chymotrypsin-like proteases mediated elafin cleavage in CF sputum, we hypothesized that elastase-like proteases, particularly neutrophil elastase and proteinase 3, could be involved in this process. Among the inhibitors targeting neutrophil serine proteases (e.g. neutrophil elastase, proteinase 3, and cathepsin G) that were tested in our study, only AAT, SLPI, and MeOSuc-AAPV-CMK inhibited elafin cleavage in Ps+ CF sputa (Fig. 3C).
Consistent with our previous results, α1-antichymotrypsin, a chymotrypsin-like inhibitor targeting cathepsin G, did not prevent elafin cleavage (Fig. 3C). Non-reducing Western blot analysis provided similar results, as Pefabloc and the neutrophil elastase inhibitor MeOSuc-AAPV-CMK completely prevented elafin cleavage (not shown). Taken together, these results suggest that NE is involved in the cleavage of elafin in CF sputum.
Given that NE is responsible for the cleavage of elafin in CF sputum and that the inhibitor is more rapidly degraded in Ps+ than in Ps− CF sputum, NE activity was measured in both CF sputa using the chromogenic substrate MeOSuc-AAPV-pNA. Elastase activity in CF sputa was calculated using a standard curve of purified NE to determine the concentration of free NE in samples. As shown in Fig. 4, the concentration of free NE was increased 3.9-fold in Ps+ compared with Ps− CF sputa (2.86 versus 0.73 µM). Additionally, free elastase activity in both samples was totally abrogated by the NE inhibitors MeOSuc-AAPV-CMK and SLPI (Fig. 4), thus confirming the specificity of the measurements.
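As a quick sanity check, the quoted fold change follows directly from the two free-NE concentrations reported above:

```python
free_ne_pos = 2.86   # free NE in Ps+ CF sputa, µM (from Fig. 4 as quoted)
free_ne_neg = 0.73   # free NE in Ps− CF sputa, µM

fold = free_ne_pos / free_ne_neg
print(f"{fold:.1f}-fold increase")  # prints 3.9-fold increase
```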
Effects of Neutrophil Serine Proteases on Elafin Integrity-The ability of human neutrophil serine proteases to cleave elafin was evaluated in vitro (Fig. 5). Recombinant elafin was incubated at 8.5 × 10⁻⁷ M with purified proteases and analyzed by Western blot with an anti-elafin antibody. First, dose-response incubations were carried out for 24 h at 37°C using concentrations of neutrophil serine proteases ranging from 3.5 × 10⁻⁹ to 8.5 × 10⁻⁶ M. As shown in Fig. 5A, human NE and Pr3 were found to cleave recombinant elafin only under conditions of protease excess; although a partial cleavage of the inhibitor was observed at equimolarity, excess neutrophil elastase and proteinase 3 completely cleaved elafin (Fig. 5A, 1 and 2). In contrast to these proteases, human cathepsin G appeared less effective in cleaving elafin, as a 10-fold excess of the enzyme was necessary to partially cleave elafin (Fig. 5A). Time-course incubations of elafin with NE at equimolarity (3.5 × 10⁻⁶ M) and in excess were analyzed under nonreducing conditions (Fig. 5B). Of note, the patterns of cleavage of the inhibitor under these conditions were similar to those observed in Ps− and Ps+ CF sputa, respectively (Fig. 2B).
To further investigate the elastase-mediated cleavage of elafin, products from NE-elafin incubations using two distinct [enzyme]:[inhibitor] molar ratios (1:2.5 or 2:1) were analyzed by HPLC and mass spectrometry. HPLC separation (under nonreducing conditions) of elafin products generated within 1 h by a slight excess of NE ([enzyme]:[inhibitor] molar ratio = 2:1) resulted in the formation of three distinct peaks (Fig. 6A, peaks 1, 2, and 3). Identification of elafin fragments was carried out by analyzing the peaks by mass spectrometry (data not shown). The measured mass of peak 1 was 542.27 Da, identifying it as elafin residues 1-5 (calculated mass = 542.27 Da). Likewise, the measured masses for peaks 2 and 3 were 5474.9 and 5093.4 Da, respectively, identifying them as elafin residues 6-57 (calculated mass = 5474.6 Da) and 10-57 (calculated mass = 5093.1 Da). Although native elafin used as a control exhibited an observed mass of 5999.1 Da (calculated mass = 5999.2 Da) with all cysteines in disulfide linkage, no full-length inhibitor, either as the intact form or possessing a mass with an additional 18 Da, was detected with an excess of NE. The extracted ion chromatogram was examined for the presence of peptides released from elafin by NE under these conditions (not shown). Only peptides corresponding to elafin residues 1-5 (observed mass = calculated mass = 542.27 Da) and 6-9 (observed mass = 399.25 Da, calculated mass = 399.24 Da) were found in the extracted ion chromatogram. Conversely, the peptide 1-9 was not detected using this method. These results indicate that a 2× excess of NE cleaved elafin at the Val-5-Lys-6 and Val-9-Ser-10 peptide bonds. In contrast, such cleavages were not detected under conditions with a slight excess of elafin ([enzyme]:[inhibitor] molar ratio = 1:2.5), even after 24 h of incubation. Instead, a partial cleavage of the inhibitor was identified at the scissile peptide bond Ala-24-Met-25 (not shown).
Taken together, these findings indicate that NE-mediated cleavages of elafin at the Val-5-Lys-6 and Val-9-Ser-10 peptide bonds only occur with an excess of NE. All the cleavage sites generated by NE within elafin are summarized in Fig. 6B.
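The fragment assignments above can be sanity-checked by summing monoisotopic residue masses. The sketch below uses the elafin N-terminal residues 1-10; positions 1-6 (AQEPVK) come from the transglutaminase motif named in the text, Ser-10 from the stated cleavage site, and positions 7-8 are taken as Gly-Pro, an assumption made here because it reproduces the reported 399.24-Da mass for residues 6-9:

```python
# Monoisotopic residue masses (Da) for the residues needed here
RES = {"A": 71.03711, "Q": 128.05858, "E": 129.04259, "P": 97.05276,
       "V": 99.06841, "K": 128.09496, "G": 57.02146, "S": 87.03203}
WATER = 18.01056  # mass of one water, added once per linear peptide

def peptide_mass(seq):
    """Monoisotopic mass of a linear peptide = residue masses + one water."""
    return sum(RES[aa] for aa in seq) + WATER

nterm = "AQEPVKGPVS"   # elafin residues 1-10 (positions 7-8 assumed, see text)
print(round(peptide_mass(nterm[0:5]), 2))  # residues 1-5: prints 542.27
print(round(peptide_mass(nterm[5:9]), 2))  # residues 6-9: prints 399.25
```

Both values agree with the peak masses reported in the text to within rounding, supporting the assignment of the Val-5-Lys-6 and Val-9-Ser-10 cleavage sites.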
Effects of Excess of NE on Elafin Properties-The inhibitory activity of elafin cleaved by excess NE was investigated to determine whether the NE treatment abolished the antiprotease activity of the inhibitor. Elafin fragments 6-57 and 10-57 obtained by incubating elafin with a 2× excess of NE for 2 h were separated by HPLC into two fractions (Fig. 7A1, fractions F1 and F2) and investigated for their anti-NE activity by protease assay. As shown in Fig. 7A2, fractions F1 and F2, corresponding to elafin fragments 6-57 and 10-57, were able to inhibit NE in a dose-dependent manner.
The ability of NE in excess to cleave elafin and to remove an N-terminal peptide containing the transglutaminase substrate motif AQEPVK prompted us to examine the effect of such treatment on the capacity of the inhibitor to covalently cross-link to fibronectin by transglutamination. Like elafin, fibronectin is a substrate of transglutaminases (31) and was recently demonstrated to bind recombinant elafin in vitro by transglutamination (5). After treating elafin with increasing concentrations of NE, the inhibitor was incubated with fibronectin in the presence or absence of guinea pig liver transglutaminase and analyzed by Western blot. As demonstrated in Fig. 7B (lane 2), the treatment of elafin and fibronectin with transglutaminase led to the generation of an immunoreactive band of high molecular mass characteristic of elafin-fibronectin complexes. Elafin was preincubated with increasing amounts of NE to assess the ability of elafin and NE-cleaved elafin to bind fibronectin by transglutamination. As shown in Fig. 7B, treatment with NE in excess led to the cleavage of elafin (lanes 7-10), whereas submolar amounts of NE preserved elafin integrity (lanes 3-6). Under these latter conditions, elafin was shown to covalently bind to fibronectin by transglutamination, in contrast to the conditions using an excess of NE, which abolished the cross-linking reaction between the inhibitor and fibronectin. To confirm that this effect was only due to the cleavage of elafin and not to the cleavage of fibronectin, as NE was previously shown to degrade fibronectin (32), all samples were treated with AAT to neutralize NE, and the nitrocellulose membrane was stripped and reprobed with an anti-fibronectin antibody. The Western blot revealed no cleavage of fibronectin (data not shown). Taken together, these results indicated that NE-mediated cleavage of elafin abolished the capacity of the inhibitor to cross-link to fibronectin by transglutamination.
Elafin is an antibacterial component possessing the capacity to interact with the endotoxin LPS of Gram-negative bacteria and to modulate macrophage responses after LPS stimulation (33). We, therefore, investigated the effect of NE on the ability of the inhibitor to bind to LPS. Recombinant elafin was incubated alone or with an excess of NE and then examined for its capacity to bind to the endotoxin by ELISA using an LPS-coated plate and an anti-elafin antibody (Fig. 7C). Our results showed that native elafin (not treated with NE) was able to interact with LPS in a dose-response manner. On the contrary, little or no elafin was detected when the inhibitor was treated with NE. These results thus indicated that an excess of NE suppresses the ability of elafin to bind to LPS.
DISCUSSION
Elafin is an inducible and multifunctional peptide expressed by mucosal surfaces including the lung epithelium. In addition to being a potent inhibitor of NE and proteinase 3, two neutrophil serine proteases, the molecule displays antibacterial and anti-inflammatory properties and thus participates in the innate defense of the lung during inflammatory events. In the present study we have demonstrated that elafin is rapidly cleaved in P. aeruginosa-positive CF sputum compared with P. aeruginosa-negative CF samples. The pH of the airways of CF individuals is known to be low (pH 5-6), as demonstrated in exhaled breath condensate from patients with exacerbated and stable CF (34).
Although all incubations performed in our study were carried out at pH 7.5, we found that the incubation of elafin with CF sputum at pH 5.2 also led to the cleavage of elafin, although it did not improve the hydrolysis rate of the inhibitor (data not shown). Our findings provide evidence that NE plays a pivotal role in the cleavage of elafin in CF sputum. NE inhibitors (AAT, SLPI, MeOSuc-AAPV-CMK) were able to inhibit the cleavage of elafin, and purified NE was shown to rapidly cleave elafin in vitro under conditions of enzyme excess. Two cleavage sites were identified in elafin, at the Val-5-Lys-6 and Val-9-Ser-10 peptide bonds, using a 2× excess of NE. These cleavage sites are in accordance with the specificity of NE, as this enzyme preferentially cleaves at the C terminus of small aliphatic residues including valine and alanine (35, 36). The ability of excess NE to cleave elafin suggests that NE preferentially interacts with the protease-binding loop rather than cleaving the N-terminal extremity of elafin. Structural studies of elafin (37, 38) indicated that the N-terminal extremity of the inhibitor is on the opposite side of the protease-binding loop (inhibitory loop). Therefore, free NE can likely cleave the inhibitor complexed with its target enzyme.
The rate of elafin cleavage in CF samples correlates with NE activity: samples containing greater amounts of NE activity degraded elafin more quickly. NE activity was shown to be highest in P. aeruginosa-positive CF sputum, where elafin was degraded rapidly, compared with P. aeruginosa-negative samples, where NE activity was lower. A previous study demonstrated similar variation in NE activity (26). This observation can be explained at least in part by bacterial infection. P. aeruginosa is an opportunistic Gram-negative bacterium that frequently infects the lungs of CF patients. This human pathogen is known to induce expression of neutrophil chemoattractants including the CXC chemokine IL-8, and prolonged IL-8 expression is observed in CF epithelial cells compared with non-CF cells after P. aeruginosa exposure (39). Therefore, increased recruitment and activation of neutrophils at inflammatory sites in response to bacterial infection leads to increased secretion of NE, a potent mediator of lung inflammation involved in a number of pro-inflammatory processes. In addition, we have previously demonstrated that CF neutrophils are more sensitive than control neutrophils to the TNF-α and IL-8 contained in CF sputum, resulting in secretion of higher levels of NE (40). In this report we have detected elafin in CF sputum by ELISA, and our results further demonstrate that the levels of the inhibitor are decreased in P. aeruginosa-positive CF samples. We believe that this is due to increased cleavage rather than downregulation of the inhibitor: IL-1, LPS, and NE are pro-inflammatory components playing a crucial role in the regulation of inflammation in cystic fibrosis (41), and several studies showed that these inflammatory mediators up-regulate the expression of elafin (7, 8, 10). Moreover, in this study, experiments in which elafin was incubated with CF sputum demonstrated proteolytic cleavage of the molecule in CF samples, thus confirming our hypothesis.
In addition, we found by ELISA that recombinant elafin treated with an excess of purified NE was less immunoreactive than native elafin (data not shown). Therefore, increased hydrolysis of elafin at its N terminus can explain at least in part the diminution of elafin detection observed in P. aeruginosa-positive CF sputum.
Functional investigations of elafin fragments Lys-6-Gln-57 and Ser-10-Gln-57 revealed that NE-mediated cleavages at the Val-5-Lys-6 and Val-9-Ser-10 peptide bonds have no dramatic consequences on the inhibitory capacity of elafin. Some fragments of elafin cleaved at the N terminus were previously identified in vivo in biological fluids such as urine from patients with psoriasis (42) and purulent sputum (43). Interestingly, the fragment found in purulent sputum started after Val-9 and was found to display inhibitory activity (43). The preservation of the antiprotease properties of elafin fragments Lys-6-Gln-57 and Ser-10-Gln-57 can be explained by analysis of the elafin structure. NMR analysis of elafin indicates that the N-terminal extremity of elafin (Ala-1-Pro-13) is a flexible region linked to a rigid and flat structure, also referred to as the "four-disulfide core," which possesses the inhibitory activity (38). A previous structure-activity study of synthetic elafin using variants of the inhibitor demonstrated that the N-terminal part of elafin is not essential for expressing full inhibitory activity (44). Therefore, it is likely that cleavages occurring in the N-terminal part of elafin will not perturb the structure of the four-disulfide core, thus preserving the inhibitory properties of elafin. We also found a partial cleavage in the disulfide core (at Ala-24-Met-25) mediated by NE within 24 h of incubation using elafin in slight excess (data not shown). The x-ray crystallographic analysis of elafin complexed with porcine pancreatic elastase (37) indicated that this peptide bond is located in the inhibitory loop of elafin and corresponds to the scissile peptide bond, also called the P1-P1' bond, according to the nomenclature of Schechter and Berger (45). Such a cleavage occurring at the scissile bond can be observed for protease inhibitors that obey the standard mechanism described by Laskowski et al. (46).
A complex between a canonical inhibitor and its target enzyme is reversible and can dissociate the enzyme from the inhibitor in an intact (virgin) or cleaved form. According to the standard mechanism, the modified inhibitor, which is thermodynamically identical to the virgin inhibitor, can react with the enzyme and inhibit it completely. This, thus, suggests that the partial cleavage observed at Ala-24 -Met-25 of the elafin inhibitory loop is the result of the standard interaction of the inhibitor with its target enzyme and may not lead to a dramatic loss of the antiprotease activity.
Elafin and its precursor trappin-2 (pre-elafin) possess transglutaminase substrate motifs having the consensus sequence GQDPVK, allowing the inhibitors to be covalently cross-linked to various extracellular matrix (ECM) proteins by transglutamination. Elafin contains one transglutaminase substrate motif located in its N-terminal extremity, whereas trappin-2 possesses five motifs, four of which are in the N-terminal moiety (cementoin domain) and one of which is located in the C-terminal part (elafin domain). Transglutaminases are a family of enzymes catalyzing the formation of an isopeptide bond between the side chains of a Gln and a Lys residue belonging to two different proteins. By this mechanism, both elafin and trappin-2 are able to be cross-linked with a variety of proteins, although trappin-2 was shown to be more rapidly immobilized, likely owing to its higher number of transglutaminase substrate motifs (5). In vivo, trappin-2/elafin have been found covalently complexed in several epithelia and colocalized with type-1 transglutaminase, suggesting that the immobilization process results from transglutaminase activities. In the cornified cell envelope of the epidermis, both molecules are cross-linked to several proteins (including involucrin, keratin-1, loricrin, cystatin α, filaggrin) and are thought to function as cross-bridging molecules (47). In tracheal epithelium, trappin-2/elafin is also found in a complexed form (4). Previous in vitro studies showed that trappin-2 and/or elafin can covalently bind to a number of ECM proteins including elastin, fibronectin, laminin, fibrinogen, collagen, and -crystallin by transglutamination (4, 5, 48). The transglutaminase-mediated cross-linking of trappin-2 and elafin to fibronectin and elastin was demonstrated to preserve their antiprotease activity and to protect the associated ECM molecule against proteolysis mediated by NE (5, 48).
Hence, the immobilization of trappin-2/elafin to ECM proteins in elastic tissues such as lung and skin may play a protective role by preserving the structural integrity of the tissue against damage caused by neutrophilic infiltration during inflammation. In the present study, a cleavage of the elafin N terminus containing the transglutaminase substrate motif AQEPVK was demonstrated to likely occur in vivo by NE in the sputum of patients with CF. Functional investigation of this cleavage showed that it completely suppresses the ability of elafin to cross-link with fibronectin by transglutamination in vitro. Therefore, it is likely that the cleavage of elafin by an excess of NE observed in CF sputum can regulate in vivo the amounts of immobilized and soluble forms of the inhibitor in the CF lung. Furthermore, the regulation of the attachment of elafin to ECM proteins may also affect the protection of these proteins against neutrophil-mediated proteolysis and the structural integrity of the lung tissue.
A number of cationic antimicrobial peptides and proteins are able to bind the endotoxin LPS and modulate the cell inflammatory response induced by the endotoxin (49). Some molecules like BPI (bactericidal/permeability-increasing protein), granulysin/NK lysin, histatins, histone H2A, LL-37, and SLPI can neutralize the pro-inflammatory effects of LPS, whereas other molecules like azurocidin/HBP (heparin-binding protein) can enhance the capacity of LPS to induce the release of pro-inflammatory mediators by cells. A recent study demonstrated that trappin-2 (pre-elafin) and a fragment of elafin starting from Pro-13 were able to bind LPS and modulate the macrophage response to LPS (33). Although trappin-2 binds LPS much more strongly than the elafin fragment, the two molecules can inhibit and enhance the LPS-induced activation of macrophages, respectively, in the presence and in the absence of serum (33). The reason for these antagonistic effects remains unknown. More intriguingly, the elafin fragment, which is less potent than trappin-2 at binding LPS, is much more effective in enhancing the secretion of TNF-α by macrophages exposed to LPS under serum-containing conditions (33). In our report we found that NE cleaves elafin at its N terminus, generating a fragment starting from Ser-10, and similarly, we demonstrated that elafin binds to LPS more strongly than NE-treated elafin. Hence, these findings suggest that an excess of NE and, more plausibly, the resulting cleavage of elafin may alter the immunomodulatory properties of the inhibitor in CF airways.
We and others have shown the importance of elafin as an antiprotease, immunomodulatory, and antimicrobial molecule. That an antiprotease such as elafin can be cleaved by its cognate protease underlines the delicate protease-antiprotease balance which exists in the CF lung. Although the cleavage mediated by NE preserves the antiprotease activity of elafin, it may affect the innate immunity properties of the molecule in CF by altering the capacity of elafin to be immobilized by transglutamination and to bind LPS. The degree to which elafin may be cleaved is determined by the inflammatory milieu and the underlying disease process, as illustrated by the role of P. aeruginosa in perpetuating the NE burden in the CF lung.
Beneficial Effects of Total Phenylethanoid Glycoside Fraction Isolated from Cistanche deserticola on Bone Microstructure in Ovariectomized Rats
The present study was designed to estimate the antiosteoporotic activity of the total phenylethanoid glycoside fraction isolated from C. deserticola (CDP) in rats subjected to ovariectomy (OVX), as well as the related mechanisms. After 3 months of oral administration, the decreased bone mineral density and serum Ca and P in OVX rats were recovered, and the deteriorated trabecular bone microarchitecture was partly improved by CDP (60, 120, and 240 mg/kg) intervention; the activities of bone resorption markers were downregulated, and those of bone formation markers were upregulated; meanwhile, the content of MDA declined, and that of GSH increased with CDP treatment. Compositionally, 8 phenylethanoid glycoside compounds were identified in CDP, with their total content quantified as 50.3% by the HPLC method. Mechanistically, CDP decreased the levels of TRAF6, RANKL, and RANK, thus suppressing RANKL/RANK/TRAF6-induced activation of the downstream NF-κB and PI3K/AKT signaling pathways and ultimately preventing the activities of the key osteoclastogenic proteins NFAT2 and c-Fos. All of the above data implied that CDP exhibited beneficial effects on bone microstructure in ovariectomized rats, and these effects may be related to the NF-κB and PI3K/AKT signaling pathways triggered by the binding of RANKL, RANK, and TRAF6.
Introduction
Postmenopausal osteoporosis, from which 1 in 3 women older than 50 years will suffer, is becoming a major health hazard afflicting more than 200 million women all over the world [1]. At menopause, the sharp decline of the estrogen level usually leads to excess bone resorption caused by enhanced osteoclastogenesis; the balance between osteoblast-induced bone formation and osteoclast-induced bone resorption is then disrupted, and the accelerated bone resorption finally causes osteoporosis and even hip or spine fracture [2]. It is believed that the differentiation of the osteoclast is triggered when receptor activator of nuclear factor kappa B (RANK) binds to RANKL, the ligand of RANK. However, the RANKL-RANK interaction cannot activate downstream signaling unless tumor necrosis factor receptor-associated factor 6 (TRAF6) is recruited [3], followed by the stimulation of downstream signaling pathways including PI3K/AKT and NF-κB. Finally, the expressions of nuclear factor of activated T cells c2 (NFAT2) and c-Fos are regulated [4] to modulate the differentiation of the osteoclast as well as bone resorption. Thus, the factors and regulators which are directly or indirectly related to the activation and differentiation of the osteoclast are believed to be crucial targets for preventing bone loss.
There are indeed some clinical hormone replacement therapy drugs, such as the synthetic estrogen estradiol valerate, that are effective in the treatment of postmenopausal osteoporosis. Unfortunately, some of them increase the risk of serious cancers, including breast and endometrial cancers [5], which limits their clinical application. Therefore, it is necessary to seek other alternatives with both efficacy and minimal side effects. Traditional Chinese medicines (TCM), as well as their isolated bioactive compounds and fractions [6-9], have been proved effective against various ailments including postmenopausal osteoporosis. Among these bioactive components and fractions, phenylethanoid glycoside (PhG) compounds with potential efficacy are believed to be promising agents for the treatment of osteoporosis [10-12]. The structures of PhGs consist of a cinnamic acid aglycone and a hydroxyphenylethyl group combined with β-glucopyranose, apiose, galactose, rhamnose, or xylose via glycosidic bonds. They widely exist in medicinal species of the genus Cistanche [13]. Cistanche deserticola Y.C. Ma is an official TCM recorded in the Chinese pharmacopoeia, and besides being an important TCM [14], C. deserticola is also an antiaging tonic herb with few side effects which has been developed into medicinal liquor and nutritional liquid approved by the State Food and Drug Administration. According to the Chinese pharmacopoeia, C. deserticola has traditionally been used to treat kidney essence deficiency problems such as muscle debility and lumbar weakness, and phenylethanoid glycoside compounds including echinacoside and acteoside are the main bioactive constituents of this herb. According to the TCM theory of "kidney-govern-bone," the bone system is governed by kidney essence [15], and bone-related disorders like osteoporosis could be remedied by herbs or compounds possessing the activity of nourishing the kidney essence.
Therefore, we hypothesized that the total phenylethanoid glycoside fraction isolated from C. deserticola would be, at least in part, beneficial in the treatment of osteoporosis. The current experiment was therefore devised to validate our hypothesis by using an ovariectomized (OVX) rat model; besides the bone resorption and formation markers, antioxidation indices as well as the RANKL/RANK/TRAF6-induced PI3K/AKT and NF-κB signaling pathways were also examined to investigate the main mechanisms of the antiosteoporotic bioactivity.

A corresponding specimen of C. deserticola (#20150901) was preserved in the Department of Pharmaceutical Analysis. Firstly, 30.0 kg of powdered C. deserticola was extracted using the reflux method with 70% ethanol as solvent; the ratio of material to solvent was set at 1:10, and refluxing was carried out for 2 h, 3 times. Then, all of the filtrates were combined and condensed under reduced pressure at 60°C. Secondly, AB-8 macroporous resin columns were used for preliminary separation, with different ratios of ethanol in water (0%, 20%, 30%, 40%, 50%, and 60%, each 60 L) employed for elution. Thirdly, the 40% and 50% eluents were combined and further purified on repeated AB-8 macroporous resin columns with eluents of 0%, 20%, 30%, 40%, and 50% ethanol in water, each eluent being 12 L. Finally, the 40% fraction was collected and condensed under reduced pressure to obtain 150 g of the pale yellow phenylethanoid glycoside fraction of C. deserticola (CDP; the yield was 0.5%). For the in vivo experiments, 0.5% CMC-Na solution was employed to dissolve CDP, and oral administration to animals was set at 1 mL/100 g of body weight; for the in vitro Western blot analysis, CDP was dissolved in DMSO and then diluted with DMEM to obtain final concentrations of 0.1 mg/mL, 0.01 mg/mL, and 0.001 mg/mL.
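The extraction yield and the gavage-solution concentrations follow directly from the numbers given above (150 g CDP from 30.0 kg crude drug; a dosing volume of 1 mL per 100 g body weight, i.e. 10 mL/kg). A short sketch of that arithmetic:

```python
# Extraction yield: 150 g CDP recovered from 30.0 kg (30000 g) of crude drug
yield_pct = 150 / 30000 * 100
print(f"yield: {yield_pct:.1f}%")  # prints 0.5%

# The gavage volume is 1 mL per 100 g body weight (10 mL/kg), so the
# solution concentration (mg/mL) for a given dose (mg/kg) is dose / 10.
for dose in (60, 120, 240):        # CDPL, CDPM, CDPH (mg/kg/day)
    print(f"{dose} mg/kg -> {dose / 10:g} mg/mL in 0.5% CMC-Na")
```

Preparing one solution per dose group at a fixed mg/mL concentration lets the per-animal dose track body weight automatically through the 1 mL/100 g volume rule.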
Animal Experimental Protocol.
A total of 60 female adult Sprague-Dawley rats aged 3 months were purchased from the animal testing center of Ningxia Medical University, with an average initial body weight of 234 ± 25 g. The rats were housed in a standard specific pathogen-free environment and acclimated for 1 week. All rats were then anesthetized (chloral hydrate, 100 mg/kg, i.p.) and either sham-operated (SHAM) or bilaterally ovariectomized; the ovariectomized rats were randomly divided into 5 subgroups: a model group orally treated with vehicle (0.5% CMC-Na; OVX), a positive group given estradiol valerate (1 mg/kg/day; EV), and low, moderate, and high dosage groups given 60, 120, and 240 mg/kg/day of CDP (CDPL, CDPM, and CDPH), respectively. All rats were dosed orally once daily for 3 months, with the dosage adjusted every 2 weeks according to changes in whole body weight. On the last day of the animal experiment, 24-hour urine was collected using metabolic cages and serum was collected from the femoral artery of anesthetized rats; the right femora, tibiae, and all organs were dissected and stored at -80°C for further analysis. The animal experiments were approved by the Institutional Animal Care and Use Committee of Ningxia Medical University.
Bone Mineral Density Determination and Micro-CT Analysis. Firstly, a dual-energy X-ray absorptiometry machine (Lunar, USA) was used to estimate the total bone mineral density of the right femur of each rat; secondly, the same femur was used to obtain a 3D image of the trabecular bone microarchitecture with a micro-CT scanner (GE, America). The isotropic resolution was set to 14 μm to obtain an ideal 3D image; the region of interest (ROI) was chosen by setting the same coordinates in the growth plate of the femur of each sample; and the bone morphometric parameters, including trabecular separation (Tb.Sp), trabecular number (Tb.N), trabecular thickness (Tb.Th), bone mineral content (BMC), tissue mineral density (TMD), and tissue mineral content (TMC), were obtained by analyzing the ROI.
Figure 2: Effects of OVX and 12 weeks of treatment with CDP or EV on total bone mineral density in the right femur of rats, assessed by dual-energy X-ray absorptiometry (n = 10/group). Data are presented as the mean ± SD; *p < 0.05, **p < 0.01, and ***p < 0.001 versus the OVX group; ###p < 0.001 versus the SHAM group.
The OVX rats showed a notable reduction of the microarchitecture area and trabecular number; CDP-treated and EV-treated rats partly reversed these findings to the same degree after 12 weeks of treatment. All values are presented as the mean ± SD. *p < 0.05, **p < 0.01, and ***p < 0.001 versus the OVX group; ###p < 0.001 versus the SHAM group.
2.6. Serum and Urine Biochemical Assay. The activities of serum cathepsin K, TRAP, SOD, and GSH as well as the contents of serum PTH, calcitonin, ERRα, MDA, and BGP and of urine DPD were determined with the corresponding reagent kits according to the manufacturers' instructions, and the level of alkaline phosphatase (ALP) and the contents of serum and urine calcium (Ca) and phosphorus (P) were estimated with an automatic analyzer (Ciba-Corning 550 Diagnostics Corp., Oberlin, OH, USA).
Western Blot Analysis.
Osteoclasts were induced from RAW 264.7 cells supplemented with macrophage colony-stimulating factor (MCSF) and RANKL. Briefly, 1 × 10⁷ RAW 264.7 cells were cultured in a 6-well plate in the presence of 30 ng/mL MCSF and 25 ng/mL RANKL. After 6 days of induction, matured osteoclasts were identified by TRAP staining with the corresponding kit and then treated with CDP (0.1, 0.01, and 0.001 mg/mL, respectively) for 24 h; the cells were then lysed with a lysis buffer containing 0.5 mmol phenylmethylsulfonyl fluoride and protease and phosphatase inhibitors. The lysates were separated by 10% SDS-PAGE and transferred to a PVDF membrane, which was blocked with 5% nonfat milk for 2 h and probed with antibodies against AKT1, NF-κB-p65, RANKL, PI3K-p85α, RANK, NFAT2, TRAF6, c-Fos, and β-actin (1 : 400). The same membranes were stripped and reprobed with the nine corresponding antibodies, respectively, and the blots were quantified with the Image Lab software. The experiments were repeated three times.
2.8. Statistical Analysis. All data obtained from the in vivo and in vitro experiments, expressed as the mean ± SD, were analyzed by one-way ANOVA followed by Dunnett's test (SPSS 22.0 software, SPSS, USA); p < 0.05 was considered statistically significant.
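The group comparison above can be illustrated with a minimal pure-Python sketch of the one-way ANOVA F statistic (the study itself used SPSS 22.0, and Dunnett's post hoc test is omitted here; the group values below are made up for illustration, not data from the study):

```python
# Sketch: one-way ANOVA F statistic, as used to compare treatment groups.
def one_way_anova_f(groups):
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative bone-density-like values for three groups
sham = [1.02, 0.98, 1.05, 1.01]
ovx = [0.85, 0.88, 0.83, 0.86]
cdp = [0.95, 0.97, 0.93, 0.96]
f = one_way_anova_f([sham, ovx, cdp])
print(f"F = {f:.1f}")
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates that at least one group mean differs from the others.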
Effects of CDP on Bone Mineral Density and Microarchitecture of Trabecula. The total bone mineral density of the rats in the different subgroups is shown in Figure 2. An obvious decrease in bone mineral density was observed in rats of the OVX model group, which dropped by nearly 12.0% at 12 weeks after the operation compared with the SHAM group (p < 0.001). All CDP-treated rats exhibited significantly increased bone mineral density, by 11.2%, 12.0%, and 10.7% (p < 0.01), respectively, compared with the OVX model group. Furthermore, consistent with the total bone mineral density data, micro-CT reconstruction as well as histomorphometric determination of the femur showed that rats in the OVX group exhibited obvious deterioration of the trabecular architecture, evidenced by a notably reduced number and area of trabeculae as well as markedly increased Tb.Sp compared with the SHAM group. Treatment with CDP prevented the OVX-induced deterioration of the trabecular architecture; as Figure 3 shows, the BMC, TMC, and Tb.N values were significantly increased and the Tb.Sp area was notably decreased, while the TMD and Tb.Th values seemed not to be significantly affected by either the OVX operation or the CDP intervention.
Figure 4: Effects of OVX and 12 weeks of treatment with CDP or EV on urine and serum Ca and P as well as PTH and calcitonin of rats (n = 10/group). All data are expressed as the mean ± SD. *p < 0.05, **p < 0.01, and ***p < 0.001 versus the OVX group; #p < 0.05, ##p < 0.01, and ###p < 0.001 versus the SHAM group.
The urinary level of P and the serum content of calcitonin in rats of the OVX model group were nearly 30% and 60% lower than in the SHAM rats (p < 0.001), respectively, whereas no obvious increasing or decreasing trends in urinary Ca, serum Ca, serum P, or serum PTH were observed between the OVX and SHAM groups. Treatment with CDP significantly prevented the loss of serum P and Ca in OVX rats, evidenced by notably upregulated serum P and Ca levels (p < 0.05) compared with the OVX model group. In addition, increased but not statistically significant trends in calcitonin were observed in both the low and high dosage CDP groups compared with the OVX model group.
Effects of CDP on Bone Formation and Bone Resorption Markers. The beneficial effects of CDP on bone formation indices as well as its inhibitory influence on bone resorption markers are described in Figure 5. Concerning the bone formation markers, serum BGP levels were almost unaffected by the ovariectomy, as evidenced by the nonsignificant changes observed in all treated groups, whereas statistically significant improvements in serum ALP were obtained in both the low (60 mg/kg) and moderate (120 mg/kg) dosage CDP groups compared with the SHAM group (p < 0.01). Concerning the bone resorption indices, the serum levels of cathepsin K, DPD, and TRAP in the OVX model group were significantly elevated by about 75.0%, 41.4%, and 21.0%, respectively, compared with the SHAM rats; upon CDP treatment, especially at the low dosage of 60 mg/kg, cathepsin K, DPD, and TRAP were notably inhibited by 67.3%, 41.4%, and 25.9%, respectively, compared with the OVX model group.
Effects of CDP on the Vagina and Uterus as well as Whole Body Weights. No significant differences in the initial whole body weights of the rats were observed among the six groups before treatment (Figure 6). However, the ovariectomy led to a significant increase of nearly 36.0% in the final body weight of rats in the OVX model group, whereas the uterine and vagina wet weights drastically declined by nearly 90.0% and 60.0%, respectively, compared with the SHAM rats (p < 0.001). Although the content of ERRα exhibited no significant difference between the OVX and SHAM groups, all of the treatment groups, including CDP and EV, significantly increased the level of ERRα. Furthermore, the gained whole body weight as well as the loss of vagina and uterine weights of OVX rats were partly reversed by EV treatment (p < 0.001) but were not affected by the CDP intervention.
Figure 6: Effects of OVX and 12-week treatment with CDP or EV on ERRα expression, body weight, and uterine and vagina weights of rats (n = 10/group). Data are presented as the mean ± SD. *p < 0.05, **p < 0.01, and ***p < 0.001 versus the OVX group; ###p < 0.001 versus the SHAM group.
Effects of CDP on Levels of Serum MDA, SOD, and GSH.
There was no statistically significant difference in serum SOD and GSH levels between the SHAM and OVX model groups; as Figure 7 shows, only an increasing trend in GSH was observed between the two groups. In addition, the serum MDA level was sharply upregulated by nearly 50% in the OVX model group compared with the SHAM rats. SOD activity was not influenced by CDP treatment, whereas the GSH level was significantly improved by the CDP intervention, and CDP notably decreased the MDA level by 33.9% and 42.4% at the doses of 60 and 240 mg/kg, respectively (p < 0.001).
Effects of CDP on Protein Expression Levels.
Our data, shown in Figure 8, suggested that CDP treatment significantly decreased the protein levels of TRAF6, RANK, and RANKL compared with the control. Among the downstream signaling pathways, NF-κB was suppressed and PI3K/AKT was stimulated by the CDP intervention, as evidenced by the downregulated expression of NF-κB-p65 and the upregulated expression of PI3K-p85α and AKT1. Consequently, the expression of NFAT2 was significantly decreased and that of c-Fos was obviously increased after treatment with CDP at concentrations of 0.001-0.1 mg/mL. A suggested mechanism is described in Figure 9: CDP downregulates the levels of RANKL and RANK, reducing the amount of ligand bound to its receptor; the RANKL-RANK connection is further weakened by the downregulation of TRAF6, which suppresses the downstream NF-κB pathway while the PI3K/AKT signaling pathway is stimulated, finally leading to decreased NFAT2 expression and an increased c-Fos level.
Discussion
Phenylethanoid glycosides are naturally occurring water-soluble components that widely exist in the medicinal plant kingdom [11]. Thus far, phenylethanoid glycoside compounds have attracted increasing research attention because of their evident role in treating various human ailments and abnormalities [13]. A number of antiosteoporotic bioactive fractions and compounds, including polyphenols and phenylethanoid glycosides, have been identified and isolated from dozens of natural medicinal herbs [5,10,12,16,17]. C. deserticola is well known as the "ginseng of the desert," which reflects the safety profile of this edible TCM [18,19]. As a general tonic herb and natural health food long used in Asian countries, C. deserticola has exhibited beneficial effects in enhancing kidney strength. TCM herbs traditionally used to invigorate and preserve kidney essence are often used to treat osteoporosis; both in vitro and in vivo published data have proved the antiosteoporotic activity of C. deserticola [20][21][22][23], and phenylethanoid glycoside constituents, including echinacoside and acteoside, are the main bioactive components of this edible medicinal plant. All of this suggests that not only echinacoside and acteoside themselves but also the other phenylethanoid glycoside components contained in C. deserticola are responsible for the antiosteoporotic property of this herb. In our present study, a macroporous resin with a favorable safety profile was used to isolate and enrich the phenylethanoid glycoside fraction from C. deserticola, and by HPLC, eight main phenylethanoid glycoside components, namely, acteoside F, echinacoside, cistanoside A, acteoside, isoacteoside, acteoside C, 2′-acetylacteoside, and 6′-acetylacteoside, were found in the isolated fraction, at contents of 3.6%, 8.8%, 5.0%, 13.3%, 3.3%, 3.6%, 9.9%, and 3.2%, respectively.
Echinacoside, one of the main active compounds recorded for C. deserticola [14], has been proved to possess antiosteoporotic activity; however, the required dosage of 270 mg/kg is so high that it limits further clinical application [24]. In the current experiments, the total phenylethanoid glycoside fraction was used in OVX rats at a lower dosage of 60-240 mg/kg body weight/day, and by HPLC the identified constituents accounted for nearly 50% of this fraction.
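The "nearly 50% pure" figure can be checked by summing the eight HPLC-reported component contents (a sketch; the percentages are taken directly from the text above):

```python
# Sketch: the eight phenylethanoid glycoside contents reported by HPLC (% w/w)
contents = {
    "acteoside F": 3.6, "echinacoside": 8.8, "cistanoside A": 5.0,
    "acteoside": 13.3, "isoacteoside": 3.3, "acteoside C": 3.6,
    "2'-acetylacteoside": 9.9, "6'-acetylacteoside": 3.2,
}
total = sum(contents.values())
print(f"identified constituents: {total:.1f}% of the CDP fraction")
```

The sum comes to 50.7%, consistent with the stated purity of nearly 50%.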
It is well known that OVX causes osteoporosis, and the OVX rat is regarded as a classical and suitable model of human postmenopausal osteoporosis. Accordingly, a significant decrease in bone mineral density, trabecular bone microarchitecture, uterine and vagina wet weights, and estrogen level, as well as obvious enhancements in bone resorption and body weight, were observed after the ovariectomy, which were in part due to estrogen loss [25]. Our data clearly demonstrate that OVX indeed induced postmenopausal osteoporosis, accompanied by a sharp decline in bone quality, bone microarchitecture, and uterine and vagina wet weights. As EV is a common hormone replacement agent used in clinical practice, it served as the positive control in our in vivo experiment, and the gained body weight and atrophied uterine weight, as well as the deteriorated bone mineral density and trabecular bone microarchitecture, were reversed by EV supplementation as expected. In contrast to the positive control, the decreased vagina and uterine weights as well as the gained whole body weight of rats in the OVX model group were not affected by CDP treatment, implying that CDP can enhance bone formation without side effects on body and uterine tissues. Although the levels of ERRα were significantly upregulated by CDP treatment, the effect resembled that of a phytoestrogen in that no side effects on uterine and vagina tissues were observed. In addition, CDP treatment significantly strengthened the bone quality of OVX rats that had been deteriorated by the ovariectomy.
In addition, the levels of P and Ca in the urine and serum of OVX rats were also used to reflect the antiosteoporotic effect; the concentrations of Ca and inorganic P usually depend on the levels of calcitonin and PTH [26].
Figure 8: … and c-Fos (h) (n = 3/group); the protein expression was normalized to β-actin, and quantitative data of every signal protein are shown as percentages of the control value. Data are described as the mean ± SD. *p < 0.05, **p < 0.01, and ***p < 0.001 versus the control group.
In the present study, although no significant declining or increasing trends in the urinary excretion of Ca, serum P, serum Ca, or PTH were observed in the OVX model group, significant changes in the urinary level of P and in calcitonin (p < 0.001) were observed. Consistent with published data, estrogen deficiency caused by ovariectomy leads to a decreased blood calcitonin level, and this decreased serum calcitonin finally leads to an increased PTH level, with Ca believed to be the major regulator of PTH secretion. Because the concentration of PTH showed no significant difference between the OVX and SHAM groups in the present study, the Ca levels in both serum and urine also showed no obvious changes between the two groups. However, a significantly declining tendency in the calcitonin level between the OVX and SHAM groups was obtained, and consequently, the urinary P content of OVX rats was potently decreased. We believe that these data may explain the seemingly contradictory finding that the urinary Ca excretion of OVX rats showed no obvious change compared with SHAM rats; this phenomenon may also be related to the increased rate of bone turnover [27]. After treatment with CDP, the serum levels of P and Ca were notably upregulated and the urinary P content was obviously downregulated in OVX rats, reflecting that CDP can not only prevent bone mineral element excretion but also enhance the serum content of those elements, thus indirectly suppressing bone loss. Furthermore, the bone formation and resorption markers as well as the antioxidant enzymes SOD and GSH were employed to explain the underlying antiosteoporotic mechanisms of CDP. Similar to published data, the ALP level in the OVX model group exhibited a nonsignificant increasing trend, indicating an accelerated rate of bone turnover in postmenopausal osteoporosis [10]. After treatment with CDP (60, 120, and 240 mg/kg/day), however, the ALP activity was significantly enhanced.
It is well known that OVX causes a sharp decline in estrogen levels, which usually leads to excessive bone resorption and oxidative stress [28], as evidenced by the notably upregulated levels of TRAP, cathepsin K, DPD, and MDA in the OVX model group. These deteriorations were partly reversed by the CDP intervention. In addition, OVX rats treated with CDP (60 and 240 mg/kg) showed a significant increase in GSH activity (p < 0.05). These results imply that CDP exerts a therapeutic effect on OVX-induced osteoporosis by enhancing bone formation and suppressing bone resorption as well as by improving the bone antioxidant system. Activation of RANK by its ligand RANKL stimulates the expression of NFAT2 and c-Fos via PI3K/AKT and NF-κB signaling [29]. NF-κB has been proved essential for osteoclastogenesis, as disruption of NF-κB leads to impaired osteoclast differentiation with an osteopetrotic phenotype, and NF-κB upregulates c-Fos and downregulates NFAT2 expression during RANKL/RANK/TRAF-induced osteoclastogenesis. To estimate the beneficial influence of CDP on NFAT2- and c-Fos-mediated osteoclastogenesis, the expression levels of RANKL and RANK were analyzed. As expected, CDP significantly inhibited NFAT2 and stimulated c-Fos levels by downregulating the expression of RANKL and RANK. Meanwhile, RANK itself lacks intrinsic kinase activity unless joined by TRAF6 to trigger downstream signaling [3]. CDP also downregulated the expression of TRAF6, which significantly reduced the binding of RANKL to RANK. A hypothesized antiosteoporotic mechanism of CDP in OVX rats covering these signaling pathways and regulators is depicted in Figure 9.
Concisely, CDP decreased TRAF6, RANKL, and RANK levels, thereby suppressing the downstream signaling pathways triggered by RANKL/RANK, including PI3K/AKT and NF-κB, and finally reduced the expression and activity of the key osteoclastogenic proteins NFAT2 and c-Fos. Multiple lines of evidence therefore imply that the beneficial effect of CDP on the bone metabolism of OVX rats is mainly mediated by the RANKL/RANK/TRAF6-induced PI3K/AKT and NF-κB pathways.
Figure 9: Hypothesized molecular mechanism: CDP may prevent bone loss in OVX rats through RANKL/RANK/TRAF6-induced inactivation of NF-κB and activation of PI3K/AKT pathways as well as c-Fos stimulation and NFAT2 suppression, as evidenced by the downregulated expression levels of TRAF6, RANKL, RANK, NF-κBIA, and NFAT2, whereas c-Fos, AKT, and PI3K were significantly upregulated by CDP treatment compared with the control group.
Conclusion
In summary, the total phenylethanoid glycoside fraction isolated from C. deserticola exhibited significant beneficial effects on postmenopausal osteoporosis in OVX rats. Its therapeutic potential in suppressing bone loss operated mainly through stimulating bone formation and inhibiting bone resorption as well as improving the bone antioxidant system; the mechanisms may be related to RANKL/RANK/TRAF6-induced NF-κB inactivation and PI3K/AKT activation as well as c-Fos stimulation and NFAT2 suppression, which finally inhibited osteoclast differentiation.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
P²A: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects in sports videos is still challenging. In this work, we release yet another sports video dataset, P²A (PingPong-Action detection), which consists of 2,721 video clips collected from the broadcasting videos of professional table tennis matches in World Table Tennis Championships and Olympiads. We worked with a crew of table tennis professionals and referees to obtain fine-grained action labels (in 14 classes) for every ping-pong action appearing in the dataset, and formulate two sets of action detection problems: action localization and action recognition. We evaluate a number of commonly used action recognition models (e.g., TSM, TSN, Video Swin Transformer, and SlowFast) and action localization models (e.g., BSN, BSN++, BMN, TCANet) on P²A for both problems under various settings. These models achieve only 48% area under the AR-AN curve for localization and 82% top-one accuracy for recognition, since the ping-pong actions are dense with fast-moving subjects while the broadcasting videos run at only 25 FPS. The results confirm that P²A is still a challenging task and can be used as a benchmark for action detection from videos.
INTRODUCTION
Videos have become one of the most popular media in our everyday life, and video understanding has drawn much attention from researchers in recent years, including video tagging [1,56,66], retrieval [13,16,29], action recognition [15,21,59], and localization [10,37,62]. There are many applications of video understanding, such as recommendation systems [7,30,63]. One of the most attractive applications is understanding sports videos, which can benefit coaching [3,52], player training [4,46], and sports broadcasting [28,53]. To better understand sports videos, localizing and recognizing the actions in untrimmed videos play crucial roles; however, they are challenging tasks with unsolved problems.
First, there are many types of sports, such as team sports like football, basketball, and volleyball, and individual sports like tennis, table tennis, golf, and gymnastics, and each of them has specific actions. Therefore, it is difficult to build a dataset covering all sports and their specific actions. Moreover, the durations of actions in different sports are diverse. For example, strokes in tennis and table tennis are extremely fast, usually less than one second, while actions in soccer can last for several seconds, such as long passes and corner kicks. Hence, it is difficult to model actions of various lengths with a single model.
Second, data annotation of sports actions is challenging. Compared with common actions in daily life, like walking or riding a bicycle, that can be easily recognized by annotators, it can be difficult to identify the actions in sports videos, for example, the Axel, flip, and Lutz jumps in figure skating; hence, professional players should be involved in data annotation. In addition, players often perform deceptive actions; for example, table tennis players can perform similar-looking strokes but serve balls with different kinds of spin (topspin, backspin, and sidespin), which makes it difficult for annotators to distinguish one action from another.
Recently, researchers have paid much attention to sports videos to address these challenges, including building datasets [12,20,24,35,42] and proposing new models [23,36,48]. In this paper, we focus on a specific sport, table tennis, and propose a dataset, Ping Pang Action (P²A), for action recognition and localization to facilitate research on fine-grained action understanding. The properties of P²A are as follows:
• We annotate each stroke in the videos, including the category of the stroke and the indices of the start and end frames. The stroke labels are confirmed by professional players, including Olympic table tennis players.
• To the best of our knowledge, P²A is the largest dataset for table tennis analysis, composed of 2,721 untrimmed broadcasting videos with a total length of 272 hours. Though [24,35] propose table tennis datasets, their numbers of videos are smaller and they use self-recorded videos with only one player, which is much easier than using broadcasting videos. Moreover, the datasets proposed in [24,35] are only for action recognition, whereas P²A can also be used for action localization.
• The actions are fast and dense. Action lengths range from 0.3 to 3 seconds, but more than 90% of the actions are shorter than 1 second. Typically, there are around 15 actions in 10 seconds, which is challenging for localization.
With these properties, P²A is a rich dataset for research on action recognition and localization, and a benchmark for assessing models (see Section 3 for more details). We further benchmark several widely used action recognition and localization models on P²A, finding that P²A is rather challenging for both recognition and localization.
To sum up, the main contributions of this work are twofold. First, we develop a new challenging dataset, P²A, for fast action recognition and dense action localization, which provides high-quality annotations confirmed by professional table tennis players. Compared with existing table tennis datasets, P²A uses broadcasting videos instead of self-recorded ones, making it more challenging and flexible. In addition, P²A is the largest dataset for table tennis analysis. Second, we benchmark a number of existing recognition and localization models on P²A, finding that it is difficult to localize dense actions and to recognize actions with imbalanced categories, which could inspire researchers to come up with novel models for dense action detection in the future.
RELATED WORK
Action Localization. The task of action localization is to find the beginning and end frames of actions in untrimmed videos. Normally, there are three steps in an action localization model: first, video feature extraction using deep neural networks; second, classification; and finally, suppression of noisy predictions. Z. Shou et al. [43] propose a temporal action localization model based on multi-stage CNNs, where a deep proposal network identifies the candidate segments of a long untrimmed video that contain actions, a classification network then predicts the action labels of the candidate segments, and finally the localization network is fine-tuned. Alternatively, J. Yuan et al. [65] employ Improved Dense Trajectories (IDT) to localize actions; however, IDT is time-consuming and memory-hungry. In contrast, S. Yeung et al. [64] treat action localization as a decision-making process, where an agent observes video frames and decides where to look next and when to make a prediction. Similar to Faster R-CNN [41], X. Dai et al. [10] propose a temporal context network (TCN) for action localization, where proposal anchors are generated and the proposals with high scores are fed into the classification network to obtain the action categories. To generate better temporal proposals, T. Lin et al. [27] propose a boundary sensitive network (BSN), which adopts a local-to-global fashion; however, BSN is not a unified framework for action localization and its proposal feature is too simple to capture temporal context. To address these issues, T. Lin et al. [26] propose a boundary matching network (BMN). Recently, M. Xu et al. [62] introduce the pre-training-fine-tuning paradigm into action localization, where the model is first pre-trained on a boundary-sensitive synthetic dataset and then fine-tuned on human-annotated untrimmed videos. Instead of using CNNs to extract video features, M. Nawhal et al.
[37] employ a graph transformer, which significantly improves the performance of temporal action localization.
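All of these proposal-based localization methods are scored by how well predicted segments overlap ground-truth actions; a minimal sketch of the underlying temporal IoU criterion (the segment times below are hypothetical, not from the dataset):

```python
# Sketch: temporal IoU between a proposal and a ground-truth action segment,
# the matching criterion underlying AR@AN-style proposal evaluation.
def temporal_iou(seg_a, seg_b):
    (s1, e1), (s2, e2) = seg_a, seg_b
    inter = max(0.0, min(e1, e2) - max(s1, s2))
    union = (e1 - s1) + (e2 - s2) - inter
    return inter / union if union > 0 else 0.0

# A proposal counts as a "hit" if its IoU with some ground truth
# exceeds a threshold (e.g., 0.5).
gt = (12.0, 12.8)        # a 0.8 s stroke
proposal = (11.9, 12.7)  # a slightly shifted proposal
print(round(temporal_iou(gt, proposal), 3))
```

Note that for sub-second strokes, even a small temporal shift of the proposal sharply reduces IoU, which is one reason dense, fast actions are hard to localize.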
Action Recognition. Action recognition lies at the heart of video understanding, an elementary module that draws much attention from researchers. A simple deep learning based model is proposed by A. Karpathy et al. [21], where a 2D CNN is independently applied to each video frame. To capture temporal information, fusion of the frame features is used; however, this simple approach cannot sufficiently capture the motion of objects. A straightforward way to mitigate this problem is to directly introduce motion information into action recognition models. Hence, K. Simonyan et al. [44] propose a two-stream framework, where the spatial-stream CNN takes a single frame as input and the temporal-stream CNN takes a multi-frame optical flow as input. However, the model proposed in [44] employs only shallow CNNs, while L. Wang et al. [55] investigate different architectures, with the prediction for a video being the fusion of the segments' predictions. To capture temporal information without introducing extra computational cost, J. Lin et al. [25] propose a temporal shift module, which fuses information from neighboring frames while using only 2D CNNs. Another family of action recognition models is 3D based. D. Tran et al. [49] propose a deep 3D CNN, termed C3D, which employs 3D convolution kernels. However, it is difficult to optimize a 3D CNN, since it has many more parameters than a 2D CNN, while J. Carreira et al. [8] employ a mature architecture design and better-initialized model weights, leading to better performance. All of the above models adopt a single frame rate, while C. Feichtenhofer et al. [15] propose the SlowFast framework, where the slow path uses a low frame rate, the fast path employs a higher temporal resolution, and the features are fused in the successive layers.
Recently, researchers have paid more attention to video transformers, larger models with multi-head self-attention layers. G. Bertasius et al. [5] propose a video transformer model, termed TimeSformer, where video frames are divided into patches and space-time attention is imposed on the patches. TimeSformer has a larger capacity than CNNs, in particular when trained on a large dataset. Similarly, A. Arnab et al. [2] propose a transformer-based model, namely ViViT, which divides video frames into non-overlapping tubes instead of frame patches. Another advantage of transformer-based models is that it is easy to pre-train them with self-supervised learning approaches. C. Wei et al. [59] propose a pre-training method in which the transformer-based model predicts the Histograms of Oriented Gradients (HOG) [11] of the masked patches and is then fine-tuned on downstream tasks. Using the pre-training-fine-tuning paradigm, we can further improve the performance of action recognition.
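The temporal shift idea of TSM [25] can be illustrated with a minimal NumPy sketch (an assumption-laden toy version: a (T, C) feature layout with batch and spatial dimensions omitted, and the 1/8 channel fold used in the paper):

```python
import numpy as np

# Sketch: the zero-parameter temporal shift at the heart of TSM.
# x has shape (T, C): T frames, C channels. One fold of channels is shifted
# backward in time, one fold forward, and the rest are left in place, so a
# plain 2D convolution afterwards mixes information across neighboring frames.
def temporal_shift(x, fold_div=8):
    t, c = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                  # frame t sees frame t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # frame t sees frame t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]             # remaining channels untouched
    return out

x = np.arange(4 * 8, dtype=float).reshape(4, 8)  # 4 frames, 8 channels
y = temporal_shift(x)
```

The boundary frames are zero-padded, mirroring the shift's behavior at clip edges.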
In this paper, we adopt multiple widespread action recognition and localization models to conduct experiments on our proposed dataset, P 2 A, showing that P 2 A is relatively challenging for both action recognition and localization, since the actions are fast and the categories are imbalanced (see Sections 3 and 4 for more details).
P 2 A DATASET
Our goal in establishing the PingPangAction dataset is to introduce a challenging benchmark with professional and accurate annotations for the regime of intensive and dense human action understanding.
Dataset Construction
Preparation. The procedure of data collection takes the following steps. First, we collect the raw broadcasting video clips of international and Chinese top-tier championships and tournaments organized by the ITTF (International Table Tennis Federation). For convenience in further processing (e.g., storing and annotating), we split the whole game videos into 6-minute chunks, while ensuring the records are complete, distinctive and of standard high resolution, e.g., 720P (1280 × 720) or 1080P (1920 × 1080). The short video chunks are then ready for annotation in the next step.
Annotation. Since the actions (i.e., the ping pong strokes) in the video chunks are extremely dense and fast-moving, it is challenging to recognize and localize all of them accurately. Thus, we cooperate with a team of table tennis professionals and referees to regroup the actions into broad types according to the similarity of their characteristics, the so-called 14-classes (14c) and 8-classes (8c). As the name implies, 14c categorizes all the strokes in our collected video chunks into 14 classes, which are shown in Fig. 1 (right side). Compared to 14c, we further refine and combine the classes into eight higher-level ones, where the mapping relationships are revealed accordingly. Specifically, since there are seven kinds of service classes at the beginning of each game round (i.e., the first stroke by the serving player), we combine these service classes into a single one while leaving the non-serving classes unchanged. Note that all the actions/strokes mentioned here appear in the formal turns/rounds conducted by either the front-view or the back-view player from the broadcasting cameras (the blue-shirt and red-shirt players in Fig. 1); those presented in highlights (e.g., possibly in a zoom-in scale with slow motion), Hawk-Eye replays (e.g., with the focus on ball tracking), and game summaries (e.g., possibly in a zoom-out scale with a scoreboard) are ignored as redundant actions/strokes. For each of the action types in 14c/8c, we define a relatively clear indicator for the start frame and the end frame to localize the specific action/stroke. For example, we label a non-serving stroke with a +/-0.5 seconds window around the time instant when the racket touches the ball; since a serving action/stroke may take a long time before the touching moment, we label it with an adjusted +0.5/-1.5 seconds window.
The window size is used as the reference for the annotator and can be varied according to the actual duration of the specific action. We further show the detailed distribution of the action/stroke classes in Section 3.2.
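The windowing rule above can be sketched as follows; the helper name and the clamping at the chunk start are our own illustrative assumptions, not part of the annotation tooling:

```python
# Sketch of the default annotation-window rule (hypothetical helper).
# Non-serving strokes: [-0.5 s, +0.5 s] around racket-ball contact.
# Serving strokes: [-1.5 s, +0.5 s], since serve preparation takes longer.

def stroke_window(contact_time: float, is_serve: bool) -> tuple[float, float]:
    """Return (start, end) in seconds for a stroke's default label window."""
    before = 1.5 if is_serve else 0.5
    after = 0.5
    start = max(0.0, contact_time - before)  # clamp at the chunk start (assumption)
    return start, contact_time + after

print(stroke_window(10.0, is_serve=False))  # (9.5, 10.5)
print(stroke_window(10.0, is_serve=True))   # (8.5, 10.5)
```

In practice the window is only a starting reference: as the text notes, annotators adjust it to the actual duration of the specific action.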
Once we have established the well-structured classes of actions, the annotators proceed to label all the collected broadcasting video clips. Unfortunately, the first edition of the annotated dataset achieves only around 85% labeling precision under sampling inspection by experts (i.e., international table tennis referees). It is understandable that, even for trained annotators, labeling the actions/strokes of table tennis players remains challenging, owing to 1) the fast-moving and dense actions; 2) the deceptive actions; and 3) the entirely/half-sheltered actions. To improve the quality of the labeled dataset, we work with a crew of table tennis professionals to further correct the wrong or missing samples in the first edition. The revised edition achieves about 95% labeling precision on average, which is confirmed by the invited international table tennis referee [39].
Note that, beyond labeling the segment of each action/stroke, we also label some additional information along with it (after finishing each action), which includes the placement of the ball on the table, the occasion of winning or losing a point, the player who committed the action (the name with the status of front view or back view in that stroke), whether it is forehand or backhand, and the serving condition (serve or not). The full list of labeling items is attached in the open-sourced dataset repository on Github 1 .
Calibration. The last step of annotation is to clean the dataset for further research purposes. Since the 6-minute video dataset contains some chunks without valid game rounds/turns (e.g., the opening of the game, intermissions and breaks, and the award ceremony), we filter out those unrelated chunks and focus on action-related videos only. Then, to support the main goals of this research, namely the recognition and localization tasks, we further screen the video chunks and keep those meeting the following criteria:
• Camera Angle - The most common camera angle for broadcasting is currently the top view, where the cameras are mounted at a relatively high position above the court and overlook it from behind the two players. As shown in Fig. 1, we only select videos recorded in the standard top view and remove those with other views (e.g., side view, "bird's eye" view, etc.) to keep the whole dataset consistent. Note that recordings with non-standard broadcasting angles rarely (less than 5%) appear in our target pool of competition videos; only a few WTT (World Table Tennis Championships) broadcasting videos experimentally use other types of camera angles.
• Game Type - Considering the possible mutual interference in doubles or mixed-doubles games, we only include singles games in P 2 A. In this case, at most two players appear simultaneously in each frame, where usually one player is located near the top of the frame and the other at the bottom.
• Audio Track - Although we do not leverage audio information for the localization and recognition tasks, the soundtracks accompany the videos and await exploration in future research. We remove videos with broken or missing soundtracks.
• Video Resolution - We select broadcasting videos with a resolution equal to or higher than 720P (1280 × 720) in MP4 format and drop those with low frame quality.
• Action Integrity - Since we trim the original broadcasting videos into 6-minute chunks, some actions/strokes may be cut off by the splitting. We make efforts to label those broken actions (about 0.2%), and it is optional whether to incorporate them in a specific task.
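The calibration criteria above can be summarized as a simple keep/drop predicate; the metadata field names below are hypothetical and not part of the released dataset schema:

```python
# Sketch of the chunk-filtering criteria (hypothetical metadata fields).
def keep_chunk(meta: dict) -> bool:
    return (
        meta.get("camera_angle") == "top_view"   # standard broadcast angle only
        and meta.get("game_type") == "singles"   # exclude doubles/mixed doubles
        and meta.get("audio_ok", False)          # drop broken/missing soundtracks
        and meta.get("height", 0) >= 720         # 720P or higher resolution
    )

chunks = [
    {"camera_angle": "top_view", "game_type": "singles", "audio_ok": True, "height": 1080},
    {"camera_angle": "side_view", "game_type": "singles", "audio_ok": True, "height": 1080},
]
print([keep_chunk(c) for c in chunks])  # [True, False]
```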
In the next subsection, we introduce the basic statistics of the calibrated P 2 A dataset and provide some guidance on pre-processing that might be helpful in specific tasks.
Notes for Table 1: † We denote the tasks of recognition, localization, and segmentation as "Rec", "Loc", and "Seg". ‡ We calculate the Density as the value of Segments/Samples. * We also include some popular video-based action detection datasets, which are not limited to sports-related actions but contain a considerable amount of them as a sub-genre.
Dataset Analysis
In this section, we conduct a comprehensive analysis of the statistics of P 2 A, which is the foundation of the follow-up research and also serves as the motivation for establishing P 2 A in the video analysis domain. The analysis lies in four parts, which are (1) general statistics, (2) category distribution, (3) densities, and (4) discussion.
In the rest of this subsection, we go through each part in detail. General Statistics. The P 2 A dataset consists of 2,721 annotated 6-minute-long video clips, containing 139,075 labeled action segments and lasting 272 hours in total. They are extracted from over 200 table tennis competitions, involving almost all the top-tier ones during 2017-2021. These events include the World Table Tennis Championships, the Table Tennis World Cup, the Olympics, the ITTF World Tour, the Asian Table Tennis Championships, the National Games of China, as well as the China Table Tennis Super League. Since we intend to establish an action-focused table tennis dataset, the quality and standard of the actions are expected to reach a relatively high level, where the official HD broadcasting videos selected from the above-mentioned top-tier competitions lay a good foundation as the source of the target actions.
For the general statistics of the actions, Table 1 presents the basic statistics of P 2 A compared with popular sports-related datasets. The action length in P 2 A varies from 0.32 to 3 seconds, which is significantly shorter than in most of the other datasets except OpenTTGames and TenniSet. Since OpenTTGames targets only in-game events (i.e., ball bounces, net hits, or empty event targets) without any stroke annotations, its action length is even shorter and its Density considerably higher. Compared to these two table-tennis-related datasets, our P 2 A dataset has a far longer Duration (over 30 times, with more samples and segments), which matters greatly for accurately reflecting the correlation and distribution of actions/strokes across classes. Furthermore, compared to the sources of OpenTTGames and TenniSet, P 2 A leverages broadcast TV games with proper calibration by international table tennis referees, which are more official and professional. Compared to commonly used datasets such as ActivityNet and FineGym, P 2 A focuses on actions/strokes with dense and fast-moving characteristics. Specifically, the action length of P 2 A is around 30 times shorter and the density is 5 to 50 times higher than those of these two datasets, which means P 2 A introduces a series of actions/strokes at a totally different granularity. We later show that the SOTA models confront a huge challenge when applied to P 2 A. Although the number of classes in P 2 A is relatively small due to the nature of table tennis itself, the action recognition and localization tasks are far from easy on P 2 A for most of the mainstream baselines. In the next subsection, we show the distribution of categories in P 2 A, which is one of the vital components causing the aforementioned difficulties. Category Distribution. The right side of Figure 1 shows the target categories of actions/strokes in P 2 A.
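The Density column of Table 1 (Segments/Samples, per footnote ‡) and the overall action rate can be reproduced from the raw counts reported in this section:

```python
# Reproducing the headline P2A statistics from the reported raw counts.
samples = 2721       # 6-minute annotated video chunks
segments = 139075    # labeled action segments
total_hours = 272    # total duration

density = segments / samples                      # Table 1, footnote ‡
actions_per_second = segments / (total_hours * 3600)

print(f"density ≈ {density:.1f} segments/chunk")       # ≈ 51.1
print(f"rate ≈ {actions_per_second:.3f} actions/sec")  # ≈ 0.142
```

Note the rate here is averaged over the full 272 hours; it is lower than the in-game figure of about 1.5 actions per second reported in Section 3.2, because pauses between turns are included in the total footage.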
The P 2 A dataset has two main branches of strokes, Serve and Non-serve, each of which has 7 types of sub-strokes. Since strokes may be taken by different players in the same video frames, we annotate strokes from both the front-view and the back-view players without time overlaps. For example, Figure 2 represents consecutive frames of annotations of a front-view player, where the player produces a "Side Spin Serve" at the start of a turn. Notice that this player stands at the far end of the game table and turns from the side view to the front view, then keeps the front view for the rest of this turn. A similar annotation procedure records the strokes of the player at the near end of the game table.
As aforementioned, the P 2 A dataset comes in two versions, 14c (14 classes) and 8c (8 classes). The 14c version flattens all the predefined classes in Figure 1 and treats them equally in subsequent tasks. We measure the number of strokes for each class in Figure 3. Overall, the drive category contains more than half of the samples (we sort the categories in descending order), which means there is a category-imbalance phenomenon in P 2 A. We also observe an interesting point: the non-serve categories generally last a shorter duration than the serve categories (as shown by the blue bars). This is because of the long preparation of the serve strokes, and it could be an important feature to distinguish the serve from the non-serve strokes. Another fact is that the non-serve categories dominate the whole dataset, where seven of the left-most eight categories (in descending order) are non-serve categories. This imbalance motivates the creation of the second version, 8c, which combines all the serve categories into one unified "Serve" category. Figure 4 measures the number of actions in the serve, non-serve, and combined 8c categories separately. As a result, the subcategories are imbalanced, where "Side Spin" and "Drive" dominate the serving and non-serving actions in (a)&(b), respectively. In the combined 8c dataset, "Drive" unsurprisingly takes a large proportion among all eight categories due to its frequent appearance in modern table tennis competitions. Thus, it is necessary to adopt additional sampling techniques to mitigate the side effects; the implementation details are introduced in Section 4. Densities. As one of the unique characteristics compared to other datasets, the high density of actions/strokes plays a vital role in P 2 A.
We analyze the density of actions/strokes in two respects, the first being the duration distribution of each action/stroke category. Figure 5(a) shows the duration distribution of all classes of strokes in P 2 A. We can observe that most action types have relatively stable distributions (with short tails) except the "Control" and "Neutral" types. "Neutral" is the serving stroke class representing irregular/unknown actions in the serving process, whose duration ranges widely from 0.4 to 3 seconds in Figure 5(a). Similarly, the "Control" category stands for non-serving strokes that cannot be assigned to any other class, and it has a long tail of at most 2 seconds. On the whole, the average stroke duration stays around 0.5 seconds, which demonstrates that the actions in P 2 A are fast-moving compared to the other datasets in Table 1. We also compare the duration distributions of the serve and non-serve classes in Figure 5(b), where the stacked plot of both classes shows that the serve class has a slightly longer tail than the non-serve class. For the second respect, we measure the action frequency and the appearance density in each turn of the game. A turn in a game starts with a serving action by one player and ends with a point acquired by either player. Thus, the action frequency can reflect the nature of a table tennis competition (e.g., the strategy and the pace of the current game) in terms of how many strokes are delivered in a single turn. As shown in Figure 6(a), most turns contain around 3 ∼ 5 actions/strokes, and the distribution is long-tailed up to a maximum of 28 actions/strokes per turn. It is plausible that the pace of the modern game has become faster than ever, and a point can be decided within three strokes per player (i.e., within 6 strokes in total).
To further differentiate P 2 A from other sports datasets, we additionally measure the appearance density within 10 seconds, which is usually the time boundary between long-duration and short-duration sports actions. Figure 6(b) depicts the histogram of action counts in 10-second runs of consecutive video frames. The results follow a normal-like distribution with an average count of 15, which means that actions appear densely and frequently within short time spans for most of P 2 A. The density reaches about 1.5 actions per second on average, which demonstrates the dense-action characteristic of the P 2 A dataset. Summary. In this section, we comprehensively report the statistics of P 2 A. Specifically, we leverage rich measurements to obtain a thorough view of the volume of the dataset, the category distribution, the duration of samples, and the densities. In conclusion, the P 2 A dataset is unique compared to the other mainstream sports datasets in the following aspects:
• Compared to the existing table tennis datasets [14, 51], P 2 A has an adequate sample size (10 times ∼ 200 times larger) and a more fine-grained set of categories.
• Compared to the large datasets in other sports domains [6, 38, 42, 45], P 2 A focuses on dense (5 times ∼ 50 times) and fast-moving (around 0.1 times the duration) actions.
• Compared to most of the aforementioned sports datasets, P 2 A includes two players' actions back-to-back (the actions appear in turns at a fast pace), which is more complicated and mutually interfering during analysis.
In the next section, we investigate the performance of the stateof-the-art models and solutions for the predefined action recognition and action localization tasks on our P 2 A dataset. Then, we observe the challenges those algorithms confront and demonstrate the research potential of the P 2 A dataset.
BENCHMARK EVALUATION
In this section, we present the evaluation of various methods on our P 2 A dataset and show the difficulties when applying the original design of those methods. Then, we provide several suggestions to mitigate the ineffectiveness and tips to improve the current models.
Baselines
We adopt two groups of baseline algorithms to separately tackle the action recognition and action localization tasks. For Action Recognition, we include the following trending algorithms:
• Temporal Segment Network (TSN) [55] is a classic 2D-CNN-based solution in the field of video action recognition. This method mainly addresses long-term action recognition in videos by replacing dense sampling with sparse sampling of video frames, which not only captures the global information of the video but also removes redundancy and reduces computation.
• Temporal Shift Module (TSM) [25] is a popular video action recognition model with a shift operation, which can achieve the performance of a 3D CNN while maintaining 2D CNN complexity. Shifting along the channel dimension greatly improves the utilization of temporal information without adding any parameters or computational cost. TSM is accurate and efficient: it once ranked first on the Something-Something [17] leaderboard. Here, we adopt the industrial-level variation from the deep learning platform PaddlePaddle [34]. We improve the original design with additional knowledge distillation, where the model is pretrained on the Kinetics-400 [8] dataset.
• SlowFast [15] involves a Slow pathway operating at a low frame rate to capture spatial semantics, and a Fast pathway operating at a high frame rate to capture motion in the temporal domain. SlowFast is a powerful action recognition model, ranking second on the AVA v2.1 dataset [18].
• Video-Swin-Transformer (Vid) [31] is a video classification model based on Swin Transformer [32]. It utilizes Swin Transformer's multi-scale modeling and efficient local attention module. Vid shows competitive performance in video action recognition on various datasets, including the Something-Something and Kinetics-series datasets.
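The shift operation at the core of TSM can be illustrated in a few lines: a fraction of the channels is shifted one frame backward and another fraction one frame forward along the temporal axis, at zero parameter cost. The NumPy sketch below is a minimal illustration, not the PaddlePaddle implementation used in our experiments:

```python
import numpy as np

def temporal_shift(x: np.ndarray, fold_div: int = 8) -> np.ndarray:
    """Shift 1/fold_div of channels one step back and 1/fold_div one step forward.

    x has shape (T, C, H, W); shifted-in positions are zero-padded.
    """
    t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # frame t sees frame t+1
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # frame t sees frame t-1
    out[:, 2 * fold:] = x[:, 2 * fold:]              # remaining channels untouched
    return out

x = np.arange(2 * 8 * 1 * 1, dtype=float).reshape(2, 8, 1, 1)
y = temporal_shift(x)
assert y.shape == x.shape
```

Because the shift only re-indexes existing activations, the subsequent 2D convolutions can mix information across neighboring frames for free.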
For Action Localization, we investigate the following classical or trending algorithms:
• Boundary Sensitive Network (BSN) [27] is an effective proposal generation method which adopts a "local to global" fashion. BSN has achieved high recall and temporal precision on several challenging datasets, such as ActivityNet-1.3 [6] and THUMOS'14 [54].
• BSN++ [47] is a new framework which exploits a complementary boundary regressor and relation modeling for temporal proposal generation. It ranked first in the CVPR'19 ActivityNet challenge leaderboard on the temporal action localization task.
• Boundary-Matching Network (BMN) [26] introduces the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which leads to generating proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. Combined with existing feature extractors (e.g., TSM), BMN can achieve state-of-the-art temporal action detection performance.
• Self-Supervised Learning for Semi-Supervised Temporal Action Proposal (SSTAP) [58] is a self-supervised method to improve semi-supervised action proposal generation. SSTAP leverages two crucial branches, i.e., a temporal-aware semi-supervised branch and a relation-aware self-supervised branch, to further refine the proposal generation model.
• Temporal Context Aggregation Network (TCANet) [40] is the championship model of the CVPR 2020 HACS challenge leaderboard, which generates high-quality action proposals through "local and global" temporal context aggregation and complementary as well as progressive boundary refinement. Since it is a newly released model designed for several specific datasets (the original design shows unstable performance on P 2 A), we slightly modify the architecture to incorporate a BMN block ahead of TCANet. The revised model is denoted as TCANet+ in our experiments.
Experimental Setup
Datasets. As aforementioned, we establish two versions, 14-classes (14c) and 8-classes (8c), of P 2 A. The difference is that we combine all the serving strokes/actions into a single "Serve" category in the 8c version. We evaluate the baseline action recognition algorithms on both versions and compare the performances side by side. Since the performances of the current action localization algorithms on 14c are far from satisfactory, we only report the results on 8c for reference.
Metrics. We use the Top-1 and Top-5 accuracy as the metrics for evaluating the performance of baselines in the action recognition tasks. For the action localization task, we adopt the area under the Average Recall vs. Average Number of Proposals per Video (AR-AN) curve as the evaluation metric. A proposal is a true positive if its temporal intersection over union (tIoU) with a ground-truth segment is greater than or equal to a given threshold (e.g., tIoU > 0.5). AR is defined as the mean of all recall values using tIoU thresholds between 0.5 and 0.9 (inclusive) with a step size of 0.05. AN is defined as the total number of proposals divided by the number of videos in the testing subset. We consider 100 bins for AN, centered at values between 1 and 100 (inclusive) with a step size of 1, when computing the values on the AR-AN curve.
Data Augmentation. The category distribution analysis in Section 3.2 reveals that P 2 A is an imbalanced dataset, which might hinder the recognition task. To fully utilize the proposed P 2 A dataset, we design a simple yet effective data augmentation method to train the baseline models. Specifically, we introduce an up/down-sampling procedure to balance the sample size of each category. First, we calculate the per-category mean μ by dividing the total number of samples N by the defined number of categories (e.g., 8 or 14), i.e., μ = N/8 or μ = N/14. Then, we sample the actual number of action segments n for each category within the range μ ≤ n ≤ 2μ.
For those categories with fewer segments than the per-category mean μ, we up-sample by random duplication. For those with more samples than 2μ, we randomly down-sample from the original sample set while trying to make the sampled actions cover all the video clips (i.e., uniform sampling from each 6-minute chunk). We apply this data augmentation strategy to all the recognition tasks and observe significant performance gains. As shown in Figure 7, we report the ablation results of the action recognition baseline TSM on P 2 A with/without the data augmentation. In Figure 7(a), the confusion matrix illustrates that TSM falsely classifies most of the strokes/actions into the "Drive" category without class balancing, while this kind of misclassification is significantly alleviated after we apply the data augmentation. Note that, since the localization tasks normally require feature extractor networks as backbones (e.g., TSM is one of the backbones), localization performance can also benefit from the designed data augmentation. The baseline algorithms in the following results section also use this data augmentation.
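The up/down-sampling procedure above can be sketched as follows; the input layout (a mapping from category name to its list of segment ids) is a hypothetical convenience, and the sketch omits the uniform per-chunk coverage refinement:

```python
import random

def balance_categories(segments_by_cat: dict, seed: int = 0) -> dict:
    """Up/down-sample each category toward the per-category mean mu.

    Categories with fewer than mu segments are up-sampled by random
    duplication; categories with more than 2*mu are down-sampled to 2*mu.
    """
    rng = random.Random(seed)
    n_total = sum(len(v) for v in segments_by_cat.values())
    mu = n_total // len(segments_by_cat)
    balanced = {}
    for cat, segs in segments_by_cat.items():
        if len(segs) < mu:                 # up-sample by random duplication
            balanced[cat] = segs + rng.choices(segs, k=mu - len(segs))
        elif len(segs) > 2 * mu:           # down-sample without replacement
            balanced[cat] = rng.sample(segs, 2 * mu)
        else:
            balanced[cat] = list(segs)
    return balanced

data = {"drive": list(range(100)), "block": list(range(10)), "serve": list(range(20))}
out = balance_categories(data)
# mu = 130 // 3 = 43: "drive" is capped at 86, "block" and "serve" raised to 43
print({k: len(v) for k, v in out.items()})
```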
Results
This section first presents the performance results of the action recognition tasks. Table 2 provides a summary of these results in terms of top-1/top-5 accuracy on both the 8c and 14c datasets. As we can observe, the top-5 accuracy is unsurprisingly higher than the top-1 accuracy for all the methods. Comparing the 8c and 14c datasets in the same setting, the performances of the baselines on 8c are relatively better than on 14c, which confirms that the combination of serving classes is reasonable and actually eases the recognition task. In general, our refined TSM (PaddlePaddle version) method achieves the highest accuracy in all four settings except the top-5 accuracy on the 14c dataset. The popular transformer-based method Vid also shows competitive performance, in second place overall. Note that, since the same strokes/actions performed by the player facing the camera and by the player with their back towards the camera differ drastically (e.g., the blocked views and the opposite actions), we mainly focus on the facing players' actions/strokes as the recognition targets. For the back-view player's actions/strokes, the performances of all methods are poor. Alternatively, we report the performance on the mixed (i.e., face and back) setting; the results are far below expectation, where even the best performer, TSM, only achieves 59.46 top-1 accuracy on the 14c dataset and 68.39 top-5 accuracy on the 8c dataset. Although the top-5 accuracy of all the methods appears promising, considering the total number of categories in P 2 A (14 or 8 accordingly), the results are trivial to some degree.
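For reference, the top-1/top-5 accuracy reported in Table 2 follows the standard top-k definition; a minimal sketch (illustrative scores and labels only):

```python
import numpy as np

def topk_accuracy(logits: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(logits, axis=1)[:, -k:]      # indices of the k largest scores
    hits = (topk == labels[:, None]).any(axis=1)
    return float(hits.mean())

logits = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 1, 0])
print(topk_accuracy(logits, labels, k=1))  # ≈ 0.333
print(topk_accuracy(logits, labels, k=2))  # ≈ 0.667
```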
In conclusion, for the action recognition task with the mainstream baseline algorithms, P 2 A is a considerably challenging dataset, and there is vast room and potential for improvement and research involvement. For the action localization task, we summarize the performances in Figure 9. We selected several representative state-of-the-art action localization methods that are publicly available and retrained them on our P 2 A dataset. We report the area under the AR-AN curve for the baselines as the measurement of localization performance. For example, in Figure 9(a), we draw the AR-AN curves for tIoU values varying from 0.5 to 0.95 separately, plus one mean curve (solid blue curve) to represent the overall performance of BSN on P 2 A. Among all the methods, TCANet+ shows the highest overall AUC score, 47.79 in Figure 9(f), and also outperforms the other baselines at every tIoU level. To compare the localization results more directly, we visualize the located action segments from each method in Figure 8. The top figure is on a large scale to show the predicted segments compared with the ground-truth segments in a complete 6-minute video chunk, while the bottom one zooms in on a single turn with a series of consecutive actions/strokes by the player facing the camera. From these visual observations, we find that TCANet+ can almost reproduce the short action segments from the ground-truth labeling. However, the first period of serving action is barely located, and TCANet+ mispredicts it with a much shorter segment instead. Compared to the action recognition task, action localization seems more difficult on P 2 A, since the best AUC score still falls far short of the average achievement of these baselines on ActivityNet [6] (47.79 ≪ 68.25 on average from the top-35 solutions on the 2018 leaderboard). Summary.
From the above experimental results, we can conclude that P 2 A is still a very challenging dataset for both the action recognition and the action localization tasks. Compared to currently released sports datasets, the representative baselines hardly adapt to the proposed P 2 A dataset, and the performance is far lower than our expectation. Thus, the P 2 A dataset provides a good opportunity for researchers to further explore solutions in the domain of fast-moving and dense action detection. Note that the P 2 A dataset is not limited to the action recognition and localization tasks only; it also has the potential to be used for video segmentation and summarization. Actually, we are on the way to exploring more possibilities of P 2 A and will update the released version to include more use cases. Due to the limited space, we will release the implementation details for all the baselines in our GitHub repository together with the dataset link once the paper is published.
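As a reference for the AR-AN metric used above, the temporal IoU between a proposal and a ground-truth segment can be computed with a minimal sketch:

```python
def temporal_iou(p: tuple, g: tuple) -> float:
    """tIoU between proposal p=(start, end) and ground truth g=(start, end)."""
    inter = max(0.0, min(p[1], g[1]) - max(p[0], g[0]))
    union = (p[1] - p[0]) + (g[1] - g[0]) - inter
    return inter / union if union > 0 else 0.0

# A proposal counts as a true positive when tIoU >= threshold (0.5..0.9 here).
print(temporal_iou((1.0, 2.0), (1.5, 2.5)))  # ≈ 0.333
print(temporal_iou((1.0, 2.0), (1.0, 2.0)))  # 1.0
```

Average Recall is then the mean recall over tIoU thresholds from 0.5 to 0.9 in steps of 0.05, plotted against the average number of proposals per video.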
CONCLUSION
In this paper, we present a new large-scale dataset, P 2 A, which is, to the best of our knowledge, currently the largest publicly available table tennis action detection dataset. We have evaluated several state-of-the-art algorithms on the introduced dataset. Experimental results show that existing approaches are heavily challenged by the fast-moving and dense actions and the uneven action distribution in video chunks. In the future, we hope that P 2 A can become a new benchmark dataset for fast-moving and dense action detection, especially in the sports area. Through a comprehensive introduction and analysis, we intend to help follow-up users and researchers better understand the characteristics and statistics of P 2 A, so as to fully utilize the dataset in action recognition and action localization tasks. We believe the P 2 A dataset makes an early attempt at driving the sports video analysis industry to be more intelligent and effective.
A SUPPLEMENTARY
Due to the page limit in the main body of the manuscript, we supplement the complete implementation details and evaluation results with brief discussions in this supplementary section.
Remarks. In this paper, we mainly target 2D video action recognition algorithms for the following reasons:
• Traditional 3D/4D video action recognition algorithms such as I3D [8], R3D [19], S3D [61], and Non-local [57] are widely acknowledged to be inefficient in computation and optimization compared to 2D algorithms (i.e., they are barely affordable or scalable in some real-world applications), even though the 3D/4D algorithms generally have more robust performance [67] on video recognition tasks.
• Analyzing table tennis broadcasting video is not limited to academic purposes, which include research on dense, fast-moving, and noisy (e.g., multiple action sources from players) action recognition and localization; it also benefits table tennis sports affairs, for example, real-time competition analysis and action/event summarization on streamed broadcasting videos. Thus, effectiveness and efficiency are of equal importance for solutions in industrial practice, where 2D and lightweight video action recognition algorithms have received a great deal of industrial and research attention in recent years.
• We also include one 3D action recognition algorithm, SlowFast [15]. Compared to traditional 3D networks, SlowFast does not heavily rely on the concatenation of 3D CNNs; its fast pathway can be made very lightweight by reducing its channel capacity, largely improving overall efficiency. Furthermore, SlowFast has proved competitive with other full-size 3D recognition algorithms on several benchmark datasets [67].
Notice that the proposed P 2 A is not only available for the action recognition and action localization tasks. For the general action detection task, which incorporates recognition and localization in a unified objective, the P 2 A dataset is still challenging and remains to be explored in future studies. As shown in Figure 10, we conduct experiments for the action detection task on P 2 A using a pipeline composed of (1) TSM (PaddlePaddle variation), (2) BMN, and (3) Attention-LSTM. Specifically, we first extract the frame features using a TSM module (pre-trained on a recognition task on the P 2 A dataset). Once we have the well-represented features, we feed them into a BMN module to generate the action proposals. The last step is to train an Attention-LSTM module to obtain the action class within each proposal. We further provide ablation tests in Table 4. As we can observe, several optimization strategies improve the average mAP (mean average precision) of the proposed pipeline, including proposal extension and data augmentation. Since the proposal generated by the BMN module sometimes cannot cover the entire action, we extend the proposal duration by 1 second both before and after. However, we find that not all actions benefit from this kind of extension, so we additionally design a partial proposal extension strategy for the target actions. Even after applying the above-mentioned strategies, we hardly obtain a satisfactory average mAP (IoU from 0.5 to 0.9 with a step size of 0.05), where the value is only 49.74 for the best model after fine-tuning. This unsatisfactory result is understandable, since the performance of action detection on P 2 A depends on the performance of both the action recognition and the action localization tasks, which have been shown to be two challenging tasks in this paper. Additional recognition algorithms.
Beyond the most representative action recognition algorithms introduced in the main body, we also conduct experiments using the following algorithms:
• Vanilla-TSN/TSM: the original versions of the TSN [55] and TSM [25] models. Note that the revised TSM (denoted as TSM) includes (1) better backbones, the ResNet50-D series, and (2) two-stage knowledge distillation: TSM adopts a semi-supervised distillation (SSD) strategy in the first stage and leverages CSN [50] with a ResNet152 backbone as the teacher network in the second stage.
• Attention-LSTM [33]: recurrent neural networks (RNNs) are often used for processing sequence data, as they can model the sequential information of consecutive frames, and they are commonly used in video classification. As one such RNN, Attention-LSTM uses a bidirectional long short-term memory network with attention layers to encode all the frame features of a video in sequence.
• MoViNet [22]: MoViNet is a mobile video network developed by Google Research. It uses causal convolution operators with a stream buffer and temporal ensembles to improve video classification accuracy. It is a lightweight and efficient video model designed for online video stream inference.
• TimeSformer [5]: TimeSformer is a video classification model based on the vision transformer, which is equipped with a global receptive field and strong time-series modeling without convolutions. It has achieved SOTA accuracy on the Kinetics-400 dataset, approximating the performance of classic 3D-CNN-based video classification models, while requiring a considerably shorter training time.
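The TSM models above are built around a temporal shift operation that exchanges a fraction of channels between neighboring frames, giving a 2D network temporal context at zero extra FLOPs. A minimal pure-Python sketch of that operator (an illustration of the published idea, not the authors' implementation) is:

```python
def temporal_shift(features, shift_div=8):
    """Shift channel slices along time over a T x C list of frame features.

    The first C//shift_div channels read from the next frame, the second
    slice reads from the previous frame, and the remaining channels stay in
    place; out-of-range positions are zero-padded.
    """
    T, C = len(features), len(features[0])
    fold = C // shift_div
    out = [[0.0] * C for _ in range(T)]
    for t in range(T):
        for c in range(C):
            if c < fold:            # this slice pulls from the next frame
                src = t + 1
            elif c < 2 * fold:      # this slice pulls from the previous frame
                src = t - 1
            else:                   # the rest is unshifted
                src = t
            if 0 <= src < T:
                out[t][c] = features[src][c]
    return out
```

In the real models this shift is inserted inside the residual blocks of a 2D CNN; the sketch only shows the data movement.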
As shown in Table 5, we present the full performance results of the implemented video action recognition algorithms in different settings. The TSM (revised) algorithm outperforms the other algorithms in most cases. We also observe that the Face&Back recognition task is even more challenging than the Face-only task, which is reasonable since the actions/strokes of the back-view player are commonly blocked partially or entirely by the player's body, and the mixed actions/strokes from both players in overlapping frames might affect the learning process of the baseline models. The overall performance of the baseline algorithms demonstrates that the action recognition task on P 2 A leaves considerable room for exploration and improvement. Implementations. We list the key hyper-parameters of all the implemented action recognition and action localization algorithms in Table 3. Note that TSN and TSM stand for the revised versions rather than the versions released with the original papers. For the other baselines, we follow the original design of the critical components of each algorithm and adjust specific hyper-parameters to fit them to the P 2 A dataset. | 2022-07-27T06:47:17.422Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "ccc84f3c3e1896e9f763cc5fefdecb30457d71de",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ccc84f3c3e1896e9f763cc5fefdecb30457d71de",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252846357 | pes2o/s2orc | v3-fos-license | Task Compass: Scaling Multi-task Pre-training with Task Prefix
Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models. Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks. To tackle this challenge, we propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks. We conduct extensive experiments on 40 datasets, which show that our model can not only serve as a strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships. The task relationships reflected by the prefixes align with transfer learning performance between tasks. They also suggest directions for data augmentation with complementary tasks, which help our model achieve human-parity results on commonsense reasoning leaderboards. Code is available at https://github.com/cooelf/CompassMTL
Introduction
Recent years have witnessed a growing interest in leveraging a unified pre-trained language model (PrLM) to solve a wide range of natural language processing tasks (Tay et al., 2022; Chowdhery et al., 2022; Xie et al., 2022; Zhang et al., 2022). The pre-training recipe of a PrLM is shifting from self-supervised learning (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Lan et al., 2020; Clark et al., 2020) to multi-task learning (MTL) with a mixture of standard self-supervised tasks and various supervised tasks, which takes advantage of learning from both a large-scale unlabeled corpus and high-quality human-labeled datasets (Raffel et al., 2019; Aribandi et al., 2021). Benefiting from supervision from related tasks, MTL approaches reduce the cost of curating deep learning models for an individual task and provide a shared representation that is generally applicable to a range of tasks (Wu et al., 2020b).

Figure 1: Input-output view. We append a task prefix for each data sequence to capture common patterns from the dataset and require the model to predict some randomly masked prefixes to capture task differences.
In the research line of multi-task learning for PrLMs, a typical solution is to cast all tasks into a text-to-text format and utilize an encoder-decoder PrLM such as T5 to predict the target sequences (Raffel et al., 2019; Aribandi et al., 2021). Despite the extensive efforts on leveraging supervised tasks to strengthen PrLMs, the latest trend is extreme scaling of task numbers, with little attention paid to the relationships between tasks (Sanh et al., 2021; Wei et al., 2021). Aribandi et al. (2021) investigated co-training transfer effects among task families and empirically found that tasks in different families may have side effects on each other; e.g., summarization tasks generally seem to hurt performance on other task families such as dialogue systems (Mehri et al., 2020), natural language inference (Bowman et al., 2015), and commonsense reasoning (Lourie et al., 2021).

When the task number scales up, the training of PrLMs becomes more vulnerable to negative transfer due to the severe inconsistency of domain and data distribution between tasks (Wu et al., 2020b; Padmakumar et al., 2022). As one of the key concepts underlying MTL, task relationships potentially provide an effective basis for employing PrLMs in a more effective and interpretable way.
To handle the issue of negative transfer during multi-task learning, early studies have taken task relationships into account by employing a dual-process model architecture composed of a shared encoder and task-specific layers. The two parts are supposed to integrate the common features of all the learning tasks and explore the task relationship in a predefined manner (Zheng et al., 2019; Liu et al., 2019a; Bai et al., 2020; Ma et al., 2021), respectively. However, these methods require additional modifications to the model architecture and increase model complexity and computation cost. Therefore, they are suboptimal for application to PrLMs in terms of generality and computational bottlenecks.

All the considerations above lay down our goal to investigate simple yet effective ways to measure task relationships without additional cost while keeping the generality of PrLMs. In this work, we propose a prefix-guided multi-task learning framework (CompassMTL) to explore the mutual effects between tasks (Figure 1) and improve model performance with complementary tasks. Targeting natural language understanding (NLU) tasks, we employ a discriminative PrLM as the backbone model and train the model on 40 tasks. Experimental results show that our model achieves human-parity performance on commonsense reasoning tasks. We further probe into the task relationships entailed in the task prefix representations, finding that the measured relationships highly correlate with task-to-task transfer performance and are also of reference value for optimizing the PrLM on a target task with its complementary tasks during MTL, i.e., fewer tasks with better performance.
In summary, our contributions are threefold: 1) A unified discriminative multi-task PrLM for NLU tasks will be released as a strong counterpart to the dominant T5-based encoder-decoder PrLMs trained with MTL.
2) A probing tool that uses task prefixes to explore task relationships in large-scale MTL. We observe that the task relationships reflected by the prefixes manifest a correlation with transfer learning performance, and they help our model achieve better results with complementary tasks.
3) State-of-the-art results on a variety of NLU tasks, especially human-parity benchmark performance on commonsense reasoning leaderboards, i.e., HellaSwag and αNLI.
2 Background and Related Work

2.1 Self-supervised Pre-training

PrLMs are commonly pre-trained on large-scale corpora and then fine-tuned on individual tasks. One of the most widely used pre-training tasks is masked language modeling (MLM), which first masks out some tokens from the input sentences and then trains the model to predict them from the remaining tokens. Derivatives of MLM include permuted language modeling in XLNet (Yang et al., 2019) and sequence-to-sequence MLM in MASS (Song et al., 2019) and T5 (Raffel et al., 2019). Beyond general-purpose pre-training, domain-adaptive and task-adaptive pre-training have attracted attention in recent studies.
1) Domain-adaptive Pre-training. To incorporate specific in-domain knowledge, domain-aware pre-training is designed, which directly post-trains the original PrLMs on a domain-specific corpus. Popular models have been proposed in the dialogue domain (Whang et al., 2020; Wu et al., 2020a), as well as in the medical and science domains (Lee et al., 2020; Beltagy et al., 2019; Huang et al., 2019a; Yu et al., 2022).
2) Task-adaptive Pre-training. The goal of task-adaptive pre-training is to capture task-specific skills by devising the pre-training tasks. Popular application scenarios include logical reasoning and dialogue-related tasks (Kumar et al., 2020; Gu et al., 2020; Zhang and Zhao, 2021; Li et al., 2021). For example, Whang et al. (2021) proposed various utterance manipulation strategies, including utterance insertion, deletion, and retrieval, to maintain dialog coherence.
Multi-task Learning for PrLMs
Our concerned MTL in the field of PrLMs is partially related to the studies of task-adaptive pre-training discussed above. The major difference is that the PrLMs in MTL are fed with human-annotated datasets instead of automatically constructed ones for self-supervised tasks. Figure 2 overviews the paradigms of MTL PrLMs. Existing methods in this research line mostly vary in model architectures and training stages. For example, MT-DNN (Liu et al., 2019a) applied multi-task learning to train a shared model on all the target datasets in the fine-tuning stage, with several task-aware output modules to adapt the shared representations to each task. Recent studies, such as ExT5 (Aribandi et al., 2021), T0 (Sanh et al., 2021), and FLAN (Wei et al., 2021), commonly apply an encoder-decoder architecture, convert a variety of tasks into the same text-to-text format, and train those tasks jointly (Figure 2-a).
We argue that they are not the optimal solution considering the model complexity and the gap between original and transformed task formats, especially for natural language understanding tasks, which are discriminative in nature, e.g., classification, multiple-choice, etc. Actually, there are studies (McCann et al., 2018; Keskar et al., 2019; Li et al., 2020; Khashabi et al., 2020) that transform traditional tasks into other formats like reading comprehension or question answering and achieve better results than prior methods. These studies motivate us to explore superior model backbones and data formats, especially for application to NLU tasks.
Modeling Task Relationships in MTL
Modeling task relationships is a classic topic in deep learning studies. Bingel and Søgaard (2017) studied what task relations yield gains in traditional natural language processing tasks and investigated when and why MTL works in sequence labeling tasks such as chunking, sentence compression, POS tagging, keyphrase detection, etc. Wu et al. (2020b) found that task data alignment can significantly affect the performance of MTL and proposed an architecture with a shared module for all tasks and a separate output module for each task. Since these methods require additional modifications of model architecture, they are suboptimal for employment in PrLMs, considering computational bottlenecks and generality when tasks scale. In the era of pre-trained models, Geva et al. (2021) analyzed the behavior transfer in PrLMs between related jointly-trained tasks such as QA and summarization, thus providing evidence for the extrapolation of skills as a consequence of multi-task training. ExT5 (Aribandi et al., 2021) evaluated the transfer performance among task families in a multi-task co-training setup and observed that negative transfer is common, especially when training across task families. Although there are recent studies that insert prompts to describe the task requirements in the data sequences (Liu et al., 2021; Su et al., 2022; Qin et al., 2021; Vu et al., 2022), it is still not clear whether the prompts help against negative transfer or whether the prompts necessarily capture task relationships. In this work, we find that using task prefixes along with MLM-based prefix prediction effectively indicates task relationships and helps MTL achieve better performance with fewer datasets.
Task Format
According to prior studies (McCann et al., 2018; Keskar et al., 2019; Khashabi et al., 2020), the benchmark results on a task can be affected dramatically by training a model on different formats of the same dataset. In contrast to converting all tasks in a text-to-text manner, we choose to model our tasks in a multiple-choice-like format to minimize the format transformation for NLU tasks. Our transformation aims to ensure that each example in a task has a specific number of k candidate options during the multi-task training stage. The original pair-wise input texts are regarded as context and question in the view of the multiple-choice problem. If only one text is given, the question is kept empty. For the outliers, the data are processed as follows (examples are provided in Appendix A.1):
1) If the number of candidate options > k, the redundant options are randomly discarded.
2) If the number of candidate options < k, "N/A" placeholder options are added.
3) If the ground truth is a list, randomly select a correct option from the gold list and randomly sample k − 1 negative options from the held-out set, excluding the remaining items in the gold list.
4) If the ground truth is a list and there is an empty choice, construct the truth option manually, for example, "there is no violation"; the negative examples are constructed in the same way as in 3).
As a result, each training example is formed as a sequence like {[Prefix]: context, question, option}, where [Prefix] indicates the task name in natural language, such as [hellaswag], prepended to each data example.
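The option-normalization rules above can be sketched in a few lines (an illustrative sketch; the function and argument names are ours, not from the CompassMTL code):

```python
import random

def to_multiple_choice(options, gold, k, heldout, rng=random):
    """Normalize an example to exactly k options and return (options, answer_index).

    `heldout` is a pool of candidate negative options used when the ground
    truth is a list (rule 3); "N/A" padding implements rule 2.
    """
    if isinstance(gold, list):                 # rule 3: ground truth is a list
        answer = rng.choice(gold)
        negatives = [o for o in heldout if o not in gold]
        options = [answer] + rng.sample(negatives, k - 1)
    else:
        answer = gold
        options = list(options)
        if len(options) > k:                   # rule 1: drop redundant options
            others = [o for o in options if o != answer]
            rng.shuffle(others)
            options = [answer] + others[: k - 1]
        while len(options) < k:                # rule 2: pad with "N/A"
            options.append("N/A")
    rng.shuffle(options)
    return options, options.index(answer)
```

The returned sequence would then be prefixed with the task name to form {[Prefix]: context, question, option} examples.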
CompassMTL
Our model is encoder-only, which is based on the DeBERTa architecture (He et al., 2021).The model is trained by using both the supervised task objective and the standard self-supervised denoising objective as described below.
Suppose that we have a dataset D = {(y_i, c_i, q_i, r)}_{i=1}^{N}, where c_i represents the context, q_i represents the question, r denotes a set of answer options r = {r_1, . . ., r_k}, and y_i is the label. N is the number of training examples. Each data example is formed as a sequence x_i that concatenates the task prefix, context, question, and options. The goal is to learn a discriminator g(·, ·) from D. For the supervised task, the loss function is the cross-entropy over the k options:
L_mtl = −∑_{i=1}^{N} log [ exp(g(c_i, q_i • r_{y_i})) / ∑_{j=1}^{k} exp(g(c_i, q_i • r_j)) ].
At the inference phase, given any new context c_i, question q_i, and options r, we use the discriminator to calculate g(c_i, q_i • r_j) as the matching score, where • denotes concatenation. The option with the highest score is chosen as the answer for the i-th example.
Let x̃_i denote the masked sequence where a certain proportion of tokens in x_i are randomly replaced with a special [MASK] symbol. Using x̃_i as the input fed to the model in parallel with x_i, the self-supervised denoising objective is computed as in MLM:
L_mlm = −∑_{i=1}^{N} ∑_{j∈M} log p(t_{i,j} | x̃_i),
where t_{i,j} is the j-th token in x_i and M denotes the index set of masked tokens for which the loss is computed. To encourage the model to learn from both supervised and self-supervised signals, we combine L_mtl and L_mlm during training: L = L_mtl + λL_mlm, where λ is a hyper-parameter to balance the weight of the training objectives.
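Constructing the masked input x̃_i and its index set M can be sketched as follows (illustrative names and details; the paper uses a masking ratio of 0.25):

```python
import random

def mask_tokens(tokens, mask_ratio=0.25, mask_symbol="[MASK]", rng=random):
    """Replace a mask_ratio fraction of tokens with [MASK].

    Returns the masked sequence and the sorted index set M of masked
    positions, over which the MLM loss would be computed.
    """
    n_mask = max(1, round(len(tokens) * mask_ratio))
    masked_idx = sorted(rng.sample(range(len(tokens)), n_mask))
    masked = list(tokens)
    for i in masked_idx:
        masked[i] = mask_symbol
    return masked, masked_idx
```

In CompassMTL the task prefix token itself is also eligible for masking, which is what turns MLM into a prefix-prediction signal.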
Compared with traditional MTL methods, CompassMTL is data-centric, without any modification of the model architecture (Figure 2-b). It can be regarded as an efficient implementation of the traditional MTL method composed of a shared representation module and multiple task-aware modules. Since the data from the same dataset share the same task prefix, the prefix is supposed to reflect the common patterns of the dataset, which works on a similar operational principle to the shared representation module. During training with our self-supervised objective, task prefixes are randomly masked with a specific probability. The model is required to distinguish the task prefixes and predict the right prefix according to the input data. Therefore, the task differences are also necessarily captured.
Task Relationship Exploration
Regarding the task prefixes as a compass to navigate the task relationships, it is possible to use our framework to analyze the relevance between tasks. For a target task, we can directly rank the top-related tasks according to the correlation scores and use those complementary tasks for MTL before fine-tuning the target task (Figure 2-c).
Implementations
Our model is implemented using PyTorch and based on the Transformers library (Wolf et al., 2019). To save computation, we initialize our model with the released checkpoints of DeBERTa-V3-Large, and the hyper-parameter setting generally follows DeBERTa (He et al., 2021). Experiments are run on 8x32GB Tesla A100 GPUs. The maximum input sequence length is 512. Similar to Lourie et al. (2021), the implementation of CompassMTL includes two procedures. We first conduct multi-task pre-training on all the datasets and then continue to train on each target dataset alone to verify the performance. For multi-task pre-training, we use a peak learning rate of 6e-6 with a warm-up rate of 0.1. We run up to 6 epochs using a batch size of 128. The masking ratio of MLM is 0.25, and λ is set to 0.1. To avoid large-scale datasets dominating the pre-training, the training data is randomly sampled with a limit of 10k on the maximum dataset size, following Raffel et al. (2019). For fine-tuning experiments, the initial learning rate is selected in {3e-6, 6e-6, 8e-5} with a warm-up rate of 0.1. The batch size is selected in {16, 32}. The maximum number of epochs is chosen from {6, 10}. More fine-tuning details are available in Appendix A.2.

(Table note: ExDeBERTa is our imitation of ExT5-style (Aribandi et al., 2021) MTL training using the DeBERTa backbone, trained on 40 datasets with a multi-task objective of self-supervised denoising and a supervised task objective, after which it is transferred to each individual task. "w/ Tailor" denotes multi-task training with related datasets (14-subset) according to our discovery in Section 5.3.)
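The per-dataset size cap used to keep large datasets from dominating the mixture can be sketched as follows (illustrative names; not the authors' released code):

```python
import random

def cap_datasets(datasets, limit=10_000, rng=random):
    """Build a multi-task mixture where each dataset contributes at most
    `limit` randomly sampled examples, then shuffle the pooled mixture."""
    mixture = []
    for examples in datasets.values():
        if len(examples) > limit:
            examples = rng.sample(examples, limit)
        mixture.extend(examples)
    rng.shuffle(mixture)
    return mixture
```

This mirrors the examples-proportional mixing cap of Raffel et al. (2019) at a dataset granularity.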
Main Results
Our main results are reported on the Rainbow and LexGLUE benchmark datasets for comparison with public methods. As the statistics in Tables 1-2 show, CompassMTL models outperform the related public models in general. Specifically, we observe that our encoder-only models yield better performance than the T5-based encoder-decoder models under similar model sizes.
Relationship Probing
Figure 4 illustrates the heatmap of task relationships probed by prefix embeddings. We see that the datasets inside the same task family (e.g., GLUE and Rainbow) correlate highly with each other. The LexGLUE tasks are less related to other tasks because their texts are mainly legal descriptions. In addition, the correlation scores accord with common practice in data augmentation. For example, the NLI datasets (MNLI, QNLI, RTE) are closely related, and it is helpful to initialize parameters from an MNLI model when fine-tuning RTE (Liu et al., 2019b; Qu et al., 2020). We are interested in whether the probed relationship scores coordinate with the model performance transferred between tasks. We first obtain transfer accuracy between tasks in a dual-task training setup (Aribandi et al., 2021). Assume that we have 13 source tasks from the GLUE and Rainbow tasks and 5 target tasks (αNLI, HellaSwag, MRPC, PIQA, QNLI, and RTE). We first train individual models using the mixture of training sets from each pair of source and target tasks and then evaluate each model on the validation set of the target dataset. As a result, we have 5 × 13 transfer results. For each target dataset, we calculate the Pearson correlation between relationship scores and transfer accuracy among the source datasets. In Table 4, we find that the relationship scores are positively bound up with the transfer performance. The results indicate the potential to find related tasks via the relationship scores; in other words, the relationship scores essentially reflect task relationships. Task relationships may also be reflected by shallow token distributions, such as vocabulary overlap or sentence length. To investigate whether our relationship probing can be replaced by comparing token distributions, we further analyze the correlation between the similarity of token distributions and dual-task transfer accuracy. For sentence length, we first calculate the absolute values of the average length difference between source and target datasets and then convert them to negative values (intuitively, the smaller the difference in length, the closer the relationship). The vocabulary overlap of the source and target datasets is also computed for comparison. The similarity between datasets reflects weak correlations with the transfer accuracy (2/5 and 3/5 datasets, respectively, in Table 4). These results are less consistent than our probing method, which indicates that our method mines more complex patterns toward task relationships.
Complementary Transfer
To inspect whether using more datasets always leads to better performance and whether using the most related datasets can lead to competitive results, we conduct a complementary transfer analysis by selecting a group of datasets to train an MTL model and fine-tuning the model on target datasets. Four choices of dataset mixture are compared: 1) 40-fullset: the same as our basic setting of CompassMTL in this work; 2) Top-5: the top-5 ranked datasets based on our probed relationship scores; 3) Family: the datasets belonging to the same family as the target dataset, i.e., 6 datasets for Rainbow tasks and 7 datasets for GLUE tasks; 4) 14-subset: the mixture of the Rainbow and GLUE datasets. Table 5 presents the comparison results. We observe that the top-5 ranked variant yields comparable, even better, results than the others, which indicates that training with more datasets may not always bring benefits. The results also indicate that small-scale datasets (e.g., MRPC and RTE), which have relatively high average correlation scores with the other datasets, are more likely to benefit from complementary transfer. As the number of tasks scales up (family → 14-subset), performance may improve as more related tasks are involved in training.
Human-parity on Commonsense Reasoning Leaderboards

On the commonsense reasoning leaderboards (HellaSwag and αNLI), our models establish new state-of-the-art results and reach human-parity performance.
Beyond The Unified Format
To verify whether our model can be employed for tasks that cannot be transformed into our unified format, we evaluate the effectiveness of CompassMTL using the typical reading comprehension datasets SQuAD v1.1/2.0 (Rajpurkar et al., 2016, 2018) and the named entity recognition (NER) dataset CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003), which represent the extractive question answering and sequence labeling task formats, respectively. We first replicate the baselines for fine-tuning the QA and NER tasks using the Transformers toolkit. For comparison, we initialize the baseline parameters with our model weights to see whether CompassMTL outperforms the baselines. Results in Table 6 show that our model is generally effective across formats. The results also indicate that CompassMTL can serve as a strong off-the-shelf representation encoder applicable to new tasks without needing to be pre-trained again.
Implementation Using The T5 Backbone
Although our method is implemented with an encoder-only backbone to compete on NLU tasks, it is supposed to be generally applicable to other kinds of PrLMs, such as the encoder-decoder T5. To verify this, we employ the pre-trained T5-base model (Raffel et al., 2019) as the backbone. We use the Rainbow datasets for MTL and convert the data into the text-to-text format following the standard processing for T5 training, with task prefixes inserted before each data sequence. The baselines are the single-task T5 trained on each individual task and UNICORN (Lourie et al., 2021) trained on the Rainbow datasets. Results in Table 8 verify that our method is generally effective.
Conclusions
This work presents a task prefix guided multi-task method that makes use of task prefixes to explore the mutual effects between tasks and improve model performance with complementary tasks. Our released model can not only serve as a strong foundation backbone for a wide range of NLU tasks but also be used as a probing tool for analyzing task relationships. Our model shows generalizable advances over tasks in diverse formats and establishes human-parity results on commonsense reasoning tasks. Based on our pre-trained model, we find that the prefixes necessarily reflect task relationships, which correlate with transfer learning performance between tasks and suggest directions for data augmentation with complementary tasks. In summary, our work has the following prospects for future studies: 1) Collaborative multi-task learning of PrLMs. The recipe of using task prefixes in conjunction with prefix prediction in MLM training has proven effective for large-scale MTL pre-training.
2) Suggestive choice for data augmentation.
The task relationships probed by the prefix embeddings have proven informative in finding complementary tasks. Using complementary tasks helps obtain better performance for a target task, especially for small-scale task datasets.
3) Guidance for skill-aware model evaluation.
The discovery of task relationships may help identify redundant datasets that assess similar patterns in models. Recently, there has been a trend to evaluate the comprehensive skills of deep learning models using a large number of datasets (Srivastava et al., 2022); the selection of distinctive datasets can be guided by our relationship-discovery criteria to avoid evaluation redundancy and save computation.
Limitations. We acknowledge that the major limitation of this work is that our model may not readily apply to new tasks. It is based on the common assumption of MTL that the set of tasks is known at training time. Adaptation to new tasks could be future work.
After the model is pre-trained with MTL, we fetch the prefix embeddings from the model embedding layer and calculate the Pearson correlation between each task pair with min-max normalization. Assuming that we have n tasks, this process results in n × n correlation scores that indicate the task relationships.
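The probing step above reduces to pairwise Pearson correlations over prefix embeddings followed by a min-max rescaling, which can be sketched in pure Python (illustrative names; not the released code):

```python
import math

def pearson(u, v):
    """Pearson correlation between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def task_relationship_matrix(prefix_embeddings):
    """Map {task: embedding} to min-max-normalized pairwise correlations,
    keyed by (task_a, task_b); self-pairs end up at the maximum score."""
    tasks = sorted(prefix_embeddings)
    raw = {(a, b): pearson(prefix_embeddings[a], prefix_embeddings[b])
           for a in tasks for b in tasks}
    lo, hi = min(raw.values()), max(raw.values())
    return {k: (s - lo) / (hi - lo) if hi > lo else 0.0
            for k, s in raw.items()}
```

For a target task, ranking the other tasks by these scores yields the top-related complementary tasks used in the transfer experiments.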
Table 2: Results on the LexGLUE test sets. The baseline results except ours in the last column are from Chalkidis et al. (2021). Since the LexGLUE tasks except CaseHold are multi-label classification problems, the ExDeBERTa model is not directly applicable to those tasks without extra task-specific fine-tuning; thus, those results are not reported. "w/ Tailor" denotes multi-task training with the seven datasets in the same LexGLUE family.
Table 3 presents an ablation of the training objectives, removing L_mtl and L_mlm, respectively. The results suggest that both supervised and self-supervised tasks contribute to the overall model performance, and the supervised task is more beneficial than the self-supervised task in our study. Further, to inspect the role of the task prefixes, we also ablate the prefixes (Table 3).
Table 3: Ablation study of the training objectives and task prefixes. We calculate the average accuracy scores on the development sets of all 40 datasets.
Table 4: Pearson correlation between each relationship measure and the transfer accuracy, per dataset (RTE, MRPC, QNLI, HellaSwag, αNLI) and on average.
Table 8: Results on the Rainbow validation sets using T5-base as the backbone model. | 2022-10-13T01:15:58.107Z | 2022-10-12T00:00:00.000 | {
"year": 2022,
"sha1": "0979695b5d74016e97ab8f306f632114e98bd6d9",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2022.findings-emnlp.416.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4642591518d9b803c4f560ee9dc556d2383e2220",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
246701597 | pes2o/s2orc | v3-fos-license | Challenges of imaging interpretation to predict oligodendroglioma grade: a report from the Neuro-Oncology Branch
Background: To illustrate challenges of imaging interpretation in patients with oligodendroglioma seen at a referral center and evaluate interrater reliability. Methods: Two neuro-oncologists reviewed diagnostic preradiation MRIs of oligodendroglioma patients; interrater reliability was calculated with the kappa coefficient (k). A neuroradiologist measured presurgical apparent diffusion coefficient (ADC), if available. Results: Extensive enhancement was noted in four of 58 patients, k = 0.7; necrosis in seven of 58, k = 0.61; calcification in seven of 17, k = 1.0; diffusion restriction in two of 39 patients, k = 1.0 (all only in grade 3). ADC values with receiver operator characteristic analysis for area under the curve were 0.473, not significantly different from the null hypothesis (p = 0.14). Conclusions: Extensive enhancement, necrosis and calcification correlated with grade 3 oligodendroglioma in our sample. However, interrater variability is an important limitation when assessing radiographic features, supporting the need for standardization of imaging protocols and their interpretation.
Oligodendrogliomas are diffusely infiltrative primary CNS tumors representing less than 10% of all gliomas [1]. A multidimensional approach is required to address challenges in oligodendroglioma and other primary rare CNS tumors [2]. Addressing these challenges is the aim of the NCI-CONNECT program at the National Cancer Institute (NCI) (https://ccr.cancer.gov/neuro-oncology-branch/connect). The WHO defines oligodendrogliomas molecularly by the presence of IDH mutation and codeletion of the short arm of chromosome 1 (1p) and the long arm of chromosome 19 (19q) [3,4]. Despite ongoing efforts to identify molecular alterations determining prognosis within molecularly defined oligodendrogliomas, little is known about optimal biomarkers for stratifying risk [5,6], and oligodendrogliomas are still classified on the basis of classical histological features into two grades, WHO grades 2 and 3 [7,8]. Tumor grading therefore has prognostic implications. Additionally, histological grading still drives therapeutic decisions in grade 2 gliomas, particularly oligodendrogliomas. For example, a 'wait-and-see' approach is often considered in patients with grade 2 gliomas in circumstances such as tumors located in unresectable locations or after gross total resection of a WHO grade 2 oligodendroglioma because delaying radiation therapy has not been shown to carry an adverse impact on overall survival [9,10]. However, such an approach is not common practice for grade 3 oligodendrogliomas.
After clinical presentation, imaging studies serve as the initial tool in the diagnosis of all brain tumors, including oligodendrogliomas. Tumor heterogeneity and location in the eloquent brain may hamper accurate diagnosis after a biopsy or incomplete resection. Previous studies have looked retrospectively into imaging features to predict tumor grade in oligodendrogliomas, using calcification, contrast enhancement [11], perfusion, and restricted diffusion [12]. However, there is no definitive set of imaging features to predict oligodendroglioma grading [13], likely in part because of the challenges in correctly identifying these features and variability among readers.
In this report, we retrospectively reviewed a cohort of patients with oligodendroglioma and attempted to validate imaging findings evaluated in previous studies as predictors of grading and evaluated inter-rater reliability.
Methods
All 58 evaluable patients had a centrally confirmed histopathological diagnosis of oligodendroglioma (molecularly confirmed: 54; not otherwise specified [NOS]: four). Appropriate imaging was available for review, performed preoperatively and/or before starting radiation therapy. The minimal MRI sequences required were T1, T2, T2-FLAIR and T1 post-gadolinium contrast injection. The presence of calcification was only evaluated if CT scans were available (n = 17). Two neuro-oncologists (M Penas-Prado and MR Gilbert) with extensive clinical experience in the field of neurology and neuro-oncology, both blinded to histopathological grade and the radiology report, independently reviewed MRIs from all 58 patients. Reader interpretations for the following conventional MRI features were recorded: contrast enhancement (not present/partial/extensive), necrosis (yes/no), calcification (yes/no) and restricted diffusion (yes/no). 'Partial' contrast enhancement was defined as less than 50% of the 2D lesion area on axial imaging containing the largest area of contrast enhancement, and 'extensive' contrast enhancement was defined as ≥50% of the 2D lesion area showing contrast enhancement. T1 and T1 postcontrast images were evaluated when the presence of acute hemorrhage was suspected as a confounding factor for enhancement. 'Necrosis' was defined as the presence of a ring-enhancing lesion on T1 postcontrast along with central hypointensity. The interrater reliability between the two neuro-oncologists was measured with Cohen's kappa statistic. A neuroradiologist (R Shah), blinded to grade, measured the presurgical apparent diffusion coefficient (ADC) when available (n = 21). Images were reviewed using Carestream Vue PACS Version 12.2.2.1025.
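The interrater statistic used above can be sketched as follows. This is a minimal, self-contained illustration of Cohen's kappa; the yes/no reader calls are hypothetical examples, not the study data.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal category frequencies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical necrosis calls from two readers (illustrative only):
reader1 = ["yes", "yes", "no", "no", "no", "no", "no", "no", "no", "no"]
reader2 = ["yes", "no",  "no", "no", "no", "no", "no", "no", "no", "no"]
print(round(cohens_kappa(reader1, reader2), 2))  # → 0.62
```

Note that nine of ten raw agreements alone would overstate reliability; kappa discounts the agreement expected by chance from the skewed "no" marginals.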
Patients
Thirty-one patients with oligodendroglioma WHO grade 2 and 31 patients with anaplastic oligodendroglioma WHO grade 3 were identified in the Neuro-Oncology Branch Natural History Study. As shown in Table 1, almost half of the patients presented with tumor in the frontal lobe. Fifty-four of the 62 patients had histopathological findings suggestive of oligodendroglioma along with IDH mutation, detected either via immunohistochemistry staining or gene mutation testing, and 1p19q codeletion by FISH testing performed on paraffin-embedded tissue using a dual-color probe set (positive deletion of the short arm of chromosome 1 including 1p36 and positive deletion of the long arm of chromosome 19 including 19q13). Four patients had a centrally confirmed morphological diagnosis of oligodendroglioma but incomplete molecular testing: three did not have IDH testing, and one had no 1p19q status available; these patients were also included in the analysis. We included these cases to avoid selection bias based on the date of surgery (before the required molecular diagnosis of oligodendroglioma). We excluded four patients (WHO grade 3) due to the lack of imaging studies before receiving radiation, to avoid misinterpretation of radiation-related imaging changes [14]. The total number of patients considered evaluable for analysis was 58. As detailed in Figure 1, among these 58 evaluable patients, 31 had grade 2 and 27 had grade 3 tumors; gross total resection was achieved in nine patients, subtotal resection in 33 patients and biopsy in 16.
Contrast enhancement, necrosis & calcification
Two neuro-oncologists reviewed images from 58 evaluable patients. The first neuro-oncologist reported partial contrast enhancement (less than 50% of lesion area) or extensive contrast enhancement (more than 50% of lesion area) in 18 patients (six with WHO grade 2, 33%; 12 with WHO grade 3, 66%). The second neuro-oncologist reported partial or extensive contrast enhancement in 35 patients. Cohen's kappa statistic showed the following interrater agreement between the two neuro-oncologists in the preceding measures: agreement on contrast enhancement was k = 0.37 on grade 2 and k = 0.5 on grade 3; agreement on extensive enhancement was k = 0.7. The agreement on necrosis interpretation was k = 0.61. There was complete agreement on the interpretation of calcification, with k = 1.0, as shown in Table 2.
Table 2. Interrater agreement: Cohen's kappa interrater agreement was used to assess agreement between readers, ranging from 0 (agreement by chance) to 1 (perfect agreement).
Apparent diffusion coefficient
Both neuro-oncologists reported restricted diffusion in only two of the 21 patients with available diffusion-weighted imaging (DWI) and ADC sequences, and both patients had WHO grade 3 tumors. Interrater agreement on restricted diffusion was k = 1.0. When calculating the ADC values with receiver operating characteristic analysis by a neuroradiologist, the area under the curve was 0.473, which was not significantly different from 0.5 (null hypothesis; p = 0.14).
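The ROC area-under-the-curve test described above can be illustrated with the rank-based (Mann-Whitney) estimator of AUC. The function below is a generic sketch, not the software used in the study, and the ADC values are invented for the example.

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney identity:
    AUC = P(score_pos > score_neg) + 0.5 * P(tie).
    An AUC near 0.5 means the score does not separate the classes."""
    wins = ties = 0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical ADC values (x10^-3 mm^2/s) for grade 3 ("positive") and
# grade 2 ("negative") tumors -- illustrative numbers, not the study data:
adc_grade3 = [0.85, 1.10, 0.95, 1.20]
adc_grade2 = [1.00, 0.90, 1.15, 1.25]
print(roc_auc(adc_grade3, adc_grade2))  # → 0.375
```

An AUC this close to 0.5 (as with the 0.473 reported in the study) indicates essentially no discriminative value for the score.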
Discussion
Patients with suspected brain tumors discovered on imaging are usually referred for surgical intervention because this provides tumor tissue for histological diagnosis and molecular testing and the extent of resection is an important prognostic factor. Although MRI is the best noninvasive technique to diagnose brain tumors, there is no set of imaging features that fully predicts tumor histology or histological grade with a high level of accuracy, and interpretation of these features is not easy to standardize. This is particularly relevant in the context of deep tumors or those in eloquent brain where extensive (or even subtotal) resection may not be possible. In these situations, it would be useful if conventional imaging findings (contrast enhancement, necrosis, calcification) helped predict the most malignant component of the tumor to guide a biopsy, thereby reducing the likelihood of 'undergrading' the tumor.
Contrast enhancement is reported in 40-60% of oligodendrogliomas in different studies, and enhancement has been suggested as an imaging feature indicating higher grade [15,16]. However, the 'chicken-wire' capillary network is believed to be behind the contrast enhancement occasionally observed in low-grade tumors [17]. In our study, contrast enhancement (of any amount) was noted in 31% and 60% of all cases by the two readers, respectively; extensive contrast enhancement (more than 50% of the lesion area) was noted only in grade 3 tumors by both readers (in 15% and 26% of cases, respectively). Thus, the presence of an extensive area of contrast enhancement can be helpful in predicting higher grade, but not all high-grade oligodendrogliomas show contrast enhancement. The origin of contrast enhancement is thought to be related to neovascularization of the tumor [18]. However, this needs to be interpreted with caution, as previous studies showed that contrast enhancement is not a specific feature to simply differentiate grade 2 from grade 3 oligodendroglioma [19].
Necrosis was noted by the first reader in 12% of our sample patients (all of them were grade 3 in histology) and reported by the second reader in 19% (82% of those were grade 3), suggesting this feature increases diagnostic confidence to predict a higher-grade pathology. Of note, the presence of necrosis in oligodendroglioma does not carry the same unfavorable implication as in astrocytic tumors. In a study published in 2006, necrosis was a predictor for poor overall survival in anaplastic oligoastrocytoma but not in anaplastic oligodendrogliomas [20].
In our small sample of patients (only 17 had pre-surgical CT head available for review), calcification was noted by both readers only in grade 3 tumors (seven patients had calcification out of nine WHO grade 3 with available pre-surgical CT head). Although a larger sample size would be needed to reach stronger conclusions, the high frequency of calcification is of interest. Calcification is commonly reported in oligodendroglioma and could suggest 1p/19q co-deleted tumor [21]. In concordance with our data, a previous study suggested that calcification is more common in grade 3 oligodendroglioma, although this was not statistically significant [12].
The ADC sequence depends on the diffusion of water molecules within the tissue and correlates with the cellularity of that tissue. This sequence has been proposed to 'predict' glioma grade and help identify appropriate tumor biopsy sites [22]. In agreement with previous observations suggesting that diffusion restriction is not commonly reported in oligodendroglioma, we noticed restricted diffusion in only 9.5% of our samples (two of 21 patients), and both patients had anaplastic oligodendroglioma [23]. Khalid et al. [12] proposed an ADC cutoff below which higher grade can be predicted. However, when calculating the ADC values with ROC analysis for our sample, the AUC did not meet the predetermined significance cutoff, implying the ADC value may not be a reliable indicator of a grade 3 tumor.
Notwithstanding the potential role of conventional MRI features in noninvasively predicting grading, in this article, we call attention to the pitfalls of identifying these features by reporting the interrater agreement between two expert neurologists/neuro-oncologists when interpreting MRI scans of patients with WHO grade 2 and 3 oligodendroglioma. Our goal was to recapitulate common issues found in clinical practice, especially in clinical settings where there is no immediate access to neuroradiologists with expertise in primary brain tumors, or in referral centers where patients request a new opinion providing previous imaging studies obtained at several institutions. Patients are commonly referred for consultation from multiple other academic and/or private practices with variable experience in the imaging diagnosis and management of primary brain tumors. Notably, interrater agreement was poor for interpretation of contrast enhancement in grade 2 tumors. Upon additional review, common reasons for disagreement were the presence of small areas of enhancement that were considered normal vasculature and variations in the amount of enhancement due to heterogeneous techniques used in studies obtained at different institutions. Conversely, there was good interrater agreement in interpreting the following measures: presence of any amount of contrast enhancement in grade 3 oligodendroglioma, extensive contrast enhancement and necrosis. Complete agreement was reported in interpretation of calcification and restricted diffusion, although the number of appropriate studies to evaluate these features was smaller. Our observations, when taken together with findings from previous studies in which experts were shown to have higher agreement than novices in imaging interpretation [24], highlight the limitations of imaging interpretation and the need to use a homogenous MRI protocol across different centers to decrease variability in interpretation due to technical factors. 
Importantly, a Brain Tumor Imaging Protocol for standardization of imaging in neuro-oncology has already been proposed [25], and even though it is primarily designed for incorporation of imaging in multicenter clinical trials, our findings support the need to incorporate this standard into routine clinical imaging to decrease variability of imaging interpretation.
Our study had several limitations. The sample size was relatively small, although comparable with a previous study reporting similar findings in 75 patients [12]. Because this was a retrospective study, there were limitations regarding the availability of imaging studies (i.e., CTs) before surgical intervention and the availability of specific sequences (DWI, ADC); however, this reflects the real-world challenges of imaging interpretation, especially for clinicians who need to evaluate and compare imaging studies obtained with variable protocols. Moreover, ADC values can vary depending on multiple factors (different scanners, software, etc.), and this might be the reason why the use of ADC is limited in clinical practice [26]. Adding advanced imaging technology to conventional MRI is showing promise for obtaining better data for grading oligodendrogliomas, but further study is needed [27]. About one-quarter of the patients included in our series underwent biopsy only, which could have assigned an incorrect grade to the tumor due to sampling error; hence our study goal of decreasing the chance of biopsy target error due to intratumoral heterogeneity [28]. Seven patients were originally diagnosed before the implementation of molecular requirements for the diagnosis of oligodendroglioma by the WHO 2016 classification (IDH mutations and 1p19q codeletion), and whereas these tumors were centrally confirmed as histological oligodendrogliomas, they can only be classified as oligodendroglioma or anaplastic oligodendroglioma NOS based on current WHO guidelines.
However, the inclusion of this small number of tumors without full molecular confirmation (which could potentially be molecular astrocytoma or other rare tumor entities) does not invalidate the analysis of imaging features for prediction of grade and the analysis of low interrater agreement when evaluating the same imaging studies because these challenges likely apply to imaging interpretation of all gliomas and brain tumors in general. Importantly, the relevance of histological grade to predict prognosis in oligodendroglioma is being revisited. Recent retrospective data have suggested superior importance of molecular profile over histological grading (2 vs 3) to predict prognosis of diffuse glioma patients in general [29] and oligodendroglioma in particular [30]. Although these studies convey valid points, they need confirmation in prospective studies, and histological grading is still included in the WHO 2016 and current WHO 2021 classification of gliomas and it is shown to affect survival independent from other factors in IDH mutant tumors [4,31,32]. Hence, histological grading continues to bear importance in clinical decision-making in oligodendrogliomas and other gliomas.
Finally, although the imaging technique was not homogeneous across all studies, and we reported the inter-rater agreement between two non-radiologists, we consider these to be strengths of our study, as they reflect the challenges of daily clinical practice and the importance of introducing homogeneous imaging protocols for the evaluation of brain tumors to minimize variability in interpretation. A prospective study using this homogeneous imaging protocol, introducing standard quantification of findings and evaluating interrater readings, would help establish the value of this protocol in decreasing reading variability and predicting grade more accurately.
Conclusion
In our patient population with oligodendroglioma, preoperative brain imaging demonstrating extensive enhancement, necrosis and calcification suggested higher tumor grade. These findings may provide a guide to the optimal biopsy site if extensive resection is not feasible. However, imaging interpretation is variable among readers, underscoring the need for standardized imaging protocols, reproducible quantification of findings such as contrast enhancement and calcifications, and provision of measurements of calculated values such as ADC.
Summary points
• In keeping with previous studies, areas with extensive enhancement and/or necrosis were associated with higher grade.
• Such areas with extensive enhancement and/or necrosis can guide biopsies if resection is not feasible.
• Our findings of interobserver discrepancies call attention to the pitfalls of imaging interpretation in a real-world setting.
• Our findings underscore the need for both standardized brain tumor imaging protocols to decrease variability of imaging acquisition at different institutions and quantification of findings, such as contrast enhancement and calcifications.
• This study is the first to explore the difficulties of visual interpretation, even among expert clinicians, in predicting oligodendroglioma grading.
Authorship
All listed authors participated in the writing of the manuscript and have read and approved the final version.
Financial & competing interests disclosure
The authors are members of The NCI Comprehensive Oncology Network for Evaluating Rare CNS Tumors (NCI-CONNECT), which is a program within the Rare Tumor Patient Engagement Network (RTPEN), an initiative supported by the Cancer Moonshot funds and managed at the National Institutes of Health, National Cancer Institute, Center for Cancer Research, Neuro-Oncology Branch. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
No writing assistance was utilized in the production of this manuscript.
Ethical conduct of research
The authors state that they have obtained appropriate institutional review board approval or have followed the principles outlined in the Declaration of Helsinki for all human or animal experimental investigations. In addition, for investigations involving human subjects, informed consent has been obtained from the participants involved.
Open access
This work is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/
Aperture-Sensitive Distal Depth Scanning Raman Spectroscopy
Depth-sensitive Raman spectroscopy has shown its advantages in the detection of Raman spectra of layered tissues. However, until now, depth scanning in Raman spectroscopy has used the proximal scanning mode, which has disadvantages in the significant variation of probe-sample contact and the size of the probe. Here, we developed a new Raman system with an axicon lens and a graduated ring-activated zero aperture. The focal range of the system was about 35 mm, nearly two orders of magnitude higher than that of a traditional Raman system. The Raman spectrum obtained from different materials at a fixed depth was demodulated from two Raman spectra of two adjacent depth ranges. Two- and three-layer models were used to demonstrate the effectiveness of the proposed Raman system. The Raman spectra of polyvinyl chloride, polyphenylene ether and silica gel were demodulated successfully from overlapped Raman spectra. The new Raman system used the distal scanning mode and offered the following advantages. First, the Raman spectra of the full range and of a fixed depth can be obtained simultaneously. Second, depth scanning is performed far from the sample. Third, the method can be used in endoscopic applications to reduce the size of the probe.
I. INTRODUCTION
COMPARED with other optical spectroscopy techniques, Raman spectroscopy has advantages in chemical discrimination and its nondestructive and label-free characteristics [1], [2], [3]. The method provides the fingerprints of functional groups, and these fingerprints can be used to identify intrinsic molecular constituents [4], [5]. For example, Olivo et al. developed a portable ultrawideband CRS system that used dual-wavelength excitation with a dual-passband laser cleaning filter and a high-speed fiber array multiplexer to improve system robustness; corneum thickness was obtained quickly and simultaneously with its Raman spectra in both the fingerprint and high wavenumber regions [6]. Moreover, since the Raman scattering background from water is weak, Raman spectroscopy is suitable for tissue measurements [7], [8], [9], [10]. For these applications, especially endoscopic measurements, a step motor is not suitable for depth scanning [11], [12], [13]. Therefore, novel depth scanning techniques attract much attention. Depth scanning techniques have improved significantly in the past decade, and spatial offset Raman spectroscopy (SORS) [14], [15], [16] was proposed for depth-sensitive Raman spectroscopy. SORS uses the divergent excitation light from an optical fiber [17], a collimated beam [18], or a focused beam for excitation, but a ring of fibers spatially apart from the excitation spot for subsurface detection. However, these methods based on SORS rely on moving the area of collection of scattered light away from the laser-illuminated zone. Accordingly, SORS uses a proximal depth scanning mode, which is the same depth scanning mode used in motor stage-based Raman systems. This scanning mode increases the complexity of the system and the size of the probe because of the existence of a moving part in the system that is used to reposition the collection probe [17], [19].
In this paper, we developed a distal scanning Raman system. In this system, a graduated ring-activated zero aperture was used to select the Raman spectrum from the wide focal range of an axicon lens. The main differences between our system and a traditional Raman system were the lens and the added adjustable aperture. Instead of an objective lens, an axicon lens was used to modulate incident light over the depth direction to obtain the full range of Raman information without depth scanning [20], [21]. Depth scanning was realized by changing the size of the graduated ring-activated zero aperture to select the Raman spectrum from a fixed depth range.
A. Experimental Setup
The full-range optical setup of the Raman spectroscopy system is illustrated in Fig. 1. The near-infrared excitation light source with a central wavelength of 785 nm was collimated by lens L1. The collimated light was filtered by a Rayleigh filter to reject undesired wavelengths and subsequently reflected. The excitation light was focused onto the sample by an axicon lens to obtain an extensive focal depth. The backscattered light from different depths was collimated by the axicon lens. The Raman signal was selected with a long-wavelength pass filter. An aperture whose size can be adjusted was inserted in front of the pinhole to select the Raman signal from different depths. Finally, the Raman signal was collected by a spectrometer. Different from traditional Raman spectroscopy, this system used an axicon lens instead of an objective lens to extend the focal depth and obtain the full-range Raman signal. The Raman signal from a given depth was selected by adjusting the aperture size.
B. Principle
The focal length of the axicon lens is given by [22]

f = D / [2(n − 1) tan α]  (1)

where D is the diameter of the incident light, α is the base angle of the axicon lens, and n is the refractive index of the axicon lens. The equation indicates that the focal length varies with the diameter of the incident light when an axicon lens is used. The diameter of the incident light in our system is about 12 mm. The base angle and refractive index of the axicon lens are 20° and 1.45, respectively. The calculated focal length is about 36 mm. When only the backscattered Raman signal is considered, the relationship between the selected depth and the change in aperture size is [22], [23]

ΔL = 2.97 ΔD  (2)

According to (2), the depth resolution is related to the minimum aperture size at which Raman spectra can be detected, provided the optical power and the performance of the spectrometer are ensured. However, in practical application, when the size of the aperture is selected, the measured depth may be larger than the theoretical value. The reason is that part of the Raman signal from directions other than the backscattered light can also be detected.
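The focal-range estimate can be checked numerically. The sketch below assumes the common small-angle thin-axicon form f = D / [2(n − 1) tan α], which reproduces the roughly 36 mm focal range quoted for D = 12 mm, α = 20° and n = 1.45; the exact expression used in [22] may differ slightly.

```python
import math

def axicon_focal_range(D_mm, alpha_deg, n):
    """Axicon focal range under the thin-axicon, small-angle assumption:
    f = D / (2 (n - 1) tan(alpha)). All lengths in mm."""
    return D_mm / (2 * (n - 1) * math.tan(math.radians(alpha_deg)))

# System parameters from the text: 12 mm beam, 20 degree base angle, n = 1.45
print(round(axicon_focal_range(12, 20, 1.45), 1))  # → 36.6
```

The same expression gives the aperture-to-depth sensitivity dL/dD = 1 / [2(n − 1) tan α], about 3.05 mm of depth per mm of aperture for these parameters, in the same range as the calibration constant 2.97 quoted in (2).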
To demodulate the overlapping Raman spectra of different materials, two Raman spectra are required. One is the Raman spectrum S(L1) at the front surface of the desired material (located at L1, corresponding to the aperture size D1), and the other is the Raman spectrum S(L2) at the back surface of the desired material (located at L2, corresponding to the aperture size D2). The Raman signal of this material can then be obtained by

S_material = S(L2) − S(L1)  (3)

The Raman characteristic of the material can be demodulated from the overlapping spectrum when the material's thickness is larger than the depth resolution of the system.
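The subtraction in (3) can be sketched channel by channel: the spectrum collected through the larger aperture (deeper range) minus the one from the smaller aperture isolates the buried layer. The spectra and intensity values below are toy numbers, purely for illustration.

```python
def demodulate_layer(spectrum_far, spectrum_near):
    """Recover a buried layer's spectrum as S(L2) - S(L1): the spectrum
    covering the deeper range minus the one covering only the shallower
    range. Negative residuals from noise are clipped to zero."""
    return [max(far - near, 0.0) for far, near in zip(spectrum_far, spectrum_near)]

# Toy two-layer spectra (arbitrary units, hypothetical):
s_near = [1.0, 5.0, 1.0, 0.5]   # shallow range: top layer only
s_far  = [1.2, 5.1, 4.0, 0.6]   # deeper range: top + buried layer
print([round(v, 2) for v in demodulate_layer(s_far, s_near)])  # → [0.2, 0.1, 3.0, 0.1]
```

The residual peak in the third channel is the buried layer's contribution; the small values elsewhere illustrate how subtraction also accumulates noise, the signal-to-noise penalty discussed later in the paper.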
As previously stated, the backscattered Raman light is only a portion of the detected Raman signal. This means that even when the aperture size corresponds to the depth L1, Raman signal from deeper regions can still be detected. The Raman signal of the desired material can be reconstructed by improving (3). The detailed reconstruction process can be performed as follows.
To identify the Raman spectrum of a single layer from the multi-layer samples, Raman spectra at various aperture sizes were measured. For example, the Raman spectra at aperture sizes of 5 mm and 3 mm were selected to reconstruct the Raman spectra of PVC and PPE. The Raman spectrum of PVC can be obtained by subtracting the overlapping spectra measured at the two aperture sizes, where S_Ramanshift(D1) is the unique Raman intensity of material 1 when the aperture size is D1 and S(D1) is the overlapping Raman spectrum obtained when the aperture size is D1.
For the data presented in Fig. 4, the Raman spectrum of PVC derived from the multi-layer sample can be expressed analogously, as the difference between the spectra measured at the two aperture sizes.

The focal range is measured with Si positioned close to the apex of the axicon lens. Raman spectra are acquired with a depth step of 5 mm. The power of the light source is set to 300 mW and the acquisition time for one Raman spectrum is 10 s. The Raman spectra are obtained 10 times. The depth range of Si when the size of the aperture is 3 mm is obtained with a depth step of 6 mm. The power of the light source is 300 mW and the acquisition time for one Raman spectrum is 10 s. These Raman spectra are collected only once. The aperture is positioned on one side of the optical axis to avoid the error induced by the position of the aperture.
C. Axial Resolution
The axial resolution of the Raman system was not compared between the objective lens and the axicon lens in this manuscript because of the following two factors.
1) The focal lengths of the objective lens and the axicon lens are different, which means the axial resolutions of the two systems should also differ. As a result, a direct comparison between them is not meaningful. 2) In reference [6], Olivo et al. demonstrated that the axial resolution of Raman spectroscopy is related to the NA of the objective lens and the core size of the fiber. Considering that the NA of the axicon lens remains the same for different diameters of incident light [23] and that the core size of the fiber is also the same, the axial resolution of the axicon lens remains the same at different depths. If the focal length of the axicon lens were the same as that of the objective lens, the theoretical axial resolution would also be the same.
D. Preparation of PVC and PPE
The PVC and PPE were purchased from Alibaba. The thicknesses of the PVC and PPE are about 4 mm and 1 mm, respectively. The PPE is positioned above the PVC. The distance between the PPE and the apex of the axicon lens is about 7 mm.
III. RESULTS
We demonstrated the wide focal range of our system by positioning Si at different depths. As shown in Fig. 2, the Raman peak of Si was at 520 cm−1. The measured depth varied from 0 mm to 35 mm (the distance between Si and the apex of the axicon lens). The imaging depth of our system was ∼35 mm when the diameter of the incident beam was 12 mm. This imaging depth was almost one order of magnitude larger than that of a Raman system using a low-NA objective lens. Therefore, the full range of Raman information was obtained without depth scanning in our system.

Fig. 3 showed the imaging depth when the aperture was set to 3 mm. The aperture was placed at the left side of the optical axis of the Raman system to avoid aperture-induced errors. The Raman spectra of Si, whose depth varied from 10 mm to 30 mm, were obtained. The Raman peak was evident when the depth varied from 11 mm to 29 mm. When the depth was 10 mm or 30 mm, the Raman peak was almost equal to the noise peak. The imaging depth was about 18 mm, slightly larger than the theoretical value of 17.82 mm. The depth at which the Raman spectrum could be detected varied from 11 mm to 29 mm instead of 0 mm to 18 mm because the center of the aperture was far from the optical axis of the Raman system.

Fig. 4 presented the relationship between the change of aperture size and the change of depth introduced in (2). The aperture sizes used here were 2.5 mm, 3 mm, 3.5 mm, 4 mm, 4.5 mm, 5 mm and 5.5 mm. Depth scanning was realized using a stepper motor. The step between two adjacent sampling points was 0.5 mm. The imaging depth was defined as the depth at which the Raman intensity falls to two times the noise level. Fig. 4(a) showed the fitting curve of the intensity when the aperture sizes were 3.5 mm, 4.5 mm and 5.5 mm. Fig. 4(b) showed the fitting curve of the relationship between aperture size and depth. The slope of the line in Fig. 4(b) was 2.585 ± 0.088, which was close to the theoretical value of 2.97 in (2).
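The linear fit behind Fig. 4(b) amounts to an ordinary least-squares slope through the aperture-size/depth pairs. The pairs below are synthetic, generated from the reported slope of 2.585, purely to illustrate the fitting step; they are not the measured data.

```python
def slope_intercept(xs, ys):
    """Ordinary least-squares fit y = a*x + b over paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Aperture sizes used in the experiment (mm) and synthetic depths (mm)
# built from the reported slope, for illustration only:
apertures = [2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5]
depths = [2.585 * d for d in apertures]
a, b = slope_intercept(apertures, depths)
print(round(a, 3))  # → 2.585
```

With real measurements the residual scatter around this line yields the reported uncertainty (±0.088) on the slope.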
The main error was induced by the scattered light from different directions, the error in depth adjustment by the stepper motor, and noise. The main difference in the Raman peaks of PVC and PPE can be observed at 636 and 809 cm−1, where the Raman peaks do not overlap and the intensity values are at their maxima. Fig. 6 showed the Raman spectra obtained from the multi-layer sample of PVC and PPE. The shapes of the spectra were the same, except when the size of the aperture was 1 mm. The intensity ratios at 636 and 809 cm−1 varied with the size of the aperture, as shown in Table I. This result indicated that the Raman signal of PVC decreased when the size of the aperture decreased.
To demonstrate that this phenomenon was not induced by the signal-to-noise ratio of the system, we calculated the maximum peak ratio of PVC, at 636 cm−1. The results were shown in Table I. When the aperture size decreased from 12 mm to 5 mm, the ratio was almost equal; when the aperture size decreased from 5 mm to 3 mm, the ratio changed. When the aperture size was changed to 2 mm, the theoretical imaging depth became 5.94 mm. This value was larger than the thickness of the PVC because the imaging depth calculated by (2) is in air and the refractive index of the material is not considered. The refractive index of PVC is about 1.52, and its optical thickness was about 6.08 mm, which is close to the measured value of 5.94 mm. The main error arose from the assumption that only the backscattered Raman signal was collected; in practical application, Raman signals from other directions were also collected, which extended the imaging depth.

Fig. 7(a) and (b) showed a comparison of the PVC and PPE spectra separated from the multi-layer sample with those of single PVC and PPE. The results indicated that using the axicon lens and the graduated ring-activated zero aperture was effective in obtaining Raman spectra from different depths. The Raman peak of PPE obtained from the multi-layer sample was similar to that obtained from pure PPE. In Fig. 7(a) and (b), the spectra were smoothed and the baseline was removed with the algorithm developed by Renishaw. After these pre-processing procedures, the Raman peaks separated from the multi-layer sample and those obtained from pure PVC or PPE became similar.

A three-layer model including PET, silica gel and PC was used to analyze the influence of the number of layers. The full-range Raman spectra at different aperture sizes were shown in Fig. 8, and the demodulated Raman spectra of PET, silica gel and PC were shown in Fig. 9.
Compared with the spectra of pure PET, pure silica gel and pure PC obtained by traditional Raman spectroscopy, the main Raman peaks were the same, especially for the upper layers. For the bottom PC layer, only one Raman peak appeared in the signal demodulated from the full-range Raman spectra. One reason is that the light power incident on the bottom PC layer is relatively low, and the collected Raman signal must penetrate through the PET and silica gel, which attenuates it. The other reason is that the demodulation algorithm used here is the subtraction of two Raman spectra: subtraction decreases the Raman intensity we want to demodulate while increasing the noise, so the signal-to-noise ratio of the demodulated signal decreases. The demodulation algorithm therefore needs to be improved to obtain a high signal-to-noise ratio. Fig. 10 shows the Raman spectra of pork in the range of 1000 cm−1 to 1700 cm−1. The skin of the pork is thinner than 1 mm, and below the skin is mostly fat. The Raman spectra of skin and fat are nearly the same except at 1299 cm−1 (attributed to the CH2 twist) for fat and 1253 cm−1 (attributed to the amide III band) for skin. As shown in Fig. 10, when the aperture size was 3 mm, the intensity at 1253 cm−1 was higher than that at 1299 cm−1. As the aperture size increased, the ratio of the intensities at 1253 cm−1 and 1299 cm−1 decreased; in particular, when the aperture size was 12 mm, the intensity at 1299 cm−1 was higher than that at 1253 cm−1. This demonstrates that modulating the Raman intensity from different depths by adjusting the aperture size is effective.
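The signal-to-noise penalty of subtraction-based demodulation can be illustrated numerically: the demodulated signal is the difference of two intensities, while independent noise adds in quadrature. This is a sketch with hypothetical intensities and equal noise levels, not the actual instrument data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000          # repeated noisy measurements at a single wavenumber
sigma = 1.0          # noise standard deviation of each spectrum (assumed equal)
signal_a, signal_b = 10.0, 7.0   # hypothetical intensities; their difference (3.0)
                                 # is the layer signal to be demodulated

a = signal_a + rng.normal(0.0, sigma, n)
b = signal_b + rng.normal(0.0, sigma, n)
demodulated = a - b   # mean ~3.0, but noise std grows to sigma * sqrt(2)

snr_single = signal_a / sigma                               # 10.0
snr_demod = (signal_a - signal_b) / (sigma * np.sqrt(2.0))  # ~2.1
```

The difference spectrum carries a weaker signal with sqrt(2)-larger noise, which is why the demodulated PC spectrum loses all but its strongest peak.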
V. CONCLUSION
In this study, we described a new kind of Raman spectroscopy. It can obtain comprehensive full-range Raman spectra and demodulate the Raman spectra of the desired material over the full range. Two- and three-layer models were used to demonstrate the new system. The advantage of this depth-sensitive Raman system is that it uses a distal depth-scanning mode, which allows for the simplification and size reduction of the endoscopic probe. This design is convenient for clinical measurements, especially in vivo ones, because precise control of the lens-sample distance is unnecessary. Considering that the Raman signal can be obtained synchronously, future improvements are likely to use a spatial light modulator to select the Raman signal from different depth ranges synchronously and to improve the demodulation algorithm to avoid the decrease in the signal-to-noise ratio.
"year": 2022,
"sha1": "9f966f980ce2a0a9e0efa7ad939215afa2063c55",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/4563994/4814557/09965592.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "2ed409f1eaeebd8e766a12312b6fb9be176d419e",
"s2fieldsofstudy": [
"Materials Science",
"Medicine",
"Physics"
],
"extfieldsofstudy": []
} |
Acute Abdomen Secondary to a Spontaneous Perforation of the Biliary Tract, a Rare Complication of Choledocholithiasis
Highlights

• The spontaneous perforation of the biliary tract is an extremely rare cause of peritonitis.
• We present a case of spontaneous perforation of the biliary tract in a patient with a history of 4 previous ERCP, a current diagnosis of choledocholithiasis and a probable cholangitis.
• This is the first case of spontaneous perforation of the biliary tract treated with ultrasound-guided drainage and ERCP successfully reported in México.
Introduction
Abdominal pain is one of the most common reasons for visits to the emergency room, comprising 7-8% of all visits [1,2]. Typically the patient is admitted to the emergency department with abdominal pain and a systemic inflammatory response, including fever, tachycardia, and tachypnea; abdominal rigidity suggests the presence of peritonitis [3]. The most common causes of peritonitis are appendicitis, cholecystitis, postoperative complications, and colonic non-diverticular perforation, among others.
The precise cause of spontaneous perforation of the biliary tract (SPBT) is unknown; it has been explained by obstruction leading to excessive pressure and duct compromise in areas of weakness [4,5,8,9]. Seventy percent of cases are related to choledocholithiasis [14].
Since the symptoms and signs are not specific, the diagnosis is often delayed, and in most cases it is made at surgery [6,10-12]. Here we present a case of spontaneous perforation of the biliary tract in a patient with choledocholithiasis, which was treated with ultrasound-guided drainage and ERCP. The following case has been reported in line with the SCARE criteria [13].
Case presentation
A 51-year-old male was admitted to the emergency room with jaundice of 15 days' evolution, localized pain in the right flank and hypochondrium for 3 days, and poor food tolerance. He had a history of two episodes of cholangitis resolved with ERCP, with subsequent cholecystectomy 15 years ago. In 2014 he developed severe cholangitis and was admitted to the ICU, where another ERCP was performed. He was admitted again in 2015 with grade II cholangitis treated with ERCP and stent placement; the stent was withdrawn later that same year.
On physical examination, we found a conscious and oriented patient, tachycardic and jaundiced. The abdomen was distended and depressible, with decreased intestinal peristalsis, pain on superficial and deep palpation mostly in the right flank and hypochondrium, and rebound tenderness.
The laboratory tests showed leukocytosis of 22,640/μl with 88.39% neutrophils, procalcitonin of 9.26 ng/ml, total bilirubin of 5.55 mg/dl, ALP 94 U/l, and GGT 135 U/l; amylase and lipase were within normal parameters.
A computed tomography (CT) scan was performed, which showed free perirenal fluid in the Morrison space and paracolic gutter, as well as dilation of the intra- and extrahepatic biliary tract (Figs. 1 and 2). It was therefore decided to perform an ultrasound-guided drainage with a pigtail catheter, obtaining 200 cc of biliary fluid on the first day. The next day, magnetic resonance imaging (MRI) was performed, which showed evidence of obstruction of the biliary tract secondary to probable choledocholithiasis, in addition to a possible biliary leak in the middle third (Fig. 3).
The patient was sent to ERCP, where the gallstone was removed with no evidence of biliary leakage. He was discharged 9 days after admission, without a systemic inflammatory response and with a biliary drainage of about 50 cc. The catheter was removed 7 days later during an outpatient visit, without further complications.
Discussion
Spontaneous perforation of the biliary tract is a disease entity in which the wall of the extrahepatic or intrahepatic duct is perforated without any traumatic or iatrogenic injury [5]. The gallbladder, common bile duct, common hepatic duct and anomalous ducts of the liver are specific sites of biliary compromise; in most cases the perforations are extrahepatic and are found frequently at the junction of the cystic duct and the common bile duct [6,9]. In our case, the leak was located in the middle third of the common bile duct.
Although the precise cause of SPBT is unknown, there are some factors that can be involved, such as ischemia, elevated pressure within bile ducts, congenital weakness of the bile duct wall and pancreaticobiliary reflux [4,5,8,9]; 70% of the cases are related to choledocholithiasis [14].
One of the mainstays in the treatment of choledocholithiasis is ERCP with endoscopic sphincterotomy. Even though endoscopic sphincterotomy carries a risk of long-term complications (5.8-24%), including recurrent duct stones and cholangitis [15,16], the late complication rate after repeat ERCP is higher, at 36% [16]. Our patient had a history of 4 previous ERCP and previous episodes of cholangitis, which could have caused weakness of the bile duct wall. In addition, the current diagnosis of choledocholithiasis and probable cholangitis were risk factors that may have triggered the SPBT.
The clinical presentation of SPBT varies greatly from nonspecific abdominal pain to biliary peritonitis, without a characteristic history [4,9,10,17]. In most cases, the encapsulation of the bile within the omentum and mesentery prevents generalized peritonitis, forming bilomas, which are generally localized in the right upper quadrant of the abdomen. As the bile is sterile and is absorbed by the peritoneum, the patients may not present symptoms for weeks, until the bile becomes superinfected [10,18]. Thus the diagnosis of biliary tract perforation is often delayed, which results in high morbidity [6].
Several imaging modalities can be used in the diagnosis of biliary leaks and bilomas. These modalities include ultrasound (US), computed tomography (CT), Magnetic resonance imaging (MRI), and nuclear medicine hepatobiliary cholescintigraphy [18,19].
The US is often the initial diagnostic modality and can show an anechoic, well-circumscribed fluid collection; a complex loculated collection with internal septations is suggestive of infection [18,20]. The CT can show discrete fluid collections with or without a surrounding peripheral capsule; commonly the density is less than 20 Hounsfield units. Furthermore, the CT is an excellent study to identify a collection and assess the surrounding anatomy [18-20].
On MRI, a biliary leak or a biloma appears as a hypointense signal on T1-weighted images and a hyperintense signal on T2-weighted images. Magnetic resonance cholangiography (MRC) can also be helpful in demonstrating communications between the biliary ducts and a fluid collection; the accuracy of diagnosis and localization of bile extravasation by T2-weighted MRC is in the range of 70-74% [21]. Hepatobiliary cholescintigraphy is able to demonstrate continuity of fluid collections with the biliary tree; nevertheless, this modality does not provide highly detailed anatomy, so identifying the precise location of the leak can be difficult [20].
The differential diagnosis should include hematoma, seroma, liver abscess, cyst, pseudocyst, and lymphocele. Percutaneous aspiration, usually performed with an 18-22 gauge co-axial needle under CT or US guidance, can also aid in diagnosis and treatment, but in most cases the diagnosis is made at surgery [10-12,20].
Universal management involves decompression of the biliary tree and repair of the leak site [9]. Formerly, the treatment was an open or laparoscopic surgical approach [5,8]. Nowadays, percutaneous drainage of the collection and endoscopic modalities are preferable to surgery as the first step in treatment for patients who are stable and without peritonitis [5,8,10,22]. A procedure to repair the perforation is not always necessary; spontaneous closure has been observed in case reports [5], as was the case in our patient.
Conclusion
The spontaneous perforation of the biliary tract is a disease with nonspecific symptoms and represents a diagnostic challenge. SPBT should be suspected in a patient with a history of biliary disease who presents with acute abdominal pain and a CT scan or US with findings compatible with a biloma. MRC is helpful for identifying the site of injury in the biliary tract and can be decisive in choosing the treatment modality, so it should be performed if the resource is available. The treatment of patients with SPBT is not well established and has to be individualized for each case, depending on the history of the patient, the site of perforation, the time of evolution, the suspicion of infection, and the patient's status. In our patient, we performed drainage of the biloma and ERCP as the treatment because of the history of previous cholecystectomy, the presence of choledocholithiasis, and hemodynamic stability. In a patient with gallbladder disease, the laparoscopic modality can be used to perform a cholecystectomy and a bile duct exploration, with or without placement of a T-tube. In all cases, the treatment must involve decompression of the biliary tract and drainage of the biloma. To our knowledge, this is the first case of spontaneous perforation of the biliary tract treated with ultrasound-guided drainage and ERCP successfully reported in México.
All authors declare no conflict of interest regarding the publication of this article. No external funding was needed. The patient signed an authorization for publication of the case.
Conflicts of interest
None.
Funding
None.
Ethical approval
The written consent was signed by the patient.
Consent
The written consent was signed by the patient. No personal information is given or modified.
"year": 2017,
"sha1": "d62a6df100dc4f2692ed6559afef8abd6d80173c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijscr.2017.10.040",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d62a6df100dc4f2692ed6559afef8abd6d80173c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
MRI-based Algorithm for Acute Ischemic Stroke Subtype Classification
Background and Purpose In order to improve inter-rater reliability and minimize diagnosis of undetermined etiology for stroke subtype classification, using a stroke registry, we developed and implemented a magnetic resonance imaging (MRI)-based algorithm for acute ischemic stroke subtype classification (MAGIC). Methods We enrolled patients who experienced an acute ischemic stroke, were hospitalized in the 14 participating centers within 7 days of onset, and had relevant lesions on MR-diffusion weighted imaging (DWI). MAGIC was designed to reflect recent advances in stroke imaging and thrombolytic therapy. The inter-rater reliability was compared with and without MAGIC to classify the Trial of Org 10172 in Acute Stroke Treatment (TOAST) of each stroke patient. MAGIC was then applied to all stroke patients hospitalized since July 2011, and information about stroke subtypes, other clinical characteristics, and stroke recurrence was collected via a web-based registry database. Results The overall intra-class correlation coefficient (ICC) value was 0.43 (95% CI, 0.31-0.57) for MAGIC and 0.28 (95% CI, 0.18-0.42) for TOAST. Large artery atherosclerosis (LAA) was the most common cause of acute ischemic stroke (38.3%), followed by cardioembolism (CE, 22.8%), undetermined cause (UD, 22.2%), and small-vessel occlusion (SVO, 14.6%). One-year stroke recurrence rates were the highest for two or more UDs (11.80%), followed by LAA (7.30%), CE (5.60%), and SVO (2.50%). Conclusions Despite several limitations, this study shows that the MAGIC system is feasible and may be helpful to classify stroke subtype in the clinic.
Introduction
Stroke is a heterogeneous neurological disorder, and it is clear that functional outcome, recurrence, and strategies for secondary prevention differ by subtype. 1 To enable robust research in these areas, accurate classification of different stroke syndromes into homogenous subgroups is important. 2 The Trial of Org 10172 in Acute Stroke Treatment (TOAST) investigators devised a classification system of ischemic stroke subtypes based on inferred cause, 3 which has gained wide acceptance for both clinical and research purposes. 2 However, in a series of 18 patients who were independently assessed by 24 physician-investigators, the scheme had only moderate inter-rater reliability (overall κ = 0.54), 4 mainly because it inflated the category of "stroke of undetermined etiology." Depending on the quality, completeness, and rapidity of the work-up, most registries that use the TOAST system for stroke classification have failed to identify a definite cause in 25%-39% of patients. 5 Recent advances in stroke imaging (diffusion weighted imaging, DWI; high-resolution wall imaging) and in the evaluation of embolic sources make it possible to develop criteria for the most likely mechanism behind an ischemic stroke. 1,6 Refining algorithms for determining the subtype of ischemic stroke enables physicians to make consistent decisions on the diagnosis of the stroke subtype and improves inter-rater agreement. 3 In this study, in an attempt to improve the inter-rater reliability of the TOAST system and to minimize the proportion of undetermined stroke, we developed and validated a magnetic resonance imaging (MRI)-based diagnostic algorithm for acute ischemic stroke subtype classification (MAGIC) that reflects recent advances in stroke imaging and thrombolytic therapy. We describe and compare the baseline characteristics, extent of diagnostic evaluation, acute management, and stroke recurrence of our stroke population according to the stroke subtypes diagnosed by this algorithm.
Study population
This study was performed based on a prospective stroke registry of the fifth section of the Clinical Research Center for Stroke (CRCS-5). 7 The CRCS-5 was sponsored by the Korean government and began to collect data from hospitalized patients with stroke in order to characterize the epidemiology of stroke and the status of stroke care in Korea. Among the patients who were hospitalized in the 14 participating centers between July 2011 and May 2013, those who met all of the following eligibility criteria were enrolled in this study: 1) diagnosed with ischemic stroke; 2) arrived at a hospital within 7 days of onset; and 3) had relevant lesions on DWI.
Information on demographics, medical history, risk factors, stroke characteristics, including TOAST classification using MAGIC, and acute management were obtained directly from the registry database of the CRCS-5.
In all participating centers, approval was obtained from local institutional review boards for the collection of anonymized clinical data into the registry database, without patients' consent, from 2008, in order to monitor and improve the quality of stroke care. In 2011, we obtained further approval from the institutional review boards for collecting outcomes of hospitalized patients with informed consent from them or their family members.
Algorithm development
MAGIC was first proposed in 2008 and was revised during the subsequent two years through monthly steering committee meetings of the CRCS-5. For training on MAGIC, we developed a website (www.crcsmagic.com). All the neurology residents, stroke physicians, and stroke nurses participating in the CRCS-5 were requested to train themselves through this website.
MAGIC is composed of the five following steps: 1) consideration of other determined etiology of stroke; 2) screening for small-vessel occlusion (SVO) on DWI; 3) consideration of relevant artery stenosis or occlusion; 4) consideration of recanalization status after thrombolytic therapy; and 5) consideration of follow-up recanalization status without thrombolytic therapy. The order of steps and other details of MAGIC were designed for improving the feasibility and convenience in applying the algorithm in clinical practice and were finalized after feedback from participating stroke physicians.
Step 1. Consideration of other determined etiology of stroke (Figure 1A)

The other causes category includes patients with a diverse array of stroke mechanisms. Disorders included in this category are difficult to categorize into more homogenous groups. A patient who has a rare cause of ischemic stroke (Supplementary Table 1) 1 would be classified as "other determined cause (OD)" or "two or more undetermined causes (UD ≥ 2)", according to the coexistence of another stroke etiology such as large artery atherosclerosis (LAA), SVO, or cardioembolism (CE).
Step 2. Screening for SVO using DWI (Figure 1B)

A single lesion with a largest diameter of ≤ 20 mm on an axial DWI slice, caused by penetrating artery infarction of the basal ganglia, corona radiata, thalamus, or pons, would be classified as SVO. 1 If the medical history or electrocardiography (ECG) identifies high-risk cardioembolic sources (Supplementary Table 2), 1 the infarction is classified as UD ≥ 2. If accompanied by relevant stenosis of a corresponding cerebral artery on angiographic evaluation, including CT (computed tomography) angiography, MR angiography, or conventional angiography, it is classified as "large artery atherosclerosis with lacunae (LAA-LC)". Infarctions in the midline extending from the base of the pons into the tegmentum without significant relevant artery stenosis would be classified as "branch atheromatous disease (LAA-BR)".

Step 3. Consideration of relevant artery stenosis or occlusion (Figure 1C)

Relevant arterial pathology was defined as stenosis or occlusion of arteries supplying the vascular territory of acute ischemic lesions detected on DWI. Stenosis of less than 50% was also regarded as relevant when clinical syndromes, lesion patterns on DWI, and new imaging techniques such as high-resolution wall imaging supported its relevance.
In cases of a single lesion with a largest diameter > 20 mm or multiple lesions with no steno-occlusion of a relevant artery on angiographic evaluation, the possibility of cardioembolic stroke should be considered. "Extensive embolic source evaluation", including 24-h Holter monitoring (24-h Holter), transthoracic echocardiography (TTE), and transesophageal echocardiography (TEE), is recommended. In some uncooperative patients, cardiac multi-detector computed tomography (MDCT) 8 substitutes for TEE. Infarctions in which a definite cardioembolic source is not revealed in spite of a comprehensive work-up are classified as "undetermined negative (UD-negative)". However, when relevant lesions are located in the anterior choroidal artery territory, a single cerebellar territory, or the medulla oblongata, where SVO does not seem to cause the infarction, traditional MRI techniques cannot detect vascular pathologies of a relevant artery, and atherosclerosis may be the dominant vascular pathology, 9-12 such infarctions are classified as "large artery atherosclerosis with normal angiography (LAA-NG)" instead of "UD-negative".
When a relevant pathology of a corresponding artery is observed, it is divided into stenosis and occlusion. If the medical history or ECG identifies high-risk cardioembolic sources coexisting with relevant stenosis, the infarction is classified as "UD ≥ 2". When there is evidence of chronic occlusion, or no or low-risk cardioembolic source with relevant stenosis, it is classified as "LAA". Occlusion on pre-stroke angiographic evaluation, border-zone infarction in clinical settings suggestive of hemodynamic failure, or a recent (within 1 month of stroke onset) transient ischemic attack corresponding to the occlusion site is considered evidence of chronic occlusion.
Step 4. Consideration of recanalization status of the occluded artery after thrombolytic therapy (Figure 1D)

If there is occlusion but no evidence of chronic occlusion, and thrombolytic therapy, including mechanical thrombectomy, is performed, the recanalization status after thrombolytic therapy should be considered. When residual stenosis exists, or angioplasty or stenting is performed for atherosclerotic steno-occlusion, the underlying vascular pathology is considered primarily atherosclerotic. When the occlusion is resolved completely, a comprehensive cardioembolic work-up is recommended; in this situation, low-risk cardioembolic sources are regarded as explaining the etiology of stroke.
Step 5. Consideration of follow-up recanalization status of the occluded artery without thrombolytic therapy (Figure 1E)

When there is occlusion and thrombolytic therapy is not performed, follow-up angiographic evaluation is recommended, and the recanalization status on that evaluation guides further investigation and determination of the stroke subtype.
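The five steps above can be sketched as a decision function. This is a heavily simplified, hypothetical rendering for illustration only: the field names are invented, and many clinical branches of MAGIC (e.g., the LAA-NG territories and the details of chronic-occlusion evidence and post-recanalization work-up) are omitted.

```python
def magic_subtype(p):
    """Heavily simplified sketch of the five-step MAGIC ordering.

    `p` is a dict of assumed boolean/numeric flags; this is NOT the
    registry schema, and many clinical branches are omitted.
    """
    # Step 1: rare ("other determined") etiology is considered first
    if p.get("rare_cause"):
        return "UD>=2" if p.get("coexisting_etiology") else "OD"

    # Step 2: screen for small-vessel occlusion on DWI
    if (p.get("single_lesion") and p.get("max_diameter_mm", 99) <= 20
            and p.get("penetrating_artery_territory")):
        if p.get("high_risk_cardioembolic_source"):
            return "UD>=2"
        if p.get("relevant_stenosis"):
            return "LAA-LC"
        return "SVO"

    # Step 3: relevant artery stenosis or occlusion
    if p.get("relevant_stenosis"):
        return "UD>=2" if p.get("high_risk_cardioembolic_source") else "LAA"
    if not p.get("relevant_occlusion"):
        if p.get("high_risk_cardioembolic_source"):
            return "CE"
        return "UD-negative" if p.get("extensive_workup_done") else "UD-incomplete"

    # Steps 4-5: occlusion present -- recanalization status guides the call
    if p.get("chronic_occlusion_evidence"):
        return "LAA"
    if p.get("residual_stenosis_after_recanalization"):
        return "LAA"
    return "CE" if p.get("cardioembolic_source_found") else "UD-incomplete"

# A single 12 mm pontine lesion with no stenosis or cardiac source -> SVO
svo_patient = {"single_lesion": True, "max_diameter_mm": 12,
               "penetrating_artery_territory": True}
```

Encoding the step ordering this way also makes the design intent visible: each patient falls through the steps in a fixed order, which is what makes the classification reproducible across raters.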
Assessment of stroke recurrence
During hospitalization and up to one year from stroke onset after discharge, stroke recurrence was captured prospectively by dedicated and trained stroke nurses through review of medical records, and direct or telephone interview. Monthly review of event data quality and adjudication of events were performed by the independent outcome adjudication committee.
Within 21 days of the index stroke, recurrent stroke was defined as any new neurological symptom/sign, or neurologic worsening after a period of neurological stability or improvement lasting at least 24 h, 13 attributable to new discrete lesions on follow-up CT or DWI. After 21 days from the index stroke, recurrent stroke was defined as suddenly developed focal neurological deficits attributable to occlusion or rupture of cerebral vessels and lasting 24 h or more. Both ischemic and hemorrhagic strokes were considered recurrent strokes.
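The two-window recurrence definition above lends itself to a small helper function; the field names are assumptions for illustration, not fields from the registry:

```python
def is_recurrent_stroke(days_from_index, new_discrete_lesion=False,
                        preceding_stability_24h=False, deficit_lasting_24h=False):
    """Sketch of the two-window recurrence definition (field names assumed)."""
    if days_from_index <= 21:
        # Early window: worsening after >= 24 h of stability/improvement,
        # attributable to a new discrete lesion on follow-up CT or DWI.
        return preceding_stability_24h and new_discrete_lesion
    # Late window: suddenly developed focal deficit lasting >= 24 h.
    return deficit_lasting_24h

# Day-10 worsening with a new DWI lesion after a stable period counts as recurrence
early = is_recurrent_stroke(10, new_discrete_lesion=True, preceding_stability_24h=True)
```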
Validation of MAGIC
We validated MAGIC by comparing the inter-rater reliability of subtype classification with and without applying MAGIC. Forty patients with acute ischemic stroke were randomly selected among those registered in the CRCS-5 database before 2011. Five stroke physicians and five neurology residents from 10 participating centers took part in this validation study (Supplementary Figure 1). The intra-class correlation coefficient (ICC) was obtained to assess inter-rater reliability.
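The paper does not state which ICC form was used; as one common choice, a one-way random-effects ICC(1) can be computed from a subjects-by-raters matrix, as in this sketch:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1) for an (n_subjects, k_raters) matrix."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand_mean = x.mean()
    row_means = x.mean(axis=1)
    # Between-subject and within-subject mean squares from one-way ANOVA
    msb = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between two raters across three subjects -> ICC = 1
perfect = [[1, 1], [2, 2], [3, 3]]
```

For categorical subtype labels the raters' assignments would first need a numeric coding, which is an additional modeling choice not described in the text.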
Statistical methods
Values are presented as means ± SD or median (interquartile range [IQR]) for continuous variables and as the number (%) of subjects for categorical variables.
Baseline characteristics (Table 1) and the extent of diagnostic evaluation (Table 2) were compared according to stroke subtypes using analysis of variance or the Kruskal-Wallis test for continuous variables and Pearson's chi-squared test for categorical variables. Stroke recurrence rates were estimated using the Kaplan-Meier product-limit method. Cumulative 7-day, 30-day, 90-day, and 1-year rates of stroke recurrence and their 95% confidence intervals (CIs) were calculated for the entire study population and according to stroke subtypes. All statistical analyses were performed with SPSS (version 18.0; SPSS Inc., Chicago, IL, USA), and a 2-sided P value < 0.05 was considered the minimum level of statistical significance.
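The Kaplan-Meier product-limit method mentioned above can be illustrated with a minimal sketch on toy data (not the registry data); the cumulative recurrence rate is one minus the estimated survival probability at the horizon:

```python
def km_cumulative_recurrence(times, events, horizon):
    """Kaplan-Meier product-limit estimate of P(recurrence by `horizon`).

    times: follow-up time in days per patient; events: 1 = recurrence, 0 = censored.
    """
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    survival = 1.0
    i = 0
    while i < len(pairs) and pairs[i][0] <= horizon:
        t = pairs[i][0]
        same_t = [e for tt, e in pairs if tt == t]
        recurrences = sum(same_t)          # events at time t
        if recurrences:
            survival *= 1.0 - recurrences / at_risk
        at_risk -= len(same_t)             # events and censorings leave the risk set
        i += len(same_t)
    return 1.0 - survival

# Toy cohort: one recurrence at day 30, three patients censored later
cum_1y = km_cumulative_recurrence([30, 90, 180, 365], [1, 0, 0, 0], horizon=365)
```

Censored patients contribute person-time to the risk set without counting as events, which is what distinguishes this estimate from a naive proportion.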
Results

During the study period, 11,652 patients with acute ischemic stroke were admitted to the participating hospitals within 7 days after stroke onset. After excluding 1,876 patients due to absence of acute ischemic lesions on DWI or unavailable DWI, 3,092 patients due to unavailability of MAGIC data, and another 60 patients lacking information on stroke recurrence, a total of 6,624 patients were enrolled in this study.
Baseline characteristics according to ischemic stroke subtypes are presented in Table 1. The proportion of the LAA subtype was the highest (38.3%), followed by CE (22.8%), UD (22.2%; UD ≥ 2, 4.8%; UD-negative, 6.9%; UD-incomplete, 10.5%), SVO (14.6%), and OD (2.1%). The CE subtype was found in the oldest patients (mean ± SD, 71.9 ± 11.6 years), had the shortest onset-to-arrival time (median, 3 hours), and showed the most severe presenting symptoms (median National Institutes of Health Stroke Scale [NIHSS] score, 8) compared to the other subtypes. SVO showed the lowest frequency of previous stroke history (15.8%), but, interestingly, the highest frequency of current smoking (47.2%) and the highest systolic and diastolic blood pressures.
Diabetes mellitus and hyperlipidemia were more frequent, and total cholesterol levels were higher, in LAA than in the CE or SVO subtypes. Coronary artery disease and atrial fibrillation were most prevalent in CE and UD ≥ 2. Thrombolytic therapies were most frequently performed in CE (30.4%), followed by UD ≥ 2 (22.2%) and UD-negative (19.0%). Aspirin (70.7%) and clopidogrel (33.9%) were the most frequently prescribed antithrombotics at discharge, and statins were administered to more than 80% of patients at discharge; the prescription rate was highest in LAA (89.8%), followed by SVO (88.1%). Many of the patients with UD ≥ 2 had characteristics of both CE and LAA patients; they were more likely to be older and to have a history of stroke, hypertension, diabetes mellitus, hyperlipidemia, atrial fibrillation, coronary artery disease, and more severe neurologic deficits.

Table 2 presents the extent of diagnostic evaluation according to stroke subtypes. All subjects underwent DWI, and most patients across all subtypes underwent magnetic resonance angiography (MRA) (77%-90%). Extensive embolic source evaluation was performed in an average of 12.5% of patients, with the highest rate in UD-negative among all stroke subtypes (30.9%).
MAGIC has three special categories in addition to TOAST subtypes: LAA-BR, LAA-NG, and LAA-LC. Table 3 shows clinical characteristics of special categories compared with classically defined LAA (pure LAA) and SVO. With respect to clinical characteristics, LAA-BR and LAA-LC were similar to pure LAA, and LAA-NG was similar to SVO. One-year stroke recurrence rates were highest in pure LAA (8.3%) followed by LAA-LC (6.2%), LAA-NG (4.2%), LAA-BR (3.7%), and SVO (2.5%) (Figure 3).
Discussion
TOAST classification was used widely in practice and research from the 1990s before the use of MRI in patients with acute stroke became popular. However, TOAST's major weakness is that the proportion of patients whose stroke mechanisms are undiagnosed (UD) is quite high, 14 which may be attributed to the underdevelopment of diagnostic technologies specific to stroke mechanisms, 15 as well as confusion in determining stroke subtypes in patients who are treated with thrombolytic therapy. With the hope of improving inter-rater reliability and reducing the proportion of UD, MAGIC is an attempt to diagnose stroke mechanisms, keeping in mind the rapidly developing MR technologies and wider use of thrombolytic therapy. Compared to previous studies, the use of MAGIC did not improve the inter-rater reliability of subtype classification. However, we observed a modest improvement of ICC with MAGIC in the validation study population. 1,2 The increase of ICC with MAGIC may be attributed to the algorithm and reduced ambiguity with 3 special categories (LAA-LC, LAA-BR, and LAA-NG), potentially enabling most physicians to ascribe more consistent stroke subtypes.
Contrary to our expectations, applying MAGIC did not reduce the proportion of UD (22.2% vs. 16.2%-40.6%) [16-18]. A decrease of UD-negative through extensive etiologic evaluation seemed to be offset by an increase of UD-incomplete due to the stringent requirement of extensive evaluation in patients with a possible embolic stroke. 18,19 The proportions of stroke subtypes by MAGIC other than UD did not differ from those in other hospital-based stroke registries, except for CE, which showed a relatively higher frequency [16-18]. However, the proportions of stroke subtypes in community-based studies were different, with LAA being less common (9.3%-20.9%), and SVO (20.5%-27.0%) and CE (25.6%-30.2%) being more common [19-21]. Comparisons of baseline characteristics according to stroke subtypes (Table 1) confirm our previous work: LAA is more tightly associated with risk factors of atherosclerosis, and patients with CE are more likely to be older and female and to have more severe neurologic deficits. 16,20,21 Despite the relatively low frequency of diabetes mellitus in CE, the association of CE with the highest fasting blood sugar levels may be due to stress-induced hyperglycemia related to more severe neurologic deficits than in the other subtypes.
MAGIC recommended extensive embolic source evaluation in cases with non-lacunar infarction and no arterial pathology in relevant arteries. Furthermore, using new imaging techniques such as DWI and high-resolution wall imaging, the definition of relevant arterial pathology was extended to stenosis in less than 50% of patients. As a result, the proportion of UD-negative was reduced to 6.9%, which is substantially lower than those reported by previous studies (17.5%-24.2%). 19,21,22 It should be noted that in this study, less than a third of patients underwent extensive embolic source evaluation, even in the UD-negative group. More active application of extensive embolic source evaluation and other advanced technologies may lead to further reduction of UD-negative classifications. 15 One-year stroke recurrence rate was highest for UD ≥ 2 (11.8%) (Figure 2), which may be explained by the fact that UD ≥ 2 had characteristics of both LAA and CE, the riskiest profiles with respect to stroke recurrence. Most previous studies reported high risk of recurrence in LAA or UD. 6,13,20 Recurrence rate of LAA was the second highest in this study (7.3%). The low recurrence rate of UD-negative (3.1% at one year) must also be noted. A previous study reported that stroke recurrence rate was highest for UD-negative followed by UD ≥ 2 and LAA. 6 However, in that study, the proportions of cases of UD-negative and CE were 18.1% and 10.8%, respectively, and the discrepancies with respect to the proportions of UD-negative and CE in our current study are probably due to the difference in the extent of embolic source evaluation. The extensive embolic source evaluation in our study may explain the lower rate of recurrence for UD-negative by exclusion of high-risk CE from this subgroup. All recurrences in OD occurred within 3 months of onset, in line with the natural history of arterial dissection, the most common cause of OD. 
23 LAA-LC and LAA-BR have a single small lesion on DWI like SVO, but their clinical profiles, such as age, risk factors of atherosclerosis, and history of stroke, were similar to those of pure LAA (Table 3), and their one-year recurrence rates (6.2% and 3.7%, respectively) fell between those of pure LAA and SVO (8.3% and 2.5%, respectively) (Figure 3). The high-risk profile of LAA-LC may be attributed to the coexistent LAA. In the design stage of this study, we expected LAA-NG to be similar to LAA. On the contrary, the results revealed that LAA-NG was clinically similar to SVO, with a recurrence rate slightly higher than that of SVO (4.2% vs. 2.5%). The recurrence rate of LAA-BR (3.7%) was between those of SVO and LAA-NG. Previously, LAA-BR was reported as a subtype of LAA with a high risk of recurrence. 22 Limitations of this study should be noted. First, despite all efforts, the proportion of UD cases was not reduced, although the proportion of UD-negative cases did appear to decrease. The retrospective nature of subtype classification, namely determining subtypes after discharge through regular registry meetings, may contribute to the high proportion of UD. Second, MAGIC was only a recommendation to the participating physicians; we provided a useful algorithm for determining stroke subtypes, and the stroke physicians themselves decided whether to adopt it. We are developing a smartphone application for MAGIC, which is expected to improve the feasibility and applicability of this algorithm. Third, several definitions used in MAGIC, such as LAA-NG, LAA-BR, and LAA-LC, were somewhat arbitrary. Their clinical meanings should be determined by further research. Fourth, despite its uniqueness and applicability, previous studies do not support the consideration of recanalization therapy results in determining stroke mechanisms.
Lastly, that this was a multicenter stroke registry-based observational study limits the representativeness and generalizability of the study results, although application of MAGIC was performed prospectively.
Important Considerations for the Treatment of Patients with Diabetes Mellitus and Heart Failure from a Diabetologist’s Perspective: Lessons Learned from Cardiovascular Outcome Trials
Heart failure (HF) represents an important cardiovascular complication of type 2 diabetes mellitus (T2DM) associated with substantial morbidity and mortality, and is emphasized in recent cardiovascular outcome trials (CVOTs) as a critical outcome for patients with T2DM. Treatment of T2DM in patients with HF can be challenging, considering that these patients are usually elderly, frail and have extensive comorbidities, most importantly chronic kidney disease. The complexity of medical regimens, the high-risk clinical characteristics of patients, the potential of HF therapies to interfere with glucose metabolism, and conversely the emerging potential of some antidiabetic agents to modulate HF outcomes, are only some of the challenges that need to be addressed in the framework of a team-based personalized approach. The presence of established HF or the high risk of developing HF in the future has influenced recent guideline recommendations and can guide therapeutic decision making. Metformin remains first-line treatment for overweight T2DM patients at moderate cardiovascular risk. Although not contraindicated, metformin is no longer considered as first-line therapy for patients with established HF or at risk for HF, since there is robust scientific evidence that treatment with other glucose-lowering agents such as sodium-glucose cotransporter 2 inhibitors (SGLT2i) should be prioritized in this population due to their strong and remarkably consistent beneficial effects on HF outcomes.
The relationship between DM and HF is bidirectional. On the one hand, DM increases more than 2-fold the risk of developing HF, both HF with reduced ejection fraction (HFrEF) and HF with preserved ejection fraction (HFpEF), and furthermore increases the risk of progression from asymptomatic left ventricular (LV) systolic dysfunction to symptomatic HF [4,15]. Vice versa, HF may increase the risk of developing type 2 diabetes mellitus (T2DM), considering that up to 60% of patients with HF are characterized by insulin resistance and metabolic dysregulation is both inherent and secondary to HF pathophysiology [16]. DM may promote the development and progression of HF both directly and indirectly via systemic, myocardial and cellular mechanisms [17]. Indirectly, DM can increase the risk of incident HF due to its coexistence with coronary heart disease (CHD), hypertension, atherogenic dyslipidemia and endothelial dysfunction, all of which are considered to be major predisposing factors for HF pathogenesis [17]. In addition to this indirect association, hyperglycemia, hyperinsulinemia and insulin resistance of the diabetic state exert direct negative effects on the myocardium, which have been collectively described with the term diabetic cardiomyopathy (DCM). This term was originally applied more than 40 years ago to define the systolic and/or diastolic dysfunction observed in patients with DM in the absence of other obvious causes of cardiomyopathy such as CHD, hypertension or valvulopathy [18]. Later on, the term DCM was broadened to integrate any kind of vulnerability of the diabetic myocardium to functional impairment. 
The major pathophysiological mechanisms implicated in the development of DCM include inflammation, oxidative stress, endoplasmic reticulum (ER) stress, insulin resistance, myocardial steatosis and lipotoxicity, maladaptive intracellular calcium homeostasis, altered myocardial substrate metabolism (shift from glucose to free fatty acid oxidation and vice versa), impaired mitochondrial bioenergetic efficiency, cardiac hypertrophy, cardiac fibrosis due to renin-angiotensin-aldosterone system (RAAS) activation and advanced glycation endproducts (AGEs) accumulation, impaired myocardial perfusion reserve and tissue hypoxia due to microvascular dysfunction, and autonomic neuropathy [19][20][21][22]. The fact that additional factors beyond glycemia may contribute to the increased risk of HF in patients with DM is substantiated by the observation that the elevated HF prevalence persists despite optimal glycemic control, and the normalization of blood glucose levels does not always restore CV risk to the non-diabetic levels [10].
In epidemiological terms, the association between DM and HF is stronger in women than in men. This sex discrepancy, which is still incompletely understood, was first observed in the Framingham Heart Study, where diabetic men presented with a 2-fold, while diabetic women presented with a 4-fold increased risk of HF after adjustment for age and other CV risk factors [23]. The same finding was further confirmed in a comprehensive systematic review and meta-analysis of 47 cohorts including more than 12 million Japanese diabetic patients, which demonstrated that both type 1 diabetes mellitus (T1DM) and T2DM are stronger risk factors for HF in women than in men [24]. Potential explanations for this sex-specific effect in the epidemiology of DM-associated HF include the higher prevalence of DCM in women, the stronger correlation of DM with CHD in women, the longer exposure of women to hyperglycemia in the prediabetic state, sex-specific differences in other CV risk factors, and finally the fact that diabetic men are more prone to premature death and thus will not survive to develop HF (survival bias) [25][26][27][28].
The concurrent presence of DM and HF and their shared pathophysiology underscore the need for an interdisciplinary collaborative management of both conditions [2]. Treatment of DM in patients with HF can be challenging, considering that patients with HF are usually elderly, frail and have extensive comorbidities, most importantly chronic kidney disease (CKD), which represents a major limitation to optimal drug dosing and hampers medication adherence due to possible drug-induced interactions and adverse effects. The complexity of medical regimens required for both conditions, the high-risk clinical characteristics of patients, the well-established potential of HF therapies to interfere with glucose metabolism, and conversely the emerging potential of some antidiabetic agents to modulate HF outcomes, are only some of the clinical challenges that arise and need to be addressed in the framework of a team-based personalized approach [2]. The optimal glycemic targets for diabetic patients with HF should be individualized to reflect life expectancy and comorbidity burden (HF severity and DM complications), and the benefits expected from glucose-lowering therapies should always be balanced against the risks of hypoglycemia, polypharmacy and treatment costs [2].
The multiple challenges of treating patients with DM and cardiovascular disease (CVD), especially HF, have been addressed in clinical practice guidelines and position statements issued by international scientific societies such as the European Society of Cardiology (ESC), the European Association for the Study of Diabetes (EASD), the American Heart Association (AHA) and the Heart Failure Society of America (HFSA), which aim to provide evidence-based guidance on how to optimally treat patients with DM and CV complications [2,29]. In the updated version of these guidelines, patients with DM are stratified based on their CVD risk into medium, high and very high risk patients [29]. Young patients (<35 years for T1DM; <50 years for T2DM) with a short duration of DM (<10 years) without major CV risk factors are considered to be at medium risk. Patients with a longer DM duration (>10 years) and at least one major CV risk factor without signs of target organ damage are placed at high risk, and patients with established CVD or at least 3 major CV risk factors or evidence of target organ damage or long-standing T1DM (>20 years) are classified as very high risk. Based on the above stratification, the selection of the most appropriate antidiabetic agent should be tailored depending on the presence of established CVD or CVD risk [29].
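The stratification rules described above are explicit enough to express as a small helper. This is a minimal sketch of the medium/high/very-high categorization as the text states it; the encoding (argument names, the cautious default for in-between cases) is our assumption, not part of the guideline document.

```python
# Illustrative sketch of the ESC/EASD CVD risk stratification described
# in the text. Argument names and the fallback branch are ours.

def cvd_risk_category(age, dm_type, dm_duration_yrs, n_major_risk_factors,
                      target_organ_damage, established_cvd):
    """Return 'medium', 'high', or 'very high' per the stated criteria."""
    # Very high: established CVD, >=3 major risk factors, target organ
    # damage, or long-standing T1DM (>20 years).
    if (established_cvd or n_major_risk_factors >= 3 or target_organ_damage
            or (dm_type == 1 and dm_duration_yrs > 20)):
        return "very high"
    # High: DM duration >10 years plus at least one major risk factor.
    if dm_duration_yrs > 10 and n_major_risk_factors >= 1:
        return "high"
    # Medium: young patient, short DM duration, no major risk factors.
    young = age < 35 if dm_type == 1 else age < 50
    if young and dm_duration_yrs < 10 and n_major_risk_factors == 0:
        return "medium"
    return "high"  # cautious default for cases between the stated bands

assert cvd_risk_category(40, 2, 5, 0, False, False) == "medium"
assert cvd_risk_category(60, 2, 15, 1, False, False) == "high"
assert cvd_risk_category(55, 2, 8, 0, False, True) == "very high"
```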
Over the past 5-6 years, the landscape of T2DM treatment has been revolutionized by the introduction of novel glucose-lowering agents, which, for the first time in the history of DM pharmacotherapy, have shown the potential to improve CV outcomes in patients with T2DM. The so-called Cardiovascular Outcome Trials (CVOTs), namely randomized clinical trials originally designed to fulfil the Food and Drug Administration (FDA) requirement to assess CV safety of antidiabetic agents vs. placebo, have recently provided an overwhelming body of evidence suggesting that two specific antidiabetic drug classes exert cardioprotective effects in diabetic patients with CVD or at risk for CVD. These drug classes comprise glucagon-like peptide 1 receptor agonists (GLP1RAs) and sodium-glucose cotransporter 2 inhibitors (SGLT2i). The former have been primarily associated with a reduction in atherosclerosis-related outcomes and major adverse cardiovascular events (MACEs), whereas the latter have been linked to an improvement in HF-related endpoints. To date, several CVOTs have been published and have dramatically influenced clinical guidelines and treatment algorithms: 4 for dipeptidyl peptidase 4 inhibitors (DPP4i) (sitagliptin, saxagliptin, linagliptin, alogliptin) [30][31][32][33], 7 for GLP1RAs (exenatide, lixisenatide, albiglutide, liraglutide, oral and injectable semaglutide, dulaglutide) [34][35][36][37][38][39][40] and 3 for SGLT2i (empagliflozin, canagliflozin, dapagliflozin) [41][42][43]. The major take-home messages derived from the recent CVOTs comprise the following: (1) Most DPP4i have neutral effects on MACEs and HF outcomes, with the exception of saxagliptin, which has been shown to significantly increase the risk of HF hospitalization in high-risk patients [31]. Although meta-analyses and real-world observational studies found no significant risk of HF hospitalization with DPP4i vs.
placebo [44,45], a recent network meta-analysis reported a higher risk of HF with DPP4i compared with other antidiabetic drug classes [46]. Considering the lack of evidence that DPP4i provide any CV benefit and the compelling evidence that some of them can even increase the risk of HF hospitalization, DPP4i are not the most appropriate choice for diabetic patients with pre-existing HF or at risk of developing HF (i.e., older patients, obese patients, long DM duration or CKD); (2) GLP1RAs have a neutral impact on HF hospitalization, but some of them (liraglutide, semaglutide, dulaglutide) reduce the risk of MACEs, including CV mortality, in diabetic patients with CVD or high CVD risk [34,35,40]. GLP1RAs are recommended for diabetic patients with CVD or at high/very high risk for CVD in order to reduce CV events. Their use in patients with HF or at risk for HF is questionable, considering the disappointing results of two small randomized clinical trials showing a trend toward increased HF hospitalization and worse outcomes with liraglutide in patients with HFrEF [47,48]; (3) SGLT2i are the first antidiabetic drug class that has demonstrated the potential to reduce mortality, MACEs and HF hospitalization in diabetic patients at high CVD risk, regardless of a history of previous HF [41,49,50]. Their beneficial effects are independent of glycemic control and occur too early to be attributed to the concomitant weight reduction. Based on these findings, SGLT2i are highly recommended as first-line treatment for diabetic patients with established HF or at risk of developing HF, in order to reduce hospitalization and improve prognosis. The expected benefits should be critically balanced against the potential risks of genitourinary infections, euglycemic diabetic ketoacidosis, and lower limb amputations and fractures, the latter related only to canagliflozin and not confirmed in recent studies [51].
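The three take-home messages above can be condensed into a toy decision sketch. This is purely illustrative bookkeeping of the text's conclusions (SGLT2i for HF, GLP1RAs for atherosclerotic CVD, DPP4i such as saxagliptin avoided with HF), not a prescribing algorithm; all names are ours.

```python
# Hedged, illustrative condensation of the CVOT take-home messages.
# Not clinical guidance; names and structure are assumptions.

def preferred_classes(has_hf_or_hf_risk: bool, has_ascvd: bool) -> list:
    """Drug classes the text says to prioritize for a given profile."""
    classes = []
    if has_hf_or_hf_risk:
        classes.append("SGLT2i")    # reduce HF hospitalization and mortality
    if has_ascvd:
        classes.append("GLP1RA")    # reduce MACEs, including CV mortality
    if not classes:
        classes.append("metformin")  # moderate-risk default per the text
    return classes

# Per the text, saxagliptin increased HF hospitalization risk in CVOTs.
AVOID_WITH_HF = {"saxagliptin"}

assert preferred_classes(True, True) == ["SGLT2i", "GLP1RA"]
assert preferred_classes(False, False) == ["metformin"]
```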
The major mechanisms mediating the strong cardioprotective effects of SGLT2i involve their modest diuretic and natriuretic effects, weight loss, reduction of blood pressure, anti-oxidant and anti-inflammatory properties, amelioration of renal congestion, reduction of plasma volume without subsequent neurohormonal activation, increased supply of ketones to cardiomyocytes and improved mitochondrial bioenergetics [52][53][54][55][56].
Interestingly, both GLP1RAs (liraglutide and semaglutide) and SGLT2i (dapagliflozin, canagliflozin, empagliflozin) have shown additional renoprotective effects in patients with DM. The major CVOTs of these agents have studied renal function as a secondary outcome and have shown that compared with placebo, these drugs are able to delay the progression of CKD and reduce clinically significant renal events [34,35,[41][42][43]. Canagliflozin, in particular, has shown the potential to reduce by 30% the primary renal outcome of end-stage renal disease, doubling of serum creatinine or renal death in patients with T2DM, as well as attenuate the slope of progressive renal function decline [57]. Furthermore, recent compelling evidence suggests that the SGLT2i dapagliflozin is able to reduce the composite outcome of HF worsening or CV death in patients with HFrEF of moderate severity, regardless of the presence of DM [58]. These data highlight the potential of SGLT2i to serve as HF treatment even in patients without DM, and need to be corroborated with additional studies.
Despite the progress summarized above, a large number of unanswered questions and knowledge gaps remain under investigation and should be thoroughly examined in the future. Future research in the field of cardiovascular diabetology should try to resolve these issues and provide definitive answers. Furthermore, it is important that future trials integrate non-invasive cardiac imaging (i.e., echocardiography, cardiac magnetic resonance imaging) in order to establish novel imaging biomarkers, which would help better characterize the structural and functional abnormalities of the diabetic failing heart and facilitate monitoring of treatment effects [59].
In conclusion, HF represents an important CV complication of DM associated with substantial morbidity and mortality, and is emphasized in recent CVOTs as a critical outcome for patients with DM. The presence of HF or the high risk for developing HF in the future has influenced recent guideline recommendations and can guide therapeutic decision making. Metformin remains first-line treatment for overweight T2DM patients without CVD and at moderate CVD risk. Although not contraindicated at any stage of HF, metformin is no longer considered as first-line therapy for diabetic patients with established HF or at risk for HF, since there is robust scientific evidence that treatment with other glucose-lowering agents such as SGLT2i should be prioritized in this population due to their strong beneficial effects on HF outcomes.
Author Contributions: Conceptualization, N.K., writing-original draft preparation, C.K., writing-review and editing, N.K. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
Performance and Mechanism of a UV/Immobilized Cu-TiO2 System for Histidine Degradation
More and more attention is being paid to dissolved organic nitrogen (DON), and some specific categories of amino acids are considered to be direct precursors of nitrogenous disinfection byproducts (N-DBPs). Histidine was chosen to study the degradation efficiency and mechanism of amino acids in the UV/Cu-TiO2 system. Moreover, the influences of pH, organics, and inorganic ions on the photocatalytic efficiency were also investigated. The results show that the degradation rate of DON in the UV/Cu-TiO2 system was about 50% after 60 min, much lower than that of histidine (72%), which indicated that part of the degraded histidine was oxidized to other N-containing organics. The optimal pH value for the photodegradation of histidine was 7.0, and the presence of organic compounds and inorganic ions decreased the degradation performance to some extent. After 6 h of irradiation, histidine was totally degraded into NH4+, and in the following 2 h, NH4+ was first oxidized to NO3−, and then NO3− was reduced to N2, which overflowed from the water. This behavior should be attributed to the doping of Cu in the TiO2 and provides a way to totally remove DON from water.
Introduction
Total dissolved nitrogen (TDN) is present as dissolved inorganic nitrogen (DIN) (the sum of NO3−, NO2−, and NH4+ species) and dissolved organic nitrogen (DON) in natural waters. DON is a subclass of natural organic matter (NOM) in fresh water, accounting for 1-5% by weight [1,2]. In brief, DON is the org-N fraction of dissolved organic matter (DOM) [3]. Input of DON to natural waters is largely a result of autochthonous biological processes, including extracellular exudates produced by phytoplankton, N2 fixation, bacterial respiration, viral cell lysis, and sloppy feeding by zooplankton and faecal pellet decay. Additionally, external sources of DON arise from sewage and industrial effluents, terrestrial run-off, and atmospheric deposition [4][5][6]. This nitrogen fraction includes not only a large spectrum of natural compounds, such as free and hydrolysable amino acids, chlorophyll, and amino sugars, but also synthetic compounds such as pesticides (e.g., atrazine) [7]. Dissolved organic nitrogen (DON) is currently drawing more and more attention in drinking water treatment for its potential to form nitrogenous disinfection byproducts (N-DBPs) [7,8], which are far more carcinogenic or mutagenic than some of the regulated DBPs [9][10][11]. As for the dissolved organic matter, amino acids are mainly present as combined amino acids and constitute a small proportion (7.2 ± 4.3%) of the total DON [2]. However, some special categories of amino acids are considered the main precursors of N-DBPs according to former studies [12][13][14][15], and some other DON matter may be converted to amino acids during water treatment. Therefore, experiments conducted with a typical amino acid model compound will be useful to better understand the reactivity of the amine functional group during photocatalytic degradation. In this study, histidine was chosen as the target compound because of its relatively high concentration in raw water and its high potential to form
N-DBPs [16,17]. Currently, studies on DON mainly focus on its analytical measurement, structural composition, occurrence, and potential for N-DBP formation [18][19][20][21]. Photocatalytic oxidation using TiO2 is widely used in water treatment due to its availability, nontoxicity, low cost, and relative chemical stability. Additionally, the photocatalytic oxidation process can be carried out under a wide range of conditions and leads to complete mineralization [12]. Konstantinou and Albanis reported the wide use of heterogeneous photocatalysis with TiO2 (TiO2/UV) to effectively degrade NOM from water [13]. However, limited results are available on the degradation performance of TiO2 for amino acids. In addition, UV occupies only a small fraction of sunlight, which means an inefficient utilization of solar light by TiO2. All of the above limit the wide application of TiO2. Cu-doped TiO2 was selected in this paper, since Cu doping has been reported to increase visible-light absorption and lengthen the photogenerated electron-hole pair recombination time [22,23]. In previous studies [24,25], the Cu-TiO2 catalyst participated in the reaction in a suspended state, which may cause a low recovery rate. Therefore, in this paper, Cu-TiO2 was immobilized on a fiber glass net.
Therefore, the main purpose of this research was to investigate the degradation performance and mechanism of a typical amino acid in the UV/Cu-TiO2 system; the copper loading on the TiO2 was optimized to enhance the degradation efficiency. Additionally, the effects of pH value, organic compounds, and inorganic ions on the photocatalytic efficiency are also discussed.
Materials. Histidine was purchased from J & K Scientific Ltd. (Beijing, China), and other reagents were obtained from Sinopharm Chemical Reagent Co. (Shanghai, China). All the chemicals used were at least of analytical grade.
Blank TiO2 and Cu-doped TiO2 were prepared by the reported sol-gel method [9]. TiO2 was prepared by a conventional sol-gel method using butyl titanate (TBT) (80 mL) in ethanol (320 mL) as a precursor. Hydrolysis of the TBT solution was achieved by adding 8 mL of double-distilled water (DDW). Sol-gel synthesis in acidic solution was performed by substituting the 8 mL of DDW with 2.5 mL of nitric acid/water (volume ratio = 1:4). In addition, Cu-doped TiO2 was obtained by dissolving the corresponding amount of the copper precursor, Cu(NO3)2, in the initial ethanol or acid/ethanol solution. The resulting sol was obtained after 24 h at room temperature. The glass fiber nets selected in this study were 15 cm × 30 cm and were full of holes (15 mm × 15 mm); the holes favor the attachment of Cu-TiO2 particles to the nets. The glass fiber nets were impregnated with the sol for 5 min, dried at room temperature, and finally calcined at 500 °C for 2 h. After repeating this 3-4 times, Cu-TiO2 photocatalysts supported on glass fiber nets were obtained; the weight of Cu-TiO2 adhering to one piece of glass fiber net was 1.0 g. According to our previous work [10], Cu-TiO2 showed the best photocatalytic performance for DON degradation at a copper loading of 1.0%. Therefore, the photocatalyst used in the following experiments was Cu-TiO2 (1.0%).
Analytical Methods.
In this paper, the characteristics of blank TiO2 and Cu-TiO2 were examined by X-ray diffraction (XRD), diffuse reflectance ultraviolet-visible (DR-UV-Vis) spectroscopy, and X-ray photoelectron spectroscopy (XPS). DR-UV-Vis analysis was conducted on a Lambda 950 spectrometer over the wavelength range 175-3300 nm. X-ray diffraction measurements were performed with a Bruker D8 diffractometer using Cu Kα radiation (λ = 1.542 Å) at 40 kV and 30 mA over the 2θ range 10-70°. The samples were coated with a layer of platinum-palladium prior to scanning at 100 K magnification. XPS spectra were obtained on a VG Scientific ESCA-Lab220i-XL hemispherical electron analyzer, which worked at a pressure <3 × 10−9 Torr with a dual X-ray source operated at 300 W. NO3−, NO2−, and NH4+ were measured according to the Monitoring and Analysis Methods of Water and Wastewater, 4th edition [26]. Total dissolved nitrogen (TDN) was measured using a TOC analyzer (Multi N/C 2100, Germany). DON was determined from the difference between the measured TDN and the sum of the measured DIN species: DON = TDN − (NO3−-N + NO2−-N + NH4+-N). Amino acids were analyzed by HPLC with 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) derivatization [27]. Briefly, the reaction between AQC and amino acids (and also ammonia) leads to the formation of fluorescent complexes, which were separated on an AccQ·Tag Waters HPLC column (3.9 mm × 150 mm, C18) and detected at excitation and emission wavelengths of 240 and 395 nm, respectively. The HPLC system consisted of a Waters 600 gradient pump, a Merck AS-4000 autosampler, a column heater, and a Waters 474 fluorescence detector.
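The DON-by-difference calculation described above is straightforward to express in code. A minimal sketch, assuming all concentrations are already converted to mg N/L; the function name and the non-negative clamp are our additions.

```python
# DON by difference, as in the Methods: DON = TDN - (NO3-N + NO2-N + NH4-N).
# All inputs in mg N/L. The clamp to zero is our assumption, since
# measurement noise can drive the computed difference slightly negative.

def don_mg_n_per_l(tdn, no3_n, no2_n, nh4_n):
    din = no3_n + no2_n + nh4_n
    return max(tdn - din, 0.0)

# Example: TDN 5.0, NO3-N 2.0, NO2-N 0.1, NH4-N 0.4 -> DON 2.5 mg N/L
print(don_mg_n_per_l(5.0, 2.0, 0.1, 0.4))
```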
The isoionic point of Cu-TiO2 was measured as follows. A homogeneous suspension was obtained by dispersing Cu-TiO2 powder in pure water with ultrasonication for 15 min. The pH value of the suspension was adjusted with 0.1 M HCl and 0.1 M NaOH, and the zeta potential at each pH value was measured with a Nano-Z zeta potential analyzer. The pH value at which the zeta potential was zero was taken as the isoionic point of Cu-TiO2.
Experiments. Standard jar-tests were used to study the photodegradation efficiency and mechanism of amino acids.
A high-pressure Hg lamp (30 W) supplied by Applied Photophysics was used, with its main emission line at 365 nm; the UV365 intensity was 1970 μW/cm². For the UV/Cu-TiO2 oxidation experiments, raw water was obtained by dissolving histidine in pure water at a concentration of 30 mg/L. One piece of glass fiber net loaded with Cu-TiO2 was fixed on the inner wall of a beaker and immersed in 1000 mL of water sample. In the meantime, a contrast test was performed with blank TiO2. The pH value of the solutions was adjusted with 0.1 M HCl and 0.1 M NaOH. The solutions were exposed simultaneously to UV irradiation for 1 h and continuously stirred during the reaction. Water samples were taken every 10 minutes, filtered through 0.45 μm cellulose acetate membrane filters, and placed in sample vials. The photocatalytic reactor is shown in Figure 1.
All experiments were performed in triplicate. Statistical analysis was performed using SPSS 19.0 software (IBM Corporation, USA). Values are expressed as mean ± standard deviation (SD), and all data were checked for normality. Comparisons between control and treated groups were made by analysis of variance; p < 0.05 was considered to represent a significant difference. In the XRD patterns, the diffraction peaks (including those at 48° and 54°) corresponded to the anatase phase of TiO2, the most reactive form of TiO2 [28]. Characteristic peaks of the doped copper species were not detected. This may be because the ionic radius of Cu2+ (0.72 Å) is close to that of Ti4+ (0.68 Å); Cu2+ was therefore incorporated into the TiO2 crystal lattice, replacing Ti4+ and forming lattice imperfections [29]. In the DR-UV-Vis spectra, the absorption band at 600-800 nm indicates crystalline and bulk CuO in octahedral symmetry [25]. The incorporation of copper ions caused a red shift and extended the absorption band into the visible or even near-infrared range, which promoted the photocatalytic activity. The optimal copper doping was 1.0%. In the XPS spectra, a higher binding energy with shake-up satellites is characteristic of Cu2+ species, while a slightly lower binding energy (932 eV) and the absence of shake-up satellites are characteristic of Cu1+ [27,28]. Our results indicate that the copper species are mainly present as Cu1+ and Cu2+.
Degradation Effects of DON and Histidine. As seen from Figure 5(a), both the doped and undoped systems degraded histidine effectively, while degradation through Cu-TiO2 adsorption alone or UV photodegradation alone was negligible. Moreover, according to the research reported by Castaño et al. [30], lixiviation was negligible compared with the removal of histidine. Thus, the combined action of UV and TiO2 is responsible for the degradation performance. It is well known that TiO2 under ultraviolet radiation produces electron-hole pairs on its surface; these pairs then react with species in the water, including org-N-rich matter. However, Cu-TiO2 achieved a better performance than blank TiO2, which may be because the recombination of excited electrons and holes was relatively high in the undoped system, whereas in the doped system the recombination of photogenerated carriers was suppressed effectively [10,31]. Besides, the incorporation of copper ions caused a red shift and extended the absorption band into the visible or even near-infrared range, which promoted the photocatalytic activity (Figure 3). Figure 5(b) shows similar results for the degradation of DON. As seen from Figures 5(a) and 5(b), under the same reaction conditions, the degradation rate of DON was significantly lower than that of histidine (50% versus 72%). This can be explained by the fact that histidine is not mineralized completely but is partly transformed into other org-N compounds, which are also detected as DON. Judging from the DON degradation rate, only part of the DON was directly oxidized to inorganic nitrogen.
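The gap between the two removal rates can be checked with a quick bit of bookkeeping. Since histidine is the only DON source in this synthetic water, the difference between the fraction of histidine degraded and the fraction of DON removed estimates the share of initial organic nitrogen that persisted as intermediate org-N products. The percentages come from the text; the arithmetic framing is ours.

```python
# Worked check of the interpretation above (synthetic water, histidine the
# sole DON source). Numbers are the 60-min values reported in the text.

histidine_removed = 0.72  # fraction of parent histidine degraded
don_removed = 0.50        # fraction of dissolved organic N removed

# Fraction of initial org-N converted to intermediate org-N products:
intermediates = histidine_removed - don_removed
print(f"~{intermediates:.0%} of initial org-N remained as intermediates")
```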
Effect of pH.
In raw water, the pH value changes with the climate and the growth of plankton (especially algae and aquatic plants). The pH value affects not only the surface charge state of TiO2 but also the ionization degree of the target compound in the reaction system. Therefore, it is necessary to determine the influence of pH on the photocatalytic efficiency; the corresponding experimental results are shown in Figure 6. It can be seen from Figure 6 that the DON degradation rate first increased and then decreased with increasing pH. The degradation rate of DON reached 50% at pH 7. In this study, the point of zero charge of TiO2 is 6.5, and the isoionic point of histidine is 7.59 [32]. Both the catalyst and histidine are negatively charged when the solution pH is higher than 7.59. Conversely, both are positively charged when the pH is lower than 6.5. When the pH is between 6.5 and 7.59, the catalyst is negatively charged while histidine is positively charged, which enhances the chance of collision between TiO2 and histidine.
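The electrostatics argument above reduces to a simple window test on pH. A minimal sketch using the two constants given in the text (TiO2 point of zero charge 6.5, histidine isoionic point 7.59); the helper name is ours.

```python
# Sketch of the pH window in which the catalyst surface is negative while
# histidine is net positive, favoring adsorption. Constants from the text.

PZC_TIO2 = 6.5        # point of zero charge of (Cu-)TiO2
PI_HISTIDINE = 7.59   # isoionic point of histidine

def electrostatic_attraction(ph: float) -> bool:
    """True when catalyst and histidine carry opposite net charges."""
    catalyst_negative = ph > PZC_TIO2
    histidine_positive = ph < PI_HISTIDINE
    return catalyst_negative and histidine_positive

assert electrostatic_attraction(7.0)       # optimal pH in the experiments
assert not electrostatic_attraction(5.0)   # both positively charged
assert not electrostatic_attraction(9.0)   # both negatively charged
```

This matches the observed optimum at pH 7, which falls inside the 6.5-7.59 attraction window.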
Effect of Organic Compound and Inorganic Ions.
There are large quantities of natural organic matter (NOM) and inorganic ions in raw water, and their presence affects the degradation efficiency of the target compound. It is therefore essential to investigate the effect of organics and inorganic ions on the photocatalytic system. Because of the measurement method for DON, the organic compound chosen as a representative should not contain N, so isopropanol was selected. Cl − was chosen as the typical inorganic ion. The results are shown in Figure 9.
It can be seen from Figure 7 that the degradation efficiency of DON decreased with increasing isopropanol and Cl − concentrations. This may be because organics and Cl − consume •OH radicals in the water [33]. Besides, there is competitive adsorption between Cl − and the target compound.
Proposed Mechanism.
To confirm the oxidation extent of DON in water, the concentrations of total dissolved nitrogen (TDN), ammonium nitrogen (NH 4 + ), nitrate nitrogen (NO 3 − ), and nitrite nitrogen (NO 2 − ) were determined at different reaction times. The results are shown in Figure 10.
As seen from Figure 8, the main nitrogen species showed different trends: the concentrations of TDN, NO 3 − , and DON decreased, while the concentration of NH 4 + kept increasing. No NO 2 − was detected during the whole oxidation process. The decrease of TDN implies the production and overflow of N 2 during the oxidation process, which may be caused by the photocatalytic reduction of NO 3 − , since the decreases in the TDN and NO 3 − concentrations were nearly the same. In addition, the increase of NH 4 + may be due to the oxidation of DON. However, the transformation mechanism for pure histidine under photocatalytic oxidation was obscured by the nitrate present in the histidine used in the experiment. Granular activated carbon (GAC) adsorption was therefore used to remove the nitrate from the water sample before the oxidation process, until nitrate was no longer detected. After this adsorption step, the DON concentration was still about 1.45 mg/L, since DON is poorly removed by GAC adsorption. Besides, the reaction time was extended to about 8 h, until DON was totally degraded into inorganic nitrogen and transformation occurred between the inorganic nitrogen ions; the results are shown in Figure 9.
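The inference that N2 overflows from the system follows from a simple nitrogen balance: dissolved nitrogen is partitioned as TDN = DON + NH4+-N + NO3−-N + NO2−-N, so any drop in TDN must leave the solution as gas. A minimal bookkeeping sketch, where the concentration values are illustrative placeholders, not measured data from the paper:

```python
def n2_released(tdn_before, tdn_after):
    """Nitrogen leaving the dissolved phase (as N) equals the drop in
    TDN, since TDN = DON + NH4-N + NO3-N + NO2-N covers all dissolved N."""
    return tdn_before - tdn_after

def check_balance(species):
    """TDN should equal the sum of the measured dissolved N species."""
    total = species["DON"] + species["NH4"] + species["NO3"] + species["NO2"]
    return abs(species["TDN"] - total) < 1e-9

# Illustrative snapshots (mg/L as N), NOT data from the study:
before = {"TDN": 3.0, "DON": 1.5, "NH4": 0.2, "NO3": 1.3, "NO2": 0.0}
after  = {"TDN": 2.4, "DON": 0.6, "NH4": 0.9, "NO3": 0.9, "NO2": 0.0}

assert check_balance(before) and check_balance(after)
print(round(n2_released(before["TDN"], after["TDN"]), 2))  # 0.6 mg/L left as N2
```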
As seen from Figure 9, the histidine was totally degraded into NH 4 + and the concentration of TDN remained constant over the first 6 h of reaction, which indicates that the main reaction in this phase was the oxidation of histidine into NH 4 + . In the following 2 h, the concentration of NH 4 + decreased while the concentration of NO 3 − increased; in addition, the concentration of TDN decreased. These results demonstrate that NH 4 + was first oxidized to NO 3 − , and then NO 3 − was reduced to N 2 , which overflowed from the water. Comparing these results, we may conclude that the reactions among the nitrogen species in the water follow a certain sequence during the oxidation of histidine: the oxidation of histidine takes priority over the other reactions, followed by the further oxidation of NH 4 + to NO 3 − and the subsequent reduction of NO 3 − to N 2 . What should be noted in this process is the formation of N 2 and the decrease of the TDN concentration, which were not observed in the UV/immobilized-TiO2 system [6]. The reason may lie in the doping with copper ions. According to former studies, copper-ion doping not only improves the oxidation ability toward certain compounds in water [34] but also enhances the reduction of nitrate to N 2 and the removal of TN through the photocatalytic reaction [35]. The formation of N 2 can lead to the total removal of nitrogen from water and is of particular significance for water sources suffering from serious eutrophication. The enhanced conversion from nitrate to N 2 may be explained as follows: the bandgap energies of Cu-TiO2 and blank TiO2 were calculated from the DR-UV-Vis spectra (Figure 3) according to our former study [9]; the bandgap energy of blank TiO2 was 3.15 eV, while that of Cu-TiO2 was 2.95 eV. Copper doping thus decreases the bandgap energy because of the dispersion of the metal and the diffusion of metal nanoparticles in the TiO2 matrix. Electrons can be easily excited by photons from the defect states to the conduction band of TiO2. Based on the results above and reported mechanisms [36][37][38], a possible photocatalytic oxidation pathway is proposed in Figure 10.
Conclusions
Surface properties and photocatalytic activity of blank and Cu-doped TiO2 were investigated. The XRD patterns show that Cu-TiO2 is in the anatase phase, which is the most reactive form of TiO2. SEM images imply that the Cu-TiO2 particles are small and homogeneous. DR-UV-Vis analysis shows that the incorporation of copper ions caused a red shift and extended the absorption band into the visible or even near-infrared range, which promoted the photocatalytic activity.
The UV/Cu-TiO2 system performed well in degrading histidine, and the concentrations of DON and histidine were monitored. The results show that the degradation rate of histidine was 72% after 60 min, much higher than that of DON (50%). Besides, histidine was totally degraded into NH 4 + in the first 6 h; in the following 2 h, NH 4 + was oxidized to NO 3 − and then NO 3 − was reduced to N 2 , which overflowed from the water. Thus, the photocatalytic system (UV/Cu-TiO2) has tremendous potential for solving the environmental problems caused by DON. The optimal pH value is 7.0, and the presence of isopropanol and Cl − decreases the degradation efficiency.
DR-UV-Vis Analysis. The DR-UV-Vis spectra of Cu-doped TiO2 and blank TiO2 are shown in Figure 3. The spectrum of TiO2 shows an absorption peak at 350 nm in the UV region. When TiO2 was doped with 1.0% copper, a considerable shift of the peak towards the visible range at around 400-800 nm occurred (red shift). It has been reported that a band at 210-270 nm indicates the O 2− (2p)→Cu 2+ (3d) ligand-to-metal charge-transfer transition, where the copper ions occupy isolated sites on the support. A band at 350 nm indicates the formation of (Cu-O-Cu) 2+ clusters in a highly dispersed state. The broad band between 400 nm and 600 nm is attributed to the presence of Cu 1+ clusters in a partially reduced CuO matrix as well as (Cu-O-Cu) 2+
Figure 10 :
Figure 10: The photocatalytic degradation mechanism of DON. | 2018-12-14T14:47:23.504Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "4e9ba0cb58d38c213d83f258ebf04e5541a0f195",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jnm/2016/8946019.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4e9ba0cb58d38c213d83f258ebf04e5541a0f195",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
239128277 | pes2o/s2orc | v3-fos-license | Post-Pharmacy Exam - Are We Ready? - The Results of the Exam conducted in Poland on the Model of the UK
Background.
This article aims to present the results of the final exam in pharmacy among Polish pharmacy students. This exam was modeled on the British national exam supervised by the General Pharmaceutical Council.
Methods.
The exam was conducted in 3 cities in Poland, among a total of 175 final-year students. Taking the exam was voluntary and anonymous.
Results.
The results indicate that none of the Polish students achieved the 70% mark required to pass the Great Britain exam. Significant differences in test results were noticed between cities. Students achieved the best average exam result in Bydgoszcz (46.35%), then in Warsaw (38.81%) and Łódź (38.35%)
Conclusions.
The pharmaceutical education system in Poland requires complete changes that will prepare future pharmacists for clinical work. Increasing the role of a pharmacist in health care requires raising the level of education and emphasizing the practical content of education.
Background
The pharmacist is one of the primary medical professionals in the healthcare sector. The main task of pharmacists is to improve patients' health and quality of life in the area of pharmacotherapy, regardless of whether they work in community or hospital pharmacies. The essential role of the pharmacist is to optimize patient treatment, including the detection of adverse events and other drug errors. The significant influence of pharmacists on increasing patients' adherence to their therapies is also emphasized. Furthermore, given the constantly growing health care spending, pharmacists' role in the optimization of therapy costs should be indicated [1][2][3].
The World Health Organization (WHO) and other organizations such as the American Society of Health-System Pharmacists (ASHP) emphasize that pharmacists are the health professionals most accessible to patients [4][5]. Therefore, they should be involved in patient care. The role of a pharmacist in different countries, including within Europe, varies significantly [6][7]. Many of these differences are due to curricula, including soft-skills learning through pharmaceutical education.
This article aims to compare the programs and standards of teaching the pharmacist profession in Poland and the United Kingdom (UK), including the results of a final examination designed in Poland, similar to the pre-registration exam conducted at the end of pharmaceutical studies in the UK. The results of these considerations may be used to define new directions for pharmacists' education in Poland, including the development of recommendations and standards for teaching this profession in Poland.
Pharmacy education in Poland
In Poland, pharmaceutical studies are part of education in the field of medical sciences, health sciences, and physical education sciences. Studies last no shorter than 11 semesters, ending with a statutory work placement of 6 months. Additionally, after the 3rd and 4th year of studies, students complete one month of professional practice during the summer holidays [8][9]. The number of hours of classes and internships is at least 5,300. The education standards for future pharmacists in Poland combine general and detailed learning outcomes, including knowledge in the pharmaceutical, medical, biological, chemical, and social sciences. Pharmacy students in Poland obtain thorough knowledge about drugs and the substances used in their production, pharmaceutical technology, the metabolism and effects of drugs, as well as the correct use of medicinal products. Students learn the methods and techniques of researching medicinal products in terms of chemical, pharmacological and toxicological properties. They gain knowledge of the basics of pharmaceutical law and management in pharmacy, including the principles of ethics and deontology [9].
Polish legal acts indicate that a graduate of pharmacy studies is required to achieve competence in the preparation, manufacture and quality assessment of medicinal products, providing reliable and objective information on medicinal products and medical devices. A pharmacy graduate should understand the principles of rationalizing pharmacotherapy, as well as skillfully conduct research on both substances and medicinal products. The general outcomes of the process of educating pharmacists in Poland, in terms of knowledge and skills, also take into account social competences by preparing future pharmacists to work in community pharmacies, hospital pharmacies, pharmaceutical wholesalers, the pharmaceutical industry, pharmaceutical inspection, as well as other offices and institutions, both state and local, operating in the field of pharmacy and healthcare [9].
Pharmacy education in the UK
The education of pharmacists in the UK is regulated by an independent body, the General Pharmaceutical Council (GPhC). The GPhC aims to protect and promote the health, safety, and well-being of people, especially those who need the services of pharmacists. Its main tasks include setting the requirements for practicing the profession, approving the qualifications of pharmacists and pharmacy technicians, and keeping a register of pharmacists, pharmacy technicians, and pharmacies [10].
The General Pharmaceutical Council accredits universities educating students in pharmacy, ensuring the quality of pharmacists' education. The Master of Pharmacy (MPharm) studies in Great Britain last four years [10]. Students are obliged to take at least 3,000 hours of classes directly related to pharmaceutical subjects during their studies, especially in relation to clinical practice. During their studies, students gain, amongst others, knowledge about the effects of drugs and the functioning of the human body, and broad universal competencies, including problem-solving, clinical decision-making, effective communication, and numerical data analysis [10]. Students preparing to practice as pharmacists in the United Kingdom gain practical skills from the first year of study. They are exposed to working with patients, during which students also have the opportunity for practical problem-solving and pharmaceutical care planning, in line with the educational standards set by the GPhC.
After graduating from their respective universities, future pharmacists complete a 12-month obligatory internship in a community pharmacy, hospital pharmacy, or industry (or a combination of different types of practice). Completing the training is followed by the independent state examination conducted by the Royal Pharmaceutical Society, and a passing grade in this examination entitles the graduate to use the title of "pharmacist".
Results
The best average exam result was achieved by students from Bydgoszcz (46.35%), followed by Warsaw (38.81%) and Łódź (38.35%) (Table 1). A one-way ANOVA was then performed, which showed significant differences in the results between the groups: p = 0.000003 (p < 0.05). Using the Fisher NIR (LSD) test (α = 0.05), significant differences were identified between the results from Bydgoszcz and Warsaw and between Bydgoszcz and Łódź. The differences in the results between Warsaw and Łódź are not statistically significant (Table 3). Individual scores ranged from a minimum of 24.5% to a maximum of 53.1%. These results indicate that none of the students taking the exam reached the 70% threshold for passing the test.
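The between-city comparison above (a one-way ANOVA followed by pairwise tests) can be sketched as follows. The score lists are invented placeholders centred near the reported city means (46.35%, 38.81%, 38.35%), since the per-student data are not given in the text:

```python
def one_way_anova_f(groups):
    """Hand-rolled one-way ANOVA F statistic (stdlib only):
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Placeholder scores (%) near the reported city means -- NOT the real data.
bydgoszcz = [44, 46, 47, 48, 45, 48]
warsaw = [37, 39, 40, 38, 40, 39]
lodz = [36, 38, 39, 40, 37, 39]

f = one_way_anova_f([bydgoszcz, warsaw, lodz])
print(f"F = {f:.1f}")  # a large F, consistent with p << 0.05
```

With real data, the p-value and the pairwise follow-up (e.g. Fisher's LSD) would be computed from this F statistic and the within-group variance.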
Discussion
Pharmacists are an integral part of health care systems [11][12][13]. They have the expertise to detect, resolve, and prevent drug-related problems [14][15]. In many countries, such as the United Kingdom, pharmacists are involved in interdisciplinary Primary Care Teams (PCTs). They take direct care of patients through activities such as participating in drug management and drug counselling, identifying undesirable or incorrect drugs, and managing chronic diseases [16]. These activities directly improve patients' health outcomes and quality of life [17].
Highly developed pharmaceutical education, which begins during studies, is a prerequisite for pharmacists' full and beneficial involvement in patient care. Despite the fact that both Poland and Great Britain signed the Bologna Declaration, the teaching models in the two countries differ significantly. UK education programs emphasize the clinical aspect of the classes, which is the basis for practical preparation for a broad understanding of pharmaceutical care. Internships, taking place from the first years of study, teach future pharmacists to cooperate with representatives of various medical professions, including doctors. This can significantly influence current practice, where decisions to treat the patient are discussed between doctors and pharmacists.
The role of the pharmacist in the United Kingdom is crucial [18]. While pharmacists in the UK are PCT members and take direct care of patients, in Poland, pharmacists are still perceived mainly as drug sellers.
The UK education system is widely regulated and independently supervised, including the state final examination. Notably, the state exam may be attempted a maximum of 3 times. In the event of a triple failure, students may not be allowed to work as pharmacists. The results indicating that none of the surveyed graduates of Polish universities obtained the required number of points emphasize the relatively low level of pharmacy education. Data obtained from the General Pharmaceutical Council show that in 2019, the average pass rate in the United Kingdom was 72.33%.
For this reason, it is necessary to develop a new model of education in Poland, based on the International Pharmaceutical Federation (FIP) guidelines and competency-based pharmacy education (CBPE) [19]. In order to assess the knowledge and competencies of future pharmacists, it is recommended to introduce a final exam at Polish universities, supervised by an independent organization. A real threat to these activities may be the noticeable downward trend in the number of new pharmacists, which already amounts to 1.85 pharmacists per pharmacy and is much lower than the average number of pharmacists per pharmacy in other OECD countries (2.40) [20].
Conclusions
The pharmaceutical education system in Poland requires complete changes. However, these modifications seem necessary to increase the pharmacist's role in the health care system and ensure the complete safety of patients. Although pharmacists are often overlooked in setting primary health goals, and their role in promoting health in many countries is still low, it should be noted that pharmacists are the medical profession most frequently visited by patients. Thus, great emphasis should be placed on educating pharmacists who can actively support patients' health in the area of pharmacotherapy, also acting as health educators. The local ethics committee ruled that no formal ethics approval was required for this study. We confirm that all methods in the study were carried out in accordance with relevant guidelines and regulations. We confirm that informed consent was obtained from all subjects.
Consent for publication
Not applicable.
Availability of data and materials All data are available from the corresponding author. | 2021-10-19T15:18:19.712Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "ae5bd759738951083abd46973b5f0717c565f0e8",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-871778/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "f5953b845745a7f45ba8c6b82d160fd6f57c2db3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
119482714 | pes2o/s2orc | v3-fos-license | Dynamical Low-Mass Fermion Generation in Randall-Sundrum Background
We investigate a dynamical mechanism to generate a low-mass fermion in the Randall-Sundrum (RS) background. We consider a five-dimensional four-fermion interaction model with two kinds of bulk fermion fields and take all the mass scales in the five-dimensional spacetime at the Planck scale. Evaluating the effective potential of the induced four-dimensional model, we calculate the dynamically generated fermion mass. It is shown that dynamical symmetry breaking takes place and one of the fermion masses is generated at the electroweak scale in four dimensions.
Introduction
To construct a unified theory of the electroweak interaction, the strong interaction and gravity, it is important to investigate the gauge hierarchy problem: how the electroweak scale is realized in a theory at the Planck scale. As in large extra-dimension models, it is possible to address the gauge hierarchy problem by considering a four-dimensional brane embedded in a higher-dimensional spacetime. 1,2 Randall and Sundrum considered a higher-dimensional curved spacetime with negative curvature and found an elegant solution of the hierarchy problem by using the exponential factor in the metric. 3 Here we study a dynamical mechanism to realize the electroweak scale from Planck-scale physics in the brane-world model proposed by Randall and Sundrum. 4,5,6,7 Originally, it was assumed that only the graviton can propagate in the extra dimension, while all the standard model particles are localized on the four-dimensional brane. However, there is a possibility that some of the standard model particles also propagate in the extra dimension. 8 In Fig. 1 we illustrate an image of a four-dimensional brane embedded in a five-dimensional bulk. The bulk fields are the fields which can propagate in the bulk. The KK excitation modes of the bulk fields appear on the brane, and these modes may affect some low-energy phenomena. One of the interesting phenomena is found in spontaneous electroweak symmetry breaking. The electroweak symmetry can be dynamically broken down by fermion and anti-fermion condensation. Many works have been done to see the contribution of the KK modes to dynamical symmetry breaking in models with large extra dimensions. 9,10,11,12,13,14,15 Here a theory with bulk fermions is considered in the RS background. We assume the existence of two types of bulk fermion fields which can propagate in the five-dimensional bulk.
To construct a model in which the fermion field naturally develops the electroweak mass scale, a four-fermion interaction is introduced between these bulk fermions. As is well known, the four-fermion interaction model is a simple model of dynamical symmetry breaking, and a negative curvature is expected to enhance symmetry breaking. 16,17,18,19 Evaluating the induced four-dimensional effective potential, we calculate the mass scale of the fermion in four dimensions. Since we are interested in the bulk standard model particles, the KK excitations of the graviton are assumed to have no serious effect on the fermion mass and are ignored.
Four-Fermion Interaction Model in Randall-Sundrum Background
Here we briefly review the Randall-Sundrum idea 3 and introduce a four-fermion interaction between bulk fermions.
Randall-Sundrum Background
The RS background is a five-dimensional spacetime whose fifth dimension is compactified on an orbifold with S 1 /Z 2 symmetry; two Minkowski branes exist at the orbifold fixed points, θ = 0 and π. The spacetime is described by the metric ds 2 = e −2kr|θ| η μν dx μ dx ν + r 2 dθ 2 , where η μν is the four-dimensional Minkowski metric and r is the radius of the fifth dimension. It is a maximally symmetric spacetime with a constant negative curvature, i.e., a five-dimensional anti-de Sitter spacetime. The warp factor e −2kr|θ| in Eq. (2) plays an important role in solving the hierarchy problem. The effective Planck scale, the mass scale for gravity on the brane, is given by M 2 pl = (M 3 /k)(1 − e −2krπ ), where M is the fundamental scale in the bulk and k is the curvature. On the other hand, the physical mass scale M phys on the θ = π brane is suppressed by the warp factor, M phys = e −krπ M.
For kr ∼ 11, the electroweak mass scale, M EW , can be realized from the Planck scale alone, without introducing any large numbers by hand.
This is the most important mechanism of the RS model. We want to realize this mechanism dynamically and construct a model where the fermion mass is generated at the electroweak scale.
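The warp-factor suppression just described is easy to check numerically: with the fundamental scale at the Planck scale, e^{−krπ} with kr around 11-12 brings the physical scale on the θ = π brane down to the TeV range. A back-of-the-envelope sketch (the precise value of kr is model-dependent, and the 10^19 GeV Planck scale is an order-of-magnitude choice):

```python
import math

M_PL = 1.0e19  # fundamental (Planck) scale in GeV, order of magnitude

def warped_scale(kr):
    """Physical mass scale on the theta = pi brane:
    M_phys = exp(-kr * pi) * M, per the Randall-Sundrum warp factor."""
    return math.exp(-kr * math.pi) * M_PL

for kr in (10, 11, 12):
    print(f"kr = {kr}: M_phys ~ {warped_scale(kr):.1e} GeV")
# kr around 11-12 lands M_phys near the electroweak/TeV scale
# without any large dimensionless numbers put in by hand.
```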
Bulk Four-Fermion Interaction Model
We study the bulk four-fermion interaction model defined by the interaction term λ(ψ 5D 1 ψ 5D 2 )(ψ 5D 2 ψ 5D 1 ) in Eq. (6), where we assume the existence of two kinds of bulk fermions, ψ 5D 1 and ψ 5D 2 , with different parity. Two kinds of fermions are necessary to construct a Dirac mass term on the brane. It is possible to consider other types of four-fermion interaction, for example (ψ 5D 1 ψ 5D 1 )(ψ 5D 1 ψ 5D 1 ) and (ψ 5D 2 ψ 5D 2 )(ψ 5D 2 ψ 5D 2 ), but the interaction in Eq. (6) is essential to generate a low-mass mode.
Following the procedure in Ref. 8 we derive the mode expansion of the bulk fermion in the RS background.
Here g (n) L and g (n) R are the left- and right-handed mode functions of the KK expansion in Eq. (8), which satisfy the corresponding mode equations. In practical calculations it is more convenient to introduce the auxiliary field σ ∼ ψ 1 ψ 2 . Applying the KK mode expansions (8), the Lagrangian (6) reduces to the four-dimensional form of Eq. (10), in which M corresponds to the fermion mass; it is a function of the vacuum expectation value of σ and the mode functions. 7
Dynamically Generated Fermion Mass
To obtain the fermion mass in four dimensions we need to calculate the vacuum expectation value of σ, which is determined by locating the minimum of the induced four-dimensional effective potential. Integrating over the extra direction in Eq. (10), we obtain the induced four-dimensional theory. Since the RS background has no translational invariance along the extra direction, it is impossible in general to perform the integration over the extra direction analytically. Here we restrict ourselves to two specific forms of the vacuum expectation value, σ = ve krθ and σ = v, where v is a constant parameter.
After some numerical calculations we obtain the behavior of the effective potential in both cases and find the critical coupling at which the vacuum expectation value disappears. 7 In Fig. 3 we draw the behavior of the critical coupling. λ̄ is defined by λ̄ ≡ (1 − e −4krπ )λ/(4k), and N kk is the truncation scale of the KK mode summation. It is natural to take N kk ∼ O(10 16 ). In the region between the two critical lines the state σ = ve krθ is more stable than the θ-independent one. In the large-N kk limit the critical coupling is proportional to 1/N kk in the σ = ve krθ case, but it seems to approach a constant, λ̄ cr ∼ O(30)/Λ 3 , in the θ-independent case. We conclude that the natural scale of the four-fermion coupling, λ̄ ∼ 1/M 3 pl , is located between the two critical lines and the θ-dependent vacuum is realized.
For σ = ve krθ the fermion mass matrix of the induced four-dimensional theory reduces to a simple form whose elements m n are determined by v and the mode functions. The eigenvalues of the mass matrix, m f with n = · · · , −2, −1, 0, 1, 2, · · · , correspond to the masses of the individual KK modes in four dimensions. We can choose n such that m f is smaller than kπ/(e kπr − 1) ∼ M EW . Thus there is a mode whose mass is smaller than the electroweak scale, M EW , even if v develops a value near the Planck scale. Therefore a low-mass fermion is generated dynamically in the bulk four-fermion interaction model.
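The bound quoted above, kπ/(e kπr − 1), can likewise be evaluated numerically. Taking k at the Planck scale and kr ≈ 11-12 (illustrative values in the spirit of the RS setup, not parameters fixed by the paper), the bound indeed lands near the electroweak/TeV scale:

```python
import math

def kk_mass_bound(k_gev, kr):
    """Upper bound k*pi / (exp(pi*kr) - 1) on the lightest KK-mode
    mass, written in terms of the dimensionless product kr."""
    return k_gev * math.pi / (math.exp(math.pi * kr) - 1.0)

K = 1.0e19  # curvature scale taken at the Planck scale (GeV)
for kr in (11, 12):
    print(f"kr = {kr}: bound ~ {kk_mass_bound(K, kr):.1e} GeV")
# For kr ~ 12 the bound falls near the TeV scale even though
# k itself is Planckian -- a low-mass mode is allowed.
```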
Conclusion
The dynamical origin of the electroweak mass scale has been investigated in the RS background. We have assumed the existence of two kinds of bulk fermion fields with different parity and studied a bulk four-fermion interaction model. Evaluating the effective potential for two specific θ-dependences of the state, we have calculated the critical value of the four-fermion coupling and found the more stable state. For a natural choice of the physical parameters, the vacuum expectation value depends on the extra direction. In this stable state the fermion mass term has been calculated analytically. We have shown the existence of a mode whose mass is smaller than the electroweak scale. The electroweak mass scale can thus be realized from the Planck scale alone in the RS brane world through fermion and anti-fermion condensation. This is one of the dynamical realizations of the so-called Randall-Sundrum mechanism.
There are some remaining problems. We consider only two specific forms of the vacuum state and conclude that the state whose expectation value of σ has the form ve krθ is more stable. To find the true vacuum we must calculate the induced effective potential for a general form of σ.
The fermion and anti-fermion condensation may affect the structure of spacetime. To analyze the spacetime evolution the behavior of the stress tensor is under investigation. | 2019-04-14T02:35:55.503Z | 2003-08-01T00:00:00.000 | {
"year": 2003,
"sha1": "3738b9a7c3bffd988a46a906687dab4cfceb8a9b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0308077",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a6256d89d14be6172815124169fa807312a59f30",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
58027131 | pes2o/s2orc | v3-fos-license | Screening for differentially expressed miRNAs in Aedes albopictus (Diptera: Culicidae) exposed to DENV-2 and their effect on replication of DENV-2 in C6/36 cells
Background The mosquito Aedes albopictus is an important vector for dengue virus (DENV) transmission. The midgut is the first barrier to mosquito infection by DENV, and this barrier is a critical factor affecting the vector competence of the mosquito. However, the molecular mechanism of the interaction between the midgut and the virus is unknown. Results Six small RNA libraries of Ae. albopictus midguts were constructed, three from mosquitoes that were infected with DENV-2 after feeding on infected blood, and three from mosquitoes that remained uninfected with DENV-2 after feeding on the same batch of infected blood. A total of 46 differentially expressed miRNAs were identified, of which 17 significantly differentially expressed miRNAs were selected. Compared to the microRNA expression profiles of mosquitoes that were uninfected with DENV-2, 15 microRNAs were upregulated and two were downregulated in mosquitoes that were infected with DENV-2. Among these differentially expressed microRNAs, miR-1767, miR-276-3p, miR-4448 and miR-622 were verified by stem-loop qRT-PCR in samples from seven-day-infected and uninfected midguts and chosen for an in vitro transient transfection assay. miR-1767 and miR-276-3p enhanced dengue virus replication in C6/36 cells, and miR-4448 reduced dengue virus replication. Conclusions To our knowledge, this study is the first to reveal differences in expression levels between mosquitoes infected and uninfected with DENV-2 after feeding on an infected blood meal. It provides useful information on microRNAs expressed in the midgut of Aedes albopictus after exposure to the virus. Electronic supplementary material The online version of this article (10.1186/s13071-018-3261-2) contains supplementary material, which is available to authorized users.
Background
The mosquito Aedes albopictus is an important vector for transmission of many arboviruses, including dengue virus (DENV) which causes dengue fever (DF), with some cases resulting in severe symptoms such as plasma leakage, hemorrhagic fever and organ impairments.
Approximately four billion people in 128 countries are estimated to be at risk of DENV infection [1]. Each year, there are approximately 390 million cases of dengue fever worldwide, many of which are asymptomatic or undiagnosed [2]. Because there is no reliable vaccine for dengue fever and no drug therapies exist, vector control is still the main effective means of preventing this disease. However, for many reasons, for example the emergence of insecticide-resistant mosquitoes, the lower impact of prevention and control efforts than in the past [3], and increasing vector and human population densities, the global pandemic of dengue fever has increased dramatically in recent decades [4], emphasizing the need for diversification of vector control strategies. Previous studies have carried out genetic manipulation of insect vectors to modulate characteristics such as vector competence [5,6], but further in-depth research requires more knowledge of the molecular mechanisms of vector-arbovirus interactions.
The susceptibility of mosquito vectors to the virus is the principal factor in vector competence. A large number of studies have shown that the mosquito innate immune response is activated after mosquitoes are infected by various pathogens [7][8][9][10]. Molecular events triggered by the innate immune system either prohibit the infection of the midgut epithelium by the virus (a midgut infection barrier, MIB) or prevent virus escape and dissemination to other tissues, such as the salivary glands and ovaries (a midgut escape barrier, MEB) [11]. The midgut is the primary barrier to pathogens invading via the digestive tract, so the antiviral ability of midgut epithelial cells is the most important factor affecting the susceptibility of mosquitoes to arboviral infection and a crucial indicator of vector competence [12]. At present, the molecular mechanism by which midgut epithelial cells regulate viral replication remains unclear, which is a major obstacle to studying the susceptibility of mosquitoes to DENV.
MicroRNAs (miRNAs) are a class of non-coding RNAs that regulate gene expression at the post-transcriptional level [13,14]. miRNA plays an important role in regulating endogenous genes and resisting the invasion of exogenous nucleic acids. In recent years, many studies have shown that miRNA plays an important role in the growth, development and infection of animals [15,16]. To date, more than 100 miRNAs have been identified within mosquitoes (miRBase20, http://www.mirbase.org/). Several studies have demonstrated that DENV infection causes changes in the expression of miRNAs, such as miR-375, which is only expressed after a blood meal, in the midgut of Ae. aegypti [17,18]. Transfecting miR-375 mimics into Aag2 cells significantly enhanced the expression of the catus gene, which affected DENV infection by inhibiting the expression of the transcription factor NF-κB. Upon further infection of Aag2 cells with DENV-2, miR-375 enhanced DENV-2 infection in the Aag2 cell line [19,20]. In Culex quinquefasciatus, miR-92, miR-33, miR-970 and miR-980 were upregulated and miR-989 and miR-957 were downregulated after West Nile virus (WNV) infection [21]. These results suggest that miRNA may be involved in the process of endogenous gene regulation during pathogen infection, and the process may differ in different combinations of mosquito species and viruses.
Although some Ae. albopictus miRNAs have previously been identified, there are few studies comparing the differential expression of miRNAs between DENV-2-infected and uninfected midguts [22,23]. Due to the importance of the midgut in viral infection, determining the patterns and functions of different miRNAs during the course of DENV infection can help identify the molecular mechanisms of the midgut infection barrier, so that the prevention and control of dengue fever can be further improved.
Previously, we constructed three small RNA libraries from the midguts of Ae. albopictus females that were either fed sugar solution, a regular blood meal, or a blood meal infected with DENV-2. We then investigated the differences between mosquitoes that had ingested an uninfected blood meal and those that had ingested a DENV-2-infected blood meal [24]. Here, we present the changes in miRNA expression levels between Ae. albopictus that remained uninfected and those that became infected with DENV-2, both of which fed on the same batch of infected blood. We identified the miRNAs that play an important role in DENV-2 infection. Then, we chose four miRNAs for an in vitro transient transfection assay to understand the effect of miRNA on the replication of DENV-2 in the C6/36 cell line.
Bioinformatic analysis of samples
The results of the analysis of total RNA concentration and integrity showed that the samples met the requirements for sample preparation (Table 1). The number of sRNAs and the proportion of each sequence are shown in Additional file 1: Table S1. We aligned clean sequences with sequences for rRNA, tRNA, snRNA, snoRNA, repeats, exons and introns in the NCBI GenBank and miRBase and annotated each sequence (see Additional file 3: Table S3).
Changes in miRNA expression profiles between DENV-2-infected and uninfected midguts
We normalized the expression of each miRNA (see Additional file 4: Table S4) and calculated the infected and uninfected mosquito midgut miRNA expression ratio. We further analyzed the expression data by calculating the fold changes and P-values based on the expression ratios and plotted these data as a scatter plot.
Effect of miRNAs on replication of DENV-2 in the C6/36 cell line
In this study, four miRNAs (aal-miR-1767, aal-miR-276-3p, aal-miR-4448 and aal-miR-622) were selected for their high expression levels, high degree of fold change between the uninfected and infected midguts, and the repeatability of their differential expression at different time points for the in vitro transient transfection assay. The differential expression of these four miRNAs in the samples from the 7-day infected and uninfected midguts was verified by stem-loop quantitative real-time PCR (qRT-PCR) (Fig. 2).
We transfected C6/36 cells with synthetic mimics of the selected miRNAs and their inhibitors as well as negative controls for the mimic (NCm) and inhibitor (NCi). The cells were inoculated with DENV-2 24 h post-transfection. The transfection efficiency of the negative controls for the mimic (NCm) after 0 h, 6 h, 1 day, 3 days, 5 days and 7 days was 0.796, 14.4, 73, 57.7, 52.3 and 38.8%, respectively. The Ae. albopictus housekeeping gene rpS7 was used as an internal control for qRT-PCR, and the four miRNAs were expressed in the Ae. albopictus C6/36 cell line (Fig. 3).
The survival rate of C6/36 cells was lower in cells transfected with the aal-miR-1767 mimic compared to those transfected with the NCm four days post-inoculation (t (2) = 24.137, P < 0.0001). The survival rate of C6/36 cells was higher in cells transfected with the aal-miR-1767 inhibitor, but the difference was not statistically significant. The survival rate of C6/36 cells was lower in cells transfected with the aal-miR-276-3p mimic (t (2) = 5.679, P = 0.011) but was not significantly different in the inhibitor group. The survival rate of C6/36 cells was higher in cells transfected with the aal-miR-4448 mimic (t (2) = 24.137, P < 0.0001) but was not significantly different in the inhibitor group. The survival rate of C6/36 cells was higher in the aal-miR-622 mimic group but was not significantly different in the inhibitor group (Fig. 6).
Discussion
The relationship between mosquito and arbovirus is a dynamic process, influenced by the inherent variability of the individual genotypes of both the mosquito and the mosquito-borne virus, and infection is an external expression of those genotype interactions [25][26][27][28]. In this study, the difference in miRNA expression between the infected and uninfected Ae. albopictus midgut was the most significant at seven days, highlighting the potential importance of this period during infection.
Our results show that many miRNAs are differentially expressed between uninfected and infected midguts at different time points, and the number and amplitude of miRNAs that were upregulated in infected midguts was larger than the number and amplitude of miRNAs that were downregulated. A total of 44 differentially expressed miRNAs were screened. The identification, screening and function of Ae. albopictus midgut-specific microRNAs related to susceptibility to dengue virus were observed to be very complex. Some miRNAs showed a trend of upregulation and downregulation at different time points, such as aal-miR-2941 and aal-miR-989.
In Ae. aegypti exposed to DENV-2, 34 differentially expressed miRNAs were screened, of which miR-5119-5p was upregulated at two days, miR-34-3p, miR-87-5p and miR-988-5p were upregulated at nine days, and the remaining miRNAs were downregulated [19]. In this study, the expression of miR-34 was upregulated after 24 h of infection, and there was no significant difference compared with uninfected mosquitoes at five days. The expression of miR-87-5p was upregulated after exposure to DENV-2, but the expression level was very low. These results indicate that the expression of the same miRNA varies between different mosquito species. In a study of Ae. albopictus, the expression of miR-252 was downregulated at seven days after thoracic injection of DENV-2 [29]. Yan et al. [30] found that the expression of miR-252 was upregulated in C6/36 cells after infection with DENV-2, and an in vitro transient transfection assay showed that it inhibited the replication of DENV-2 in C6/36 cells. However, in the present study, the expression of aal-miR-252 in the midgut remained at a very low level and did not change significantly after infection. This implies that aal-miR-252 does not act on the midgut infection barrier. Two factors may be responsible for these different results. One is the infection route: the present study aimed to simulate natural infection by using an infected blood meal, which may have triggered a different immune response compared with thoracic injection. The other is that our study focused on the midgut tissue rather than the whole body. Many studies have shown that there are differences in the expression pattern of the same miRNA in different tissues [31][32][33].
Fig. 1 a Scatter plots and fold changes comparing the midguts of infected and uninfected Ae. albopictus at different time points. 5B/5A is the (5-day uninfected midguts)/(5-day infected midguts after exposure to DENV-2), 7B/7A is the (7-day uninfected midguts)/(7-day infected midguts), etc. b Screened significantly differentially expressed miRNAs between the midguts of infected and uninfected Ae. albopictus at different time points after a DENV-2-infected blood meal

In this study, four miRNAs were selected for their high expression levels, high degree of fold change between the uninfected and infected midguts, and the repeatability of their differential expression at different time points for the in vitro transient transfection assay. Three of these miRNAs had an effect on DENV-2 replication in C6/36 cells. miR-4448 inhibited intracellular DENV-2 replication, suggesting that it may play an important role in DENV-2 infection in Ae. albopictus. aal-miR-1767 and aal-miR-276-3p enhanced the replication of DENV-2. miR-276 was upregulated in Ae. albopictus after intraperitoneal injection of DENV-2 [29]. Similarly, in this study, miR-276 was found to be significantly upregulated in the midgut after oral infection, and in vitro transient transfection assays demonstrated that it could affect C6/36 intracellular DENV-2 replication, indicating that this miRNA may play an important role in midgut infection with DENV-2. Only one study previously reported that aal-miR-1767 is expressed in the midgut after a sugar meal (109 and 1660 reads) [24]. Our study showed that this miRNA maintained a high level of expression after DENV-2 infection. Furthermore, our in vitro transfection experiments demonstrated that miR-1767 enhanced DENV-2 replication in C6/36 cells, indicating that it could be a miRNA that is specifically expressed in the midgut of Ae. albopictus.
miR-1767 expression was significantly downregulated after chicken fibroblasts were infected with duck virus, indicating that this miRNA could be associated with viral infection [34]. The replication of DENV-2 in C6/36 cells was inhibited after transfection of the aal-miR-4448 mimic. A previous study showed that aal-miR-4448 was expressed in the midgut (4459 reads) and was upregulated after an uninfected blood meal but downregulated after the mosquito had ingested a DENV-2-infected blood meal [24]. These results indicate that aal-miR-4448 has an inhibitory effect on DENV-2 infection. Some studies have also shown that mosquito miRNAs interact with viruses; for example, West Nile virus encodes miRNA-like small RNA molecules that upregulate the expression level of GATA4, thereby promoting replication of the virus in mosquito cells [35]. Wolbachia can use the miRNAs produced by host cells to regulate the expression of antiviral genes in host cells and thus promote viral replication [36]. These effects may be related to changes in the host cell environment following viral infection and the host's antiviral mechanisms, but the molecular mechanism is unclear. miR-4448 was also found to be expressed in Culex pipiens pallens, where the expression level of the sensitive strain was higher than that of the resistant strain [37]. The same miRNA may play a different role in different species. miR-4448 may play an important role in DENV-2 infection in Ae. albopictus, which is worthy of further study.
The expression of aal-miR-622 was higher in the infected midgut at different time points, but the in vitro transient transfection experiment did not show a direct effect on the replication of dengue virus. If it does not act directly on the virus itself, it is likely to have synergistic effects with other genes or may play a role in the midgut escape barrier.
Mosquito genes influence the mosquito's susceptibility to the virus, and the genotype of the mosquito may affect the choice of control strategies and may eventually provide clues to interrupt transmission [38]. In this study, we first compared the miRNA expression in the Ae. albopictus midgut when infected or uninfected with DENV-2. Examination of the miRNA expression profile illustrates the changes of midgut miRNA after infection with DENV-2. We screened the differentially expressed miRNA and chose four miRNAs that may play a role in DENV-2 infection for in vitro transfection experiments. Our results provide a basis for further study of novel vector control and measures to block pathogen transmission.
Conclusions
Six small RNA libraries of Ae. albopictus midguts were constructed, three of which were from mosquitoes that became infected with DENV-2 after feeding on infected blood, and another three from mosquitoes that remained uninfected with DENV-2 after feeding on the same batch of infected blood. Compared to the microRNA expression profiles of mosquitoes that were uninfected with DENV-2, 15 microRNAs were upregulated and two were downregulated in mosquitoes that were infected with DENV-2. Among these differentially expressed microRNAs, miR-1767, miR-622, miR-4448 and miR-276-3p were verified and chosen for an in vitro transient transfection assay; miR-1767 and miR-276-3p enhanced dengue virus replication in C6/36 cells, and miR-4448 reduced dengue virus replication.
Mosquito, virus strains, cell and RNA extraction
The methods of mosquito collection and husbandry, mosquito dissection, RNA extraction, small RNA library construction, cell culture and cell infection are identical to those used by Su et al. [24]; the same viral strains were also used.
Alignment of conserved miRNAs using BLAST and tag2 miRNA software
To make sure every unique small RNA is mapped to only one annotation, we adhered to the following priority rule: all rRNA (in which GenBank > Rfam) > repeat > exon > intron > known miRNA. Because Ae. albopictus miRNA is not available in miRBase, we first used BLAST to align small RNA tags with the Ae. aegypti miRNA precursor in miRBase 19.0 to obtain a miRNA count with no mismatches.
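The priority rule above can be expressed as a small function. The sketch below is illustrative, not the authors' actual pipeline; the category names and the `hits` structure are assumptions made for the example:

```python
# Hypothetical sketch of the annotation priority rule: each unique small-RNA
# tag keeps only the highest-priority category it aligned to, so that every
# tag maps to exactly one annotation.
PRIORITY = ["rRNA_GenBank", "rRNA_Rfam", "repeat", "exon", "intron", "known_miRNA"]

def annotate(tag, hits):
    """hits maps a tag to the set of categories its alignments fall into."""
    for category in PRIORITY:
        if category in hits.get(tag, set()):
            return category
    return "unannotated"  # unannotated tags feed novel-miRNA prediction

hits = {"TAGCTTATCAGACTGATGTTGA": {"exon", "known_miRNA"},
        "ACGTACGTACGTACGTACGTAC": {"known_miRNA"}}
print(annotate("TAGCTTATCAGACTGATGTTGA", hits))  # exon outranks known miRNA
print(annotate("ACGTACGTACGTACGTACGTAC", hits))  # known_miRNA
```

Applying the categories in a fixed order is what guarantees the "only one annotation per tag" property stated above.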
Prediction of novel miRNA candidates using MIREAP software
MIREAP software (http://sourceforge.net/projects/mireap/) was used to predict novel miRNAs by exploring their secondary structure, Dicer cleavage sites and the minimum free energy of unannotated small RNA tags.
Analysis of variations in miRNA expression
To compare the miRNA expression levels in the two groups via high-throughput deep sequencing, we normalized the read numbers in each library according to the following formula: normalized expression = actual miRNA reads/total count of clean reads × 10^6. We then calculated the ratio and magnitude of the between-group differences and their associated P-values from the normalized data. The ratio was calculated according to the following formula: ratio = normalized expression of treatment group/normalized expression of control group, and the magnitude of the differences was expressed as "fold change", calculated using the following formula: fold change = log2(ratio). A fold change > 1 or < -1 with a P-value < 0.05 was regarded as significantly different [39]. Fold changes and their associated P-values were calculated using a special procedure developed by the BGI biotech company (Shenzhen, China).
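The normalization and fold-change formulas above can be checked with a few lines of code; the read counts below are made up for illustration:

```python
import math

def normalized_expression(mirna_reads, total_clean_reads):
    # Normalized expression = actual miRNA reads / total clean reads * 10^6
    return mirna_reads / total_clean_reads * 1e6

def fold_change(treat_reads, treat_total, ctrl_reads, ctrl_total):
    ratio = (normalized_expression(treat_reads, treat_total)
             / normalized_expression(ctrl_reads, ctrl_total))
    return math.log2(ratio)  # fold change = log2(ratio)

# made-up counts: 800 vs 200 reads in libraries of 10 million clean reads each
fc = fold_change(800, 10_000_000, 200, 10_000_000)
print(fc)  # log2(4) = 2.0, so |fold change| > 1; significant if P < 0.05
```

Note that with equal library sizes the normalization cancels out of the ratio; it matters when the two libraries have different total clean-read counts.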
rpS7 qRT-PCR
The Ae. albopictus housekeeping rpS7 gene was used as an internal control for the qRT-PCR results. The forward primer sequence was 5'-ATG GTT TTC GGA TCA AAG GT-3', and the reverse sequence was 5'-CGA CCT TGT GTT CAA TGG TG-3'. qRT-PCR analyses followed the protocols described above.
Detection of the survival rate of C6/36 cells
Twenty microliters of MTT solution (5 mg/ml) was added to each well and incubated for 4 h (37 °C, 5% CO2); the culture solution was then removed, 150 μl of DMSO was added, and the mixture was shaken horizontally for 10 min. The absorbance at 492 nm was detected with a microplate reader (SuPerMax 3000FA).
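The MTT absorbance readout is typically converted to a relative survival rate against the untreated control. The exact formula is not stated in the text, so the convention below (blank-corrected ratio to control) is an assumption for illustration:

```python
def survival_rate(a_treated, a_control, a_blank=0.0):
    # Assumed convention: viability (%) relative to the control wells,
    # after subtracting the blank (medium + MTT, no cells) absorbance.
    return (a_treated - a_blank) / (a_control - a_blank) * 100.0

# made-up A492 values for mimic-transfected vs. NCm-transfected wells
print(round(survival_rate(0.45, 0.60, a_blank=0.05), 1))  # 72.7
```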
Statistical analysis
A Poisson distribution was used to analyze the digital transcript profile data following the method described by Audic & Claverie [39]. The 2^-ΔΔCT method was used to determine relative expression levels from the qRT-PCR results, and paired t-tests were used to determine if the differences were statistically significant using SPSS v.19.0.
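The 2^-ΔΔCT calculation normalizes the target miRNA to the rpS7 internal control within each sample and then to the control group. A minimal sketch with made-up Ct values:

```python
def relative_expression(ct_target_treat, ct_ref_treat,
                        ct_target_ctrl, ct_ref_ctrl):
    dct_treat = ct_target_treat - ct_ref_treat  # normalize to rpS7 (treated)
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl     # normalize to rpS7 (control)
    ddct = dct_treat - dct_ctrl
    return 2.0 ** (-ddct)                       # 2^-ddCt

# a miRNA crossing threshold two cycles earlier (relative to rpS7) in the
# infected group corresponds to a 4-fold upregulation
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0
```

This assumes roughly equal (near-100%) amplification efficiency for target and reference, which is the standard premise of the ΔΔCT method.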
Additional files
Additional file 1: Table S1. Basic biological information analysis on sRNAs in the midguts from infected and uninfected Ae. albopictus after a DENV-2-infected blood meal. (DOCX 18 kb)
Modality-bridge Transfer Learning for Medical Image Classification
This paper presents a new approach to transfer learning-based medical image classification to mitigate the insufficient labeled data problem in the medical domain. Instead of direct transfer learning from the source to a small number of labeled target data, we propose a modality-bridge transfer learning which employs a bridge database in the same medical imaging acquisition modality as the target database. By learning the projection function from source to bridge and from bridge to target, the domain difference between the source (e.g., natural images) and the target (e.g., X-ray images) can be mitigated. Experimental results show that the proposed method can achieve high classification performance even with a small number of labeled target medical images, compared to various transfer learning approaches.
INTRODUCTION
Medical imaging is one of the most important diagnostic tools to visually represent the anatomical structures of the human body [1]. Over the past several decades, various types of medical imaging technologies including X-ray radiography, magnetic resonance imaging (MRI), computerized tomography (CT), and ultrasonic medical imaging have appeared and matured [2][3][4][5].
In medical imaging, machine learning techniques play a key role by helping to solve diagnostic and prognostic problems in a variety of medical imaging fields. Many researchers have proposed machine learning-based methods for clinical parameter analysis, medical knowledge extraction, disease progression prediction, etc. The authors of [6] proposed lung segmentation in chest X-ray images using a region-based active contour. In [7], a brain region detection related to Alzheimer's disease was proposed from brain MRI based on support vector machine (SVM). In [8], a multivariable logistic regression-based tool was proposed for screening malignant lung nodules from low-dose CT. More recently, with the success of deep learning in computer vision applications, deep learning frameworks have been applied to medical imaging [9][10][11]. However, there are limitations in adopting machine learning in medical imaging due to unique characteristics of medical image databases, e.g., incompleteness caused by missing parameters and the lack of publicly available sufficiently labeled databases. In particular, a small number of labeled databases is one of the main factors that make it difficult to train the classification model well due to over-fitting problems. Although some deep convolutional neural network (CNN)-based methods have achieved impressive performances in the medical image domain, it is still hard to fully train a deep network with a limited number of labeled datasets [10,11].
To overcome the aforementioned limitation, many transfer learning-based methods have been proposed in medical imaging applications. The aim of transfer learning is to learn the prediction function in the target domain using the knowledge learned from a large number of labeled data (e.g., ImageNet [12]) in the source domain. In various computer vision fields, it is well known that transfer learning contributes to improved learning on sparsely labeled or small datasets in the target domain [13][14][15][16][17][18]. For transfer learning in medical imaging, however, the input image characteristics are very different between the training data (a large number of natural images) and the test data (a small number of medical images). Due to the extremely different domains having different and unrelated classes, the transferred function learned from the source database (training set) can be biased when directly applied to the target database (test set) [19]. As a result, the features extracted from the biased function are unlikely to be desirable for the target domain, i.e., the medical image domain.
The aim of this paper is to propose a novel approach to transfer learning-based classification with a small number of medical images, considering an additional database of the same acquisition modality as the target data (we call it the bridge database). Conventional transfer learning could cause degraded classification performance in the medical image domain due to the significant distribution mismatches between the source (natural image) and target (medical image) domains. To mitigate the distribution mismatches between the source and target domains, we devise a new transfer learning through the bridge database (we call it modality-bridge transfer learning). The bridge database refers to a medical image set that has different purposes but is obtained from the same medical acquisition modality. For example, a large-scale chest X-ray image set can be used as the bridge database as a representative of the X-ray acquisition modality for cyst classification. By transferring the learned projection functions from natural images (source) to chest X-ray images (bridge) and from chest X-ray images to dental X-ray images (target), the distribution mismatch (not only the different domain but also the small number of labeled target data) between the natural image domain and the dental X-ray image domain can be reduced during the modality-bridge transfer learning.
The proposed method consists of three consecutive main parts: 1) learning the projection function in the source domain, projecting from the source image space to its feature space; 2) learning the nonlinear mapping from the feature space of the source images to the feature space of the bridge database, based on the projection function learned from the source database; and 3) learning the classifier based on the transferred projection function learned from the bridge database.
To evaluate the effectiveness of the proposed method using the new bridge transfer learning, extensive experiments were conducted on various types of medical image acquisition modalities such as X-ray, MRI, and CT. Experimental results show that the classification results of the proposed method are much better than those of other approaches on small medical image databases.
The rest of this paper is organized as follows. Section II describes the proposed transfer learning and its learning process. In Section III, comprehensive experimental results are shown to verify the usefulness of the proposed modality-bridge transfer learning. Finally, conclusions are drawn in Section IV.
A. Medical Image Configuration for the Proposed Transfer Learning
Fig. 1 shows examples of various medical images from three different medical image acquisition modalities: MRI, CT, and X-ray. As shown in Fig. 1, despite different human body parts, images in the same medical image acquisition modality are likely to have similar characteristics. For classification tasks on sparsely labeled or small-scale target medical images, a sufficient number of images in the same medical image acquisition modality is obtained from multiple sites as a bridge database. The bridge database does not increase the target data size, but it enables domain adaptation between the source database (natural images) and the target database (medical images).
In the proposed transfer learning with a small target image set, the database is composed of large-scale natural images as the source database, a small number of medical images as the target database (e.g., dental X-ray images with cysts), and a sufficient number of medical images as the bridge database. The bridge database is obtained from the same medical image acquisition modality as the target database, gathered from multiple sites (e.g., chest X-ray images). Fig. 2 shows the overall framework of the proposed modality-bridge transfer learning for medical classification using domain adaptation. To extract image characteristics such as edges and texture, we learn the projection function mapping the source image space to the source feature space using the source database, which consists of a large number of natural images. Then, the knowledge learned from natural images is transferred to the bridge database (i.e., medical images from the same medical imaging modality as the target) in order to learn the characteristics of medical images (e.g., bone and tissues in X-ray images). Finally, based on the learned characteristics of the natural and bridge medical images, the classifier is learned for the target database. Detailed descriptions of the proposed classification are given in the following subsections. Note that the proposed modality-bridge transfer learning is established for classification tasks on sparsely labeled or small-scale target medical images.
1) Learning for the characteristics of the natural images using source database: The probability of the i-th source image x_i^S belonging to class y_i^S is modeled by the classification function h_S on top of the projection function f_S:
p(y_i^S | x_i^S; θ_S, φ_S) = h_S(f_S(x_i^S; θ_S); φ_S), i = 1, ..., N_S, (1)
where N_S indicates the number of source images.
C. Domain Adaptation using Modality-bridge
With the probability term of Eq. (1), by minimizing the cross-entropy loss, the trained projection and classification functions for the classification of the source database (natural images) can be obtained. However, it does not reflect the characteristics of target database (e.g., dental X-ray images) due to the domain difference.
2) Learning for the characteristics of the medical images using bridge database: To learn the characteristics of the target domain (medical images), the knowledge of the source database is supposed to be transferred. As mentioned earlier, in the medical imaging domain, it is difficult to collect a large number of labeled images because of patient privacy protection and the high cost of reliable labeling. For this reason, direct transfer learning from the source to a target which has a small number of labeled images could cause over-fitting problems or failure of training to converge.
In this paper, we employ the bridge database. As mentioned above in Section II.A, the bridge database consists of a sufficient number of medical images from the same medical imaging modality, collected from multiple centers. The modality of the bridge database (e.g., chest X-ray images) is the same as that of the target domain (e.g., dental X-ray images). By transferring the projection function learned in the source domain to the bridge database domain, the over-fitting problem can be reduced while the characteristics of the medical images are learned.
3) Learning for the characteristics of the specific medical images using target database: Our goal is to predict the true class in the target medical image domain even under insufficiently labeled conditions. In the proposed method, the target images are projected onto the target feature space through the transferred projection function of the bridge database, f_B:
p(y_i^T | x_i^T; θ_B, φ_T) = h_T(f_B(x_i^T; θ_B); φ_T), i = 1, ..., N_T, (2)
where x_i^T and y_i^T represent the i-th target image and its class, respectively, θ_B and φ_T are the parameters of the projection function of the bridge database and the classification function of the target database, respectively, and N_T is the number of target images.
Using Eq. (2), by finding the optimal parameters φ_T* of h_T, which maps the features projected by f_B to the target domain, the classification performance in the target domain can be improved while mitigating over-fitting problems.
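The idea of training only the target classifier h_T on features produced by the frozen, bridge-trained projection f_B can be illustrated with a toy NumPy sketch. Everything here (the data, the identity projection standing in for θ_B, the shapes) is illustrative, not the VGG16 setup used in the paper:

```python
import numpy as np

def f_B(x, theta_B):
    # stand-in for the frozen bridge-trained projection (theta_B is not updated)
    return np.maximum(x @ theta_B, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# toy "target" patches: two well-separated classes in a 3-d raw space
X = np.array([[3.0, 0.2, 0.1], [2.6, 0.4, 0.0],
              [0.1, 0.3, 2.8], [0.0, 0.1, 3.1]])
y = np.array([0, 0, 1, 1])

theta_B = np.eye(3)        # frozen projection weights (illustrative)
phi_T = np.zeros((3, 2))   # classifier parameters phi_T, the only trainables

F = f_B(X, theta_B)        # features are computed once; f_B stays fixed
for _ in range(300):       # minimize cross-entropy w.r.t. phi_T only
    P = softmax(F @ phi_T)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0     # dL/dlogits for cross-entropy
    phi_T -= 0.1 * (F.T @ G) / len(y)  # gradient step on phi_T

pred = softmax(F @ phi_T).argmax(axis=1)
print((pred == y).mean())  # 1.0 on this separable toy set
```

Freezing f_B is what keeps the number of trainable parameters small enough to fit with only a handful of labeled target patches, which is the over-fitting mitigation the text describes.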
III. EXPERIMENTS AND RESULTS
To verify the effectiveness of the proposed method, we performed extensive experiments in three different medical images acquisition modalities that were X-ray, MRI, and CT. In our experiments, ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC 2012) database [12] was used as the source database. Detailed explanations of other bridge and target databases are described in the following subsections.
A. Experimental databases
1) X-ray image acquisition modalities:
In the X-ray image modality, the target database consisted of 120 dental panoramic X-ray images for cyst classification. Two kinds of image patches, cyst and non-cyst patches, were labeled. We used 963 patches, including 539 cyst patches and 424 non-cyst patches. For the bridge database, we used the JSRT digital image database [20], which includes 247 chest X-ray images. We divided each image into small patches belonging to the lung or other body parts, yielding 13,119 patches containing 6,223 lung patches and 6,896 other body patches.
2) MRI acquisition modalities: For the target database in the MRI modality, we made use of the OASIS database [21] for Alzheimer's disease (AD) classification. It contained 761 brain MRI images, including AD and normal cases: 326 AD cases (very mild, mild, moderate, and severe dementia) and 435 normal cases. In the experiment to verify the proposed method, we used 1,500 patches, including 720 AD and 780 normal patches. We used the NCI-ISBI 2013 Challenge database [22] for the bridge database in the MRI domain. It contains 1,258 prostate MRI images with their segmentation maps, from which we constructed 18,746 patches, including 7,308 prostate patches and 11,438 other-part patches.
3) CT image acquisition modalities: In the case of CT image modality, we utilized thoraco-abdominal lymph node (LN) database [23]
B. Projection and classification function
In our experiment, the VGG16 network [25], trained on the ImageNet dataset, was employed as the projection and classification function. There were a total of 16 layers: 13 convolutional layers, 2 fully-connected layers, and a softmax layer. The 13 convolutional layers and 2 fully-connected layers were considered the projection function.
2) Performance comparisons:
To evaluate the performance of the proposed method on sparsely labeled or small-scale target medical images, 963, 1,500, and 1,264 labeled image patches from X-ray, MRI, and CT were used as the target databases. For performance comparisons, we performed direct transfer learning (source to target) and the modality-bridge transfer learning with a bridge database of the same medical acquisition modality. For the direct transfer learning, the parameters of the VGG16 model trained on ImageNet were directly applied to the target database, and then the softmax layer for classification was re-trained to avoid over-fitting. For the modality-bridge transfer learning, medical images gathered from the same medical imaging acquisition modality were used as the bridge database (see Section III.A).
In Table I, the first and second rows show the classification accuracy of the direct transfer learning and the proposed modality-bridge transfer learning in three medical image acquisition modalities. As shown in Table I, the modality-bridge transfer learning achieved significantly higher classification performances in all three medical image acquisition modalities than the direct transfer learning. In addition, we investigated the performance when the image modality of the bridge database was different from that of the target database. In these experiments, the prostate MRI, the chest X-ray, and the prostate MRI databases were used as bridge databases for X-ray, MRI, and CT, respectively. The third row of Table I shows the performance of the transfer learning with the bridge database in the different acquisition modality. As shown in Table I, the classification accuracy decreased. This result indicates that a bridge database from a different medical imaging acquisition modality does not mitigate the domain differences because of the unique characteristics of each medical image modality. Furthermore, we performed additional experiments on a classification task with a number of target images sufficient for direct transfer learning. We investigated the performance when the amount of target data was large enough to train the network through direct transfer learning. In these experiments, for the target database in X-ray, we used 14,123 patches including 6,517 cyst patches and 7,606 non-cyst patches. For the target database in MRI, we used 15,220 patches including 6,520 AD patches and 8,700 normal patches. For the target database in CT, we used 15,728 patches including 6,208 mediastinal LN patches and 9,520 abdominal LN patches. The classification accuracies were 92.2% in X-ray, 73.6% in MRI, and 93.3% in CT.
These results demonstrate that the proposed modality-bridge transfer learning can achieve classification performance comparable to that obtained with a large target database, despite using only a small-scale target medical database.
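The direct transfer learning baseline described above keeps the pretrained backbone frozen and retrains only the final softmax layer on the small target set. As an illustrative sketch only (NumPy stand-in features rather than actual VGG16 activations; this is not the authors' code), the retraining step amounts to multinomial logistic regression on the frozen features:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def retrain_softmax(features, labels, n_classes, lr=0.1, epochs=200):
    """Retrain only the final softmax layer on frozen features
    (the transfer-learning step; the convolutional backbone is untouched)."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]          # one-hot targets
    for _ in range(epochs):
        P = softmax(features @ W + b)      # forward pass
        grad = P - Y                       # cross-entropy gradient w.r.t. logits
        W -= lr * features.T @ grad / n
        b -= lr * grad.mean(axis=0)
    return W, b

# toy stand-in for frozen-backbone features of two "patch" classes
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W, b = retrain_softmax(feats, labels, n_classes=2)
pred = softmax(feats @ W + b).argmax(axis=1)
acc = (pred == labels).mean()
```

On well-separated toy features the retrained head classifies nearly perfectly; on real small medical target sets, freezing the backbone is what limits overfitting.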
IV. CONCLUSIONS
In this paper, a novel approach to transfer learning for classification in the medical image domain using a bridge database was presented. By learning the projection and classification functions from source to bridge and from bridge to target, the domain differences between the source and target databases can be mitigated. Experimental results demonstrated that the proposed method provides high classification performance even with a small target database. | 2017-08-10T07:57:05.000Z | 2017-08-10T00:00:00.000 | {
"year": 2017,
"sha1": "0e347d5c461ff5fb4b4f3a1ae054cd072602692b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1708.03111",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0e347d5c461ff5fb4b4f3a1ae054cd072602692b",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3963693 | pes2o/s2orc | v3-fos-license | Elevated Expression of miR302-367 in Endothelial Cells Inhibits Developmental Angiogenesis via CDC42/CCND1 Mediated Signaling Pathways
Rationale: Angiogenesis is critical for embryonic development and microRNAs fine-tune this process, but the underlying mechanisms remain incompletely understood. Methods: An endothelial cell (EC)-specific miR302-367 line was used as a gain-of-function model and anti-miRs as a loss-of-function model to investigate the effects of miR302-367 on developmental angiogenesis, with embryonic hindbrain vasculature as an in vivo model and fibrin gel bead and tube formation assays as in vitro models. Cell migration was evaluated by Boyden chamber and scratch wound healing assays, and cell proliferation by cell count, MTT assay, Ki67 immunostaining and PI cell cycle analysis. RNA high-throughput sequencing identified miR-target genes, confirmed by chromatin immunoprecipitation and 3'-UTR luciferase reporter assay; finally, target site blockers determined the pathway contributing most significantly to the phenotype observed upon microRNA expression. Results: Elevated EC miR302-367 expression reduced developmental angiogenesis, whereas angiogenesis was enhanced by inhibition of miR302-367, possibly due to intrinsic inhibitory effects on EC migration and proliferation. We identified Cdc42 as a direct target gene: elevated EC miR302-367 decreased total and active Cdc42 and further inhibited F-actin formation via the WASP and Klf2/Grb2/Pak1/LIM-kinase/Cofilin pathways. MiR302-367-mediated Klf2 regulation of Grb2 fine-tuned Pak1 activation, contributing to the inhibited F-actin formation and thus the attenuation of EC migration. Moreover, miR302-367 directly down-regulated EC Ccnd1 and impaired cell proliferation via the Rb/E2F pathway. Conclusion: miR302-367 regulation of the endothelial Cdc42 and Ccnd1 signaling pathways for EC migration and proliferation advances our understanding of developmental angiogenesis, and provides a rationale for future interventions in pathological angiogenesis, which shares many common features with physiological angiogenesis.
Supplementary Tables
Supplementary Table 1: Partial high-throughput RNA sequencing results of lung endothelial cells from miR302-367 ECTg mutant mice compared with littermate WT control mice. Supplementary Table 2: Primers for qPCR, ChIP and cloning. Supplementary Table 3: Antibodies.
Supplementary Methods
For the MTT assay, HUVECs were seeded in a flat-bottom 96-well cell culture plate at an initial density of 5×10³ cells per well and allowed to grow for 48 hours. 10 µl of MTT solution (5 mg/ml; Sigma) was added to each well, followed by a 4-hour incubation at 37°C; the medium was then removed and 80 µl of a solvent mixture (40 ml isopropanol plus 44 µl of 37% HCl) was added to each well, which was vibrated on a shaking table for 10 minutes to dissolve the formazan formed. The plate was scanned with a microplate reader (Bio-Rad) at 570 nm to measure absorbance.
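The 570 nm absorbance readings are typically converted to relative viability against untreated control wells. The text does not give the exact formula, so the following is a common convention rather than the authors' calculation (well values are placeholders):

```python
def relative_viability(a570_sample, a570_blank, a570_control):
    """Percent viability from MTT absorbance at 570 nm: background
    (medium-only blank wells) is subtracted, then the treated sample is
    expressed relative to untreated control wells."""
    return 100.0 * (a570_sample - a570_blank) / (a570_control - a570_blank)

# e.g. a treated well reading halfway between blank and control
v = relative_viability(0.55, 0.05, 1.05)  # 50.0
```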
For propidium iodide (PI) cell cycle analysis, HUVECs were starved in DMEM supplemented with 5% charcoal-stripped serum or 0.5% regular FBS. After 24 hours, the medium was changed to DMEM with 10% normal FBS. Cells were harvested at designated time points and processed by standard methods, staining cellular DNA with PI; 10,000 cells per sample were analyzed by a flow cytometer (BD Biosciences, Mansfield, MA, USA).
For Ki67 immunofluorescence staining analysis, HUVECs were fixed in 4% neutral-buffered paraformaldehyde, and non-specific binding sites were blocked with PBS containing 10% normal goat serum. The HUVECs were then incubated with primary antibodies against Ki-67 followed by an Alexa 488-conjugated secondary antibody (Thermo Fisher Scientific, Danvers, Massachusetts, USA), and mounted using ProLong Gold anti-fade mountant with DAPI (Thermo Fisher Scientific).
Dynamic cell motility measurement under live cell station
Viable HUVECs were seeded at a density of 1.5×10⁵ cells per well in a 6-well plate and cultured for 12 hours, then transfected with a Lifeact-GFP plasmid. The cells were placed under a live cell station microscope and photographs were taken every two minutes to measure dynamic changes in cell motility (4). Filopodia, lamellipodia, cortex and stress fiber quantification methods were described previously (5-7).
Quantitation of G-actin, F-actin and GTPase activity assay
The amounts of globular G-actin and filamentous F-actin were determined using the G-actin/F-actin in vivo assay kit (Cytoskeleton, Denver, CO, USA), and Cdc42 GTPase activity was measured using the G-LISA activation assay biochem kit (Cytoskeleton Inc., Denver, CO), following the manufacturer's instructions.
RNA purification, RT-qPCR and miRNA quantitation
Total RNA was isolated with Trizol from ECs of newborn mouse lung or from HUVECs and reverse transcribed with the SuperScript First Strand Synthesis System (Invitrogen, Carlsbad, CA, USA). Gene expression was quantified by real-time PCR analysis (RT-qPCR) with the primers listed in Table S1 of the supplementary material. For miRNA quantification, total RNA was extracted from isolated ECs of lung microvessels or from HUVECs using a miRNeasy mini kit, and relative microRNA levels were measured with the TaqMan miRNA reverse transcription kit and miRNA assay kit following the manufacturer's instructions.
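Relative expression from RT-qPCR data of this kind is commonly computed with the Livak 2^(−ΔΔCt) method; the paper does not name its exact quantification scheme, so the following is an assumed, illustrative calculation (Ct values are hypothetical):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Livak 2^(-ddCt) relative expression: the target gene is first
    normalized to a reference gene within each sample (dCt), then the
    treated sample is compared with the control (ddCt)."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# hypothetical Ct values: target down-regulated 4-fold in treated cells
fc = fold_change_ddct(24.0, 18.0, 22.0, 18.0)  # 0.25
```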
RNA high-throughput sequence of vascular endothelial cells isolated from mouse lung
Total RNA was extracted from GFP-positive ECs FACS-sorted from lungs of the R26R-tdTomato-EGFP mouse line (JaxMice, stock number 007576) mated with the miR302-367 gain-of-function or control mouse line, for high-throughput sequencing. RNA samples extracted with Trizol were subjected to 100 bp × 2 non-strand-specific paired-end RNA-sequencing analyses by the Genome Center of WuXi AppTec (8).
Western blot analysis
Total protein extracts (20-50 µg) from HUVECs were resolved on SDS-PAGE gels and transferred to PVDF membranes for western blotting. Antibodies used in this article were listed in the supplementary table S3.
Co-immunoprecipitation (Co-IP)
For GRB2/PAK1 and CCND1/CDK4 immunoprecipitation, one 145 mm dish of cells was harvested by trypsinization, washed with PBS twice, and lysed with RIPA buffer containing 1× PIC by rotating for 1 hour at 4°C. Cell debris was removed by centrifugation, the cell lysate was split into two parts (5% of the lysate was saved as input), and 8 µg of anti-GRB2/PAK1/CCND1/CDK4 antibody or 8 µg of IgG was used for immunoprecipitation at 4°C overnight. 30 µl of protein G beads was used for pull-down at 4°C for 1 hour. Beads were washed with RIPA buffer three times, and bead-bound proteins were lysed with cell lysis buffer (50 mM Tris-Cl, pH 6.8, containing 2% SDS). Lysates and saved inputs were used for western blot detection of PAK1/GRB2/CCND1/CDK4.
CDC42-GTP Pull-Down assay
Briefly, cells were lysed in RIPA buffer and centrifuged at 15,000 rpm for 10 min at 4°C. Supernatants were mixed with PAK-GST beads, which bind specifically to GTP-bound, and not GDP-bound, CDC42 (Cytoskeleton). Levels of CDC42-GTP were detected by western immunoblotting using anti-CDC42 antibodies.
Chromatin Immunoprecipitation (ChIP)
3×10⁷ HUVECs were harvested for the ChIP experiment. Cells were cross-linked with 1% formaldehyde at room temperature for 10 min and then neutralized with 125 mM glycine for 5 min. Cells were rinsed with ice-cold PBS twice and scraped into 1 ml of ice-cold PBS, then re-suspended in 0.3 ml of lysis buffer and sonicated. After centrifugation, supernatants were collected and diluted in IP dilution buffer, followed by immunoclearing with protein A-Sepharose for 2 hours at 4°C. 5 µg of anti-KLF2 (Cat No: ab203591, Abcam) or control IgG (Cat No: 2729S, CST) was used for immunoprecipitation. After immunoprecipitation, 45 µl of protein A-Sepharose was added and incubated for another hour. Precipitates were washed, and DNA was purified after de-crosslinking for real-time PCR. Primers used are listed in Supplementary Table 2.
3'-UTR Luciferase Reporter Assay
The 3'-UTR of CCND1 or CDC42 mRNA containing miR302-367 binding sequence was inserted into pMIR-REPORT, a microRNA 3'-UTR Luciferase vector. Mutagenesis of the seed sequences from the miR302-367 binding sites of CCND1 or CDC42 was generated by PCR-mediated site-directed mutagenesis. The final sequence was validated by DNA sequencing. Luciferase activity was determined 48 hours after transfection using Dual-Luciferase assay kits (Promega, Madison, WI, USA). Individual luciferase activity was normalized to the corresponding Renilla-luciferase activity (9).
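The normalization step described above (firefly activity divided by the Renilla transfection control, per well) can be sketched as follows; the replicate values and the wild-type vs seed-mutant comparison below are hypothetical, not data from the study:

```python
def normalized_luciferase(firefly, renilla):
    """Firefly activity normalized to the Renilla transfection control,
    per well, then averaged across replicate wells."""
    ratios = [f / r for f, r in zip(firefly, renilla)]
    return sum(ratios) / len(ratios)

# hypothetical replicate wells: miR-expressing cells carrying a reporter
# with the wild-type 3'-UTR vs the seed-mutated 3'-UTR
wt = normalized_luciferase([120.0, 110.0, 130.0], [100.0, 100.0, 100.0])
mut = normalized_luciferase([240.0, 250.0, 230.0], [100.0, 100.0, 100.0])
repression = wt / mut  # < 1 indicates miR-dependent repression of the WT UTR
```

A ratio well below 1, abolished by seed mutation, is the usual readout that the 3'-UTR is a direct miR target.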
Plasmid construct of GRB2 promoter luciferase reporter.
pGRB2(1027) is the human GRB2 promoter-luciferase reporter construct, which spans positions -820 to +207 relative to the transcription start site and was amplified by PCR using genomic DNA isolated from HUVECs as a template. The digested PCR product was then cloned into the HindIII-XhoI sites of the pGL3-basic reporter vector (Promega). Details of the primers are shown in Supplementary Table S2.
Promoter luciferase reporter assay
The promoter of the GRB2 gene containing two KLF2 binding motifs was inserted into a pGL3 promoter luciferase vector. The final sequence was validated by DNA sequencing. Luciferase activity was determined 48 hours after transfection using Dual-Luciferase assay kits (Promega). Individual luciferase activity was normalized to the corresponding Renilla-luciferase activity.
Target site blockers for validation of the pathways contributing to endothelial cell migration and proliferation upon elevated miR302 expression
MicroRNAs usually regulate the expression of multiple target genes, and identification of these targets is important for understanding microRNA function. Target site blockers (TSBs) are used to determine which pathway contributes significantly to the phenotype observed upon microRNA expression. Custom-designed TSBs with phosphorothioate backbone modifications from Exiqon (miRCURY LNA™ microRNA TSB) were used in vivo and in vitro to selectively silence the activity of the miR-302 cluster on mouse and human Ccnd1/CCND1 and Cdc42/CDC42, respectively. Retinal in vivo developmental sprouting angiogenesis and cell proliferation, as well as migration and proliferation of cultured HUVECs, were then observed to validate the importance of these miR302 target genes in EC migration and proliferation. TSB sequences are designed with a large arm that covers the miRNA binding site and a short arm outside the miRNA seed to ensure target specificity for the 3'-UTR of Ccnd1/CCND1 or Cdc42/CDC42 (10).
Intraocular injection was performed to observe the in vivo effects of the TSBs as previously described (11). In brief, buprenorphine (0.1 mg/kg) was injected subcutaneously in pups one hour prior to the procedure. The pups were anesthetized by hypothermia and the skin over the eyelid was cleaned with Betadine followed by water and 70% ethanol. Intraocular injections were performed under a dissecting microscope with a 30½-gauge needle attached to a 5 μl glass syringe (Hamilton, Reno, USA); the needle was positioned 1 mm posterior to the limbus and 3 μl (target site blockers, 0.4 mg/20 g) was slowly (3-5 seconds) injected into the vitreous chamber of the eye. For cultured HUVECs, the concentration of the TSBs was 50 nM.
Hypoxia experiment
For hypoxia treatment, HUVECs were maintained in glucose-free DMEM at 37 °C in an atmosphere of 5% CO2, 1% O2, and 94% N2. Transwell, tube formation and wound healing experiments were performed under this hypoxic condition. | 2018-04-03T00:24:13.886Z | 2018-02-05T00:00:00.000 | {
"year": 2018,
"sha1": "015739440c8e0a0809fa4f8c1bff020ab7c12969",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.7150/thno.21986",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a1e694f20c76172735866250ea3c3d151501448",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
58010120 | pes2o/s2orc | v3-fos-license | Therapeutic efficacy and safety of Shexiang Baoxin Pill combined with trimetazidine in elderly patients with heart failure secondary to ischaemic cardiomyopathy
Abstract Background: Shexiang Baoxin Pill (SBP) is one of the most commonly used traditional Chinese patent medicines for cardiovascular diseases. This systematic review was designed to provide rigorous therapeutic efficacy and safety evidence on the use of SBP combined with trimetazidine in elderly patients with heart failure (HF) secondary to ischaemic cardiomyopathy (ICM). Methods: Relevant randomized controlled trials (RCTs) investigating the clinical efficacy of SBP combined with trimetazidine in treating ICM-associated HF were widely searched in electronic databases, including PubMed, Cochrane library, EMBASE, CBM, CNKI, VMIS, and Wanfang up to January 1, 2018. The methodological quality of each trial was assessed according to the Cochrane Reviewers’ Handbook 5.0. Meta-analysis was performed by using Review Manager 5.3. Results: Eighteen RCTs (N = 1532) that met the criteria were included in the review for the assessment of methodological quality. Meta-analysis showed that, when compared with conventional therapy, SBP combined with trimetazidine significantly improved the clinical efficacy and indices of cardiac function (including increasing left ventricular ejection fraction [LVEF] and 6-minute walk distance [6-MWD], decreasing left ventricular end-diastolic diameter [LVEDD] and left ventricular end-systolic diameter [LVESD]) without serious adverse reactions. Conclusion: This work provides evidence of the benefit of SBP combined with trimetazidine for the treatment of HF secondary to ICM. More high quality and well-designed RCTs are needed to confirm these findings.
Introduction
Ischaemic cardiomyopathy (ICM) is a cardiovascular disease defined as diffuse akinesis of the left ventricle with systolic dysfunction caused by chronic myocardial ischaemia. [1] To date, ICM is still the most common cause of heart failure (HF), accounting for approximately 60% of cases worldwide. [2] As the terminal stage of various heart diseases, HF is associated with a significantly higher rate of mortality and morbidity. [3] Despite the multiple remarkable advances in interventional cardiology and modern optimal medical therapy for the treatment of ICM-related HF, patients with ICM have a poor prognosis even after comprehensive revascularization, especially those with LV systolic dysfunction. [1] One of the most important challenges for drug development is therefore the advancement of a more effective and rational drug for HF secondary to ICM.
(Availability of data and materials: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The authors confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.)
Over the last few decades, traditional Chinese medicine (TCM) has provided opportunities for the treatment of various diseases. Simultaneously, integrative medicine has emerged as an increasingly useful and complementary approach to allopathic medicine. As a result, TCM as a whole medical system has become an accepted and integral component of integrative medicine. [4] Shexiang Baoxin Pill (SBP), a treasured TCM formula for cardiovascular diseases, is derived from the classical TCM Suhexiang Pill prescription recorded in Prescriptions of the Bureau of Taiping People's Welfare Pharmacy. Seven substances are present in SBP: Moschus, Radix Ginseng, Cortex Cinnamomi, Borneolum Syntheticum, Styrax, Calculus Bovis, and Venenum Bufonis. [5] Moschus is the main component, with notable activity and lower adverse effects in the treatment of angina pectoris and chest tightness. [6] SBP has been widely used clinically in the treatment of coronary heart disease and myocardial ischaemia. [7,8] In recent years, a number of studies have shown that SBP attenuates mitochondrial injury of cardiomyocytes, through mechanisms related to anti-inflammatory and anti-oxidative-stress effects, improved lipid metabolism and protection of mitochondrial function. [9,10] Thus, SBP has exhibited prominent therapeutic effects on cardiovascular disease and metabolic syndrome.
Trimetazidine is an anti-anginal cardiovascular drug. A large number of preliminary studies have shown that trimetazidine maintains normal energy metabolism in ischaemic or hypoxic cells and increases reduced levels of intracellular ATP. Moreover, trimetazidine reduces left ventricular workload and improves both the clinical condition and quality of life of elderly patients with ischaemic heart disease. [11] A previous study preliminarily demonstrated that SBP has a cardio-protective function, simultaneously reducing the area of myocardial infarction and promoting angiogenesis. [12] In recent years, an increasing number of reports have described the clinical efficacy of SBP combined with trimetazidine in elderly patients with ICM and HF, without serious adverse events or reactions. [13] Nevertheless, until recently there has been a lack of comprehensive evidence to support the efficacy and safety of SBP. Therefore, to promote the rational application of Chinese patent medicine in clinical practice, this research aimed to evaluate the efficacy and side effects of SBP therapy for ICM based on results from randomized controlled trials (RCTs).
This study was performed to assess whether SBP combined with trimetazidine is associated with improved therapeutic efficacy and cardiac function indexes, including left ventricular ejection fraction (LVEF), left ventricular end-diastolic diameter (LVEDD) and left ventricular end-systolic diameter (LVESD). Simultaneously, 6-minute walk distance (6-MWD), plasma brain natriuretic peptide (BNP) level, N-terminal pro-brain natriuretic peptide (NT-ProBNP), serum hypersensitive C-reactive protein (hs-CRP), and side effects are discussed to identify the appropriate scientific evidence regarding SBP in the treatment of ICM with HF. This study may also independently provide more useful evidence to stimulate further research on the therapeutic value of SBP. Thus, an extended meta-analysis with detailed outcomes regarding clinical efficacy and adverse reactions is reported (Fig. 1).
Protocol and registration
This manuscript had been registered in PROSPERO (https://www. crd.york.ac.uk/PROSPERO/#joinuppage) and the registration number is: CRD42018087688.
Inclusion criteria
Two investigators (JXW and JW) independently examined the titles and abstracts of all searched databases to access the trials for inclusion. Based on the search strategy outlined above, the full texts of articles were retrieved if there was any doubt about including an article. A table with the English translation of all the titles and the English abstracts was reviewed by 2 other reviewers (XHL and YXY). The inclusion criteria were: randomized controlled trials (RCTs) with any length of follow-up; the diagnostic criteria of included trials were based on the internationally accepted criteria for the diagnosis of HF secondary to ICM in elderly patients; one or more outcome measures were included, such as clinical efficacy, LVEF, LVEDD, LVESD, BNP, NT-ProBNP, and 6-MWD, and adverse reactions during the scheduled treatment and follow-up. Any discrepancy in opinion regarding inclusion was resolved by consensus.
Exclusion criteria
Trials meeting the following conditions were excluded: non-RCTs; failed account of diagnostic criteria; absence of any quantitative outcome measures included in trials; others: including duplicate publications reporting the same trials, non-clinical experiments, reviews, mechanism research, or animal experiment.
Data collection process
The basic information about patients or participants, such as interventions, comparisons, outcomes (total efficacy rate, safety), and type of RCT, was extracted and summarized independently by 2 investigators (JXW and LZ). All processes for obtaining and confirming data were carried out by 2 investigators (XHL, YXY). Each RCT was validated independently by 2 reviewers (JW, YLZ) who were blinded to the authors and results. All reviewers independently performed study screening, selection, validation, and data extraction, using a predesigned data extraction template to retrieve information from the included studies. Disagreements on the assessment of data were resolved by discussion, and consensus was reached in all cases; authors were contacted regarding any unclear information. For dichotomous outcomes, the number of responders and the total number of participants in the experimental and control groups were extracted for each study. For continuous outcomes (e.g., LVEF, LVEDD, LVESD, 6-MWD), the mean change and standard deviation were extracted along with the total number.
Observation index
In this systematic review and meta-analysis, the observation indexes were the clinical efficacy and safety of SBP, which are clinically relevant when evaluating the pharmacology of SBP in relation to its probable mechanisms. According to the New York Heart Association (NYHA) classification, clinical efficacy is defined on 3 levels: markedly effective, patients achieve complete remission or cardiac function improves by two levels or more; effective, patients achieve partial remission or cardiac function improves by one level, with signs and symptoms relieved to a certain degree; ineffective, cardiac function improves by less than one level, or signs and symptoms are not significantly improved (in severe cases, death ensues). The total effective rate equals the markedly effective rate plus the effective rate. As for the safety of SBP, all adverse reactions of SBP reported in the literature were fully recorded in this study, which may help clinicians decide whether SBP is an appropriate regimen for the treatment of HF secondary to ICM, taking its known adverse reactions and the patient's condition into account.
Risk of bias in individual studies
The methodological quality assessment was carried out using the Cochrane Handbook for Systematic Reviews of Interventions. [14,15] Six domains including random sequence generation (selection bias), allocation concealment (selection bias), blinding of participants and personnel (performance bias), blinding of outcome assessment (detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias) were used for the methodological quality of each included trials. For all the relevant outcomes in the relevant domains, the quality of each item was classified using a nominal scale: "Yes" (low risk of bias), "No" (high risk of bias), or "Unclear" (unclear risk of bias).
Statistical analysis
Statistical analysis was performed with Review Manager 5.3 software (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen). Dichotomous variables were presented as risk ratios (RR), and continuous outcomes as mean differences (MD), each with 95% confidence intervals (CIs). As a quantitative measure of inconsistency, the I-squared (I²) statistic was used to assess heterogeneity: a fixed effect model was applied when heterogeneity was minor (I² < 50%), and a random effects model when I² > 50%. Subgroup analysis was used to evaluate the 2 combination therapy schedules of the control group, sensitivity analysis was performed to evaluate the reliability of the meta-analysis results, and potential publication bias was evaluated using funnel plot analyses.
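The inverse-variance pooling and I² heterogeneity statistic that Review Manager computes for a fixed-effect mean difference can be sketched as follows; the per-trial mean differences and standard errors below are hypothetical, not values from the included trials:

```python
import math

def fixed_effect_md(mds, ses):
    """Inverse-variance fixed-effect pooled mean difference with 95% CI,
    plus Cochran's Q and the I-squared heterogeneity statistic."""
    w = [1.0 / se ** 2 for se in ses]                      # inverse-variance weights
    pooled = sum(wi * md for wi, md in zip(w, mds)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    q = sum(wi * (md - pooled) ** 2 for wi, md in zip(w, mds))  # Cochran's Q
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I² = (Q - df)/Q
    return pooled, ci, q, i2

# hypothetical per-trial LVEF mean differences and standard errors
pooled, ci, q, i2 = fixed_effect_md([6.0, 7.0, 6.5], [0.5, 0.5, 0.5])
```

When I² exceeds roughly 50%, the fixed-effect assumption is doubtful and a random-effects model (which inflates the weights' variance component) is the usual fallback, as in this review.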
Identification of eligible studies
A total of 396 articles were identified as potentially eligible records. After removal of duplicate publications, 121 articles remained for further screening. Of these, 79 were excluded based on title and abstract, and 42 full articles were retrieved for further assessment. Among the latter, 3 articles were non-RCTs, 5 papers were reviews or animal studies, 2 articles were non-relevant RCTs, 6 trials were of low quality, and 8 papers were excluded for other reasons. Finally, 18 studies [16-33] with 1532 patients with HF secondary to ICM who met the criteria were included in the meta-analysis. The flow diagram of the study screening is shown in Fig. 2.
Characteristics of the included trials
The characteristics of the 18 included studies with 1532 patients (770 in the experimental group and 762 in the control group) were investigated. As shown in Table 1, 922 male patients (60.2%) and 610 female patients (39.8%) were included in this systematic review. The age of the participants ranged from 44 to 85 years. Among them, 8 studies [16,21,24,26,28,30,31,33] reported a disease course of 1 to 18 years. Fifteen studies [16,17,19-28,31-33] reported NYHA classification between II and IV according to the Criteria Committee of the New York Heart Association. [34] Additionally, the treatment duration in 77.8% of the trials was 6 months. The dosage of SBP was 2 pills 3 times a day, and that of trimetazidine was 20 mg 3 times a day. Finally, the intervening measures in the experimental and control groups are clearly outlined in Table 1.
Conventional treatment (CT) included nitrates, statins, angiotensin converting enzyme inhibitors (ACEI), β-receptor blockers, diuretics, and calcium channel blockers. Since trimetazidine is conventionally used for cardiovascular disease, the control group was defined as conventional treatment with or without trimetazidine. The characteristics of the included studies are shown in Table 1.
Methodological quality of included trials
The methodological quality of each included study was evaluated according to the Cochrane risk of bias estimation. All included trials were RCTs, among which 3 trials that used random number tables [30,31,33] and 2 trials that employed lottery randomization [21,25] were designated as low risk. However, 4 trials that used a therapeutic randomization method [24,26,28,29] were designated as high risk in random sequence generation and allocation concealment. None of the studies reported blinding of participants and personnel or blinding of outcome assessment. Incomplete outcome data and selective reporting were noted in 1 record [16] (Fig. 3).
LVEF improvement
LVEF is a stable and reliable reflection of left ventricular function. It is an important indicator of myocardial pump function and has been widely used for the assessment of HF and the evaluation of drug efficacy in clinical diagnosis and drug research. Among the included studies, 11 [16,17,20,21,23-26,28,30,31] showed that the increase in LVEF was significantly greater with SBP combined with trimetazidine and conventional treatment than in the control group. As shown in Fig. 5, there was substantial heterogeneity in LVEF improvement (P < .00001, I² = 84%), so a random effects model was used to pool this meta-analysis. The meta-analysis showed that the LVEF of patients in the experimental group was much higher than that in the control group (MD = 6.68, 95% CI [5.92, 7.43], P < .00001). Sensitivity analysis found that Lin et al [13] demonstrated significant heterogeneity compared with the other studies.
The comparison of 6-MWD
The 6-MWD was used as a supplemental parameter of hemodynamics. In this systematic review, 5 studies [16,24,26,28,30] with 534 patients reported the level of 6-MWD. No heterogeneity was found among individual trials (P = .81, I² = 0%) (Fig. 7), and a fixed effect model was used for analysis. The pooled analysis suggested that SBP combined with trimetazidine therapy significantly improved 6-MWD compared with the control group (MD = 62.07, 95% CI [54.43, 69.71], P < .00001), indicating that the exercise endurance of HF patients was markedly increased.
Notably, 2 trials [18,24] with conventional treatment alone did not report the clinical efficacy of SBP combined with trimetazidine compared with a control group. The subgroup analysis of these 2 schedules of control groups demonstrated no obvious heterogeneity in either the trimetazidine plus conventional treatment group (P = .68, I² = 0%) or the conventional treatment alone group (P = .90, I² = 0%). The pooled analyses suggested a significant difference between the 2 schemes, with trimetazidine plus conventional treatment (RR, 1.27; 95% CI, 1.19-1.35; P < .00001) (Fig. 8A) showing greater improvement in the total efficacy rate than conventional treatment alone (RR, 1.22; 95% CI, 1.12-1.34; P < .00001) (Fig. 8B).
Adverse reactions
Adverse events were not mentioned in 7 of the studies (38.9%), [19,20,23,25,26,30,32] while 11 (61.1%) [16-18,21,22,24,27-29,31,33] reported adverse reactions observed during treatment. Re-hospitalizations, adverse reactions and deaths in the SBP plus trimetazidine group and the control group are listed in Table 2. Neither group experienced any serious adverse reactions during treatment in 2 of the studies. [28,33] The side effects included digestive symptoms such as abnormal bowel sounds and slight diarrhea, as well as headache and dizziness; [18,24,31] impairment of liver and kidney function was also observed. [21] Although mortality is a very important indicator in the assessment of the side effects of an administered medicine, these trials only reported the number of deaths, gave very inaccurate descriptions of the cause of death, and were not designed to evaluate mortality as a primary outcome measure. In summary, the incidence of adverse events in the treatment group was significantly lower than that in the control group. Thus, SBP combined with trimetazidine represents a better-tolerated therapy for patients with HF secondary to ICM.
Other outcomes
Plasma brain natriuretic peptide (BNP) and serum hypersensitive C-reactive protein (hs-CRP) were selected as outcome measures in 2 trials [23,28] with 146 patients. Only 1 trial [23] reported the level of N-terminal pro-brain natriuretic peptide (NT-ProBNP), and 1 trial [25] evaluated heart rate. For these 4 indicators, no significant differences were found between the 2 groups before treatment, while all of the levels in the SBP plus trimetazidine and CT groups were significantly lower than those in the control group after therapy.
Discussion
Heart failure (HF) is a global epidemic with increasing prevalence due to an ageing worldwide population with increasing comorbidities. [35] Ischaemic cardiomyopathy (ICM), a frequent cause of HF, is one of these comorbidities. ICM has been widely recognized as one of the primary causes of death and disability, with high mortality and morbidity, and is a serious threat to patient health and quality of life. [36] Worse still, the prognosis of patients with heart failure is quite poor. [37] Currently, several therapeutic medicines are available to treat HF that have improved survival, including ACE-I, beta-blockers (BB), angiotensin receptor blockers (ARB), diuretics, antiarrhythmic medications, vasodilators, calcium channel blockers (CCB), and statins. However, for the other types of HF, only a novel blocker of the funny channel and a combination of hydralazine/isosorbide dinitrate have improved survival, while the previously listed agents only improved signs and symptoms. [38] For patients with HF due to ICM, drug therapy is aimed at improving blood flow, correcting electrolyte disturbances, and regulating the heart rate. Nevertheless, the long-term use of conventional agents [39] is associated with adverse reactions and side-effects. Therefore, more effective agents for treating patients with HF due to ICM are desirable. TCM has played a significant role in treating cardiovascular diseases in the elderly for the past 2000 years. As the two approaches are mutually complementary, TCM combined with Western medicine has potential benefits for elderly patients with cardiovascular diseases, with fewer side effects. [40] A previous meta-analysis comprehensively evaluated the clinical efficacy of SBP combined with trimetazidine in the treatment of ICM and HF in elderly patients. [13] That study indicated that SBP combined with trimetazidine potentially increased clinical efficacy, LVEF, and 6-MWD, while LVEDD decreased significantly when compared with conventional treatment.
In this systematic review, we further assessed the effect of SBP combined with trimetazidine on HF due to ICM and provided more extensive findings. First, the clinical efficacy of SBP combined with trimetazidine was evaluated according to the "Nomenclature and criteria for diagnosis of diseases of the heart and great vessels" developed by the criteria committee of the New York Heart Association, 9th ed., [34] and was reported based on the NYHA classification, which is directly related to the improvement of patients with ICM-associated HF. Second, since an individual patient's long-term prognosis and lower mortality rate are related primarily to his or her cardiac function, [41] cardiac function indexes such as LVEF, LVEDD, and LVESD were systematically evaluated. Moreover, trimetazidine is commonly used in treating cardiovascular diseases: it improves myocardial energy metabolism and protects both myocardial cells and blood vessel endothelia. [42] Thus, trimetazidine may be used in conventional therapy, especially for patients with angina. SBP combined with trimetazidine has a certain clinical effect in treating elderly patients with HF secondary to ICM, and the mechanisms may be related to reductions in the plasma BNP level and serum hs-CRP level. [43] In this study, subgroup analysis was performed because the control group comprised patients taking conventional treatment with and without trimetazidine. Furthermore, we also assessed the medication-associated adverse reactions of SBP/trimetazidine combination therapy as well as publication bias.
In this study, the overall meta-analysis demonstrated that, compared with the control group, SBP combined with trimetazidine therapy significantly improved the total efficacy rate in the treatment of ICM-related HF in elderly patients. Beyond the effects seen in the control group, a more significant increase in LVEF and 6-MWD as well as a reduction in LVEDD and LVESD were observed in the experimental group. Subgroup analysis indicated that conventional treatment with trimetazidine in the control group had greater clinical efficacy than conventional treatment alone. However, more rigorous and well-designed RCTs should be performed owing to the small samples in this systematic review. Furthermore, no serious adverse events were observed in any of the included trials. Though the plasma levels of BNP, hs-CRP, and NT-ProBNP were reported in only 2, 2, and 1 trials, respectively, their levels were all significantly lower in the SBP plus trimetazidine and CT group than in the conventional treatment group. Additional trials would provide a better database for further clinical research. Even though the clinical efficacy and safety of SBP combined with trimetazidine were comprehensively analyzed with a large number of trials and strict methodologies, the existence of potential publication bias indicates that this study still has limitations. First, characteristics of the included trials were not reported in full detail, such as the age of patients, course of disease, and NYHA classification. Second, with regard to methodological quality, it must be noted that neither the blinding of participants and personnel (performance bias) nor the blinding of outcome assessment (detection bias) was reported in any of the trials. Generally, independent ethics committees at each study site should approve the study protocol, and all patients should provide written informed consent before initiation of study-specific procedures.
However, very few trials reported such a statement of ethics. All the included trials were published in Chinese. In addition, the presence of publication bias may have influenced the results of this systematic review. Finally, due to the small sample sizes among the included studies, further subgroup analyses were not performed. Because of these limitations, more rigorous and well-designed RCTs are needed to confirm these findings.
Conclusion
Overall, this systematic review suggested that SBP combined with trimetazidine provides clear clinical efficacy for the treatment of HF secondary to ICM, indicating that the combination therapy has some clinical potential. However, due to the small samples and generally lower-quality studies included in this review, more evidence from high-quality trials is needed to confirm the advantages of extensive clinical use of SBP for elderly patients with ICM-related HF. | 2019-01-22T22:20:46.099Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "edfa3109034d715b1a6c5478596b3e94fba2c53a",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000013580",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "edfa3109034d715b1a6c5478596b3e94fba2c53a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236516208 | pes2o/s2orc | v3-fos-license | Vacuum Preconsolidation Settlement Characteristics and Microstructural Evolution of Marine Dredger-Filled Silt
The method of vacuum preloading for foundation treatment is used in the construction of the Fangchenggang coastal area in Guangxi province, China. The thick marine dredger-filled silt has a considerable impact on the treatment effort. In this study, the mineral composition and grain size distribution of these silts were analyzed to investigate their consolidation settlement properties and microstructures. The scanning electron microscope and finite element method were adopted. The results reveal that the dredger-filled silt in this area is composed mainly of sand with particle size mostly smaller than 0.075 mm. To replicate the construction process, drainage by the vacuum preloading method was simulated by setting different water levels in the finite element analysis. The displacement and the dissipation of the pore water pressure obtained by the simulations were reasonably consistent with the field monitoring data. In addition, the results obtained using the scanning electron microscope indicate that the equivalent diameter of the structural unit and that of the pore unit decrease with the silt depth. However, the value of the structural abundance approaches one, whereas the pore abundance is significantly different from one.
Introduction
To increase the area of coastal and lakeside construction lands, the method of land reclamation with dredger-filled soil is adopted to treat the foundation [1]. After the treatment, the foundation is composed of dredger-filled sands, dredger-filled silt, or a mixture of the two [2]. Its composition is exceptionally nonuniform, and the engineering performance is low [3][4][5][6][7]. In foundation treatment, vacuum preloading with simple construction and low cost is generally used to reinforce hydraulic fill foundations [8]. It is necessary to study the vacuum preloading consolidation settlement characteristics and microstructures of dredger-filled silt to satisfy the requirements for constructing buildings on foundations. It is also important to establish a dredger-filled silt consolidation model to provide a theoretical basis for long-term postconsolidation settlement.
Extensive comparative investigations have been carried out worldwide on the formation process and the macroscopic consolidation settlement characteristics or microscopic composition and structure of dredger fills. In the initial studies on soft soil deformation, scholars observed that the results of large deformations of soft soils calculated using the classical Terzaghi theory were inconsistent with in situ results. Subsequently, the Barron consolidation theory was put forward and widely used [9]. It is more suitable than the Terzaghi consolidation theory for analyzing the consolidation and settlement of foundations after plastic drainage board treatment [10]. For example, Rujikiatkamjorn and Indraratna [11] presented a procedure for designing vertical drains in soft soils. They provided examples considering the linear variation in the horizontal permeability coefficient. The equivalent diameter equation was proposed according to different geotechnical conditions. Among these, the most widely used are the equivalent area and equivalent perimeter [12,13]. Sun et al. [14] obtained finite element calculation results using the Biot consolidation theory and the Duncan-Chang nonlinear constitutive model. These show that the variation of the vacuum degree along the depth is consistent with the scenario of on-site construction. Huang et al. [15] proposed the elliptic cylindrical coordinate theory for the consolidation of sand drain foundations and obtained a new fundamental consolidation equation. It demonstrated that the drainage effect of plastic drainage panels cannot be satisfied when the equivalent perimeter method is applied.
To speed up the consolidation process of soft soils and increase their strength, scholars have combined other ground treatment methods with plastic drainage boards to address hydraulic fill soft soil foundations. The vacuum-surcharge preloading method was proposed and has been demonstrated to be effective for increasing foundation strength and the degree of consolidation (e.g., [16,17]). Liu et al. [18] improved the vacuum preloading method and developed a new method for alternate vacuum preloading.
Through the mercury intrusion test and shear test, Liu verified that the direction of movement of the dredger fill particles can be altered with this method and that the dissipation rate of pore water pressure is five times that of the traditional method. Scholars have also adopted plastic drainage boards in combination with other foundation treatments such as deep mixing piles [19] and cement-soil piles [20] to treat soft soils. The treatment method can reduce the surface settlement as well as the lateral displacement of the foundation.
Scholars have conducted various analyses on dredger fill using modern scientific and technological means to address its micromechanism. Manning et al. and Spearman et al. [21,22] studied the deposition patterns of clay and sandy soil mixtures in European estuaries and explained the interaction between sand and mud during sedimentation. Zhang et al. [23] simulated the consolidation process of dredger fill in a laboratory. They observed that the degree of agglomeration of particles in the deep part was higher than that in the shallow part. Furthermore, the degree of agglomeration of particles in the shallow part varied more than that in the vertical direction. The vertical and horizontal variations for different depths were similar. Zhang et al. and Liu et al. [23,24] simulated the formation process of dredger fill. They observed the variation in dredger fill particles in the vertical and horizontal directions and divided the sedimentation of dredger fill into two stages: fine particle flocculation sedimentation and mud self-weight consolidation. Some researchers explored the relationship between the deposition of soft soil and the pH of the sedimentary environment [25][26][27]. They summarized the law of the effect of pH on the physical properties of soft soil, such as porosity and compressibility. Meanwhile, a series of shear tests were conducted on structured and unstructured clay to study their stress-strain laws [28][29][30]. Cheng et al. [31] divided the shear process into three stages using scanning electron microscope analysis: pore gas compression and water migration, agglomerate spallation and particle slip, and structure destruction and formation of new structures.
Research on the vacuum preloading consolidation settlement performance of dredger-filled silt can not only avoid the serious consequences of foundation instability, foundation damage, and even cracking of upper buildings but also provide a construction basis for building structures or further foundation treatment. In the present study, the consolidation characteristics and micromechanism of the dredger-filled silt were studied from both the macro- and microperspectives through mineral analysis, pore microstructure analysis, consolidation tests, and finite element simulation. This would provide a theoretical basis for subsequent ground treatment and postsurcharge.
Materials.
Marine dredger-filled silt was sampled from different depths by in situ drilling from the coastal area formed by dredging and reclaiming land in Fangchenggang, Guangxi province, China (Figure 1). The silt soil layer in this coastal area will be treated with the vacuum preloading method in the future. The physical and mechanical properties of the dredger-filled silt at various depths are shown in Table 1. The fact that the liquidity indexes (the most important parameters) of the samples from the different depths are higher than one indicates that the marine dredger-filled silt is in a fluid plastic state. Meanwhile, this type of silt has rich water content and high porosity: its natural water content is higher than the liquid limit, and its void ratio is higher than 1.5.
Experimental Method.
To explore the micromechanism of the process of consolidation, the microstructure of marine dredger-filled silt was explored on the basis of three aspects: mineral composition, particle analysis, and scanning electron microscope analysis. In the X-ray diffraction (XRD) method, diffraction occurs when X-rays are directed into the crystal lattice of clay minerals. Different clay minerals have different lattice structures, whereby different diffraction patterns would be produced. The mineral composition can be determined by analyzing the diffraction patterns. The particle analysis methods that are applied differ with the grain size. A sieving test was used for large-size particles to determine the percentage of each particle group in the total mass of soil particles in the marine dredger-filled silt. Meanwhile, laser particle size analysis was used for small-size particles. Herein, the size of soil particles is determined according to the principle of light scattering.
A scanning electron microscope (SEM) was used to observe the pore size, and the shape, composition, and connection of the marine dredger-filled silt particles. Furthermore, the interaction between different particles was analyzed using electron microscopy images.
Soil consolidation is a process of compression settlement in which air and water are discharged gradually from the soil under a certain pressure. The oedometer test is the most common method to measure the consolidation and compression characteristics of soil. The compression deformation of marine dredger-filled silt at different depths under different pressure states (1.0, 12.5, 25.0, 50.0, 100.0, 200.0, and 400.0 kPa) was tested using this method. Then, the relationship of consolidation pressure with silt deformation and void ratio was analyzed. Furthermore, the consolidation and compression deformation characteristics of marine dredger-filled silt were studied.
Sand Well Equivalent.
Owing to the small size of the plastic drainage plate, it is inconvenient to divide the mesh for finite element calculation. Therefore, in the present study, the sand well equivalent method was adopted to transform the sand well foundation into a sand wall foundation [32]. The smearing effect of the plastic drainage board in the laying process is considered. Furthermore, the average consolidation degree of the foundation can be ensured to be equal before and after the transformation at any time according to the sand well equivalent method mentioned above. The equivalent formula is as follows: where k_xp and k_zp are the horizontal and vertical permeability coefficients, respectively, of the sand wall foundation; k_ra and k_za are the radial and axial permeability coefficients, respectively, of the sand well foundation; and D_x and D_z are the adjustment coefficients of the horizontal and vertical permeability coefficients, respectively, and are expressed as follows: μ_a represents the affecting coefficient of the smear area and is expressed as follows: where B is half of the sand wall spacing in the sand wall foundation; r_e is the effective drainage radius of the sand well foundation; n is the diameter ratio of the sand well in the sand well foundation, n = r_e/r_wa; s is the ratio of the smear radius to the sand well radius in the sand well foundation, s = r_s/r_wa; n_p is the ratio of B to half of the sand wall thickness, n_p = B/r_wp; s_p is the ratio of half the smear zone thickness to half the sand wall thickness, s_p = r_sp/r_wp; k_s is the permeability coefficient of the sand well; and v is Poisson's ratio of the foundation soil. The dredger fill foundation of Fangchenggang is vacuum-preloaded with plastic drainage boards. These are laid in a square pattern with a spacing of 0.9 m. This configuration is equivalent to a sand wall foundation according to the above formulas, with r_wa = r_wp = 0.033 m, B = 2 m, and r_e = 0.508 m.
Considering the influence of the drainage board coating on the consolidation degree [33], s = s_p = 3 is adopted. The equivalent results are shown in Table 2.
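The geometric ratios that enter the equivalence (n, s, n_p, s_p) follow directly from the field values quoted above. A minimal sketch of that arithmetic, using only the definitions given in the text (the full expressions for D_x, D_z, and μ_a are not reproduced here and would be needed for the actual permeability conversion):

```python
# Geometric ratios for the sand-well-to-sand-wall equivalence,
# computed from the field values stated in the text.
r_wa = 0.033   # sand well radius (m)
r_wp = 0.033   # half of the sand wall thickness (m)
r_e = 0.508    # effective drainage radius of the sand well foundation (m)
B = 2.0        # half of the sand wall spacing (m)

n = r_e / r_wa      # sand well diameter ratio, about 15.4
n_p = B / r_wp      # wall spacing ratio, about 60.6
s = 3.0             # smear radius / sand well radius (adopted value)
s_p = 3.0           # smear half-thickness / wall half-thickness (adopted)
```

These ratios would then be substituted into the (omitted) D_x, D_z, and μ_a expressions to obtain the equivalent permeabilities k_xp and k_zp of Table 2.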
Modeling and Parameters.
After transforming the three-dimensional sand well foundation into a two-dimensional sand wall foundation, the conditions and stages of construction are set according to the on-site scenario of vacuum preloading. In addition, the finite element analysis is established according to the physical and mechanical parameters of the soil in the construction field. The parameters are shown in Table 3. The foundation surface and the sand wells between the two rows of sealed walls are set as drainage boundaries to simulate the on-site drainage conditions. Because the sand wells are equivalent to plastic drainage boards without consolidation, the sand wells are set as nonconsolidated conditions. The negative pressure of vacuum is simulated by setting the node head. At the site, the negative pressure of vacuum is applied step by step and stabilized at −85 kPa. Therefore, the foundation surface nodes and sand well nodes between the two rows of sealing walls are set to a pore water pressure head of −8.5 m, and the foundation surface nodes outside the two rows of sealing walls are set to zero pore water pressure. The on-site pore water pressure monitoring data reveals that the on-site water level has decreased by 6 m. Hence, the groundwater level is lowered from 19 m to 13 m in six steps.
Assumptions.
To explore the consolidation settlement characteristics of marine dredger-filled silt, the sand well foundation was converted into a sand wall foundation to simulate the on-site vacuum preloading. The assumptions are as follows: (1) The plastic drainage board is regarded as a sand well according to the equivalent area, and the smearing effect of the plastic drainage board in the laying process is considered. The smearing radius is three times that of the sand well.
(2) The conversion between the sand well foundation and the sand wall foundation is equivalently obtained based on Biot's axisymmetric consolidation theory. This can ensure that the degree of consolidation in the sand well foundation and the sand wall foundation is identical and that the average pore pressure at each depth remains the same in the two foundations at any time.
(3) The node water head on the sand wall is −8.5 m. In addition, the surface of the foundation is set according to the scenario of the on-site vacuum preloading, where the stable negative pressure is −85 kPa. The water level decreases continuously in the process of on-site foundation consolidation. Therefore, the finite element analysis adopts multiple layered precipitations (stepwise drawdowns of the water table) to simulate the decrease in water level.
Characterization of the Soils.
The microstructure of soil is composed of minerals, and the mineral content affects the physical and mechanical properties of soil. The crystalline structure and mineral composition of the soil were analyzed by XRD. As illustrated in Figure 3, the mineral composition mainly comprises quartz, mica, and kaolinite. This indicates that the marine dredger-filled silt contains a large amount of sand and is plastic in this area.
To explore the particle size distribution of marine dredger-filled silt, the particle size of the soil sample was analyzed using the sieving method and the laser grain size analyzer. These effectively demonstrated the grain size distribution of the marine dredger-filled silt, as shown in Figure 4. Figure 4 shows that the main range of the grain size of the marine dredging silt is below 0.75 mm, which accounts for 90.5% of the total particle content. This indicates that the field dredger-filled silt can be classified as fine-grained soil. The content of particles smaller than 0.075 mm is 99%, and that of clay particles smaller than 0.005 mm is 30.8%. This indicates that the marine dredger-filled silt particles are mainly powder particles, followed by clay particles. Figure 5 shows the microstructure of the marine dredging silt after magnification by 20,000 times using the scanning electron microscope. It shows the relatively large pores of the marine dredger-filled silt, long flocculent soil particles, disordered particles, and a significantly loose structure. Figure 6 shows the relationship of the consolidation pressure with the void ratio and axial strain, obtained through the oedometer test of the dredger-filled silt at depths of 1 m, 3 m, and 6 m. The relationship between the void ratio of dredger-filled silt and consolidation pressure is shown in Figure 6(a). The void ratio of dredger-filled silt at the same depth decreases gradually with the increase in consolidation pressure. Essentially, the void ratio decreases linearly with the logarithm of consolidation pressure. Under constant consolidation pressure, the pore ratio of dredger-filled silt decreases gradually with the increase in depth. Figure 6(a) indicates an apparent inflection point of consolidation pressure between 50 kPa and 100 kPa for dredger-filled silt at a depth of 1 m, unlike for the other two depths. The stress path and cumulative deformation affect the mechanical properties of soil [34].
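Since the void ratio falls roughly linearly with the logarithm of consolidation pressure, the slope of the e-log(p) line (the standard compression index C_c) summarizes the compressibility measured in the oedometer test. A short sketch of that calculation, using hypothetical points (not the measured values from Figure 6):

```python
import math

def compression_index(p1, e1, p2, e2):
    """Slope of the e-log10(p) line from two oedometer points.

    p1, p2: consolidation pressures (kPa); e1, e2: void ratios.
    Returns C_c = -delta(e) / delta(log10 p), positive for compression.
    """
    return -(e2 - e1) / (math.log10(p2) - math.log10(p1))

# Hypothetical points on the normally consolidated branch
cc = compression_index(100.0, 1.20, 400.0, 0.90)
```

The inflection point noted for the 1 m sample between 50 kPa and 100 kPa would correspond to a change in this slope, i.e. a transition in compressibility.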
From the analysis of the scanning electron microscope images (Figure 5), the upper part of the dredging sludge in this area is structural. Meanwhile, the structure of the lower part is unapparent, and the interaction among the dredging sludge particles is more apparent. Figure 6(b) indicates that the axial strain increases gradually with the increase in the consolidation pressure of the dredger-filled silt at the same depth. Under constant consolidation pressure, the axial strain of the dredger-filled silt decreases gradually as the depth increases. In addition, it is illustrated that the dredger-filled silt in this area is in an elastic deformation state when its strain is less than 20%, whereas it is compacted further and enters a dense deformation state when the strain exceeds 20%.
Numerical Simulation.
The vacuum preloading was carried out for 75 days, and the initial water level in the initial stress field of the model was set as 19 m. According to the field monitoring data on pore water pressure, the water level decreased by 6 m in the field. Therefore, the water level was gradually decreased in layers to 13 m in the finite element model after 75 days of vacuum loading. The result obtained is shown in Figure 7. Figure 7 shows that the vertical displacement profile in the middle of the finite element model is concave and that the settlement of the surface in the middle is the largest (a settlement of 300 mm). The settlement decreases gradually as the depth increases.
The vertical displacement extends to approximately 1 m above the bottom of the silty sand layer. The ground surface outside the clay sealing wall is affected by the vacuum preload, and the ground subsides marginally. The larger the distance from the clay sealing wall, the smaller the surface settlement. The area affected by the vacuum preload extends 5 m outside the sealing wall, as shown in Figure 7.
In addition, we set up settlement monitoring, pore water pressure monitoring, and vacuum degree monitoring in the center of the field area selected for the finite element model. Therefore, the field and the finite element model have identical stratum conditions, as shown in Table 3, and can be compared with each other for analysis. The settlement at different depths along the symmetry axis was selected in the middle of the finite element model of the foundation to analyze the on-site monitoring data on layered settlement, including surface settlement. As shown in Figure 8, the surface settlement results of the two conditions are approximately 300 mm and differ negligibly after 75 days. However, the settlement forms are different. On the one hand, the settlement in the finite element model increases linearly and finally stabilizes at 300 mm. On the other hand, the on-site foundation settlement shows curved growth in the early stage, linear growth in the middle stage, and gradual stabilization in the final stage. Figure 8 shows that the results and trends of the settlement on site and in the finite element model, at depths of 4 m and 7 m, are approximately similar. The settlement is stable at approximately 140 mm at a depth of 4 m, and approximately 50 mm at a depth of 7 m. Overall, the settlement decreases as the depth increases. However, there is an evident difference in the settlement in the early stage between the site and the finite element model. This phenomenon can be explained by the low vacuum sealing effect during the on-site vacuuming process, which results in air leakage (Figure 9).
Owing to the air leakage on site during the 15th-20th day of vacuuming, the settlement of the site at depths of 4 m and 7 m shows a short upward adjustment. The settlement at each depth of the site increases linearly when the vacuum is maintained at 85 kPa. The rapid rate of surface subsidence in the early stage is because the vacuum source in the shallow layer of the ground produces a relatively sufficient vacuum in the early stage of vacuuming. The water level decreases gradually with the consolidation of the foundation. At this point, the vacuum becomes relatively insufficient and can hardly reach the deep soil, resulting in a decreasing surface settlement rate in the later stage. The pore water pressure at the depths of 3 m, 6 m, and 9 m monitored on site and the pore water pressure variation curves at these depths calculated by the finite element method are shown in Figure 10. The figure shows that the trends of pore water pressure at the depths of 6 m and 9 m are essentially identical. Meanwhile, the field pore water pressure at the depth of 3 m is significantly lower than the pore water pressure calculated by the finite element method.
This is because, during the vacuuming process, the depth of 3 m is close to the surface, and the shallow foundation has already been drained and consolidated. The pore water pressure measured at this time includes the negative pressure caused by the on-site vacuuming. The pore water pressure at each depth decreases stepwise because layered precipitation was adopted in the finite element calculation model. This demonstrates that layered precipitation can better simulate the on-site vacuum drainage consolidation process.
Microstructural Evolution.
The dredger-filled silt is a sedimentary soil formed by hydraulic dredging using a dredger and mud pump. The depositional process of dredger-filled silt is divided into three stages: the silt falling under self-weight stage, the self-weight consolidation stage, and the sedimentation balance stage [35]. The dredger-filled silt at different depths, in different consolidation states, is considered for analysis with a scanning electron microscope (SEM), as shown in Figure 11. The figure indicates that the microstructure of marine dredger-filled silt at different depths varies significantly. This is reflected mainly by two indicators of the structural unit and pore unit: equivalent diameter and abundance. The equivalent diameter refers to the diameter of the equivalent circle whose area is equal to that of the pore unit or structural unit in the soil microstructure. Abundance refers to the ratio of the long axis to the short axis of a pore unit or structural unit. The equivalent diameter and abundance of the pore units and structural units of the dredger-filled silt at different depths are measured according to Figure 11. The results are shown in Table 4.
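The two microstructural indicators defined above translate directly into simple formulas. A minimal sketch (the measured areas and axis lengths behind Table 4 are not reproduced; the inputs below are illustrative):

```python
import math

def equivalent_diameter(area):
    """Diameter of the circle whose area equals the unit's area."""
    return 2.0 * math.sqrt(area / math.pi)

def abundance(long_axis, short_axis):
    """Ratio of the long axis to the short axis; 1.0 for a circular unit,
    increasingly far from 1.0 for flat or elongated units."""
    return long_axis / short_axis

# Illustrative inputs: a circular unit of radius 2.5 (any length unit)
d = equivalent_diameter(math.pi * 2.5 ** 2)  # equals the true diameter, 5.0
a = abundance(4.0, 2.0)                      # an elongated unit, ratio 2.0
```

With these definitions, a structural abundance near one corresponds to rounded flocs, while a pore abundance far from one corresponds to flattened pores, matching the trends reported for Table 4.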
As the depth of the dredger-filled silt increases, the equivalent diameter of the pores decreases gradually, and the pore abundance increases gradually and shifts away from one. This implies that as the depth of the marine dredger-filled silt increases, the pores between structural units become increasingly small, whereas the pores tend to be flat. As the depth increases, the equivalent diameter of structural units decreases gradually, and the abundance gradually approaches one. This implies that as the depth of the marine dredger-filled silt increases, the soil flocs are compressed and gradually become smaller. Furthermore, the flocs change from long strips to standard circles. In addition, Figure 11(b) shows that the soil flocs at a depth of 3 m are bent. This indicates that the squeezing effect between the flocs is becoming increasingly apparent. Simultaneously, as the pores decrease, the connection between the soil flocs becomes tighter, and the interaction between the particles becomes stronger.
Conclusions
This study investigated the consolidation characteristics and the micromechanism of dredger-filled silt from the macro- and microperspectives, through mineral analysis, pore microstructure analysis, consolidation tests, and finite element simulation. The conclusions are as follows: (1) The mineral analysis of the field dredger-filled silt revealed that the dredger-filled silt in this area is plastic and has a large sand content. The grain size analysis revealed that the dredger-filled silt is composed mainly of powder particles, followed by clay particles. (2) The oedometer test of the dredger-filled silt showed that its void ratio decreases gradually with the increase in depth. Moreover, the shallow silt is structural, and the deep silt has no apparent structure. The dredger-filled silt is in an elastic deformation state when the axial strain is less than 20%, whereas it enters a compact deformation state when the strain is at least 20%.
(3) The three-dimensional sand well foundation is transformed into a two-dimensional plane strain sand wall foundation for finite element calculation. The results show that the settlement law and pore water pressure dissipation rate are essentially consistent with the on-site monitoring results. Therefore, the use of layered precipitation and node heads in the finite element calculation to simulate vacuum drainage consolidation is in line with reality. (4) The analysis and comparison of the scanning electron micrographs of the dredger-filled silt at different depths show that as the depth increases, the equivalent diameter of the structural unit and pore unit of the marine dredger-filled silt decreases, whereas the abundance increases. However, the abundance of structural units approaches one, and the pore unit keeps shifting away from one.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2021,
"sha1": "24fa43ea3a6be321f602b2a8eea581afc10fa68d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ace/2021/5529478.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "24fa43ea3a6be321f602b2a8eea581afc10fa68d",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Sedentary behaviour facilitates conditioned pain modulation in middle-aged and older adults with persistent musculoskeletal pain: a cross-sectional investigation
Abstract Introduction: Higher physical activity (PA) and lower sedentary behaviour (SB) levels have demonstrated beneficial effects on temporal summation (TS) and conditioned pain modulation (CPM) in healthy adults. This cross-sectional study investigated the relationships between PA and SB and TS/CPM responses in individuals with chronic musculoskeletal pain. Methods: Sixty-seven middle-aged and older adults with chronic musculoskeletal pain were recruited from the community. Questionnaires measuring demographics, pain, and psychological measures were completed. Physical activity and SB levels were measured using the International Physical Activity Questionnaire-short form and the Sedentary Behaviour Questionnaire, respectively. A Semmes monofilament was used to assess mechanical TS (MTS) at the most symptomatic (MTS-S) and a reference region (MTS-R); the change in pain scores (baseline to 10th application) was used for analysis. The CPM procedure involved a suprathreshold pressure pain threshold (PPT-pain4) administered before and after (CPM30sec, CPM60sec, and CPM90sec) a conditioning stimulus (2 minutes; ∼12°C cold bath immersion). For analysis, PPT-pain4 (%) change scores were used. Results: PPT-pain4 (%) change scores at CPM30sec and CPM60sec demonstrated significant weak positive correlations with SB levels and weak negative correlations with PA measures. After adjusting for confounding variables, a significant positive association was found between SB (h/d) and PPT-pain4 (%) change scores at CPM30sec and CPM60sec. No significant associations were found between MTS and PA/SB measures. Conclusion: Sedentariness is associated with higher pain inhibitory capacity in people with chronic musculoskeletal pain. The observed relationship may be characteristic of a protective (sedentary) behaviour to enhance the pain modulatory mechanism. Prospective longitudinal studies using objective PA/SB measures are required to validate the observed relationship in a larger sample.
Introduction
Physical activity (PA) is a commonly prescribed intervention to reduce pain in people with chronic pain. 21 Population-level studies have found associations between regular engagement in PA and lower incidence of chronic pain. 39,40 Mechanisms behind PA in modulating pain have been studied in various settings and populations. 41 Preclinical studies demonstrated that regular engagement in PA influences a range of cellular mechanisms that are responsible for pain hypersensitivity, dysregulation of the endogenous pain modulatory system, and chronic pain development. 4,78,79 Healthy older adults meeting PA recommendations (ie, moderate-vigorous PA levels) demonstrate better experimental pain responses (lower temporal summation [TS] of pain and greater conditioned pain modulation [CPM]). 51,52 Individuals who perform endurance exercise and engage in vigorous activities have a greater CPM effect than the control population. 18,19,22,57,87 The positive impact of PA on pain sensitivity, nociceptive processing, and modulatory mechanisms in healthy individuals may not be similar in people with persistent pain due to altered nociceptive processing and the negative psychosocial contexts associated with persistent pain. 13,50,55 Evidence indicates a nonlinear relationship between PA levels and pain. 31 A large body of evidence suggests that engagement in PA/exercise by people with chronic widespread pain (CWP) often heightens their pain, potentially mediated through the abnormal nociceptive processing and modulatory mechanisms associated with central sensitisation (CS). 14,55 Central sensitisation is considered a key pain mechanism responsible for the maintenance of several chronic musculoskeletal pain syndromes. 17,71,72,81,85 Central sensitisation is characterised by amplification of peripheral nociceptive input 95 and impaired descending inhibition of nociceptive inputs.
56,81,96 Abnormal TS of pain and impaired CPM responses are suggested to be surrogate markers of heightened nociceptive drive and poor descending modulatory drive, respectively. 2,13,24,81,96 In addition to CS, individuals with persistent pain hold negative pain cognitions about PA, which can adversely influence the pain modulatory systems, resulting in a heightened pain experience during PA engagement. 23,33,34 Moreover, no associations were found between pain processing measures and PA levels in individuals with chronic low back pain, suggesting that psychosocial factors may confound this relationship. 59 Therefore, it is essential to understand the relationships of PA with various clinical markers of nociceptive processing and modulatory processes while taking into account a range of confounding factors such as pain catastrophizing and sleep quality. 5,6,33,34,74,76 Evidence on the relationship between PA levels and nociceptive modulatory mechanisms in chronic musculoskeletal pain is scarce. 41,59 Insights into these mechanistic relationships may help to design solutions to optimize PA in individuals with musculoskeletal pain. Therefore, this cross-sectional study aimed to investigate the associations between self-reported PA and sedentary behaviour (SB) levels and measures of nociceptive processing and modulatory mechanisms in a cohort of adults with chronic musculoskeletal pain.
Study design
A cross-sectional observational study.
Sampling strategy
Adults with chronic musculoskeletal pain from an urban community were invited to participate in this study. Convenience sampling, a type of nonprobability sampling method, was used. 75 Study advertisements were published periodically (September 2016-June 2017) in a local (free) newspaper and social media (Facebook); study invitation emails were sent out to the members of the community organisations including Age Concern Otago, Arthritis New Zealand, and University of the Third Age (NZ). Interested volunteers contacted the research team through either telephone or email and underwent eligibility screening by a research team member with a health professional background.
Eligibility criteria
Adults who had chronic musculoskeletal pain, ie, pain persisting for more than 3 months, were eligible for study participation. 15,89 Volunteers with any of the following conditions were excluded: autoimmune diseases (rheumatoid arthritis, gout, systemic lupus erythematosus, and ankylosing spondylitis), previous joint replacement surgery, a history of angina, peripheral vascular disorders, and any neurological conditions or cognitive disorders that would influence sensory testing procedures. The Mini-Mental State Examination was used to ensure the participants were free of any cognitive impairment. 3,61 Ethical approval was obtained from the University of Otago Human Ethics Committee, and all participants provided written consent before study participation.
Procedure
All participants completed self-reported clinical and psychological questionnaires and underwent quantitative sensory testing (QST). Participants' age, sex, ethnicity, and anthropometric measures (height, weight, and waist and hip circumference) were collected. Hand and foot dominance was determined using the Edinburgh Handedness Inventory 58 and Otago Footedness Inventory, 73 respectively. Participants also reported whether they had consumed any pain medications for pain relief on the day of testing.
Pain distribution
Participants specified the location(s) of pain by ticking the relevant boxes of a blank body chart (front and back views) indicating specific body regions (shoulders, arms/elbows, wrist/hands, hip, knee, legs/ankle/feet, neck, chest, or low back). Participants marked an "X" on the body region/joint that hurt the most (ie, the most painful region). The presence of CWP was identified using the 4 items of the "pain subscale" from the London Fibromyalgia Epidemiology Symptom Screening Questionnaire (LFESSQ). 15,88,92 To be classified as having CWP, participants had to respond "yes" to all 4 pain criteria of the LFESSQ, with a positive response for the presence of pain on both the right and left sides. Participants not satisfying the LFESSQ CWP criteria were classified as having regional pain syndrome.
Pain intensity and interference
The Brief Pain Inventory (BPI), a standardized, validated assessment tool, was used to capture the pain intensity of the most painful region (average, least, and worst pain intensity in the past 24 hours and 4 weeks) and pain interference in daily activities. 36 Participants reported the presence of pain in the area nominated as having the worst pain and rated the intensity of pain on an 11-point numeric pain rating scale (NPRS).
Neuropathic pain
The painDETECT questionnaire was used to identify the presence of a neuropathic pain component in the most painful area. The chosen tool has superior diagnostic accuracy when compared with other screening tools. 20 The questionnaire consists of 12 items that measure pain quality rated on a 5-point Likert scale (1 = "never" to 5 = "very strongly"), pain radiation from the primary area of pain (yes or no), and pain course pattern (scored from −1 to 2). The total score ranges from −1 to 38 points, with a score of ≥19 indicative of likely neuropathic pain (≤12: nociceptive pain; 13-18: possible neuropathic pain component [or mixed type]).

Depression, anxiety, and stress scale

The depression, anxiety, and stress scale (DASS-21) was used to measure 3 psychological constructs: depression, anxiety, and stress over the past week. 94 The DASS-21 consists of 21 items rated on a 4-point Likert scale and has adequate validity (r = 0.78-0.84) and reliability (α = 0.70-0.90) in older adults with persistent pain. The total scores on each subscale range from 0 to 42, with higher scores indicating more severe levels of depression, anxiety, and stress.
Pain catastrophizing scale
The pain catastrophizing scale (PCS) was used to measure the extent of catastrophic thoughts about their pain. The PCS consists of 13 items rated on a 5-point Likert scale that measures 3 dimensions of catastrophizing: rumination, magnification, and helplessness. 84 The total score ranges from 0 to 52, where higher scores indicate greater levels of catastrophic thoughts about pain. 77
Pain vigilance and awareness questionnaire
The pain vigilance and awareness questionnaire (PVAQ) was used to measure the frequency of habitual "attention to pain" over the past 2 weeks. 47,48 The PVAQ has 16 items rated on a 6-point Likert scale, and the total score ranges from 0 to 80. Higher scores indicate greater levels of pain vigilance and awareness, which has shown associations with higher pain severity. 67
Central sensitization inventory
The central sensitization inventory (CSI) was used to identify participants with central sensitivity syndromes (eg, fibromyalgia, irritable bowel syndrome, chronic headache, temporomandibular disorders, and pelvic pain syndromes). 53 The CSI consists of 2 parts: part A assesses 25 health-related symptoms common to central sensitivity syndromes, with a total score ranging from 0 to 100, and part B (not scored) asks about previous diagnoses of 1 or more specific disorders, including central sensitivity syndromes. The CSI has demonstrated a high level of test-retest reliability and internal consistency (Pearson r = 0.817; Cronbach's alpha = 0.879). 46
Sleep quality
Sleep quality was assessed using a single item of the Pittsburgh Sleep Quality Index. 7 All participants responded to the question: During the past month, how would you rate your sleep quality overall? (very good, fairly good, fairly bad, and very bad). For purposes of this study, the response categories were collapsed to good ("very good" and "fairly good") and bad ("fairly bad" and "very bad") sleep quality.
Pain self-efficacy
A 2-item validated questionnaire was used to assess pain self-efficacy (PSE) beliefs. 54 Participants rated their confidence on a scale of 0 to 6, with 0 being "not at all confident" and 6 being "completely confident"; the mean score was taken as the final score for PSE.
Assessment of physical activity and sedentary behaviour
Physical activity levels were assessed using the International Physical Activity Questionnaire-short form (IPAQ-SF). 43 The IPAQ-SF is a commonly used questionnaire in research settings for quantifying self-reported levels of PA and has been widely used in chronic pain populations. The IPAQ-SF consists of 9 items that provide information on the time spent walking and doing moderate- to vigorous-intensity and sedentary activities. An additional item of the IPAQ-SF was used to estimate time spent sitting on a typical weekday. The data processing and scoring of the IPAQ-SF were conducted as per the guidelines (www.ipaq.ki.se). A Microsoft Excel spreadsheet that enables automatic scoring of the IPAQ-SF was used. 10 Both categorical (low, moderate, and high, based on PA recommendations) and continuous variables (walking MET-min/wk, moderate MET-min/wk, vigorous MET-min/wk, total PA MET-min/wk, total activity min/wk, and total days of activity) were calculated as per the recommendations for scoring the IPAQ-SF. For all analyses, we used the continuous scores of the PA variables.
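The continuous IPAQ-SF scores above can be sketched in code. This is an illustrative sketch, not the authors' spreadsheet: the MET weights (3.3 walking, 4.0 moderate, 8.0 vigorous) follow the published IPAQ scoring protocol, while the function names and structure are our own.

```python
# Illustrative sketch of IPAQ-SF continuous scoring (MET-min/wk).
# MET weights per the IPAQ scoring protocol; names are hypothetical.
MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(minutes_per_day, days_per_week, category):
    """MET-min/wk contributed by one activity category."""
    return MET_WEIGHTS[category] * minutes_per_day * days_per_week

def total_pa_met_minutes(walking=(0, 0), moderate=(0, 0), vigorous=(0, 0)):
    """Total PA MET-min/wk; each argument is (minutes/day, days/week)."""
    return (met_minutes_per_week(*walking, "walking")
            + met_minutes_per_week(*moderate, "moderate")
            + met_minutes_per_week(*vigorous, "vigorous"))

# Example: 30 min walking on 5 days plus 20 min of moderate activity on
# 3 days gives 3.3*30*5 + 4.0*20*3 = 735 MET-min/wk.
print(total_pa_met_minutes(walking=(30, 5), moderate=(20, 3)))
```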
Sedentary behaviour was assessed using the self-reported Sedentary Behaviour Questionnaire (SBQ), which has demonstrated acceptable psychometric properties. 44,69 The SBQ consists of 9 items that determine the amount of time spent doing 9 sedentary activities during a typical weekday and a typical weekend day. Response categories ranged from "none" to "6 hours or more" for each sedentary activity. The mean duration (hours per day) spent on individual sedentary activities on a typical weekday and weekend day was computed. A weighted daily estimate of sedentary time (hours per day) was calculated as [(Σ(sedentary time during a typical weekday) × 5) + (Σ(sedentary time during a typical weekend day) × 2)]/7. 44 As an "a priori" decision, the daily estimate of sedentary time based on the SBQ was used as the primary measure of SB in the analysis.
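The weighted daily estimate above reduces to a one-line computation. A minimal sketch (variable names are illustrative, not from the study):

```python
# Weighted daily sedentary-time estimate from SBQ item totals:
# [(sum of weekday hours) * 5 + (sum of weekend-day hours) * 2] / 7
def weighted_daily_sedentary_hours(weekday_item_hours, weekend_item_hours):
    return (sum(weekday_item_hours) * 5 + sum(weekend_item_hours) * 2) / 7

# Example: items totalling 8 h on a weekday and 10 h on a weekend day
# give (8*5 + 10*2)/7, roughly 8.57 h/d.
print(round(weighted_daily_sedentary_hours([4, 2, 2], [5, 3, 2]), 2))
```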
Quantitative sensory testing
Quantitative sensory testing procedures are commonly used to assess somatosensory abnormalities in musculoskeletal pain. This study administered 2 dynamic QST procedures (ie, TS of pain and CPM). 24,68,98
Mechanical temporal summation
The temporal summation procedure is a commonly used sensory psychophysical test that may produce a heightened pain experience due to the facilitation of central nociceptive drive. 82,83 Abnormal TS in humans has been proposed as a clinical signature of enhanced summation of central neurons, a feature of CS. 62,83,90 In this study, we used the mechanical TS protocol to induce TS. Mechanical TS (MTS) has been shown to predict pain severity, 32 including movement-evoked pain associated with knee osteoarthritis. 93 Moreover, ethnicity interacted with TS responses in predicting higher clinical knee pain ratings. 24 Mechanical TS was assessed using a nylon monofilament (Semmes monofilament 6.65, 300 g). 24 Ten brief repetitive contacts were delivered at a rate of 1 Hz, externally cued by auditory stimuli. The participants rated the level of pain experienced on the NPRS immediately after the first contact and rated their greatest pain intensity after the 10th contact. Three trials were conducted at the index area and the remote site, with the order of testing randomised. The index area was the nominated most painful joint, and the remote area was either the dorsum of the opposite wrist (in cases of the lower back/lower limb joints as the index area) or the opposite shin, ie, 5 cm below the tibial tuberosity over the belly of the tibialis anterior muscle (in cases of the neck/upper limb as the index area). For each trial, the MTS was calculated as the difference between the NPRS rating after the first contact and the highest pain rating after the 10th contact. This score represents the maximum amount of MTS across the 10 contact points. The average of the 3 trials was calculated for each participant for each site [ie, the most symptomatic joint (MTS-S) and a remote site (MTS-R)], with a positive score indicating an increase in MTS.
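The per-site MTS score described above can be sketched as follows (a hedged illustration with our own function names, not the authors' code): each trial's score is the peak NPRS after the 10th contact minus the NPRS after the first contact, and the three trial scores are averaged per site.

```python
# Per-trial MTS: peak NPRS (after the 10th contact) minus the NPRS
# rated after the 1st contact; positive = temporal summation.
def mts_trial_score(first_contact_nprs, peak_nprs_after_10th):
    return peak_nprs_after_10th - first_contact_nprs

# Per-site MTS (eg, MTS-S or MTS-R): average of the three trial scores.
def mts_site_score(trials):
    """trials: list of (first-contact NPRS, peak NPRS) tuples."""
    return sum(mts_trial_score(f, p) for f, p in trials) / len(trials)

# Example: trial ratings (2 -> 5), (1 -> 3), (2 -> 4) average to
# (3 + 2 + 2) / 3, a positive score indicating summation.
print(round(mts_site_score([(2, 5), (1, 3), (2, 4)]), 2))
```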
Conditioned pain modulation procedure
Conditioned pain modulation is the most frequently administered procedure for exploring the endogenous pain modulatory system. 97,98 The CPM test procedure was always administered at least 15 to 20 minutes after the MTS procedure 29 and was conducted according to previously published testing recommendations. 97,98
Conditioning stimulus
The conditioning stimulus consisted of a cold pressor task, in which the participants immersed their hand (up to the midforearm) in a thermos containing cold water for a maximum period of 2 minutes. The hand opposite to the side of the most painful area was used unless that hand was also symptomatic (eg, the left hand was immersed when the testing joint was a right-sided knee). The temperature of the cold water was maintained at approximately 12°C and was confirmed immediately before and after the immersion procedure. 26,98 Participants continued hand immersion until the end of the trial (ie, 2 minutes) or until it was too uncomfortable to remain immersed (NPRS ≥ 8). A similar conditioning stimulus (ie, cold water) protocol has been used in previous studies showing a significant CPM effect. 26,37,42
Test stimulus
A computerised, handheld digital algometer (AlgoMed; Medoc, Ramat Yishai, Israel) was used to measure the suprathreshold pressure pain threshold (pain4) at the most painful area in the most symptomatic region. Two familiarisation trials were performed at the midforearm before the formal trials. The 1-cm² algometer probe was pressed over the marked test sites perpendicularly to the skin at a rate of 30 kPa/s. The participants were instructed to press the algometer trigger button on the patient control unit when the pressure sensation changed to a pain intensity of 4 out of 10 on the NPRS. 98 Once the patient-controlled unit was activated, the trial was automatically terminated, and the amount of pressure (kPa) was recorded. If a participant did not report pain at the maximum pressure level, which was set at 1000 kPa for safety reasons, the procedure was terminated by the assessor, and a score of 1000 kPa was assigned for that trial. Two PPT (pain4) trials were recorded before the conditioning stimulus and were averaged (preaverage score) to obtain a baseline score for each participant. Three PPT (pain4) trials were recorded in the same region at 30, 60, and 90 seconds immediately after the conditioning stimulus.
Calculation of conditioned pain modulation:
A percent change score was calculated for each time point (ie, 30 seconds [CPM30sec], 60 seconds [CPM60sec], and 90 seconds [CPM90sec]) as below, with a positive score indicating an increase in PPTs (pain4) after the conditioning stimulus and thus the presence of a CPM effect: CPM percent change score = [(post score − preaverage score)/preaverage score] × 100. The percent change score was calculated to overcome the interregional variability of recorded pain thresholds.
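The percent change score can be expressed directly in code. A minimal sketch (illustrative names, not the authors' implementation): the two baseline PPT(pain4) trials are averaged, and each post-conditioning reading is expressed relative to that baseline.

```python
# CPM percent change at one post-conditioning time point:
# [(post - preaverage) / preaverage] * 100; positive = CPM effect.
def cpm_percent_change(pre_trials_kpa, post_kpa):
    pre_avg = sum(pre_trials_kpa) / len(pre_trials_kpa)
    return (post_kpa - pre_avg) / pre_avg * 100

# Example: baseline trials of 400 and 440 kPa (preaverage 420 kPa) and
# a post-conditioning PPT of 462 kPa give a +10% change score.
print(cpm_percent_change((400, 440), 462))
```

Dividing by the participant's own baseline is what makes scores comparable across test regions with very different absolute thresholds.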
Data analysis
All statistical analysis was performed using SPSS (version 23.0). Descriptive statistics were calculated for all measured variables. Statistical assumption testing revealed that the measured variables of interest were non-normally distributed.
The Friedman analysis of variance was used to assess differences between preconditioning and postconditioning PPT-P4 scores, thus evaluating the presence of an overall CPM effect. The Wilcoxon signed-rank test was used to determine differences for the following pairwise comparisons: preconditioning PPT-P4 vs postconditioning PPT-P4 at 30 seconds; preconditioning PPT-P4 vs postconditioning PPT-P4 at 60 seconds; and preconditioning PPT-P4 vs postconditioning PPT-P4 at 90 seconds. In addition, the Wilcoxon signed-rank test was used to assess differences between pain rating scores after the first and 10th applications at the symptomatic and remote sites, and differences between the MTS-S and MTS-R pain rating change scores. Effect sizes 70 were calculated for all pairwise comparisons. Spearman rank correlation statistics were used to determine the relationships between (1) MTS-S and MTS-R and pain severity and the level of interference and (2) MTS and CPM (dependent variables) and IPAQ-PA and SB (primary predictor/independent variables).
To assess the relationships between PA/SB and CPM/MTS responses, a 2-step procedure was used as follows: Step 1 evaluated the correlations between the dependent variables of MTS (MTS-S and MTS-R) and CPM (CPM30sec, CPM60sec, and CPM90sec) scores and the independent variables of PA, SB, demographics, pain-related clinical variables, and psychological variables, using Spearman rank correlation statistics (P ≤ 0.05). No attempt was made to correct the statistical significance for multiple correlations between variables of interest. The following criteria were used to interpret the strength of association between variables of interest: very strong, 0.8 to 1; strong, 0.5 to 0.8; weak, 0.2 to 0.5; and very weak, less than 0.2.
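The interpretation cut-offs above can be mirrored by a small helper, applied to the absolute value of Spearman's rho so that negative correlations are classified by magnitude (a hypothetical helper for illustration, not part of the study's analysis code):

```python
# Label |rho| using the cut-offs stated in the analysis plan:
# >= 0.8 very strong, >= 0.5 strong, >= 0.2 weak, else very weak.
def correlation_strength(r):
    a = abs(r)
    if a >= 0.8:
        return "very strong"
    if a >= 0.5:
        return "strong"
    if a >= 0.2:
        return "weak"
    return "very weak"

print(correlation_strength(-0.35))  # a weak (negative) correlation
```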
Subsequently, step 2 involved multiple linear regression analyses for each dependent variable (MTS and CPM) and the primary independent variables (PA and SB measures) if they demonstrated significant associations (rs) with a P-value of ≤0.05. Because no correlation existed between the dependent variables (MTS and CPM measures), a multiple regression model was built for each dependent variable.
Due to the modest sample size (CPM: n = 60; MTS: n = 67), a maximum of 4 confounder variables (ie, demographic, anthropometric, pain, and psychological variables), in addition to the primary independent variables (PA and SB variables), were adjusted for in the final multiple regression model. Because PA measures (except walking) demonstrated significant (P < 0.05) weak negative correlations with SB, multicollinearity effects of the PA variables in the multiple regression models were assessed using the variance inflation factor and tolerance functions. Confounding variables were included in the step 2 modelling if they demonstrated significant relationships with the dependent variables. In addition to the adjusted models, backward multiple linear regression analyses were performed. For all regression analyses, the relevant statistical assumptions were assessed.

Results

Table 2 presents the results of the Spearman correlation analyses between MTS/CPM and participant demographics and clinical characteristics (age, body mass index, number of painful joints, widespread pain, pain duration, pain severity, interference, neuropathic scores, psychological factors, sleep, PA, SB estimates, and pain medication intake). No correlation was evident between the dependent variables (MTS and CPM measures), but there was a significant negative relationship between the independent variables (PA and SB), except for vigorous and walking levels.
Conditioned pain modulation
Of the 67 participants, 7 did not undergo the CPM procedure due to safety concerns. Except for 2 participants, all completed the 2-minute exposure to the conditioning (cold) stimulus. There was a significant overall change (χ²: 18.5; P ≤ 0.001) between the preconditioning and postconditioning PPT-P4 raw scores. Pairwise comparisons found significantly higher postconditioning PPT-P4 scores at 30, 60, and 90 seconds (at all time points) when compared with the preconditioning PPT-P4 average score (Table 5). Small to moderate (range 0.35-0.45) effect sizes were observed for all pairwise comparisons.
None of the demographic factors demonstrated significant associations with the CPM response. Pain severity and interference scores were significantly positively associated with the CPM60sec effect only (Table 2). None of the psychological and pain-related measures were significantly associated with CPM30sec, except sleep quality (bad vs good), which revealed a weak negative association with CPM30sec (Table 2). Pain severity and interference scores, pain medication intake before the test session, and the PCS (helplessness subscore) demonstrated weak positive associations with CPM60sec, whereas PSE was negatively (weakly) associated with CPM60sec. A range of psychological factors (PCS, PVAQ, DASS, and PSE scores) demonstrated significant positive associations with the CPM90sec percentage change scores. The variable "pain medication intake before the test session" showed a significant weak negative association with the CPM90sec response (Table 2).
Association of physical activity and sedentary behaviour with conditioned pain modulation responses
All PA measures (except vigorous PA MET-min/wk) demonstrated significant weak negative correlations with CPM30sec and CPM60sec (Table 3). Sedentary behaviour showed significant weak positive associations with CPM30sec and CPM60sec (Table 3). No significant relationships were demonstrated between PA/SB measures and CPM90sec (Table 3). Table 4 presents the results of the multiple linear regression analyses for CPM30sec and CPM60sec.
CPM30sec
After adjusting for the confounder variables (sleep quality and PA levels), the final multivariate model (model 1) demonstrated a significant positive association of the daily estimate of SB with CPM30sec. Similarly, the backward multiple regression model showed significant positive associations of SB with the CPM30sec response (model 4). Independent models (models 2 and 3) were constructed for SB and PA. After controlling for sleep quality, the SB measure demonstrated a significant positive association with the CPM30sec response, but the PA variable did not (Table 4).
CPM60sec
After controlling for variables (pain severity, pain medication intake before the test session, and PCS-helplessness), neither PA nor SB measures demonstrated associations with the CPM60sec response (model 5). Independent models (6 and 7) were constructed for SB and PA against the CPM60sec response. In model 6, after controlling for variables, the SB measure demonstrated a significant positive association with the CPM60sec response. However, in model 7, after controlling for variables, PA was not associated with CPM60sec. In the backward multivariate model (model 8), SB and pain medication intake before the test session remained in the model, and both demonstrated significant positive associations with the CPM60sec response. Pain interference and PSE were not included in the models to avoid multicollinearity with pain severity and PA variables (Table 4).
CPM90sec
Multiple linear regression analysis was not conducted for CPM90sec due to nonsignificant relationships between PA/SB variables and CPM90sec percentage change scores ( Table 3).
Mechanical temporal summation
Significant TS (χ²: 18.5; P ≤ 0.001) was observed both at the symptomatic (z = −6.4; P ≤ 0.001) and remote sites (z = −6.2; P ≤ 0.001), with large effect sizes (Table 5). When compared with the remote site, there was significantly (z = −3.4; P < 0.001) greater TS (a greater change in the pain ratings) at the symptomatic site, with a moderate effect size. The older adults' group (>65 years) had higher TS at the symptomatic site when compared with the younger age group (<65 years).
Significant positive relationships were shown between MTS-S and pain severity and interference scores. Similarly, MTS-R scores were positively correlated with pain interference, but not with pain severity scores. Pain severity, interference scores, and psychological factors (PCS, CPAQ, depression, anxiety, CSI, and PSE scores) were positively associated with both MTS-S and MTS-R scores.
Associations of physical activity and sedentary behaviour with mechanical temporal summation
No significant correlations were demonstrated between both MTS-S and MTS-R and any of the PA or SB measures (Table 3); hence, a multivariate analysis was not conducted.
Discussion
This study demonstrated that individuals who spent a longer duration of the day engaging in SB had a greater CPM effect. Also, PA levels were negatively correlated with the CPM effect. We did not find evidence of a relationship between MTS of pain and SB/PA levels.
Conditioned pain modulation
Sedentary behaviour levels were associated with greater CPM effects, independent of the total time spent in moderate or vigorous physical activities. Also, a significant positive CPM effect (moderate effect size) was demonstrated. These findings contrast with previous studies measuring the CPM effect, where greater CPM responses were seen in healthy individuals who engaged in higher levels of PA and had lower levels of sedentary time, performed better in endurance exercise, and participated in vigorous activities. 22,51,52 Generally, these studies included young and older healthy adults, used different QST paradigms, studied different domains of PA, and measured PA using self-report and objective methods. 18 Another potential factor that might have contributed to the observed relationship was the participants' PA pattern (an unmeasured variable) before the study period. Notably, individuals with chronic pain generally display behavioural patterns in engagement with PA, classically described as the "boom" and "bust" phases of the chronic pain experience cycle. 14,49 Anecdotal evidence suggests that people whose pain "flares up" after engagement in high levels of activity often reduce their activity levels or even engage in SB. It could be speculated that the participants' SB might have induced transiently better pain modulatory effects to protect against pain flare-ups, thus explaining the positive cross-sectional relationship between SB and the CPM effect in this study.
Physical activity levels were negatively correlated with the CPM effect in this study. This relationship, which contrasts with that observed in healthy individuals, may be moderated by psychological factors. 34 This study revealed positive associations between a range of psychological factors (eg, catastrophizing, DASS scores, pain hypervigilance, and PSE) and the later CPM responses (at 60 and 90 seconds). 50 Similar positive associations were demonstrated in previous research, suggesting a positive mediating role of general anxiety-related or attentional-bias-associated catastrophizing thoughts on CPM efficiency. 9,50,63 In contrast, other studies report that psychological status (self-reports and experimental induction of acute stress) negatively influences the CPM effect in healthy and symptomatic individuals. 8,23,26,27,50,86 However, adjusting for PCS-helplessness in this study did not significantly change the associations of PA or SB with the CPM effects. Because participants in this study had low scores on the PCS and other psychological attributes, the role of psychological confounding in the observed relationship cannot be entirely ruled out.
Mechanical temporal summation
Although significant negative associations were observed between MTS and PA/SB measures in the pain-free control population, 51,52 this study failed to find such associations. Similar to this study, a recent study demonstrated no relationships between moderate or vigorous PA levels and heat-evoked TS of pain in a group of individuals with low back pain. 59 The null relationships between MTS and PA/SB measures in this study may be due to several observations: a lower and skewed MTS change score at the symptomatic site (mean [SD]: 1.9 [1.8]); skewed PA data; inter-regional TS differences (lower scores in neck/shoulder regions vs higher scores in low back/knee/hip regions); higher MTS-S scores in the older adults group (vs <65 years); and higher MTS at the painful site (vs the remote site). In addition, there is some evidence demonstrating hypoesthesia in the painful region and no signs of central sensitization, 30,35 which may have influenced the TS responses in the symptomatic region. Other TS mechanisms, such as local tissue responses 80 and cognitive and affective responses (perceived threat) to the repeated sensory input, can also explain the observed null relationships. 11,28 This perceptual component is supported by our data showing significant positive correlations between MTS scores, PCS scores, and pain severity/interference. 8,16,65,66 Thus, peripheral mechanisms of TS and perceptions might have confounded the relationship between PA/SB and MTS scores.
Study strengths
This is the first study exploring the role of PA and SB on the CPM effect and TS responses in a group of individuals with mixed persistent musculoskeletal pain. This study attempted to adjust for known confounding factors in the analysis. Our study participants were free of cognitive impairments, thus minimizing the possibility of recall issues in reporting pain and PA levels. This study used a CPM protocol in which the test stimulus was administered at the symptomatic joint, in contrast to the standard research practice of delivering the test stimulus at a remote site. Although the CPM effect can be independent of the testing site, it has been suggested that measuring the CPM response at the most painful location may be more relevant and generalizable for clinical populations, although the original nociceptive drive existing there may confound the CPM response assessed at the most painful site. However, this proposition needs further exploration to identify any differences in CPM response (painful vs remote location) and its correlations with pain severity and functional outcomes.
This study has some limitations, which include the cross-sectional study design, a community-based convenience sampling technique 75 introducing sampling bias, self-report measures of PA and SB, and a smaller sample size (although similar to previous studies in healthy adults). Because this is a single-group observational study, assessor blinding was not performed; this is nonetheless a limitation. There are a few limitations associated with the CPM protocol used in this study: the cold water was not circulated, and the pain rating was not recorded following removal. Although the water temperature (12 °C) used in this study can be considered higher (vs other studies), a similar temperature was used in previous studies that induced a significant conditioning response. 26,37 In contrast to a previous study, 51 a positive CPM effect was observed, possibly explained by the mixed sample (older and middle-aged adults) of participants and the suprathreshold pain (PPT-P4) used as the criterion for the test stimulus in this study. Another potential limitation was the application of the conditioning stimulus at the same segmental level (ie, cold bath immersion of the hand) in participants with shoulder and neck pain (n = 16). 98 Therefore, the role of segmental inhibition in the CPM response in this study cannot be ruled out. A percent change of suprathreshold (pain4) PPT scores was used in the statistical analysis to overcome the regional variability in PPT scores. However, the effect of the testing site (varied symptomatic regions) on the observed relationships cannot be entirely ruled out. A possibility of variance inflation due to multicollinearity between independent variables (PA and SB) was ruled out by meeting the statistical indices' criteria (variance inflation factor and tolerance) of the multiple regression modelling.
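The multicollinearity check described above can be sketched as follows. For a model with two predictors (here, PA and SB), the variance inflation factor reduces to 1 / (1 - r²), where r is the correlation between the two predictors, and tolerance is its reciprocal; the function name is illustrative, not from the study:

```python
import numpy as np

def vif_two_predictors(x1, x2):
    """VIF and tolerance for a two-predictor regression model.

    With only two predictors, the R-squared from regressing one
    predictor on the other equals the squared Pearson correlation,
    so VIF = 1 / (1 - r^2) and tolerance = 1 - r^2.
    """
    r = float(np.corrcoef(x1, x2)[0, 1])
    tolerance = 1.0 - r ** 2
    vif = 1.0 / tolerance
    return vif, tolerance
```

A common rule of thumb flags VIF values above 5 or 10 (equivalently, tolerance below 0.2 or 0.1) as problematic; the study reports that its PA and SB variables met such criteria.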
Another limitation is not correcting P values for multiple comparisons; however, using a Bonferroni correction may inflate the type II error rate, possibly missing real relationships. 60 All previously published studies that investigated associations between PA and CPM/MTS in healthy as well as symptomatic populations did not correct for multiple comparisons, and they all found fair relationships. 51,52 There could be potential group differences (middle-aged vs older adults) in the relationships of interest; however, age as a continuous measure was not associated with CPM/TS measures (P > 0.05). Therefore, it is reasonable to propose that potential group differences in the relationships of interest do not exist.
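The Bonferroni correction mentioned above works by scaling each P value by the number of comparisons (or, equivalently, lowering the significance threshold to α/m), which is why it becomes conservative and can inflate the type II error rate as m grows. A minimal sketch (function name illustrative):

```python
def bonferroni_adjust(p_values, alpha=0.05):
    """Bonferroni-adjust a list of P values.

    Each adjusted P value is min(p * m, 1.0), where m is the number
    of comparisons; the per-test significance threshold is alpha / m.
    """
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    threshold = alpha / m
    return adjusted, threshold
```

With, say, ten correlations, a raw P value of 0.02 becomes 0.20 after adjustment, illustrating how a true but modest association could be declared non-significant.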
Research recommendations
Prospective longitudinal research should use objective methods for measuring PA and SB patterns and their impact on pain modulatory mechanisms. 51 Future research should explore the role of contexts, cognitive and affective factors (eg, fear of movement), and social factors in PA/SB engagement and their impact on pain modulatory systems. 1,38,45,91 For example, structured PA, as opposed to leisure-based PA, may have differential effects on pain modulatory functions, mediated through cognitive, emotional, and social factors. Future research should consider measuring washout effects of a conditioning stimulus. 37 Movement-evoked pain paradigms such as "sensitivity to physical activity" or similar can be used in addition to experimental QST paradigms. 12,93 Future studies could use a criterion (ie, at least 2 points on an NPRS for defining a clinically meaningful summation of pain) to categorize the MTS data and assess the relationship. 64 Future research should investigate ethnic differences in CPM/MTS responses 25 and differences in older adults with multisite joint pain (>50% in this study cohort) vs widespread pain syndromes (fibromyalgia). 15,88
Conclusions
Sedentariness, independent of PA levels, is associated with a greater CPM effect in people with chronic musculoskeletal pain. Neither SB nor PA levels were related to mechanical TS. These findings collectively provide insights into the mechanistic processes between PA behaviour and central nociceptive facilitation and inhibition in a symptomatic population. The study findings need to be interpreted with caution because of the cross-sectional data and because the data were sourced from a range of patients presenting with different regional pain presentations. Prospective longitudinal studies using objective measures of PA and SB in a larger sample are required to validate these observed relationships, exploring relationships between PA characteristics, pain modulatory mechanisms, and clinical outcomes.
Disclosures
The authors have no conflict of interest to declare.
Vulnerability for Respiratory Infections in Asthma Patients: A Systematic Review
Asthma is a non-communicable and long-term condition affecting children and adults. The air passages in the lungs become narrow due to inflammation and tightening of the muscles around the small airways. Symptoms of asthma are intermittent and include cough, wheeze, shortness of breath, and chest tightness. Asthma is very often underdiagnosed and under-treated in many regions, especially in developing countries. While many studies show that viral infections can precipitate asthmatic attacks, very few studies have been conducted to see if history or current asthmatic attack increases the risk of viral infections. Our study aims to determine the predisposition of asthmatics to develop various viral infections and susceptibility toward certain viruses that cause upper respiratory tract infections. We performed a literature review of both published and unpublished articles. We included case reports, case series, reviews, clinical trials, cohort, and case-control studies, written only in English. Commentaries, letters to editors, and book chapters were excluded. Our initial search yielded 948 articles, of which 826 were rejected either because they were irrelevant or because they did not meet our inclusion criteria. We finally screened 122 abstracts and identified 24 relevant articles. People with a history of asthma have an abnormal innate immune response, making them potentially slower in clearing the infection and susceptible to both infections and virus-induced cell cytotoxicity. Also, in these studies, deficiencies in the interferon alpha response of peripheral blood mononuclear cells and plasmacytoid dendritic cells have been observed in asthmatics, both adults and children. Asthmatics with a viral infection usually present with an acute exacerbation of asthma, represented by dyspnea and cough, with other prodromal symptoms including vomiting and general malaise. 
The review includes an update on the relevance of dysregulated immune pathways in causing viral infections in asthmatic populations. It focuses on the evidence to suggest that people with asthma are at increased risk of viral infection, and viral infections in turn are known to precipitate and worsen the asthmatic status, making this a vicious cycle. The authors also suggest that further studies be undertaken to elucidate the pathophysiology and identify the critical therapeutic steps to break this vicious cycle and improve the quality of life for people with asthma.
Introduction And Background
Asthma is a heterogeneous disease, usually characterized by chronic airway inflammation. It is defined by the history of respiratory symptoms such as wheezing, shortness of breath, chest tightness, and cough that vary over time and in intensity, together with variable expiratory flow limitation. Airflow limitations may later become persistent [1]. Both genetic predisposition and environmental factors play a role in the etiology of asthma. Recurrent asthmatic attacks are associated with chronic inflammatory changes in the lamina propria. As a result, there is a thickening of the basement membrane of the respiratory epithelium, hypertrophy of the respiratory smooth muscle layer, and an increased number of respiratory submucosal glands and epithelial goblet cells in the terminal bronchioles. The airway in asthmatics tends to be hypersensitive, developing frequent flare-ups on exposure to many irritants, such as allergens, cold, stress, and exercise. Inflammation resulting from this response induces mucus production, thereby exacerbating the narrowing of the airway. In recent studies, there has been some progress in understanding the pathogenesis of the increased incidence of respiratory infections in patients with asthma. Recent data provide overwhelming evidence of an increased rate of viral infection in patients suffering from asthma exacerbations requiring hospitalization, with the most prevalent virus being human rhinovirus (HRV). A high percentage of asthma exacerbation episodes reported among the pediatric population are attributed to viral infections, mainly picornavirus, coronavirus, influenza, parainfluenza, respiratory syncytial virus (RSV), and rhinovirus [2,3]. Most viral infections are observed to have a grave impact on various aspects of asthma [4]. Almost half of the pediatric patients with asthma reported suffering from severe asthma exacerbations each year [5].
Factors like age and gender play a significant role in increasing susceptibility to viral infections in people with asthma [6,7].
The morphological changes in asthma increase the risk of morbidity and mortality by reducing the growth and function of the lung [8]. In 80-85% of children, viral respiratory infections are the crucial reason for asthma exacerbation [9]. However, adult patients infected with rhinovirus develop the most severe symptoms [8]. Among the wide range of symptoms, wheezing and cough are the most common. Other symptoms include shortness of breath, wet cough, nasal discharge, sore throat, headache, joint pain, muscle pain, fever, eye discharge, and vomiting [8]. Patients infected with rhinovirus have a three-fold increased risk of developing recurrent wheeze compared to other viruses [10]. The severity of these symptoms is rated on a four-point scale: none, mild, moderate, and severe [11]. Mild symptoms include mild nasal stuffiness or runny nose, which do not affect daily activities; moderate symptoms affect daily activities but not sleep; whereas sleep and breathing difficulties are part of severe symptoms. Symptoms of viral asthma exacerbation follow the criteria of mild, moderate, and severe [11]. Peak expiratory flow rate (PEF) is a critical biomarker of lung function, which is also affected by acute exacerbation. Moreover, the benchmark for loss of asthma control includes at least moderate asthma symptoms and either a reduction in PEF of 20% or more, or the use of albuterol for more than two days per week [10]. However, Olenec et al. classified acute exacerbation into four categories as follows: no symptoms, solitary cold symptoms, solitary asthma symptoms, and combined cold and asthma symptoms [12].
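One possible reading of the loss-of-asthma-control benchmark above (at least moderate symptoms together with either a PEF reduction of 20% or more, or albuterol use on more than two days per week) can be sketched as a simple classifier; the function and parameter names are illustrative, not taken from the cited study:

```python
def loss_of_asthma_control(symptoms_at_least_moderate,
                           pef_reduction_pct,
                           albuterol_days_per_week):
    """Hypothetical sketch of the loss-of-control benchmark:
    at least moderate symptoms AND (PEF reduction >= 20%
    OR albuterol use on more than 2 days per week)."""
    return symptoms_at_least_moderate and (
        pef_reduction_pct >= 20 or albuterol_days_per_week > 2
    )
```

The wording in the source is ambiguous about whether the symptom criterion and the PEF/albuterol criterion are jointly or individually sufficient; the sketch adopts the conjunctive reading.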
Viruses were detected mostly either with solitary cold symptoms or combined cold and asthma symptoms. During solitary asthma symptoms, virus detection rates were similar to asymptomatic children [13]. Spring and fall are the peak seasons for asthma exacerbations [11]. During the fall season, most viruses are detected in September, termed the "September Epidemic" [9,11,13]. Causes of the "September Epidemic" in children encompass the return to school after a long break, exposure to viruses and airborne allergens, etc. [11]. These patients require hospitalization or an unplanned visit to the doctor. The prime risk factors comprise preceding acute exacerbation, allergy, poorly controlled asthma, young age, and viral respiratory infections [9]. Interconnection between viral infections and allergies elevates the risk of exacerbation to a greater level [14]. Therefore, further studies are needed to confirm if patients with a history of asthma are at increased risk of viral infections.
While there is enough evidence of viral infections precipitating asthma attacks, the predisposition of asthmatics to develop viral infections has not been clearly demonstrated. We reviewed the existing literature aiming to determine the vulnerability of asthmatic patients to contracting viral and other infections.
Review Methodology
Medical databases PubMed and Google Scholar were used for the literature search through the following Medical Subject Headings (MeSH) keywords: "viral infections," "infections," and "asthma." We included articles published from the year 2000 to 2021 in the English language only. We included reviews, randomized controlled trials, and observational studies for data extraction and analysis while excluding unpublished articles, chapters from books, commentaries, and letters to the editor. A total of 10 studies were selected after the review as they fit the selection criteria (Figure 1), making this a traditional literature review. Since it is a literature review, we did not perform a quality assessment of the selected articles. The majority of data obtained from the included studies were through the use of reverse transcription polymerase chain reaction (PCR) on nasopharyngeal specimens from patients with asthma to detect and identify viral strains.
Results
The results of the studies reviewed demonstrated an increased vulnerability toward lower respiratory tract infections in asthmatic populations as compared to populations with normal lung physiology, following an experimental HRV inoculation [15,16]. There are multiple etiologies of asthma exacerbation including viruses, bacteria, allergens, irritants, and occupational exposures, of which respiratory viruses are the most frequent cause [17]. In an observational study using experimental rhinovirus inoculation, a clear difference was noted between pre and post-viral challenge measurements of exhaled NO2 and eosinophil density in the nasal fluid in asthmatics compared to the healthy controls [18]. The observations made from a similar study implied that homeokinesis of the respiratory system is altered in asthmatics, which affects its capacity to respond to external stimuli like HRV [18]. Moreover, during an analysis of 60 samples of nasopharyngeal aspirates collected from children below 17 years of age with acute asthma exacerbations in Iran, rhinovirus was reported to be positive in 20% of patients, RSV in 8%, adenovirus in 8%, and influenza virus in 1.6% [19]. A decrease in innate and adaptive immunity during childhood, especially in infancy, has been found to increase the susceptibility to viral respiratory infections, especially among those who are at increased risk of asthma [19]. A case-control study performed in asthmatic children aged two to 17 years in 2007 detected one or more virus-positive respiratory specimens in 63% of children with asthma during acute exacerbation and 23% of children with well-controlled asthma [20]. Other respiratory viruses detected in respiratory samples of asthmatic children include enterovirus, coronavirus, metapneumovirus, parainfluenza virus, and bocavirus [21,22]. 
The second most common cause of upper respiratory infections causing acute exacerbations of asthma following viruses are bacteria such as Chlamydia pneumoniae, Mycoplasma pneumoniae, Streptococcus pneumoniae, and Haemophilus influenzae [22,23]. A prospective cohort study conducted in 2013 on pregnant females reported an increased incidence of the common cold among asthmatic patients as compared to non-asthmatic patients [24]. In the same study, 31% of asthmatic pregnant women and 18.8% of pregnant women without asthma had at least an episode of PCR-positive cold during pregnancy [24]. The significant results from all the reviewed studies have been compiled below in Table 1.
Morphological and Immunological Changes in Asthma That Lead to Viral Infections
Asthma is a dynamic disease in which mechanical and inflammatory pathways interact in an exaggerated manner, resulting in unstable inflammatory responses to external triggers [18]. Certain immunological factors such as microRNAs (miRs), interferon (IFN)-α, IFN-β, and IFN-λ, forced expiratory volume in one second (FEV1), interleukin (IL)-11, nuclear factor-kappa B (NF-κB), and toll-like receptor 7 (TLR7) aid in the pathogenesis that leads to different viral Infections [21,22,25,27,28].
The deficit in both acquired and innate immune systems collectively lead to impairment of the physiological airway barrier facilitating the entry of the viruses into the respiratory tracts. miRs are small non-coding RNAs, classified as extracellular and intracellular based on their topographical identification, that function as dynamic post-transcriptional regulators of gene networks, playing a crucial role in the regulation of biological processes such as the proliferation of cells when exposed to antigens [25]. Modulation of miR expression has been used to describe key roles of miRs in epithelial function and emerging studies implicate specific miRs in controlling epithelial cell processes such as regulation of cellular differentiation, determination of epithelial cell fate (cell proliferation and death), initiation and regulation of antimicrobial immunity, fine control of inflammatory responses, and activation of intracellular signaling pathways [25]. Such control of epithelial cell functions is likely to be vital to fine-tuning the immune response by the epithelial cells against infection and the roles of miRs in normal lung development and asthma [25]. It has been claimed that the miRs are critical for lung development and for maintaining disease-free lungs as shown by comparisons of normal lung tissue and tissue from asthmatic patients that revealed significant differences in miR profiles and suggest that miRs serve as a regulatory layer in the pathogenesis of asthma [25].
Also, different research studies demonstrated that during in vitro infection of primary airway epithelial cells from asthmatics and healthy adults with HRV, asthmatic cells produce less IFN-β and IFN-λ, making them potentially more susceptible to virus-induced cell cytotoxicity. IFN-β deficiency at asthma exacerbation promotes mixed lineage kinase domain-like protein (MLKL)-mediated necroptosis. Defective production of antiviral IFN-β is thought to contribute to rhinovirus-induced asthma exacerbations. Studies have shown that deficiencies in the IFN-α response of peripheral blood mononuclear cells and plasmacytoid dendritic cells are observed in asthmatic adults and children in response to RSV, HRV, and influenza A inoculation [25].
Moreover, the relationship between reduced FEV1 and exacerbation risk promotes long-term reductions in lung function. Alternatively, there may be a result of specific types of airway inflammation or host factors, like smoking, that result in a decline in lung function. Many experiments have provided evidence that factors such as allergy and baseline FEV1 can influence the changes in the lower-airway physiology caused by rhinovirus infection and may contribute to the increased lower-airway effects of rhinovirus infection in subjects with asthma [28].
In addition, levels of cytokines such as IL-11 in nasal secretions were significantly increased in patients who demonstrated wheezing [28]. NF-κB activation appears to be essential for the rhinovirus-induced synthesis of proinflammatory cytokines such as IL-6 and IL-8 [28]. On the other hand, depleted IgG levels were noted in patients suffering from asthma exacerbations with virus-positive samples; however, there was no relationship with viral load, IFN-α, IFN-γ, IL-5, or IL-13 levels [21,22]. Moreover, in patients with severe asthma, deficiency of TLR7 in macrophages impairs innate immunity [27].
All in all, changes in innate immunity, IFN deficiencies, and altered immunoglobulins levels along with disrupted airway barriers make asthmatics increasingly prone to viral upper respiratory infections.
Limitations
The findings of this review should be considered in light of its limitations. The inclusion criteria for this review are very selective, and unpublished articles and articles in languages other than English have been excluded, increasing the probability of publication bias. The sample sizes of the studies screened in this review vary widely and affect the reliability of the results to a certain degree. Many more studies are needed to be conducted in the future on the asthmatic population, to produce more precise results applicable to a broader population in this regard.
Conclusions
Based on the results of this review, we conclusively say that asthmatic patients are at a higher risk of contracting viral infections. The various factors contributing to the susceptibility in the asthmatic population are altered immune responses, decreased circulating antibodies, and the disruption of the physiological airway barrier. The viral strains that are commonly found as causative agents in patients with asthma include HRV, RSV, coronavirus, and enterovirus.
This represents an additional factor enhancing the demonstrated higher prevalence of virus-induced exacerbations in asthmatics and points to a vicious-cycle theory of viral infection-asthma-viral infection.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Usefulness of the cachexia index as a prognostic indicator for patients with gastric cancer
Abstract Aim Cachexia is associated with the morbidity and mortality of cancer patients. The cachexia index (CXI) is a novel biomarker of cachexia associated with the prognosis for certain cancers. This study analyzed the relationship between CXI with long‐term outcomes of gastric cancer patients. Methods We included 175 gastric cancer patients who underwent curative gastrectomy at our hospital between January 2011 and October 2019. The CXI was calculated using skeletal muscle index, serum albumin level, and neutrophil‐to‐lymphocyte ratio. The prognostic value of CXI was investigated by univariate and multivariate Cox hazard regression models adjusting for potential confounders. Results In the multivariate analyses, tumor location (hazard ratio [HR], 0.23; 95% confidence interval [CI], 0.11–0.49; p < 0.01), disease stage (HR, 15.4; 95% CI, 4.18–56.1; p < 0.01), and low CXI (HR, 2.97; 95% CI, 1.01–8.15; p = 0.03) were independent and significant predictors of disease‐free survival. Disease stage (HR, 9.88; 95% CI, 3.53–29.1; p < 0.01) and low CXI (HR, 4.07; 95% CI, 1.35–12.3; p < 0.01) were independent and significant predictors of overall survival. The low CXI group had a lower body mass index (p = 0.02), advanced disease stage (p = 0.034), and a lower prognostic nutritional index (p < 0.01). Conclusions Cachexia index is associated with a poor prognosis for gastric cancer, suggesting the utility of comprehensive assessment using nutritional, physical, and inflammatory status.
INTRODUCTION
Gastric cancer is the fourth most prevalent malignancy worldwide and the second leading cause of cancer-related deaths. 1 Although advances in treatment have improved the prognosis of early-stage cancer, 2 advanced cancer still has a high recurrence rate and a poor prognosis. 1 Therefore, the investigation of preoperative biomarkers to predict therapeutic outcomes is critically important for clinical decision-making.
Cancer cachexia, a multifactorial syndrome defined as ongoing loss of skeletal muscle mass with or without a decrease in fat mass, is associated with approximately 30% of cancer-related deaths. 3 It is also characterized by systemic inflammation, progressive weight loss, and malnutrition that cannot be fully evaluated by one of these components due to its complexity. Recently, a new parameter called the cachexia index (CXI) 4,5 has been proposed and is gaining recognition as a potentially comprehensive measure of the state of cachexia.
The CXI is composed of parameters such as the skeletal muscle index (SMI), serum albumin level, and neutrophil-to-lymphocyte ratio (NLR). Further, the CXI has been reported to be able to predict the prognosis for several malignancies. 5,6 However, the utility of the CXI as a prognostic factor in gastric cancer patients has not been reported. Therefore, we here investigated the prognostic value of the CXI in gastric cancer patients.
Patients
In this study, all patients who underwent laparoscopic or robotic gastrectomy at the Department of Surgery, International University of Health and Welfare Hospital between January 2011 and October 2019 were included in the study. Patients with remnant gastric cancer and locally advanced unresectable tumors were excluded and the remaining 175 eligible patients were enrolled in this study. We retrospectively collected clinical and laboratory data, including computed tomography (CT) scan results, using the hospital's electronic patient record system.
This study was approved by the appropriate Research Ethics Committee at the International University of Health and Welfare Hospital (approval no. 22-B-7) and was conducted in accordance with the Declaration of Helsinki. The requirement for informed consent was waived considering the study design and anonymization of data.
Treatment and follow-up
Distal or total gastrectomy was performed as the standard procedure. Gastrectomy was performed with D1 plus lymph node dissection for early gastric cancer, and D2 lymph node dissection for advanced cancer. Tumor-node-metastasis (TNM) classification was quoted from the latest version of the Japanese Classification of Gastric Carcinoma (the 5th edition). 7 Postoperative complications were defined as grade III of Clavien-Dindo classification or higher that occurred within 30 postoperative days. Patients with stage II or III received adjuvant chemotherapy according to the Japanese gastric cancer treatment guidelines, 8 if the general condition was judged to be tolerated. The patients were followed up every 3 months to check for the recurrence by performing blood tests, including those for tumor markers, after the operation.
Moreover, CT was performed at least every 6 months after the operation.
CXI and nutritional assessment
The CXI was calculated as follows: SMI × serum albumin level (g/dL) / NLR. 5,6 In this equation, based on a previously described method, to calculate the SMI we used the minor and major diameters of the right iliopsoas muscle at the L3 vertebra level, which were measured using preoperative CT. 6 The SMI was calculated as follows: iliopsoas minor axis (cm) × major axis (cm) / height squared (m²), expressed in cm²/m². 9,10 The cutoff values of SMI and CXI were determined by a receiver-operating characteristic curve using the survival status at the 3-year follow-up in strata of sex, considering the differences in skeletal muscle between males and females. According to the cutoff values, all the patients were divided into the low and high CXI groups. The NLR was defined as the neutrophil count divided by the lymphocyte count. 11,12 The Prognostic Nutritional Index (PNI) was calculated as 10 × serum albumin level (g/dL) + 0.005 × lymphocyte count. 13 The cutoff values of albumin and PNI were determined by a receiver-operating characteristic curve using the survival status at the 3-year follow-up.
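The three index formulas above translate directly into code. The sketch below follows the formulas as stated in the text (function names are illustrative; the study's source method for SMI may include an additional geometric scaling factor not recoverable from this text):

```python
def skeletal_muscle_index(minor_cm, major_cm, height_m):
    """SMI from iliopsoas diameters at L3: minor x major / height^2 (cm^2/m^2)."""
    return (minor_cm * major_cm) / (height_m ** 2)

def cachexia_index(smi, albumin_g_dl, neutrophils, lymphocytes):
    """CXI = SMI x serum albumin / NLR, with NLR = neutrophils / lymphocytes."""
    nlr = neutrophils / lymphocytes
    return smi * albumin_g_dl / nlr

def prognostic_nutritional_index(albumin_g_dl, lymphocyte_count):
    """PNI = 10 x serum albumin (g/dL) + 0.005 x lymphocyte count (per uL)."""
    return 10 * albumin_g_dl + 0.005 * lymphocyte_count
```

For example, a patient with iliopsoas diameters of 3.0 cm by 4.0 cm, a height of 1.70 m, albumin of 4.0 g/dL, and an NLR of 2 would have SMI of about 4.15 cm²/m² and CXI of about 8.3; whether that counts as "low CXI" depends on the sex-stratified cutoffs derived in the study.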
Statistical analysis
All statistical analyses were performed using STATA version 14 (Stata Corporation). All p-values were two-sided, and we used the two-sided α level of 0.05. The data are expressed as medians, ranges, and ratios. Continuous and categorical variables were compared using the Mann-Whitney U-test or chi-square test, as appropriate. The endpoint, which was overall survival (OS), was defined as time from the date of surgery until the date of death from any cause or the last follow-up date for the living patients.
Disease-free survival (DFS) was defined as time from the date of surgery to the date of gastric cancer relapse, the last follow-up date, or death.
Univariable and multivariable Cox proportional hazards regression models were conducted to estimate hazard ratio (HR) and 95% confidence intervals (CIs) for DFS and OS. A multivariate analysis was performed for factors with p < 0.05 in the univariate analysis.
The Kaplan-Meier method was used to estimate cumulative survival probabilities, and the differences between groups were compared using the log-rank test.
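The product-limit (Kaplan-Meier) estimate used here can be sketched in pure Python (an illustrative implementation with made-up data; a real analysis would use a statistical package such as STATA, as the authors did):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).

    times:  follow-up times (any comparable numbers)
    events: 1 = death observed at that time, 0 = censored
    Returns a list of (event_time, survival_probability) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        n = at_risk          # number at risk just before time t
        deaths = 0
        # group all subjects tied at this time point
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            s *= 1 - deaths / n   # product-limit update
            curve.append((t, s))
    return curve

# Four hypothetical patients: deaths at t=1, 2, 4; one censored at t=3.
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

Censored subjects leave the risk set without contributing a step, which is why the curve only drops at observed event times.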
| DISCUSSION
In this study, we found that a low CXI was associated with worse survival of gastric cancer patients. Notably, this survival association was consistent in both early and advanced stages of gastric cancer. Our results suggest the importance of assessing cancer cachexia even in the early stage. Moreover, CXI may help to evaluate cancer cachexia comprehensively and provide better predictive value for long-term outcomes.
Cancer cachexia is a syndrome characterized by reduction of muscle mass and anorexia, which is frequently seen in patients with advanced cancer. It has been proposed that inflammatory cytokines produced by tumors have a profound effect on cachexia. An increase in interleukin-6 or tumor necrosis factor-α causes loss of appetite, muscle atrophy, and increased energy consumption that leads to cachexia. Several studies have suggested that myokines produced by skeletal muscles might have anti-cancer effects. 14,15 Therefore, decreased myokine levels caused by cancer cachexia might be associated with a poor prognosis. Cachexia has also been associated with reduced chemotherapeutic effects, increased side effects, and treatment interruptions, resulting in poor outcomes in cancer patients. 16,17 In this study, 49% of early gastric cancer patients showed a low CXI.
We acknowledge the limitations of this study. It was principally limited by its small sample size and single-center retrospective design, which might have caused selection bias. In addition, the cutoff value of CXI was determined using data from a single center, which may limit generalizability. Therefore, although this is the first study to investigate the relationship between CXI and clinical outcomes in gastric cancer patients, our findings need to be validated by large-scale studies.
In conclusion, the CXI might be a valuable factor for predicting the DFS and OS of gastric cancer patients. Our results suggest the utility of comprehensive assessment using nutritional, physical, and inflammatory status in gastric cancer patients.
AUTHOR CONTRIBUTIONS
KN and KH developed the main concept and designed the study.
ACKNOWLEDGMENTS
None.
FUNDING INFORMATION
This work was supported by grant JP21K08805 and by research grants from the Uehara Memorial Foundation and the Japanese Foundation for Multidisciplinary Treatment of Cancer.
CONFLICT OF INTEREST
The authors declare that they have no conflicts of interest.
Deep learning for de-convolution of Smad2 versus Smad3 binding sites
Background The transforming growth factor beta-1 (TGFβ-1) cytokine exerts both pro-tumor and anti-tumor effects in carcinogenesis. An increasing body of literature suggests that TGFβ-1 signaling outcome is partially dependent on the regulatory targets of downstream receptor-regulated Smad (R-Smad) proteins Smad2 and Smad3. However, the lack of Smad-specific antibodies for ChIP-seq hinders convenient identification of Smad-specific binding sites. Results In this study, we use localization and affinity purification (LAP) tags to identify Smad-specific binding sites in a cancer cell line. Using ChIP-seq data obtained from LAP-tagged Smad proteins, we develop a convolutional neural network with long-short term memory (CNN-LSTM) as a deep learning approach to classify a pool of Smad-bound sites as being Smad2- or Smad3-bound. Our data showed that this approach is able to accurately classify Smad2- versus Smad3-bound sites. We use our model to dissect the role of each R-Smad in the progression of breast cancer using a previously published dataset. Conclusions Our results suggest that deep learning approaches can be used to dissect the binding site specificity of closely related transcription factors. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-022-08565-x.
Introduction
Transforming growth factor-beta (TGFβ) signaling contributes to a wide range of cellular behaviors in both normal and tumor settings. TGFβ plays essential roles in differentiation [1,2], epithelial-mesenchymal transition (EMT) [3,4], cytostasis [5], cell migration [6], angiogenesis [7] and wound healing [8]. Its role in carcinogenesis has been described as paradoxical because TGFβ can act as either a tumor suppressor or a driver of cancer progression depending on context [9,10]. The paradoxical role of TGFβ in cancer biology has led to a growing body of data documenting molecular co-factors that determine the different TGFβ outcomes. However, an unmet need remains to re-analyze prior TGFβ-pathway data according to what is now known about specific molecular determinants.
The canonical pathway of TGFβ-1 signaling is initiated when an extracellular TGFβ-1 ligand binds and induces dimerization of the TGFβ receptor, which then phosphorylates one of the R-Smad proteins Smad2 or Smad3. The phosphorylated R-Smad forms a complex with the common partner (co-Smad) Smad4 and translocates to the nucleus to regulate the expression of target genes [11,12]. Activation of R-Smads is partly regulated by dynamic phosphorylation-dependent shuttling of R-Smad complexes between the cytoplasm and the nucleus [12,13].
Although both Smad2 and Smad3 can be phosphorylated by the same receptor, activation of different R-Smads often leads to different regulatory outcomes. For example, in the metastatic breast cancer cell model MDA-MB-231, Smad2 knock-down led to a more aggressive phenotype, while Smad3 knock-down led to a lag in tumor initiation, suggesting that Smad2 and Smad3 have opposing effects on disease progression [14]. Another study in HaCaT cells showed that Smad3 was responsible for driving cell-cycle arrest [15]. These Smad2- and Smad3-specific signaling outcomes have been further traced to Smad-specific binding of transcription factors to the R-Smad complex [16]. Smad binding partners affect which transcription sites are bound by the R-Smad complex because the R-Smads by themselves have low DNA binding affinity (1.1 × 10⁻⁷ M by electrophoretic mobility shift assay [17]).
Since Smad-driven genome regulation is mediated through chromatin binding, it should be possible to distinguish Smad2- from Smad3-driven regulation using genome-wide binding measurements of Smad binding elements (SBEs). However, direct genome-wide measurement of specific R-Smad binding is limited by the lack of Smad2-specific antibodies for ChIP-seq or similar experiments. This challenge pervades the Smad signaling literature (most studies simply refer to "Smad2/3" signaling) and is particularly acute for genome-binding measurements. Consequently, most ChIP-seq studies of Smads use a high quality pan-Smad2/3 antibody and are unable to distinguish regulation by the different Smads. Efforts to measure Smad-specific genomic binding directly, such as by transfection of Smad fusion proteins, or CRISPR knock-out of either Smad2 or Smad3, would perturb R-Smad abundance and disrupt the nucleocytoplasmic feedback dynamics [13].
An experimental solution to this challenge would be to provide cells with epitope-tagged Smads in a native cis-regulatory environment. This can be accomplished using methods such as the BAC TransgenOmics platform [18], in which epitope-tagged BAC transgenes are introduced into mammalian cells, preserving proximal cis-regulatory elements. More recent genome editing approaches, such as CRISPR/Cas9, can also be used for epitope tagging in the genome itself [19]. Such an experimental approach, however, would not disambiguate Smad binding in previously generated data. The limited information available about Smad2-specific and Smad3-specific effects would be more useful if it could help provide Smad-specific attribution for the vast amounts of non-specific information already collected regarding combined Smad2/3 effects.
Recent advances in machine learning have enabled the use of models trained on existing data to perform transcription factor binding site (TFBS) prediction. The power of such models was demonstrated in the ENCODE-DREAM challenge, where teams competed to develop models for cell type-specific TFBS prediction using ATAC-seq data. The top entries such as Anchor [20], Catchitt [21], and FactorNet [22] were able to accurately predict the binding sites of transcription factors in cell types not included during training. Despite the promise of cell type-specific TFBS prediction using machine learning, model performance varies widely, partly due to differences in the quality of training data available. More recently, neural networks such as Deepbind [23] and DeepTF [24] are being used to perform TFBS prediction. While Convolutional Neural Networks (CNNs) were initially developed for use on image data, CNNs have also been used for feature selection on non-image data, as exemplified by methods such as DeepInsight [25] and DeepFeature [26]. However, most machine learning approaches to TFBS prediction have been evaluated on widely studied transcription factors such as REST and CTCF, where large amounts of data are available for model training. To the best of our knowledge, no model has been developed to disambiguate R-Smad binding sites.
In this study, we combine experimental genome-wide measurement of Smad-specific binding sites with deep learning to disambiguate genome-wide Smad2 and Smad3 binding in new and existing data. In order to experimentally distinguish Smad2 and Smad3 target sites, Smad2 and Smad3 fusion proteins were transfected into the breast cancer cell line MDA-MB-231 in a native cis-regulatory environment as BAC transgenes [18]. ChIP-seq was then performed using the fusion tags to identify binding regions of each R-Smad. Geometric analysis of the binding regions identified sequence-dependent structural features, suggesting that sequence-based learning could distinguish R-Smad-specific binding. Using the collected sequences as training data, we developed a deep learning model to classify Smad2- and Smad3-binding regions. We applied this model to the problem of attributing Smad2- versus Smad3-binding for regions of known pan-Smad2/3 antibody binding. Specifically, we re-analyzed a public ChIP-seq data set that had been generated using a pan-Smad2/3 antibody, and our method inferred potential Smad2- and Smad3-driven genomic regulation. This study represents a proof of concept for the broader use of deep learning to resolve the specificity of genomic regulation driven by closely related transcription factors.
LAP-tagged r-Smad BAC system is able to recapitulate native TGFβ signaling
Immunoblots confirmed the presence of the LAP-Smad, which resolved at a higher molecular weight due to the presence of the LAP tag. The LAP-Smad was detected together with the endogenous Smad of interest when cell lysate was immunoblotted against a specific Smad; LAP-Smad2 at 85kDa could be detected together with the endogenous Smad2 at 58kDa when immunoblotted with Smad2 antibody. The LAP-Smad2 was also detected at the same 85kDa size when immunoblotted with GFP antibody. No LAP-Smad3 was detected in the MDA-Smad2 cell lysate, and vice versa, indicating that there was no cross interaction (Additional file 1).
To illustrate the functionality of the LAP-Smad, high content analysis imaging was performed with anti-GFP antibody to demonstrate the translocation of LAP-Smad2 and LAP-Smad3 upon TGFβ-1 stimulation. In the absence of TGFβ-1, the LAP-Smad2 and LAP-Smad3 were mainly localized in the cytoplasm. Translocation of LAP-Smad into the nucleus was observed 1 hour after 10ng/mL TGFβ-1 stimulation (Fig. 1).
LAP-tagged r-Smad BAC ChIP-seq shows good concordance with native ChIP-seq
We used an approach similar to the Irreproducible Discovery Rate [27] of ENCODE for comparing the peaks called using LAP-tagged Smad3 and native Smad3 ChIP-seq generated in-house. Briefly, peaks were called using MACS2 with the default parameters and a cut-off q-value of 0.05 in both experiments. The distance from each peak obtained from the Smad3 ChIP of MDA-MB-231 to the nearest peak found with the GFP-ChIP of MDA-Smad3 was calculated using GenomicRanges [28]. Finally, the distance to the nearest peak was visualized as a function of the p-value of the peak. If the p-value is a reliable indicator of confidence, then we would expect peaks with more significant (smaller) p-values to have shorter distances to the nearest peak (i.e., greater overlap between both ChIP experiments). Indeed, we found that although LAP-tagged Smad3 allowed a greater number of peaks to be called, there was still good concordance between peaks called in native Smad3 as well as LAP-tagged Smad3 (Additional file 2). In particular, we observed that peaks with p-values of less than 10⁻²⁰ in our MDA-Smad3 ChIP overlapped a peak identified in our native Smad3 ChIP. This result suggests good concordance between LAP-tag Smad3 ChIP-seq and native Smad3 ChIP-seq. Having established the concordance of our LAP-tag Smad ChIP, we turned to characterizing Smad2- and Smad3-bound sites.
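The distance-to-nearest-peak calculation (performed in the study with GenomicRanges in R) can be sketched in Python for a single chromosome; the peak coordinates below are hypothetical:

```python
import bisect

def nearest_peak_distances(query_centers, reference_centers):
    """For each query peak center, the distance to the nearest reference
    peak center on the same chromosome. Assumes reference is non-empty."""
    ref = sorted(reference_centers)
    out = []
    for q in query_centers:
        i = bisect.bisect_left(ref, q)   # first reference >= q
        candidates = []
        if i < len(ref):
            candidates.append(abs(ref[i] - q))      # nearest on the right
        if i > 0:
            candidates.append(abs(ref[i - 1] - q))  # nearest on the left
        out.append(min(candidates))
    return out

# Hypothetical GFP-ChIP peaks compared against native Smad3 peaks.
distances = nearest_peak_distances([100, 550, 1000], [0, 500, 1000])
```

Plotting such distances against peak p-values reproduces the concordance check described above: well-supported peaks should sit near a reference peak.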
Characterising Smad2 and Smad3 binding sites
Earlier studies had highlighted a role of 3D conformation in determining the binding affinity of transcriptional co-regulators [29-31]. Furthermore, a recent structural study of FOXH1-driven TGFβ signaling identified DNA shape characteristics that distinguished Smad2- versus Smad3-binding complexes [32]. We took advantage of these findings to characterize key shape properties of the respective R-Smad binding sites using the R package DNAshape [33]. The binding regions of Smad2 and Smad3 obtained from our LAP-Smad ChIP were subjected to computational prediction of structural and geometric features, such as minor groove width and electrostatic potential.
While the minor groove width (MGW) of Smad2- and Smad3-bound sites was similar at the middle of the binding peaks, we observed that the MGWs at the farthest ends (+/− 100 base pairs) of Smad3-bound peaks were narrower than for the Smad2-bound peaks (Fig. 2A). We also observed a larger electrostatic potential in Smad2-bound regions as compared to Smad3-bound regions (Fig. 2B). These differences can be attributed to differences in the underlying DNA structure. The intrinsic flexibility of DNA can be characterized along dinucleotide steps [34]: flexible steps allow for more exploration of conformational space while stiffer steps allow for less. Likewise, C:G base pairs have a larger electrostatic potential due to the presence of a partial positive charge on the amine group of cytosine. The more negative electrostatic potential observed in the narrower Smad3-bound sites is also consistent with earlier Poisson-Boltzmann calculations that show lower electrostatic potentials in structures with narrower MGW [29]. Both intrinsic flexibility and electrostatic potential contribute to sequence-dependent groove width differences [35]. Consistent with our expectation, Smad2-bound regions had an average GC content of 50.3% as compared to Smad3-bound regions with an average of 50.0% (p < 0.05, t-test).
Biologically, the differences between Smad2- and Smad3-bound sites can be traced to the differences in transcriptional co-regulators that interact with each respective R-Smad. Motif enrichment analysis was performed to identify potential co-regulators of Smad2 and Smad3 binding. While both Smad2- and Smad3-bound promoters were enriched for MEF, Smad2-bound promoters were exclusively enriched for various basic helix-loop-helix (bHLH) transcription factors such as E2A, Tcf12, and Ascl. This is juxtaposed to the exclusive enrichment of Smad3-bound promoters for various nuclear receptors (NRs). The bHLH family of transcription factors recognizes the E-box motif [36], comprising the canonical CG-rich sequence CANNTG [37]. In contrast, the NR family of transcription factors recognizes the P-box motif, which comprises either AGAACA or AGGTCA [38].
Taken together, our characterization of the shape features of Smad2-and Smad3-bound sites suggests DNA sequence could potentially encode information about R-Smad specificity. Hence, we sought to build a model that enables de-convolution of Smad2 and Smad3 binding using DNA sequence. Using such a model, we seek to classify a peak identified using a pan-Smad2/3 antibody as being Smad2-bound or Smad3-bound.
CNN-LSTM hybrid model that can distinguish between Smad2 and Smad3 binding sites
Both CNNs and RNNs have been used extensively in TFBS prediction tasks, with both yielding competitive results. We first sought to assess the suitability of each network architecture for de-convolving Smad binding sites. As shown in Fig. 3, the AUPR obtained on the testing set for the CNN and CNN-LSTM models was comparable (0.95 and 0.96, respectively) when we used 10 models for prediction. Notably, the CNN-LSTM model was able to classify Smad2-bound sites better despite the imbalanced training data, increasing the accuracy from 0.7 to 0.78 at the cost of a 0.03 decrease in the accuracy of Smad3 predictions. The improved performance of the CNN-LSTM hybrid is consistent with the finding by Lanchantin et al. [39] that a medium-sized CNN-RNN hybrid model yielded a higher AUC compared to a small CNN comprising 2 convolutional layers while having smaller standard deviations between different models. While no neural network has previously been developed for the task of de-convoluting the binding sites of closely related transcription factors, the AUPR obtained in our study is comparable with state-of-the-art TFBS methods such as Catchitt, which reported an AUPR of > 0.8 in classification of CTCF using a large training dataset [21].
We also evaluated the impact of using different numbers of models for ensemble learning. The final output probability for classification is calculated by taking the average probability from all the models used in an ensemble. This was done by enumerating all possible combinations of N models (where N is the number of models to be used for ensemble learning). As expected, increasing the number of models led to increased AUPR (Fig. 3C). Increasing the number of models also decreased the standard deviation, suggesting greater consistency in predictions between models. These findings are consistent with earlier work in machine learning that demonstrated the superiority of ensemble methods in classification tasks [21,40]. Taken together, our results show that snapshot ensemble learning, combined with a cosine annealing training schedule, was a computationally efficient approach for increasing the performance of NN-based TFBS prediction.
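The ensemble-averaging step can be sketched as follows (a simplified illustration of averaging per-model probabilities and enumerating all size-n model subsets; the probability values are made up):

```python
from itertools import combinations

def ensemble_probability(per_model_probs):
    """Average the predicted probabilities across snapshot models.
    per_model_probs: list of per-model probability vectors, one per model."""
    return [sum(p) / len(p) for p in zip(*per_model_probs)]

def all_size_n_ensembles(per_model_probs, n):
    """Enumerate every combination of n models and return each
    ensemble's averaged predictions (used to assess AUPR vs. n)."""
    return [ensemble_probability(c) for c in combinations(per_model_probs, n)]

# Two hypothetical snapshot models scoring the same two binding sites.
avg = ensemble_probability([[0.2, 0.8], [0.4, 0.6]])
```

Evaluating every size-n subset, rather than a single arbitrary subset, is what lets one report both the mean and the standard deviation of AUPR as a function of ensemble size.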
To test whether our model could be generalized, we tested it on the human embryonic stem cell (hESC) dataset deposited by Kim et al. [41]. In this dataset, Kim and colleagues sought to identify Smad2 and Smad3 binding sites during embryonic development. Due to the lack of Smad2-specific antibodies, Smad2 binding sites were inferred by performing 'peak subtraction'. In brief, a pan-Smad2/3 antibody was first used to obtain a list of all Smad2/3 binding sites. A second ChIP was then performed using a commercially available Smad3-specific antibody. Finally, the Smad2 sites were identified by removing binding sites that were common to both ChIP experiments. We compared the predicted classification of Smad binding sites with the classification based on peak subtraction. The results are shown in Fig. 3D. Despite the low AUPR (0.44), the confusion matrix showed that our model was able to classify Smad2- and Smad3-bound sites correctly about 60% of the time, a decrease in performance when compared to the testing dataset. However, the decrease in accuracy can be attributed to the lack of cell-specific training data, as our model was trained using Smad binding sites in a breast cell line. The dependence of model performance on the size of the training dataset has also been observed in other state-of-the-art TFBS prediction models.
(Fig. 2 caption: Characterization of Smad2 and Smad3 binding sites using DNAShapeR. A, minor groove width (MGW) of Smad2-bound sites (left) and Smad3-bound sites (right); while both had similar MGW at the peak centers, Smad3-bound peaks were markedly narrower 100 base pairs upstream and downstream of the center. B, electrostatic potential (EP) of Smad2- and Smad3-bound sites; Smad2-bound sites showed higher electrostatic potential across the full 200 base pairs of each binding site.)
(Fig. 3 caption, panels C and E: C, the effect of ensemble learning on model performance evaluated using AUPR; increasing the number of models from one to ten increased the AUPR and decreased the standard deviation, indicative of stability. E, confusion matrix of Smad2/3 binding in hESC, showing model performance in a novel cell type not included in the training dataset.)
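The 'peak subtraction' heuristic used by Kim et al. can be sketched as interval filtering (a simplified single-chromosome illustration with hypothetical coordinates, not the authors' actual pipeline):

```python
def subtract_peaks(pan_smad_peaks, smad3_peaks, max_gap=0):
    """Infer putative Smad2 sites by removing pan-Smad2/3 peaks that
    overlap any Smad3-specific peak. Peaks are (start, end) closed
    intervals on the same chromosome."""
    def overlaps(a, b):
        # max_gap optionally lets near-misses count as overlap
        return a[0] <= b[1] + max_gap and b[0] <= a[1] + max_gap
    return [p for p in pan_smad_peaks
            if not any(overlaps(p, s) for s in smad3_peaks)]

# Pan-Smad2/3 peaks minus one Smad3 peak leaves two candidate Smad2 sites.
smad2_candidates = subtract_peaks(
    [(0, 100), (200, 300), (400, 500)], [(250, 260)])
```

This subtraction is only a proxy for true Smad2 binding, which is one reason a sequence-based classifier trained on tag-specific ChIP offers a complementary view.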
De-convolving the roles of Smad2 and Smad3 in MCF10A-MII cells
Having shown that our model is able to classify Smad-bound sites as either Smad2- or Smad3-bound with reasonable accuracy, we sought to leverage it to investigate the relative contributions of each R-Smad in breast cancer progression. Sundqvist and colleagues performed ChIP-seq against Smad2/3 in MCF10A-MII cells to identify early and late (16 hours) TGFβ response genes, and demonstrated a shift in Smad2/3 binding sites following sustained TGFβ treatment [42]. However, the authors were not able to differentiate between Smad2- and Smad3-bound genes. To de-convolute the contribution of each R-Smad to breast cancer progression, we used our model to classify Smad-bound sites as either Smad2- or Smad3-bound. This might shed light on the contributions of each R-Smad in sculpting the response of MCF10A-MII cells to TGFβ-1. Following classification, we performed GO enrichment to functionally characterize the Smad2- and Smad3-bound peaks in both the early and late TGFβ response.
In the early TGFβ response, we observed an enrichment of TGFβ signaling related pathways among Smad3 peaks (Table 1). This suggests that Smad3, and not Smad2, upregulates canonical TGFβ target genes. This observation is corroborated by experimental evidence from the literature demonstrating the direct role of Smad3 in regulating the expression of canonical early response genes such as Id1 and Smad7. For instance, Liang and colleagues demonstrated that Smad3, and not Smad2, leads to the induction of Id1 expression one-hour post treatment in MCF10A cells [43]. Likewise, Smad3 also directs the expression of Smad7 via direct binding to the promoter [44].
Turning to the pathways regulated by each Smad following 16 hours of treatment, we observed that Smad3 targets were associated with processes involved in the reorganization of the extracellular matrix (ECM), including ECM degradation. The degradation of ECM is a crucial step during cell invasion process. On the other hand, we observed terms associated with neural development in Smad2-bound loci. A role of Smad2 in neural development has been observed in mouse models, with the Smad2 δ exon-3 isoform being enriched in the nuclear fraction during brain cell differentiation [45]. The process of neural development includes EMT and directed migration, and has striking resemblance to cell migration in carcinogenesis [46].
Conclusion
In this study, we first validated a LAP-tagged R-Smad system that enables identification of Smad2- and Smad3-specific binding sites in a breast cancer cell line. Using the Smad-specific binding sites identified from these experiments, we performed in-silico characterization of the structural features that dictate R-Smad-specific binding, and concluded that local sequences encode significant amounts of information. Thereafter, we used deep learning methods to classify a pool of R-Smad-bound sequences as Smad2- or Smad3-bound. Finally, we took the CNN-LSTM hybrid model and used it to disambiguate the roles of Smad2 and Smad3 in the early and late response to TGFβ-1 in a separate breast cell line, MCF10A-MII.
Our in-silico structural predictions of Smad2 and Smad3 binding sites suggest that regions flanking Smad2 binding sites have wider minor grooves as compared to Smad3 binding sites. This difference in minor groove width in turn correlates with a larger electrostatic potential in Smad2-bound sites. The structural differences can be attributed to differences in sequence, which can in turn be traced to the different transcriptional co-regulating partners of each R-Smad. As the structural properties are encoded by the sequence, we used the sequences to develop a neural network model to disambiguate Smad2 and Smad3 binding sites. Consistent with earlier studies, our data suggest that a CNN-LSTM hybrid model outperforms a CNN-only model in such classification tasks. Finally, we applied our model to disambiguate the roles of Smad2 and Smad3 in breast cancer disease progression from a publicly available dataset. Our functional enrichment analysis suggests differential roles of Smad2 and Smad3 in both the early and late TGFβ response, with a more pronounced role of Smad3 in sculpting the early response, while both Smads regulate different processes involved in the epithelial-mesenchymal transition program in the late TGFβ response. While our results suggest the feasibility of using machine learning to disambiguate Smad2 and Smad3 binding sites, there are several limitations of the present model that represent potential avenues for improvement. First, our current model treats Smad2 and Smad3 binding as occurring at distinct sites; future work to develop a multi-class model could identify sites to which both Smad2 and Smad3 can bind. A second limitation of our model is the lower generalizability observed on the hESC dataset. This is due to the lack of training data from other cell types, which prevents our model from learning more generalizable features of Smad2/Smad3 binding sites.
More experimental data from Smad2/Smad3 specific ChIP in other cell types would be required in order for a more generalizable model to be developed.
Molecular cloning
The BAC-SMAD2 and BAC-SMAD3 recombinant plasmids used in this study were provided by the Genome Engineering Core Facility of the Institute for Genomics and Systems Biology at the University of Chicago. A BAC containing the gene and endogenous cis control elements was tagged by recombineering to yield the Localization and Affinity Purification (LAP) tag at the C-terminus [18]. Smad2 was tagged in CH17-5E15BAC (BAC-SMAD2) while Smad3 was tagged in CH17-187G10BAC (BAC-SMAD3). Following expansion, the plasmids were extracted using the Maxi/BAC protocol with the Nucleobond AX 100 kit (Macherey-Nagel, Hoerdt, France).
Cell culture
For the generation of cells stably expressing LAP-tagged Smad2/Smad3 (referred to as BAC-SMAD cells), MDA-MB-231 cells (ATCC HTB-26) were transfected with BAC-SMAD plasmid via Lipofectamine 2000. Selection of transfectants was performed with Geneticin. Three weeks after antibiotic selection, the cells were GFP-selected using a MoFlo XDP Cell Sorter (Beckman Coulter) incorporated into a Class II BSC and equipped with the standard Ar and Kr gas lasers and a 488 nm 200 mW blue laser to obtain a highly purified BAC-SMAD population. The transfected cells were maintained in DMEM, 4500 mg/L glucose, supplemented with 10% (v/v) Fetal Bovine Serum, 100 U/mL penicillin/streptomycin and 800 μg/mL Geneticin (Gibco) in a humidified 5% CO2 incubator at 37°C.
Western blot
Treatment of cells was performed with 10 ng/mL of TGF-β1 (Sigma #T7039). Cells were lysed in RIPA Buffer containing protease and phosphatase inhibitors, and quantification was performed using the Quick Start™ Bradford Protein Assay. The protein lysates were denatured and fractionated with NuPAGE Novex 4-12% Bis-Tris SDS-PAGE in 1X MES buffer. The resolved proteins were wet-transferred onto nitrocellulose membrane and blocked for one hour. The membrane was incubated overnight at 4°C with primary antibodies. Antibodies used: Smad2 antibody abcam #ab71109, Smad3 antibody abcam #ab28379, Smad4 abcam #ab3219, GFP antibody abcam #ab290 and GAPDH Ambion #am4300 (Additional file 1). Visualization was performed with Amersham ECL Select Detection Reagent with a FluorChem R Imager (ProteinSimple, CA, USA).
ChIP-sequencing
Chromatin Immunoprecipitation (ChIP) was performed with the EZ-Magna ChIP™ A Chromatin Immunoprecipitation Kit (Millipore, Billerica, MA, USA) with anti-GFP antibody (abcam). ChIP DNA was purified with the Qiagen PCR purification kit and quantification was performed using the Qubit 3.0 Fluorometer with the Qubit dsDNA HS Assay Kit. DNA libraries were generated using the TruSeq ChIP Sample Prep Kit (Illumina) followed by deep sequencing on Illumina's HiSeq 2500 system, with at least 100 million raw reads per sample to obtain ≥ 40 million clean single-end reads and a minimum target non-redundancy fraction (NRF) of ≥ 0.8 per 10 million uniquely mapped reads. Sequencing was performed at the Beijing Genome Institute (BGI).
Bioinformatics analysis of ChIP-seq data
Sequencing reads were aligned to the hg38 genome using Bowtie2 [47]. Following alignment, peak calling was performed using MACS2, with reads extended to 200 base pairs to recover the original binding sites [48]. Downstream annotations and analysis was performed in R. Peaks identified by ChIP-seq data in TGFβ treated MCF10A-MII was downloaded from GEO (accession number: GSE83788) [42]. Likewise, peaks from TGFβ treated human embryonic stem cells were downloaded from GEO [41] (accession number: GSE29422) and peak coordinates were converted to hg38 using the liftover tool. Peak annotation and feature encoding was performed in a similar manner to our in-house dataset (described below). Gene ontology (GO) enrichment analysis was performed using ReactomePA [49] and default settings. GO terms with a p-value of less than 0.05 were considered to be enriched.
Architecture of neural networks
Various neural network architectures have been proposed for the task of TFBS prediction, with CNNs and RNNs as the two dominant architectures. Various forms of RNNs have been proposed, with long-short term memory (LSTM) and gated recurrent units (GRU) as the two dominant types used in TFBS prediction. Two models were trained: a vanilla CNN model comprising only convolutional layers connected to two fully connected layers, and a CNN-LSTM hybrid comprising a convolutional input layer connected to a long-short term memory (LSTM) layer before being connected to two fully connected layers. Figure 4 shows the configurations of our models. A dropout layer with a dropout ratio of 0.2 was added between the two dense layers to prevent overfitting.
For the two fully connected layers prior to the output layer, the number of neurons was determined based on the work by Huang et al. [50], which specified the minimum number of neurons required to capture all the samples within the dataset. This allows us to choose the smallest possible number of neurons in the dense layers while not losing valuable information for model training. We used the ReLU activation function for each of the fully connected layers before passing the values to the output layer, which uses the sigmoid function to obtain a final predicted value.
Model training and evaluation
Smad2/3-bound promoters (defined to be within 3 kb of transcription start sites) were first resized to 200 base pairs. Thereafter, the N sequences were one-hot encoded to produce an N × 200 × 5 matrix, which was then used as input for training and prediction. In our one-hot encoded matrix, each promoter is encoded in one of N rows, and each base is encoded by 5 slices corresponding to A, T, C, G or N. Neural network training was performed in Keras using the Tensorflow framework. Training was performed with 75% of the dataset, with the other 25% reserved for model testing. We used a cosine annealing training schedule with restarts [51], where the learning rate was gradually decreased in each epoch according to the formula

a(t) = (a_0 / 2) [ cos( π · ((t − 1) mod ⌈T/M⌉) / ⌈T/M⌉ ) + 1 ],

where a(t) refers to the learning rate at epoch t, a_0 refers to the maximum learning rate, and T and M represent the total number of epochs and the number of training cycles, respectively. We combined the cosine annealing training schedule with snapshot ensemble learning [52], where the outputs from ten different models are averaged to produce a final predicted value. The learning rate was reset to the maximum learning rate at the start of each model. The area under the precision-recall curve (AUPR) was used as the metric of model performance. As our dataset was highly imbalanced, with 75% of the sites being Smad3-bound, we used a cut-off probability of 0.75 for classifying peaks as being Smad2- or Smad3-bound.

Fig. 4 Architectures of neural networks used in this study. The CNN is made of two convolution stacks (convolution layer + maxpooling). A filter size of five is used in the first convolution stack to serve as a motif detector. Thereafter, we used a larger filter size (32) in the next convolutional layer to capture larger patterns in the sequence. Following the convolution stacks, the features are flattened and batch normalized before passing through two dense layers using the ReLU activation function, which are connected by a dropout layer. Finally, the output from the dense layer is passed to an output layer with a sigmoid activation to produce a final prediction value. Similar to the CNN model, we first used a convolution layer with a filter size of five to serve as a local motif detector for our CNN-LSTM model. After maxpooling, the output matrix is passed to an LSTM with 32 cells. Thereafter, the output from the LSTM is batch normalized and passed through two fully connected layers with the same configuration as our CNN model.
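The cosine-annealing-with-restarts learning-rate schedule described above can be sketched in a few lines (a minimal illustration of the schedule itself, not the actual Keras callback used for training):

```python
import math

# Cosine annealing with warm restarts, as in snapshot ensembling:
# the learning rate decays from a0 towards 0 within each of M cycles
# of length ceil(T / M) epochs, then resets to a0 at each cycle start.

def snapshot_lr(t, a0, T, M):
    """Learning rate at epoch t (1-indexed)."""
    cycle_len = math.ceil(T / M)
    return (a0 / 2.0) * (math.cos(math.pi * ((t - 1) % cycle_len) / cycle_len) + 1.0)

# e.g. T = 100 epochs, M = 10 cycles, a0 = 0.1:
# epochs 1 and 11 both start a cycle at the full rate 0.1,
# and the rate halves (0.05) midway through each 10-epoch cycle
```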
"year": 2022,
"sha1": "19144c3ccc187806108ad8f133173a2e1fc98806",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-022-08565-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19144c3ccc187806108ad8f133173a2e1fc98806",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cosmology with Love: Measuring the Hubble constant using neutron star universal relations
Gravitational-wave cosmology began in 2017 with the observation of the gravitational waves emitted in the merger of two neutron stars, and the coincident observation of the electromagnetic emission that followed. Although only a $30\%$ measurement of the Hubble constant was achieved, future observations may yield more precise measurements either through other coincident events or through cross correlation of gravitational-wave events with galaxy catalogs. Here, we implement a new way to measure the Hubble constant without an electromagnetic counterpart and through the use of the binary Love relations. These relations govern the tidal deformabilities of neutron stars in an equation-of-state insensitive way. Importantly, the Love relations depend on the component masses of the binary in the source frame. Since the gravitational-wave phase and amplitude depend on the chirp mass in the observer (and hence redshifted) frame, one can in principle combine the binary Love relations with the gravitational-wave data to directly measure the redshift, and thereby infer the value of the Hubble constant. We implement this approach in both real and synthetic data through a Bayesian parameter estimation study in a range of observing scenarios. We find that for the LIGO/Virgo/KAGRA design sensitivity era, this method results in a similar measurement accuracy of the Hubble constant to those of current-day, dark-siren measurements. For third generation detectors, this accuracy improves to $\lesssim 10\%$ when combining measurements from binary neutron star events in the LIGO Voyager era, and to $\lesssim 2\%$ in the Cosmic Explorer era.
I. INTRODUCTION
The inference of cosmological parameters like the Hubble constant, H_0, using gravitational waves (GWs) hinges on the standard siren approach [1,2]. The central idea is to measure the luminosity distance, D_L, from the GW data while simultaneously identifying an electromagnetic (EM) signal from the source. The independent measurement of D_L and the cosmological redshift, z, leads to a measurement of H_0. In the absence of a counterpart, clustering of H_0 measurements from potential host galaxies for a large sample of events also leads to a statistical measurement of H_0 [3]. This idea found its first application in the simultaneous panchromatic observations of GWs, gamma-rays, optical, and infrared radiation from the binary neutron star (BNS) merger seen by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) [4] and Advanced Virgo [5] detectors, GW170817 [6]. The identification of the host galaxy, NGC 4993, led to an independent measurement of H_0 [7]. Also, the ∼16 deg² sky localization led to a statistical measurement of H_0, agnostic of the host-galaxy information [8]. Such independent measurements are crucial in light of the recent tension between the values of H_0 measured from observations of the early and late-time universe [9].
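As a back-of-the-envelope illustration of the standard siren idea: at low redshift, the Hubble law gives H_0 ≈ c z / D_L directly from the two independent measurements. The numbers below are illustrative GW170817-like values (z of NGC 4993 from the text; an assumed D_L near 43 Mpc), not the published posteriors:

```python
# Standard siren at low redshift: H0 ~ c * z / D_L.
# Values are illustrative, not the published measurements.

C_KM_S = 299792.458           # speed of light [km/s]

def hubble_constant(z, d_l_mpc):
    """Low-z estimate of H0 [km/s/Mpc] from redshift and luminosity distance."""
    return C_KM_S * z / d_l_mpc

h0 = hubble_constant(0.0099, 42.9)   # z of NGC 4993, assumed D_L ~ 43 Mpc
# h0 is ~69 km/s/Mpc
```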
A different approach to estimating the distance-redshift relation, solely using GWs from merging BNS systems, was first proposed by Messenger and Read [10]. Measuring the redshift is challenging because, while the amplitude of the GW encodes information about D_L, the mass parameters are degenerate with the redshift, resulting in a measurement of the redshifted mass at the detector as opposed to the true source-frame mass, i.e., m_det = (1 + z) m_source. However, matter effects in BNS inspirals, characterized by the tidal deformability parameter λ̄, break this degeneracy, since the tidal deformability is a function of the source-frame mass, m_source. This feature has been exploited in the literature via an expansion of λ̄ in terms of m_source [11-13]. However, the expansion coefficients depend on the NS equation of state (EoS), which is uncertain. This implies that, in the absence of a priori information about the EoS, the expansion coefficients are free parameters in data analysis, and an extraction of H_0 is difficult. This degeneracy, however, can be broken through a set of universal relations discovered by Yagi and Yunes [14,15] (hereafter YY17). Their work has shown that there exist tight relations between these expansion coefficients that are insensitive to the EoS. Using these so-called binary Love relations reduces the number of free parameters drastically. Since the λ̄_0^(0)-λ̄_0^(k) relation is universal, measurements can be stacked using data from gold-plated BNS events for which the source-frame mass, or equivalently the redshift, is known. Subsequently, the value of λ̄_0^(0) may be fixed to express λ̄ = λ̄(m_source) for future BNS detections, leading to a measurement of m_source, or equivalently the redshift, and thus H_0.
In this paper, we use the YY17 prescription to construct a λ̄_0^(0)-λ̄_0^(k) relation using EoSs that satisfy the current constraints on the masses and radii of NSs from LIGO/Virgo for GW170817 [16], and from the Neutron Star Interior Composition Explorer (NICER) measurements of the millisecond pulsar PSR J0030+0451 [17]. We then perform Bayesian parameter estimation on GW170817 data, employing this relation to measure the free coefficient, λ̄_0^(0), appearing in the expansion. We then analyze the prospects of measuring H_0 by performing Bayesian parameter estimation on a set of simulated (synthetic) BNS signals across different detector eras. While the H_0 measurement from individual events may not be very constraining due to the distance-inclination degeneracy in GW parameter estimation, we show that by combining the results from multiple events, the stacked measurement of H_0 converges to the true value. We find that for the design-sensitivity LIGO/Virgo/KAGRA/India (HLVKI) era, the measurement accuracy is comparable to the current dark siren measurements [18,19], or to the recent counterpart measurement, assuming the association of GW190521 and ZTF19abanrhr [20,21], shown in Refs. [22,23]. This accuracy improves to ∼10% in the Voyager era assuming O(10²) BNS events, and to ∼2% for O(10³) detections in the Cosmic Explorer (CE) era. Ultimately, the accuracy of the measurement of H_0 will be limited by systematic uncertainties in the binary Love method, and we study what limits these systematic uncertainties place on future measurements of H_0.
The organization of this paper is as follows. In Sec. II, we briefly summarize the universal binary Love relations and show the construction of the λ̄_0^(0)-λ̄_0^(k) relation used in this work. In Sec. III, we present our measurement of λ̄_0^(0), obtained by performing Bayesian parameter estimation on GW170817 data. In Sec. IV, we use the results for λ̄_0^(0) and simulated BNS events across different generations of ground-based detectors to show the prospects of measuring H_0 using multiple events. In Sec. V, we consider the systematic errors that could arise in the measurement of H_0 due to uncertainty in the measurement of λ̄_0^(0), and from uncertainties in the λ̄_0^(0)-λ̄_0^(k) relations. Finally, we conclude in Sec. VI. We follow the conventions of Ref. [24], and when necessary, use geometric units G = 1 = c.
II. BINARY LOVE RELATIONS
The quasi-circular inspiral of a compact binary system is described under the post-Newtonian (PN) formalism [25], where the system parameters, like masses and spins, appear in the coefficients of the expansion at different PN orders. While the GW signal from a BNS system has similar morphology to an analogous binary black hole (BBH) system during the early inspiral phase, strong gravitational effects in the late inspiral stage deform the NSs, leading to additional multipolar deformations that enhance GW emission and lead to an earlier merger [26]. The deformation of the NS is characterized by the electric-type, ℓ = 2, tidal deformability parameter, λ̄ = (2k_2/3) C^(−5), where C = M/R is the compactness of a NS with mass M and radius R, which therefore depends on the EoS, and k_2 is the relativistic Love number [27]. This tidal deformation modifies the binding energy of the binary, which in turn modifies the chirping rate, and therefore the waveform phase, with finite-size effects appearing first at 5PN order [26]. In a BNS system, the waveform phase will have modifications that depend on the tidal deformability of each binary component, but there exist certain EoS-insensitive relations that relate them. We present them here in brief, and refer the interested reader to YY17 for further details (see also [28] for a similar EoS-insensitive relation between the tidal deformabilities of the components).
While the internal composition of NSs is extremely complex, certain relations among some observables, like the moment of inertia, the quadrupole moment, and the tidal deformability, are insensitive to the details of the microphysics [29,30] (see Refs. [31,32] for reviews on universal relations). In the context of GW astrophysics, these imply certain binary Love relations, presented in YY17:

1. A relation between the symmetric and antisymmetric combinations of the individual tidal deformabilities, λ̄_s = (λ̄_1 + λ̄_2)/2 and λ̄_a = (λ̄_1 − λ̄_2)/2.

2. A relation between the waveform tidal parameters Λ̃ and δΛ̃ appearing at 5PN and 6PN order, respectively.

3. A relation between the coefficients of the Taylor expansion of the tidal deformability λ̄(M) about some mass m_0.
Here, we are concerned with the third item in this list, which we will refer to as the λ̄_0^(0)-λ̄_0^(k) relation. The idea is to express λ̄(M) as a Taylor expansion about a fiducial mass, M = m_0,

λ̄(M) = Σ_{k=0}^∞ (λ̄_0^(k) / k!) [(M − m_0)/m_0]^k ,   (1)

where each coefficient λ̄_0^(k) with k ≥ 1 can be written entirely in terms of λ̄_0^(0) in a way that is insensitive to the EoS (to a fractional accuracy of about 10%). The λ̄_0^(0)-λ̄_0^(k) relation can be evaluated for any tabulated EoS, but as a Fermi estimate, we first consider the simplified example of a polytropic EoS. For such an EoS, the pressure (p) and energy density (ε) are related via p ∝ ε^(1+1/n), where n is the polytropic index. To leading order in compactness, the tidal deformability is then given by [30] λ̄ ∝ M^(−10/(3−n)). The Taylor coefficients then satisfy

λ̄_0^(k) = G_{n,k} λ̄_0^(0) ,  with  G_{n,k} = Π_{j=0}^{k−1} [−10/(3−n) − j] .   (2)

Thus, the binary Love relations ensure that λ̄_0^(k) ∝ λ̄_0^(0), with the relation only dependent on n.
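The proportionality between the Taylor coefficients and λ̄_0^(0) under the leading-order polytropic scaling λ̄ ∝ M^(−10/(3−n)) can be checked numerically. A sketch (the falling-factorial product here is what differentiating the power law implies; numbers are illustrative):

```python
# For a polytrope, lam(M) = lam0 * (M/m0)**p with p = -10/(3 - n).
# Check that the first Taylor coefficient m0 * dlam/dM at m0 equals
# G(n, 1) * lam0, i.e. the coefficients are proportional to lam0.

def G(n, k):
    """Falling-factorial product from differentiating M**p, p = -10/(3-n)."""
    p = -10.0 / (3.0 - n)
    g = 1.0
    for j in range(k):
        g *= (p - j)
    return g

def lam(M, lam0, m0, n):
    p = -10.0 / (3.0 - n)
    return lam0 * (M / m0) ** p

n, m0, lam0, h = 1.0, 1.4, 200.0, 1e-6
deriv = (lam(m0 + h, lam0, m0, n) - lam(m0 - h, lam0, m0, n)) / (2 * h)
coeff1 = m0 * deriv        # numerical first Taylor coefficient
# for n = 1, p = -5, so coeff1 is close to G(1, 1) * lam0 = -5 * 200
```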
In the relativistic case and for more realistic EoSs, the tidal deformability is obtained numerically, and the λ̄_0^(0)-λ̄_0^(k) relation is instead given by a fit of the form

λ̄_0^(k) = λ̄_0^(0) G_{n̄,k} Σ_i a_{i,k} x^i ,   (3)

where x = (λ̄_0^(0))^(1/5), a_{i,k} are numerical coefficients, and G_{n̄,k} is defined in Eq. (2). Here, we will only consider an expansion to k = 3. This is sufficient to accurately represent λ̄(M) in the range M ∈ (1.2, 1.5) M_⊙ for m_0 = 1.4 M_⊙, which we will choose for the rest of this work. The degree of universality deteriorates as k increases, but the overall sum is still EoS insensitive to about 10% for the mass range and m_0 mentioned.
We re-derive the fitting coefficients a_{i,k} for a set of EoSs that are consistent with recent LIGO/Virgo and NICER observations (and with the observation of ∼2 M_⊙ neutron stars [33-35]). For simplicity, we here consider the piecewise polytropic representation of the EoS [36], restricting attention only to those that have support in the 68% confidence region in terms of the masses and radii reported by LIGO/Virgo in Ref. [16], and by NICER in Ref. [17]. The upper and middle panels of Fig. 1 show the mass-radius curves and the λ̄_0^(0)-λ̄_0^(k) relations for this set of EoSs. The coefficient λ̄_0^(0) is still a free parameter that is to be constrained by observational data. We will here use this fit to parameterize the tidal deformability and perform Bayesian parameter estimation on GW170817 data to measure the constant λ̄_0^(0).

III. MEASUREMENT OF λ̄_0^(0) FROM GW170817

The first step in estimating the Hubble parameter with the binary Love relations is to estimate λ̄_0^(0) with a controlled event. We first provide a brief review of Bayesian inference. The aim of Bayesian inference is to obtain a posterior probability density for the parameters describing the signal, Θ, given the GW data, d_GW,

p(Θ | d_GW) = p(d_GW | Θ) p(Θ) / p(d_GW) .   (4)

Here, p(d_GW | Θ) is the likelihood of the parameters and p(Θ) is the prior distribution. Gravitational waves from compact binary coalescences of BBHs are described by 15 parameters. These contain the intrinsic parameters, like the masses (m_{1,2}) and spin parameter 3-vectors (a_{1,2}), and the extrinsic parameters, like the luminosity distance, D_L, the inclination angle, ι, and so on (see, for example, Refs. [37-41]). In the case of BNSs, matter effects enter the waveform phase first at 5PN and then at 6PN order via two additional tidal deformability parameters, λ̄_{1,2}, as

Ψ_tidal(x) = [3/(128 η x^(5/2))] { −(39/2) Λ̃ x^5 + [ −(3115/64) Λ̃ + (6595/364) √(1 − 4η) δΛ̃ ] x^6 } ,   (5)

where x = [π(m_1 + m_2) f]^(2/3) is the PN expansion parameter, f is the GW frequency, and η = m_1 m_2 / (m_1 + m_2)² is the symmetric mass ratio. The parameters Λ̃ and δΛ̃ have the form

Λ̃ = f(η) λ̄_s + g(η) λ̄_a ,  δΛ̃ = δf(η) λ̄_s + δg(η) λ̄_a ,   (6)

where the exact expressions for {f, g, δf, δg} are given in Sec. 2.2 of YY17.
In our case, we are interested in measuring the tidal deformability using the parameterization described in Eqs. (1) and (3), but of course, all waveform parameters must be searched over when exploring the likelihood surface.
To obtain a measurement of λ̄_0^(0), we perform Bayesian parameter estimation on the publicly available data for GW170817, taken from the Gravitational-wave Open Science Center (GWOSC), https://www.gw-openscience.org/catalog/GWTC-1-confident/single/GW170817/. For the model, we use the IMRPhenomPv2_NRTidal waveform [42], which is the same as that used for parameter estimation in Ref. [43] and Ref. [44] (hereafter LV19); both references analyze the same duration of data with the same settings, but the latter uses data with a different calibration, and it is the latter data set, available on GWOSC, that we use here. In our analysis, however, the parameters λ̄_{1,2} are not independent; rather, they are modeled through the Taylor expansion in Eq. (1), with the EoS-insensitive λ̄_0^(0)-λ̄_0^(k) relations of Eq. (3). This then means that (Λ̃, δΛ̃) (or equivalently (λ̄_1, λ̄_2)) are no longer parameters of our model; rather, the model now depends on a single tidal deformability parameter, namely λ̄_0^(0). The model also depends on the detector-frame masses m_det,1,2 and the redshift z, since the Taylor expansion of Eq. (1) depends on the source-frame masses m_source,1,2. For the GW170817 event, however, the redshift is known to high accuracy due to the identification of the host galaxy, and so it is not a free parameter in this case.

The priors on the parameters of our model are chosen as follows. For the extrinsic parameters, like the distance, inclination, coalescence phase, sky position and so on, we pick uninformative priors as in LV19 (in LV19, the sky position was fixed to the location of the host galaxy). Following LV19, we sample on the detector-frame component masses, setting the prior distribution of each component mass to the marginalized posterior distribution obtained for the component masses in LV19. We also repeat the analysis putting a flat prior on the component masses as in LV19, and find similar results. For the spins, we use the low-spin prior from LV19, i.e., χ_i ≤ 0.05. For λ̄_0^(0), we use a flat prior in [0, 1000], which implies a prior on the individual tidal deformabilities through Eq. (1) for a fixed redshift. The implied priors on the individual tidal deformabilities are shown in Appendix A. This step is different from LV19, where a uniform prior was used on each of the individual tidal deformabilities, λ̄_{1,2}, independently. We fix the redshift of the source to the value of the host galaxy NGC 4993, z_NGC4993 = 0.0099, which was determined through EM observations reported in Ref. [45] and used in LV19. This allows us to obtain the source-frame component masses, m_source,1,2 = m_det,1,2 / (1 + z_NGC4993). The setup of the likelihood function and the sampling is done through the open-source GW parameter estimation library BILBY [46], using the adaptive nested sampler DYNESTY [47]. The resulting marginalized distribution for λ̄_0^(0) is shown in Fig. 2. The distributions of the other parameters are presented in Appendix B.
We find the value λ̄_0^(0) = 191 (+113/−134) at 90% confidence. In order to verify that the measurement is robust against the residual errors coming from the λ̄_0^(0)-λ̄_0^(k) relations, we repeat the analysis; the result is shown in Fig. 2 using the unfilled histogram. We find that the result is not significantly affected (∼2% difference in the median values).⁴ Our results are consistent with the measurement of λ̄(1.4 M_⊙) = 190 (+390/−120) obtained from a linear expansion of λ̄(m_source) m_source^5 about 1.4 M_⊙, reported in Ref. [16] following the approach in Refs. [12,13]. The difference in the upper limit is caused by differences in the priors implied on the tidal deformabilities. In Appendix A, we compare our prior with that in Ref. [16].
The first measurement of λ̄_0^(0) reported above is perhaps not extremely constraining, but one expects future events to allow for more accurate measurements. Future observing runs of LIGO/Virgo/KAGRA, with coincident operation of next-generation telescope facilities like the Rubin Observatory [48], will yield many more multi-messenger BNS events. We can think of λ̄_0^(0) as a universal constant shared by all such events: through Eq. (1), we can stack data from multiple observations to yield an improved measurement of λ̄_0^(0). This implies that eventually, we will be able to "fix" the value of λ̄_0^(0), or marginalize over the small measurement (and eventually systematic) uncertainties.
IV. PROSPECTS FOR MEASURING H_0

Once λ̄_0^(0) has been estimated by a set of controlled observations (like the GW170817 event), what else can be done with additional future observations of BNS events? Using the same parameterization in Eq. (1), we see that the tidal deformability terms of the waveform phase are now only a function of m_source, or equivalently of the detector-frame masses and the redshift. Since the detector-frame masses can be separately and tightly estimated from lower-PN-order terms, the tidal deformability terms now yield information exclusively on the redshift [10]. Therefore, any BNS signal, irrespective of being well localized or having an associated counterpart, would result in a direct measurement of the redshift from the tidal deformability, and this can be used to infer the Hubble constant, H_0.
Let us then consider the prospects of measuring H_0 with future observations, beginning with a best-case scenario, in which we assume λ̄_0^(0) has been strongly constrained. We re-write Eq. (1) as

λ̄(m_det, D_L; H_0, Ω) = Σ_{k=0}^∞ (λ̄_0^(k) / k!) { [ m_det / (1 + z(D_L; H_0, Ω)) − m_0 ] / m_0 }^k ,   (7)

where Ω = {Ω_i} are the cosmological parameters apart from H_0, which we here, for simplicity, fix to a flat ΛCDM model with an assumed "true" value of H_0 = 70 km s⁻¹/Mpc and matter-to-critical-density ratio Ω_m0 = 0.3. Hence, we can write z = z(D_L, H_0) in Eq. (7). The measurement of D_L comes from the waveform amplitude, and thus the redshift, z, comes from Eq. (7) alone. This enables either direct sampling of H_0, or we can infer H_0 in a post-processing step after we have sampled in z.
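The map z = z(D_L, H_0) used above can be sketched numerically for a flat ΛCDM cosmology. The following is an illustrative implementation using simple trapezoidal integration and bisection (production analyses would use, e.g., astropy.cosmology):

```python
import math

C_KM_S = 299792.458   # speed of light [km/s]

def d_l(z, H0, om=0.3, steps=2000):
    """Luminosity distance [Mpc] in flat LambdaCDM via the trapezoidal rule."""
    if z == 0.0:
        return 0.0
    dz = z / steps
    integrand = lambda zz: 1.0 / math.sqrt(om * (1 + zz) ** 3 + (1 - om))
    comoving = dz * (0.5 * (integrand(0.0) + integrand(z))
                     + sum(integrand(i * dz) for i in range(1, steps)))
    return (1 + z) * (C_KM_S / H0) * comoving

def z_of_dl(dl, H0, zmax=10.0, tol=1e-10):
    """Invert d_l(z) for z by bisection (d_l is monotonic in z)."""
    lo, hi = 0.0, zmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d_l(mid, H0) < dl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round-trip check: recover z = 0.05 from its distance at H0 = 70
z_rec = z_of_dl(d_l(0.05, 70.0), 70.0)
```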
Even though the logic behind this idea is straightforward and robust, its implementation for a single event is hindered by measurement uncertainties and covariances. In particular, the distance-inclination degeneracy in the amplitude implies that distance measurements peak at lower-than-true values for face-on sources, and at greater-than-true values for edge-on sources (see Refs. [3,49], for example). Thus, H_0 measurements from individual events may be multi-modal and, in general, peaked away from the true value. However, since all observations should depend on the same H_0 (assuming this quantity is truly a constant), one should be able to stack multiple events to obtain an accurate measurement of the Hubble constant.
We investigate how this stacking measurement could take place in the future. We simulate synthetic GW signals in three different networks of detectors: ground-based observatories in the O5 era (the HLVKI detector network), the Voyager era (HLI upgraded to Voyager + VK), and the Cosmic Explorer era (one CE instrument at H). For simplicity, we consider a set of BNS signals produced by systems with a fixed source-frame chirp mass, M_c = 1.17 M_⊙, and mass ratio, q = 0.9, but at different distances, sky locations and inclination angles. We assume the true EoS is such that it would lead to a tidal deformability of λ̄(1.4 M_⊙) = λ̄_0^(0). The detectability of our synthetic catalog of signals is estimated as follows. We first create an ensemble (several thousands) of such fiducial systems, distributed uniformly in sky location, orientation, and volume (∝ D_L²). We then say that a fiducial system can be detected if the network signal-to-noise ratio (SNR) is above 8, using the appropriate noise spectral densities for the detectors in each era. For the O5 era, we use the noise curve from Ref. [50]. For the Voyager and CE eras, we use noise curves from Ref. [51].

Fig. 3 (caption, as recovered): For illustration, we choose N_det = 10, 50, 70 for the top, middle, and bottom panels, corresponding to the O5, Voyager, and CE eras, respectively. Right: The normalized number count of distances and inclinations of the systems that would be detectable in a specific observing era is shown by the contour map. This was obtained by simulating an ensemble (several thousands) of binaries with (m_1, m_2) ∼ (1.42, 1.27) M_⊙, distributed uniformly in sky, volume, and inclination, and requiring that each produce network SNR ≥ 8. The grid of overlaid points represents the true distance/inclination of the synthetic systems chosen for performing Bayesian parameter estimation. We observe that the H_0 measurement converges irrespective of the shape of the grid, as long as it covers the distances and inclinations that would be detected.
(Some of these noise curves are also available as package data in BILBY.) The distributions of detected distances and inclination angles are shown in the right panels of Fig. 3 for each era. We can see that the detected distance distribution is peaked at a certain distance depending on the era, due to the combination of the prior distribution and the detector sensitivity. We also see the preference towards detecting face-on sources, as opposed to edge-on sources at the same distance. However, the marginalized distribution of detected inclination angles is universal (see Fig. 4 of Ref. [52]); we have verified this for each of our eras. We focus on distance and inclination because of the distance-inclination degeneracy during parameter recovery. The detectability, in general, also depends on the mass distribution of BNSs in the universe, which we ignore for this work; this dependence is weak due to the narrow distribution of BNS masses [18,49]. In Appendix C, we show the effect on distance recovery based on the true inclination angle of the source.
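The selection effect described above can be sketched with a toy Monte Carlo. This is illustrative only: we use a single-detector amplitude model ρ ∝ √F(ι)/D_L with the polarization-averaged factor F(ι) = [(1 + cos²ι)/2]² + cos²ι, not the full network SNR computed in the text:

```python
import math, random

random.seed(0)

def amp_factor(cos_i):
    """Inclination-dependent amplitude factor (polarization-averaged)."""
    return math.sqrt(((1 + cos_i ** 2) / 2) ** 2 + cos_i ** 2)

def draw_binary(d_max=500.0):
    """Uniform in volume (p(D) ~ D^2) and uniform in cos(inclination)."""
    d = d_max * random.random() ** (1.0 / 3.0)
    cos_i = 2.0 * random.random() - 1.0
    return d, cos_i

def detected(d, cos_i, snr_thresh=8.0, d_horizon=200.0):
    """SNR ~ sqrt(F)/D; a face-on source at d_horizon has SNR = snr_thresh."""
    snr = snr_thresh * (d_horizon / d) * amp_factor(cos_i) / amp_factor(1.0)
    return snr >= snr_thresh

sample = [draw_binary() for _ in range(20000)]
kept = [(d, c) for d, c in sample if detected(d, c)]
mean_abs_cos_all = sum(abs(c) for _, c in sample) / len(sample)
mean_abs_cos_det = sum(abs(c) for _, c in kept) / len(kept)
# detected events prefer face-on orientations:
# mean_abs_cos_det exceeds mean_abs_cos_all
```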
During an observing run, we expect a sample of N_det detected BNS events whose true values of distance and inclination follow this distribution. In reality, the value of N_det will depend on the volumetric rate of BNS mergers, and also on the redshift evolution of the rate. Here, we simplify the problem by considering the representative cases N_det = 10, 10², and 10³, corresponding to the expected numbers for the O5, Voyager, and CE detector eras, i.e., N_det events are drawn from the distributions shown in Fig. 3. Since the computational cost of performing Bayesian parameter estimation on all N_det systems is high, we instead simulate the fiducial source at certain representative points on a grid of distances and inclination angles. These are represented by the solid points overlaid on the heatmap in Fig. 3. We then count each representative run based on the relative probability of the heatmap, p_det, normalizing the total count to N_det. Thus, N_det is approximated as

N_det ≈ Σ_α N_α ,   (8)

where the summation index α runs over all the grid points, and N_α ∝ p_det(D_L^α, ι^α). In Appendix D, we consider alternative physically motivated distance distributions from different rate models, and find that the choice does not affect the answer.
The combined H_0 posterior is built by stacking the individual events. For a single event,

p(H_0 | d_i) ∝ ∫ dΘ_i p(d_i | Θ_i) p(Θ_i) δ( z_i − z(D_L^i; H_0) ) ,   (9)

where Θ_i = {m_det,1,2, a_1,2, ..., D_L, z, λ̄_0^(0)}_i is the set of GW parameters for an individual event, now parameterized by λ̄_0^(0) and z in place of λ̄_{1,2}. The likelihood function of a single event is p(d_i | Θ_i), and the prior distributions for the individual parameters are p(Θ_i). The constraint between D_L and z is represented by the δ-function term; with this constraint, marginalizing over all parameters Θ_i leads to a posterior on H_0 from an individual event. The stacked posterior is

p(H_0 | {d_i}) ∝ p(H_0) Π_{i=1}^{N_det} L_i(H_0) .   (10)

Although an explicit prior on H_0 is not applied in Eq. (9), any reasonable prior on the distance and redshift for the individual events results in an implied prior on H_0,

π(H_0) = ∫ dD_L dz p(D_L) p(z) δ( z − z(D_L; H_0) ) .   (11)

By "dividing out" this prior in Eq. (9), we get the individual semi-marginalized likelihoods, L_i(H_0) = p(H_0 | d_i) / π(H_0); these are then multiplied together to obtain the joint H_0 likelihood, and the stacked posterior in Eq. (10). This dividing-out procedure guarantees that, in the absence of any signal, the likelihood is flat (see Appendix E). Equation (10) can be simplified through our counting method to obtain

p(H_0 | {d}) ∝ p(H_0) Π_α [ L_α(H_0) ]^{N_α} ,   (12)

where α = 1, 2, ... runs over the representative grid points in distance and inclination, and each likelihood is counted based on its relative probability of detection.
In practice, the implementation of Eq. (12) is simpler, so we adopt it henceforth. With all of this at hand, let us now estimate the accuracy to which the Hubble parameter could be inferred in the future. For each of the representative grid points, we inject a non-spinning waveform corresponding to the fiducial system mentioned above into a noise realization based on the observing scenario. We then perform Bayesian parameter estimation using the PARALLEL_BILBY inference library [53], with the same waveform model, IMRPhenomPv2_NRTidal, for both injection and recovery. We sample in the detector-frame chirp mass and mass ratio, while ignoring spins, since they have a negligible effect in our analysis. We also keep the sky location fixed to the injected value for simplicity; we have checked that setting it free does not impact the result. We also fix the coalescence time to the injected value, motivated by the fact that, in practice, GW compact binary search pipelines report the coalescence time. We use a uniform-in-comoving-volume prior for the luminosity distance, and a uniform-in-cosine prior for the inclination angle. We also sample in the redshift, z, with a uniform prior, convert the detector-frame quantities to the source frame, and use the λ̄_0^(0)-λ̄_0^(k) relations to obtain the tidal deformabilities. More details about the PARALLEL_BILBY settings are given in Appendix F.
Given the analysis described above at each grid point, we then obtain the individual H_0 posteriors as a post-processing step. First, we divide the individual likelihoods by the prior implied by the combination of the D_L and z priors, as in Eq. (11), to obtain the likelihoods L_α in Eq. (12). We have repeated this analysis by sampling directly on H_0 with a flat prior on this quantity, in which case the additional dividing-out step is not required; in both cases, we obtain the same results. With the individual likelihoods at hand, we then multiply them together based on their relative detection counts, normalizing to N_det events. We then use an overall flat prior on the stacked Hubble parameter to get the stacked H_0 posterior.
The results are shown in Fig. 3, where the upper, middle, and lower panels represent the O5, Voyager, and CE eras, respectively. The grid points in the right panel mark the true D_L and ι for which we obtain representative PE runs; we count each of them based on the relative values of the heatmap at the grid points. We use a generic choice for the representative grid points and find that our final stacked posteriors are insensitive to the choice made (observe the difference between the middle panel and the other two), since we count the relative occurrence based on the detectability of that particular injection. (Repeating the same analysis with relative counts obtained by integrating over a small patch around each grid point does not change our conclusions.) The individual H_0 likelihoods are shown as dashed curves in the left panels. The likelihood after combining N_det events is shown with a thick, solid curve. Although the individual likelihoods may peak away from the true value (shown by the vertical line), they still have support at the true value. Combining the results via stacking leads to a stacked posterior that peaks at the injected value. For illustration, the middle and bottom panels of Fig. 3 use N_det = 50 and 70 for the Voyager and CE eras, respectively. To obtain the uncertainty in the measurement of H_0 with N_det = 10^2 and 10^3 events, we use a ∼1/√N_det scaling and find ∆H_0/H_0 ∼ 10% for Voyager and ∼2% for CE, respectively.
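The ∼1/√N_det projection used here is a one-line computation; the sketch below (illustrative numbers, not the paper's exact values) makes the scaling explicit:

```python
import math

def project_fractional_error(delta_ref, n_ref, n_target):
    # Statistical-only projection: the fractional error on H0 shrinks
    # as 1/sqrt(N_det) when independent events are stacked.
    return delta_ref * math.sqrt(n_ref / n_target)
```

For example, a 20% fractional error from a 25-event stack projects to 10% at 100 events under this scaling.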
V. ROBUSTNESS OF FORECASTS
In the previous section, we made several assumptions to arrive at a forecast of how well H_0 could be measured in the future through the stacking of multiple events. In this section, we investigate the robustness of these forecasts by relaxing some of our assumptions. One of these assumptions was that λ̄_0^(0) had been measured perfectly, i.e., we used a fixed delta-function prior for λ̄_0^(0) peaked at the injected value. Even though this assumption becomes increasingly justified as more BNS mergers with counterparts are discovered (since each individual event will constrain the value of λ̄_0^(0) more and more tightly), the posterior on λ̄_0^(0) will never be a delta function centered at the true value, and this will degrade our measurement of H_0. Another assumption was that the binary Love relations are exactly EoS independent. Although these relations are indeed EoS insensitive, they are not exactly universal, and this could lead to a systematic bias in the estimates of H_0. We investigate each of these issues in turn.
A. Effect of statistical uncertainty in λ̄_0^(0)

Equation (7) tells us that statistical measurement uncertainty in λ̄_0^(0) directly affects the measurement of the redshift, z. To estimate this effect, we perform a Fisher analysis similar to that of Ref. [10]. For the signal model, we use a restricted post-Newtonian (PN) waveform, where we include terms up to 3.5 PN in the point-particle contribution [54] and up to 7.5 PN in the tidal contribution to the phase [11]. We parametrize our waveform as

h̃(f) = A e^{iΨ(f)},    (13)

where t_c and φ_c (entering the phase Ψ) are the coalescence time and phase, and A is the amplitude of the waveform (similar to Ref. [10]). We parametrize the tidal piece of the waveform using the parameterization of Eq. (1). Using this signal model and a Gaussian prior on λ̄_0^(0), we obtain the Fisher estimates shown in the top panel of Fig. 4. [Fig. 4, condensed caption: Top panel: fractional Fisher uncertainty in the redshift for priors of width δλ̄_0^(0) = 0, 10, 30; the fractional error follows the trends of Ref. [10] and does not change significantly with the prior width at large distances; the shading marks the SNR ≳ 30 region where the Fisher approximation is valid. Bottom panel: fractional statistical uncertainty in the measurements of z, D_L, and H_0 from full Bayesian parameter estimation with a Gaussian prior on λ̄_0^(0).] We observe a similar trend in the fractional error in the recovered redshift as Ref. [10] (the δλ̄_0^(0) = 0 case). When there is a measurement uncertainty, represented by the δλ̄_0^(0) = 10 and 30 cases, we find that it does not affect the measurement uncertainty of the redshift at larger distances. We note, however, that the Fisher approximation holds only in the high signal-to-noise limit; in the top panel of Fig. 4 we take SNR ≳ 30, shown by the shading, as the region where the Fisher approximation is valid.
To verify these Fisher estimates, we repeat some of the representative parameter-estimation runs in the CE era using a Gaussian prior on λ̄_0^(0); the results are shown in the bottom panel of Fig. 4, where ∆z, ∆D_L, and ∆H_0 denote 90% credible intervals. We observe that the redshift-error trends from the full parameter-estimation runs agree with the trends of the Fisher estimates. In particular, the fractional error in H_0 does not change significantly, even when we include a Gaussian prior on λ̄_0^(0). This analysis also agrees with the Fisher uncertainties reported by Messenger and Read [10]. Given these results, we do not expect a prior statistical uncertainty in the measurement of λ̄_0^(0) to significantly affect the final H_0 measurement, especially since most detections will be at larger distances in the third-generation detector era.
B. Effect of a systematic bias in λ̄_0^(0)

Following Eq. (7), a systematic bias in the measurement of λ̄_0^(0) will lead to a biased measurement of z, and hence a bias in the inferred value of H_0. We can estimate this in the following way. Assume first that there is no bias in the measurement of quantities that depend directly on the signal, such as the masses and tidal deformabilities. If so, any systematic bias in H_0 will be solely due to an induced bias in z caused by the bias in λ̄_0^(0); a covariance then arises between δH_0 and δλ̄_0^(0), which is quantified in Eq. (14).

In order to verify this estimate, we repeat some of the representative parameter-estimation studies, but this time with models that have either a delta-function or a Gaussian prior, both peaked at a shifted location of λ̄_0^(0) = 230. With these priors, the redshift measurement shifts from its true value. An example is shown in the top panel of Fig. 5, where we observe that the recovered redshift is systematically lower when we use the δ(λ̄_0^(0) − 230) prior. The distance measurement is unchanged, since its information comes from the amplitude, which is not affected by a biased λ̄_0^(0) measurement. The shift in z shows up as a bias in H_0. We perform parameter estimation for a few more injections at different distances with this shifted prior and simply use the difference in means as a benchmark for the systematic error. In the bottom panel of Fig. 5, we denote this shift in the means using the dotted line with filled-circle markers. We find that it roughly follows the analytic trend from Eq. (14), shown by the shaded region.

[Fig. 5, condensed caption: Top panel: no significant change in the distance recovery, but a systematic shift in the redshift recovery, which impacts the H_0 measurement. Bottom panel: the difference in means of the H_0 measurements between the true and shifted cases (dotted line, filled circles); half of the 90% credible interval for H_0 with a Gaussian λ̄_0^(0) prior of width 30 centered at the true value of 200 (dotted line, plus markers) and centered at 230 (dotted line, inverted triangles) — the credible region is not affected by a systematic shift in λ̄_0^(0). Solid lines show the statistical improvement (∝ 1/√N_det) with increasing detections; for N_det ∼ 30 it hits the systematic-error boundary (blue). The shading shows the analytical estimate from Eq. (14), which roughly agrees with the parameter-estimation runs.]
In reality, we expect the total uncertainty to be a combination of statistical and systematic errors; as more events are stacked, the statistical error will decrease (as 1/√N_det or faster), while the systematic error will not. The latter therefore acts as an uncertainty floor that we must contend with when extracting H_0. One can then roughly determine how many detections would be needed to reach this floor. In Fig. 5, we take half of the 90% confidence interval of the H_0 measurement from our runs (shown by the dotted lines with inverted-triangle and plus markers) as the statistical uncertainty, and consider an improvement by 1/√N_det for N_det = 10 and 30, represented by the solid lines in the figure. We see that the statistical error will become smaller than the systematic error once we have stacked N_det ≳ 30 events. Note, however, that this is only an illustrative example, in which there is a systematic bias of 30 from the true value of 200 when measuring λ̄_0^(0).
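The interplay between the shrinking statistical error and the fixed systematic floor can be sketched with two small helpers (our own; the quadrature combination of the two error types is an assumption for illustration):

```python
import math

def total_error(sigma_stat_single, sigma_sys, n_det):
    # Statistical part shrinks as 1/sqrt(N_det); the systematic part is a
    # fixed floor. Combine in quadrature (an illustrative assumption).
    stat = sigma_stat_single / math.sqrt(n_det)
    return math.sqrt(stat ** 2 + sigma_sys ** 2)

def crossover_n_det(sigma_stat_single, sigma_sys):
    # Smallest N_det at which the statistical error drops below the floor.
    return math.ceil((sigma_stat_single / sigma_sys) ** 2)
```

For a single-event statistical error of 10 (in the same units as a systematic floor of 2), the crossover happens at 25 stacked events, beyond which the systematic dominates.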
C. Effect of uncertainty in the binary Love relations
We end this section with an estimate of the systematic error in the inference of H_0 induced by our assumption that the binary Love relations are exact and EoS independent. To arrive at a rough estimate, we consider how a bias in λ̄_0^(k), given a value of λ̄_0^(0), would affect our measurements of H_0. In order to separate this effect from those discussed in the previous subsections, we assume now that λ̄_0^(0) has been measured perfectly. With this in mind, the bias in the λ̄_0^(k) for k ≥ 1 inside the summation in Eq. (14) leads to a bias in the inferred redshift, and hence a bias in H_0. This can be estimated by taking a variation of the individual terms; the resulting expression can be solved for δH_0/δλ̄_0^(k), with the results shown in Fig. 6. Here, we choose the residuals for the MPA1 EoS as the values for δλ̄_0^(k). Observe that the systematic error introduced in the extraction of H_0 is much smaller than the statistical error. From this, we conclude that only after more than a certain number of events are stacked (∼30 in this case) will this systematic error be of any importance. Observe also that the systematic error due to the binary Love relations is smaller for the higher-order terms in the Taylor expansion.
VI. CONCLUSION
This work demonstrates yet another application of universal relations for NSs. Following the proposition by Messenger and Read [10], previous attempts at measuring the distance-redshift relation solely using GWs have relied on knowledge of the specific NS EoS. 8 The expansion of the tidal deformability in terms of the source mass is particularly interesting, since it breaks the degeneracy between the GW frequency and the redshift. However, in the absence of a priori EoS information, all of the expansion coefficients are free parameters. The λ̄ relations greatly constrain this freedom: once these relations are employed, knowing a single coefficient determines the rest. This can be particularly useful for setting physically motivated priors on the tidal deformability. In GW astrophysics, previous literature has considered putting flat priors on λ̄_{1,2} or Λ̃. Another, physically motivated option is an uninformative prior on the free universal expansion coefficient, λ̄_0^(0), which in turn restricts the priors on λ̄_{1,2} or Λ̃. In this work, we apply this technique to GW170817 strain data and obtain a measurement of λ̄(1.4 M_⊙) that is consistent with previous measurements by the LIGO/Virgo collaboration.
The advantage of using the λ̄ relations is that the tidal deformability is parameterized by only two quantities: λ̄_0^(0) and m_source (or, equivalently, z). Future multi-messenger observations of GWs from BNSs with simultaneous identification of the redshift would lead to measurements of λ̄_0^(0) that can be combined to give a constrained measurement of this fundamental quantity. Such observations are well motivated by forecasting studies of different GW observing scenarios [50], combined with the development of current and next-generation synoptic surveys and cyberinfrastructure. 9 With a constraint on λ̄_0^(0), the use case can be flipped to measure the redshift of BNS signals from the tidal deformability measurements. This is advantageous because the method does not rely on any prompt follow-up operations, and is also free from the selection effects of host-galaxy identification and galaxy-catalog incompleteness that affect the methods used in the literature to date.
We demonstrate this technique of measuring the Hubble constant, H_0, using a synthetic population of detected NS binaries, and we analyze the impact of statistical and systematic errors in this prescription on the measurement of H_0. Aside from GW observations constraining the allowed NS EoSs, other missions are also placing stringent constraints on the masses and radii of NSs. Recently, the NICER team reported the discovery of X-ray pulses from the massive millisecond pulsars PSR J0740+6620 and PSR J1614-2230 [58]. The data from PSR J0740+6620 have been used to measure the mass-radius relation of the NS [59-61]. Such measurements are expected to rule out representative EoSs from the literature that are inconsistent with observables, further reducing the uncertainty in the λ̄ relations.

ACKNOWLEDGMENTS

The authors would like to thank Jocelyn Read for reviewing the document and providing helpful feedback. This document is given the LIGO DCC number P2100195. 11 The authors would also like to thank the anonymous referee for helpful comments.

Appendix A: Implied prior on the tidal deformabilities

In Sec. III, we did not set an explicit prior on the individual tidal deformabilities, λ̄_{1,2}, independently. However, the uniform prior on the λ̄_0^(0) parameter, along with the prior on the individual component masses, implies a prior on the individual tidal deformabilities, λ̄_{1,2}. We show this in Fig. 7. It should be noted that there are correlations between the two tidal deformabilities based on the priors on the masses. We show this in Fig. 8. In the left panel therein, we see the correlation between λ̄_1 and λ̄_2. Here, we use the same mass priors as in Sec. III. The correlation is motivated by the fact that, in the limit of equal-mass components, we expect the tidal deformabilities to be equal. We also note that the prior implied in this work differs from that used in Ref.
[16], where instead the symmetric tidal deformability, λ̄_s = (λ̄_1 + λ̄_2)/2, was sampled uniformly, and then λ̄_a = (λ̄_1 − λ̄_2)/2 was obtained using another universal relation, λ̄_a = λ̄_a(λ̄_s, q), reported in YY17. We reproduce the implied prior obtained using the latter technique in the right panel of Fig. 8. We attribute the differences in the upper-limit value for λ̄_0^(0) between this work and Ref. [16], mentioned in Sec. III, to the difference in priors.
Appendix B: Parameter estimation from GW170817 data
In Sec. II, we showed the marginalized posterior on the universal quantity, λ̄_0^(0). [Fig. 8, condensed caption: Left panel: the prior implied on λ̄_{1,2} by the λ̄ relations of Sec. II; observe the correlation between λ̄_1 and λ̄_2, which is expected due to the prior information about the masses when imposing a common EoS. Right panel: the prior implied on λ̄_{1,2} when using the universal relation between the symmetric and antisymmetric tidal deformabilities, λ̄_s−λ̄_a, from YY17; this was used in the EoS-insensitive results reported previously in Ref. [16].] Here, we show the
marginalized distributions (corner plot) of some of the other parameters in Fig. 9. The priors on the masses are the marginalized posteriors of the detector-frame values reported in GWTC-1. For the luminosity distance, we use a prior uniform in comoving volume up to 75 Mpc. For the spins, we use the low-spin prior from GWTC-1. The bottom-right block is the same as Fig. 2 in the main body of the text. For the BILBY configuration, we use the nested-sampling algorithm DYNESTY, with the number of live points nlive = 1500 and the number of autocorrelation lengths to reject before accepting a new point nact = 10. These settings are motivated by those used in the validation of BILBY against GWTC-1 events [63].
Appendix C: Trends in H0 recovery with inclination
In this section, we illustrate how the inclination angle affects the distance recovery, which in turn impacts the posterior on H_0. In Fig. 10, we show the distance, redshift, and H_0 recovery for injections at a fixed distance but varying inclination angles. We observe that the recovered distance has support for larger values as the true inclination angle increases. This is expected, since edge-on sources have a lower SNR compared to face-on sources, and are therefore degenerate with a larger recovered distance. On the other hand, since the redshift is recovered from the phase of the waveform, it is not affected. This implies that the H_0 recovery is affected in the opposite sense, having support for higher-than-true values for face-on sources and vice versa. However, in all cases, there is support for the true value of 70 km s^−1/Mpc.
Appendix D: Considering alternative redshift priors
In this appendix, we show the effect of using alternative choices for the distance/redshift distribution compared to the uniform-in-volume (∝ D_L^2) distribution used for Fig. 3 in the main body of the text. We consider the Cosmic Explorer example from Sec. IV and two cases: (1) a redshift distribution such that the merger rate is uniform in comoving volume, and (2) one that follows the cosmic star-formation history [following Eq. (15) of Ref. [64]]. We reweight the luminosity distances of the recovered binary systems from Sec. IV using these two priors, as shown in Fig. 11. For the Bayesian parameter estimation, we consider the same representative distance/inclination grid points as in the Cosmic Explorer panel of Fig. 3, but count them based on the reweighted heatmap in Fig. 11. We find that the combined H_0 measurement is not affected by the choice of prior. We also note that, while the evolution of the BBH merger rate has been measured [65], the BNS rate has not been strongly constrained, owing to the lack of BNS observations and their relatively low distances compared to BBHs. Hence, we feel that the choice made here is justified; the analysis can be redone as further constraints become available.

Appendix E: Obtaining the H0 likelihood

One can either sample in the redshift and luminosity distance or sample directly in H_0. In the former case, the prior on z and the luminosity distance, D_L, implies a prior on H_0, while in the latter, we directly put a prior on H_0. In either case, when combining multiple observations, we need to multiply the likelihoods, dividing out any imposed or implied prior. In practice, putting a uniform prior when sampling (ensuring that the posterior does not rail against the prior boundaries) yields the likelihood up to a constant factor. We illustrate this using a low signal-to-noise-ratio (SNR ∼ 1) event. We put a prior on D_L that is uniform in comoving volume up to 5 Gpc, and a uniform prior on redshift ∈ [0, 0.5]. The implied prior on H_0 due to this prior choice is shown in Fig. 12 by the solid, unfilled histogram.
Due to the low SNR of the injection in this case, we expect the posterior to be similar to the prior on H_0. To obtain the likelihood, we need to divide out the prior, or equivalently reweight the samples such that the new prior on H_0 is flat. In practice, this is done by binning the posterior samples and weighting them by the inverse count of the prior distribution.
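A sketch of this binning-and-reweighting step (the function and variable names are ours, not from BILBY):

```python
import numpy as np

def flat_prior_weights(samples, prior_samples, bins=50):
    # Bin the parameter range, count how often the implied prior lands in
    # each bin, and weight each posterior sample by the inverse count, so
    # that the effective prior on the parameter becomes flat.
    edges = np.histogram_bin_edges(np.concatenate([samples, prior_samples]),
                                   bins=bins)
    prior_counts, _ = np.histogram(prior_samples, bins=edges)
    idx = np.clip(np.digitize(samples, edges) - 1, 0, len(prior_counts) - 1)
    counts = prior_counts[idx].astype(float)
    w = np.where(counts > 0, 1.0 / np.maximum(counts, 1.0), 0.0)
    return w / w.sum()  # normalized importance weights
```

Samples sitting where the implied prior is dense receive small weights, and samples in sparse prior regions receive large ones, which is exactly the "divide out the prior" operation described above.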
The new reweighted posterior is shown by the hatched histogram. As expected, this measurement is uninformative; i.e., the reweighted posterior, which in this case is the likelihood, is flat.
Appendix F: PARALLEL_BILBY configuration for BNS simulations
For Sec. IV, we make use of the PARALLEL_BILBY framework [53], an extension of BILBY that scales out the analysis to an entire high-performance cluster using the Message Passing Interface (MPI). 12 We use a fiducial binary with a source-frame chirp mass of M_c = 1.17 M_⊙ and mass ratio q = 0.9 for all of our synthetic injections. The signal duration is 128 s; while BNS signals will last much longer during the CE era, the 5PN tidal terms only become pronounced close to merger. The injected redshifted chirp mass is obtained as (1 + z_inj)M_c^inj, where z_inj is determined from the injected luminosity distance assuming the true flat-ΛCDM (H_0 = 70 km s^−1/Mpc, Ω_m0 = 0.3) cosmology. For sampling, we use a prior that is uniform in detector-frame chirp mass between (1 + z_inj)M_c^inj ± 0.1 M_⊙, and a prior uniform in mass ratio ∈ [0.65, 1]. We ignore spins (set them to zero) and use delta-function priors at zero on the spins when sampling. We also fix the sky location of the source. The prior for distance is uniform in comoving volume up to twice the injected distance. We use a stationary Gaussian noise realization for each detector era. For the O5 and CE results in Fig. 3, we impose a uniform prior on the redshift and obtain the individual likelihoods using the technique described in Appendix E. For the Voyager results, we sample directly in H_0 using a uniform prior ∈ [1, 300] km s^−1/Mpc.
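The conversion from injected luminosity distance to z_inj under the fiducial flat-ΛCDM cosmology can be sketched by tabulating D_L(z) with a trapezoid rule and inverting by interpolation (these are our own helper functions, not part of PARALLEL_BILBY; radiation and curvature are neglected):

```python
import numpy as np

C_KM_S = 299792.458
H0, OM = 70.0, 0.3  # fiducial flat-LambdaCDM used for the injections

# Tabulate D_L(z) on a fine grid, then invert by interpolation.
_z = np.linspace(0.0, 10.0, 20001)
_E = np.sqrt(OM * (1.0 + _z) ** 3 + (1.0 - OM))
_dc = np.concatenate(
    [[0.0], np.cumsum(0.5 * (1.0 / _E[1:] + 1.0 / _E[:-1]) * np.diff(_z))])
_dl = (1.0 + _z) * (C_KM_S / H0) * _dc  # luminosity distance in Mpc

def luminosity_distance_mpc(z):
    return float(np.interp(z, _z, _dl))

def redshift_from_distance(dl_mpc):
    # D_L(z) is monotonic here, so a simple interpolation inverts it.
    return float(np.interp(dl_mpc, _dl, _z))

def detector_frame_chirp_mass(mc_source_msun, dl_mpc):
    # (1 + z_inj) * Mc^source, with z_inj from the injected distance.
    return (1.0 + redshift_from_distance(dl_mpc)) * mc_source_msun
```

For the fiducial M_c = 1.17 M_⊙ binary, an injection at a few hundred Mpc picks up a redshift of order 0.1, shifting the detector-frame chirp mass by roughly 10%.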
"year": 2021,
"sha1": "d6b4ca63d647ae9a46315f1020e1df2d90c55e45",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2106.06589",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d6b4ca63d647ae9a46315f1020e1df2d90c55e45",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Rare and Common Genetic Variation Underlying Atrial Fibrillation Risk
Key Points

Question: What is the combined contribution of rare and common genetic variation to atrial fibrillation (AF) risk?

Findings: In this genetic association study, rare genetic variants, predicted to cause loss of function, in 6 genes were associated with AF. Together, these rare variants and a polygenic risk score for AF were associated with a considerable risk of incident atrial fibrillation; rare variants were also associated with heart failure and cardiomyopathy, and a higher risk of cardiomyopathy following AF diagnosis.

Meaning: The findings suggest that assessing both rare and common genetic variation may aid in atrial fibrillation prevention and risk stratification.
eFigure 10. Forest plot of hazard ratios for AF, cardiomyopathy, and HF (30-day grace period)
eReferences

This supplemental material has been provided by the authors to give readers additional information about their work.
Quality control, variant annotation, and phenotype definitions
We conducted further filtering of samples based on QC criteria listed in UK Biobank resource 531 (heterozygosity, missing rates, excess relatedness, and missing kinship inference). We excluded samples with disagreements between reported sex and genetically determined sex, and filtered for European ancestry based on the first six principal components of individuals self-reporting as "White", "Irish", or "Any other white background" (UK Biobank data field 21000, codings 1001, 1002, and 1003). We filtered variants by missingness (>10%) and Hardy-Weinberg equilibrium test (P<1×10^-15), and retained calls with a genotype quality >20, read depth >10, and call rate >90%.
Variant annotation was performed using dbSNP (version 4.1a) 1 and SnpEff (version 5.0) 2 . pLoF variants were defined as variants leading to a premature stop codon or to the loss of a start or stop codon, frameshift variants leading to a premature stop codon, and variants disrupting canonical splice acceptor or donor sites. Only variants annotated as "high" impact were included as pLoF variants. We assessed splice-site variants using SpliceAI 3 and classified variants with a SpliceAI score >0.8 as pLoF. Diabetes was defined by ICD-10 codes E10, E11, and E14 (UK Biobank data fields f.130706, f.130708, and f.130714).
Gene-based tests for rare missense variants
Unlike those of pLOF variants, the effects of missense variants on disease risk are more difficult to predict.
Traditional burden tests lose power when the effects of variants are bidirectional. Alternative methods that account for bidirectionality, like the Sequence Kernel Association Test (SKAT) 4 , may lose power when only a small proportion of variants in a gene are associated with the investigated outcome (sparsity of causal variants).

We therefore performed both a traditional burden test and followed up with the Omnibus Aggregated Cauchy Association Test (ACAT-O) 5 as a sensitivity analysis. The ACAT-O test is robust to both bidirectional variant effects and sparsity of causal variants, and is therefore well-suited to examining missense variants.
Sensitivity analyses for gene-based tests
As a sensitivity analysis, we conducted a leave-one-variant-out (LOVO) analysis, using the functionality integrated in REGENIE, for all significant and suggestive associations. This approach constructs a series of masks for each gene, leaving one variant out per mask. A subsequent gene-based burden test was performed for each mask to detect whether any individual variant was the sole driver of the association (P>0.05 for the mask without the individual variant).
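As a toy illustration of the LOVO logic (collapsing genotypes to carrier status and rebuilding the mask with one variant left out; the crude continuity-corrected odds ratio below merely stands in for REGENIE's covariate-adjusted Firth regression):

```python
import numpy as np

def burden_carrier(genotypes):
    # Collapse a (samples x variants) 0/1/2 genotype matrix to a 0/1
    # carrier indicator: 1 if the sample carries any qualifying variant.
    return (np.asarray(genotypes).sum(axis=1) > 0).astype(int)

def odds_ratio(carrier, case):
    # Crude 2x2 odds ratio with a 0.5 continuity correction, standing in
    # for the Firth logistic regression used in the actual analysis.
    carrier, case = np.asarray(carrier), np.asarray(case)
    a = np.sum((carrier == 1) & (case == 1)) + 0.5
    b = np.sum((carrier == 1) & (case == 0)) + 0.5
    c = np.sum((carrier == 0) & (case == 1)) + 0.5
    d = np.sum((carrier == 0) & (case == 0)) + 0.5
    return (a * d) / (b * c)

def lovo_masks(genotypes):
    # Leave-one-variant-out: yield (left-out column, collapsed carrier
    # vector) for each mask that drops exactly one variant.
    g = np.asarray(genotypes)
    for j in range(g.shape[1]):
        keep = [k for k in range(g.shape[1]) if k != j]
        yield j, burden_carrier(g[:, keep])
```

If the association (odds ratio well above 1) collapses for the mask that drops a particular variant, that variant is flagged as the sole driver, as for the RPL3L and UBE4B signals described below.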
To assess whether any associations were primarily driven by ventricular cardiomyopathies, we also conducted another gene-based burden test, excluding all individuals with diagnosed cardiomyopathies before inclusion or during follow-up. Cardiomyopathies were defined by ICD-10 code I42 (UK Biobank data field 131338).
The association between pLOF variants in RPL3L and AF was primarily driven by a variant in position chr16:1945498:C>T (P=0.050 for the mask without the variant), and the association between missense variants in the UBE4B gene and AF was primarily driven by a single missense variant in position chr1:10107367:G>A (P=0.74 for the mask without the variant). Results of the LOVO analysis are summarized in Supplementary Data 4-5. Excluding individuals with cardiomyopathies did not substantially alter the results (Supplementary Data 6-7).
Replication of genetic findings
We included 138,131 participants in the Geisinger Health System MyCode cohort and 29,127 participants in the Mount Sinai BioMe Biobank. Atrial fibrillation cases were defined based on International Classification of Diseases, version 10 (ICD-10) code I48 obtained from electronic health records. Participants without any records of cardiac arrhythmia were used as controls.
DNA sequencing and genotyping data
The Regeneron Genetics Center performed high-coverage whole-exome sequencing using NimbleGen VCRome probes (Roche, CA, USA) or a modified version of the xGen design from Integrated DNA Technologies (IDT). Sequencing was done using Illumina v4 HiSeq 2500 or NovaSeq instruments, achieving over 20x coverage for 96% of VCRome samples and 99% of IDT samples. Variants were annotated using snpEff and Ensembl v85 gene definitions, prioritizing protein-coding transcripts based on functional impact. The following variants were defined as protein truncating: insertions or deletions resulting in a frameshift, any variant causing a stop gain, start loss, or stop loss, and any variant affecting a splice acceptor or splice donor site. Common-variant genotyping was performed on single-nucleotide polymorphism (SNP) arrays as previously described 6 . We retained genotyped variants with a minor allele frequency >1%, missingness <10%, and a Hardy-Weinberg equilibrium test P-value >10^-15. We imputed the genotyped variants based on the TOPMed reference panel 7 , using the TOPMed imputation server 8,9 . Further details are provided elsewhere 6,10,11 .
Association analyses
We estimated associations of the burden of predicted loss-of-function variants in TTN, RPL3L, PKP2, CTNNA3, C10orf71, and KDM5B with atrial fibrillation by fitting additive genetic Firth bias-corrected logistic regression models using the software REGENIE, version 2+ 12 .
Analyses were adjusted for age, age squared, sex, age-by-sex and age-squared-by-sex interaction terms; experimental batch-related covariates; the first 10 common variant-derived genetic principal components; the first 20 rare variant-derived principal components; and a polygenic score generated by REGENIE, which robustly adjusts for relatedness and population structure 12 . Association results from the Geisinger Health System MyCode cohort and the Mount Sinai BioMe Biobank were meta-analyzed using fixed-effects inverse-variance weighting.
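The fixed-effects inverse-variance weighting can be sketched as follows (generic formulas; the effect sizes and standard errors in the test are illustrative, not cohort results):

```python
import math

def inverse_variance_meta(betas, ses):
    # Fixed-effects meta-analysis: weight each cohort's effect estimate
    # (e.g., a log odds ratio) by the inverse of its variance, and pool.
    weights = [1.0 / se ** 2 for se in ses]
    pooled_beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled_beta, pooled_se
```

The pooled standard error is always smaller than the smallest per-cohort standard error, which is why combining MyCode and BioMe sharpens the replication estimate.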
Protein abundance and RNA expression across cardiac cell types in human hearts
To evaluate protein abundance levels of TTN, RPL3L, PKP2, CTNNA3, C10orf71, and KDM5B, we utilized mass spectrometry (MS)-based protein abundance measurements from human left and right atrial tissue of seven individuals from one of our previous studies 13 . Raw data were searched against the SwissProt human protein database containing canonical and isoform sequences using MaxQuant. Similarly, to evaluate in which cell types the proteins of interest are expressed in the human heart, we queried a publicly available single-nucleus RNA sequencing (snRNAseq) data set of 287,269 cells of the human heart published by Tucker et al. 14 . Cytoplasmic cardiomyocyte clusters were removed, the remaining clusters were combined, and average RNA expression values per cell type were calculated as described by Tucker et al. 14 . The average expression values of TTN, RPL3L, PKP2, CTNNA3, C10orf71, KDM5B, and MYBPC3 were extracted and scaled per gene by dividing by the maximum expression value over all cell types. Data were processed and visualized using Python 3.7.1 and Seaborn 0.9.0. Results are illustrated in eFigure 1.
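The per-gene scaling step amounts to dividing each row of a genes-by-cell-types matrix by its row maximum; a small sketch (our own helper, not code from Tucker et al.):

```python
import numpy as np

def scale_per_gene(expr):
    # expr: (genes x cell types) matrix of average expression values.
    # Divide each row by its maximum so every gene peaks at 1; rows that
    # are all zero stay zero.
    expr = np.asarray(expr, dtype=float)
    row_max = expr.max(axis=1, keepdims=True)
    return np.divide(expr, row_max, out=np.zeros_like(expr),
                     where=row_max > 0)
```

Scaling per gene (rather than globally) makes the cell-type pattern of a lowly expressed gene such as C10orf71 comparable to that of a highly expressed one such as TTN.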
As C10orf71 had not previously been associated with cardiovascular phenotypes, we obtained tissue-specific expression based on normalized consensus RNA-sequencing data from the Human Protein Atlas (www.proteinatlas.org) 15 and GTEx (www.gtexportal.org). The tissue-specific RNA expression of C10orf71 was visualized using R. Results are illustrated in eFigure 2.
Risk of heart failure and cardiomyopathy
We assessed hazard ratios for incident AF, HF, and cardiomyopathy as separate outcomes. To ascertain temporal trends in incident disease, we considered each individual outcome and all-cause mortality as competing events. The models were adjusted for sex, age, BMI at inclusion, and hypertension and IHD at inclusion. We considered P<0.0056 as statistically significant (3 genetic exposures × 3 independent outcomes).
Among individuals diagnosed with AF during follow-up, we assessed the hazard ratios for incident HF and cardiomyopathy based on carrier status of a rare pLOF variant. Individuals who developed HF or cardiomyopathy before AF were excluded. Hazard ratios were estimated for HF and cardiomyopathy as separate, competing events.
eAppendix eTable 1 . 2 . 3 . 4 . 5 .eTable 6 .eTable 7 .eTable 8 . 9 . 1 . 2 . 3 . 4 . 5 .eFigure 6 . 9 .
eTable 1. Odds ratio for AF according to PRS and pLOF variants
eTable 2. Odds ratio for AF according to PRS and pLOF variants (unrelated individuals)
eTable 3. Variant carriers in study cohort for incident AF, HF and cardiomyopathy
eTable 4. Hazard ratios for incident AF according to genetic risk and clinical risk factors
eTable 5. Cumulative incidence of AF by age 80
eTable 6. Cumulative incidence of AF by age 70
eTable 7. Cumulative incidence of AF by age 60
eTable 8. Cumulative incidence of AF by age 80 (unrelated individuals)
eTable 9. Cumulative incidence of AF by age 80 (excluding TTN pLOF variants)
eFigure 1. Flowchart of study design
eFigure 2. Manhattan plot of gene-based test for rare pLOF variants
eFigure 3. Quantile-quantile plot of gene-based test for rare pLOF variants
eFigure 4. Cardiac expression of AF associated genes
eFigure 5. Tissue-specific RNA expression of C10orf71
eFigure 6. Results from gene-based association test with AF in independent replication cohort
eFigure 7. Ten-year risk of AF (unrelated individuals)
eFigure 8. Ten-year risk of AF (excluding TTN pLOF variants)
eFigure 9. Forest plot of hazard ratios for AF, cardiomyopathy, and HF (unrelated individuals)
3, and classified splice-site variants with a SpliceAI score >0.8 as pLoF. AF was defined by the International Classification of Diseases, 10th revision (ICD-10) code I48, corresponding to UK Biobank data field 131351. The AF diagnosis in the UK Biobank was based on hospital records, death records and primary care records. Individuals without an AF diagnosis were used as controls. Individuals with an uncertain AF diagnosis (i.e. individuals with an AF diagnosis based only on self-reports, or individuals diagnosed with atrial flutter [ICD-10 codes I48.3 and I48.4]) were assigned to the control group. Heart failure was defined by ICD-10 code I50 (UK Biobank data field 131354) and cardiomyopathy by ICD-10 code I42 (UK Biobank data field 131338). Ischemic heart disease was defined by ICD-10 codes I20, I21, I22, I24, and I25 (UK Biobank data fields 131296, 131298, 131300, 131304, and 131306). Hypertension was defined by ICD-10 code I10 (UK Biobank data field 131286) diagnosed at time of inclusion.
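The case/control assignment described above reduces to simple set logic over ICD-10 codes. A minimal sketch follows; the function name, arguments and code representation are illustrative assumptions, not the authors' pipeline:

```python
# Hypothetical sketch of the AF case/control definition described above.
# Uncertain diagnoses (self-report only, or atrial flutter only) go to controls.

UNCERTAIN_AF = {"I48.3", "I48.4"}  # atrial flutter codes

def classify_af(icd_codes, self_report_only=False):
    """Return 'case' or 'control' for one individual.

    icd_codes: set of ICD-10 codes from hospital/death/primary-care records.
    self_report_only: True if the only AF evidence is self-reported.
    """
    af_codes = {c for c in icd_codes if c == "I48" or c.startswith("I48.")}
    if not af_codes or self_report_only or af_codes <= UNCERTAIN_AF:
        return "control"
    return "case"
```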
MaxQuant v1.5.3.19. ProteinGroups.txt data were further processed and visualized using Python 3.7.1 and Seaborn 0.9.0. Reverse identifications, potential contaminants, as well as proteins only identified by site were removed, and LFQ protein intensities were extracted. One sample from the left atrium (H117-LA) showed a low number of protein identifications and a significantly lower overall protein intensity distribution and was thus removed from further analyses. Median protein intensity-based absolute quantification (iBAQ) values over all samples per atrium were calculated for each protein and visualized by means of a rank plot. KDM5B was not identified in the data set. Moreover, protein iBAQ values of TTN, RPL3L, PKP2, CTNNA3, and C10orf71 of each biological replicate were extracted and visualized using a box plot.
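A pure-Python sketch of this proteinGroups.txt post-processing (row filtering, then a per-protein median iBAQ). The dictionary keys follow MaxQuant's usual column names, but treat all names here as assumptions for illustration:

```python
# Sketch: drop reverse hits, contaminants and only-by-site identifications,
# then compute a median iBAQ per protein across the given sample columns.
from statistics import median

FLAG_COLUMNS = ("Reverse", "Potential contaminant", "Only identified by site")

def clean_protein_groups(rows):
    """Keep rows where none of the MaxQuant flag columns is '+'."""
    return [r for r in rows if all(r.get(c, "") != "+" for c in FLAG_COLUMNS)]

def median_ibaq(row, sample_keys):
    """Median iBAQ value for one protein across the given samples."""
    return median(row[k] for k in sample_keys)
```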
eFigure 2. Manhattan plot of gene-based test for rare pLOF variants. X-axis denotes chromosomal position of the gene. Y-axis denotes -log10 of the P-value for the genetic associations with AF. Significant genes are labeled and colored in red.
eFigure 3. Quantile-quantile plot of gene-based test for rare pLOF variants. X-axis denotes the expected -log10 P-value, while Y-axis denotes the observed -log10 P-values. The lambda value (λ) indicates a measure of genomic inflation in the dataset.
eFigure 4. Cardiac expression of AF associated genes. Suppl. Figure 1A) Relative abundance of protein products of the AF-associated genes identified in the study. Suppl. Figure 1B) Relative abundance of protein products in left atria (LA) and right atria (RA), respectively. The product of KDM5B was not identified in the proteomics dataset. Suppl. Figure 1C) Relative RNA expression across cell types, based on single-cell RNA expression data.
eTable 1. Odds ratio for AF according to PRS and pLOF variants.
eTable 2. Odds ratio for AF according to PRS and pLOF variants (unrelated individuals).
eTable 3. Variant carriers in study cohort for incident AF, HF and cardiomyopathy. Carriers of rare pLOF variants in the main study cohort after exclusion of individuals with prevalent AF, HF or cardiomyopathy. Fifteen individuals carried rare variants in two different genes. "Variant carriers total" denotes the number of individuals with at least one pLOF variant.
eTable 4. Hazard ratios for incident AF according to genetic risk and clinical risk factors.
eTable 5. Cumulative incidence of AF by age 80.
eTable 8. Cumulative incidence of AF by age 80 (unrelated individuals).
eTable 9. Cumulative incidence of AF by age 80 (excluding TTN pLOF variants).
The cumulative incidence models were adjusted for sex, age at AF diagnosis, BMI, and hypertension or IHD at time of AF diagnosis; [...] period and started follow-up 30 days after AF diagnosis.
Abbreviations: BMI, body-mass index; CI, confidence interval; pLOF variant, predicted loss-of-function variant in atrial fibrillation associated gene; PRS, polygenic risk score for atrial fibrillation.
© 2024 Vad OB et al. JAMA Cardiology.
eFigure 1.
Flowchart of study design. AF, atrial fibrillation; CM, cardiomyopathy; HF, heart failure; QC, quality control; WES, whole-exome sequencing. | 2024-06-27T06:16:08.985Z | 2024-06-26T00:00:00.000 | {
"year": 2024,
"sha1": "b853a075bc343856a02898caa7a72255ec0fa4de",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamacardiology/articlepdf/2820520/jamacardiology_vad_2024_oi_240031_1718902422.05964.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3495df1f4964a0156510e84718488b94162c3597",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9084035 | pes2o/s2orc | v3-fos-license | GMA Annual Conference 2016 in Bern – Conference Report
Under the motto “Innovative Together” and for the first time in 15 years, the annual meeting of the Association for Medical Education (GMA) was held in Switzerland from 14 to 17 September 2016. More than 450 participants made their way to Bern and contributed to a lively event, by presenting their projects, moderating, discussing, asking questions and providing answers. Under the patronage of the two conference presidents Sissel Guttormsen (Bern) and Christian Schirlo (Zurich), 274 scientific papers and 36 workshops tackled various projects and questions of education, further education and CPD in human, dental, veterinary medicine and academic health professions. All contributions had previously been critically reviewed by two reviewers. The abstracts for all scientific contributions were published via the German Medical Science (GMS) portal for the conference and are available at http://www.egms.de/dynamic/en/meetings/gma2016/index.htm. Various welcome speakers from the Swiss educational landscape as well as exhibitors provided valuable contributions and, together with the sponsors and our cooperation partners (Federal Office of Public Health of the Federal Department of Home Affairs and the Robert Bosch Foundation), provided a framework for the conference. The program is available online at http://box.iml.unibe.ch/gma-programm/GMA2016_Prog.pdf.
Program
The motto "Innovative Together" was a central theme of the annual conference. The program design was made Innovative by incorporating more modern formats in addition to "classical" formats such as short and poster presentations. In the so-called "flipped contributions", based on the Flipped Classroom Method [1], speakers were able to make informational material available in advance, in order to allow the sessions to be fully utilized for deep discussions. The format of "demo contributions" was also introduced to give space for demonstrations of software or learning tools. And for the first time at a GMA annual conference, the format "Fringe/Weird Stuff" was tested. This format is inspired by AMEE, which describes fringe contributions as "an opportunity to take a different perspective and a new, perhaps provocative or idiosyncratic approach to health education. [...] The emphasis is on creativity, performance and involvement of the audience" [https://www.amee.org/conferences/amee-2016/abstracts#amee-fringe]. The element of Togetherness appeared, for example, in the innovative implementation of the keynotes. Thus keynote speeches were not given by a single speaker, as is common, but instead by a team of two, speaking jointly and sharing discussion time. In addition, keynote speakers were asked beforehand to incorporate specific measures into their plenary lectures in the form of structured interactions to encourage audience participation and thus to explicitly apply didactic principles of large group teaching [2]. Afterwards participants had the chance to send open questions (via SMS, browser, app or Twitter) to the speakers via Poll Everywhere [https://www.polleverywhere.com/], which were subsequently discussed in a "Meet the Experts" round.
The plenary lectures focused on different aspects of competency orientation and 'Entrustable Professional Activities', the added value of narratives in learning, the need for innovation in our educational systems, the interplay of patient centering and assessment and, not least, interprofessionalism: joint learning and working together in medicine. The keynote lectures were recorded and are accessible in the member area of the conference web pages at https://gma2016.de/programm.html. The main program of 145 short presentations (including eight flipped contributions), 120 posters, four fringe and five demo contributions and numerous workshops tackled topics such as education research, assessment, curriculum development, interprofessionalism, further education and much more, and resulted in reflection, lively debates and exchange of experiences. During the evening reception in the historic Kornhauskeller in the old town of Bern, the GMA award winners were also honoured. In addition, an exceptional "Innovative Together Prize" was awarded by the organisers of the annual conference for the contribution to the scientific program (excluding the plenary lectures) which best reflected the spirit of the conference motto. Based on feedback gathered from the conference participants, the prize committee ultimately decided that this was the contribution in which representatives of all North Rhine-Westphalian medical faculties jointly and innovatively explored the possibilities of the fringe format to shed light on the pitfalls and challenges of networks: • Angelika Fritz Hiroko, Christian Thrien, Jörg Reissenweber, Anke Adelt, Andrea Rietfort, Bernhard Steinweg, Gabriele Campe, Linn Hempel, Franz-Bernhard Schrewe, Tim Peters: "Live Networks for Dummies" [4].
Of course, numerous meetings of the GMA committees as well as the annual general assembly were also held. They also presented an opportunity for interested conference attendees to gather information about the work of the respective committees. Please refer to the minutes of each session. In summary, the feedback and the evaluation show that the conference and the numerous innovations were well received. We recommend further reflection on the conference formats and the continued provision of didactically sound and innovative formats at the annual GMA conference.
Thanks and outlook
We would like to thank all those who have made the annual conference possible through their enthusiasm and tireless efforts. We are also very grateful to the many staff members of the Institute of Medical Education (IML) who were responsible for running the event and who have gone to great lengths to make this large event happen. The good cooperation between the teams from Bern and Zurich was enriching throughout, both before, during and after the conference. In addition, we would like to thank the GMA Executive Board and Mrs Herrmannsdörfer of the GMA office, the GMS employees in Cologne and many others. The next annual conference will take place 20 to 23 September 2017 in Muenster and will once again be held as a joint conference of the German Association for Medical Education (GMA) and the Working Group for the Further Development of Teaching in Dental Medicine (AKWLZ) under the motto "Realization of Education". Further information is available at www.gma2017.de.
Competing interests
The authors declare that they have no competing interests. | 2018-04-03T02:54:30.218Z | 2017-02-15T00:00:00.000 | {
"year": 2017,
"sha1": "a3d3395553ef7674bc0a7961e68ad61025f286c8",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a3d3395553ef7674bc0a7961e68ad61025f286c8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52285965 | pes2o/s2orc | v3-fos-license | Spatio-temporal characteristics of population responses evoked by microstimulation in the barrel cortex
Intra-cortical microstimulation (ICMS) is a widely used technique to artificially stimulate cortical tissue. This method has revealed functional maps and provided causal links between neuronal activity and cognitive, sensory or motor functions. The effects of ICMS on neural activity depend on the stimulation parameters. Past studies investigated the effects of stimulation frequency mainly at the behavioral or motor level. Therefore, the direct effect of stimulation frequency on the evoked spatio-temporal patterns of cortical activity is largely unknown. To study this question we used voltage-sensitive dye imaging to measure the population response in the barrel cortex of anesthetized rats evoked by high frequency stimulation (HFS), a lower frequency stimulation (LFS) of the same duration, or a single-pulse stimulation. We found that single pulses and short trains of ICMS induced cortical activity extending over a few mm. HFS evoked a lower population response during the sustained response and showed a smaller activation across time and space compared with LFS. Finally, the evoked population response started near the electrode site and spread horizontally at a propagation velocity in accordance with horizontal connections. In summary, HFS was less effective in cortical activation than LFS, although HFS had 5-fold more energy than LFS.
Electrical stimulation has long been an important tool for exploring the organization and function of the nervous system, as well as an important communication channel for brain machine interfaces (BMI). For over a century, scientists have used different applications and protocols of electrical stimulation to artificially activate brain regions and investigate their functionality and connectivity 1. Moreover, electrical stimulation has enabled breakthrough advances in many clinical applications based on BMI: for example, artificial cochleae can restore hearing to deaf patients, and in the basal ganglia electrical stimulation can alleviate the motor impairment of parkinsonian patients, reduce chronic neuropathic pain and, as found recently, serve as a useful treatment for depression 2-5.
Intra-cortical microstimulation (ICMS) is a widely used electrical stimulation technique in which short pulses of relatively low-amplitude currents (in the range of µA) are delivered to the cortical tissue via a small microelectrode tip, inducing the excitation of nearby cell bodies and axons 6,7. This method has played a central role in experimental neuroscience and has helped provide a causal link between neuronal activity and cognitive or motor functions 8,9; it has been used for functional mapping of various brain areas, e.g. the motor cortex 10-12, the frontal eye field area 13,14, etc. In addition, it has been applied in the visual 15,16, auditory 17-19 and somatosensory regions 20,21, has revealed functional connectivity between different regions in the brain 22-25 (for reviews see 1,9,26), and has been shown to affect sensory perception, behavioral responses and behavioral decisions 27.
The effects of ICMS on neural activity depend on the stimulation parameters, which include pulse duration, current amplitude, train duration and stimulation frequency 26. While some studies investigated the effects of frequency, current and/or amplitude on behavioral performance, psychophysical curves 1,28-30 or the generation of saccadic eye movements 31, fewer studies measured the direct effect of these parameters on the evoked spatio-temporal patterns of neural activity 32-34. The effect of stimulation frequency on the evoked spatio-temporal patterns of neuronal activity is largely unknown. Studies using deep brain stimulation (DBS) 35 suggested that low frequency electrical stimulation (LFS; ranging between 0.5-25 Hz for this technique) induces depolarization of neuronal membranes and evokes action potentials, whereas higher frequency stimulation (HFS), ranging up to 30 kHz (depending on the brain region stimulated and the application technique), resulted in inhibition of action potentials in both central 35,36 and peripheral neurons 37,38.
The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, 52900, Israel. Correspondence and requests for materials should be addressed to H.S. (email: Hamutal.Slovin@biu.ac.il)
Therefore, the effects of low and high frequency ICMS on the spatio-temporal patterns of the evoked neural response are not well understood. To investigate this issue we inserted a microelectrode into the barrel cortex, a well-studied brain area, in anesthetized rats. We then stimulated the cortical tissue using three different ICMS conditions: single-pulse stimulation, high-frequency stimulation (HFS, 500 Hz) and a lower frequency stimulation (LFS, 100 Hz). The latter frequency is within the range of stimulation frequencies that were widely used in many previous ICMS studies and was shown to be highly effective in driving the neural activity and informative in characterizing the evoked neural and behavioral responses 39-43. Using voltage-sensitive dye imaging (VSDI) we then imaged the population responses in the stimulated area at high spatial (mesoscale) and temporal resolution (Shoham et al., 1999; Slovin et al., 2002).
Our results show that LFS and HFS evoked population responses with distinct spatio-temporal characteristics. We found that ICMS at LFS was more effective in cortical activation than ICMS at HFS: cortical activation extended over a larger region and the evoked neural response was higher. In addition, we found that both stimulation conditions induced neural activity near the stimulating electrode that propagated laterally over the cortical surface. The propagation velocity of the evoked pattern suggests the involvement of horizontal connections in the lateral propagation.
Results
We measured neuronal population responses evoked by intra-cortical microstimulation (ICMS) in the upper layers (L2/3) of the barrel cortex of anesthetized rats, using voltage-sensitive dye imaging (VSDI), which enabled us to image neural activity at high spatial (mesoscale, 50² µm²/pixel) and high temporal resolution (100 Hz; see Methods) simultaneously (Fig. 1). The fluorescence dye signal from each pixel reflects the sum of membrane potentials from all neuronal elements (dendrites, axons and somata) and is therefore a population signal (rather than the response of single neurons). In addition, the VSD signal emphasizes subthreshold synaptic potentials (but also reflects suprathreshold activity 44-46). At the beginning of each experiment, a single whisker was deflected on one side of the animal's whisker pad, while we imaged the evoked response in the contralateral barrel cortex. The early evoked population response pattern revealed the location of the barrel field of the stimulated whisker in the imaged area (Fig. 2a left; the VSD signal is measured as fluorescence change (∆f/f); maps are color coded). We used the evoked activation pattern to direct the microelectrode to the barrel cortex (Fig. 2a right) and insert it into the upper layers (250-400 µm). We then stimulated the barrel cortex with biphasic square pulses at a current amplitude of 80 µA (see Methods) with the following parameters (Fig. 1c): single-pulse stimulation (1p), low-frequency stimulation (LFS; 100 Hz; 10 pulses, 100 ms duration) and two high-frequency conditions of different durations: (i) 100 ms of high-frequency stimulation (HFS; 500 Hz; 50 pulses); in this condition the stimulation length equals that of LFS, but it has 5-fold more energy. (ii) 20 ms of high-frequency stimulation (HFS short; 500 Hz; 10 pulses); in this condition the amount of energy equals that of LFS, but the stimulation length is much shorter.
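The pulse counts behind the energy comparison follow from frequency × duration; a one-line arithmetic sketch (here "energy" is simply pulse count at fixed amplitude and pulse width, as in the text's 5-fold comparison):

```python
# Sketch of the train conditions described above (biphasic pulses, 80 uA).
def pulses(freq_hz, dur_ms):
    """Number of pulses in a stimulation train of given frequency/duration."""
    return int(freq_hz * dur_ms / 1000)

LFS = pulses(100, 100)       # 10 pulses over 100 ms
HFS = pulses(500, 100)       # 50 pulses over 100 ms -> 5-fold the energy of LFS
HFS_SHORT = pulses(500, 20)  # 10 pulses over 20 ms -> iso-energy with LFS
```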
As the VSD signal was sampled at 100 Hz, any temporal differences smaller than 10 ms between the different ICMS conditions will not be detected by the current system.

Population responses evoked by ICMS

Figure 2b shows data from an example session: a sequence of VSD maps evoked by ICMS for 1p (top row), HFS (middle row) and LFS (bottom row). The VSD response appeared around the electrode tip, starting to increase already within the ICMS onset frame (t = 0 ms). In the subsequent time frames (each frame is 10 ms in duration) the population activity quickly spread over the barrel cortex in an anisotropic manner, as previously reported 47,48. The anisotropic VSD response showed a larger spread along the rows of the barrel cortex than across the columns, as previously reported 48-50 (see Supplementary Fig. S1). The maps show that HFS and LFS evoked a higher neuronal population response than 1p stimulation, as expected (see maps at 10-30 ms), while at later times the response evoked by LFS activated a larger region (e.g. more red/pink pixels in the maps at 80-100 ms) compared with the HFS condition (although HFS contains 5-fold more energy than LFS). To compute the time course of the VSD response we defined a circular region of interest (ROI) that was centered on the peak cortical response (black circular contour in Fig. 2b, upper row, left map; see Methods). The VSD time course averaged across the ROI pixels is shown in Fig. 2c (same example session as in Fig. 2b). The responses in all conditions showed a fast and narrow peak activation and, as described above, the peak responses for LFS and HFS were higher than the response to 1p (4.3 × 10⁻³ ± 1.12 × 10⁻⁴ (mean ± sem), 6.1 × 10⁻³ ± 2.24 × 10⁻⁴ and 6.2 × 10⁻³ ± 1.25 × 10⁻⁴ ∆f/f for 1p (n = 15 trials), HFS (n = 15) and LFS (n = 11), respectively; p < 0.001, Wilcoxon rank sum test, Bonferroni corrected).
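The circular-ROI time course described above reduces to masking pixels within a radius of the peak pixel and averaging per frame. A minimal numpy sketch (the array layout and radius are assumptions for illustration, not the authors' exact code):

```python
# Sketch: average dF/F over pixels inside a circular ROI, per time frame.
import numpy as np

def roi_timecourse(vsd, center, radius):
    """vsd: (time, y, x) dF/F movie; center: (y, x) pixel; radius in pixels."""
    _, h, w = vsd.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return vsd[:, mask].mean(axis=1)  # mean over ROI pixels per frame
```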
Interestingly, for the longer ICMS conditions, the VSD signal showed a more sustained response (40-100 ms) that was higher for the LFS condition (Fig. 2c, green curve) compared with the HFS condition (Fig. 2c, pink curve). Finally, after the stimulation ended, all VSD responses showed a slow descending phase to baseline, which was much faster for 1p than for the HFS and LFS conditions (Fig. 2c inset). Next we computed the grand analysis of the VSD signal time course for the different stimulation conditions (Fig. 3a). To account for variance of the VSD response across recording sessions (resulting from variance across animals, VSD staining quality etc.), in each imaging session the VSD signals in the different conditions were normalized to the peak response of the LFS condition. Next, the VSD responses were averaged across all sessions. The red curve depicts the normalized grand-analysis time course for the HFS short condition (10 pulses; 500 Hz), which has equal energy to LFS and therefore a shorter duration (20 ms). The grand analysis confirmed the main observations shown in the example session: narrow and fast peak activation for all conditions. Although the peak VSD response was higher for HFS and LFS than for 1p, the difference was not statistically significant across sessions (p > 0.1, Wilcoxon rank sum test, Bonferroni corrected; 0.77 ± 0.09 (mean ± sem), 1.05 ± 0.04, 1 ± 0 and 1.12 ± 0.05 for 1p (n = 6), HFS (n = 6 sessions), LFS (n = 6) and HFS short (n = 6), respectively). Next, a more sustained VSD response appeared for the longer stimulation conditions, and it was higher in the LFS compared with the HFS condition (p < 0.05 at 70-100 ms; Wilcoxon rank sum test; black arrows in Fig. 3a), although HFS has 5-fold more energy than LFS (50 vs. 10 pulses, respectively). This result is consistent with a previous study which showed that the HFS suppressive effect is independent of stimulus duration 36.
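The session-wise normalization used for the grand analysis divides every condition's time course by that session's LFS peak; a sketch (the dict-of-arrays layout is an assumption for illustration):

```python
# Sketch: normalize all conditions of one session to the LFS peak response.
import numpy as np

def normalize_to_lfs_peak(session):
    """session: mapping condition name -> 1-D dF/F time course."""
    peak = float(np.max(session["LFS"]))
    return {cond: np.asarray(tc) / peak for cond, tc in session.items()}
```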
At the end of stimulation, the VSD responses in all conditions showed a slower descending phase that was shorter for 1p compared with all other conditions (Fig. 3a inset). The time to peak response was similar for all stimulation conditions (11.6 ± 1.66 ms for HFS, LFS and HFS short, and 10 ± 0 ms for 1p stimulation). The normalized peak response amplitude was similar for HFS and HFS short, as expected, because in both conditions the peak response appeared following ~8 pulses of stimulation (Fig. 3a; 1.12 ± 0.05 and 1.05 ± 0.04 for HFS short (n = 6) and HFS (n = 6), p > 0.05; Wilcoxon rank sum test). Interestingly, the normalized peak response of LFS was also similar and not statistically different from the peak response of HFS (p = 0.36; Wilcoxon rank sum test), despite occurring after only ~2 pulses of stimulation.
To further investigate the temporal characteristics of the HFS- and LFS-evoked responses, we normalized the VSD time courses to the peak response of each ICMS condition (Fig. 3b). As in Fig. 3a, a sustained VSD response appeared for the longer stimulation conditions, and it was higher in the LFS compared with the HFS condition (p < 0.05 starting at 40 ms; Wilcoxon rank sum test). Although the time to peak response was similar among all ICMS conditions (see above), the response dynamics following the end of stimulation varied among the conditions (Fig. 3b and inset). The time for the response to decline below half of the peak response was longer for HFS short relative to 1p (30 ± 2.6 and 48.33 ± 16.4 ms for 1p (n = 6) and HFS short (n = 6), respectively); however, the difference was not significant (p = 0.48; Wilcoxon rank sum test). Similarly, the LFS response showed a longer time to decline below half of the peak response compared with the HFS condition, but this was not significant (175 ± 17.1 and 143.33 ± 24.7 ms for LFS (n = 6) and HFS (n = 6), respectively; p = 0.1; Wilcoxon rank sum test). However, the time for the response to decline below half of the peak response was significantly longer for LFS and HFS than for 1p stimulation (p < 0.01; Wilcoxon rank sum test, Bonferroni corrected) and HFS short (p < 0.05 for LFS; p = 0.09 for HFS; Wilcoxon rank sum test, Bonferroni corrected). In summary, the LFS condition evoked higher neural activation during the sustained response in comparison with both HFS conditions (short and long train). Thus, the HFS condition was inferior to LFS in activating neuronal populations.
Characteristics of the spatial pattern

The ICMS conditions can be divided into two stimulation groups: iso-energy but with different durations (LFS, HFS short) and iso-duration but with different energies (LFS, HFS). For the iso-energy group, because the LFS duration was longer than that of HFS short (100 and 20 ms, respectively), it is possible that stimulation duration (rather than frequency) accounted for the lower evoked activity of the HFS short condition. Therefore, below we focus on ICMS conditions with similar durations. As shown in Fig. 2b, the population response evoked by ICMS appeared first around the microelectrode tip and then propagated horizontally over the cortical surface in an anisotropic manner (see Supplementary Fig. S1). To quantify and compare the profile of the spatial spread for HFS and LFS, we applied an elliptical ring-shaped ROI analysis (see Methods). We generated a continuous set of non-overlapping elliptical rings (see schematic illustration in Fig. 4a right) that were fitted to the evoked response pattern at 10 ms post stimulation. Figure 4a left shows the space-time maps for an example session and Fig. 4b shows the grand analysis of the space-time maps, where the VSD signal was normalized to the peak response of the LFS condition. The maps show the neuronal response as a function of distance along the semi-major axis of the fitted ellipses from the central ellipse (y-axis; see Methods) for each time point (x-axis). Using this approach we could investigate the VSD signal propagation from the center of the response to adjacent cortical regions. The spatio-temporal profile of the HFS condition in the example session (Fig. 4a) and in the grand analysis (Fig. 4b) showed less activation over time and space compared with the LFS condition. To quantify this difference, we computed the sum of overall activation during stimulation (i.e. 0-100 ms from stimulation onset) and over space (up to the mean semi-major axis: 2.4 ± 0.1 mm) for each session.
Figure 4c depicts the summed normalized activation, which is significantly higher for LFS than for HFS (173.74 ± 8.9 and 136.17 ± 11.75 for LFS (n = 6) and HFS (n = 6), respectively; p < 0.05, Wilcoxon rank sum test). This result supports the conclusion that high-frequency stimulation is less effective in activating neuronal responses relative to lower-frequency stimulation. Figure 4d shows the spatial profile for LFS and HFS at peak response (continuous lines) and at the end of stimulation (t = 100 ms; dashed lines). At peak response, the spatial profiles of the evoked activity in the two conditions are similar; however, at the end of stimulation, the spatial profile of the LFS condition is significantly higher than that of HFS (p < 0.001, Wilcoxon rank sum test). Both spatial profiles show a significant deviation from baseline activity (i.e. activity before ICMS onset; gray lines) at the time of peak response as well as at the end of stimulation (Fig. 4d). At a later time, i.e. 200 ms post stimulation, the LFS spatial profile was higher than HFS (and baseline) at remote distances (1.95-2.4 mm along the semi-major axis; p < 0.05, Wilcoxon rank sum test). Together, these results suggest that LFS was superior in activating cortical populations relative to HFS. LFS generated higher neuronal responses extending over a larger cortical area during ICMS stimulation; in addition, the VSD response showed a slower decay to baseline relative to HFS at regions remote from the center of the response. Similar results were obtained for a circular ring-shaped ROI centered on the peak activation in space (see Supplementary Fig. S2).
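The non-overlapping elliptical rings can be built by subtracting successive filled ellipses; a numpy sketch (the ring-growth parameterization is an illustrative assumption, not the authors' exact fitting procedure):

```python
# Sketch: concentric, non-overlapping elliptical ring masks around a center.
import numpy as np

def elliptic_ring_masks(shape, center, a0, b0, n_rings, step):
    """a0, b0: semi-axes (pixels) of the innermost ellipse; each ring grows
    both semi-axes by `step` pixels. Returns a list of boolean masks."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]

    def inside(a, b):
        return ((xx - center[1]) / a) ** 2 + ((yy - center[0]) / b) ** 2 <= 1

    masks, prev = [], np.zeros(shape, bool)
    for i in range(n_rings):
        cur = inside(a0 + i * step, b0 + i * step)
        masks.append(cur & ~prev)  # ring = current ellipse minus inner ones
        prev = cur
    return masks
```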
Although LFS was more effective in cortical activation relative to HFS, it is possible that an even lower frequency, i.e. 50 Hz, would be more effective. To test this option, we stimulated the barrel cortex with a lower frequency of 50 Hz. Figure S3 shows the normalized VSD response to 100 ms stimulation at 50 Hz (5 pulses; 80 µA) and 100 Hz (LFS) and the spatial spread, normalized to the peak response of the LFS condition. There are no significant differences between the LFS and the 50 Hz stimulation.
The propagation of the VSD response over space

ICMS evoked a neuronal population response showing a fast spread of activity over a few mm of cortex; we therefore studied the spatio-temporal dynamics of this signal propagation. We computed the derivative maps of the VSD responses: from each VSD map we subtracted the VSD map measured 20 ms earlier. Figure 5a depicts a sequence of derivative maps from an example session; the later maps show negative derivative values, corresponding to the fast descending phase of the VSD signal after reaching peak response (see Fig. 3a,b for the time course of the VSD signal). This is further shown in the grand analysis of the derivative time course (Fig. 5b; the VSD signal was normalized to the peak response of the LFS condition). Derivatives were computed for a circular ROI (Fig. 5a, top left map) and then averaged across all sessions. The negative derivative values are significantly larger for HFS short than for LFS or HFS (p < 0.001, Wilcoxon rank sum test, Bonferroni corrected; Fig. 5c left). The minimal negative derivatives of all stimulation conditions are significantly smaller than zero (p < 0.05, signed-rank test for a significant difference from zero) and all appeared at similar times regardless of stimulation length (30 ± 0 ms for 1p, 31.6 ± 1.6 ms for HFS short and HFS, and 33.3 ± 2.1 ms for LFS; p > 0.5, Wilcoxon rank sum test). The LFS condition showed another trough of negative derivative at much later times, corresponding to the end of stimulation (100-140 ms; Fig. 5b,c right). This negative phase is significantly larger for LFS than for all other conditions (*p < 0.05, **p < 0.01, Wilcoxon rank sum test, Bonferroni corrected). Together these results show that the major positive and negative derivative components appear in all conditions. Our next step was to study the spatial profile of the derivative maps. For this purpose we used a horizontal spatial profile (Fig. 5a, top row, map at t = 30 ms; see Methods).
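The derivative maps are a lagged difference of the VSD movie (20 ms equals 2 frames at the 100 Hz sampling rate); a minimal numpy sketch (array layout is an assumption):

```python
# Sketch: subtract from each VSD map the map measured 20 ms (2 frames) earlier.
import numpy as np

def derivative_maps(vsd, lag_frames=2):
    """vsd: (time, y, x) movie. First lag_frames maps are left at zero."""
    d = np.zeros_like(vsd)
    d[lag_frames:] = vsd[lag_frames:] - vsd[:-lag_frames]
    return d
```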
Figure 6a shows the spatial profile of the derivative response at four consecutive time points (10-40 ms) for the example session in Fig. 5a. Because the evoked VSD response appeared close to the border of the imaging chamber, the spatial profile is not plotted symmetrically on both sides of the peak response (the location of the peak response is denoted as 0 mm; the plotted distance is smaller toward the closer border). Early times of the derivative response (10-20 ms) show a wide positive change of activation centered on the peak response, with lower values farther away from the peak. Later times (30-40 ms) showed negative values around the electrode site (gray arrow in Fig. 6a, LFS condition), while more remote sites showed values closer to zero (see 1p condition, t = 30 ms) or even positive values (black arrows in Fig. 6a, LFS condition, t = 30 ms). The negative derivative near the site of the microelectrode corresponds to the fast decline in the VSD signal, whereas the positive values at remote sites suggest the existence of an activation wave that propagates laterally, starting near the electrode and spreading horizontally over the cortex.
To investigate the VSD signal propagation wave we performed a grand analysis of the spatial profile for the HFS, LFS and HFS short conditions (Fig. 6b). The spatial profile of each session was aligned to the peak response in space (at t = 10 ms) and normalized to the peak value at t = 10 ms of each session. At 30 ms, the grand analysis shows negative values near the electrode tip (gray arrow; p < 0.05, signed-rank test for a significant difference from zero), while more remote sites showed positive values (black arrows; p < 0.05, signed-rank test for a significant difference from zero).
To determine the propagation velocity of the VSD response, we selected a central ROI (circle, 5-pixel radius) located at the point of maximal activation in space and 7 non-overlapping peripheral rings extending 1.25-2.15 mm from the center (3 pixels wide each, see Methods). We defined a threshold of 30% of the peak response in each session and calculated for each ROI the time at which the threshold was crossed (see Methods). Figure 7a shows a latency map, i.e. the time at which each pixel crossed the threshold, for an example session. Pixels located closer to the peak response passed the threshold earlier in time than remote pixels. Then, for each ring, we calculated the velocity of propagation from the center (see Methods).
Comparison of the VSD response evoked by whisker deflection and ICMS.
Next we investigated the relation between the population response evoked physiologically by sensory stimulation and that evoked artificially by ICMS. Thus, we compared the VSD response evoked by a brief whisker deflection (see Methods) with that evoked by 1p of ICMS (see Methods). Figure 8a shows data from an example session: a sequence of VSD maps evoked by 1p of ICMS (top row; same as in Fig. 2) and 1p of whisker deflection (bottom row). As expected, the evoked population response (at an ROI centered on the peak neural activation) following whisker deflection arrived at peak response later than the VSD signal evoked by ICMS (time to peak 20 ± 0 ms and 10 ± 0 ms for whisker deflection (n = 9) and ICMS 1p (n = 6), respectively; Fig. 8b). Interestingly, when aligning both responses on the time of peak response (Fig. 8c), the VSD response dynamics of whisker deflection and ICMS showed high similarity. To compare the spread of cortical activity following whisker deflection or ICMS we applied an elliptical ring ROI analysis (see Fig. 4a and Methods). Figure 8d shows the space-time maps for the grand analysis, where the VSD signal was normalized to the peak response of each session.
The maps show the neuronal population response as a function of distance over cortical space (y-axis) along the semi-major axis of the fitted ellipses for each time point (x-axis). The semi-major axis of the central ellipse (located around the peak response; see Methods) was 0.55 ± 0.07 mm (mean across sessions ± sem), and the semi-major axis of the largest ellipse was 2.3 ± 0.07 mm (mean across sessions). The spatio-temporal profile of the VSD response evoked by 1p ICMS showed similar activation over time and space compared with 1p whisker deflection (50 ms ramp-and-hold). To quantify this, we computed the sum of overall activation during whisker deflection (i.e. 0-50 ms from stimulation onset; see Methods) and over cortical space for each session. Figure 8e depicts the summed normalized activation, which is similar for 1p ICMS and whisker deflection (54.78 ± 3.6 and 56.22 ± 5.4 for 1p ICMS (n = 6) and whisker stimulation (n = 9), respectively; N.S.: not significant, Wilcoxon rank sum test). These results are in accordance with previous studies that reported similarities between the neural activation evoked by ICMS or whisker deflection 47,48 .
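The "summed normalized activation" comparison above can be sketched as follows. This is an illustrative toy, not the paper's code: the space-time maps are plain nested lists (rings × frames, 10 ms per frame), and all values and ROI counts are invented for the demonstration.

```python
# Sketch of the "summed normalized activation" comparison: the normalized
# VSD response is summed over the stimulation window (0-50 ms) and over all
# spatial ROIs; values and ring counts are toy numbers, not real data.

def summed_activation(space_time, t_start=0, t_end=5):
    """Sum a (rings x frames) normalized response over rings and frames."""
    return sum(sum(ring[t_start:t_end]) for ring in space_time)

# Two toy space-time maps (3 rings x 6 frames, 10 ms per frame); the whisker
# response peaks one frame later, but the overall activation is comparable.
icms    = [[0.0, 1.0, 0.6, 0.5, 0.5, 0.4],
           [0.0, 0.8, 0.5, 0.4, 0.4, 0.3],
           [0.0, 0.5, 0.3, 0.3, 0.3, 0.2]]
whisker = [[0.0, 0.4, 1.0, 0.7, 0.5, 0.4],
           [0.0, 0.3, 0.8, 0.5, 0.5, 0.3],
           [0.0, 0.2, 0.5, 0.4, 0.3, 0.2]]
print(round(summed_activation(icms), 2), round(summed_activation(whisker), 2))
```

With these toy numbers both conditions sum to 6.1, mirroring the paper's finding that total activation is statistically indistinguishable between 1p ICMS and whisker deflection.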
Discussion
While ICMS has been used in research for many years, the effects of low- and high-frequency ICMS on the evoked spatio-temporal patterns of neural activity are not well understood. In this study we used VSDI to measure the neural population response evoked by high-frequency (500 Hz) and lower-frequency (100 Hz) ICMS in the barrel cortex of anesthetized rats. We found that ICMS at HFS was less effective in cortical activation in both the time and space domains compared with ICMS at LFS (although HFS delivers 5-fold more energy than LFS). Furthermore, we showed evidence for a lateral propagation of the signal starting near the electrode site and spreading horizontally over the cortex. The calculated propagation velocity of the evoked pattern suggests the involvement of cortical horizontal connections.
The effects of ICMS on the evoked spatio-temporal pattern of neural activity or behavior depend on the electrical stimulation parameters. Previous electrophysiological, VSDI and intrinsic-signal optical imaging studies reported that increasing the current amplitude leads to higher neuronal activity, extending over a larger brain area around the microelectrode site, including activity propagation to neighboring, interconnected cortical regions. Fehérvári et al. performed VSDI in mouse V1 while applying ICMS. They used variable current amplitudes (10-50 µA) and found that the VSD signal increased with larger current amplitude. In addition, the spatial spread of activation was limited for weaker stimulation, while at stronger stimulation the evoked response appeared over a larger cortical area 33 . Similar results were observed using intrinsic-signal optical imaging in anesthetized monkeys, in which higher currents (ranging from 10 to 200 µA) evoked larger cortical responses and recruited more cortical areas than lower currents 51 . In the latter study, the authors also manipulated the stimulation duration and found that the magnitude of the hemodynamic response increased with stimulation duration. The effects of current amplitude were tested at the behavioral level as well. For example, rats were trained to detect ICMS delivered to the barrel cortex and their detection performance increased with larger stimulation amplitude 29 . Frequency is another important parameter of ICMS; however, its effects were mainly studied at the behavioral or motor level [28][29][30] . For example, Semprini et al. varied the ICMS frequency in rats trained to detect ICMS delivered to the barrel cortex and found that high detection rates were achieved within the range of 25-200 Hz. Thus little is known about how ICMS frequency shapes the evoked spatio-temporal pattern of neuronal activity.
Here we measured and quantified the effects of HFS and LFS ICMS on the evoked spatio-temporal patterns of neuronal responses and showed that LFS is superior to HFS in neural activation. LFS showed a higher VSD signal that spread over a larger area during stimulation compared with HFS. The definition of LFS and HFS is quite variable among studies and depends on the stimulated tissue and the electrical stimulation technique. For example, studies of peripheral nerve stimulation defined HFS to be within the range of 1-40 kHz 38 , while studies using deep brain stimulation (DBS) defined HFS as 50 Hz 36 . In this work we used 500 Hz as HFS, which is well within the range of previous ICMS HFS studies 28,52,53 . As the frequency range for LFS is not well defined for ICMS, we chose 100 Hz as the lower frequency. The frequency range of 100-200 Hz was widely used in many previous studies and was reported to be highly effective in evoking neural activity and affecting behavioral performance 28,52,54 .
Our results showed that the peak population response evoked by HFS or LFS appeared fast, within 10 ms post stimulation onset. This peak was followed by a fast descending phase and a sustained activity (Figs 2, 3 and 4) that persisted throughout the stimulation. This activity was lower for HFS than for LFS. In addition, LFS spread over a larger cortical area than HFS. These results may suggest the involvement of an inhibitory effect in the HFS condition; another possibility is that HFS was less effective in directly activating the cortical tissue. Previous studies reported evidence for the involvement of GABAergic inhibitory neurons in sensory cortical processing, including in the barrel cortex, which share a proportional relation with the excitatory network. It was previously shown that frequency modulation of sensory input may lead to changes in the excitatory/inhibitory balance [55][56][57][58] (for review see 56 ). Short-latency inhibition (fast time to peak, <10 ms) 56 was suggested to be mediated via GABA A inhibitory neurons, which were shown to exist in layer 2/3 59 as well as in layer 4 58 , the main layers from which the VSD signal is collected. Therefore, it is possible that the observed early narrow peak response in LFS and HFS, followed by a fast decline and then a lower sustained activity in HFS during the train duration (100 ms; Fig. 3), arises from activation of this type of inhibitory neurons. The time scale of this modulation may be too early to be affected by GABA B inhibitory neurons, which are considered to have slower dynamics (recruited around 100-500 ms) mediated via G-proteins 60 .
In addition to the above, it is also possible that HFS was less effective in recruiting excitatory responses within the cortical network. A few mechanisms have been suggested in this regard. HFS can generate a conduction block, which can lead to lower neuronal activation at the site of stimulation. A conduction block may arise from an increased extracellular potassium concentration, which can then lead to changes in neural excitability 36 , or from a reduction in the open probability of voltage-gated sodium channels 37,61 . In addition, Beurrier et al. (2001) investigated HFS in the subthalamic nucleus (STN) in rat slices in vitro. They proposed that the HFS-induced silencing of neural activity was mediated by a reduction of Na + and Ca 2+ voltage-gated currents, which interrupted the ongoing activity of STN neurons 62 . Additional studies are required to investigate the involvement of inhibitory processing that may underlie the response difference between LFS and HFS ICMS.
An interesting aspect of the ICMS technique is the ability to affect sensory perception and behavior in a specific manner. In our study and others 7,25 , there is evidence that even short trains of electrical stimulation activate a large region of the cortex, larger than expected by passive spread of current and direct excitation of cortical elements 25 . These results raise the question: how can such a wide activation of cortical tissue lead to specific and precise behavioral effects? In a previous series of experiments performed in area MT of behaving monkeys, Salzman et al. (1990, 1992) used low-amplitude ICMS pulses (10 µA) at 200 Hz 27,63 and reported a specific behavioral effect. ICMS biased the monkey's choices on a direction discrimination task towards the preferred direction of neurons at the stimulation site. Interestingly, a subsequent study by the same group 28 showed that increasing the stimulation frequency to 500 Hz (at 10 µA) preserved the directional specificity of the microstimulation effect and increased the intensity of the directional signal, whereas using a higher current amplitude at lower frequencies (80 µA, 200 Hz) reduced or eliminated the behavioral effect. These results can be interpreted as a larger spread of neural activation around the electrode at lower frequencies and high stimulation currents, whereas at higher frequencies the evoked activity was spatially restricted to neurons having similar preferred directions. Indeed, increasing the current amplitude alone induced a larger spread of the neuronal activity within and across cortical areas 32,33 . The above behavioral and neuronal results and their interpretation are in accordance with our observation that HFS induces cortical activation that spreads over a smaller cortical region, even at high current amplitudes (i.e. 80 µA), and thus may cause a more spatially restricted activation, within the columnar range.
The high spatio-temporal resolution of the VSDI technique offers the opportunity to image activity dynamics over milliseconds at mesoscale resolution following stimulation, and to investigate activity propagation and cortical connectivity. We used a ring analysis (Fig. 7) to determine the propagation velocity of the ICMS-evoked response. Using this approach we computed the propagation velocity of the VSD response; the median value was 0.113 ± 0.052 mm/ms. This velocity is in accordance with previous VSD reports of lateral propagation velocities from the upper cortical layers of rodents in vitro [64][65][66] and in vivo 50 , although the calculation methods and stimulation parameters were different. The similarity of horizontal spread velocity among several studies suggests that ICMS can be used to reveal characteristics of the underlying cortical connectivity network.
ICMS provides a way to artificially activate the cortical network by directly activating neurons and axons passing near the stimulating electrode. Neural activation can then spread and propagate through the activated network connectivity. Previous studies reported that ICMS can mimic the neural response evoked by sensory stimulation 50,67,68 and can also evoke natural behavior 69,70 . These observations are consistent with our results, showing that the evoked population response of 1p ICMS resembles the evoked response following brief whisker deflection. The similarity is evident for both the spatial spread and the response dynamics (Fig. 8). A partial explanation for the high similarity might be the fast dynamics of both the whisker deflection (driven by the piezo actuator) and 1p ICMS (current spread of the short single pulse).
In the absence of external stimulation, primary sensory cortices show spontaneous activity patterns and propagation that resemble cortical activation evoked by sensory stimulation [71][72][73][74] . Using photostimulation, Mohajerani et al. (2013) showed that they could induce activation patterns similar to those observed in spontaneous activity or sensory stimulation 73 . Moreover, a recent study by Carrillo-Reid et al. (2016) showed that repetitive two-photon photostimulation in sensory cortex can build and imprint cortical ensembles that recur spontaneously. Interestingly, the spontaneous activity mimics the repetitive photostimulated response. Future studies are needed to investigate whether ICMS can also imprint new cortical ensembles.
In conclusion, we found that HFS ICMS was less effective in cortical activation compared to the LFS condition, which may result from a suppression of axonal conduction during HFS, the involvement of an inhibitory network, or both. However, we note that our observations are limited to stimulation in the upper layers (250-400 µm) of the barrel cortex, and additional studies are required to uncover whether there are layer-specific stimulation effects.
Materials and Methods
Animals and surgical procedure. Six male albino rats (Sprague Dawley (SD), 200-350 g) were used for the experiments and data analysis. All experimental and surgical procedures were carried out according to the NIH guidelines, approved by the Animal Care and Use Guidelines Committee of Bar-Ilan University and supervised by the Israeli authorities for animal experiments. Rats were deeply anesthetized with urethane (1.5 g/kg), which provides long-lasting, stable anesthesia. A craniotomy was performed above the barrel cortex of the rat (stereotactic coordinates: 2 mm posterior and 6 mm lateral to bregma) and the dura mater was carefully removed in order to expose a ~5 mm × 10 mm window over the barrel cortex.
Voltage-sensitive dye staining and imaging. A staining chamber was used to stain the exposed cortex with the voltage-sensitive dye (RH-1838; 0.5 mg/ml of artificial cerebrospinal fluid (ACSF)) for ~2 hours. Following staining, the brain was washed with ACSF solution, covered with agarose and sealed with a custom-cut coverslip. For voltage-sensitive dye imaging (VSDI) we used the MicamUltima system, and images of 100 × 100 pixels (the whole image covers an area of 5 × 5 mm²; each pixel covers a cortical area of 50 × 50 μm²) were acquired at 100 Hz. During imaging, the exposed cortex was illuminated using an epi-illumination stage with an appropriate excitation filter (peak transmission 630 nm, width at half height 10 nm) and a dichroic mirror (DRLP 650 nm), both from Omega Optical, Brattleboro, VT, USA. To collect the fluorescence and reject stray excitation light, a barrier post-filter was placed above the dichroic mirror (RG 665 nm, Schott, Mainz, Germany; Fig. 1a). To obtain the vascular pattern of the imaged area we used green light (540 nm BP10). Next, VSDI was performed for the following 2-3 hours.
Whisker stimulation and mapping the barrel fields. To verify that we were measuring neuronal population responses from the barrel cortex (in addition to cortical exposure at the adequate anatomical coordinates, see above), different individual whiskers (e.g. B2 or C2) were deflected separately by a piezoelectric stimulator. Each whisker was glued to a thin glass pipette attached to the piezoelectric device ~5 mm from the whisker's base and was deflected along the anterior-posterior axis of the head (1-5 pulses; pulse duration: 50 ms; frequency: 10 Hz). The piezo bending actuator reaches its nominal displacement within ~0.5 µs. The cortical maps were then obtained using VSDI (Fig. 1a). To evaluate the barrel field size and define the different barrels, we used the activated area at early times (10 ms after stimulus onset). Pixels exceeding a high response threshold (75% of peak activity within the barrel cortex) were included in the region of interest (ROI). This threshold reconciled well with the expected size of a single barrel field, as shown in previous studies using a similar approach 47 . Figure 1b left shows an example of the VSD response pattern corresponding to the C2 barrel field following C2 whisker deflection (1p, 50 ms duration). The map in Fig. 1b top represents the pattern of activation 10 ms after stimulation onset, with the C2 barrel field depicted by a black contour (75% of maximal response amplitude). Figure 1b bottom shows the outline of different barrel fields over the image of the blood vessel pattern. This map enabled us to obtain the row and column locations in the barrel field. Based on this stage we could target the microstimulation electrode to the barrel cortex.
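The ROI definition above (pixels exceeding 75% of the peak response at t = 10 ms) can be sketched as follows. This is a minimal illustration with plain nested lists standing in for a VSD map; the function name and the toy 4×4 map are ours, not from the paper.

```python
# Sketch of the barrel-field ROI definition: pixels exceeding 75% of the
# peak response are included in the ROI (all names and values illustrative).

def roi_above_threshold(vsd_map, frac=0.75):
    """Return the set of (row, col) pixels whose value exceeds frac * peak."""
    peak = max(max(row) for row in vsd_map)
    thr = frac * peak
    return {(r, c)
            for r, row in enumerate(vsd_map)
            for c, v in enumerate(row)
            if v > thr}

# Toy 4x4 map: peak is 1.0, so only pixels above 0.75 enter the ROI.
vsd_map = [[0.1, 0.2, 0.1, 0.0],
           [0.2, 0.8, 0.9, 0.1],
           [0.1, 1.0, 0.7, 0.1],
           [0.0, 0.1, 0.1, 0.0]]
roi = roi_above_threshold(vsd_map)
print(sorted(roi))  # -> [(1, 1), (1, 2), (2, 1)]
```

In the paper the same thresholding is applied to the 100 × 100 pixel map at 10 ms after whisker deflection to outline a single barrel field.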
Intra-cortical microstimulation parameters.
A microelectrode was targeted to the barrel cortex (identified by the evoked responses to whisker stimulation; Fig. 2a) and inserted in the upper layers (250-400 µm; L2/3). Biphasic square pulses were delivered through a standard tungsten microelectrode (FHC, Bowdoin, ME, USA) using a microstimulation box (linear biphasic stimulus isolator, BAK electronics, BSI-1A). Each biphasic pulse is composed of a cathodal (0.2 ms) pulse followed by an anodal (0.2 ms) pulse (Fig. 1c). We stimulated the brain with a current amplitude of 80 µA in 4 different stimulation conditions: single-pulse stimulation (1p; Fig. 1c top), low-frequency stimulation (LFS; 100 Hz; 10 pulses, 100 ms duration; Fig. 1c middle) and two high-frequency conditions of different durations: (i) 100 ms of high-frequency stimulation (HFS; 500 Hz; 50 pulses; Fig. 1c bottom); in this condition the stimulation length equals that of LFS, but it delivers 5× the energy. (ii) 20 ms of high-frequency stimulation (HFS short; 500 Hz; 10 pulses); in this condition the amount of energy equals that of LFS, but the stimulation length is much shorter. The definition of HFS and LFS frequencies for ICMS varies widely and depends on the stimulated tissue and the electrical stimulation technique; in this work we used 500 Hz as HFS, which is well within the range of previous ICMS HFS studies 28,52,53 . Because the frequency range for LFS is not well defined for ICMS, we chose 100 Hz as the lower frequency. The frequency range of 100-200 Hz was widely used in many previous studies and was reported to be highly effective in evoking neural activity and affecting behavioral performance 28,52,54 . We used an additional control stimulation condition: 5 pulses at a lower frequency (50 Hz), which has half the LFS energy but an equal time duration (100 ms). The output current from the microstimulation box was verified as a voltage measurement across a 100 kΩ resistor located between the animal and the microstimulation box.
Data analysis.
Data analysis was performed on 18 recording sessions of stimulation conditions and 13 control sessions from 6 rats.
Basic VSDI analysis. All data analyses and calculations were done using MATLAB software. The basic analysis of the VSD signal is detailed elsewhere [75][76][77] . Briefly, to remove the background fluorescence levels, each pixel was normalized to its baseline fluorescence level (average over the first few frames, before stimulation onset). The heartbeat artifact and the photo-bleaching effect were removed by subtracting the average blank signal (recorded in the absence of stimulation) from the stimulated trials. Thus, the imaged signal (Δf/f) reflects relative changes in fluorescence compared to the resting level observed in blank trials. For further analysis, VSD maps were computed by averaging over all trials in a single condition.
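The two preprocessing steps described above (baseline normalization to obtain Δf/f, then blank subtraction) can be sketched as below. The original analysis was done in MATLAB; this is a dependency-free Python illustration with invented toy numbers and a simplified frames × pixels layout.

```python
# Minimal sketch of the basic VSD preprocessing: normalize each pixel to its
# baseline fluorescence, then subtract the blank (no-stimulation) signal.
# Data layout (frames x pixels, plain lists) and values are illustrative.

def dff(frames, n_baseline=2):
    """Normalize each pixel trace to its pre-stimulus baseline (delta f / f)."""
    n_pix = len(frames[0])
    base = [sum(f[p] for f in frames[:n_baseline]) / n_baseline
            for p in range(n_pix)]
    return [[f[p] / base[p] - 1.0 for p in range(n_pix)] for f in frames]

def subtract_blank(stim_dff, blank_dff):
    """Remove heartbeat / photo-bleaching components shared with blank trials."""
    return [[s - b for s, b in zip(sf, bf)]
            for sf, bf in zip(stim_dff, blank_dff)]

# Toy example: one pixel with baseline fluorescence 100, a 12% evoked rise,
# and a blank trial carrying a 2% slow drift (e.g. photo-bleaching).
stim  = [[100.0], [100.0], [112.0]]
blank = [[100.0], [100.0], [102.0]]
clean = subtract_blank(dff(stim), dff(blank))
print(round(clean[2][0], 3))  # -> 0.1
```

The cleaned value (0.12 evoked minus 0.02 drift = 0.10 Δf/f) illustrates why the blank subtraction leaves only the stimulus-related fluorescence change.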
Defining regions of interest (ROIs).
To study the spatial and temporal properties of the VSD signal in a given area, ROIs were defined. ICMS evoked neuronal population responses first near the electrode tip, which then spread across the cortical surface. For each location of the microelectrode, a circular ROI with a radius of 11 pixels (0.55 mm; 373 pixels total) was set at the peak spatial location of the VSD response. Thus, the same ROI was used for the different stimulation conditions sharing the same microelectrode position. By averaging the VSD signal over the pixels in the desired ROI we obtained the time course of the response. In most experiments, the peak VSD activation is slightly shifted from the microelectrode penetration site (Fig. 2a) due to the microelectrode's sharp penetration angle (limited by the vicinity of large optical lenses).
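Averaging over a circular ROI to obtain a time course, as described above, can be sketched as follows. The grid size and values are toy numbers; the real ROI is an 11-pixel-radius circle on the 100 × 100 image.

```python
# Sketch: average the VSD signal over a circular ROI centered on the spatial
# peak to obtain a single time course (toy 5x5 grid, radius-1 ROI).

import math

def circle_roi(center, radius, n_rows, n_cols):
    """Pixels within `radius` of `center` (Euclidean distance)."""
    r0, c0 = center
    return [(r, c) for r in range(n_rows) for c in range(n_cols)
            if math.hypot(r - r0, c - c0) <= radius]

def roi_time_course(frames, roi):
    """Mean over ROI pixels, one value per frame."""
    return [sum(frame[r][c] for r, c in roi) / len(roi) for frame in frames]

roi = circle_roi((2, 2), 1, 5, 5)
frame0 = [[0.0] * 5 for _ in range(5)]
frame1 = [[0.0] * 5 for _ in range(5)]
for r, c in roi:
    frame1[r][c] = 1.0
print(len(roi), roi_time_course([frame0, frame1], roi))  # -> 5 [0.0, 1.0]
```

With a radius of 1 the ROI is the 5-pixel "plus" shape around the center, so the second frame (uniformly 1.0 inside the ROI) yields a time-course value of exactly 1.0.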
Time to half peak response.
To study the temporal characteristics of HFS and LFS we calculated the time to half-peak response at the rising and descending phases of the response. To compute an accurate time to a desired threshold, we applied a linear fit to the rising phase of the response and calculated the point at which it crossed the absolute threshold. This enabled us to overcome the temporal limitation imposed by our sampling rate (100 Hz, 10 ms per frame). Since the descending phase showed a nonlinear pattern, mainly for the HFS and LFS conditions, we applied this approach only to the rising phase.
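The sub-frame timing estimate above can be sketched as below. Note one simplification: the paper fits a line to the whole rising phase, whereas this toy interpolates linearly between the two samples straddling the threshold; both recover crossing times between the 10-ms frames.

```python
# Sketch of the time-to-threshold estimate: the rising phase is treated as
# linear so the crossing time can fall between the 10-ms frames.

def time_to_threshold(times_ms, trace, threshold):
    """Linearly interpolate the first upward crossing of `threshold`."""
    for i in range(1, len(trace)):
        if trace[i - 1] < threshold <= trace[i]:
            t0, t1 = times_ms[i - 1], times_ms[i]
            v0, v1 = trace[i - 1], trace[i]
            return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)
    return None  # threshold never crossed

# Samples every 10 ms; the trace crosses 50% of peak between 10 and 20 ms.
times = [0, 10, 20, 30]
trace = [0.0, 0.2, 1.0, 0.8]
print(time_to_threshold(times, trace, 0.5))  # -> 13.75
```

The returned 13.75 ms lies between sample times, illustrating how the linear fit overcomes the 10-ms frame resolution.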
Space-time analysis (elliptical ring ROI analysis). Space-time analysis was applied in order to quantify and compare the spatial spread of the VSD signal. We generated a continuous set of non-overlapping elliptical rings (see schematic illustration in Fig. 4a), centered over the spatial peak of the evoked response and fitted to the activation pattern at 10 ms post stimulation. The major and minor axes of the ellipses were increased from the fitted ellipse in steps of 50 µm (one pixel) per ring, creating a set of 40 consecutive elliptical rings. The central ellipse was defined at the 5th elliptical ROI, and the VSD response of the central ellipse was defined as the mean across rings 1-5, in order to pass a threshold on the minimal number of pixels. The central ellipse was located around the activation peak response.
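The ring construction above can be sketched as below: each pixel is assigned to the first ellipse (growing semi-axes in one-pixel steps) that contains it, which automatically yields non-overlapping rings. The grid size, center, and initial axes are toy values, not the fitted ellipses of the paper.

```python
# Sketch of the elliptical ring ROIs: rings are built by growing the fitted
# ellipse's semi-axes in one-pixel steps; geometry here is a toy case.

def ellipse_index(r, c, center, a0, b0, step=1.0):
    """Index of the elliptical ring containing pixel (r, c); 0 = inner ellipse."""
    r0, c0 = center
    k = 0
    while ((c - c0) / (a0 + k * step)) ** 2 + ((r - r0) / (b0 + k * step)) ** 2 > 1.0:
        k += 1
    return k

def ring_masks(shape, center, a0, b0, n_rings):
    """Pixel lists for the first n_rings non-overlapping elliptical rings."""
    rings = [[] for _ in range(n_rings)]
    for r in range(shape[0]):
        for c in range(shape[1]):
            k = ellipse_index(r, c, center, a0, b0)
            if k < n_rings:
                rings[k].append((r, c))
    return rings

rings = ring_masks((21, 21), (10, 10), 2.0, 1.0, 3)
print([len(ring) for ring in rings])   # -> [7, 12, 16]
print(set(rings[0]) & set(rings[1]))   # -> set() (rings never overlap)
```

Averaging the VSD signal within each ring then gives the response as a function of distance from the peak, as used for the space-time maps in Figs 4 and 8d.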
Derivative maps.
To study the propagation dynamics of the evoked VSD signal across the cortical surface we computed derivative maps, defined by Equation (1), where t is time in ms:

derivative map(t) = map(t) − map(t − 20)    (1)
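Equation (1) amounts to a pixel-wise subtraction of the map two frames earlier (at the 100-Hz frame rate, 20 ms is two frames). A minimal sketch, with toy 2×2 maps:

```python
# Direct sketch of Eq. (1): each derivative map is the current VSD map minus
# the map recorded 20 ms earlier (two frames at the 100-Hz frame rate).

FRAME_MS = 10  # acquisition at 100 Hz

def derivative_map(maps, t_ms, lag_ms=20):
    """Pixel-wise map(t) - map(t - lag), with maps indexed by frame."""
    i, j = t_ms // FRAME_MS, (t_ms - lag_ms) // FRAME_MS
    return [[a - b for a, b in zip(row_t, row_lag)]
            for row_t, row_lag in zip(maps[i], maps[j])]

# Toy 2x2 maps at t = 0, 10, 20 ms; rising activity gives positive derivatives.
maps = [[[0.0, 0.0], [0.0, 0.0]],
        [[0.5, 0.1], [0.1, 0.0]],
        [[1.0, 0.4], [0.2, 0.1]]]
print(derivative_map(maps, 20))  # -> [[1.0, 0.4], [0.2, 0.1]]
```

Positive values mark rising activity (the propagating front) and negative values the fast descending phase, which is how the derivative maps in Figs 5-6 are read.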
Propagation velocity calculation.
To calculate the velocity of response propagation from the ICMS site across the cortical surface, we defined for each session the following ROIs: a center ROI located at the point of peak activation in space (circle, 5-pixel radius) and 7 peripheral ring ROIs (each 3 pixels wide), co-centered with the center ROI, with radii increasing from 1.25 to 2.15 mm. We then defined a threshold of 30% of the peak VSD response in each session, and calculated for each ROI the time at which the VSD signal passed the threshold. Finally, we calculated for each ring the velocity of propagation from the center as v = Δx/Δt, where Δx is the distance of the ring from the center and Δt is the difference in time-to-threshold between the ring and center ROIs. To compute an accurate time-to-threshold, we applied a linear fit to the rising phase of the response and calculated the point at which it crossed the absolute threshold. This enabled us to overcome the temporal limitation imposed by our sampling rate (100 Hz, 10 ms per frame).
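The velocity estimate v = Δx/Δt can be sketched as below. The ring distances match the 1.25-2.15 mm range described above, but the crossing times are invented toy numbers (chosen to land near the order of magnitude reported in the Results, ~0.1 mm/ms).

```python
# Sketch of the ring-based velocity estimate: v = dx / dt, where dx is the
# ring's distance from the center and dt the difference in time-to-threshold.
# Crossing times here are toy numbers, not measured data.

def velocity(ring_dist_mm, t_ring_ms, t_center_ms):
    return ring_dist_mm / (t_ring_ms - t_center_ms)

# Center ROI crosses 30% of peak at 10 ms; rings cross progressively later.
t_center = 10.0
rings = [(1.25, 21.0), (1.55, 24.0), (1.85, 26.5)]  # (distance mm, time ms)
vels = [round(velocity(d, t, t_center), 3) for d, t in rings]
print(vels)  # -> [0.114, 0.111, 0.112]
```

Roughly constant velocities across rings, as in this toy case, are what justify summarizing the propagation by a single median value per session.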
Statistical analysis.
To compare the VSD response across stimulation conditions, we used the nonparametric Wilcoxon rank-sum test and applied Bonferroni correction for multiple comparisons. Data are presented as mean ± SEM or median ± MAD.
"year": 2018,
"sha1": "23e5ed0fa5f2396598fcbdf20382f8cc07367f6d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-32148-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b99c1cff1205d4034175aeeb557f2742ee00d72f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
International Journal of Engineering
The Universal Filtered Multicarrier (UFMC) waveform is one of the promising waveforms for 5G and beyond-5G networks. Owing to the 2N-point Fast Fourier Transform (FFT) processor at the UFMC receiver, its computational and implementation complexity is twice that of the conventional Orthogonal Frequency Division Multiplexing (OFDM) receiver. In this paper, we propose a simplified UFMC receiver structure to reduce the computational complexity as well as the hardware requirements. The received UFMC symbol is simplified exactly to its equivalent after performing the 2N-point FFT and decimation operations: the mathematical model of the frequency-domain UFMC signal is rederived after processing through the 2N-point FFT and decimator, and the simplified signal is generated with an N-point FFT. Accordingly, the 2N-point FFT processor and decimator are replaced with a single N-point FFT processor. This approach reduces the computational complexity at the FFT processor level by 50%, and hence the hardware requirements and processing time. The computational complexity of the proposed receiver model is approximately equivalent to that of the OFDM receiver. Additionally, we analyze the mathematical model of the simplified UFMC receiver and compare the performance of the UFMC system with the conventional model.
NOMENCLATURE
The impulse response of the p-th sub-band filter
1. INTRODUCTION
The ongoing fifth-generation (5G) systems continue to reveal inherent limitations due to the rise of intelligent communication environments and some 5G application use cases like massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC) [1]. These limitations motivate defining the technical requirements and targets of the next-generation (6G) cellular networks, which can move beyond personalized communication toward the full realization of the Internet of Things (IoT) standard that connects everything: not just people, but also sensors, vehicles, wearables, computing resources, and even robotic agents [2][3][4]. Therefore, 5G and beyond-5G network use cases complicate the required specifications in many aspects such as data rate, delay or latency, reliability, energy consumption, multicast connectivity, and the type of protocols that provide diverse ways to exchange information between devices [5]. These systems are impacted by the modulation format used at the physical layer [6][7][8].
*Corresponding Author Institutional Email: rmanda@ddn.upes.ac.in (R. Manda)
In the last few years, several physical-layer waveforms have been proposed, such as filter bank multicarrier (FBMC) [9], generalized frequency division multiplexing (GFDM) [10], filtered orthogonal frequency division multiplexing (F-OFDM) [11], and universal filtered multicarrier (UFMC) [12]. Among these, the UFMC waveform technique is one of the competitive modulation schemes for future-generation wireless systems, providing flexible packet transmission services, low interference due to low out-of-band emission (OBE), and relaxed synchronization [13,14]. However, the transmitter and receiver complexity of UFMC is higher than that of the conventional CP-OFDM due to the filtering operation at the transmitter and the 2N-point FFT processor at the receiver. To reduce the system complexity and improve energy efficiency, the baseband signal processing time and hardware requirements of the new modulation waveform can be reduced by simplifying the transceiver architecture in terms of computations. In recent years, some methods have been proposed to reduce the computational complexity of the UFMC transmitter [15][16][17][18]. The transmitter complexity was reduced by approximating the frequency-domain UFMC signal [15], and by introducing an FIR filter structure and a poly-phase filter structure based on a lightweight method into the UFMC transmitter [17]. A reduced hardware complexity solution [16] was proposed for the implementation of the IFFT and filtering operations by avoiding redundant computations. Recently, a reconfigurable baseband UFMC transmitter architecture was proposed by Kumar et al. [18], which has the flexibility to choose the number of subcarriers per sub-band and the pulse-shaping filter. Wu et al. [19] proposed an advanced receiver for UFMC, which uses the odd-numbered samples of the 2N-point FFT along with the even-numbered samples to improve performance at the cost of high computation; here the computational burden of the 2N-point FFT is not reduced.
The FFT pruning approach [20], in which the operations related to zero inputs are removed, reduces the computational complexity of the UFMC transmitter and receiver. In this paper, the method mainly focuses on the system complexity, baseband signal processing latency, and power consumption at the receiver. The proposed approach simplifies the UFMC receiver model, which uses a single N-point FFT to generate the frequency-domain baseband signal for data detection, as in conventional OFDM. This approach avoids the zero-padding operation and decimation part in the UFMC receiver baseband signal processing. Therefore, it reduces the hardware requirements and computational complexity, and hence the latency and power consumption. The FFT pruning approach at the UFMC receiver gives fewer computations than the conventional model but requires high-level reprogramming, since the nonzero inputs vary over time. However, the proposed model requires a smaller number of computations compared with both the conventional model and the FFT pruning algorithm [20], as clearly explained in the result analysis section. One minor drawback of the proposed model is that the hardware implementation needs Lf + Lh − 2 adders before the N-point FFT, which may increase the connectivity complexity and word-length effects when the signal is processed through the FFT processor. Otherwise, the proposed model is superior to the conventional model.
The rest of the paper is organized as follows: Section 2 describes the conventional UFMC transceiver model; Section 3 explains how the proposed UFMC receiver model is derived from the traditional UFMC receiver model; Section 4 discusses the computational complexity; Section 5 presents the performance analysis in terms of complexity and SNR versus BER; and Section 6 concludes the paper.
1. The UFMC Transmitter Model
The UFMC waveform technique is a generalized form of OFDM and FBMC, in which each group of subcarriers is individually filtered with a bandpass filter [12]. The schematic block diagram of the traditional UFMC transceiver model is depicted in Figure 1. At the UFMC transmitter, the entire frequency band (all subcarriers) is divided into several blocks (groups of subcarriers) called sub-bands, and each sub-band is filtered individually. The final $i$th UFMC time-domain symbol is the sum of the outputs of all sub-band filters, which can be expressed as
$$x_i[n] = \sum_{p=1}^{B} x_{i,p}[n],$$
where $x_{i,p}[n]$ is the $p$th sub-band filter output of the $i$th UFMC symbol and $f_p[n]$, $n = 0, 1, \dots, L-1$, is the $p$th sub-band filter impulse response, which is the frequency-shifted version of the prototype filter $f[n]$ to the center of the $p$th sub-band.
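The per-sub-band filtering described above can be sketched in a few lines of NumPy. The parameters below (FFT size, sub-band layout, filter length) are illustrative only, and a Hann window stands in for the Dolph–Chebyshev prototype filter commonly used in the UFMC literature:

```python
import numpy as np

# Illustrative parameters (not from the paper): FFT size N, B sub-bands of
# Q subcarriers each, filter length L.
N, B, Q, L = 64, 4, 4, 9
proto = np.hanning(L)  # stand-in prototype filter

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=(B, Q)).astype(complex)  # BPSK data

tx = np.zeros(N + L - 1, dtype=complex)
for p in range(B):
    # Map the p-th sub-band's symbols onto its subcarriers and take an N-point IFFT.
    freq = np.zeros(N, dtype=complex)
    start = p * Q
    freq[start:start + Q] = symbols[p]
    time = np.fft.ifft(freq, N)
    # Shift the prototype filter to the sub-band centre, filter, and sum.
    k0 = start + (Q - 1) / 2
    f_p = proto * np.exp(2j * np.pi * k0 * np.arange(L) / N)
    tx += np.convolve(time, f_p)  # each branch has length N + L - 1

assert tx.shape == (N + L - 1,)
```

Each branch produces a length N + L − 1 signal, so the summed UFMC symbol is N + L − 1 samples long, matching the symbol length used in the complexity discussion.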
The UFMC Receiver Model
At the receiver, zeros are padded to the received symbol so that a 2N-point FFT can be performed, which accounts for the sub-band filter tails. The received signal after zero padding, and the frequency domain of the received UFMC signal after processing through the 2N-point FFT, are formulated mathematically. The receiver only needs to extract the even-numbered samples after the 2N-point FFT transformation to estimate the data symbols, because the odd-numbered samples contain an interference component (a proof is given in the appendix). In the conventional UFMC receiver, this is implemented with a 2N-point FFT and a decimator of factor 2, as shown in Figure 1; this FFT is twice the size of the one used in the conventional OFDM receiver and increases the UFMC receiver complexity relative to the OFDM receiver.
3. Data Detection
The final UFMC transmitted symbol, of length $N + L - 1$, can be expressed in matrix form. The even-numbered samples are used for data detection; using the least-squares algorithm, the estimated data sequence is obtained.
THE PROPOSED UFMC RECEIVER MODEL
Starting from the frequency-domain expression in Equation (8), the even-numbered samples of the 2N-point FFT of the zero-padded received symbol $y[n]$ can be written as
$$Y[2k] = \sum_{n=0}^{2N-1} y[n]\, e^{-j 2\pi (2k) n / 2N} = \sum_{n=0}^{N-1} y[n]\, e^{-j 2\pi k n / N} + \sum_{n=N}^{2N-1} y[n]\, e^{-j 2\pi k n / N}.$$
Let us assume that $m = n - N$ in the second sum; since $e^{-j 2\pi k (m+N)/N} = e^{-j 2\pi k m / N}$, we obtain
$$Y[2k] = \sum_{n=0}^{N-1} y'[n]\, e^{-j 2\pi k n / N}, \qquad y'[n] = y[n] + y[n+N].$$
Now, according to this expression (Equation (20)), the even-numbered subcarriers can be generated with a single N-point FFT having $y'[n]$ as input, which is shown in Figure 2. Here, both the 2N-point DFT and the decimator operations can be implemented with a single N-point DFT, and the computational complexity can be reduced by about a factor of two compared to the conventional UFMC receiver.
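The key identity behind the proposed receiver — that the even-numbered bins of a 2N-point FFT of the zero-padded symbol equal a single N-point FFT of the folded signal y[n] + y[n+N] — can be verified numerically. A minimal sketch with illustrative sizes:

```python
import numpy as np

# A stand-in received UFMC symbol of length N + L - 1 (random here,
# representing the filtered signal); N is the FFT size, L the filter length.
N, L = 64, 9
rng = np.random.default_rng(0)
y = rng.standard_normal(N + L - 1) + 1j * rng.standard_normal(N + L - 1)

# Conventional receiver: zero-pad to 2N, take a 2N-point FFT, keep even bins.
y_padded = np.concatenate([y, np.zeros(2 * N - len(y))])
even_bins = np.fft.fft(y_padded, 2 * N)[::2]

# Proposed receiver: fold the tail back onto the head, then one N-point FFT.
y_folded = y_padded[:N] + y_padded[N:2 * N]
proposed = np.fft.fft(y_folded, N)

assert np.allclose(even_bins, proposed)
```

The N complex additions of the folding stage replace both the zero-padding and the factor-2 decimation of the conventional receiver.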
COMPLEXITY ANALYSIS OF THE PROPOSED MODEL
The major computational complexity of the UFMC receiver lies in the 2N-point FFT processor and the channel estimation and equalization algorithms. The UFMC baseband signal model at the receiver is similar to the OFDM signal except for the filter equalization, so the same channel estimation and equalization algorithms can be applied in the UFMC system. The UFMC receiver complexity is therefore about two times higher than that of the OFDM receiver (Figure 2 shows the block diagram of the proposed UFMC receiver model). Furthermore, additional memory or control overhead is one of the main disadvantages of the implementation. The N-point DFT is efficiently computed by the FFT algorithm [21]; the total number of arithmetic operations (real multiplications and additions) required by the radix-2 FFT algorithm is $5N\log_2 N$. The split-radix FFT algorithm [22] has a lower operation count, requiring $4N\log_2 N - 6N + 8$ arithmetic operations. With the FFT pruning algorithm, the number of real operations (additions and multiplications) required to process the UFMC baseband received signal [20] is $5N\log_2 N - 2N + 4(L-2)$. Table 1 compares the computational complexity of the conventional model, the FFT pruning approach, and the proposed receiver model; it shows that the proposed UFMC receiver model has a smaller number of arithmetic operations and is simpler and more effective than the conventional CP-OFDM and UFMC receivers. The complexity efficiency of the system can be measured by the complexity reduction ratio (CRR), defined as the ratio of the total number of computations required by the conventional model to the total number of computations required by the proposed model. From Equation (21), the proposed model reduces the computational complexity by more than a factor of two compared to the conventional UFMC receiver model.
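A rough operation count illustrates the claimed reduction. This sketch assumes the radix-2 cost of 5N log2 N real operations quoted above and charges the proposed receiver one N-point FFT plus N complex (2N real) additions for the folding stage; the exact accounting in Table 1 may differ:

```python
import math

def radix2_ops(n):
    # Real operations (multiplications + additions) of a radix-2 FFT.
    return 5 * n * math.log2(n)

N = 1024
conventional = radix2_ops(2 * N)   # one 2N-point FFT
proposed = radix2_ops(N) + 2 * N   # one N-point FFT + folding additions
crr = conventional / proposed      # complexity reduction ratio
print(round(crr, 3))               # → 2.115
```

Under this accounting the CRR exceeds 2 for any practical N, consistent with the "more than two times" reduction claimed above.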
Also, in the proposed model there is no zero-padding operation to process through a 2N-point FFT. Therefore, the number of read/write memory locations is reduced by approximately a factor of two, and the required storage space for read/write operations is the same as in the conventional OFDM receiver model. In the proposed model, adder blocks are used before the N-point FFT, which may increase the connectivity complexity compared to the traditional model. The power carried by the odd-numbered frequency samples of the 2N-point FFT is not utilized in the conventional UFMC receiver, so the power efficiency is 50%. In the proposed model, however, there is no 2N-point FFT and all samples processed by the FFT are used; therefore, the power efficiency can be improved to 100%. Finally, the proposed UFMC receiver model is more suitable for ultra-low-latency and low-energy-consumption IoT use cases as well as for next-generation cellular networks.
SIMULATION RESULTS
The computational complexity of the receiver depends only on the size of the FFT. In this section, some computer simulation results are presented. The numerical analysis of computational complexity and its comparison is shown in Figure 3 for the different bandwidth (BW) configurations listed in Table 2, under the NR-TDL vehicular-A channel model with a length of 24. These comparisons show that the proposed receiver model requires fewer arithmetic operations (about two times fewer) at the baseband FFT signal processing level than the conventional model, and almost achieves the computational complexity of the CP-OFDM receiver. Furthermore, the complexity ratio (CR) of the UFMC receiver to the OFDM receiver for the different methods is shown in Figure 4. Considering a bandwidth of 20 MHz and N = 1024 for the numerical comparison, the CR values are 2.1904, 1.3881, and 1.0036 for the conventional model, the conventional model with the FFT pruning algorithm, and the proposed UFMC receiver model, respectively. From this numerical analysis, the proposed model has a lower complexity ratio and is more efficient in terms of computational complexity. The FFTs-per-energy metric [24] can also be improved; in its definition (Equation (22)), Technology is the CMOS process in micrometers, and the power consumption is proportional to the supply voltage, clock frequency, and load capacitance. The execution time depends on the number of operations required to complete a particular task. From Equation (22), the FFTs per energy is inversely proportional to the execution time, which means that for the proposed model the FFT processor requires less execution time than the conventional one. Finally, the proposed UFMC receiver model is more suitable for ultra-low-latency and low-energy-consumption use cases of next-generation cellular networks. The proposed model is a simplified version of the conventional model.
Therefore, it gives almost the same performance in terms of SNR versus BER at lower computational cost, which is shown in Figure 5 for the simulation parameters mentioned in Table 3.
In practical cases, because of the additional adders on the receiver side, the proposed model may increase the connectivity complexity and incur small losses, and hence a small performance degradation (shown in Figure 5).
CONCLUSION
UFMC was one of the candidate waveforms for 5G, but the computational and hardware complexity of the UFMC transceiver is higher than that of the CP-OFDM system because of the filtering operation at the transmitter and the 2N-point FFT processing at the receiver. From this point of view, we proposed a simplified UFMC receiver model, in which the exact frequency-domain UFMC received symbol after the FFT processor and decimator is derived and simplified so that it can be implemented with a single N-point FFT, reducing the computational complexity by more than a factor of two (i.e., about 50%) compared to the traditional receiver model without degrading system performance. At the receiver, the zero-padding for the 2N-point FFT and the decimation stage are simply replaced by one N-point FFT, which reduces the number of hardware components in the baseband signal processing, the storage requirement for read/write operations, and the number of computations. This model reduces the hardware requirement and hence the power consumption. The real-time hardware implementation of this model is the future scope of this work.
Difficult Journey to Find the Best Treatment for Homozygous Familial Hypercholesterolemia: Case Report
Abstract Homozygous familial hypercholesterolemia (HoFH) is a rare autosomal recessive genetic disorder that is difficult to diagnose and treat at an early stage. We present a nine-year-old boy with HoFH from China. Initially, he was misdiagnosed with xanthomatosis in the dermatology department of the local hospital, and the disease did not improve after three laser ablation operations. Blood lipid monitoring and ultrasound of the heart and carotid artery were subsequently performed in our hospital, and the boy was finally diagnosed with HoFH by genetic testing. A biallelic mutation was observed in the fourth exon of the low density lipoprotein receptor (LDLR) gene: c.418G>A (p.E140K). Our patient achieved a relatively satisfactory therapeutic result after a series of lipid-lowering therapies, including atorvastatin monotherapy, lipoprotein apheresis, and double-filtration plasma pheresis. We found that LDL-C levels obtained a 57% reduction from baseline after atorvastatin combined with double-filtration plasma pheresis (DFPP). Regression of carotid intima-media thickness (cIMT), valve regurgitation, and xanthoma was observed after a series of intensive lipid-lowering therapy.
Introduction
Homozygous familial hypercholesterolemia (HoFH) is an autosomal recessive genetic disorder characterized by a significant increase in circulating low density lipoprotein cholesterol (LDL-C) and deposition of cholesterol in the skin or tendons. 1,2 If left untreated, patients with HoFH may develop life-threatening cardiovascular disease (CVD) in early childhood. 3 Early identification of HoFH is quite important for initiating lipid-lowering therapy and predicting the risk of cardiovascular events. 4 DNA sequencing can provide a more reliable diagnostic basis for patients under clinical suspicion. 5 This report describes a 9-year-old Chinese boy with multiple tendon xanthomas and extremely high levels of LDL-C. His HoFH was found to be caused by a mutation in LDLR, as demonstrated by gene sequencing.
Case Presentation
A 9-year-old boy presented with skin protrusions at the ankles, knees, elbows, and buttocks that had been present since birth (Figure 1A-D). Laser ablation was performed three times in the local hospital to remove the diseased tissue, but new lesions soon appeared at the surgical sites. Subsequently, he was transferred to the Children's Hospital of Xi'an Jiaotong University for further treatment. According to his family history, his parents had a fourth-generation consanguineous marriage, and several members of the family had abnormal blood lipid indicators (Table 1). A biallelic mutation was observed in the fourth exon of the low density lipoprotein receptor (LDLR) gene, c.418G>A (p.E140K), by genetic testing (Figure 2), and Sanger sequencing confirmed that his mutant alleles were inherited from his parents (Figure 3). Ultrasound of the right carotid artery showed uneven thickening of the anterior and posterior intima, with a carotid intima-media thickness (cIMT) value of 2.5 mm (Figure 4A). Echocardiography showed no abnormalities in the structure and cavity of the heart, but there was a small amount of valve regurgitation (Figure 5A). Eventually, he was diagnosed with HoFH.
The patient started taking atorvastatin at a dose of 10 mg daily after the diagnosis, and the dose was gradually increased to 40 mg daily over the next 8 months. The results showed that the LDL-C level was only 38.4% lower than baseline. Therefore, we tried blood purification combined with atorvastatin. Informed consent was obtained from the patient and his parents before starting blood purification; they were informed about the benefits and the possible risks and side effects. All procedures performed in the study were in accordance with the ethical standards of the institution. It was observed that lipoprotein apheresis (LA) biweekly combined with atorvastatin lowered LDL-C by 40%, and LDL-C levels obtained a 57% reduction from baseline after atorvastatin combined with DFPP (Figure 6). We found that the xanthomas softened in texture and lessened in size at the ankles, knees, elbows, and buttocks (Figure 1E-H). At the same time, the intima thickening of the right carotid artery was relieved, with a cIMT value of 2.0 mm (Figure 4B), and the regurgitation of the heart valve was reduced (Figure 5B). Figure 2 Pedigree of the patient. The patient with the biallelic mutation had homozygous familial hypercholesterolemia.
Figure 3
Sanger sequencing detected the LDLR mutation in the patient, III.1 (A), his father, FH II.2 (B), and his mother, FH II.3 (C). The chromatograms above show the partial sequence of LDLR exon 4, where a c.418G>A (p.E140K) biallelic mutation was observed in the patient. The partial sequence of the c.418G>A mutation was detected in his father and mother. NCBI, National Center for Biotechnology Information.
Figure 4
Ultrasound of the right carotid artery of the patient before and after therapy. The result showed uneven thickening of the anterior and posterior intima; the cIMT value before therapy was 2.5 mm (A). The intimal thickening was significantly relieved after therapy, with a cIMT value of 2.0 mm (B).
DFPP System
The DFPP system is a semi-selective blood purification modality for removing macromolecular pathogenic substances. 6 Two types of filters with different pore sizes are used in the process: a plasma separator and a plasma component separator. The patient's blood is drawn from the body by a blood pump. Blood is separated into plasma and blood cells using the plasma separator. The separated plasma is fractionated into large and small molecular weight components by the plasma component separator. Large molecular weight components, including pathogenic substances, are discarded. Small molecular weight components, including valuable substances such as albumin, are returned to the patient. DFPP was performed using an ACH-10 (Asahi Kasei Medical Co, Tokyo, Japan) generator. The plasma separator was a Plasmaflo OP-05W (Asahi Kasei Medical Co, Tokyo, Japan), which has an inside fiber diameter of 330 μm and a maximum pore size of 0.3 μm. A Cascadeflo EC-50W (Asahi Kasei Medical Co, Tokyo, Japan) was used as the plasma component separator. Plasma was separated from blood cells using a hollow fiber filter with a surface area of 0.5-0.8 m², then perfused through a second filter with a surface area of 2 m², which selectively retains useful plasma components such as albumins, immunoglobulins, and high density lipoprotein cholesterol (HDL-C). 7 The peripheral right femoral venous route was used in the patient. Anticoagulation was achieved with 100 U/kg heparin. The parameter settings of DFPP were as follows: the blood flow velocity was initially 70-80 mL/min and was gradually increased to 100-140 mL/min; the separation speed of the plasma separator was 20 mL/min; the separation speed of the plasma fraction separator was 2.5 mL/min; the speed of plasma reflux was 17.5 mL/min; and each treatment lasted 3 hours. Total cholesterol (TC), triglyceride (TG), HDL-C, LDL-C, and biochemical tests were measured before and after each DFPP procedure (Additional Table 1).

Figure 5 Echocardiography of the patient before and after therapy. No abnormalities in the structure and cavity of the heart, but there was a small amount of valve regurgitation before therapy (A). The valve regurgitation was reduced after therapy (B).

Figure 6 Dose of atorvastatin during follow-up and LDL-C levels before and after blood purification. The dose of atorvastatin was 10 mg daily initially and was gradually increased to 40 mg daily over the next 8 months, after which the dose was maintained. LA biweekly combined with atorvastatin lowered LDL-C by 40%; LDL-C levels obtained a 57% reduction from baseline after atorvastatin combined with DFPP. The black line represents the LDL-C level before blood purification; the red line represents the LDL-C level after blood purification.

https://doi.org/10.2147/IMCRJ.S345320 DovePress International Medical Case Reports Journal 2022:15
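As a quick consistency check on the parameter settings above: the plasma-separation, fraction-separation, and reflux rates should balance, and over a 3-hour session they imply roughly 3.6 L of plasma processed. A small sketch (numbers taken from the text; this is only back-of-envelope arithmetic, not part of the clinical protocol):

```python
# Rates stated in the DFPP parameter settings above (mL/min).
minutes = 3 * 60              # each treatment lasted 3 hours
plasma_sep_rate = 20          # plasma separator
fraction_sep_rate = 2.5       # plasma fraction separator (fraction discarded)
plasma_reflux_rate = 17.5     # plasma returned to the patient

processed = plasma_sep_rate * minutes     # total plasma processed: 3600 mL
discarded = fraction_sep_rate * minutes   # fraction removed: 450 mL
returned = plasma_reflux_rate * minutes   # fraction returned: 3150 mL

# The stated rates are internally consistent: 20 = 2.5 + 17.5 mL/min.
assert processed == discarded + returned
```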
The DFPP model was more effective than the LA model at removing LDL-C, with less loss of albumin and HDL-C. LA, lipoprotein apheresis; DFPP, double-filtration plasma pheresis; TC, total cholesterol; LDL-C, low density lipoprotein cholesterol; HDL-C, high density lipoprotein cholesterol; TG, triglycerides.
Discussion
Although statin monotherapy is usually not sufficient for HoFH patients to lower LDL-C levels, it still decreases LDL-C levels by an average of 26%, and it reduces overall mortality and the risk of CVD events. 8 Recently, the International Cholesterol Management Practice Study found that less than half of patients with definite or probable FH were receiving the maximum available statin dose, and the target achievement rate was quite low in those patients even when receiving maximum statin therapy (28.0%). 9 Our patient achieved a 38.4% LDL-C reduction with atorvastatin over half a year, but his LDL-C level remained higher than the optimal level of 135 mg/dL (3.5 mmol/L). This indicates that additional lipid-lowering strategies were necessary.
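The mg/dL and mmol/L figures quoted above can be cross-checked with the standard cholesterol conversion factor (≈38.67 mg/dL per mmol/L):

```python
# Cholesterol unit conversion: 1 mmol/L ≈ 38.67 mg/dL.
MG_PER_DL_PER_MMOL = 38.67

def mmol_to_mgdl(mmol):
    return mmol * MG_PER_DL_PER_MMOL

# The 3.5 mmol/L threshold quoted above corresponds to ~135 mg/dL.
print(round(mmol_to_mgdl(3.5)))  # → 135
```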
LA is a reliable strategy for lowering LDL-C levels in patients with a high baseline LDL-C. 10 Almost all treatment guidelines for HoFH patients emphasize the importance of early LA initiation, ideally starting at the age of 2. 4,11 The first trial of LA was performed by de Gennes et al, which led to improvement of coronary artery stenosis, xanthoma regression, and LDL-C reduction. 12 However, LA has an obvious disadvantage: a considerable amount of important substances, such as albumin, immunoglobulins, and HDL-C, is removed along with LDL-C, 5 which limits its use in patients under 10 years of age with poor tolerance of extracorporeal circulation. [13][14][15] In the initial stage, we used the LA model to remove excess LDL-C, but serum albumin needed to be supplemented after each blood purification. Meanwhile, other plasma components, such as immunoglobulin and HDL-C, could not be replenished in time, posing a threat to the child's growth and health. Therefore, we considered initiating the DFPP model, which has the advantage of recycling plasma components through secondary separation. 6,16,17 The patient's LDL-C levels obtained a 57% reduction from baseline after statins combined with DFPP, and there was no need to replenish albumin. This suggests that the DFPP model was more effective than the LA model at removing LDL-C, with less loss of important plasma components. The detection of LDL-C levels showed that LA can quickly reduce LDL-C, but the levels increased significantly within 1-3 days, increased slowly after 1 week, and usually reached a peak at 2 weeks. Therefore, DFPP was performed biweekly to maintain a relatively stable level of LDL-C. 18 As an important indicator of cardiovascular event risk, carotid intima-media thickness (cIMT) is commonly used to assess the severity of HoFH. 11 In addition, pathological changes of heart structure or function are common in the early stages of HoFH. 19 Our patient's cIMT thickening and valvular regurgitation occurred at an early stage of HoFH, but regression of cIMT, valve regurgitation, and xanthoma was observed after a series of intensive lipid-lowering therapy.
Our patient achieved a relatively satisfactory therapeutic result after lipid-lowering therapy of more than one year. In addition, proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors provide an opportunity for the treatment of HoFH, both as monotherapy and as an adjunct to statins. However, PCSK9 inhibitors have not been approved for pediatric use. 20
Conclusion
In patients with multiple tendon xanthomas and significantly elevated LDL-C levels, the possibility of HoFH should be considered, and genetic testing can provide a reliable basis for early diagnosis. Statin monotherapy, LA, and DFPP all contributed to a marked reduction of LDL-C levels; the combination of statins and DFPP was especially effective in removing LDL-C. In addition, early intensive lipid-lowering therapy was beneficial for significant regression of xanthoma and could reduce the risk of cardiovascular events.
Consent Statement
The patient and his parents provided informed consent to publish their case details and any accompanying images. No need for institutional approval.
Seroepidemiology of Toxocara Canis infection among primary schoolchildren in the capital area of the Republic of the Marshall Islands
Background Toxocariasis, which is predominantly caused by Toxocara canis (T. canis) infection, is a common zoonotic parasitosis worldwide; however, the status of toxocariasis endemicity in the Republic of the Marshall Islands (RMI) remains unknown. Methods A seroepidemiological investigation was conducted among 166 primary school children (PSC) aged 7–12 years from the capital area of the RMI. Western blots based on the excretory-secretory antigens of larval T. canis (TcES) were employed, and children were considered seropositive if their serum reacted with TcES when diluted at a titer of 1:64. Information regarding demographic characteristics of and environmental risk factors affecting these children was collected using a structured questionnaire. A logistic regression model was applied to conduct a multivariate analysis. Results The overall seropositive rate of T. canis infection was 86.75% (144/166). In the univariate analysis, PSC who exhibited a history of feeding dogs at home (OR = 5.52, 95% CI = 1.15–26.61, p = 0.02) and whose parents were employed as nonskilled workers (OR = 2.86, 95% CI = 1.08–7.60, p = 0.03) demonstrated a statistically elevated risk of contracting T. canis infections. Cleaning dog huts with gloves might prevent infection, but yielded nonsignificant effects. The multivariate analysis indicated that parental occupation was the critical risk factor in this study because its effect remained significant after adjusting for other variables; by contrast, the effect of dog feeding became nonsignificant because of other potential confounding factors. No associations were observed among gender, age, consuming raw meat or vegetables, drinking unboiled water, cleaning dog huts with gloves, or touching soil. Conclusions This is the first serological investigation of T. canis infection among PSC in the RMI. The high seroprevalence indicates the commonness of T. canis transmission and possible human risk.
The fundamental information that the present study provides regarding T. canis epidemiology can facilitate developing strategies for disease prevention and control.
Background
The ascarids that cause human toxocariasis are Toxocara canis (T. canis) and, likely to a lesser extent, Toxocara cati (T. cati). The definitive hosts of T. canis and T. cati are dogs and cats, respectively; these ascarids inhabit the lumen of the small intestine [1]. Worldwide surveys of T. canis occurrence have indicated a prevalence ranging from 86% to 100% in pups and 1% to 45% in adult dogs [2,3]. Humans are one of several accidental hosts, and are primarily infected by ingesting parasite eggs or, to a lesser extent, by consuming chicken or cow livers [4].
Although human infections with Toxocara spp. are typically asymptomatic, larval migration into the internal organs via the blood can cause various clinical syndromes including visceral larva migrans and ocular larva migrans. The manifestation of symptoms in human toxocariasis depends on multiple factors, including which organs are affected and the magnitude of the infection [3,5].
Young children up to the age of 12 years appear to be the primary population susceptible to T. canis infection because of dirt pica, poor hygiene, or frequent contact with dogs [3,6]. Multiple reports have indicated that child toxocariasis is associated with endomyocarditis, generalized lymphadenopathy, endophthalmitis, asthma, hepatosplenomegaly, and meningoencephalitis [7][8][9][10][11]. Considerable interest has been directed toward the role of T. canis infection in epilepsy, and particularly in partial epilepsy [12][13][14].
In humans, parasites cannot mature to the adult stage; thus, examining stool for parasites and eggs is not useful. Making a direct parasitological diagnosis by using biopsy is extremely difficult; thus, serological methods are the diagnostic mainstay. Serological diagnoses of toxocariasis primarily rely on a T. canis larval excretorysecretory (TcES) antigen-based enzyme-linked immunosorbent assay (ELISA) of T. canis [3,5]. The seroprevalence of T. canis infection among children in various countries has been reported to range from 4% to 86% according to TcES-ELISA [15][16][17]. No reports on the seroprevalence of T. canis infection in children in Micronesian areas are available, and its status remains unknown among children who live in the Republic of the Marshall Islands (RMI).
The sensitivity and specificity of TcES-ELISA, when 1:32 was used as the threshold titer for positivity, have recently been estimated at 78% and 92%, respectively [18,19]; however, antigenic cross-reactivity (e.g., with Ascaris lumbricoides) reduces the usefulness of such assays, particularly in areas where polyparasitism is common [19]. Western blotting based on the fractionated, native, and excretory-secretory antigens of T. canis larvae (TcES-WB) can yield superior specificity levels, exhibiting reactivity to bands of low molecular weights (24-32 kDa) that were proven to be specific to T. canis infection [19]. In the present study, TcES-WB was used to detect T. canis-specific Immunoglobulin G (IgG) and estimate the seroprevalence of T. canis infection among primary schoolchildren (PSC) living in the capital area of Majuro of the RMI.
Geography of the Republic of the Marshall Islands and Majuro Atoll
The RMI is an island nation situated in the central Pacific Ocean between 4°and 14°North latitude and 160°and 173°East longitude. The country comprises approximately 1,225 islands and islets and lies in two parallel chains of 29 low-lying atolls: the Eastern Ratak (Sunrise) and Western Ralik (Sunset) chains of atolls and islands. The RMI is divided into 24 municipalities and Majuro, Ebeye, Wotje, and Jaluit are its major district centers. The Majuro Atoll, a large coral atoll of 64 islands, is a legislative district of the Ratak chain of the Marshall Islands. The Majuro Atoll has a land area of 3.7 mi 2 and encloses a lagoon of 114 mi 2 . Similar to other atolls in the Marshall Islands, Majuro consists of extremely narrow land masses, on which a person can walk from the lagoon side to the ocean side within minutes. The primary population center, also named Majuro, is the capital of and largest city in the RMI. The RMI has a total population of 52,560. Its characteristic climate is tropical, and a long wet season occurs between June and November. The economy of the RMI primarily relies on agriculture, fishery, and support from the United States. The major ethnic group is Micronesian [20].
Study population and participant selection
This study was conducted among PSC in Majuro, the capital city of the RMI. Public health nurses collected blood specimen of PSC, after informed consent was obtained from PSC or parents/guardians, from schools located in urban and suburban areas. These well-trained public health nurses interviewed the enrolled schoolchildren by using the structured questionnaire we designed in a previous study [21]. Basic demographic data regarding age, gender, parental occupation, height, weight, self-reported health status, and urbanization levels were collected during the interview. To maintain a sufficient sample size in the analysis, the children were divided into three age groups: Group 1: < 7 years; Group 2: 7-9 years; and Group 3: > 9 years. Parental occupation was classified into two categories and six levels: skilled workers and nonskilled workers. More than 60% of parents were employed at the semiskilled level (e.g., truck drivers, factory workers, or salespeople), which was classified as nonskilled work in the analysis. Among the 166 PSC enrolled in the study, the sex ratio was 1.18 (male to female) and the mean age was 9.5 ± 2.3 years.
Multiple environmental risk factors were investigated in this study. Data regarding contact with dogs, dog feeding behavior, cleaning dog huts while wearing gloves, consuming raw or undercooked meat, and drinking untreated or unboiled water were included in the multivariate analysis.
Ethical approval
The research protocols were approved by the Institutional Review Board of Shuang Ho Hospital, Taipei Medical University (TMU-JIRB-No. 201306003) and by the Ministry of Health, RMI.
Toxocara egg culture
Adult T. canis were collected from the stools of stray dogs that had been treated with mebendazole (Janssen-Cilag, High Wycombe, UK). Eggs harvested from the anterior third of the uteri of female worms were cultured according to the method of Fan et al. [21,22]. Briefly, the eggs were stirred in 1% (w/v) sodium hypochlorite solution, left for 5 min at room temperature, and then centrifuged (5 min at 2000 × g). After the eggs were washed twice with distilled water and once with 2% formalin, they were resuspended in 2% formalin and placed in a 250-mL Erlenmeyer flask, to which additional 2% formalin was added until a 1-cm-deep layer of liquid was attained. The flask was sealed with Parafilm (SPI Supplies, West Chester, PA, USA) and left at room temperature for 8-9 weeks with gentle weekly agitation. The suspended eggs (by then containing second-stage larvae) were stored at 4°C until use.
Preparation of T. canis larval excretory-secretory antigens
An identical protocol has been used in our previous studies [21,22]. When larvae were required, the embryonated eggs were hatched under aseptic conditions; the eggs were washed with sterile phosphate-buffered saline (PBS), resuspended in sterile 1% (w/v) sodium hypochlorite, and incubated in an atmosphere containing 5% CO2 for 30 min at 37°C. After several washes in sterile PBS containing three antibiotics (100 IU of penicillin, 250 mg of streptomycin, and 25 mg/mL of nystatin; Biochrom KG, Berlin, Germany), the pelleted larvae were resuspended in RPMI-1640 medium (JRH Biosciences, Lenexa, KS, USA) containing the same concentrations of the three antibiotics. Motile larvae were collected using a modified Baermann apparatus, which was left in an atmosphere containing 5% CO2 for 12 h at 37°C. The larvae were transferred to 50-mL tissue culture flasks (BD Biosciences, Franklin Lakes, NJ, USA) containing fresh RPMI-1640 medium and antibiotics, yielding 10^4 larvae/mL, and then incubated in an atmosphere containing 5% CO2 at 37°C. The supernatant medium of each culture, which contained TcES antigens, was collected (and replaced with fresh medium) weekly for 3-4 weeks. These samples were pooled and then centrifuged to precipitate the debris. The resulting supernatant was sterilized by filtration through a 0.2-μm-pore membrane and then dialyzed (at a molecular weight cut-off of 6000-8000 Da) against PBS at 4°C, either for 12 h or until the phenol red in the medium could no longer be observed. The protein content of the dialysate was estimated before the dialysate was concentrated by lyophilization (Labconco, Kansas City, MO, USA) and stored at −70°C until use.
Western-blot analysis
The same protocol has been used in our previous studies [21,22]. The larval excretory-secretory antigens were used in western blots to test each serum sample for Toxocara-specific IgG. Briefly, the TcES (9 µg/slab) were separated using 12.5% sodium-dodecyl-sulphate polyacrylamide-gel electrophoresis (SDS-PAGE) and then transferred to strips of nitrocellulose membrane (Amersham Biosciences, Little Chalfont, UK) in a semi-dry blotter (Hoffer, Fremont, CA, USA). The membrane strips were then incubated with sera, each diluted at 1:64, before employing a Western Lightning kit (PerkinElmer Life Sciences, Boston, MA, USA) to detect immunoreactions. When a serum reacted with any of the low molecular-weight T. canis-specific bands (24, 28, 30, or 35 kDa) [19], a child was considered seropositive.
Statistical analysis
To evaluate the associations between demographic characteristics and T. canis infection, a chi-square test was used to compare the proportions of infection based on gender, age group, parental occupation, and urbanization level. A logistic regression model was applied to investigate multiple environmental risk factors associated with T. canis infection. All statistical analyses were conducted using SAS Version 9.3 (SAS Institute, Inc., Cary, NC, USA).
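The analyses described above were run in SAS; as a generic illustration (not the authors' SAS code), the Pearson chi-square statistic and a Wald-type odds ratio with 95% confidence interval for a 2×2 table can be computed in a few lines. The table values in the usage example are invented for demonstration only.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI for exposed/unexposed cases and controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For an illustrative table [[10, 20], [30, 40]] this gives a chi-square statistic of about 0.79 and an odds ratio of about 0.67.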
Urbanization level was not significantly associated with seropositivity for T. canis in the uni- or multivariate logistic regression analyses (p > 0.05). By contrast, parental occupation was a crucial contributing factor: PSC whose parents were employed as semiskilled workers exhibited a significantly higher risk of T. canis infection than did those whose parents were employed as skilled workers in both the univariate (OR = 2.86, 95% CI = 1.08-7.60, p = 0.03) and multivariate logistic regression analyses (OR = 3.83, 95% CI = 1.20-12.22, p = 0.02). Among the analyzed risk factors, univariate regression indicated that only PSC with a history of feeding dogs demonstrated an increased risk of T. canis infection compared with those who lacked such a history (OR = 5.53, 95% CI = 1.15-26.61, p = 0.03); by contrast, this effect was nonsignificant in the multivariate logistic regression analysis (OR = 2.64, 95% CI = 0.48-14.65, p = 0.27).
Discussion
The RMI is a tropical developing country; thus, its climatic and living conditions might favor the survival of various pathogens, including parasites such as T. canis. However, few systematic studies have evaluated the prevalence of T. canis infection among PSC in the RMI.
PSC are particularly vulnerable to toxocariasis because of their habits of playing in water or soil; eating raw foods; or contacting pets, including cats and dogs; thus, PSC are an ideal target group for investigating the prevalence of toxocariasis. Data collected from the evaluated age groups can be used to assess whether toxocariasis threatens the health of school-aged children, and can serve as a reference when evaluating the need for community interventions [23].
In most laboratories, such as the Centers for Disease Control and Prevention (CDC) in Atlanta, the current methods of detecting T. canis infection in humans are typically based on TcES-ELISA [6,24]. Although TcES-ELISA has been reported to yield reasonable levels of sensitivity (78%) and specificity (92%) when the threshold titer of positivity is set at 1:32 [18,19], the specificity is too low to be reliable when assaying communities in which several species of cross-reacting intestinal helminths (e.g., Ascaris lumbricoides) are common. Although no attempt was made to verify whether the current participants were host to helminths other than T. canis, it was assumed that infection with several species of helminth would be common in the RMI. Thus, western blotting [19], rather than a general TcES-ELISA, was employed in this study. The findings indicated that T. canis infection was common among the PSC living in the capital areas of the RMI, and most PSC (86.75%) tested seropositive. A valid comparison of the present seroprevalence data with those recorded in previous studies is hampered by the variation in detection methods (ELISA or WB) and cut-off titers employed, and by the general difficulty in exploring the relationships among titers, infection, and clinical disease [3,5,19]. Serological surveys conducted primarily among children in developed countries have indicated T. canis seroprevalence levels of 0.7% in New Zealand, 1.6% in Japan, 2.4% in Denmark, 7.5% in Australia, 14% in the United States, and 15% in Poland [3,5,25]. By contrast, high levels of seroprevalence have been reported in less developed, or tropical countries in Africa (30% in Nigeria, 45% in Swaziland, and 93% in La Reunion), Asia (81% in Nepal, 63.2% in Indonesia, and 58% in Malaysia), and South America (36% in Brazil and 37% in Peru) [21,26,27].
Other studies involving TcES-ELISA assessments have reported markedly increased seroprevalence levels of 77% among indigenous children in Taiwan [22] and 86% among children in St. Lucia [28]. In the present study, a relatively high cut-off titer (1:64) was used and it seems likely that some and perhaps many of the seropositive PSC exhibited active T. canis infections when they were sampled. According to the CDC, when conducting TcES-ELISA, a titer of 1:32 is indicative of active Toxocara infection [19].
Although boys might be more likely to be infected than girls in certain communities in which boys have frequent contact with dogs [15,26], the current results indicated that girls exhibited an elevated risk of infection. However, gender did not significantly affect seroprevalence in the present study, contrasting the results of studies in China, Iran, Nigeria, and Spain [16,[29][30][31]. The reasons for this discrepancy warrant further investigation. Girls might often be involved in housework-related chores, such as washing clothes in water, or might have more frequent contact with dogs than boys do, increasing their opportunity to be exposed to T. canis; thus, subsequent studies are required.
The seroprevalence of T. canis infection in RMI PSC decreased as age increased; however, this trend was nonsignificant, possibly because of the small numbers of children in certain age groups. Seroprevalence might be cumulative in children because detectable titers of anti-Toxocara antibodies might persist over time postinfection [15,19,21]. No age-related increase in seroprevalence was observed among children in Argentina, Iran, Nigeria, or Spain [16,30,31]. Exposure to Toxocara is common because soil contamination is prevalent in peridomestic environments, and is exacerbated by poverty, poor hygiene, and the risk of contact with infected dogs [5,6]. The results of the full regression analysis suggested that poverty is a critical risk factor because seropositivity levels were elevated among PSC whose parents were employed as nonskilled workers (e.g., factory workers). A recent study verified that Toxocara infection is closely related to poverty status in the United States, suggesting that those in certain occupations, such as farming, might be at an increased risk [32].
A crucial risk factor of T. canis infection is contact with dogs [3]. The results of the univariate analysis in the current study indicated that PSC who exhibited a history of feeding dogs demonstrated a higher risk of T. canis infection than those who did not; however, these effects were nonsignificant in the multivariate regression analysis. Substantial evidence has indicated that direct human-dog contact might be a route of human infection based on the retained infectivity and pathogenicity of embryonated T. canis eggs recovered from dog hair [33][34][35]. Nagy et al. [36] located embryonated eggs on dog hair, but suggested that this route of transmission is rare. Nevertheless, a recent study indicated that embryonation on dog hair is slow but can occur; thus, transmission through direct contact, even with well-groomed dogs, should not be ruled out [37]. Therefore, potential transmission through contact with embryonated T. canis eggs contaminating the environment, such as soil, should also not be ignored [3].
Children who have tested seropositive for T. canis have been linked to impaired cognitive development [25,38], generating cause for concern. Walsh and Haseeb [39] reported that seropositive children in the United States attained statistically lower cognitive scores on both the Wechsler Intelligence Scale for Children-Revised and Wide Range Achievement Test-Revised than did seronegative children. This finding was independent of socioeconomic status, ethnicity, gender, rural residence, cytomegalovirus infection, and lead levels in blood. Growing evidence has also implied that T. canis infection is associated with epilepsy [13,14]. Whether neurological deficits exist among PSC infected by T. canis in RMI warrants further comprehensive investigation.
Conclusions
In summary, to prevent T. canis infections, PSC in the RMI must be educated regarding appropriate hygiene, including hand washing after contact with dogs and soil.
A new species of Eriobotrya (Rosaceae) from Yunnan Province, China
Abstract Eriobotrya laoshanica, a new species of Rosaceae from Yunnan, China, is described and illustrated. The new species is easily distinguished from the most similar species E. malipoensis K. C. Kuan by its longer petioles (2–5 vs. 0.5–1 cm); indumentum on the lower leaf surfaces (densely tomentose vs. glabrous); much fewer flowers (15- to 30-flowered vs. 50- to 100-flowered) on the panicle; larger flowers (2.5–3 vs. 1.5–2 cm in diameter); and non-angulated (vs. angulated) young fruits.
Introduction
The genus Eriobotrya Lindley, a small genus of subtribe Malinae (tribe Maleae, subfamily Amygdaloideae, Rosaceae) consisting of approximately 30 species, is distributed in Himalaya, eastern Asia and western Malesia (Vidal 1965;Gu and Spongberg 2003;Mabberley 2017). This genus is considered close to Rhaphiolepis based on the shared characters of larger seeds and thinner endocarp (Robertson et al. 1991). Recent studies based on molecular evidence strongly supported the Eriobotrya-Rhaphiolepis clade (Lo and Donoghue 2012;Xiang et al. 2016). Eriobotrya japonica (Thunberg) Lindley, commonly known as loquat, is an important fruit tree cultivated throughout southeastern Asia and southern Europe (Gu and Spongberg 2003).
There are about 16 Eriobotrya species (five endemic) recorded in China (Gu and Spongberg 2003;Yang and Lin 2007;Li et al. 2012). Among them, there are only four species and one natural hybrid species flowering in autumn and winter, namely, E. × daduheensis H. Z. Zhang (Gu and Spongberg 2003;Ding et al. 2015). In our investigations into Eriobotrya species in Yunnan province of China, a distinct Eriobotrya species flowering in autumn was collected in 2015. After four years' field observations and comprehensive literature studies, we confirmed it was a new species and it is described and illustrated here.
Materials and methods
Morphological observations of the putative new species and its close relatives were carried out based on living plants in the field as well as dried specimens. All morphological characters were measured using a stereomicroscope with ocular micrometer (Leica S8APO, Leica Microsystems Inc., Germany). The voucher specimens were deposited in the herbarium of Sun Yat-sen University (SYS) and the herbarium of South China Botanical Institute (IBSC).
Leaf samples for the putative new species were collected and stored in silica gel. The total DNA was extracted with the TIANamp Genomic DNA Kit [TIANGEN Biotech (Beijing) Co. Ltd] according to the protocol, and then sent to Novogene Bioinformatics Technology (Beijing, China) Co. Ltd for quality inspection and low-coverage genome sequencing on the Illumina 2000 platform following the standard Illumina sequencing procedure. Approximately 6 GB of cleaned raw data was produced and assembled into circularized chloroplast genomes with the Perl script NOVOPlasty 2.7.2 (Dierckxsens et al. 2017; accession numbers: MT130714, MT130715), using the chloroplast genome and the rbcL gene of E. japonica (downloaded from the NCBI website, accession number: NC_034639.1) as reference and seed, respectively. The two assembled sequences were then annotated with the online tool GeSeq (Tillich et al. 2017) using the same E. japonica reference (accession number: NC_034639.1). Further, complete chloroplast genome sequences for Eriobotrya and other closely related genera, such as Rhaphiolepis, Heteromeles, Cotoneaster, and Photinia, were downloaded from the NCBI nucleotide database (Zhang et al. 2017). Together with the putative new species, all the chloroplast genomes were aligned with MAFFT version 7 (Rozewicki et al. 2019) and then manually checked and revised with MEGA version 6.0 (Tamura et al. 2013). The phylogenetic tree was then constructed with IQ-TREE 1.6.10 (Nguyen et al. 2015) using the maximum likelihood method, in which the best-fit model of DNA substitution was determined automatically by calculating the Bayesian Information Criterion (BIC) over the 88 available nucleotide substitution models; the ultrafast bootstrap was set to 2000 replicates, and the Photinia species were set as outgroup.
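The alignment and tree-search steps described above are command-line runs; a hedged sketch of the corresponding invocations is shown below. The file names are placeholders (the authors' exact commands are not given in the text), and the flags used are standard MAFFT and IQ-TREE 1.6 options.

```python
# Sketch of the MAFFT + IQ-TREE steps described in the text.
# File names are placeholders, not the authors' actual files.

def mafft_command(input_fasta):
    # MAFFT writes the alignment to stdout; redirect it to a file when running.
    return ["mafft", "--auto", input_fasta]

def iqtree_command(alignment, outgroup=None, ultrafast_bootstrap=2000):
    # -bb enables the ultrafast bootstrap; with no -m option, IQ-TREE 1.6
    # runs ModelFinder, which selects the substitution model by BIC.
    cmd = ["iqtree", "-s", alignment, "-bb", str(ultrafast_bootstrap)]
    if outgroup is not None:
        cmd += ["-o", outgroup]  # set the outgroup taxon
    return cmd
```

These lists can be passed directly to `subprocess.run` once the input alignment files exist.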
Molecular analyses
The alignment length of these twenty-five chloroplast genomes was 166,363 bp in total, with 1,307 parsimony-informative sites. No variable sites were detected between the two accessions of the new species, but 139 variable sites were detected between the new species and E. malipoensis. This low diversity within a species was also observed in E. japonica (KT633951 was also identical to NC_034639, and KY085905 identical to MN577877). The best-fit nucleotide substitution model was TVM+F+R2 based on the Bayesian Information Criterion (BIC). The ML phylogenetic tree (Fig. 1) showed that all the Rhaphiolepis species formed a well-supported clade (clade A) that is sister to a clade of Eriobotrya species (clade B), and the sister-group relationship of these two clades is well supported; E. henryi, E. obovata, E. salwinensis and E. seguinii clustered together, forming clade B; the putative new species, E. laoshanica, is placed in a well-supported clade with E. malipoensis and E. cavaleriei, which then clustered with E. japonica and E. deflexa, forming clade C. (Figs 2, 3) Diagnosis. This species is similar to E. malipoensis and E. serrata, but differs from them in its leaf shape, indumentum on the lower leaf surfaces, longer petioles, much fewer flowers on the panicle, larger flowers, and other traits.
Phenology. Flowering from September to October, fruiting from November to December.
Etymology. The specific epithet refers to Laoshan Mountain, the locality of the type collection.
Distribution and habitat. Eriobotrya laoshanica is currently known only from two localities in Laoshan Natural Reserve, Malipo County, southeastern Yunnan, China. Here, the species is distributed in thin forests on the slopes of limestone hills at al-
Conservation status.
Only two populations were found, with no more than 50 mature individuals in a total area of about 5 km². The two populations are about 6.5 km apart. The wood of this species is very suitable for firewood. During the expedition in 2019, we found that at least two large trees, about 15 cm in diameter, had been felled by the local villagers. Thus the species could be assessed as CR (Critically Endangered) according to the IUCN Red List criteria (B2ab(v); IUCN 2019).
Additional specimens examined (
Hydrodynamical simulations of the jet in the symbiotic star MWC 560 I. Structure, emission and synthetic absorption line profiles
We performed hydrodynamical simulations with and without radiative cooling of jet models with parameters representative for the symbiotic system MWC 560. For symbiotic systems we have to perform jet simulations of a pulsed underdense jet in a high density ambient medium. We present the jet structure resulting from our simulations and calculate emission plots which account for expected radiative processes. In addition, our calculations provide expansion velocities for the jet bow shock, the density and temperature structure in the jet, and the propagation and evolution of the jet pulses. In MWC 560 the jet axis is parallel to the line of sight so that the outflowing jet gas can be seen as blue shifted, variable absorption lines in the continuum of the underlying jet source. Based on our simulations we calculate and discuss synthetic absorption profiles. Based on a detailed comparison between model spectra and observations we discuss our hydrodynamical calculations for a pulsed jet in MWC 560 and suggest improvements for future models.
Introduction
Jets are very common in a variety of astrophysical objects on very different size and mass scales. They can be produced by supermassive black holes in the case of Active Galactic Nuclei (AGN), by stellar black holes in Black Hole X-ray Binaries (BHXBs), by neutron stars in X-ray Binaries, by pre-main sequence stars in Young Stellar Objects (YSO) and by white dwarfs in supersoft X-ray sources and in symbiotic binaries. Jets in symbiotic systems are not yet as well investigated theoretically as the other objects. Because their parameters lie in a different regime, studying them promises new insights. The density contrast η = ρ_jet/ρ_ambient is about 10⁻³-10⁻² in symbiotic stars (1-10 in YSOs, ≈ 0.1 in AGN), and the outflow velocities of 1000-5000 km s⁻¹ are somewhat higher than in YSOs with 100-1000 km s⁻¹ (≈ c in AGN). The absolute densities of N_e ≈ 10⁶-10⁸ cm⁻³ are similar to YSO jets. However, among underdense jets (η < 1), the jet gas densities in symbiotic systems are the highest (in AGN jets they are smaller than 10⁻² cm⁻³), and therefore radiative processes become very important.
Symbiotic systems consist of a red giant undergoing strong mass loss and a white dwarf. More than one hundred symbiotic stars are known, but only about ten systems show jet emission. The most famous systems are R Aquarii, CH Cygni and MWC 560. While the first two objects are seen at high inclinations, a fact which makes it possible to study the morphology and structure of jets of symbiotic stars, the jet axis in MWC 560 is practically parallel to the line of sight. This special orientation provides the opportunity to observe the outflowing gas as line absorption in the source spectrum. With such observations the radial velocity and the column density of the outflowing jet gas close to the source can be investigated in great detail. In particular we can probe the acceleration and evolution of individual outflow components with spectroscopic monitoring programs as described in Schmid et al. (2001).
Send offprint requests to: Matthias Stute, e-mail: M.Stute@lsw.uni-heidelberg.de
MWC 560 is a symbiotic binary system with a late M giant undergoing strong mass loss. At least a significant fraction of the lost material is accreted by the companion, which is, as for most symbiotic stars, a white dwarf. No radial velocity variations have been detected for the red giant, most likely because the inclination of the system is close to 0°. The orbital period of the system is not known. However, Schmid et al. (2001) provide arguments for a likely orbital period in the range 4 to 10 years.
We performed hydrodynamical simulations with and without cooling of jets with parameters that are intended to represent those in MWC 560. In a grid of eight simulations we investigated the influence of different jet pulse parameters. Due to the high computational costs of simulations including cooling, this grid was restricted to adiabatic simulations. Existing simulations of pulsed jets for YSO systems (e.g. Stone & Norman 1993;Steffen et al. 1997;de Gouveia dal Pino & Cerqueira 2002, and references therein) showed that the resulting jet structure differs strongly between purely hydrodynamical models and models using radiation hydrodynamics. Therefore, one model simulation was performed which includes a treatment of radiative cooling. We calculate from these models the absorption line profiles and investigate their ability to explain the corresponding spectroscopic observations.
Since the morphological jet structure and synthetic emission plots are the first model results obtained, we also present them in this paper. In addition, our calculations provide expansion velocities for the jet bow shock, the density and temperature structure in the jet, and the propagation and evolution of the jet pulses. This allows us to compare at least the qualitative properties of the jet simulations with various types of observations of jets in symbiotic systems.
In section 2, we summarize the main observational results of the jet sources in symbiotic systems and in particular for the jet absorptions in MWC 560. A detailed description of our hydrodynamic model is given in section 3. The resulting jet structures are described and emission plots are presented (section 4). In section 5, we discuss the model parameters which define the synthetic absorption line profiles and the results for different model cases. Section 6 describes the temporal evolution of the high velocity gas in the pulses and compares the resulting jet absorption profiles with observation. Finally a summary and a discussion are given.
MWC 560 and jets in symbiotic stars
Symbiotic binaries are a very heterogeneous class of objects, showing different types of nova-like activity. Jets are only expected in systems with substantial accretion from the red giant via a disk. The presence of a disk can be inferred from strong, short term (∼ hour) flickering of the hot component. However, flickering is only observed in very few objects (Sokoloski, Bildsten & Ho 2001). In many systems an accretion disk may be present, but it is hard to observe due to the much stronger emission from the cool giant, the nebula or the accreting white dwarf. Further the disk may be hidden by a larger scale circumbinary disk. In some systems the jet emission seems to be a transient phenomenon (e.g. Tomov, Munari & Marrese 2000), connected with an active phase of the hot component -perhaps due to an accretion disk instability. However, such transient features are difficult to study observationally.
MWC 560
The jet in the symbiotic system MWC 560 serves in this study as the template object for our model calculations. We have chosen this unique jet system because it provides us with direct information on some hydrodynamical parameters of the jet gas in the near vicinity (< 1 AU) of the jet source.
MWC 560 is observationally a point source. In early 1990, MWC 560 attracted attention with a photometric outburst of 2 mag. With spectroscopic observations it was found that the system exhibits strongly variable, blue-shifted absorptions with outflow velocities (RV) up to 6000 km s⁻¹ (Tomov et al. 1990). Unlike in normal P Cygni profiles from a stellar wind, the blue absorption components are detached from the emission component. After the initial outburst phase, the outflow showed a low velocity phase with v = 300 km s⁻¹ for about a year. Since September 1991, the "normal" outflow mode has been re-established, with strongly variable, detached absorption components and a typical outflow velocity of ≈ 1500 km s⁻¹ (Tomov & Kolev 1997). The absorption line structure can be explained as a jet outflow whose axis is parallel to the line of sight (Tomov et al. 1990; Schmid et al. 2001). This very special system orientation is supported by the absence of measurable radial velocity variations for the red giant, indicating that the orbital plane, and therefore presumably also the accretion disk, are perpendicular to the line of sight. Moreover, strong flickering is present (Sokoloski, Bildsten & Ho 2001), as expected for an accretion disk of a strong jet source seen pole-on. Up to now, this object is the only stellar object known with this special jet orientation. Therefore MWC 560 is most useful for studying the acceleration and dynamical evolution of small scale structures in a stellar jet. Studying the variable gas absorptions yields information about the outflowing gas at very small distances (< 10 AU) from the source.
The most important observational source of information for the investigation of the pulsed jet in MWC 560 are the monitoring observations described in Schmid et al. (2001). Figure 1 displays a small fraction of the Hδ jet absorption data obtained during this campaign. It is clearly visible that the jet absorptions in MWC 560 are very different from classical P Cygni profiles of stellar winds. The variable absorption features can be grouped into three different components (see Fig. 1).
Fig. 1. Normalized spectra (shifted for clarity) from days before and after the appearance of a strong high velocity component on day 144 (from Schmid et al. 2001).
ponents in He I. The transient high velocity components reach their maximum velocity within one day and decay again in the following few days. The disappearance is faster in He I than in H I. The He I transitions have a higher excitation potential; therefore the He I features can be associated with hotter and/or more highly ionized gas. - The third absorption component, in the RV range of ≈ −900 to −400 km s⁻¹, weakens at the same time as the appearance of the high velocity component. In Fig. 1 the reduced absorption (enhanced emission) is particularly well visible for days 143 and 144 around −900 km s⁻¹. The low velocity absorptions are anti-correlated with the transient high velocity components. This indicates a close relationship between the acceleration region producing the low velocity absorption and the transient high velocity components.
The observations provide much more information on the jet structure in MWC 560 and we refer the interested reader to Schmid et al. (2001).
In that work, various jet parameters for MWC 560 were estimated from the observations. Of interest for this study on synthetic spectra from hydrodynamic models is the estimated jet mass outflow rate of > 7 × 10⁻⁹ M⊙ yr⁻¹. In addition, it was possible to derive values for the velocity and the gas density for the "normal" jet outflow, but also for phases with strong high velocity components. These determinations of hydrodynamical parameters of the initial jet gas in MWC 560 serve in this paper as input parameters for the numerical simulations of pulsed jets in symbiotic systems.
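The order of magnitude of such a mass outflow rate can be checked with the simple estimate Ṁ = μ m_H n_e v π r_jet². The input values below (n_e = 10⁷ cm⁻³, v = 1000 km s⁻¹, r_jet = 1 AU, mean molecular weight μ = 1.3) are illustrative assumptions for this sketch, not the observationally derived parameters of MWC 560.

```python
import math

# Order-of-magnitude check of a jet mass-outflow rate, Mdot = rho * v * A,
# with rho = mu * m_H * n_e and A = pi * r_jet^2. All input values are
# illustrative assumptions, not derived MWC 560 parameters.
M_H = 1.6726e-24   # proton mass [g]
M_SUN = 1.989e33   # solar mass [g]
YEAR = 3.156e7     # one year [s]
AU = 1.496e13      # astronomical unit [cm]

def mass_outflow_rate(n_e, v_kms, r_jet_au, mu=1.3):
    """Mdot in solar masses per year for a cylindrical jet cross-section."""
    rho = mu * M_H * n_e                    # mass density [g cm^-3]
    v = v_kms * 1.0e5                       # velocity [cm s^-1]
    area = math.pi * (r_jet_au * AU) ** 2   # cross-section [cm^2]
    return rho * v * area * YEAR / M_SUN

mdot = mass_outflow_rate(n_e=1.0e7, v_kms=1000.0, r_jet_au=1.0)
# With these assumed inputs, mdot is of order 10^-8 solar masses per year,
# consistent with the quoted lower limit of > 7e-9.
```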
The pole on orientation of the jet in MWC 560 has of course the drawback that this system is not suited for investigations on the morphological structure of the jet. For this we have to use observations of jets which are seen from the side.
R Aquarii and CH Cygni
R Aquarii, at a distance of about 200 pc, is one of the nearest symbiotic stars and a well known jet source. The system contains a Mira-like variable with a pulsation period of 387 days. The hot, ionizing companion is not resolved, but it is presumably a white dwarf or sub-dwarf with an accretion disk. An orbital period of ≈ 44 years has been suggested, but this value is highly uncertain (Wallerstein 1986). The jet has been extensively observed in the optical, at radio wavelengths and with X-ray observations (e.g. Solf & Ulrich 1985; Paresce & Hack 1994; Hollis et al. 1985a,b; Kellogg, Pedelty & Lyon 2001). R Aqr shows a jet and a counter-jet extending about 10″ each. The jets are embedded in an extended and complex nebulosity. Individual jet features are morphologically and photometrically variable with time. In observations with HST (Paresce & Hack 1994), the jet can be traced down to a distance of only 15 AU from the Mira, where it is already collimated with an opening angle of < 15°. After a straight propagation of 50 AU, it hits a dense clump and produces a radiative bow shock (feature N2; notation as in Paresce & Hack 1994). The flow seems to split into two parts: one stream extends around 700 AU towards feature A1, and further a series of parallel features (N3-N6) are detected downstream of N2, orthogonal to the original flow. HST observations taken at different epochs revealed that the transverse velocity (proper motion) of the different features increases from about 40 to 240 km s⁻¹ with increasing distance from the central jet (Hollis et al. 1985b). In Chandra images (Kellogg, Pedelty & Lyon 2001), the knots are not as well resolved as in optical observations. Only larger clumps are visible, corresponding to the central source, the feature A1 and the feature S3 in Paresce & Hack (1994). VLA observations (Hollis et al. 1985a) show similar structures.
For R Aqr a major problem for the interpretation of the jet observations is the blending of jet features with emission from the surrounding nebulosity. Therefore it is not clear whether the observed features are due to the jet gas, due to the ambient medium which is shock-excited by the jet outflow, or just circumstellar material ionized by the radiation from the accreting component.
CH Cygni is a symbiotic binary where the hot component shows strong, short term flickering as expected for a bright accretion disk. The cool companion is an extended M6 III giant with a radius of about 200 R⊙. The binary period is 5700 d (Crocker et al. 2002) and eclipses of the flickering component indicate a system inclination near 90° (Mikolajewski, Mikolajewska & Tomov 1987). In 1984/85, the system showed a strong radio outburst, during which a double-sided jet with multiple components was ejected (Taylor, Seaquist & Mattei 1986). This event enabled an accurate measurement of the jet expansion with an apparent proper motion of 1.1 arcsec per year. With a distance of 268 pc (HIPPARCOS) (Crocker et al. 2001), this leads to a jet velocity near 1500 km s⁻¹. The spectral energy distribution derived from the radio observations suggests a gas temperature of about 7000 K for the propagating jet gas (Taylor, Seaquist & Mattei 1986). In HST observations (Eyres et al. 2002), arcs can be detected which could also be produced by episodic ejection events. The X-ray spectrum taken with ASCA (Ezuka, Ishida & Makino 1998) is composed of three components which are associated with the hot source and the secondary or the radio jet. X-ray imaging is not yet available.
The numerical models
In the following, we describe the employed computer code with the incorporated equations, the model geometry and the chosen jet parameters. We also discuss the simplifications and approximations made due to the constraints set by the available computer resources.
General description
With the code NIRVANA (Ziegler & Yorke 1997) we solve the following set of hyperbolic differential equations of ideal hydrodynamics:

\[
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0, \qquad
\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v}\mathbf{v}) + \nabla p = 0, \qquad
\frac{\partial e}{\partial t} + \nabla \cdot (e \mathbf{v}) + p\,\nabla \cdot \mathbf{v} = -\Lambda, \qquad
p = (\gamma - 1)\, e. \tag{1}
\]

Thereby ρ is the gas density, e the energy density, v the velocity and γ the ratio of the specific heats at constant pressure and volume. NIRVANA uses second order accurate finite-difference and finite-volume methods and explicit time-stepping. This code was modified by M. Thiele (Thiele 2000) to calculate energy losses due to non-equilibrium cooling by radiative emission processes. The microphysics is introduced via the cooling term Λ in the energy equation and the following interaction equations for the species. When cooling is important, the above equations are supplemented by a species network

\[
\frac{\partial \rho_i}{\partial t} + \nabla \cdot (\rho_i \mathbf{v})
= \sum_{j=1}^{N_s} k_{ji}(T)\, n_e\, \rho_j - n_e\, \rho_i \sum_{j=1}^{N_s} k_{ij}(T), \tag{2}
\]

with ρ_i the species densities satisfying \(\rho = \sum_{i=1}^{N_s} \rho_i\) for the total density. The \(k_{ij}\) are the rate coefficients for two-body reactions, which are functions of the fluid temperature T. They describe electron collision ionization and radiative and dielectronic recombination processes. The summations run over the N_s species. Both atomic and/or molecular species can be included in the model, and NIRVANA C can handle up to 36 species (Thiele 2000).
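The explicit time-stepping of such a scheme can be illustrated with a deliberately simple first-order Lax-Friedrichs update of the 1D Euler equations in conserved form. This is a minimal stand-in for illustration only, not the actual (second-order) NIRVANA scheme, and it treats e as the total energy density.

```python
# Minimal sketch: one explicit Lax-Friedrichs step for the 1D Euler
# equations with periodic boundaries. Illustrative only; NOT the
# NIRVANA discretization.

GAMMA = 5.0 / 3.0  # ratio of specific heats for a monatomic gas

def pressure(rho, mom, e):
    """Ideal-gas pressure from conserved variables (e = total energy density)."""
    v = mom / rho
    return (GAMMA - 1.0) * (e - 0.5 * rho * v * v)

def flux(u):
    """Physical flux of the conserved state u = (rho, rho*v, e)."""
    rho, mom, e = u
    v = mom / rho
    p = pressure(rho, mom, e)
    return (mom, mom * v + p, (e + p) * v)

def lax_friedrichs_step(state, dx, dt):
    """Advance all cells by one explicit step; periodic boundaries."""
    n = len(state)
    new = []
    for i in range(n):
        ul = state[(i - 1) % n]
        ur = state[(i + 1) % n]
        fl, fr = flux(ul), flux(ur)
        new.append(tuple(0.5 * (ul[k] + ur[k])
                         - 0.5 * dt / dx * (fr[k] - fl[k])
                         for k in range(3)))
    return new
```

A uniform state is preserved exactly, and with periodic boundaries the total mass is conserved to machine precision, which makes the scheme easy to sanity-check.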
In the optically thin case, the cooling rate is a function of the species densities and temperature, Λ = Λ(T ; ρ i ) (Sutherland & Dopita 1993). The cooling functions describe electron collision ionization, radiative and dielectronic recombination and line radiation. When cooling is very efficient, the atomic network is solved in a time-implicit way (Thiele 2000). This code has been extensively tested in Thiele (2000) and Krause (2001).
When the cooling is solved dynamically with the full set of non-equilibrium equations, the various ionization states and concentration densities ρ i of each element are calculated from the atomic rate equations. They are used explicitly in the cooling functions as

Λ = Σ_{i,j} e_ij(T) ρ_i ρ_j + Λ_BS ,

with e i j the cooling rates from two-body reactions between species i and j, and Λ BS the cooling function due to Bremsstrahlung. In this case, the equation of state has to be given in the form

p = (γ − 1) e = k_B T Σ_i ρ_i / m_i ,

with m_i the particle mass of species i. The code in its present form can handle collisional excitation, collisional ionization, recombination, metal-line cooling and Bremsstrahlung.
Our cooling setup
The available computer resources set strong constraints on the microphysics which can be included in our models. Important for jet simulations is a high spatial resolution, in order to resolve the fine structure. The parameter study for pulsed jets is therefore restricted to adiabatic simulations, in which we neglect both the cooling term Λ in the energy equation in the set of equations (1) and the interaction equations (2).
In addition, one model simulation was performed which includes a simple treatment of radiative cooling. Instead of the full explicit cooling function Λ(T ; ρ i ), we consider only cooling by hydrogen, together with a general non-equilibrium cooling function Λ(T ) adapted from Sutherland & Dopita (1993), without radiation field and assuming solar abundances, to account for the cooling by the heavier elements. The cooling function neglects collisional de-excitation. For the temperature regime of our calculations, T > 10 000 K, collisional de-excitation is only of importance for densities above 10 8 cm −3 . Thus in the jet regions with the highest densities, our cooling rate may be slightly overestimated. This effect will most likely be more than compensated by the fact that the limited resolution of our calculations underestimates the gas density (clumping) and hence the cooling rate. For hydrogen we solve the atomic network explicitly, i.e. the rate equations for the densities of H I, H II and the electrons.
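A minimal sketch of such a hydrogen ionization network follows: the H II fraction is evolved under electron collisional ionization and radiative recombination. The rate fits are standard published approximations (a Cen 1992-style collisional ionization fit and an approximate case-B recombination coefficient) standing in for the actual NIRVANA_C rate tables.

```python
import numpy as np

def k_ion(T):
    """Electron collisional ionization of H, cm^3 s^-1 (Cen 1992-style fit)."""
    return 5.85e-11 * np.sqrt(T) * np.exp(-157809.1 / T) / (1 + np.sqrt(T / 1e5))

def alpha_B(T):
    """Approximate case-B radiative recombination, cm^3 s^-1."""
    return 2.59e-13 * (T / 1e4) ** -0.7

def evolve_x(x, n_H, T, dt):
    """Explicit Euler step of dx/dt = k_ion n_e (1-x) - alpha_B n_e x, n_e = x n_H."""
    ne = x * n_H
    return x + dt * (k_ion(T) * ne * (1 - x) - alpha_B(T) * ne * x)

# At T = 2e4 K ionization dominates; x relaxes towards its equilibrium value
x, n_H, T = 0.5, 1e6, 2.0e4        # start half-ionized, n_H in cm^-3
for _ in range(5000):
    x = evolve_x(x, n_H, T, dt=1e3)  # dt in s, short against the relaxation time
```

In the actual code the network is solved time-implicitly when cooling is very efficient; the explicit step here is only adequate because the chosen dt is small compared to the ionization/recombination timescale.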
Geometric model
Due to the high computational cost of our cooling treatment, and in order to combine a large computational domain with high spatial resolution, it was not possible to perform model simulations in three dimensions. Therefore we had to choose a two-dimensional slice of the full domain and to assume axisymmetry. This kind of simulation is often called a 2.5D simulation. The geometry of this slice through the examined system is shown in Figure 2. The dimensions of the two-dimensional slice are set to 50 AU in the polar direction perpendicular to the orbital plane of the binary and 30 AU in the direction of the orbital plane (Figure 2 is not drawn to scale).
The hot component is located at the origin of the coordinate frame; the companion is therefore expanded into a "Red Giant Ring". In the 2D integration domain, only half of the cross-section of this ring is included. The binary separation in the models is chosen to be 4 AU, which is of the order of the estimated separations of 3.3 - 5.2 AU. The density of the red giant is set to 2.8 · 10 −5 g cm −3 and its radius to 1 AU.
Surrounding the red giant, a stellar wind is implemented. The wind has a constant velocity of v = 10 km s −1 , a gas temperature of T = 50 K and a mass loss rate of 10 −6 M ⊙ yr −1 . The density of the red giant wind at the surface of the star is then 2.2 · 10 −14 g cm −3 . The density of the external medium is given by a 1/r rg 2 law for a spherical wind, where r rg is the distance from the center of the red giant. The density of the red giant wind near the jet nozzle is about 200 times higher than the initial jet density. At a distance of 50 AU from the symbiotic system, i.e. at the end of the integration domain, the wind density is about equal to the jet density at the nozzle.
In 3D, this density distribution corresponds for small z (< 5 AU) to a torus-like structure, which becomes for large z (> 10 AU) a slightly flattened but quasi-spherical distribution ≈ 1/r 2 centered on the jet nozzle. This seems to be a reasonable approximation in the z-direction. In the r-direction, this procedure artificially introduces a density symmetry around the jet axis, which may be a qualitatively important difference compared to real jets in binary systems.
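The quoted wind densities can be checked directly from the spherical-wind relation ρ(r_rg) = Ṁ/(4π r_rg² v). The short script below (cgs constants; input values as quoted in the text) reproduces the surface density of 2.2 · 10^−14 g cm^−3 and the quoted density contrasts to within the stated "about" precision.

```python
import math

AU, MSUN, YR = 1.496e13, 1.989e33, 3.156e7   # cm, g, s
Mdot = 1e-6 * MSUN / YR                      # mass loss rate, g/s
v_wind = 10e5                                # 10 km/s in cm/s

def rho_wind(r_rg_au):
    """Spherical-wind density at distance r_rg from the red giant, g/cm^3."""
    r = r_rg_au * AU
    return Mdot / (4 * math.pi * r**2 * v_wind)

rho_surface = rho_wind(1.0)    # at the stellar radius of 1 AU: ~2.2e-14
rho_nozzle = rho_wind(4.0)     # at the jet nozzle (binary separation 4 AU)
rho_far = rho_wind(50.0)       # near the end of the integration domain
rho_jet = 8.4e-18              # jet density at the nozzle (see jet parameters)
```

The nozzle contrast comes out near 170 and the 50 AU contrast near unity, consistent with the "about 200 times higher" and "about equal" statements.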
The gravitational potential of the two stars with assumed masses of 1 M ⊙ each is considered. However, the effect of the gravitational potential on the resulting jet structure is marginal.
The numerical resolution was chosen to be 20 grid cells per AU; our computational domain therefore comprises 1000 × 600 grid cells. To account for the counter-jet and the other part of the jet, respectively, the boundary conditions in the equatorial plane and on the jet axis are set to reflection symmetry. On the other boundaries, outflow conditions are chosen.
Parameters for the pulsed jet
The jet is produced within a thin jet nozzle with a radius of 1 AU. Because we want to investigate in these simulations the propagation of small-scale structures in the jet, and not the formation and collimation of the jet outflow, this ansatz is appropriate. We assume that the jet is already completely collimated when leaving the nozzle. To simulate the stable velocity component mentioned above, the initial velocity of the jet is chosen to be 1000 km s −1 or 0.578 AU d −1 and its density is set to 8.4 · 10 −18 g cm −3 (equal to a hydrogen number density of 5 · 10 6 cm −3 ). These parameters lead to a density contrast η of 5 · 10 −3 , a Mach number of ≈ 60 in the nozzle and a mass loss rate of ≈ 10 −8 M ⊙ yr −1 .
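These nozzle parameters are mutually consistent, as a quick check shows (cgs constants; a pure-hydrogen gas is assumed when converting to a number density):

```python
import math

AU, MSUN, YR, MH = 1.496e13, 1.989e33, 3.156e7, 1.67e-24
rho_jet = 8.4e-18          # g/cm^3
v_jet = 1000e5             # 1000 km/s in cm/s
r_nozzle = 1.0 * AU        # nozzle radius

n_H = rho_jet / MH                                 # ~5e6 cm^-3
Mdot = rho_jet * v_jet * math.pi * r_nozzle**2     # mass loss rate, g/s
Mdot_msun_yr = Mdot * YR / MSUN                    # ~1e-8 M_sun/yr

# A Mach number of ~60 implies a nozzle sound speed near v_jet/60 ~ 17 km/s
cs = v_jet / 60.0
```

The velocity also converts to 0.578 AU per day, as quoted, and the mass loss rate matches the Ṁ given for the interpulse jet in Table 1 (model vii).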
Every seventh day, the velocity and density values in the nozzle are changed, to simulate the jet pulses which are seen in the observations of MWC 560. The effects of different pulse densities and speeds are investigated with a parameter study of adiabatic jet models.
Adiabatic models
We present eight adiabatic models for pulsed underdense jets which differ in their pulse parameters. Table 1 lists the main parameters of the models: the model number, the density of the pulse in the nozzle n pulse in cm −3 , the velocity of the pulse in the nozzle v pulse in cm s −1 , the mass loss during the pulse Ṁ in g s −1 and in M ⊙ yr −1 and the kinetic jet luminosity during the pulse in erg s −1 . Each pulse lasts for one day. The first four models have pulses with enhanced velocity and lower, the same or higher density than the "regular" outflow. In the models v to viii, only the density is changed, except for the special model vii, which has no pulses at all. Columns 7 and 8 in Table 1 give the axial and radial extent of the bow shock after 380 days of simulation time, at which point the first model (model iv) reaches the outer boundary. The bow shock sizes are results of our simulations, which will be discussed below.
In Fig. 3 the logarithm of density is plotted for model i of the eight hydrodynamical models listed in Table 1. The density plots of the other seven simulations are qualitatively similar and therefore omitted.

Table 1. Parameters of the jet pulses: the model number, the density of the pulse in the nozzle n pulse in cm −3 , the velocity of the pulse in the nozzle v pulse in cm s −1 , the mass outflow during the pulse Ṁ in g s −1 and in M ⊙ yr −1 , the kinetic jet luminosity during the pulse in erg s −1 and the axial and radial extent of the bow shock after 380 days of simulation time. Model vii is a special case, because no pulses are present.

Model   n_pulse [cm −3]   v_pulse [cm s −1]   Ṁ [g s −1]     Ṁ [M ⊙ yr −1]   L_kin [erg s −1]   axial [AU]   radial [AU]
i       1.25 · 10 6       2.0 · 10 8          2.94 · 10 17   4.66 · 10 −9    5.88 · 10 33       41.2         15.2
ii      2.5 · 10 6        2.0 · 10 8          5.88 · 10 17   9.33 · 10 −9    1.18 · 10 34       42.0         15.4
iii     5.0 · 10 6        2.0 · 10 8          1.18 · 10 18   1.87 · 10 −8    2.35 · 10 34       46.0         17.1
iv      1.0 · 10 7        2.0 · 10 8          2.35 · 10 18   3.73 · 10 −8    4.70 · 10 34       50.0         18.4
v       1.25 · 10 6       1.0 · 10 8          1.47 · 10 17   2.33 · 10 −9    7.35 · 10 32       43.8         14.4
vi      2.5 · 10 6        1.0 · 10 8          2.94 · 10 17   4.66 · 10 −9    1.47 · 10 33       48.6         14.6
vii*    5.0 · 10 6        1.0 · 10 8          5.88 · 10 17   9.33 · 10 −9    2.93 · 10 33       48.0         14.4
viii    1.0 · 10 7        1.0 · 10 8          1.18 · 10 18   1.87 · 10 −8    5.88 · 10 33       38.7         14.5
* equivalent to no pulses; these values represent the jet parameters out of pulses, valid for each model

In the models i-iv with the higher jet pulse velocity of 2000 km s −1 , the axial extent of the bow shock after 380 days (and therefore its averaged velocity) increases with increasing jet pulse density and mass outflow (see Table 1). This trend is also seen in the radial extent of the bow shock. No clear trend is present in the four models v-viii, where the jet velocity is constant and only the jet pulse density is varied.
The models with the highest and lowest pulse density have a lower averaged bow shock velocity than the intermediate models. The radial extent of the bow shock, however, is for all four models v-viii practically equal.
Thus we can say that the shock front expansion velocity is about equal within 10 − 20 per cent for all our adiabatic jet simulations. The average expansion velocity of the bow shock during the first year is about 200 km s −1 in axial direction and about 75 km s −1 in radial direction. This has to be compared with the gas velocity in the jet nozzle, which is 1000 km s −1 .
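The quoted average expansion velocities follow directly from the Table 1 extents, e.g. for model i:

```python
AU, DAY = 1.496e13, 86400.0  # cm, s

def avg_speed_kms(extent_au, t_days=380.0):
    """Average expansion speed in km/s from an extent reached after t_days."""
    return extent_au * AU / (t_days * DAY) / 1e5

v_axial = avg_speed_kms(41.2)   # model i axial extent -> ~190 km/s
v_radial = avg_speed_kms(15.2)  # model i radial extent -> ~70 km/s
```

Both values are consistent with the quoted "about 200 km s −1" and "about 75 km s −1", i.e. a small fraction of the 1000 km s −1 nozzle velocity.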
In the models, temperatures are present in the range of 10 3 K in the red giant wind and 10 5 -10 8 K in the jet (Fig. 4). These are by far too high for the observed absorptions from neutral or singly ionized metals, which are only expected for cool gas with temperatures around T ≈ 10 000 K or lower.
Fig. 4. Logarithm of temperature of the model i; temperatures are present in the range of 10 3 K in the red giant wind, of 10 5 K in the shocked ambient medium and up to 10 8 K in the jet beam and the cocoon; in this and the following color plots, only the jet material is considered, which is filtered out by means of a passively advected tracer; the black line inside the jet cocoon is an artifact resulting from this filtering
It is well known from previous simulations of protostellar and extragalactic jets (e.g. Stone & Norman 1993;Steffen et al. 1997;de Gouveia dal Pino & Cerqueira 2002, and references therein) that the resulting jet structure differs strongly between purely hydrodynamical models and models using radiation hydrodynamics. This should also be the case in the parameter region of jets in symbiotic stars.
An estimate of the cooling time of the jet gas due to bremsstrahlung can be obtained from

t cool ≈ 3 n k B T / Λ BS ≈ 9 · 10 2 d · T 7 1/2 n 7 −1 ,

where T 7 is the temperature in units of 10 7 K, which is about the postshock temperature of shocks with velocities of 1000 km s −1 (Dopita & Sutherland 2003), and n 7 the number density in units of 10 7 cm −3 ; both values are typical for our adiabatic models. It follows that the cooling time due to bremsstrahlung is comparable to the propagation time of the jet. This implies that radiative cooling is important for the employed jet model parameters and should be included in the energy equation.
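This estimate can be reproduced from the bremsstrahlung emissivity j = 1.68 · 10^−27 T^1/2 n_e n_i erg s^−1 cm^−3 quoted later in the text, taking t_cool ~ 3 n k_B T / j for a fully ionized hydrogen plasma (n_e = n_i = n assumed):

```python
import math

kB = 1.381e-16  # erg/K

def t_cool_days(T, n):
    """Bremsstrahlung cooling time (thermal energy / emissivity) in days."""
    j = 1.68e-27 * math.sqrt(T) * n * n     # erg s^-1 cm^-3
    return 3 * n * kB * T / j / 86400.0

t7 = t_cool_days(1e7, 1e7)  # T_7 = n_7 = 1: ~900 days
```

With roughly 900 days against a jet propagation time of a few hundred days, cooling is indeed of the same order and cannot be neglected.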
Model with cooling
We performed one pulsed jet simulation including radiative cooling. For this we have chosen the same parameters as for the hydrodynamical model i, in which the jet velocity during the pulse phase is doubled from 1000 km s −1 to 2000 km s −1 while the particle number density is reduced by a factor of 4 from 5 · 10 6 cm −3 to 1.25 · 10 6 cm −3 in the jet beam. Thus the kinetic energy density of the jet gas in the nozzle remains constant. Radiative cooling is treated with a simple procedure. The cooling by metals is described by a general cooling function Λ(T ) adopted from Sutherland & Dopita (1993), and bremsstrahlung emission is considered. In addition, we determined for each grid point the densities of H I, H II and e − from the rate equations and calculate the cooling due to collisional ionization and line excitation of H I.
The high computational cost of the additional terms and equations to solve, together with an increased demand for memory, made it impossible to perform this simulation on a normal workstation. Therefore the already vectorized code was expanded to run on the NEC SX-5 supercomputer at the High Performance Computing Center (HLRS) in Stuttgart (Krause 2001).
The cooled jet loses energy through radiation, which instantaneously lowers the pressure and the temperature (Fig. 5). Therefore the radial extension of the jet, its cross section and the resistance exerted by the external medium are reduced. This leads to a faster propagation velocity. In Figs. 6 and 7, the jet density structure after 74 days is plotted for model i without and with cooling, respectively. This illustrates well the dramatic difference in the jet propagation. The bow shock velocity shows an apparently steady increase from about 500 km s −1 during the first days to about 730 km s −1 after 70 days. This steadiness, however, could be an artifact of our coarse time resolution of 1 day. The average bow shock velocity for the covered 74 days is 570 km s −1 . The bow shock velocity is increasing with time because the jet head area remains almost constant and is therefore not able to compensate the decreasing local density contrast, as in the simulations without cooling. The maximum radial extent of the jet cocoon is about 2.5 AU.
Fig. 9. Cut along the jet axis for model i as in Fig. 8, but of the parallel velocity component
Jet structure
From our simulations we can investigate in detail the internal structure of the model jets. In Figs. 8-12, axial cuts along the jet of hydrodynamical quantities are plotted for model i without and with cooling. In the following discussion, however, we focus on the calculations with radiative cooling. The cooled jet shows a very simple, periodic jet structure in the Mach number, axial velocity, density, pressure and temperature. The internal shocks, which have not yet merged with the bow shock, can be identified as well defined discontinuities in all parameters. The jet flow thus shows two very different states of gas parameters, which we call knots and hot beam. In the knots the density is high, ≈ 10 −16 g cm −3 , and the temperature low, ≈ 10 4 K. Contrary to this, the density in the hot beam is about 1000 times lower, ≈ 10 −19 g cm −3 , while the temperature is much higher, between 10 5.5 K and 10 7 K. Fig. 13 shows the evolution of the internal shocks along the jet axis. The locations of the pulses were derived from the cut of the Mach number (Fig. 8) by searching for extremal points. The splitting of some knots is due to the inner structure of their peaks. The propagation of all pulses can be traced in the plot. This is in good accordance with simple theoretical models of pulsed jets (Raga & Cantó 2003). Each new pulse is instantaneously slowed down from 2000 km s −1 to about 1200 km s −1 within the first two days. Without the disturbance of the KH-instabilities, the distance between the internal shocks created by the periodic velocity pulses stays constant. It remains to be investigated whether this situation changes if the kinetic energy for the jet
Fig. 12. Cut along the jet axis for model i as in Fig. 8, but of the logarithm of temperature
Emission plots
We present emission plots of the cooled jet model in bremsstrahlung, synchrotron and optical radiation.
In the models, temperatures are present in the range of 10 3 K in the red giant wind and 10 4 -10 7 K in the jet (Figs. 5 and 12). The high gas temperature (T > 10 5 K) in the low density region of the jet beam makes the jets X-ray emitters due to thermal bremsstrahlung.
According to standard radiation theory, e.g. Rybicki & Lightman (1979), the total emissivity due to thermal bremsstrahlung from a completely ionized plasma is j = 1.68 · 10 −27 T 1/2 n e n i erg s −1 cm −3 .
Using the density and pressure data in each grid cell of the simulations, the emission per grid cell can be calculated and emission plots can be produced.
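A sketch of this construction for the bremsstrahlung maps: in axisymmetry, each 2D cell (r, z) represents an annulus of volume 2π r Δr Δz, so the luminosity per cell is j · V. The grid spacing follows the model setup (20 cells per AU); the temperature and density fields below are made-up placeholders, not simulation output.

```python
import numpy as np

AU = 1.496e13
dr = dz = AU / 20.0                          # 20 cells per AU
nr, nz = 600, 1000                           # grid as in the model setup
r = (np.arange(nr) + 0.5) * dr               # annulus centre radii, cm

# placeholder fields: a hot, low-density "beam" near the axis (first 20 columns)
n = np.full((nz, nr), 1e4); n[:, :20] = 1e6  # number density, cm^-3
T = np.full((nz, nr), 1e5); T[:, :20] = 1e7  # temperature, K

j = 1.68e-27 * np.sqrt(T) * n * n            # emissivity, erg s^-1 cm^-3
V = 2 * np.pi * r * dr * dz                  # annulus volume per cell, cm^3
L = float((j * V).sum())                     # total luminosity, erg/s
```

Summing j · V over the grid gives the total bremsstrahlung luminosity; projecting the per-cell values instead of summing them yields the emission maps.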
Synchrotron emission should be present due to the acceleration of electrons in the shocks and the presence of local magnetic fields in the plasma. With a power-law energy distribution of the electrons, N(E) dE ∝ E −κ dE, the spectral index of the resulting spectrum of the emission is α = (κ − 1)/2 (Rybicki & Lightman 1979), and the total emissivity of synchrotron radiation can be estimated with an assumed spectral index α = 0.6 (Saxton, Bicknell & Sutherland 2002), ǫ s the normalization of the power-law and β < 1 the proportionality factor of equipartition. This is a very crude estimate; furthermore, the values of the parameters are quite unclear. Therefore the synchrotron emission plots should be considered only as relative and qualitative. According to Aller (1984), we can estimate the H I line emission (of the Balmer lines only) from those grid cells in which the temperature is larger than 10 4 K and where only recombination contributes to the emission. Hydrogen is assumed to be fully ionized. The emissivity is then j = 4.16 · 10 −25 T 4 −0.983 10 −0.0424/T 4 n e n i erg s −1 cm −3 with T 4 the temperature in units of 10 4 K. The integrated luminosities at day 74 are: L Bremsstrahlung = 1.15 · 10 33 erg s −1 and L HI = 6.96 · 10 32 erg s −1 . The integrated synchrotron emission was omitted because it is only a relative and qualitative quantity. L HI accounts only for the H I recombination emission. The total cooling emission in all emission lines is certainly higher. Thus the dominant cooling radiation should be atomic emission lines.
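For representative conditions of the two flow states, the two quoted emissivities can be compared directly. Full ionization and n_e = n_i = n are assumed; the knot and beam values are the order-of-magnitude numbers from the jet-structure section, not exact simulation output.

```python
import math

MH = 1.67e-24  # g

def j_brems(T, n):
    """Thermal bremsstrahlung emissivity, erg s^-1 cm^-3."""
    return 1.68e-27 * math.sqrt(T) * n * n

def j_HI(T, n):
    """H recombination (Balmer) emissivity, valid for T > 1e4 K."""
    T4 = T / 1e4
    return 4.16e-25 * T4**-0.983 * 10**(-0.0424 / T4) * n * n

n_knot, T_knot = 1e-16 / MH, 1.5e4   # dense, cool knot
n_beam, T_beam = 1e-19 / MH, 1e7     # rarefied, hot beam

ratio = j_HI(T_knot, n_knot) / j_brems(T_beam, n_beam)
```

The line emissivity of a knot exceeds the bremsstrahlung emissivity of the hot beam by several orders of magnitude per unit volume, in line with the statement that the dominant cooling radiation should be atomic emission lines.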
Fig. 17. Evolution of the kinetic, bremsstrahlung and H I luminosity
For the comparison of the emission plots with observations, it should be considered that the H I plot would not be representative for all optical lines. For example, emission from collisionally excited lines like Ca II is only expected from the very coolest (T < 15 000 K) regions in the jet knots. No emission would come from the hot jet beam or the cocoon, because there calcium would be in a higher ionization state than Ca + . Thus the knot structure may be particularly well defined in such low ionization lines. Similar selection effects must be considered for the observations of the bremsstrahlung radiation. Thermal free-free radiation in the radio range would strongly favor the low temperature emission regions, such as the dense knots, while the bremsstrahlung radiation observable with X-ray telescopes would only originate from the very hottest, low density regions in the hot beam or the cocoon. The plot for the bremsstrahlung simply includes the emitted radiation in all wavelength bands. Fig. 17 shows the evolution of the kinetic, bremsstrahlung and H I luminosity until day 74. The emitted luminosity is about 50 per cent of the kinetic luminosity. The peaks in the emitted luminosity coincide with the merging of the pulses with the bow shock (see Fig. 13). These "flashes" are in accordance with simple analytical models (Raga & Cantó 2003).
Calculations of jet absorption profiles
Using the model results from the hydrodynamical simulations, we calculate the absorption line structure.
As emission region we define a disc in the equatorial plane (z = 0) which extends to a radius r em . For each grid cell inside this region, we assume a normalized continuum emission and a Gaussian emission line profile:

I 0 (v) = 1 + I peak exp(−v 2 /(2σ 2 )) .     (9)

This initial intensity is taken to be independent of position r < r em . Further, we consider the possibility of different system inclinations i for the calculation of light rays from the emission region through the computational domain. We consider only absorption along straight lines and neglect possible emissivity or scattering inside the jet region.
The initial continuum and line emission (9) is taken as input for the innermost grid cell of each path. We then calculate the absorption within each grid cell j along the line of sight according to

I j+1 (v) = I j (v) exp(−τ j ) ,  τ j = (π e 2 )/(m e c) f kl λ kl (η ρ j )/(m H ) (∆x j )/(∆v) ,

where e is the electron charge, m e the electron mass, c the speed of light, λ kl the rest frame wavelength of the transition, m H the proton mass and f kl the oscillator strength. These parameters are constant for a given atomic transition. ∆x j is the length of the path through grid cell j, which is a function of the inclination i. The parameters depending on the different hydrodynamical models are the velocity projected onto the line of sight v j , the mass density ρ j and η, the relative number density with respect to hydrogen, η = n k /n H , of the absorbing atom in the lower level k of the investigated line transition. The velocity is binned for the calculation of the absorption; the absorption of cell j acts on the velocity bin containing v j . The size of the bins ∆v can be interpreted as a measure of the kinetic motion and turbulence in one grid cell. This absorption dispersion helps to smooth the effects of the limited spatial resolution of the numerical models.
The absorption calculation through the jet region is repeated for all possible light paths from the emission region to the observer. The arithmetic mean of the individual absorption line profiles from all paths is then taken as the resulting spectrum.
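The whole procedure can be sketched as follows. The cell data in the example are toy values; the Ca II K parameters are those used later in the text, and π e²/(m_e c) = 0.02654 cm² Hz is the classical integrated cross-section.

```python
import numpy as np

AU, MH = 1.496e13, 1.67e-24
lam, f_kl, eta = 3934e-8, 0.69, 2e-6          # Ca II K: wavelength (cm), f, n_k/n_H
dv = 10e5                                     # 10 km/s velocity bins, cm/s
v_grid = np.arange(-2000e5, 500e5, dv)        # projected-velocity grid

def synth_profile(rho_cells, v_cells, dx_cells, I_peak=6.0, sigma=100e5):
    """Attenuate the initial spectrum by per-cell optical depths in velocity bins."""
    I = 1.0 + I_peak * np.exp(-v_grid**2 / (2 * sigma**2))   # initial emission
    tau = np.zeros_like(v_grid)
    for rho, v, dx in zip(rho_cells, v_cells, dx_cells):
        k = int((v - v_grid[0]) // dv)        # velocity bin of this cell
        tau[k] += 0.02654 * f_kl * lam * eta * rho / MH * dx / dv
    return I * np.exp(-tau)

# toy path: one fast jet-beam cell plus slow ambient-wind material
spec = synth_profile(rho_cells=[1e-18, 1e-14],
                     v_cells=[-1100e5, -10e5],
                     dx_cells=[10 * AU, 5 * AU])
```

In the full calculation this is repeated for many parallel paths across the emission disc, and the resulting profiles are averaged.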
Model parameters
The quantities determining the absorption line profiles are first the parameters defining the hydrodynamical model. These are for our model grid the pulse velocity, the pulse density, and the time (day) in the simulation. For model i we have to distinguish further between adiabatic calculation and calculations including radiative cooling.
For the emission from the jet source, the size of the emission region r em must be fixed. For the emission spectrum we adopt throughout this paper a normalized continuum and an emission line component at rest velocity (the same transition as the calculated jet absorption line) with I peak = 6, σ = 100 km s −1 . An additional parameter is the inclination i under which the system is seen.
The parameters of the absorbing transition are defined by the rest wavelength λ kl , the oscillator strength f kl and the population of the absorbing atomic level η = n k /n H . Further we have to define the velocity bin size ∆v which is a measure for the adopted turbulent (and kinetic) motion of the absorbers.
Results for different model parameters
The structure of the synthetic absorption profiles reflects to some degree the hydrodynamical structure of the jet outflow. The hydrodynamical variables determining the absorption profile are the density and the velocity projected onto the line of sight. The density, the velocity component parallel and the velocity component perpendicular to the jet axis are plotted in Fig. 18 for the model with radiative cooling at day 74. While the adiabatic jet produces a very extended and highly structured jet cocoon, the model i with radiative cooling produces an almost "naked" jet beam with practically no jet cocoon, but with a well defined and sharp transition towards the ambient medium. In the adiabatic model, four different velocity regions can be distinguished (Fig. 19): I, the jet beam with high outflow velocities of −800 to −1400 km s −1 ; II, the cocoon and bow shock region with velocities of −10 to −800 km s −1 ; III, the external medium, the companion star and its wind with velocities around 0 km s −1 ; and IV, the backflow near and beside the jet head with velocities of 0 to +500 km s −1 .
In the jet models with radiative cooling, neither an extended cocoon and bow-shock region nor a backflow region exists. These regions are compressed into a narrow transition region between the jet beam and the external medium due to the high density and the efficient cooling. Thus the model with cooling has essentially only two velocity regions: I, the jet beam, and III, the external medium.
Synthetic absorption line profiles
From our model simulations we can calculate, from the density and velocity structure of the jet (e.g. Fig. 18), the synthetic absorption line profile for different lines of sight.
For the emission region we assume a radius of r em = 1 AU, and the resulting spectrum is the mean over the corresponding lines of sight distributed over this emission area. For the atomic transition we use the parameters λ kl = 3934 Å, f kl = 0.69 and η = 2 · 10 −6 , appropriate for the Ca II K transition. For η we simply assumed that all Ca atoms are singly ionized; this assumption will be discussed in Sect. 5.6. The velocity bin width for the calculation of the absorption profiles is set to ∆v = 10 km s −1 .
The parameters for the emission line component in the initial spectrum are I peak = 6 and σ = 100 km s −1 . Figure 19 shows the resulting synthetic absorption line profiles calculated for the model i without cooling (adiabatic model) and with cooling. The model with cooling shows a strong, broad absorption centered around RV = −1100 km s −1 from the jet beam (velocity region I). In addition, there is a saturated, narrow absorption component in the middle of the emission component, caused by the almost stationary stellar wind of the cool giant (velocity region III) in front of the jet bow shock.
The absorption spectrum of the adiabatic model i is much more structured. Besides the velocity components I and III from the jet beam and the ambient medium, respectively, there is also a broad velocity component from the bow shock region in the velocity range v = −400 to +100 km s −1 . This component can be associated with the RV-region II of the jet cocoon. For this geometry, the line of sight does not pass through the extended backflow region of the adiabatic jet. Signatures of the backflow region may be visible for other line-of-sight geometries.
Surprisingly, the structure of the main absorption trough is quite similar in both the adiabatic model and the model with radiative cooling.
Different line of sight geometries
In this section we describe the dependence of the jet absorption structure on the line of sight inclination and the emission region size for the jet with radiative cooling.
In Fig. 20 the synthetic absorption profiles are given for different radii r em = 1, 2, 3 AU of the disk-like emission region. The system inclination is set to i = 0 • , therefore the projected velocity is equal to the velocity component parallel to the jet axis.
For r em = 1 AU, the gas inside the jet beam (region I) with v = −800 to −1300 km s −1 produces the absorption. The external medium (region III) adds the narrow absorption component near zero velocity.
The velocity bin size
The velocity structure in the synthetic jet absorption profile represents an average over the jet which is limited by the spatial resolution. The real velocity structure inside one grid cell is certainly more complex, mainly due to turbulent small scale gas motions. Therefore the discrete velocity value for the absorption of one grid cell should be replaced by a velocity dispersion. The strength of this dispersion, however, is not known. Thermal motion in our temperature regime leads to values of about 10 km s −1 ; turbulence can increase them to 50 km s −1 and more.
The effect of such a velocity dispersion can be taken into account in the calculation of the absorption line by binning the velocities into broader bins. For broader velocity bins, simulating a larger velocity dispersion of the absorbing atoms, a smoother jet absorption profile with less structure is obtained, as shown in Fig. 22. The observations typically show a smooth absorption line structure, indicating a significant velocity turbulence.
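The smoothing effect of the bin width can be illustrated with a toy profile: the same cell optical depths binned with Δv = 10 and 50 km/s, with a normalization that keeps the integrated optical depth fixed. All numbers are illustrative.

```python
import numpy as np

v_cells = np.array([-1300., -1250., -1240., -1100., -1090., -900.])  # km/s
tau_cells = np.array([0.8, 0.5, 0.6, 1.0, 0.9, 0.4])  # tau per 10 km/s bin

def profile(dv):
    """Transmitted flux over [-1500, -700] km/s for a given bin width dv."""
    edges = np.arange(-1500., -700. + dv, dv)
    tau, _ = np.histogram(v_cells, bins=edges, weights=tau_cells)
    # rescale so the velocity-integrated optical depth is bin-width independent
    return np.exp(-tau / dv * 10.0)

p10, p50 = profile(10.0), profile(50.0)
depth10, depth50 = 1 - p10.min(), 1 - p50.min()
```

The wider bins spread the same optical depth over more velocity space, so the profile becomes shallower and smoother, which is what the broader-bin calculations in Fig. 22 show.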
The ionization problem
In all the jet absorption profile calculations in the previous sections we have assumed that all Ca atoms are singly ionized. In the adiabatic models the gas temperature is far too high for Ca + ions. Assuming collisional ionization equilibrium, Ca + would be the dominant ionization stage for T e < 15 000 K. The relative abundance Ca + /Ca is less than 1 per cent for T e > 25 000 K and less than 0.01 per cent for T e > 250 000 K. After fitting the ionization balances from Sutherland & Dopita (1993) with a temperature-dependent relative abundance η(T ) and introducing this into the calculation, no absorption is present for our adiabatic model if ionization equilibrium is considered for the Ca + abundance. Even in the model with cooling, the jet gas temperature is slightly too high to produce a strong Ca II absorption (Fig. 23). In the model with cooling, the material cooler than 10 5 K produces about 60 per cent of the absorption, and the material with temperatures below 3 × 10 4 K is responsible for 48 per cent of the absorption. Therefore the temperature in our model with cooling is only slightly overestimated. The high temperature of the model jet gas, compared to the observations, could be explained by insufficient cooling. As the cooling is proportional to the density squared, higher density gas in the jet model would cool faster and down to the temperature required to account for the observed Ca II absorption. This may be achieved with models having higher gas densities in the jet or pulse. Another possibility is that high density clumps may form, for which the cooling would be very efficient. However, the spatial resolution of our model grid is too coarse for simulating such small scale structures. A third reason could be the choice of the used cooling curves.
Fig. 23. Theoretical absorption line profile for the model i with cooling (day 65), without a maximum temperature, with maximum temperatures T max = 10 6 K, 10 5 K, 5 × 10 4 K, 3 × 10 4 K and 2 × 10 4 K, and with a temperature dependent η = η(T )
They extend only down to temperatures of 10 4 K. Below that value, no cooling occurs in the present treatment. Adding more cooling processes could improve the models and perhaps also solve the ionization problem.
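How a temperature-dependent η suppresses the absorption from hot cells can be sketched with a toy cut-off function. The functional form below is purely illustrative and is not the Sutherland & Dopita fit; it is merely tuned to reproduce the quoted behavior (Ca+ dominant below ~15 000 K, below 1 per cent above 25 000 K).

```python
import numpy as np

def eta_frac(T):
    """Hypothetical smooth Ca+/Ca cut-off: ~1 for cool gas, steep drop above 1.5e4 K."""
    return 1.0 / (1.0 + (T / 1.5e4) ** 10)

T_cells = np.array([8e3, 1.2e4, 3e4, 1e5, 1e6])   # cell temperatures, K
tau_cells = np.full(5, 0.5)                        # optical depth if all Ca were Ca+

tau_const = tau_cells.sum()                        # all Ca assumed singly ionized
tau_eta = (tau_cells * eta_frac(T_cells)).sum()    # with temperature-dependent eta
```

Only the two coolest cells contribute when η(T) is applied, mimicking how the hot adiabatic jet gas loses essentially all of its Ca II absorption.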
Time variability
As a next step, the temporal evolution of the highest velocity component is investigated. This "edge" velocity of the variable absorption originates from the most recent pulse, close to the jet nozzle, which has not yet been slowed down as much as the previous pulses. As this gas is located close to the continuum emission region, the corresponding absorption feature is present for all viewing angles considered and essentially independently of the size of the emission region. In Fig. 24, the maximum outflow (negative) velocity of the high velocity component is plotted between day 360 and 380 for the first four adiabatic models and between day 50 and 74 for the model with cooling. The first result is that pulses without velocity enhancements (only density changes), as investigated in the four adiabatic models v to viii, produce hardly any velocity effects. The velocity variations created by interactions between different density regions in the jet are only of the order of a few 10 km s −1 , and occur only in model v, where the density in the pulse is the smallest. It seems unlikely that high velocity components can be produced by density jumps in the jet outflow.
The simulations with high velocity pulses all show a qualitatively similar behavior, with a sudden outflow (negative) velocity increase to the pulse velocity of −2000 km s −1 followed by a velocity decay during the following days. Closer inspection of the evolution of the high velocity component shows some differences between the four adiabatic models i - iv and the model i with cooling. In the latter, the velocity drops within one or two days from the initial −2000 km s −1 to −1400 km s −1 and then much more slowly (apparently asymptotically) to a stationary value of −1300 km s −1 during the next four days. In the adiabatic model i, the first drop is similar, but the slow decay seems to be only a plateau of two days, after which the velocity drops to −1150 km s −1 . The model ii shows the same behavior but with higher velocities (−1400 and −1250 km s −1 ). The last two models, iii and iv, decay again in an asymptotic manner to −1300 km s −1 and −1550 km s −1 , respectively. The fact that the structure and time variability of the high velocity components are quite similar in the adiabatic models and the model with radiative cooling shows that the former also have their merits. As their computational requirements are far smaller than those for models with radiative cooling, they can be used for (additional) parameter studies without a great loss of accuracy. Another point of future investigation could then be the influence of different jet pulse shapes and frequencies on the structure of the high velocity components.
Jet absorption variability: comparison with observations
Based on our simulations of jet pulses, we now construct sequences of absorption line profiles which can be compared to the spectroscopic observations of the MWC 560 jet outflow. For the line of sight we adopt a direction parallel to the jet (in-
Variations of the jet absorption for adiabatic models
For the comparison with the observations we consider only the adiabatic models i-iv. The adiabatic models v-viii can be discarded immediately because they do not show high velocity components in the synthetic absorption line profiles (Fig. 24, middle). For models i to iv the density in the jet pulses increases from 0.25, 0.5, 1 to 2 times the interpulse jet density at the nozzle. A major discrepancy in the jet absorption between our adiabatic simulations and the observations is seen in the velocity region between -300 and 0 km s^-1, where the calculated absorption is grossly overestimated. This feature appears whenever the light path from the emission region to the observer travels through the jet head (bow shock / Mach disk) region (Fig. 25). The simulations were stopped when the jet reached the opposite boundary. At this position the density of the external medium has dropped to the density value of the jet at its nozzle. In reality the jet would extend further, into a region where the density of the external medium is far smaller. There the densities in the jet head region, and therefore the depth of the absorption, should also be smaller. To account for this, we disregard the absorption produced in this region. Thus the calculation of the absorption line profile is performed only for the line of sight from z = 0 AU to 40 AU in models i and iv, and to 37.5 AU in models ii and iii.
Potentially the adiabatic models may show absorption at RV > 0 km s^-1, and some synthetic spectra indeed show weak traces of an absorption at the base of the red wing of the emission line. It must be noted that the chosen line-of-sight geometry for the synthetic absorption profiles avoids the jet backflow region. The backflow region would be clearly visible for inclined lines of sight or more extended emission regions.
Furthermore, in the line profile calculation we disregard the ionization problem discussed in section 5.6 and assume that all Ca atoms are singly ionized Ca+. This assumption is questionable, particularly for the adiabatic jet models.
With these restrictions we obtain the simulated absorption line profiles shown in Fig. 26, where sequences of the synthetic absorption line profiles are plotted for eight consecutive days for the adiabatic models i-iv.
There is reasonable agreement between the synthetic profiles and the observations. The simulations reproduce the detached, broad absorption component observed at times when no new high velocity component had been ejected in the jet nozzle during the previous days.
Not well reproduced is the strength of the high velocity absorption components due to the jet pulses: they are substantially too weak. At least the equivalent width and the depth of the high velocity components increase as the density in the jet pulse increases from n_pulse = 1.25, 2.5, 5 and 10 x 10^6 cm^-3 for models i, ii, iii and iv, respectively. It is also visible that the high velocity absorption components of the higher density pulses require a longer time to be decelerated than those of the low density pulses. For example, in Fig. 26 (top) these components can be detected on only two consecutive days, while they are present for three days in Fig. 26 (middle).
A shallow component between the absorption trough and the emission line is also visible in all models. This absorption is stronger for the adiabatic jets with higher pulse densities, and its strength is anticorrelated with that of the high velocity component, as in the observations.
Variations of the jet absorption for the model with cooling
For the calculation of the jet absorptions from model i with cooling we have likewise excluded the line-of-sight section through the ambient medium in front of the jet. This avoids the narrow, saturated absorption component in the emission component; the calculation of the absorption profile was thus performed only from z = 0 to 21 AU. The absorption for jet model i with cooling behaves qualitatively similarly to the adiabatic model. There is a broad jet absorption trough centered at about -1100 km s^-1 which is completely detached from the emission component, and again the high velocity absorptions are far too weak compared with the observations. Comparing models i with and without cooling seems to indicate, however, that the high velocity absorptions are somewhat deeper and persist somewhat longer in the model with cooling.
We note that for model i with cooling it was also assumed that all Ca is singly ionized. According to the model calculation, the temperature of the jet gas would be too high for Ca+, but the discrepancy between the temperature required for Ca+ and the calculated temperature is not very large, in particular much smaller than in the adiabatic case.
Summary and discussion
This paper describes pulsed jet models with parameters representative of symbiotic systems. Two main types of simulations were performed: a small model grid of adiabatic jets with different jet pulse parameters, and one simulation including radiative cooling. For the adiabatic jet models some quantitative differences are present for the different pulse parameters, but qualitatively the model results are very similar.
Huge differences can, however, be seen between the adiabatic models and the model which includes radiative cooling. Compared to the adiabatic models, the most important effects induced by radiative cooling are:
- The energy loss through radiation lowers the pressure and the temperature, which leads to a much smaller radial extension of the jet, so that the cross section and the resistance exerted by the external medium are strongly reduced, leading to a significantly higher bow shock velocity.
- The internal structure of the pulsed jet with cooling shows a well defined periodic shock structure, with high density, low temperature knots separated by hot, low density beam sections. Contrary to this, in the adiabatic jet the hottest regions have the highest densities, while the density is low in cool regions. The temperature and density contrast is much more pronounced in the models with cooling.
These differences are generic for model results from jet simulations without and with cooling. For the particular model i described here we can now quantify these differences.
- The jet radii are 10 AU for the adiabatic jet and 2.5 AU for the jet with cooling. These values are determined for model jets with the same axial length of 24 AU.
- The bow shock velocity after 74 days is about 200 km s^-1 in the adiabatic model, compared to 730 km s^-1 in the cooled jet.
- The gas temperature in the adiabatic jet is everywhere (except for the initial jet gas in the nozzle) higher than T ≈ 2 · 10^5 K and rises to T ≈ 10^7 K in the high density shock regions. In the jet model with cooling, the temperature is really low, T ≈ 10^4 K, in the cool, high density knots, while the hot beam regions have T ≫ 2 · 10^5 K.
- The density contrast between cool and hot regions is about a factor of 30 in the adiabatic jet, but a factor of 1000 in the jet with radiative cooling. Note that high density regions are hot in adiabatic models, while they are cool in the models with cooling.
These huge differences show clearly that adiabatic jet models are far from reality if radiative cooling is indeed important. Therefore jet models with radiative cooling should be investigated in much more detail for the high jet gas densities encountered in symbiotic systems.
Unfortunately, the time scales of cooling processes are several orders of magnitude shorter than the hydrodynamical ones. This poses a hard numerical problem that demands high computational costs. But, as mentioned above, these costs are necessary to gain new insights into the physics of jets in symbiotic stars.
Until now, only two-dimensional simulations have been performed, also owing to the high computational costs and memory demands of 3D simulations. In 3D simulations, the red giant would be taken into account correctly, not as a ring, and its influences on the jet, namely gravitation and the stellar wind, would no longer be symmetric. This would result in the backflow being blocked only in a small segment; the adiabatic jet should therefore be more turbulent from the beginning.
As Kössl & Müller (1988) and Krause & Camenzind (2001) have shown in their simulations, the numerical resolution determines whether one sees the transition from the laminar to the turbulent phase. This transition can also be seen in our simulations, implying that the resolution used here is not too low. An increased resolution should not change the main result: the shrinking of the cocoon and the necessity of performing hydrodynamical simulations with cooling to understand the structure and emission of jets in symbiotic stars.
Comparison with symbiotic systems
Our model calculations make various predictions for jets in symbiotic systems. We focus here on the basic question of whether the observations of symbiotic systems better support the adiabatic jet model or the jet model with cooling.
For MWC 560, the spectroscopic observations show strong absorptions from low ionization species such as Ca, Fe and Na. Absorptions from neutral or singly ionized metals are only expected for cool gas with temperatures around T ≈ 10 000 K or lower. The strength of these absorptions suggests that a substantial fraction of the jet gas must be in this cool state, as in the jet model with cooling. This is incompatible with the hot gas temperatures T ≫ 10^5 K present in the adiabatic models, which are not able to produce low temperature regions.
The propagation of the jet bow shock was well observed for the outburst of CH Cyg. The jet expansion was linear and fast (≈ 1500 km s^-1), producing after one year a narrow jet structure with an extension well beyond 300 AU. The estimated jet gas temperature was found to be below 10 000 K. This again agrees very well with the jet model with cooling. According to our models, an adiabatic model would be expected to produce a jet with a lower expansion velocity, a broader jet beam emission and, most importantly, only high temperature jet gas. For CH Cyg it must be said that the initial jet conditions "at the nozzle" are not known.
Less clear is the situation in R Aqr. Although jet features can be traced down to a distance of less than 20 AU, it is not clear whether the jet gas is cool or hot. A proper motion in the range 36 to 240 km s^-1 has been derived for the emission features; this would be more in the range of the gas motion of the adiabatic models. However, it is not clear where the observed emission originates: in the jet gas, or in the surrounding gas excited by shocks induced by the jet propagation. The small jet opening angle of 15° favors a jet with cooling, but a strong disk wind or a steep density gradient of the circumstellar gas could also help to enhance the collimation of an adiabatic jet.
Thus, the comparison between model simulations and observations strongly suggests that radiative cooling is important for jets in symbiotic systems. The fact that the model with cooling describes the observations better than those without cooling means that the cooling time scale in the jets of symbiotic systems must be shorter than the propagation time scale (i.e. the expansion time scale of the adiabatic gas), which is of the order of a hundred days. This condition sets a lower limit on the density inside the jet of about n ≈ 10^6 cm^-3 (at T ≈ 10^4 K), which is consistent with our initial parameters.
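This density condition can be checked with an order-of-magnitude estimate of the radiative cooling time. The sketch below assumes a cooling function Λ ≈ 10^-23 erg cm^3 s^-1 near T ≈ 10^4 K; that value is an illustrative assumption for this estimate, not a quantity taken from the simulations above.

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
T = 1e4              # gas temperature [K]
n = 1e6              # jet number density [cm^-3], the quoted lower limit
Lambda = 1e-23       # assumed cooling function near 10^4 K [erg cm^3 s^-1]

# Cooling time: thermal energy density divided by volumetric cooling rate,
#   t_cool ~ (3/2 n k_B T) / (n^2 Lambda) = 3 k_B T / (2 n Lambda)
t_cool = 3 * k_B * T / (2 * n * Lambda)   # seconds
t_cool_days = t_cool / 86400.0
# ~ a few days, comfortably shorter than the ~100 day propagation time scale
```

With these assumed numbers the cooling time comes out at a few days, so a density of 10^6 cm^-3 indeed satisfies the requirement that cooling be faster than the ~100 day propagation time scale.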
The synthetic absorption line profiles
Synthetic absorption line profiles are presented in this work, based on hydrodynamical calculations of pulsed jets. These "theoretical" profiles are compared to observations of the jet in the symbiotic system MWC 560.
An important point is that the gas temperature in the adiabatic jet is everywhere far too high to produce the low ionization absorptions seen in the observations. Even in the model simulations including radiative cooling, the gas temperature in the jet is too high to produce the observed Ca line strengths. Thus, more efficient cooling of the jet gas is required. This can be achieved with higher gas densities in the model jet outflow, or perhaps with a higher resolution of the hydrodynamical calculation, which may then be able to resolve high density, small scale clumps. A third solution could be to add further cooling processes that become important at temperatures below 10^4 K. Additional modeling is required to investigate which of these solutions is most likely.
Disregarding the ionization problem, we calculated the synthetic jet absorption line structure. This line structure represents essentially a projection of the gas radial velocity along the line of sight, which is more or less parallel to the jet.
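This projection of the gas radial velocity onto an absorption profile can be sketched numerically. Everything in the snippet below is a hypothetical stand-in (a toy constant opacity, thermal-broadening width, and two representative velocity components), not the actual grid data or opacities of the simulations; it only illustrates the principle of summing optical depth contributions from cells along the line of sight.

```python
import numpy as np

def absorption_profile(v_cells, n_cells, dz, v_grid, v_therm=10.0, kappa=1e-18):
    """Toy synthetic absorption profile along one line of sight.

    v_cells : radial velocity of each cell [km/s] (negative = outflow)
    n_cells : absorber number density in each cell [cm^-3]
    dz      : cell length along the line of sight [cm]
    v_grid  : velocities at which the profile is evaluated [km/s]
    v_therm mimics thermal broadening; kappa is a stand-in line opacity.
    """
    tau = np.zeros_like(v_grid)
    for v, n in zip(v_cells, n_cells):
        phi = np.exp(-0.5 * ((v_grid - v) / v_therm) ** 2)  # line profile
        tau += kappa * n * dz * phi       # optical depth per velocity bin
    return np.exp(-tau)                   # residual flux (1 = no absorption)

# Two illustrative components: a dense beam near -1100 km/s and a
# thinner, fast pulse near -2000 km/s
v_grid = np.linspace(-2500, 0, 500)
flux = absorption_profile(
    v_cells=np.array([-1100.0, -2000.0]),
    n_cells=np.array([1e6, 2e5]),
    dz=1.5e13, v_grid=v_grid)
```

The dense component saturates while the fast, low-density one produces a weaker trough, qualitatively mirroring the broad detached absorption and the transient high velocity components discussed above.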
A success of our computations is that the basic structure of the jet absorption in MWC 560, namely the broad, detached component, is well reproduced. The mean velocity and the velocity width are in good agreement with the observations. Surprisingly, both the adiabatic jet models and the model with cooling provide qualitatively similar absorption profiles for the jet beam and the transient high velocity components.
Herein lie the merits of the adiabatic models, which enable us to perform parameter studies that would be very expensive for models with radiative cooling. However, as we disregard the ionization equilibrium, we should more correctly speak of the projected RV distribution of the gas.
The temporal evolution of the highest velocity components of the transient jet pulses also follows the observed behavior. Not well reproduced by our simulations are the strengths of the high velocity components, which are far too weak. This suggests that higher gas densities are required for the jet pulses.
From the presented comparison between the synthetic and observed jet absorption line structure, we conclude that the general direction of our modeling is correct. Due to the high temperatures, the adiabatic simulations are inadequate to explain the observed low ionization absorptions. However, disregarding the temperature, they provide an easily calculated means to explore the projected velocity distribution of the gas in a pulsed jet along the line of sight. Our current jet simulation with radiative cooling does not fully solve the ionization problem, as the gas temperature is still somewhat too high. However, the discrepancy is rather small and may be resolved by calculations with higher gas densities in the jet, which enhance the efficiency of gas cooling. Higher gas densities are also required to reproduce the strengths of the absorption in the transient high velocity components.
With our comparison between synthetic and observed line profiles we have demonstrated the huge diagnostic potential of the spectroscopic observations of the jet absorption profiles in MWC 560. The information that can be extracted from these observations is unique among astrophysical jets and is therefore a most important resource for a better understanding of astrophysical jets, in particular for the detailed investigation of the propagation and evolution of small scale structures originating in the jet acceleration region.
In all simulations the high velocity components have a discrete, step-like shape. This could be a result of the assumed rectangular velocity and density steps of the pulses; a Gaussian pulse form could possibly smooth the components in the absorption line profiles.
A point where these simulations can be further improved is the elapsed time, and therefore the computed length of the resulting jet. We only simulated the jet until it reached a length of 50 AU. At this point, however, the density of the environment has decreased to the density in the jet nozzle. A transition of the initially underdense jet (n_jet/n_wind < 1) towards an overdense jet will occur, which should result in changed kinematics. A direct effect on the absorption line profiles should be a decreased density of the shocked ambient medium, which was artificially cut out in the calculation of the profiles in this paper. After extending the computational domain, this manipulation should no longer be necessary. Simulations with larger domains are currently being carried out.
A Healthy Fear of the Unknown: Perspectives on the Interpretation of Parameter Fits from Computational Models in Neuroscience
Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
The use of models to infer the neural computations that underlie behavior is becoming increasingly common in neuroscience research, especially for cognitive and perceptual tasks involving decision making and learning. As their sophistication and usefulness expand, these models become increasingly central to the design, analysis, and interpretation of experiments. We consider this development to be generally positive but provide here some perspectives on the challenges inherent to this approach, particularly when behavior might be driven by unexpected factors that can complicate the interpretation of model fits. Our goal is to raise awareness of these issues and present complementary approaches that can help ensure that our understanding of the brain does not become overly conditioned to the quality of existing models fit to particular data sets.
We illustrate these challenges using a set of models that describe the ongoing process of learning values to guide actions and that are used extensively in the field of cognitive neuroscience [1][2][3][4][5][6][7][8][9][10][11][12][13]. These models adjust expectations about future outcomes according to the difference between actual and predicted outcomes, known as the prediction error. Originally developed in parallel in both animal- and machine-learning fields [14][15][16], this relatively simple form of reinforcement-learning algorithm (often referred to as a "delta rule" because the prediction error is typically represented by the Greek letter delta (δ) in the equations) has: 1) provided efficient solutions to a broad array of biologically relevant problems [15]; 2) accounted for many, but not all, learning phenomena exhibited by both human and nonhuman subjects [17,18]; 3) provided a generative architecture that has been used to predict behavior across tasks, compare brain activity to learning variables within a single task, and explore the range of possible behaviors that one might expect to find in a variable population [19,20]; and 4) guided an understanding of the neural computations expressed by the brainstem dopaminergic system [21]. These successes have led to the proposal that the interpretation of delta-rule model parameters fit to behavioral data from human subjects performing simple learning tasks might serve as a more precise diagnostic tool for certain mental disorders than existing methods [22][23][24]. Thus reinforcement-learning models are becoming highly influential in guiding and filtering our understanding of normal and pathological brain function.
Here we focus on the interpretation of a term in most delta-rule models called the learning rate. The learning rate, α, determines the amount of influence that the prediction error, δ, associated with a given outcome has on the new expectation of future outcomes, E:

E_new = E_old + α · δ (Eq. 1)

As its name implies, the learning rate determines how quickly the model adapts to errors. A fixed value near zero implies that expectations are updated slowly, essentially averaging over a long history of past outcomes. In contrast, a fixed value near one implies that expectations are updated quickly to match the most recent outcomes. Thus, the learning rate can be interpreted as the amount of influence each unpredicted outcome exerts on the subsequent expectation. These updated expectations can, in turn, be used to select actions, often through a soft-max function with an inverse-temperature parameter. This parameter can be adjusted to optimize the trade-off between exploiting actions known to be valuable in the present (emphasized at higher inverse temperatures) and exploring actions that might be valuable in the future (emphasized at lower inverse temperatures) [12,13,15,25]. Recent work has highlighted the advantages of using learning rates that, instead of remaining fixed, are adjusted adaptively according to environmental dynamics [9][10][11][26][27][28]. For example, adaptive learning rates can help ensure that expectations remain relatively stable in stationary environments but change rapidly in response to abrupt environmental changes. Consistent with this idea, human behavior in tasks containing abrupt changes conforms to models in which the influence of each outcome depends on the statistics of other recent outcomes. Such rational adjustments of learning rate are most prominent after changes in action-outcome contingencies that lead to surprisingly large prediction errors [9][10][11].
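The delta-rule update and soft-max action selection described above can be written in a few lines. This is a generic textbook sketch, not the authors' code; variable names and the toy demo values are our own.

```python
import numpy as np

def delta_rule_update(E, outcome, alpha):
    """One delta-rule step (Eq. 1): nudge the expectation E toward the
    observed outcome by a fraction alpha of the prediction error."""
    delta = outcome - E              # prediction error
    return E + alpha * delta

def softmax(values, beta):
    """Soft-max action probabilities with inverse temperature beta.
    High beta favors exploitation; low beta favors exploration."""
    z = beta * (values - np.max(values))   # subtract max for stability
    p = np.exp(z)
    return p / p.sum()

# Tiny demo: learn the value of a single option that always pays 1.0
E, alpha = 0.0, 0.1
for _ in range(50):
    E = delta_rule_update(E, 1.0, alpha)
# After 50 updates, E = 1 - 0.9**50, i.e. very close to 1.0
```

Note how a fixed α amounts to an exponentially weighted average of past outcomes, which is exactly why a single fixed rate struggles to be both stable in quiet periods and responsive after change-points.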
Here we consider in detail two of these change-point tasks. The first, an estimation task, requires subjects to predict the next in a series of outcomes (randomly generated numbers) [9]. Each outcome is drawn from a normal distribution with a fixed mean and variance. However, the mean of this distribution is occasionally reset at random times, producing abrupt change-points in the series of outcomes. Learning rates can be measured directly on a trial-by-trial basis, using predictions and outcomes plugged into Eq. 1. Previous work showed that subjects performing this task tended to use learning rates that were consistent with predictions from a reduced form of a Bayesian ideal-observer algorithm, including a positive relationship between error magnitude and learning rate. However, the details of this relationship varied considerably across individual subjects. Some subjects tended to use highly adaptive learning rates, including values near zero following small errors and values near one following surprisingly large prediction errors. In contrast, other subjects used a much narrower range of learning rates, choosing similar values over most conditions. This across-subject variability was described by a flexible model that could generate behaviors ranging from that of a fixed learning-rate delta rule to that of the reduced Bayesian algorithm, depending on the value of a learning rate ''adaptiveness'' parameter.
The second task is a four-alternative forced-choice task that includes occasional, unsignaled change-points in the probabilistic associations of monetary rewards for each choice target [11]. Learning rates are not measured directly, as in the estimation task, but rather inferred from model fits. The best-fitting models incorporate learning rates that increase transiently after unexpectedly large errors, although the magnitude of this increase differs across subjects. The existence of this kind of across-subject variability can have dramatic effects on the interpretation of best-fitting parameters from models that do not account for this variability explicitly. Here we illustrate this problem by fitting behavioral data corresponding to different forms of adaptive learning with delta-rule models that neglect adaptive learning entirely. However, we emphasize that this problem is not limited to adaptive learning but can also arise when neglecting other factors that can influence performance on learning tasks, such as a tendency to repeat choices [29,30], and more generally whenever oversimplified models are fit to complex behavioral data.
We used simulations of the two tasks to illustrate how fitting models with fixed learning rates to behavior that is based on adaptive learning rates can lead to misleading conclusions. For each task, behavioral data were simulated using a deltarule inference algorithm with different levels of learning-rate adaptiveness coupled with a soft-max function for action selection. These simulated data were then fit, using maximum-likelihood methods, with a simpler model that included two free parameters: a fixed learning rate and the inverse temperature of a soft-max action-selection process (see Text S1). In all cases, the simpler, fixed learning-rate model was preferred over a null model constituting random choice behavior, even after penalizing for additional complexity (e.g., using BIC or AIC; see Text S1). Despite passing these model-selection criteria, we highlight two misleading conclusions that might be drawn from these fits: biased estimates of adaptive learning and of exploratory behavior.
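The fitting procedure just described can be illustrated concretely. The sketch below is not the authors' code: it simulates a simplified two-armed bandit agent with a fixed learning rate and one reward reversal, then recovers that rate by maximizing the choice likelihood over a grid (with the inverse temperature held at its true value for brevity). When the generative and fitted models match, as here, recovery is roughly unbiased; the biases described in the text arise when the simulated agent instead uses an adaptive learning rate that the fitted model omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_probs(Q, beta):
    """Soft-max choice probabilities with inverse temperature beta."""
    z = np.exp(beta * (Q - Q.max()))
    return z / z.sum()

def simulate(alpha, beta, n_trials=1000):
    """Two-armed bandit agent with a FIXED learning rate; reward
    probabilities reverse halfway through (a single change-point)."""
    Q = np.zeros(2)
    choices = np.empty(n_trials, dtype=int)
    rewards = np.empty(n_trials)
    for t in range(n_trials):
        p_rich = (0.8, 0.2) if t < n_trials // 2 else (0.2, 0.8)
        c = rng.choice(2, p=choice_probs(Q, beta))
        r = float(rng.random() < p_rich[c])
        Q[c] += alpha * (r - Q[c])        # delta-rule update (Eq. 1)
        choices[t], rewards[t] = c, r
    return choices, rewards

def neg_log_lik(alpha, beta, choices, rewards):
    """Negative log-likelihood of a choice sequence under a fixed
    learning-rate delta-rule model with soft-max action selection."""
    Q, nll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        nll -= np.log(choice_probs(Q, beta)[c])
        Q[c] += alpha * (r - Q[c])
    return nll

# Generate behavior with alpha = 0.3, then recover it by grid search
choices, rewards = simulate(alpha=0.3, beta=5.0)
grid = np.linspace(0.05, 0.95, 19)
best = min(grid, key=lambda a: neg_log_lik(a, 5.0, choices, rewards))
```

Replacing the fixed α inside `simulate` with one that jumps after large prediction errors, while leaving the fitted model unchanged, reproduces in miniature the upward bias in fitted learning rates reported in the text.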
The problem of misestimating adaptive learning is depicted in Figure 1A & B. Panel A shows simulations based on the estimation task. For this task, learning rate is measured directly as the proportion of the current prediction error used to update from the current prediction to the next prediction [9]. As expected, variability in measured learning rates tended to increase with learning-rate adaptiveness. The average value of measured learning rates also tended to decrease with learning-rate adaptiveness, because changepoints that dictate high values of adaptive learning rates were relatively rare in our simulated tasks (black circles and error bars reflect median and interquartile range, respectively, across 800 simulated trials).
The model fits, however, tell a different story. When behavior was simulated using a fixed learning rate (learning-rate adaptiveness = 0), the best-fitting models naturally captured the appropriate value. However, when behavior was simulated using increasingly adaptive learning rates, the fixed learning-rate models returned systematically larger estimates of learning rate than were actually used by the simulated subjects ( Figure 1A, gray points).
Panel B shows simulations based on the four-choice task, for which we determined the learning rate on each trial based on its value in the internal, generative process used in the simulations. Data from this task tell a similar story. Simulated learning rates were lower but more variable for more adaptive models (black circles and error bars reflect median and interquartile range), yet fit learning rates were higher for these same models ( Figure 1B, gray points). These data suggest that periods of rapid learning (i.e., following change-points) are more influential than periods of slow learning on maximum-likelihood fits of the fixed learning-rate parameter, which thus becomes biased upwards when the underlying learning rate is adaptive.
The problem of misestimating exploratory behavior is depicted in Figure 1C & D. We first simulated behavior on both the estimation task and the four-choice task using a fixed learning rate and an action-selection process governed by an inverse-temperature parameter. In each case, fits from a model with a fixed learning rate and an inverse-temperature process returned appropriate estimates of the inverse temperature used in the generative process (left-most circles in Figure 1C & D, corresponding to learning-rate adaptiveness = 0). However, when the simulated subjects used increasingly adaptive learning rates, inverse-temperature fits from a fixed learning-rate model substantially overestimated the true variability in action selection (circles in Figure 1C & D: inferred inverse temperature decreases as learning-rate adaptiveness increases). Such biased parameter estimates were not simply a problem with the fixed learning-rate model. Fitting an alternative model that used optimal (maximally adaptive) learning rates [9,31] to the behavior of the same simulated subjects yielded a complementary pattern of biases: the model accurately inferred the level of exploratory action selection for simulated subjects that chose learning rates adaptively but overestimated this quantity for subjects that used simpler strategies of less-adaptive, or even fixed, learning rates (squares in Figure 1C: inferred inverse temperature decreases as learning-rate adaptiveness decreases). For both models, these problems were not apparent from standard analyses of best-fitting parameter values, which had similar confidence intervals and covariance estimates for biased and unbiased fit conditions (see Text S1). These problems also did not simply reflect difficulties in estimating model parameters when the inverse temperature was low and behavior was more random, because the problem was also apparent when the inverse temperature was high. Thus, subtle differences in learning that were not accounted for by the inference model caused underestimation of the inverse-temperature parameter, which might be misinterpreted as increases in exploratory action selection.

Figure 1. In all panels, the abscissa represents learning-rate adaptiveness (0 is equivalent to using a fixed learning rate; higher numbers indicate higher adaptiveness to unexpected errors). A & B. Actual (black) and model-inferred (gray) learning rates used by agents with different levels of learning-rate adaptiveness. Points and error bars represent the median and interquartile range, respectively, of data from six simulated sessions. C & D. Best-fitting values of the inverse-temperature parameter, intended to describe exploratory behavior, inferred using a fixed delta-rule (circles) or approximately Bayesian (squares) model. Shades of gray indicate the level of exploratory behavior of the simulated agent, as indicated. Arrows indicate the actual value of the inverse-temperature parameter used in the generative process. Points and error bars (obscured) represent the mean and standard error of the mean, respectively, of data from six simulated sessions. doi:10.1371/journal.pcbi.1003015.g001
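The way unmodeled learning variability deflates the fitted inverse temperature can likewise be illustrated with a small, hypothetical sketch (illustrative assumptions throughout, not the authors' simulation): choices are generated by a softmax whose value differences carry extra, unmodeled noise, and the inverse temperature is then fit assuming noise-free values.

```python
import math
import random

random.seed(1)

# Softmax choice between two options: P(choose 1) = 1 / (1 + exp(-beta * dQ)).
# Choices are generated with extra, unmodeled noise in the subject's values
# (a stand-in for unmodeled adaptive learning); the fit assumes clean values.
n_trials, beta_true, value_noise_sd = 4000, 5.0, 1.0
dq = [random.uniform(-1, 1) for _ in range(n_trials)]            # modeled value differences
dq_internal = [d + random.gauss(0, value_noise_sd) for d in dq]  # subject's noisy values
choices = [1 if random.random() < 1 / (1 + math.exp(-beta_true * d)) else 0
           for d in dq_internal]

def neg_log_lik(beta):
    nll = 0.0
    for d, c in zip(dq, choices):        # the model ignores the extra value noise
        p = 1 / (1 + math.exp(-beta * d))
        nll -= math.log(p) if c else math.log(1 - p)
    return nll

grid = [0.1 + 0.05 * i for i in range(199)]   # beta candidates in [0.1, 10]
beta_fit = min(grid, key=neg_log_lik)
print(round(beta_fit, 2))   # well below beta_true = 5
```

Variability that the model cannot explain is attributed to stochastic action selection, so the fitted inverse temperature falls well below the generative value of 5, mirroring the pattern in Figure 1C & D.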
Diagnosing these kinds of problems is difficult, especially when the subtle aspect of behavior that is missing from the model is unknown. Model-selection practices that compare likelihoods of various models (after either cross validation or penalization of parameter numbers) are useful for identifying the better of two or more models with respect to particular data sets. However, these practices require a priori knowledge of the models to be tested and cannot, by themselves, indicate what might be missing from the tested models. One might be tempted to interpret likelihoods directly and set a criterion for what might be considered a "good" model. However, these metrics cannot say whether or not a model is correct (or even sufficiently good, given that no fit model is truly correct). For example, consider a test of the suitability of a fixed learning-rate model for simulated subjects that can vary in terms of learning-rate adaptiveness and exploratory behavior. Similar values of AIC, BIC, and other likelihood-based quantities are obtained for fixed delta-rule models fit to two very different subjects: one who uses a fixed learning rate, which is consistent with the model, and relatively high exploration; and another who uses a highly adaptive learning rate, which is inconsistent with the model, and relatively low exploration. Interpretation of parameter fits from the latter case would be misleading, whereas parameter fits from the former would be asymptotically unbiased and thus more informative.
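For reference, the likelihood-based criteria mentioned here are simple functions of the maximized log-likelihood ln L, the number of free parameters k, and (for BIC) the number of observations n; the numbers below are made-up illustrations.

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * log_lik

# Two hypothetical fits with similar likelihoods receive similar scores,
# even if one model is structurally wrong for its subject.
print(aic(-410.0, 2), bic(-410.0, 2, 300))
print(aic(-412.0, 2), bic(-412.0, 2, 300))
```

Two fits with nearly identical scores can describe one subject the model suits and another it does not, which is why these criteria alone cannot certify that a model is sufficient.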
To overcome these limitations, it is sometimes effective to look for indications that a model is failing under specific sets of conditions for which behavior is heavily influenced by the assumptions of the model. For the case of adaptive learning, fixed learning-rate models fail to address adaptive responses to inferred change-points in the action-outcome contingency. Thus, it can be instructive to examine the likelihoods of these models computed for choice data collected shortly after change-points. For the case of the estimation task, a fixed learning-rate model shows an obvious inability to account for data from trials just after a change-point for all but the least adaptive simulated subjects (Figure 2A; dip in log-likelihood at trial 1). However, this approach is not effective for the four-choice task (Figure 2B).
Another potentially useful approach for diagnosing misleading parameter fits is to compute these fits using subsets of data that might correspond to different best-fitting values of certain parameters. For the estimation task, eliminating data from trials immediately following change-points has dramatic effects on fits for both learning rate (Figure 2C) and inverse temperature (Figure 2E). However, this diagnostic approach is far less effective for the four-choice task, for which adjustments in learning rate occur with a longer and more variable time course following change-points (Figure 2D & F). Thus, for tasks like the estimation task that provide explicit information about the subject's underlying expectations, the insufficiency of the fixed learning-rate model can be fairly simple to diagnose. However, for tasks like the four-choice task in which information about the subject's expectations is limited to inferences based on less-informative choice behavior, parameter biases are still large (Figure 1B & D), but model insufficiency is far less apparent.
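Both diagnostics start from per-trial log-likelihoods aligned to change-points. A minimal, hypothetical helper (toy numbers, not data from the tasks above) might look like:

```python
def loglik_by_lag(loglik, changepoints, max_lag=5):
    """Average per-trial log-likelihood as a function of trials since the
    most recent change-point; a dip at small lags flags model misfit that
    is concentrated around change-points."""
    by_lag = [[] for _ in range(max_lag)]
    last = None
    for t, ll in enumerate(loglik):
        if t in changepoints:
            last = t
        if last is not None and t - last < max_lag:
            by_lag[t - last].append(ll)
    return [sum(v) / len(v) if v else None for v in by_lag]

# Toy data: the fit is poor only on each change-point trial (indices 2 and 6).
ll = [-1.0, -1.0, -5.0, -1.0, -1.0, -1.0, -5.0, -1.0, -1.0, -1.0]
print(loglik_by_lag(ll, {2, 6}, max_lag=3))   # [-5.0, -1.0, -1.0]
```

A dip at small lags flags misfit concentrated around change-points; refitting with those trials removed then shows how strongly they drive the parameter estimates.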
A sobering conclusion that can be drawn from these examples is that even when the parameter fits from a computational model are reasonably likely to produce a data set, and even when this likelihood is robust to perturbations in the specific trials that are fit or the settings of other parameters in the model, the model might still be missing specific features of the data. Missing even a fairly nuanced feature of the data (such as adaptive learning) can lead the parameters in the model to account for the feature in surprising ways. These unexpected influences can lead to parameter fits that, if interpreted naïvely, might suggest computational relationships that are unrelated to, or even opposite to, the true underlying relationships. Here we use an example from reinforcement learning, but the lessons apply to any model-fitting procedure that requires the interpretation of best-fitting parameter values. Certain parameters, like the inverse-temperature parameter in reinforcement-learning models, are particularly susceptible to this problem, because they are always sensitive to other sources of behavioral variability that are incompletely described by the rest of the model.
These challenges highlight the narrow wire on which the computational neuroscientist walks. On one hand, we seek to generalize a wide array of physiological and behavioral data from different tasks onto a tractable set of computational principles. On the other hand, the results that we obtain from each experiment are conditioned on assumptions from the particular model through which they are obtained. We believe that the goals of computational neuroscience are possible even in the face of this contradiction. Obtaining generalizable results depends on not only good modeling practices [32] but also the extensive use of descriptive statistics to dissect and interpret both experimental data and simulated model data. For example, the estimation task described above was designed to allow learning rates from individual trials to be computed directly and not inferred via model fits to resulting choice behaviors. This approach revealed clear task-dependent effects on adaptive learning [9]. In principle, congruence between these kinds of direct analyses of behavioral data and fit model parameters can help support interpretations of those parameters and has the advantage of testing modeling assumptions and predictions explicitly rather than via comparisons of different model sets [8,33,34]. In contrast, inconsistencies between direct analyses and fit model parameters can help guide how the model can be modified or expanded, keeping in mind, of course, that adding to a model's complexity can improve its overall fit to the data but often by overfitting to specious features of the data and making it more difficult to interpret the contributions of individual parameters [35].
In summary, model fits to behavioral data can provide useful and important insights into the neurocomputational principles underlying such behavior but should not replace good experimental designs that explicitly isolate, manipulate, and/or measure the behavioral processes of interest. Combining such designs with both model fitting and other kinds of analyses can support steady progress in attaining a more general understanding of the neural basis for complex behaviors that is not overly tied to a particular model or behavioral test.
Supporting Information
Text S1 Provides methods for simulations and model fitting as well as Bayesian information criterion values for each set of models. (DOCX)
A Clinical-Epidemiological Study on Beta-Blocker Poisonings Based on the Type of Drug Overdose
Background Beta-blockers carry a high risk of potentially fatal poisoning if overdosed. We aimed to assess the clinical and epidemiological characteristics of patients with beta-blocker poisoning. Methods Patients were categorized based on the type of drug poisoning into propranolol, other beta-blocker, and combined beta-blocker groups, respectively. Demographic data, drug toxicity, and clinical, laboratory, and treatment information of the different groups were compared. Results During the study period, 5086 poisoned patients were hospitalized, of whom 255 (5.1%) had beta-blocker poisoning. Most patients were women (80.8%), married (50.6%), with a history of psychiatric disorders (36.5%), previous suicide attempts (34.6%), and intentional exposure (95.3%). The mean ± SD age of the patients was 28.94 ± 11.08 years. Propranolol toxicity was the most common among the different beta-blockers (84.4%). There was a significant difference in age, occupation, education level, and history of psychiatric diseases with respect to the type of beta-blocker poisoning (P < 0.05). We observed changes in the consciousness level and a need for endotracheal intubation only in the third (combined beta-blocker) group. Only 1 (0.4%) patient, in the combined beta-blocker group, had a fatal outcome. Conclusion Beta-blocker poisoning is not common in our poisoning referral center. Propranolol toxicity was the most common among the different beta-blockers. Although symptoms did not differ among the defined beta-blocker groups, more severe symptoms were observed in the combined beta-blocker group. Only one patient, in the combined beta-blocker group, had a fatal outcome. Therefore, poisoning circumstances have to be investigated thoroughly to screen for coexposure to combined drugs.
Introduction
In general, poisoning refers to the adverse effects that occur following the use of drugs or chemicals and is one of the important causes of morbidity and mortality [1]. Epidemiological studies have indicated that up to 75% of hospital admissions could be related to drug poisoning [2,3]. One of the types of drug poisoning is beta-blocker poisoning. Beta-blockers are often nonspecific blockers that act on the beta-1 receptors located in the heart and arteries. By blocking beta-1 receptors, they reduce the heart rate and lower blood pressure [4,5]. Beta-blocker overdosing carries a high risk of potentially causing fatal poisonings because of its strong therapeutic effect and rapid onset of action [6]. There is a belief that beta-blocker overdosing poses a serious, life-threatening risk to patients and is difficult to treat. In a large survey of US poison centers in 2003, the case fatality rate among 7,415 cases treated in healthcare facilities was reported at 0.45% [7]. One study showed that cardiovascular mortality was 1.4% among 280 beta-blocker exposures [8].
It seems that the most important factor involved in cardiovascular complications in beta-blocker poisoning is the concomitant use of other drugs, especially calcium channel blockers, cyclic antidepressants, and neuroleptics [8-10]. In the case of beta-blockers, if there are no symptoms up to 6 hours after oral administration, poisoning seems unlikely [11,12]. The prognosis of beta-blocker poisoning is excellent, especially with prompt medical intervention. In the absence of prompt intervention, exposure to a beta-blocker with membrane-stabilizing activity is associated with an increased risk of cardiovascular morbidity [8].
Evaluation of toxicity with beta-blockers indicated that propranolol was the only beta-blocker associated with seizures; of those who ingested more than 2 g of propranolol, two-thirds experienced a seizure [13]. Another study showed that different types of beta-blockers might have various levels of toxicity, while poisoning with propranolol and metoprolol was more frequent and associated with higher rates of cardiovascular complications [8].
Poisoning with these drugs has various consequences depending on the type of treatment intervention, the time interval between referral to health centers and the start of treatment, and a series of patient-dependent factors. Therefore, in this study, we aimed to investigate the clinical and epidemiological data of patients with beta-blocker poisonings at the poisoning referral center in the central part of Iran. The prognosis of beta-blocker poisoning was assessed based on the type of drug toxicity by examining the patients' records.
Methods
This cross-sectional study was performed in 2021 at Khorshid Hospital, affiliated with Isfahan University of Medical Sciences. The records of all patients who were referred to our center in 2018 because of beta-blocker poisoning were reviewed. The study protocol was approved by the Research Committee of Isfahan University of Medical Sciences, and the Ethics Committee confirmed it (IR.MUI.MED.REC.1399.040).
The inclusion criteria were age of more than 8 years, poisoning by beta-blockers, availability of medical records, and complete medical documents. Among 259 patients who were suspected of beta-blocker intoxication, 255 were included in the study. In cases of multiple drug intake, patients who had taken other cardiovascular drugs (antihypertensive and antiarrhythmic) with beta-blockers were excluded from the study. Patients with a history of severe cardiac arrhythmia, renal or hepatic dysfunction, and those who left the hospital voluntarily or without permission while their follow-up was continuing were excluded. Patients were categorized into three groups according to the type of drug poisoning: propranolol, other beta-blockers (including metoprolol, bisoprolol, atenolol, and carvedilol), and a combination of beta-blockers, respectively (Figure 1).
The following information about each poisoning was collected from the documents: personal characteristics (such as age, sex, marital status, level of education, and occupation), characteristics related to the poisoning (type of drug, number of drugs taken, and location of drug use), mode of poisoning (intentional, accidental, or overdose), history of addiction and type of addiction (alcohol, cigarettes, opiates, or others), length of hospitalization, medical history of psychiatric illness and suicide history, clinical findings in the main organ systems including the central nervous system (CNS), heart, skin, and eyes (miosis or mydriasis), deep tendon reflexes, palmar reflex, and vital signs (blood pressure, respiration rate, pulse rate, and body temperature) at baseline, laboratory data, treatments performed (charcoal, atropine, glucose, calcium, glucagon, dialysis, and other treatments), and treatment outcome (complete recovery or death). All poisonings registered at our medical center were collected after extracting the desired data and entering them into a computer file with a special format. The obtained data were entered into the Statistical Package for the Social Sciences (SPSS) version 24. Statistical analyses were performed in two parts: descriptive and analytical. In the descriptive part, the reports were presented as a percentage (number) for qualitative variables and a mean (variance) for quantitative variables. In the analytical section, the relationships between age, sex, frequency of predictive factors, and treatment outcome were examined based on different outcomes by eliminating possible confounders using logistic regression. We used independent t-tests and repeated-measures tests to compare data between different timelines and different groups. P < 0.05 was considered statistically significant.
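As a toy illustration of the between-group comparisons described above (hypothetical numbers, not the study's data; Welch's unequal-variance form is assumed here, since the paper does not specify the t-test variant), the t statistic can be computed directly:

```python
import math

def welch_t(a, b):
    """Welch's independent-samples t statistic (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical pulse-rate samples (beats/min) for two outcome groups.
recovered = [78, 82, 75, 80, 77, 84, 79]
severe = [55, 60, 52, 58, 57]
print(round(welch_t(recovered, severe), 2))   # → 12.83
```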
Results
During the study period, 5086 patients were admitted to our medical center because of poisoning. Among the 259 patients suspected of beta-blocker intoxication, 255 were included in the study; thus, 255 (5.1%) patients were hospitalized because of beta-blocker poisoning. Of the 255 patients included in this study, only 1 (0.4%) had a fatal outcome. The mean ± SD age of the patients was 28.94 ± 11.08 years. Most patients (n = 206, 80.8%) were women, married (50.6%), housekeepers (53.3%), with a history of psychiatric diseases (36.5%), previous suicide attempts (34.6%), and intentional ingestion (95.3%).
Data analysis showed that there were no significant correlations between the outcomes and the demographic characteristics, drug toxicity parameters, or medical history. Based on our analysis, the following characteristics were most prevalent among patients: married status, female sex, being a housewife or freelance worker, rural residence, hospital admission, intentional poisoning with drugs, nonaddict status, and cigarette consumption.
However, a normal pulse rate was significantly more common among recovered patients (P = 0.001), and the absence of intubation in the recovered group was strongly significant when compared with the fatality group (P = 0.001). Figure 2 shows the types of beta-blockers consumed. There was no significant relationship between the type of beta-blocker and the final outcome (P = 1.00). Based on our results, propranolol was taken by 84.4% of the patients. The other beta-blockers were as follows: metoprolol (6.5%), atenolol (1.1%), bisoprolol (0.8%), propranolol and metoprolol (1.9%), and bisoprolol and metoprolol (0.8%). Beta-blockers were taken orally by all patients, most often at home.
Further analysis of demographic data between patients based on the type of drug poisoning revealed several differences. We observed a significant correlation between age, occupation, education level, and history of psychiatric diseases and the type of beta-blocker poisoning. A lower mean age was seen in patients poisoned with other beta-blockers (P = 0.003). Most patients poisoned with propranolol were single, while married patients were mostly poisoned with a combination of beta-blockers (P = 0.06). Beta-blocker overdose was mostly observed among housekeepers and students (P = 0.015). Between the mentioned groups, we did not observe a significant difference based on addiction and suicide history. A positive history of psychiatric diseases was more frequent among patients poisoned with a combination of beta-blockers (P = 0.03, Table 1). 62% of the patients had taken a combination of beta-blockers.
In terms of CNS evaluation, we observed changes in the consciousness level only in the combined beta-blocker toxicity group, including two patients with coma, two patients with stupor, and four patients with restlessness. Endotracheal intubation was performed for six patients in the combined beta-blocker toxicity group (Table 1). Only 1 (0.4%) patient had a fatal outcome, in toxicity with a combination of beta-blockers. None of the laboratory parameters differed between groups. In addition, charcoal therapy was performed in 25.5%, 12.1%, and 62.3% of cases in the propranolol, other beta-blocker, and combined beta-blocker groups, respectively.
Discussion
We aimed to evaluate beta-blocker toxicity and compare clinical and laboratory data among the types of beta-blockers. Beta-blocker poisoning is not common in our referral poisoning center. In recent research, much attention has been given to the treatment of beta-blocker toxicity, while few studies have evaluated its epidemiology and clinical manifestations. This might be due to the lower prevalence of suicide attempts with beta-blockers in other regions and the higher rates of intentional beta-blocker toxicity in our society, since these drugs are prescribed for many reasons, including problems such as headaches.
We observed that propranolol toxicity was the most common among the different beta-blockers, followed by metoprolol. Its main indications are the treatment of cardiovascular disease and migraine. In addition, propranolol is also used "off-label" to treat fear of social situations, panic disorder, and other types of anxiety disorders [14]. This could explain the greater number of propranolol prescriptions and the greater toxicity of this drug compared with the others.
It was found that the patients in the defined beta-blocker groups did not differ in terms of the frequency and distribution of symptoms following exposure. In 2019, a study conducted by Lauterbach in Germany on 2967 cases of beta-blocker toxicity showed that there were significant differences in the occurrence and severity of symptoms among different types of beta-blockers [6].
In our study, there were more patients with bradycardia and coma in the combined beta-blocker group. In addition, more intubations and deaths were also observed in this group. Only one case (0.4%) had a fatal outcome, in the combined beta-blocker toxicity group. Lauterbach [6] recently indicated, in a 10-year retrospective, explorative analysis of the Mainz Poison Center (Germany) database, that all patients with beta-blocker toxicity recovered and that the only potential fatality involved coexposure to verapamil. This is compatible with our result that the one fatality was observed in the group poisoned with a combination of beta-blockers. In another study, in 2015, Menke et al. evaluated the toxicity of cardiovascular xenobiotics in 10577 patients with a single exposure to beta-blockers. They showed that 9.6% of the patients had moderate-to-severe outcomes [15]. Indeed, the high recovery rates in all the mentioned studies were in line with our study in terms of a low mortality rate and good outcomes.
It has also been mentioned that cardiovascular complications such as bradycardia and hypotension are observed in most beta-blocker poisoning cases [6,16]. In our study, symptoms were mainly cardiovascular, with bradycardia observed in only 17 (6.6%) patients. A recent study evaluated beta-blocker poisoning and its required treatments. The researchers found that beta-blocker poisoning is a serious clinical condition, mostly associated with cardiovascular manifestations, for which immediate medical intervention is necessary. They also mentioned that there might be no significant differences between different types of drug poisoning in the clinical symptoms at the beginning of the process, while cardiovascular symptoms are the most important among them [17].
In our study, there were no statistically significant differences between groups in terms of mean age and sex, which is in accordance with other studies [18].
A recent study in the US showed that almost 66% of beta-blocker poisonings were due to accidental overdose or to people using beta-blockers for noncardiac indications [19]. These data are not in line with the findings of our study, which could be attributed to the higher prevalence of suicide attempts in Iran compared with western countries [20,21]. Based on our findings, patients with combined beta-blocker toxicity had a history of psychiatric diseases more often than the other groups. Based on previous evidence, the chances of multiple-drug toxicity are higher in patients with histories of psychiatric diseases [22].
Beta-blocker toxicity is often ascribed to the presence of membrane-stabilizing activity (MSA), a property of propranolol, labetalol, acebutolol, metoprolol, and pindolol. MSA was the only factor associated with cardiovascular toxicity [8]. Although hemodynamic compromise can result from beta-blockers without MSA, such instances appear to be much less common than with MSA [23,24]. Since most of our patients were intoxicated with propranolol and metoprolol, it seems that combined toxicity with beta-blockers that have MSA properties might have induced more cardiac, CNS, and hemodynamic changes in our study. CNS depression was more frequently seen in the combined beta-blocker group. Although lipophilic beta-blockers can cause CNS depression, combination with other CNS-depressant medications can lower the level of consciousness with other beta-blockers as well.
Toxicity from beta-blocker exposure generally develops within 2 hours of ingestion [25]. A review of the literature [26] and a subsequent report [13] suggested that patients develop signs and symptoms of toxicity within 6 hours of ingestion. Our study substantiates these findings, showing that most of our patients with probable beta-blocker overdoses who remained asymptomatic and demonstrated no sign of hemodynamic instability for 6 hours after ingestion appeared to be at little risk of subsequent deterioration. The mean time interval between poisoning and first treatment was 3.39 ± 4.8 hours, which could explain the lower mortality in our cases, owing to the rapid therapeutic approach for these patients.
Activated charcoal was given to about 94% of the patients. Considering the rapid onset of action of beta-blockers after ingestion and the impaired CNS status of a few patients with beta-blocker overdose, activated charcoal was used for almost all patients. However, this needs to be critically discussed with regard to the potential risk of aspiration and should be studied further [6]. One limitation of our study was the small number of patients poisoned with single beta-blockers. Another limitation was that beta-blocker doses were recorded according to patient reports, and an actual comparison between groups based on blood drug concentrations was not performed.
Conclusion
Beta-blocker poisoning is not common in our poisoning referral center. Propranolol toxicity was the most common among the different beta-blockers. Although symptoms did not differ among the defined beta-blocker groups, more severe symptoms were observed in the combined beta-blocker group. Only one patient, in the combined beta-blocker group, had a fatal outcome. Therefore, poisoning circumstances have to be thoroughly investigated to screen for coexposure to combined drugs.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Zoonotic coronavirus epidemics
Objective To review the virology, immunology, epidemiology, clinical manifestations, and treatment of the following 3 major zoonotic coronavirus epidemics: severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS), and coronavirus disease 2019 (COVID-19). Data Sources Published literature obtained through PubMed database searches and reports from national and international public health agencies. Study Selections Studies relevant to the basic science, epidemiology, clinical characteristics, and treatment of SARS, MERS, and COVID-19, with a focus on patients with asthma, allergy, and primary immunodeficiency. Results Although SARS and MERS each caused fewer than a thousand deaths, COVID-19 has caused a worldwide pandemic with nearly 1 million deaths. Diagnosing COVID-19 relies on nucleic acid amplification tests, and infection has broad clinical manifestations that can affect almost every organ system. Asthma and atopy do not seem to predispose patients to COVID-19 infection, but their effects on COVID-19 clinical outcomes remain mixed and inconclusive. It is recommended that effective therapies, including inhaled corticosteroids and biologic therapy, be continued to maintain disease control. There are no reports of COVID-19 among patients with primary innate and T-cell deficiencies. The presentation of COVID-19 among patients with primary antibody deficiencies is variable, with some experiencing mild clinical courses and others a fatal disease. The landscape of treatment for COVID-19 is rapidly evolving, with both antivirals and immunomodulators demonstrating efficacy. Conclusion Further data are needed to better understand the roles of asthma, allergy, and primary immunodeficiency in COVID-19 infection and outcomes.
Introduction
There are 4 common coronaviruses that cause mild upper respiratory illness in humans. Over the past 20 years, there have been 3 major zoonotic coronavirus (CoV) epidemics caused by 3 other, highly pathogenic CoV: (1) severe acute respiratory syndrome (SARS) caused by SARS-CoV, (2) Middle East respiratory syndrome (MERS) caused by MERS-CoV, and now (3) coronavirus disease 2019 (COVID-19) caused by SARS-CoV-2. SARS and MERS each caused fewer than a thousand deaths, but COVID-19 has spread worldwide and, at the time of this review, has infected nearly 50 million patients and caused more than a million deaths. 1 In this article, we will describe each of these syndromes in detail, with a particular focus on patients with asthma, allergy, and primary immunodeficiency.
Virology and Immune Response
Like other CoV, SARS-CoV is a single-stranded RNA virus in the Betacoronavirus genus of the Coronaviridae family. 2 Bats are the natural reservoir for SARS-CoV, and the palm civet (a cat-like Asian mammal) is a possible intermediate host. [3-5] The viral life cycle is similar to that of SARS-CoV-2 (Fig 1). Severe disease seems to be mediated by the activation of TH1 cell responses and the release of proinflammatory cytokines. 6 SARS-CoV evades the human interferon (IFN) response by means of multiple active and passive strategies, such as inhibiting IFN regulatory factor 3, a transcription factor that activates IFN genes. 7,8

Epidemiology and Transmission

SARS initially emerged in the People's Republic of China during the fall of 2002 and then spread worldwide to 29 countries, creating large outbreaks in the People's Republic of China, Hong Kong, Republic of China, Singapore, and Toronto, Canada. 9 There were 19 probable and 8 confirmed cases in the United States. 10 The epidemic ended in July 2003 with a total of 8098 cases and 774 deaths (case fatality rate of 9.6%). 9 Since then, there have only been a few sporadic cases of SARS reported, mostly associated with laboratory breaches. 6 The main mode of transmission for SARS-CoV is by means of respiratory droplets and possibly fomites. 11 Nosocomial transmission was well documented. 6
Treatment and Vaccines
Supportive care was the mainstay of management for SARS. Multiple drugs were tried for treatment, but few randomized controlled trials (RCTs) were done. Ribavirin did not have clinical efficacy and led to hemolysis in many patients. 6 Retrospective studies revealed possible benefits from lopinavir/ritonavir, IFN, and convalescent plasma. 2,6,12 Steroids seemed to be harmful, leading to increased mortality and prolonged viremia. 2,6 There were early vaccine trials in macaques and mice, but no completed human trials. 6

Middle East respiratory syndrome (MERS)
Virology and Immune Response
MERS-CoV is another Betacoronavirus in the same genus as SARS-CoV that infects humans and camels. 13,14 Although bats serve as a MERS-CoV reservoir, 15,16 the immediate host is the dromedary camel, which then infects humans. 17 Although the lifecycle of MERS-CoV is largely similar to that of SARS-CoV and SARS-CoV-2, MERS-CoV differs in its host binding receptor, dipeptidyl peptidase 4 (also known as CD26), which is found on epithelial and endothelial cells of the human lung, kidney, small intestine, liver, and prostate. [18][19][20] The host immune response to MERS-CoV is characterized by elevated proinflammatory cytokines (eg, interleukin [IL]-6 and CXCL-10) 21 followed by TH1 and type 1 cytotoxic T-cell responses during convalescence.
Epidemiology and Transmission
The first MERS case was reported in September 2012 in Saudi Arabia, 22 and there have been more than 2400 cases and 800 deaths reported to the World Health Organization (WHO). 23 Cases have occurred predominantly in persons residing in or traveling from the Arabian Peninsula. The median age was 52 years (interquartile range, 37-65 years), and 79% were men. 24 Transmission primarily occurs by means of close contact between dromedary camels and humans. 15,17,25 Although human-to-human transmission has been confirmed, 24,26-28 humans are considered transient or terminal hosts with no sustained human-to-human transmission. 29 Reported R0 estimates vary significantly (0.45-8.1), 30 with increased spreading described in nosocomial outbreaks. 31 Primary modes of transmission are droplet and contact, with potential for aerosol spread in close unprotected contact. 32
Clinical Manifestations
The average incubation period for MERS is 5 to 7 days. 33,34 The most common clinical presentation is severe pneumonia and acute respiratory distress syndrome in an adult. Among 47 patients with MERS-CoV infection in Saudi Arabia, fever (98%) and cough (83%) were present in most; less common symptoms included myalgia, diarrhea, and sore throat. All patients had abnormal chest imaging, but there were no clear characteristic laboratory findings. Approximately 89% of patients required intensive care and 72% required mechanical ventilation, with a case fatality rate of 60%. 27
Diagnosis
Real-time reverse-transcriptase PCR (RT-PCR) testing is the main diagnostic for MERS-CoV. Given the severity of disease and risk for human-to-human transmission, a combined approach to testing is favored with PCR testing of the lower respiratory tract, upper respiratory tract, and serum (in order of preference). 15,26,35
Treatment and Vaccines
There are no therapeutic agents with proven clinical efficacy for MERS-CoV, and supportive care is the mainstay of therapy. Retrospective studies of antiviral agents (ribavirin or IFN) and steroids among critically ill patients with MERS have exhibited either statistically nonsignificant trends toward benefit or suggested increased mortality. [36][37][38] Although convalescent plasma, 39 monoclonal antibodies, 40,41 and novel antivirals (fusion inhibitor 42 ; nucleotide analog 43 ) exhibited promise in animal studies, these therapies were not studied in humans.
Coronavirus Disease 2019 (COVID-19)
Virology and Immune Response
SARS-CoV-2 is a novel Betacoronavirus that is related to but distinct from SARS-CoV and MERS-CoV. 44 SARS-CoV-2 is closely related to bat and pangolin coronaviruses 44,45 ; it has been theorized that bats are the natural reservoir of the virus and the pangolin, an endangered and frequently trafficked mammal, may have served as an intermediate host. 45 Although a market in Wuhan, People's Republic of China, was initially thought to be the source of the outbreak, this has not been definitively proven. 45 The lifecycle of SARS-CoV-2 is believed to be similar to that of SARS-CoV and other coronaviruses (Fig 1). The spike protein on the virion surface binds to the angiotensin-converting enzyme 2 (ACE2) receptor on host cells. 46 The virus is then internalized by means of endocytosis, which is mediated by spike protein cleavage by the serine protease transmembrane serine protease 2. 47 The viral genome is then translated into a polyprotein that is cleaved by both host and viral proteases; a viral RNA-dependent RNA polymerase then amplifies the genome, and virions are assembled and released. 48 It is notable that the ACE2 receptor has broad tissue distribution, including the lungs, upper airway, myocardium, gastrointestinal tract, kidneys, and vascular endothelial cells in most tissues. 49,50 This likely, in part, explains the broad clinical manifestations of COVID-19.
SARS-CoV-2 induces a limited type I and type III IFN response but high chemokine and proinflammatory cytokine gene expression. This exuberant inflammatory response is thought to play a role in more severe disease given the association between elevated inflammatory markers and mortality. 51
Epidemiology and Transmission
Since the first reports of COVID-19 cases in Wuhan, the People's Republic of China, in late 2019, the SARS-CoV-2 virus has spread worldwide, infecting nearly 30 million patients and causing close to 1 million deaths. 1,52 Older patients and those with comorbidities are at increased risk for severe COVID-19 disease. [53][54][55] Data from multiple countries have exhibited incrementally higher rates of hospitalization and mortality with increasing age. [56][57][58] In the People's Republic of China, patients with COVID-19 who were in the age range of 70 to 79 years and 80 years or older experienced case fatality rates of 8% and 15%, respectively, compared with the overall case fatality rate of 2.3%. 59 Other established epidemiologic risk factors for severe COVID-19 include diabetes, hypertension, cardiovascular disease, chronic lung disease, and obesity. [56][57][58]60 In a large prospective cohort from the United Kingdom, significantly increased mortality was seen among COVID-19 patients with cardiovascular disease (hazard ratio [HR], 1.16), liver disease (HR, 1.51), obesity (HR, 1.33), and chronic kidney disease (HR, 1.33). 60 Immunosuppressed patients with malignancy and solid organ transplant recipients seem to be at increased risk of severe COVID-19 disease and death, whereas for those with other types of immunocompromise, current evidence is less clear. 61 Within the United States, there are significant racial disparities in COVID-19 disease and death, likely as a result of social conditions and systemic health inequities among racial groups. 62
The current understanding of SARS-CoV-2 transmission is incomplete. Person-to-person transmission by means of close-range respiratory droplets is considered the predominant mode of transmission (Fig 2). 63,64 Although SARS-CoV-2 can be transmitted as an airborne aerosol, 65-67 this has not been clearly exhibited in the real world, including among health care workers.
68 Although SARS-CoV-2 has been detected in nonrespiratory specimens (stool, blood, semen, ocular fluid), the likelihood of bloodborne or nonmucous membrane transmission seems to be low. [69][70][71] The duration and degree of infectivity of an individual with COVID-19 depend on multiple factors. Asymptomatic or presymptomatic transmission plays a large role, with several studies documenting transmission up to 6 days before symptom onset. 72,73 A persistent positive PCR for SARS-CoV-2 does not necessarily indicate the presence of a live infective virus, 63 but viral load as assessed by PCR cycle threshold may. 74 Risk of infection is also related to the type and duration of exposure, with prolonged close contact in closed or crowded settings conveying the highest risk. 75
Clinical Manifestations
Approximately 40% to 45% of SARS-CoV-2 infections are asymptomatic. 76 For the remaining patients with symptomatic infection, approximately 80% are mild (not requiring hospitalization), 15% are moderate to severe (requiring hospitalization), and 5% are critical (requiring intensive care unit [ICU] care). 59,77-79 COVID-19 can involve almost every system in the body. The median incubation period between infection and symptom onset is 5 days. Patients often do not manifest signs and symptoms of severe disease until the second week of illness (Fig 3). 55,80 Of note, 2 recent reports describe that a significant proportion of patients have persistent symptoms weeks to months after recovery from acute infection, even in young patients with no comorbidities. 81,82
Systemic and Respiratory Manifestations
The main systemic manifestations of COVID-19 are fever (>75%), myalgias (10%-50%), and fatigue (20%-40%). 53,55,79,83-85 Cough is seen in 45% to 80% of patients (usually dry) and dyspnea in 20% to 55%. Headache and symptoms of upper respiratory tract infection (sore throat and rhinorrhea) are seen in less than 20% of patients.
Gastrointestinal Manifestations
Diarrhea or nausea/vomiting is seen in only 5% to 9% of patients. [86][87][88] More importantly, gastrointestinal symptoms can rarely be the only presenting symptoms (ie, without respiratory complaints) of COVID-19. [89][90][91]
Cardiac Manifestations
Arrhythmias have been described in 7% to 17% of hospitalized patients 83,92 and cardiac injury (defined by elevation in troponin level) in 7% to 28%. 93 Multiple studies have found that there is no association between the use of ACE inhibitors and angiotensin receptor blockers and the risk of SARS-CoV-2 acquisition or the risk for more severe disease. [94][95][96][97]
Hematologic Manifestations
The incidence of venous thromboembolism (deep vein thrombosis and/or pulmonary embolism) in patients hospitalized with COVID-19 ranges from approximately 15% to 50%, and the risk seems to be higher in patients with elevated D-dimer levels. [113][114][115][116][117][118][119] The role of therapeutic anticoagulation in severe COVID-19 is controversial: 120 the risks and benefits remain unclear, and prospective trials are needed.
Renal Manifestations
Acute kidney injury is seen in 3% to 11% of hospitalized patients, requiring renal replacement therapy in 2% to 7%. 121,122 It is not clear whether kidney injury is because of direct viral effects (there are high levels of ACE2 expression in the kidney) or whether this is a byproduct of inflammation or hemodynamic shifts. 121,122
Dermatologic Manifestations
Rash has been reported in less than 1% to 20% of patients, depending on the study. 53,[123][124][125] The most common morphologies reported are erythematous, urticarial, and vesicular rashes. Chilblain-like lesions (known colloquially as "COVID toes") have typically been described in patients during the COVID-19 pandemic. 126,127 However, more recent data argue against a causal link; rather, these lesions may be owing to lifestyle changes (eg, spending more time barefoot) during shelter in place. 128,129
Inflammatory Syndromes
The increased levels of inflammatory markers in patients with severe COVID-19 (discussed in the subsequent sections) have raised the possibility that some manifestations of critical illness in COVID-19 may be caused by a cytokine storm. However, recent data suggest that the levels of inflammatory cytokines are similar in critically ill patients with and without COVID-19. 130,131 Nevertheless, the inflammatory response in COVID-19 underlies the rationale for trying to treat COVID-19 with anti-inflammatory medications (eg, immunosuppressives and steroids). Another multisystem inflammatory syndrome has recently been described in children, which has similarities to Kawasaki disease but is thought to be a distinct entity. 132,133
Laboratory Findings
Multiple studies have tried to identify factors that could predict disease severity, disease progression, and/or death. Factors that have been identified to date include older age, presence of comorbidities, low oxygen saturation, levels of inflammatory markers (eg, lactate dehydrogenase), and chest computed tomography (CT) severity. [134][135][136][137] However, when and how to use these data from a clinical or triage standpoint is not yet clear.
Imaging
Chest radiographic findings are abnormal in 60% and chest CT scans in 86% of patients hospitalized with COVID-19. 53 The most common chest CT findings are ground-glass opacities (83%-87%) that are usually bilateral (78%-80%) and in a peripheral distribution (75%-77%). 138,139 Consolidations, septal thickening, and crazy paving are also common. Typical CT findings are illustrated in Figure 4.
Clinical Manifestations Among Patients with Allergy and Atopy
Although data are limited, initial studies suggest that asthma and allergies do not particularly predispose patients to coronavirus infections. Asthma exacerbations did not seem to increase during the previous SARS-CoV and MERS-CoV outbreaks. 140,141 During the current SARS-CoV-2 pandemic, asthma has been reported in 0% to 23.9% of patients with COVID-19 (Table 1). Studies have found asthma prevalence in patients with COVID-19 to be lower than the asthma prevalence reported in the respective regions. 142,143 Similarly, rates of asthma, allergic rhinitis, and atopic dermatitis were all lower in patients with COVID-19 (9.9%, 57.4%, and 1.9%, respectively) compared with the total tested population (14.9%, 63.1%, and 3.9%, respectively) in a nationwide Korean cohort study. 144 When 37 major pediatric asthma and allergy centers in Europe and Turkey, estimated to treat 1000 patients with asthma, were surveyed between September 2019 and July 2020, none reported any symptomatic COVID-19 cases or positive SARS-CoV-2 tests among their patients. 145 Of note, 3 studies suggest that asthma rates may perhaps be concentrated in pediatric patients with COVID-19. However, data are limited, the studies were done in different countries, and definitive conclusions cannot be reached. A higher asthma prevalence was observed among pediatric patients aged 21 years and below (32.6%, 13/55) than in the whole cohort of patients with COVID-19 aged 65 years and below in New York (12.6%, 163/1298). 146 In pediatric patients from Australia, asthma prevalence was higher among patients with COVID-19 (25%, 1/4) than in the whole cohort (10.9%, 47/433), 147 whereas, in adult patients from South Korea, asthma prevalence was lower among patients with COVID-19 (9.9%, 725/7340) than in the whole cohort (14.9%, 32,845/219,959). 144 Data regarding the effect of asthma on COVID-19 outcomes, illustrated in Table 1, are mixed and conflicting.
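The prevalence percentages quoted above are simple count-over-cohort fractions. A short sketch (illustrative only; the dictionary keys and variable names are ours) reproduces the Korean and New York figures from the cited counts:

```python
# (count, cohort size) pairs taken from the cohorts cited above
cohorts = {
    "Korean patients with COVID-19 and asthma": (725, 7340),   # cited as 9.9%
    "Korean tested population with asthma": (32845, 219959),   # cited as 14.9%
    "New York cohort aged <=65 with asthma": (163, 1298),      # cited as 12.6%
}

for name, (count, total) in cohorts.items():
    prevalence = round(100 * count / total, 1)
    print(f"{name}: {prevalence}%")
```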
Severe asthma was associated with increased risk of COVID-19-related death in 1 study reviewing a health analytics platform with records of 40% of patients from the United Kingdom, 148 but asthma was not necessarily associated with increased mortality in other studies. 146,[149][150][151] Interestingly, in at least 2 studies in which asthma was associated with worse clinical outcomes, nonallergic asthma accounted for the increased risk for worse outcomes (severe COVID-19, ICU admission, intubation, and death). 144,152 The effect of asthma on COVID-19 outcomes may differ on the basis of other patient characteristics, although, again, data are limited. One study found male sex, Asian race, and comorbid chronic obstructive pulmonary disease (COPD) to be risk factors for hospitalization among patients with asthma. 153 Another study from the UK Biobank found that asthma was a risk factor for COVID-19 hospitalization among women but not men. 154 Among pregnant women, those with moderate to severe COVID-19 were more likely to have asthma than those with mild COVID-19. 155 It has been hypothesized that reduced ACE2 expression could be protective against COVID-19 infection, 156 although data remain limited and conflicting. Asthma, allergic rhinitis, and increasing severity of allergic sensitization have been associated with reductions in ACE2 expression in airway epithelial cells. 157,158 Asthma has been associated with lower ACE2 expression and a lower risk of developing severe COVID-19 compared with COPD. 159 IL-13, a cytokine implicated in the pathogenesis of multiple atopic conditions, including asthma, 160 was found to reduce ACE2 and increase transmembrane serine protease 2 expression in airway epithelial cells. 157 Conversely, asthma has also been associated with increased ACE2 expression in bronchial biopsy, bronchoalveolar lavage, and blood.
161 In addition, a study comparing 330 patients with asthma and 79 healthy control patients and a study comparing 77 patients with asthma and 17 healthy control patients found that ACE2 expression was similar between patients with and without asthma. 162,163 Notably, there was higher ACE2 expression among patients with asthma who were men, African American, or had diabetes mellitus, and the authors have suggested that a higher level of monitoring may be needed for these patients. 162 Indeed, African American race and diabetes mellitus were implicated as risk factors for hospitalization in patients with asthma with COVID-19, 153 and non-insulin-dependent diabetes mellitus was observed with a significantly higher prevalence in patients with asthma with COVID-19. 164 There has also been speculation as to whether inhaled corticosteroids (ICS) could be protective against or provide treatment benefit for COVID-19 infection. Currently, there is no literature clearly indicating whether ICS use is beneficial or detrimental to COVID-19 outcomes. 165 In one study of 1562 patients, ICS use did not seem to affect the risk for hospitalization among patients with asthma in Chicago. 166 There has been a case series of inhaled ciclesonide initiation temporally correlating with improvement in 3 hospitalized patients with COVID-19. 167 In vitro, a combination of glycopyrronium, formoterol, and budesonide seemed to inhibit viral replication in infected nasal and tracheal epithelial cells, 168 and ACE2 expression was found to be decreased in sputum cells of patients with asthma and COPD on ICS. 162,169 Questions have also been raised regarding the effect of type 2 biologic therapy on COVID-19 infectivity and outcomes. Observational experiences reported to date do not provide evidence that type 2 biologics are associated with increased risk for COVID-19 infection or higher COVID-19 disease severity.
To date, reports specifically investigating COVID-19 infectivity and outcomes in patients on type 2 biologic therapy found that among 1938 patients on anti-immunoglobulin E (IgE) (n = 610), anti-IL-5 or anti-IL-5R (n = 844), or anti-IL-4/IL-13 (n = 483) therapy, COVID-19 infection was observed in 55 (2.8%), with 6 severe cases and 1 mortality. In addition, a recent case report specifically described milder than expected COVID-19 severity in a patient on dupilumab (Table 2). 170 In a study specifically investigating risk factors for hospitalization, ICU stay, and mortality among patients with asthma and COVID-19, neither ICS nor biologic use differed between patients with asthma and COVID-19 who needed general vs ICU level of care. Short-acting β-agonist-only use was associated with a lower risk for hospitalization. 153 Multiple position statements (Global Initiative for Asthma; American Academy of Allergy, Asthma, and Immunology; American College of Allergy, Asthma and Immunology; British Thoracic Society; and European Academy of Allergy and Clinical Immunology) have been released recommending continued treatments that are effective for patients with atopy, including type 2 biologics, given the current lack of evidence that type 2 biologics increase infectivity or mortality and the risk of losing disease control if type 2 biologics were to be stopped. 171,172
Clinical Manifestations Among Patients with Primary Immunodeficiency
Not all primary immunodeficiencies (PID) are thought to be equally susceptible to SARS-CoV-2 infection and its complications, but this view is largely based on knowledge of immune system function in pathogen response, given the limited published reports (Table 3). COVID-19 data from the People's Republic of China describe very few patients with immunodeficiencies.
Of note, 2 studies of more than a thousand patients with COVID-19 each reported that 0.19% of their study population had an immunodeficiency and milder disease courses, but the specifics of their diagnoses were not elaborated on. 53,173 Given the lack of robust information regarding COVID-19 in patients with PID, likely owing to the small numbers of such patients, reports from databases and group studies will be particularly helpful to further understanding. The largest report of patients with PID infected with SARS-CoV-2 comes from an international effort among immunologists who described 94 patients with a wide range of PID diagnoses. 174 A total of 59 patients (63%) required hospitalization, and 16% of all patients required intensive care. All adult patients who died from SARS-CoV-2 had preexisting comorbidities. 174 The innate immune system is the first line of defense against pathogens: CoV is recognized by pattern recognition receptors (such as toll-like receptors [TLRs], particularly TLR3, TLR4, and TLR7, and retinoic acid-inducible gene 1 [RIG-1]-like receptors) that induce proinflammatory cytokines that help propagate antiviral responses. 175 There have been few specific reports of COVID-19 in patients with known innate immune system deficiencies. In the larger international study, innate immune system deficiencies were described in 3 children younger than 2 years of age, ranging from an asymptomatic child with STAT1 gain-of-function to a one-year-old boy with interferon gamma receptor 2 deficiency who required ICU admission. 174,176 In New York, the one-year-old boy with interferon gamma receptor 2 deficiency with COVID-19 and a miliary Mycobacterium avium coinfection was treated with steroids in the ICU but recovered. 174,176 There was also a report of a young child in Italy who became infected with SARS-CoV-2, developed mild myocarditis, and recovered.
174 A case series of 2 pairs of brothers in the Netherlands highlights a potential clinical presentation. 177 All 4 patients were healthy and young, with a mean age of 26 years, and developed severe COVID-19 leading to mechanical ventilation. 177 One patient died. Whole-exome sequencing revealed X-chromosomal loss-of-function mutations in TLR7, and on stimulation with a TLR7 agonist, type I IFN signaling was transcriptionally down-regulated, as was the production of type II IFNs. 177 The significance of these findings is unclear, because patients with TLR7 deficiency have not been reported to have recurrent infections, and TLR signaling is complex, with redundancies.
T cell responses are thought to be particularly important as defenses against viral infections such as SARS-CoV-2. Many studies suggest that lymphopenia is associated with more severe COVID-19 disease. 53,[178][179][180] In 1 retrospective study of 1018 patients with COVID-19, all T lymphocyte subsets, especially CD8-positive T cells, were markedly lower in nonsurvivors than in survivors. 178 Patients in the group with elevated IL-6 levels (>20 pg/mL) and lower CD8-positive T cell counts (<165 cells/mL) were older, had more comorbidities, had a greater need for mechanical ventilation and ICU admission, and had an increased incidence of death. 178 Although there are no published reports of COVID-19 in patients with PID having isolated primary T cell defects, there have been reports of some patients with combined immunodeficiencies; however, because of the limited numbers, validated conclusions regarding risk of infection or severity of COVID-19 disease cannot be drawn. 174 These patients labeled as having combined immunodeficiencies in the international study all required admission, with half needing ICU care. 174 Predominantly antibody deficiencies represent the most common group of primary immunodeficiency diagnoses, and reports have found a wide range of clinical presentations of COVID-19 in these patients but suggest that the adaptive immune system may not be as critical in the defense against SARS-CoV-2 as other aspects of the immune system. 174,176,[181][182][183][184][185][186] Cases of COVID-19 in children with PID are limited. In an international study of patients with PID, 32 of the 94 patients reported were younger than 18 years of age. Of those patients, 9 (28%) required ICU admission, and 2 (6%) died.
174 The diagnoses of these children included STAT1 gain-of-function, GATA2 deficiency, phagocyte defects (eg, chronic granulomatous disease), combined immunodeficiency, common variable immune deficiency (CVID), hypogammaglobulinemia, autoinflammatory syndromes (eg, Mediterranean fever), and immune dysregulation. 174 One case report describes a moderately severe case of COVID-19 in a 7-year-old child with a rare folliculin interacting protein 1 deficiency that leads to cardiomyopathy, chronic lung disease, and a B-cell deficiency with hypogammaglobulinemia necessitating immunoglobulin replacement. 184 This patient required a high-flow nasal cannula and developed cardiac dysfunction and renal failure but ultimately improved clinically. 184 In a study of the Mexican open registry of patients with COVID-19, immunodeficiencies (3.8%) and asthma (3.8%) were the most frequently found preexisting conditions in the 21,161 patients younger than 18 years of age. 185 The patients labeled with an immunodeficiency included those with "transient hypogammaglobulinemia, IgG subclass deficiency, impaired polysaccharide responsiveness, and IgA deficiency." 185 This study concluded that immunodeficiencies in children were associated with mild and moderate forms of COVID-19 disease. 185 These findings may be influenced by biased reporting, given that patients with PID and asthma may have better access to medical care than others.
In the few reports describing COVID-19 in adult patients with CVID, X-linked agammaglobulinemia (XLA), and autosomal-recessive agammaglobulinemia, patients with more severe B cell defects seemed to experience a milder clinical course. 174,176,[181][182][183][186][187][188] Among the patients described in these reports, there were 10 patients who were asymptomatic (1 with autosomal-recessive agammaglobulinemia, 1 with XLA, and 1 with hypogammaglobulinemia). 174,176,[181][182][183][186][187][188] In the international study of 94 patients with PID, 26% had mild disease and were treated as outpatients, and the most frequently reported PID in that group was predominantly antibody deficiency, with 14 patients. 174 There are also reports of patients with XLA who had COVID-19-related pneumonia but did not need mechanical ventilation. 174,176,186 These cases suggest that B cells are important but not strictly required to overcome infection.
In the literature, there have been approximately a dozen reported fatalities after SARS-CoV-2 infection described in patients with inborn errors of immunity, predominantly in those with antibody deficiencies. 174,182,183 In the international collaboration study, 9 patients in that cohort (7 adults and 2 children) died. All adult patients with PID who died because of SARS-CoV-2 infection had preexisting comorbidities, which included cardiomyopathy, chronic kidney disease, malignancies, and chronic lung disease. 174 Their PID diagnoses were mostly antibody deficiencies (6 patients: CVID [4], IgG deficiency [1], and IgA and IgG2 deficiency [1]), and 1 patient had a syndromic disease. 174 The 2 children with X-CGD also had concomitant Burkholderia sepsis and hemophagocytic lymphohistiocytosis, and another child with XIAP deficiency had severe gut graft vs host disease after hematopoietic stem-cell transplantation, septic shock, and hemophagocytic lymphohistiocytosis. There have also been 2 case reports of death in other patients with CVID: 1 patient was a 59-year-old woman with chronic bronchitis and CVID on immunoglobulin replacement, and the other was a 42-year-old man with asthma, morbid obesity, and CVID who had been off intravenous immunoglobulin (IVIG) for at least 6 months. 182,183 The male patient developed COVID-19 pneumonia and acute respiratory distress syndrome. 183 He was treated with convalescent plasma, remdesivir, and antibiotics for multiple bacterial infections. 183 He was found to be severely hypogammaglobulinemic (IgG 117 mg/dL, IgA 10 mg/dL, and IgM undetectable) and received multiple doses of IVIG, but his SARS-CoV-2 nasopharyngeal PCR swabs remained positive throughout his month-long hospitalization before he died.
183 Given this patient's poor clinical course, and given that most other patients with CVID and COVID-19 who received IVIG (83%; 5 of the 6 patients with CVID in the other case series) recovered, maintaining patients on immunoglobulin replacement during these infections could be important, potentially to prevent bacterial suprainfections. [181][182][183]187,188 Immunoglobulin replacement has been speculated to be potentially beneficial given its immunomodulatory effects and its potential to provide antibodies that may be cross-reactive with COVID-19, but there are limited data. 189 Many other factors, including age and comorbidities, may also increase mortality.
These reports are small, and additional studies and RCTs are needed to evaluate the susceptibility to, clinical course of, and optimal treatment of SARS-CoV-2 infection in patients with PID. There are current efforts among allergists and immunologists internationally to gather further data through surveys and databases, and there have been joint society statements noting that there are no current data indicating whether there is generally an increased risk of severe COVID-19 in PID. 174,190,191 Certain types of PID may nevertheless carry a higher risk of contracting infection and developing a more severe course, and clinician contribution to these studies and the publication of data will be helpful in informing clinical care for patients with PID having COVID-19 because, at this time, there are no formal recommendations for specific therapies in this population.
Diagnosis
The 2 major categories of SARS-CoV-2 diagnostics are assays detecting viral nucleic acid and assays detecting the serologic response. Interpretation of results depends on when the test is performed. 192
SARS-CoV-2 Nucleic Acid Testing
Viral nucleic acid detection is the mainstay of testing for active infection. There are multiple assays using RT-PCR technology that amplify and detect regions of the SARS-CoV-2 genome. Although these assays have high specificity and analytical sensitivity for SARS-CoV-2, real-life performance depends on the clinical scenario. False-negative results may arise owing to improper sampling technique or sampling site. For example, a patient with COVID-19 lower respiratory tract infection may be negative by PCR testing of the upper respiratory tract. 193,194 For this reason, among symptomatic patients who are either hospitalized or in high-risk settings such as congregate living, 1 or more negative nucleic acid amplification tests (NAATs) may not be able to rule out COVID-19.
SARS-CoV-2 Serology Testing
Serologic tests detect antibodies to SARS-CoV-2 in the blood, with multiple assays developed against different viral epitopes with varying degrees of diagnostic performance. Both IgG and IgM rise approximately 10 to 14 days into the illness. 195 Current US Centers for Disease Control and Prevention and WHO guidelines recommend against using antibody tests to diagnose individuals with active SARS-CoV-2 infection. 196 There are also limited data on whether certain antibodies confer immunity and on the duration of protection of neutralizing antibodies. At this time, serologic testing serves as a public health surveillance tool or as an adjunct to PCR testing for diagnosing active infection.
SARS-CoV-2 Antigen Testing
Tests that identify SARS-CoV-2 antigen can be performed rapidly and serve as point-of-care assays. However, these assays are typically less sensitive than NAATs, with sensitivity ranging from 0% to 94% and an average of 56%. 197 Antigen tests perform best early in the course of infection, when viral load is highest, and are currently recommended by the WHO when NAAT is unavailable and within the first 5 to 7 days of infection.
SARS-CoV-2 Culture
SARS-CoV-2 viral culture is currently only performed for research purposes.
Treatment and Vaccines
The landscape for therapeutics against COVID-19 has changed dramatically since the beginning of the pandemic. Although supportive care remains a cornerstone of therapy, there are now also targeted therapies with data from RCTs to support their use. Here we summarize treatment options for COVID-19.
Antivirals
Remdesivir is a nucleoside analog that inhibits the viral RNA-dependent RNA polymerase. Of note, 2 RCTs revealed a clinical benefit in improving recovery in hospitalized patients with COVID-19. 199 The US Food and Drug Administration (FDA) granted remdesivir emergency use authorization (EUA), and it has become standard of care in the United States for the treatment of COVID-19 in hospitalized patients. 200 Currently, remdesivir should only be used in hospitalized patients. Although the exact oxygen saturation cutoff for remdesivir use is controversial, it has only been studied in patients with evidence of lower respiratory tract disease from COVID-19.
Although initial uncontrolled trials found a possible benefit for hydroxychloroquine, 201 multiple RCTs now report no clinical benefit for the treatment of or prophylaxis against SARS-CoV-2 infection, and most also exhibit an increased risk of adverse effects. [202][203][204][205] The FDA has revoked its EUA for hydroxychloroquine, 206 and the Infectious Diseases Society of America (IDSA) COVID-19 Guidelines recommend against using hydroxychloroquine. 207 Protease inhibitors used for human immunodeficiency virus, in particular lopinavir/ritonavir, were postulated to act against the proteases of SARS-CoV-2 and were used previously to treat SARS and MERS. 12 However, randomized controlled trials have found no benefit of either lopinavir/ritonavir 208 or darunavir/cobicistat. 209 Corticosteroids should not be used in patients who do not require oxygen.
Immunomodulators
The Randomized Evaluation of COVID-19 Therapy trial, an RCT of more than 6000 hospitalized patients in the United Kingdom, reported a significant mortality benefit for the use of dexamethasone vs placebo, particularly among those who were mechanically ventilated or on supplemental oxygen; there was no mortality benefit (and a trend toward harm) among patients who did not require oxygen. 210 Although there are some caveats to the study, the IDSA guidelines now recommend dexamethasone for hospitalized patients requiring oxygen. 207

Convalescent plasma is believed to have both antiviral (by means of neutralizing antibodies) and immunomodulatory effects (by means of neutralization of cytokines/complement and other effects). 211 Observational data suggest a possible benefit of convalescent plasma 212,213 and minimal risk of harm, 214 although RCT data are limited. 215 More trials are underway, and the IDSA guidelines currently recommend using convalescent plasma only in the context of a clinical trial. 217 However, the FDA has issued an EUA for convalescent plasma despite the current lack of robust RCT data. 216

Tocilizumab is an antibody against the IL-6 receptor that has been used in hopes of dampening the inflammatory response in severe cases of COVID-19. However, a meta-analysis of 7 retrospective studies 217 and preliminary data from an RCT 218 both reported no clinical benefit. IDSA guidelines recommend using tocilizumab only in the context of a clinical trial. 207 Other immunomodulators are currently under investigation, including other cytokine and Janus kinase inhibitors. IFN beta is also being studied and has exhibited some promise as part of combination therapy in small RCTs. 219,220

Vaccines
There are multiple vaccines in development currently using various platforms, including some which use novel messenger RNA (mRNA) technology.
221,222 The mRNA vaccines rely on the premise that the mRNA that codes for a viral antigen can be delivered into human cells, which then leads to the production of antigen within the cell and a robust immunogenic response against it. 221
Conclusion
A review of the virology, clinical manifestations, and treatment of SARS, MERS, and COVID-19 has elucidated the similarities and differences among these infections. Additional data are needed to better understand the impact of COVID-19 on patients with asthma, allergy, and PID.
Detection of Imported Malaria with the Cell-Dyn 4000 Hematology Analyzer
ABSTRACT The sensitivity and specificity of the Cell-Dyn 4000 hematology analyzer in the diagnosis of imported malaria were studied with samples from patients in an academic hospital setting. The performance of the Cell-Dyn 4000 hematology analyzer was compared with that of conventional diagnostic methods for malaria. The Cell-Dyn 4000 hematology analyzer detected hemozoin-containing depolarizing monocytes in 29 of 58 patients with malaria and 2 of 55 patients without malaria. The presence or absence of depolarizing monocytes in patients with malaria was related to duration of symptoms before presentation for malaria analysis. A second parameter, pseudoreticulocytosis due to nuclear material of intraerythrocytic malaria parasites, was detected by the Cell-Dyn 4000 hematology analyzer almost exclusively in Plasmodium falciparum malaria patients with parasitemia levels of ≥0.5%. Attention to these abnormalities in medical centers without tropical disease expertise may decrease a delay in the diagnosis of malaria.
Microscopic investigation of stained thick and thin blood smears has been the reference standard for malaria detection and species identification for decades. Recently, a number of alternative diagnostic approaches have evolved, including detection of Plasmodium species DNA stained with acridine orange in a quantitative buffy coat analysis, PCR methods, and assays based on detection of circulating Plasmodium speciesspecific antigens (e.g., P. falciparum histidine-rich protein 2). Recent studies using automated hematology analyzers have demonstrated unexpected abnormalities in differential white blood cell plots and reticulocyte histograms from patients with malaria (1,2,4). Normal monocytes can be discriminated from monocytes that have ingested the malarial breakdown product hemozoin because of the ability of hemozoin to depolarize laser light used for routine differentiation of eosinophils (1,4). Nuclear material of intraerythrocytic malaria parasites could be discriminated by fluorescent nucleic acid dye used in routine quantification of reticulocytes. The presence of infected erythrocytes leads to a distinct fluorescent spike in reticulocyte histograms, referred to as pseudoreticulocytosis (2). It has been suggested that this novel method is a useful addition to conventional microscopy (1). Here, we set out to prospectively study the sensitivity and specificity of the Cell-Dyn 4000 hematology analyzer (Abbott Diagnostics, Santa Clara, Calif.) in the diagnosis of imported malaria among patients in an academic hospital setting in The Netherlands. The Cell-Dyn 4000 hematology analyzer is a new-generation automated hematology analyzer that has found widespread use in routine laboratory hematology.
We prospectively studied 113 patients referred to the Section of Parasitology because of clinical suspicion of malaria by the Emergency Department or the Tropical Disease Center of the Academic Medical Center, Amsterdam, The Netherlands. Malaria was considered part of the differential diagnosis of pyrexia when a patient presented with a travel history to an area where malaria is endemic. Routine malaria analysis consisted of examination of stained thick and thin blood smears according to World Health Organization standard methods and quantitative buffy coat analysis. If malaria parasites were observed and a parasite density of ≥0.5% was expected, parasite density was expressed as the percentage of erythrocytes infected by determining in duplicate the number of infected erythrocytes per 2,000 erythrocytes on a thin blood smear. In the case of an expected parasite density of <0.5%, parasite density was determined by counting in duplicate the number of parasites observed per 100 white blood cells on a thick blood smear. Species identification was based on morphological features. In addition, full blood count and reticulocyte analyses were performed at the Department of Clinical Chemistry with EDTA-anticoagulated blood on the Cell-Dyn 4000 hematology analyzer. Details of this technology have been described previously (2,4). The presence of depolarizing monocytes was defined as the presence in a lobularity-granularity plot of any depolarizing purple event situated above a separation line constructed from a line making a 22.5° angle with the x axis and a horizontal line through granularity signal channel 25 (C. S. Scott, Abbott Diagnostics, Maidenhead, Berkshire, United Kingdom, personal communication) (Fig. 1A and B). Pseudoreticulocytosis was defined as a distinct spike with narrow fluorescence intensity in a reticulocyte histogram (Fig. 1C and D).
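The two parasite-density rules described above can be sketched as follows. This is a minimal illustration; the function names and the example duplicate counts are hypothetical, not from the paper.

```python
def parasitemia_thin_smear(infected_counts, total_per_count=2000):
    """Percentage of infected erythrocytes, averaged over duplicate
    counts of `total_per_count` erythrocytes each (thin smear,
    used when an expected density is >= 0.5%)."""
    fractions = [c / total_per_count for c in infected_counts]
    return 100.0 * sum(fractions) / len(fractions)

def parasites_per_100_wbc(parasite_counts, wbc_per_count=100):
    """Parasites per 100 white blood cells, averaged over duplicate
    thick-smear counts (used when an expected density is < 0.5%)."""
    per100 = [100.0 * c / wbc_per_count for c in parasite_counts]
    return sum(per100) / len(per100)

# Hypothetical duplicate counts: 20 and 24 infected cells per 2,000
# erythrocytes give a parasitemia of 1.1%.
density = parasitemia_thin_smear([20, 24])
```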
Laboratory technicians at the Parasitology Section and the Department of Clinical Chemistry were blind to results obtained in the other laboratory.
Of the 113 analyzed patients, 58 (51%) were found to have malaria by routine conventional methods. Forty-six patients suffered from P. falciparum malaria, of whom 25 had parasitemia levels of ≥0.5% and 21 had parasitemia levels of <0.5%. Non-falciparum malaria was detected in 12 patients (5 P. vivax cases, 4 P. ovale cases, and 3 cases in which no differentiation could be made between P. vivax and P. ovale). Table 1 presents findings with the Cell-Dyn 4000 hematology analyzer for the 113 analyzed patients. Overall, either depolarizing monocytes or pseudoreticulocytosis was observed in 36 of 58 malaria patients (62%) and in 2 of 55 patients without malaria (4%). In the subgroup of P. falciparum malaria patients with parasitemia levels of ≥0.5%, either depolarizing monocytes or pseudoreticulocytosis was observed in 24 of 25 patients (96%). A statistically significant difference (unpaired t test, p = 0.03, SPSS software version 10.0) in mean duration of symptoms before presentation for malaria analysis was present between the 29 malaria patients in whom depolarizing monocytes were detected (4.5 days; 95% confidence interval, 3.7 to 5.4) and the 29 malaria patients in whom these cells were not detected (3.2 days; 95% confidence interval, 2.4 to 4.0). Mean levels of parasitemia as detected by routine malaria analysis did not differ between these two groups.
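The headline accuracy figures follow directly from the counts reported above. The helper below is an illustrative sketch (not the authors' code); the input counts are the study's own: either abnormality in 36 of 58 malaria patients and 2 of 55 patients without malaria.

```python
def diagnostic_accuracy(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 table counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Either depolarizing monocytes or pseudoreticulocytosis:
# 36/58 malaria patients positive, 2/55 non-malaria patients positive.
overall = diagnostic_accuracy(tp=36, fn=58 - 36, fp=2, tn=55 - 2)
# sensitivity ~62%, specificity ~96%, matching the text.
```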
Our data indicate that the Cell-Dyn 4000 hematology analyzer is capable of detecting specific abnormalities in the blood of patients with imported malaria. Its overall sensitivity in an academic hospital setting with technicians with expertise in tropical medicine and diagnostic parasitology is, however, limited compared to the sensitivities of conventional diagnostic methods.
Pseudoreticulocytosis was observed almost exclusively in P. falciparum malaria patients with parasitemia levels of ≥0.5%. One P. falciparum malaria patient with a parasitemia level of 0.4% presented with pseudoreticulocytosis, while three P. falciparum malaria patients with parasitemia levels of 0.5, 0.5, and 0.6% did not exhibit this phenomenon. This indicates that a parasitemia level around 0.5% is a cutoff point for pseudoreticulocytosis to appear. Pseudoreticulocytosis did not occur in non-falciparum malaria patients. Parasitemia levels in non-falciparum malaria patients are low and might, in general, be below the detection level of the Cell-Dyn 4000 technique. In addition, it is known that P. vivax preferentially infects reticulocytes. P. vivax-infected reticulocytes will be detected by the Cell-Dyn 4000 hematology analyzer as true reticulocytes. In theory, therefore, they might not contribute to the pseudoreticulocytosis phenomenon.
In their initial report on the detection of depolarizing monocytes in areas where malaria is endemic, Mendelow et al. reported sensitivities for black and white patients in South Africa of 90 and 42%, respectively, using the Cell-Dyn 3500 hematology analyzer. The reason for this racial difference was not found, but speculations were made regarding certain genetic factors, previous exposure, or different responses to infection (4). We show that in imported malaria, the presence or absence of depolarizing monocytes detectable by the Cell-Dyn 4000 hematology analyzer depends on the duration of symptoms before presentation for malaria analysis. Thus, the difference in sensitivities for black and white malaria patients in the South African study may reflect socioeconomic or cultural differences in access to health services and, therefore, in duration of symptoms before presentation for malaria analysis.
Recently, a Canadian study reported on the diagnosis and management of imported malaria in medical centers whose technicians lack expertise in tropical medicine versus the diagnosis and management in medical centers with a tropical disease unit. There were significant delays in the recognition, laboratory diagnosis, and initiation of treatment of malaria when patients presented to medical centers where technicians lacked expertise in tropical medicine (3). Although the Cell-Dyn 4000 hematology analyzer has a low sensitivity for the detection of malaria, the specificity is high. Therefore, it is feasible that the Cell-Dyn 4000 hematology analyzer may contribute to the diagnosis of imported malaria in medical centers whose technicians lack expertise in tropical medicine. As it is likely that general screening tests like a full blood count are always undertaken for patients who present with pyrexia, it can be expected that attention to these abnormalities can decrease a delay in the diagnosis of severe P. falciparum malaria if such a diagnosis was not initially considered. However, our data also indicate that one should not rely on the Cell-Dyn 4000 hematology analyzer as the sole diagnostic test for malaria. A good patient history and repeated routine malaria analysis are essential, especially for patients suspected of being early in the course of a P. falciparum infection.

[Table 1 footnotes: (a) Findings with the Cell-Dyn 4000 hematology analyzer are for patients whose malaria or lack of malaria was determined by routine malaria analysis. (b) One of the two patients without malaria in whom depolarizing monocytes were observed had been treated for malaria the previous month.]
Chinese Classical Music Lowers Blood Pressure and Improves Left Ventricular Hypertrophy in Spontaneously Hypertensive Rats
High blood pressure (BP) plays an important role in the pathogenesis and development of cardiovascular diseases and multi-organ damage. Music is well known to elicit emotional changes, such as anxiolytic effects. However, whether music therapy lowers BP in spontaneously hypertensive rats (SHR), and the potential mechanism, remain unknown. SHRs were respectively exposed to white noise (WN), Western classical music (WM), Chinese classical music (CCM), rock music (RM), or bisoprolol treatment. WN and WM did not lower systemic BP, but CCM and RM significantly lowered BPs in SHRs. The effect of CCM therapy on lowering systemic BPs is comparable to that of bisoprolol at low to medium doses. Combination of CCM treatment with bisoprolol further improved systemic BPs and myocardial hypertrophy in SHRs, compared to CCM treatment or bisoprolol alone. Furthermore, IHC and WB analyses indicated that CCM therapy inhibited the β1/cAMP/PKA and α1/PLC/PKC signaling pathways but did not alter β2/PI3K/Akt signaling. Taken together, CCM therapy lowers systemic BPs and alleviates myocardial hypertrophy in hypertensive rats, which may be caused by inhibition of β1/cAMP/PKA and α1/PLC/PKC signaling.
INTRODUCTION
Hypertension is a growing public health problem and affects over 1.2 billion individuals worldwide. Hypertension, as one of the most common chronic diseases, can lead to major complications such as stroke, myocardial infarction, heart failure and chronic kidney disease, imposing a serious social and economic burden on society (Flint et al., 2019; Slivnick and Lampert, 2019; Dhande et al., 2021). Hypertension is a multifactorial disease involving environmental and genetic factors together with risk-conferring behaviors. Some major risk factors have been clarified, such as genetic factors, a high-sodium and low-potassium diet, overweight and obesity, and long-term mental tension (Manosroi and Williams, 2019; Saxton et al., 2019; Lee et al., 2020; Rossitto et al., 2021). However, unlike eating patterns and body types, mental stress is difficult to quantify. This makes it difficult to conduct epidemiological studies and prospective clinical studies on the causal relationship between stress and hypertension. This also leads to the lack of evidence for stress intervention in all current hypertension treatment guidelines.
At present, clinicians mainly treat hypertension in two ways: non-drug and drug therapy. Non-drug therapy is the cornerstone of hypertension prevention and treatment. Currently, in the United States and Europe, the recommendations for non-pharmacological treatment of hypertension include a low-sodium and low-fat diet, moderate physical activity, weight loss, tobacco abstinence, and alcohol restriction, but none of them are related to stress reduction (Williams et al., 2018). In China's 2009 basic edition of hypertension treatment guidelines, the suggestions for non-drug treatment also include maintaining optimism, alleviating psychological burden, overcoming paroxia, correcting bad personality, resisting adverse social factors, psychological counseling, music therapy, and self-discipline training or Qigong. This was the first time that music therapy had been mentioned in any country's national guidelines for treating hypertension, but the guidelines do not give specific recommendations for music therapy.
In 1979, Janiszewski published the first study in the world of music treatment of hypertension, opening up a new field of music treatment for cardiovascular diseases (Janiszewski, 1979). However, for more than half a century, music therapy received little attention from the medical community. Recent studies have shown that music therapy can reduce the occurrence of anxiety and depression, relieve the clinical symptoms of Parkinson's disease, epilepsy, Alzheimer's disease and other diseases, and promote the repair of brain injury (Koshimori and Thaut, 2018; Leggieri et al., 2019). For patients with grade 1 hypertension over the age of 50, implementation of music therapy for 2 months significantly reduced systolic and diastolic blood pressure (Zanini et al., 2009). For patients with anxiety-related hypertension before and after surgery, music therapy can significantly reduce blood pressure (Allen et al., 2001).
Our recent research shows that in mice with anxiety caused by mutations in the brain-derived neurotrophic factor (BDNF) gene, music therapy reversed anxiety symptoms and increased BDNF mRNA and protein levels in several brain regions. Its molecular mechanism is that music therapy increased the BDNF receptor TrkB mRNA expression level (Li et al., 2010). These above studies suggest that music therapy for hypertension has theoretical and experimental basis. However, up to now, large sample clinical evidence of music therapy for lowering blood pressure is lacking, and the molecular mechanism of music therapy and its effect on target organ damage are unknown.
A large number of experimental and clinical evidence indicates that sympathetic excitation and activation of the RAS system play a central role in the pathogenesis of hypertension (Jackson et al., 2018;Arendse et al., 2019;Hirooka, 2020). Sympathetic nerve stimulation can increase the release of catecholamines (epinephrine and norepinephrine). Adrenergic receptors in cardiomyocytes are a class of G-protein-coupled receptors, including β1, β2 and α1 subtypes. Although it is theoretically speculated that music therapy may act by inhibiting the sympathetic nerve, the effect of music therapy on the adrenergic receptor signaling pathway in cardiomyocytes has not been reported.
Compared with medication for hypertension, music therapy has the advantages of simplicity, economy, no side effects and higher compliance, so it has a potential clinical application prospect. However, there are a number of important scientific issues that need to be addressed in this area. Firstly, previous published clinical studies on music therapy for hypertension have mostly involved small populations, and large clinical evidence is lacking. Secondly, the standardization of music therapy is uncertain. Western classical music, such as Mozart's repertoire, was mostly used in previous foreign studies, but there were no randomized comparative studies on the antihypertensive effects of different types of Western music. No one has compared the antihypertensive effects of Chinese and Western classical music. Thirdly, the antihypertensive effect of music therapy is unclear compared with traditional antihypertensive drugs with clear efficacy. Whether music therapy and drug therapy have additive or synergistic antihypertensive effects is unknown. Furthermore, previous studies only evaluated the antihypertensive effect of music therapy, but whether music therapy can improve the target organ damage of hypertension is unclear. And no studies have been reported on the effects of music therapy on sympathetic nerve activity, expression of adrenergic α1, β1 and β2 receptor subtypes, expression of receptor-coupled proteins GQ, GS and GI, and intracellular PKC, PKA and PI3K signaling pathways.
In this study, we compared four types of music therapy, including Western classical music (WM), rock music (RM), Chinese classical music (CCM) and noise on systemic BPs in rats. Our results indicated that CCM is more effective to lower BP and improves cardiac function in spontaneously hypertensive rats (SHR). And the main mechanism of CCM lowering blood pressure is the down-regulation of β1/cAMP/PKA and α1/ PLC/PKC signaling pathways.
Animals
Male Wistar-Kyoto (WKY) rats and spontaneously hypertensive rats (SHR), 8-12 weeks of age, 200-250 g, were obtained from the Beijing Huafukang Company (Beijing, China). Rats were housed in temperature-controlled cages with a 12-h light-dark cycle and given free access to water and food. All animal protocols were approved by the Institutional Animal Care and Use Committee of Cheeloo College of Medicine, Shandong University. All relevant ethical regulations were adhered to.
Music and Drug Treatment
As described by us previously (Li et al., 2010), for music treatment, rats were exposed to music daily for 10 weeks. Music or noise was played from 19:00 to 7:00, for 12 h during the active time of the rats. The distance between the rat cage and the sound box was kept at 1 m. All experiments were carried out in a quiet environment to avoid the addition of other sounds. The sound level of the control group was under 40 dB (ambient noise). Music of 50-60 dB was played on a CD player at a frequency of 300-10,000 Hz in the home cages. Bisoprolol (Sigma Corp., United States) was given by oral administration at 2.5, 5 or 10 mg/kg/day for 10 weeks.
Music Selection
For white noise, rats were treated with irregular noises of 50-100 Hz generated by a high-frequency noise generator. For Chinese classical music, we selected classical instrumental music from the Qin and Han dynasties to the Sui and Tang dynasties, including guqin music, pipa music, silk and bamboo music and guchui music written in the pentatonic mode of the ancient scale. This music is characterized by more rhyme and fewer tones: the tone is simple and passionate, the rhythm is natural and free, the melody is euphemistic and delicate, and the songs are rich in meaning. For Western classical music, we selected classical works by composers from the Baroque, classical and romantic periods. The music is characterized by varied and dynamic rhythms, multi-voice harmonic thinking to construct the musical texture, and melodic balance and symmetry. The selected repertoire includes piano and symphonic music, such as Bach piano works with a strict and balanced style, Handel choruses and concertos with a bright, cheerful and comely style, Beethoven sonatas with balanced tonality and an elegant style, Haydn symphonies with drama and philosophy, and Mozart and Schumann piano pieces of pure beauty. For rock music, we chose songs from the 1980s that mix singing with loud electroacoustic instruments, characterized by a simple musical vocabulary, strong rhythm, and shocking, harsh sound effects.
Blood Pressure Measurement
Blood pressure was determined by invasive radio telemetry methods as described previously (Yan et al., 2020;Zhang et al., 2022). Briefly, rats were implanted with a TA11PA-C10 radio telemetry transmitter (Data Sciences, Laurel, Md) for 24-h recording of arterial pressure and heart rate with a radio telemetry data-acquisition program (Dataquest ART 3.1, Data Sciences). Hemodynamic measurements were sampled for 10 s every 10 min for the 3-week duration. Data were reported as 24-h average.
Echocardiography
The heart function and dimension parameters were measured using a standard protocol by transthoracic parasternal echocardiography using the VEVO770 imaging system (Visual Sonics, Toronto, ON, Canada). LV parameters, including left ventricular ejection fraction (LVEF), fractional shortening (FS), the interventricular septum (IVSd) and left ventricular posterior wall thickness (LVPWd), were measured in M-mode via the long/ short axis view.
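The M-mode parameters above are conventionally derived from LV internal diameters. The sketch below assumes the standard fractional-shortening formula and the Teichholz volume method for EF; the paper does not explicitly name its derivation method, and the diameter variable names are hypothetical.

```python
def teichholz_volume(d_cm):
    """Teichholz LV volume (ml) from an M-mode internal diameter (cm):
    V = 7 * D^3 / (2.4 + D)."""
    return 7.0 * d_cm ** 3 / (2.4 + d_cm)

def fractional_shortening(lvedd, lvesd):
    """FS (%) from end-diastolic and end-systolic LV internal diameters."""
    return 100.0 * (lvedd - lvesd) / lvedd

def ejection_fraction(lvedd, lvesd):
    """EF (%) from Teichholz end-diastolic and end-systolic volumes."""
    edv, esv = teichholz_volume(lvedd), teichholz_volume(lvesd)
    return 100.0 * (edv - esv) / edv

# Hypothetical rat-scale diameters (cm):
fs = fractional_shortening(0.8, 0.4)   # 50%
ef = ejection_fraction(0.8, 0.4)       # ~86%
```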
Histopathology and Immunohistochemistry
As described previously (Zhang M. et al., 2021), the heart tissues were isolated and fixed in 4% paraformaldehyde and maintained at 4°C until use. The fixed tissues were dehydrated and processed for paraffin embedding. 5-μm sections were stained with hematoxylin and eosin (H&E) and immunohistochemistry assays. The deparaffinized, rehydrated section from hearts were microwaved in citrate buffer for antigen retrieval. Sections were incubated in endogenous peroxidase and protein block buffer, and then with primary antibodies indicated overnight at 4°C. Slides were rinsed with washing buffer and incubated with labelled polymer-horseradish peroxidaseantimouse/antirabbit antibodies followed by DAB + chromogen detection. After final washes, sections were counterstained with hematoxylin. All positive stainings were confirmed by ensuring that no staining occurred under the same conditions with the use of non-immune rabbit or mouse control IgG. Semi-quantitative analysis of the integrated optical density (IOD) was analyzed by software of Image-Pro Plus 5.1. The IOD is expressed as positive area × intensity.
Western Blot Analysis
Total proteins were extracted from rat hearts, and equal amounts of protein were electrophoretically separated and then transferred to PVDF membranes. The membranes were incubated overnight at 4°C with the corresponding primary antibodies. Anti-rabbit IgG and anti-mouse IgG antibodies were used as secondary antibodies. The membranes were developed using Immobilon Western Detection Reagents (Millipore, Billerica, MA, United States). The intensity of bands was measured using ImageJ. All experiments were repeated at least three times and mean values were derived. Primary antibodies against the α1 receptor, β1 receptor, β2 receptor, PI3K, Akt, PLC, cAMP, PKA, and PKC, and secondary antibodies, were obtained from Cell Signaling Technology (CST Corp., United States).
Statistical Analysis
All quantitative results are expressed as mean ± SEM. After testing for normality and equality of variance, continuous data were compared by unpaired Student's t-test or one-way analysis of variance (ANOVA). Bonferroni corrections were applied to multiple tests. Statistical analysis was conducted using IBM SPSS statistics 20.0 (IBM Corp., Armonk, NY, United States) and p < 0.05 were considered statistically significant.
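The Bonferroni correction applied to the multiple tests above can be illustrated with a short helper. This is a sketch, not the SPSS procedure the authors used; the p-values in the example are hypothetical.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each p-value by the number of
    comparisons (capped at 1.0) and flag those still below `alpha`."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

# Three hypothetical pairwise comparisons: only the first survives
# correction at alpha = 0.05.
adjusted, significant = bonferroni([0.01, 0.04, 0.20])
```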
Meta-Analysis Indicates That Music Therapy Lowers BP in Hypertensive Patients
We first performed a meta-analysis to examine the association between music therapy and hypertension. In all the included studies, 987 patients from 13 trials provided data on the change in systolic BP from baseline to the end-point of the treatment period. Figure 1A shows the forest plot for the reduction in systolic BP across studies. The meta-analysis of diastolic BP reductions between the music group and control group also showed a statistically meaningful result. As shown in Figure 1B, among 849 subjects from 10 trials, patients who listened to music had lower diastolic BP compared to the control group (pooled effect size [95% CI]: -3.23 [-4.55, -1.91]; heterogeneity: Tau² = 0.00; Chi² = 8.36, df = 9 (p = 0.5); I² = 0%; no heterogeneity was present). Taken together, these data demonstrate that music therapy lowers BP in hypertensive patients.
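The I² heterogeneity statistic quoted above follows directly from Cochran's Q (the Chi² value) and its degrees of freedom; a minimal sketch using the diastolic BP numbers reported in the text:

```python
def i_squared(q, df):
    """Higgins I^2 statistic (%) from Cochran's Q and its degrees of
    freedom: the share of between-study variability attributable to
    heterogeneity rather than chance, floored at 0."""
    if q <= 0:
        return 0.0
    return max(0.0, 100.0 * (q - df) / q)

# Diastolic BP analysis as reported: Chi^2 = 8.36, df = 9.
dbp_i2 = i_squared(8.36, 9)  # Q < df, so I^2 = 0%: no heterogeneity
```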
Both CCM and RM Therapies Lower Systemic BPs in Spontaneously Hypertensive Rats
The results of the meta-analysis of clinical observations were further confirmed by our experimental investigations in animals. We investigated the effects of music therapy on hypertensive rats by exposing SHRs to WN, WM, CCM and RM for 10 weeks. Systemic BPs, including systolic pressure, diastolic pressure and mean arterial pressure, were measured by radio telemetry. WN and WM did not lower systemic BPs, whereas CCM and RM significantly lowered BPs in SHRs.
CCM Therapy on Hypertension is Comparable to Bisoprolol at the Dose of Low to Medium in Hypertensive Rats
We next compared the effects of music therapy with drug treatment on hypertension. Bisoprolol, a highly selective β1 receptor antagonist, serves as a drug for treating hypertension (Sabidó et al., 2018). WKY rats were used as normal control. As shown in the corresponding figures, the BP-lowering effect of CCM therapy was comparable to that of bisoprolol at low to medium doses.
CCM Therapy Increases Bisoprolol-Induced Improvement of Myocardial Remodeling in Hypertensive Rats
Hypertension is a critical risk factor for left ventricular hypertrophy (Yildiz et al., 2020). We next examined the effects of CCM therapy on the protective effects of bisoprolol against myocardial remodeling. Differences in heart weight and the ratio of heart weight to body weight were statistically significant among the five groups (Table 1). Meanwhile, Masson staining showed that cardiac fibrosis was improved in the combination treatment group compared to CCM or bisoprolol treatment alone (Figure 4B). The echocardiography results showed that therapy with bisoprolol plus CCM further elevated the left ventricular ejection fraction (LVEF) and fractional shortening (FS) and decreased the thickness of the left ventricular posterior wall (LVPWd) and interventricular septum (IVSd) in SHRs, indicating that bisoprolol plus CCM therapy is more effective than bisoprolol alone in improving myocardial remodeling (Figures 4C-F).
CCM Therapy Decreases β1/cAMP/PKA Signaling
As described above, CCM therapy produced effects similar to those of bisoprolol at low to medium doses. We hypothesized that CCM therapy also blocks the β1-receptor to lower BP and improve myocardial remodeling in SHRs treated with bisoprolol. To test this, we examined adrenergic receptor signaling by immunohistochemistry and Western blot (Figures 7A-D). These data indicate that α1/PLC/PKC but not β2/PI3K/Akt signaling is involved in the effects of CCM therapy.
DISCUSSION
Hypertension has become one of the most common chronic diseases worldwide and is a major risk factor for atherosclerotic cardiovascular disease. Advanced hypertension can lead to stroke, heart failure, myocardial infarction, renal failure, aortic dissection and other serious complications. Therefore, early prevention, early detection and early treatment are extremely important, and population-level prevention and treatment of hypertension has become a major task for public health and medical institutions. Interventions for patients with hypertension can be divided into non-pharmacological and pharmacological treatments, and non-pharmacological treatment should be implemented first except in acute and severe hypertension. China's 2009 basic edition of the hypertension treatment guidelines was the first national hypertension guideline in the world to mention music therapy, a major innovation, although it gave no advice on how to conduct such therapy. It has therefore become an important topic in the field of hypertension treatment to establish clinical and experimental evidence of blood pressure reduction by music therapy, to compare the relative and synergistic effects of different types of music and traditional antihypertensive drugs, and to study the molecular mechanism by which music therapy lowers blood pressure. Since Sear first published his paper on music therapy in 1946 (Sear, 1946), music therapy has been used as an adjuvant therapy for a small number of neuropsychiatric disorders. In 1979, Janiszewski published the first study of music in the treatment of hypertension, opening a new field of music therapy for cardiovascular disease. Studies have shown that music therapy can slow the heart rate, relieve anxiety and improve clinical symptoms such as headache, dizziness and chest pain.
Therefore, music therapy has been used in the fields of mental illness and rehabilitation medicine, aiming to improve the mental and physical state of patients (Wang and Agius, 2018; Monsalve-Duarte et al., 2021). However, there is a lack of large-sample clinical evidence on whether music therapy is effective in reducing blood pressure. In this study, we adopted a new retrieval strategy related to music and blood pressure, rather than retrieving only the literature on music in the treatment of hypertension, thus obtaining a larger sample size and significantly enhancing the power of the summary analysis. Our results showed that, across randomized clinical trials totaling 1,063 patients, the music treatment groups reduced systolic blood pressure by 5.41 mmHg and diastolic blood pressure by 3.23 mmHg, both statistically significant. It should be noted that of the 12 studies that met the inclusion criteria for this pooled analysis, only 5 were on hypertension; in those, the mean reductions in systolic and diastolic blood pressure by music therapy were 9.2 and 4.8 mmHg, respectively. These results suggest that music therapy may reduce blood pressure more in patients with hypertension than in patients without hypertension. On the other hand, music therapy can reduce blood pressure even in patients with anxiety disorders and normal blood pressure. A recently published pooled analysis of the results of 10 clinical trials failed to draw a statistically significant conclusion due to the lack of complete data on the control groups (Soares do Amaral et al., 2016). Another pooled analysis published in the same year, which included only three small clinical studies, found that music therapy reduced systolic blood pressure in patients with hypertension but had no effect on diastolic blood pressure (Kunikullaya et al., 2015).
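The pooled estimates above come from inverse-variance weighting of study-level effects. The sketch below shows the fixed-effect version of that calculation in Python; the three study effects and standard errors are hypothetical placeholders, not the actual trial data summarized in this section.

```python
import math

def pool_fixed_effect(effects, ses):
    """Inverse-variance fixed-effect pooled mean difference with a 95% CI.
    effects: per-study mean BP differences (mmHg); ses: their standard errors."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical systolic-BP differences (music minus control) from three studies:
pooled, ci = pool_fixed_effect([-6.0, -4.5, -5.8], [1.2, 2.0, 1.5])
```

A random-effects model (e.g., DerSimonian-Laird) would add a between-study variance term to the weights; the fixed-effect form is shown only because it makes the weighting logic explicit.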
The pooled analysis of this study provides the first large sample of clinical evidence that music therapy reduces systolic and diastolic blood pressure in patients with hypertension and anxiety.
In this study, we investigated for the first time the effects of three different types of music, namely Western classical music (WM), rock music (RM) and Chinese classical music (CCM), on blood pressure in normotensive Wistar rats and spontaneously hypertensive rats (SHRs). We found that the different types of music had no effect on blood pressure in normotensive Wistar rats. In SHRs, noise stimulation had no significant effect on blood pressure (data not shown), Western classical music showed a nonsignificant trend toward improvement, and Chinese classical music and rock music significantly reduced blood pressure.
Although music therapy has been used in the treatment of hypertension, the published studies have many problems. Most are non-randomized, and their results are affected by a variety of confounding factors, such as disease course, hypertension stage, mood swings, medication, cultural background and music preference. Music therapy protocols are not standardized in aspects such as music type, choice of songs and length of treatment, and the samples are usually small, often only dozens of cases. In the past, the selection of music for music therapy was relatively narrow, being basically Western classical music. However, music preference may vary with the cultural background of different ethnic groups, potentially resulting in different antihypertensive effects. Above all, previous conclusions on the treatment of hypertension with Western classical music, drawn from small samples of Western populations, are not necessarily applicable to all ethnic groups. Since rats have no differences in ethnic, historical or cultural background, rat experiments should, in theory, demonstrate the effects of different music therapies on blood pressure more objectively. Surprisingly, in the SHR model of this study, Western classical music did not exert the hypotensive effects reported in previous studies of Western populations. This may be due to two reasons. First, the influence of Western classical music on blood pressure in Western populations is closely related to their historical, cultural and educational background. Second, in previous studies of Western classical music therapy, the subjects were mostly patients with grade I hypertension, whereas the blood pressure of SHRs was as high as 180/140 mmHg; therefore, Western classical music may not have an obvious effect on hypertension above grade III.
Previously published studies on music in the treatment of hypertension have not compared the relative and synergistic effects of music therapy with traditional antihypertensive drugs, and this study is the first to compare the two therapies. A 2015 study comparing music therapy plus lifestyle improvement versus lifestyle improvement alone found that music therapy significantly reduced systolic and diastolic blood pressure in patients with hypertension and prehypertension (Kunikullaya et al., 2015). At present, there is a lack of comparative studies between music therapy and drug therapy in the literature, so the value of music therapy relative to traditional drug therapy is unknown. Bisoprolol is a highly selective β1 receptor blocker that inhibits sympathetic activation and has been widely used in the treatment of hypertensive patients (DiNicolantonio et al., 2013; Lin et al., 2013). Since previous studies suggested that music therapy is likely to act by regulating autonomic nerve balance, we selected the Chinese classical music therapy group, which showed the most obvious antihypertensive effect, and compared it with the medium-dose bisoprolol group. The results showed no significant difference in antihypertensive effect between the two groups, suggesting that the effect of Chinese classical music was similar to that of medium-dose bisoprolol. In addition, our results showed that even high-dose bisoprolol could not lower blood pressure to normal levels, so we examined the synergistic effect of Chinese classical music therapy and medium-dose bisoprolol. We found that the combination of drug and music was superior to drug or music therapy alone, suggesting that Chinese classical music therapy can be used as an adjunct to drug treatment.
These results suggest that if the antihypertensive effect of medium-dose bisoprolol is not ideal, music therapy can be added. This can not only increase the antihypertensive effect but also reduce the required dosage of bisoprolol, avoiding drug side effects and reducing the economic burden, thus generating important social and economic benefits. The increased hemodynamic load in hypertension causes left ventricular hypertrophy, which can lead to left ventricular ischemia and diastolic dysfunction and eventually to congestive heart failure. Therefore, left ventricular hypertrophy is an important marker of hypertensive target organ damage (Nwabuo and Vasan, 2020). Previous studies have evaluated only the hypotensive effects of music therapy, and whether music therapy can improve myocardial remodeling in hypertension remained unclear. In this study, left ventricular wall thickness measured by ultrasound was used as an index of myocardial remodeling. We found that Chinese classical music and medium-dose bisoprolol each significantly reduced left ventricular hypertrophy, with similar inhibitory effects on myocardial remodeling. We also observed a synergistic effect of Chinese classical music and bisoprolol in inhibiting myocardial remodeling: compared with music therapy or medication alone, the combination of the two therapies significantly reduced left ventricular wall thickness. This suggests that, compared with drug therapy alone, combined music and drug therapy can more effectively improve myocardial remodeling.
A large number of previous studies have shown that activation of the sympathetic nervous system plays a central role in the pathogenesis of hypertension, and the adrenergic receptors on the membranes of different organs are the key link mediating sympathetic function. Adrenergic receptors are G-protein-coupled receptors that respond to catecholamine stimulation and can be divided into α and β receptors. Cardiomyocytes express three subtypes: β1, β2 and α1. The β1 receptor signals through the cAMP/PKA pathway, the β2 receptor through the PI3K/Akt pathway, and the α1 receptor through the PLC/PKC pathway. Previous studies on the treatment of hypertension with music observed only the antihypertensive efficacy, not the molecular mechanism, which therefore remained unclear. In our study, we found that the expression of β1/cAMP/PKA and α1/PLC/PKC in the SHR group was significantly increased compared with Wistar rats. Compared with the SHR control group, the expression levels of β1/cAMP/PKA and α1/PLC/PKC were significantly decreased in both the Chinese classical music treatment group and the medium-dose bisoprolol treatment group, and were further significantly decreased in the combined treatment group. These results suggest that Chinese classical music therapy can downregulate β1/cAMP/PKA and α1/PLC/PKC signaling, and that Chinese classical music combined with medium-dose bisoprolol has a synergistic inhibitory effect on these pathways, which may explain the synergistic antihypertensive effect of music and drug therapy.
On the other hand, the expression of the β2/PI3K/Akt signaling pathway was not significantly affected by Chinese classical music alone, medium-dose bisoprolol alone, or the combination. Unfortunately, in this experiment we only measured the expression levels of PKA/PKC and did not measure their phosphorylation levels or activity. We will continue to explore the activation levels and downstream pathway expression changes of PKA/PKC in future studies, so as to further reveal the specific mechanism of music therapy. These results indicate that the main mechanism by which Chinese classical music lowers blood pressure is downregulation of the β1/cAMP/PKA and α1/PLC/PKC signaling pathways. This is the first elucidation of a molecular mechanism by which music therapy lowers blood pressure, and it points the direction for further research on music therapy.
The results of this study have important clinical significance for the treatment of hypertension complicated with respiratory disease. It is known that the α1 receptor is mainly distributed in vascular smooth muscle, the β1 receptor mainly in cardiac muscle, and the β2 receptor in both vascular and bronchial smooth muscle (O-Uchi and Lopes, 2010; Chen et al., 2010). As a result, patients with both respiratory disease and hypertension have poor tolerance of non-selective beta blockers. In these patients, clinicians often use highly selective β1 blockers such as bisoprolol as antihypertensive therapy, but even highly selective β1 blockers are limited if the respiratory disease is severe. This study confirmed that the mechanisms by which Chinese classical music treats hypertension and inhibits myocardial remodeling derive from downregulation of the β1/cAMP/PKA and α1/PLC/PKC signaling pathways, without affecting the bronchial β2/PI3K/Akt signaling pathway. Therefore, Chinese classical music may in the future become a preferred option for patients with severe respiratory disease and hypertension.
Taken together, the pooled analysis of this study provides the first large sample of clinical evidence that music therapy reduces systolic and diastolic blood pressure in patients with hypertension and anxiety. The basic experiments confirmed that Chinese classical music therapy reduced blood pressure and inhibited left ventricular hypertrophy in SHRs by inhibiting the β1/cAMP/PKA and α1/PLC/PKC signaling pathways. The therapeutic effect was similar to that of medium-dose bisoprolol, and the combination of Chinese classical music and medium-dose bisoprolol had a synergistic effect on lowering blood pressure and inhibiting left ventricular remodeling. Our results suggest that Chinese classical music therapy may be an alternative and adjuvant therapy for hypertension, with important academic value and promising prospects for clinical application.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Cheeloo College of Medicine, Shandong University.
IgG4-Related Inflammatory Pseudotumor of the Trigeminal Nerve: Another Component of IgG4-Related Sclerosing Disease?
SUMMARY: IgG4-related IPTs have been reported in various sites and may form part of the spectrum of systemic IgG4-related sclerosing disease. Some pseudotumors are clinically and radiologically indistinguishable from malignant tumors. We present the first case of an IgG4-related IPT of the trigeminal nerve diagnosed histopathologically without involvement of any of the common sites. The trigeminal nerve pseudotumor may represent a component of IgG4-related sclerosing disease.
IPT is a rarely occurring non-neoplastic lesion that can mimic a neoplasm. Histologically, the lesions are characterized by inflammatory cell infiltration and variable fibrotic responses. IPT has been described in the literature by many different names, a fact that suggests the complexity and variable histologic characteristics and behavior of this entity.
In recent years, the relationship between some populations of IPT and IgG4 has been suggested. IgG4-related IPTs may form part of the spectrum of systemic IgG4-related sclerosing disease, with autoimmune pancreatitis being the predominant clinical presentation. 1 However, IgG4-related sclerosing diseases without pancreatic involvement have also been reported. 2 In this article, we report a rare case of pathologically proved IgG4-related IPT that involved the unilateral trigeminal nerve.
Case Report
A 59-year-old woman with a medical history of palmoplantar pustulosis and diabetes presented with left-sided facial numbness, which she had experienced for 4 years. She denied tinnitus, difficulty in hearing, or dysphagia. Physical examination revealed paresthesia in areas supplied by the left trigeminal nerve. Findings of the remainder of her neurologic examination were unremarkable. She had no surgical history in the head or neck region.
MR imaging revealed a homogeneously enhancing soft-tissue mass involving the skull base along the second and third divisions of the left trigeminal nerve (Fig 1A). T2-weighted imaging demonstrated a hypointense mass in the left Meckel cave, extending to the left pterygopalatine fossa via the left foramen rotundum and further to the infraorbital canal. The mass also extended to the left masticator space (Fig 1B) via the left foramen ovale. CT in a bone algorithm showed expansion of the left infraorbital canal (Fig 1C), the left foramen rotundum, and the left foramen ovale. There were no signs of bone destruction.
Laboratory work-up results were all within normal limits. Tumor markers were also negative. Chest, abdominal, and pelvic CT findings were unremarkable.
The mass in the left retromaxillary area was surgically biopsied. Histologic evaluation showed nerve fibers (Fig 2A, arrowhead) surrounded by an attenuated inflammatory infiltrate (Fig 2B) comprising predominantly B and T lymphocytes with moderate fibrosis. Immunohistochemistry indicated abundant IgG4+ plasma cell infiltration (128/HPF) in and around the aggregates of lymphocytes (Fig 2C) and a high ratio of IgG4+/IgG+ plasma cells (71%). There was no evidence of atypia or neoplasia. Immunostaining for ALK was negative. The final diagnosis was IgG4-related IPT.
The patient did not seek further treatment. MR imaging studies at 6-and 12-month follow-up showed no significant changes.
Discussion
The etiology of IPT is still unclear but is thought to be infectious or autoimmune in nature. IPT is histologically characterized by a proliferation of spindle cells such as fibroblasts and myofibroblasts, admixed with inflammatory components consisting of lymphocytes, plasma cells, eosinophils, and histiocytes. However, the cell population can vary from one tumor to another or even from one microscopic field to another within the individual tumor.
Historically, IPT has been designated under various synonyms, such as plasma cell granuloma, xanthogranuloma, histiocytoma, and IMT. IMT is now considered a "distinctive neoplastic proliferation" composed predominantly of myofibroblasts. 3 Recent studies have revealed ALK gene rearrangements in IMT, suggesting a neoplastic cause, though not all IMTs are positive for ALK by immunohistochemistry. Our case showed predominant lymphocyte proliferation with reactive fibrosis. Myofibroblastic spindle cells did not show cytologic atypia, and immunostaining for ALK was negative.
Recently, some populations of IPTs of the liver 4 and lung 5 have been reported to show pathologic characteristics similar to those of IgG4-related sclerosing disease, as they involve lymphoplasmacytic infiltration and high IgG4+ plasma cell infiltrates. IgG4-related IPTs are occasionally complicated with IgG4-related pancreatitis and sialadenitis, suggesting that these types of IPTs may form part of the spectrum of systemic IgG4-related sclerosing disease.
There is currently no consensus on the number of IgG4+ plasma cells required for the diagnosis of IgG4-related IPT; however, several reports recommend that the ratio of IgG4+/IgG+ cells is also important for the diagnosis. 4,5 According to the literature, IgG4+ cells >60-100/HPF and a ratio of IgG4+/IgG+ cells >40%-50% seem to be highly suggestive of IgG4-related disease. Yamamoto et al 6 recently reported that both of these values are significantly lower in IMTs than in IgG4-related lesions. In our case, the number of IgG4+ plasma cells per HPF was 128, and the ratio of IgG4+/IgG+ cells was 71%.
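As a plain-arithmetic illustration of the cutoffs just cited, the sketch below encodes the lower-bound thresholds (>60 IgG4+ cells/HPF, >40% IgG4+/IgG+ ratio) as a simple screen. The function name and defaults are illustrative only, not a validated diagnostic rule.

```python
def suggests_igg4_related(igg4_per_hpf, igg_per_hpf,
                          count_cutoff=60, ratio_cutoff=0.40):
    """Screen against the lower-bound literature cutoffs cited in the text:
    >60 IgG4+ cells/HPF and an IgG4+/IgG+ ratio >40%."""
    return (igg4_per_hpf > count_cutoff
            and igg4_per_hpf / igg_per_hpf > ratio_cutoff)

# The reported case: 128 IgG4+ cells/HPF with an IgG4+/IgG+ ratio of 71%.
case_positive = suggests_igg4_related(128, 128 / 0.71)  # -> True
```

Because the literature gives ranges rather than single cutoffs, any such screen should be read as supportive rather than diagnostic; the case here clears even the upper bounds of both ranges.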
In the head and neck region, IgG4-related IPTs of the salivary glands, lacrimal glands, and pituitary glands are well known; however, nasal/paranasal, 7 dura mater, 8 and parapharyngeal space 9 lesions have also been reported. Our report describes a case of IgG4-related IPT of the trigeminal nerve without involvement of any of the preferential organs. Seol et al 10 described another case of skull base IPT, which showed perineural spread along the trigeminal nerve, very similar to our case. However, no attempt was made to investigate the possible linkage with IgG4-related systemic disease or to characterize the plasma cell immunophenotype.
Imaging findings of IPT, including IgG4-related IPT, frequently reveal a homogeneously enhancing soft-tissue mass. On T2-weighted images, the mass is usually iso- to hypointense relative to brain. This can possibly be explained by the combination of fibrosis and attenuated cellularity of IPTs, which may also explain why IPTs tend to appear hyperintense on DWI. Attempts to differentiate IPTs from cancer 11 and lymphoma 12 on DWI have been made. However, the various degrees of cellularity, fibrosis, and perfusion in IPTs account for differences in diffusion restriction and often make the diagnosis difficult. IPTs with bone involvement and internal carotid artery encasement have also been reported in the past, 13,14 including pathologically proved IgG4-related IPTs. 7 In addition to these imaging features, our patient also had extensive perineural spread along the trigeminal nerve, highly suggestive of a neoplastic process.
IgG4-related IPT generally has a benign clinical course. These lesions usually respond to corticosteroids; however, relapse rates range from 30% to 40%. 15 For unresponsive IPTs, surgical resection may be considered. Our patient did not seek further treatment. Her symptoms and follow-up MR images have not shown significant changes. However, a recent report suggests that lymphomatous transformation occurs in 10% of cases, 16 adding yet another reason for close long-term surveillance.
Temporal coding and rate remapping: Representation of nonspatial information in the hippocampus
Hippocampal place cells represent nonspatial information through a process called rate remapping, which involves a change in the firing rate of a place cell without changes in its spatial specificity. However, many hippocampal phenomena occur on very short time scales over which long‐term average firing rates are not an appropriate description of activity. To understand how rate remapping relates to fine‐scale temporal firing phenomena, we asked how rate remapping affected burst firing and trial‐to‐trial spike count variability. In addition, we looked at how rate remapping relates to the theta‐frequency oscillations of the hippocampus, which are thought to temporally organize firing on time scales faster than 100 ms. We found that theta phase coding was preserved through changes in firing rate due to rate remapping. Interestingly, rate remapping in CA1 in response to task demands preferentially occurred during the first half of the theta cycle. The other half of the theta cycle contained preferential expression of phase precession, a phenomenon associated with place cell sequences, in agreement with previous results. This difference of place cell coding during different halves of the theta cycle supports recent theoretical suggestions that different processes occur during the two halves of the theta cycle. The differentiation between the halves of the theta cycle was not clear in recordings from CA3 during rate remapping induced by task‐irrelevant sensory changes. These findings provide new insight into the way that temporal coding is utilized in the hippocampus and how rate remapping is expressed through that temporal code.
| INTRODUCTION
The hippocampus is well known for the location-specific firing of its principal cells, termed "place cells" (O'Keefe, 1976). Place cell firing involves more than a simple representation of the current location of the animal. For example, sequences of place cells corresponding to actual paths through the environment occur during the theta-frequency (6-12 Hz) oscillations that occur during engaged behavior (Buzsaki, 2002). During each ~100 ms theta cycle, the place cell population represents the sequence of upcoming locations in order (Foster & Wilson, 2007; Skaggs, McNaughton, Wilson, & Barnes, 1996). Thus, the function of the hippocampus can only be understood if one takes into consideration temporal coding properties.
In addition to representing spatial sequences, place cells also represent nonspatial information (e.g., sensory: Leutgeb et al., 2005 or task-related: Allen, Rawlins, Bannerman, & Csicsvari, 2012). This phenomenon has been described in terms of the effects of nonspatial information on the long-term average firing rates of place cells and thus has been termed "rate remapping". While the preferred response location ("place field") of a given cell does not change with the content of the nonspatial information, the rate at which the place cell fires within the place field can change by an order of magnitude. This phenomenon has generated interest (Schiller et al., 2015) because it bridges the gap between the hippocampal spatial map and the function of the hippocampus in the formation of episodic memory, an essential component of which is nonspatial information (Eichenbaum, Yonelinas, & Ranganath, 2007; Scoville & Milner, 1957). However, studies of rate remapping thus far have generally examined the long-term average firing rate of place cells over the course of 10 (Leutgeb et al., 2005) or 30 min (Ji & Wilson, 2008), but see (Mankin, Diehl, Sparks, Leutgeb, & Leutgeb, 2015; Mankin et al., 2012) for within-session effects on firing rate. Given that an entire sequence of locations is played out during the ~100 ms theta cycle, rate remapping information associated with individual locations in that sequence must be available on even shorter time scales. To this end, we studied fine-scale temporal aspects of rate remapping, as described in the following paragraphs.
One aspect of fine-scale response properties in the hippocampus is the tendency of cells to fire in high-frequency bursts (Ranck, 1973).
These bursts have been experimentally associated with particular input pathways (Bittner et al., 2015;Royer et al., 2012). Theoretical work has suggested that, by modulating the number of spikes that occur within a burst, graded information can be encoded on very fast time scales (Kepecs & Lisman, 2003). Therefore, we looked at whether rate remapping is the result of a change in the number of spikes per burst.
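The burst statistic examined here, spikes per burst, is conventionally computed with an inter-spike-interval (ISI) criterion. The sketch below is a minimal Python version assuming a 10 ms ISI threshold; both the threshold and the example spike train are illustrative choices, not parameters taken from this study.

```python
def spikes_per_burst(spike_times_ms, isi_threshold_ms=10.0):
    """Group spikes into bursts with a simple ISI criterion (an ISI below the
    threshold keeps a spike in the current burst) and return burst sizes."""
    if not spike_times_ms:
        return []
    sizes = [1]
    for prev, cur in zip(spike_times_ms, spike_times_ms[1:]):
        if cur - prev < isi_threshold_ms:
            sizes[-1] += 1
        else:
            sizes.append(1)
    return sizes

# A 3-spike burst, an isolated spike, then a 2-spike burst:
sizes = spikes_per_burst([0.0, 4.0, 9.0, 120.0, 300.0, 306.0])  # -> [3, 1, 2]
```

Under the Kepecs and Lisman proposal, rate remapping expressed through bursting would shift the distribution of these sizes rather than the number of burst events.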
Another aspect of place cell firing that reveals itself on analysis of short time scales is the extreme variability of spike counts on different single passes through a particular location (Fenton & Muller, 1998).
This unexplained variability, which is known as "overdispersion", implies that there are hidden variables that affect place cell activity.
We examined the possibility that the representation of nonspatial information via rate remapping is the source of this variability.
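Overdispersion is typically quantified by the variance-to-mean ratio (Fano factor) of spike counts across passes through the field, which is approximately 1 for a Poisson process. The sketch below computes it for a hypothetical set of pass counts; the numbers are illustrative only.

```python
def fano_factor(pass_counts):
    """Variance-to-mean ratio of spike counts over repeated passes;
    ~1 for a Poisson process, >1 indicates overdispersion."""
    n = len(pass_counts)
    mean = sum(pass_counts) / n
    var = sum((c - mean) ** 2 for c in pass_counts) / (n - 1)  # sample variance
    return var / mean

# Hypothetical spike counts on 8 passes through one place field:
ff = fano_factor([0, 9, 1, 12, 0, 8, 2, 10])  # well above 1
```

If rate remapping drives the variability, conditioning pass counts on the nonspatial variable (e.g., trial type) should pull this ratio back toward 1.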
Interestingly, one theory that has been proposed to explain this excess variability provides an alternative interpretation of rate remapping. It has been shown that, for a task in which an animal must switch between maze and room cues, there are actually separate hippocampal maps for the two reference frames and the hippocampus switches between them in a task-dependent way (Fenton et al., 2010; Kelemen & Fenton, 2016). It has been suggested that this process of map switching may actually be a more general phenomenon that occurs under all conditions and that the hippocampus switches between maps at a frequency of 1-10 Hz (Olypher, Lánský, & Fenton, 2002; Jackson & Redish, 2007; Jezek, Henriksen, Treves, Moser, & Moser, 2011). According to this suggestion, the excess variability observed in place cell firing is the result of map switches that the experimenter is unaware of and has not accounted for. When averaged over many map switches, changes in probability of firing look like changes in rate of firing. In this way, nonspatial variables could be understood as affecting the probability of inhabiting particular maps instead of as changing a consistent firing rate in a given map. Therefore, we looked at the probability and frequency of firing on time scales short enough to ensure the presence of only a single map.
Another important issue is the expression of rate remapping on a subtheta cycle time scale. The possibility that place cells independently code spatial and nonspatial information is an interesting one, but it runs into potential difficulties. As mentioned above, place cell activity shows organization within the~100 ms theta cycle. An important aspect of that temporal organization is the phenomenon known as theta phase precession (O'Keefe & Recce, 1993), in which each place cell fires at earlier and earlier theta phase as the animal progresses through its place field (Lisman & Redish, 2009). Theta phase precession is closely related to the phenomenon of theta sequences, but is not exactly the same, as theta phase precession is a single-cell phenomenon whereas theta sequences are an ensemble phenomenon which require learning to stabilize (Feng, Silva, & Foster, 2015). Both of these phenomena may be disrupted by firing rate changes if extra spikes resulting from increased firing rate are distributed at theta phases not corresponding to the position of the animal. We therefore tested whether the rate code of rate remapping interfered with the temporal code of theta phase precession.
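Analyses of phase precession require mapping each spike onto a position within its theta cycle. The sketch below assigns phases by linear interpolation between successive theta troughs; in practice, phase is usually extracted from the filtered LFP (e.g., via a Hilbert transform), so this pure-Python version with made-up trough times is only a schematic.

```python
import bisect
import math

def spike_theta_phase(spike_t, trough_times):
    """Assign each spike a theta phase in [0, 2*pi) by linear interpolation
    between successive theta troughs (phase 0 at each trough). Spikes outside
    the span of the detected troughs are dropped."""
    phases = []
    for t in spike_t:
        i = bisect.bisect_right(trough_times, t) - 1
        if 0 <= i < len(trough_times) - 1:
            t0, t1 = trough_times[i], trough_times[i + 1]
            phases.append(2 * math.pi * (t - t0) / (t1 - t0))
    return phases

# 8 Hz theta (125 ms cycles); a spike a quarter of the way into the 2nd cycle:
troughs = [0.0, 0.125, 0.250, 0.375]
ph = spike_theta_phase([0.15625], troughs)  # -> [pi/2]
```

Phase precession then appears as a systematic advance of these phases as the animal moves through a place field, which is what makes extra spikes from rate remapping a potential source of interference.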
In a final set of analyses, we asked the question of whether rate remapping occurs uniformly throughout the theta cycle. This question is of particular interest given theoretical work (Hasselmo, Bodelón, & Wyble, 2002;Sanders, Rennó-Costa, Idiart, & Lisman, 2015) and experimental work (Hyman, Wyble, Goyal, Rossi, & Hasselmo, 2003;Schomburg et al., 2014;Siegle & Wilson, 2014;Zheng, Bieri, Hsiao, & Colgin, 2016) suggesting that different phases of theta have different computational functions. In particular, it was suggested that the first half of theta is concerned with computations about current position, whereas the second half of theta deals with predictions associated with upcoming locations (Sanders et al., 2015). Our results provide support for this classification of function through analysis of the theta phase preference of rate remapping.
We addressed the above questions by analyzing two rich datasets available to us. For one task, rats were trained to run back and forth on a black linear track to receive reward at both ends of the track. On the recording days, comparisons were made between place cell activity on the black track and activity on the same task when the black track surface had been switched out for a white surface. This task is similar to other sensory rate remapping paradigms, for which rate remapping occurs in the CA3 region as well as the downstream CA1 region (Leutgeb et al., 2005). We analyzed recordings from CA3 during this task and refer to this data as the "Sensory/CA3" dataset.
For the other task, recordings of place cells in rats on a nondelay alternation task demonstrate rate remapping between the two trajectories (Ji & Wilson, 2008;Robitsek, White, & Eichenbaum, 2013;Wood, Dudchenko, Robitsek, & Eichenbaum, 2000). On each trial, the rat runs down the central arm of a figure-8 shaped maze. At the end, it can turn either left or right. Reward is given for turning the opposite direction of the previous trial. Neurons with place fields on the central arm rate remap depending on which trial type (from left to right or from right to left) the rat is currently on (Frank, Brown, & Wilson, 2000;Ji & Wilson, 2008;Wood et al., 2000). This rate remapping occurs in CA1 place cells, but not in the upstream CA3 place cells (Ito, Zhang, Witter, Moser, & Moser, 2015). Rate remapping in CA1 is inherited from prefrontal cortex via the thalamus, and the task is not hippocampal dependent, so the rate remapping is a reflection of task information, not a driver of decision-making (Ito et al., 2015). We analyzed recordings from CA1 during this task and refer to this data as the "Internally Generated (IntGen)/CA1" dataset. This task is important to study because the variable being represented in the firing rate of the place cells is internally generated; there are no external cues to differentiate between the trial types.
Because these forms of rate remapping differ in the type of information represented and in the relevant region, they provide an overview of how temporal coding is affected by rate remapping. However, one should be careful not to directly compare the results from the two datasets, because it is impossible to say whether differences are a result of differences between the brain areas or the result of differences between the tasks.
| MATERIALS AND METHODS
All data analysis was performed by HS under the supervision of JEL.
Unpublished CA3 data was provided by TS, and collected in the lab of JKL. Unpublished CA1 data was provided by DJ, and collected in the lab of MAW.
| Sensory/CA3 task data collection
Recording procedures have been described in detail previously. All surgical and experimental procedures were approved by the University of California, San Diego IACUC. Three rats (Long-Evans males, 3-5 months old, preoperative weight of 375-485 g) were trained to perform the Sensory/CA3 task after prior training in a spatial working memory task in a radial 8-arm maze for 3 weeks. An electrode assembly that consisted of 14 independently movable tetrodes was implanted above the right hippocampus (4.0 mm posterior and 2.9 mm lateral relative to Bregma).
Two weeks after surgery, access to food was restricted and the rats were maintained at about 85% of the free-feeding body weight.
Water was readily available. Animals were trained to run back and forth on a linear track with a black vinyl surface (148 × 7 cm with small sides rising .5 cm above the surface of the arm, 53 cm elevated from the floor) to obtain chocolate milk reward at the ends of the track. The training lasted for 2 days.
Electrophysiological recording started when well-separated units were identified in the hippocampus. LFP recordings were filtered between 1 and 425 Hz. Unit activity was amplified and band-pass filtered between 600 Hz and 6 kHz. Spike waveforms above a trigger threshold (40 μV) were recorded at 32 kHz and sorted manually offline (A.D. Redish, http://redishlab.neuroscience.umn.edu/MClust/MClust.html). Autocorrelation and cross-correlation functions were used as additional separation criteria. CA3 cells with an average firing rate of less than 3 Hz and waveforms longer than 200 μs were considered to be putative excitatory cells and included in analysis. The animals' positions were tracked with two infrared diodes mounted over the animal's head and sampled at 30 Hz.
Each recording day consisted of four 10 min long sessions with an intertrial interval of 5 min. During the first and fourth session, animals performed the task on the familiar black track. During the second and third sessions, the black surface of the track was switched out for a white vinyl surface. The animals were recorded on 2 days each.
| Internal/CA1 task data collection
The experimental procedure has been described previously (Ji & Wilson, 2008). Briefly, rats (Long-Evans males, 5-8 months old) were trained on a figure-8 shaped maze for about 2-3 weeks, for about 30 min each day. The animal was rewarded with chocolate sprinkles every time he alternated between two trajectories that shared the same central arm.
After the animal achieved a performance criterion of at least 80% accuracy, a tetrode array containing 18 tetrodes was surgically implanted targeting the right dorsal CA1 (coordinates: 4.1 mm posterior and 2.2 mm lateral relative to Bregma). Tetrodes were slowly moved to the CA1 pyramidal layer during the 2-3 weeks after the surgery. One week after surgery, rats were retrained on the figure-8 alternation task.
Recording started once clusters of spikes were stabilized and the animal's performance reached the presurgery performance level.
Spikes were identified by a threshold of~70 μV, sampled at 32 kHz, and manually sorted offline using the spike amplitudes across four tetrode channels (XClust, M. Wilson). Local field potentials (LFPs) were sampled at 2 kHz. The animal's positions were tracked with two infrared diodes mounted over the animal's head and sampled at 30 Hz.
The data analyzed here were acquired from three rats while they were performing the figure-8 alternation task. The rats later performed a trajectory-switching task (Ji & Wilson, 2008), but the data were not included in the analysis here.
| Data analysis: Technical definitions
Place cells were defined as having an overall firing rate >.02 Hz and <4 Hz.
Place fields were defined as areas where the firing rate of the cell was greater than 10% of the maximum firing rate of that cell. Place fields with gaps that were smaller than 10 cm and less than 1/5 of the total size of the place field were merged into a single place field. Place fields with fewer than 5 total spikes, lower than .5 Hz firing rate, or smaller than 3 cm (a single spatial bin) were removed.
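These criteria can be sketched in Python (the authors' analyses were in Matlab). The 3 cm bin size follows the text; applying the .5 Hz criterion to the in-field peak rate, and measuring the 1/5-of-field-size gap criterion against the merged field, are our assumptions, and all names are ours.

```python
BIN_CM = 3.0  # spatial bin size, from "3 cm (a single spatial bin)"

def find_place_fields(rate_map, spike_counts, min_spikes=5, min_peak_hz=0.5,
                      max_gap_cm=10.0):
    """Return (start, end) bin indices (inclusive) of accepted place fields."""
    peak = max(rate_map)
    above = [r > 0.10 * peak for r in rate_map]

    # 1. contiguous runs of bins above 10% of the cell's maximum rate
    runs, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(above) - 1))

    # 2. merge runs separated by gaps < 10 cm and < 1/5 of the merged field
    merged = []
    for run in runs:
        if merged:
            s0, e0 = merged[-1]
            gap = run[0] - e0 - 1
            size = run[1] - s0 + 1
            if gap * BIN_CM < max_gap_cm and gap < size / 5.0:
                merged[-1] = (s0, run[1])
                continue
        merged.append(run)

    # 3. drop fields with < 5 spikes, peak rate < .5 Hz, or only a single bin
    kept = []
    for s, e in merged:
        if (e - s + 1) >= 2 and sum(spike_counts[s:e + 1]) >= min_spikes \
                and max(rate_map[s:e + 1]) >= min_peak_hz:
            kept.append((s, e))
    return kept
```

A field with a one-bin dip below threshold is merged into a single field, while an isolated single above-threshold bin is discarded.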
For the CA1 dataset, place cells with place fields on the central arm were considered as potential remappers and included for subsequent analysis (74/198 cells). Of those, one cell had two place fields on the central arm for a total of 75 place fields analyzed in Figure 1c. Of those place fields, 44 had a firing rate ≥.5 Hz and 5 or more spikes fired for the low firing rate condition and therefore had sufficient firing for the comparisons shown in all following figures (Figures 2-6).
For the CA3 dataset, we restricted our analysis to locations sufficiently far from the edge of the track so that data would not be confounded with reward site activity. We defined reward locations manually by excluding areas close to the edge of the track, where the animal had low velocity. These locations corresponded to a region of 10-20 cm at each edge of the track. We calculated place fields separately for rightward passes and leftward passes, as we found independence of place cell activity for opposite directions of motion (data not shown, see also Markus et al., 1995). For the CA3 data, of 154 units recorded, 37 had place fields on leftward passes and 33 had place fields on rightward passes. Of those, 17 cells had multiple place fields for a total of 87 place fields analyzed in Figure 1d. Of those, 32 place fields had a firing rate ≥.5 Hz and 5 or more spikes fired for the low firing rate condition and therefore had sufficient firing for the comparisons shown in all following figures (Figures 2-6).
Theta was defined by filtering the raw LFP trace, as measured at the local pyramidal cell layer, between 6 and 10 Hz. Local maxima (peaks) were defined as 0 phase, and other phases were linear interpolation between consecutive peaks.
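The phase assignment described above can be sketched as follows (a pure-Python illustration, not the authors' Matlab code; the LFP is assumed already band-pass filtered 6-10 Hz, and the function name is ours):

```python
import math

def theta_phase_deg(filtered_lfp, t):
    """Theta phase in degrees at sample index t: local maxima of the filtered
    LFP are 0 deg, and phase is linearly interpolated between consecutive
    peaks (0 to 360 deg)."""
    peaks = [i for i in range(1, len(filtered_lfp) - 1)
             if filtered_lfp[i - 1] < filtered_lfp[i] >= filtered_lfp[i + 1]]
    for p0, p1 in zip(peaks, peaks[1:]):
        if p0 <= t < p1:
            return 360.0 * (t - p0) / (p1 - p0)
    raise ValueError("sample not bracketed by two theta peaks")
```

On a synthetic cosine with a 100-sample period, the midpoint between two peaks maps to 180°.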
Bursts were defined as spikes with interspike intervals <10 ms.
Overdispersion was calculated as described in Fenton and Muller (1998). In short, the observed number of spikes S on a given pass through the place field is compared to the expected number of spikes N = Σᵢ Rᵢ Δtᵢ, calculated by multiplying the average firing rate at each location by the amount of time spent at that location. If more than four spikes are expected, the Poisson distribution can be approximated by a normal distribution, giving the z score Z, which is subject to a correction decreasing the absolute value of Z by 1/2 to account for transforming the discrete spike counts into a continuous z score. Passes for which fewer than four spikes were expected because of too little time spent in the place field were excluded, as in Fenton and Muller (1998).
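A minimal Python sketch of this z score (the authors' analyses were in Matlab). Note one assumption: Fenton and Muller's continuity correction is conventionally applied to the discrete spike-count difference before normalization, which is what we implement; the text describes it as shrinking |Z| directly. Names are ours.

```python
import math

def pass_z(observed, rates, dts, min_expected=4.0):
    """z score of one pass through a place field; None if fewer than
    `min_expected` spikes are expected (such passes were excluded)."""
    expected = sum(r * dt for r, dt in zip(rates, dts))  # N = sum_i R_i dt_i
    if expected < min_expected:
        return None
    diff = observed - expected
    # continuity correction: shrink the discrete count difference by 1/2
    diff = math.copysign(max(abs(diff) - 0.5, 0.0), diff)
    return diff / math.sqrt(expected)
```

For example, 10 observed spikes against 4 expected gives z = (6 − .5)/√4 = 2.75, whereas a pass with only 1 expected spike is excluded.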
All analyses were implemented in Matlab.
| Phase precession analyses
All quantifications of phase precession used the cl_corr function in the measure_phaseprec toolbox, which was provided by Richard Kempter (Kempter, Leibold, Buzsáki, Diba, & Schmidt, 2012). This toolbox itself relies on the circStat toolbox by Philipp Berens (Berens, 2009), available at http://www.mathworks.com/matlabcentral/fileexchange/10676. The mean resultant vectors in Figure 6 were calculated using the circ_mean and circ_r functions in the circStat toolbox. The circular-linear correlations in Figure 6c,d were calculated with the circ_corrcl function in the circStat toolbox. It is important to note that circ_corrcl calculates the correlation between a circular independent variable and a linear dependent variable, whereas the cl_corr function from the measure_phaseprec toolbox calculates the correlation between a linear independent variable and a circular dependent variable.
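The circ_mean and circ_r computations referenced above both reduce to the mean resultant vector of the phase angles; a pure-Python equivalent (our own sketch, not the circStat source):

```python
import cmath, math

def circ_mean_r(angles_rad):
    """Mean direction (rad) and resultant length of a set of angles,
    the quantities returned by circStat's circ_mean and circ_r."""
    v = sum(cmath.exp(1j * a) for a in angles_rad) / len(angles_rad)
    return cmath.phase(v), abs(v)
```

Identical angles give resultant length 1; diametrically opposed angles cancel to length 0, which is why the resultant length measures the strength of a phase preference.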
| RESULTS
To better understand how the rate changes in response to nonspatial variables of rate remapping are expressed on short time scales, we analyzed two datasets collected from the hippocampus as rats ran on a figure-8 maze (Internal/CA1) or a linear track (Sensory/CA3). For the Sensory/CA3 task, recordings were obtained from CA3 with the goal of understanding the encoding of sensory information.

FIGURE 1 Rate remapping verification. (a,b) Each dot represents a cell, where its x-position is the center of the place field with maximal firing rate for condition 1, and its y-position is the center of the place field with maximal firing rate for condition 2. These dots fall near the dotted unity line, indicating that the locations of place fields are similar for both conditions. (a) Cells from the CA1 dataset. Condition 1 was when the animal was on the central arm coming from the left arm and going to the right arm, whereas condition 2 was when the animal was on the central arm coming from the right arm and going to the left arm. (b) Cells from the CA3 dataset. Condition 1 was the black track, and condition 2 was the white track. (c,d) Each dot represents a place field, where its x-position is the firing rate for condition 1 and its y-position is the firing rate for condition 2. These dots do not fall near the dotted unity line, indicating that firing rates for the two conditions are different. Blue dots indicate place fields with sufficient firing for both conditions to be included for subsequent analyses, whereas red x's indicate place fields that were not considered rate remappers. Panel (c) shows cells from the CA1 dataset, and panel (d) shows cells from the CA3 dataset.

FIGURE 2 Burst duration. (a,b) Histograms of number of spikes per burst for two example place fields. Top: high rate condition; bottom: low rate condition. Mean number of spikes per burst (green lines). No significant difference between the spikes per burst distributions across conditions for either place field. Panel (a) shows data from a place field from the CA1 dataset, and panel (b) shows data from a place field from the CA3 dataset. (c,d) For each place field, the distributions of spikes per burst are compared between the two conditions using the Kolmogorov-Smirnov test. The p value for each place field is plotted as a function of extent of rate remapping for that place field (purple dots). Very few place fields have significantly different burst length in the two conditions (dotted line at p = .05). Panel (c) shows place fields from the CA1 dataset, and panel (d) shows place fields from the CA3 dataset. (e,f) For each place field, two dots are plotted: one (red) corresponding to the high firing rate condition and one (blue) corresponding to the low firing rate condition. On the x axis is the firing rate of the place field on that condition, and on the y axis is the mean number of spikes per burst (top) or the fraction of spikes that occur in bursts (bottom). Neither of these is strongly affected by firing rate. Panel (e) shows place fields from the CA1 dataset, and panel (f) shows place fields from the CA3 dataset.

FIGURE 3 (a,b) Overdispersion of all trials for all place fields. For each trial, a z score is calculated based on a comparison of the number of spikes expected on each trial (based on the average firing rate over all trials) with the number of spikes observed, normalized by the expected variance. Stacked histograms are plotted where red represents trials that occurred for the high rate condition for each place field and blue represents trials that occurred for the low rate condition for each place field. The solid line shows the distribution expected if spike counts were simply Poisson (approximated by the normal distribution for sufficiently large expected spike counts; see Section 2). The histograms are highly overdispersed and show some bimodality. Panel (a) shows place fields from the CA1 dataset, and panel (b) shows place fields from the CA3 dataset. (c,d) Overdispersion for all place fields for their high rate condition. In this panel, the z score is calculated with respect to the average firing rate only from the high rate condition. Spike counts are still overdispersed but now seem to be more unimodal. Panel (c) shows place fields from the CA1 dataset, and panel (d) shows place fields from the CA3 dataset. (e,f) Overdispersion for all place fields for their low rate condition. In this panel, the z score is calculated with respect to the average firing rate only from the low rate condition. Spike counts are still overdispersed, but now seem to be more unimodal. Panel (e) shows place fields from the CA1 dataset, and panel (f) shows place fields from the CA3 dataset. (g,h) Trial-by-trial correlations between spike counts occurring in pairs of simultaneously recorded place fields. The wide bars show the observed distribution of correlations. Place field pairs whose spike count correlations were significant are shown in purple. The null distribution achieved by shuffling trial indices independently for each place field is shown in blue. The null distribution achieved by shuffling trial indices independently for each place field, while requiring that the shuffling only be within trials with the same condition, is shown in orange. Panel (g) shows correlations from the CA1 dataset, and panel (h) shows correlations from the CA3 dataset.
| Representation of nonspatial information in firing rate without affecting spatial response properties
Rate remapping is defined as changes in firing rate of place cells without changes in their spatial response. We asked whether the data collected in our tasks contains bona-fide representation of nonspatial information by modulation of place cell firing rates as opposed to statistical noise or misdiagnosis of changes in spatial response properties, as might be observed in global remapping (Colgin, Moser, & Moser, 2008;Muller, Kubie, & Ranck, 1987). In this and all following analyses, we separated each pass through the place field into one of two conditions: black track or white track for the Sensory/CA3 data, and going from left to right or going from right to left for the Internal/CA1 data.
For the Sensory/CA3 data, we treated the activity of each cell occurring on rightward passes and leftward passes independently, as we found independence of place cell activity for opposite directions of motion (data not shown, see also Markus et al., 1995).
First, we calculated place field locations for each condition independently, according to criteria described in Section 2. We calculated the center of the place field with maximal firing rate for each cell under each condition, and found no significant difference between place field locations for each cell under the two conditions (Figure 1a,b; p > .05; paired t CA1 = −1.0, df CA1 = 46, 95% confidence interval of mean difference [−4.5, 1.5]; paired t CA3 = −1.3, df CA3 = 45, 95% confidence interval of mean difference [−31, 6.5]; mean and median movement < mean place field size, see Figure 1a,b for values). For all following analyses, we calculated place field locations using activity during both conditions, and compared activity in those place fields across conditions.
After verifying that the locations of place fields were largely consistent across the conditions, we compared the firing rates within each place field across the conditions. Average firing rates in each place field were significantly different between the two conditions (Figure 1c,d). In particular, 66% of Internal/CA1 place fields and 34% of Sensory/CA3 place fields had significantly different (p < .01, two-sample Kolmogorov-Smirnov test) trial-by-trial spike counts between the two conditions.
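The classification above can be sketched as a two-sample Kolmogorov-Smirnov test on per-trial spike counts. The thresholds come from the text; this from-scratch implementation (with the standard asymptotic p value) is ours, not the authors' code:

```python
import math

def ks_2samp(a, b):
    """Two-sample KS statistic and asymptotic p value (sketch)."""
    d = max(abs(sum(x <= v for x in a) / len(a)
                - sum(x <= v for x in b) / len(b))
            for v in set(a) | set(b))
    en = math.sqrt(len(a) * len(b) / (len(a) + len(b)))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam < 0.2:  # asymptotic series is unreliable for tiny lambda
        return d, 1.0
    p = 2.0 * sum((-1) ** (j - 1) * math.exp(-2.0 * (j * lam) ** 2)
                  for j in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

def is_rate_remapper(counts1, counts2, alpha=0.01):
    """Do trial-by-trial spike counts differ significantly between
    the two conditions?"""
    return ks_2samp(counts1, counts2)[1] < alpha
```

Clearly separated spike-count distributions are flagged as rate remapping, while identical distributions are not.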
Both of these analyses (Figure 1) demonstrate a heterogeneity within the population. One group of place cells completely change their place fields in response to the condition change, reminiscent of global remapping. However, these results also demonstrate a subpopulation of place cells that retain their spatial response properties while simultaneously changing their firing rates in response to the condition change, conforming to the definition of rate remapping. Such a heterogeneous response to a manipulation has been observed previously and is referred to as partial remapping (Colgin et al., 2008). We will focus in this paper on the population of rate remappers. Of the place fields shown in Figure 1d, 32 had sufficient firing for both conditions (green dots in Figure 1d) and were included for subsequent analyses. For additional information on criteria for inclusion and relevant definitions, see Section 2. Subsequent analyses will compare the "high-rate condition" and the "low-rate condition". The identities of these conditions are defined on a place field-by-place field basis: the "high-rate condition" is the condition for which that particular cell has a higher firing rate in that place field. Thus, the same condition (e.g., black track for the Sensory/CA3 data) may be the high-rate condition for one place field and the low-rate condition for another.
| Burst coding
What is the fine-scale temporal structure of these rate changes? It has been suggested (Kepecs & Lisman, 2003) that information could be coded in the duration (number of spikes) of the high-frequency bursts that are known to be emitted by hippocampal place cells (Ranck, 1973); bursts including a variable number of spikes would allow for graded changes in firing rate. To explore this possibility, we produced histograms of the number of spikes occurring in each burst in the place field under each condition. We defined a burst as spikes with inter-spike intervals of less than 10 ms. The histograms for two example place fields are shown in Figure 2a,b. We calculated the contribution of changes in burst duration to changes in firing rate across conditions by plotting the fold change in firing rate against fold change in burst duration for each place field. The slope represents the fraction of the observed rate change across conditions attributable to the observed changes in burst duration. In the Sensory/CA3 data, changes in burst duration contributed .54% of the rate changes across condition. In the Internal/CA1 data, changes in burst duration contributed .25% of the rate changes across condition.

FIGURE 4 (a,b) Spikes per theta cycle for two example place fields. Only theta cycles in which there was at least one spike (where the appropriate map is being used, so stereotyped activity is expected) are counted. Top: high rate condition; bottom: low rate condition. Mean number of spikes per active theta cycle (green lines) is different in the two conditions. Panel (a) shows data from a place field from the CA1 dataset, and panel (b) shows data from a place field from the CA3 dataset. (c,d) For each place field, the distributions of spikes per theta cycle are compared between the two conditions using the Kolmogorov-Smirnov test. The p value for each place field is plotted as a function of extent of rate remapping for that place field (purple dots). Almost all place fields with greater than twofold rate remapping had highly significant increases (dotted line at p = .01) in number of spikes per theta cycle, even when limited to theta cycles for which there was at least one spike in the place field. Panel (c) shows place fields from the CA1 dataset, and panel (d) shows place fields from the CA3 dataset. (e,f) For each place field, two dots are plotted: one (red) corresponding to the high firing rate condition and one (blue) corresponding to the low firing rate condition. On the x axis is the firing rate in the place field for that condition, and on the y axis is the probability of firing in any theta cycle for which the animal is in the place field (top) or the mean number of spikes that occur in a theta cycle, when restricted to theta cycles for which there was at least one spike (bottom). Panel (e) shows place fields from the CA1 dataset, and panel (f) shows place fields from the CA3 dataset.
These data lead to the conclusion that rate remapping is not the result of modulation of number of spikes per burst.
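The burst segmentation underlying this analysis can be sketched as follows (an illustrative Python version of the <10 ms interspike-interval rule; function names are ours and spike times are in seconds):

```python
def spikes_per_burst(spike_times_s, max_isi=0.010):
    """Sizes of bursts, where a burst is a run of spikes whose successive
    inter-spike intervals are < 10 ms; single spikes count as size 1."""
    sizes, current = [], 1
    for t0, t1 in zip(spike_times_s, spike_times_s[1:]):
        if t1 - t0 < max_isi:
            current += 1
        else:
            sizes.append(current)
            current = 1
    sizes.append(current)
    return sizes

def burst_spike_fraction(spike_times_s):
    """Fraction of spikes occurring in bursts of two or more spikes."""
    sizes = spikes_per_burst(spike_times_s)
    return sum(s for s in sizes if s >= 2) / sum(sizes)
```

A train with a 3-spike burst, an isolated spike, and a 2-spike burst yields burst sizes [3, 1, 2] and a burst-spike fraction of 5/6.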
| Excess variability in place cell spiking: Overdispersion
FIGURE 6 (a,b) Normalized firing difference by theta phase for the same example place fields as in Figures 2a,b, 4a,b, and 5a,b. In red is shown the normalized spike count difference between the high and low firing rate (FR) conditions for each theta phase bin. Panel (a) shows a place field from the CA1 dataset, and panel (b) shows a place field from the CA3 dataset. (c,d) The red line shows the population-wide average of the firing differences shown in panels (a and b). The average normalized difference for the CA1 dataset (c) ranges from ~.2 at ~270° to ~.5 at ~90°. The firing rate difference is heavily weighted towards the first half of theta, as quantified by the resultant vector, which is plotted on an r axis with a range of [0, 1]. The CA3 data (d) did not show a significant bias. (e,f) Theta phase histogram of spikes in the low and high rate conditions. For the CA1 dataset (e), theta phase preference shifts from the second half of theta to the first half of theta between low and high rate conditions, demonstrating a suppression of first-half firing during the low rate condition in addition to the increase in first-half firing for the high rate condition. For the CA3 dataset (f), no significant effect exists. (g,h) For each place field, the preferred phase of rate remapping is plotted on the x axis, and the preferred phase of phase precession is plotted on the y axis. (g) Phase preferences of place fields from the CA1 dataset. Place fields cluster in the top left corner, corresponding to a preference of rate remapping for the first half of theta and a preference of phase precession for the second half of theta. (h) Phase preferences of place fields from the CA3 dataset. Place fields are overrepresented in the top half, corresponding to a preference of phase precession for the second half of theta, but there does not seem to be a consistently preferred phase of rate remapping.

We next turned to analysis of trial-to-trial variability in spatial response properties of place cells, termed "overdispersion" (Fenton & Muller, 1998). The observation is that the number of spikes that occur on a given pass is more variable than would be expected from a Poisson process. To ask whether this excess variability is coordinated across simultaneously recorded place fields, and how much of it is explained by the rate remapping condition, we generated two null distributions. One (blue line in Figure 3g,h: "shuffled") was generated by calculating spike count correlations between place field A from some trial and place field B on another random trial.
The second distribution (orange line in Figure 3g,h: "condition-matched shuffled") was generated by calculating spike count correlations between place field A from some trial and place field B on a random trial from the same condition. For both datasets, the empirical distribution has more extreme values than the condition-matched shuffled distribution, which in turn had more extreme values than the shuffled distribution. The extent of extremity differed between the datasets however.
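The two shuffles can be sketched as follows (an illustrative Python version; the structure and all names are ours, not the authors' code):

```python
import math, random, statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def null_correlations(counts_a, counts_b, conditions, matched, n=1000,
                      rng=None):
    """Spike-count correlations after shuffling counts_b across trials.
    If `matched`, trial indices are permuted only within each condition
    ("condition-matched shuffled"); otherwise freely ("shuffled")."""
    rng = rng or random.Random()
    idx = list(range(len(counts_b)))
    out = []
    for _ in range(n):
        perm = idx[:]
        if matched:
            for cond in set(conditions):
                pos = [i for i in idx if conditions[i] == cond]
                shuf = pos[:]
                rng.shuffle(shuf)
                for p, s in zip(pos, shuf):
                    perm[p] = s
        else:
            rng.shuffle(perm)
        out.append(pearson(counts_a, [counts_b[i] for i in perm]))
    return out
```

When the pair's correlation is driven by the condition, the condition-matched shuffle preserves it while the free shuffle destroys it, which is what makes the two nulls informative.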
| Rate modulation as changes in probability of discrete states?
As noted in the introduction, there are two ways to achieve a change in firing rate. One is that the nonspatial variable modulates the firing rate of a place cell in its place field, and spike counts at any particular time are drawn from a Poisson distribution with that characteristic rate. Another option (the "discrete state probability" hypothesis) is that the nonspatial information can affect the probability of the expression of an all-or-none firing phenomenon. For example, imagine that the network has two states, one in which the place field exists and one in which it does not. Changes in the probability of that place field existing would show up as continuous changes in long-term average firing rate even if there are discrete underlying states (place field existing or not existing). State transitions have been hypothesized to occur in the hippocampus, with the theta cycle being the smallest unit of time over which states are considered stable (Jackson & Redish, 2007; Jezek et al., 2011). We looked at all theta cycles (the smallest unit of time hypothesized to contain a single state) in which the animal was in a particular place field, and divided the data between theta cycles in which the cell fired and theta cycles in which it did not. One can calculate the fraction of theta cycles that the animal was in the place field for which the cell was active (text insets of Figure 4a,b and top panels of Figure 4e,f). Under the discrete state probability hypothesis, the rate remapping condition should change only the probability that the place-field state is expressed, not the firing within that state. Therefore, if we only look at theta cycles for which the cell is active in its place field (i.e., theta cycles when the network is in the state for which the place field exists), the spike count distribution (Figure 4a,b) should be the same no matter the rate remapping condition. However, the number of spikes per theta cycle increases in the high firing rate condition, even when restricting analysis to active theta cycles (when the network state presumably corresponds to the one in which the place field exists; Figure 4a-d, bottom panels of Figure 4e,f).
Therefore, we do not find evidence of discrete states in which the place field either does or does not exist.
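The two quantities this test compares can be sketched as follows (illustrative Python; the input is the spike count in each in-field theta cycle, and names are ours):

```python
def active_cycle_stats(spikes_per_cycle):
    """For the in-field theta cycles of one condition, return
    (fraction of cycles with >= 1 spike, mean spikes per active cycle).
    Under the discrete-state account, only the first number should
    change with the rate remapping condition."""
    active = [n for n in spikes_per_cycle if n > 0]
    frac_active = len(active) / len(spikes_per_cycle)
    mean_active = sum(active) / len(active) if active else 0.0
    return frac_active, mean_active
```

Comparing these two numbers between the high and low rate conditions is the crux: the data show that the second number also changes, contradicting a purely discrete-state account.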
| Coexistence of rate remapping and phase precession
The question of how rate changes involved in rate remapping interact with fine-scale temporal patterning of place cell firing remains unanswered. Place cells express a phenomenon known as "phase precession" (Jensen & Lisman, 1996; O'Keefe & Recce, 1993; Skaggs et al., 1996), which is the observation of a negative correlation between the position of the animal and the theta phase of a cell's spikes. It seems possible that increased spiking under the high rate condition may degrade the theta phase code. The question then arises: is the quality of phase precession affected by rate remapping? We compared the quality of phase precession between the high and low rate conditions for each place field. We used the absolute value of the circular-linear correlation coefficient developed by Kempter et al. (2012), as the sign of the circular-linear correlation coefficient does not always match the sign of the best-fit slope (Kempter et al., 2012) (Figure 5). We did not find a significant difference in phase precession quality between the conditions in either dataset (p > .05; paired t CA1 = 1.3, df CA1 = 43; paired t CA3 = .3, df CA3 = 31).
| Specialization of two halves of theta
Previous work has suggested different functions of the two halves of the theta cycle (Hasselmo et al., 2002; Sanders et al., 2015), where the first half of theta is for current experience (encoding) and the second half for upcoming predictions (retrieval). We use the convention that the peak of theta at the local pyramidal cell layer is 0°. For each place field, we constructed a polar histogram of spikes occurring in the place field binned by theta phase under each condition. We then took the difference of spike counts in each theta phase bin between the two conditions, divided by the sum of the spike counts in that phase bin, to get a normalized difference in firing between the conditions at each theta phase (Figure 6a,b; theta phase bins with more firing in the low firing rate condition have a negative normalized difference, and therefore their values are reflected across the origin in the polar plot). At each phase bin, we averaged the normalized firing difference across all place fields, giving an average firing difference between the high and low rate conditions by theta phase for the entire population (Figure 6c,d). For the Internal/CA1 data, the firing difference is large during the first half of theta and smaller during the second half of theta. Circular statistics give the mean resultant vector, whose direction signifies the theta phase preference of rate remapping and whose magnitude signifies the strength of that preference. Rate remapping in the Internal/CA1 data had a preferred phase of 90° and a resultant length of .24 (Figure 6c; circular-linear correlation ρ = .87, p < .01, df = 34). We plotted a theta phase histogram of all spikes occurring in the low and high rate conditions (Figure 6e).
The theta phase preference clearly shifts from the second half of theta for the low rate condition to the first half of theta for the high rate condition, demonstrating that firing is increased for the high rate condition during the first half of theta and is actually suppressed for the low rate condition during the first half of theta. The Sensory/CA3 data did not show significant theta phase dependence of rate remapping (Figure 6d; resultant length .06, circular-linear correlation ρ = .24, p > .05, df = 34).
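The phase-resolved measure above can be sketched as follows (our own reading of the analysis in Python; the number of phase bins and the weighting of bin centers by the signed normalized difference are our assumptions):

```python
import cmath, math

def rate_remap_phase_pref(high_phases_deg, low_phases_deg, n_bins=12):
    """Normalized (high - low)/(high + low) spike-count difference per theta
    phase bin, plus the direction (deg) and length of the mean resultant
    vector of the bin centers weighted by that difference."""
    width = 360.0 / n_bins
    h = [0] * n_bins
    l = [0] * n_bins
    for p in high_phases_deg:
        h[int(p % 360 // width)] += 1
    for p in low_phases_deg:
        l[int(p % 360 // width)] += 1
    diffs = [(hi - lo) / (hi + lo) if hi + lo else 0.0
             for hi, lo in zip(h, l)]
    v = sum(d * cmath.exp(1j * math.radians((i + 0.5) * width))
            for i, d in enumerate(diffs)) / n_bins
    return diffs, math.degrees(cmath.phase(v)) % 360.0, abs(v)
```

With extra high-condition spikes concentrated near 90° and extra low-condition spikes near 270°, the preferred phase of rate remapping comes out at 90°, mirroring the Internal/CA1 result.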
The strong expression of rate remapping during the first half of theta observed in our Internal/CA1 data complements previous work that has shown a preferential expression of phase precession during the second half of theta (Mehta, Lee, & Wilson, 2002; Yamaguchi, Aota, McNaughton, & Lipa, 2002). We verified this with a novel analysis parallel to the rate remapping analysis above. The magnitude of the circular-linear correlation of phase precession (Kempter et al., 2012) was calculated for a sliding 90° window stepped every 10°, generating a polar plot analogous to those shown in Figure 6a,b, where the radial value is phase precession quality instead of rate remapping magnitude. The mean resultant vector was calculated for the phase precession quality polar plot of each place field (not shown). For each place field, we plotted the direction of that mean resultant vector for phase precession on the y axis and the direction of the mean resultant vector of rate remapping on the x axis (Figure 6g,h). Place fields in the Internal/CA1 dataset clustered in the top left corner, corresponding to a preference of rate remapping for the first half of theta and a preference of phase precession for the second half of theta. This analysis demonstrates a separation of rate remapping and phase precession into separate halves of the theta cycle; see the end of "Two halves of theta dedicated to rate remapping and phase precession respectively" in Section 4 for discussion of difficulties in interpreting the Sensory/CA3 data.
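The sliding-window machinery of this analysis can be sketched as follows (illustrative Python; the quality metric is pluggable, since the paper's metric was the magnitude of Kempter et al.'s circular-linear correlation, which we do not reimplement here):

```python
import cmath, math

def sliding_window_pref(spike_phases_deg, quality, width=90.0, step=10.0):
    """Evaluate `quality` (a function from a list of phases to a number) on
    the spikes falling inside each sliding theta-phase window, and return the
    direction (deg) and length of the quality-weighted mean resultant vector
    of the window centers."""
    centers, values = [], []
    start = 0.0
    while start < 360.0:
        lo, hi = start, (start + width) % 360.0
        if lo < hi:
            sub = [p for p in spike_phases_deg if lo <= p % 360 < hi]
        else:  # window wraps past 360 deg
            sub = [p for p in spike_phases_deg
                   if p % 360 >= lo or p % 360 < hi]
        centers.append((start + width / 2.0) % 360.0)
        values.append(quality(sub))
        start += step
    v = sum(q * cmath.exp(1j * math.radians(c))
            for q, c in zip(values, centers)) / len(values)
    return math.degrees(cmath.phase(v)) % 360.0, abs(v)
```

As a simple check, with spikes clustered near 180° and quality defined as the fraction of spikes captured by the window, the preferred direction lands near 180°.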
| DISCUSSION
For place cells of the hippocampus, spatial relations can be expressed through the fine-scale temporal ordering of firing that occurs during the theta sequences associated with phase precession (Foster & Wilson, 2007; Jensen & Lisman, 1996; Lisman & Redish, 2009; O'Keefe & Recce, 1993; Skaggs et al., 1996). Place cells can also represent nonspatial information in their firing rates (Aronov, Nevers, & Tank, 2017; Leutgeb et al., 2005; Wood et al., 2000). In this study, we looked at the fine-scale temporal behavior of firing rate changes of place cells in response to nonspatial changes.
| Co-existence of independent rate and temporal codes
Whether neurons use rate codes or temporal codes to represent information has been a source of controversy (Brette, 2015;Gautrais & Thorpe, 1998;Shadlen & Newsome, 1994;Softky, 1995). Here, we demonstrate that two independent information streams can be simultaneously represented in a single spike train. Several-fold firing rate changes in response to nonspatial information can occur in place cells without degrading the quality of the theta phase coding of location within the place field. Previous work had suggested that place cells had such an ability to represent independent information content in theta phase and firing rate, specifically in the case of the rate changes due to changes in running speed (Huxter, Burgess, & O'Keefe, 2003;O'Keefe & Burgess, 2005). However, the firing rate changes associated with running speed are hard to disentangle from the natural relationship between velocity and position (Terrazas et al., 2005), so it was not clear whether rate changes due to running speed would apply to other types of rate changes. We now extend more recent reports of the independence of rate remapping and phase precession (Allen et al., 2012) in showing the general applicability of this independence of rate and temporal coding.
| Phase precession: Models and analysis
The origin of phase precession has alternatively been suggested to be a network process or a single-cell phenomenon (reviewed in Maurer & McNaughton, 2007). In particular, it had been suggested that the excitation received by a place cell slowly ramped up over the length of the place field. When combined with oscillatory inhibition, spiking would occur progressively earlier in the theta cycle as excitation ramped up, leading to the observation of phase precession (Harvey, Collman, Dombeck, & Tank, 2009; Kamondi, Acsády, Wang, & Buzsáki, 1998; Lengyel, Szatmary, & Erdi, 2003; Mehta et al., 2002). However, the results of the current study contradict such a model. We see that phase precession and the extent of excitation are independent phenomena, as the quality of phase precession is maintained over several-fold changes in firing rate. This is in addition to other recent evidence of dissociation of place fields from phase precession (Aghajan et al., 2014; Feng et al., 2015; Schlesiger et al., 2015).
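The ramp-plus-oscillatory-inhibition idea described above can be caricatured in a few lines. This is a toy sketch, not any of the cited models: a constant excitation level is compared against a sinusoidal inhibition within one theta cycle, and the first threshold crossing moves to earlier phases as excitation grows.

```python
import math

def first_spike_phase(ramp_level, amplitude=1.0):
    """Theta phase (degrees, 0-359) at which a fixed excitation level
    first exceeds an oscillatory inhibition that is maximal at phase 0
    and minimal at phase 180.  Returns None if no crossing occurs."""
    for deg in range(360):
        inhibition = amplitude * (1 + math.cos(math.radians(deg))) / 2
        if ramp_level > inhibition:
            return deg
    return None

# As excitation ramps up across the place field, the first crossing
# occurs at progressively earlier theta phases: phase precession.
phases = [first_spike_phase(level) for level in (0.1, 0.5, 0.9)]
```

With these illustrative levels the crossing phase moves from late to early in the cycle as the ramp grows, which is the qualitative signature such single-cell models aim to capture.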
Single-cell models of phase precession were supported by the observation that phase precession plots where position was replaced on the x axis with instantaneous firing rate still generate the negative correlation representative of phase precession (Harris et al., 2002), but see (Huxter et al., 2003). However, our work demonstrates that a strong confound in such an analysis is that changes from low rate condition to high rate condition during rate remapping change the theta phase preference of firing due to suppression of firing during the first half of theta under the low rate condition. This effect leads to an artificially strong correlation between instantaneous firing rate and theta phase of firing. While this analysis technique has the benefit of being a "quick and dirty" method for quantifying phase precession, analysts should be aware of the confound of nonspatial information in the relationship between firing rate and theta phase of firing.
In the hippocampal data examined here, however, the number of spikes per burst does not seem to change as a function of rate remapping condition. It should be noted that spike shape changes as bursts progress, so it is possible that biases in spike clustering may have truncated bursts. The fraction of spikes occurring in bursts did change, but not enough to account for the observed rate changes. Thus, burst parameters do not explain the representation of nonspatial information in rate remapping.
| Multiple maps hypothesis and population coordination of activity
The relationship between single cell rate remapping and population-level representation is not simply summarized. On the one hand, there is clearly correlated variability (sometimes referred to as "noise correlations") across the population. On the other hand, the rate differences between conditions are not simply the result of switching among multiple maps, one in which the cell has a place field at that location and others in which it does not (Jackson & Redish, 2007; Kelemen & Fenton, 2016; Olypher et al., 2002). Rather, we show that even when restricting analysis to theta cycles on which a cell is active and therefore in the appropriate map, there are still firing rate changes between the conditions. This result is more in line with the idea that place cells have very high-dimensional spatial/nonspatial receptive fields, for which many variables can have independent effects (Rangel, 2012; Wu, 2012). Indeed, the overdispersion of single-pass spike counts increases between CA3 and CA1 (Mankin et al., 2012), potentially corresponding to an increase in the dimensionality of the representation with progressive processing.
Finally, we observe that splitting trials by condition reduces the overdispersion of spike counts thought to be a result of unaccounted for map switches. In the Internal/CA1 data, this reduction of overdispersion is so complete as to reduce overdispersion to levels expected by chance. Even so, the low-overdispersion "maps" particular to each condition involve characteristic firing rates in each condition, not characteristic place field locations. Taken together, these results indicate the need for further research into the relationship between single cell and population-level representation of nonspatial information, as that relationship is not entirely straightforward.
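Overdispersion of single-pass spike counts can be illustrated with a variance-to-mean ratio. This toy sketch (invented counts, and a simpler statistic than the z-scored measures used in the overdispersion literature) shows how pooling two rate-remapping conditions inflates dispersion while splitting by condition removes it:

```python
def overdispersion(counts):
    """Variance-to-mean ratio of single-pass spike counts.

    A Poisson process gives values near 1; values well above 1
    indicate extra-Poisson ("overdispersed") variability.
    """
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

# Toy passes through one place field: ~12 spikes/pass in condition A,
# ~4 in condition B (several-fold rate remapping), low variability
# within each condition.
cond_a = [12, 11, 13, 12, 12, 11, 13, 12]
cond_b = [4, 5, 4, 3, 4, 5, 4, 4]
pooled = cond_a + cond_b
```

Pooling mixes two characteristic rates and looks overdispersed; conditioning on the (here, known) condition restores low dispersion, mirroring the reduction described above.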
| Two halves of theta dedicated to rate remapping and phase precession, respectively
We have shown here that, in our Internal/CA1 data, place cells represent different types of information during the two halves of theta.
Rate changes due to rate remapping preferentially occur during the first half of the theta cycle. On the other hand, the negative correlation between position and theta phase of spiking characteristic of phase precession is stronger during the second half of the theta cycle.
It seems that, during the first half of the theta cycle, place cells exhibit rate remapping, and during the second half of theta, place cells exhibit phase precession.
The separation of the theta cycle into halves dedicated respectively to rate remapping and phase precession in CA1 parallels the distinct inputs that give rise to these phenomena. Anatomically, CA1 cells receive spatially segregated inputs onto their apical dendrites (reviewed in Witter, 2010). Proximally, in the stratum radiatum, CA1 cells receive input from the CA3 region of the hippocampus (Ramón y Cajal, 1911). Distally, in the stratum lacunosum-moleculare, CA1 cells receive direct input from layer III of the entorhinal cortex (EC3) as well as from the prefrontal cortex (PFC) via the nucleus reuniens of the thalamus (Herkenham, 1978;Steward & Scoville, 1976).
These two inputs as defined by anatomy seem to correspond respectively to phase precession (stratum radiatum or SR) and rate remapping (stratum lacunosum-moleculare or SL-M; see Table 1). Phase precession in CA1 can be inherited from the CA3 input arriving in stratum radiatum (Jaramillo et al., 2014). In contrast, rate remapping on this task has been shown to depend on the PFC input to CA1 via the nucleus reuniens of the thalamus and is not observed in CA3 (Ito et al., 2015); therefore, this information likely is received in stratum lacunosum-moleculare, where the thalamic synapses occur. It is important to note that not all rate remapping information in the hippocampus arrives through this pathway. For example, sensory-related rate remapping is observed in CA3 (Figure 1d), which does not receive this thalamic input.
Each layer is associated with a gamma rhythm of a particular frequency. CA1 spiking couples with gamma frequencies stereotypical of these inputs in a function-, place-, and theta phase-specific way (Bieri et al., 2014; Colgin et al., 2009; Zheng et al., 2016), so these gamma rhythms are thought to correspond with specific computations. The strength of the inputs into these two layers (as measured by theta/gamma cross-frequency coupling in each layer) has been shown to peak during opposite halves of the theta cycle (Schomburg et al., 2014). The SR-associated slow gamma is strongest near the trough of theta, and the SL-M-associated mid gamma seems to be strongest near the peak of theta (using the pyramidal cell layer theta as the phase reference) (Schomburg et al., 2014).
These phase estimates require further verification, as another paper using different recording and analysis methods found different preferred phases of these gamma bands (Colgin et al., 2009). The ~90° phase difference between the theta phase preferences reported by Schomburg et al. (2014) and our findings can potentially be explained in two ways.
One is that phase precession in CA3 itself occurs following the phase of maximal firing of CA3 place cells (Mizuseki et al., 2009;Mizuseki, Royer, Diba, & Buzsaki, 2012), so we would expect phase precession in CA1 inherited from CA3 to occur following the phase of maximal CA3/SR input. The other is that it takes time for inputs to travel down the dendrite to the point of affecting somatic firing (London & Häusser, 2005), so distal inputs may be temporally shifted relative to their effect on firing. It is also important to note that the work on gamma frequency and theta phase of SL-M fits with theta phase and gamma frequency of the EC3 input that arrives in that layer (Schomburg et al., 2014). However, little is known about the theta phase or gamma frequency associated with the thalamic input to SL-M carrying prefrontal information which gives rise to the rate remapping observed in our data.
In summary, CA1 inputs corresponding to rate remapping on the alternation task and phase precession are spatially segregated on the dendrites of CA1 pyramidal cells, and their stereotypical zones alternate maximum activation during the course of the theta cycle. As CA1 rate remapping on this task occurs preferentially during the first half of theta, it is likely that its essential input, the thalamic relay neurons of the nucleus reuniens, is theta modulated. The temporal and spatial segregation of inputs to CA1 is summarized in Table 1.
This separation of representations fits very nicely with a recent theoretical suggestion about a functional dichotomy between the halves of theta (Sanders et al., 2015). That proposal describes a two-step interaction between place cells and grid cells during active navigation: during the first half of theta, place cells merge diverse information streams to accurately estimate current position. During the second half of theta, grid cells use their knowledge of spatial structure to generate sequential predictions of upcoming locations ("mind-travel") and then pass this sequence on to place cells in order to recover associates of upcoming locations. This separation is potentially compatible with a separation of theta into "encoding" (current position) and "retrieval" (mind-travel) phases (Hasselmo et al., 2002), a separation that has recently been supported with theta phase-specific optogenetic stimulation (Siegle & Wilson, 2014).
How does rate remapping correspond with representation of current position/encoding of experience in these theories? One might think that CA1 is encoding the future decision of which way the animal will turn at the end of the central track. However, it is important to note that the identification of the correct choice on this task is not generated in the hippocampus as part of spatial processing (Ainge, van der Meer, Langston, & Wood, 2007). Rather, CA1 is representing the knowledge of current state provided by the prefrontal cortex via the thalamus (Ito et al., 2015). Corresponding to the understanding of this rate remapping as representation of current state as opposed to future choice, the rates of CA1 neurons on the alternation task correspond to the last trajectory, not to the future trajectory (Ji & Wilson, 2008), although see others including Ferbinteanu and Shapiro (2003) for similar tasks with coding of future trajectory.
In the Sensory/CA3 task, the phase dependence of rate remapping was not as clear. There was no significant phase preference of the firing rate difference across conditions (Figure 6d), nor was there a shift in preferred theta phase of spiking across conditions (Figure 6f). It is possible that we were unable to observe a difference in rate remapping in the CA3 data between the two halves of theta because of the relative weakness of rate remapping in the Sensory/CA3 task (Figure 1d). Another possibility is that rate remapping and phase precession are simply not expressed as robustly until CA1, in line with previous work showing weaker phase precession in CA3 than in CA1 (Mizuseki et al., 2012). Alternatively, it may be that rate remapping in the Sensory/CA3 task occurs uniformly at all theta phases, implying that the sensory information is not confined to either half of the theta cycle.
| Function of rate remapping?
One interesting point to be emphasized is the diversity of types of information represented in rate remapping. One might expect that trajectory might be represented in hippocampal place cells as a highly task-relevant factor. The sensory change of track color from black to white, which was totally irrelevant to the task being performed, might not seem worth representing, although the novelty of the white track is worth noting. Despite the irrelevance of track color to the task, 30% of CA3 cells had significantly different firing rates under the two conditions in the sensory task. What can be taken from this is that even task-irrelevant aspects of the environment are represented in the rates of place cells, emphasizing the nontrivial nature of hippocampal representation.
What is the importance of representation of current state in all of its nuance during rate remapping? We believe the reason for this representation is that the hippocampus must represent current context so that any occurrences experienced can be bound to the correct context in its entirety, containing both sensorily experienced and internally inferred information about the state of the world. Since the animal does not know beforehand which aspects of the experience will be relevant, many different aspects of the experience must be represented. Internally generated task information in the Internal/CA1 task should be thought of as representation of a "hidden state" of the world and not as representation of action plans: rate remapping represents what is known about the world, not what is predicted. Questions remain on how nonspatial information is encoded during the predictive firing during mind-travel (phase precession/theta sequences).
The alternation of rate remapping and phase precession, or alternatively encoding and recall, during the two halves of theta allows for the usage of past knowledge while nearly simultaneously doing what is necessary for learning to occur for future performance.
ACKNOWLEDGMENTS
Thank you to Simona Dalin and Magdalene Schlesiger for helpful comments on this manuscript and to Timothy O'Leary for analysis suggestions.
Recently, J. D. Lawson encouraged the domain theory community to consider the scientific program of developing domain theory in the wider context of $T_0$-spaces instead of restricting to posets. In this paper, we respond to this calling by proving a topological parallel of a 2005 result due to B. Zhao and D. Zhao, i.e., an order-theoretic characterisation of those posets for which the Scott-convergence is topological. We do this by adopting a recent approach due to D. Zhao and W. K. Ho by replacing directed subsets with irreducible sets. As a result, we formulate a new convergence class $\mathcal{I}$ in $T_0$-spaces called ${\operatorname{Irr}}$-convergence and establish that a sup-sober space $X$ is ${\operatorname{SI}}^{-}$-continuous if and only if it satisfies $*$-property and the convergence class $\mathcal{I}$ in it is topological.
Introduction
Domain theory can be said to be a theory of approximation on partially ordered sets. There are two sides of the same domain-theoretic coin: the order-theoretic one and the topological one. On the order-theoretic side, the facility to approximate is built into the ordered structures via approximation relations, and here domain is the generic term that includes all ordered structures that satisfy some approximation axioms. On the topological side, approximation can be handled by topology; more precisely, using net convergence. Two famous results of D. S. Scott [15] epitomise this deep connection between domains and topology: (1) A space is injective if and only if it is a continuous lattice with respect to its specialization order. (2) The Scott-convergence class in a directed complete partial order (dcpo, for short) P is topological if and only if P is continuous (furthermore, in a continuous dcpo, the Scott topology induces the Scott convergence). The second result was later generalised by B. Zhao and D. Zhao ([21]) to the setting of posets which are not necessarily dcpo's. We highlight to the reader that, in [21], the terminology "lim-inf convergence" is used instead of "Scott-convergence". The latter seems more suitable to use since the former is a bit misleading, because in [8], which is the modern dominant source for Domain Theory, lim-inf convergence is more related to the Lawson than the Scott topology (see [8]). The terminology "Scott-convergence" was used in [5], in which the fact that "a poset P is continuous if and only if all the sets և x are directed and the Scott-convergence class in P is topological" is proved by considering filter convergence rather than net convergence (see [5, Theorem 2.13]).
In an invited presentation at the 6th International Symposium in Domain Theory, J. D. Lawson gave further evidence from recent development in domain theory to illustrate this intimate relationship between domains and T 0 -spaces. In particular, it was pointed out that "several results in domain theory can be lifted from the context of posets to T 0 -spaces". For example, (1) the topological technique of dcpo-completion of posets [19] can be upgraded to yield the D-completion of T 0 -spaces (i.e., a certain completion of T 0 -spaces to yield d-spaces) [13], and (2) an important order-theoretic result known as Rudin's lemma [7], which is central to the theory of quasicontinuous domains, has a topological version [10].
In this paper, we respond (in a small way) to Lawson's call to develop the core of domain theory directly in topological spaces by establishing a topological parallel of the aforementioned result due to B. Zhao and D. Zhao ([21, Theorem 2.1]). To prove a parallel topological result of this, we adopt the recent approach in [20] by replacing directed subsets with irreducible subsets. The motivation for their approach is based on the observation that the directed subsets of a poset are precisely its Alexandroff irreducible subsets. Based on this replacement principle, we invent topological analogues of the usual domain-theoretic notions: (i) a new way-below relation ≪ Irr on a T 0 -space, (ii) some new notions of continuity of spaces, and (iii) a new net convergence class I on a given topological space X.
Working with the so-called irreducible-directed replacement principle has a connection with the concept of subset system Z introduced in [17]. The Z-theory of partially ordered sets has been studied extensively in the last few decades (see, e.g., [1], [2], [6], [16], [18]). In light of this theory, the replacement principle here may be seen as choosing a particular subcollection Z in the realm of T 0 -spaces. Moreover, this generalization would correlate the theory of Z-(quasi)continuous posets and their topological aspects.
In this paper, the notion of sup-sobriety is heavily involved. This notion, which was first introduced in [20] as a generalisation of bounded-sobriety ([14]), has close connections with the irreducibly-derived topology mentioned in [20]. Because little is known about this kind of sobriety, it is one of the purposes of this paper to give a slightly better understanding of it in relation to net convergence.
We organise this paper in the following way. In Section 2, we summarise some of the recent results reported in [20] that are essential in our ensuing development. These results concern the irreducibly-derived topology defined using irreducible sets of the underlying topology on X and sup-sober spaces. In Section 3, we focus on some results concerning several continuities of a space. In Section 4, we introduce the new convergence class I defined in any T 0 -space X and present some of its elementary properties. Finally, we focus our development of the convergence class I on sup-sober spaces and prove the main characterisation theorem which we advertised in the abstract.
Irreducibly derived topology
A nonempty subset E of a topological space (X, τ ) is irreducible if for any closed sets A 1 and A 2 , E ⊆ A 1 ∪ A 2 implies E ⊆ A 1 or E ⊆ A 2 . The family of all irreducible subsets of X is denoted by Irr τ (X) or Irr(X) whenever it is clear which topology one is referring to.
It is often useful to check the irreducibility of a set using open sets, i.e., a nonempty set E is irreducible if and only if for any open sets U 1 and U 2 , E ∩ U 1 ≠ ∅ and E ∩ U 2 ≠ ∅ imply E ∩ U 1 ∩ U 2 ≠ ∅. Regarding irreducible sets, here are some elementary properties: Proposition 2.1. For any given topological space (X, τ ), one has: (1) E ∈ Irr τ (X) if and only if cl(E) ∈ Irr τ (X).
(2) The continuous image of an irreducible set is again irreducible.
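The open-set criterion for irreducibility is directly checkable by brute force on finite spaces. A small Python sketch, with two illustrative two-point spaces (not taken from the paper):

```python
from itertools import combinations

def is_irreducible(E, opens):
    """Test irreducibility of a nonempty set E in a topological space
    given by its collection of open sets: every two open sets that
    each meet E must also meet E simultaneously."""
    E = set(E)
    if not E:
        return False
    meeting = [set(U) for U in opens if E & set(U)]
    return all(E & U & V for U, V in combinations(meeting, 2))

# Sierpinski space on {a, b}: the point b is open, a is not.
sierpinski = [set(), {"b"}, {"a", "b"}]
# Discrete topology on {a, b} (a T1 example).
discrete = [set(), {"a"}, {"b"}, {"a", "b"}]
```

In the Sierpinski space the whole set {a, b} is irreducible (it is the closure of the point b), while in the discrete space only singletons are irreducible.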
Every T 0 -space (X, τ ) can be viewed as a partially ordered set via its specialisation order, denoted by ≤ τ , where x ≤ τ y if x ∈ cl τ ({y}). Henceforth, all order-theoretical statements on a T 0 -space refer to the specialisation order on the space. For any subset A of a T 0 -space (X, τ ), the supremum of A, denoted by ⋁ τ A, is the least upper bound of A with respect to the specialisation order ≤ τ of X. We denote the set of all irreducible subsets of X whose supremum exists by Irr + τ (X). The subscript "τ " shall be removed from the notations whenever it is clear which topology one is referring to.
A topological space X is sober if every irreducible closed set is the closure of a unique singleton. All Hausdorff spaces are sober and all sober spaces are T 0 . The Scott space of any continuous domain is sober. A weaker form of sobriety is that of bounded-sobriety, which requires that every irreducible closed set which is bounded above with respect to the specialisation order is the closure of a unique singleton. Notice that, as a specialisation order is involved, a bounded-sober space needs to be T 0 in the first place. Bounded-sober spaces have been studied in [14] and [19]. A yet weaker form of sobriety is that of sup-sobriety. A T 0 -space is sup-sober if every closed set F ∈ Irr + (X) is the closure of a unique singleton; in this case F is exactly cl({⋁ F }). Every T 1 -space is sup-sober. Every poset P is sup-sober with respect to its upper topology, i.e., the coarsest one generated by sets of the form P \ ↓ x, x ∈ P . All continuous posets are sup-sober with respect to the Scott topology, yet a sup-sober space is not necessarily continuous, as witnessed by Johnstone's space ([11]).
Directed subsets play a central role in domain theory. Directed subsets of a poset can be characterised topologically. Recall that the Alexandroff topology on a poset P consists of all upper sets. The directed subsets of P are precisely the Alexandroff irreducible subsets. The Scott topology is a coarsening of the Alexandroff topology in that every Scott open set is required to be an upper set and in addition inaccessible by directed suprema. By replacing the directed sets by irreducible sets in the definition of a Scott open set, D. Zhao and W. K. Ho defined for any T 0 -space (not just poset) a coarser topology called the irreducibly-derived topology that mimics the Scott topology on a poset. More precisely, let (X, τ ) be a T 0 -space and U ⊆ X; define U ∈ τ SI if (1) U ∈ τ , and (2) for every E ∈ Irr + τ (X), ⋁ E ∈ U implies E ∩ U ≠ ∅. It can be easily verified that SI(X, τ ) := (X, τ SI ) is a topological space whose topology is coarser than (X, τ ). An open set in SI(X, τ ) is called SI-open and the interior of a subset A of X with respect to τ SI is denoted by int SI (A).
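For a tiny finite space, the derived topology τ SI can be computed by brute force straight from the definition. Note that every finite T 0 -space is sober (hence sup-sober), so the derivative leaves a finite topology unchanged; the sketch below, using an illustrative three-point chain with its Alexandroff topology, therefore only makes the definition concrete rather than exhibiting a proper coarsening:

```python
from itertools import combinations

def si_derivative(X, opens):
    """Compute tau_SI for a finite T0-space: keep exactly the open
    sets U such that sup(E) in U implies E meets U, for every
    irreducible E whose supremum exists.  Exponential brute force."""
    X = list(X)
    opens = [frozenset(U) for U in opens]

    def leq(x, y):  # specialisation: x <= y iff every open with x has y
        return all(y in U for U in opens if x in U)

    def sup(E):  # least upper bound, or None if it does not exist
        ub = [u for u in X if all(leq(e, u) for e in E)]
        least = [u for u in ub if all(leq(u, v) for v in ub)]
        return least[0] if least else None

    def irreducible(E):
        meeting = [U for U in opens if E & U]
        return all(E & U & V for U, V in combinations(meeting, 2))

    subsets = [frozenset(c) for r in range(1, len(X) + 1)
               for c in combinations(X, r)]
    irr_plus = [(E, sup(E)) for E in subsets
                if irreducible(E) and sup(E) is not None]
    return [U for U in opens
            if all(s not in U or E & U for E, s in irr_plus)]

# Three-point chain 0 < 1 < 2 with its Alexandroff (upper-set) topology.
chain_opens = [frozenset(), frozenset({2}), frozenset({1, 2}),
               frozenset({0, 1, 2})]
```

On this chain the Alexandroff and Scott topologies coincide, so `si_derivative` returns the input topology unchanged, consistent with Example 2.3.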
Because the Scott-like topology τ SI is derived from a topology τ on the same set X, we sometimes refer to τ SI as the Scott derivative of τ .
(2) A closed subset C of (X, τ ) is closed in SI(X, τ ) if and only if for every E ∈ Irr + τ (X), E ⊆ C implies ⋁ E ∈ C. Example 2.3. Let P be a poset endowed with the Alexandroff topology α(P ). Since the irreducible sets in (P, α(P )) are precisely the directed ones, it is clear that SI(P, α(P )) = Σ(P ), where Σ(P ) is the set P endowed with the Scott topology on P .
In general, the Scott topology of a given poset does not coincide with its Alexandroff topology. For example, in the set R of all real numbers equipped with the usual order, sets of the form [x, ∞) are Alexandroff open but not Scott open. We shall now look at those spaces which are equal to their Scott derivatives.
Given a T 0 -space (X, τ ), one can derive from it a space satisfying the SI ∞ -property, i.e., a space equal to its Scott derivative. Let (X, τ ) be a T 0 -space and α an ordinal. We define by transfinite induction a topological space X α on X as follows: (1) X 0 := (X, τ ); (2) X α+1 := SI(X α ); (3) if α is a limit ordinal, then X α is the space on X whose topology is the intersection of all topologies X β , where β < α. Since (X α ) α is a sequence of increasingly coarser topologies on X, there is a smallest ordinal γ(X) such that the topology on X α coincides with that on X γ(X) for all α ≥ γ(X). We denote this X γ(X) by X ∞ .
It is immediately clear by the definition that the following theorem holds.
Theorem. For any T 0 -space X, the space X ∞ satisfies the SI ∞ -property, i.e., SI(X ∞ ) = X ∞ .
Some continuities of spaces
Starting from this section, a topological space or a space refers to a T 0 -space, unless otherwise mentioned. In a space X, one defines a "new" way-below relation ≪ Irr (called the Irr-way-below relation) using irreducible subsets instead of directed subsets. Given x, y ∈ X, the Irr-way-below relation is defined as follows: y ≪ Irr x if for every E ∈ Irr + (X), x ≤ ⋁ E implies y ≤ e for some e ∈ E. For a given x ∈ X, և Irr x denotes the set {y ∈ X | y ≪ Irr x}. The following properties of the Irr-way-below relation are as expected.
Proposition 3.1. In a space X the following hold for all u, x, y and z ∈ X: (1) y ≪ Irr x implies y ≤ x; (2) u ≤ y ≪ Irr x ≤ z implies u ≪ Irr z; (3) if x ∈ int SI (↑ y), then y ≪ Irr x.
Using ≪ Irr , we can now introduce the notion of Irr-continuous space, a topological analogue of continuous posets.
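On a finite poset with the Alexandroff topology the irreducible sets are exactly the directed ones, each of which has a maximum, so ≪ Irr should collapse to the order itself. A brute-force Python sketch of the relation (assuming the relational definition of y ≪ Irr x via irreducible sets with suprema; the three-point chain is illustrative):

```python
from itertools import combinations

def irr_way_below(y, x, X, opens):
    """y is Irr-way-below x: for every irreducible E whose supremum
    exists and dominates x, some element of E already dominates y."""
    opens = [frozenset(U) for U in opens]

    def leq(a, b):  # specialisation order of the finite space
        return all(b in U for U in opens if a in U)

    def sup(E):
        ub = [u for u in X if all(leq(e, u) for e in E)]
        least = [u for u in ub if all(leq(u, v) for v in ub)]
        return least[0] if least else None

    def irreducible(E):
        meeting = [U for U in opens if E & U]
        return all(E & U & V for U, V in combinations(meeting, 2))

    for r in range(1, len(X) + 1):
        for c in combinations(X, r):
            E = frozenset(c)
            s = sup(E)
            if irreducible(E) and s is not None and leq(x, s):
                if not any(leq(y, e) for e in E):
                    return False
    return True

chain = [0, 1, 2]
chain_opens = [frozenset(), frozenset({2}), frozenset({1, 2}),
               frozenset({0, 1, 2})]
```

On the chain, `irr_way_below(y, x, ...)` agrees with y ≤ x for every pair, matching the expectation that the new relation specialises to the familiar one on finite posets.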
Definition 3.2.
A space X is said to be Irr-continuous if for every x ∈ X the following hold: (1) և Irr x is irreducible, and (2) ⋁ և Irr x = x. Our definition of Irr-continuous space differs from that of SI-continuous spaces defined in [20, p.192] in that we choose to drop their first condition, i.e., that for any x ∈ X, the set ։ Irr x := {y ∈ X | x ≪ Irr y} is open in X, and weaken the requirement in their second condition: from և Irr x being directed to և Irr x being irreducible. One also needs to notice that sticking to the definition of SI-continuity from [20, p.192] would go contrary to our original intention of developing domain theory in the wider contexts of topological spaces and not restricted just to (continuous) posets. This is because of a result by M. Erné ([4, Theorem 4, p.462]), which asserts that a topological space is a weak C-space (i.e., it is both a C-space and a weak monotone convergence space) if and only if it is homeomorphic to the Scott space of some continuous poset. It was shown in [20, Theorem 6.4] that X is SI-continuous if and only if the derived topology SI(X) is a C-space. Because SI(X) is always a weak monotone convergence space, it follows that the derived topology on an SI-continuous space is homeomorphic to the Scott topology on some continuous poset.
In the absence of the first condition and with a weakened version of the second, we can still say a few things about Irr-continuous spaces in general.
Lemma 3.4. Let X be an Irr-continuous space. Then, for every x ∈ X it holds that x = ⋁ M x , where M x := ∪{ և Irr y | y ≪ Irr x}.
Proof. Every element of M x lies below x, so it suffices to show that u ≥ x for any upper bound u of M x . Let u be an upper bound of M x . Suppose for the sake of contradiction that x ≰ u. Then, by the Irr-continuity of X, x = ⋁ և Irr x, so that there exists y ∈ և Irr x with y ≰ u. Repeating the same argument, we can find a z ∈ և Irr y such that z ≰ u. But this is a contradiction to the fact that z ∈ M x and u is an upper bound of M x . Therefore, u ≥ x and this completes the proof.
Any domain theorist would know the price for weakening the second condition, i.e., one loses the interpolating property of the Irr-way-below relation. Fortunately, within the scope of our present study concerning sup-sober spaces, we can recover this loss.
Theorem 3.5. Let X be an Irr-continuous and sup-sober space. Then, ≪ Irr enjoys the interpolating property in that whenever z ≪ Irr x, there exists y ∈ X such that z ≪ Irr y ≪ Irr x.
Proof. We first show that M x := ∪{ և Irr y | y ≪ Irr x} is an irreducible subset of X. Let U 1 and U 2 be open in X such that M x ∩ U 1 ≠ ∅ and M x ∩ U 2 ≠ ∅. Then there exist y 1 , y 2 ∈ և Irr x such that y 1 ∈ U 1 and y 2 ∈ U 2 . Since x is an upper bound of {y 1 , y 2 } and both U 1 and U 2 are upper sets, x ∈ U 1 ∩ U 2 . By Irr-continuity of X, x is the supremum of և Irr x. Since X is sup-sober, it enjoys the SI ∞ -property and so U 1 , U 2 ∈ SI(X). Hence there exists y ∈ և Irr x such that y ∈ U 1 ∩ U 2 . Using a similar argument, there exists z ∈ և Irr y such that z ∈ U 1 ∩ U 2 . Since z ∈ M x , this shows that M x ∩ U 1 ∩ U 2 ≠ ∅, and so M x is irreducible. Now let z ≪ Irr x. By Lemma 3.4, M x ∈ Irr + (X) with ⋁ M x = x, so z ≤ w for some w ∈ և Irr y with y ≪ Irr x. Hence there exists y ∈ X such that, by virtue of Proposition 3.1, z ≪ Irr y ≪ Irr x holds as desired.
Example 3.6. The rational line Q := (Q, ≤) with the Scott topology ΣQ is an Irr-continuous sup-sober space which is not sober.
Another way to recover the interpolating property of the Irr-way-below relation is by considering the SI-continuity introduced in [20] but omitting the first condition. We define SI − -continuity of spaces as follows: a space X is SI − -continuous if for every x ∈ X, և Irr x contains a directed set whose supremum is x.
From the definition, one can see that every SI − -continuous space is Irr-continuous. This is because in any space, directed sets are irreducible. In particular, in any poset endowed with the Alexandroff topology on it, both notions of continuity are exactly the same.
An SI − -continuous space and an Irr-continuous space share a common property: for every point x, և Irr x contains an irreducible subset whose supremum is x. Unlike on an Irr-continuous space, the Irr-way-below relation on an SI − -continuous space is interpolating, regardless of whether it is sup-sober. Proof. Let X be an SI − -continuous space and z, x ∈ X such that z ≪ Irr x. By definition, there exists D ⊆ և Irr x such that D is directed and ⋁ D = x. For each y ∈ D, we fix a directed subset B y of և Irr y whose supremum is y. Now consider the set A = ∪{B y | y ∈ D}. The set A is directed: given a ∈ B y 1 and b ∈ B y 2 , pick y 3 ∈ D above y 1 and y 2 ; since a ≪ Irr y 1 ≤ y 3 = ⋁ B y 3 , the definition of ≪ Irr yields an element of B y 3 above a, and similarly for b, whence a common upper bound of a and b exists in the directed set B y 3 . Being directed, A is irreducible, and clearly ⋁ A = ⋁ D = x. Since z ≪ Irr x and A ∈ Irr + (X), we then have an element y ′ ∈ A such that z ≤ y ′ . This gives the existence of an element y ∈ X such that z ≪ Irr y ≪ Irr x.
An SI−-continuous sup-sober space satisfies a special property: for every F ∈ Irr + (X) there exists a directed subset D of ↓ F such that sup F = sup D. We call this the *-property. It is given in [20, Lemma 7.4] that every C-space satisfies the *-property. Recall that a T 0 -space is a C-space if for every open set U and x ∈ U there exists y ∈ U satisfying x ∈ int(↑ y). Proposition 3.9. (1) Every Irr-continuous space satisfying the *-property is SI−-continuous. (2) Every SI−-continuous sup-sober space satisfies the *-property.
Proof. (1) The proof is immediate from the definitions.
(2) Let X be an SI−-continuous sup-sober space and F ∈ Irr + (X). We then, by sup-sobriety of X, have cl(F ) = cl({x}) where x = sup F . This implies և Irr x ⊆ ↓ F . The fact that X is SI−-continuous implies that there exists a directed subset D of ↓ F whose supremum is x, as desired.
At the end of this section, we shall present some results concerning continuities of spaces. We first recall the definition of SI-continuity.
In the definition of Irr-continuity, one can see that there is not much information about the underlying topology. Imposing the ⊕-property on a space may give us more information regarding the topology on it. We shall call an Irr-continuous space satisfying the ⊕-property an Irr + -continuous space.
Example 3.11. The space N endowed with the cofinite topology is a T 1 -space. Hence it is Irr-continuous and sup-sober, yet it does not satisfy the ⊕-property. This space is also obviously not a C-space.
It is mentioned in [20, Theorem 6.4] that a space X is SI-continuous if and only if the space SI(X) is a C-space. We shall show that, in the presence of sup-sobriety, the notions of SI-continuity and Irr + -continuity are the same. In fact, for a sup-sober space, satisfying either of the two continuities is equivalent to being a C-space.
Lemma 3.12. Let X be a space. If SI(X) is a C-space, then X is Irr + -continuous. Proof. Let x ∈ X and set S x := {y ∈ X | x ∈ int SI (↑ y)}. For each y ∈ S x we have x ∈ ↑ y, hence y ≤ x. Now suppose x ≰ z. Since SI(X) is a C-space and X − ↓ z is SI-open, there exists y 0 ∈ X − ↓ z such that x ∈ int SI (↑ y 0 ). We have that y 0 ∈ S x and y 0 ≰ z. Therefore x is the supremum of S x .
We next show that S x is irreducible. Let U 1 and U 2 be open in X such that S x ∩ U 1 ≠ ∅ and S x ∩ U 2 ≠ ∅. There exist y 1 ∈ U 1 and y 2 ∈ U 2 such that x ∈ int SI (↑ y 1 ) and x ∈ int SI (↑ y 2 ), hence x ∈ int SI (↑ y 1 ) ∩ int SI (↑ y 2 ). Since SI(X) is a C-space, there exists y 3 ∈ int SI (↑ y 1 ) ∩ int SI (↑ y 2 ) ⊆ ↑ y 1 ∩ ↑ y 2 ⊆ U 1 ∩ U 2 such that x ∈ int SI (↑ y 3 ). We then have y 3 ∈ S x ∩ U 1 ∩ U 2 , so S x is irreducible. If y ∈ S x , then x ∈ int SI (↑ y). By Proposition 3.1 we have that y ≪ Irr x. Thus S x ⊆ և Irr x, implying that ↓ S x ⊆ և Irr x. Now let y ′ ≪ Irr x. Since S x is irreducible and x is the supremum of S x , there exists y ∈ S x such that y ′ ≤ y. Hence y ′ ∈ ↓ S x . Therefore ↓ S x = և Irr x. At this point, we have that X is Irr-continuous. Now if z ∈ ։ Irr x, then x ∈ ↓ S z , so there exists an element y ∈ X such that x ≤ y and z ∈ int SI (↑ y). Hence z ∈ ⋃ x≤y int SI (↑ y). Conversely, if z ∈ int SI (↑ y) for some y ∈ ↑ x, then y ∈ S z and x ∈ ↓ S z = և Irr z, that is, z ∈ ։ Irr x. Therefore ։ Irr x = ⋃ x≤y int SI (↑ y), which gives that ։ Irr x is open, in particular irreducibly open. One needs to notice that the condition that SI(X) is a C-space in Lemma 3.12 cannot be replaced by X being a C-space. Indeed, there is a C-space which is not Irr-continuous, let alone Irr + -continuous.
Lemma 3.14. If X is an Irr + -continuous sup-sober space, then X is a C-space.
Proof. We first show that given x ≪ Irr y, we have y ∈ int SI (↑ x). Let x ≪ Irr y. Since the relation ≪ Irr is interpolating (in light of Theorem 3.5), there exist z 1 , z 2 , . . . , z n , . . . ∈ X such that x ≪ Irr . . . ≪ Irr z n ≪ Irr . . . ≪ Irr z 2 ≪ Irr z 1 ≪ Irr y. By assumption, the set V := ⋃ i∈N ։ Irr z i is an open set containing y. Moreover, by its construction, V is also SI-open. We also have that ։ Irr z i ⊆ ↑ z i ⊆ ↑ x for each i ∈ N, so that V ⊆ ↑ x and hence y ∈ int SI (↑ x). Now let U be open in X and y ∈ U . By assumption, the supremum of և Irr y is y ∈ U . Since U is inaccessible by suprema of irreducible sets, there exists x ∈ X such that x ≪ Irr y and x ∈ U . By the above result, we have that y ∈ int SI (↑ x). Therefore X is a C-space.
The condition that X satisfies ⊕-property in Lemma 3.14 is essential as witnessed by the space given in Example 3.11. The following theorem is an immediate consequence of Lemma 3.12, Lemma 3.14, and [20, Theorem 6.4].
Theorem 3.15. Let X be a sup-sober space. Then the following conditions are equivalent: (1) X is Irr + -continuous.
(2) X is a C-space.
Corollary 3.16. In the presence of sup-sobriety, SI-continuity and Irr + -continuity are the same notion.
Convergence class defined by irreducible sets
In a topological space, approximation can be described by means of net convergence. Let X be a set. A net (x i ) i∈I in X is a mapping from a directed set (I, ≤) to X, where ≤ is a pre-order on I. Real number sequences, for instance, are nets in the Euclidean space R. Thus, nets can be viewed as generalised sequences. We denote the class of all nets in X by ΨX.
For each x ∈ X, one can define a constant net by x i = x for all i ∈ I. Parallel to the notion of subsequence, we have the notion of a subnet. A net (y j ) j∈J is a subnet of (x i ) i∈I if (i) there exists a function g : J → I such that y j = x g(j) for all j ∈ J and (ii) for each i ∈ I there exists j ′ ∈ J such that g(j) ≥ i whenever j ≥ j ′ .
A convergence class S in a set X is a relation between ΨX and X. An element of S is denoted by ((x i ) i∈I , x) or sometimes (x i ) i∈I S − → x, in which case we say that the net (x i ) i∈I S-converges to x.
Every space (X, τ ) induces a convergence class S τ defined by S τ := {((x i ) i∈I , x) | for every U ∈ τ with x ∈ U , x i ∈ U holds eventually}. Here, a property of a net (x i ) i∈I holds eventually if there exists i 0 ∈ I such that for all i ≥ i 0 , the property holds for x i . Given a set X and a topology τ on X, when ((x i ) i∈I , x) ∈ S τ , we say that (x i ) i∈I converges to x with respect to the topology τ . A convergence class S in a set X is said to be topological if there is a topology τ on X that induces it, i.e., S = S τ . In fact, for a topological convergence class, the topology inducing it is unique, which is an immediate consequence of the following proposition. Proposition 4.1. Let X be a set and τ and σ be topologies on X. Then τ ⊆ σ if and only if S σ ⊆ S τ .
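Topological net convergence can be made concrete on a tiny finite space. The following sketch (a hypothetical illustration of ours; the space, net and names are not from the paper) checks the "eventually in every open neighbourhood" condition on the two-point Sierpinski space, using a sequence as a simple net:

```python
# Sierpinski space: {b} is open, the only open set containing a is the whole space.
X = {"a", "b"}
opens = [set(), {"b"}, {"a", "b"}]

index = [0, 1, 2, 3]                  # directed by the usual order on integers
net = {0: "a", 1: "a", 2: "b", 3: "b"}

def eventually(pred):
    # True if there is i0 such that pred holds for all i >= i0.
    return any(all(pred(net[i]) for i in index if i >= i0) for i0 in index)

def converges_to(x):
    # The net converges to x iff it is eventually inside every open set containing x.
    return all(eventually(lambda p, U=U: p in U) for U in opens if x in U)

print(converges_to("b"), converges_to("a"))  # → True True
```

Note that the net converges to both points: uniqueness of limits fails outside Hausdorff spaces, which is exactly the non-Hausdorff T0 setting this paper works in.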
A special convergence class in a dcpo, called the lim-inf convergence, was first introduced in [15]. Crucially, this convergence makes use of directed sets. It was shown that the lim-inf convergence class in a dcpo is topological if and only if the dcpo is a domain. Later, in [21], this lim-inf convergence was modified to create a new convergence class for a general poset. Recall that in a poset P , a net (x i ) i∈I lim-inf converges to y provided that there exists a directed subset D of eventual lower bounds of (x i ) i∈I whose supremum belongs to ↑ y. In that later paper, it was established that the new lim-inf convergence class in a poset is topological if and only if the poset is continuous.
In this paper, we modify the preceding definition of convergence to suit the context of a topological space by replacing the directed subsets with irreducible subsets.
Definition 4.2. Let X be a space. A net (x i ) i∈I in X is said to Irr-converge to y ∈ X if there exists E ∈ Irr + (X) such that sup E ≥ y and for each e ∈ E there exists k(e) ∈ I such that for all i ≥ k(e) it holds that x i ≥ e. An instance of (x i ) i∈I Irr-converging to y is denoted (x i ) i∈I Irr −→ y. Equivalently, (x i ) i∈I Irr −→ y if and only if there exists an irreducible subset E of eventual lower bounds of (x i ) i∈I whose supremum exists and belongs to ↑ y. Remark 4.3. In any set, the notions of net convergence and filter convergence are equivalent [3]. In this paper, we prefer to work with the former, different from that in [5]. Because of this preference, our ensuing development will depend heavily on Kelley's characterisation of topological convergence classes [12].
For a space X, the convergence class in X defined by Irr −→ is denoted by I. The rest of this section is completely devoted to studying I and its relation with sup-sobriety and continuities of spaces.
The following result characterises ≪ Irr in terms of the convergence Irr −→.
Lemma 4.4. Let X be a space, (x i ) i∈I be a net in X, and y ∈ X. Then x i Irr −→ y implies for each x ∈ և Irr y, there is k(x) ∈ I such that for each i ≥ k(x) it holds that x i ≥ x. Furthermore, if X is either an Irr-continuous or an SI − -continuous space, then the converse is true.
Proof. Let (x i ) i∈I Irr −→ y and x ≪ Irr y. Then one can find an irreducible set E such that y ≤ sup E and for each e ∈ E there exists k(e) ∈ I such that x i ≥ e for all i ≥ k(e). Using the fact that x ≪ Irr y, we can find e x ∈ E such that x ≤ e x . Hence for all i ≥ k(e x ) =: k(x) it holds that x i ≥ x.
Conversely, if X is Irr-continuous or SI−-continuous, there exists an irreducible subset E of և Irr y such that sup E = y. The assumption asserts that each e ∈ E, being a member of և Irr y, is an eventual lower bound of (x i ) i∈I ; hence (x i ) i∈I Irr −→ y. From [12], we know that a convergence class S in a set X is topological if and only if it satisfies the following conditions: (1) (Constants). If (x i ) i∈I is a constant net with x i = x for all i, then ((x i ) i∈I , x) ∈ S.
(2) (Subnets). If (x i ) i∈I , x ∈ S and (y j ) j∈J is a subnet of (x i ) i∈I , then (y j ) j∈J , x ∈ S.
(3) (Divergence). If (x i ) i∈I , x / ∈ S, then there exists a subnet (y j ) j∈J of (x i ) i∈I such that for any subnet (z k ) k∈K of (y j ) j∈J , (z k ) k∈K , x / ∈ S.
(4) (Iterated limits). If ((x i ) i∈I , x) ∈ S and ((x i,j ) j∈J(i) , x i ) ∈ S for all i ∈ I, then ((x i,f (i) ) (i,f )∈I×M , x) ∈ S, where M denotes the set of all choice functions f with f (i) ∈ J(i) for each i ∈ I, ordered pointwise, and I × M is directed by the coordinatewise order. We shall rely on this result in proving our main result of this paper.
Lemma 4.5. Let X be a space.
(1) The convergence class I in X satisfies the axioms (Constants) and (Subnets).
(2) If X is Irr-continuous or SI − -continuous, then I satisfies the (Divergence) axiom.
(3) If X is Irr-continuous and sup-sober, then I satisfies the (Iterated limits) axiom.
(4) If X is SI−-continuous, then I satisfies the (Iterated limits) axiom.
Proof. (1) That I satisfies the (Constants) axiom is immediate. We now show that I satisfies the (Subnets) axiom. Let ((x i ) i∈I , x) ∈ I. Then there exists an irreducible subset E of X such that x ≤ sup E and for each e ∈ E there exists k(e) ∈ I satisfying x i ≥ e for all i ≥ k(e). Let (y j ) j∈J be a subnet of (x i ) i∈I , with y j = x g(j) for each j ∈ J. Then there exists j ′ (e) ∈ J such that g(j) ≥ k(e) whenever j ≥ j ′ (e). Hence for every j ≥ j ′ (e) we have y j = x g(j) ≥ e. Therefore, ((y j ) j∈J , x) ∈ I.
(2) Suppose ((x i ) i∈I , x) ∉ I. By virtue of X being Irr-continuous or SI−-continuous, there exists an irreducible subset E of և Irr x such that sup E = x. Hence we can find y ∈ E ⊆ և Irr x such that for each i ∈ I one can find j(i) ∈ I satisfying j(i) ≥ i and x j(i) ≱ y. Define J := {j ∈ I | x j ≱ y}. Then (x j ) j∈J is a subnet of (x i ) i∈I . For every subnet (z k ) k∈K of (x j ) j∈J we have that z k ≱ y. By Lemma 4.4, ((z k ) k∈K , x) cannot belong to I. Thus, I satisfies the (Divergence) axiom.
(3) We now prove that I satisfies the (Iterated limits) axiom. Let ((x i ) i∈I , x) ∈ I and ((x i,j ) j∈J(i) , x i ) ∈ I for all i ∈ I. Let y ≪ Irr x. Since X is Irr-continuous and sup-sober, by Theorem 3.5, the relation ≪ Irr is interpolating. Then there exists z ∈ X such that y ≪ Irr z ≪ Irr x. Applying Lemma 4.4 to the situation where x i Irr −→ x and z ≪ Irr x, there exists k(z) ∈ I such that x i ≥ z for all i ≥ k(z). We then have y ≪ Irr x i for all such i. Similarly, applying Lemma 4.4 to the situation where x i,j Irr −→ x i and y ≪ Irr x i , one obtains that y is an eventual lower bound of the iterated net (x i,f (i) ) (i,f )∈I×M . As և Irr x is irreducible with supremum x, it follows that the iterated net Irr-converges to x. Therefore, I satisfies the (Iterated limits) axiom. (4) The proof is similar to that of (3), using Proposition 3.8 instead of Theorem 3.5 for the claim that the relation ≪ Irr is interpolating.
Lemma 4.5 above provides sufficient conditions for the Irr-convergence class I in a space X to be topological. Lemma 4.6. Let X be a sup-sober space and x ∈ X. If E is an irreducible subset of X such that sup E ≥ x and E ⊆ և Irr x, then և Irr x itself is irreducible in X and has x as its supremum.
Proof. Let U 1 and U 2 be open in X such that և Irr x ∩ U 1 ≠ ∅ and և Irr x ∩ U 2 ≠ ∅. Then there exist w k ∈ X (k = 1, 2) such that w k ≪ Irr x and w k ∈ U k . Since U 1 and U 2 are upper sets, x ∈ U 1 ∩ U 2 and hence sup E ∈ U 1 ∩ U 2 . Since X is sup-sober, U 1 ∩ U 2 is inaccessible by suprema of irreducible sets, which yields that there exists e ∈ E such that e ∈ U 1 ∩ U 2 . By assumption, e ≪ Irr x. Hence և Irr x ∩ U 1 ∩ U 2 is nonempty, and so և Irr x is irreducible in X. Now let y be an upper bound of և Irr x. Then y is also an upper bound of E. We have that y ≥ sup E ≥ x. Therefore, x is the supremum of և Irr x. Lemma 4.6 provides a tool to prove irreducibility of և Irr x in a sup-sober space, which was one of our initial intentions. Bearing in mind that irreducible sets are not necessarily directed (but indices of nets are required by definition to be directed sets with respect to some pre-ordering), we are unable to directly deduce Irr-continuity or SI−-continuity of a sup-sober space from the assumption that the Irr-convergence class in it is topological. However, if we assume further that the space satisfies the *-property, the Irr-convergence class being topological will indeed imply that the space is both Irr-continuous and SI−-continuous. Lemma 4.7. Let X be a sup-sober space which satisfies the *-property. If I satisfies the (Iterated limits) axiom, then X is both Irr-continuous and SI−-continuous.
Proof. Let x ∈ X and F x = {{x i,j } j∈J(i) | i ∈ I} be the family of all directed subsets of X whose supremum exists and is greater than or equal to x. The family F x is nonempty since {x} is in it.
For each i ∈ I, let x i := sup{x i,j | j ∈ J(i)}. Then x i ≥ x for all i ∈ I. Since the set {x} ∈ F x , we have inf{x i | i ∈ I} = x. We define a pre-order ≤ on I as follows: i 1 ≤ i 2 for any i 1 , i 2 ∈ I. We have that I is directed and the net (x i ) i∈I Irr-converges to x; just take {x} as the irreducible set satisfying the definition.
For all i ∈ I, define a pre-order ≤ on J(i) as follows: j 1 ≤ j 2 if and only if x i,j 1 ≤ x i,j 2 . We then have J(i) is a directed set and the net (x i,j ) j∈J(i) Irr-converges to x i ; just take {x i,j | j ∈ J(i)} as the required irreducible set.
Let M be the set of all choice functions f with f (i) ∈ J(i) for each i ∈ I, ordered pointwise. By assumption, we have that the net (x i,f (i) ) (i,f )∈I×M Irr −→ x. Thus, we can find an irreducible set E such that (1) sup E ≥ x and (2) for each e ∈ E, x i,f (i) ≥ e eventually.
We now show that E ⊆ և Irr x. Let e ∈ E and K be an irreducible set with sup K ≥ x. Since X satisfies the *-property, there exists a directed set D such that D ⊆ ↓ K and sup D = sup K. Then D ∈ F x ; write D = {x i e ,j | j ∈ J(i e )} for some i e ∈ I, and pick (i 0 , f e ) ∈ I × M such that x i,f (i) ≥ e whenever (i, f ) ≥ (i 0 , f e ). By the definition of the pre-order defined on I, i 0 ≥ i e holds. Hence x i 0 ,f e (i 0 ) ≥ e. Since x i 0 ,f e (i 0 ) ∈ ↓ K, there exists k ∈ K such that x i 0 ,f e (i 0 ) ≤ k. It follows that e ≪ Irr x. Thus, E is an irreducible subset such that E ⊆ և Irr x and sup E ≥ x, and so by Lemma 4.6, և Irr x is irreducible and has x as its supremum. Therefore X is Irr-continuous. By Proposition 3.9, X is also SI−-continuous.
Given a convergence class S in a set X, one defines a topology τ S on X induced by the convergence class, i.e., U ⊆ X is in τ S if and only if for every ((x i ) i∈I , x) ∈ S, x ∈ U implies x i ∈ U eventually. From the definition, one can easily see that S ⊆ S τ S . The reverse containment S τ S ⊆S is not necessarily true, unless S is topological. Indeed, if S is a topological convergence class in a set X, then the topology on X that induces it is τ S [12].
The following lemma provides the location of the topology τ I on X with respect to the underlying topology and irreducibly-derived topology, assuming that the Irr-convergence class I in X is topological.
Lemma 4.8. Let X be a space in which the Irr-convergence class I is topological. Then the topology τ I is finer than the irreducibly-derived topology on X. If X is Irr + -continuous or SI-continuous, then the topology τ I is coarser than the underlying topology.
Proof. Let x i Irr −→ x and U be open in SI(X) such that x ∈ U . Then there exists an irreducible set E such that sup E ≥ x and, for every e ∈ E, x i ≥ e eventually. Upperness of U gives sup E ∈ U . Since U is inaccessible by suprema of irreducible sets, U contains an element of E, which is an eventual lower bound of the net (x i ) i∈I . Hence (x i ) i∈I converges to x with respect to the topology on SI(X). Therefore, by Proposition 4.1, U is in τ I . Now let (x i ) i∈I be a net converging to x with respect to the underlying topology on X. If X is Irr + -continuous or SI-continuous, we are guaranteed an irreducible subset E of և Irr x whose supremum is x. For every g ∈ E, x is in the open set ։ Irr g, hence g ≪ Irr x i eventually. This gives that E is a set of eventual lower bounds of (x i ) i∈I . We have that (x i ) i∈I Irr-converges to x. Thus τ I is contained in the underlying topology on X, which completes the proof.
Finally, our main result, i.e., Theorem 4.9 below, is an immediate consequence of Lemma 4.5, Lemma 4.7, Lemma 4.8, and Theorem 2.4. Theorem 4.9. (i) If X is an Irr-continuous sup-sober space, then the net convergence class I in X is topological. In addition, if X also satisfies the ⊕-property, then the topology that induces I is exactly the underlying topology on X. (ii) If X is a sup-sober space satisfying the *-property in which the net convergence class I is topological, then X is both Irr-continuous and SI−-continuous. (iii) A sup-sober space X is SI−-continuous if and only if it satisfies the *-property and the net convergence class I in it is topological. In addition, if X also satisfies the ⊕-property, then the topology that induces I is exactly the underlying topology on X.
Corollary 4.10. In a sup-sober C-space, the topological convergence and Irr-convergence coincide.
Conclusion
In this paper, we take a small step towards taking up the programme of exporting domain theory to the more general context of a T 0 -space. The key strategy involved in our approach is to simply replace directed subsets by irreducible sets, a methodology first introduced by Zhao and Ho [20]. Recently, the importance of the role of irreducible (closed) sets in domain theory has also been underscored in the solution of the Ho-Zhao problem in [9]. All these indicate a need to carry out an in-depth and systematic enactment of the scientific program proposed by Jimmie Lawson (as described in the introduction) via our present replacement strategy. A significant part of our research objective is to see how much of domain theory can be developed in the more general setting of topological spaces. The main result we report herein characterises those sup-sober spaces satisfying the Irr-continuity (or SI−-continuity) condition. The fundamental property that sup-sober spaces X are invariant under the Scott derivative operator SI plays a key role in the many major arguments employed herein. The requirement of sup-sobriety seems indispensable in view of the fact that sets of the form ։ Irr x need not be τ-open in an Irr-continuous or SI−-continuous space (X, τ ). The present work can be seen as a preliminary investigation of sup-sober spaces, which were first introduced in [20]. We believe that sup-sobriety of spaces is an interesting topic which deserves a more thorough study in its own right.
"year": 2017,
"sha1": "7bdfe12437a17355c3ec93163902b871322cec46",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7bdfe12437a17355c3ec93163902b871322cec46",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Is Choice of Agricultural Technologies a Risk Management Strategy among Smallholder Farmers? Insights from Kenya
This study addresses a gap in the literature on the adoption of improved agricultural technologies as a risk management strategy using data from 599 households in Kenya who were exposed to fortified beans (Phaseolus vulgaris) and an improved indigenous chicken (Gallus gallus domesticus). This is because, despite the rich literature on agricultural technology adoption, literature on technology adoption as a risk mitigation strategy is limited. Seventy-three per cent of farmers were non-adopters, 18% adopted the fortified beans, 3% adopted the improved indigenous chicken and 6% adopted both technologies. Econometric results show that limited access to markets reduced adoption as marketing risks increased. Older farmers were more likely to adopt the fortified beans as they may be wealthier and generally knowledgeable about bean technology, reducing their absolute risk averseness. Male-headed households were more likely to adopt the improved chicken. Farm diversity, access to extension and being a group official increased adoption to spread risk. We concluded that farmers' choice of agricultural technologies is indeed a risk management strategy and therefore policies and technology promotion interventions should be risk-responsive.
INTRODUCTION
Global food systems are highly vulnerable to risks such as unreliable rainfall and fluctuating market conditions (Ullah and Shivakoti, 2014). This vulnerability is further exacerbated by the fact that food systems depend on a small number of domesticated plant and animal species (Tung, 2017). Kahan (2013) categorizes agricultural risks into:
• Production risks due to erratic weather, pests and diseases
• Marketing risks due to uncertainties in market prices and cost of production
• Financial risks due to uncertainties about future interest rates
• Institutional risks due to unpredictable changes in the provision of services by markets and extension providers
• Human risks that are associated with poor health or even death
According to Bramoullé and Kranton (2007), many developing countries lack formal insurance mechanisms to manage risk. As a result, farmers make farming decisions to mitigate the adverse effects of risks. Farm diversification has been reported as one such risk management strategy (Ullah and Shivakoti, 2014; Rehima et al., 2013). Farm diversification is the cultivation and keeping of more than one crop and livestock enterprise, respectively, at the same time (Tangermann, 2011; Mishra et al., 2004). Kahan (2013) argues that farm diversification spreads risk because it is unlikely that multiple farm enterprises can be affected by changing conditions such as weather or markets in the same way.
Another common risk management decision and especially among smallholder farmers is mixed farming. According to the Food and Agriculture Organization (FAO) of the United Nations (FAO, 2001), mixed farming involves managing crop and livestock enterprises concurrently. One of the benefits of the mixed farming system is the symbiotic relationship between crops and livestock. For instance, while animal manure is a good source of plant nutrients, plant remains (e.g., straw) can be used as animal feed (FAO, 2001).
In this study, we hypothesise that farming systems are designed to triumph amidst risks. For example, a decision by farmers to manage a mix of crops and livestock enterprises or to choose either of them or even failure to adopt can be viewed as a risk mitigation strategy. We study two improved technologies: fortified KK15 beans (Phaseolus vulgaris) and cross-breed chicken (Gallus gallus domesticus). Since the two technologies were new in the study area, the issue of risk in their adoption is inevitable. The purpose of this study was to assess whether farmers' adoption decisions have a risk management bearing. Findings from this study will inform practitioners and policymakers in the design of interventions and policies that are risk responsive.
The KK15 bean is enhanced with zinc (57.5 ppm) and iron (631 ppm). These levels of zinc and iron are higher compared to those of most African bean cultivars that exhibit an average of 31 ppm of zinc and 96.1 ppm of iron (Kimani et al., 2006). While iron is responsible for the synthesis of haemoglobin, zinc is essential for human growth (Devi et al., 2014). Also, the KK15 bean variety is resistant to root rot and bean rust, it is early maturing, fast cooking and is high yielding in low to medium altitude agro-ecological zones.
According to Fotsa and Ngeno (2011), Kuroiler chicken is a dual purpose cross-breed that can be kept under both free range and intensive production systems making it cost effective compared to pure hybrid chicken. The cross-breed is also high yielding producing up to 200 eggs per year compared to indigenous breeds that produce about 100 eggs per year. Moreover, the Kuroiler chicken can reach a live weight of 4 kg in six months while indigenous breeds can only weigh 1.5-2.5 kg within the same period.
MATERIALS AND METHODS
Theoretical framework: Farmers' decision to choose a particular agricultural technology can be analysed within the framework of benefit maximization. The two widely applied benefit maximization theories in agricultural technology adoption studies are random utility theory (RUT) and Expected Utility Theory (EUT) (Greene, 2012;Schoemaker, 1982). The two theories assume that given a set of alternatives, individuals choose the alternative that yields the highest benefit (Batz et al., 1999). The only difference between the two theories is that EUT applies when one's choice is stated while RUT applies when the choice of a decision maker is revealed (Polak and Liu, 2006). Farmers' adoption statuses in this study were observed and therefore we applied the RUT.
Given the two technologies (KK15 fortified beans and Kuroiler chicken), four technology choice options are possible:
• Choose the KK15 fortified beans
• Choose the Kuroiler chicken
• Choose both technologies, or
• Fail to adopt
If the benefits due to the above four choice options are U b , U c , U bc and U n respectively, the RUT suggests that a farmer will choose an alternative only if the specific choice yields the highest benefit.
Following Greene (2012), the choices can be specified as follows:
If U b > max(U c , U bc , U n ), the KK15 fortified beans are chosen (1)
If U c > max(U b , U bc , U n ), the Kuroiler chicken is chosen (2)
If U bc > max(U b , U c , U n ), both technologies are chosen (3)
Otherwise, none of the technologies is adopted (4)
Empirical model: We observed farmers' behaviour for the four possible adoption options, giving rise to an unordered discrete outcome variable with four categories. According to Greene (2012), unordered categorical data can be analysed using the Multinomial Logit (MNL) and Multinomial Probit (MNP) models. Gujarati (2004) argues that the MNL and MNP models yield similar estimates and therefore the choice between them is only guided by the distribution of the error term. In the use of MNL, the error term is assumed to be logistically distributed (Greene, 2012). The main limitation of the MNL model is the independence of irrelevant alternatives (IIA) assumption, which requires that the probability of choosing between alternatives should not change with the introduction of new alternatives. Providing alternatives that are absolutely different (as is the case in this study) renders the IIA assumption irrelevant.
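The random-utility selection rule above is a simple argmax over latent benefits. A minimal sketch (the utility values are purely hypothetical, not estimates from this study):

```python
# Random-utility choice rule: pick the option whose latent benefit is highest.
def choose(U_n, U_b, U_c, U_bc):
    options = {"none": U_n, "beans": U_b, "chicken": U_c, "both": U_bc}
    return max(options, key=options.get)

print(choose(U_n=0.0, U_b=1.2, U_c=0.4, U_bc=0.9))  # → beans
```

In estimation the utilities are unobserved; the econometrician only sees the revealed choice, which is why a probabilistic model such as the MNL is fitted instead.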
In applying the MNP technique, the error term should be normally distributed and homoscedastic; otherwise, the estimates are inefficient (Greene, 2012). The more stringent normal distribution and homoscedasticity assumptions of the MNP model constrain its application in analysing cross-sectional data (Greene, 2012). The major strength of the MNP model is its ability to relax the IIA assumption. The error term in this study was not normally distributed and therefore we applied the MNL model, specified as shown below.
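The IIA assumption discussed above is a structural property of the logit form: the odds between any two alternatives are unchanged when another alternative is added. A minimal sketch (the scores are illustrative values of ours, not estimates from this study):

```python
import math

def mnl_probs(scores):
    # Softmax: logit probabilities from linear utility scores.
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

p3 = mnl_probs([0.0, 0.4, -0.2])        # three alternatives
p4 = mnl_probs([0.0, 0.4, -0.2, 0.7])   # the same plus a fourth alternative
r3 = p3[1] / p3[0]                       # odds of alternative 1 vs alternative 0
r4 = p4[1] / p4[0]
print(round(r3, 6) == round(r4, 6))      # → True
```

Both ratios equal exp(0.4 − 0.0): the added alternative shrinks all probabilities proportionally, leaving relative odds intact.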
Following McFadden (1974), the probability that the i th farmer makes the j th choice is specified as Pr(j|x i ) = exp(x i ′ β j )/ Σ J k=1 exp(x i ′ β k ), where Pr(j|x i ) is the probability that the i th farmer makes the j th choice option (J = 4) and takes a value 0 < Pr(j|x i ) < 1, x i are socio-economic and institutional factors associated with the i th farmer and β j is a vector of parameters to be estimated. Four estimations were possible in this study, each corresponding to a choice alternative. Nevertheless, three equations were estimated: one for the KK15 fortified beans, a second for the Kuroiler chicken and a third for both technologies. The non-adoption choice was set as the base alternative against which parameter estimates for the other choices were interpreted because a majority (73%) of the farmers failed to adopt.
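The McFadden (1974) choice probability takes the familiar softmax form and can be sketched as follows. The covariates and coefficient values below are our own illustration (not estimates from this study); the base alternative's coefficients are normalised to zero, as in the paper's setup.

```python
import math

def mnl_probs(x, betas):
    # Pr(j | x) = exp(x'b_j) / sum_k exp(x'b_k), computed stably.
    scores = [sum(b * v for b, v in zip(beta, x)) for beta in betas]
    m = max(scores)                      # subtract max before exponentiating
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical covariate vector x = (1, distance_km, extension_dummy)
# and one coefficient vector per choice alternative.
betas = [(0.0, 0.0, 0.0),        # non-adoption (base alternative)
         (0.5, -0.10, 0.8),      # KK15 fortified beans
         (-0.4, -0.05, 0.6),     # Kuroiler chicken
         (-0.9, -0.15, 1.1)]     # both technologies
p = mnl_probs((1.0, 4.0, 1.0), betas)
print([round(v, 3) for v in p])
```

The four probabilities are strictly between 0 and 1 and sum to one, matching the constraint stated for Pr(j|x i ).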
Estimates by the MNL model do not directly explain the effect of the independent variables on the outcome variable but rather the relative odds, because the resulting probability function is non-linear (Wulff, 2015). To measure the direct effect of a change in any of the explanatory variables on the dependent variable, marginal effects were computed by differentiating Eq. (2) (Bowen and Wiersema, 2004). Following Wulff (2015), the marginal effects were calculated as ME ij = Pr(j|x i )[β j − Σ J k=1 Pr(k|x i )β k ], where ME ij is the marginal effect and the bracketed term subtracts the probability-weighted average of the coefficients over the different choice alternatives. The rest of the parameters are defined in the same way as in Eq. (2). The equation that was estimated using the MNL model to assess the drivers of the four choice decisions is given in Eq. (8), with the covariates defined in the next section.
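The standard MNL marginal effect, ME j = Pr(j)(β j − Σ k Pr(k)β k ), can be sketched as follows for a single covariate. All coefficient values are illustrative (ours, not estimates from this study); a useful sanity check, performed at the end, is that the effects sum to zero across the four alternatives.

```python
import math

def mnl_probs(scores):
    # Softmax over linear utility scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def marginal_effects(probs, coefs):
    # coefs: the coefficient of ONE covariate (e.g. distance to market)
    # in each choice equation; ME_j = P_j * (b_j - sum_k P_k b_k).
    weighted = sum(p * b for p, b in zip(probs, coefs))
    return [p * (b - weighted) for p, b in zip(probs, coefs)]

probs = mnl_probs([0.0, 0.4, -0.2, 0.1])          # four alternatives
me = marginal_effects(probs, [0.0, -0.10, -0.05, -0.15])
print([round(v, 4) for v in me], round(sum(me), 10))
```

Because the probabilities must still sum to one after any covariate change, a negative marginal effect on one alternative is necessarily offset by positive effects elsewhere.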
Definition of variables used in the empirical model:
The dependent variable in this study was choice measured as non-adoption = 0; KK15 fortified beans = 1; cross-breed chicken = 2 and both = 3. The independent variables, their units of measurement and a priori signs are summarised in Table 1. Distance to market significantly influences technology adoption. According to Nazziwa-Nviiri et al. (2017), an extra kilometre away from market reduced the likelihood of fertilizer adoption by 1.1 percentage points in Uganda. A negative effect of distance to market on technology adoption is hypothesised in this study. Access to extension services was measured as a dummy variable. A study by Njuguna et al. (2017) found that access to extension services positively and significantly influenced the adoption and intensity of adoption of brooding technologies in Kenya. Similar findings were reported by Tamir et al. (2015) in Ethiopia. According to Noltze et al. (2011), off-farm income increases the likelihood of technology adoption. However, findings in the same study show that off-farm income had no effect on the intensity of technology adoption. We hypothesise that off-farm income would increase technology adoption as farmers are able to afford the cost associated with technology adoption.
According to Nguyen-Van et al. (2016), household size has a negative effect on the choice of technologies. A member's increase in household size reduced the likelihood of choosing old-green and new-old-green tea varieties by 81.8 and 66.3 percentage points, respectively, in Vietnam. A similar negative association between household size and the choice of adaptation strategies to climate change was reported by Obayelu et al. (2014) in Nigeria. However, Ayuya et al. (2012) found that a member's increase in household size increased the likelihood of adopting farmyard manure by 3.7 percentage points in Kenya. Due to contradicting findings, we are not able to predict the effect of household size. On the gender of the household head, Nazziwa-Nviiri et al. (2017) found that female-headed households were 20 percentage points less likely to adopt fertilizer technology due to resource inequities between men and women in Uganda. However, Simtowe et al. (2016) found a positive association between female-headed households and pigeon pea adoption in Malawi.
Literature is inconsistent on the effect of age on technology adoption. Ayuya et al. (2012) found that a year's increase in farmers' age increased the likelihood of choosing crop residue as an organic soil management practice by one percentage point but reduced the likelihood of choosing farmyard manure by 1.3 percentage points in Nigeria. Similarly, Murage and Ilatsia (2011) found that a year's increase in farmers' age reduced the likelihood of choosing artificial and natural insemination services for dairy cows by 0.7 percentage points compared to artificial insemination in Kenya. Due to literature discrepancies, we do not hypothesise the effect of age.
Education of household head was measured in years. According to Matsumoto et al. (2013), a year's increase in education of the household head increased adoption of planting fertilizer by 0.133 units in Uganda, indicating that educated farmers were more willing to use modern inputs. Similarly, Obisesan et al. (2016) found that an increase in farmers' education increased cassava adoption by 17.5 percentage points in Nigeria. Access to credit was measured as a dummy and a positive association with choice of new technologies hypothesised, following Obisesan et al. (2016) who found that access to credit increased intensity of adoption of cassava varieties by 15.8 percentage points in Nigeria.
Farm size plays a critical role in the choice of agricultural technologies. Some studies have reported a positive effect of farm size on technology adoption, as farmers with large farms incur a lower opportunity cost of land (Abay et al., 2016). Similarly, Chuchird et al. (2017) found a positive effect of farm size on the choice of irrigation technologies in Thailand, where a hectare's increase in farm size increased adoption of water wheel technology by 55.6 percentage points. Consequently, we hypothesise a positive association between farm size and technology choice.
Farm diversity was measured as the number of enterprises (crop and livestock) a household managed in the year preceding the survey. Ali (2015) reports that farm diversity is a risk management strategy; as a result, we hypothesise that farmers who are already diversified are more likely to diversify further to spread risks even more. Holding a group official position was measured as a dummy variable, and a positive association with technology choice was hypothesised.
Study area:
This study was carried out in Nyamira and Kisii Counties, Kenya, where the bean and chicken technologies were promoted in 2016. Nyamira County lies between latitude 0°30′ and 0°45′ South and longitude 34°45′ and 35°00′ East, with a population density of 656 persons/km² (Commission on Revenue Allocation, 2011). Ninety per cent of the County's land is arable. This, coupled with reliable rainfall of 1,200-2,100 mm per year, makes agriculture the major economic activity. Farmers are largely smallholders managing plots of approximately 0.97 ha owing to the high population density.
Kisii County lies between latitude 0°30′ and 1°00′ South and longitude 34°38′ and 35°00′ East and has a population density of 595 persons/km² (Commission on Revenue Allocation, 2011). With 95% of the County's land being arable and reliable rainfall of about 1,500 mm per year, agriculture is the mainstay of the County. Smallholder farming is common, with a household farm size of approximately 0.81 ha due to the high population density.
The choice of the two Counties was informed by the paradox of 'reliable agricultural conditions and malnutrition in the same area'. Smale et al. (2011) argue that such paradoxes could be due to low technology adoption, a common phenomenon in SSA. According to the Kenya National Bureau of Statistics (KNBS, 2015), about 26 per cent of children under the age of five years in the two Counties are stunted. Moreover, agricultural extension systems often promote productivity-enhancing technologies while leaving out the equally important nutrition-enhancing technologies. This suggests that promotion of nutrition-enhancing technologies such as fortified beans, and their subsequent adoption, can significantly reduce malnutrition.
Sampling procedure and data: This study used cross-sectional data collected in October-December 2016. A multi-stage sampling procedure was used to select 599 households. In the first stage, a list of 94 farmers' groups (71 from Kisii and 23 from Nyamira) was constructed from existing registered farmers' groups. Considering the proportion of farmers' groups in each County, the second stage involved the use of simple random sampling to select 48 groups (32 from Kisii County and 16 from Nyamira County). In the third stage, simple random sampling was used to select 13 farmers from each of the selected groups. A total of 599 farmers were surveyed and included in this analysis.
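The multi-stage procedure described above can be sketched in code. This is a minimal illustration, not the authors' script: the group sizes and farmer IDs are hypothetical, and the selection counts follow the proportions reported in the text.

```python
import random

def multistage_sample(groups_by_county, n_groups_by_county, farmers_per_group, seed=42):
    """Illustrative multi-stage sampler: randomly select groups within each county
    (stage 2), then randomly select a fixed number of farmers per group (stage 3)."""
    rng = random.Random(seed)
    sample = []
    for county, groups in groups_by_county.items():
        chosen = rng.sample(groups, n_groups_by_county[county])   # stage 2
        for group in chosen:                                      # stage 3
            sample.extend(rng.sample(group, farmers_per_group))
    return sample

# Hypothetical sampling frame: each registered group holds 20 farmer IDs.
frame = {
    "Kisii":   [[f"K{g}_{i}" for i in range(20)] for g in range(71)],
    "Nyamira": [[f"N{g}_{i}" for i in range(20)] for g in range(23)],
}
farmers = multistage_sample(frame, {"Kisii": 32, "Nyamira": 16}, farmers_per_group=13)
print(len(farmers))  # 48 groups x 13 farmers = 624
```

Note that 48 × 13 = 624 selections, slightly above the 599 households the paper reports as surveyed, which presumably reflects non-response or data cleaning.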
RESULTS AND DISCUSSION
Farmer and farm characteristics: Farmer and farm characteristics of the respondents are summarized in Table 2. The average farm size was 0.39 ha, although adopters had slightly larger farms compared to non-adopters. Distance to market was 3.97 km, and adopters were closer to the nearest market by half a kilometre relative to non-adopters. Farmers were middle-aged (50.54 years) and adopters were significantly older. A majority (53 per cent) of adopters were group officials, while access to extension ranged from 64 per cent among non-adopters to 84 per cent among adopters, a difference significant at the one per cent level. [Notes to Table 3: Wald statistic = 109.87***; pseudo R² = 0.1159; log pseudolikelihood = -400.24; dependent variable: choice between KK15, Kuroiler, both or none (base choice); *** and * denote significance at the 1% and 10% levels; marginal effects computed at sample means; robust z statistics in parentheses.]
Econometric results: The econometric results are presented in Table 3. The data (y|x) were logistically distributed (the Jarque-Bera statistic was significant at the one per cent level), and therefore the multinomial logit model was appropriate. The Wald statistic was significant at the one per cent level and the pseudo R² was 11.9 per cent, suggesting that the model fitted the data well. Moreover, 5 of the 11 independent variables included in the model were highly significant, implying high predictive power of the model.
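The multinomial logit model underlying Table 3 specifies choice probabilities as a softmax over linear indices, with the "none" category as the base. The sketch below fits such a model on synthetic data by gradient ascent on the log-likelihood; it is an illustration of the estimator, not a reproduction of the paper's estimates, and the data, coefficients, and learning rate are all hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_mnlogit(X, y, n_choices, lr=0.1, iters=2000):
    """Fit P(y=j|x) = exp(x'b_j) / sum_k exp(x'b_k), with b_0 = 0 (base choice),
    by gradient ascent on the multinomial log-likelihood."""
    n, k = X.shape
    B = np.zeros((k, n_choices))                # column 0 stays 0: base category
    Y = np.eye(n_choices)[y]                    # one-hot coding of the chosen option
    for _ in range(iters):
        P = softmax(X @ B)
        grad = X.T @ (Y - P) / n                # score of the log-likelihood
        grad[:, 0] = 0.0                        # keep base-choice coefficients fixed
        B += lr * grad
    return B

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_B = np.array([[0, 0.5, -0.5], [0, 1.0, 0.0], [0, 0.0, 1.0]])
y = np.array([rng.choice(3, p=p) for p in softmax(X @ true_B)])
B_hat = fit_mnlogit(X, y, 3)
P_hat = softmax(X @ B_hat)
print(np.allclose(P_hat.sum(axis=1), 1.0))     # predicted probabilities sum to one
```

In practice one would use a packaged estimator (for example `MNLogit` in statsmodels), which also provides robust standard errors and marginal effects at sample means as reported in Table 3.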
The independent variables were tested for multicollinearity using the Variance Inflation Factor (VIF) and Pearson correlation statistics. The results show that the explanatory variables did not exhibit multicollinearity (all VIF values were less than 10 and all Pearson correlation coefficients were less than 0.5). Seventy-three per cent of farmers were non-adopters, 18 per cent adopted fortified beans, 3 per cent adopted Kuroiler chicken and 6 per cent adopted both technologies.
Distance to market was negatively and significantly associated with the choice of the fortified beans, and the estimate was significant at the one per cent level (Table 3). A kilometre's increase in distance to the nearest market decreased the likelihood of choosing the fortified beans by 2 percentage points, implying that as distance to market increases, farmers are less likely to adopt fortified beans. Long distance to market limits market access because of the higher costs and greater time required for market participation. Farmers' reduced likelihood of adopting the fortified beans can thus be viewed as a risk management strategy that weighs the positive role of proximity to markets in increasing profit from the sale of beans.
Gender of the household head had a positive and significant effect on the choice of Kuroiler chicken (Table 3). Male-headed households were 3 percentage points more likely to adopt Kuroiler chicken than female-headed households, and the estimate was significant at the one per cent level. The effect of gender runs against our hypothesised direction that men would be less likely to adopt chicken because chicken is perceived to be a women's enterprise (Akite et al., 2018). However, studies have shown that as income from agricultural enterprises increases, the management of those enterprises and the resulting revenues tends to shift from women to men (Ogutu et al., 2017). This explains the observation that men were more likely to adopt Kuroiler, given its higher cost of adoption and expected higher returns relative to local breeds. This finding contradicts Kabunga et al. (2012), who found that female-headed households were more likely to adopt tissue culture bananas in Kenya when provided with adoption conditions similar to men's.
Age of the household head had a positive and significant effect on the choice of KK15 fortified beans at the one per cent level (Table 3). A year's increase in age increased the probability of adopting the fortified beans by 0.4 percentage points. McNamara and Weiss (2005) provide a rational explanation for this observation, arguing that as age advances, farmers are more likely to accumulate wealth, decreasing their absolute risk averseness. This would lead to an increase in adoption by older farmers, as observed in this study. Moreover, beans have been grown in the study area since time immemorial, and therefore older farmers may understand the management of bean enterprises better than younger farmers, posing minimal production risks.
Farm diversity is a risk management strategy (Agyeman et al., 2014; Akaakohol and Aye, 2014). This study found that households that were already diversified were more likely to choose fortified beans and both technologies by 1.7 and 0.5 percentage points, respectively, and the estimates were significant at the one per cent level (Table 3). This implies that farm diversity encourages further diversification, explaining the positive association between the two adoption options. Farm diversification is a strategy for spreading risk, as multiple enterprises may not be affected in the same way by a given shock (Kahan, 2013). Access to extension services had a positive and significant effect on the adoption of KK15 fortified beans and both technologies at the one per cent level (Table 3). Farmers who accessed extension services were more likely to adopt the KK15 fortified beans by 8.8 percentage points and both technologies by 4.5 percentage points, holding other factors constant. Extension creates awareness of existing and new technologies. Moreover, extension services provide farmers with the skills required to manage agricultural enterprises, possibly explaining the positive association. By choosing the fortified beans, or even both technologies, farmers likely intend to take advantage of indigenous knowledge to manage production and marketing risks. Moreover, in the case of marketing risks such as poor prices, the outputs of the two technologies can be kept for future markets or consumed at home.
Although significant only at the 10 per cent level, being a group official increased the probability of choosing KK15 fortified beans by 6.5 percentage points. Group officials often have higher social status in many spheres, including education, wealth and social networks. Kabunga et al. (2012) argue that information flows on agricultural technologies tend to favour community leaders because they play a linking role between change agents and the target communities. From a risk management perspective, group officials are often innovators and therefore risk-loving, further supporting the positive association.
CONCLUSION
Previous studies have analysed the determinants of technology adoption (Pindiriri, 2018; Langat et al., 2013; Asfaw et al., 2011). However, none of them analysed the choice of agricultural technologies as a risk management strategy among smallholder farmers, which we do in this article. We answer the question: is the choice of agricultural technologies a risk management strategy among smallholder farmers? Evidence was generated using cross-sectional data from 599 households.
Applying the multinomial logit model, we show that farmers' choice of agricultural technologies is indeed aimed at managing the risks inherent in smallholder agriculture systems, and their choices bear plausibly on risk management. For example, farmers farther from markets are less likely to adopt fortified beans because limited market access raises marketing risks. Older farmers are more likely to adopt fortified beans, given that they may be wealthier and more knowledgeable about bean management, reducing their absolute risk averseness. We also show that farmers with higher farm diversity are more likely to diversify even further, which we interpret as a strategy to spread risk.
Overall, we conclude that farmers' choice of agricultural technologies is aimed at managing risks. An important policy implication is that considering the risks inherent in smallholder agriculture is key to developing risk-sensitive policies on new agricultural technologies. The findings also indicate that farmers' socio-economic and institutional attributes influence their choices amidst risks, with a clear aim of managing those risks. The choice of agricultural technologies is farmer-specific; therefore, practitioners should tailor interventions that promote new agricultural technologies to address farmers' individual motives for managing risk.
ACKNOWLEDGMENT
This research was funded by the German Federal Ministry of Food and Agriculture (BMEL) grant number 2813FSNu01.
CONFLICT OF INTEREST
We declare no conflict of interest in this study whatsoever.
"year": 2019,
"sha1": "1182ffeb4f2f08d1964121aa5c57690631c5a90f",
"oa_license": "CCBY",
"oa_url": "https://www.maxwellsci.com/announce/CRJSS/10-1-8.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "89edb033b38f2f6410a0d59281b0d7f0274fde55",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
SELECTIVE CYTOTOXICITY EFFECTS OF R-GLYCIDOL AND S-GLYCIDOL ON VERO AND HCT 116 CELLS IN EVALUATING THE INCIDENCE OF GLYCIDYL ESTERS IN EDIBLE OILS AND FATS
Despite its feasibility as a food flavouring, glycidol is classified by the WHO as a probable carcinogen under Group 2A. The cytotoxic effects of the R- and S-glycidol isomers on African green monkey kidney normal cells (Vero) and a human colon cancer cell line (HCT 116) remain unclear. Cell viability of the treated Vero and HCT 116 cells was determined using the AlamarBlue® assay. Dichlorodihydrofluorescein diacetate (DCFDA) was used to evaluate reactive oxygen species (ROS) activity. Protein expressions of ERK ½, p-ERK, BCL-2 and caspase-3 were investigated using the western blotting technique. The findings indicated that exposure to R- and S-glycidol (1.16 µg/mL) dramatically reduced the viability of the treated HCT 116 cells, while being only slightly cytotoxic to Vero cells, and triggered ROS activity. R- and S-glycidol caused down-regulation of ERK ½, p-ERK and BCL-2 protein expression at 48 hr of treatment. Furthermore, both R- and S-glycidol interact closely with the 3D structures of the human ERK and p-ERK protein receptors. In conclusion, R- and S-glycidol potentially triggered oxidative stress and affected ERK protein phosphorylation, leading to caspase-3-independent cell death of the treated HCT 116 cells, suggesting that lower doses (<1.16 µg/mL) of R- and S-glycidol are safe for human consumption.
INTRODUCTION
Glycidol is widely employed in industrial applications such as the production of sweetening and flavouring compounds, diluents and dye-levelling agents, because the oxirane ring of glycidol can function as an alkylating agent and contributes to some of its reactivity (Foroumadi and Emami, 2014). Natural oils and vinyl polymers are both produced using glycidol as a stabiliser (Foroumadi and Emami, 2014). Glycidols have become important key intermediates, as substitute dendrimers, for the preparation of chemicals, pharmaceuticals, bioactive compounds and food products (Tollini et al., 2022). Mahapatra and Tysoe (2015) have stated that glycidol exists in two stereoisomeric forms, namely the R- and S-glycidol enantiomers, and the only difference between the two compounds is chirality (EFSA Panel on Contaminants in the Food Chain (CONTAM), 2016). Owing to their biocompatibility with human cells, R- and S-glycidol have been investigated for their potential as replacement dendrimers for biomedical applications. Dendrimers are hyperbranched macromolecules which can be functionalised. Changing their physicochemical properties so that they express specific biological traits, such as lipid bilayer interactions and cytotoxicity, has made them suitable delivery vehicles (Abbasi et al., 2014). The benefits of many drugs cannot be exploited because of their poor solubility, toxicity or stability problems. The use of dendrimers as carriers of bioactive compounds can solve many problems in biomedical approaches and improve their usage in clinical applications (Aurelia et al., 2020). The utilisation of drug combinations based on targeted administration or drug delivery materials has become a new trend to optimise therapeutic effects while reducing adverse effects, and it potentially offers a new strategy in the search for successful cancer therapy (Wróbel et al., 2022). In addition, according to a study by Wróbel et al. (2022), generation 2 and 3 poly(amidoamine) (PAMAM G2 and G3) dendrimers covered with R-glycidol could penetrate cells faster and exhibited higher toxicity for cancerous cells than for normal cells compared to S-glycidol-covered analogues. Interestingly, the chirality and functional groups of stereoisomers have been reported to alter the physiology of human cells, which may contribute to either harmful or harmless effects (Mäder and Kattner, 2020; Moreno-Yruela et al., 2022).
In contradiction, reports by EFSA CONTAM (2016) and Hartwig et al. (2020) stated that glycidol is genotoxic (damages DNA) and carcinogenic (causes cancer). The chemical has the ability to cause adverse effects in rodents (Spungen et al., 2018) and may contribute to genotoxic and mutagenic effects in rats (Bakhiya et al., 2011; EFSA CONTAM, 2016). Given that research on glycidol in humans is limited, Schilter et al. (2011) conducted carcinogenicity bioassays of glycidol in mice and rats in various tissues and found that glycidol induced dose-related increases in the rates of neoplasms. According to Akane et al. (2013), glycidol damaged axons in the central and peripheral nervous systems of adult rats, suggesting that it targets the developing nerve terminals of immature granule cells and inhibits late-stage hippocampal neurogenesis. Studies evaluating the food contaminant isomers, especially the toxicological effects of R-glycidol or S-glycidol on human cells, are limited (EFSA CONTAM, 2016).
Moreover, foods such as infant formula, margarine, potato crisps, cookies, hot-surface-cooked pastries, short crusts, fried and roast meat, and chocolate spreads have been reported to contain harmful food contaminants, especially glycidol (EFSA CONTAM, 2016; Bakhiya et al., 2011). Goh et al. (2021) stated that glycidyl ester contamination in palm-based cooking oil was in the range of 1.338 to 18.362 mg/kg. However, the permissible daily glycidol exposure from refined dietary oils and fats is 1.33 g/kg/day (maximum daily fat intake: 80 g) for adult individuals weighing 60 kg (Bakhiya et al., 2011). These controversial toxicological effects of glycidols need to be investigated in order to understand the underlying mechanisms of the chemical's safety towards human normal and cancer cells, especially the R- and S-glycidol isomers. Furthermore, few studies have reported on the genotoxic, mutagenic and cytotoxic effects of glycidols in human normal and cancer cells.
Trans and cis fats are examples of isomers that have different effects on human cells: trans fat may contribute to carcinogenesis and cancer risk (Matta et al., 2021), whereas cis fat is considered safe for humans as it plays important roles as an energy source, a component of cellular membranes and a regulator of different biological processes (Hirata, 2021). Although the all-trans isomer of astaxanthin (AST), a potent antioxidant, predominates in nature, certain studies have indicated that the cis isomers of AST, particularly 9-cis AST, demonstrate a higher antioxidant potency than the all-trans isomer (Liu et al., 2016). Furthermore, α-tocopheryl succinate (α-TOS) is selective for cancer cells, at least in part because of the lower esterase activity and reduced antioxidant defences expressed by these malignant cells compared to their normal (non-malignant) counterparts, relative to the other tocopherol isomers (Constantinou et al., 2008). Lim et al. (2014) have suggested that different tocotrienol isomers might exhibit different cellular mechanisms of cell death in different cancer types, but the specific mechanisms induced by alpha-, gamma- and delta-tocotrienols in brain and lung cancers are still unclear.
According to Brooks et al. (2011), chirality is one of the important factors determining how chemicals interact with cancer protein receptors, which results in dichotomous effects in normal and cancerous human cells. The oxaliplatin isomers (R,R)-cyclohexane-1,2-diamine and (S,S)-cyclohexane-1,2-diamine differ in their ability to induce cytotoxic effects on leukaemia, lung, colon, breast, renal, melanoma and ovarian human tumour cells: the (S,S)-cyclohexane-1,2-diamine platinum antineoplastic compound is more biologically active than (R,R)-cyclohexane-1,2-diamine (Arnesano et al., 2015). Toxicological assessments of the glycidol isomers on human cells are limited. Thus, the objective of this study was to elucidate the selective toxicological effects of R- and S-glycidol in normal and cancer cells. The findings may lead to a better understanding of the selective cytotoxicity of R- and S-glycidol in normal cells (monkey) and cancer cells (human), which may be beneficial for the safety evaluation of glycidol exposure in food.
Cell Culture of African Green Monkey Kidney (Vero) and HCT 116 Cell Lines
The Vero cell line was obtained from Universiti Sultan Zainal Abidin (UNISZA) and HCT 116 cells were obtained from Imperial College London. Vero and HCT 116 cells were maintained in RPMI 1640 medium (Gibco, USA) with 1% penicillin-streptomycin (Gibco, USA) and 10% FBS (Tico Europe, Netherlands). The flasks were incubated in a 37°C humidified incubator supplemented with 5% CO2. After the cells reached 80%-90% confluency, the cells were detached. The cultures were examined using an inverted microscope to check for signs of contamination, and old media were discarded. The flasks were rinsed three times with PBS (3 mL) to discard traces of serum, which would otherwise inhibit the action of accutase. Accutase (1 mL) was added, and the flasks were incubated at 37°C in a 5% CO2 humidified incubator for 5 min to detach the cells. Complete medium (3 mL) was added, and the suspensions were transferred into 15 mL centrifuge tubes before spinning down at 2500 rpm for 5 min. The supernatants were discarded, and the pellets of Vero and HCT 116 cells were used for the next procedure. The cell suspension (10 μL) was transferred immediately to the edge of the haemocytometer chamber, and the slide was viewed under an inverted microscope. The cells were counted within the four corner grids. Non-viable cells stained blue (Kim et al., 2016).
The final volume of fresh medium and treatment solution (1 μL) was 100 μL in each well. The 96-well plate containing 99 μL of seeded cells in fresh complete medium was incubated overnight in a 37°C humidified incubator supplemented with 5% CO2 (Kadir et al., 2009).
AlamarBlue® Assay Using Vero and HCT 116 Cell Lines
Different concentrations of R- and S-glycidol (0.000116, 0.00116, 0.0116, 0.116 and 1.16 μg/mL) were used to treat the Vero and HCT 116 cell lines. Four replicates of each treatment were prepared in 96-well plates and incubated in a 37°C humidified incubator supplemented with 5% CO2 for 24, 48 and 72 hr. AlamarBlue® reagent (10 μL) was added into each well, and the plates were immediately incubated under the same conditions for 4 hr. Resazurin (opaque blue) in the AlamarBlue® reagent is converted to resorufin (fluorescent pink) through reduction reactions in metabolically active cells. The readings were measured at a 570 nm excitation wavelength and a 590 nm emission wavelength using a microplate reader (Thermo Fisher Scientific, USA) (Bonnier et al., 2015).
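Percentage viability from AlamarBlue readings is commonly expressed relative to the untreated control after subtracting a cell-free blank. The sketch below illustrates that calculation; the reading values are hypothetical, and the formula is a standard convention rather than the authors' stated method.

```python
# Hypothetical fluorescence readings (arbitrary units).
blank = 1200.0                          # medium + AlamarBlue, no cells
control = 25800.0                       # untreated cells
treated = [24100.0, 19800.0, 13400.0]   # readings at increasing glycidol dose

def percent_viability(sample, control, blank):
    """Viability relative to untreated control after blank subtraction."""
    return 100.0 * (sample - blank) / (control - blank)

for s in treated:
    print(round(percent_viability(s, control, blank), 1))  # 93.1, 75.6, 49.6
```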
ROS Measurement Using 2′,7′-Dichlorofluorescin Diacetate (DCFDA) Dye
The cells were seeded in 24-well plates at 1 × 10⁵ cells per well and incubated overnight. Dichlorodihydrofluorescein diacetate (DCFDA) powder (Sigma Aldrich, USA) was used to measure ROS activity in the treated HCT 116 cells. DCFDA (5 mg) was dissolved in dimethyl sulfoxide (DMSO) (4.104 mL). DCFDA solution (2.5 μL) was added to the wells containing cells and incubated for 30 min. After 30 min, the medium in each well was discarded carefully and the wells were washed with PBS. Fresh media were added to each well, and the cells were treated with different concentrations of R- and S-glycidol: 5 μL of the treatment solutions was aliquoted into media containing HCT 116 cells (500 μL) that had been pre-treated with DCFDA. The treated cells were immediately incubated at 37°C with 5% CO2. The samples were measured every 1 hr using a microplate reader. The values were measured fluorometrically at an excitation of 520 nm and an emission of 550 nm (Wu and Yotnda, 2015).
Protein Extraction
HCT 116 cells (2.5 × 10⁵) were seeded in six-well plates and treated with 500 mM stock to obtain a final concentration of 5 mM for 24 and 48 hr. Seeded cells treated with 0.01% DMSO were used as the negative control for both time points. The cells were incubated for 24 and 48 hr with R-glycidol and S-glycidol prior to trypsinisation. The cell pellets were washed with 3 mL PBS and centrifuged at 3000 rpm for 5 min. The supernatants were completely discarded by pipetting, and the pellets were stored at −80°C until the next procedure. Proteins from the treated cell samples were extracted using the NucleoSpin® RNA/Protein (Macherey-Nagel) protocol with slight modifications. The Bradford assay was performed using the Bio-Rad Protein Assay (Bio-Rad, USA) to determine protein concentrations. The absorbance of the samples was measured at 595 nm using an ELISA plate reader (Bio-Rad, USA) (Sinkala et al., 2017).
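Protein quantification by the Bradford assay typically proceeds by fitting a linear standard curve of absorbance at 595 nm against known BSA concentrations, then interpolating unknowns. The sketch below shows that workflow with hypothetical standard-curve values, not data from this study.

```python
import numpy as np

# Hypothetical BSA standard curve: absorbance at 595 nm vs concentration (ug/mL).
std_conc = np.array([0, 125, 250, 500, 750, 1000], dtype=float)
std_abs = np.array([0.00, 0.11, 0.21, 0.43, 0.63, 0.85])

# Linear fit A = slope * c + intercept over the standard points.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def protein_conc(absorbance):
    """Interpolate an unknown sample's concentration from the standard curve."""
    return (absorbance - intercept) / slope

print(protein_conc(0.50))  # estimated concentration (ug/mL) of an unknown sample
```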
Western Blotting, Blocking and Gel Dot
Loading samples of the HCT 116 cells treated with R- and S-glycidol were prepared according to the manufacturer's instructions (Pub. Part No. IM-8042). Protein samples (20.0 μg/μL), 3.75 μL NuPAGE® LDS sample buffer (4X), 0.75 μL BME and deionised water were mixed to a total volume of 15 μL. The mixture was heated at 72°C for 10 min at 500 rpm. 1X SDS running buffer was poured into the XCell SureLock™ Mini-Cell electrophoresis tank. Protein samples (15 μL) from the extraction step were loaded into a NuPAGE 4%-12% Bis-Tris gel (Novex, Life Technologies, USA) using loading tips. The system was run at 200 V and 100-125 mA for 1 hr 30 min. The gel was then removed from its cassette and washed three times with ultrapure water. The SDS-PAGE gel was stained with SimplyBlue® safe stain for 2 hr and destained in ultrapure water overnight. The image was captured using a gel imager (Gel Doc XR+ system, Bio-Rad, USA). After separation, the proteins were transferred to a PVDF membrane.
The PVDF membrane was soaked in blocking buffer (5% non-fat dry milk) for 2 hr. The membrane was incubated overnight with a primary mouse monoclonal ERK ½ antibody at 4°C. Subsequently, the membrane was incubated with an HRP-conjugated anti-mouse secondary antibody (1:2200 dilution) for 4 hr. The membrane was washed three times with Tris-buffered saline with Tween 20 (TBST) buffer (20 mM Tris pH 7.5, 150 mM NaCl, 0.1% Tween 20). The bound antibody was detected using a chromogenic peroxidase substrate (Life Technologies, USA). Gel dot was used to measure band intensity, and the images were analysed using ImageJ software.
Molecular Docking Study
AutoDock 4.2 and AutoDock Tools (Morris et al., 2009) were used to examine the non-bonding interactions of the 3D structures of human ERK2 (PDB: 4FMQ) and phosphorylated MAP kinase ERK2 (PDB: 2ERK) with R- and S-glycidol through a molecular docking study (Aral et al., 2012). The crystal structures were downloaded from the RCSB PDB and prepared for molecular docking in two steps. First, the Protein Data Bank (PDB) files (4FMQ and 2ERK) were separately imported into AutoDock Tools (ADT), the graphical user interface for AutoDock. Water molecules, ligands and any associated molecules were removed from the receptor (protein) structures during this process. Next, the receptors were prepared by adding missing hydrogen atoms, residues and charges. The R- and S-glycidol structures were retrieved from PubChem, and the energies of the molecules were minimised to obtain optimal geometrical parameters for docking with AutoDock 4.2. The best docking poses obtained from the simulations were visualised using Discovery Studio software (BIOVIA Discovery Studio 2016).
Statistical Analysis
GraphPad Prism V was used to analyse the cell viability, ROS level and western blot values. One-way ANOVA with Dunnett's post-test and two-way ANOVA with Bonferroni's multiple comparison test were used to determine significant differences (p<0.05) in cell viability and ROS of HCT 116 cells treated with R- and S-glycidol. An unpaired t-test was used to determine significant differences in protein expression in the treated human colon HCT 116 cell line.
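The one-way ANOVA used above partitions variance into between-group and within-group components and compares them with an F-statistic. As a worked illustration with hypothetical viability readings (not the study's data), the statistic can be computed by hand:

```python
import numpy as np

def one_way_anova_F(groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand = all_vals.mean()
    k, N = len(groups), len(all_vals)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Hypothetical viability readings for a control and two treatment doses.
control = [98, 97, 99, 96]
dose_lo = [90, 92, 89, 91]
dose_hi = [60, 63, 58, 61]
F = one_way_anova_F([control, dose_lo, dose_hi])
print(F > 1.0)  # a large F suggests real between-group differences
```

In practice the F-statistic is compared to the F(k−1, N−k) distribution, and post-tests such as Dunnett's then compare each treatment against the control.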
Cytotoxicity Effect of R- and S-glycidol on Vero and HCT 116 Cell Lines
In order to determine the toxicity of R- and S-glycidol, the AlamarBlue assay was performed on normal and cancer cell lines. Figure 1 [i(a)], [ii(a)], [iii(a)], [i(b)], [ii(b)] and [iii(b)]
shows percentage cell viability of Vero and HCT 116 cells after treatment with R-glycidol and S-glycidol. Based on our results, R- and S-glycidol inhibited cell viability at 24, 48 and 72 hr of treatment in HCT 116 but not in Vero cells. R- and S-glycidol showed a slight cytotoxic effect on Vero cells that persisted across the time points. However, inhibition of the treated HCT 116 cells was more pronounced at higher doses and longer exposure times, suggesting that the cytotoxic effect was dose- and time-dependent. These results on the cell viability of the treated Vero and HCT 116 cells demonstrate the selectivity of the cytotoxic effects of R- and S-glycidol exposure. Based on our findings, 50% inhibition was reached only in the treated HCT 116 cells; IC50 values for the S-glycidol treatment were 4.8, 0.7 and 0.4 μg/mL at 24, 48 and 72 hr of treatment, respectively. Interestingly, exposure of Vero cells to R- and S-glycidol did not exceed 50% cell inhibition, indicating that the chemicals were less toxic to normal cells. Moreover, both chemicals showed a similar trend of cell inhibition, suggesting that the isomerism of glycidol did not contribute much to the cytotoxic effects.
Reactive Oxygen Species (ROS) Expression
ROS levels were determined to evaluate oxidative stress after exposure of HCT 116 cells to R- and S-glycidol. Figure 2a shows a significant increase at a treatment concentration of 10 mM compared to control at the 6 and 24 hr time points, but other concentrations of the R- and S-glycidol treatments had negligible effects. The data suggest that the highest concentration of R-glycidol (1.16 μg/mL) resulted in a significant increase in the ROS level of the treated HCT 116 cells, whereas the other treatment concentrations showed only a slight increase, suggesting that oxidative stress was expressed after exposure of HCT 116 cells to R-glycidol at 24 hr. Figure 2b shows a significant increase in ROS production after treatment with S-glycidol (1.16 μg/mL) compared to control, while the other concentrations showed negligible effects. The data suggest that the highest concentration of R- and S-glycidol may trigger oxidative stress in the treated HCT 116 cells.
Western Blot Analysis of the Protein Expressions
ERK ½ is a protein in the MAPK pathway that may contribute to oxidative stress events in cells. The expressions of ERK ½ protein (42 kD), p-ERK protein (43-55 kD), BCL-2 protein (26 kD), caspase-3 (35 kD) and GAPDH (36 kD) were therefore investigated by western blotting to elucidate the mechanism in the cells. The expression of these proteins in HCT 116 cells treated with R- and S-glycidol at 24 and 48 hr is shown in Figures 3a and 3b, respectively. Protein bands were intensely expressed on the PVDF membrane of the HCT 116 cells treated with R- and S-glycidol (1.16 μg/mL) compared to control at 24 and 48 hr of exposure.
As shown in Figure 3c, there was no significant difference in ERK 1/2 protein expression between R-glycidol-treated HCT 116 cells and control after 24 hr of treatment. After 48 hr of exposure, however, HCT 116 cells treated with R-glycidol showed a marginal decrease relative to control, significant at p<0.05. S-glycidol exposure led to significant down-regulation (p<0.01) of ERK 1/2 protein expression at 48 hr, while no significant difference was seen at 24 hr, suggesting that a longer exposure to R- and S-glycidol is needed before the effect on ERK 1/2 expression in HCT 116 cells appears. The p-ERK protein expression of HCT 116 cells treated with R- and S-glycidol showed no significant difference compared to control at 24 hr but decreased significantly (p<0.01) at 48 hr (as shown in Figure 3d). Interestingly, BCL-2 protein expression
in Figure 3e was down-regulated compared with the control, with a significant difference (p<0.05) at 48 hr but not at 24 hr after R- and S-glycidol exposure. This finding suggests that a pro-apoptotic event might occur during 48 hr of exposure. However, caspase-3 protein expression in HCT 116 cells showed no significant difference compared to control after treatment with free R-glycidol at 24 and 48 hr (Figure 3f), whereas caspase-3 expression in HCT 116 cells treated with free S-glycidol was significantly down-regulated (p<0.001) compared to control at 48 hr of exposure but not at 24 hr. This finding suggests that cell death induced by R-glycidol was caspase-3 independent.
The relative density ratio of p-ERK to ERK 1/2 protein was calculated to assess the phosphorylation of ERK. The results in Figures 4a and 4b indicate that the relative density ratio p-ERK/ERK 1/2 of HCT 116 cells treated with R-glycidol was significantly lower than control at 48 hr of exposure but not at 24 hr. Meanwhile, the relative density ratio p-ERK/ERK 1/2 of HCT 116 cells treated with S-glycidol (Figures 4c and 4d) showed no significant difference at either 24 or 48 hr of exposure. These findings suggest that the cytotoxicity and oxidative stress in R-glycidol-treated cells were likely due to the phosphorylation of ERK protein.
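A relative density ratio of this kind is straightforward arithmetic on densitometry readings. The sketch below uses illustrative band intensities (not the paper's data) and normalizes each lane to its GAPDH loading control before forming p-ERK/ERK 1/2.

```python
def normalize_to_gapdh(band, gapdh):
    """Normalize band intensities lane-by-lane to the GAPDH loading control."""
    return [b / g for b, g in zip(band, gapdh)]

def p_erk_over_erk(p_erk, erk, gapdh):
    """Relative density ratio p-ERK / ERK 1/2 per lane. Note that the
    GAPDH factor cancels in the ratio; normalization matters when a
    single protein is compared across lanes, so it is kept for clarity."""
    p_n = normalize_to_gapdh(p_erk, gapdh)
    e_n = normalize_to_gapdh(erk, gapdh)
    return [p / e for p, e in zip(p_n, e_n)]
```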
Molecular Docking Analysis
An in silico study was conducted to support the protein expression data. Molecular docking showed an interaction between R- and S-glycidol (Figure 5(c,d); Figure 6(c,d)) and the ERK2 (Figure 5a) and phosphorylated MAP kinase ERK2 (Figure 6a) protein receptors. A docking simulation involving two receptors (PDB: 4FMQ and 2ERK) and two compounds (R- and S-glycidol) was performed using AutoDock 4.2. The top four docking poses were selected based on their binding energies, which ranged from -3.42 to -4.0 kcal/mol. These poses were further examined in Discovery Studio to characterise the interactions of R- and S-glycidol with the 3D structures of the human ERK2 protein (PDB: 4FMQ) and pERK2 (PDB: 2ERK). The results are shown in Figures 5 and 6. In Figure 5(d,e), the active amino acid residues form a sphere-like cavity around the molecule. Glycidol sits close to both hydrophobic and hydrophilic radicals, giving rise to secondary forces such as van der Waals contacts and hydrogen bonding. These hydrophobic contacts, hydrogen bonds and van der Waals forces contribute to the binding interaction between the compounds and the active amino acid residues of the receptor. R- and S-glycidol formed two hydrogen bonds with MET108 and ASP106 at distances of 1.62276-2.30747 Å, as well as van der Waals contacts with the amino acid residues LEU017, ALA52, LEU156, ILE84 and GLN105, as represented in Figures 5(h, i). These binding interactions allow the ligand to connect closely with the receptor, inhibiting the activity of the cells in the system. The hydrophobic receptor residues involved in the interactions, such as LEU, ILE and ALA, together with the hydrophilic amino acid GLN, provide extra strength inside the active pocket. This allows a better mutual relationship between ligand and receptor, paving the way for inhibitory drug development. The binding details and bond lengths obtained through
molecular docking are shown in Table 1. The docking analysis (Table 1) revealed that R- and S-glycidol interacted with the protein in the best possible orientation, stabilising the structures by making close contact with the receptor cavities. As a result, at their best docked positions, both compounds exhibit similar interactions and approximately equal binding energies, consistent with the similar inhibition activities observed for both compounds in the experimental studies.
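Docking scores in kcal/mol can be converted to a predicted inhibition constant via ΔG = RT·ln(Ki), the relation AutoDock itself uses to report predicted Ki. The sketch below assumes standard conditions (T = 298.15 K); the temperature is not stated in the paper.

```python
import math

R_CAL_PER_MOL_K = 1.987204  # gas constant in cal/(mol*K)

def ki_from_dg(delta_g_kcal_per_mol, temperature_k=298.15):
    """Predicted inhibition constant Ki (mol/L) from a docking binding
    free energy, via deltaG = R * T * ln(Ki)."""
    return math.exp(delta_g_kcal_per_mol * 1000.0 /
                    (R_CAL_PER_MOL_K * temperature_k))
```

For the reported range of -3.42 to -4.0 kcal/mol this gives Ki values in the low-millimolar range, i.e. the similar binding energies of the two enantiomers translate into similar predicted inhibition constants.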
DISCUSSION
A study by Abraham et al. (2011) indicated that different isomers may produce different effects and that the induction mechanism in human cells should be studied. Two enantiomers with the same chemical composition but different chirality can behave differently in the biochemical processes or biological reactions of cells (Fanali et al., 2019). At low concentrations (below 5 μg/mL), glycidol did not show any toxic effect on Chinese hamster ovary (CHO) cells: cell toxicity never exceeded 8% in CHO cells as measured by trypan blue dye exclusion (El Ramy et al., 2007). Based on our findings, the IC 50 values of R- and S-glycidol in treated HCT 116 cells at 24, 48 and 72 hr were 5.0, 4.5, 3.4 μg/mL and 4.8, 0.7, 0.4 μg/mL, respectively. These results show that the cytotoxic effect of both glycidols is dose and time dependent. Our finding is supported by Senyildiz et al. (2017), who reported that the cytotoxicity of glycidol and 3-MCPD was dose and time dependent. Ozcagli et al. (2016) found that 3-MCPD, glycidol and β-chlorolactic acid also reduced the viability of human embryonic kidney cells (HEK-293) and kidney epithelial cells (NRK-52E) at 24 hr of treatment in the MTT assay. Furthermore, Liu et al. (2021) reported that glycidol at different doses induced a significant cytotoxic effect in NRK-52E cells in a dose- and time-dependent manner.
Glycidol induces ROS by damaging DNA and proteins in the cells, as the epoxide group of
glycidol may react with macromolecules in the cells (Sevim et al., 2021). The epoxide group of glycidol is able to react with nucleophilic biomolecules such as cellular proteins, which may trigger ROS production (Inagaki et al., 2019). The ROS levels of HCT 116 cells treated with R- and S-glycidol (Figure 2) show that both chemicals are able to induce oxidative stress. Notably, ROS levels are found to be elevated in almost all cancers, where they promote several aspects of tumour formation and progression (Liou and Storz, 2010).
The increase in ROS production (oxidative stress) might mediate changes in the biological response that cause cell and tissue damage (NavaneethaKrishnan et al., 2019;Pizzino et al., 2017;Snezhkina et al., 2019).
In addition, the oxidative stress event may cause the induction of mitochondrial cytochrome C, which is triggered by ROS (Ji et al., 2017). Our results are also supported by Mossoba et al. (2020), who reported that HK-2 cells treated with 3-MCPD and its esters showed a slight increase in ROS levels with increasing treatment concentration. ROS are a typical by-product of oxidative energy metabolism and are thought to play a role in various intracellular signalling pathways, including the MAPK pathway (Rezatabar et al., 2019).
According to Circu and Aw (2010), compound concentration is one of the most important factors affecting ROS production, because ROS are a product of normal metabolism during xenobiotic exposure. Increased ROS production can also occur when mutations arise in mitochondrial genes (Redza-Dutordoir and Averill-Bates, 2016). Superoxide can form when electron transfer is inhibited along the electron transport chain (ETC); this radical yields hydrogen peroxide (H 2 O 2 ), which diffuses into the cell nucleus and attacks DNA, leading to genetic instability (Schumacker, 2006). At low levels, ROS can function as redox messengers in most intracellular signalling. Conversely, excessive ROS production induces oxidative stress of cellular macromolecules, which can inhibit protein function and promote cell death, leading to DNA damage, apoptosis and cancer development (Fu et al., 2014). The results in Figure 3 show that high concentrations of R- and S-glycidol affected the protein expression of ERK 1/2, p-ERK, BCL-2 and caspase-3. This effect was attributed to the phosphorylation of ERK 1/2, which might be responsible for inducing oxidative stress and subsequent cell death (Liou and Storz, 2010). Furthermore, the docking results (Figures 5 and 6) show that R- and S-glycidol have similar interactions and relatively similar binding energies, resulting in similar inhibition of the ERK 1/2 and p-ERK proteins after treatment with either compound. We speculate that the down-regulation of ERK 1/2 and p-ERK protein expression in HCT 116 cells treated with R- and S-glycidol might reflect activation of the ERK phosphorylation event (Figure 4) when ROS is elevated (Figure 2), thereby leading to oxidative stress that contributes to the cytotoxic effect. The cell death mechanism of R- and S-glycidol is shown in Figure 7: as in other human cells, the Raf/MEK/ERK module is activated through the MAPK pathway.
R- and S-glycidol potentially induce the production of ROS by triggering the MAPK pathway involving ERK 1/2, p-ERK, BCL-2 and caspase-3. In the nucleus, DNA alteration occurs as phosphorylated ERK and BCL-2 are activated, which might lead to DNA damage and cell death. ERK 1/2 requires simultaneous
phosphorylation at conserved threonine (Thr) and tyrosine (Tyr) residues to be fully activated. As a result of this phosphorylation, the ERK 1/2 kinase undergoes conformational changes, domain rotation and remodelling, allowing substrates to bind and be phosphorylated by the ERK 1/2 kinase (Takahashi et al., 2012).
Apoptosis can be caused either by the caspase-mediated intrinsic signalling pathway, which is mostly regulated by the BCL-2 family of intracellular proteins, or by an extrinsic signalling pathway, which is primarily regulated by the tumour necrosis factor (TNF) receptor family (Lu et al., 2020). Dysregulation of apoptosis is linked to uncontrolled cell proliferation, cell growth and cancer development (Takahashi et al., 2012). Based on our findings, BCL-2 protein expression was down-regulated for both R- and S-glycidol as the exposure time increased (Figure 3). We postulate that this down-regulation of BCL-2 contributes to the death of treated HCT 116 cells after R- and S-glycidol treatment. According to Ji et al. (2017), decreased BCL-2 protein expression after exposure to 3-MCPD results in activation of the mitochondrial apoptosis pathway. To test whether an early apoptotic event occurred, caspase-3 protein expression was examined; it showed no significant difference compared to control after treatment with R- and S-glycidol in the HCT 116 cells (Figure 3). This suggests that the induction of cell death by R- and S-glycidol in treated HCT 116 cells was caspase independent. A study by Kim et al.
(2021) reported cytotoxic effects on human lung cells after treatment with mercury chloride (HgCl 2 ), subsequently leading to cell death via a caspase-3-independent pathway. Previous in vitro studies have likewise described caspase-3-independent cell death in various cell types (Cregan et al., 2013; Mallepogu et al., 2017; Nowak et al., 2020). In addition, glycidol can induce immunoreactivity, which may lead to apoptosis in albino rat brain cells (Sevim et al., 2021). Furthermore, the epoxide functional group in the chemical structure of metabolites (chemicals undergoing xenobiotic metabolism) is highly reactive and has been shown to be more toxic than the parent compound (Obach and Kalgutkar, 2010). The epoxide of glycidol is believed to cause the DNA damage in human cells that mediates cell death (Liu et al., 2021). Free glycidol has been found to induce cell cycle arrest and apoptosis with the involvement of MAPK. The RAS/RAF/MEK/ERK proteins of the MAPK pathway are among the most studied proteins in cancer cell biology, especially in oxidative stress events of cancer cells (De Luca et al., 2012). McCubrey et al. (2012) reported that activated ERK 1/2 S/T kinases phosphorylate and activate various substrates, and that this pathway is involved in cancer development. Furthermore, overexpression and activation of ERK MAPK play a role in the proliferation and progression of cancer cells (Fang and Richardson, 2005). The RAS (Rat Sarcoma Virus) protein is also found in humans; it mediates the ERK pathway, activating rapidly accelerated fibrosarcoma 1 (RAF1) and triggering a cascade involving MEK and then ERK activation, which is involved in the pathogenesis, progression and oncogenic behaviour of human colorectal cancer (Fang and Richardson, 2005; Rezatabar et al., 2019). Nappi et al.
(2020) and Vidri and Fitzgerald (2020) have stated that enhanced signalling through Ras/Raf/MEK/ERK in colorectal cancer has become one of the prominent pathways for investigating the cytotoxicity and oncogenic effects of xenobiotics such as glycidols in human cells (as provided in Figure 7).

CONCLUSION

R- and S-glycidol were found to be less toxic to Vero (normal) cells but induced cell death in HCT 116 cells. The cytotoxic effect of the chemicals was attributed to phosphorylation of ERK 1/2 proteins, which may activate the MAPK pathway, trigger ROS production and lead to oxidative stress. Furthermore, the docking study showed that the interactions of R- and S-glycidol with the ERK2 and pERK2 receptors were similar, with approximately equal binding energies, consistent with the similar inhibition activities of both compounds.
Figure 2 .
Figure 2. ROS expression of HCT 116 cells after 24 hr treatment with (a) R-glycidol and (b) S-glycidol. Two-way ANOVA with Bonferroni multiple comparison tests was performed; * indicates p<0.05, ** indicates p<0.01 and *** indicates p<0.001 for the difference of the concentration relative to the control.
Figure 5 .
Figure 5. Molecular docking study of (a) human ERK2, (b, c) S- and R-glycidol, (d, e) active pocket of the receptor shown by a sphere around the compounds, (f, g) involvement of various hydrophobic amino acids including a hydrophilic amino acid with the studied compounds, and (h, i) 2D representation of the buried compound inside the receptor cavity showing hydrogen bonds, van der Waals contacts and other secondary forces.
Figure 6 .
Figure 6. Molecular docking study of (a) phosphorylated MAP kinase ERK2, (b, c) S- and R-glycidol, (d, e) active pocket of the receptor shown by a sphere around the compounds, (f, g) involvement of various hydrophobic amino acids including a hydrophilic amino acid with the studied compounds, and (h, i) 2D representation of the buried compound inside the receptor cavity showing hydrogen bonds, van der Waals contacts and other secondary forces.
Figure 7 .
Figure 7. Schematic diagram of the cell death mechanism after treatment with R- and S-glycidol in HCT 116 cells.
"year": 2023,
"sha1": "4c0a19605a12cb40ddb3dc413244b86d91384916",
"oa_license": null,
"oa_url": "http://jopr.mpob.gov.my/wp-content/uploads/2023/09/joprinpress2023-nurulhuda.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f235c451b77de975632fafc64f07e40836b44ea0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Regular irreducible representations of classical reductive groups over finite quotient rings
A parametrization of irreducible representations associated with a regular adjoint orbit of a reductive group over finite quotient rings of a non-dyadic non-archimedean local field is presented. The parametrization is given by means of (a subset of) the character group of the centralizer of a representative of the regular adjoint orbit. Our method is based upon Clifford's theory and Weil representations over finite fields.
Introduction
Let F be a non-dyadic non-archimedean local field. The integer ring of F is denoted by O with the maximal ideal p generated by ̟. The residue class field F = O/p is a finite field of odd characteristic with q elements. For an integer r > 0 put O r = O/p r so that F = O 1 .
Let G be a connected reductive group scheme over O. The problem on which we will consider in this paper is to determine the set Irr(G(O r )) of the equivalence classes of the irreducible complex representations of the finite group G(O r ).
This problem in the case r = 1, that is, the representation theory of the finite reductive group G(F), has been studied extensively, from Green [7], concerned with GL n (F), to the decisive paper of Deligne-Lusztig [3].
On the other hand, the study of the representation theory of the finite group G(O r ) with r > 1 is less complete. The systematic studies are done mainly in the case of G = GL n [8][9][10][11], [12], [17], [18]. The purpose of this paper is to show that the method used in [18] works for a greater range of reductive group schemes over O.
In this paper, we will establish a parametrization of the irreducible representations of G(O r ) (r > 1) associated with the regular (more precisely smoothly regular, see subsection 2.4 for the definition) adjoint orbits. Taking a representative β of the adjoint orbit, the parametrization is given by means of a subset of the character group of G β (O r ), where G β is the centralizer of β in G, which is assumed to be a smooth commutative group scheme over O. Our theory is based on Clifford theory and Weil representations over finite fields.
The main results of this paper are Theorem 2.3.1 and Theorem 2.5.1. The latter is a paraphrase of the former with more emphasis posed on the regularity of Lie elements but being restricted to the groups of type A, B and C.
The situation is quite simple when r is even, and almost all of this paper is devoted to treating the case where r = 2l − 1 is odd. In this case Clifford theory requires us to construct an irreducible representation of G β (O r ) · K l−1 (O r ), where K l−1 (O r ) is the kernel of the canonical surjection G(O r ) → G(O l−1 ). To construct an irreducible representation of K l−1 (O r ), we use the Schrödinger representation of the Heisenberg group attached to a symplectic space over a finite field associated with β (Proposition 4.4.1). We then use the Weil representation to extend the irreducible representation of K l−1 (O r ) to an irreducible representation of G β (O r ) · K l−1 (O r ). At this point a Schur multiplier appears as an obstruction to the extension (see subsection 4.5). The definition and a fundamental property of the Schur multiplier are discussed in section 3. In the case of G = GL n , the extendability of the irreducible representation of K l−1 (O r ) to one of G β (O r ) · K l−1 (O r ) is proved in [17]. Based upon this result, we prove the triviality of the Schur multiplier for general G ⊂ GL n under the condition that the reduction modulo p of the characteristic polynomial of β ∈ g(O) ⊂ gl n (O) is the minimal polynomial of β (mod p) ∈ M n (F) (Proposition 4.6.1).
We will give in section 5 some examples of classical groups where the reduction modulo p of the characteristic polynomial of β is the minimal polynomial of β (mod p) ∈ M n (F). In these cases the parametrization is given by a subset of the character group of the unit group of a tamely ramified extension of the base field F . See Propositions 5.2.1 for a special linear group, Proposition 5.3.1 for a symplectic group, Propositions 5.4.1 and 5.4.2 for a special orthogonal group with respect to a quadratic form of even and odd variables respectively.
The character group of a finite abelian group G is denoted by G . The multiplicative group of the complex numbers of absolute value one is denoted by C 1 .
Acknowledgment The author expresses his thanks to the referee for a kind suggestion which was decisive in proving Proposition 4.6.1, the keystone of this paper.
Main results
2.1 Fix a continuous unitary character τ of the additive group F such that and define an additive character τ of F by τ (x) = τ (̟ −1 x).
Let G ⊂ GL n be a closed smooth O-group subscheme, and g the Lie algebra of G which is a closed affine O-subscheme of gl n the Lie algebra of GL n . We may assume that the fibers G⊗ O K (K = F or K = F) are non-commutative algebraic K-group (that is smooth K-group scheme).
For any O-algebra K (in this paper, an O-algebra means an commutative unital O-algebra) the set of the K-valued points gl n (K) is identified with the K-Lie algebra of square matrices M n (K) of size n with Lie bracket [X, Y ] = XY − Y X, and the group of K-valued points GL n (K) is identified with the matrix group where K × is the multiplicative group of K. Hence g(K) is identified with a matrix Lie subalgebra of gl n (K) and G(K) is identified with a matrix subgroup of GL n (K). Let be the trace form on gl n , that is B(X, Y ) = tr(XY ) for all X, Y ∈ gl n (K) with any O-algebra K. The smoothness of G implies that we have a canonical isomorphism For any g ∈ G(O) (resp. X ∈ g(O)), the image under the canonical surjection onto G(O l ) (resp. onto g(O l )) with l > 0 is denoted by Since the rational points G(K) (resp. g(K)) of the fiber G⊗ O K (resp. g⊗ O K) with K = F or K = F plays some special roles in our theory, let us denote by We will pose the following three conditions; II) for any integers r = l + l ′ with 0 < l ′ ≤ l, we have a group isomorphism The condition I) implies that B : The mappings of the conditions II) and III) from Lie algebras to groups can be regarded as truncations of the exponential mapping.
2.2 From now on we will fix an integer r ≥ 2 and put r = l + l ′ with the smallest integer l such that 0 < l ′ ≤ l. In other word Take a β ∈ g(O) and define a character ψ β of the finite abelian group K l (O r ) by Then β (mod p l ′ ) → ψ β gives an isomorphism of the additive group g(O l ′ ) onto the character group K l (O r ) . For any g r = g (mod p r ) ∈ G(O r ), we have 2) a bijection of Irr(G(O r , β) | ψ β ) onto Irr(G(O r ) | ψ β ) is given by So our problem is to give a good parametrization of the set Irr(G(O r , β) | ψ β ).
2.3
For any β ∈ g(O), let us denote by G β = Z G (β) the centralizer of β in G which is a closed O-group subscheme of G. The Lie algebra g β = Z g (β) of G β is a closed O-subscheme of g such that for any O-algebra K where β ∈ g(K) is the image of β ∈ g(O) under the canonical morphism g(O) → g(K). Now our main result is 2) the characteristic polynomial χ β (t) = det(t · 1 n − β) of β ∈ g(F) ⊂ gl n (F) is the minimal polynomial of β ∈ M n (F).
Then we have a bijection θ → σ β,θ of the set The explicit description of the representation σ β,θ is given by (1) if r is even, and by (5) if r is odd.
The proof of this theorem in the case of even r is quite simple, and it will be given in subsection 2.6. The remaining part of this paper is devoted to the proof in the case of odd r.
These proofs show that the second condition in the theorem is required only in the case of r being odd. The second condition is related to the smooth regularity of β ∈ g(O) as presented in the next subsection.
2.4
We will present a sufficient condition on β ∈ g(O) under which G β is commutative and smooth over O.
Let us assume that the connected O-group scheme G is reductive, that is, the fibers G⊗ O K (K = F, F) are reductive K-algebraic groups. In this case the dimension of the maximal torus in G⊗ O K is independent of K which is denoted by rank(G). For any β ∈ g(O) we have We say β to be smoothly regular with respect to G over K (or β ∈ g(K) is smoothly regular with respect to G⊗ O K) if dim K g β (K) = rank(G) (see [16, 1.4]). In this case G β ⊗ O K is smooth over K. If β is smoothly regular with respect to G over F and over F, then β is said to be smoothly regular with respect to G.
We say β to be connected with respect to G if the fibers G β ⊗ O K (K = F, F) are connected. See Remark 2.4.3 for a sufficient condition for the connectedness of G β ⊗ O K.
Then [13] shows that G β (F ) is commutative (F is the algebraic closure of F ), and hence G β is a commutative O-group scheme.
Let us present more detail description of the smooth regularity of Lie element. Assume that the characteristic of K = F, F is not bad with respect to G⊗ O K. The list of the bad primes is (see [2, p.178, I-4.3]). Take a β ∈ g(O) and let β = β s + β n be the Jordan decomposition of β ∈ g(K) into the semi-simple part β s ∈ g(K) and the nilpotent part β n ∈ g(K) (see [1,Prop.13.19] and its proof). Then T ⊂ L and rank(L) = rank(G). Put l = Lie(L), then l(K) = Z g(K) (β s ). So β ∈ g(K) is smoothly regular with respect to G⊗ O K if and only if β n ∈ l(K) is smoothly regular with respect to L. Now fix a system of positive roots Φ + in the root system Φ(T, L) of L with respect to T such that where X α is a root vector of the root α. Then the result of [2, p.228,III-3.5] implies Proposition 2.4.2 β ∈ g(K) is smoothly regular with respect to G over K if and only if c α = 0 for all simple α ∈ Φ + .
and its center are connected (see Theorem 5.9 b) of [15]).
Remark 2.4.4
If G⊗ O K is of type A r , B r or C r , then, putting G ⊂ GL n with suitable n (that is n = r + 1, 2r + 1, 2r for type A r , B r , C r respectively), an element β ∈ g(O) ⊂ gl n (O) is smoothly regular with respect to G over K if and only if β is smoothly regular with respect to GL n over K.
Let us consider the case of GL n (n ≥ 2) which is a connected smooth reductive O-group scheme. For β ∈ gl n (O), the following statements are equivalent; where α 1 , · · · , α r are distinct elements of the algebraic closure F of F and In this case we have 2) β ∈ gl n (O) is smoothly regular with respect to GL n over F and 3) the centralizer GL n,β is commutative and smooth over O.
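The condition that the characteristic polynomial of β mod p equals its minimal polynomial says that β mod p is a cyclic (regular) matrix, i.e. that 1 n , β, …, β^{n−1} are linearly independent over F. A small computational sketch of this criterion over F_p (the helper `rank_mod_p` is ours, not from the paper):

```python
import numpy as np

def rank_mod_p(A, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination."""
    A = np.array(A, dtype=int) % p
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c] != 0), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        A[r] = (A[r] * pow(int(A[r, c]), -1, p)) % p
        for i in range(rows):
            if i != r and A[i, c] != 0:
                A[i] = (A[i] - int(A[i, c]) * A[r]) % p
        r += 1
        if r == rows:
            break
    return r

def is_regular_mod_p(beta, p):
    """True iff the characteristic polynomial of beta mod p equals its
    minimal polynomial, i.e. I, beta, ..., beta^{n-1} are linearly
    independent over F_p (beta mod p is a cyclic matrix)."""
    beta = np.array(beta, dtype=int) % p
    nn = beta.shape[0]
    P = np.eye(nn, dtype=int)
    powers = []
    for _ in range(nn):
        powers.append(P.flatten() % p)
        P = (P @ beta) % p
    return rank_mod_p(np.array(powers, dtype=int), p) == nn
```

A companion matrix is always regular in this sense, while a scalar matrix of size n ≥ 2 never is, matching the char-poly = min-poly criterion above.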
2.5
In order to give a presentation of the results directly connected with the regularity, let us put G = GL n , SL n (with n prime to the characteristic of F), Sp n (with even n) or SO(S) with a symmetric matrix S ∈ Sym n (O) of odd size and SO(S)(K) = {g ∈ SL n (K) | t gSg = S} for any O-algebra K. Then G is a smooth O-group scheme which fulfills the three conditions I), II) and III) of subsection 2.1. Take a β ∈ g(O) which is smoothly regular and connected with respect to G. Then the results of the preceding subsection show that G β is a commutative smooth O-group scheme, and that the characteristic polynomial of β ∈ g(F) ⊂ gl n (F) is the minimal polynomial of β ∈ M n (F). Then Theorem 2.3.1 gives the following Theorem 2.5.1 Take a β ∈ g(O) which is smoothly regular and connected with respect to G. Then we have a bijection θ → σ β,θ of the set Then θ → σ β,θ is an injection of the set
Schur multiplier
Let G ⊂ GL n be a closed F-algebraic subgroup and g the Lie algebra of G which is a closed affine F-subscheme of the Lie algebra gl n of GL n . Let us assume that the trace form is non-degenerate. Fix a β ∈ g(F) such that g β (F) g(F).
3.1
The non-zero F-vector space V β = g(F)/g β (F) has a symplectic form For any v ∈ V β and g ∈ G β (F), put Take a character ρ ∈ g β (F) . Then there exists uniquely a v g ∈ V β such that 3.2 Let us assume that there exists a closed smooth O-group subscheme H ⊂ GL n of which our G is a closed O-group subscheme and that the trace form Then we have because v ′′ , v g β = 0. Hence we have is the image under the restriction mapping
We have a chain of canonical surjections
defined by Let us denote by Z(O r , β) the inverse image under the surjection ♥ of g β (F).
Let us denote by Y β the set of the group homomorphisms ψ of Z(O r , β) to C × such that ψ = ψ β on K l (O r ). Then a bijection of g β (F) onto Y β is given by where a group homomorphism ψ β : Z(O r , β) → C × is defined by Take a ψ ∈ Y β . For two elements for all x ∈ K l−1 (O r ) and y ∈ Z(O r , β) so that we can define . Note that D ψ is non-degenerate. Then Proposition 3.1.1 of [18] gives Proposition 4.1.1 For any ψ = ψ β,ρ ∈ Y β with ρ ∈ g β (F) , there exists unique irreducible representation π ψ of K l−1 (O r ) such that ψ, π ψ Z(Or ,β) > 0. Furthermore Fix a ψ = ψ β,ρ ∈ Y β with ρ ∈ g β (F) . Our problem is to extend the represen- β). This means that, for any g ∈ G β (O r ), the g-conjugate of π ψ is isomorphic to π ψ , that is, there exists a U (g) ∈ GL C (V ψ ) (V ψ is the representation space of π ψ ) such that for all x ∈ K l−1 (O r ), and moreover, for any g, h ∈ G β (O r ), there exists a c U (g, h) ∈ C × such that In the following subsections, we will construct π ψ by means of Schrödinger representations over the finite field F (see Proposition 4.4.1), and will show that we can construct U (g) by means of Weil representation so that we have [Proof ] Since the Schur multiplier [ On the other hand we have and then [X, β] ≡ 0 (mod p), that is X (mod p) ∈ g β (F). Then π ψ (h) is the homothety for g ∈ G β (O r ) and h ∈ K l−1 (O r ) with ψ = ψ β,ρ . Then we have This proposition combined with the following proposition gives the bijection presented in our main Theorem 2.3.1 in the case of r being odd.
[Proof ] Take a (θ, ρ) ∈ G β (F) × K l−1 (Or ) g β (F) . The smoothness of G β over O implies that the canonical mapping g β (O) → g β (F) is surjective. So Take a X ∈ g β (F) with X ∈ g β (O). Then we have Hence we have This means that the mapping (θ, ρ) → θ is injective. Take X, X ′ ∈ g β (O) such that X ≡ X ′ (mod p). Then we have X ′ = X + ̟T with T ∈ g β (O) and where 1 + ̟ l T (mod p r ) ∈ K l (O r ) and hence This and the commutativity of G β show that gives an well-defined group homomorphism of g β (F) to C × . Then (θ, ρ) ∈ G β (O r ) × K l−1 (Or ) g β (F) and our mapping in question is surjective.
A group extension
is given by the canonical surjection (4), whose kernel is K l (O r ), with the group isomorphism defined by S (mod p l−1 ) → (1 + ̟ l S) (mod p r ). In order to determine the 2-cocycle of the group extension (6), choose any mapping λ : g(F) → g(O) such that X = λ(X) (mod p) for all X ∈ g(F) and λ(0) = 0, and define a section for all X ∈ g(F) and for all X, Y ∈ g(F). Now we have two elements (2-cocycle) Let us consider two groups M and G corresponding to the two 2-cocycles µ and c respectively. That is the group operation on M = g(F) × g(O l−1 ) is defined by and the group operation on G = g(F) × g(O l−1 ) is defined by Let G× g(F) M be the fiber product of G and M with respect to the canonical projections onto g(F). In other word defined by Then the center of H β is Z(H β ) = g β (F) × O l−1 , the direct product of two additive groups g β (F) and O l−1 . The inverse image of Z(H β ) with respect to the surjective group homomorphism (7). Take a ρ ∈ g β (F) which defines group homomorphisms On the other hand we have a group homomorphism . Then ψ 0 · χ β is trivial on the kernel of the surjection (7) and it induces a group homomorphism ψ β,ρ ∈ Y β defined in subsection 4.1.
4.4
Fix a ρ ∈ g β (F) . Let us determine the 2-cocycle of the group extension of F-vector spaces and define a section l : V β → H β of the group extension (9) by l(v) = ([v], 0). Then we have for u =Ẋ, v =Ẏ ∈ V β so that the 2-cocycle of the group extension (9) is
Define a group operation on
There exists a group homomorphism T : Sp(V β ) → GL C (L 2 (W ′ )) such that for all σ ∈ Sp(V β ) and (v, s) ∈ H β (see [6,Th.2.4]). Then we have On the other hand for all g r ∈ G β (O r ) and h ∈ K l−1 (O r ).
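A concrete toy model of the Schrödinger representation used here: the polarized Heisenberg group over F_p acting on functions F_p → C. The group law and the normalization below are one standard choice for illustration, not necessarily the paper's conventions.

```python
import numpy as np

p = 5

def chi(t):
    """A nontrivial additive character of F_p."""
    return np.exp(2j * np.pi * (t % p) / p)

def mult(g, h):
    """Polarized Heisenberg group law on F_p x F_p x F_p:
    (a, b, t)(a', b', t') = (a + a', b + b', t + t' + b' a)."""
    a, b, t = g
    a2, b2, t2 = h
    return ((a + a2) % p, (b + b2) % p, (t + t2 + b2 * a) % p)

def pi(g):
    """Schrodinger representation on functions F_p -> C, as a p x p
    matrix: (pi(a, b, t) f)(x) = chi(t + b x) f(x + a)."""
    a, b, t = g
    M = np.zeros((p, p), dtype=complex)
    for x in range(p):
        M[x, (x + a) % p] = chi(t + b * x)
    return M
```

The representation has dimension p = |F_p|, the center (0, 0, t) acts by the scalar chi(t), and multiplicativity pi(g) pi(h) = pi(gh) can be checked numerically.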
4.6
The following proposition is the keystone of this paper.
is trivial for all ρ ∈ g(F) .
[Proof] We will divide the proof into two parts.
1) The case of G = GL n . In this case, Corollary 5.1 of [17] shows that the Schur multiplier [c U ] ∈ H 2 (G β (O r ), C × ) is trivial. On the other hand we have the inflation-restriction exact sequence induced by the exact sequence Since we have and K 1 (O r ) ⊂ G β (O r ) are finite commutative groups, the restriction mapping is surjective. Hence the inflation mapping inf : is injective. Since the results of the preceding subsections show that the Schur under the inflation mapping, the statement of the proposition is established for the group G = GL n . 2) The general case of G ⊂ GL n . We have G β (F) ⊂ GL n,β (F). Then Proposition 3.2.1 says that the Schur multiplier [c β,ρ ] ∈ H 2 (G β (F), C × ) is the image of the Schur multiplier [c β,ρ ] ∈ H 2 (GL n,β (F), C × ) under the restriction mapping res : Since we have shown in part one of the proof that [c β,ρ ] ∈ H 2 (GL n,β , It may be quite interesting if we can find a counterexample to the following statement: Let G be a connected reductive algebraic group defined over F and g the Lie algebra of G. Take a β ∈ g(F) which is regular with respect to G and such that G β is commutative. Then the Schur multiplier [c β,ρ ] ∈ H 2 (G β (F), C × ) is trivial for all ρ ∈ g(F) .
5.1
Let K/F be a tamely ramified field extension of degree n and O K ⊂ K the integer ring with the maximal ideal p K = ̟ K O K . The residue class field F = O/p is identified with a subfield of K = O K /p K . A prime element ̟ K can be chosen so that we have ̟ e K ∈ O K0 where K 0 is the maximal unramified subextension of K/F and e = (K : K 0 ) is the ramification index of K/F . Then [14, p.545] shows that the following two statements are equivalent; 2) b 0 σ = b 0 for all 1 = σ ∈ Gal(K/F), and b 1 ∈ O × K0 if e > 1. By means of the regular representation with respect to an O-basis of O K , we will identify K with an F -subalgebra of the matrix algebra M n (F ) where . Then [14, p.545, Cor.1] shows that the characteristic polynomial χ β (t) = det(t · 1 n − β) of β ∈ M n (O) has the following properties; By abuse of notation, the residue class of α ∈ O K modulo p m K is denoted by α ∈ O K /p m K .
5.2 G = SL n (n ≥ 2) is a smooth O-group scheme. If n is prime to the characteristic of F, then G fulfills three conditions I), II) and III) of subsection 2.1. Let K/F be a field extension of degree n so that it is a tamely ramified extension. Take a β ∈ O K such that O K = O[β] and T K/F (β) = 0. Under the identification of subsection 5.1, we have β ∈ g(O) such that G β is commutative smooth O-group scheme. In this case, we have where e is the ramification index of K/F and We have also Let K + /F be a tamely ramified extension of degree n and K/K + a quadratic extension. Take a ω ∈ O K such that where ρ ∈ Gal(K/K + ) is the non-trivial element. Then with the ramification index e + of K + /F is a symplectic form on the F -vector space K. Fix an O-basis {u 1 , · · · , u n } of O K+ . Since K + /F is a tamely ramified extension, there exists u * j ∈ p Identify the F -algebra K with a F -subalgebra of M 2n (F ) by means of the Obasis and β ρ + β = 0. Then β ∈ g(O) such that G β is a commutative smooth O-group subscheme of G. We have where e is the ramification index of K/F and We have also for all O-algebra L. The O-group scheme G satisfies the conditions I), II) and III) of the subsection 2.1. Take a β ∈ g(O) and assume that n is odd or that det β ≡ 0 (mod p). Let β s ∈ g(L) be the semisimple part of β ∈ g(L) (L = F or L = F). Then the centralizer Z G⊗ O L (β s ) is connected and its center is also connected. Hence if β ∈ g(O) is smoothly regular with respect to G, then G β is a smooth commutative O-group scheme.
Assume that n = 2m is even. Let K/F be a tamely ramified Galois extension of degree 2m. Fix an intermediate field F ⊂ K + ⊂ K such that (K : K + ) = 2, and assume that K/K + is unramified. Take an ε ∈ O × K+ and put S ε (x, y) = T K/F ε · ̟ 1−e K+ · xy ρ (x, y ∈ K) where ρ ∈ Gal(K/K + ) is the non-trivial element and e is the ramification index of K/F . Then S ε is a regular F -quadratic form on K. Take an O-basis {u 1 , · · · , u n } of O K and put B = u σj i 1≤i,j≤n with Gal(K/F ) = {σ 1 , · · · , σ n }. Then we have so that the discriminant of the quadratic form S ε is det (S ε (u i , u j )) 1≤i,j≤n = ±(det B) 2 N K/F ε̟ 1−e
Note that (det B) σ = ± det B for any σ ∈ Gal(K/F ). Since K/F is tamely ramified, its discriminant is where n = ef . Hence det (S ε (u i , u j )) 1≤i,j≤n ∈ O × . So the O-group scheme G = SO(S ε ) and its Lie algebra g = so(S ε ) are defined by G(L) = g ∈ SL L (O K ⊗L) S ε (xg, yg) = S ε (x, y) for ∀x, y ∈ O K ⊗ O L and by g(L) = X ∈ End L (O K ⊗L) S ε (xX, y) + S ε (x, yX) = 0 for ∀x, y ∈ O K ⊗ O L for every O-algebra L. Note that End F (K) acts on K from the right side. Take a β ∈ O × K such that O K = O[β] and β ρ + β = 0. Identify β ∈ K with the element x → xβ of g(O) ⊂ End O (O K ). Then we have where e is the ramification index of K/F and We have also and ψ β 1 + ̟ l x = τ ̟ −l ′ K/F (βx) for x ∈ O K such that T K/K+ (x) ≡ 0 (mod p Let us consider the case of n = 2m + 1 being odd. Take an η ∈ O × and define an F -quadratic form S ε,η on the F -vector space K × F by S ε,η ((x, s), (y, t)) = S ε (x, y) + η · st.
Then the O-group scheme G = SO(S ε,η ) and its Lie algebra g = so(S ε,η ) are defined by
Porcine Cysticercosis Control in Western Kenya: The Interlink of Management Practices in Pig Farms and Meat Inspection Practice at Slaughter Slabs
This study assessed the management practices for controlling porcine cysticercosis (PC) on pig farms and in pork at the slaughter slabs in two counties (Busia and Kakamega) of Western Kenya. A total of 162 pig-rearing households at the farm level, 26 butcher owners, and 26 slaughter slab workers at the slaughter slab level were interviewed using a structured questionnaire. Data were analyzed using the “Statistical Analysis System” (SAS) programme. Results indicated that the frequent management practices used at the farm level (p < 0.05) were rearing pigs under free range (69.1%), latrine ownership by households (87.7%), and use of pit latrines (72.8%) in households. At the slaughter level (p < 0.05), results of the butcher owners (76.9%) and slaughter slab workers (62.5%) revealed that meat inspection was not practiced adequately in the two areas of study. The results imply that slaughtered pigs for human consumption were not adequately inspected, and thus, the study recommends for implementation of effective pig management practices at the farm level and pork meat inspection at slaughter slabs to prevent PC infections and assure food safety along the pork value chain.
Introduction
Porcine cysticercosis (PC) is an infection of pigs which is prevalent in many developing countries [1] with a high effect on public health and agriculture [2,3]. The disease is caused by Taenia solium, which also causes cysticercosis in pigs, seizures and death in pigs [4,5], and epilepsy in humans [6,7]. The zoonotic tapeworm T. solium has a two-host life cycle: the indirect one in humans as the definitive host harboring the mature tapeworm in the small intestine, causing taeniasis, and the second with pigs as the normal intermediate host harboring the larval cysticerci, which encyst in the muscles and brain and cause porcine cysticercosis [8]. Transmission of T. solium is related to socioeconomic, behavioural, and environmental factors [9,10]. This was confirmed in a study in Western Kenya [11] which reported that inadequacy in meat inspection, sanitation, and cooking habits were contributing factors to cysticercosis transmission for Taenia spp. Contact with infected human faecal waste by pigs is a requisite for the successful propagation of the parasite's lifecycle [12].
In pig farming, external and internal biosecurity measures are critical tools in preventing the transmission of diseases, contributing to public health and improving the livelihood of pig farmers [13]. Biosecurity encompasses bioexclusion, biocontainment, and biomanagement. The three practices are distinct but often blended, with sets of actions and overlapping components. Most often, pig producers focus on bioexclusion and biomanagement while neglecting biocontainment, which is the prevention of the spread of disease agents to neighbors or even long-distance transfer. In bioexclusion, the external biosecurity involves preventing the introduction of new pathogens/diseases within a pig unit from an outside source, while biomanagement refers to the combined effort to control economically important infectious diseases that are already present in the farm population [14]. The observation of routine farm biosecurity constitutes a priority solution in the minimization of risk in disease spread [15]. It has been documented that total confinement of pigs poses welfare issues and could create other management problems such as aggressiveness and biting [16][17][18]. The feasibility of the intensification of livestock production requires long-term application of the One Health approach [19], focusing on the mitigation of the health risks at the interfaces between animals and humans in different ecosystems [20]. Studies elsewhere have reported that safe slaughter of pigs and monitoring of rejected carcasses found to be infected at the farm level contributed to the interruption of the parasite life cycle [21]. Poor implementation of biosecurity measures exposes pigs to the risk of PC disease [18,22]. Estimating the extent of the risks of PC and its consequences to pig farming requires well-maintained and updated pig production and management records.
However, the veterinary reports, farm records, and other important statistics on pig farming are usually absent, inaccurate, or completely missing in various households and slaughter slabs.
This study was undertaken to determine the management practices frequently used by pig-rearing farmers and the level of implementation of meat inspection at various slaughter slabs in Busia and Kakamega counties of Western Kenya.
Study Site and Questionnaire.
This study was conducted from August to September 2018 in 9 villages within Busia and Kakamega Counties. The climate is mainly tropical, with variations by county due to altitude. The whole region experiences the heaviest rainfall in April and the lowest in January, with the long rains at their peak between late March and late May. The minimum temperatures range from 14°C to 18°C and the maximum from 30°C to 36°C throughout the year [24]. The villages have a high concentration of free-scavenging pigs within Busia (Mundika, Bugengi, Nango'ma, Lwanya, Murende) and Kakamega Counties (Shikulu, Shivagala, and Lunenele for Idakho Central, and Mukongolo for Idakho North).
The human population at risk of taeniasis in Busia and Kakamega is 893,681 and 1,867,579, respectively [23].
Qualitative data on management practices influencing the disease were collected through interviews using structured questionnaires, which were translated into the national language and the local language for some respondents during the interview. A structured questionnaire on pig farming management practices at the farm level was administered to 162 pig-rearing smallholder households on the prevailing management practices. The pig-rearing smallholders comprised 102 (63.75%) from Busia and 60 (36.25%) from Kakamega, respectively. A separate questionnaire on meat inspection implementation at the slaughter slab level was administered to 26 licensed butcher owners who brought their pigs to the slaughter slabs during the period of data collection and 26 slaughter slab workers, to collect information on the level of implementation of meat inspection. All slaughter slabs (Khayega, Shinyalu, and Malinya from Kakamega county; Musambaruwa and Matayos from Busia county) in the selected clusters were sampled. Variables defining management practices and meat inspection implementation were collected as binary responses [24] from farmers and slaughter slabs. Respondents would indicate whether they had frequently (yes) or had not frequently (no) practiced each of a set of ten management practices, namely, free-range pig keeping, use of outdoor defecation by humans, presence of a latrine in the household, use of pit latrines by the household, sourcing water outside the farm, sourcing feed outside the farm, routine deworming, routine vaccination, presence of a fenced farm, and meat inspection (Table 1).
Data Analysis.
Qualitative data on management practices from pig-rearing households, butcher owners approached at the slaughter slabs, and slaughter slab workers were entered into Microsoft Excel (2007) and exported to SAS version 9.1.3 [24] for analysis. Descriptive statistics were used to summarize respondents' demographic characteristics and management practices [26].
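The published analysis was run in SAS; purely as an illustration of the same kind of summary, the sketch below (Python, with yes-counts back-calculated from the percentages in the Results rather than the raw survey data) tabulates the share of "yes" responses per practice and applies an exact two-sided binomial test against an even split:

```python
from scipy.stats import binomtest

# Illustrative yes-counts back-calculated from the reported percentages
# (69.1%, 87.7%, 72.8% of 162 households) -- not the raw survey data.
responses = {
    "free_range_rearing": (112, 162),
    "owns_latrine": (142, 162),
    "uses_pit_latrine": (118, 162),
}

for practice, (yes, n) in responses.items():
    share = yes / n
    # Exact two-sided binomial test against an even 50/50 split:
    # is the practice reported significantly more (or less) often
    # than chance at the paper's p < 0.05 threshold?
    p = binomtest(yes, n, p=0.5).pvalue
    print(f"{practice}: {share:.1%} yes, p = {p:.2g}")
```

Whether the study's SAS analysis used exactly this test is not stated in the text; the sketch only shows how a frequency can be declared "frequently practiced" at p < 0.05.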
Demographic Characteristics of Farmers and Butcher Owners by Counties
A total of 214 respondents comprising 162 pig-rearing households, 26 butcher owners, and 26 slaughter slab workers were interviewed at the farm and slaughter slab points in Busia and Kakamega counties of Western Kenya. Of the 162 pig-rearing households interviewed, the majority were youthful farmers: 37.7%, 26.5%, and 10.5% were aged 21-30, 31-40, and 11-20 years, respectively. One-quarter (25.5%) of the households interviewed were aged 41-50 years, 53.1% were female, and 41.7% had no formal school education. A majority (77.2%) of farmers in Busia and Kakamega counties had kept pigs for a period of 6-10 years, while 22.8% had kept them for an average period of 28-35 years (Table 2).
For butcher owners, out of the 26 respondents interviewed, the majority (53.9%) were between 11 and 20 years old, 92.3% were male, and 57.7% had secondary school education. A majority (46.2%) of butcher
Veterinary Medicine International
Management Practices Preventing PC Infection at the Production Level.
The results (Table 3) indicate that, in the two counties (Busia and Kakamega), more farmers frequently practiced (p < 0.05) free-range pig rearing (69.1%), had latrines (87.6%), and used latrines (72.8%). However, more farmers did not frequently (p < 0.05) practice outdoor defecation (66.7%), vaccination (69.7%), routine deworming (70.4%), fencing the farm (77.8%), or sourcing water (92.0%) or feed (87.0%) outside the farm. Table 4 shows the attitudes of butcher owners and slaughter slab workers towards the level of implementation of meat inspection as a management practice at slaughter slabs. More of the butcher owners (76.9%) and slaughter slab workers (61.5%) attested that meat inspection is frequently (p < 0.05) practiced, while 23.1% and 38.5% of them, respectively, did not.
Discussion
This paper describes the pig farming management practices and meat inspection implementation at farm and slaughter slab levels to investigate factors favouring porcine cysticercosis in Busia and Kakamega counties. The demographic descriptors revealed that out of the 162 farmers interviewed, 37.7% were aged between 21 and 30 years, 53.1% were female, 41.4% had no formal school education, and 77.2% had kept pigs for a period of 6 to 10 years (Table 2). These findings were similar to reports that the female gender dominated rearing and owning pigs in the rural areas of Western Kenya [26,27] and other African countries [28][29][30]. The findings agree with the report by Ampaire and Rothschild [31] that, in Africa, women are traditionally empowered to rear and own pigs as opposed to cattle. These findings differed from early reports on pig farmer age ranges of 12-88 and 45-60 years in Homa Bay and Embu counties of Kenya [32][33][34], which also reported that 86.4% and 92.6% of pig farmers in Uganda and Kenya (Embu county), respectively, were males. This variation could be attributed to the sociocultural differences in the areas of this study.
Pigs in the two counties were predominantly reared under the free-range system at the farm level (69.1%) ( Table 3). e presence of latrines at households and use of structurally dilapidated, unhygienic pit latrines for human waste disposal formed the main bioexclusion, biocontainment, and biomanagement practices with a frequency of up to 87.7% and 72.8% in the surveyed farms. Studies elsewhere had established a significant positive relationship between inappropriate use of latrines and PC prevalence [33,35]. It has been documented that keeping pigs under the free-range system elevated the risk of pigs acquiring T. solium infection that leads to the endemicity of zoonotic porcine cysticercosis [36]. Findings in this study not only concurred with this fact but also corroborated the information that pigs kept under the free-range pig production system, compounded by poor utilization or lack of latrines, could have been the main In this study, 76.9 and 61.5% of butcher owners and slaughter slab workers reported that meat inspection was frequently implemented at slaughter slabs (Table 4). It was observed that meat inspection practice was occasionally ignored in some slaughter slabs in seasons of high demand and was not thoroughly performed in the sense that infected animals could be slaughtered, and uninspected meat easily found its way into the human food chain. e observations here concur with those given by Gabriël et al. [37], who reported that inadequate meat inspection was a contributory factor to the spread of the infection by Taenia solium which could lead to the emergence or re-emergence of the disease in pig farming systems. is finding suggests that inadequate meat inspection at the slaughter slabs is a critical factor influencing the spread of this disease in Busia and Kakamega counties at the slaughter slab points.
Conclusions
The free-range pig production system (no fencing and scavenging) and inappropriate use of latrines were the critical poor management practices that propagated and propelled PC infection at the farm level in Busia and Kakamega counties. The meat inspection practice as a factor of biosecurity at slaughter slabs was not adequate in the two counties of Western Kenya. These findings suggest that there is a need for implementation of effective pig biosecurity measures to prevent PC infections and ensure food safety along the pork value chain in Western Kenya. This will require collaboration with policymakers, who have in their mandate the reinforcement of the regulations, by inspiring farmers through sensitization training and strengthening meat inspection in Western Kenya.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Validation of Online Prognostication Model that Predicts Survival for Women with Early Breast Cancer in Egypt
Breast cancer is considered to be the commonest cancer among females, encompassing 23% of the 1.1 million female cancers diagnosed annually [1-2]. It is also considered the leading cause of cancer-related deaths worldwide, with the highest case fatality rates in the developing countries [3]. In Egypt, breast cancer is considered the most prevalent cancer in women; age-specific incidence rates increase dramatically after the age of 30 [4]. Breast cancer is considered a heterogeneous disease. Its etiology and pathology vary among patients, and metastasis can occur at different stages depending on the biology of the disease and its degree of aggressiveness [5].
Introduction
or low risk. They are highly efficient; however, they are costly and not easily available in developing countries. Similarly, several programs have emerged through time trying to estimate survival and calculate the added benefit of the treatment given, such as the Nottingham Prognostic Index (NPI), PREDICT, and Adjuvant! [6].
The Nottingham Prognostic Index (NPI) is a scoring program that depends on three tumor characteristics: tumor size, grade, and lymph node status. It stratifies patients into three groups with different survival [9]. Adjuvant! is an online model used for the prediction of survival as well as the expected treatment benefits [10].
PREDICT is a freely available online program that helps clinicians estimate a patient's survival based on combined tumor and patient criteria. It was developed as a collaboration between the Cambridge Breast Unit, the University of Cambridge Department of Oncology, and the UK's Eastern Cancer Information and Registration Centre (ECRIC). It was first established in the UK and had been validated on a cohort of 5,000 patients [6]. It was initially released in 2011, has been widely approved, and its use has been increasing. It requires entry of specific data on the patient's age at the time of diagnosis, mode of detection, hormone-receptor status, tumor grade, size, and the number of involved nodes. It provides an average estimate of 5- and 10-year overall survival in women with early breast cancer. It also gives an insight into the added benefit of any given therapy, whether chemotherapy, hormonal therapy, targeted therapy (anti-HER2), or combinations of these modalities.
PREDICT is a well-calibrated model that provides easy access to, and useful insight into, the estimated survival and the additional benefit of adjuvant therapy. PREDICT has not been validated in any cohort of the Egyptian population. This study aimed to test the utility and reliability of PREDICT as a prognostication model in patients with early breast cancer in Alexandria, Egypt.
Patients and Methods
This study included female patients diagnosed with early breast cancer and treated with surgery (either breast-conserving surgery or modified radical mastectomy) followed by adjuvant systemic therapy with or without radiotherapy in 2005. Data on patient, tumor, and treatment-related characteristics, as well as follow-up, were obtained from the archives of the Department of Clinical Oncology, Faculty of Medicine, and the Department of Cancer Management and Research, Medical Research Institute, University of Alexandria, Egypt, after obtaining approval from the ethics committee.
A total of 128 eligible patients with adequate follow-up that allowed calculation of the actual 5- and 10-year OS were included in our study. Data were obtained on patient age, tumor characteristics (including pathological data on tumor size, number of involved lymph nodes, tumor grade, ER status, and HER2 status based on immunohistochemistry testing), as well as treatment and follow-up. Treatment data included the type of treatment (surgery, chemotherapy, endocrine therapy, and radiotherapy) and the type of chemotherapy regimen received (no chemotherapy, second-generation doxorubicin-based chemotherapy, or third-generation taxane-based chemotherapy). Data on Ki67 status were not available as it was not tested in all of the patients at that time.
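The variables collected above can be pictured as one record per patient. A minimal sketch of such a record is shown below; the field names are illustrative (PREDICT itself is a web calculator that takes these values through its form, not through this structure), and the example values only mirror typical figures reported in the text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    """One patient's inputs for a PREDICT-style estimate.

    Field names are hypothetical, not the tool's API; they mirror the
    data items this study extracted from the archives."""
    age_at_diagnosis: int          # years
    screen_detected: bool          # mode of detection
    tumour_size_mm: float
    positive_nodes: int
    grade: int                     # differentiation grade, 1-3
    er_positive: bool
    her2_positive: Optional[bool]  # None when untested (121/128 here)
    ki67_positive: Optional[bool]  # None: not assessed in this cohort
    chemo_generation: int          # 0 = none, 2 = anthracycline-based

# Illustrative patient matching the cohort's typical profile
# (mean age 49, mean tumour size 32 mm, mostly grade II, ER-positive,
# HER2/Ki67 unknown, second-generation chemotherapy).
example = PatientRecord(49, False, 32.0, 1, 2, True, None, None, 2)
```

Using `Optional` fields makes the cohort's missing-data pattern (HER2, Ki67) explicit rather than silently coding it as negative.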
In addition, patients with unknown tumor size, number of positive lymph nodes, differentiation grade, or estrogen receptor (ER) status were excluded, since PREDICT does not permit the absence of these data.
The program then produced an estimated 5- and 10-year overall survival (OS) for each patient. It also included a survival analysis for several scenarios, that is, overall survival with no adjuvant treatment, the added benefit of adjuvant hormonal therapy or chemotherapy alone or the combined benefit of both, and the additional benefit of adding trastuzumab to adjuvant chemotherapy and hormone therapy.
Statistical Analysis
The area under the ROC curve (AUC) was used for validation of the given results; it assessed the accuracy of PREDICT in the estimation of the actual survival. A p-value of 0.05 was chosen as the cutoff for statistical significance: values under 0.05 indicated a statistically significant difference between predicted and actual survival, while larger values were considered non-significant. Analysis of different prognostic subgroups was performed as well.
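As an illustration of the ROC/AUC computation described above — using synthetic stand-in values, not the study's patient data — the sketch below draws predicted 5-year survival probabilities spanning the paper's reported 63-98% range and scores their discrimination against a simulated observed status:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data, NOT the study's patients: PREDICT-style
# 5-year survival probabilities drawn from the paper's reported
# 63%-98% range, and simulated observed status (1 = alive at 5 years).
n = 128
predicted = rng.uniform(0.63, 0.98, size=n)
observed = (rng.uniform(size=n) < predicted).astype(int)

# AUC quantifies discrimination: 0.5 = chance, 1.0 = perfect ranking
# of survivors above non-survivors by their predicted probability.
auc = roc_auc_score(observed, predicted)
print(f"AUC for 5-year OS discrimination: {auc:.3f}")
```

Note that AUC measures discrimination only; the paper's predicted-versus-actual survivor counts address calibration, a separate property.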
Results
In this study of women with early breast cancer, the mean age at diagnosis was 49 years. Almost all of the patients were symptomatic at presentation (125, 99.2%), whereas only 1.6% of women in this study had mammographic screening-detected breast cancer. The mean tumor size at presentation was 32 mm, and 55 patients had lymph node involvement (43%). ER was positive in 115 patients. Data on HER2 status were not available in 121 patients; among patients with available information, HER2 was expressed in only 4 patients. No data on Ki67 were available. 106 (82.8%) patients had grade II tumors. 108 (84.4%) patients received adjuvant chemotherapy, which comprised only anthracycline-based (second-generation) regimens.
apjcc.waocp.com Gehan A. Khedr, et al: Predict Prognostic Model Validation
Receiver-operating characteristic (ROC) analysis was used to validate the estimated results of PREDICT, as shown in Figures 1 and 2. The area under the ROC curve (AUC) was used to evaluate the 5- and 10-year overall survival. In the entire study population, 5-year OS analysis was good, with an AUC of 0.787; an AUC of 0.649 was found for the 10-year OS estimation, as shown in Table 1. The minimum percentage calculated for 5-year survival was 63% and the maximum 98%, with a mean of 91.12% and a median of 93%. Meanwhile, the minimum percentage calculated for 10-year survival was 40% and the maximum 95%, with a mean of 80.42% and a median of 82%, as shown in Table 2.
The predicted number of survivors after 5 years in the entire study population was 125 (97.7%) compared to 123 (96.1%) actual survivors. The difference was 1.6%, which was not significant (p = 0.625), as shown in Table 3. Table 4 shows that PREDICT overestimated 5-year OS in subgroups of patients with a good prognosis, for example, ER-positive and N0 disease; however, neither was statistically significant (p = 0.625 and 0.25, respectively). It also underestimated 5-year OS for ER-negative patients, but this was not statistically significant (the difference between predicted and actual survivors was -0.8%, p = 0.5).
The predicted number of survivors after 10 years in the entire cohort was 77 (60.2%) compared to 81 (63.3%) actual survivors. The difference was -3.1%, which was not significant (p = 0.671), as shown in Table 5. Table 6 shows that PREDICT overestimated 10-year OS in subgroups of patients with a good prognosis, for example, ER-positive, T1, and N0 disease; however, none was statistically significant (p = 0.76, 0.118, and 1, respectively). It also underestimated 10-year OS for ER-negative patients; in this population, the difference between predicted and actual survivors was -5.5%, which was statistically significant (p = 0.016). PREDICT also underestimated 10-year OS in other poor prognostic subgroups, for example, grade III and N+ disease; however, neither was statistically significant (p = 0.125 and 0.405, respectively). PREDICT also underestimated 10-year OS in a certain age group (>35-50 years); although the difference between predicted and actual survivors was -5.5%, it was not statistically significant (p = 0.162).
PREDICT accurately predicted 5-year OS in the entire study population and all predefined subgroups. Ten-year survival was predicted quite well, although survival was underestimated in ER-negative patients; although this difference was within the range of 5.5%, it was statistically significant.
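The study compares 77 predicted with 81 actual 10-year survivors out of 128 (reported p = 0.671). A deliberately simplified, hedged sketch of such a predicted-versus-observed calibration check is shown below; it collapses patient-level probabilities to their mean and uses a normal approximation, which is why it does not reproduce the paper's exact p-value:

```python
import numpy as np
from scipy.stats import norm

# Illustrative calibration check for the 10-year estimates: the study
# reports 77 predicted vs 81 actual 10-year survivors out of 128.
n = 128
predicted_survivors = 77
actual_survivors = 81

# Simplification (hence a different p-value than the paper's 0.671):
# treat every patient as having the same survival probability
# p = 77/128 instead of summing patient-level PREDICT probabilities.
p = predicted_survivors / n
expected = n * p
variance = n * p * (1 - p)

# Normal-approximation z-test of observed vs expected survivors.
z = (actual_survivors - expected) / np.sqrt(variance)
p_value = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

Either way the conclusion matches the paper's: a 4-survivor discrepancy in a cohort of 128 is well within chance variation.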
Discussion
Generally, PREDICT performed well in terms of estimating the 5- and 10-year overall survival, with no statistically significant differences between the actual and predicted survival. Meanwhile, PREDICT overestimated 5-year OS in subgroups of patients with a good prognosis, for example, ER-positive and N0 disease; however, neither was statistically significant (p = 0.625 and 0.25, respectively). It also underestimated 5-year OS for ER-negative patients, but this was not statistically significant (the difference between predicted and actual survivors was -0.8%, p = 0.5). Similar to the 5-year survival analysis, PREDICT overestimated 10-year OS in subgroups of patients with a good prognosis, for example, ER-positive, T1, and N0 disease; however, none was statistically significant (p = 0.76, 0.118, and 1, respectively). It also underestimated 10-year OS for ER-negative patients; in this subgroup, the difference between predicted and actual survivors was -5.5%, which was statistically significant (p = 0.016). PREDICT also underestimated 10-year OS in other poor prognostic subgroups, for example, grade III and N+ disease; however, neither was statistically significant (p = 0.125 and 0.405, respectively).
This finding is consistent with a Dutch study performed to validate PREDICT in the Dutch population by van Maaren et al [11], carried out on 10,338 patients with operated, non-metastatic primary invasive breast cancer diagnosed in 2005. In the Dutch population, an AUC of 0.80 was used for the assessment of 5-year OS accuracy. The predicted number of survivors after 5 years was 7595.2 (86.0%) compared to 7723 (87.4%) actual survivors. The difference was -1.4%, which was not significant (p=0.14). In ER-positive patients, the difference between predicted and actual survivors was -0.7% (p=0.53). In ER-negative patients, the difference between predicted and actual survivors was -4.9%, which was statistically significant (p=0.02) but just within the range of 5%. For the entire cohort and the ER-positive patients, the predicted and actual 5-year OS did not differ significantly.
In the entire Dutch validation population, an AUC of 0.78 was used for the assessment of 10-year OS accuracy. The predicted number of survivors after 10 years was 6404 (72.5%) compared to 6493 (73.5%) actual events. The difference was -1.0%, which was not significant (p=0.27). In ER-positive patients, the difference between predicted and actual survivors was -0.1% (p=0.92). In ER-negative patients, the difference between predicted and actual events was -5.3%, which was statistically significant (p=0.01). For the entire cohort and the ER-positive patients, the predicted 10-year OS did not differ from the actual 10-year OS. However, for ER-negative patients, a significant underestimation was seen (p=0.01). 10-year OS was significantly underestimated by PREDICT in T3 (-13%, p < 0.01) and grade III (-3.2%, p=0.03) disease. However, the only difference outside the range of 5% was in patients with T3 tumors (underestimation).
Van Maaren et al [11] concluded that PREDICT accurately predicts 5-year and 10-year OS in the overall Dutch validation population. However, 5- and 10-year OS was underestimated for ER-negative disease.
The finding that 10-year OS was underestimated in ER-negative patients, but was accurately predicted in ER-positive patients is consistent with the present study in which Predict underestimated 10-year OS in ER-negative patients. This may be related to the biological criteria of the ER-negative population which is characterized by much more aggressive disease with subsequent worse predicted survival rates.
In a similar study, Wong et al [12] validated PREDICT in a Southeast Asian population of 1480 patients, identified from a prospective breast cancer registry, who underwent complete surgical treatment for stage I to III breast cancer between 1998 and 2006. In this study, the AUC for 5-year OS was 0.78. The predicted 5-year survival in the entire cohort was 86.3% compared with 87.6% actual survival.
In addition, the AUC for 10-year OS was 0.73. The predicted 10-year survival in the entire cohort was 77.5% compared with 74.2% actual survival. The difference was 3.3%, which was not statistically significant (p=0.12).
PREDICT was also accurate in most patient subgroups, although in certain subgroups it tended to overestimate survival. For example, in women younger than 40 years, PREDICT overestimated 5-year OS by 6.8% and 10-year OS by 17.2%. Similar to the present study, the model tended to underestimate 5-year OS in patients with ER-negative tumors; in the Southeast Asian study this was statistically significant, with a difference between predicted and actual survival of -6.0% (p<0.001), although no underestimation was reported for the 10-year prediction [12].
A similar study was carried out by Engelhardt et al [13] to validate PREDICT in female patients with early breast cancer younger than 50 years. The study included 2710 patients with stage I-III breast cancer.
In the Engelhardt et al [13] study, only the estimate of 10-year overall survival was analyzed. The difference between predicted and actual mortality was -1.1%, which was non-significant (p=0.28) and is consistent with the present study. PREDICT did, however, significantly underestimate all-cause mortality for patients <40 years, by up to -6.6% [14]. Younger patients tend to present with more advanced stage and more aggressive disease, and are more likely to be hormone receptor-negative. In addition, limited awareness of the rising incidence in younger women means that breast cancer symptoms are often attributed to a more benign cause, without breast cancer being considered as a possibility, eventually leading to presentation at a more advanced stage with a poorer outcome.
Ultimately, this tendency of PREDICT to underestimate survival in ER-negative populations makes it a less reliable tool for this subset of patients. In addition, the lack of HER2 data makes it difficult to assess the additive benefit of trastuzumab in either the actual analysis or the predicted values.
To our knowledge, this is the first study in Egypt to validate PREDICT as an online prognostication tool in women diagnosed with early-stage breast cancer.
A limitation of this study is the absence of data on cause-specific mortality, which prevents determining whether differences are due to breast cancer-specific mortality or to unrelated causes of death. Another limitation is the lack of data on Ki67, which was therefore marked as unknown for all patients. Also, nearly all study subjects were clinically symptomatic at presentation; symptomatic cancers are more likely to present with unfavorable tumor characteristics than screen-detected cancers. Furthermore, a larger study population is needed to provide a wider database and a more powerful analysis of patients with less favorable tumor characteristics, especially ER-negative patients.
In conclusion, PREDICT is a valuable prognostication tool. It has the advantage of being a free, easily accessible online model — a real benefit in developing countries with limited resources. Moreover, it can offer useful insight to help clinicians determine the appropriate treatment strategy for each patient on an individual basis.
Clinical Practice Points
Breast cancer is a major problem in Egypt. In a low-income country, managing resources in the best possible way allows directing proper therapy without the excessive use of unnecessary chemotherapy.
PREDICT is an easily accessible online program that allows the integration of clinical parameters into clinical practice. It was validated in the UK population. Applying this program to our study subjects demonstrated its effectiveness and its major role in tailoring therapy in a country where access to modern molecular and genetic analysis is difficult.
Clinicians should integrate this program to guide decision-making in providing appropriate therapy for women with early breast cancer in Egypt.
"year": 2020,
"sha1": "e41497ac70e5ec4507d52e372c47a93fcdb8e5de",
"oa_license": "CCBY",
"oa_url": "http://www.waocp.com/journal/index.php/apjcc/article/download/480/1624",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e41497ac70e5ec4507d52e372c47a93fcdb8e5de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Distribution patterns and feeding success of anchovy, Engraulis anchoita, larvae off southern Brazil
SCI. MAR., 62 (4): 385-392 SCIENTIA MARINA 1998
INTRODUCTION
Anchovy, Engraulis anchoita, inhabit the Southwest Atlantic Ocean, between 22°S and 47°S (Whitehead et al., 1988), where it is one of the most abundant pelagic fish species. In southern Brazil E. anchoita spawns throughout the year, but mainly during the austral winter and spring (Weiss and Souza, 1977), when adult stock biomass may be as high as 1.9 × 10⁶ tons (Lima and Castello, 1994). Successful anchovy spawning during winter and spring is associated with high biological productivity (Castello et al., 1991; Ciotti et al., 1995), stable conditions in the water column, and wind-induced circulation that retains eggs and larvae in areas over the shelf (Bakun and Parrish, 1991; Lima and Castello, 1995). The combination of enrichment, stability and retention mechanisms is thought to create a suitable habitat for feeding and survival of anchovy larvae in the southern Brazil shelf ecosystem (Lima and Castello, 1995; Bakun, 1996). In this paper we analyze the feeding success of anchovy larvae off southern Brazil to test the hypothesis of improved feeding conditions during austral winter and spring. Feeding success is also compared between larval size classes with different morphological characteristics and distribution patterns to investigate possible changes in feeding conditions with larval development.
MATERIALS AND METHODS
Larval samples were collected during eight cruises conducted by the R/V "Atlantico Sul" off southern Brazil between 34°30' S and 28°30' S (AREPE cruises) and between 34°30' S and 32° S (ECOPEL cruises) (Table 1; Fig. 1). Anchovy larvae were collected with a Bongo net with a mouth diameter of 60 cm and a mesh size of 300 µm. The Bongo net was towed at 2.5 knots in oblique hauls between the surface and 5 m above the bottom. The water volume filtered was measured by a flow meter attached to the mouth of the net.
Larvae were preserved in a 4% buffered formalin solution, then measured and counted. Standard length (SL, mm) was corrected for shrinkage by applying the factor calculated by Theilacker (1980). Feeding and gill raker development were assessed from 1231 larvae between 2.8 and 34 mm SL, collected in 62 hauls from ECOPEL cruises (Table 2). Analysis of larval feeding success was concentrated on samples collected during daylight, since anchovy larvae are thought to be visual feeders, feeding mainly during the day (Sánchez et al., 1991). For food items to be seen in the entire gut, larvae were stained with Bengal Rose and Lugol.
Feeding incidence, used as an index of feeding success, was defined as the percentage of the total larvae caught during daylight with at least one food item in the gut. Differences in feeding incidence among periods and size classes were tested using a Tukey test for proportions (Zar, 1984). Food items were identified to the lowest discernible taxa, consisting mainly of nauplii, copepodites, copepods, invertebrate eggs and tintinnids (Freire, 1995). To evaluate the relationship between particle size, mouth width and larval length, the maximum cross section that the larvae would have to encompass for ingestion was measured. The number of gill rakers in the largest branch of the first left gill arch was counted after staining with Bengal Rose.
The horizontal patchiness and the night/day catch ratio for each length class were analyzed. Larval concentrations were standardized as the number caught under 10 m² of water surface to correct for differences in the water volume filtered at different stations. Variation of patchiness with length was analyzed by applying the Lloyd patchiness index (Hewitt, 1981). This index is a function of the mean crowding (m*), defined as the mean number of larvae of each length class under a 10 m² surface area:

m* = m + m/k

where m and k are parameters of the negative binomial distribution, representing the mean density and the degree of patchiness of a population, respectively. Lloyd's patchiness index is calculated as the ratio between mean crowding and mean density:

m*/m = 1 + 1/k

and may be considered a measure of how many times more crowded the larvae are relative to a randomly distributed population with the same mean density (Hewitt, 1981). This index is frequently applied in the analysis of spatial distribution patterns of fish eggs and larvae because it is independent of population density and of the spatial scale of the sampling (McGurk, 1986). The parameter k was estimated by iteratively solving the zero-frequency equation (Krebs, 1989):

log(N/n₀) = k log(1 + x/k)

where N is the total number of sampled stations, n₀ the number of stations containing zero individuals, x the mean density of a length class, and k the negative binomial exponent. The night/day catch ratio for larvae was calculated for each length class in each cruise and used as an index of net evasion.
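As a hedged sketch of how k and the Lloyd index can be computed (the station counts below are invented for illustration, and the authors' actual iterative procedure may differ), the zero-frequency equation can be solved for k by simple bisection, since k·ln(1 + x̄/k) increases monotonically from 0 toward x̄:

```python
import math

def estimate_k(counts, lo=1e-6, hi=1e6, tol=1e-9):
    """Estimate the negative binomial exponent k by the zero-frequency
    method (Krebs 1989): solve k*ln(1 + xbar/k) = ln(N/n0) by bisection."""
    N = len(counts)
    n0 = sum(1 for c in counts if c == 0)
    xbar = sum(counts) / N
    target = math.log(N / n0)
    f = lambda k: k * math.log(1 + xbar / k) - target
    if f(hi) < 0:  # k*ln(1+xbar/k) -> xbar as k grows; a root needs xbar > target
        raise ValueError("mean density too low relative to the zero fraction")
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def lloyd_patchiness(counts):
    """Lloyd's index m*/m = 1 + 1/k; 1.0 corresponds to a randomly
    distributed population with the same mean density."""
    return 1.0 + 1.0 / estimate_k(counts)

# hypothetical larval counts per station for one length class
counts = [0, 0, 0, 1, 2, 7, 10]
print(round(estimate_k(counts), 3), round(lloyd_patchiness(counts), 3))
```

A small k (strong aggregation) drives the index well above 1, matching the paper's reading of the index as "how many times more crowded" the larvae are than a random population.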
RESULTS
Feeding success was significantly higher during the winter (p<0.001; Table 3). Feeding incidence was not statistically different for anchovy larvae caught during the spring, autumn and summer cruises. Overall, between 33 and 52% of the larvae caught in any period had food in their gut.
In order to identify possible differences in feeding success due to size, feeding incidence was analyzed for two length intervals: SL smaller than 10 mm, and SL larger than 10 mm. Larvae of less than 10 mm showed a low evasion rate of the sampling gear (Fig. 2), attained the lowest level of patchiness in their distribution (Fig. 3), possessed no developed apparatus for filtration (i.e. gill rakers) (Fig. 4), and fed mainly on small food particles (Fig. 5). Conversely, larvae of more than 10 mm showed an increasing ability to escape from the sampler during day hauls (higher night/day catch ratio), while attaining higher patchiness and, consequently, lower spatial dispersion in the sampled area. These changes coincide with the appearance of gill rakers, which increase rapidly in number from this length interval, and with an increase in the maximum size of food ingested. Although feeding continued to include large amounts of the more abundant small particles, larvae of more than 10 mm fed on particles twice as large as those consumed by smaller size classes (Fig. 5). These results indicate substantial changes in larval swimming ability and behavior which could affect both searching for food and feeding success. It was, therefore, hypothesized that under the same conditions of habitat and food availability larvae of more than 10 mm would have higher feeding success than smaller size classes.
Table 4 shows the values of feeding incidence for each length interval in each period. The difference in feeding success with size was only statistically significant for the winter data, when larvae of more than 10 mm SL had higher feeding success (~64%) than larvae of less than 10 mm (~47%).
DISCUSSION
Feeding success of anchovy larvae off southern Brazil was higher during austral winter. Higher feeding success rates during the winter are apparently related to the combined effects of freshwater run-off and the flow of cold waters of Subantarctic origin, which result in strong vertical water stability over the shelf (Bakun and Parrish, 1991; Lima and Castello, 1995) and provide conditions which increase primary production (Ciotti et al., 1995) and zooplankton densities (Castello et al., 1991). Surprisingly low feeding incidences were observed in the spring data, suggesting below-optimal conditions for larval feeding during a highly productive period. Phytoplankton production in the region increases in early spring with the nutrient influx from continental and southern cold waters (Ciotti et al., 1995). Zooplankton biomass is also high during spring months (up to 98.47 mg C m⁻³), especially in offshore areas under the influence of Subantarctic waters (Montú et al., 1997). Information on zooplankton species composition would be needed to understand the lower feeding incidences encountered during spring, since feeding success may depend not only on food concentration but also on its specific characteristics, i.e. species and size composition (Lasker, 1975).
Lower feeding success during autumn and summer months was, on the other hand, expected, since warm waters dominate the shelf and rainfall is greatly reduced. As a result, water column stratification is not as strong as in winter and spring (Lima and Castello, 1995), primary production and phytoplankton biomass are considerably lower (Odebrecht and Garcia, 1997), and zooplankton concentrations are frequently associated with gelatinous plankton (Montú et al., 1997).
The analysis of feeding success with size provides important information for understanding decisive events in anchovy early life history. No ontogenetic differences in feeding success were observed in periods with below-optimal feeding conditions (i.e. spring, summer and autumn). Conversely, the results indicate that under optimal feeding conditions (winter) larvae of more than 10 mm have higher feeding success than larvae of less than 10 mm. From 7 to 12 mm SL, E. anchoita larvae pass through a phase of transformation in their body structure marked by fin development, a functional gas gland (Phonlor, 1984) and the appearance of gill rakers (Fig. 4). The development of fins and a gas gland creates better swimming ability and is directly related to the initiation of vertical migration behavior, common among clupeoid species (Sánchez et al., 1991). Similar changes have been noted in the larvae of E. mordax, E. japonicus and E. capensis (Hunter and Sánchez, 1976; Blaxter and Hunter, 1982). The beginning of vertical migration seems to play an important role in the development of schooling behavior by concentrating larvae at or near the surface and, therefore, increasing the frequency of visual contacts between individuals (Hunter, 1984). The association between changes in distribution patterns and behavioral processes was also shown for E. mordax larvae: in this species the onset of schooling behavior begins when larvae are 11 to 12 mm SL and is associated with an increase in patchiness in the sea (Hunter and Coyne, 1982). For E. anchoita, patchiness increases with size when larvae reach, on average, ca. 10 mm SL (Fig. 3; Table 5). The increase in patchiness and in the ability of anchovy larvae to evade the plankton sampler during the day (Fig. 2) denote substantial changes in larval swimming ability which provide improved feeding success and searching capacity for bigger and more motile prey (Fig. 5). This may play an important role in the growth and survival of larger larvae. For instance, Hunter (1984) showed that an increase of 2.5 times in copepod size can produce a tenfold increase in dry weight, resulting in a considerable energetic gain for larvae.
The length interval with lower feeding success rates extends over two important ontogenetic thresholds (sensu Balon, 1984): the beginning of exogenous feeding and the onset of active swimming, aided by gas gland buoyancy and manifested in school-forming behavior. These steps in early development mark important changes in the survival chances of larval anchovy. At the end of yolk absorption, larvae face a very delicate phase when survival depends on food availability in the proper size range and at adequate concentration. On the other hand, fin development and a functional gas gland provide anchovy larvae with improved ability to search for food, prey upon larger organisms and escape from predators. To evaluate the significance of these events for anchovy larval survival it is necessary to compare mortality rates throughout larval development. Kitahara and Matsuura (1995) reported higher mortality rates of preflexion anchovy larvae (SL from 3.8 to 12.9 mm) under the particular oceanographic conditions of the southeastern Brazilian Bight. Larvae of this size range were more abundant in the upper mixed layer (Matsuura and Kitahara, 1995), which is characteristically less productive. As the larger ones start to migrate to deeper water, near the chlorophyll maximum layer, starvation-induced mortality decreases due to enhanced feeding conditions. The analysis of RNA/DNA ratios for anchovy larvae caught in the same area showed that the highest degree of starvation occurred in the length interval from 4 to 10 mm (Clemmensen et al., in press). These studies therefore provide evidence for the importance of vertical migration behavior for larval anchovy feeding and survival. We see these results as a corollary of the hypothesis that the length interval up to 10 mm SL comprises a phase when larval survival chances remain low, and thus indicate a decisive or critical period in the species' life history.
FIG. 2. - Relationship between the night/day catch ratio and larval standard length. The catch ratio was calculated from the pool of samples obtained in each cruise series (AREPE and ECOPEL). The numbers inside the figures refer to the total number of day and night samples used to calculate the ratios.
FIG. 5. - Relationship between the size of ingested food and standard length. Each dot represents the size of a food item found in the gut of a larva of a given size.
TABLE 1. - Sampling period and total number of ichthyoplankton samples collected during each cruise of the R/V "Atlântico Sul" off southern Brazil.
TABLE 2. - Number of samples and number of larvae used in the feeding analysis. Larvae-day and larvae-night refer to the total number of larvae analysed from day and night samples, respectively.
TABLE 4. - Feeding incidence per length interval and period (* denotes a period with a statistically significant difference in feeding success between size classes; p<0.001).
TABLE 5. - Data for the patchiness analysis. N is the total number of sampled stations, n is the number of larvae in a given length class and k is the negative binomial exponent. The Lloyd patchiness index is calculated from the inverse of k (see details in text).
"year": 1998,
"sha1": "78207dd5a46108f99c5448f52b10f98e2f7546b8",
"oa_license": "CCBY",
"oa_url": "https://scientiamarina.revistas.csic.es/index.php/scientiamarina/article/download/984/1025/1003",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "78207dd5a46108f99c5448f52b10f98e2f7546b8",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Study of the Appropriate and Inappropriate Methods of Visual Arts Education in the Primary Schools According to the Types of Multiple Intelligences
In today's changing world, often called the era of knowledge explosion, specialists and those involved in education have been drawn to one question: what should we teach today's students that will be useful for them in their future lives? The main objective of this study is to investigate the appropriate and inappropriate methods of visual arts education in the preschool period according to the types of multiple intelligences. Reaching this goal requires careful planning, proper training and content selection aligned with the talents and interests of learners, along with appropriate teaching practices and the training of educational staff. The research employs descriptive and analytic methods as well as a review of the academic literature. The results suggest the importance of understanding multiple intelligences in visual arts education.
Introduction
Childhood is one of the most formative periods of human life, in which human character takes shape. The child therefore needs not only physical care and attention but also social, emotional and mental education as a determining factor. Children's entry into primary school is a turning point in their growth, because whatever we learn in childhood becomes the foundation for all subsequent learning in life. The child's age and intellectual condition mean that diverse training is best delivered through games and entertainment, so art offers a shortcut and an ideal pattern for teaching. The visual arts are one of the most important instrumental languages, through which broad concepts have been transferred between nationalities. Concepts conveyed through the visual arts are understood and received via visual elements such as line, point, plane, shape and color. The artist, by selecting and arranging some of these elements, may create an artwork and thereby transfer her or his concepts and emotions to other people. Each student comes from a different culture; their cultural and economic backgrounds create different levels of interest and different ways of expressing themselves and their weaknesses. Understanding multiple intelligence theory motivates the teacher to use different methods to help students. Verbal intelligence, logical intelligence, spatial intelligence, physical intelligence, intrapersonal intelligence, interpersonal intelligence, musical intelligence and naturalistic intelligence are the multiple intelligences to which Gardner (1990) refers. With regard to the contents mentioned, the main goal of this study is to investigate and compare the appropriate and inappropriate methods of visual arts education in the preschool section. The most important question is: are all types of multiple intelligences effective in visual arts education?
This research addresses a relatively new area with little research background. Only a few internal and external researchers can be named who have worked on visual arts education and the effect of multiple intelligences; among them are Mehr Mohammadi Mahmoud, Ashki Mahnaz and Shahidi Nima, parts of whose studies are used in this article. The article is organized in several sections: the first section handles the meaning and nature of art education as well as the features of the visual arts; the second section discusses the appropriate and inappropriate methods of visual arts education; and the third section discusses the types of multiple intelligences and their importance in education.
The meaning and the nature of the art education
Jerome Housman holds that art education affects learning through the different senses. According to him, art helps children to understand that there are multiple views of the realities they encounter. In his view, art training includes training the basis of the senses and growing consciousness through visual experiments and through the creation and understanding of visual symbolic forms. In Chapman's terms, art education and aesthetics, rather than emphasizing application, should attempt to establish a connection between events and phenomena through their form and shape. This is the approach that educational systems require, because according to Chapman, given the requirements of modern life, education should mainly develop the sensitivity of children. Children should learn how to see, how to hear and how to connect with others; they have to figure out how the environment can shape their feelings and actions.
Visual arts
The visual arts have a vital role in education and in discovering the creative talents of students. Besides leading the young child toward the creative world, this field can bring the first joys of entering that world and of discovering beauty; the joy of creativity is one of the greatest human joys, because its effect works within the child and gives him confidence. The child can thereby become a complete, independent, reasonable person, beneficial to society. During the teaching of the visual arts, students are encouraged and their talents are taken into account.

While learning the visual arts, students not only show their inner talents but can also become acquainted with the ancient and contemporary cultures of this art. When children are familiar with the art world, it is effective not only for their intellectual development but also helps them develop positive emotions and appreciate the beauty and goodness of the cosmos. Today, when the all-round education of the future generation has special importance, disregarding the visual arts and failing to use existing facilities in training is not permissible.
The proper methods of art education
There are a number of methods whose use is very effective in developing children's talents. Relying on only a few different teaching methods is a serious problem, since no single method of teaching the various lessons will be successful for all students. Many teachers have difficulty finding a method suitable for their students, especially those with learning disabilities, and struggle to choose an effective method and to switch between teaching methods. However, if this work is done with skill, it is usually successful. The most suitable methods include the following.

a. Method of observational pattern

In art education, because the tools for receiving and making a work of art, and the learning of art itself, are based on objective observation and sensory encounter, this method can be very effective. In teaching by the observation method, before doing the work it is necessary to know that whenever we speak of observation, it means that the five human senses must separately record whatever they have gained from the observed topic. In viewing the subject of art, not only the two senses of sight and touch are used; the other senses are also used at an appropriate level. Finally, with the help of the five senses we obtain a complete concept in the brain's analysis. Some believe that in painting the senses of touch, hearing and smell are not applied, while many have found, by observing students who see or touch objects or use other senses, that doing so produces better and more complete artwork. An observational pattern that employs all five human senses can have tremendous success.
b. Method of display pattern
Essentially, in the display method, when a teacher cannot or does not want to use other methods, he or she displays the learning object to students. Students engage with this educational material through their visual capacity, which accounts for 75% of sensory reception. In the case of art, students gain a further six percent through touching objects and understanding their type. When we show something, understanding and learning the subject is much faster than when the other senses work alone or together, because the reception capacity of the remaining senses is only nineteen percent: thirteen percent by hearing, three percent by smelling, and three percent by tasting.
Sensory reception capacity 19% → 13% hearing, 3% smelling, 3% tasting
Sensory reception capacity 81% → 75% seeing, 6% touching

c. Method of field trip pattern

In the field trip method, students are placed in an out-of-class setting where, through objective observation, they deal with concepts and topics, so teaching the subject in question becomes easier and clearer. Students gain practical experience through observation of nature, colors, volumes, activities and objects.
d. Method of training pattern
Repetition, or repeated performance of what is learned, is a great way to eliminate forgetting and to extend retention of content. Repetition in art has two features and advantages:
1. Broader recognition: in the process of practice, the whole work is reviewed according to detailed notes.
2. More complete stabilization: the recording and archiving system of the cerebral cortex is strengthened by practicing with familiar items.
It should be noted that this method is better used in the first part of the training, with other training methods gradually introduced afterwards. Excessive use of this pattern stifles children's creativity and also highlights the teacher's inability to convey a true artistic concept.
During learning, by drawing on individual or group capacity, the training method makes students encounter the phenomena directly and thus spontaneously create better relationships in the brain for recording events. Of course, the creation of such relationships in the brain's system is very complex and differs from person to person.
e. Method of synectics pattern
In this method, as the name implies, the teacher motivates students and brings them to the point of synectics. The main objective of this approach is innovation. The teacher should try to help students develop their thinking. The first orientation for innovation and for changing traditions is the use of creative thinking, and with this method experiences and activities are directed toward increasing creative thinking.
f. Method of workshop pattern
The workshop teaching method is one of the most effective methods of teaching and learning; it is commonly discussed alongside related formats such as the lecture, seminar, conference and symposium. The following concepts are useful for understanding this method:
1. Lecture: the most common training method, used since ancient times by the majority of teachers. It is one of the most prominent demonstrative teaching methods, because the teacher is responsible for the learning process and presents the curriculum materials in various ways. The defining feature of this method is that the teacher works in class through rhetorical activity.
2_Seminar: in this method, the learners are gathered together and ideas are exchanged between the students and the teacher. However, the number of participants at a seminar should be limited to about a hundred people, who are divided into small groups of ten or fifteen and then discuss with each other. 3_Conference: a researcher arrives at a theory and shares it with others. 4_Symposium: this is like a seminar; the only difference is that the participants in a symposium are more professional and have higher awareness than those in a seminar.
g. Method of group game pattern
In this method, the teacher can make training enjoyable. The game has plenty of features. Great psychologists believe that games can awaken the senses and thinking of the student, because games are enjoyable for children. Therefore, teachers use games as a special language for teaching activity. Individual and group plays each have special features. Here we focus on the group games of students, because in this type of game students can show everything they have learned and improve their sense of responsibility. Another advantage of the group game is that it stimulates the sense of superiority and competition that all people have to some degree. If a teacher succeeds in leading this sense of competition in the right direction, it becomes a good stimulant for the game teaching method.
Games specific to teaching are divided into two types: 1_Play sets (physical activity) → 80% physical dynamic and 20% mental 2_Play sets (intellectual, mental and memorial activity) → 95% mental, 5% physical dynamic
h. Method of discovery pattern
Adult art and children's art both have advantages. However, children's art differs from adult art and is related to the growing conditions of children. On the other hand, both children and adults may learn problem solving in the field of visual arts. Visual arts education through which the student learns problem-solving methods must contain aesthetic concepts. Therefore, this form of education is converted by teachers into a strategic process in which the students are encouraged to solve problems in a manner similar to the method of artists. A model designed on the problem-solving method enables teachers to teach more effectively, because the creation of an artwork by students in order to solve a specified problem takes place using certain concepts. Issue-driven training in the visual arts, which is based on employing different aesthetic concepts during the creation of an artwork, is similar to problem solving. In this method, learning art is not only dependent on talent and intrinsic merit but is also based on the belief that aesthetic issues can be planned and solved as problems. In this way, the use of art increases mastery and semantic communication. An artwork obtained by using problem solving is called a guided artwork. Such works are quite similar to each other in some features, showing a conceptual harmony. On the other hand, they differ in a variety of ways, indicating divergent thinking in students. In short, this phenomenon should be considered a valuable feature of art education.
Improper methods of art education
Some teachers try to use direct teaching methods in art classes and teach children a special manner of painting, the choosing of correct colors, a special manner of shading, and the shapes of things.
Sometimes they do not even let the student choose the subject, so the child is unable to enjoy the artistic activity or use it to express his or her inner thoughts. Applying such approaches destroys children's creativity, encourages the child to imitate, and sometimes discourages the child from participating in art activities altogether. Some believe that if we give children the opportunity to perform artistic activity without adult interference and without any stimulus, creativity will flourish among them. The involvement of adults in performing art activities can be a deterrent, because it causes children to distrust their artwork and to fear that their work will be evaluated by adult criteria. This fear may lead students to stop their art activity. Evaluating children's progress and art activity by adult criteria is a mistake.
The improper methods of art education include: a. Mechanical production and manufacturing: the teacher makes all the children produce the desired activity identically. Such activity may have no artistic value and may only amuse students.
b. The clean and tidy art: the trainer heavily emphasizes cleanliness and creates fear among students when they employ new media, materials and tools. This situation reduces the curiosity of students.
c. Absolute freedom: a trainer who lets students use any tools they wish, without any adult intervention, may seem to show a profound feeling for art, but in fact does not understand the correct meaning and concept of teaching. Art education is not used only to release feelings and emotions.
d. The art of formula: a trainer imposes his or her favorite formulas and rules on students to produce an artwork, such as applying geometric shapes (triangle, square and rectangle) for drawing a house. The behavior of this trainer is inconsistent with the spirit of art and destroys the creative power of students.
The concept of intelligence
Intelligence is one of the oldest and most fascinating topics discussed in psychology, and the concept has a long history. Just as human beings differ from each other in physical form, there are vital differences in psychological characteristics such as intelligence, talent, desire, and other psychological and personality traits. Most people think that intelligence is the ability to learn, to understand new conditions, and to deal correctly with situations. In other words, a clever person has qualities such as being punctual, smart, sharp and eminent, whereas a simple-minded person is described as slow or feeble-minded. Gardner (1990) believes that intelligence is a bio-psychological potential for processing information that can be activated in a cultural environment in order to solve a problem or create products that are valuable in a culture. According to Gardner's theory, to capture all the capabilities and talents of a person, one should check not only the IQ but also other types of intelligence, including but not limited to musical, intrapersonal, pictorial and verbal intelligences.
The types of multiple intelligences
There are eight types of intelligence that Gardner identifies: a. Visual/spatial intelligence: the ability to understand visual phenomena. Persons with this type of intelligence are inclined to think in pictures. They obtain the information they need by creating vivid mental images. They enjoy looking at maps, charts, pictures, videos and movies. Their skills include making puzzles, reading, writing, perceiving charts and pictures, designing, painting, building and repairing practical tools, and interpreting visual images. Many professions suit them, including but not limited to sailor, sculptor, inventor, explorer, architect, interior designer, mechanic and engineer. b. Verbal intelligence: the ability to use words and language.
Persons with verbal intelligence are proficient in listening skills and are usually outstanding speakers. They are inclined to think in words instead of pictures. Their skills include listening, speaking, storytelling, explaining, teaching, using humor, understanding the meanings of words, recalling information, and analyzing the use of language. Suitable professions for them include poet, journalist, writer, teacher, lawyer and politician. c. Logical/mathematical intelligence: the ability to use logic, reasoning and numbers. These learners think conceptually and use numerical and logical patterns. They are curious about the world around them, ask a lot of questions, and like to experiment. Their skills include problem solving, dividing and classifying information, working with abstract concepts, applying chains of reasoning, conducting controlled trials, showing curiosity about natural phenomena, doing complex mathematical calculations, and working with geometric shapes. Their favorite professions include scientist, engineer, computer programmer, researcher, accountant and mathematician. d. Physical/kinetic intelligence: the ability to control body movements skillfully and to handle objects. These learners express themselves by moving. They have a good sense of balance and hand-eye coordination. They interact with the surrounding environment and are able to remember and process information. Their skills include dancing, physical coordination, doing sport, using body language, handiwork and handicraft, and expressing emotions through the body. Their favorite professions include athlete, physical education teacher, dancer, actor, firefighter and artist. e. Musical/rhythmic intelligence: the ability to produce and understand music. These learners think in sounds, rhythms and musical patterns.
They react to music immediately and are sensitive to environmental sounds. Their skills include singing melodies, whistling, performing, recognizing rhythmic patterns, composing, and understanding the structure and rhythm of music. Suitable professions for them include musician, singer and composer.
f. Interpersonal intelligence: the ability to interact with and understand others. These learners try to see everything from other people's point of view in order to understand how they think and feel. They usually have an ability to sense feelings, aims and motives. They are very good organizers. They usually try to keep the peace in a group and encourage cooperation. They use both verbal and nonverbal skills. Their skills include seeing multiple points of view, listening, empathy, understanding the feelings of others, counseling, collaborating with a group, understanding the motivations and intentions of people, creating confidence, settling conflicts, and establishing positive relationships with other people. Suitable professions for them include consultant, seller, diplomat and merchant. g. Intrapersonal intelligence: the ability to understand oneself and be aware of one's inner state. These learners try to understand their inner feelings, dreams, relationships, and weaknesses and strengths. They recognize their strengths and weaknesses, understand and examine their inner feelings, desires and dreams, and evaluate their thought patterns. Suitable professions for them include researcher, theorist and philosopher.
h. Nature-oriented intelligence: the latest type of intelligence that Gardner (1990) refers to in his theory. According to Gardner, those with high nature-oriented intelligence are more attuned to nature; they usually enjoy nurturing and exploring the environment and are sensitive to their surroundings. Their skills include interest in subjects such as botany, biology, zoology, gardening, and exploring nature. Suitable professions for them include biologist and gardener.
The benefits of the application of the theory of multiple intelligences in the schools
a. It is often supposed that mental abilities such as drawing an image, singing a melody, listening to music, and watching a performance are not necessary for educational activities. However, these activities are as necessary as writing and solving mathematics problems.
Studies show that many students who perform poorly on traditional tests become interested in the lesson when the classroom combines artistic activities, sport and music.
b. This theory helps create opportunities for students to shape a suitable learning model based on their needs, interests and talents.
c. This theory creates an opportunity for students to recognize their strengths and weaknesses, and therefore their self-confidence increases.
d. While increasing their knowledge, students gain positive educational experiences and the ability to solve problems. e. Students have much more control over the things they have learned.
f. Students make significant progress in critical thinking, in organizing and assessing information, and in producing new knowledge.
g. Educational systems report that this theory combines a large number of teaching methods, so most students find ways to succeed in learning.
h. Multiple intelligences help the teacher create ever richer educational experiences.
i. This theory helps the teacher evaluate the natural talents of students wisely.
Conclusion
One of the most important tasks of any society is raising healthy children with creative thought. Raising such children requires a detailed understanding of children and their abilities. As Gardner states, focusing on individual differences among students is very important. Knowledge of the theory of multiple intelligences motivates teachers to use different methods in order to help all the students in their classes. The teacher must be aware that every student has his or her own unique profile of intelligences that can affect the student's learning. In fact, the theory of multiple intelligences supports an effective learning method that improves teaching. If a teacher applies multiple intelligences in a teaching method, the teaching becomes quite creative and innovative. Therefore, instead of forcing the child to imitate, the most appropriate educational method for the visual arts is one that increases the child's creativity and aesthetic sense. In addition to creating inner joy, artistic activities should reduce stress and improve the child's mental health. Educational methods should pay special attention to the cognitive, emotional, and motor characteristics of children in order to increase children's inclination towards creative artistic activities. The behavior of the trainer, the educational methods and materials, and the educational tools must be designed and selected in accordance with the needs, conditions, and nature of the child. Beauty and art are among innate human needs. Hence, art education and aesthetics, with regard to the application of multiple intelligences, have great importance for education and children's mental health.
"year": 2017,
"sha1": "cb82871c17d375669158b3be94ba239a815a7aaa",
"oa_license": "CCBY",
"oa_url": "http://kutaksam.karabuk.edu.tr/index.php/ilk/article/view/620/481",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "073f1479d6b183675f2ed39e5ea997d71b9987a8",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
261086056 | pes2o/s2orc | v3-fos-license | Controlling viral inflammatory lesions by rebalancing immune response patterns
In this review, we discuss a variety of immune-modulating approaches that could be used to counteract the tissue-damaging viral immunoinflammatory lesions which typify many chronic viral infections. We make the point that in several viral infections the lesions are largely the result of one or more aspects of the host response mediating the cell and tissue damage, rather than the virus itself being directly responsible. However, within the reactive inflammatory lesions, alongside the pro-inflammatory participants, there are also aspects of the host response that may act to constrain the activity of the damaging components and contribute to resolution. This scenario raises the prospect of rebalancing the contributions of different host responses and hence diminishing or even fully controlling the virus-induced lesions. We identify several aspects of the host reaction that influence the pattern of immune responsiveness and describe approaches that have been used successfully, mainly in model systems, to modulate the activity of damaging participants and which have led to lesion control. We emphasize examples where such therapies are, or could be, translated for practical use in the clinic to control inflammatory lesions caused by viral infections.
Introduction
Since the dawn of the COVID-19 era, persons not claiming to be virologists have learned more facts and fake facts about viruses and how they cause disease than anyone could have imagined. Many of the newly enlightened feel they know enough about COVID-19 to decide that they and their family, including children, do not need to be vaccinated against the COVID-19 virus, or even against any infectious agent. This often includes measles, a highly infectious and potentially devastating viral infection of children.
However, as all trained medical scientists are well aware, some virus infections, including COVID-19, can be controlled effectively with vaccines, and when this occurs the vaccine approach is more effective, more convenient to use and far less expensive than any other viral control measure. Unfortunately, we lack effective vaccines against some virus infections, and these need to be controlled by alternative therapies that are often unsatisfactory. In this review, we discuss how we might control virus infections where lesions occur mainly as a consequence of the host's immune reaction to the infection, and make a case for controlling such infections by rebalancing the participation of immune reactants.
Some viruses are endowed with intrinsic pathogenicity and cause clinical disease as a direct consequence of destroying cells and tissues. In mankind, smallpox was such an example, causing marked clinical signs and death in many millions of persons before successful vaccines were discovered and used. When smallpox vaccines became widely available, all were forced, or wisely chose, to use them, and in 1979 smallpox was fully controlled and eradicated (1). So far this is the only human virus disease that has been eradicated, but we are, or at least were, close to success with poliovirus (2). We might also have succeeded in eliminating the COVID-19 virus if appropriate public health measures had been adopted early after its initial discovery. Many virus infections are not a major health problem in persons with a normally functioning immune system, but they become so when the activity of one or more aspects of immunity is defective for genetic or other reasons (3, 4). Human cytomegalovirus is such an example, with this herpesvirus becoming a significant pathogen in persons who receive immunosuppressive drugs to prevent rejecting their transplants or who are co-infected with other agents that suppress immunity, such as untreated human immunodeficiency virus (HIV) infection (5).
There are also many virus infections that are not cytopathic and fail to cause overt tissue damage when they replicate in cells, yet they can elicit tissue-damaging lesions that are often chronic. When this happens, the lesions are usually largely the consequence of the host's response to the infection (3). As immunologists have taught us, the host can respond to invaders in many ways, and only some of these responses may be responsible for causing the pro-inflammatory effects. In the case of viral immunoinflammatory lesions, the pro-inflammatory participants are usually a subset of T cells along with non-lymphoid inflammatory cells (6, 7). At the same time, other aspects of the host's innate or adaptive immunity may also be engaged, acting to diminish the extent of damage and contributing to resolution. Given such a scenario, rebalancing the participation of the damaging and protective components to favor the latter should diminish the impact of viral inflammatory lesions. In this review, we describe several ways to achieve a rebalance of immune responsiveness and emphasize the approaches that might be most practical for use against human tissue-damaging viral infections. A scheme defining several strategies that could rebalance immune activity and diminish lesions is depicted in Figure 1.
Minimizing inflammatory reactions by rebalancing aspects of the innate immune response
Innate immunity constitutes a first line of defense against invading pathogens. The system is set into action by pathogen-associated molecular patterns (PAMPs) present within or on the surface of pathogens, or generated from tissues damaged by the pathogens, so-called damage-associated molecular patterns (DAMPs) (8). These PAMPs and DAMPs are sensed by several types of pattern recognition receptors (PRR) expressed by the multiple types of innate immune cells. The responding cells undergo a range of changes that include migratory activity, activation, metabolic status, morphological alterations, acquisition of molecules responsible for their migratory activity, as well as the production of several molecules that can influence the function of other cell types, such as those that constitute the adaptive immune system (9, 10). There are multiple types of innate immune cells, including dendritic cells (DC), macrophages, natural killer (NK) cells, granulocytes, gamma delta T cells and innate lymphoid cells, and all of these cell types could influence in some way the outcome of a viral infection. However, perhaps the most relevant first responders to viral infections are DC, which are themselves quite heterogeneous in terms of the actual PRR they mainly express, as well as the major molecules they produce once stimulated by PAMPs or DAMPs (11, 12).
Figure 1. A scheme proposing multiple strategies that have the potential to recalibrate immune activity and reduce the occurrence of lesions. Approaches aimed at inhibiting pro-inflammatory cells and/or cytokines, expanding regulatory cells of the innate or adaptive immune system, and removing pathogens by reinvigorating exhausted T cells could represent such interventions.
Moreover, DC are superior antigen-presenting cells, and this function, along with molecules produced such as cytokines, results in a variable outcome in terms of the quantity and quality of the subsequent adaptive immune response induced (13, 14). Notable DC products also include interferons, which in addition to having antiviral activity also help shape the nature of immune responses to infection (15). Given this heterogeneity of the first responders, there is clearly an opportunity to introduce modulators, particularly early during the initial infection process, that will favor the engagement and activation of some subsets of innate cells over others (16). Manipulating innate aspects of immunity to improve immune outcomes has been exploited for decades by formulating vaccines with adjuvants that act on one or more aspects of innate immunity (17). Manipulation of innate immune function could, for example, diminish the induction of pro-inflammatory tissue-damaging responses while favoring the induction of regulatory responses and anti-inflammatory cytokines. However, once the pattern of immune responsiveness is established after immune induction, changing it by manipulating innate cell composition and function becomes far less accessible, and this is the challenge we discuss in this review.
The strategies available to effect changes in innate cell activity can exploit the fact that the different innate cells express a diverse array of PRR that respond to the many PAMPs expressed by the infecting agent and the DAMPs resulting from tissue damage. Accordingly, by using PRR ligands or inhibitors it should be possible to change the participation of various innate cells (18). In addition, innate modulations could target the signaling events set into motion, as well as the protein products of innate cells that mediate their influence. With regard to manipulating the diverse array of PRRs, some key ones that mold the pattern of innate responsiveness to viruses are the multiple Toll-like receptors (TLR), which are expressed either at the surface of or within the cells that compose the innate immune system (10). Others include melanoma differentiation-associated protein 5 (19), retinoic acid-inducible gene I receptors (20), nucleotide oligomerization domain receptors, as well as receptors that recognize complement components and their breakdown products (21). There are additional nucleotide sensors in the cytoplasm, such as cyclic GMP-AMP synthase, which senses DNA derived from viruses (22). A list of innate sensors and viral ligands is provided in Table 1. For instance, TLR-2 and TLR-4 are triggered by some viral envelope or capsid components (47-49). TLR-3 senses double-stranded RNA, which many viruses produce during their replication (50). TLR-7 and TLR-8 sense single-stranded RNA (51), while viral DNA is sensed by TLR-9 (27, 52). These TLRs are differentially expressed on innate cells, setting the stage for a virus expressing one or more PAMPs to trigger some, but not all, innate cells depending on their PRR expression. Moreover, innate cell triggering may lead to a cascade of signaling events and the production of cytokines and chemokines that in turn mediate cell recruitment and aspects of the tissue-damaging inflammatory reaction. Consequently, strategies to rebalance the
participation of innate immune mechanisms are available; some of these are listed in Figure 2 and discussed below.
In the majority of viral immunopathological reactions, the lesions are orchestrated by adaptive immune components, usually T cells, but the actual tissue damage is caused mainly by non-lymphoid inflammatory cells and by several inflammatory mediators released from activated innate as well as adaptive cell types (3). These mediators often constitute what is called a cytokine storm (53). Prominent among the cellular participants are macrophages, particularly those described as M1 macrophages. Thus, lesion severity can be diminished by removing macrophages or by shifting the balance of macrophage subtypes from a dominance of M1 to the M2 subset, which is more often involved in lesion resolution. Approaches to counteract macrophages and their activities exist and have been used in numerous model systems of viral immunopathology. The initial approach was developed by Nico Van Rooijen and involved clodronate liposomes, which destroy the cells, predominantly macrophages, that phagocytose the liposomes. Clodronate, once in the cytoplasm, is metabolized by aminoacyl-tRNA synthetases to generate an ATP analogue that in turn translocates to the mitochondrial membrane, inhibiting the mitochondrial ATP/ADP translocase. This results in inhibition of mitochondrial respiration, as well as the induction of apoptosis (54, 55). This approach has been used to define the role of macrophages in several viral infections in model systems. These include respiratory disease caused by pneumovirus and the neuropathological consequences of Theiler's virus infection in mice (56, 57). Macrophage depletion was also reported to attenuate muscle and joint inflammation after Ross River virus infection in mice (58). Furthermore, clodronate liposomes given intranasally result in less severe lesions in mice infected with respiratory syncytial virus (RSV) (59). All of these studies were interpreted to mean that macrophages were involved in causing the tissue damage at the overt lesion stage, but the experimental
design invariably involved depleting macrophages from the early stages of infection, and so could not exclude an influence of the cells during immune induction, nor reveal how macrophages participate in inflammatory reactions. As far as we know, human viral immunoinflammatory lesions have not been managed using the clodronate approach.
More recently, it has become evident that macrophages can polarize into pro-inflammatory M1 cells, mainly involved in mediating tissue damage, and M2 cells, which largely play an anti-inflammatory and tissue-repairing role. The M1/M2 concept was introduced by Mills and came from experiments showing that macrophages from Th1-prone mouse strains (such as C57BL/6 and B10D2) exhibit a strong nitric oxide response to stimulants such as LPS, while macrophages from Th2-prone strains (such as BALB/c and DBA/2) exhibit a strong arginine metabolism response, an effect not observed in macrophages from Th1-prone mice (60). These two functional subtypes can readily be differentiated from precursors in vitro and shown to differ in the types of cytokines they produce, their surface characteristics, and the expression of some critical enzymes involved in their function. For instance, M1 cells are CD86-positive and produce nitric oxide synthase and reactive oxygen products. They mainly produce the cytokines TNF-α, IL-1β, IL-6, and IL-23. M2 cells are CD206-positive and arginase-positive, and mainly produce the cytokines IL-4, IL-13, IL-10, and TGF-β (61). Whereas there are no convenient ways in vivo, especially in humans, to selectively deplete the subtypes and demonstrate their role in tissue damage, in model systems it is feasible to preferentially expand the different cell types and record any change in the expression of a virus infection. Additionally, studies can be done in which mice are manipulated to generate predominantly M1 or M2 macrophages and the outcomes of a viral infection are then compared. Using the latter approach, one group implicated M1 cells as more involved than M2 cells in ocular inflammatory lesions caused by herpes simplex virus (HSV) infection (62). Others reported that M1 macrophage polarization occurs in the brain after multiple flavivirus infections, and that inhibiting this polarization by blockading the M1 cytokine product TNF-α significantly
attenuated Dengue virus-induced neurotoxicity, a lesion that involves a host inflammatory reaction to the virus (63). However, it is not clear whether polarization changes can be accomplished once inflammatory lesions have commenced and are ongoing, as would be needed in clinical situations.
Another procedure that has been used to switch the balance of M1 and M2 macrophages is the use of agonists of the transcription factor peroxisome proliferator-activated receptor gamma (PPAR-γ), a member of the nuclear receptor family that is involved in regulating several inflammatory genes (64). Signaling through the signal transducer and activator of transcription 6 was shown to promote macrophage polarization towards the anti-inflammatory M2 type, along with the expression of IL-4 and IL-13 (65). For example, mice infected with RSV and then treated with the PPAR-γ agonist pioglitazone from day 2 showed diminished lung pathology, explained by M2 cells dominating the lesions rather than the M1 cells that dominate in untreated animals (66). Unfortunately, the majority of these macrophage subtype rebalancing therapies were started in the immune induction phase and continued thereafter, rather than addressing the issue we are discussing, namely making changes in established lesions to alleviate their severity. To this point, therapy with the PPAR-γ activator 15d-PGJ2 resulted in significantly reduced lung inflammatory reactions and mortality after influenza (FLU) infection, but only if therapy was begun on day 1 and not when lesions were already present (67). Thus, the PPAR-γ agonist approach may have minimal value in controlling viral inflammatory lesions once they are already underway. Additional comments about PPAR-γ are made in a subsequent section, since PPAR-γ impacts some genes involved in metabolic pathways.
Although the idea that viral inflammatory lesions can be controlled by approaches that suppress M1 or enhance M2 macrophages is a useful notion, it has yet to become an accepted paradigm. Indeed, contrary data exist. For example, there are reports that expanding the M1 population, as can be achieved using the molecule baicalein, causes a diminution of inflammatory responses to FLU infection (68). In addition, whereas macrophages, particularly M1 cells, may directly participate in causing tissue damage, the cells may also act indirectly to mediate inflammatory effects by releasing soluble mediators such as TNF-α (69). Overall, macrophage-targeted approaches hold promise for controlling inflammatory viral lesions, although achieving such control in practical situations, such as a chronic human viral infection, still needs to be realized.
As mentioned previously, tissue damage is often caused by inflammatory cytokines and chemokines, which can be the products of innate immune cells, although some of the cytokines may also be the products of adaptive immune cells. In several viral infections where the host response is a major contributor to the tissue damage, multiple cytokines and chemokines can be involved, constituting what is usually referred to as a cytokine storm (53). Such storms are a major feature of severe Dengue viral infections, but also occur in some patients with severe COVID-19 lesions as well as occasionally in FLU (53, 70). Controlling cytokine storms therapeutically is currently achieved using anti-inflammatory drugs, but monoclonal antibodies (mAbs) against some pro-inflammatory cytokines, or their receptors, can also be effective therapies. For example, severe COVID-19 patients often experience a cytokine storm that includes multiple cytokines (71). A commonly used control measure is to target some of these, but especially IL-6 and its receptors, with a mAb to control the severe inflammatory lesions, so preventing multiorgan failure and aiding recovery (72). Accordingly, clinical trials showed that anti-IL-6 receptor mAbs (e.g., sarilumab, tocilizumab) and anti-IL-6 mAbs (e.g., siltuximab) reduced inflammation, decreased the need for mechanical ventilation, and resulted in a 45% reduction in death in COVID-19 cases (72). As a result, tocilizumab has been included in some treatment guidelines for severe COVID-19. Additionally, Anakinra, a recombinant IL-1 receptor antagonist, was also suggested as a potential treatment for the hyperinflammatory state linked to SARS-CoV-2. Accordingly, a study with 52 patients found that subcutaneous Anakinra treatment reduced the need for mechanical ventilation in the ICU and decreased mortality rates in severe COVID-19 patients without significant side effects (73). However, further controlled trials are necessary to confirm the effectiveness of Anakinra.
Similarly, in severe Dengue virus inflammatory disease, there is significant upregulation of multiple cytokines compared to healthy controls, and lesions can be diminished by blocking some of these with specific mAbs (63, 74). In H5N1 FLU, patients show elevated levels of IL-6, IL-8, IL-1β, IFN-γ, TNF-α, and the soluble IL-2 receptor, but the outcome of blocking one or more of these cytokines needs to be evaluated in humans. However, when tested in mice, blockade of TNF-α and IL-1β attenuated lung pathology after FLU infection (75, 76). Targeting some chemokines is also a useful way to diminish the consequences of inflammatory viral lesions. In a study using a mouse model of chronic obstructive pulmonary disease caused by H1N1 FLU infection, treatment with the CCR5 chemokine antagonist maraviroc led to a significant reduction in lung pathology, with a concomitant reduction in the numbers of infiltrating neutrophils and macrophages in lung airways, as well as increased survival of mice (77). In the context of COVID-19 infection, terminally ill patients showed restored plasma IL-6, CD4, and CD8 responses after receiving two doses of leronlimab, a CCR5-specific IgG4 mAb, in the ICU while on mechanical ventilation. Although lacking a control group, the results suggest that anti-CCR5 treatment significantly reduced inflammatory reactions compared to baseline in plasma samples, which might have prevented pulmonary pro-inflammatory leukocyte infiltration (78). Supporting these findings, a clinical trial (registry NCT04347239) comparing anti-CCR5 mAb treatment with dexamethasone as standard care or placebo resulted in significantly increased survival (79). Along a similar line, in ferret models of FLU, inhibiting CXCL10 activity with the CXCR3 inhibitor AMG487, which blocks its signaling, increased survival length and diminished lung pathology (79). The study also reported a reduction in viral load in the lungs, which might be correlated with the establishment of effective antiviral responses in H5N1-infected, CXCR3 antagonist-treated ferrets (80). Other research has tested blocking CCR2 using the inhibitor PF-04178903 in mice infected with H1N1 FLU. The results showed that CCR2 blockade reduced mortality and clinical signs without altering viral titers, suggesting that CCR2 antagonists could potentially serve as an effective therapy against FLU-induced pathogenesis (81).
In this section, we have evaluated the prospect of managing viral inflammatory lesions by changing the function of innate aspects of immunity. Few approaches that target and change aspects of innate immunity can be applied in practical situations, but strategies have succeeded in model systems, although they are less effective when used to counteract already established lesions. The most effective practical therapies are those that counteract inflammatory cytokines and chemokines, most of which are proprietary humanized mAbs and so are very expensive to use. Nevertheless, some have been valuable therapies to control inflammatory lesions in COVID-19 patients as well as in severe Dengue. Conceivably, lesion control could also be achieved by administering anti-inflammatory cytokines such as IL-10 and TGF-β, but these have a short half-life and might be more effective if delivered directly to inflammatory lesions. Conceivable problems with delivering cytokines could be overcome by using half-life-extended fusion cytokine proteins, since this approach has been successful in mice using fusion proteins that deliver IL-10 to treat solid tumors (82). We anticipate that chemical reagents that can change the functional type of macrophages in lesions could be in the pipeline, as could ligands that act on TLRs and inflammasomes. For example, the TLR7 agonist Imiquimod is dispensed topically to treat genital warts caused by papillomavirus infection (83). In addition, several mouse models have shown the value of approaches that target TLRs and inflammasomes (84-87), although few if any studies started therapy when significant lesions were already present.
Minimizing inflammatory reactions by manipulating the activity of adaptive immune participants
The idea that the lesions that manifest in some viral infections represent immunopathological reactions arose largely from pathogenesis studies on the non-cytopathic virus Lymphocytic choriomeningitis virus (LCMV) by Mims and Blanden more than 50 years ago. As another pioneer in the LCMV field, Michael Oldstone, liked to claim, all major mechanistic discoveries in viral pathogenesis, and many in immunobiology in general, came from studies using LCMV (88). With this infection, the choriomeningitis is a consequence of immunopathology, with CD8 T cells orchestrating the lesions, and the glomerulonephritis that often occurs is a lesion resulting from the trapping of immune complexes that cause an inflammatory reaction (89), a topic rarely studied by contemporary investigators. Furthermore, studies using LCMV revealed how T cells recognize antigens, for which Doherty and Zinkernagel were awarded the Nobel prize in 1996 (90). The idea that all instances of viral immunopathology involved CD8 T cells gained gravity, but it seems likely that in human viral immunopathologies either CD4+ Th1 or Th17 T cells are more often involved in directing the inflammatory lesions. Studies on LCMV clearly showed that CD8 cells cause tissue damage mainly by directly destroying infected cells, but when CD4 cells are the orchestrators the tissue damage is usually indirect and involves the release of mediators that recruit a range of non-lymphoid pro-inflammatory cells and tissue-damaging activities (89). In both situations, controlling the lesion severity should be possible if the orchestrating T cells are removed, or their function and products inhibited, such as by cells or proteins with regulatory function, as is discussed in a later section. Therapies directed against T cell orchestrators of lesions have proven successful in many mouse model systems of viral immunopathology. These include HSV-induced stromal keratitis studied by our group (91), Theiler's virus-induced neuropathology (92), Coxsackie virus-induced myocarditis (93), West Nile fever virus lesions (94) and several others (95, 96). However, in the models, a wide range of sophisticated approaches can be used. These include several in vivo genetically engineered systems that can directly implicate one or another cell type, numerous specific mAbs that block cells or cytokines and their receptors, and adoptive cell transfer strategies with intact and gene-modified cells, along with some drugs and small molecule inhibitors that selectively block the activity of a particular pro-inflammatory cell type. We will not review these many observations made in model systems since most of the experimental maneuvers could not be applied to rebalance and control a clinical situation of viral immunopathology. However, in Table 2 we list some general approaches used in model systems that have succeeded in defining the participation of critical cell types and their products that mediate immunopathology. In addition, many small uncontrolled therapies directed at T cells and their products have been explored to limit the inflammatory stage of COVID-19. For example, a retrospective study blocking Th17 T cells with the anti-IL-17 mAb netakimab proved effective and was well tolerated (101). In addition, a trial in Bangladesh that combined an anti-IL-17A mAb with the JAK inhibitor baricitinib was effective against COVID-19 respiratory disease, although the therapy was followed by a higher frequency of secondary infections than in controls (102). Moreover, as discussed in the previous section, mitigation of the severity of COVID-19 lesions has also been achieved using mAbs against some cytokines produced by inflammatory T cells as well as by innate cell types.
There are some general approaches that have achieved efficacy in model systems and could be translatable. One is to use small molecule inhibitors that can specifically target pro-inflammatory T cells and disarm their function (104-106). For example, the small molecules CQMU151 and CQMU152 target the transcription factor RORγt needed for pro-inflammatory Th17 T cells to function (106). Furthermore, therapeutic administration of CQMU151 and CQMU152 attenuated the clinical severity of experimental autoimmune uveitis, experimental autoimmune encephalomyelitis and type 1 diabetes in mice (106). However, these drugs have not been tested as a means to counter ongoing viral immunopathology. There are other druggable targets on T cells that might translate from successful results in model systems. For example, treating ongoing herpetic ocular lesions with the DNA methyltransferase inhibitor 5-azacytidine diminished lesions, although the effect was more to expand the suppressive function of regulatory T cells (Treg) than to inhibit pro-inflammatory T cells (123). Also of potential value are antibiotics derived from Streptomyces that were shown to inhibit ongoing autoimmune lesions mediated by Th17 T cells (124). Another strategy is to target molecules such as lymphotoxin alpha expressed by Th1 and Th17 cells (125). It was shown that therapeutic administration of a lymphotoxin alpha blocking antibody given after infection attenuated the severity of ocular lesions (126). Similarly, therapeutic administration of the drug 2,3,7,8-tetrachlorodibenzo-p-dioxin, which activates the transcription factor aryl hydrocarbon receptor and inhibits Th1 and Th17 cells, but also expands T cells with regulatory activity, diminished the severity of herpetic ocular lesions (103), an example of a successful immune rebalancing scenario. There are also reports of elevated levels of immune complexes during chronic LCMV infection, with such immune complexes impairing otherwise protective antibody effector functions mediated by Fcγ-receptor (FcγR) activity (127, 128). This raises a challenge in chronic virus infections for testing antibodies whose effect is FcγR dependent.
Another, perhaps longshot, approach to rebalance and control a viral inflammatory reaction came from the Iwasaki lab (129). They advocated a so-called prime and pull approach to control HSV inflammatory reactions in the genital tract (129). Priming meant virus immunization in their animal model system, and pull meant subsequently using chemokines to attract immune T cells to the infection site to resolve the lesions. Extensions of this idea have used more acceptable pulling agents such as the non-toxic aminoglycoside antibiotic neomycin (130). This idea was also advocated for use to counteract human genital HSV lesions, where persons are already lifelong latently HSV-infected and hence primed. The pulling agent advocated was topical application of the TLR7 agonist Imiquimod, an approach shown to be effective in a guinea pig model (131), with Imiquimod approved at least for external topical treatment of human warts (83). Conceivably, the prime and pull approach may be tried in the clinic to control troublesome recurrent herpetic inflammatory lesions.
Finally, an approach to rebalance the role of adaptive immune cells in an inflammatory viral infection is to manipulate the composition of the microbiome at surface sites. Thus, largely from studies done on controlling autoimmunity, it has become evident that the composition of the microbiome, particularly in the intestinal tract, can influence the extent of inflammatory responses mediated by T cells (132). Accordingly, the dominance of certain microbes will favor the systemic induction of Th17 T cells and hence increase the incidence and severity of some AIDs, and likely too of viral inflammatory lesions (100). However, the predominance of other microbes favors the induction of regulatory T cells, which can suppress inflammatory reactions (133). Manipulating the microbiome composition, which can be achieved most conveniently by dietary measures, holds high promise to modulate the development and severity of immunoinflammatory diseases. Unfortunately, the approach has more value to prevent an inflammatory problem than being an effective way to manage established chronic lesions. More mention of this topic is made in the section discussing metabolism.
[Table content: using T cell exhaustion/checkpoint therapy to reverse T cell exhaustion, thereby enhancing the immune response to clear the infection. PD-1 blockade was effective on CXCR5+ progenitor Tex, inducing proliferation and cytokine production to clear LCMV infection, but was without effect on terminal Tex (118). Dual blockade of Tim-3 and PD-1, or combining PD-1 blockade with an IL-2R agonist, substantially enhanced virus-specific CD8 T cell responses, increasing the numbers and cytokine production compared to single checkpoint blockade in LCMV infection (119-121). The combination of PD-L1 blockade with 4-1BB costimulation led to significantly improved antiviral CD8 T cell responses in chronic LCMV infection (122).]
This section reviewed approaches targeting adaptive immune components to rebalance immune responsiveness so as to mitigate virus-induced tissue damage. Numerous strategies showed value when tested in model systems, but in most cases these studies were done in a way that would not meet the challenge of rebalancing the pattern of immune responsiveness in an established clinical situation in humans or companion animals. However, control of the immunopathological stage of COVID-19 with mAbs to T cell subsets and cytokines has shown promise, but clearly more translational research is merited to develop practical approaches to rebalance the participation of adaptive immune participants to curtail the consequences of viral immunopathology.
Rebalancing reactions by expanding the activity of regulatory mechanisms
The late and much missed Dick Gershon popularized the concept in the 70s that suppressor cells could put a brake on immune responses and constrain their over-reaction (134). Suppressor cells faded from fashion, largely due to the lack of markers for their unambiguous identification. However, the idea came back with a vengeance in the late 90s when such markers were discovered by groups at NIH and in Japan, and the cells were renamed regulatory cells, which were T lymphocytes (135, 136). The regulatory T cells (Treg) were shown to express CD4 and the high affinity subunit of the IL-2 receptor (alpha chain, also known as CD25) on their surface, and they accounted for around 5-10% of the total CD4+ T cells in naïve mice, as well as in healthy humans (135). Subsequently, a more reliable identifier, the transcription factor FoxP3 that controls some of the regulatory activities, was discovered (137). With the description of a canonical transcription factor driving the differentiation and function of these cells, their further genetic, molecular, and biochemical analysis became possible, and this led many to believe that such cells could be used for managing inflammatory conditions. Whereas FoxP3+ Treg remain the vanilla flavor, as Shevach later described (138), we now have almost as many flavors of regulatory cells as we have ice creams in parlors. In addition to the activity of regulatory cells, the extent of immune reactions can be limited by several inhibitory cytokines, in particular IL-10, TGF-β and IL-35 (139, 140). One well studied type of regulatory T cell, the so-called Tr1 cell, is FoxP3 negative and produces the anti-inflammatory cytokine IL-10 (141). Of its several activities, IL-10 can downregulate class II MHC and can also interfere with the NF-κB pathway to effect immunosuppressive functions (141). Unlike FoxP3+ Treg, such cells may not require physical contact with the effectors to cause immunosuppression. Regulatory cells can also have innate immune features that mainly function during the early phase of viral encounter, or be adaptive antigen-specific cells relevant in controlling excessive inflammatory reactions.
When Treg became an accepted part of immunobiology, the focus was on their role in constraining autoimmune lesions, and this role could be firmly established following identification of FoxP3 as their canonical transcription factor. Thus, naturally occurring genetic defects in FoxP3, as observed in scurfy mice as well as in humans with the Immunodysregulation polyendocrinopathy enteropathy X-linked syndrome, result in severe multiorgan autoimmune disease (142). In addition, when FoxP3 was removed, as could be done in model systems either by a gene knockout approach or by conditional deletion with diphtheria toxin in transgenic FoxP3-DTR mice, multiple inflammatory and autoimmune diseases resulted (142, 143). Subsequently, it became evident that the function of Treg was also involved in controlling the extent of inflammatory reactions in viral infections (see Table 3) and that their over-activity could be detrimental in many cancers (156). With respect to chronic viral infections, many studies demonstrated that lesions become more severe if cells with regulatory activity, most commonly CD4+FoxP3+ T cells, were absent or depleted (146). Our own laboratory made the initial observations for a viral infection by showing that immunity to HSV was influenced by Treg, and subsequently that the ocular inflammatory lesions caused by HSV were more damaging if Treg were depleted (144, 146). Results with other viral inflammatory lesions told a similar story, but FoxP3+ Treg were not the only cell type involved in limiting the inflammatory reactions (157). In some viral inflammatory lesions, the regulatory cells were identified as CD8+ T cells, and in others as Tr1 CD4+ T cells that produce IL-10 (158). In addition, FoxP3+ Treg are themselves heterogeneous, falling into two major categories: so-called natural Treg (nTreg) that derive from the thymus and mainly react with self-antigens, and induced Treg (iTreg) that recognize exogenous antigens, such as viral antigens. The latter group are those mostly involved in limiting inflammatory responses to viruses, but these too are heterogeneous, some expressing the transcription factor T-bet and being critically involved in regulating Th1 effector cells (159). Acquisition by Treg of a phenotype similar to that of effectors of different helper subsets has also been demonstrated, but why this is critical for exerting potent suppression is not known (160). A common feature of iTreg is that they can be plastic, losing their regulatory function and even converting to become pro-inflammatory, an event more likely to occur in an inflammatory environment. Hence, the challenge for therapy is both to find ways of expanding the representation of cells with regulatory activity and to maintain the function of those cells already expressing regulatory function. The issue is how we achieve these objectives, and whether it can be done in clinical situations, such as when virus-induced inflammatory lesions are already present.
There have been some drug and biological approaches described that do succeed in preferentially expanding Treg in vivo, at least in model systems (Figure 3) (161, 162). For example, administering galectin molecules, such as Galectin-1 or Galectin-9, expands Treg for reasons that remain unclear. Our group used this approach to diminish the severity of herpetic ocular lesions and correlated the success with a change in the balance of T cells that increased the frequency of Treg (163). Other drugs reported to expand Treg include rapamycin, retinoic acid, glatiramer acetate and FTY720, but these compounds have mainly been evaluated to limit autoimmune disease lesions. They merit testing in viral model systems and perhaps also in clinical situations (161). A biological approach that caused excitement was that Treg could be expanded using immune complexes of IL-2 and an anti-IL-2 mAb (clone JES6-1), but reports of its success in limiting an inflammatory reaction caused by a viral infection are not available (164). The subset of T cells expanded by immune complexes can be critically dependent on the clone of anti-IL-2 mAb used. A different clone, S4B6, when injected in vivo expanded virus-specific CD8+ T cells in HSV-infected animals and not Treg (165). In addition, complexes with clone S4B6 also promoted LCMV-reactive CD8+ T cells, as demonstrated using an adoptive transfer approach (164). The paradoxical effects of the two clones of anti-IL-2 antibodies complexed with the cytokine were explained by structural analysis. While the cytokine in complex with the S4B6 clone preferentially binds to IL-2Rβ and IL-2Rγ, which are predominantly present on effector cells, the JES6-1 clone complexed with IL-2 becomes dissociated from the complex, facilitating the interaction of IL-2 with all the subunits of IL-2R, including the high affinity α-chain (162). As Treg preferentially express the α-chain of the receptor, the cytokine is utilized by such cells more efficiently, with the complexes increasing the bioavailability of IL-2 (162).
Another promising biological approach to induce Treg was slow release of antigen delivered via osmotic pumps, which had the advantage of inducing antigen-specific Treg (166). Such an approach could be more clinically relevant with a more acceptable delivery system. Conceivably, incorporating antigens in skin patches or intradermal implants to efficiently expand Treg of a required antigen specificity could be an effective strategy, but this needs to be evaluated. Chimeric antigen receptor-expressing Treg, as can be obtained by genome editing techniques such as CRISPR/Cas9 (167), may also merit a trial. Further modification of such Treg to stably express transcription factors such as FoxP3 and the high affinity IL-2 receptor α-chain as constitutive modules to enhance their suppressive function represents a potential strategy to mitigate virus-induced immunopathology. However, enthusiasm for and application of Treg-expanding approaches seems to have waned because of the plastic nature of the expanded cells.
[Table 3: Studies targeting Treg cells — Outcomes observed in vivo — Ref]
Whether a cell exhibits plasticity and can change its phenotype is largely governed by epigenetic modification (168). Therefore, drugs such as trichostatin A or valproic acid, or inhibitors of DNA methyltransferases such as Dnmt1, that can modify the epigenome of differentiating or already committed cells could be useful (169). Such chemicals not only promote the de novo conversion of non-Treg into Treg, but also stabilize the expression of FoxP3 in already committed cells. However, selectivity and specificity are always challenges with several such approaches, and the epigenetic modifiers are no exception. The composition of the milieu in which APCs and T cells engage, as well as how the antigen is delivered to APCs, could result in the induction of either pro- or anti-inflammatory cells. The composition of the milieu could be altered by several means, such as the inclusion of neutralizing antibodies against pro-inflammatory cytokines, injecting anti-inflammatory cytokines such as IL-10 or TGF-β, or using immunosuppressive chemicals such as rapamycin or dexamethasone to induce tolerogenic APCs with intact antigen-presentation capability to expand or induce Treg of different types (170). A DC therapy approach could also be used, wherein cells isolated either from PBMCs or bone marrow of patients are exposed to such reagents in the presence of antigens ex vivo. The ex vivo primed APCs are then transferred back into the patient to expand antigen-specific Treg (171). Nano-formulations of organic materials such as liposomes and polymers, including poly lactic-co-glycolic acid, polylactide, poly(β-amino esters) and polyethylene glycol, could also be generated to incorporate antigens for delivery to APCs under tolerogenic conditions (171). For example, nanotechnology-based drug delivery approaches offer potential for stimulating or suppressing immune cell responses. Thus, functionalized carbon nanotubes activate DC by triggering TLR7 signaling pathways that lead to pro-inflammatory cytokine production and increased co-stimulatory molecule expression (172). On the other hand, native cellulose nanofibrils induce immune tolerance in DC, possibly by interacting with CD209 and actin filaments, leading to altered T cell responses characterized by weaker Th1 and Th17 responses, but stronger Th2 and regulatory T cell responses (173). A novel cellulose nanofibril-reinforced hydrogel developed by Yang et al. (174) uses a pH-responsive drug release system, which may involve interactions with immune cells in the wound healing process. Additionally, Tomić et al. (175) reported that functionalized cellulose nanofibrils induce tolerogenic properties in DC, which may suppress allogeneic T cell proliferation through mechanisms yet to be fully specified. These findings demonstrate the potential of nanotechnology to manipulate immune responses through targeted drug delivery, offering new therapeutic avenues for immune-related conditions that may be beneficial to alleviate tissue damage after viral infection.
The injection of antibodies against pro-inflammatory cytokines such as IL-6, TNF-α and IL-17 is routinely used to manage inflammatory diseases (176), but the cost involved with such regimens represents a major prohibitive step. Approaches wherein replicating microbes, preferably commensal bacteria, are modified to express and secrete such biologicals in situ could reduce the expense significantly. In fact, the feasibility of such approaches has been demonstrated, although not in a viral disease (177). More recently, a modified strain of E. coli was engineered to secrete a nanobody against the cytokine TNF-α to dampen inflammatory responses in the gut. With the pro-inflammatory cytokine neutralized, regulatory mechanisms could operate effectively (178). Identifying such microbes, the ease of their manipulation, the disease condition being targeted, and regulatory compliance would all need to be factored in should such approaches be pursued in a practical situation. The approach nonetheless opens new avenues to produce neutralizing antibodies against cytokines in situ and obviates the need to inject purified antibodies, which are expensive and challenging to employ in clinical situations.
In addition to employing cytokine-neutralizing antibodies, some of the host's metabolites could shift the balance from pro-inflammatory T cells such as Th1 or Th17 towards Treg (179). For example, bile acid metabolites, such as the lithocholic acid (LCA) derivatives 3-oxoLCA and isoalloLCA, reciprocally regulate the differentiation of Treg and Th17 cells and, when administered to mice, served to reduce pro-inflammatory Th17 cells but increased Treg representation (179). Similarly, short chain fatty acids, such as sodium propionate, helped to resolve ocular lesions caused by HSV, potentially by affecting several cell types among innate as well as adaptive immune participants (155). In animals fed sodium propionate, Treg outnumbered T effectors (155). Approaches that modify cellular metabolism are cost effective and easy to apply, and therefore could have translational value, as is discussed in a later section.
Apart from Treg, other regulatory mechanisms, such as myeloid-derived suppressor cells (MDSCs), also exhibit potent suppressive activity. Whether or not MDSCs are induced during the early phase of a virus infection could help decide the pathogenesis of certain viral infections, such as LCMV in mice. Infection with clone 13 of LCMV, which activates and expands MDSCs early after infection, results in chronic infection, while the Armstrong strain of LCMV fails to efficiently signal MDSCs and the infection resolves favorably (180). Furthermore, depletion of MDSCs generated efficient antiviral CD8+ T cell responses in clone 13-infected animals, although these maneuvers had to be done at the initiation stage of infection. Other infections, such as HIV and HSV, can also activate and expand MDSCs early after infection (180). For example, our group showed that therapy with MDSCs differentiated ex vivo in the presence of cytokines such as IL-6, IL-4 and GM-CSF controlled the severity of herpetic ocular lesions when using a therapeutic design (181). Conceivably, strategies to expand MDSCs in vivo could alleviate inflammatory responses by their direct action on effectors, as well as by expanding endogenous Treg responses.
In conclusion, rebalancing inflammatory reactions to expand regulatory mechanisms and inhibit pro-inflammatory components represents a major objective in minimizing the consequences of any viral immunoinflammatory process. There are many different forms of regulation and accessible means to expand them and dampen lesions, at least in model systems. However, few if any are ready for routine use in the clinic to replace or support, for example, the use of anti-inflammatory drugs. For the long-term control of persistent chronic lesions, there could conceivably be a place for vaccines based on the mRNA format successfully used in COVID-19 vaccines (182). For example, this format could be designed to induce regulatory mediators and could also include the mRNA sequences of viral epitopes that expand virus-specific Treg. Such an approach merits evaluation, and conceivably it might become a practical procedure to achieve the rebalance of Treg and pro-inflammatory T cells that we advocate is needed to manage some chronic virus-induced inflammatory lesions.
Rebalancing inflammatory reactions by restoring lost effector cell functions
T cell activation depends on signals received from engagement of their T cell receptors, signals from additional receptors binding to costimulatory molecules such as the CD80/86 ligands on antigen presenting cells, as well as signals from cytokines such as IL-2. Excessive activation of T cells is avoided by signaling induced by inhibitory receptors that include CTLA-4, PD-1, TIM-3, LAG-3 and some others. In some circumstances, the activity of the inhibitory receptors becomes predominant and this serves to impair the protective function of T cells. The effect happens in several cancers, but also occurs in many chronic viral infections, as was first discovered in the LCMV model of chronic infection (183), and is now referred to as immune exhaustion (184). Fortunately, the protective function of T cells can often be restored by administering mAbs that block the function of one or more inhibitory receptors, and this checkpoint blockade therapy has become a valuable strategy to control some cancers (185). There is abundant evidence that the T cell exhaustion phenotype can be demonstrated in several human chronic viral infections, which infers that checkpoint blockade could be a valuable means to control such infections, although this has not been formally demonstrated in a clinical situation (186). During immune exhaustion, a gradual increase in the expression of multiple inhibitory receptors occurs, and the T cells lose functions such as the production of IFN-γ, TNF-α and IL-2, along with a compromised ability to control model chronic virus infections. The actual mechanisms involved in immune exhaustion have been a topic of intensive study using model systems, and it is expected this will translate to therapeutic use in the clinic. For example, it is now clear that exhausted CD8 T cells (Tex) may consist of two subpopulations: progenitor Tex, which are CXCR5+ TCF-1+ PD-1int, and terminal Tex, which are CXCR5- TCF-1- PD-1hi. The CXCR5+ progenitor Tex shared transcriptional signatures with memory precursor CD8 T cells and hematopoietic stem cell early progenitors, while the CXCR5- terminal Tex shared transcriptional signatures with terminal effector CD8 T cells and mature hematopoietic cells (118). Of relevance, PD-1 blockade treatment acted on CXCR5+ progenitor Tex, which underwent vigorous proliferation, produced cytokines and conferred the therapeutic benefit of PD-1 blockade therapy to clear chronic LCMV infection. In contrast, terminal Tex did not respond to PD-1 blockade therapy (118). Furthermore, adoptive transfer of CXCR5+ progenitor Tex, but not CXCR5- terminal Tex, was effective in controlling chronic LCMV infection. Thus, approaches that target CXCR5+ progenitor Tex may be more relevant to control chronic virus infections. Additionally, it has also been observed that simultaneously blocking more than one inhibitory receptor mechanism is more effective than single checkpoint inhibitor blockade (187). For example, combinatorial blockade of PD-1 and Tim-3 was synergistic in curtailing viremia during chronic LCMV infection (119). In addition, combining checkpoint blockade with other therapies may also achieve greater success than single therapy. For example, combining PD-1 blockade therapy with the provision of IL-2 in chronic LCMV infection was synergistic and acted to reverse CD8 T cell exhaustion by acting primarily on CXCR5+ progenitor Tex (120, 121). Similarly, combining 4-1BB costimulation with PD-1 pathway blockade led to potent suppression of viremia and more effective control of chronic LCMV infection (122).
Although the majority of studies on immune exhaustion focus on CD8+ T cells, other cell types are also subject to immune exhaustion. These include CD4 T cells (188) and NK cells (189). As regards the latter, PD-1 expression was increased on NK cells from HIV infected persons (190), some of whom had Kaposi sarcoma (191). Furthermore, Tim-3 was upregulated on NK cells from patients with chronic hepatitis B virus infection (192). In chronic infections, however, the outcome of receptor blockade therapy has not been assessed for effects on NK cell function, although such therapy has been recorded in some tumor systems, with restored NK cell activity correlating with improved tumor control (193).
Overall, we are optimistic that checkpoint blockade will be used in the clinic to facilitate the control of some chronic viral lesions, although more research is needed to find the optimal strategies to use. Obvious candidates are the chronic liver pathologies caused by hepatitis viruses, particularly those caused by HBV, where immune exhaustion is known to occur (194) and where effective antiviral drugs, such as those available to control hepatitis caused by HCV, are lacking (186,195).
Rebalancing reactions by changing the microRNA environment
MicroRNAs (miRNAs) are small noncoding RNA sequences that exist in cells and are also found in many viruses. They are usually 20-22 nucleotides in length and act to silence mRNAs and mediate post-transcriptional regulation of gene expression (196). There are an estimated 2300 different miRNAs in human cells (197), and these influence a wide range of genes that control the biological activity of cells, including those that react to virus infections. Additionally, it is known that several viruses also encode one or more miRNAs and these too contribute to viral functions and can affect the pathogenesis of infection (198). Accordingly, during a virus infection changes in the expression levels of several miRNA species may occur, many of which act to affect the function of one or more cells of the innate and adaptive immune systems that respond to the infection. Moreover, it has become evident that manipulating the expression levels of one or more miRNAs, primarily host miRNAs and usually before or early after infection, can be a useful approach to change the outcome, such as minimizing tissue-damaging consequences.
MicroRNAs can act directly or indirectly to affect the ability of a virus to replicate and the extent of tissue damage that results from the infection. Some host miRNAs are known to influence viral gene translation and replication events as well as essential steps in viral infection, such as the expression of viral receptors. Other host miRNAs may also influence the nature of the host reaction made to the infection, which is of particular relevance in chronic viral infections. This raises the question of whether manipulating the expression levels of one or more miRNAs might be a practical therapeutic maneuver to minimize the extent of tissue damage caused by inflammatory reactions to viral infections, with any benefit explained by a rebalanced immune reaction. Several reports have described the consequences of changing the expression level of usually a single microRNA, either by increasing levels using synthetic mimics, or by reducing its presence through gene knockout or using specific antagomirs (109,199-201). Some of the more spectacular results were obtained by manipulating levels of miR-122, a molecule expressed predominantly in hepatic tissues and necessary for the replication of HCV in the liver. In a chimpanzee model, it was shown that blocking miR-122 with a locked nucleic acid-modified DNA phosphorothioate antisense oligonucleotide provided long lasting protection against chronic HCV infection (202). This approach was subsequently found to be safe and to reduce HCV RNA levels in humans (203), but highly effective direct antiviral drugs are now preferred to control HCV. Moreover, with the miR-122 blocking studies it was not clear if the favorable outcome correlated with a rebalanced immune response pattern, since such studies were not performed.
Many miRNAs affect the functions of immune cells (see Table 4) and changing the expression of such miRNAs can result in diminished lesions. For example, studies have shown that modulating miR-155, a molecule that affects several aspects of the inflammatory reaction, may change the severity of lesions. An early study from the Baltimore laboratory showed that stopping miR-155 expression by gene knockout resulted in protection from the induction of an autoimmune lesion in mice (204). This outcome was shown to correlate with a change in the pattern of immune responsiveness, with less induction of the lesion-producing proinflammatory Th17 and Th1 cell subsets (204). Another group also showed that silencing miR-155 led to less severe clinical consequences of experimental autoimmune encephalomyelitis (EAE) (110). A similar change in outcome was noted when comparing the extent of immunopathological damage to the eyes of mice caused by HSV infection. Thus, mice unable to produce miR-155 because of gene knockout, or normal mice in which miR-155 was blocked with specific antagomirs, developed less severe ocular lesions, an effect that correlated with a diminished proinflammatory CD4 T cell response (109). Unfortunately, however, the treated host was left more susceptible to other complications, since the virus usually disseminated to the brain causing encephalitis and death, indicating the potential downside of manipulating a microRNA with likely multiple targets of action (200). It would have been of interest to evaluate whether local blunting of miR-155 expression in only the eye could have achieved effective therapy.
Studies in mice have also shown that miR-155 is involved in the inflammatory reaction to FLU infection, with lung injury being diminished in miR-155 knockout mice (205). Other studies with model systems have shown a critical role for particular microRNAs. For example, in chronic LCMV infection miR-31 plays an influential role in chronic lesions. Mechanistically, it was found that miR-31 targeted Ppp6c, a negative regulator of IFN signaling, resulting in increased levels of checkpoint molecules such as PD-1 and consequent T cell dysfunction (210). This could mean that targeting miR-31 may provide a therapeutic strategy to rebalance the pattern of immunity to control chronic viral infections. Other studies show that enforced expression of miR-29a acts to counter CD8 T cell exhaustion and improves CD8 T cell function, implying that upregulating miR-29a may be another approach to diminish the consequences of some chronic viral infections (211). It also could be that microRNA manipulation is useful to influence some critical steps in viral pathogenesis, one of which could be angiogenesis. Thus, the angiogenesis process is influenced by several microRNAs (224). Our group showed that suppressing miR-132, which influences signaling by the angiogenic factor VEGF, led to diminished ocular lesions caused by HSV (225). Unfortunately, achieving success against pathological angiogenesis, especially in the eye, requires therapy during the development of pathological angiogenesis, since once present, as occurs in chronic lesions, its removal is highly problematic.
Perhaps unsurprisingly, several studies have been performed to record the role that microRNAs could be playing in COVID-19 pathogenesis. Several host microRNAs show changed expression (226,227), but of particular interest is whether microRNA manipulation therapy would be of value in helping to control the inflammatory stage of COVID-19 infection. On this issue, one report showed that exosomes containing miR-145 and miR-885 regulate thrombotic events in COVID-19 patients (228). In addition, animal models of infection have indicated that miR-155 inhibition can control the inflammatory effects of COVID-19 (229), as we mentioned is also the case in other viral immunopathologies. We anticipate that the future will see more studies evaluating how manipulating one or more microRNAs can help control COVID-19 infection.
In conclusion, we have discussed examples where changing single microRNAs was effective at changing the outcome of inflammatory viral infections. Examples of success are few and are largely confined to model systems. However, we are optimistic that the field is worthy of more investigation, which should also consider evaluating cocktails that change multiple microRNAs targeting different aspects of viral pathogenesis, and perhaps combining microRNA manipulation with other approaches that succeed in rebalancing the nature of immune responses to infections.
Rebalancing reactions by targeting metabolic pathway differences in inflammatory lesion participants
During the course of any virus infection, both the cells infected by the virus and the host cells that respond to the infection undergo metabolic reprogramming, to support the infection and to influence its outcome, respectively. When a virus infects a cell, several metabolic changes usually occur before new virions are produced, and modifying these changes provides an approach to reshape the impact of the infection (230). In the current review, we are focusing on viral infections where lesions are mainly the consequence of a host inflammatory response to the infection, raising the question of whether modulating one or more metabolic pathways represents a practical approach to limit the extent of lesion expression. The multiple cell types that respond to infection may show different metabolic signatures with respect to the pathways that are mainly reprogrammed. This opens up the prospect of targeting the metabolic pathways used by the more tissue-damaging cells with control measures that suppress their activity, or block their induction (Figure 4). Oftentimes, the main tissue damage is mediated by activated subsets of T cells, such as Th1 and Th17 cells, or M1 macrophages, which mainly metabolize glucose via the glycolysis pathway, which rapidly supplies their energy needs (231). Other cell types in the inflammatory reaction, such as Treg that can limit the extent of tissue damage, may derive their energy mainly from alternative pathways such as fatty acid oxidation and oxidative phosphorylation (OXPHOS) (232). In consequence, using drugs that target glycolysis can blunt the participation of proinflammatory cells and preserve the regulators, thus limiting the extent of tissue damage. This strategy of manipulating metabolic pathways to rebalance inflammatory reactions has been explored mostly to control autoimmune lesions and some cancers, but the approach has more recently been evaluated with some chronic viral induced lesions, as we reviewed recently (233).
Basically, there are two major strategies that could be used to manipulate metabolic events to counteract tissue damage caused by a viral infection. One approach is to inhibit critical metabolic pathways in the cells directly involved in mediating tissue damage, so disarming their ability to cause the damage (234). The other approach is to use metabolic modulators either before or during the development of the inflammatory reactions, with the objective of minimizing the induction of the immune components that cause the tissue damage while retaining the protective and regulatory aspects of the immune response (235). Most experimental work on metabolic modification to control viral immunoinflammatory lesions has used the latter approach.
The objective of diminishing the extent of ongoing viral reactive inflammatory reactions is usually achieved with a range of anti-inflammatory drugs, such as steroids and other anti-inflammatory reagents, which do not target any particular metabolic pathway (236). It is often the case that in active immunoinflammatory lesions, the cells that orchestrate and participate in lesions are subsets of T cells and M1 type macrophages that are activated and need an immediate source of energy, which is mainly supplied by metabolizing glucose via the glycolysis pathway (234). Accordingly, the use of drugs such as 2-deoxy-D-glucose (2DG), which cannot be further metabolized by downstream enzymes in the glycolytic pathway, could be merited. However, when 2DG has been used to inhibit viral immunoinflammatory lesions, it has usually been to control the development of inflammatory cell induction and mediator production rather than as a treatment for established lesions (114).
There are potential approaches to achieve inflammation disarming therapy, one of which could be to target the AMPK-mediated mTOR pathway that acts as a metabolic regulator during the induction of inflammatory cytokines such as IFN-γ, TNF-α and some chemokines (237). The drug rapamycin could achieve this effect, as has been shown in the case of some autoimmune lesions and cancers (238). A prior study from our group showed that rapamycin administration markedly diminished the severity of herpetic ocular immunoinflammatory lesions, but did not evaluate if rapamycin therapy could diminish the severity of already established lesions (239). There are also reports that the drug metformin, which acts primarily to inhibit energy metabolism via the OXPHOS pathway in mitochondria, likely through the activation of AMPK, can be useful to attenuate the severity of some autoimmune disease lesions (240). Moreover, unconfirmed reports claim that metformin can diminish the severity of inflammatory lesions caused by COVID-19 (241). In line with this, some patients who after COVID-19 infection suffer from the troublesome syndrome Long-COVID have benefitted from therapy with metformin (242).
Other approaches worth exploring to disarm inflammatory cells during viral immunoinflammatory lesions include using drugs such as GW9662 that modulate PPAR-γ, which is involved in regulating glucose and lipid metabolism as well as, as was mentioned previously, some genes involved in inflammation (243). The metabolism-relevant genes include FABP4, CD36 and adiponectin (responsible for lipid uptake), FASN (lipid synthesis), and GLUT4 and pyruvate dehydrogenase kinase 4, which are responsible for glucose metabolism and fatty acid oxidation. When PPAR-γ is activated, it promotes the uptake and storage of fatty acids and glucose in immune cells, such as macrophages, leading to a shift towards the M2 IL-10 producing anti-inflammatory phenotype (243).
FIGURE 4 Metabolic targets and immunotherapeutic approaches can have an impact on immune responses. Targeting the glycolytic pathway with 2DG can influence the proinflammatory response of immune cells, including Th1, Th17, and M1 macrophages. PPAR-γ, which regulates fatty acid metabolism, can also modulate glucose uptake and glycolysis. The use of agonists or inhibitors can affect M2 macrophages and CD8 T cell responses. PPAR-γ also plays a role in Treg that utilize fatty acid oxidation (FAO). Inhibiting carnitine palmitoyltransferase 1 (CPT1) with etomoxir prevents the transport of long-chain fatty acids into the mitochondria, thereby impairing the suppressive capabilities of Treg cells. The AMPK activator metformin promotes FAO while inhibiting oxidative phosphorylation (OXPHOS) through inhibition of complex I, which can restore immunosuppressive functions and reduce proinflammatory cell proliferation by indirectly inhibiting mTOR. Inhibitors of glutaminase, such as DON, or derivatives of itaconic acid like DMI and 4-OI, interfere with the tricarboxylic acid (TCA) cycle and can alleviate proinflammatory cell responses. The uptake of tryptophan, which directly affects the suppressive functions of Treg, can be reduced by the inhibitor 1-MT, which inhibits the IDO enzyme.
Conceivably, the upregulation of PPAR-γ, as can be achieved with agonist drug therapy, may result in diminished lesions. Such an effect was reported in the case of lung inflammatory lesions caused by the 2009 H1N1 pandemic FLU virus (244), indicating that the approach should be explored to disarm other viral inflammatory lesions.
The majority of observations supporting the notion that manipulating some aspect of metabolism can rebalance immune response patterns and alleviate the severity of viral immunopathology come from studies that changed the metabolic climate either before or early during the development of the viral tissue-damaging events. For example, there are many situations where the absence, for genetic, dietary or therapeutic reasons, of some metabolic activity may change the outcome of a virus infection. A contemporary example came from the recent COVID-19 pandemic, where it was well documented that those with diabetes and significant obesity suffered more severe immunoinflammatory lung lesions and often succumbed to the infection (245). Many patients were kept alive by using anti-inflammatory drugs and mAb against inflammatory cytokines. In addition, the frequent sequel to COVID-19 infection, Long-COVID, is suspected to be at least in part a metabolic problem, although its nature remains ill-defined and metabolic reprogramming is not currently used as a treatment modality (242).
Other metabolic changes that affect the expression of a viral infection include problems with tryptophan metabolism. Thus, if the essential amino acid tryptophan is depleted for some reason, one of its metabolites, kynurenine, accumulates, and this has immunosuppressive effects on immune control (246). One means by which tryptophan can be depleted is through the enzyme indoleamine 2,3-dioxygenase (IDO), which is induced by some viral infections and catalyzes the breakdown of tryptophan into kynurenine, leading to tryptophan depletion, kynurenine accumulation and suppressed immunity (247). Therefore, inhibiting the activity of IDO, or replenishing tryptophan, may be a therapeutic strategy for controlling the initial phases of a viral infection, as has been shown in experimental FLU virus infection (248). Curiously, animals unable to produce IDO, because of gene knockout or drug suppression, do exhibit less severe immunoinflammatory lesions in experimental FLU and RSV infections (248), but it is as yet unclear if manipulating tryptophan metabolism would succeed in suppressing already established viral inflammatory lesions.
Diet affects metabolism in several ways and diet can impact the response pattern to an infectious disease. Thus, in our own studies we showed that supplementing the diet with short chain fatty acids such as propionate (155) and butyrate could diminish the severity of the ocular inflammatory response to HSV, explained by a change in the ratio of proinflammatory and regulatory T cells to favor the latter in lesions. Similarly, studies on FLU by Trompette et al. showed that feeding mice butyrate as a dietary supplement increased their resistance, seemingly because the balance of their immune reactivity was shifted to favor a superior protective CD8 T cell response (249). In other viral systems, the composition of the diet can also influence the pathogenesis of infection. One favored model has been to compare the outcome of viral infections in animals fed high fat or low fat diets. For example, in a study of mice infected with H1N1 FLU virus, those fed a high-fat diet (HFD), which induced obesity, developed more severe inflammatory lung disease, higher levels of inflammatory cytokines, and higher mortality than those fed low-fat diets (250). Another group also showed that HFD led to increased levels of ROS and myeloperoxidase (an enzyme that indicates neutrophil activation) in lung homogenates compared to low-fat diet groups, an effect they correlated in part with a reduced NK cell response in HFD recipients (251). Other groups have linked the greater susceptibility of HFD recipients to an increased inflammatory neutrophil response, with neutrophils increasing in number by as much as 20-fold (252). One of the consequences of feeding certain diets, such as those high in unsaturated fats and calories, is that the adipose tissue may become proinflammatory and produce cytokines such as GM-CSF, IFN-γ and granzyme B, which in turn disrupts the balance of T cell induction, contributing to the tissue-damaging lesions that occur during chronic inflammation (253).
There is a strong suspicion that HFD and obesity are risk factors in humans for both severe FLU and COVID-19 infection, with the underlying mechanism related to dysregulation of the immune response and a delay in tissue healing and recovery, but further mechanistic studies are warranted to establish how these effects are mediated. There are also suggestions that many dietary supplements, such as amino acids, vitamins, minerals, and omega-3 fatty acids, may be able to support effective immune functions and potentially reduce the severity of viral infections. However, clinical trials in this area are limited, and more research is needed to confirm the many claims that are made. There does appear to be a strong case that vitamin A (VitA), which is necessary for immune cells such as T and B lymphocytes to function normally, is useful (254), and some advocate VitA supplements during measles infection to reduce its severity and the duration of measles-related symptoms. A similar benefit was advocated for COVID-19 (255), but further evidence is still needed. Other studies have shown that diets supplemented with nutrients such as L-glutamine, vitamin C, omega-3 fatty acid derivatives or zinc may all improve the outcome in COVID-19 patients (256-259). It is not clear, however, if supplementing diets with glutamine is always a useful approach to control viral inflammation. Thus, two groups showed that suppressing glutamine metabolism with the inhibitor 6-diazo-5-oxo-L-norleucine attenuated viral immunoinflammatory lesions caused by HSV and Sindbis virus infections (260,261). These observations correlated with a marked reduction in the proinflammatory T cell response to the infections.
Some of the more convincing data showing that manipulating metabolic events is a valuable approach to rebalance the pattern of an inflammatory response to virus infection came from using drugs that change metabolic events when given during the course of an infection. For example, Varanasi et al. showed that inhibition of glucose metabolism with 2DG administered in the early stage of HSV ocular infection markedly inhibited the immunoinflammatory lesions of stromal keratitis (114). The beneficial outcome correlated with a rebalance of cell type representation in lesions, with proinflammatory T cells markedly reduced in numbers but Treg unaffected and hence dominant in lesions. Accordingly, 2DG therapy was a valuable approach to control an immunoinflammatory viral lesion and acted by rebalancing the response pattern. However, using 2DG to control viral immunoinflammatory lesions can result in complications, as observed initially by Medzhitov and coworkers using a FLU virus model in mice (262). In this instance, administering 2DG to FLU infected mice resulted in mortality. Our group also showed that controlling herpetic lesions with 2DG therapy was potentially hazardous (114). Thus, HSV is a neurotropic virus with severe, often lethal consequences if the virus enters the CNS, causing the syndrome herpes simplex encephalitis (HSE). We showed that therapy with 2DG started when replicating virus was still present often led to HSE and a lethal outcome (114). The result appeared to be the consequence of a failure of the inflammatory reaction in the peripheral nervous system to prevent viral dissemination to the brain (263). Other drugs that affect energy metabolism were subsequently studied. These included metformin, a drug that inhibits OXPHOS in the mitochondria, and etomoxir, which inhibits energy metabolism derived from fatty acid oxidation. Both drugs successfully inhibited the severity of ocular lesions (264). However, neither drug caused HSE, a result explained by minimal inhibitory effects on the protective inflammatory reactions that occur in the local peripheral nerve ganglion and confine HSV to the ganglion in the form of a latent infection (264).
There is some interest in using itaconic acid, which affects energy metabolism by inhibiting the enzyme succinate dehydrogenase, which participates in the TCA cycle. In a study of the inflammatory reaction to FLU infection in mice, itaconate and its derivatives (dimethyl itaconate and 4-octyl itaconate) given daily from the onset of infection reduced lung lesions and protected from death, effects perhaps mediated by suppression of IFN-γ and other inflammatory cytokines (265).
So far, there are minimal studies describing the use of metabolic reprogramming to control inflammatory viral infections in human disease, but the approach does merit further exploration. The future for using metabolic reprogramming to control viral immunopathogenesis may lie in using mixtures of drugs that affect different metabolic pathways, perhaps administering different modulator cocktails at varying stages of infection, and perhaps combining them with the other rebalancing approaches that were discussed.
Conclusions
The modern world's success with controlling viral infectious diseases has been spectacular whenever effective vaccines are available, and impressive too when the virus infection can be managed successfully with antiviral drugs, as occurs in the rich world with HIV and HCV. In this review, we have discussed how we might control those viral infections where the lesions are mainly the consequence of a host reactive response to the virus, with the lesions often being chronic. We advocate that when such lesions do occur, it is often the case that some components of host responsiveness are causing tissue damage while, at the same time, other ongoing reactions are anti-inflammatory and are in the process of alleviating the extent of lesions and facilitating resolution. This should provide an opportunity to rebalance the involvement of the damaging and protective participants and minimize the impact of the infection. This raises the question of how we might achieve such an objective, particularly when faced with an ongoing chronic viral infection in the clinic. We identified and discussed six different categories of host responses that could be manipulated to achieve our objective and described examples where success has been reported. In almost all instances of success, there was a caveat. Thus, most successful procedures were achieved using model animal infection systems and would be difficult, or perhaps impossible, to translate to clinical application. The second caveat was that experimental therapies more often than not perform the manipulations either before or very early after infection. However, in a practical clinical situation the problem requiring attention is usually an established lesion that needs to be counteracted. Nevertheless, progress is being made, and a silver lining of the COVID-19 pandemic, where the severe pulmonary lesions represent an example of the problem we seek to solve, was that many otherwise experimental procedures were evaluated and shown to be successful in containing inflammatory events, achieving what we would interpret to be immune rebalancing.
The first category of events discussed was the prospect of rebalancing host innate responses to infections, but few if any practical maneuvers were revealed. Innate influences mainly come into play during the initial stages of viral pathogenesis, and rebalancing such responses during clinical lesions is highly problematic. Some approaches succeed, such as blunting the effects of inflammatory cytokines, and others have been successful in model systems. Most notably, the latter include destroying active macrophages with clodronate-containing liposomes, or using chemical reagents that change cells from the M1 to the M2 type of macrophage. These approaches, however, are not yet approved for the clinic. We suggest that other means to block cytokines and chemokines are needed, one of which could be to construct mRNA vaccines to induce anti-cytokine responses, although turning off this therapy when no longer needed would be problematic.
The second and third categories of control measures discussed were to block the adaptive immune orchestrators of lesions, which are usually T cells, or to expand the activity of regulatory mechanisms. These can be highly effective rebalancing strategies in model systems, but have yet to find much value in the clinic. Our optimism is highest for finding practical approaches that will enhance Treg responses, especially those that are antigen specific and functionally stable.
The fourth category described ways to restore the function of formerly protective T cell responses that lose potency in a chronic inflammatory environment. This topic is referred to as immune checkpoint therapy, and this therapy has been highly successful in inhibiting some cancers. Immune exhaustion occurs in many chronic viral infections, and one of the successful therapies was discovered using a chronic viral disease model (LCMV). One expects immune checkpoint therapy to find a place in the clinic to rebalance immune reactivity in human chronic viral infections, but this has yet to happen. We are staying tuned! The issue of achieving immune rebalance and lesion control by manipulating the expression of one or more miRNAs was the fifth topic discussed. Again, an abundance of encouraging positive data has come from model animal studies, but as yet none has been applied to the clinic. We made the case that some aspects of viral pathogenesis may be more amenable to miRNA adjustment than others, with pathological angiogenesis topping our list. It might also be that different miRNAs need their expression changed at different stages of viral pathogenesis, and microRNA adjustment might be more effective using multiple reagents or combining miRNA manipulation with the other rebalancing approaches we have mentioned.
Finally, we advocated that manipulating metabolic pathways, or changing metabolism through dietary changes, could be a way of rebalancing immune reactivity. We described the accumulating success stories on this topic and remain highly enthusiastic about this objective. So far, however, experimentally infected mice are the main beneficiaries, but success for chronic infection control in humans could be just around the corner.
FIGURE 2 Some approaches tested clinically (A) and in in vivo model systems (B) to resolve inflammation. (A) Multiple virus infections present with a surge in cytokine levels contributing to inflammation and tissue damage. Blocking the activity of IL-1, IL-6, IL-17 or TNF-α conferred therapeutic benefit in COVID-19 patients and is likely useful in other viral pathologies involving cytokine storms (anti-IL-6 mAb: Siltuximab; anti-IL-6R mAb: Sarilumab and Tocilizumab; anti-IL-17A: Netakimab). (B) M1 or M2 macrophages differentially contribute to tissue damage after virus infection. Activation of PPAR-γ with 15d-PGJ2 or rosiglitazone and the resultant M2 cell expansion led to diminished lung pathology after influenza and RSV infection, respectively. Contrary results also exist, as M1 cell expansion using Baicalein led to a diminished inflammatory response to influenza. Thus, inducing polarization of macrophages can recalibrate the immune response after virus infection.
Strategy — outcome (reference):
- Transfer of CD4+CD25+ cells following CD4+CD25+ cell depletion — prevented autoimmune disease development in mice (135).
- Depletion of CD25+ Treg cells in mice using anti-CD25 treatment before infection — enhanced the numbers and functionality of HSV-1-specific CD8 T cells (144).
- Suppression of Treg cells by in vivo anti-GITR treatment during persistent Friend virus infection — improved IFN-γ secretion from adoptively transferred CD8+ T cells and diminished viral titers (145).
- Adoptive transfer of CD4+CD25+ Treg cells to SCID mice — attenuated the severity of immunopathological eye lesions in HSV-1-infected SCID mice (146).
- Depletion of CD4+CD25+ cells by negative selection from peripheral blood mononuclear cells (PBMCs) of HCV-infected humans — led to in vitro expansion of hepatitis C virus-specific, IFN-γ-expressing CD4+ and CD8+ T cells (147).
- Rapamycin-induced CD4+CD25+FoxP3+ Treg cells generated in vitro and adoptively transferred to allograft recipients — prevented allograft rejection in mice receiving allogeneic pancreatic islets (148).
- Depletion of Treg cells in DEREG mice with diphtheria toxin before West Nile virus infection — Treg-depleted mice experienced more severe symptoms and lethality than Treg-intact mice (149).
- Depletion of Treg cells in the DEREG model during experimental allergic airway inflammation — exacerbated airway inflammation (150).
- Depletion or boosting of Treg cells during experimental RSV infection in DEREG mice — Treg depletion before RSV infection increased disease severity and the inflammatory cellular response in the lungs, whereas enhancing Treg numbers via IL-2/anti-IL-2 complexes attenuated disease severity (151).
- Transient expansion of Treg cells with anti-CD28 mAb, or Treg depletion in the DEREG model, to study measles virus persistence in the CNS — Treg depletion enhanced virus-specific CD8+ effector T cells in the CNS, while Treg expansion resulted in virus spread to the brain (152).
- Depletion of Treg cells in DEREG mice using diphtheria toxin — caused severe corneal lesions in ocular HSV-1-infected mice (140).
- Depletion of Tregs in BALB/c mice before RSV infection using anti-CD25 mAb treatment — Treg-depleted mice showed delayed viral clearance and more severe disease than Treg-intact mice (153).
- Use of the DNA methyltransferase inhibitor 5-azacytidine to improve the stability and performance of regulatory T cells — administration of azacytidine to HSV-1-infected mice starting five days post infection improved the suppressive activity of Treg cells and diminished ocular lesion immunopathology (114).
- Stabilization of Treg cells with retinoic acid treatment — diminished lesion severity and reduced the numbers of inflammatory cells in the eyes of HSV-1-infected mice (154).
- Dietary supplementation of short-chain fatty acids (SCFA) in drinking water to induce Treg expansion — reduced the inflammatory immune response in the cornea and prevented the development of herpetic lesions after HSV-1 infection.
FIGURE 3 Approaches to enhance Treg responses. Several biological methods as well as pharmacological agents have been documented to selectively expand Treg cells in vivo, either by achieving de novo conversion or by promoting the function of already committed Treg cells, particularly in model systems. These include monoclonal antibodies targeting IL-2, CRISPR gene editing, epigenetic modifiers, interleukins, galectins, bile acid metabolites, and metabolism-acting immunotherapy drugs. Each approach aims to promote Treg expansion, conversion or stability through a distinct mechanism, such as acting on the FoxP3 enhancer conserved non-coding sequence 3 to improve Treg differentiation (the bile acid metabolite isoallo-LCA), epigenetic modification and stable induction of FoxP3 (azacytidine), proliferation and conversion (IL-2/anti-IL-2 complexes), proliferation and enhanced suppressive activity (rapamycin), a nanogel backpack containing IL-2Fc conjugated to CD45 from which IL-2 is released only after Treg activation, or genetically modified Treg cells expressing IL-10 or IL-35 or lacking negative regulators of FoxP3. These strategies offer potential ways to enhance Treg populations and rebalance immune responses in viral diseases.
TABLE 1 Examples of several innate immune sensors that recognize viral ligands.
TABLE 2 Some approaches used to control inflammatory lesions that target adaptive immune participants.
TABLE 3 Some strategies that target Treg cells and the outcomes observed following their use in vivo.
TABLE 4 Some miRNAs that affect the function of immune cells.
Self-Intelligence with Human Activities Recognition based on Convolutional Neural Networks
In this paper, we propose a strategy for recognizing human activities from depth maps and pose-sequence information using convolutional neural networks. Two descriptors are used for action representation. The first is a depth motion image, which accumulates consecutive depth motion maps of a human activity; the second is the proposed moving joint descriptor (MJD), which captures the movement of joints over time. To improve feature extraction for accurate action classification, we use three network channels trained on different inputs, together with hypothesis verification. The action scores produced by these channels are fused for final classification, and we propose several score-fusion operations that amplify the weight of the correct action. Experiments show that fusing the outputs of the three channels together with the hypothesis step outperforms using a single channel or fusing only two channels. The technique was evaluated on two public databases: the Microsoft action dataset and a dataset from the University of Texas. The results show that our method beats most existing state-of-the-art techniques, for example histogram of oriented 4D normals, on these datasets. Although the DHA dataset contains a high number of activities (38) compared with existing action datasets, our approach outperforms a state-of-the-art method on this dataset by 6.9%.
INTRODUCTION
Convolutional neural networks (CNNs) [1] are networks that use the specialized linear operation of convolution in place of general matrix multiplication in at least one of their layers. A CNN mainly comprises an input layer, multiple hidden layers, and an output layer. CNNs exploit the fact that the input is an image and constrain the architecture accordingly, and they have also driven the development of other machine learning approaches. They are a class of deep feed-forward artificial networks that have been successfully applied to analyzing visual imagery. Human activity recognition is now pertinent to many computer applications that require information about human actions, including, but not limited to, public-safety surveillance and robotics.
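As a toy illustration of the convolution operation that replaces matrix multiplication in a CNN layer, here is a minimal NumPy sketch (an assumption-free textbook 'valid' 2-D convolution, not the paper's implementation):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the (flipped) kernel over the image and sum element-wise
    products at each position: the 'valid' 2-D convolution a CNN layer
    applies instead of a plain matrix multiplication."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

# A 3x3 vertical-edge filter applied to a toy 5x5 depth patch.
image = np.array([[0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]], dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)  # (3, 3)
```

In a trained network the kernel weights are learned rather than hand-picked; the edge filter here only makes the effect of the operation visible.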
In our proposed approach, a CNN takes an image as input and assigns learnable weights to the various objects in that image. CNNs are an effective replacement for earlier deep learning algorithms because they reduce the number of features, and hence the complexity, of the model. A CNN is a specific type of neural network that uses perceptrons, a supervised machine-learning building block, to analyze and process data. Traditional video-based action recognition methods mainly process two-dimensional red-green-blue color images with classifiers such as k-nearest neighbors [2] or Bayesian networks. Affordable devices such as the Kinect are now widely available, and the motion maps and joint positions they provide are impressive features for representing human actions, although techniques based on them still have drawbacks. For building a multi-view depth-map dataset, we extracted a large number of features so as to provide a distinguishing representation of each human activity for classification. When extracting features from multiple views, two actions may look the same from the front view but different from the side view. To perform the different functions we expanded the number of layers: there are over 70 layers performing complex functions. Common layers such as dropout and dense layers are included. Dropout reduces overfitting, which improves the algorithm's generalization, while a dense layer feeds all outputs from its predecessor to all of its neurons, each neuron providing a single output to the next layer; it is a basic neural-network layer and here contains ten neurons. In this paper we use two descriptors to represent human actions: one to capture the depth-map sequence, and a joint-descriptor variable to capture the body-posture sequence.
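The dropout behavior described above can be sketched in a few lines of NumPy. This is inverted dropout, a common variant; the paper does not specify which scheme its dropout layers use, so treat the scaling choice as an assumption:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: randomly zero a fraction p_drop of the
    activations during training and rescale the survivors by
    1/(1 - p_drop) so the expected activation is unchanged.
    At test time the layer is an identity."""
    if not training or p_drop == 0.0:
        return activations
    keep = (rng.random(activations.shape) >= p_drop).astype(activations.dtype)
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
acts = np.ones((4, 10))  # a toy batch of activations
dropped = dropout(acts, p_drop=0.5, rng=rng)
# Surviving units are rescaled to 2.0; roughly half are zeroed.
```

Because each training step sees a different random mask, no single unit can be relied on exclusively, which is the mechanism by which dropout curbs overfitting.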
The depth motion image concatenates depth maps to capture the change in depth over the course of a motion. The MJD [3] can compensate for a missing lateral view with its informative representation, which greatly boosts performance. A score-fusion operation is devised to select the appropriate action from the outputs of the several convolutional neural network channels. Our experiments suggest that simply taking the highest score among the three channel results lowers prediction accuracy on the test data. The results show that the proposed method can identify human actions more efficiently and with better performance than existing methods.
II. LITERATURE SURVEY
Nowadays the need for human-interaction understanding is growing in the robotics domain. Because there is a large number of actions, a large number of features is needed. CNNs are a class of deep learning networks mostly applied to analyzing visual imagery. They are regularized versions of multilayer perceptrons, i.e., fully connected networks in which each neuron in one layer is connected to all neurons in the next layer. CNNs take a different approach to regularization: they take advantage of the hierarchical pattern in data and assemble complex patterns from smaller, simpler ones, which results in far lower complexity. A convolutional layer convolves its input and passes the result to the next layer, similar to the response of a neuron in the visual cortex to a specific stimulus. The CNN is thus a technique used for both feature extraction and classification [4]. Depth-based approaches [5] provide an ordering from the center outwards, such that the most central object gets the highest depth values and the least central objects the smallest. Skeletal-based methods run in parallel to depth-based approaches [6]: every joint is related to a local pattern descriptor that provides highly discriminative, translation-invariant features. To obtain a good representation for 3-D activity recognition, a framework is proposed based on hypothesis generation and hypothesis testing along with matching. Zanfir et al. [7] proposed a nonparametric moving-pose descriptor with low latency for human action recognition, capturing the speed and movement of joints with respect to the current frame in a specific time window. Others proposed recognizing actions in short clips by modeling the pyramidal and temporal structure of human action images [8].

III. SYSTEM ARCHITECTURE

Fig. 1.2 System architecture

The given architecture shows the image-processing classification of the input image. It has been observed, however, that the majority of CNN performance improvements came from redesigning the processing neurons and designing new blocks. The input image is processed through various layers: first a convolution layer, where it is processed according to the target output, and then a pooling layer, which downsamples the image as required to obtain the desired output [9].
IV. EXISTING SYSTEM
The existing system comprises a technique for human activity recognition from depth maps and pose information, in which three channels are superior to using a single channel. The technique was assessed on two public databases: the Microsoft action dataset and the University of Texas at Dallas multimodal human activity dataset (DHA). The test results demonstrate that the approach beats a large portion of existing state-of-the-art strategies, for example histogram of oriented 4D normals. Using the RGB-D datasets above, the existing system takes two datasets to build its performance model [10]. Its conclusion was that the pose descriptor variable most strongly influences the entire identification process, enhancing the front view of the depth-figure representation, and that fusion experiments among the output predictors of the CNN channels are used to maximize the final output. However, while the results outperform most previous frameworks, they are good only in test suites with still cameras at a pre-defined distance [11].
A. Data Preprocessing Step
• Depth Image Processing: The depth motion image (DMI) describes an action by projecting its overall motion outward to produce a single image that represents the action from a particular viewpoint. The DMI takes the pictures captured by a device such as a camera or depth sensor and processes each image of body-part movement into a full representation of the action, which makes it easy for the model to extract the relevant information and perform further manipulation.
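One common way to realize a depth-motion map of this kind is to accumulate absolute frame-to-frame depth differences over the sequence. The paper does not spell out its exact DMI formula, so the sketch below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def depth_motion_image(depth_frames):
    """Collapse a depth sequence into one 2-D map by accumulating the
    absolute frame-to-frame depth change at every pixel. Pixels that
    move often or strongly end up bright; static background stays dark.
    (A common DMI/DMM formulation, assumed here for illustration.)"""
    frames = np.asarray(depth_frames, dtype=float)
    motion = np.abs(np.diff(frames, axis=0)).sum(axis=0)
    # Normalize to [0, 1] so sequences of different length are comparable.
    peak = motion.max()
    return motion / peak if peak > 0 else motion

# Toy sequence: one "hand" pixel changes depth each frame, the rest is static.
seq = np.zeros((4, 3, 3))
for t in range(4):
    seq[t, 1, 1] = t  # moving pixel
dmi = depth_motion_image(seq)
print(dmi[1, 1], dmi[0, 0])  # 1.0 0.0
```

The resulting single-channel image is what a CNN channel such as channel 'a' below could consume in place of the raw depth sequence.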
Fig. 1.4 Generation of training data

• Moving Joint Descriptor:
The pictures captured by smart devices contain a large number of joints, only some of which are useful in context. We take Cartesian coordinates, which are sensitive to joint movement, and it is possible that they might represent the same action for two or three different
Model Description
The human recognition model is trained using a deep convolutional neural network, which is robust to illumination differences, facial expression changes and facial occlusion. The convolutional neural network is a great achievement of artificial intelligence and can be used for image classification, object detection, semantic segmentation and human activity recognition. CNNs are a variant of the multilayer perceptron, motivated by biological vision and aimed at streamlining preprocessing [12]. The difference between the two is that a CNN is built from convolution and pooling layers. The accuracy of the proposed system depends largely on many parameters, one of the main ones being the illumination condition [13]. The best conventional histogram-normalization procedure is histogram equalization, which adjusts the image histogram into one that is uniform across all brightness values. The basic function is f(x) = y, which describes the relation between x and y for all feasible inputs; the function f has to be chosen from the hypothesis space [14].

Model Training: The model is trained through a number of channels, denoted 'a', 'b' and 'c'. Channel 'a' is trained with depth motion images; channel 'b' is trained with image variables and MJD descriptor variables together; and channel 'c' is trained with the MJD alone. Channel 'b' is further subdivided into two sub-channels, one trained with the image descriptor and one trained with the MJD. Joining them at the last pooling layer produces a new combined representation. The concatenation strategy was motivated by reference papers that proposed different concatenation schemes for correct prediction.
In our test sets, numerous fusion choices were attempted, such as element-wise averaging, maximum, addition, and product; the maximum and product operations, which we denote Max and Prod, produce better outcomes than the other operations, so we adopted this way of combining the layers. Naive initialization resulted in unstable behavior and loss divergence; in our supervised learning, the loss and the weight decay both decreased over training. Given the input weights, we trained the neural network with batches of at most 50 images, iterating over each channel until the loss function reached its minimum. Score Fusion: Each representation carries some information, but for some test samples the maximum value does not correspond to the correct action; a probability lower than the maximum may correspond to the correct class. As we will see in the experimental results section, classification precision depends not only on the Max or Prod operation but also on the channels involved in the calculation.
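The Max and Prod fusion operations can be sketched as follows. The softmax outputs below are hypothetical stand-ins for the three channels' real scores, chosen so the two operations disagree, which illustrates why the choice of fusion rule matters:

```python
import numpy as np

def fuse_scores(channel_scores, mode="prod"):
    """Fuse per-channel class-probability vectors into one score vector.
    'max' keeps, per class, the highest score any channel produced;
    'prod' multiplies the channels' scores, rewarding classes that all
    channels agree on. The predicted action is the argmax of the result."""
    scores = np.asarray(channel_scores)  # shape: (n_channels, n_classes)
    if mode == "max":
        return scores.max(axis=0)
    if mode == "prod":
        return scores.prod(axis=0)
    raise ValueError(mode)

# Hypothetical softmax outputs of three CNN channels over 4 actions.
chans = [[0.70, 0.10, 0.10, 0.10],
         [0.40, 0.30, 0.20, 0.10],
         [0.05, 0.60, 0.25, 0.10]]
print(np.argmax(fuse_scores(chans, "max")))   # 0
print(np.argmax(fuse_scores(chans, "prod")))  # 1
```

Here Max follows the single most confident channel, while Prod favors the class the channels collectively support; this matches the observation above that accuracy depends on both the operation and the channels involved.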
VI. EXPERIMENT DESIGN
Here we extensively evaluate our proposed method on two public benchmark datasets, MSRAction3D and DHA. We employ a convolutional neural network extreme learning machine (CNN-ELM) with occlusion and illumination handling on the datasets because of its better classification performance and efficient computation.
MSR 3D
The most widely used dataset for action recognition is MSRAction3D. It has 20 actions: "high arm wave", "horizontal arm wave", "hammer", "hand catch", "forward punch", "draw x", "draw tick", "draw circle", "hand clap", "two hand wave", "bend", "forward kick", "side kick", "jogging", "tennis swing", "tennis serve", "golf swing", and "pick up & throw". In the MSRAction3D dataset, actions such as "draw x" and "draw tick" are quite similar, and one of the most challenging aspects of this dataset is self-occlusion. Similar pairs can also be found elsewhere, such as "leg-curl" and "leg-kick", or "run" and "walk". Second, our method directly uses depth motion maps with occlusions and illumination, which provide much better motion information. DHA: We used an extended version of the DHA dataset in which six additional action classes are involved. [Lin et al., 2012] split depth sequences into reference-system volumes and developed 3-bit binary patterns as depth features, which resulted in an accuracy of 86.80% on this dataset. By incorporating multi-temporal information into the DMMs, our proposed technique using occlusions and illumination achieves higher accuracy even on the extended DHA dataset. These improvements show that operating on multi-temporal DMMs produces more informative features than operating on the depth-difference motion history image (D-MHI).
B. Training and Testing Time
The training time varies from one dataset to another, depending on the number of descriptors used for training. Since the MSRAction3D dataset has the least training data, its training time is also smaller compared with the other two datasets, which have more training data. We also note that the training time, and the number of iterations required for the model to converge, depend on the amount of training data. The case of the DHA dataset is somewhat different from the other two datasets [15]: as its evaluation protocol demands five training folds to calculate the average of the fivefold results, the total training time for this dataset is the sum of the five training durations. Although the five folds have the same amount of training data, the number of iterations required to reach the minimum loss differs from one fold to another, because each fold contains a different mix of activities, and hence different kinds of features to be learned, while the structure of the model used for training, as well as the type of training data, is the same for the three datasets. The hardware used for testing and training is not the same as that used for preprocessing. Generally, models require a large number of training examples, such as a huge set of images, with long periods of training, to reach high prediction precision [16]. The key to successful learning from a lot of data is to extract features sufficient to recognize each activity.
VII. EXPERIMENTAL RESULTS
We implemented the model in Python using deep neural layers, which improved prediction efficiency; output time was less than expected, and the performance load on resources was also reduced. Overall accuracy was eighty-two percent on both datasets used. The models we have discussed construct features from the spatial and temporal dimensions by applying 3D convolutions, and the final feature representation is obtained by aggregating information from all channels. In this paper we considered the model for action recognition. Some deep architectures, such as deep belief networks, have also shown promising performance on object recognition. Here, a supervised algorithm is used to train the developed 3D CNN, which requires a large number of labeled samples; earlier studies show that the number of labeled samples can be significantly reduced when such a model is pre-trained with unsupervised algorithms. A multi-temporal DMM representation is suggested to capture more temporal motion information in a depth sequence. Results show that our method outperforms the state-of-the-art methods on all datasets.
VIII. CONCLUSIONS AND FUTURE WORK
We developed 3D CNN models for action recognition. Two types of features have been proposed for better action representation. The front-view depth-map representation provides parameters that greatly influence the whole recognition process. As RGB-D datasets have small numbers of training samples, our future work will focus on recognizing interactions between people and objects.
TBP Binding-Induced Folding of the Glucocorticoid Receptor AF1 Domain Facilitates Its Interaction with Steroid Receptor Coactivator-1
The precise mechanism by which glucocorticoid receptor (GR) regulates the transcription of its target genes is largely unknown. This is, in part, due to the lack of structural and functional information about GR's N-terminal activation function domain, AF1. Like many steroid hormone receptors (SHRs), the GR AF1 exists in an intrinsically disordered (ID) conformation or an ensemble of conformers that collectively appears to be unstructured. The GR AF1 is known to recruit several coregulatory proteins, including those from the basal transcriptional machinery, e.g., TATA box binding protein (TBP) that forms the basis for the multiprotein transcription initiation complex. However, the precise mechanism of this process is unknown. We have earlier shown that conditional folding of the GR AF1 is the key for its interactions with critical coactivator proteins. We hypothesize that binding of TBP to AF1 results in the structural rearrangement of the ID AF1 domain such that its surfaces become easily accessible for interaction with other coactivators. To test this hypothesis, we determined whether TBP binding-induced structure formation in the GR AF1 facilitates its interaction with steroid receptor coactivator-1 (SRC-1), a critical coactivator that is important for GR-mediated transcriptional activity. Our data show that stoichiometric binding of TBP induces significantly higher helical content at the expense of random coil configuration in the GR AF1. Further, we found that this induced AF1 conformation facilitates its interaction with SRC-1, and subsequent AF1-mediated transcriptional activity. Our results may provide a potential mechanism through which GR and by large other SHRs may regulate the expression of the GR-target genes.
Introduction
Ligand-activated glucocorticoid receptor (GR) regulates transcription of target genes by binding to DNA at specific hormone response elements and by interacting with other coregulatory proteins [1], [2], [3], [4], [5]. Like other members of the steroid hormone receptors (SHRs), the GR possesses a modular structure characterized by three major functional domains: N-terminal domain (NTD), DNA binding domain (DBD), and ligand binding domain (LBD) ( Figure 1A). The transactivation activity of SHRs is mainly controlled by two activation function domains, AF1 and AF2 located in the NTD and LBD, respectively [6], [7], [8], [9], [10]. The precise mechanism by which SHRs regulate the transcription of the target genes is largely unknown. This is, in part, due to the lack of structural and functional information about AF1 domain. It has been shown that AF1 is constitutively active and retains 60-80% of the GR transcriptional activity [11], [12], [13], [14]. The AF1 is defined by amino acids 77-262 in the human GR [11], [12], [13], [14]. Due to availability of the LBD crystal structure [15], the relevant structural and functional properties of AF2 have been well characterized whereas it is nebulous in the case of AF1.
In spite of rigorous attempts from several laboratories, we have not yet been able to determine a three-dimensional folded structure of the NTD/AF1 of any member of SHR family. One of the biggest obstacles in knowing the structure of AF1 has been due to its unstructured or intrinsically disordered (ID) conformation in solution, which is found in transactivation domains of several transcription factors (TFs) including SHRs [16], [17], [18], [19], [20], [21], [22]. The GR AF1 recruits other coregulatory proteins, including proteins from the basal transcriptional machinery, e.g. TATA box binding protein (TBP) by creating binding surfaces for these proteins [21], [23], [24], [25]. Other studies have also shown that transactivation domains of several TFs including SHRs undergo a disorder/order transition upon interaction with proteins from the basal transcriptional machinery [26], [27], [28], [29]. We have earlier shown that conditional folding of the GR AF1 is the key for its interactions with its critical coactivator proteins [25].
It is interesting that the ID GR AF1 directly interacts with the TBP, the critical protein that forms the basis for the multiprotein transcription initiation complex. However, the precise mechanism of this process is unknown. In vitro transcription studies indicated that the holo-GR acts to stabilize the pre-initiation complex [30], [31], [32]. One possibility may be that the TBP binding-induced structured conformation in AF1 is involved in creating a platform for the GR AF1-associated coactivators. In this study we tested whether TBP binding induces structure formation in the GR AF1 such that AF1's interaction with steroid receptor coactivator-1 (SRC-1), a critical coactivator which is important for GR-mediated transcriptional activity, is facilitated. Our results show that TBP binding-induced structure formation in the GR AF1 facilitates its interaction with SRC-1, and subsequent AF1-mediated transcriptional activity. Our results provide a potential mechanism through which GR and other SHRs may regulate the expression of the GR-target genes, information essential to understand how specific signals are passed from the receptor to target genes.
Plasmids
The pGRE_SEAP vector (BD Biosciences, Palo Alto, CA) contains three copies of a GRE consensus sequence in tandem, fused to a TATA-like promoter (PTAL) upstream of the reporter gene for secreted alkaline phosphatase (SEAP). GR500 encodes amino acids 1-500 of the human GR, plus a five-residue nonspecific extension [32]. TBP was cloned into the pcDNA3.1(+) expression vector (Invitrogen, Carlsbad, CA) and SRC-1 into pEYFP-SRC-1 as described [18]. DNA sequencing was performed on all clones to confirm correct sequences.
Protein Expression and Purification
The GR AF1 domain was constructed from human GR cDNA digested with BglII and inserted into the expression vector pGEX-4T-1 (Amersham Pharmacia Biotech) as described [25]. TBPc, encoding the 181 C-terminal residues of human TBP, was expressed from the pET-21d vector [24], [33]. The expression and purification of the AF1 protein is described in [25]. TBPc was expressed in Escherichia coli BL21(DE3) and purified on an NTA column (QIAGEN, Valencia, CA) using an imidazole step gradient. The final purity of both proteins was greater than 95%, as verified by the presence of a single major band on SDS-PAGE (Figure S1).
Surface Plasmon Resonance (SPR) analysis
The kinetics of TBPc binding to GR AF1 was determined by surface plasmon resonance (SPR) on a Biacore X-100 Plus (GE Healthcare). The binding reaction was carried out at room temperature in a physiologic buffer (0.01 M HEPES, pH 7.4, 0.15 M NaCl, 50 mM EDTA, 0.05% Tween-20).

Figure 1 (caption fragment): Secondary structural element predictions of the GR AF1 using the HNN method as described [34]; blue, red, and purple indicate helix, β-sheet, and random coil, respectively. C) PONDR plot for AF1 protein disorder prediction [35]; the X axis shows amino acid numbers in the GR AF1 sequence, and the Y axis shows the probability score. doi:10.1371/journal.pone.0021939.g001

Purified TBPc was immobilized on the Fc2 channel of a C1 chip as the ligand at 200-250 RU through strong ionic interaction. The Fc1 channel was treated equally but without TBPc, as the control. A multi-cycle kinetics procedure was employed to measure binding. AF1 at different concentrations (0.2-6.4 µM) was used as the analyte and sequentially injected over the Fc1 and Fc2 channels to measure its binding to TBPc. The sensor surface was regenerated with 0.3% SDS after each binding cycle. The flow rate was kept constant at 30 µl/min. Data from 120 seconds of association and 180 seconds of dissociation were collected. The sensorgrams were normalized by subtraction of Fc1 from Fc2 and then fitted for kinetics with the Biacore X-100 evaluation software using a 1:1 Langmuir binding model (A + B ⇌ AB). The affinity (K_D) was calculated from the equation K_D = k_d/k_a, where k_a is the association rate and k_d is the dissociation rate.
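The affinity calculation at the end of the fit is simple arithmetic. The sketch below uses hypothetical rate constants, not the fitted values from this study, purely to show the unit bookkeeping:

```python
def dissociation_constant(k_a, k_d):
    """Equilibrium dissociation constant K_D = k_d / k_a for a 1:1
    Langmuir binding model (A + B <-> AB). With k_a in 1/(M*s) and
    k_d in 1/s, K_D comes out in molar units; a smaller K_D means
    tighter binding."""
    return k_d / k_a

# Hypothetical fitted rates: k_a = 1e4 1/(M*s) and k_d = 1e-2 1/s
# give K_D = 1e-6 M, i.e. micromolar affinity.
k_a, k_d = 1.0e4, 1.0e-2
K_D = dissociation_constant(k_a, k_d)
print(f"K_D = {K_D:.1e} M")  # K_D = 1.0e-06 M
```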
Circular Dichroism (CD) Spectroscopy
The far-UV CD spectra of the purified recombinant AF1, TBPc, and AF1:TBPc mixtures were recorded at 22 °C on a Jasco 815 spectropolarimeter using a 0.1-cm quartz cell, with a bandwidth of 0.5 nm and a scan step of 0.5 nm. The spectra were recorded at a fixed AF1 protein concentration (4.5 µM) and varying concentrations of TBPc. All recorded spectra were corrected for the contribution of solute concentrations. Each spectrum is the result of five spectra accumulated, averaged, and smoothed.
Immunoprecipitation assay

5 μl of antibody against SRC-1 and 50 μl of protein A/G-agarose bead conjugate were incubated for 4 h at 4 °C. HeLa nuclear extract containing 0.5 mg of total protein and 10 μg of purified AF1 and/or TBPc were mixed together, incubated for 2 h at 4 °C in a separate tube, and added to the beads, followed by incubation for another 2 h at 4 °C. The beads were centrifuged, washed thoroughly, resuspended in SDS loading buffer, and boiled for 5 min to release bound proteins. The released proteins were resolved by SDS-PAGE and immunoblotted with a GR AF1 antibody after transfer onto a polyvinylidene difluoride membrane, as described previously [25]. The results are expressed as means ± the standard error. Levels of significance were evaluated by a two-tailed paired Student t test, and a P value of <0.05 was considered significant.
Cell culture and transient transfection
CV-1 cells (American Type Culture Collection) were grown at 37 °C in minimal essential medium with Earle's salts (Invitrogen) supplemented with 10% (vol/vol) fetal bovine serum (Atlanta Biologicals, Norcross, GA). Cells were subcultured every 2 to 3 days. CV-1 cells were plated on a 24-well plate (1000 μl/well) one day before transfection and transfected by using Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol. Transfected cells were maintained at 37 °C in 5% CO2 and 95% air for the duration of the experiment. Transfection efficiency was normalized by immunoblot analysis with a specific antibody against AF1.
Reporter gene assays
We used the SEAP reporter system due to its high signal-to-noise ratio and quantifiable transcriptional activity without the need for cell disruption. CV-1 cells were cotransfected as described above with 0.13 μg of pGRE_SEAP reporter vector and 0.13 μg of pECFP-GR500, pcDNA3.1-TBP, and/or pYFP-SRC-1. The total amount of DNA added was kept fixed by the addition of empty pECFP vector. Medium (25 μl) was collected 24 h later and tested for the presence of SEAP (Great EscAPe SEAP detection kit; BD Biosciences) according to the manufacturer's protocol. The data from different experiments were normalized to GR500 activity. The results are expressed as means ± the standard error. Levels of significance were evaluated by a two-tailed paired Student t test, and a P value of <0.05 was considered significant.
Analyses of ID characteristics of the GR AF1 domain
We performed secondary structural analysis of the GR AF1 domain (amino acids 77-262) using Network Protein Sequence analysis. It is evident from this analysis that a large fraction of the amino acid sequence adopts a random coil configuration (Figure 1B). Further calculations reveal that more than 67% of the AF1 sequence is in random coil conformation, with only small proportions of helix and sheet (data not shown). Predictor of Naturally Disordered Regions (PONDR) analysis for ID prediction confirms the ID nature of AF1 (Figure 1C), as evident from the PONDR scores obtained from three different methods (more than 0.5). Similar results were obtained using Prediction of Intrinsically Unstructured Proteins (IUPred) analysis (Figure 2A). To further confirm these findings, we used the FoldIndex method of disorder prediction, which predicts the probability of a protein/peptide to fold. A large red area (unfolded) compared to a small green area (folded) suggests that a large fraction of AF1 is unfolded (Figure 2B). Uversky et al. [36] have introduced a method to distinguish ordered and disordered protein conformations based only on net charge and hydropathy. We applied this method to the GR AF1 sequence. It is evident from the results that AF1 falls within the ID proteins (Figure 2C). These results were further confirmed by the PONDR scores obtained as a function of the cumulative fraction of residues (Figure 2D), in which the black line marks the boundary of database proteins that possess globular structure. It is evident from the green dot plot that AF1 falls within the range of ID proteins. Together, these data support the notion that AF1 possesses the characteristics of an ID protein.
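The charge-hydropathy discrimination of Uversky et al. [36] is simple enough to sketch. In that method the boundary line is ⟨R⟩ = 2.785⟨H⟩ − 1.151, where ⟨H⟩ is the mean Kyte-Doolittle hydropathy rescaled to [0, 1] and ⟨R⟩ is the mean absolute net charge; sequences above the line are predicted disordered. The short test sequences below are hypothetical, not the AF1 sequence.

```python
# Kyte-Doolittle hydropathy scale (raw values, later rescaled to 0..1).
KD_SCALE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def mean_hydropathy(seq):
    # Rescale each residue from [-4.5, 4.5] to [0, 1] before averaging.
    return sum((KD_SCALE[a] + 4.5) / 9.0 for a in seq) / len(seq)

def mean_net_charge(seq):
    # Net charge at neutral pH: K, R count +1; D, E count -1.
    charge = sum(1 for a in seq if a in 'KR') - sum(1 for a in seq if a in 'DE')
    return abs(charge) / len(seq)

def predicted_disordered(seq):
    # Above the Uversky boundary line => intrinsically disordered.
    return mean_net_charge(seq) > 2.785 * mean_hydropathy(seq) - 1.151

print(predicted_disordered('EEDDKKSSEEGG'))  # charged, polar -> True
print(predicted_disordered('LLIIVVFFAAMM'))  # hydrophobic   -> False
```

The low hydropathy and high net charge of a disordered sequence place it above the boundary, exactly the region where AF1 falls in Figure 2C.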
Kinetics analysis of TBPc:GR AF1 binding by surface plasmon resonance (SPR)
To measure the binding kinetics of AF1 to TBP by SPR, TBPc was immobilized to the C1 chip as the ligand through strong ionic interaction, which is based on the high positive charge of TBPc at physiological pH (pI = 10.3). The GR AF1 was used as the analyte. A low density of TBPc (200-250 RU) was immobilized to eliminate mass-transport limitations and ligand heterogeneity in the kinetics assay. The regeneration was optimized to remove both TBPc and AF1 after the binding, and fresh TBPc was immobilized for each cycle; therefore, the ligand activity remained the same throughout the kinetics assay. As shown in Figure 3A, AF1 exhibited specific binding to TBPc, as indicated by the normal sensorgram response in the Fc2 assay channel, whereas the control Fc1 channel showed only a perturbation due to the analyte. The sensorgrams were fitted well by a 1:1 binding model (Figure 3B; the black fitted line overlaps the experimental color lines in each case). The dissociation was very slow, implying that an induced conformational change upon the initial binding could have caused the formation of a stable complex. In fact, we have previously shown that AF1 undergoes order/disorder transitions under specific conditions [21], [25].
Since the conformational changes take place in AF1, which we used as the analyte in our SPR assay, they are difficult to detect by this method. On the other hand, in a single-cycle SPR kinetics assay we observed that the binding patterns are similar at lower concentrations, whereas at higher concentrations the response became weaker and showed different kinetic behavior (data not shown), suggesting that conformational changes may occur in AF1 once the initial AF1:TBP complex is formed, and that these changes are reflected in the later binding stages. Overall, AF1:TBP binding displayed a moderate rate of association and slow dissociation, with a calculated affinity (K_D) of 0.46 μM, similar to other SHRs, such as the binding of the estrogen receptor's AF1/NTD to TBP [29]. Comparing the actual SPR response to the immobilized RU of the ligand, we predicted that there could be only one binding site in the AF1:TBP interaction, based on sensorgram response theory using the equation R_max = R_L · (MW_A/MW_L) · S_m, where R_max is the maximum binding response, R_L is the ligand density, MW_A and MW_L are the molecular weights of the analyte and ligand, and S_m is the binding valency. TBP and AF1 have similar molecular weights of about 22 kDa; since the experimental R_max was equal to R_L, S_m should be 1. These results should help us further identify the physical binding sites on AF1 and TBP.
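The stoichiometry estimate from the sensorgram response reduces to a one-line rearrangement of the R_max relation above. A minimal sketch, with illustrative RU values (the actual measured responses are not given in the text):

```python
def binding_valency(R_max, R_L, mw_analyte, mw_ligand):
    """Solve R_max = R_L * (MW_A / MW_L) * S_m for the valency S_m."""
    return R_max / (R_L * mw_analyte / mw_ligand)

# AF1 and TBPc have similar molecular weights (~22 kDa), so when the
# experimental R_max equals the immobilized ligand density R_L, S_m = 1,
# i.e. a single binding site.  The RU values here are illustrative only.
S_m = binding_valency(R_max=225.0, R_L=225.0, mw_analyte=22.0, mw_ligand=22.0)
print(S_m)  # -> 1.0
```

Had the response been twice the ligand density at equal molecular weights, the same equation would have returned S_m = 2, i.e. two binding sites.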
Binding of TBP induces structure in otherwise ID GR AF1 domain
To test the effects of TBP binding on the conformational changes in AF1, we recorded the far-UV CD spectra of purified recombinant AF1, TBPc, and a mixture of AF1:TBPc at a 1:1 ratio.
As expected, AF1 alone shows the characteristics of an ID protein, and TBPc alone those of a globular protein with significant secondary structural elements, whereas the complex shows much higher secondary structural content than either AF1 or TBPc alone (data not shown). Figure 4A shows the spectra of the AF1:TBPc complex at various ratios ranging from 1:0.25 to 1:2. It is clear from these spectra that with increasing concentration of TBPc the complex adopts more secondary structural elements, as evident from the increased ellipticity at around 220 nm wavelength followed by a red shift toward 208 nm. When the individual data for AF1 and TBPc (at each concentration) are added and plotted (theoretical sum), spectra of similar shape are observed (Figure 4B). When the ellipticity at 220 nm was plotted against increasing concentrations of TBPc, a concentration-dependent linear relationship was obtained (Figure 4C). A comparison of the experimental data and the theoretical sum suggests that with increasing concentrations of TBPc the complex adopts significantly higher helical content in a concentration-dependent manner, as evident from the differences in ellipticity at 220 nm (Figure 4C).

[Figure 2 caption fragment: … scores above 0.5 are considered disordered sequences [38]. B) FoldIndex showing the propensity of the AF1 sequence to fold [37]. C) Cumulative fraction of AF1 residues showing the ID PONDR score of AF1 [36]. D) Charge-hydropathy analysis using the Uversky plot [35,36]: mean hydrophobicity vs. mean net charge for 54 completely disordered proteins (red) and 105 completely ordered proteins (blue); the solid line marks the border between ordered and disordered proteins; the cyan square corresponds to AF1. doi:10.1371/journal.pone.0021939.g002]
To determine the difference between the experimental spectra and the theoretical sums, we plotted the differences in ellipticity at 220 nm between them against the various concentrations. These results suggest that the helical content in the complex increases up to a ratio of ~1:1 and saturates afterward, i.e., structure continues to form until the full complex is assembled (Figure 4D). To determine whether the structural changes observed in the AF1:TBPc complex occur in AF1, TBPc, or both, we subtracted the contribution of TBPc from each spectrum and plotted the results together with the spectrum of AF1 alone (Figure 5A). It is evident from the comparison of the spectra that AF1 adopts significantly higher helical content when complexed with TBP (Figure 5A). The sigmoidal curves shown in Figure 5B&C represent, respectively, the absolute change in ellipticity at 220 nm and the difference in ellipticity in AF1 at each concentration of TBPc. To determine further whether the structural changes observed in the complex are confined to AF1 or whether the TBPc conformation also changes, we plotted the spectra of TBPc alone (Figure 6A) and after subtracting the contribution of AF1 at each TBPc concentration from the AF1:TBPc complex (Figure 6B). When comparing the changes in ellipticity at 220 nm (Figure 6C), we observed a linear relationship with increasing concentrations of TBPc (blue line) with no significant deviation for TBPc in the complex (red line), suggesting that, unlike AF1, TBPc underwent no significant structural changes when the complex was formed. Together, these results clearly demonstrate that binding of TBPc to AF1 results in induced structure formation in the otherwise ID AF1 domain without any significant perturbation of the TBPc structure.
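The spectral bookkeeping in this section (theoretical sum, experimental-minus-sum difference, subtraction of the TBPc contribution) is simple arithmetic on ellipticity values. The numbers below are invented for illustration only; they merely mimic the qualitative behavior described, with the difference growing until the ~1:1 ratio and then saturating.

```python
# Illustrative ellipticities at 220 nm (mdeg); not the measured data.
ratios          = [0.25, 0.5, 1.0, 2.0]        # TBPc : AF1 molar ratio
theta_complex   = [-8.5, -13.0, -24.0, -40.0]  # measured on the mixture
theta_af1_alone = -3.0
theta_tbp_alone = [-4.0, -8.0, -16.0, -32.0]   # TBPc alone at each concentration

# Theoretical sum: expected spectrum if binding induced no new structure.
theoretical = [theta_af1_alone + t for t in theta_tbp_alone]

# Experimental minus theoretical sum: the binding-induced structural gain.
diff = [c - t for c, t in zip(theta_complex, theoretical)]

# Complex minus TBPc contribution: the signal attributed to AF1 itself.
af1_in_complex = [c - t for c, t in zip(theta_complex, theta_tbp_alone)]

print(diff)            # gain grows, then saturates near the 1:1 ratio
print(af1_in_complex)  # more negative than AF1 alone -> induced helicity
```

With these illustrative values the difference is [-1.5, -2.0, -5.0, -5.0]: it deepens up to the 1:1 point and then stays flat, while the AF1 component is more negative at 220 nm than AF1 alone, as in Figures 4D and 5A.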
TBP binding-induced structure formation in the GR AF1 facilitates its interaction with SRC-1
It is known that AF1 interacts with SRC-1 to transactivate gene(s), and that conditional folding is important for this interaction [25]. We therefore evaluated whether the conformation induced in the ID AF1 domain by TBP binding is important for AF1's interaction with SRC-1. Using an immunoprecipitation assay, we tested the interaction between AF1 and SRC-1 from HeLa nuclear extracts. Separate HeLa nuclear extracts supplemented with purified AF1 ± TBPc protein were prepared. The extracts were then incubated with antibody-linked beads specific to SRC-1. The antibody-linked beads were recovered and washed extensively, and the bound proteins were released and resolved by SDS-PAGE. An antiserum to amino acids 150 to 175 of the GR was then used to identify AF1 on the gels. The results of our immunoprecipitation experiments are shown in Figure 7. The blots shown in Figure 7 are for AF1 (MW ≈ 22 kDa) retained after immunoprecipitation, as assessed by the AF1 antibody. Consistent with previous reports [25], in the case of AF1 alone we detected a very weak interaction with SRC-1 (Figure 7; Lanes 1 & 2; Upper Panel) from HeLa nuclear extracts. This interaction was significantly increased (Figure 7; Lanes 3 & 4; Upper Panel) when AF1 was bound to TBPc, suggesting that TBP binding-induced structure formation in AF1 facilitates its interaction with SRC-1 (Figure 7). A quantitative analysis of this interaction shows an ~8-fold increase in the bound SRC-1 when AF1 is complexed with TBPc compared to AF1 alone (Figure 7; Lower Panel). These results suggest that the TBP-induced conformation in AF1 is important for AF1's interaction with a critical coactivator.
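The quantitative analysis (fold change plus a two-tailed paired Student t test) can be reproduced with elementary statistics. The band intensities below are hypothetical, chosen only to mimic the reported ~8-fold increase; the paired t statistic is computed from its definition, and for n = 3 (df = 2) a two-tailed p < 0.05 corresponds to |t| > 4.303.

```python
import math
import statistics

# Hypothetical band intensities (arbitrary units) from replicate blots.
af1_alone    = [1.0, 1.2, 0.9]
af1_with_tbp = [8.1, 9.5, 7.6]

fold = statistics.mean(af1_with_tbp) / statistics.mean(af1_alone)

# Paired two-tailed Student t statistic: t = mean(d) / (sd(d) / sqrt(n)).
d = [b - a for a, b in zip(af1_alone, af1_with_tbp)]
t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

print(round(fold, 1))  # ~8-fold increase with these hypothetical numbers
print(t > 4.303)       # True -> p < 0.05 at df = 2, two-tailed
```

The same fold-change and significance test applies to the reporter assays later in the paper.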
Effect of TBP and SRC-1 interactions on AF1-driven transcription
We tested the effects of TBP-induced binding/folding events on AF1-driven transcription using GR-responsive promoters in transient transfection-based reporter assays in GR-deficient CV-1 cells. To test the effects of these coregulators on transcription driven by the human GR AF1, we cotransfected CV-1 cells with a GRE-dependent reporter gene and a constant amount of GR500 expression vector, alone or with added vectors expressing TBP and/or SRC-1. The GR500 construct is constitutively active as a transcription factor while avoiding any possible contribution from AF2 [18]. Lacking the LBD, GR500 is transcriptionally active without steroid and can induce genes and/or apoptosis in cells to nearly the same extent as steroid-bound holo-GR [18], [24]. As expected, GR500 alone significantly increased reporter activity compared to empty vector alone (Figure 8), and addition of plasmids expressing TBP or SRC-1 enhanced the GR500 induction of the GRE-SEAP reporter by ~2-3-fold (Figure 8). When cells were co-transfected with GR500, TBP, and SRC-1 together, the reporter activity was further enhanced, by ~8-fold. These results strongly suggest that the enhancement of GR-induced transcription by TBP or SRC-1 is achieved through the AF1 region and that TBP binding plays an important role by inducing more helical structure in the otherwise ID AF1 region. This confirms that TBP-induced structure formation in the ID AF1 region facilitates protein-protein interactions between AF1 and coactivators, which subsequently helps drive GRE-mediated, AF1-dependent transcriptional activity.
Discussion
The GR mediates most of the biological effects of glucocorticoids at the level of gene regulation. To regulate the expression of target genes, the GR interacts with several coregulatory proteins, including coactivators and corepressors [3]. In recent years, based on the kinetic behavior of SHRs in cells, it has been concluded that SHRs function very dynamically, rapidly and reversibly interacting with their coregulatory proteins, chromatin, and DNA [39], [40]. The requirement for various constellations of coregulatory proteins to regulate GR target genes suggests that the kinetics of these interactions must be variable, depending upon the local concentrations and/or binding affinities of these coregulators [3], [41], [42], [43], [44], [45], [46], [47]. Thus, the overall picture is one of a complex, dynamic network controlled by the GR as it interacts reversibly with a variety of other coregulatory proteins. Since many of these cell-specific interactions between the GR and other coregulatory proteins take place through the AF1 domain, it is logical to build upon the idea that the ID nature of AF1 may be a dominant factor in regulating these events. Of course, for full transcriptional activity, AF1 and AF2 must work synergistically through cross communication, and this is where the flexible structural characteristics of AF1 may play a major role by facilitating protein:protein interactions.
There are reports showing evidence for a two-step binding model in which the ID activation domains of c-Myc and the estrogen receptor bind rapidly to TBP through polar interactions and subsequently fold to an ordered conformation [26], [29]. Similar mechanisms have been proposed for the GR as well [21], [24]. TBP has a central role in the basal transcription machinery and is known to bind several TFs, suggesting that multiprotein complexes involving basal transcription machinery proteins and a TF may represent a mechanism for the recruitment of TBP to the TATA box. It is generally believed that the activation domains of many TFs work through an induced binding/folding mechanism, i.e., they may not be structured until they have recruited and bound their proper binding partners. In this study we show that complex formation between the GR AF1 and TBP is accompanied by a change in protein conformation. An approximate dissociation constant in the μM range for the interaction between AF1 and TBP was obtained. However, we were not able to carry out a complete kinetic analysis of the interaction due to certain technical limitations. The slow dissociation suggests that once the complex is formed, AF1 must have undergone structural rearrangement such that it has adopted a more stable, folded conformation. This is consistent with our finding of increased helical content in AF1 when complexed with TBP.
Thus, the emerging picture is that ID transactivation regions become folded in concert with target-factor interaction, and TBP appears to be a major coregulatory binding partner in this process. The question therefore arises whether there could be a unified mechanism of TBP binding/folding events in the action of the ID activation domains of TFs. We have earlier shown that TBP interacts with the GR AF1 in cells [24]. We have also shown that conditional folding of AF1 is critical for its interaction with SRC-1 [18]. Our present studies certainly support this idea. However, a clear picture will emerge only when the 3-D structures of these complexes are available. Unlike AF2, no single interaction motif has been identified for AF1 coregulatory proteins, and it appears that the ID conformation of AF1 helps promote molecular recognition by providing surfaces capable of binding specific target molecules [17], [19], [26], [27], [28], [29], [48], [49], [50], [51], [52], [53], [54], [55], [56]. These AF1 surfaces can be formed through AF1's interaction with specific target molecules, and our data support the view that TBP may be one such target molecule, since, unlike several other coregulatory proteins including SRC-1 (which bind both the AF1 and AF2 regions), TBP binds and regulates GR activity mainly through the AF1 domain [24], [57]. We propose that the ID nature of the GR AF1 allows it to rapidly "sample" its environment until coregulatory proteins of appropriate concentration and affinity are found [5].
Our results show that TBP binding induces secondary/tertiary structure formation in the GR AF1. This TBP binding-induced folding of the GR AF1 is quite striking in the sense that TBP is the major component of the basal transcription machinery, and cross communication of the receptor with the basal transcription machinery is an essential requirement for the regulation of GR target genes. Our identification of SRC-1, a critical coactivator of GR activity, is a testimony to this fact. We have earlier reported that the GR AF1-TBP interaction occurs under in vivo conditions, and that amino acid residues 187-242 of the human GR AF1 and amino acid residues 159-339 of human TBP are critical for this interaction [24]. It is also interesting to note that, unlike AF2, the activation domain-2 (AD2) and possibly AD1 regions of SRC-1 are involved in its interaction with AF1 [58]. SHRs function in an extremely dynamic situation such that they have the capacity to rapidly form and re-form multiprotein complexes involving coactivators/corepressors and/or proteins from the fundamental initiation complex. Thus, TBP binding-induced AF1 conformation may provide platform(s) for the inclusion and/or exclusion of specific coregulators, which may dictate the final outcome responsible for the regulation of target genes. These effects may of course be cell- and promoter-specific. It is a well accepted fact that generally, though perhaps not universally [49], [59], under physiological conditions proteins must have specific structure to carry out their proper functions. In the context of the GR AF1, it could be hypothesized that the AF1 domain is structured in vivo, at least when directly involved in transcriptional activation. Our studies support this notion: when carrying out its transcription-regulating function, AF1 shifts to more structured conformers through specific protein:protein interactions.
Conformational uniqueness of most proteins determines their biological function, and we propose that the ID nature of the GR AF1 can explain much about the GR's observed dynamic behaviors in cells. The ID AF1 region can be thought of as a large collection of rapidly inter-convertible conformers, which can select among available coregulatory proteins to form the basis for building large transcription-regulating complexes on specific promoters. Such complexes can dissociate and re-associate with differing composition.
Our results provide a potential mechanism through which GR AF1 may regulate the expression of GR-target genes, information essential to understanding how specific signals are passed from the receptor to target genes. Because TBP is known to bind several transcription factors, including SHRs, through their ID activation domains, our results may provide a mechanism through which ID activation domains drive the assembly of critical coregulatory proteins and subsequent transcriptional activity. Of course, these effects can be further influenced by other factors such as inter-domain interactions, small-molecule ligands, and other protein:protein interactions.
The Solar Corona: Why It Is Interesting for Us
Strong magnetic fields are of vital importance to the physics of the solar corona. They easily set the rarefied coronal plasma in motion. The physical origin of the main structural elements of the corona, the so-called coronal streamers, is discussed. It is shown that the reconnecting current layers inside streamers determine their large-scale structure and evolution, including creation, disruption and recovery. Small-scale (fine) magnetic fields in the photosphere experience random motion. Their reconnection appears to be an important source of energy flux for quiet-corona heating. For active-corona heating, the peculiarities of entropy and magnetoacoustic waves related to radiative cooling are significant and should be taken into account in the coronal heating theory.
Perhaps the most amazing aspect of the solar corona is its intricate beauty. Thousands of people travel thousands of kilometres (even from Moscow to Australia) to reach the best place for viewing a solar eclipse. They move heaven and earth in order to observe this beautiful natural phenomenon. Recall that the corona, consisting mainly of ionized plasma, becomes visible to the naked eye during a total eclipse (Fig. 1). Parker (Parker E.N., ApJ, 1958, 128, 677) suggested that the outer parts of the corona must be expanding in the form of a solar wind. The first calculations of the magnetic field in the corona were based on two main premises: the magnetic field over the photosphere is potential up to a certain height, at which the field becomes purely radial owing to drawing-out by the solar wind. The magnetic fields calculated under these simplified assumptions exhibited a reasonable correlation with the optical structure of the chromosphere and corona, as well as with the radio- and soft X-ray pictures of the Sun.
The coronal magnetic field constructed by this method contains neutral points where, in the presence of plasma, current layers appear (Syrovatskii S.I., Sov. Astron. -AJ, 1962, 6, 768). They change the geometry of the magnetic field. A current layer with quasi-radial fields of opposite direction on either side of it appears inside a coronal streamer, similar to the current layer in the magnetosphere tail. In both cases a dipole magnetic field is drawn out by the stream of solar wind plasma: in the corona it is the dipole magnetic field of an extended active region, and in the magnetosphere it is the Earth's magnetic field. The simple 2D problem of the stretching-out of a dipole field by a plasma flow was formulated assuming that the field is frozen into the flow, which is accelerated similarly to the solar wind (Somov B.V. and Syrovatskii S.I., Sov. Phys. -JETP, 1972, 34, 992).
The capture of the magnetic field by the solar wind occurs from the interior of the field itself. The plasma slowly flows along the field lines in the strong-field region, near the dipole. However, as the magnetic field becomes weaker with height in the corona, the plasma flow becomes stronger and is smoothly transformed into a radial solar wind that carries an external part of the field away. As a result, a quasi-stationary picture of the magnetic field can be established for a long-lived active region, as illustrated by Fig. 2. We see that the MHD approximation of a strong magnetic field can be very good in reproducing the large-scale structures in the corona. Moreover, varying with time according to the boundary conditions, the magnetic field easily sets the highly conducting coronal plasma in motion. Its kinematics is uniquely defined by two equations. The first of them follows from the momentum conservation law and means that the acceleration is orthogonal to the magnetic field lines. The second equation is a corollary of the freezing-in condition. For example, the set of 2D ideal MHD equations describing the plasma flows can be rewritten as the following set of equations (e.g., Somov B.V., Plasma Astrophysics, Part II, Reconnection and Flares, New York, Springer SBM, 2013, Ch. 2):

(dv/dt) × ∇A = 0 ,   dA/dt ≡ ∂A/∂t + (v · ∇)A = 0 .   (1)

Here the scalar function A(x, y, t) is commonly called a vector potential because of the definition of the vector potential A = {0, 0, A(x, y, t)} for the magnetic field B = rot A. A complete solution of the set of equations (1), including the velocity field and the plasma density distribution, was obtained in the vicinity of a reconnecting current layer at a hyperbolic zeroth point of the magnetic field (Somov B.V. and Syrovatskii S.I., in Neutral Current Sheets in Plasma, New York and London, Consultants Bureau, 1976, p. 13).
In the corona, more complicated models are required in order to describe coronal streamer behavior in periods of high solar activity. A generalization of the model illustrated by Fig. 2 is needed because the current layer inside a streamer can be disrupted into parallel current filaments or ribbons (Wagner S.A. and Somov B.V., in Cosmicheskie Issledovania, Sankt-Peterburg, FTI, 1991, p. 79, in Russian). Fig. 3a demonstrates such a model, which assumes that a rupture (a gap between points h_D and h_U) of the reconnecting current layer (RCL, two thick vertical segments) emerges in a region of high electric resistivity, for example, anomalous resistivity due to the excitation of plasma turbulence. Fast magnetic reconnection takes place in the vicinity of the X-type zeroth point h_X of a strong magnetic field. The reconnection process is driven by the uncompensated magnetic forces F_mag, acting on the edges of the gap, h_D and h_U, and having a disruptive influence on the RCL. A simple analytical model of a disrupting RCL (Somov B.V. and Syrovatskii S.I., Bull. Acad. Sci. USSR, Phys. Ser., 1975, 39, No. 2, 109) shows that the magnetic tension forces F_mag are proportional to the size of the gap, h_U − h_D, and tend to increase it. The effective magnetic 'charges' e_n and e_s at the points x = ±a model the photospheric (or under-photospheric) sources of the magnetic field.

[Figure 3 caption: (a) Disruption of the RCL due to magnetic reconnection at the point h_X. At the point h_Y the magnetic force equals zero, but at the points h_D and h_U it is not equal to zero and is directed downwards and upwards, respectively; fast reconnection is therefore driven by the magnetic forces F_mag acting on the edges of the gap. (b) The non-stationary process of recovery of the RCL via a secondary reconnecting current layer (h_Y1, h_Y2). Thick empty arrows show the plasma flows in the vicinity of this new RCL; V_rec is the velocity corresponding to the reconnection rate in the secondary RCL.]
While considering the (x, y) plane as a complex plane z = x + iy, we relate an analytic function F to the vector potential A as follows: F(z, t) = A(x, y, t) + i A⁺(x, y, t). Then B* ≡ B_x − i B_y = i ∂F(z, t)/∂z ≡ B(z), where the asterisk denotes complex conjugation. The magnetic field of the non-equilibrium disruptive streamer shown in Fig. 3a is given by a formula of this type. The X-type point h_X at the center of the reconnection region has a special status. If the plasma density near this point does not drop too much in the reconnection process (see, however, Somov B.V. and Syrovatskii S.I., in Neutral Current Sheets in Plasma, New York and London, Consultants Bureau, 1976, Ch. 3, Sect. 2), then a secondary current layer (the thick vertical segment between points h_Y1 and h_Y2 in Fig. 3b) will appear. Otherwise, there is not enough plasma to produce a secondary current layer capable of suppressing the current-layer disruption. In other words, the primary reconnecting current layer (RCL) can be completely disrupted or, alternatively, recreated in the non-stationary process shown in Fig. 3b. The streamer will make a full recovery from the rupture (Fig. 3a) to its original shape (Fig. 2). The magnetic field of a recovering streamer (Fig. 3b) has a similar analytic form. So, the basic idea articulated above is that coronal streamer formation is a twofold magnetic process. First, the magnetic field plays a passive role in shaping streamers through processes involving the stretching-out of the field by solar wind acceleration and motion. Second, the magnetic field plays an active role in providing the dynamic behavior of a streamer by magnetic reconnection. The non-stationary dynamics of a coronal streamer combines two opposite processes: (a) the disruption of a reconnecting current layer inside the streamer and (b) its recreation, which we call recovery.
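The relation between the complex potential and the field components follows from the definitions in the text (A = Re F, B_x = ∂A/∂y, B_y = −∂A/∂x, together with the Cauchy-Riemann conditions), giving B* = i dF/dz. A minimal numerical check with an illustrative analytic function F(z) = m/z (a 2D dipole-type potential, not the streamer formula):

```python
def F(z, m=1.0):
    # Illustrative analytic complex potential, F(z) = m / z.
    return m / z

def dF_dz(z, h=1e-6):
    # Central-difference derivative of the analytic function.
    return (F(z + h) - F(z - h)) / (2 * h)

def B_star(z, h=1e-6):
    # B* = B_x - i*B_y computed from A = Re F via B_x = dA/dy, B_y = -dA/dx.
    A = lambda x, y: F(complex(x, y)).real
    x, y = z.real, z.imag
    Bx = (A(x, y + h) - A(x, y - h)) / (2 * h)
    By = -(A(x + h, y) - A(x - h, y)) / (2 * h)
    return complex(Bx, -By)

z0 = complex(1.3, 0.7)
err = abs(B_star(z0) - 1j * dF_dz(z0))
print(err)  # small: finite-difference error only
```

The same relation lets one read off both field components of any 2D potential configuration, such as the effective-charge model of the streamer, from a single analytic function.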
As a consequence, depending on physical conditions, a streamer can be completely disrupted and disappear, or be recovered once or several times after being disrupted. That is why a streamer can exist for a very long time, even as long as the underlying active region exists. Moreover, its large-scale external structure looks like the same stationary configuration. Non-stationary plasma flows related to magnetic reconnection in a recovering streamer (Fig. 3b) are always present but not always well observable. Because of the conservation of the global configuration, a coronal streamer may be compared with a river: one cannot step into the same river twice.
The orientation of coronal streamers, the change of the shape of the corona with the solar activity cycle, etc.: all these observational facts tell us about the existence of coronal magnetic fields. However, it is quite difficult to measure them because the coronal emission is exceedingly faint. Until the present, observations are scarce, and our knowledge of the coronal magnetic field comes mainly from the theoretical extrapolation of photospheric fields and from comparison of theoretical model predictions with the observed large-scale structures of the corona. * * * The fine structure of solar magnetic fields presumably has the properties of complex field configurations containing many places (points or lines) where reconnection occurs. Such a situation frequently appears in astrophysical plasmas, for example in a set of closely packed flux tubes suggested by Parker (Parker E.N., ApJ, 1972, 174, 499). The tubes tend to form many reconnecting current layers (RCLs) at their interfaces. This may be the case in active regions when the field-line footpoint motions are slow enough to consider the evolution of the coronal magnetic field as a series of equilibria, but fast enough to explain coronal heating.
Magnetic flux tubes in the photosphere are subject to constant buffeting by convective motions, and as a result, flux tubes experience a random walk through the photosphere. From time to time, these motions will have the effect that a flux tube comes into contact with another tube of opposite polarity. We refer to this process as reconnection in weakly ionized plasma (Litvinenko Yu.E. and Somov B.V., Solar Phys., 1994, 151, 265). Another possibility is the photospheric dynamo effect (Hénoux J.C. and Somov B.V., A&A, 1997, 318, 947) which, in an initially weak field, generates thin flux tubes of strong magnetic fields. Such tubes extend high into the chromosphere and contribute to the mass and energy balance of the quiet corona.
SOHO's MDI observations have shown that the magnetic field in the quiet network of the photosphere is organized into relatively small 'concentrations' (magnetic elements, small loops, etc.) with fluxes in the range of 10^18 Mx up to a few times 10^19 Mx, and an intrinsic field strength of the order of a kilogauss. These concentrations are embedded in a superposition of flows, including the granulation and supergranulation. They fragment in response to sheared flows, merge when they collide with others of the same polarity, or cancel against concentrations of opposite polarity. Newly emerging fluxes replace the canceled ones.
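As a rough consistency check on these numbers (an illustrative back-of-the-envelope estimate, not taken from the source), a flux concentration of a given magnetic flux at kilogauss field strength corresponds to a quite small photospheric element:

```python
import math

def flux_tube_radius_cm(flux_mx, b_gauss):
    """Radius (cm) of a circular flux tube carrying flux_mx (in Mx) at a
    uniform field strength b_gauss (in G); note 1 Mx = 1 G cm^2."""
    area = flux_mx / b_gauss            # cross-sectional area in cm^2
    return math.sqrt(area / math.pi)

# A 10^18 Mx concentration at 1 kG: radius ~1.8e7 cm, roughly 180 km.
r_small = flux_tube_radius_cm(1e18, 1e3)
# A few times 10^19 Mx at the same field strength: ~1e8 cm, roughly 1000 km.
r_large = flux_tube_radius_cm(3e19, 1e3)
```

Both sizes are well below the supergranular scale, consistent with the picture of many small concentrations embedded in the flows.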
Direct evidence that the so-called 'magnetic carpet' (Day C., Physics Today, 1998, March issue, 19), an ensemble of magnetic concentrations in the photosphere, really can heat the corona comes from the two other SOHO instruments: CDS and EIT. Both have recorded local brightenings of hot plasma that coincide with disappearances of the carpet's elements. This indicates that just about all the elements reconnect and cancel, thereby releasing magnetic energy.
The transition region and chromospheric lines observed by SOHO, together with radio emission of the quiet Sun simultaneously observed by the VLA, show that the corona above the magnetic network has a higher pressure and is more variable than that above the interior of supergranular cells. Comparison of multiwavelength observations of quiet Sun emission shows good spatial correlations between enhanced radiations originating from the chromosphere to the corona. Furthermore, the coronal heating events follow the basic properties of regular solar flares and thus may well be interpreted as microflares and nanoflares. The difference is mainly quantitative (Krucker S. and Benz A.O., Solar Phys., 2000, 191, 341).
What do we really need to replenish the entire magnetic carpet quickly, say in 1-3 days? A rapid replenishment, including the entire cancelation of magnetic fluxes, requires the fundamental assumption of a two-level reconnection in the solar atmosphere (Somov B.V., Bull. Russ. Acad. Sci., 1999, 63, 1157). First, we apply the concept of fast reconnection of electric currents as the source of energy for microflares to explain coronal heating in quiet regions (Somov B.V. and Hénoux J.-C., in Magnetic Fields and Solar Processes, 9th Eur. Meet. on Solar Phys., ESA SP-448, 1999, 659). Second, in addition to coronal reconnection, we need an efficient mechanism of magnetic field and current dissipation in the photosphere. The presence of a huge amount of neutrals in the weakly ionized plasma in the temperature minimum region makes its properties very different from an ideal MHD medium. Dissipative collisional reconnection is very efficient here (Litvinenko and Somov, 1994). * * * While the corona is evidently heated everywhere, there is no question that it is heated most intensively within active regions where the magnetic field is the strongest. Detailed models of coronal heating in active regions typically invoke mechanisms belonging to one of two broadly defined categories: wave (AC) or stress (DC) heating. In AC heating, the large-scale magnetic field serves essentially as a conductor for small-scale MHD waves propagating into the corona. Thus the properties of these waves are of principal importance.
In the corona, the low-frequency MHD oscillations can be studied comprehensively. They are observed at almost all wavelengths (see Aschwanden M.J., Physics of the Solar Corona, Berlin, Springer, 2004). Most of these oscillations are commonly interpreted as standing oscillations of various types in coronal magnetic loops. Meanwhile, the oscillations of coronal loops observed from the TRACE satellite in EUV are, as a rule, damped rapidly. The ratio of the characteristic damping time τ_d to the oscillation period τ_ω is τ_d/τ_ω = 1.8 ± 0.8 in the range of periods τ_ω = 317 ± 114 s. Such rapid damping of the MHD oscillations seemed difficult to explain.
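To put the quoted ratio in perspective, here is a simple illustrative model (it assumes purely exponential damping of the amplitude, exp(-t/τ_d), which is an assumption of this sketch rather than a statement from the source):

```python
import math

def amplitude_fraction(n_periods, ratio):
    """Remaining amplitude fraction after n_periods oscillation periods,
    for exponential damping exp(-t/tau_d) with tau_d = ratio * tau_omega."""
    return math.exp(-n_periods / ratio)

# With the observed mean ratio tau_d / tau_omega = 1.8:
after_one = amplitude_fraction(1, 1.8)    # ~0.57 of the initial amplitude
after_three = amplitude_fraction(3, 1.8)  # ~0.19; for a ~317 s period this
                                          # is reached within ~16 minutes
```

So an oscillation with the mean observed parameters essentially disappears after only a few periods, which is what "rapid damping" means here.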
A key question is why rapidly damped oscillations are seen best in a small group of loops precisely in EUV radiation. Contrary to popular belief, the answer is simple. Where the rate of energy losses via optically thin plasma radiation has a maximum (i.e., at T ∼ 10^5 K), the brightness of the oscillating loops also has a maximum (i.e., in EUV) and, as a consequence, the MHD oscillations are damped more rapidly than in other places. This is the case for slow magnetoacoustic waves (Somov B.V., Dzhalilov N.S., and Staude J., Astron. Lett., 2007, 33, 309). The significant advantage of slow magnetoacoustic waves over fast ones is that the regions of reduced magnetic field in the former coincide with the regions of enhanced plasma density. Here the rapid radiative losses manifest themselves. Meanwhile, as calculations show, fast magnetoacoustic waves radiate little and, therefore, are damped too slowly.
Another feature of small MHD perturbations in an optically thin, perfectly conducting plasma with a cosmic abundance of elements is an instability of entropy waves. The instability mechanism is simple. In the temperature regions of a rapid decrease in the radiative loss function with temperature, a small decrease in temperature causes a large increase in the rate of radiative energy losses. Conversely, a small increase in temperature is accompanied by a decrease in the rate of radiative plasma cooling. As a result, small perturbations grow rapidly. The growth time for the entropy waves in the corona can vary over a wide range: from tenths of a second to tens of minutes.
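The mechanism described above is the classic radiative (thermal) instability; as a hedged sketch in the spirit of Field's criterion (not taken from the source), writing the net radiative loss function per unit mass as L(ρ, T), entropy perturbations at fixed density grow when the losses decrease with temperature:

```latex
% Isochoric (condensation-mode) instability criterion: if a small drop in T
% increases the radiative losses, the perturbation runs away.
\left(\frac{\partial \mathcal{L}}{\partial T}\right)_{\rho} < 0 .
```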
The fact that the instability condition for entropy waves is almost independent of the magnetic field strength and configuration is fundamentally important for the theory of coronal heating. This means that, among the various physical processes involved in coronal heating, the growth of entropy waves can manifest itself everywhere. The peculiarities of entropy and magnetoacoustic waves related to radiative energy losses should be taken into account in the general theory of evolutionarity of MHD discontinuities (see Ch. 17 in Somov B.V., Plasma Astrophysics, Part I, Fundamentals and Practice, New York, Springer SBM, 2013). | 2013-03-13T10:08:09.000Z | 2013-02-01T00:00:00.000 | {
"year": 2013,
"sha1": "2d6f0e6883bf9d82c08f48f24f3a2abf86b6643f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2d6f0e6883bf9d82c08f48f24f3a2abf86b6643f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
234003311 | pes2o/s2orc | v3-fos-license | THE VALUE AND CHARACTER BUILDING EDUCATION IN FOLKLORE FROM BATAKNESE "SIGALE-GALE"
Article History: Received: November 2020; Revised: January 2020; Published: January 2020. Education is the learning of knowledge and of the skills of every person, passed down from one generation to the next. But in education it is also essential to learn the formation of good character. Among relevant research, especially studies that take Folklore as the object of study, only a few exist. Folklore is a medium that can be used as a means of forming positive character in children through the moral values and character education contained in the story. This research is qualitative descriptive: the data were obtained, analyzed, and described in writing to portray the moral values and character building contained in the Toba Batak folklore "Sigale-gale." Data collection was carried out by reading the Sigale-gale folklore text repeatedly and identifying the data in the form of keywords related to the character-forming values in the story. Furthermore, the collected data were analyzed using content analysis techniques. The results showed that there are four character-forming values in the Sigale-gale Folklore, namely hard work, curiosity, friendliness, and wisdom.
INTRODUCTION
Education is the learning of knowledge and skills, passed down from generation to generation. In shaping quality human resources, education must instill good character and moral values. Education should be useful not only in the academic field but should also impart good moral and character values. Muslich (2013) states that character education should take precedence, beginning at the earliest levels: kindergarten, elementary school, junior high school, senior high school, and university. Therefore, the internalization of the values of character education must begin at the basic level of formal education and must be carried out in various environments, such as at school, in the family, and in community life. The purpose of the realization and implementation of character education is to make students have good skills, good knowledge, and morality.
Students must not only excel academically but also have good character, because higher education must produce graduates with good morals. Thus, education produces extraordinary output with great character (Youpika, 2016). According to Aynur (2011), character education is a national movement creating schools that foster ethical, responsible, and caring young people by modeling and teaching good character through an emphasis on universal values that all share. However, the phenomena in our lives are not in line with these expectations; for example, many facts show deviations in the use of electronic and social media such as Facebook, Instagram, and games. The use of the internet is so out of control that children can access it freely.
Furthermore, based on several researchers' observations, there are still many students who need character education in areas such as discipline, responsibility, doing homework on time, respecting friends and teachers, and being honest. These shortcomings must be corrected. According to Barone (2011), Folklore is part of traditional literature.
The researchers reviewed some previous studies about moral values and character-building education in Folklore. The objects of those previous studies include moral value assessment and character education in folklore in Karo District, the value of character education in Andai-Andai folklore, and the need for character education. Based on these facts, Folklore has many benefits in the world of education, including cultivating moral values to build student character. One of the most famous folklores is "Si Gale-Gale," originating from the Batak people of North Sumatra. This Folklore can be one of several alternative learning materials for students in junior high school. Awang (1985) states that Folklore has an entertaining function, serves as teaching material, encourages people to articulate polite words, and encourages literature as the basis for doing literary works.
Based on this phenomenon, the researchers conducted this study to increase readers' knowledge, and the research can be used as teaching material in junior high school because it contains moral values and character building for children. Considering that Folklore contains pearls of local wisdom with educational values, it is necessary to develop and disseminate Folklore to future generations. In addition, many positive values can shape the character of children from an early age. It is hoped that this research yields insights about life and moral values in Folklore.
The researchers use a descriptive qualitative approach, trying to describe and interpret the elements in the story (situation and condition, symptoms, and development) to see the contribution of the values of the story to character education. For data collection, the researchers conducted literature studies and observations.
Research Design
This research is a qualitative descriptive study. According to Moleong (in Hidayati, 2016: 38), qualitative descriptive means that the data generated and collected are in the form of words and pictures, presented through descriptions rather than numbers. The data obtained are then processed and analyzed in written form. According to Moleong (in Hidayati, 2016: 38), qualitative research is an effort to present the social world, including the concepts, behaviors, perceptions, and issues of the humans under study. This qualitative descriptive study is used to obtain an overview of the educational values displayed in the "Sigale-gale" Batak folklore.
Instruments
As is typical of qualitative research, the main instrument in this research is a human instrument, namely the researchers themselves. The human serves as the tool for collecting data based on understood criteria; the criterion in question is knowledge of morals. The supporting device in this research is the data card. Data cards were used to record and transcribe all data obtained.
Data Analysis
In the KBBI (2001), analysis means an investigation of an event (an essay, a deed, and so on) to determine the true situation (the cause and effect, the crux of the matter, and so on). According to Sugiyono (2012), data analysis involves finding and compiling data obtained from interviews, field notes, and documentation, with the data organized into categories and broken down into units. In addition, this study synthesized the data, arranged them into patterns, chose what is essential and what will be learned, and drew conclusions that can be quickly understood by oneself or others. This study used descriptive qualitative analysis techniques. Furthermore, the data obtained were processed and analyzed in written form. This descriptive analysis uses a pragmatic approach. According to Siswanto and Roekhan (in Siswanto, 2013), the pragmatic approach is a literary study approach that emphasizes the reader's role in accepting, understanding, and internalizing literary works. This pragmatic approach is used to analyze the educational values in the Sigale-gale Folklore, including the value of moral education and the value of social education.
The steps of data analysis performed are: 1) classifying the data obtained from the analysis of the educational values contained in the Sigale-gale Folklore, namely the values of moral education and the values of social education; 2) analyzing the data in the form of educational values, such as the value of moral education and the value of social education, in terms of the behaviors or patterns they contain; 3) linking the educational values contained in the Sigale-gale Folklore with their application in learning moral values for character formation in students; 4) summing up the results of the overall data analysis.
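The classification step above can be sketched programmatically. This is only an illustrative keyword-coding sketch: the keyword lists and the excerpt are hypothetical, not the authors' actual data cards or coding scheme.

```python
# Hypothetical keyword dictionary mapping character-forming values to
# indicator words (illustrative only; not the authors' coding scheme).
VALUE_KEYWORDS = {
    "hard work": ["effort", "diligent", "up and down hills"],
    "curiosity": ["know", "tried", "idea"],
    "friendliness": ["help", "together", "joy"],
    "wisdom": ["judgment", "resolve", "decision"],
}

def code_excerpt(excerpt):
    """Return the character-forming values whose keywords appear in the excerpt."""
    text = excerpt.lower()
    return [value for value, words in VALUE_KEYWORDS.items()
            if any(w in text for w in words)]

# A hypothetical excerpt from a data card:
excerpt = "He was so diligent that he tried all the clothes to help decorate the statue."
codes = code_excerpt(excerpt)  # ['hard work', 'curiosity', 'friendliness']
```

In practice such automatic coding would only be a first pass; the qualitative interpretation of each excerpt still rests with the researcher.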
RESEARCH FINDINGS AND DISCUSSION

Research Findings
The research findings were analyzed and interpreted based on theory and research interpretation. The research findings can be presented as follows.
Discussion
This study was designed to analyze moral values and character formation in the Toba Batak folklore entitled Sigale-gale as learning material for children in the formation of good moral values and character. The main focus of this research is to analyze the character of each figure in the Folklore. From our findings, there are four moral values in character building, namely hard work, curiosity, friendliness, and wisdom; the four characters in the Folklore serve as learning examples for students to apply in real life. The following is an explanation of the moral values in the Sigale-gale Folklore.
Hard work
Hard work is behavior that shows a genuine effort to overcome various learning and assignment obstacles. The value of hard work can be seen in the narrative quote below.
"There was a man. His name was Datu Panggana. He was an expert statue maker. He always sought the materials for his statues himself. If a king wanted to make a tomb, and a stone statue was needed for it, he went up and down hills and valleys to find a suitable stone for that purpose. When a wooden statue was ordered, he went out into the forest looking for suitable wood. That is how good he was at his job, so that he was well known in his Huta (village)." (Sigale-gale, 1978: 5)
Based on the quotation above, Datu Panggana is a hard worker. He is willing to go up and down hills and go out into the forest to find the materials needed for his work. According to Gunawan (2012), hard work is behavior that shows serious effort to overcome various obstacles in order to complete tasks (study, work) as well as possible.
The hard-working attitude shown by the character Datu Panggana is very important to teach to students, so that students can internalize hard work from an early age and later grow into useful people.
Curiosity
Curiosity is an attitude and action of always trying to know more deeply and broadly something that is learned, seen, and heard. The value of curiosity can be seen in the quote from the conversation between Datu Panggana and Bao Partigatiga below.
"What do you trade, Par?" asked Datu Panggana, glancing at the wares hung on Bao Partigatiga's shoulders. "Pearls, gold jewelry, clothes. Uh, Datu, suddenly I got an idea. This statue will be more beautiful if we give it clothes and more jewelry. May I try it, Datu?" suggested Bao Partigatiga suddenly. (Sigale-gale, 1978: 12) In the quote above, the curiosity about how the statue would look when given clothes and jewelry is clear, and because of it Bao Partigatiga asked Datu Panggana's permission to try. According to Mustari (2011), curiosity is an attitude and action that always seeks to know more deeply and broadly what one has learned, seen, and heard, as part of one's obligations towards oneself and the environment.
Thanks to the curiosity of Datu Panggana, who kept trying to decorate the statue, the statue became very beautiful and almost looked human. Datu Panggana's curious attitude is an attitude that we need to teach children, because curiosity can lead us to keep learning.
Friendly
Friendliness is an action that shows a sense of pleasure in talking, socializing, and cooperating with others. The value of friendliness can be seen in the following narrative quote.
"Because he wanted to find clothes that matched the image of the statue, Bao Partigatiga became so diligent that he tried all the clothes he was carrying, in turn, on the body of the statue. Finally, a pair of crimson clothes slung across the graceful girl's body, enhancing her dazzling face. As with the clothes, so with the jewelry: he tried all the jewelry on the body of the statue, earrings, necklaces, and rings of various models. When he found the right jewelry, Bao Partigatiga jumped for joy like a child." (Sigale-gale, 1978: 14) From the friendliness shown by Bao Partigatiga, he is willing to help choose clothes and jewelry for the statue made by Datu Panggana, and it is also clear that Bao Partigatiga is happy to do so. The friendly attitude of Bao Partigatiga, who wants to help Datu Panggana decorate the statue, is an attitude that we must also teach children. From a sense of friendliness we can develop tolerance and kinship with our neighbors, helping each other in any situation.
Wise
Wisdom is an action or decision made by someone without burdening or causing loss to others. The value of wisdom can be seen in the following narrative.
"... if he finds out that he is the source of your constant bickering," said Aji Bahirbahir. (SGG, 1978: 42-43) According to Kitchener & Brenner (in Sternberg & Jordan, 2005: 17), wisdom is the intellectual ability to recognize the limitations of knowledge and how they affect solving ill-defined problems and making judgments. It is clear from the quote above that through the wisdom of Aji Bahirbahir the characters can resolve their problems. We have to instill this kind of attitude in children from an early age, so that later they can make wise decisions.
CONCLUSION
Based on the analysis of the character-forming values contained in the Sigale-gale Folklore, it can be concluded that four character-forming values are found in the Folklore, namely hard work, curiosity, friendliness, and wisdom. Folklore is one of the media through which character-forming values are conveyed. As a cultural heritage, Folklore needs to be preserved, processed, and used as an essential medium in character education for the nation. The character-forming values contained in Folklore are not just to be understood but to be practised in everyday life; appreciation and practice are just as important as understanding.
The description of the character-forming values in Sigale-gale shows that the Folklore contains quite a lot of character-forming values that every human being, especially students, needs to have. Having these character-forming values will lead to better attitudes and morals. The character-forming values contained in Sigale-gale are the message the author sends to the reader to imitate the good characters in the story. Character-forming values must be instilled in students and implemented in real life to form positive attitudes and behaviors. | 2021-05-10T00:04:46.252Z | 2021-01-25T00:00:00.000 | {
"year": 2021,
"sha1": "e0bb4edb7712161e975e4f750f3ad325d0461d2f",
"oa_license": "CCBYSA",
"oa_url": "https://e-journal.undikma.ac.id/index.php/jollt/article/download/3228/2406",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c6c3a0f91ac1c122b0d8b2849aaab0f1a7c89b0d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
252361581 | pes2o/s2orc | v3-fos-license | Perivascular Epithelioid Cell Tumor (PEComa) of the Lung in a 56-Year-Old Female Patient: A Case Report
Perivascular epithelioid cell tumors, best known as PEComas, are extremely uncommon mesenchymal tumors. The etiology of PEComas remains unestablished, and their clinical presentation is usually benign. PEComas lack a distinctive symptomatic presentation; thus, the diagnosis of these tumors relies mainly on pathological examination. These neoplasms have a very distinct immunoreactivity for melanocytic markers, which is critical for their identification. Due to the rarity of these tumors and the lack of a distinct disease presentation, we discuss the diagnostic relevance of imaging and pathologic findings in a 56-year-old woman diagnosed with a PEComa in the right middle lobe of the lung.
Introduction
Perivascular epithelioid cell tumor (PEComa) is an uncommon tumor of mesenchymal origin [1]. This tumor family is characterized by the perivascular distribution of their distinctive epithelioid or spindle cells. PEComas were first identified in 1963 [2] and have since been described in multiple organs, including the kidneys and other genitourinary sites, retroperitoneum, uterus, liver, and lungs. This subset of tumors is now considered to comprise angiomyolipomas (AML), lymphangioleiomyomatosis (LAM), and clear cell sugar tumors (CCST). Most PEComas are benign, although a number of malignant PEComas have been cited in the literature [1]. PEComas are more common in middle-aged individuals and are nearly three times as common in females in comparison to males [3]. Variants of PEComa in the lung include LAM and CCST [3]. The diagnosis of these neoplasms is frequently complicated by the lack of a symptomatic presentation and non-specific imaging features, thus causing physicians to rely mainly on immunohistochemical markers of these tumors [4]. Myogenic and melanocytic markers are commonly reactive and are vital in diagnosing these tumors. Due to the scant evidence of this pathology in the literature and the diagnostic complications associated with it, we present the imaging and histological findings in a 56-year-old woman diagnosed with a PEComa of the lung.
Case Presentation
A 56-year-old female smoker (40 pack-years), with a history of breast cancer treated with right-sided lumpectomy and radiation followed by five years of tamoxifen, presented to her family doctor for evaluation of a right axillary lump. Physical and laboratory results were unremarkable except for a hard, fixed, 1 cm nodule in the right axillary region. Due to the smoking history, a low-dose non-contrast screening chest CT was performed and revealed a 1 cm diameter right middle lobe nodule (Figures 1,2) as well as multiple subcentimeter (0.2-0.4 cm) nodules elsewhere in both lungs and right axillary lymphadenopathy.
Discussion
Clear cell tumor of the lung (CCTL), a type of PEComa, usually presents as a singular, asymptomatic nodule. As they are often asymptomatic or present with non-specific symptoms, these tumors present a significant diagnostic dilemma. CCTLs are often detected in routine screenings, such as in the case of our patient, appearing as well-circumscribed "coin" lesions [5]. Due to their rich vascular stroma, they frequently demonstrate prominent enhancement on contrast-enhanced CT [5]. This appearance of a solitary pulmonary nodule is non-specific and can be seen in a wide variety of benign lesions as well as primary and metastatic malignancies. Due to the non-specific clinical and imaging features, pathological evaluation of these tumors is paramount in making the correct diagnosis. CCTLs are frequently misdiagnosed as pulmonary metastases of primary lung or renal clear cell carcinomas [6]. Despite the pathogenesis of these lesions being uncertain, the World Health Organization (WHO) has categorized these neoplasms as unique mesenchymal tumors composed of perivascular epithelioid cells. On routine hematoxylin and eosin (H&E) stains, these tumors demonstrate a "clear cell" appearance due to abundant intracytoplasmic glycogen [7], leading to them also being referred to as "sugar tumors" of the lungs. Mitoses are typically not present [7]. It is vital that these be differentiated from other, malignant clear cell neoplasms such as the clear cell variant of pulmonary adenocarcinoma and metastatic clear cell renal malignancy [8]. Immunohistochemical analysis permits this differentiation as CCTLs demonstrate the presence of melanosomes, perivascular myoid cell antibodies, and HMB-45 reactivity not seen in more aggressive clear cell neoplasms. All these findings were present in our case and are evidenced by literature to be of pericyte origin [9]. 
Supporting these findings, HMB-45 immunohistochemical positivity has proven to be one of the most reliable and distinctive CCTL diagnostic markers [10][11][12][13][14][15], although there have been rare documented CCTLs lacking HMB-45 positivity [16]. CCTLs are negative for cytokeratin, chromogranin, CD10, and EMA, which helps to distinguish them from malignant clear cell tumors (CCTs) [10][11][12][13][14][15]. Knowing this, our presented case serves to underscore the immunologic variability of this disease and the importance of considering several immunohistochemical markers, such as MelanA, when clear cell lesions are suspected [1].
Despite the majority of the cases being benign, rare malignant instances of primary and metastatic CCTs have been reported [17], and in those cases, a long follow-up period is often suggested.
The differential diagnoses of CCTs often include carcinoid, granular cell tumor, lung clear cell adenocarcinoma, metastatic malignant melanoma, and renal cell carcinoma [18]. Such cancers can be distinguished by the presence or absence of certain immunohistochemical markers and clinical presentations of the respective diseases. As stated previously, sugar tumors distinguish themselves most frequently via HMB-45 positive, abundant intracellular glycogen, and a negativity for chromogranin and cytokeratin staining. Once the diagnosis is made, surgical resection of the neoplasm with no neoadjuvant therapy or radiation is considered the standard procedure.
Conclusions
This case report demonstrates that CCTLs often present with no symptoms and non-specific imaging features. The standard diagnosis of PEComa, specifically CCTLs, is centered on histopathology and immunohistochemical analysis. Surgical resection is the standard therapeutic approach to this pathology and will be curative in the vast majority of cases.
Additional Information Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-09-19T15:07:24.812Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "39547e52e560edfd9f9237d4a62d25da31048558",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/103699-perivascular-epithelioid-cell-tumor-pecoma-of-the-lung-in-a-56-year-old-female-patient-a-case-report.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96cc74e355b58ac7a09219bf29e73228a30f1afa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |